Safety and Explainability of Learning Systems
Artificial Intelligence has become ubiquitous in modern life. However, like traditional hardware and software, many intelligent systems have defects, and their decision-making is often hard to understand. The Safety and Explainability of Learning Systems Group addresses these fundamental challenges and develops innovative technology to make artificial intelligence safe, reliable, and transparent to the human user.
We have two open positions for Ph.D. candidates, to be filled immediately.
Please contact Daniel Neider for further information.
News
Two Openings for Ph.D. Candidates
The group has two new openings for Ph.D. candidates.
Paper Accepted at FAccT
Our paper "Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash" was accepted at FAccT 2022!
Paper Accepted at NFM
Our paper "Robust Computation Tree Logic" was accepted at NFM 2022!
Paper Accepted at TACAS
Our paper "Scalable Anytime Algorithms for Learning Fragments of Linear Temporal Logic" was accepted at TACAS 2022!