Safety and Explainability of Learning Systems
Artificial Intelligence has become ubiquitous in modern life. However, like traditional hardware and software, many intelligent systems have defects, and their decision-making is often hard to understand. The Safety and Explainability of Learning Systems Group addresses these fundamental challenges and develops innovative technology to make artificial intelligence safe, reliable, and transparent to the human user.
We have two open positions for Ph.D. candidates, to be filled immediately.
Please contact Daniel Neider for further information.