Principal Investigators

Prof. Dr. Martin Fränzle

Hybrid Systems


Prof. Dr.-Ing. Christoph Herrmann

Experimental Psychology



Co-Principal Investigator

Prof. Dr. Werner Damm

Critical Systems Engineering


Associated Researcher

Prof. Dr. Mark Siebel

Theoretical Philosophy


Dr. Edgar Rose

Civil Law, Commercial and Economic Law, and Legal Informatics


PhD

M2 Acceptance and Dynamic Conflict Resolution in Automotive Applications

Abstract

M2 explores the acceptance, cooperation, and governance of self-explaining Automated Cyber-Physical Systems (ACPS) in the domain of mobility. User acceptance and trust are essential when humans interact with automated systems in complex, realistic situations. Acceptance is necessary to mitigate automation misuse and disuse and to meet legal and ethical constraints. One way to ensure a high level of user acceptance is to involve humans actively in the decision processes of ACPS. Successful cooperation between these two partners can be significantly enhanced if the automation provides information about its purpose, process, and performance. To this end, future ACPS must engage humans in efficient communication about the reasons and justifications for planned actions and resolve potential conflicts of interest. For humans and ACPS to become team players, they must be able to observe, understand, and predict each other's states and actions.

Existing approaches focus on human performance in future automated systems and on understanding the factors of human trust in automation in general. In this project we move beyond managing human performance in ACPS to the challenge of establishing a mutual partnership between human and machine, with successful communication, understanding, and acceptance. Our research on communicating and debating justifications builds on neurophysiological human state assessment. We will investigate increasing levels of complexity of communication between the human and the ACPS, from unidirectional explanations provided by the ACPS to humans to interactive bi-directional debates for understanding and conflict resolution.

Another way to ensure a high level of user acceptance concerns the design of the legal framework that governs the decision processes of ACPS in an environment of traffic control. The assumption is that a system of automated traffic control has to comply with citizens' fundamental rights and guarantee fair procedures and outcomes of decision-making in order to increase acceptance.

Research Questions and Objectives

This interdisciplinary research project explores human users' acceptance of and trust in autonomous vehicles (AVs). These insights shall then inform the design of dynamic conflict resolution strategies. This aim is pursued from computer science, psychophysiological, and legal perspectives.

Theories and Key Concepts

  1. Computer Science: The project in computer science focuses on developing mechanisms for conflict detection, explanation, and resolution in AVs. This involves creating mathematical models and logic for conflict resolution strategies, generating explanations that are understandable to humans in real time, and incorporating psychophysiological findings into the framework.
  2. Psychophysiology: This aspect involves studying the psychophysiological responses of individuals when making decisions in the presence of intelligent machines like self-driving cars. It uses dilemmas and driving simulator scenarios to assess emotional responses and physiological data. The goal is to unobtrusively evaluate users' status during these experiences and enable the intelligent machine to adapt its behavior.
  3. Law: In the legal perspective, the project examines liability rules that affect the behavior of individuals in the context of AVs. This includes determining the standard of care for AVs, assessing whether existing rules and regulations apply to AVs, and analyzing potential gaps in liability in scenarios involving hacking.

Methods

  1. In computer science, new models and algorithmic methods are being developed to address conflict resolution issues, incorporating logical models for causality and automated deduction.
  2. The psychophysiological perspective involves conducting experiments in a highly immersive driving simulator and controlled laboratory environments. Physiological measures, including galvanic skin response, pulse, respiration, gaze behavior, and EEG, are collected to analyze responses during decision-making in AV scenarios.
  3. Legal methodology, such as legal interpretation and comparative analysis, is employed to address legal problems related to AV liability and standard of care.

Current Status and Key Findings

  • In computer science, a SAT model for generating explanations in conflict situations has been developed. Research is ongoing in areas such as conflict resolution and the timing of providing explanations.
  • In psychophysiology, reliable results were obtained in a driving simulator environment, and a transition to a controlled laboratory environment is planned.
  • In the legal perspective, it was found that applying the standard of care for human drivers to AVs is inadequate, and a human-machine standard based on the average driver is recommended. Additionally, a potential liability gap in hacking scenarios was identified, as certain legal exemptions may hinder victims from seeking compensation.
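The SAT-based approach to generating explanations can be illustrated with a toy sketch: encode the AV's plan, sensor facts, and traffic rules as propositional clauses, and when the set is contradictory, report a minimal unsatisfiable subset as the "explanation" of the conflict. This is an illustrative simplification, not the project's actual model; the clause encoding, variable meanings, and function names below are assumptions for the example.

```python
from itertools import product, combinations

def satisfiable(clauses, n_vars):
    """Brute-force SAT check over all assignments (fine for toy sizes).
    Literals are integers: variable i is literal i (true) or -i (false)."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def minimal_conflict(clauses, n_vars):
    """Return a smallest unsatisfiable subset of clauses: the 'explanation'
    of why the constraint set cannot be jointly satisfied."""
    for k in range(1, len(clauses) + 1):
        for subset in combinations(clauses, k):
            if not satisfiable(list(subset), n_vars):
                return list(subset)
    return None  # no conflict: the constraint set is satisfiable

# Toy traffic scenario: variable 1 = "AV proceeds", variable 2 = "crossing is clear".
clauses = [
    [1],        # plan: the AV intends to proceed
    [-2],       # sensor: the crossing is NOT clear
    [-1, 2],    # rule: proceed only if the crossing is clear
]
print(minimal_conflict(clauses, 2))  # here, all three clauses jointly conflict
```

Any two of these clauses are jointly satisfiable, so the minimal conflict is the full set: plan, sensor reading, and rule together explain why the intended action is inadmissible. Real systems would use an off-the-shelf SAT solver's unsat-core extraction instead of this brute-force search.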

Publications


  1. Schwartz, Jacob: „Betriebsgefahr und Unabwendbarkeit bei selbstfahrenden Fahrzeugen“, in: Jürgen Taeger (ed.), „Den Wandel begleiten – IT-rechtliche Herausforderungen der Digitalisierung“, Edewecht 2020, pp. 669-686.

  2. Schwartz, Jacob: „Betriebsgefahr und Unabwendbarkeit bei selbstfahrenden Fahrzeugen“, in: InTeR 2021, pp. 77-83 (secondary publication of the conference paper above).

  3. Schwartz, Jacob: „Virtuelle Schwarzfahrer – Haftung für Cyberangriffe auf selbstfahrende Fahrzeuge“, in: Jürgen Taeger (ed.), „Im Fokus der Rechtsentwicklung – Die Digitalisierung der Welt“, Edewecht 2021, pp. 305-321.

  4. A. Bairy, W. Hagemann, A. Rakow and M. Schwammberger, "Towards Formal Concepts for Explanation Timing and Justifications," 2022 IEEE 30th International Requirements Engineering Conference Workshops (REW), Melbourne, Australia, 2022, pp. 98-102, doi: 10.1109/REW56159.2022.00025.

  5. Akhila Bairy. 2022. Modeling Explanations in Autonomous Vehicles. In Integrated Formal Methods: 17th International Conference, IFM 2022, Lugano, Switzerland, June 7-10, 2022, Proceedings. Springer-Verlag, Berlin, Heidelberg, pp. 347-351, doi: 10.1007/978-3-031-07727-2_20.

(Last updated: 19.01.2024)