Contact

Press & Communication

+49 (0) 441 798-5446

More:

Research Training Group "Social Embeddedness of Autonomous Cyber Physical Systems" (SEAS)

Contact:

Prof. Dr. Christoph Herrmann

Department of Psychology

Experimental Psychology Lab

 

Maren Bertheau

Department of Psychology

Experimental Psychology Lab

  • Woman in the driver's seat of a car. She does not touch the steering wheel. Next to the steering wheel is a large display that shows the vehicle from the outside and the surroundings.

    Trusting the decisions of an autonomous vehicle raises concerns for many people. Researchers at the University of Oldenburg have now shown that brainwaves can reveal when people would have reacted differently in so-called dilemma situations compared to a vehicle-controlling artificial intelligence. Evgenii & Karina Gerasimovi / AdobeStock

  • The graphic shows the same scene twice. At the top, a blue car with passengers travels on a two-lane road towards two groups of people. On the left are a jogger, a dog, an elderly man, a woman, and a child; on the right, a man, a baby, a dog, a girl, and an elderly woman. In the right-hand version of the graphic, an arrow indicates the car's direction: it is heading towards the left group, which is now marked with skulls.

    Participants had to decide on dilemmas like these: they were first shown the left-hand image and had to indicate whether the car should continue in the right or left lane. Afterwards, the researchers presented them with an image of the decision that an AI had supposedly made in this situation. Source: Awad, Edmond; Dsouza, Sohan; Kim, Richard; Schulz, Jonathan; Henrich, Joseph; Shariff, Azim et al. (2018): The Moral Machine experiment. In: Nature 563 (7729).

What brainwaves reveal about the acceptance of AI decisions

If a self-driving car behaves differently in a classic dilemma situation than test subjects would, this discrepancy becomes visible in brain activity. Researchers from the Department of Psychology have now been able to demonstrate this.

When a self-driving car behaves differently in a classic dilemma situation than test subjects would, this discrepancy becomes visible in brain activity. Researchers from the Department of Psychology at the University of Oldenburg demonstrated this in a recent study. In the long term, the findings are intended to help improve people's acceptance of autonomously controlled vehicles. The team from the Experimental Psychology Lab led by Prof. Dr. Christoph Herrmann reported its results in the journal Scientific Reports.

How should a self-driving car behave when a collision with either a child or a group of people is unavoidable? What if the life of the driver could be saved, but a pet would have to be run over? Dilemmas like these are the heart of the famous Moral Machine test, which the Massachusetts Institute of Technology (USA) used several years ago to evaluate the answers of millions of people who took the test online. The Oldenburg researchers have now repeated this experiment with two crucial adjustments: first, after the subjects had made their choice, they confronted them with the decision supposedly made by the artificial intelligence controlling the car; second, they measured the participants' brainwaves throughout.

"The more complex a process is, the longer the processing time in the brain," explains Maren Bertheau, first author of the study and a PhD candidate in Herrmann's group. While the mere perception of a visual stimulus shows up in brainwave measurements after only about 0.1 seconds, more complex cognitive processes, such as recognizing the image, cause detectable fluctuations after about 0.3 seconds. In the study, participants likewise showed a pronounced fluctuation after 0.3 seconds, the moment at which, the researchers believe, the brain registers whether human and machine acted alike. Notably, when the machine's action deviated, the team measured an excitation up to two microvolts higher. This difference between congruent and non-congruent decisions persists, even as the excitation flattens, for several milliseconds.
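The averaging logic behind such brainwave measurements can be sketched in a few lines: simulated epochs contain a response component peaking around 0.3 seconds, with an amplitude difference between "AI agreed" and "AI deviated" trials, and averaging over many trials lifts that difference out of the background noise. All numbers here (sampling rate, noise level, component shape, analysis window) are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                              # assumed sampling rate in Hz
t = np.arange(-0.2, 0.8, 1 / fs)      # epoch from -200 ms to +800 ms

def simulated_epoch(amplitude_uv):
    """One epoch: a positive deflection peaking ~300 ms plus random noise."""
    component = amplitude_uv * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    noise = rng.normal(0.0, 2.0, t.size)   # background noise in microvolts
    return component + noise

n_trials = 200
congruent   = np.array([simulated_epoch(4.0) for _ in range(n_trials)])  # AI matched the choice
incongruent = np.array([simulated_epoch(6.0) for _ in range(n_trials)])  # AI deviated

# Averaging across trials cancels the random noise and leaves the
# event-related response visible in each condition.
avg_con = congruent.mean(axis=0)
avg_inc = incongruent.mean(axis=0)

# Compare the mean amplitude in a window around 300 ms after the stimulus.
window = (t >= 0.25) & (t <= 0.35)
diff_uv = avg_inc[window].mean() - avg_con[window].mean()
print(f"congruent vs. incongruent difference: {diff_uv:.1f} microvolts")
```

In this toy setup the averaged difference comes out on the order of the roughly two microvolts reported by the team; with single, unaveraged trials the same difference would be buried in noise, which is why averaging over many trials is the standard approach.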

"We are looking for initial approaches that will eventually allow us to measure in everyday life whether people agree with the decision of a self-driving car," explains Herrmann. It is not about steering a vehicle with the power of one's thoughts, but about adapting the car's driving behavior as precisely as possible to the individually desired actions. It might be conceivable in the near future to measure, using mobile brainwave devices, whether an overtaking maneuver seems too risky or if the traffic light appears too orange to the eyes of the driver. "Thus, a driving profile would be created that increasingly corresponds to the comfort, sense of security, and other demands of the users," says Herrmann.

The researchers also emphasize the limitations of their current study. Their results are based on data averaged across 34 test subjects; that is, they observed consistent activity in a large number of participants, which provides clear evidence of the described relationship. To be usable in everyday life, however, the described brain activity would have to be measurable in every individual and at any time. Whether and how this can be achieved must be the subject of further investigation. The same applies to the question of whether the assessment of traffic decisions that are less emotionally charged than dilemma situations can also be reliably read from brainwaves. The Oldenburg researchers have already taken up this question in another experiment, in which participants had to judge whether a gap in traffic was large enough for a left turn. The results of this study will be published soon.

 

Please note: This text has been machine-translated.

(Changed: 16 Apr 2026)  Shortlink: https://uol.de/p82n11419en
