Press & Communication: +49 (0) 441 798-5446

Prof. Dr. Mathias Dietz, Medical Physics: +49-441-798-3832 (F&P)


  • Bernhard Eurich and Mathias Dietz sit in the hearing lab in front of a computer, wearing headphones.

    Mathias Dietz (right) and doctoral candidate Bernhard Eurich test the predictions of their hearing model with test subjects in the hearing lab. They can randomly vary the acoustic level and timing of the target tones and noise reaching the test person's left and right ear in order to investigate how people perceive the acoustic mixture. Photo: UOL

Hearing by numbers

Researchers have developed a mathematical model that can simulate directional hearing using considerably less computing power than others – and also describes the processes in the human brain more accurately than ever before.

Voice assistants can understand commands only thanks to complex calculations running in the background. Hearing aids also rely on lots of maths. The software in these devices makes split-second calculations, for example to determine which components of an acoustic mixture are part of the conversation that the hearing aid user is currently having and then amplify them accordingly.

Mathematical models developed by scientists like Professor Dr Mathias Dietz, a hearing researcher at the University of Oldenburg, form the basis for these calculations. "A model is essentially a complicated formula that attempts to describe a natural phenomenon as accurately as possible," he explains. In his particular field of research, this means: the better an auditory model functions, the more reliably it will predict how a human would perceive a given sound.

Humans’ intuitive knowledge of where a sound is coming from is the result of complex processes. "It is barely possible to detect which direction a sound is coming from with just one ear," explains the researcher from the Department of Medical Physics and Acoustics. "We can only distinguish whether sounds are coming from the left or right because our two ears are interconnected in the brain." This connectivity also allows us to distinguish background noise such as the babble of voices at a party from the voice of the person you are talking to, and to partially suppress that noise without even realising it.

Previous model is about 80 years old

One source of information available to our brain in directional hearing is a small difference in the arrival time of sounds at the two ears. A sound wave coming from the left hits the left ear first and is converted in the inner ear into an electrical stimulus, which is then transmitted at lightning speed along the auditory nerve. Because the sound wave takes longer to reach the right ear, there is a delay of a fraction of a second before the same process starts there. "A very intuitive theory of what happens in the brain at this moment was already put forward in the 1940s," says Dietz. The American physicist and psychologist Lloyd Alexander Jeffress imagined the process – presented here in simplified terms – as follows:
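How large this arrival-time difference is can be estimated with the classic Woodworth spherical-head approximation, which relates the interaural time difference to the direction of the sound source. The following sketch is purely illustrative; the head radius and speed of sound are typical textbook values, not figures from the article:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate the interaural time difference (ITD) for a distant source.

    Woodworth's spherical-head formula: ITD = (a / c) * (theta + sin(theta)),
    where a is the head radius, c the speed of sound, and theta the source
    azimuth in radians (0 = straight ahead, 90 = directly to one side).
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly to one side arrives at the near ear roughly
# two-thirds of a millisecond before the far ear.
print(round(itd_woodworth(90.0) * 1000, 3), "ms")  # → 0.656 ms
```

For a source straight ahead the difference vanishes, which is why sounds from directly in front are hard to distinguish from sounds directly behind.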

The stimuli coming from the right and left move towards each other, passing from one neuron to the next until finally they arrive at one neuron simultaneously. Since each neuron represents a very specific spatial direction, the brain translates the neuron that is particularly excited by the simultaneous arrival of two different stimuli into a spatial perception. Jeffress developed a first auditory model based on this theory. His assumption was that a large number of neurons are involved in the process, which, as "coincidence detectors", map the entire sound environment. "This model allows for effective prediction of sound detection," says Dietz. "There is just one problem: even after some 50 years of neuroscientific studies, an extensive neural network like that proposed by Jeffress has not been found in mammals." Instead, mammals have just one nerve bundle per brain hemisphere, referred to by scientists as "channels". But the astonishing thing is that although Jeffress’s model was based on a false assumption, it worked – in fact it worked so well that researchers and engineers still use it today.
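Jeffress's delay-line idea can be sketched as a bank of internal delays, each standing in for one "coincidence detector" neuron: the detector whose delay exactly compensates the interaural time difference receives the two stimuli simultaneously and responds most strongly. The following is a minimal, purely illustrative sketch of that principle, not the model from the article:

```python
import numpy as np

def jeffress_coincidence(left, right, max_lag):
    """Toy Jeffress-style coincidence detection over a bank of internal delays.

    Each lag represents one coincidence-detector neuron that delays the
    left-ear signal by `lag` samples before comparing it with the right-ear
    signal. The detector whose delay cancels the interaural time difference
    produces the largest correlation, i.e. "fires" most.
    """
    lags = range(-max_lag, max_lag + 1)
    activity = []
    for lag in lags:
        if lag >= 0:
            a, b = left[:len(left) - lag] if lag else left, right[lag:]
        else:
            a, b = left[-lag:], right[:len(right) + lag]
        activity.append(float(np.dot(a, b)))
    best_lag = max(zip(lags, activity), key=lambda p: p[1])[0]
    return best_lag, activity

rng = np.random.default_rng(0)
sig = rng.standard_normal(1000)
# Sound from the left: the right ear receives the same signal 3 samples later.
left, right = sig[3:], sig[:-3]
lag, activity = jeffress_coincidence(left, right, max_lag=8)
print(lag)  # → 3: the detector tuned to a 3-sample delay wins
```

The physiological problem the article describes is exactly that mammals do not possess such a bank of many detectors, but only one channel per hemisphere.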

Discrepancy between physiology and models for directional hearing

Newer approaches that tried to take these physiological findings into account failed. Envisioning the two channels in the human brain as a system of coincidence detectors reduced to just two neurons rendered the Jeffress model useless: it no longer worked on the basis of this – physiologically correct – assumption. In addition, models based on only two channels were unable to reliably predict whether people would perceive target sounds presented together with noise.

Dietz, whose research has been funded by a prestigious European Research Council (ERC) Starting Grant since 2018, was already irritated by this discrepancy between brain physiology and models for directional hearing in humans when he did his PhD fifteen years ago. The physicist is aiming for a unified understanding of the auditory system, which in his view means that the findings and models contributed by the various scientific disciplines should not contradict each other.

During the pandemic, when the possibilities for conducting hearing tests with test subjects were very limited, Dietz and his colleagues Dr Jörg Encke and Bernhard Eurich instead focused their efforts on finally presenting a functioning two-channel model. These efforts bore fruit: the new Oldenburg model can reliably calculate how people will detect tones that are played in conjunction with background noise. To verify this, the scientists consulted numerous earlier studies in which researchers had measured how loud a target tone had to be for study participants to be able to detect it despite noise. The Oldenburg model was able to precisely simulate more than 300 of these "detection thresholds" for the first time ever.

Change in approach brought the breakthrough

This breakthrough, the details of which the scientists recently published in the journal Communications Biology, was made possible by a change in approach: the team decided to correlate the two channels for the first time. The researchers took advantage of the fact that sounds travel as waves and, because of the arrival-time difference, reach each of an individual’s two ears at a different phase of that wave. This phase shift is the piece of the puzzle that finally makes it possible to predict human directional hearing in a way that corresponds to the physiology. "We've cracked a pretty tough nut here," says Dietz, summing up the hard work of the last few years.

The Oldenburg approach even outperforms the old model when it comes to factoring the effect of two different noises into the prediction – an aspect the old model had neglected. Eurich explained this in greater detail in another publication. Now the doctoral candidate wants to explore how the new model can help to improve spatial hearing with hearing aids. The plan is to use the model to predict which elements of the soundscape should not be omitted from the amplification, in order to ensure that hearing aid users don’t experience any loss in sound quality.


(Changed: 17 Mar 2023)