• In the hearing laboratory, Mathias Dietz (right) and doctoral candidate Bernhard Eurich test the predictions of their hearing model experimentally with volunteers. By varying the levels and time differences between the left and right ear, they can manipulate target tones and background noise and investigate how people perceive the mixture. Photo: University of Oldenburg

Listen by numbers

Researchers from the Department of Medical Physics and Acoustics have developed a mathematical model that can simulate directional hearing with significantly less computing power.

Voice assistants can only understand commands thanks to complex calculations running in the background. There is a lot of maths in hearing aids, too: from the ambient sound, the installed software calculates at lightning speed which parts of the acoustic mixture belong to the conversation the hearing aid user is currently having - and amplifies primarily those.

Such calculations are based on mathematical models developed by, among others, Oldenburg hearing researcher Prof Dr Mathias Dietz. "A model is basically a complicated formula that attempts to describe a natural phenomenon as accurately as possible," he explains. In his field, this means that the better an auditory model works, the more reliably it can predict how people would perceive a sound.

Complicated processes lie behind our intuitive sense of where a sound comes from. "A single ear is poor at registering the direction of sound," explains the researcher from the Department of Medical Physics and Acoustics. "We can only tell whether sounds are coming from the left or right because our two ears are interconnected in the brain." This interconnection also makes it possible to distinguish background noise, such as the babble of voices at a party, from the voice of the person you are talking to - and to partially suppress it without us even noticing.

Previous model is around 80 years old

One clue that our brain has for directional hearing is the small time difference with which a sound usually reaches our two ears. A sound wave coming from the left hits the left ear first and is converted into an electrical stimulus in the inner ear, which then races along the auditory nerve. Because the sound wave takes longer to reach the right ear, the same process starts there with a delay of a fraction of a second. "Back in the 1940s, there was already a very intuitive idea of what happens in the brain at this moment," says Dietz. The American physicist and psychologist Lloyd Alexander Jeffress visualised it - in simplified terms - as follows:
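For scale, a quick back-of-the-envelope calculation shows how small this time difference is. The figures below are generic textbook values (speed of sound in air, a typical head width), not numbers from the article:

```python
# Rough upper bound on the interaural time difference (ITD): a sound
# from directly beside the head travels roughly one head width further
# to reach the far ear. Both constants are generic assumptions.
speed_of_sound = 343.0  # m/s, air at room temperature
head_width = 0.22       # m, typical adult head diameter

max_itd = head_width / speed_of_sound
print(f"maximum ITD ≈ {max_itd * 1e3:.2f} ms")  # ≈ 0.64 ms
```

So the delay the brain evaluates is well under a millisecond.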

The stimuli coming from the right and left travel towards each other, passing through one nerve cell after another until they finally arrive at one nerve cell at the same time. Because each nerve cell represents a very specific spatial direction, the brain translates the cell that is particularly strongly excited by two simultaneously arriving stimuli into a spatial percept, Jeffress surmised, and he developed the first auditory model on this basis. He assumed that a large number of nerve cells are involved, acting as so-called coincidence detectors and mapping the entire surroundings.

"With this model, the perception of sound can be predicted well," says Dietz. "There is just one problem: a widely branching nerve cell structure, as Jeffress envisioned it, was not found in mammals when neuroscientific studies looked for it some 50 years later." Instead, mammals have only one nerve bundle per cerebral hemisphere - researchers call these channels. The remarkable thing: although Jeffress started from an incorrect assumption, his model worked - and worked so well that researchers and engineers still use it today.
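Jeffress's delay-line idea can be sketched in a few lines of code. The following is a loose illustration under simplifying assumptions (digitised signals, a plain correlation score as the "excitation"), not a reconstruction of his model or of any neural circuit; all signal parameters are made up:

```python
import numpy as np

def jeffress_itd_estimate(left, right, fs, max_delay_s=0.0007):
    """Toy Jeffress-style coincidence detection (illustrative sketch).

    Each candidate delay stands in for one 'coincidence detector' cell
    tuned to a particular direction; the delay at which the shifted
    signals agree best (highest correlation) wins.
    """
    max_shift = int(max_delay_s * fs)
    shifts = np.arange(-max_shift, max_shift + 1)
    # Score each internal delay by correlating a delayed copy of the
    # left-ear signal against the right-ear signal.
    scores = [np.dot(np.roll(left, int(s)), right) for s in shifts]
    return shifts[int(np.argmax(scores))] / fs  # estimated ITD in seconds

# Demo: a hypothetical 500 Hz tone reaching the right ear 14 samples
# (about 0.29 ms at 48 kHz) after the left ear.
fs = 48_000
n = np.arange(2400)                     # 0.05 s of signal
left = np.sin(2 * np.pi * 500 * n / fs)
right = np.roll(left, 14)               # right ear lags
print(jeffress_itd_estimate(left, right, fs))  # 14/48000 ≈ 2.9e-4 s
```

The "most excited cell" here is simply the delay with the highest correlation score - the computational core of Jeffress's intuition.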

Discrepancy between physiology and models for directional hearing

More recent approaches that attempted to take the physiological findings into account were unsuccessful. Reducing the system of coincidence detectors to just two nerve cells - matching the two channels actually found in the human brain - rendered the Jeffress model useless: it simply did not work under this physiologically correct assumption. In particular, models based on only two channels could not reliably predict whether people are able to perceive a target sound when it is presented together with an interfering sound.

Dietz, whose research has been funded by the European Research Council (ERC) with a prestigious "Starting Grant" since 2018, was already bothered by the discrepancy between physiology and the models for human directional hearing during his doctoral thesis 15 years ago. The physicist wants to understand hearing as a system. For him, this means ensuring that the findings and models contributed by different scientific disciplines do not contradict each other.

During the pandemic, when hearing tests with volunteers were hardly possible, Dietz and his colleagues Dr Jörg Encke and Bernhard Eurich concentrated on finally producing a working two-channel model. The effort paid off: the new Oldenburg model can reliably calculate how people perceive sounds that are played together with background noise. To verify this, the researchers consulted a large number of earlier studies in which the minimum volume had been measured at which study participants could still perceive a target sound despite a simultaneously played background noise. The Oldenburg model was able, for the first time, to precisely simulate more than 300 of these so-called perception thresholds.

A rethink brought the breakthrough

This breakthrough, which the scientists recently published in the journal "Communications Biology", was made possible by a change of perspective: for the first time, the team correlated the two channels with each other. The scientists exploited the fact that sound travels in waves and, because of the time difference, reaches each of the two ears in a different phase of the wave. The phase shift between the two channels is thus the missing piece of the puzzle that finally makes it possible to predict human directional hearing in a physiologically correct way. "We've cracked a pretty tough nut," says Dietz, summing up the work of the past few years.
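The published model itself is more involved, but the basic quantity the text describes - the phase shift between the signals at the two ears - can be illustrated with a complex-valued correlation. Everything below (the function, signal, frequency, and delay) is a made-up example, not the authors' implementation:

```python
import numpy as np

def interaural_phase_difference(left, right, fs, freq):
    """Read off the interaural phase difference (IPD) at one frequency
    by complex demodulation of both ear signals. A loose illustration,
    not the published Oldenburg model."""
    t = np.arange(len(left)) / fs
    carrier = np.exp(-2j * np.pi * freq * t)
    zl = np.sum(left * carrier)   # complex amplitude at the left ear
    zr = np.sum(right * carrier)  # complex amplitude at the right ear
    # Angle of the cross-term: the phase by which the right channel
    # lags the left one.
    return np.angle(zl * np.conj(zr))

# Demo: a hypothetical 500 Hz tone, right ear delayed by 0.3 ms.
fs, freq, itd = 48_000, 500, 0.0003
n = np.arange(2400)                      # 0.05 s at 48 kHz
left = np.sin(2 * np.pi * freq * n / fs)
right = np.sin(2 * np.pi * freq * (n / fs - itd))
ipd = interaural_phase_difference(left, right, fs, freq)
print(f"IPD = {ipd:.3f} rad")  # 2*pi*500*0.0003 ≈ 0.942 rad
```

A single phase value per frequency is the kind of compact quantity a two-channel description can work with, in contrast to the many delay-tuned cells of the Jeffress picture.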

The Oldenburg approach even outperforms the old model when it comes to realistically including the effect of two different background noises in the prediction - an effect the old model neglected, as Eurich has shown in another publication. The doctoral candidate now wants to investigate how the new model can be used to improve spatial hearing with hearing aids. The aim is to predict which parts of the background noise should not be removed during amplification, so that hearing aid users do not notice any loss of quality.

Contact:

Prof Dr Mathias Dietz

Medical Physics

+49-441-798-3832


(Changed: 11 Feb 2026) Shortlink: https://uol.de/p24579n7585en
