Head of division

Prof. Dr. Volker Hohmann

+49 441 798 5468


Carl von Ossietzky Universität Oldenburg
Faculty VI - School of Medicine and Health Sciences
Department of Medical Physics and Acoustics
D - 26111 Oldenburg

Auditory Signal Processing and Hearing Devices

From active, non-linear processes and numerical models of hearing to the transmission of music over the internet with MP3 and objective speech quality evaluation for mobile phones: the acoustics of hearing has a direct impact on our daily lives, and not only in the case of hearing loss or communication problems at lively parties. From a physics point of view, the challenge lies in analysing the effective function of the ear as a complex system. Transforming this analysis into a model of hearing opens up a variety of possible technical applications.

Auditory Scene Analysis:
How do we get an acoustic picture of our surroundings?

Hearing takes place subconsciously. Almost everyone is unaware of the complex processing in the ear and brain that transforms sound waves into "heard" information. One of the biggest mysteries is that humans are able to filter out the voice of a single talker from a variety of competing sound sources (other people, barking dogs, passing cars ...). In people with normal hearing, this scene analysis works flawlessly. For hard-of-hearing people, however, it does not: they can often only communicate when no distracting sound sources are present. Hearing aids can restore this ability only to some extent, since the complex object-forming processes of the auditory system have not yet been successfully replicated technically.

Imagine a cocktail party: voices, clinking glasses, discreet music. Everybody is talking, in pairs or in larger groups - but some people do not understand what the person opposite them is saying. Trying to lip-read is futile, as their ears and brain cannot cope with the complex acoustic environment. Fifteen percent of all Germans suffer from inner-ear hearing loss, and the trend is upwards, since life expectancy in our society is continually increasing and hearing loss is a typical age-related problem.

Digital hearing aids

The first hearing aid that processed acoustic signals digitally was presented in 1996. Although digital technology was already widespread, the ability to fit a processing unit and digital circuitry into the confined space of a hearing aid, with minimal energy consumption, came as a surprise to the hearing aid industry. Not even mobile phones, which were regarded as a marvel of technology at the time, achieved this efficiency. The introduction of the digital hearing aid can therefore be regarded as a leap in technology. Since then, hearing aids have been in constant development, and manufacturers now offer fully digital devices that allow complex processing of acoustic signals. Their available processing power advances almost as fast as the CPU performance found in home computers.

Computers - and hence hearing aids - cannot yet copy the abilities of the human ear. A hearing aid that uniformly amplifies all acoustic signals does not help in a cocktail party situation; instead, it has to separate auditory objects and amplify them selectively. It has been shown that, apart from the ear's high selectivity for sounds of different frequency (pitch), amplitude modulation (fast fluctuations of the sound level) and sound localization are important mechanisms for object separation. Sound localization is strongly linked to binaural hearing (hearing with two ears).
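One of the binaural cues behind sound localization, the interaural time difference (ITD), can be estimated from two ear signals by cross-correlation: the lag at which the two signals are most similar corresponds to the difference in arrival time. The following Python sketch is purely illustrative (the function name, the noise test signal, and the 0.5 ms delay are our own choices, not part of any hearing aid algorithm described here):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds from the peak
    of the cross-correlation of the left- and right-ear signals.
    A negative value means the sound reached the left ear first."""
    xcorr = np.correlate(left, right, mode="full")
    lag = np.argmax(xcorr) - (len(right) - 1)  # lag in samples
    return lag / fs

# Illustrative example: a noise burst that reaches the right ear
# about 0.5 ms after the left ear (source on the listener's left).
fs = 44100                                # sampling rate in Hz
rng = np.random.default_rng(0)
burst = rng.standard_normal(2048)
delay = int(0.0005 * fs)                  # ~22 samples
left = np.concatenate([burst, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), burst])
print(f"Estimated ITD: {estimate_itd(left, right, fs) * 1e3:.2f} ms")
```

In a real binaural hearing aid this cue would be computed per frequency band and combined with level differences and modulation cues; the broadband cross-correlation above only conveys the basic principle.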

(Changed: 2022-04-20)