Auditory Signal Processing and Hearing Devices
The goal of our group's work is to better understand human acoustic communication under challenging listening conditions with noise, clutter, and reverberation, and to use this knowledge to improve signal processing in hearing aids. In particular, we study numerical models of auditory scene analysis, methods for hearing aid evaluation in virtual interactive environments, and algorithms for novel hearing devices.
Hearing takes place subconsciously. Almost everyone is unaware of the complex processing in the ear and brain that transforms sound waves into "heard" information. One of the biggest mysteries is the fact that humans are able to filter out the voice of a talking person from a variety of sound sources (other people, barking dogs, passing cars, ...). For normal-hearing listeners, analyzing such complex auditory scenes and forming sound objects of interest, e.g., the voice of an interlocutor, works flawlessly. For hard-of-hearing listeners, however, this is different: they can often communicate only when no other distracting sound sources are present. Hearing devices can restore this ability only to some extent, since the complex object-forming processes have not yet been successfully replicated technically.
Imagine a cocktail party: voices, clinking glasses, discreet music. Everybody is talking, in pairs or in larger groups - but some people do not understand what the person opposite them is saying. Trying to lip-read what was said is futile, as their ear and brain cannot cope with the complex acoustic environment. Fifteen percent of all Germans suffer from inner-ear hearing loss, and the trend continues upwards, since life expectancy in our society increases continually and hearing loss is a typical age-related problem.
In Auditory Signal Processing, we build quantitative numerical models of Auditory Scene Analysis and use them to demonstrate improved speech enhancement methods for hearing devices.
The first hearing aid to process acoustic signals digitally was presented in 1996. Since then, hearing devices have been in constant development, and manufacturers now offer devices that allow for complex processing of acoustic signals. Available signal processing power is advancing almost as fast as CPU power in home computers or cell phones, but with much higher power efficiency.
Computers (and thus hearing devices) cannot yet replicate the abilities of the human ear. A hearing device that uniformly amplifies all acoustic signals does not help in a cocktail party situation; instead, it has to separate auditory objects and amplify them selectively. It has been shown that, apart from the ear's high selectivity for sounds of different frequency/pitch, amplitude modulation (fast sound level fluctuations) and sound localization are important mechanisms for object separation. Sound localization is strongly linked to binaural hearing (hearing with two ears).
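One of these binaural cues is the interaural time difference (ITD): a sound off to one side reaches the nearer ear slightly earlier than the farther one. The sketch below shows how an ITD can be estimated from two ear signals by finding the peak of their cross-correlation. This is an illustrative toy, not the group's actual algorithms; the function name `estimate_itd`, the sampling rate, and the synthetic signals are our own assumptions.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (in seconds) between two
    ear signals from the peak of their cross-correlation.

    A negative result means the left channel leads, i.e., the sound
    arrived at the left ear first."""
    corr = np.correlate(left, right, mode="full")
    # Convert the peak's array index to a lag of left relative to right.
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

# Synthetic demo: a noise burst that reaches the left ear 5 samples
# earlier than the right ear (the right channel is a delayed copy).
rng = np.random.default_rng(0)
fs = 16000
sig = rng.standard_normal(1024)
delay = 5
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])

itd = estimate_itd(left, right, fs)  # approximately -delay / fs seconds
```

Real hearing-aid processing works on short, overlapping frames and must cope with noise and reverberation, but the underlying cue is the same: a consistent time (or phase) offset between the two ear signals indicates the direction of a sound source.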
In Auditory Signal Processing, we investigate spatially highly selective methods for speech enhancement and methods for controlling hearing aids through user behavior, e.g., head and eye movements, and we develop methods for evaluating such complex signal processing using audiovisual virtual reality.