Perception and modeling of speech
Human listeners show remarkable performance in understanding speech under adverse listening conditions, such as in reverberant rooms or in the presence of noise. When this ability is impaired, as in hearing-impaired listeners, the consequences for everyday life are profound.
The mechanisms behind this remarkable performance in normal-hearing listeners are not fully understood; they involve a number of factors, including the audibility of speech, cognitive skills, contextual information, and language-related skills.
Our objective is to evaluate and improve methods for audiological testing and speech-recognition testing that are precise, fast, and easy to administer.
Furthermore, we develop and use models of human speech recognition, such as the Binaural Speech Intelligibility Model (BSIM), the Speech Intelligibility Index (SII), or a phoneme-based microscopic speech intelligibility model, to relate audiological test results to speech recognition scores.
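To illustrate the kind of model mentioned above, the following is a minimal sketch of a simplified Speech Intelligibility Index computation in the spirit of ANSI S3.5-1997: per-band signal-to-noise ratios are mapped to audibility values and combined with band-importance weights. The five-band levels and the equal importance weights are illustrative assumptions, not measured data or the standard's official band table.

```python
import numpy as np

def sii(speech_db, noise_db, importance):
    """Simplified SII: importance-weighted sum of per-band audibility.

    speech_db, noise_db : per-band levels in dB
    importance          : band-importance weights summing to 1
    """
    snr = np.asarray(speech_db, dtype=float) - np.asarray(noise_db, dtype=float)
    # Audibility function: SNR mapped linearly from [-15, +15] dB to [0, 1]
    audibility = np.clip((snr + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(importance, dtype=float) * audibility))

# Illustrative 5-band example with equal importance weights (assumed values)
speech = [60, 62, 58, 55, 50]
noise = [50, 55, 58, 60, 62]
weights = [0.2] * 5
score = sii(speech, noise, weights)  # SII score between 0 and 1
```

A full implementation would additionally account for absolute hearing thresholds, self-speech masking, and upward spread of masking, which is where subject-specific audiological data enters the model.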
We aim to understand the role of subject-specific factors in speech recognition, such as absolute threshold, cochlear compression, cognitive and language-related skills, and attention, as well as test-specific factors such as the amount of reverberation, the signal-to-noise ratio, and the position of sound sources relative to the listener. Understanding the role of each of these factors is important, for example, to improve speech intelligibility in different rooms, to test new hearing aids in close-to-everyday situations, and to improve communication with hearing-impaired listeners.