Binaural processing enables us to locate sounds in space. The human auditory system is sensitive to interaural level differences (caused by the shadowing effect of the head) and to interaural time or phase differences, caused by the different travel times of the sound wave to the two ears. Binaural processing was long believed to be quite sluggish, but recent results have shown that fluctuations of interaural phase differences can be perceived at very high rates. The goal is to investigate how interaural level and time differences are combined, at high temporal resolution, into an interaural difference map in the auditory system. Of particular interest are the relative roles of the low-frequency fine structure (instantaneous pressure fluctuations of the sound wave) and of the envelope (amplitude fluctuations) at high frequencies, as well as the effect of neuronal adaptation prior to the extraction of interaural features.
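The two binaural cues mentioned above can be illustrated with a minimal signal-processing sketch (not the actual model used here): the interaural level difference as the RMS level ratio of the two ear signals, and the interaural time difference as the lag of the maximum of their cross-correlation. The function name and the example signals are purely illustrative.

```python
import numpy as np

def interaural_cues(left, right, fs):
    """Estimate interaural time and level differences (illustrative sketch)."""
    # ILD: broadband level ratio between the ear signals, in dB
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild_db = 20 * np.log10(rms(left) / rms(right))
    # ITD: lag of the cross-correlation maximum, converted to seconds.
    # With this convention a negative ITD means the left-ear signal leads.
    xcorr = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    itd_s = lags[np.argmax(xcorr)] / fs
    return itd_s, ild_db

# Hypothetical example: right-ear signal delayed by 10 samples and 6 dB softer
fs = 44100
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * 500 * t)
left = np.pad(sig, (0, 10))
right = 0.5 * np.pad(sig, (10, 0))
itd, ild = interaural_cues(left, right, fs)
```

For the example above, the estimated lag recovers the 10-sample delay and the level difference of about 6 dB. A real binaural model would of course extract these cues per frequency channel and per time frame rather than over the whole broadband signal.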
Characterization of hearing deficits
Subjective psychoacoustic methods that characterize the perceptual consequences of hearing loss beyond the classical pure-tone audiogram are of interest here. A recent comparison of temporal masking curves and loudness scaling has shown that loudness scaling can offer additional information for characterizing the relative contributions of inner and outer hair cell loss. Such information is valuable for modeling an individual's hearing loss. Loudness adjustment and comparison methods are currently being investigated as quick and user-friendly tools for individually fitting custom hearing-support devices for mild to moderate hearing loss.
Models of auditory signal processing and perception
Computational models of auditory signal processing and perception help us understand the basic function of auditory processing. The models can improve the interpretation of psychoacoustic data, while model predictions can also trigger new experimental questions. The models developed here are intended to mimic the basic function of the auditory system, with their stages motivated by and related to the physiology of the human auditory system. The models generally transform the physical sound wave into an internal representation that covers all features of the sound assumed to be accessible to humans. The models can be “calibrated” to simulate psychophysical (masked) detection and discrimination thresholds and can then be tested on, e.g., the recognition of speech tokens.
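The idea of an internal representation can be sketched with a toy front-end, assuming a very simplified processing chain: a bank of band-pass channels with logarithmically spaced edges (roughly mimicking cochlear tonotopy), half-wave rectification as a crude stand-in for hair-cell transduction, and low-pass envelope extraction. Actual models of this kind additionally use gammatone filters, adaptation loops, and modulation filtering; all names and parameters below are illustrative.

```python
import numpy as np

def internal_representation(signal, fs, n_channels=8, fmin=100.0, fmax=8000.0):
    """Toy auditory front-end (illustrative sketch, not a validated model)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    # Channel edges spaced logarithmically between fmin and fmax
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass filtering via a simple FFT-domain mask
        band = np.fft.irfft(spectrum * ((freqs >= lo) & (freqs < hi)), n)
        rectified = np.maximum(band, 0.0)  # crude hair-cell transduction
        # Envelope extraction: ~1 ms moving-average low-pass
        win = max(1, int(fs / 1000))
        envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
        channels.append(envelope)
    return np.stack(channels)  # shape: (n_channels, n_samples)

# Hypothetical example: a 1 kHz tone excites mainly one channel
fs = 16000
tone = np.sin(2 * np.pi * 1000 * np.arange(4096) / fs)
rep = internal_representation(tone, fs)
```

A representation of this kind could then be “calibrated” by adding an internal-noise stage whose variance is fitted so that simulated detection thresholds match measured psychophysical ones.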
Objective audio quality assessment
Audio quality is an important aspect of many modern communication and entertainment devices. In some cases speech intelligibility is the main concern, while in other applications the goal is transparent audio reproduction of the highest quality. Models of auditory signal processing can be used as the front end of tools for objective audio quality evaluation. Such tools are helpful for the development of, e.g., signal processing algorithms for audio applications. The idea is to compare the internal representations produced by the auditory model for a reference signal and a degraded signal (e.g., after telephone transmission). Deviations between the internal representations indicate that the differences between the reference and the degraded signal are also audible. The results of listening tests can be used to derive weights for the deviations and to refine the auditory model front end.
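The comparison step can be sketched as follows, assuming the internal representation is simply a 2-D array of channel envelopes supplied by some model front-end. The unweighted RMS deviation used here is a placeholder for the listening-test-derived weighting described above; the function name and the random "representations" are purely illustrative.

```python
import numpy as np

def audible_difference(ref_rep, deg_rep):
    """Toy objective-quality measure (illustrative sketch): RMS deviation
    between the internal representations of a reference and a degraded
    signal. Larger values suggest a more audible degradation; a real tool
    would weight the per-channel deviations using listening-test data."""
    return float(np.sqrt(np.mean((ref_rep - deg_rep) ** 2)))

# Hypothetical example: stronger degradation yields a larger deviation
rng = np.random.default_rng(0)
ref = rng.random((8, 1000))               # stand-in internal representation
deg_mild = ref + 0.01 * rng.standard_normal(ref.shape)
deg_strong = ref + 0.2 * rng.standard_normal(ref.shape)
```

A measure of this kind behaves as expected in the two limiting cases: an identical signal yields zero deviation, and heavier degradations yield monotonically larger deviations, which can then be mapped to listening-test quality scores.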