Cocktail parties and hearing aids: Ways to better hearing


by Birger Kollmeier

Hearing-impaired and elderly people often complain about difficulty understanding speech in the presence of background noise. Conventional hearing aids cannot help in this situation because they amplify the useful signal and the background noise in the same way. In addition, hearing-impaired people complain that sound is either too quiet or too loud, so that the volume control on the hearing aid has to be adjusted constantly. Where these effects come from, and what possibilities "intelligent" hearing aids of the future will offer, is presented in the following article.

"Why are hearing aids getting more expensive and Walkmans getting cheaper?" This question from former Federal Minister of Economics Manfred Bangemann makes the widespread lack of understanding of hearing disorders all too clear. While glasses have long been socially accepted, even as a fashion accessory, hearing aids still have the image of a prosthesis that people prefer to hide or not wear at all. Yet the effects of hearing impairment, for example in young children, are much more serious than those of visual impairment: in contrast to children born blind, children born deaf can communicate with their environment only with great difficulty and show permanent developmental deficits if their hearing impairment is not treated in time (preferably within the first year of life). Older people with hearing problems also easily withdraw from social life and become isolated. This is due in particular to the impaired "cocktail party effect", i.e. the lost ability to concentrate on one speaker in a lively situation with background noise and to suppress all other sound signals.

Conventional hearing aids can do very little about this problem because they cannot correct the underlying defect, which is usually sensorineural hearing loss. They are well suited for treating conductive hearing loss (e.g. diseases of the outer or middle ear, due to which airborne sound reaches the inner ear only at a reduced level). The situation is more difficult with the more common sensorineural hearing loss, which is usually caused by damage to the inner ear: in addition to the mere attenuation of the sound, the perceived sound components are also distorted. To use the optical comparison again: conductive hearing loss is like looking at the environment through sunglasses, and a hearing aid acts like an additional spotlight that can compensate for this defect. Sensorineural hearing loss, on the other hand, is like looking at the environment through a darkened pane of frosted glass, most of which is so heavily blackened that only a small viewing angle remains. It is obvious that a spotlight would bring no significant improvement in this situation.

How does the human ear work?

The outer ear provides a direction-dependent colouration (filtering) of the sound, which is transmitted through the middle ear to the fluid-filled inner ear with as little loss as possible. There, the sound is broken down into different frequency components, with the high frequencies represented at the beginning and the low frequencies at the end of the cochlea; from a physical point of view, this corresponds to a filter bank. In the inner ear, the sound vibrations are then converted by the hair cells into nerve impulses, which can follow the exact course of the sound only with a certain inertia. Physically, this corresponds to envelope extraction by half-wave rectification and low-pass filtering, with subsequent non-linear adaptation. The time course of the nerve excitations in the auditory nerve and at the subsequent stations of the auditory pathway in the brain is split into different rhythms (modulation frequencies), which physically corresponds to a modulation filter bank. Together with the "internal" noise of the neuronal system, the output of the modulation filter bank forms the "internal representation" of the acoustic signal, i.e. the trace that a sound leaves behind in our brain. It can then serve as the input to various pattern recognition strategies with which the brain recognises and distinguishes different sounds. With such a model of auditory signal processing, we as physicists try to understand quantitatively the performance of the auditory system, from the detection of a tone in noise to speech perception.
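As a rough illustration, the rectify-and-smooth envelope extraction mentioned above can be sketched in a few lines of Python. This is only a toy version of one stage of the model; the sampling rate, cutoff frequency and test signal below are arbitrary choices for the demonstration, not values from the actual auditory model:

```python
import numpy as np

def envelope(signal, fs, cutoff_hz):
    """Crude envelope extraction: half-wave rectification followed by a
    first-order recursive low-pass filter (exponential smoothing).
    Illustrative only; real auditory models use more detailed stages."""
    rectified = np.maximum(signal, 0.0)                 # half-wave rectification
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / fs)  # smoothing coefficient
    env = np.empty_like(rectified)
    acc = 0.0
    for i, x in enumerate(rectified):
        acc += alpha * (x - acc)                        # low-pass filtering
        env[i] = acc
    return env

fs = 16000
t = np.arange(fs) / fs
# A 1 kHz tone whose amplitude is modulated at 4 Hz (roughly a speech rhythm):
modulator = 0.5 * (1.0 + np.sin(2 * np.pi * 4 * t))
carrier = np.sin(2 * np.pi * 1000 * t)
env = envelope(carrier * modulator, fs, cutoff_hz=50.0)
```

The smoothed output follows the slow 4 Hz amplitude rhythm while the fast 1 kHz carrier oscillation is largely removed, which is exactly the "inertia" of the nerve impulses described above.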

Audiology: What's wrong with the ear?

When a patient with a hearing impairment consults an ear, nose and throat specialist, the function of the outer, middle and inner ear and of the subsequent sound processing in the central nervous system is measured with routine audiometric and clinical procedures after the case history has been taken. In most cases, the type of disorder can be narrowed down well, and for some forms of hearing loss (e.g. conductive hearing loss due to outer or middle ear dysfunction) there is a good chance of cure (e.g. after middle ear surgery) or compensation (with a hearing aid). For the vast majority of hearing disorders, however (especially inner ear damage and tinnitus, i.e. chronic ringing in the ears without an external cause), the success of treatment is extremely limited. Standard audiometric diagnostics focus only on these treatment options and are therefore unable to differentiate hearing loss any further. Only recently, therefore, have new methods for more differentiated hearing diagnostics been developed, with significant involvement of our working group, which are oriented towards the various symptoms of patients with sensorineural hearing loss:

In the "recruitment phenomenon", for example, the hearing-impaired patient complains of not being able to hear anything at low volumes, while the pain threshold is reached abruptly when the speech volume is increased. Conventional hearing aids usually cannot solve this problem satisfactorily, so that the patient has to adjust the volume control on the hearing aid constantly. The psychoacoustic method of auditory field scaling is used to record the recruitment phenomenon precisely in individual patients: acoustic signals (e.g. narrow-band noise pulses) are presented at randomly selected levels, and the patient's task is to indicate the perceived loudness on a scale between 0 (nothing heard) and 50 (too loud). The graph on page 12 (bottom right) shows the result for subjects with normal hearing (curve on the left) compared with that of a person with hearing loss and recruitment (measurement points and curve on the right): while normal-hearing subjects show a continuous increase in perceived loudness with increasing level of the acoustic signal, the loudness perceived by the hard-of-hearing subject rises steeply only from an elevated threshold level onwards. The difference between normal hearing and the individual hearing loss can now be exploited in "intelligent" hearing aids to give the hearing-impaired person exactly the same loudness impression as a normal-hearing person for any given input signal.

Another symptom not captured by standard audiometry is the deterioration of speech intelligibility in background noise, the reduced "cocktail party effect". This inability to concentrate on a speaker in a noisy environment, rather than understanding only "word salad", is one of the most common complaints, especially with incipient hearing loss. In this situation, too, conventional hearing aids are of little help because they amplify the useful signal and the noise in the same way.
Various test procedures have now been trialled in our working group and successfully used in practice in a joint project with several ENT university clinics funded by the Federal Ministry of Education and Research (BMBF). These methods measure the speech intelligibility threshold in background noise, i.e. the speech level at which 50 per cent of the speech can still be understood under the influence of the background noise. The test subject's task is to select the word heard from a list of alternative answers by tapping a touch screen. Measurements with patients with sensorineural hearing loss show a significantly elevated (i.e. poorer) speech intelligibility threshold in background noise, although this varies greatly from person to person. However, it is not only the performance achievable by each ear alone (monaural) that matters for the cocktail party effect, but also two-eared (binaural) spatial hearing: by comparing the signals arriving at the two ears, our brain is able to suppress noise components coming from one direction and to emphasise useful signal components from another direction. This effect can be measured quantitatively with an arrangement in which the useful and interfering sources initially come from the same direction (e.g. from the front). If the noise source is then moved to the side, the speech intelligibility threshold for normal-hearing listeners improves by up to 12 dB, which can correspond to a 100 per cent improvement in intelligibility. This is caused on the one hand by the binaural signal processing described above (i.e. the comparison of the two ear signals) and on the other hand by the purely monaural head shadow effect: in the "better" ear, which is turned towards the speech source, the useful sound is louder and the background noise quieter.
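The speech intelligibility threshold is simply the 50 per cent point of a psychometric function relating intelligibility to the signal-to-noise ratio (SNR). A toy logistic model (the threshold and slope values below are invented for illustration, not measured data) shows how a 12 dB improvement in effective SNR can translate into a dramatic gain in intelligibility:

```python
import math

def intelligibility(snr_db, srt_db=0.0, slope_per_db=0.15):
    """Fraction of speech understood as a function of SNR, modelled as a
    logistic psychometric function. `srt_db` is the speech intelligibility
    threshold (the 50% point); `slope_per_db` is the steepness there.
    All parameter values are illustrative only."""
    k = 4.0 * slope_per_db   # logistic rate that yields the stated slope at the SRT
    return 1.0 / (1.0 + math.exp(-k * (snr_db - srt_db)))

# At -6 dB SNR a listener understands very little; if spatially separating
# speech and noise effectively improves the SNR by 12 dB, intelligibility
# rises from a few per cent to near-perfect.
before = intelligibility(-6.0)
after = intelligibility(+6.0)
```

Because the psychometric function is steep around the threshold, even a few decibels of SNR gain near the 50 per cent point can more than double the proportion of speech understood.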
In order to separate these two effects from each other, the "worse" ear (facing away from the sound source) can be attenuated in the situation with separate useful and noise sources, so that the binaural gain can be measured by "switching on" the "worse" ear. This measurement can be carried out particularly elegantly with a "virtual acoustic environment", in which the patient is offered the sound signals via headphones so that the left and right ear can be tested independently of each other. The different directions of sound incidence are generated in real time using artificial head technology or on the computer using convolution operations.
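Such a headphone presentation can be sketched as the convolution of a dry (anechoic) signal with a pair of head-related impulse responses, one per ear. The "HRIRs" below are toy placeholders consisting only of a delay and an attenuation; a real virtual acoustic environment would use measured artificial-head responses:

```python
import numpy as np

def render_direction(dry, hrir_left, hrir_right):
    """Render a dry signal at a virtual direction by convolving it with the
    head-related impulse responses of the left and right ear."""
    return np.convolve(dry, hrir_left), np.convolve(dry, hrir_right)

fs = 16000
# Toy HRIRs for a source on the listener's right: the right ear receives the
# sound first and at full level, the left ear delayed (~0.6 ms) and attenuated.
itd_samples = int(0.0006 * fs)            # interaural time difference in samples
hrir_right = np.array([1.0])
hrir_left = np.zeros(itd_samples + 1)
hrir_left[-1] = 0.5

rng = np.random.default_rng(0)
dry = rng.standard_normal(fs // 10)       # 100 ms noise burst as a test signal
left, right = render_direction(dry, hrir_left, hrir_right)
```

Because the two ear signals are generated independently, either ear can be "switched off" or attenuated at will, which is exactly what makes the separation of binaural gain and head shadow effect described above so convenient under headphones.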

This measuring technique allows the gain of two-eared over one-eared hearing to be quantified for the individual patient. In addition to diagnostic statements about the interaction of the two sides of the auditory system, it thus also allows an estimate of the potential benefit of fitting hearing aids to both ears compared with the one-sided fitting that has unfortunately predominated to date. It also makes it possible to predict the benefit of new, "intelligent" hearing aids that incorporate these binaural functions. Putting these new techniques into practice in hearing diagnostics and hearing aid fitting requires a number of efforts, some of which have already been initiated within the clinical joint project. They are also to be pursued within the "Oldenburg Hearing Centre", which is currently being established: in addition to applied hearing aid research, it will offer special diagnostics of hearing disorders and the fitting of special hearing aids, as well as further education and training for people who deal with these topics professionally.

Intelligent hearing aids: 22 kilograms of computing power

Current, commercially available hearing aids are mostly adjustable amplifiers whose frequency response and control characteristics can be specified only within certain limits, so that the sound is amplified in a frequency-dependent way. Successful compensation of the "recruitment" phenomenon described above and of the disturbed "cocktail party effect" is, however, not possible with them. The hearing aid algorithms developed in our working group have precisely this goal: based on our knowledge of the impaired and the undisturbed hearing process, we try to develop signal processing strategies that are precisely adapted to the impaired functions and should lead to the most complete rehabilitation possible for hearing-impaired people. To keep this research independent of the computing technology currently available in portable form, our current experimental hearing aid weighs an impressive 22 kilograms! The block diagram of a hearing aid algorithm implemented on this system is shown on the right. The fundamentally binaural structure is particularly important: the signal is picked up by a microphone at each ear and converted by an analogue-to-digital converter into digital signals that the computer can process. The digital signal processing first carries out a frequency analysis (short-time Fourier transform), which roughly corresponds to the frequency analysis performed by human hearing. In each frequency band, the interaural (i.e. between the two ears) level difference, phase difference and coherence are then determined; these serve as measures of whether the sound in the respective frequency band comes from the front or from the side, or whether a diffuse reverberation signal is present. With the help of these parameters, the signal components coming from the front can be amplified, while sound coming from the side and reverberation are attenuated.
In this way, a sharp directional filter can be realised that passes only sound from the patient's line of sight and suppresses all unwanted sound components. These signal processing techniques enable a significant improvement in speech intelligibility under background noise and reverberation for most of the hearing-impaired patients tested to date. Depending on the noise and useful-sound situation, the gain in signal-to-noise ratio is between 2 and 10 dB, which can correspond to an improvement in the intelligibility of fluent speech of 20 to 80 per cent.
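A deliberately simplified sketch of such a coherence-based suppression might look as follows. It uses rectangular, non-overlapping frames, evaluates only the interaural coherence (not the level and phase differences also used in the algorithm described above), and the smoothing and gain-floor parameters are invented for the demonstration:

```python
import numpy as np

def coherence_filter(left, right, frame=512, alpha=0.8, floor=0.1):
    """Per STFT bin, estimate the interaural coherence (smoothed over frames)
    and use it as a gain: sound that is coherent at both ears (e.g. a frontal
    speaker) passes, while diffuse reverberation is attenuated. A real
    hearing-aid algorithm would use overlapping windows with overlap-add."""
    n_frames = min(len(left), len(right)) // frame
    out_l = np.zeros(n_frames * frame)
    out_r = np.zeros(n_frames * frame)
    sll = srr = slr = None
    for k in range(n_frames):
        seg = slice(k * frame, (k + 1) * frame)
        L = np.fft.rfft(left[seg])
        R = np.fft.rfft(right[seg])
        # Recursively smoothed auto- and cross-power spectra.
        if sll is None:
            sll, srr, slr = np.abs(L) ** 2, np.abs(R) ** 2, L * np.conj(R)
        else:
            sll = alpha * sll + (1 - alpha) * np.abs(L) ** 2
            srr = alpha * srr + (1 - alpha) * np.abs(R) ** 2
            slr = alpha * slr + (1 - alpha) * L * np.conj(R)
        coh = np.abs(slr) ** 2 / (sll * srr + 1e-12)  # ~0 diffuse, ~1 coherent
        gain = np.maximum(coh, floor)
        out_l[seg] = np.fft.irfft(gain * L, frame)
        out_r[seg] = np.fft.irfft(gain * R, frame)
    return out_l, out_r

rng = np.random.default_rng(1)
n = 512 * 40
frontal = rng.standard_normal(n)              # identical at both ears: coherent
fl, fr = coherence_filter(frontal, frontal)
noise_l = rng.standard_normal(n)              # independent at each ear: diffuse
noise_r = rng.standard_normal(n)
dl, dr = coherence_filter(noise_l, noise_r)
```

For the perfectly coherent "frontal" signal the estimated coherence is 1 and the filter is transparent, while the independent "diffuse" noise is strongly attenuated, which is the principle behind the 2 to 10 dB signal-to-noise gains mentioned above.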

In order to compensate for the "recruitment" phenomenon described above, the loudness that a normal-hearing person would perceive is calculated from the noise-reduced signals, together with the gain that the individual hearing-impaired person requires in each frequency band to achieve the same loudness impression. This loudness model is based on a number of basic psychoacoustic functions of hearing, so the corresponding calculation is relatively complex. Initial results with such loudness equalisation algorithms are nevertheless extremely promising: in patients who could no longer achieve 100 per cent speech intelligibility even at high speech levels, the algorithm restored complete speech intelligibility at medium levels. In an interdisciplinary joint project funded by the BMBF, we are currently working on implementing these algorithms in a portable device, for example in Walkman format, so that they can also be used in field tests with hearing-impaired people.
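The principle of such a loudness equalisation can be illustrated with a strongly simplified model. Assume, purely for illustration, that categorical loudness (the 0 to 50 scale from the auditory field scaling described earlier) grows linearly with level between the hearing threshold and the "too loud" point; recruitment then appears as a raised threshold with a steeper growth. All parameter values below are made up and are not the working group's actual loudness model:

```python
def recruitment_gain(level_in_db, thr_normal=0.0, thr_impaired=40.0,
                     max_db=100.0, max_loudness=50.0):
    """Gain in dB that gives the impaired ear the same categorical loudness
    that a normal-hearing listener perceives at `level_in_db`. Loudness is
    assumed to grow linearly (0..max_loudness) between the hearing threshold
    and the 'too loud' level `max_db`; illustrative parameters only."""
    slope_n = max_loudness / (max_db - thr_normal)
    slope_i = max_loudness / (max_db - thr_impaired)   # steeper: recruitment
    target = slope_n * max(level_in_db - thr_normal, 0.0)  # normal loudness
    level_out = thr_impaired + target / slope_i        # level producing it
    return level_out - level_in_db

# Soft sounds need a lot of gain, loud sounds almost none:
gain_soft = recruitment_gain(50.0)
gain_loud = recruitment_gain(80.0)
```

The key point is that the resulting gain is level-dependent: quiet sounds are amplified strongly so they become audible, while loud sounds receive little or no gain so they do not exceed the pain threshold, making a manual volume control unnecessary.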

Future development: Genius in the ear for everyone?

However, the step from hearing aids in Walkman format to sub-miniature hearing aids that fit in the outer ear canal is still considerable. The hearing aid industry is therefore currently working with the highest priority on making the currently available digitally controlled analogue hearing aids even smaller and more convenient (e.g. with better remote-control options). Technology that is more "customised" to the individual, and that should make remote controls as superfluous as possible, can however only be achieved with the next generation of fully digital hearing aids, whose hardware is already taking shape in the development laboratories of the hearing aid industry. For the software of this new generation and for the overall device concept, however, audiological acoustics and interdisciplinary cooperation between physics, medicine, computer science, psychology and the engineering sciences are indispensable. Experience to date also shows that university graduates with this interdisciplinary orientation have very good career prospects.

The author

Birger Kollmeier, 36, Prof. Dr. rer. nat. Dr. med., university lecturer at the Department of Physics, Medical Physics working group. Studied physics from 1976 to 1982 and medicine from 1977 to 1986 in Göttingen; research stay as a Fulbright scholar in St. Louis, USA, 1982/83. Doctorate in physics on psychoacoustics in 1986 and in medicine on hearing aids in 1989. Habilitation in physics on "Measuring methodology, modelling and improving the intelligibility of speech" in Göttingen in 1991. Since the summer semester of 1993, Professor of Applied Physics/Experimental Physics at the Department of Physics at the University of Oldenburg and head of the "Medical Physics" working group; co-supervisor of the "Psychoacoustics" research training group. Research focus: speech perception, psychoacoustics, digital signal processing, medical-physical diagnostics.

(Changed: 11 Feb 2026) Shortlink: https://uol.de/p34434en