Contact:

Jana Roßbach

Medical Physics

+49 441 798-3908  (F&P)

  • Man from behind, with hearing aid in ear and smartphone in hand

    Hearing aids are becoming increasingly powerful thanks to their connection to smartphones. This enables the use of speech recognition software, which researchers at the University of Oldenburg are currently investigating.

Better hearing through automatic speech recognition?

Speech recognition software is on the rise thanks to smart home systems. Researchers from Oldenburg have now taught an artificial intelligence to hear like an actual human. This could help people with hearing aids in the future.

Automatic speech recognition technology of the kind used every day in voice assistants such as Alexa or Siri could in future also help people with impaired hearing. Researchers at the University of Oldenburg have come a step closer to the goal of using this technology in hearing aids, so that the devices automatically switch to the ideal programme for the surrounding soundscape. Jana Roßbach, Prof. Dr. Birger Kollmeier and Prof. Dr. Bernd T. Meyer from the Oldenburg Cluster of Excellence Hearing4all report on their progress in an article published in the Journal of the Acoustical Society of America.

These days, people who use a hearing aid can choose between different settings depending on whether they want to have a conversation or listen to music, for example. However, the rather limited range of preset programmes cannot capture real-life conditions and their highly diverse soundscapes. Moreover, having to constantly adjust a hearing aid by hand to match the current environment is impractical in everyday life.

This is where automatic speech recognition using artificial intelligence (AI) methods could come into play. To create the ideal setting for the individual hearing aid user in every situation, AI software would have to learn to "hear" just like that person – with all their specific limitations.   

The Oldenburg researchers have now demonstrated that this is possible in an experiment in which they presented humans and machines with the same task. The scientists first determined the individual hearing status of 20 test subjects with impaired hearing. In parallel, they trained speech recognition software on audio recordings of test sentences, teaching it to reproduce them in written form. To ensure that the computer faced the same difficulties as a person with impaired hearing, the researchers added noise to the recordings that simulated each individual's hearing loss.

In the subsequent tests, the human subjects and the AI models trained to simulate their respective hearing status were given the task of understanding and reproducing recorded sentences. The result: on average, the human test subjects and their machine counterparts understood roughly the same number of words. To the researchers' surprise, this held in all eight listening scenarios, in which various background noises were used to simulate real-life situations where speech comprehension is impeded.

The researchers emphasize that they are still at the beginning of their investigations into the use of speech recognition software in hearing aids. "In further studies we now aim to answer open questions and, for example, enable the speech recognition software to recognize on its own whether its predictions are right or wrong," explains hearing technician and audiologist Jana Roßbach.

This might also be of interest to you:

A man wearing headphones sits in front of a computer monitor displaying many sliders; the viewer looks over his shoulder.

Singing has become quieter

Researchers from Oldenburg have discovered that lead singers used to be significantly louder in relation to the rest of their band than they are today…

Bernhard Eurich and Mathias Dietz sit in the hearing lab in front of a computer, wearing headphones.

Hearing by numbers

Researchers have developed a mathematical model that can simulate directional hearing using considerably less computing power than others – and also…

Frontal view into the laboratory, the wedges can be seen on all walls. A man is in the center of the room, standing on a net of wires.

A room without echoes

The door of the Anechoic Lab on Wechloy Campus opens onto a world of silence. The key feature of the recently renovated acoustics lab is its huge…

(Changed: 26 Apr 2024)