Speech recognition software is on the rise thanks to smart home systems. Researchers from Oldenburg have now taught an artificial intelligence to hear like a person with impaired hearing. This could help hearing aid users in the future.
Automatic speech recognition technology, like that used in everyday voice assistants such as Alexa or Siri, could also be used to help people with impaired hearing in the future. Researchers at the University of Oldenburg have come a step closer to the goal of using this technology in hearing aids, so that the devices automatically switch to the ideal programme depending on the soundscape. Jana Roßbach, Prof. Dr. Birger Kollmeier and Prof. Dr. Bernd T. Meyer from the Oldenburg Cluster of Excellence Hearing4all reported on their progress in an article published in the Journal of the Acoustical Society of America.
These days, people who use a hearing aid can choose between different settings depending on whether they want to have a conversation or listen to music, for example. However, the rather limited range of preset programmes cannot reflect real-life conditions and their very diverse soundscapes. Moreover, having to constantly adjust a hearing aid by hand to match the environment is incompatible with everyday use.
This is where automatic speech recognition using artificial intelligence (AI) methods could come into play. To create the ideal setting for the individual hearing aid user in every situation, AI software would have to learn to "hear" just like that person – with all their specific limitations.
The Oldenburg researchers have now demonstrated that this is possible in an experiment in which they presented humans and machines with the same task. The scientists first determined the individual hearing status of 20 test subjects with impaired hearing. At the same time, they trained speech recognition software on audio recordings of test sentences, teaching it to reproduce them in written form. To ensure that the computer faced the same difficulties as the person with impaired hearing, the researchers added background noise that simulated the individual's impairment.
In the subsequent tests, the test subjects and the AI trained to simulate their hearing status were given the task of understanding and reproducing recorded sentences. The result: on average, the human test subjects and their machine counterparts understood roughly the same number of words. And, to the researchers' surprise, this held in all eight listening scenarios, in which various background noises were used to simulate real-life situations where speech comprehension is impeded.
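The comparison described above comes down to scoring what fraction of the spoken words each listener (human or simulated) reproduced correctly. The article does not specify the scoring method, so the following is only a minimal illustrative sketch with made-up sentences, using a simplified word-matching score rather than the study's actual metric:

```python
def word_recognition_rate(target: str, response: str) -> float:
    """Fraction of target words that appear in the listener's response.

    Simplified scoring for illustration only: counts each target word
    as correct if it occurs anywhere in the response, ignoring order.
    """
    target_words = target.lower().split()
    response_words = set(response.lower().split())
    hits = sum(1 for word in target_words if word in response_words)
    return hits / len(target_words)

# Hypothetical example: the same noisy sentence, scored once for a human
# listener and once for the ASR model simulating that listener's hearing.
target = "the cat sat on the mat"
human_score = word_recognition_rate(target, "the cat sat on a mat")
model_score = word_recognition_rate(target, "the cat sat mat")
```

In this toy case the human reproduces all six target words while the simulated listener misses one, so the two rates can be compared directly, scenario by scenario, just as the study compares human and machine performance across its eight listening conditions.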
The researchers emphasize that they are still at the beginning of their investigations into the use of speech recognition software in hearing aids. "In further investigations we now aim to answer open questions and, for example, enable the speech recognition software to recognize on its own whether its predictions are right or wrong," explains hearing technician and audiologist Jana Roßbach.