Contact

Dean's Office

+49 (0)441 798-2499 

Dean of Studies Office

+49 (0)441 798-2510

Opening hours of the Dean of Studies Office

Monday and Wednesday, 3:00-5:00 p.m.

Thursday, 9:00-11:00 a.m.

Address

Postal address

Carl von Ossietzky University of Oldenburg
School VI Medicine and Health Sciences
Ammerländer Heerstraße 114-118
26129 Oldenburg

Visitor address

Building V03, 3rd floor, wing M
Ammerländer Heerstraße 138
26129 Oldenburg

Newsletter of University Medicine Oldenburg (German only)

School VI - Medicine and Health Sciences

School VI Medicine and Health Sciences is the youngest school of the Carl von Ossietzky University of Oldenburg. It was founded in 2012 and consists of the Department of Human Medicine, the Department of Medical Physics and Acoustics, the Department of Neurosciences, the Department of Psychology, and the Department of Health Services Research.

New website of University Medicine Oldenburg (UMO)

The website “universitätsmedizin-oldenburg.de” provides an overview of UMO's structures and news from university medicine. It complements the websites of the School and of the cooperating hospitals, and gives external visitors in particular an impression of UMO's diversity and unique selling points.

To the UMO website

The model course of study in human medicine is the first in Germany in which medical training takes place across national borders. Currently, 120 study places are available each year on the Oldenburg side of the European Medical School Oldenburg-Groningen.

School VI is characterised by its highly regarded cross-border model course of study in human medicine - the European Medical School Oldenburg-Groningen (EMS) - and by the close integration of basic research, clinical research and health services research. It thus offers students and scientists an excellent environment in which to acquire and apply the knowledge and skills needed for the medicine of the future.

Current news

There are no events in the current view.

Inaugural lectures, disputations and lectures in the context of habilitation procedures

There are no events in the current view.

Insights into School VI

  • [Image: man from behind, with a hearing aid in his ear and a smartphone in his hand]

    Hearing aids are becoming more and more powerful thanks to their connection with smartphones. This enables the use of speech recognition software, which researchers at the University of Oldenburg are currently investigating.

    Better hearing through automatic speech recognition?

    Speech recognition software is on the rise thanks to smart home systems. Researchers from Oldenburg have now taught an artificial intelligence to hear like an actual human. This could help people with hearing aids in the future.

    Automatic speech recognition technology of the kind used in everyday life in voice assistants such as Alexa or Siri could also help people with impaired hearing in the future. Researchers at the University of Oldenburg have come a step closer to the goal of using this technology in hearing aids, so that they automatically switch to the ideal setting for the current soundscape. Jana Roßbach, Prof. Dr. Birger Kollmeier and Prof. Dr. Bernd T. Meyer from the Oldenburg Cluster of Excellence Hearing4all reported on their progress in an article published in the Journal of the Acoustical Society of America.

    These days, people who use a hearing aid can choose between different settings depending on whether they want to have a conversation or listen to music, for example. However, the rather limited range of preset programmes cannot reflect real-life conditions and their very diverse soundscapes. Moreover, having to constantly manually adjust your hearing aid to your environment is also incompatible with everyday use.

    This is where automatic speech recognition using artificial intelligence (AI) methods could come into play. To create the ideal setting for the individual hearing aid user in every situation, AI software would have to learn to "hear" just like that person – with all their specific limitations.   

    The Oldenburg researchers have now demonstrated that this is possible with an experiment in which they presented humans and machines with the same task. The scientists first determined the individual hearing status of 20 test subjects with impaired hearing. At the same time, they trained speech recognition software using audio recordings of test sentences, and taught it to repeat them in written form. To ensure that the computer faced the same difficulties as the person with impaired hearing, the researchers added background noise that simulated the individual's impairment.
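
    The idea of degrading the recognizer's input so that it faces the same difficulties as the listener can be illustrated with a small sketch. The function below simply mixes white noise into a clean speech signal at a chosen signal-to-noise ratio; the function name and the single-SNR simplification are assumptions made for this illustration, since the study models each subject's individually measured hearing status rather than one fixed noise level.

        import numpy as np

        def add_noise_at_snr(clean, snr_db, rng=None):
            """Mix white noise into a clean speech signal at a target SNR in dB.

            Toy stand-in for the listener-specific degradation described in the
            article; the actual study reproduces each subject's measured hearing
            status, not a single global noise level.
            """
            rng = np.random.default_rng() if rng is None else rng
            noise = rng.standard_normal(clean.shape)
            speech_power = np.mean(clean ** 2)
            noise_power = np.mean(noise ** 2)
            # Scale the noise so that 10*log10(speech_power / noise_power) equals snr_db.
            target_noise_power = speech_power / (10.0 ** (snr_db / 10.0))
            noise *= np.sqrt(target_noise_power / noise_power)
            return clean + noise

        # The degraded signal would then be passed to the speech recognizer in
        # place of the clean recording, so that the machine "hears" under
        # conditions comparable to those of the hearing-impaired listener.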

    In the subsequent tests, the test subjects and the AI trained to simulate their hearing status were given the task of understanding and reproducing recorded sentences. The result: on average, the human test subjects and their machine counterparts understood roughly the same number of words. To the researchers' surprise, this proved to be the case in all eight listening scenarios, in which various background noises were used to simulate real-life situations where speech comprehension is impeded.
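
    The comparison itself amounts to scoring, for each listening scenario, how many words of the test sentences the listener and the matched recognizer reproduce correctly. A minimal sketch of such scoring is given below; the function names and example sentences are assumptions for this illustration, and the study itself uses standard matrix-sentence-test scoring, which this simplification does not reproduce.

        def word_score(reference, response):
            """Fraction of reference words reproduced at the same position in the response."""
            ref = reference.lower().split()
            resp = response.lower().split()
            correct = sum(r == h for r, h in zip(ref, resp))
            return correct / len(ref)

        def scenario_average(pairs):
            """Average word score over all (reference, response) pairs of one listening scenario."""
            return sum(word_score(ref, resp) for ref, resp in pairs) / len(pairs)

        # Hypothetical usage: compare a listener and the impairment-matched recognizer
        # on the same scenario (the sentences here are invented examples).
        human = scenario_average([("the boy wins three red chairs", "the boy wins three red stairs")])
        machine = scenario_average([("the boy wins three red chairs", "the boy wins two red chairs")])
        print(f"human: {human:.2f}  machine: {machine:.2f}")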

    The researchers emphasize that they are still at the beginning of their investigations into the use of speech recognition software in hearing aids. "In further investigations we now aim to find answers to open questions and, for example, make it possible for the speech recognition software to recognize on its own whether it is wrong or right with its prognosis," explains hearing technician and audiologist Jana Roßbach.
