Contact

Dean's Office

+49 (0)441 798-2499 

Dean of Studies Office

+49 (0)441 798-2510

Opening hours of the Dean of Studies Office

Monday and Wednesday, 15:00-17:00

Thursday, 09:00-11:00

Address

Postal address

Carl von Ossietzky University of Oldenburg
School VI Medicine and Health Sciences
Ammerländer Heerstraße 114-118
26129 Oldenburg

Visitor address

Building V03, 3rd floor, wing M
Ammerländer Heerstraße 138
26129 Oldenburg

Newsletter of University Medicine Oldenburg (German only)

School VI - Medicine and Health Sciences

School VI Medicine and Health Sciences is the youngest school of the Carl von Ossietzky University of Oldenburg. Founded in 2012, it comprises the Departments of Human Medicine, Medical Physics and Acoustics, Neurosciences, Psychology, and Health Services Research.

New website of the University Medicine Oldenburg (UMO)

The website “universitätsmedizin-oldenburg.de” provides an overview of UMO's structures and news from university medicine. It complements the websites of the school and the cooperating hospitals and gives external visitors in particular an impression of UMO's diversity and unique selling points.

To the UMO website

The model course of study in human medicine is the first in Germany in which medical training takes place across national borders. At the European Medical School Oldenburg-Groningen, 120 study places are currently available each year on the Oldenburg side.

School VI is characterised by its highly regarded cross-border model course of study in human medicine – the European Medical School Oldenburg-Groningen (EMS) – and by the close integration of basic research, clinical research and health services research. It thus offers students and scientists an excellent environment in which to acquire and apply the knowledge and skills needed for the medicine of the future.

Current news

There are no events in the current view.

Inaugural lectures, disputations and lectures in the context of habilitation procedures

There are no events in the current view.

Insights into School VI

  • Bernhard Eurich and Mathias Dietz sit in front of a computer in the hearing lab, wearing headphones.

    Mathias Dietz (right) and doctoral candidate Bernhard Eurich test the predictions of their hearing model with test subjects in the hearing lab. They can randomly vary the acoustic level and timing of the target tones and noise reaching the test person's left and right ear in order to investigate how people perceive the acoustic mixture. Photo: UOL

    Hearing by numbers

    Researchers have developed a mathematical model that can simulate directional hearing using considerably less computing power than others – and also describes the processes in the human brain more accurately than ever before.

    Voice assistants can understand commands only thanks to complex calculations running in the background. Hearing aids also rely on lots of maths. The software in these devices makes split-second calculations, for example to determine which components of an acoustic mixture are part of the conversation that the hearing aid user is currently having and then amplify them accordingly.

    Mathematical models developed by scientists like Professor Dr Mathias Dietz, a hearing researcher at the University of Oldenburg, form the basis for these calculations. "A model is essentially a complicated formula that attempts to describe a natural phenomenon as accurately as possible," he explains. In his particular field of research, this means: the better an auditory model functions, the more reliably it will predict how a human would perceive a given sound.

    Humans’ intuitive knowledge of where a sound is coming from is the result of complex processes. "It is barely possible to detect which direction a sound is coming from with just one ear," explains the researcher from the Department of Medical Physics and Acoustics. "We can only distinguish whether sounds are coming from the left or right because our two ears are interconnected in the brain." This connectivity also allows us to distinguish background noise such as the babble of voices at a party from the voice of the person we are talking to, and to partially suppress that noise without even realising it.

    Previous model is about 80 years old

    One source of information available to our brain in directional hearing is a small difference in the arrival time of sounds at the two ears. A sound wave coming from the left hits the left ear first and is converted in the inner ear into an electrical stimulus, which is then transmitted at lightning speed along the auditory nerve. Because the sound wave takes longer to reach the right ear, there is a delay of a fraction of a second before the same process starts there. "A very intuitive theory of what happens in the brain at this moment was already put forward in the 1940s," says Dietz. The American physicist and psychologist Lloyd Alexander Jeffress imagined the process – presented here in simplified terms – as follows:

    The stimuli coming from the right and left move towards each other, passing from one neuron to the next until finally they arrive at one neuron simultaneously. Since each neuron represents a very specific spatial direction, the brain translates the neuron that is particularly excited by the simultaneous arrival of the two stimuli into a spatial perception. Jeffress developed a first auditory model based on this theory. His assumption was that a large number of neurons are involved in the process, which, as "coincidence detectors", map the entire sound environment. "This model allows for effective prediction of sound detection," says Dietz. "There is just one problem: even after some 50 years of neuroscientific studies, an extensive neural network like that proposed by Jeffress has not been found in mammals." Instead, mammals have just one nerve bundle per brain hemisphere; scientists refer to these bundles as "channels". But the astonishing thing is that although Jeffress’s model was based on a false assumption, it worked – in fact it worked so well that researchers and engineers still use it today.
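    Jeffress's delay-line idea maps naturally onto a cross-correlation: test many candidate internal delays and report the one at which the signals from the two ears coincide best. The following Python sketch illustrates that principle on synthetic signals; the sample rate, head width and test tone are illustrative assumptions, not values from the article.

    ```python
    import numpy as np

    FS = 44_100           # sample rate in Hz (assumed)
    SPEED_OF_SOUND = 343  # m/s
    HEAD_WIDTH = 0.22     # rough ear-to-ear distance in m (assumed)

    # A sound from the far side arrives at the other ear roughly
    # HEAD_WIDTH / SPEED_OF_SOUND ~ 0.65 ms later.
    MAX_ITD = HEAD_WIDTH / SPEED_OF_SOUND

    def estimate_itd(left, right, fs=FS):
        """Jeffress-style estimate: try every internal delay within the
        physiologically plausible range and return the one at which the
        two ear signals coincide best (the cross-correlation peak)."""
        max_lag = int(MAX_ITD * fs)
        lags = np.arange(-max_lag, max_lag + 1)
        # Trim the edges so samples wrapped around by np.roll are ignored.
        corr = [np.dot(left[max_lag:-max_lag],
                       np.roll(right, -lag)[max_lag:-max_lag])
                for lag in lags]
        return lags[int(np.argmax(corr))] / fs  # best delay in seconds

    # Demo: a 500 Hz tone reaching the right ear 0.3 ms after the left,
    # i.e. a source located to the listener's left.
    t = np.arange(0, 0.1, 1 / FS)
    left = np.sin(2 * np.pi * 500 * t)
    right = np.sin(2 * np.pi * 500 * (t - 3e-4))
    print(f"estimated ITD: {estimate_itd(left, right) * 1e3:.2f} ms")
    ```

    In Jeffress's picture, each tested delay corresponds to one coincidence-detector neuron, so the peak of the correlation stands in for the most strongly excited neuron.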

    Discrepancy between physiology and models for directional hearing

    Newer approaches that tried to take account of physiological findings failed. Envisioning the two channels in the human brain as a system of coincidence detectors reduced to just two neurons rendered the Jeffress model useless. It didn’t work on the basis of this – physiologically correct – assumption. In addition, models based on only two channels were unable to reliably predict whether people would be able to perceive target sounds when presented together with noise.
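    To make the mismatch concrete: a two-channel reading of the physiology replaces the array of direction-labelled detectors with one broadly tuned channel per hemisphere, so direction has to be decoded by comparing just two activation levels. The following sketch is a loose illustration of such a hemispheric rate code, with invented tuning parameters; it is not the published Oldenburg model.

    ```python
    import numpy as np

    # Assumed tuning: each hemisphere's channel responds best to interaural
    # phase differences (IPDs) of +/- 45 degrees. Sign convention (assumed):
    # positive IPD means the left ear leads, i.e. the source is on the left.
    BEST_IPD = np.pi / 4

    def channel_rates(ipd):
        """Broadly tuned cosine responses of the two hemispheric channels."""
        rate_l = 0.5 * (1 + np.cos(ipd - BEST_IPD))  # prefers left-leading IPDs
        rate_r = 0.5 * (1 + np.cos(ipd + BEST_IPD))  # prefers right-leading IPDs
        return rate_l, rate_r

    # Direction is read out from the difference of only two numbers; no
    # single neuron's identity directly labels a specific direction.
    for ipd_deg in (-60, -20, 0, 20, 60):
        l, r = channel_rates(np.radians(ipd_deg))
        side = "left" if l > r else "right" if r > l else "centre"
        print(f"IPD {ipd_deg:+4d} deg -> L={l:.2f}, R={r:.2f} -> {side}")
    ```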

    Dietz, whose research has been funded by a prestigious European Research Council (ERC) Starting Grant since 2018, was already puzzled by this discrepancy between brain physiology and models for directional hearing in humans when he did his PhD fifteen years ago. The physicist is aiming for a unified understanding of the auditory system, which in his view means that the findings and models contributed by the various scientific disciplines should not contradict each other.

    During the pandemic, when the possibilities for conducting hearing tests with test persons were very limited, Dietz and his colleagues Dr Jörg Encke and Bernhard Eurich instead focused their efforts on finally presenting a functioning two-channel model. These efforts bore fruit: the new Oldenburg model can reliably calculate how people will detect tones that are played in conjunction with background noise. To verify this, the scientists consulted numerous earlier studies in which researchers had measured how loud a target tone had to be for study participants to be able to detect it despite noise. The Oldenburg model was able to precisely simulate more than 300 of these "detection thresholds" for the first time ever.

    Change in approach brought the breakthrough

    This breakthrough, the details of which the scientists recently published in the journal Communications Biology, was made possible by a change in approach: the team decided to correlate the two channels for the first time. The researchers took advantage of the fact that sounds travel in waves and, because of the arrival-time difference, reach the two ears during different phases of this wave. This phase shift is the piece of the puzzle that has finally made it possible to predict human directional hearing in a way that is consistent with the physiology. "We've cracked a pretty tough nut here," says Dietz, summing up the hard work of the last few years.
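    The gain from correlating the channels can be illustrated with standard signal processing: a complex-valued correlation between the two ear signals retains not only how similar they are but also their average phase shift. The sketch below shows that generic idea, again as an illustration rather than the published model.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    FS = 44_100  # sample rate in Hz (assumed)

    def complex_coherence(left, right):
        """Normalised complex correlation of the two ear signals. Its
        magnitude measures how similar the signals are; its angle is the
        mean interaural phase difference -- the extra piece of information
        a correlated two-channel approach can exploit."""
        al, ar = hilbert(left), hilbert(right)  # analytic signals
        num = np.mean(al * np.conj(ar))
        den = np.sqrt(np.mean(np.abs(al) ** 2) * np.mean(np.abs(ar) ** 2))
        return num / den

    # Demo: a 500 Hz tone arriving 0.25 ms earlier at the left ear,
    # which corresponds to a phase shift of 360 * 500 * 0.00025 = 45 degrees.
    t = np.arange(0, 0.2, 1 / FS)
    left = np.sin(2 * np.pi * 500 * t)
    right = np.sin(2 * np.pi * 500 * (t - 2.5e-4))
    gamma = complex_coherence(left, right)
    print(f"coherence magnitude:        {abs(gamma):.3f}")
    print(f"interaural phase (degrees): {np.degrees(np.angle(gamma)):.1f}")
    ```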

    The Oldenburg approach even outperforms the old model when it comes to factoring the effect of two different noise sources into the prediction – an aspect the old model had neglected. Eurich explains this in greater detail in another publication. Now the doctoral candidate wants to explore how the new model can help to improve spatial hearing with hearing aids. The plan is to use the model to predict which elements of the soundscape must not be omitted from the amplification, so that hearing aid users don’t experience any loss in sound quality.

