How does hearing loss affect the experience of music? Hearing researcher Kai Siedenburg has been awarded a three-year Carl von Ossietzky Researchers' Fellowship by the university for his research at the interface between signal processing and music psychology.
The 32-year-old has been working in the Department of Medical Physics and Acoustics since the end of 2015 and will now use the fellowship to further raise his profile and acquire funding for his future research - for example in his own junior research group. He was presented with the funding certificate by the university's Vice President for Research and Transfer, Prof. Dr Martin Holthaus.
"It is important to us to once again honour an outstanding and highly qualified young scientist with our university fellowship and to support him on his further path," said Holthaus. Dr Kai Siedenburg's research topic concerns many people: "Not only one in two people over the age of 65, but increasingly also many younger people suffer from hearing loss - and do not want to do without the enjoyment of music as an integral part of our cultural and social interaction."
Siedenburg is investigating how the altered perception caused by hearing loss and the use of hearing aids affects music listening. Hearing aids have so far been optimised primarily for speech perception, and despite major technological advances, studies show that they have not yet consistently improved the perceived quality of music. "For example, can these listeners still follow a solo violin or a vocal part within the rich orchestral accompaniment at a classical concert?" asks Siedenburg. To investigate this, he uses mathematical tools - "like an acoustic scalpel" - to break down instrumental sounds into the attack, or transient, on the one hand and the longer-lasting sound on the other. For example, he uses a specially developed algorithm to divide notes played on a piano into the impact of the hammer on the string - the so-called transient - and the subsequent vibration of the string - the so-called stationary component.
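The article does not describe Siedenburg's algorithm in detail, but a transient/stationary split of this kind is often approximated by median filtering a spectrogram (Fitzgerald-style harmonic-percussive separation). The following is a minimal illustrative sketch of that generic technique, not a reconstruction of the actual method:

```python
# Illustrative transient/stationary separation via median filtering of a
# spectrogram (a common textbook approach; NOT Siedenburg's actual algorithm).
import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import stft, istft

def split_transient_stationary(x, fs, nperseg=1024):
    """Split x into a transient part (e.g. hammer impact) and a
    stationary part (e.g. string vibration)."""
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    mag = np.abs(X)
    # Median filtering along time suppresses brief broadband events,
    # leaving an estimate of the stationary (sustained) energy; filtering
    # along frequency suppresses narrowband partials, leaving an estimate
    # of the transient energy.
    stat_est = median_filter(mag, size=(1, 17))   # smooth across time frames
    trans_est = median_filter(mag, size=(17, 1))  # smooth across frequency bins
    total = stat_est + trans_est + 1e-12
    # Soft (Wiener-like) masks route each time-frequency cell to one component.
    _, x_stat = istft(X * (stat_est / total), fs=fs, nperseg=nperseg)
    _, x_trans = istft(X * (trans_est / total), fs=fs, nperseg=nperseg)
    return x_trans, x_stat

# Toy "piano note": a short click (hammer) plus a slowly decaying sine (string).
fs = 16000
n = np.arange(fs)
tone = 0.5 * np.exp(-n / fs) * np.sin(2 * np.pi * 440 * n / fs)
click = np.zeros(fs)
click[1000:1010] = 1.0
x_trans, x_stat = split_transient_stationary(tone + click, fs)
```

The soft masks send short-lived broadband energy (the click) to the transient output and persistent narrowband energy (the tone) to the stationary output; any algorithm used in hearing research would of course be considerably more refined.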
He uses these "dissected" sound components in some of his listening experiments to find out how listeners identify different instruments and which characteristics are important: How does the healthy ear take apart a complex musical structure and give the listener a representation of the individual instruments on the one hand and the ensemble sound on the other? Siedenburg's basic research could lead to musically intelligent algorithms that make it possible, for example, to mix and play out concerts - whether with a symphony orchestra or a pop band - optimally for different listening requirements.
Siedenburg, who studied mathematics and music at Berlin's Humboldt University and the University of California at Berkeley (USA), combines an interest in signal processing and music psychology in his research. According to Siedenburg, hearing loss affects the musical experience on two levels: the sensory representation of music is degraded, i.e. the sensory signal is less finely coded, and music-psychological factors such as attention and listening experience also play a role - in trained musicians, these could initially partially compensate for the hearing loss.
Siedenburg already specialised in signal processing in his Diplom thesis in mathematics, laying the foundations for his recently awarded algorithm for decomposing the sounds of musical instruments. He then completed his doctorate in music technology at McGill University in Montréal (Canada) on the ability to remember musical timbres. In 2015, he moved to the University of Oldenburg to join the "Signal Processing" working group headed by Prof. Dr Simon Doclo, one of the lead researchers in the Hearing4all cluster of excellence.