Until recently, hearing impairment has been a blind spot on the map of music perception research. We aim to provide the conceptual and empirical groundwork for optimizing hearing aids for music. This raises a host of questions: How do listeners parse and organize complex musical scenes? How is music listening affected by hearing loss? And, given that hearing aids are currently optimized for speech, how can we improve music listening with hearing aids?
Siedenburg, K., Röttges, S., Wagener, K. C., and Hohmann, V. (in press). Can you hear out the melody? Testing musical scene perception of young hearing-impaired and older normal-hearing listeners. Trends in Hearing, 24, 1-15. doi: 10.1177/2331216520945826
This project has received funding from the European Union’s Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Skłodowska-Curie Grant Agreement No. 747124. The project was entitled TIMPANI - Test, Predict, and Improve Musical Scene Perception of Hearing-Impaired Listeners.
What is timbre and what does it do in music? What are the acoustic and cognitive factors that affect timbre dissimilarity and brightness perception? Answering these questions may not only illuminate the psychological basis of this important auditory parameter, but also deepen our general understanding of music perception.
Stephen McAdams (McGill University), Charalampos Saitis (Queen Mary University of London), Daniel Pressnitzer (Ecole Normale Supérieure, Paris) & Jackson Graves (Ecole Normale Supérieure, Paris), Henning Schepker (Starkey Hearing), Christoph Reuter (University of Vienna).
Siedenburg, K., Saitis, C., McAdams, S., Popper, A. N., and Fay, R. R. (2019). Timbre: Acoustics, Perception, and Cognition. Springer Handbook of Auditory Research. Springer Nature, Heidelberg, Germany.
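As a minimal illustration of one acoustic factor behind brightness perception, the sketch below computes the spectral centroid, a widely used acoustic correlate of perceived brightness. This is a generic textbook descriptor offered for illustration, not the specific model used in this project; the function name and test signals are hypothetical.

```python
import numpy as np

def spectral_centroid(x, fs):
    """Amplitude-weighted mean frequency of the magnitude spectrum (Hz)."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Two synthetic tones: adding a strong high partial raises the centroid,
# which listeners typically report as a "brighter" sound.
fs = 16000
t = np.arange(fs) / fs
dull = np.sin(2 * np.pi * 220 * t)                   # single low partial
bright = dull + 0.8 * np.sin(2 * np.pi * 2200 * t)   # extra high partial
```

Many other spectral and temporal descriptors contribute to timbre dissimilarity; the centroid is simply the one most consistently linked to brightness ratings.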
We develop signal models of acoustic sounds that shed light on how the human auditory system exploits acoustic information. Examples include a transient extraction algorithm that allowed us to isolate transients and onsets and study their role in musical instrument identification in greater detail. [Sound examples]
Siedenburg, K., Schädler, M. R., and Hülsmeier, D. (2019). Modeling the onset advantage in musical instrument recognition. The Journal of the Acoustical Society of America, 146(6), EL523–EL529.
Siedenburg, K. and Doclo, S. (2017). Iterative structured shrinkage algorithms for stationary/transient audio separation. In Proc. of the 20th Int. Conf. on Digital Audio Effects (DAFx-20), Edinburgh, Sep 5–8. [Best Paper Award]
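To give a flavor of stationary/transient separation, the sketch below uses a simple median-filtering approach on an STFT magnitude spectrogram (in the spirit of harmonic/percussive separation). This is a generic baseline for illustration only, not the iterative structured shrinkage algorithm from the paper above; the function name and parameters are hypothetical.

```python
import numpy as np
from scipy import signal
from scipy.ndimage import median_filter

def separate_stationary_transient(x, fs, n_fft=1024, kernel=17):
    """Split a signal into stationary and transient components via
    median filtering of the STFT magnitude along time vs. frequency."""
    f, t, X = signal.stft(x, fs, nperseg=n_fft)
    mag = np.abs(X)
    # Stationary energy varies slowly over time: median-filter along time.
    stat = median_filter(mag, size=(1, kernel))
    # Transient energy is broadband: median-filter along frequency.
    trans = median_filter(mag, size=(kernel, 1))
    # Soft masks proportional to each filtered energy estimate.
    eps = 1e-12
    mask_stat = stat / (stat + trans + eps)
    mask_trans = trans / (stat + trans + eps)
    _, x_stat = signal.istft(X * mask_stat, fs, nperseg=n_fft)
    _, x_trans = signal.istft(X * mask_trans, fs, nperseg=n_fft)
    return x_stat[:len(x)], x_trans[:len(x)]

# Example: a sustained tone (stationary) plus a short click (transient).
fs = 16000
x = 0.5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
x[8000:8032] += 1.0  # brief broadband transient
x_stat, x_trans = separate_stationary_transient(x, fs)
```

Masking the spectrogram this way keeps the decomposition approximately energy-preserving; more sophisticated methods such as the structured sparsity approach cited above optimize the decomposition explicitly rather than relying on fixed filters.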