Project A6 - Modeling musical instrument identification under realistic acoustical conditions

Music is a form of acoustical communication that often features highly complex auditory scenes in which sounds from multiple musical instruments or voices overlap in time and frequency.

The goal of this project is to test and model the perception of musical scenes under realistic acoustical conditions. Using an instrument identification task, the project experimentally characterizes human identification performance in a symphony orchestra scenario with binaural cues from virtually rendered concert hall acoustics.

Based on these experimental data, the project aims to construct a binaural computational model to simulate human performance.

Publications

2024

  • Benjamin AJ, Siedenburg K (2024) Evaluating audio quality ratings and scene analysis performance of hearing-impaired listeners for multi-track music. JASA Express Lett. 4 (11): 113202. DOI: 10.1121/10.0032474
  • Bürgel M, Mares D, Siedenburg K (2024) Enhanced salience of edge frequencies in auditory pattern recognition. Atten. Percept. Psychophys. (Epub ahead of print). DOI: 10.3758/s13414-024-02971-x
  • Bürgel M, Siedenburg K (2024) Impact of interference on vocal and instrument recognition. J. Acoust. Soc. Am. 156 (2): 922–938. DOI: 10.1121/10.0028152
  • Jacobsen S, Siedenburg K (2024) Exploring the relation between fundamental frequency and spectral envelope in the perception of musical instrument sounds. Acta Acustica 8 (48): 1–15. DOI: 10.1051/aacus/2024038
  • Lesimple C, Kuehnel V, Siedenburg K (2024) Hearing aid evaluation for music: Accounting for acoustical variability of music stimuli. JASA Express Lett. 4 (9): 093201. DOI: 10.1121/10.0028397
  • Siedenburg K, Bürgel M, Özgür E, Scheicht C, Töpken S (2024) Vibrotactile enhancement of musical engagement. Sci. Rep. 14 (1): 7764. DOI: 10.1038/s41598-024-57961-8

2023

  • Gerdes K, Siedenburg K (2023) Lead-vocal level in recordings of popular music 1946–2020. JASA Express Lett. 3: 043201. DOI: 10.1121/10.0017773
  • Siedenburg K, Graves J, Pressnitzer D (2023) A unitary model of frequency change perception. PLOS Computational Biology 19 (1): e1010307. DOI: 10.1371/journal.pcbi.1010307

Collaborations of the PI with the SFB prior to funding

  • Siedenburg K, Barg FM, Schepker H (2021) Adaptive auditory brightness perception. Sci. Rep. 11: 21456. DOI: 10.1038/s41598-021-00707-7
  • Siedenburg K, Röttges S, Wagener KC, Hohmann V (2020) Can you hear out the melody? Testing musical scene perception in young normal-hearing and older hearing-impaired listeners. Trends Hear. 24: 2331216520945826, 1–15. DOI: 10.1177/2331216520945826
  • Siedenburg K, Schädler MR, Hülsmeier D (2019) Modeling the onset advantage in musical instrument recognition. J. Acoust. Soc. Am. 146 (6): EL523–EL529. DOI: 10.1121/1.5141369