Contributed Talks Monday
Merle Gerken1, Florian Kramer1, Dirk Oetting2, Anna Warzybok1
1Carl von Ossietzky Universität Oldenburg, 2Hörzentrum Oldenburg gGmbH
Hearing-aid users frequently report dissatisfaction with loudness perception. A possible reason is the individually varying amount of binaural broadband loudness summation, an aspect of gain individualization that is not considered by typical hearing-aid fitting rules. Loudness may therefore be rated as pleasant in the laboratory for soft and medium-level signals but not for loud sounds, because individual differences in binaural broadband loudness perception increase with level.
Therefore, it is reasonable to investigate loudness perception of realistic broadband sounds. This study examines loudness perception of everyday signals with two different hearing-aid fitting strategies, NAL-NL2 (Keidser et al., 2011) and trueLOUDNESS (Oetting et al., 2018). Data from aided hearing-impaired listeners are compared to data from normal-hearing listeners to quantify the loudness compensation provided by the hearing aids. The subjective data are further compared to predictions of the Dynamic Loudness Model (Chalupper & Fastl, 2002). Twenty-two different everyday signals, representing a variety of levels and spectra, were used to assess loudness perception.
Empirical data showed that loudness perception was completely restored to normal with trueLOUDNESS but not with the NAL-NL2 prescription rule. The main differences between the prescription rules were observed for signals at high levels. The loudness model showed discrepancies between predictions and measured data, resulting in only a moderate correlation.
Keidser, G., Dillon, H., Flax, M., Ching, T., & Brewer, S. (2011). The NAL-NL2 prescription procedure. Audiology Research, 1(1), 88–90.
Oetting, D., Hohmann, V., Appell, J. E., Kollmeier, B., & Ewert, S. D. (2018). Restoring perceived loudness for listeners with hearing loss. Ear and Hearing, 39(4), 664–678.
Chalupper, J., & Fastl, H. (2002). Dynamic loudness model (DLM) for normal and hearing-impaired listeners. Acta Acustica united with Acustica, 88(3), 378–386.
Stephan D. Ewert
Medizinische Physik and Cluster of Excellence H4A, Universität Oldenburg
In everyday life, our acoustic environments are often complex, with a variety of spatially distributed sources, background noise, and sound reflections from nearby structures as well as reverberation in enclosed spaces. Room acoustics simulation and rendering methods are important for performing reproducible experiments in complex acoustic environments under well-controlled laboratory conditions. To better understand auditory perception in such adverse conditions, auditory models are valuable and additionally serve as instrumental tools for hearing aid research and evaluation. Here, we present an overview of our generalized power spectrum model (GPSM), which has been successfully applied to basic psychoacoustics, speech intelligibility, and (spatial) audio quality. To generate ecologically relevant virtual acoustic environments in the laboratory, we show recent advancements of our room acoustics simulator (RAZR) for perceptually evaluated and computationally efficient simulation and rendering. Utilizing these developments, we investigated the simultaneous assessment of speech intelligibility, loudness, and localization of sound sources in a virtual living room environment. The suggested setup enables a 1:1 comparison of performance and behavior between the simulated environment and the recently established real “living room lab”.
Sónia L. Coelho-de-Sousa1, Miriam I. Marrufo-Pérez1, Marcelo Gómez-Álvarez1, Enrique A. Lopez-Poveda1
1 Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
Background. Adaptation to noise refers to the improvement in word-in-noise recognition as words are delayed by a few hundred milliseconds from the noise onset. This adaptation is thought to reflect one or more physiological mechanisms that can adjust the dynamic range of auditory nerve fibers, such as statistical adaptation to the most frequent noise level preceding the words and/or noise activation of olivocochlear efferent reflexes. The loss of cochlear synapses (synaptopathy) could impair these mechanisms and hence adaptation to noise. The aim of the present study was to investigate the impact of synaptopathy on adaptation to noise. Because synaptopathy predominantly reduces the number of cochlear synapses for auditory nerve fibers with high thresholds, we expected a larger effect of synaptopathy on adaptation to high-level noise.
Methods. For 48 normal-hearing participants (pure-tone average thresholds at 500–2000 Hz < 25 dB HL), we measured (1) speech reception thresholds (SRTs; signal-to-noise ratios at 50% recognition) for disyllabic words delayed 50 or 800 ms from the onset of stationary, speech-shaped noise; (2) high-frequency thresholds (HFTs) at 12 kHz; and (3) auditory brainstem responses (ABRs) to clicks presented at 95 and 110 dB ppeSPL. SRTs were measured at fixed noise levels of 55 and 78 dB SPL by adaptively varying the speech level. Adaptation to noise was calculated as the SRT improvement in the 800-ms relative to the 50-ms delay condition. Because adaptation is known to be greater for vocoded than for natural words, words were processed through a tone vocoder. The amplitude of ABR wave I at the two click levels and its rate of growth with increasing level (slope) were used as proxies for cochlear synaptopathy.
Results. Adaptation occurred at both noise levels (55 dB SPL: mean = 0.88 dB, p = 0.001; 78 dB SPL: mean = 1.89 dB, p < 0.001). At 78 dB SPL, adaptation was correlated with wave I slope [r(46) = 0.089, p = 0.039] but not with wave I amplitude [at 95 dB ppeSPL: r(46) = 0.024, p = 0.30; at 110 dB ppeSPL: r(46) = 0.013, p = 0.43]. At 55 dB SPL, adaptation was not significantly correlated with any ABR measure. Results were similar when the potential confounding effects of HFTs were partialled out.
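Partialling a covariate such as the HFTs out of a correlation can be done with a first-order partial correlation. A minimal sketch in pure Python (the variable names and any data passed in are hypothetical, not the study's actual values or analysis pipeline):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_r(x, y, z):
    """Correlation of x and y with the linear effect of z partialled out."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Hypothetical usage with per-participant lists:
# partial_r(adaptation, wave_i_slope, hft_12khz)
```

Adaptation itself would enter such an analysis as the per-participant SRT difference between the 50-ms and 800-ms delay conditions (positive values indicating improvement).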
Conclusions. Cochlear synaptopathy (as assessed by wave I slope) could reduce adaptation to high-level noise. More data are necessary to corroborate these findings. [Work supported by the Spanish Ministry of Science and Innovation (grant PID2019-108985GB-I00), and the European Regional Development Fund.]
David López-Ramos1, Luis E. López-Bascuas2, Almudena Eustaquio-Martín1, Miriam I. Marrufo-Pérez1, Enrique A. Lopez-Poveda1
1 Universidad de Salamanca, Salamanca, Spain
2 Universidad Complutense de Madrid, Madrid, Spain
Background. Noise adaptation is defined as the improvement in auditory function as the signal of interest is delayed from the noise onset. While adaptation to noise is known to occur in word recognition and temporal modulation detection, it is not yet known whether it also occurs in spectral or spectro-temporal modulation detection. This work aimed to investigate whether noise adaptation occurs in spectral, temporal, and spectro-temporal modulation detection, and whether the magnitude of adaptation in those tasks correlates with adaptation to noise in word recognition.
Methods. Eighteen normal-hearing volunteers participated in the experiments. Stimuli were presented monaurally to the left ear. In the modulation detection tasks, the signal was a 200-ms spectro-temporally modulated ripple noise. Because low temporal and spectral modulation frequencies are essential for speech recognition, the spectral modulation rate was 2 cycles/oct, the temporal modulation rate was 10 Hz, and the spectro-temporal modulations consisted of the combination of these two. In the speech recognition task, the signal consisted of disyllabic words, either unprocessed or vocoded to retain only envelope cues. Both tasks (modulation detection and speech recognition) were performed in quiet and in white noise (at 60 dB SPL) for noise-signal onset delays of 50 ms (early condition) and 800 ms (late condition). In the modulation detection tasks, the signal level was 60 dB SPL (0 dB SNR) and the modulation depth (dB) was varied adaptively to estimate the threshold depth at 71% correct detection. In the speech recognition tasks, the speech level was varied adaptively to estimate the SNR at 50% word recognition. Adaptation was calculated as the threshold difference between the early and late conditions.
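The abstract does not name the adaptive tracking rule; a two-down, one-up staircase, which converges to 70.7% correct (close to the 71% target), is a common choice for such detection tasks. A minimal sketch under that assumption (start value, step size, reversal count, and the simulated observer are all illustrative, not the study's settings):

```python
def two_down_one_up(respond, start=20.0, step=2.0, n_reversals=8):
    """Track modulation depth (dB) with a 2-down/1-up rule.

    `respond(depth)` returns True for a correct detection at that depth.
    Returns the mean depth over the last six reversals as the threshold
    estimate (~70.7% correct point)."""
    depth, correct_in_row, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(depth):
            correct_in_row += 1
            if correct_in_row == 2:        # two correct in a row: step down
                correct_in_row = 0
                if direction == +1:        # direction changed: reversal
                    reversals.append(depth)
                direction = -1
                depth -= step
        else:                              # one incorrect: step up
            correct_in_row = 0
            if direction == -1:
                reversals.append(depth)
            direction = +1
            depth += step
    last = reversals[-6:]
    return sum(last) / len(last)
```

With a deterministic observer that detects any depth at or above 7 dB, the track oscillates around that value and the estimate converges to it.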
Results. Mean adaptation was statistically significant in spectral [2.1 dB; p<0.001] and temporal [2.24 dB; p<0.001] modulation detection but not in spectro-temporal modulation detection [-0.04 dB; p=1.0]. Mean adaptation in word recognition was significant for vocoded words [2.09 dB, p<0.001] but not for natural words [0.59 dB, p=0.19]. Adaptation in natural and vocoded word recognition was not correlated with spectral modulation detection (r=-0.10, p=0.70; and r=-0.13, p=0.63, respectively), temporal modulation detection (r=-0.15, p=0.57; and r=0.09, p=0.72, respectively), or spectro-temporal modulation detection (r=-0.39, p=0.13; and r=-0.35, p=0.16, respectively).
Conclusions. Adaptation to noise occurs in spectral and temporal modulation detection, but is smaller (or absent) when spectral and temporal modulations are present simultaneously. More data are needed to elucidate the relationship between adaptation in the detection of spectral, temporal, and spectro-temporal modulations and adaptation in word recognition. [Supported by the University of Salamanca and Banco Santander (to DLR) and the Spanish Ministry of Science and Innovation (grant PID2019-108985GB-I00, to EALP).]
Alejandro Osses, Léo Varnet
ENS Paris, France
Reverse correlation is a powerful method for establishing the relationship between the stimuli presented in a behavioural experiment and the participants' responses. In a listening experiment, this relationship can reveal the particular listening strategy participants used, indicating, for instance, which time-frequency information was weighted most (or least) over the course of the experiment. In this contribution, we show how we incorporated an auditory model of the hearing periphery, the modulation-filter-bank model, into the experimental design of a vowel-consonant-vowel (/aba/-/ada/) discrimination task, and how the model was used to formulate and test research hypotheses. Given the model's limitations, we also discuss how reliably we can speculate about the underlying hearing mechanisms on the basis of the simulated outcomes.
Leanne Sijgers, Christof Röösli, Rahel Bertschinger, Adrian Dalbert, Norbert Dillier, Alexander Huber, Flurin Pfiffner
Department of Otorhinolaryngology, Head & Neck Surgery, University Hospital Zurich, University of Zurich, Switzerland
Objectives: The inter-phase gap (IPG) offset effect is defined as the dB offset between the linear parts of electrically evoked compound action potential (ECAP) amplitude growth functions for two stimuli differing only in IPG. The method was recently suggested to represent neural health in cochlear implant (CI) users while being unaffected by CI electrode impedances; accordingly, a higher IPG offset effect should reflect better neural health. Here, we aimed to (1) examine whether the IPG offset effect reflects neural health in CI recipients with residual acoustic hearing, and (2) investigate the dependency of the IPG offset effect on hair cell survival and intracochlear electrode impedances. If the IPG offset effect indeed represents neural health, we hypothesized that it should correlate negatively with the ECAP threshold and with the preoperative pure-tone audiogram (PTA).
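The offset defined above can be computed by fitting straight lines to the linear parts of the two amplitude growth functions and measuring their horizontal (dB) separation. A minimal sketch (the levels, amplitudes, and the choice of reference amplitude are illustrative, not the study's actual procedure):

```python
def fit_line(x, y):
    """Least-squares slope and intercept of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def ipg_offset_db(levels, amps_short_ipg, amps_long_ipg):
    """dB offset between the linear parts of two ECAP amplitude growth
    functions: the horizontal shift between the fitted lines, evaluated
    at a common reference amplitude."""
    s1, b1 = fit_line(levels, amps_short_ipg)
    s2, b2 = fit_line(levels, amps_long_ipg)
    a_ref = (max(amps_short_ipg) + max(amps_long_ipg)) / 2
    # stimulus level needed to reach a_ref on each fitted line
    return (a_ref - b1) / s1 - (a_ref - b2) / s2
```

For two parallel growth functions the result is simply the level shift between them, independent of the reference amplitude chosen.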
Methods: Seventeen adult subjects with residual hearing at 500 Hz undergoing CI surgery at the University Hospital of Zurich were prospectively enrolled. ECAP thresholds, IPG offset effects, ECochG responses to 500-Hz tone bursts, and monopolar electrical impedances were obtained at an apical, a middle, and a basal electrode pair during surgery and between four and twelve weeks after CI surgery.
PTAs were recorded within three weeks prior to surgery and approximately six weeks after surgery. Relationships between (changes in) ECAP threshold, IPG offset, impedance, PTA, and ECochG amplitude were assessed using linear regression analyses and t-tests. Lumped-element model simulations were conducted to better understand the influence of electrical impedances on the IPG offset effect.
Results: The IPG offset effect correlated positively with the ECAP threshold in intraoperative (r = 0.36, p = 0.016) and postoperative recordings (r = 0.58, p = 0.00074) and did not correlate significantly with the preoperative PTA (p = 0.982). The IPG offset effect showed a significant postoperative decrease in subjects with a postoperative ECochG amplitude drop (p = 0.026), but not in subjects without such a drop (p = 0.32). The change in impedance between intra- and postoperative recordings correlated negatively with the change in IPG offset effect (p = 0.0305). The lumped-element model simulations revealed a relationship between the electrode-tissue interface impedance and the IPG offset effect.
Conclusions: The study results did not confirm the hypothesized relationships between the IPG offset effect and the ECAP threshold or between the IPG offset effect and pre-operative acoustic hearing. However, they revealed a dependency of the IPG offset effect on postoperative changes in electrical impedances. These findings would limit the method’s usability for determining neural health in CI recipients with residual acoustic hearing.