Introduction Talks I

Mie Jørgensen: Investigating the Effect of High-Frequency Amplification as Tinnitus Treatment

Mie Lærkegård Jørgensen1,2, Petteri Hyvärinen2, Sueli Caporali1, Torsten Dau2

1 WS Audiology, Lynge, Denmark

2 Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark

The objective of the study was to investigate the effect of broadband amplification (125 Hz to 10 kHz) as tinnitus treatment for subjects with high-frequency hearing loss and to compare these effects with an active placebo condition using band-limited amplification (125 Hz to 3-4 kHz). The study was a double-blinded crossover study. Twenty-two subjects with a high-frequency (≥ 3 kHz) hearing loss and chronic tinnitus were included in the study, and 17 completed the full treatment protocol. Two different hearing aid treatments were provided for 3 months each: broadband amplification that provided gain in the frequency range from 125 Hz to 10 kHz and band-limited amplification that only provided gain in the low-frequency range (≤ 3-4 kHz). The effect of the two treatments on tinnitus distress was evaluated with the tinnitus handicap inventory (THI) and the tinnitus functional index (TFI) questionnaires. The effect of the treatment on tinnitus loudness was evaluated with a visual analog scale (VAS) for loudness and a psychoacoustic loudness measure. Furthermore, tinnitus annoyance was evaluated with a VAS for annoyance. A statistically significant difference was found between the two treatment groups (broadband vs. band-limited amplification) for the treatment-related change in THI and TFI with respect to baseline. Furthermore, a statistically significant difference was found between the two treatment conditions for the annoyance measure. Regarding the loudness measure, no statistically significant differences were found between the treatments. Overall, the results from the present study suggest that tinnitus patients with high-frequency hearing loss can experience a decrease in tinnitus-related distress, annoyance and loudness from high-frequency amplification.

Punitkumar Makani: A Combined Image- and Coordinate-Based Meta-Analysis of Whole-Brain Voxel-Based Morphometry Studies Investigating Subjective Tinnitus

Punitkumar Makani1, Marc Thioux1, Sonja J. Pyott1, Pim van Dijk1

1Department of Otorhinolaryngology – Head and Neck Surgery, University of Groningen, University Medical Centre Groningen, P.O. Box 30.001, 9700 RB Groningen, The Netherlands

Background
Previous voxel-based morphometry (VBM) studies investigating subjective tinnitus (here referred to simply as tinnitus) have reported structural differences in a variety of spatially distinct cortical regions (Adjamian et al. 2014; Elgoyhen et al. 2015). However, results have been highly inconsistent and sometimes contradictory. In the current study, we conducted a combined image- and coordinate-based meta-analysis of whole-brain VBM studies to identify robust gray matter differences associated with tinnitus, and to examine the possible effects of hearing loss on the outcome of the meta-analysis.

Methods
The PubMed and Web of Science databases were searched for studies published up to August 2021. Additional manual searches were conducted using the BrainMap, Neurosynth, and NeuroVault websites for studies published up to December 2021. The meta-analysis was conducted using Seed-Based d Mapping with Permutation of Subject Images (SDM-PSI), which allows the combination of statistical maps from the original study results and tables of coordinates reporting significant group differences. The results of the whole-brain meta-analyses were corrected for multiple comparisons using threshold-free cluster enhancement (PFWE ≤ 0.05).

Results
Of the 153 identified studies, a total of 15 studies met the inclusion criteria, resulting in the inclusion of a total of 423 individuals with tinnitus and either normal hearing or hearing loss (mean age 51 years; 173 female) and 508 individuals without tinnitus and either normal hearing or matched hearing (mean age 52 years; 234 female). Unthresholded statistical images were obtained for 5 studies. We found a small but significant gray matter reduction in the left inferior/middle temporal gyrus for groups of normal-hearing individuals with tinnitus compared to hearing-matched individuals without tinnitus. In sharp contrast, in groups with hearing loss, tinnitus was associated with larger gray matter volumes in the lingual gyrus and precuneus bilaterally. These results appear to depend heavily on matching the hearing levels between the groups with and without tinnitus.

Conclusions

The results of this meta-analysis suggest that the absence or presence of hearing loss is the driving force of changes in gray matter across individuals with and without tinnitus. Future studies should carefully account for confounders, such as hearing loss, hyperacusis, anxiety, and depression, to identify gray matter changes specifically related to tinnitus. Ultimately, the aggregation of standardized individual datasets with both anatomical and useful phenotypical information will permit a better understanding of tinnitus-related gray matter differences, the effects of potential comorbidities, and their interactions with tinnitus.

Vassilis Pelekanos: Developing methodology for evaluating acoustic radiation white matter integrity in the UK Biobank imaging data

Vassilis Pelekanos1, Anissa L Ramadhani1, Jessica de Boer1, Katrin Krumbholz1
1Hearing Sciences, School of Medicine, University of Nottingham, Nottingham, UK


It has been suggested that hearing loss (HL) is associated with changes in both brain function and structure and that some of these changes may underlie hearing loss-related symptoms, such as tinnitus and hyperacusis. Existing experimental evidence in humans, however, is often widely inconsistent, converging only in a few associative (non-auditory) regions, such as the precuneus or middle temporal gyrus. The absence of consistent changes in unimodal auditory regions, such as the auditory cortex, seems surprising and might relate to a lack of power due to small sample sizes and/or the inherent difficulty in dissociating the effects of HL from the effects of age, which is known to be a powerful driver of variation in both HL and brain function and structure. The UK Biobank (UKBB) is a large-scale biometric database containing a variety of biological and health-related measures from hundreds of thousands of middle-aged subjects. As the database, at least for a subset of its subjects, also contains neuroimaging and hearing-related data, it represents a unique research resource for investigating HL-related brain changes without the restrictions on sample size posed by ordinary experimental approaches.

The current study is part of a wider project aimed at exploiting the UKBB resource to investigate HL-related functional and structural changes in the human primary auditory cortex (PAC) and its afferent projection tract from the thalamus, the acoustic radiation (AR). The study seeks to develop appropriate methodology for reliably evaluating the AR's microstructural integrity – a challenging task due to the tract's relatively small size and the crossing of other, larger tracts through its territory. We address the two key components of white-matter analysis: (1) tract reconstruction in individual brains, and (2) choice of diffusion MRI (dMRI) metric to evaluate microstructural integrity within the reconstructed tracts. To address the former (1), we systematically compare the UKBB's own reconstruction approach, AutoPtx (available in the FSL imaging analysis suite), with a more recent, and more refined, approach, XTRACT (developed based on the UKBB and the Human Connectome Project data). To address the latter (2), we compare the traditional, and most widely used, dMRI metric, the diffusion tensor-based "fractional anisotropy" (FA), with metrics deriving from the more recent NODDI model, which seeks to directly quantify the density and orientation dispersion of white matter fibre tracts. Our results show, firstly, that one of the main determinants of successful AR reconstruction is the appropriate choice of the tract's seed region (the medial geniculate body), and, secondly, that, in order to correctly infer AR microstructural integrity, it is crucial to use a dMRI metric, such as NODDI, that considers, and adequately unconfounds, the influence of crossing fibres.

Miguel Temboury: Age-related Peripheral Degeneration Reflected in Frequency Following Responses using Electrocochleography

Miguel Temboury

There are currently no objective clinical measures for diagnosing cochlear peripheral damage leading to suprathreshold hearing deficits. Loss of cochlear synapses and auditory nerve fibers (ANF) preceding hair cell damage has been reported in animal models and in histopathological human studies of healthy aging subjects. The frequency following response (FFR) has been proposed as a measure sensitive to peripheral neural degeneration. FFRs reflect synchronous neural activity phase-locked to the fine structure of periodic stimuli, and FFR amplitudes are reduced in aging ‘normal-hearing’ individuals. Recent modeling evidence suggested that this reduction could originate in the auditory periphery and be mainly driven by ANF loss. However, FFRs typically represent a superposition of responses from subcortical nuclei dominated by the brainstem, such that the source of this reduction is difficult to identify with traditional recording setups targeting brainstem activity. Here we use electrocochleographic tympanic membrane electrodes to isolate peripheral FFRs (or auditory nerve neurophonics, ANNs) and show reduced amplitudes in older listeners with near-clinically normal hearing thresholds. Traditional FFRs measured simultaneously from the ipsilateral mastoid to the vertex show a reduction in amplitude similar to that of the ANNs. This suggests that peripheral deficits represent the main source underlying the previously reported age effects. Since the AN is effectively an information bottleneck to retro-cochlear and more central processes along the auditory pathway, potentials generated in healthy nuclei in the brainstem (and beyond) could be degraded due to degeneration in the AN. Our results confirm the presence of peripheral neural degeneration in aging individuals with normal audiometric thresholds and suggest that the FFR might be a sensitive tool for detecting this damage.
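
As a rough illustration of how such FFR/ANN amplitudes are commonly quantified, the sketch below averages stimulus-locked epochs and reads out the spectral magnitude at the stimulus frequency. This is a generic, hedged example rather than the recording pipeline used here; the sampling rate, epoch length and stimulus frequency are placeholder assumptions.

import numpy as np

def ffr_amplitude(epochs, fs, f_stim):
    """Estimate FFR amplitude at the stimulus (fine-structure) frequency.

    epochs : array (n_epochs, n_samples) of baseline-corrected EEG epochs
    fs     : sampling rate in Hz
    f_stim : stimulus frequency in Hz
    """
    avg = epochs.mean(axis=0)                    # time-domain averaging improves SNR
    spec = np.abs(np.fft.rfft(avg)) / avg.size   # single-sided magnitude spectrum
    freqs = np.fft.rfftfreq(avg.size, d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - f_stim))]

# Hypothetical example: 300 noisy epochs containing a weak 200-Hz response
fs, f_stim = 16000, 200.0
t = np.arange(int(0.1 * fs)) / fs
rng = np.random.default_rng(0)
epochs = 0.1 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 1, (300, t.size))
print(ffr_amplitude(epochs, fs, f_stim))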

Nicole Miller-Viacava: Categorisation of biological sounds in pristine soundscapes: A psychophysical investigation based on an ecologically-valid database

Miller-Viacava, Nicole; Axel, Anne C.; Ferriere, Regis; Friedman, Nicholas R.; Le Tourneau, François-Michel; Llusia, Diego; Mullet, Timothy C.; Phillips, Yvonne F.; Willie, Jacob; Sueur, Jérôme; and Lorenzi, Christian

Evolutionary pressures may have produced specialised neural mechanisms that are hardwired to process different categories of perceptible objects, such as living things. Consistent with this theory, electrophysiological and brain-imaging studies have identified neural structures in the human brain that are involved in the categorical perception of biological versus geophysical sounds. These studies have identified several acoustic features that play a key role in the categorical perception of biological sounds, such as high spectral modulations and slow temporal modulations. However, the stimulus sets used in these studies had poor ecological validity due to an over-representation of mammals, whereas pristine soundscapes are usually dominated by birds and insects. Sounds produced by these animals may often be inharmonic or show fast temporal rates. Therefore, the role of high spectral modulations and slow temporal modulations in the identification of biological sounds may have been overestimated.

The aim of this research is to characterise the acoustic cues used by humans in the perception and categorisation of biological (versus non-biological) sounds. This is achieved by analysing 1-sec samples extracted from pristine soundscapes recorded in nine distinct terrestrial biomes. The samples are grouped according to the categorisation made by normal-hearing adult subjects. The spectro-temporal modulation power spectra of the stimuli are calculated and analysed to determine the cues used by the subjects in this task. Additional analyses scrutinise the potential contribution of further cues, such as modulation phase and cross-spectral modulation correlations, to categorisation.
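
For readers unfamiliar with modulation power spectra, a minimal sketch of how such a representation can be computed is given below: a log-magnitude spectrogram is Fourier-transformed along both its time and frequency axes. The window length, overlap and compressive nonlinearity are illustrative assumptions, not the exact analysis parameters of this study.

import numpy as np
from scipy.signal import spectrogram

def modulation_power_spectrum(x, fs, nperseg=512, noverlap=384):
    """2-D modulation power spectrum of a sound (illustrative parameters)."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    log_sxx = np.log(sxx + 1e-12)          # compressive nonlinearity
    log_sxx -= log_sxx.mean()              # remove the DC component before the 2-D FFT
    mps = np.abs(np.fft.fftshift(np.fft.fft2(log_sxx))) ** 2
    # Axis values: spectral modulation in cycles/Hz, temporal modulation in Hz
    spec_mod = np.fft.fftshift(np.fft.fftfreq(f.size, d=f[1] - f[0]))
    temp_mod = np.fft.fftshift(np.fft.fftfreq(t.size, d=t[1] - t[0]))
    return spec_mod, temp_mod, mps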

Overall, the outcome of this research will help unveil the acoustic cues and mechanisms used by humans when listening to natural soundscapes, and their capacity to monitor biological sound sources in their immediate environment through their auditory brain.

Acknowledgements: This project was supported by ANR grants HEARBIODIV and ANR-17-EURE-0017.

Julian Schott: Electrically evoked auditory steady state response detection in cochlear implant recipients using a system identification approach

Julian Schott

Regular cochlear implant (CI) fitting is a crucial part of successful hearing restoration with CIs. Unfortunately, the process is time-consuming, difficult to perform and has variable outcomes, due to large variability across clinicians and CI recipients. Electrically evoked auditory steady-state responses (EASSRs) are neural auditory responses resulting from neural phase-locking to continuous electrical stimuli. They can serve as an objective measure to determine CI stimulation levels and are therefore a promising step towards fully automated, objective fitting of CIs. A major challenge when recording EASSRs is the stimulation artifacts of the CI, which contaminate the EEG recording. These artifacts are highly correlated with the EASSR and overlap with the response in both the time and the frequency domain. They are therefore difficult to distinguish from the real response and may lead to a false-positive response detection in the absence of an EASSR. Existing artifact removal techniques such as linear interpolation, template subtraction and independent component analysis rely on several EASSR recordings with different modulation frequencies to safely evaluate the artifact removal quality and interpret the determined responses. The evaluation is based on the so-called apparent latency of the EASSR. We recently introduced a new approach based on a system identification procedure. It aims to identify the group delay of a finite impulse response system, which is equivalent to the apparent latency of the EASSR.
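
To give a flavour of the system-identification idea, the sketch below fits a finite impulse response (FIR) system from the stimulus envelope to the EEG by least squares and estimates its group delay from the local phase slope, which corresponds to the apparent latency. This is a generic illustration under our own assumptions (signal names, filter order and delay estimator), not the exact implementation of the proposed method.

import numpy as np

def estimate_fir(stimulus, eeg, order):
    """Least-squares FIR fit: eeg[n] ~ sum_k h[k] * stimulus[n - k]."""
    X = np.column_stack([np.roll(stimulus, k) for k in range(order)])
    X[:order, :] = 0.0                       # discard wrapped-around samples
    h, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return h

def group_delay_seconds(h, f_mod, fs):
    """Group delay of the FIR system h at modulation frequency f_mod (in seconds)."""
    w = 2 * np.pi * np.array([f_mod - 0.5, f_mod + 0.5]) / fs   # rad/sample
    k = np.arange(len(h))
    H = np.array([np.sum(h * np.exp(-1j * wk * k)) for wk in w])
    dphi = np.unwrap(np.angle(H))
    return -(dphi[1] - dphi[0]) / (w[1] - w[0]) / fs            # -dphi/dw, converted to s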

In this work, we illustrate the newly developed approach and present the most recent results obtained from a dataset of 16 CI users.

Bernhard Eurich: Lower interaural coherence in off-signal bands impairs binaural detection

Bernhard Eurich, Jörg Encke, Stephan D. Ewert, Mathias Dietz
Department für Medizinische Physik und Akustik, Universität Oldenburg, 26111 Oldenburg,
Germany


Differences in interaural phase configuration between a target and a masker can lead to substantial binaural unmasking. This effect is decreased for masking noises with an interaural time difference (ITD). Adding a second noise with an opposing ITD in most cases further reduces binaural unmasking. Thus far, modeling of these detection thresholds has required both a mechanism for internal ITD compensation and an increased filter bandwidth. An alternative explanation for the reduction is that unmasking is impaired by the lower interaural coherence in off-frequency regions caused by the second masker. Based on this hypothesis, the current work proposes a quantitative multi-channel model using monaurally derived peripheral filter bandwidths and an across-channel incoherence interference mechanism. This mechanism differs from wider filters since it has no effect when the masker coherence is constant across frequency bands. Combined with a monaural energy discrimination pathway, the model predicts the differences between a single delayed noise and two opposingly delayed noises, as well as four other data sets. It helps resolve the inconsistency that simulating some data requires wide filters while simulating other data requires narrow filters.
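
For intuition about the hypothesised cue, interaural coherence in a frequency band can be quantified as the maximum of the normalised cross-correlation between the band-limited left- and right-ear signals over plausible ITDs. The sketch below is a generic illustration; the Butterworth band-pass front end and the ITD range are our assumptions, not the model's actual periphery.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_coherence(left, right, fs, f_lo, f_hi, max_itd=0.001):
    """Max normalised cross-correlation of band-limited ear signals over +/- max_itd."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    l, r = sosfiltfilt(sos, left), sosfiltfilt(sos, right)
    max_lag = int(round(max_itd * fs))
    norm = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
    return max(np.sum(l * np.roll(r, lag))
               for lag in range(-max_lag, max_lag + 1)) / norm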

Samira Saak: Comparison of user interfaces for measuring the matrix test on a smartphone

Samira Saak, Angelika Kothe, Mareike Buhl, Birger Kollmeier

Medical Physics, Department of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Germany

Using smartphones for mobile self-testing has the potential to provide easy access to speech testing for a large proportion of the world population - especially for communities underserved by audiological practitioners. The matrix test is a repeatable speech test currently available in 20 languages, which cover about 60 % of the world population, and it is consequently an ideal candidate for mobile speech testing. Currently, the matrix test is performed with an experimenter. It is, therefore, necessary to investigate the feasibility of self-conducting the matrix test on a smartphone, given the restricted screen size and household in-ear headphones. For that purpose, the traditional closed matrix user interface, as well as three alternative interfaces, are compared regarding SRT accuracy, user preference, and time efficiency.

The results show that the traditional closed matrix user interface was best in terms of accuracy, user preference, and time efficiency. In spite of the small screen size, both younger normal-hearing and older hearing-impaired participants were capable of performing the matrix test. The traditional closed matrix user interface is, therefore, proposed for smartphone-based implementations of the matrix test.

Janin Benecke: The self-efficacy of hearing-aid self-adjustment: a sound-matching study

Janin Benecke

Self-adjustment can bolster hearing-aid personalisation, but requires interfaces that are easy, efficient and effective. Studies have shown that individual sound quality criteria may be influenced by numerous methodological factors, such as the chosen stimulus, initial parameter settings and number of controls. This study employed a method-of-adjustment sound-matching task to assess how these factors can fundamentally affect the accuracy and reliability of self-adjustments.

In an online study, 58 listeners (with different self-assessed hearing acuity and levels of experience as musicians or in sound production) repeatedly adjusted stimuli (speech, music, outdoor noise and speech in noise) altered in frequency gain response to match unaltered (reference) stimuli using three interfaces: 1) a single slider controlling both bass and treble, 2) separate bass and treble sliders and 3) separate bass, mid and treble sliders. Participants also reported task demand, performance, effort, and frustration for each interface using the NASA Task Load Index.

Matches with the single-slider interface were markedly more accurate than with the 2- and 3-slider interfaces (median absolute error, i.e., the difference between reference and matched sound, of 1 dB compared with 2.1–3.8 dB) and more reliable (median standard deviation of the signed error of 1.6 dB compared with 3.0–6.2 dB). There were few differences between the interfaces with 2 and 3 sliders, apart from lower reliability with 3 sliders. Similarly, task load index ratings for the 1-slider interface were lower than for both other interfaces. Across interfaces, noisy stimuli produced more reliable and accurate matches. Participants with experience matched bass and treble in the 3-slider interface more reliably and accurately. While experience did not affect accuracy or reliability with 1 slider, experienced participants showed a shorter interaction duration with the slider in this interface, and overall shorter completion times per trial than inexperienced participants.
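
For reference, the two summary statistics reported above can be computed per participant and condition roughly as follows (a minimal sketch with placeholder gain values):

import numpy as np

matched = np.array([1.5, -0.5, 2.0, 0.8])    # hypothetical repeated matches (dB)
reference = 0.0                              # hypothetical reference gain (dB)
signed_error = matched - reference
accuracy = np.median(np.abs(signed_error))   # median absolute error (dB)
reliability = np.std(signed_error, ddof=1)   # SD of the signed error (dB)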

This study investigated underlying perceptual principles of frequency-gain adjustments, indicating how and when differences in frequency-gain can be heard in self-adjustments. Sound-matching produced smaller errors than previously studied just-noticeable differences in frequency gain (Caswell-Midwinter & Whitmer, 2019). A future within-participant study will assess the relationship of sound-matching performance to self-adjusted preferences.

Introduction Talks II

Shahin Safazadeh: Higher Auditory Cortical Evoked Response to Low-Frequency Sounds in Tinnitus: An fMRI Study

Shahin Safazadeh1,2,3, Marc Thioux1,2,3, Remco J. Renken2,3, and Pim van Dijk1,2
1Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center, Groningen, The Netherlands
2Graduate School of Medical Sciences (Research School of Behavioral and Cognitive Neurosciences), University of Groningen, The Netherlands
3University of Groningen, Cognitive Neuroscience Center, Biomedical Sciences of Cells and Systems, Groningen, The Netherlands
Background
Several studies have investigated the auditory neural response to sounds in individuals with subjective tinnitus, with mixed results. Only a couple of studies, however, have investigated tinnitus-related changes in patients with normal hearing. In this population, differences in the sound-evoked auditory cortex response can be more clearly attributed to tinnitus, independently of hearing loss. In the current study, we aimed to investigate possible group differences associated with tinnitus in a population with normal hearing thresholds, using monaural stimulation.
Methods
Seventeen volunteers with subjective tinnitus and 17 healthy controls, all with normal hearing thresholds, were included. Pure tones of four distinct frequencies (353, 1000, 6000, and 8000 Hz) were monaurally presented in a sparse sampling design. Functional EPI images were acquired using a special 2D radio-frequency excitation pulse (ZOOMit, Siemens) to increase the spatial resolution. A generalized linear model with the 8 conditions (4 frequencies × 2 stimulation sides) was fitted to the preprocessed functional images. Significantly evoked voxels, at the group level, were used as the region of interest. A fully factorial model with 3 factors (group, sound frequency, and stimulated ear) was used to investigate the effect of tinnitus on sound-evoked brain activity. The sound-evoked responses were fed into a principal component analysis (PCA) to find components representing the variance in the signal as a result of sound frequency and lateralization. The obtained PCA loadings and maps were then compared between the groups.
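
As a generic illustration of the final analysis step (not the exact fMRI pipeline used here), a PCA of sound-evoked responses can be computed from a voxels-by-conditions matrix of response estimates; the component loadings describe profiles across the 8 conditions and the scores form voxel-wise component maps.

import numpy as np

def pca_on_responses(betas):
    """PCA of sound-evoked responses.

    betas : array (n_voxels, n_conditions) of response estimates (e.g. GLM betas)
    Returns loadings over conditions and per-voxel component maps.
    """
    centered = betas - betas.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    loadings = vt        # (n_components, n_conditions): profile across conditions
    maps = u * s         # (n_voxels, n_components): voxel-wise expression (scores)
    return loadings, maps
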
Results
In a preliminary analysis, we found a trend toward a significant interaction between tinnitus status and stimulation side in the right lateral auditory cortex (p<0.01). Post-hoc analyses revealed higher sound-evoked responses in participants with tinnitus, with this difference being greater when the left ear was stimulated. PCA analyses returned a second component showing a staircase behavior across frequencies for each ear. This data-driven analysis revealed compatible results, with the strongest group differences in the loadings of low frequencies following left-ear stimulation. In the tonotopic maps, responses to low frequencies were over-represented in the right lateral auditory regions in the tinnitus group compared to the same hemisphere in the control group, and also compared to the left hemisphere of the patients’ group.
Conclusion
The tonotopic maps of both groups were similar. However, in the tinnitus group, there was stronger activity in the lateral region of the right hemisphere, which is specialized in the coding of low sound frequencies. This hyperactivity, which may be interpreted as the overrepresentation of lower frequencies in the right hemisphere, was dependent on the laterality of sound presentation, with larger group differences for contra-lateral sound stimulation. Along with previous studies, our results suggest that in tinnitus, an augmenting mechanism may be present in the cortical region but not necessarily at the tinnitus frequency.

Niels Overby: Combined speech enhancement and dynamic range compression

Niels Overby

Noise reduction (NR) and wide dynamic range compression (WDRC) are two essential building blocks in modern hearing aids to increase listening comfort and restore audibility. However, NR and WDRC commonly counteract each other. For example, the improvement in signal-to-noise ratio (SNR) after NR can be reduced by WDRC due to the amplification of residual noise. The extent of interaction between NR and WDRC depends on their configuration and processing arrangement. For example, a stronger NR system can be associated with a greater increase in SNR than a weaker NR system. However, a stronger NR might also attenuate soft speech components more than a weaker system and thereby weaken the effects of a subsequent compression system. In recent years, NR using deep neural networks has been shown to be advantageous over more conventional types. This study considers how NR systems, including deep neural network-based systems, integrate with both conventional and adaptive compression systems in either a serial or a parallel arrangement. The systems were tested with noisy speech and evaluated using objective metrics (e.g. the effective compression ratio and the change in SNR). Each system was compared, in terms of the similarity of its objective metrics, to a reference system that used ideal ratio-mask processing for noise reduction combined with fast-acting compression in a serial arrangement; this reference was chosen because it provides the highest amount of both noise reduction and compression. The results showed that the choice of NR had the largest effect on how similar a given system was to the reference system, followed by the choice of WDRC and the choice of processing arrangement.
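
As a rough sketch of the serial reference arrangement described above (oracle ideal-ratio-mask noise reduction followed by compression), the example below uses generic STFT processing and a simple static broadband compressor without attack/release smoothing; all parameters (frame length, threshold, ratio, level reference) are illustrative assumptions, not those of the study.

import numpy as np
from scipy.signal import stft, istft

def irm_then_compress(speech, noise, fs, thresh_db=50.0, ratio=3.0):
    """Serial arrangement: ideal ratio mask (oracle NR), then static compression."""
    f, t, S = stft(speech, fs=fs, nperseg=512)
    _, _, N = stft(noise, fs=fs, nperseg=512)
    _, _, X = stft(speech + noise, fs=fs, nperseg=512)
    irm = np.abs(S) / np.sqrt(np.abs(S) ** 2 + np.abs(N) ** 2 + 1e-12)
    _, denoised = istft(irm * X, fs=fs, nperseg=512)
    # Simple sample-wise compressor (no time constants, for brevity)
    level_db = 20 * np.log10(np.abs(denoised) + 1e-12) + 94.0   # assumed level reference
    gain_db = np.where(level_db > thresh_db,
                       (thresh_db - level_db) * (1 - 1 / ratio), 0.0)
    return denoised * 10 ** (gain_db / 20)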

Sigrid Polspoel: Word Lists for Speech Audiometry: A Comparison Between Human and Synthetic Speech

Sigrid Polspoel, Finn Holtrop, Arjan J. Bosman, Sophia E. Kramer and Cas Smits

Objectives: The objectives of this study were (1) to determine whether the standard Dutch word lists for speech audiometry are equally intelligible for normal-hearing listeners, and (2) to compare the intelligibility of synthetic and human speech.
Design: Participants performed speech recognition tests in quiet with the original (human) word lists and synthetic word lists. The latter were created using the Google Cloud text-to-speech (TTS) system. Speech recognition functions were estimated for all human and synthetic lists.
Study sample: Twenty-four young adults with normal hearing.
Results: The variability in intelligibility among word lists was significantly higher in human speech material than in synthetic speech material, with list differences up to approximately 20% at fixed presentation levels in the former. The average speech recognition threshold (SRT) of the human speech material was 1.6 dB lower (better) than the SRT of the synthetic speech material.
Conclusions: The original Dutch word lists show large variations in intelligibility. These list effects can be greatly reduced by combining two lists per condition. Synthetic speech is a promising alternative to human speech in speech audiometry in quiet.
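
As a rough illustration of how speech recognition functions and SRTs of the kind reported above are commonly estimated (a generic maximum-likelihood logistic fit; the levels and scores below are placeholders, and this is not necessarily the authors' exact procedure):

import numpy as np
from scipy.optimize import minimize

def fit_srt(levels_db, n_correct, n_total, target=0.5):
    """Fit a logistic psychometric function and return the level (dB) at `target`."""
    def nll(params):
        srt, slope = params
        p = 1.0 / (1.0 + np.exp(-slope * (levels_db - srt)))
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -np.sum(n_correct * np.log(p) + (n_total - n_correct) * np.log(1 - p))
    srt, slope = minimize(nll, x0=[np.mean(levels_db), 0.5], method="Nelder-Mead").x
    # The 50% point equals the fitted srt; other targets follow by inverting the logistic.
    return srt + np.log(target / (1 - target)) / slope

# Hypothetical word scores at five presentation levels (25 words per level)
levels = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
correct = np.array([2, 8, 15, 22, 24])
print(fit_srt(levels, correct, np.full(5, 25)))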

Iustina Rotaru: Decoding the locus of auditory attention from EEG in a multi-talker audio-visual experiment

Iustina Rotaru1,2, Simon Geirnaert1,2,3, Iris van de Ryck1, Nicolas Heintz1,2,3, Alexander Bertrand2,3, Tom Francart1

1KU Leuven, Department of Neurosciences, Research Group Experimental Otorhinolaryngology (ExpORL), Leuven, Belgium

2KU Leuven, Department of Electrical Engineering (ESAT), STADIUS Center for Dynamical Systems, Signal Processing and Data Analytics, Leuven, Belgium

3Leuven.AI (KU Leuven Institute for AI), Leuven, Belgium

Although acoustic noise reduction algorithms are omnipresent in hearing aids, they currently lack information about the user’s auditory intent, which limits their performance in multi-talker acoustic scenes. Novel techniques for auditory attention decoding (AAD) based on electroencephalography (EEG) can be integrated with conventional noise suppression pipelines to reduce irrelevant sounds and only amplify the sounds to which the user actually attends [1]. This could result in a neuro-steered hearing aid, i.e., a device which is effortlessly controlled by hearing-impaired users by means of their brain signals, whereby the speech intelligibility of the attended sound source is significantly improved.

A novel framework for decoding the directional focus of attention using common spatial pattern (CSP) filters was recently proposed in [2]. In short, a CSP-based attention decoder is optimized to detect the spatial focus of attention directly from instantaneous neural features which reflect spatial auditory attention patterns. Such a system could provide the target direction of arrival (DOA) to a beamformer algorithm to specifically select and amplify the desired speaker. The CSP algorithm has two major advantages compared with more traditional AAD paradigms [2,3]: (1) it decodes the direction of an attended sound stream directly from the user’s EEG, i.e., without requiring access to the clean audio signals, and (2) it operates accurately on short time scales (within 1-5 s), making it particularly suitable for a real-time attention decoding system.
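
For readers unfamiliar with CSP, a minimal sketch of the core computation is given below: class covariance matrices are estimated from left- and right-attended EEG trials, spatial filters are obtained from a generalized eigendecomposition, and log-variance features can then be classified. This is a simplified illustration (variable names, number of filters, and the omission of regularization and the classifier are our assumptions); see [2] for the actual algorithm.

import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_left, trials_right, n_filters=6):
    """Common spatial patterns for two attention classes.

    trials_* : arrays of shape (n_trials, n_channels, n_samples)
    Returns spatial filters (n_channels, n_filters) that maximize the variance
    ratio between 'attend left' and 'attend right' EEG.
    """
    mean_cov = lambda trials: np.mean([np.cov(tr) for tr in trials], axis=0)
    c_left, c_right = mean_cov(trials_left), mean_cov(trials_right)
    # Generalized eigenproblem: c_left w = lambda (c_left + c_right) w
    evals, evecs = eigh(c_left, c_left + c_right)
    order = np.argsort(evals)                 # small eigenvalues -> right, large -> left
    pick = np.r_[order[: n_filters // 2], order[-(n_filters // 2):]]
    return evecs[:, pick]

def csp_features(trial, filters):
    """Log-variance features of one trial projected onto the CSP filters."""
    return np.log(np.var(filters.T @ trial, axis=1))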

Stepping towards a more realistic neuro-steered noise suppression system, we aimed to investigate the practical applicability of decoding the locus of auditory attention in a variety of audiovisual settings that reflect different scenarios of everyday life. To this end, we designed a new AAD protocol in which we varied the degree of correlation between the spatial focus of visual and auditory attention. The auditory and visual stimuli were either (1) co-located or (2) completely uncorrelated in space, or (3) the visual stimulus was absent.

We found that decoding spatial attention with CSP-derived features yielded the highest accuracy when the visual and auditory stimuli were co-located (a ubiquitous scenario in everyday life), in which case decoding is also partially driven by eye gaze. Interestingly, CSP decoding was also possible in conditions where the trajectory of the visual stimulus was randomized and purposefully uncorrelated with the auditory stimulus. This suggests that CSPs can extract neural lateralization patterns purely reflecting directional auditory attention, even in the absence of a co-located visual stimulus. Finally, we found that the CSP algorithm sometimes fails to generalize to data from other sessions or listeners. Therefore, caution is required when calibrating CSP filters in a real-time system.

Altogether, we have confirmed the practical applicability of decoding the locus of auditory attention with CSP filters for everyday life usage in neuro-steered hearing aids. Nevertheless, our results indicate the need for novel algorithms tailored to a real-time application, where attention is continuously decoded during a longer time frame.

References

[1] S. Van Eyndhoven, T. Francart, A. Bertrand, EEG-informed attended speaker extraction from recorded speech mixtures with application in neuro-steered hearing prostheses, IEEE Transactions on Biomedical Engineering, 2017

[2] S. Geirnaert, T. Francart, A. Bertrand, Fast EEG-based decoding of the directional focus of auditory attention using common spatial patterns, IEEE Transactions on Biomedical Engineering, 2020

[3] S. Geirnaert, S. Vandecappelle, E. Alickovic, et al. Electroencephalography-based auditory attention decoding: Toward neurosteered hearing devices. IEEE Signal Processing Magazine, 2021

Julia Schütze: Comparison of speech intelligibility in a real and a virtual living room

Julia Schütze, Stephan D. Ewert, Birger Kollmeier

Medical Physics, Department of Medical Physics and Acoustics and Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Germany

A mismatch between the outcomes of “classical” audiological tests and the benefits perceived by aided hearing-impaired patients leads to an interest in more ecologically valid testing methods, which are expected to better reflect the patients’ performance in real-world situations.

In daily life, a highly relevant indoor (home) environment is the living room, in which listening and speech communication typically involve different target or interferer sources on a sofa, from a TV set, or in an adjacent room connected by an open door.

This study compares speech intelligibility in a controllable laboratory environment closely resembling an average German living room with an adjacent kitchen, and in the corresponding virtual representations. Speech recognition thresholds of normal-hearing subjects in the real-world living room lab and in three different acoustic reproductions are compared using measured and simulated room impulse responses: binaural presentation via headphones, a small-scale loudspeaker array, and a three-dimensional 86-channel loudspeaker array in an anechoic environment. Target and interferer positions are permuted over four different positions in the living room lab, including an acoustically challenging position in the adjacent kitchen without line of sight. Implications are discussed in the context of the reproductions and potential applications with hearing-impaired listeners.
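
For illustration, the binaural headphone condition can be thought of as convolving each dry source signal with the binaural room impulse response (BRIR) measured or simulated for its position and summing across sources. The sketch below is a generic rendering routine with assumed array shapes, not the reproduction software used in the study.

import numpy as np
from scipy.signal import fftconvolve

def render_binaural(sources, brirs):
    """Mix sources at different positions into a two-channel binaural signal.

    sources : list of 1-D arrays (dry source signals at a common sampling rate)
    brirs   : list of arrays of shape (n_taps, 2), one BRIR (left/right) per source
    """
    n_out = max(len(s) + b.shape[0] - 1 for s, b in zip(sources, brirs))
    out = np.zeros((n_out, 2))
    for s, b in zip(sources, brirs):
        for ear in range(2):
            y = fftconvolve(s, b[:, ear])
            out[: len(y), ear] += y
    return out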

Rahel Bertschinger: Feasibility of Extracochlear Stimulation to Induce Hearing and Reduce Tinnitus

Rahel Bertschinger1, Leanne Sijgers1, Lorenz Epprecht1, Norbert Dillier1, Christof Röösli1, Flurin Pfiffner1 and Alexander Huber1
1 Department of Otorhinolaryngology, Head & Neck Surgery, University Hospital Zurich, University of Zurich, Switzerland


The criteria for cochlear implant (CI) surgery have recently been expanded to include patients with residual low-frequency hearing, such that these patients can benefit from combined electrical and acoustic hearing. Unfortunately, residual hearing reduction or loss after CI surgery occurs in half of all patients (Hodges et al., 1997). Therefore, due to their invasiveness and functional results, such implants are hardly suitable for patients with mild to moderate hearing impairment. Electrical stimulation of the auditory nerve without the risk of losing residual hearing would be a promising solution for these patients. Additionally, it has been observed that a CI has an influence on tinnitus. While it has been reported that intracochlear electrical stimulation through a CI has positive effects on tinnitus (Peter et al., 2019; Ramakers et al., 2015), it is still unclear how intra- and extracochlear stimulation could lead to tinnitus relief. The goal of this project is to clarify the feasibility of an extracochlear stimulation prosthesis to induce hearing and reduce tinnitus.
In a first study, we investigate the effect of an extracochlear prosthesis. To do this, we include patients who have already been implanted with a CI and in whom, for various reasons, the electrode array is not located entirely intracochlearly. An extracochlear hearing prosthesis is simulated by exclusively stimulating the electrodes located extracochlearly. Psychophysical and electrophysiological measurements are performed to compare the effects of extracochlear and intracochlear stimulation in these patients.
In a second study, we conduct electrical stimulation at various positions in the middle ear during ear surgeries of patients without sensorineural hearing impairment. For this, the patients are temporarily implanted with an extracochlear electrode, and intra- and post-operative measurements, including electrophysiological recordings, hearing tests and objective audiometry, are performed. To evaluate the effect of extracochlear stimulation on tinnitus perception, all study participants fill out questionnaires about tinnitus before and after the stimulation of extracochlear electrodes.
The results of these studies will show whether the development of an extracochlear implant may provide benefit to a large group of patients suffering from hearing loss and/or tinnitus.
References:
Hodges, A. V., Schloffman, J., & Balkany, T. (1997). Conservation of residual hearing with cochlear implantation. The American journal of otology, 18(2), 179-183.
Peter, N., Liyanage, N., Pfiffner, F., Huber, A., & Kleinjung, T. (2019). The influence of cochlear implantation on tinnitus in patients with single-sided deafness: a systematic review. Otolaryngology–Head and Neck Surgery, 161(4), 576-588.
Ramakers, G. G., van Zon, A., Stegeman, I., & Grolman, W. (2015). The effect of cochlear implantation on tinnitus in patients with bilateral hearing loss: a systematic review. The Laryngoscope, 125(11), 2584-2592.

Kristin Sprenger: Modeling the effect of sentence context on word recognition in noisy and reverberated listening conditions for listeners with and without hearing loss

Kristin Sprenger and Thomas Brand

Department für Medizinische Physik und Akustik, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany

The influence of context information on speech recognition can be quantified using context parameters from different context models. Boothroyd and Nittrouer (1988) introduced a statistical model describing empirical data on humans’ use of speech context using two global parameters. This model was further developed by Bronkhorst et al. (1993, 2002), who broke the global parameters down into additional parameters describing the context effect for different numbers of correctly perceived words. Based on these models, Smits and Zekveld (2021) developed a new context model with a simpler set of equations and a reduction of the number of parameters to only one. Importantly, many of the context parameters of these three models are closely related to each other.
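
For orientation, the two global parameters of the Boothroyd and Nittrouer (1988) model are commonly written as a j-factor relating part and whole recognition and a k-factor describing the context benefit. The sketch below states these relations with placeholder values; it is a simplified reading of the model, not the fitting procedure used in this study.

def j_factor_whole(p_part, j):
    """Probability of recognizing a whole item from its parts: p_whole = p_part ** j."""
    return p_part ** j

def k_factor_context(p_no_context, k):
    """Recognition probability with context: p_c = 1 - (1 - p_i) ** k."""
    return 1.0 - (1.0 - p_no_context) ** k

# Placeholder example: 60% word recognition without context and a context factor k = 1.8
print(k_factor_context(0.6, 1.8))   # about 0.81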

These three context models were implemented and applied to speech intelligibility data from different databases. The databases include measurement data with the Göttingen sentence test (GÖSA) and the Oldenburg sentence test (OLSA) from listeners with normal hearing and listeners with hearing loss in different listening conditions. We expected that listeners with hearing loss compensate for their reduced auditory input by using speech context. The different listening conditions include presentation in quiet, in steady-state noise and in speech-like modulated noise, in order to investigate how speech context is used to compensate for missing information during speech recognition. Furthermore, the effect of reverberated speech on sentence recognition is analysed. This is motivated by the fact that closed word-set sentences are much more robust against reverberation than everyday sentences. Significance testing was performed using bootstrapping. Significantly more context information was available for GÖSA than for OLSA. In certain listening conditions, significant differences in context use were found between listeners with normal hearing and listeners with hearing loss. No differences in context use were found between the stationary noise and the modulated noise. To explain the behaviour in reverberation, one has to distinguish between a priori context (words are known in advance) and a posteriori context (semantic reanalysis of recognized words).

Monica Hegde: Perceptual weighting of acoustic cues for speech processing during the first year of life

Monica Hegde, Thierry Nazzi, & Laurianne Cabrera (INCC-UMR 8002)
Integrative Neuroscience & Cognition Center (INCC), Université de Paris


Before 10 months of age, infants are not yet attuned to the consonants of their native language, meaning that, compared to adults, they are sensitive to certain non-native phonological contrasts. The nature of the mechanisms shaped by age and exposure to the native language is yet to be discovered. The current project hypothesizes that auditory mechanisms supporting speech perception may play a crucial role in perceptual attunement.
This study adopts a psychoacoustic approach suggesting that the auditory system selectively decomposes the spectral and temporal modulations of speech. Such acoustic modulations can be artificially manipulated using vocoders to assess their role in speech perception. This study will assess whether Fast and Slow temporal modulation cues play a similar role in infants' speech perception by comparing the ability of normal-hearing 6-month-olds, 10-month-olds and adults to use slow temporal envelope cues in discriminating consonant and vowel contrasts.
To explore the perceptual weighting of temporal modulations, and particularly of amplitude and frequency modulations (AM/FM), French consonant-vowel syllables differing in consonant voicing, consonant place of articulation, vowel height or vowel place of articulation were processed by 2 tone-excited vocoders to replace the original FM cues with pure tones in 32 frequency bands. AM cues were extracted in each frequency band with 2 different cutoff frequencies, 256 or 8 Hz. Discrimination was assessed for infants and adults using an observer-based testing method.
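
As an illustration of the general tone-vocoding principle described above, the sketch below band-pass filters the signal, extracts the amplitude envelope in each band below a chosen cutoff, and uses it to modulate a pure tone at the band centre. The filter shapes, band edges and the simple Hilbert-envelope extraction are our own simplified assumptions, not the exact processing used in this study.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tone_vocoder(x, fs, band_edges, env_cutoff):
    """Replace fine structure with pure tones; keep AM below env_cutoff in each band."""
    out = np.zeros_like(x, dtype=float)
    t = np.arange(len(x)) / fs
    lp = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    for f_lo, f_hi in zip(band_edges[:-1], band_edges[1:]):
        bp = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(bp, x)
        env = np.maximum(sosfiltfilt(lp, np.abs(hilbert(band))), 0.0)  # slow AM envelope
        out += env * np.sin(2 * np.pi * np.sqrt(f_lo * f_hi) * t)      # tone at band centre
    return out

# Illustrative call: 32 bands with an 8-Hz (slow) or 256-Hz (fast) envelope cutoff
# edges = np.geomspace(80.0, 8000.0, 33); y = tone_vocoder(x, fs, edges, env_cutoff=8.0)
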
Preliminary results show a significant interaction between Age and Phase. Post-hoc tests show that this interaction is driven by age differences in both the Fast and the Slow conditions. A significantly greater proportion of adults than infants succeeded in both the Fast and the Slow conditions. Among infants, a greater proportion of 6-month-olds than 10-month-olds succeeded in the Fast condition, but not in the Slow condition. These results suggest that degradation of FM influences the discrimination of consonants and vowels as a function of age and that degradation of FM affects infants more than adults.
