Contributed Talks Tuesday

Helia Relaño-Iborra: Towards predicting individual differences in the speech perception of hearing-impaired listeners

Helia Relaño-Iborra1, Johannes Zaar1,2 and Torsten Dau1

1 Hearing Systems, Department of Health Technology, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark

2 Eriksholm Research Centre, Oticon, 3070 Snekkersten, Denmark

The speech-based computational auditory signal processing and perception model [sCASP; Relaño-Iborra et al. (2019), J. Acoust. Soc. Am., 146(5), 3306–3317] successfully accounts for the speech intelligibility of normal-hearing listeners in a wide variety of listening conditions, including speech degradations as well as non-linear speech enhancement algorithms. The model combines non-linear auditory-inspired preprocessing with a back end based on the cross-correlation between the clean and the degraded speech representations in the modulation envelope domain. In the present study, the sCASP model was evaluated as a predictor of speech intelligibility data obtained with hearing-impaired (HI) listeners in a speech-in-noise task. The model was tuned to the individual listeners’ auditory profiles, making use of their pure-tone audiograms as well as estimates of cochlear compression and outer- and inner-hair-cell loss. The model was evaluated in terms of its predictive power both for the average listener data and for the individual listeners’ performance. The predictions obtained with sCASP reflected the general decrease in performance observed for the HI listeners compared to results from normal-hearing listeners. Furthermore, the model correctly predicted masker-type effects in the SRTs. Generally, the model accounted well for the trends observed at the group level, whereas reasonable correlations between measured and predicted performance across individual listeners were only found for a subset of the data. Overall, albeit promising, the results suggest that further investigations are required for the model to account for individual performance.
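The correlation-based back end can be illustrated with a minimal sketch: the temporal envelopes of the clean and degraded signals are band-pass filtered into modulation bands and compared via a normalized cross-correlation. The function names, filter order, and choice of modulation bands below are illustrative assumptions, not the published sCASP implementation.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def envelope(x):
    # Temporal envelope via the magnitude of the analytic signal
    return np.abs(hilbert(x))

def modulation_band(env, fc, fs):
    # Band-pass the envelope around modulation frequency fc (one octave wide)
    sos = butter(2, [fc / np.sqrt(2), fc * np.sqrt(2)],
                 btype="band", fs=fs, output="sos")
    return sosfilt(sos, env)

def envelope_correlation(clean, degraded, fs, mod_fcs=(1, 2, 4, 8, 16)):
    """Normalized cross-correlation between clean and degraded envelopes,
    averaged across modulation bands (illustrative back end only)."""
    rhos = []
    for fc in mod_fcs:
        a = modulation_band(envelope(clean), fc, fs)
        b = modulation_band(envelope(degraded), fc, fs)
        a, b = a - a.mean(), b - b.mean()
        denom = np.sqrt(np.sum(a**2) * np.sum(b**2))
        rhos.append(np.sum(a * b) / denom if denom > 0 else 0.0)
    return float(np.mean(rhos))
```

In this kind of metric, an undistorted signal correlates perfectly with itself, and added noise or distortion lowers the envelope correlation, which can then be mapped to intelligibility.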

Ingvi Örnolfsson: Towards an objective metric for assessing communication ability

Ingvi Örnolfsson, Torsten Dau, Axel Ahrens and Tobias May

Current strategies for evaluating the benefit of hearing aids use testing paradigms based on passive listening. However, such paradigms are unlikely to represent the challenges that hearing-impaired people experience in everyday conversations. In this study, we propose a new collaborative group task to measure the communicative ability of individual members of an interacting group. Participants were asked to judge their own confidence in a list of binary general-knowledge questions. Participants then discussed the questions in groups of three and afterwards answered the same questions individually. Based on previous research on similar tasks, we propose a formal cognitive model to predict a group’s post-conversation confidence responses from its members’ pre-conversation responses. Given the pre- and post-conversation responses, we use maximum likelihood estimation to derive model parameters related to the weight that the participants give to each other’s pre-conversation responses. The estimated weights are compared between conversations held with versus without multi-talker background noise. We show that the weights that individuals give to their own pre-conversation responses are significantly higher in noisy conditions. This indicates that the noisy condition imposes an inhibitory effect on communication which causes individuals to stick to their own prior beliefs rather than adopting those of their interlocutors. The proposed method may serve as an objective measure of the difficulty of conversing in a given environment. The method will be used in future studies to investigate the effect of hearing impairment on individuals’ conversational ability.
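As a rough illustration of the estimation step, the sketch below fits member weights by maximum likelihood under a Gaussian noise assumption, with a softmax constraint so the weights sum to one. The functional form, the noise model, and all names are assumptions made for illustration, not the authors’ actual cognitive model.

```python
import numpy as np
from scipy.optimize import minimize

def fit_confidence_weights(pre, post, sigma=1.0):
    """Estimate the weight one member gives to each member's
    pre-conversation confidence, by maximum likelihood under a
    Gaussian noise assumption (illustrative sketch only).

    pre:  (n_items, 3) pre-conversation confidences of the three members
    post: (n_items,)   post-conversation confidence of one member
    """
    def neg_log_lik(theta):
        w = np.exp(theta) / np.exp(theta).sum()  # softmax: weights sum to 1
        resid = post - pre @ w                   # weighted-average prediction
        return 0.5 * np.sum(resid**2) / sigma**2

    res = minimize(neg_log_lik, np.zeros(pre.shape[1]), method="Nelder-Mead")
    return np.exp(res.x) / np.exp(res.x).sum()
```

Comparing the fitted self-weight (e.g., `w[0]` for the first member) between quiet and noisy conversations would then quantify how strongly individuals stick to their own prior beliefs.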

Mira Van Wilderode: Listening while standing: does multitasking change performance?

Mira Van Wilderode1, Nathan Van Humbeeck2, Ralf Krampe2, Astrid van Wieringen1

1University of Leuven, Department of Neurosciences, Research group Experimental Oto-Rhino-Laryngology. O&N II Herestraat 49 bus 721, 3000 Leuven, Belgium.

2University of Leuven, Department of Experimental Psychology, Research group Balance and Timing. Tiensestraat 102, 3000 Leuven, Belgium.

In daily life, we perform many concurrent tasks. Simple tasks like listening while standing seem effortless. However, they constitute multi-task settings in which attentional resources are needed to listen to the message while maintaining one’s balance (postural control).

In this study, we aim to gain insight into the allocation of cognitive resources during listening while standing, and into how this allocation changes with age. To this end, we use a listening–posture dual task with young and middle-aged, normal-hearing adults. The listening task involves identification of CVC words at two intensities; we assume higher attentional demands at the lower intensity. The posture task requires the participant to stand on a force platform and maintain a stable posture. Difficulty is manipulated by using a stable versus a moving platform; the moving platform requires more cognitive control (more monitoring and error correction). Combining both tasks will give us insight into the allocation of cognitive resources.

Data are currently being collected (at abstract submission: N = 24 young, N = 23 middle-aged and N = 15 older adults). We expect no detrimental effects for young adults when combining both tasks. However, we hypothesize decreased postural-control performance for middle-aged adults in the most difficult condition (low intensity, moving platform). We expect middle-aged adults to direct their cognitive resources more towards the listening task, resulting in cognitive interference (and thus more sway) in the posture task.

Results will inform us about the allocation of cognitive resources during aging. If listening indeed interferes with postural control, a more holistic approach to hearing rehabilitation is advised.

Funding body: C1-project C14/19/110, KU Leuven.

Wouter David: Phase-locked responses to parameterized speech envelopes

Wouter David1, Robin Gransier1 and Jan Wouters1
1ExpORL, Department of Neurosciences, KU Leuven, Herestraat 49, box 721, 3000 Leuven, Belgium

An essential cue for speech perception is the temporal envelope, which contains a wide range of modulations coinciding with the syllable and phoneme rates (particularly below 20 Hz). Consequently, the auditory pathway must be able to process these envelope modulations. One way to assess this is by measuring auditory steady-state responses (ASSRs) to single modulation frequencies. However, single ASSRs do not represent the overall neural ability to process the wide modulation spectrum in speech. It is thus difficult to associate specific aspects of modulation processing with functional outcomes, such as speech perception. Alternatively, responses to speech can be used to assess temporal processing. However, these responses are associated not only with envelope processing itself, but also with linguistic processing when comprehension occurs. To overcome these issues, we hypothesize that stimuli with a modulation distribution such as that found in the speech envelope can be used to assess temporal processing. More specifically, we hypothesize that cortical responses to these stimuli show a speech-weighted modulation transfer function similar to that obtained from ASSRs elicited with single modulation frequencies. Furthermore, these neural indicators can potentially be used to gain more insight into the relationship between neural processing and functional outcomes such as speech perception.
We used the recently introduced Temporal Envelope Speech Tracking (TEMPEST) framework, in which speech-like stimuli are created by modulating noise carriers with distributions of modulation frequencies. Two types of TEMPEST stimuli were generated: syllabic-like stimuli with modulations around 4 Hz and phonemic-like stimuli with modulations around 20 Hz. Phase-locked responses to these stimuli were recorded using EEG in 10 normal-hearing participants. Additionally, ASSRs from 2 to 6 Hz and from 17 to 23 Hz were recorded. We compared the transfer function of the TEMPEST responses to that of the ASSRs. To this end, we characterized the neural phase-locked activity using several electrophysiological metrics that are widely used in the literature, in order to facilitate comparisons between studies. Additionally, we present a proof-of-concept with cochlear implants.
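A simplified construction in the spirit of this approach could modulate a noise carrier with a narrow band of modulation frequencies centred on the target rate. This is a sketch inspired by the TEMPEST idea; the band width, number of components, and modulation depth below are assumed values, not the published framework’s exact recipe.

```python
import numpy as np

def tempest_like_stimulus(fs=16000, dur=2.0, center_fm=4.0, bw=2.0, seed=0):
    """Noise carrier modulated by a band of modulation frequencies
    centred on center_fm (Hz). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    carrier = rng.standard_normal(n)
    # Modulator: sum of sinusoids with frequencies drawn from a narrow band
    fms = rng.uniform(center_fm - bw / 2, center_fm + bw / 2, size=8)
    phases = rng.uniform(0, 2 * np.pi, size=8)
    t = np.arange(n) / fs
    mod = np.sum([np.cos(2 * np.pi * f * t + p)
                  for f, p in zip(fms, phases)], axis=0)
    mod = 1 + 0.9 * mod / np.max(np.abs(mod))  # keep the modulator positive
    return mod * carrier
```

With `center_fm=4` this yields a syllabic-rate-like stimulus and with `center_fm=20` a phonemic-rate-like one, in the sense described above.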
Results show that the neural activity evoked by TEMPEST stimuli exhibits modulation transfer functions similar to those obtained with ASSRs across normal-hearing listeners, regardless of the electrophysiological metric used. Since TEMPEST stimuli contain a range of envelope modulation frequencies, in contrast to single-modulation-frequency stimuli, they can be used to efficiently probe envelope processing in the auditory pathway. This suggests that TEMPEST stimuli can provide insight into the link between cortical temporal modulation processing and envelope-based speech perception in populations with a broad range of audiological profiles, including cochlear implant users.
Acknowledgements: This work was partly funded by a research grant from Flanders Innovation & Entrepreneurship through the VLAIO research grant HBC.20192373, partly by a Wellcome Trust Collaborative Award in Science RG91976 to R. P. Carlyon, J. C. Middlebrooks, and J. Wouters, and partly by an SB Ph.D. grant 1S34121N from the Research Foundation Flanders (FWO) awarded to W. David.

Simon Lansbergen: Understanding the impact of real-ear measurement optimization on SSQ and speech intelligibility

Simon Lansbergen1,2, André Goedegebure2, Niek Versfeld1, Wouter Dreschler1, Gertjan Dingemanse2
1Academic Medical Centre, University of Amsterdam
2Erasmus University Medical Center, Rotterdam, the Netherlands


There are many different factors that might be associated with the success of hearing aid (HA) use, such as hearing loss, previous experience with HAs, age, or listening environments. In clinical practice, a HA fit is often evaluated by a Real Ear Measurement (REM) or by measuring speech performance. However, objective measures alone are not sufficient to define the degree of success, as they do not take into account self-perceived auditory functioning in real-life situations.
In this study, the aim was to investigate the effect of REM characteristics, along with other (mediating) factors, on self-perceived functioning. Data were collected from subjects who visited the Erasmus Medical Centre between 2015 and 2020. The Speech, Spatial and Qualities of Hearing (SSQ) questionnaire, a measure of self-perceived ability, was used to rate the success of the HA fitting. All subjects (n = 397) completed a HA trial period that included pre- and post-trial SSQ results. The mean age was 63.8 years, the mean hearing loss (PTA0.5,1,2,4) was 49.9 dB, and 43.6% had no previous HA experience. Data analysis included REM results, NAL-NL2 targets, and results of (un)aided speech intelligibility in quiet. The quality of the HA fit was defined as the Real Ear Aided Response to Target Difference (RTD) at 65 dB SPL. A principal component analysis was used to capture the most important information in the RTD data, which resulted in two principal components. The first component (RTDPC1) was interpreted as variation in overall amplification, whereas the second component (RTDPC2) emphasized variation in amplification within the 4–8 kHz frequency range only. Factors such as age and HA experience were also included. Results showed an increase in SSQ scores in all SSQ domains (p < 0.001; f > 0.51–0.58, i.e., large effects).
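The decomposition step can be sketched as follows with an SVD-based principal component analysis over a subjects-by-frequencies RTD matrix. This is an illustrative sketch under assumed data shapes, not the study’s actual analysis pipeline.

```python
import numpy as np

def rtd_principal_components(rtd, n_components=2):
    """PCA of real-ear-aided-response-to-target differences (RTD).
    rtd: (n_subjects, n_freqs) matrix of dB deviations from target.
    Returns the component loadings and the per-subject scores."""
    centered = rtd - rtd.mean(axis=0)
    # SVD-based PCA: rows of Vt are the principal-component loadings
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    loadings = Vt[:n_components]
    scores = centered @ loadings.T
    return loadings, scores
```

In such an analysis, a first component with a roughly flat loading profile would capture overall amplification level, while a component loading mainly on the high-frequency bands would capture variation in spectral shape, analogous to the RTDPC1/RTDPC2 interpretation above.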
For the SSQ-Speech domain, unaided speech intelligibility had a significant and large effect on aided speech intelligibility. RTDPC2 had a small but significant effect on aided speech intelligibility and on post-trial scores in all SSQ domains. In the case of the SSQ-Speech domain, this effect was fully mediated by aided speech intelligibility (with no direct effect of REM). RTDPC1 had no impact on aided speech intelligibility or post-SSQ scores. Other factors (e.g., age, HA experience) showed small but significant effects on post-SSQ scores and/or aided speech intelligibility. Self-perceived auditory functioning with newly fitted HAs is primarily predicted by pre-SSQ scores and, to a lesser extent, by RTDPC2. Considering the effect of RTD on self-perceived auditory functioning and aided speech intelligibility, we conclude that maintaining the shape of the fitting target (RTDPC2) seems to be more important than the overall deviation from it (RTDPC1).

Marlise (M.D.) van der Veen: Set-up of the BoneMRI-study: feasibility of generating synthetic CT images of the head from MRI scans using machine learning techniques

Marlise (M.D.) van der Veen1,3, Bas (M.M.S.) Jasperse2, Joost (J.P.A.) Kuijer2, Paul (P.) Merkus1, 3
1Amsterdam UMC, location Vrije Universiteit Amsterdam, Department of Otolaryngology 
2Amsterdam UMC, location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine 
3Amsterdam Public Health, Quality of Care, Amsterdam, The Netherlands 

Prior to cochlear implantation and other surgical procedures in the head and neck region, MRI in combination with CT of the bone is often the standard modality to visualize bony landmarks for surgical planning, navigation and risk assessment. An important downside of CT, especially in children, is the radiation exposure and the associated increased risk of developing cancer later in life. This downside could be eliminated if the CT scan could be substituted with an MRI sequence that provides the same information as CT.

In this recently started feasibility study we will investigate the possibility of generating CT-like images of the facial bones from MRI images using machine learning techniques. We are now in the process of obtaining sufficient numbers of paired CT and MRI scans of the head from adult patients. These paired scans will first be used to develop and train an algorithm that is capable of generating synthetic CT scans from MRI data. The remaining scans will then be used to evaluate the performance of the algorithm, by comparing the generated synthetic CT images to true CT scans.

Jose L. Santacruz: Hearing aid amplification schemes adjusted to the individual’s tinnitus pitch: an RCT

Jose L. Santacruz, Emile de Kleine, Pim van Dijk


Background
Hearing aids can be used as a treatment for tinnitus. There are indications that this treatment is most effective when the hearing loss, and with it the tinnitus pitch, falls within the amplification range of the hearing aid. Other models suggest that a gap in the amplification around the tinnitus pitch would enhance lateral inhibition and thereby reduce the tinnitus.
Methods
We conducted a randomized controlled trial, designed as a Latin square balanced crossover study. Eighteen tinnitus patients with moderate hearing loss were included, all had been using hearing aids for at least 6 months. Patients were fitted with hearing aids using 3 different amplification schemes: (1) standard amplification according to the NAL-NL2 prescription procedure, (2) boosted amplification at the tinnitus frequency, and (3) notch filtered amplification at the tinnitus frequency. Amplification of the three settings was evaluated with real ear measurements. After two weeks of initial adaptation (during which the NAL-NL2 was used), the hearing aids were used for a period of twelve weeks, testing each setting for four weeks.
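For illustration, a simple cyclic Latin square for three conditions can be generated as below, assigning each participant a sequence in which every scheme appears exactly once per four-week period. Note that a fully carryover-balanced (Williams) design for an odd number of conditions would require 2n sequences; the trial’s actual randomization scheme is not detailed here.

```python
def cyclic_latin_square(n=3):
    """n x n cyclic Latin square: each condition (0..n-1) appears exactly
    once in every row (participant sequence) and every column (period)."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

# Map condition indices to the three amplification schemes described above
schemes = ["NAL-NL2", "boosted", "notch-filtered"]
orders = [[schemes[c] for c in row] for row in cyclic_latin_square(3)]
```

Participants would then be assigned to the rows of the square in equal numbers, balancing condition order across the group.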
Results
Questionnaires and psychoacoustic measurements are used to assess the outcomes of each scheme. Comparisons will be drawn across schemes and correlations across measurements will be made within subjects.
Conclusions
This double-blind RCT assesses the efficacy of three different amplification schemes of hearing aids in tinnitus patients.

Raul Sanchez Lopez: The Nottingham Hearing BioResource: A “hearing-focused biobank” to accelerate research towards the future of precision audiological care

Raul Sanchez Lopez2,3,4, David Baguley1,2,3, Olivia Phillips2,3, Paul Bateman2, Ruth Spriggs2,3 and Ian Wiggins2,3*
1 Nottingham Audiology Services, Nottingham University Hospitals NHS Trust, UK; 2 NIHR Nottingham Biomedical Research Centre, UK; 3 Hearing Sciences, Mental Health & Clinical Neurosciences, School of Medicine, University of Nottingham, UK; 4 Interacoustics Research Unit, Denmark


Large-scale datasets, such as the UK Biobank and the Human Connectome Project, have proved extremely powerful for facilitating research into mechanisms of human health and disease. A limitation of most existing resources, however, is that hearing health phenotypes are captured at a rudimentary level, where even basic pure-tone audiometry data are rarely available. This severely limits the scope of the questions that can be asked of these datasets from an auditory perspective. The Nottingham Hearing BioResource (NHB) represents our effort to begin leveraging the power of large, open, accessible datasets in a way that could transform how we treat and manage hearing loss and hearing-related conditions (e.g., tinnitus, hyperacusis, Ménière’s disease) in future.
At its core, the NHB will provide a comprehensive collection of high-quality, person-centric samples and data focused on hearing health and related domains. Biological samples and data will be made available to academic and commercial researchers around the world in accordance with FAIR (findability, accessibility, interoperability, and reusability) principles. By marrying genetic and biomarker information with measures of noise exposure history, advanced audiological assessment (including extended-high-frequency audiometry, wideband tympanometry, otoacoustic emissions, electrophysiology, auditory perception), and longitudinal tracking of hearing and wider health outcomes, the NHB aims to support research that will radically improve our ability to: 1) understand individual risk; 2) diagnose individual pathology; and 3) predict individual outcomes. At the same time, the NHB will provide a database of well geno/phenotyped individuals who can be recruited in a targeted way into future clinical trials of emerging treatments for hearing loss, such as those based on gene, drug, or cell-based technologies.
With input from a network of international experts, we have been working on several important aspects of the NHB, including: 1) governance, ethical and data access arrangements that align with international norms and published best practice for biobanking; 2) data infrastructure and data standards that will allow for aggregation and interoperability with other datasets around the world; 3) protocols for advanced audiometric assessment, with a particular focus on measures providing information “beyond the audiogram.” In relation to this last aspect, we will report the findings of a small-scale pilot study conducted to establish how the wideband middle ear muscle reflex (MEMR) can be robustly measured using standard clinical equipment.
We look forward to providing an update on progress with the NHB to date, to share learning amongst the hearing community and to prompt discussion.
