Discussion sessions I

Chairpersons

Dirk Junius (Scientific Co-Chair)

Tobias Neher (Industrial Co-Chair)

Birger Kollmeier (Organizational Co-Chair)

Thursday, June 12th 2025, afternoon

I Hardware and Acoustics

a) Hearable-centered assistive systems

Proposed session chairs: Dirk Junius (WSA), Tanja Schultz (University of Bremen)

Short description: The ear is potentially a favorable location for obtaining human health data, and hearing devices with multi-modal sensors could serve as a suitable central data acquisition unit. This session looks at the various challenges that must be tackled to ensure that the gathered information can be collected efficiently and provides benefit to end-users and healthcare providers, e.g.

  • Evaluations of the efficiency and accuracy of sensors and algorithms for data analysis, including cost/benefit ratio,
  • Standardization of data structures and interfaces (e.g. Fitting Software – medical records – health insurance),
  • Data fusion and data privacy,
  • Benefits for healthcare and end-users.

Agenda (coming soon)

 

II Audiology

b) Future fitting of hearing aids: HCP vs. ML?

Proposed session chairs: Nils Pontopidan (Oticon), N.N.

Short description: How will machine learning affect the future of hearing aid fitting? How can it benefit the work of the Hearing Health Care Practitioner (HCP), e.g. by giving smarter, data-based guidance during the fitting workflow? Can machine learning also improve self-fitting success for end-users purchasing OTC devices? And finally, what is the potential of AI-based digital assistant apps used for fine-tuning in everyday life?

Agenda (coming soon)

 

III Signal Processing and AI

c) ML-based hearing device processing: from targeted signal processing to semantic hearing

Proposed session chairs: Martin McKinney (Starkey), N.N.

Short description: Machine learning (ML) has brought great progress in multi-microphone processing, speaker-specific signal enhancement, and other (low-level) signal enhancement and noise abatement problems. Moreover, large language models make it possible to semantically interpret the environment, perform scene classification (e.g. with additional video input), and eventually address the listener's "hearing wish" in order to steer signal enhancement. How far have these audiologists' dreams already come true? How feasible are current ML techniques for use in hearing devices with moderate processing capabilities, transmission bandwidths, and strict latency requirements?

Agenda (coming soon)

 

Internet coordinator (last updated: 11.03.2025) — Shortlink: https://uol.de/p111320