Contact

Secretary's Office

+49 (0)441 798-3003

W02-1-169

How to get here

Mailing address

Carl von Ossietzky University
Department of Medical Physics and Acoustics
Acoustics Group
26111 Oldenburg

Visitors

Carl von Ossietzky University
Department of Medical Physics and Acoustics
Acoustics Group
Carl-von-Ossietzky-Str. 9-11
26129 Oldenburg

Projects

Hearing4all

DFG EXC 2177 Cluster of Excellence

CASA for Hearing Devices

In everyday life we are exposed to situations in which multiple sound sources reach our ears simultaneously. Normal-hearing persons are able to focus on a single sound source and suppress the others, an ability Bregman termed "Auditory Scene Analysis". Computational Auditory Scene Analysis (CASA) refers to a field of research focused on developing algorithms that solve this problem computationally: extracting the signal of interest from the mixture of signals captured by a microphone while ignoring the others. Since hearing-impaired persons often find it more difficult to handle scenes with multiple sound sources, we want to use CASA to make such scenes easier to manage for hearing aid users.
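
To make the idea concrete, the following is a minimal illustrative sketch, not the group's algorithm, of one classic CASA building block: time-frequency masking in the short-time Fourier transform (STFT) domain. It uses an "ideal" binary mask, which assumes oracle access to the clean target and interferer and is normally only used for evaluation; a real CASA system must estimate the mask from the mixture alone.

# Illustrative sketch: ideal binary mask separation in the STFT domain.
# The oracle mask here is hypothetical; real systems estimate it from the mixture.
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask_separation(target, interferer, fs=16000, nperseg=512):
    """Extract the target from a two-source mixture via an ideal binary mask."""
    mixture = target + interferer
    _, _, T = stft(target, fs=fs, nperseg=nperseg)
    _, _, I = stft(interferer, fs=fs, nperseg=nperseg)
    _, _, M = stft(mixture, fs=fs, nperseg=nperseg)
    # Keep a time-frequency cell when the target dominates it (0 dB local SNR).
    mask = (np.abs(T) > np.abs(I)).astype(float)
    _, estimate = istft(mask * M, fs=fs, nperseg=nperseg)
    return estimate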

Hearing acoustics: Perceptual principles, Algorithms and Applications (HAPPAA)

DFG SFB 1330 Collaborative Research Centre

Project B2 - Computational Auditory Scene Analysis algorithms for improving speech communication in complex acoustic environments

The long-term goal of this project is to achieve a breakthrough in the theoretical foundation and realization of auditory-inspired algorithms for analysing and processing speech in complex acoustic conditions, in order to fundamentally improve speech communication in these conditions for people with hearing difficulties.

The main research questions are to determine the most promising auditory-inspired and technical processing principles, to identify how machine learning techniques can be exploited, to integrate the different processing principles optimally, and to realize demonstrators that best support specific applications such as hearing aids, cochlear implants, and assistive listening devices.

Project C2 - Audio reproduction in non-optimal acoustical environments

Audio content is commonly reproduced over loudspeakers in reverberant and noisy environments, which often leads to non-optimal reproduction conditions. The resulting impairments can affect both the spatial and timbral fidelity of the reproduction and speech intelligibility. This project focuses on robust methods, applicable in multiple scenarios, for compensating for the non-optimal acoustical conditions and for the hearing capabilities of the listeners.
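
As a rough illustration of one family of such compensation methods, and a sketch under simplifying assumptions rather than this project's approach, the following computes a single-channel least-squares FIR inverse filter that pre-equalizes a measured room impulse response towards a delayed unit impulse. It ignores regularization, multiple listening positions, and perceptual weighting.

# Illustrative sketch: least-squares FIR inverse of a room impulse response h.
# Simplified single-channel case; not this project's method.
import numpy as np
from scipy.linalg import toeplitz

def ls_inverse_filter(h, filter_len=1024, delay=256):
    """FIR filter g such that h convolved with g approximates a delayed impulse."""
    n = len(h) + filter_len - 1
    first_col = np.concatenate([h, np.zeros(filter_len - 1)])
    first_row = np.zeros(filter_len)
    first_row[0] = h[0]
    H = toeplitz(first_col, first_row)   # (n x filter_len) convolution matrix
    d = np.zeros(n)
    d[delay] = 1.0                       # target: unit impulse, delayed for causality
    g, *_ = np.linalg.lstsq(H, d, rcond=None)
    return g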

Service-Oriented, Ubiquitous, Network-Driven Sound (SOUNDS)

European Training Network (ETN)

ESR 4 - Scalable immersive audio reproduction for higher-order ambisonics and channel-based audio content

Existing audio formats have predominantly been channel-based, which limits the scalability of the reproduction towards different loudspeaker set-ups. With the introduction of scene-based descriptions using higher-order ambisonics and/or audio objects, as well as advanced remixing technology, scalable and immersive reproduction of audio is within reach. Two important challenges remain. First, the large existing base of channel-based audio content should be converted for scalable immersive sound reproduction; this requires advanced model-based signal-decomposition methods that allow parametric spatial analysis of audio content and the extraction of primary and ambient components. Second, for true immersive audio reproduction to be rolled out in practical room-acoustical conditions, robust acoustic compensation of the reproduction room will be developed based on optimizing perceptual metrics.
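
For illustration, here is a minimal sketch of the scene-based starting point: encoding a mono source at a known direction into first-order ambisonics (traditional B-format, with the conventional -3 dB weight on the W channel). Higher-order encoding generalizes this by evaluating higher-degree spherical harmonics at the source direction; the function below is a hypothetical helper, not part of the project's codebase.

# Illustrative sketch: first-order ambisonics (B-format) encoding of a mono source.
import numpy as np

def encode_b_format(signal, azimuth, elevation):
    """Encode a mono signal into first-order B-format channels (W, X, Y, Z)."""
    # Angles in radians; azimuth measured counter-clockwise from the front.
    w = signal / np.sqrt(2.0)                         # omnidirectional component
    x = signal * np.cos(azimuth) * np.cos(elevation)  # front-back figure-of-eight
    y = signal * np.sin(azimuth) * np.cos(elevation)  # left-right figure-of-eight
    z = signal * np.sin(elevation)                    # up-down figure-of-eight
    return np.stack([w, x, y, z])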


Auditory Cognition in Interactive Virtual Environments (Audictive)

DFG SPP 2236 Priority Program

Auditory distance perception (ADP) is an important part of spatial awareness and of key importance for evaluating and avoiding potential threats. The goal of this project is to gain a better understanding of static and dynamic ADP and their role in spatial awareness and navigation. We will investigate the effect on ADP of three factors and their interaction: familiarity with the acoustic environment, the emotional valence of the source, and visual information.

Interactive Virtual Environments (iVEs) provide new possibilities for studying human perception and social cognitive processing in complex scenes, thereby opening new research perspectives. Social cognition is one prominent field of research where iVEs can help advance experimental paradigms and improve our understanding of the complex process of social interaction. We therefore investigate the impact of audio rendering on selected cognitive processes relevant to this field, specifically to the investigation and treatment of social anxiety.

AudioOpt

VIP+/BMBF Project

Auditory models for the prediction and optimization of sound quality (Auditorische Modelle zur Vorhersage und Optimierung der Geräuschqualität)

This project aims to validate perceptual models for predicting the sensations of tonality and dissonance for complex sounds with multiple tonal components.

Completed projects
