Contact

Secretary's Office

+49 (0)441 798-3003

+49 (0)441 798-3698

W02-1-169

How to get here

Mailing address

Carl von Ossietzky University
Department of Medical Physics and Acoustics
Acoustics Group
26111 Oldenburg

Visitors

Carl von Ossietzky University
Department of Medical Physics and Acoustics
Acoustics Group
Carl-von-Ossietzky-Str. 9-11
26129 Oldenburg

Projects

Hearing4all

DFG EXC 2177 Cluster of Excellence

CASA for Hearing Devices

In everyday life we are exposed to situations in which multiple sound sources reach our ears simultaneously. Normal-hearing listeners are able to focus on a single sound source and suppress the others, an ability Bregman termed "Auditory Scene Analysis". Computational Auditory Scene Analysis (CASA) refers to a field of research focused on developing algorithms that solve this problem computationally: extracting the signal of interest from the mixture of signals captured by a microphone while ignoring the other signals. Since hearing-impaired listeners often find it more difficult to handle scenes with multiple sound sources, we want to use CASA to make such scenes easier to manage for hearing aid users.
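As an illustration of the time-frequency masking idea underlying many CASA systems (a minimal sketch, not code from the project; the frame length and threshold are arbitrary assumptions), the so-called ideal binary mask keeps only those spectro-temporal bins in which the target dominates the interference:

```python
import numpy as np

def ideal_binary_mask(target, noise, frame_len=256, threshold_db=0.0):
    """Oracle time-frequency masking: keep the FFT bins of each frame in
    which the target exceeds the noise by more than `threshold_db`.

    This assumes access to the clean target and noise separately, so it
    is an "ideal" upper bound rather than a practical algorithm; real
    CASA systems must estimate such a mask from the mixture alone.
    """
    n = (len(target) // frame_len) * frame_len
    out = np.zeros(n)
    for start in range(0, n, frame_len):
        sl = slice(start, start + frame_len)
        T = np.fft.rfft(target[sl])
        N = np.fft.rfft(noise[sl])
        M = np.fft.rfft(target[sl] + noise[sl])
        # Local SNR per bin; small epsilon avoids division by zero.
        snr_db = 20 * np.log10((np.abs(T) + 1e-12) / (np.abs(N) + 1e-12))
        mask = snr_db > threshold_db
        # Apply the binary mask to the mixture and resynthesize the frame.
        out[sl] = np.fft.irfft(M * mask, n=frame_len)
    return out
```

Applied to a tonal target in broadband noise, the masked mixture is much closer to the clean target than the unprocessed mixture is, which is the basic motivation for mask-based separation in hearing devices.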

Hearing acoustics: Perceptual principles, Algorithms and Applications (HAPPAA)

DFG SFB 1330 Collaborative research center

Project B2 - Computational Auditory Scene Analysis algorithms for improving speech communication in complex acoustic environments

The long-term goal of this project is to achieve a breakthrough in the theoretical foundation and realization of auditory-inspired algorithms for analysing and processing speech in complex acoustic conditions, in order to fundamentally improve speech communication in these conditions for people with hearing difficulties.

The main research questions are how to determine the most promising auditory-inspired and technical processing principles, how to exploit machine-learning techniques, how to optimally integrate the different processing principles, and how to realize demonstrators that optimally support specific applications such as hearing aids, cochlear implants and assistive listening devices.

Project C2 - Audio reproduction in non-optimal acoustical environments

Audio content is commonly reproduced over loudspeakers in reverberant and noisy environments, which often leads to non-optimal reproduction conditions. Impairments can affect both the spatial and timbral fidelity of the reproduction and speech intelligibility. This project focuses on robust methods, applicable in multiple scenarios, for compensating for the non-optimal acoustical conditions and for the hearing capabilities of the listeners.
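A standard building block for such compensation is room equalization with a regularized inverse filter: the room's impulse response is inverted while the gain at deep spectral notches is limited so the solution stays robust. The sketch below is illustrative only; the filter length `n_fft` and regularization constant `beta` are arbitrary assumptions, not values from the project:

```python
import numpy as np

def regularized_inverse_filter(rir, n_fft=1024, beta=1e-3):
    """Frequency-domain regularized inverse of a room impulse response.

    Computes H_inv = conj(H) / (|H|^2 + beta). The regularization term
    `beta` prevents excessive gain at frequencies where the room
    response has deep notches, a common robustness measure in room
    equalization (illustrative sketch, not the project's method).
    """
    H = np.fft.rfft(rir, n_fft)
    H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)
    # Return the equalization filter as a time-domain FIR filter.
    return np.fft.irfft(H_inv, n_fft)
```

Convolving the room response with this filter yields an overall response close to a unit impulse, with the residual error controlled by the trade-off parameter `beta`.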

Service-Oriented, Ubiquitous, Network-Driven Sound (SOUNDS)

European Training Network (ETN)

ESR 4 - Scalable immersive audio reproduction for higher-order ambisonics and channel-based audio content

Existing audio formats have predominantly been channel-based, which limits the scalability of the reproduction across different loudspeaker set-ups. With the introduction of scene-based descriptions using higher-order ambisonics and/or audio objects, as well as advanced remixing technology, the scalable and immersive reproduction of audio is within reach. Two important challenges remain. First, the large existing base of channel-based audio content should be converted for scalable immersive sound reproduction; this requires advanced model-based signal-decomposition methods that allow the parametric spatial analysis of audio content and the extraction of primary and ambient components. Second, for truly immersive audio reproduction to be rolled out in practical room-acoustical conditions, robust acoustic compensation of the reproduction room will be developed based on optimizing perceptual metrics.
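The scalability of a scene-based description can be seen in a minimal first-order, horizontal-only ambisonics sketch: the same encoded scene decodes to any regular loudspeaker ring simply by changing the decoder's layout. This is a textbook-style illustration, not code from the project, and the gain convention used here is only one of several in use:

```python
import numpy as np

def encode_foa_2d(signal, azimuth):
    """Encode a mono signal into horizontal first-order ambisonics.

    Returns the channels (W, X, Y); `azimuth` is in radians,
    counter-clockwise from the front. Uses W = s as the omni gain,
    a simplification -- conventions (SN3D, N3D, ...) differ in scaling.
    """
    w = signal
    x = signal * np.cos(azimuth)
    y = signal * np.sin(azimuth)
    return np.stack([w, x, y])

def decode_foa_2d(bformat, speaker_azimuths):
    """Basic projection ("sampling") decoder for a regular horizontal
    loudspeaker ring; returns one feed signal per loudspeaker."""
    w, x, y = bformat
    n = len(speaker_azimuths)
    feeds = []
    for az in speaker_azimuths:
        # Project the encoded sound field onto each loudspeaker direction.
        feeds.append((0.5 * w + 0.5 * (np.cos(az) * x + np.sin(az) * y)) * (2.0 / n))
    return np.stack(feeds)
```

The same B-format scene can be rendered to a ring of 4, 6, or 8 loudspeakers just by passing a different `speaker_azimuths` array, which is exactly the layout-independence that channel-based formats lack.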


Auditory Cognition in Interactive Virtual Environments (Audictive)

DFG SPP 2236 Priority Program

Auditory distance perception (ADP) is an important part of spatial awareness and of key importance for evaluating and avoiding potential threats. The goal of this project is to gain a better understanding of static and dynamic ADP and their role in spatial awareness and navigation. We will investigate the effect on ADP, and the interaction, of three factors: familiarity with the acoustic environment, emotional valence of the source, and visual information.

Interactive Virtual Environments (iVEs) provide new possibilities to study human perception and social cognitive processing in complex scenes, thereby opening new research perspectives. Social cognition is one very prominent field of research where iVEs can help to advance experimental paradigms and to understand the complex process of social interactions. Therefore, we investigate the impact of the audio rendering on selected cognitive processes relevant for the field and specifically for the investigation and treatment of social anxiety.

AudioOpt

VIP+/BMBF Project

Auditory models for the prediction and optimization of sound quality (Auditorische Modelle zur Vorhersage und Optimierung der Geräuschqualität)

This project aims to validate perceptual models for predicting the tonality and dissonance sensations evoked by complex sounds with multiple tonal components.

Completed projects

LuFo/BMWK: UHBR2Noise - HAP3: VibCom

LuFo V-3

BMWK Project

In this BMWK-funded project, the perception of noise and vibration is investigated with respect to the comfort experience, building a basis for deriving a comfort criterion.

EU-H2020: RegUlation and NorM for low sonic Boom LEvels (RUMBLE)

Subproject - WP3 Human response to Sonic Boom

Task 3.3: Indoor human response to sonic boom

In this EU-funded H2020 project, the perception of new sonic-boom signatures is investigated, which can serve as a basis for deriving guidelines for acceptable sonic-boom levels.

iGF/BMWK: Forschungsvereinigung für Verbrennungskraftmaschinen e.V. (FVV)

iGF / BMWK Project

Project - Perceptional NVH-Aspects of Downspeeding

(Empfindungsgrößen Niedertouriges Fahren)

In this iGF/BMWK-funded project, NVH aspects of downspeeding in combustion engines are investigated, with a focus on R-Roughness. This also includes the influence of simultaneous vibrations on the perceived sound quality.

iGF/BMWK: Forschungsvereinigung für Luft- und Trocknungstechnik e.V. (FLT)

iGF / BMWK Project

Project - The characterization of the acoustic quality of fans with the preference equivalent level – development of a psycho-acoustically motivated calculation method

(Die Kennzeichnung der akustischen Güte von Ventilatoren mit dem güteäquivalenten Pegel – Entwicklung eines psychoakustisch motivierten Berechnungsverfahrens)

In this iGF/BMWK-funded project, the perception of ventilator and fan noise is investigated, and an algorithm for predicting preference equivalent levels is developed.

DFG: Individualized Hearing Acoustics

DFG FOR 1732 Research Unit

Subproject - Audio Playback

Perceptual optimization of sound presentation over spatially distributed loudspeakers

DFG: Simulation and Evaluation of Acoustical Environments (SEACEN)

DFG FOR 1557 Research Unit

Subproject - Models of room acoustical perception

The aim of this project is to investigate the connection between physical room-acoustical parameters and the attributes of room-acoustical perception.

DFG: The active auditory system

DFG TRR 31 Collaborative Research Center

Subproject - Binaural Cocktail-Party processing: the role of perceptual organisation and release from masking

The main goal of this project is to investigate the role of top-down and bottom-up processing in binaural cocktail-party settings. Bottom-up processing of binaural cues relates to the well-known binaural release from masking. Top-down processing relates to the contribution of binaural localization cues to auditory stream segregation, which allows listeners to selectively attend to a single target. To date, little research has directly compared the contributions of these two aspects to binaural cocktail-party processing. This project will use a new stimulus paradigm to gain a better understanding of the contribution of both cues to the cocktail-party effect.

(Changed: 04 Nov 2024)