Demonstrator D1 provides a comprehensive toolbox for simulating listeners’ perception and performance under the various acoustic conditions and signal processing schemes investigated by the CRC. It goes beyond the state of the art by explicitly considering the acoustic communication loop.
So far, three scenarios have been developed for demonstrator D1, illustrating how the models from this toolbox can be applied in different use cases:
- In scenario 1, “Model in the measurement”, the models serve as artificial listeners, providing responses on a trial-by-trial basis in psychoacoustic experiments that are performed in the same way as the listening experiments with human listeners.
- Scenario 2, “Model in the room”, aims to evaluate and showcase predictions of offline models for complex acoustic scenes. Scene parameters can be modified interactively to illustrate their impact on the predicted model output.
- Scenario 3, “Model in the hearing device”, comprises interactive real-time model predictions running on the Master Hearing Aid (MHA) platform. The main achievement of this scenario was to realize a completely blind processing model of binaural speech perception that runs in real time.
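To make the “model in the measurement” idea in scenario 1 concrete, the sketch below runs a standard adaptive (1-up/2-down) speech reception threshold track in which a perception model, rather than a human subject, answers each trial. The model interface shown here is purely hypothetical: a logistic psychometric function stands in for the CRC’s actual perception models, and the parameter values (reference SRT, slope, step size) are illustrative, not taken from the source.

```python
import math
import random

def model_response(snr_db, srt_db=-7.0):
    """Hypothetical stand-in for a perception model acting as an
    artificial listener: returns True ("trial answered correctly")
    with a probability given by a logistic psychometric function
    of the signal-to-noise ratio. A real demonstrator would call
    the actual model here, exactly as a human would be queried."""
    p_correct = 1.0 / (1.0 + math.exp(-(snr_db - srt_db)))
    return random.random() < p_correct

def adaptive_track(n_trials=60, start_snr=0.0, step_db=2.0):
    """1-up/2-down staircase (converging on the 70.7 %-correct
    point) run against the model instead of a human listener."""
    snr = start_snr
    correct_streak = 0
    last_direction = 0
    reversals = []
    for _ in range(n_trials):
        if model_response(snr):
            correct_streak += 1
            if correct_streak < 2:
                continue              # need two correct to step down
            correct_streak = 0
            direction = -1            # two correct -> harder (lower SNR)
        else:
            correct_streak = 0
            direction = +1            # one wrong -> easier (higher SNR)
        if last_direction and direction != last_direction:
            reversals.append(snr)     # track reversal points
        last_direction = direction
        snr += direction * step_db
    # threshold estimate: mean SNR over the last reversals
    tail = reversals[-6:] if len(reversals) >= 6 else reversals
    return sum(tail) / len(tail) if tail else snr
```

Because the model is queried trial by trial, the very same measurement software could drive either a human listener or the model, which is the point of this scenario.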