Complex acoustic scenes contain a multitude of sound sources and varying
backgrounds. Realistically, most sources emit strongly non-stationary
sounds, and they dynamically appear and disappear over time. Motivated by
the human auditory system, we adopt a biologically plausible approach to
extract the relevant information from a scene, aiming to identify the active
components of the scene. This enables us to classify both the active objects
within a scene and the scene as a whole. A possible field of application is
hearing aids with automatic program adaptation, which would benefit from a
robust classification of the acoustic environment.
We also analyse acoustic scenes for the presence of unknown events. These
events belong to classes that were not known during the modelling (training)
stage and appear only later. To avoid confusion with known classes,
it is necessary to find an algorithmic description of the "known" and "unknown"
parts of the world. Within the DIRAC project, we developed a setup that is able
to detect novel objects in everyday scenes given a set of previously trained
classes (Bach & Anemüller 2010).
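As a minimal illustrative sketch (not the DIRAC system itself), detecting "unknown" events given a set of trained classes can be framed as thresholding the likelihood of an observation under simple per-class models; the class names, one-dimensional features, and threshold below are hypothetical:

```python
import math
import random

random.seed(0)

# Hypothetical 1-D acoustic features for two previously trained ("known") classes.
known = {
    "speech": [random.gauss(0.0, 1.0) for _ in range(500)],
    "music":  [random.gauss(5.0, 1.0) for _ in range(500)],
}

def fit_gaussian(xs):
    """Estimate mean and standard deviation of one known class."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, math.sqrt(var)

models = {c: fit_gaussian(xs) for c, xs in known.items()}

def log_likelihood(x, mu, sigma):
    """Gaussian log-likelihood of observation x under one class model."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def classify(x, threshold=-8.0):
    """Assign x to the best-scoring known class, or to 'unknown'
    when no trained class explains the observation well enough."""
    scores = {c: log_likelihood(x, mu, s) for c, (mu, s) in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else "unknown"
```

An observation near a trained class mean is assigned to that class, while one far from all trained models (e.g. `classify(50.0)`) is rejected as "unknown"; the rejection threshold trades off false alarms against missed novel events.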