The application of nonlinear dynamics methods in the neurosensory sciences is one of our fields of research. In cooperation with the International Graduate School for Neurosensory Science and Systems, we investigate the neuronal mechanisms leading to the identification and segregation of acoustic signals. The processing of sound signals in the auditory system involves several stages:
- Multiplexing and initial filtering at the outer ear
- Frequency analysis at the basilar membrane
- Transduction of mechanical vibrations to nerve impulses by hair cells and auditory nerve fibers
- Analysis and distribution of neural activity
We condense these stages into two essential steps for an effective theoretical treatment: in the first step the sound is transformed via various stages into a neuronal spike pattern, and in the second step the information from several neurons is integrated in a neuronal network in order to recognize the signal.

The aim of the theoretical investigation is to classify different input signals in terms of their neuronal response patterns. To quantify differences between response patterns, we analyse the structure of the spike trains of individual neurons using symbolic dynamics and complexity measures. A model for the whole network of coupled neurons will be developed that integrates the information from the single neurons into a final recognition of the signal. For each neuron of the network we consider various standard models of neuronal dynamics. As a main goal we want to investigate which mechanism, and in particular which kind of coupling, is most advantageous for the recognition of specific signals. One possibility is that recognition is indicated by synchronized activity of parts of the network or of the network as a whole.

These theoretical studies will be developed in close interaction with experimental studies of auditory processing, which provide descriptions of the response patterns of single neurons as well as information about the final decision of the network from psychoacoustic experiments with animals and humans (Zoophysiology Group of the Institute for Biology and Environmental Sciences). In this way the theoretical models can be validated.
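As an illustration of the symbolic-dynamics approach (the encoding rule and the random spike train below are simplified choices of our own, not the group's actual analysis), a spike train can be reduced to a binary symbol sequence whose word entropy serves as a complexity measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spike train: spike times (in seconds) of a single neuron.
spike_times = np.sort(rng.uniform(0.0, 1.0, 200))

# Symbolize the interspike intervals: 1 if an interval is longer
# than the median interval, 0 otherwise.
isi = np.diff(spike_times)
symbols = (isi > np.median(isi)).astype(int)

def word_entropy(symbols, L):
    """Shannon entropy (bits) of the distribution of words of length L."""
    words = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
    _, counts = np.unique(words, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Entropy of length-3 words, normalized by its maximum of L bits, is a
# simple complexity measure: 1 = maximally irregular, 0 = perfectly regular.
H = word_entropy(symbols, 3)
print(H / 3)
```

Different response patterns then map to different entropy values, which is one way the spike trains evoked by different stimuli could be compared quantitatively.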
Comodulation Detection Differences (CDD)
Currently, we are investigating a specific effect, namely comodulation detection differences (CDD). Experiments focusing on this effect are designed to probe the auditory system's ability to detect amplitude-modulated narrow-band sound signals in the presence of one or several masking noise bands (maskers). These experiments use so-called comodulated stimuli, which consist of a number of narrow sound bands. One of these bands acts as the signal to be detected, while the others obscure (i.e. mask) its detection. In this situation, three different stimulus conditions can be distinguished:
- All correlated (AC): the amplitude modulations of the signal band and every masker band are exactly the same.
- All uncorrelated (AU): the amplitude fluctuations of each band (signal as well as maskers) are independent of each other.
- Co-uncorrelated (CU): all maskers share the same time course of amplitude modulation, while the signal band's envelope fluctuates independently.
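A minimal sketch of how such comodulated stimuli could be synthesized, assuming tone carriers with low-pass-noise envelopes (the band centres, cutoff frequency, and duration below are illustrative choices, not the parameters of the actual experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000                       # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)    # 0.5 s of signal

def lowpass_noise(cutoff_hz):
    """Slow random envelope: low-pass filtered noise, shifted positive."""
    x = rng.standard_normal(t.size)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(t.size, 1 / fs)
    X[f > cutoff_hz] = 0.0
    env = np.fft.irfft(X, t.size)
    return env - env.min() + 1e-3

def band(center_hz, envelope):
    """A narrow band approximated by an amplitude-modulated tone carrier."""
    return envelope * np.sin(2 * np.pi * center_hz * t)

centers = [1000, 1500, 2000]     # signal band plus two maskers (Hz)

# AC: all bands share one envelope; AU: every band gets its own;
# CU: the maskers share an envelope, the signal band fluctuates independently.
common = lowpass_noise(50.0)
ac = sum(band(c, common) for c in centers)
au = sum(band(c, lowpass_noise(50.0)) for c in centers)
cu = band(centers[0], lowpass_noise(50.0)) + sum(
    band(c, common) for c in centers[1:])
```

The three arrays differ only in how the envelopes are shared among the bands, which is exactly the manipulation the CDD experiments rely on.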
Example stimuli can be heard here for the three conditions.
Psychophysical experiments with humans, as well as behavioural experiments with starlings, in which the detection threshold for the signal band was determined at a fixed masker sound level (i.e. intensity), showed that signal detection is generally easiest in the CU condition. Threshold differences of up to 10 dB between the AC and CU conditions have been found, while AC and AU thresholds are generally similar.
Our goal is to devise a simple model that explains these findings.
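The form of this model is still open; purely as a hypothetical starting point, the transduction step could be sketched with a standard leaky integrate-and-fire neuron driven by a stimulus envelope (all parameters below are illustrative):

```python
import numpy as np

def lif_spikes(current, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: returns the spike times produced
    by an input current sampled at time step dt."""
    v, spikes = 0.0, []
    for i, I in enumerate(current):
        v += dt / tau * (-v + I)        # leaky integration of the input
        if v >= v_th:                   # threshold crossing -> spike
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

# Drive the neuron with a slowly modulated input; the resulting spike
# pattern reflects the envelope and could then be analysed with the
# symbolic-dynamics tools described above.
t = np.arange(0, 1.0, 1e-4)
envelope = 1.5 + np.sin(2 * np.pi * 5 * t)
spikes = lif_spikes(envelope)
```

Whether this particular neuron model, or a different standard model, best reproduces the measured thresholds is precisely the kind of question the planned comparison with experiment is meant to answer.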
Anyone who would like to experience the effect for themselves should try CDD@home.