
Workshop on "Computational Audition"

Abstracts

 

B. Shinn-Cunningham - Segregating and selecting objects from auditory scenes

 
Although it is convenient to think of segregating objects and selecting objects as distinct, separable stages that are engaged when listeners process complex auditory scenes, such a view is overly simplistic. While selective attention to a sound source requires that the desired source be segregated from the scene, the very act of preparing to listen for a source with a particular attribute (e.g., from a particular location) changes how subsequent inputs are processed, and thus how the scene is analyzed. Moreover, the dynamics of how a scene is parsed are complex and depend on both volitional attention and automatic processes. In particular, once a given stream of sound is the focus of attention, subsequent sound elements that are perceptually similar to it are enhanced, and at least part of this enhancement is obligatory rather than volitional. Both psychophysical and neuroimaging evidence supporting these ideas will be reviewed, with an eye toward how such principles might be incorporated into models of auditory scene analysis.
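The idea that attending to one stream enhances perceptually similar subsequent elements could, in a model of auditory scene analysis, be caricatured as a feature-similarity gain rule. The sketch below is an illustrative assumption on my part, not the author's model: element features are reduced to a single scalar (e.g., lateral position), similarity is an arbitrary Gaussian, and the function names (`similarity`, `attentional_gains`) and parameters (`boost`, `sigma`) are hypothetical.

```python
import math

def similarity(a, b, sigma=1.0):
    """Gaussian similarity between two scalar features (e.g., locations).
    The Gaussian form is an illustrative choice, not taken from the abstract."""
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def attentional_gains(stream, attended_feature, base_gain=1.0, boost=1.0, sigma=1.0):
    """Return a gain per sound element: elements whose feature is close to
    the attended feature are enhanced more (a toy stand-in for the
    obligatory enhancement of perceptually similar elements)."""
    return [base_gain + boost * similarity(f, attended_feature, sigma)
            for f in stream]

# Three sound elements at feature values 0.0, 0.1, and 3.0;
# the listener is attending to feature value 0.0.
gains = attentional_gains([0.0, 0.1, 3.0], attended_feature=0.0)
```

Under this toy rule, the element at the attended feature value gets the largest gain, the nearby element slightly less, and the distant element little enhancement over the base gain.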

(Last updated: 19.01.2024)