Projects
Tools for Open and Reproducible Neuroscience
In this DFG core facility project we develop tools and practices for open neuroimaging. Our approaches build on the Brain Imaging Data Structure (BIDS), a community-developed standard for data storage and metadata. Using such a data structure facilitates collaborative research because its commonly known organization removes the need to explain the data to collaborators. Moreover, it eases data sharing and enables meta-analyses and big-data approaches, which are regarded as gold-standard scientific practices. Since data organization is such an important and fundamental topic within the open science community, our projects are centered around BIDS.
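To illustrate why a commonly known organization helps, a minimal BIDS dataset layout looks roughly like this (the subject and task names are hypothetical):

```
my_dataset/
├── dataset_description.json
├── participants.tsv
├── sub-01/
│   ├── anat/
│   │   └── sub-01_T1w.nii.gz
│   └── func/
│       ├── sub-01_task-listening_bold.nii.gz
│       └── sub-01_task-listening_bold.json
└── sub-02/
    └── ...
```

Because every BIDS dataset follows this scheme, collaborators and software tools can locate data and metadata without dataset-specific documentation.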
In addition, we provide accessible overviews of topics relevant for open neuroimaging (and beyond), such as the most comprehensive review paper on tools and practices currently available and an overview of privacy regulations in the three jurisdictions with the largest amounts of research data (Europe, the USA, and China).
- ancpBIDS: ancpBIDS is a lightweight Python library to read, query, validate, and write BIDS datasets. It provides a unified and modality-agnostic I/O interface for researchers to handle BIDS datasets in their workflows or analysis pipelines. Its implementation is based on the BIDS schema, which allows it to evolve with the BIDS specification in a generic way (see the sketch after this list).
- BIDS Conversion GUI: A first obstacle to useful data sharing is that data are often not stored in a standardized way but follow an idiosyncratic structure. We are currently developing a user-friendly GUI for converting data to BIDS, including capabilities for metadata annotation, to make research data FAIR.
- MEGqc: Magnetoencephalographic and electroencephalographic data are prone to many sources of noise and artifacts that degrade data quality. Unfortunately, data quality is rarely reported. This BIDS-based tool performs automated quality assessment with human- and machine-readable output.
- GAUDIE: a thoroughly validated naturalistic speech stimulus database for emotion induction.
- M-CCA: a tool for optimized functional inter-individual data combination. This approach is constantly being further improved and validated.
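As a rough illustration of the programmatic access that a standardized layout enables, the sketch below collects all functional BOLD runs of a BIDS dataset with plain Python. Note that this is generic pathlib code, not the ancpBIDS API (see the ancpBIDS documentation for its query interface), and the dataset path is hypothetical:

```python
from pathlib import Path

def find_bold_runs(bids_root: str) -> dict[str, list[Path]]:
    """Collect all functional BOLD runs per subject in a BIDS dataset.

    Because BIDS fixes the folder layout and file naming scheme,
    a simple glob is enough to locate every run for every subject.
    """
    runs = {}
    for sub_dir in sorted(Path(bids_root).glob("sub-*")):
        # Functional data lives in func/ (optionally below a ses-* folder)
        # and BOLD files always carry the "_bold" suffix in their name.
        runs[sub_dir.name] = sorted(sub_dir.glob("**/func/*_bold.nii*"))
    return runs

# Hypothetical dataset path, for illustration only
print(find_bold_runs("/data/my_bids_dataset"))
```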
Neurometrics
In this project, situated in the DFG Cluster of Excellence Hearing4all, we set out to characterize cortical auditory processing of realistic speech in soundscapes and how hearing loss changes these processes. We use functional magnetic resonance imaging (fMRI) to derive voxel-wise encoding models (VWEMs), which can be interpreted as the stimulus features that drive the local neuronal population. As stimuli to probe these filters we use, for example, a publicly available and richly annotated movie soundtrack, which we manipulate with hearing loss simulators.
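For readers unfamiliar with encoding models, the following is a schematic sketch of the general voxel-wise encoding approach using regularized linear regression. It is not our exact pipeline; the feature matrix, the dimensions, and the omission of hemodynamic lag handling are simplifications for illustration:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Illustrative dimensions: T time points (TRs), F stimulus features, V voxels.
rng = np.random.default_rng(0)
T, F, V = 600, 40, 1000
X = rng.standard_normal((T, F))   # stimulus features per TR (e.g., spectro-temporal modulations)
Y = rng.standard_normal((T, V))   # preprocessed BOLD time courses, one column per voxel

# Contiguous train/test split (no shuffling, since fMRI samples are autocorrelated).
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, shuffle=False)

# One regularized linear model per voxel, fit jointly: the learned weights
# describe which stimulus features drive each voxel's response.
# (Hemodynamic lag handling, e.g., lagged feature copies, is omitted here.)
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
model.fit(X_train, Y_train)

# Encoding performance: correlation between predicted and measured responses
# on held-out data, one value per voxel.
Y_pred = model.predict(X_test)
r = np.array([np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(V)])
print("median held-out prediction accuracy r =", float(np.median(r)))
```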
Preliminary VWEM results with MPS features indicate that simulated hearing loss had the desired effects on speech intelligibility and differentially affected processing in a range of cortical auditory areas (see figure).
Preliminary results (n=10) for wide-band envelope sound features were presented at the Speech in Noise conference 2022, and results for mel-frequency spectrogram sound features were presented at the FENS 2022 conference.
Human cognition and decision making in driving (PIRE I & II, GRK 2783)
In these projects we investigate cognition and decision making in realistic driving simulations at the behavioral and the brain level. We investigate how decision making is influenced by context (e.g., interactions with human-driven vs. autonomous vehicles, DFG-PIRE), cognitive workload (DFG-PIRE), and experience (GRK 2783). Furthermore, we use cognitive models to understand the functional relationship between cognitive workload and driving performance and work on including these models as digital twins of human drivers in controllers for autonomous vehicles. For this research we have access to multiple driving simulator setups, including rare custom-made fMRI- and MEG-compatible driving simulators.
Publications:
Unni A., Trende A., Biebl B., Kacianka S., Lüdtke A., Bengler K., Pretschner A., Rieger J.W. Decision making in human-autonomous vehicle interaction. Presented at the 3rd Neuroergonomics Conference on The Brain at Work and in Everyday Life, 11-16 September 2021. neuroergonomicsconference.um.ifi.lmu.de/wp-content/uploads/submissions/129.pdf
Unni A., Trende A., Pauley C., Weber L., Biebl B., Kacianka S., Lüdtke A., Bengler K., Pretschner A., Fränzle M., and Rieger J.W. (2022). Investigating Differences in Behavior and Brain in Human-Human and Human-Autonomous Vehicle Interactions in Time-Critical Situations. Front. Neuroergon. 3:836518. doi.org/10.3389/fnrgo.2022.836518
Held, M., Rieger, J. W., & Borst, J. P. (2022). Multitasking While Driving: Central Bottleneck or Problem State Interference? Human Factors. https://doi.org/10.1177/00187208221143857
Cognition/emotion interactions during simulated driving
For questions about this project, please contact Katharina Lingelbach.
Distracted driving due to stressful events or cognitive overload constitutes a major problem for road safety. In particular, socio-emotional cues, such as children crying or arguing, can easily attract attention and thus compete for cognitive resources. By monitoring mental processes and driver states, safety-critical distractions, stress, or cognitive overload can be detected, and potentially life-threatening situations and errors can be avoided, for example by means of intelligent warnings or adaptive vehicle controllers. In this joint project with Fraunhofer IAO we are investigating interacting emotional and cognitive processes using magnetoencephalography (MEG) and eye tracking in a rare MEG-compatible driving simulator.
GAUDIE: A Naturalistic German AUDItory Emotional Database
For questions on GAUDIE please contact Katharina Lingelbach.
Despite a great need in the German-speaking area, thoroughly validated naturalistic speech stimulus databases for emotion induction are rare. Therefore, we developed GAUDIE (German AUDItory Emotional Database), a validated, richly annotated, and freely accessible online stimulus database of German speech sequences for emotion induction.
GAUDIE comprises 37 audio speech sequences with a total length of 92 minutes. The audio sequences last between 1 and 4 minutes and induce positive emotions using comedy shows, neutral emotions using weather forecasts, and negative emotions using arguments between couples and relatives.
GAUDIE was validated by 26 native German speakers (mean age: 24.69 ± 3.41 years, 19 female) on multiple validation measures, including continuous ratings capturing the temporal variations of valence and arousal. Moreover, post-presentation ratings provide discrete emotion classification and potential moderators. To assess stimulus quality, we quantified how well the audio sequences differentiate on the valence-arousal-dominance dimensions and generalize regarding perceived emotional strength and other ratings across participants.
The database GAUDIE, annotated with multiple emotion ratings, fills a gap as a German emotion-inducing speech database. All stimuli, along with their annotations, can be accessed online through the OSF project repository GAUDIE.
Reference:
Lingelbach K., Vukelić, M., Rieger, J. W. (2023). GAUDIE: Development, Validation and Exploration of a Naturalistic German AUDItory Induced Emotional Database. Behavior Research Methods, https://doi.org/10.3758/s13428-023-02135-z.
The car that cares
project completed
In this project, we set out to investigate human states relevant for cooperation (cognitive workload and frustration) and means of measuring them in sufficiently realistic environments using whole-head fNIRS brain activation measurements. Moreover, we investigated the interaction between different cognitive state concepts, such as working memory load and visuospatial attention, that influence driving task difficulty.
Publications:
Unni A, Ihme K, Jipp M, Rieger JW (2017). Assessing the Driver's Current Level of Working Memory Load with High Density Functional Near-Infrared Spectroscopy: A Realistic Driving Simulator Study. Front. Hum. Neurosci. 11:167. doi: 10.3389/fnhum.2017.00167
Ihme K, Unni A*, Zhang M, Rieger JW, Jipp M (2018). Recognizing Frustration of Drivers from Face Video Recordings and Brain Activation Measurements with Functional Near-Infrared Spectroscopy. Front. Hum. Neurosci. 12:327. (* shared first author)
Scheunemann J, Unni A, Ihme K, Jipp M, Rieger JW (2019). Demonstrating brain-level interactions between visuospatial attentional demands and working memory load while driving using functional near-infrared spectroscopy. Front. Hum. Neurosci. 12:542.
Brain Machine Interfacing
project completed
In a collaboration with Fraunhofer in Magdeburg, the Knight Lab at UC Berkeley, and UC San Francisco we started a project on Brain Machine Interfacing (BMI). Our goals were to use non-invasive and invasive brain activation for the control of robotic devices and for cognitive BMIs. We organized the interdisciplinary 1st Magdeburg Workshop on Brain Machine Interfacing.
Publications on BMI
Collaborators: Robert T. Knight, Ulrich Schmucker, Edward Chang
Acquisition of information from natural scenes
project completed
The human visual system acquires information from cluttered natural scenes much faster and more efficiently than one might expect from experiments using relatively simple stimuli (dots, gratings, etc.). Only 40 ms of cortical processing of a photograph of a natural scene are sufficient to discriminate scenes according to their semantic content, for semantic object-context interactions to develop, and to acquire enough information to later remember the scene as previously seen. We investigate the dynamics of information acquisition from natural scenes and the interactions between object and context at several cognitive and perceptual levels. In our investigations we combine fMRI, MEG, EEG, psychophysics, and single-trial classification approaches to analyze the sequence of brain processes involved in information extraction and recognition, to assess the role of prior knowledge about the structure of the natural world, and to test the predictivity of brain processes for the subjective percept on a trial-by-trial basis.
Publications on natural scenes
Collaborators: Karl Gegenfurtner, Rudolf Kruse, H.H. Bülthoff
Constructive perception and eye movements
project completed
We perceive objects in our environment as integrated wholes, even when they are covered by other objects and thus only some fragments of the object are simultaneously visible. These subjective object percepts are of high ecological importance, as they allow us to recognize and react to objects even when they are only partly visible (e.g., a predator sneaking behind trees). How the visual system constructs these subjective object percepts is still a mystery. Suggestions range from the highly cognitive "knowing what it is" hypothesis to the sensory, so-called "retinal painting" hypothesis, put forward for example by Helmholtz some 150 years ago. The latter assumes that the eyes follow the occluded "object" and thereby paint successively visible object parts onto adjacent parts of the retina. Our investigations show that under natural free viewing conditions the retinal painting hypothesis can be rejected, because retinal painting by smooth pursuit eye movements is neither necessary nor sufficient to explain the figure percept. We currently investigate the brain networks that construct the object percepts and how the brain switches between the percept of the physical stimulus and the subjective object percept.
Publications on constructive object perception and eye movements.
Collaborator: Robert Fendrich
Voluntary eye movements
project completed
Humans scan the visual environment with a rapid sequence of voluntary saccadic eye movements that move the center of regard between different points of interest in a scene. Despite the resulting shifts of the retinal image, we do not perceive the world as moving when we make saccades. This is, however, the case when the same retinal image sequence is presented as a movie. Our aim in this project was to investigate the effects of voluntary eye movements in the visual system and how the brain constructs the stable percept of the world.
Publications on constructive object perception and eye movements.
Collaborator: Ivan Bodis-Wollner
Color and motion processing
project completed
According to the classic view, the brain processes visual information in a fast, color-insensitive channel and a slow, color-sensitive channel.
Using parametric fMRI designs, we investigate the temporal and chromatic sensitivity of visual areas in the brain with simple and complex stimuli. We implemented retinotopic mapping and other functional localizers to perform detailed measurements in independently localized visual areas.
Publications on color coding
Collaborators: Karl Gegenfurtner, Brian Wandell
Consciousness and the freedom of will
project completed
I am interested in interdisciplinary, epistemological, and ethical aspects of the discussion about consciousness and freedom of will. Together with colleagues from Philosophy and Psychology, we organized an interdisciplinary workshop on the topic in 2002. The results were published as a book (sorry, in German only).
Publications on consciousness
Collaborators: Christoph Herrmann, Silke Schicktanz, and Michael Pauen