Friday November 30th, 13:30 – 18:00
Atrium, VU, Medical Faculty (1st floor)
13:30 – 13:40
13:40 – 14:20
Daniel Margulies (Researcher, Max Planck Institute for Human Cognitive and Brain Sciences)
Large-scale gradients in cortical organization
What determines the spatial arrangement of distinct areas of the cerebral cortex? Insights into functional processing streams indicate that areas are arranged stepwise, such that adjacent spatial positions along the cortical mantle represent functional gradients. These principles have so far been largely restricted to describing processing within specific sensory modalities: how do they generalize across modalities and extend to the surrounding association cortex? I will present recent work describing various features of a principal gradient in cortical connectivity that spans between primary sensory/motor areas and higher-order transmodal association regions, known in humans as the default-mode network. This arrangement suggests developmental mechanisms giving rise to the spatial distribution of cortical functions, and provides an anatomical scaffolding for investigating the propagation of information at both local and distributed spatial scales.
14:20 – 15:00
Tomas Knapen (Assistant professor, Vrije Universiteit Amsterdam)
Mapping the dark side: Retinotopic mapping of the Default Mode Network
The brain’s default network (DN) deactivates when participants focus externally to perform a task, and activates during internally referenced mental states such as mind wandering and autobiographical memory. Processing in the DN is thought to represent the highest levels of information integration, and changes in its responses are implicated in many psychological disorders. Recent findings indicate that signals in the DN carry visual memory information, but the functional role of DN deactivations in particular remains unclear. Here we show that BOLD signal decreases in the DN are tuned to the spatial location of visual stimuli. The visual selectivity of these deactivations was similar to that of concurrent activations in the frontal and parietal regions of the multiple demand network. Furthermore, visually selective deactivations allowed us to decode the location of a visual stimulus from DN nodes, demonstrating that the DN contains functional representations of the visual field. Our results indicate that responses in the DN are yoked to responses in the visual system, providing a candidate organization for the mnemonic functionality of the DN, and suggest that the DN may use sensory reference frames for higher-level cognition such as autobiographical memory and social thought.
15:00 – 15:20
15:20 – 16:00
Jonny Smallwood (Professor of Psychology, University of York)
The role of the default mode network in human cognition
Understanding the role of the default mode network in cognition has become a key aim in contemporary neuroscience. Although the network was initially described as a task-negative, self-referential system, recent work framing it as a global integrator has challenged this assumption. In this talk I will consider evidence that the role of the default mode network in cognition arises from its location at the top of a neurocognitive hierarchy: this position supports representations of ongoing neural activity that can be at odds with stimuli in the immediate environment, but that also play an important role in allowing cognition to be guided by information from memory.
16:00 – 17:00
Elia Formisano (Professor of Neuroimaging Methods, Maastricht University)
From ears to brain (and back): Imaging the brain computations for sound analysis
A friend speaking, a bird chirping, a piano playing. Any living being or non-living object that vibrates generates acoustic waveforms, which we call sounds. How does our brain transform these acoustic vibrations into meaningful percepts? This lecture illustrates current research based on functional magnetic resonance imaging (fMRI) aimed at discovering the computations the brain performs to achieve this amazing feat.
In the first part, I will present research combining high-resolution fMRI with computational modelling aimed at revealing how natural sounds are encoded in auditory cortex, the part of the brain most relevant for the processing of sounds. Results show that in humans (as well as in macaque monkeys) the cortical encoding of natural sounds entails the simultaneous formation of multiple representations with different degrees of spectral and temporal detail. This multi-resolution analysis may be crucial for enabling flexible, context-dependent processing of sounds in the highly dynamic everyday environment. In the second part, I will show how the high spatial resolution (< 1 mm) and specificity achievable with new fMRI techniques at ultra-high magnetic fields (7 and 9.4 Tesla) open up the possibility of examining “unknown” territories in humans, such as the columnar and laminar architecture of (primary) auditory cortical areas. Finally, I will elaborate on the potential and challenges of combining computational modelling and laminar fMRI to study relevant neuro-computational mechanisms in human auditory cortex.