Neural code and top-down mechanisms of inferences during perception of vocal communication in noise – INFERNOISE
In humans and animals, the perception of noisy acoustic inputs is thought to depend on the ability to predict upcoming events, constraining and biasing perception towards the most probable events. Although the predictive brain hypothesis is supported by a wide range of experimental data, the nature of predictive signals and the underlying neural circuits remain poorly understood, because they have mostly been demonstrated by measuring the outcome of such computations: prediction errors. Recent work in rodents has identified encoding of true predictions in the sensory cortex, but only in contexts where predicted stimuli result from robust motor actions (e.g. visual flow during movement, or sounds generated by behaviour). Therefore, how the brain extracts statistical regularities from sequences in an unsupervised manner to generate predictions and facilitate perception, despite internal or external sources of noise, remains unclear. In this project, we will use a comparative approach in humans, monkeys and mice in order to (1) identify the cortical sources underlying timing- and content-based predictions and (2) decode the representations of predictions in the period preceding the expected sound. Our approach will be based on existing data in humans and primates and on new data recorded during a behavioural task of identical structure in mice and humans. In all tasks, subjects will listen to sequences of vocalisations with defined statistical relationships, such that certain events will be predictable (or not) from the preceding context. In humans and monkeys, intracranial recordings (LFP and spiking activity) will be analysed with multidimensional classification algorithms co-developed between teams, to test whether upcoming syllables can be decoded in the preceding time window once subjects have learned the sequence statistics. This analysis will reveal the temporal dynamics and anatomical sources of the generated predictions.
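The decoding logic described above can be sketched as follows, on simulated data: train a classifier on pre-stimulus population activity and test whether the identity of the upcoming syllable can be predicted above chance. A simple nearest-centroid classifier stands in for the teams' actual (unspecified) multidimensional algorithms; all names, dimensions, and parameters here are illustrative assumptions, not the project's real pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: firing rates (trials x neurons) taken from the
# pre-stimulus window; each trial's label is the syllable that follows.
n_trials, n_neurons, n_syllables = 200, 50, 4
labels = rng.integers(0, n_syllables, n_trials)

# Simulated data: each upcoming syllable adds a small anticipatory
# signature on top of background variability.
signatures = rng.normal(0.0, 1.0, (n_syllables, n_neurons))
X = rng.normal(0.0, 1.0, (n_trials, n_neurons)) + signatures[labels]

def nearest_centroid_decode(X_train, y_train, X_test):
    """Predict each test trial's label as its nearest class centroid
    (Euclidean distance in neural state space)."""
    centroids = np.stack([X_train[y_train == k].mean(axis=0)
                          for k in range(n_syllables)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None], axis=2)
    return dists.argmin(axis=1)

# Hold-out split: train on 3/4 of trials, test on the remainder.
split = n_trials * 3 // 4
pred = nearest_centroid_decode(X[:split], labels[:split], X[split:])
accuracy = (pred == labels[split:]).mean()
chance = 1.0 / n_syllables
print(f"decoding accuracy {accuracy:.2f} vs chance {chance:.2f}")
```

Above-chance accuracy in the pre-stimulus window is the signature of interest: it indicates that the population carries information about the predicted, not-yet-presented sound.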
In separate experiments in both humans and mice, a homologous task will be designed to test sequence predictions while titrating both behavioural relevance and noise levels, to investigate how the prediction sources identified in the first set of experiments contribute to robust perception in the face of noise and attention. By adjusting the level of background noise to make target-sound and sequence identification more difficult, we will further dissect the mechanisms underlying perceptual robustness to inferential (internal) versus sensory (external) noise, in subcortical and cortical nodes of the auditory system and in downstream, higher-order brain regions anatomically and functionally linked to auditory cortex. Combining electrophysiology and large-scale two-photon imaging at the single-trial level in mice, we will characterise predictive representations at the neuronal level and quantify how sequence-based predictions are encoded in the multidimensional dynamics of cortical assemblies. Finally, using optogenetic inactivation of several cortical areas projecting to auditory cortex (e.g., the orbitofrontal cortex or the posterior parietal cortex), we will aim to identify the origin of predictive representations within top-down connections and the impact of their perturbation on behavioural responses during the perception of noisy sequences. Our complementary approaches in humans and animals will reveal, for the first time, the neural mechanisms underlying the robust perception of sounds in noise, and will provide datasets and analyses that allow for the direct comparison of predictive mechanisms at the micro-, meso- and macro-scales across species.
Project coordination
Jean-Marc EDELINE (Institut des Neurosciences Paris Saclay)
The author of this summary is the project coordinator, who is responsible for its content. The ANR declines any responsibility for its contents.
Partnership
NeuroPSI Institut des Neurosciences Paris Saclay
IP-IDA Institut de l'Audition
IP-IDA IdA Cognition et Communication Auditive
ANR grant: 610,844 euros
Beginning and duration of the scientific project: September 2023 - 48 months