The human infant is the most powerful learner: in a few months, infants master language, complex social interactions, and much more. Powerful statistical algorithms, acting simultaneously at different levels of functional hierarchies, have been proposed to explain this learning. I propose here that two other elements are crucial. The first is the particular human cerebral architecture that constrains statistical computations. The second is the human ability to access a rich symbolic system. I have planned six work packages that use the complementary information offered by non-invasive brain-imaging techniques (EEG, MRI and optical topography) to understand the neural bases of infant statistical computations and symbolic competence from 6 months of gestation to the end of the first year of life.
WP1 studies the preterm age from which statistical inferences can be demonstrated, using hierarchical auditory oddball paradigms.
WP2 investigates how a different preterm environment (in utero versus ex utero) affects early statistical computations in the visual and auditory domains, and how these computations shape ongoing brain activity across the first year of life.
WP3 explores the neural bases of how infants infer word meaning and word category, especially the role of the left perisylvian areas and of their distinctive connectivity.
WP4 investigates infants' symbolic competence. I propose several criteria (generalization, bidirectionality, use of algebraic rules and of logical operations), tested in successive experiments, to clarify infants' symbolic abilities during the first six months of life.
WP5-6 cut across WP1-4: WP5 uses MRI to obtain accurate functional localization and maturational markers to correlate with the functional results. In WP6, we develop new tools to combine and analyse multimodal brain images.
With this proposal, I hope to clarify which features of the human neural functional architecture are critical for learning, from the onset of cortical circuits.
Phonemes are the bricks that make speech efficient thanks to the combinatorial possibilities they offer. We propose that general auditory mechanisms are insufficient to explain the sophisticated discrimination abilities reported in infants from birth, and that speech-specific skills must be present to explain the speed of language acquisition. In support of this proposal, we observed that 3-month-olds already have neural representations of phonetic features that are independent of acoustic variations. We presented 120 different natural syllables varying in place and manner of articulation, vowel and voice, collecting about 4000 responses from each infant, which we analyzed with a multivariate approach. Strikingly, we show that manner and place of articulation (for which no invariant is recognizable in the acoustic signal) are decodable from infants' ERPs independently of any acoustic variation. We also show that these individual features are secondarily integrated into a complete phonetic representation, revealing that preverbal infants are equipped with a rudimentary linguistic combinatorial system (Phonetic representations are decodable in the infant brain, Gennari et al., submitted). We are now studying the processing differences between non-linguistic auditory stimuli (musical tones and numbers), visual categories, and speech stimuli.
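The generalization logic behind this multivariate analysis can be sketched with a toy simulation. Everything here is an illustrative assumption (simulated Gaussian "ERP" features, effect sizes, a simple nearest-centroid classifier), not the actual pipeline: the point is only that a decoder trained on trials from some acoustic contexts and tested on a held-out context can succeed only if a category code survives acoustic variation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulation: "ERP" trials (n_trials x n_features) for two phonetic
# categories (e.g. two places of articulation), each recorded in several
# acoustic contexts (e.g. different coarticulated vowels). All sizes and
# effect magnitudes are arbitrary choices for illustration.
n_per_cell, n_feat, contexts = 40, 30, ["a", "i", "u"]
category_pattern = {0: rng.normal(0, 1, n_feat), 1: rng.normal(0, 1, n_feat)}
context_pattern = {c: rng.normal(0, 1, n_feat) for c in contexts}

def make_trials(cat, ctx):
    """Noisy trials mixing a category signal with a context signal."""
    base = 0.8 * category_pattern[cat] + 1.0 * context_pattern[ctx]
    return base + rng.normal(0, 2.0, (n_per_cell, n_feat))

def nearest_centroid_accuracy(test_ctx):
    """Train on all contexts except test_ctx, test generalization to it."""
    train_X, train_y, test_X, test_y = [], [], [], []
    for cat in (0, 1):
        for ctx in contexts:
            X = make_trials(cat, ctx)
            if ctx == test_ctx:
                test_X.append(X); test_y += [cat] * n_per_cell
            else:
                train_X.append(X); train_y += [cat] * n_per_cell
    train_X = np.vstack(train_X); train_y = np.array(train_y)
    test_X = np.vstack(test_X); test_y = np.array(test_y)
    centroids = np.stack([train_X[train_y == c].mean(0) for c in (0, 1)])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None], axis=2)
    return (dists.argmin(1) == test_y).mean()

accs = [nearest_centroid_accuracy(c) for c in contexts]
print({c: round(a, 2) for c, a in zip(contexts, accs)})
# Above-chance accuracy on a held-out acoustic context indicates a
# category representation that generalizes across acoustic variation.
```

Averaging the held-out accuracies across all contexts (leave-one-context-out cross-validation) is what licenses the claim of acoustic invariance.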
Second, we have shown that these phonetic representations are not only implicit but can also be explicitly retrieved. We trained 3-month-old infants to pair two consonants, co-articulated with different vowels, with two visual shapes. Using event-related potentials, we show that after a learning phase, infants generalize the learned associations to new syllables. The systematic pairing of a visual label with a phonetic category is thus easy to learn, suggesting not only that phonemes are natural categories for infants but also that the main process underlying reading (i.e., grapheme-phoneme pairing) is grounded in the early faculties of the human linguistic system.
These two results provide, for the first time, direct evidence of phonetic representations at an early age, beyond general acoustic analyses. Furthermore, we showed that phonemic awareness is possible before reading is acquired if attention is directed to the correct level of analysis (A precursor of reading ability? 3-month-old infants learn to pair a phoneme with a visual shape, Mersad et al., submitted).
Third, we have shown that infants can go further and compute statistics on syllable transitions when syllables are presented in a continuous stream. This powerful learning mechanism is observed even in sleeping neonates. Using a measure of neural entrainment, we observed that the neonate brain not only follows the syllabic rate but, after a while, discovers the tri-syllabic words embedded in the stream and begins to be entrained at the word frequency (i.e., power at one third of the syllabic frequency increases). Furthermore, when tri-syllabic words are presented at test, ERPs to words and part-words differ from the first syllable onward, revealing that infants have encoded the first syllable and possibly the exact word (Neonates segment speech and memorize syllables order, Flo et al., in preparation). We are now comparing syllables and voices to study whether, in the same stream, linguistic information has an advantage over voice information.
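The frequency-tagging logic behind this entrainment measure can be illustrated with a toy simulation. The sampling rate, a 4 Hz syllable rate, the signal amplitudes and the recording duration are all illustrative assumptions, not the actual experimental parameters: the point is only that a response locked to tri-syllabic words adds a spectral peak at one third of the syllabic frequency.

```python
import numpy as np

fs = 250.0                 # sampling rate in Hz (illustrative)
syll_rate = 4.0            # syllable presentation rate in Hz (illustrative)
word_rate = syll_rate / 3  # tri-syllabic words -> one third of the syllabic rate
t = np.arange(0, 60, 1 / fs)          # 60 s of simulated signal
rng = np.random.default_rng(1)

def peak_power(x, freq):
    """FFT power at the frequency bin closest to `freq`."""
    f = np.fft.rfftfreq(len(x), 1 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return p[np.argmin(np.abs(f - freq))]

# "Early" signal: entrainment at the syllabic rate only, plus noise.
early = np.sin(2 * np.pi * syll_rate * t) + rng.normal(0, 1, len(t))
# "Late" signal: same, plus an emerging word-rate component once the
# tri-syllabic words have been discovered.
late = early + 0.5 * np.sin(2 * np.pi * word_rate * t)

for label, x in [("early", early), ("late", late)]:
    print(label,
          "| syllable-rate power:", round(peak_power(x, syll_rate), 1),
          "| word-rate power:", round(peak_power(x, word_rate), 1))
```

Comparing word-rate power between the early and late portions of the stream is the signature of segmentation: syllable-rate power is present throughout, while the one-third peak emerges only with learning.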
Fourth, I also proposed in the project that the rich symbolic system from which human adults benefit should be in place from the beginning. We hypothesized that this ability rests upon an early capacity to use arbitrary signs to stand for any mental representation, even one as abstract as an algebraic rule. In three experiments, we collected high-density EEG recordings while 5-month-old infants were presented with speech triplets characterized by their abstract syllabic structure (the location of the syllable repetition).
Through extended experimental sessions and cutting-edge technology (a 256-channel net), we have overcome the usual limitations of developmental neuroimaging (low SNR, high variability, small numbers of trials) and exploited the immature anatomy (thin skull, hairless scalp) to gain, for the first time, direct access to the content of the infant brain. We have been able to decode phonetic representations (Gennari et al., submitted).
We have developed new experimental paradigms to target symbolic representations (Kabdebon & Dehaene-Lambertz, PNAS, 2019).
Finally, we have been able to correlate EEG responses and microstructural changes within the same infants. These correlations help us understand how maturation influences cognitive development (Adibpour et al., submitted).
We will continue to explore infant brain representations with this high-resolution net, comparing speech and non-speech representations, and will also use MEG to determine the specificity and robustness of the computations in the different domains.
We will also start to use MRI to better localize the different computations we have highlighted with EEG.