Coordinator | BCBL BASQUE CENTER ON COGNITION BRAIN AND LANGUAGE |
Organization address | PASEO MIKELETEGI 69 2 |
Coordinator nationality | Spain [ES] |
Total cost | 174,380 € |
EC contribution | 174,380 € |
Programme | FP7-PEOPLE: Specific programme "People" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013) |
Call code | FP7-PEOPLE-2010-IIF |
Funding scheme | MC-IIF |
Start year | 2011 |
Period (year-month-day) | 2011-06-01 to 2013-05-31 |
# | Participant | Country (city) | Role | Contribution (€)
---|---|---|---|---
1 | BCBL BASQUE CENTER ON COGNITION BRAIN AND LANGUAGE, PASEO MIKELETEGI 69 2 | ES (SAN SEBASTIAN) | coordinator | 174,380.80
'The apparent ease with which we comprehend spoken language is remarkable; however, despite over a half-century of intensive research spanning a range of intellectual disciplines, the cognitive mechanisms and neurobiological bases that underlie this ability remain poorly understood. Recent advances in adjacent domains of cognitive neuroscience (e.g., scene analysis and object recognition in vision, limbic and motoric control, etc.), as well as a reappreciation for Bayesian approaches to perception have rekindled the interest in the extent to which our higher-order knowledge of the world shapes our perceptual experiences. The fundamental aim of this proposal is to investigate how and when listeners use information about phonetic and lexical patterns in their language to generate predictions of what they are to hear next. The proposed research combines several inter-disciplinary techniques (experimental psychology, cognitive neuroscience), permitting a focus on the time course of when such knowledge becomes available and is exploited by the listener. Here, the relatively rare linguistic phenomenon of long-distance sibilant harmony in Basque (an understudied linguistic isolate) is the test case in a paradigm of behavioral and electrophysiological (EEG/MEG) experiments investigating how listeners carve up the noisy, incoming acoustic signal using what they know about the linguistic patterns in their language. The advanced imaging facilities at the Basque Center on Cognition, Brain and Language (BCBL), combined with its location in the Basque Country and the concentration of experts in the Basque language, make it an optimal location to carry out the research proposed within this application.'
The mechanisms that allow humans to process spoken language remain poorly understood. An EU-funded study delved into how people process continuous speech.
Researchers at the Basque Center on Cognition, Brain and Language (BCBL) launched the project 'Prediction in speech perception and spoken word recognition' (PSPSWR) to expand our knowledge of how speech understanding works. PSPSWR began with the hypothesis that as people listen to spoken language, they make predictions about upcoming content based on their knowledge of their native language. The team developed a series of experiments using Basque to test this hypothesis.
The first study asked participants to listen to Basque pseudowords and respond when they heard the sibilant ('s'-like) sound they were monitoring for. The three conditions were match (same point of articulation), mismatch (a different point of articulation) and control (non-sibilant). Researchers found that mismatch items elicited longer reaction times than match or control items, suggesting that listeners are sensitive to these complex patterns and use their knowledge of the phonological system to parse the signal in real time.
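The reaction-time comparison described above can be sketched in a few lines. The condition labels (match, mismatch, control) come from the text; the RT values below are purely illustrative placeholders, not the project's data, and simply reproduce the reported pattern of slower responses in the mismatch condition.

```python
# Illustrative sketch of the match / mismatch / control RT comparison.
# The numbers below are invented for demonstration, not study data.
from statistics import mean

# Simulated per-trial reaction times in milliseconds (hypothetical values)
rts = {
    "match":    [412, 398, 430, 405, 420],
    "mismatch": [455, 470, 448, 462, 459],  # expected to be slowest
    "control":  [408, 415, 399, 422, 410],
}

mean_rts = {cond: mean(times) for cond, times in rts.items()}
for cond, m in sorted(mean_rts.items(), key=lambda kv: kv[1]):
    print(f"{cond:8s} mean RT = {m:.1f} ms")

# The pattern reported in the study: mismatch items are slower than
# both match and control items.
assert mean_rts["mismatch"] > mean_rts["match"]
assert mean_rts["mismatch"] > mean_rts["control"]
```

In a real analysis these per-condition means would of course be tested statistically (e.g. with a repeated-measures design) rather than compared directly.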
In a follow-up, participants were again presented with pseudowords while their ongoing brain activity was recorded using electroencephalography (EEG). The findings demonstrate that the auditory cortex is sensitive to such complex patterns: mismatch items elicited the largest response over fronto-central electrode sites just 75 ms after the relevant sibilant sound was heard.
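The EEG measure described above rests on a standard averaging step: single-trial epochs time-locked to the sibilant onset are averaged into an event-related response, whose peak latency can then be read off. The sketch below is a toy illustration of that step with a simulated signal; the sampling rate, epoch length, and waveform are assumptions, not the project's recordings.

```python
# Toy sketch of epoch averaging and peak-latency detection.
# All signal parameters here are illustrative assumptions.
import math

FS = 1000          # sampling rate in Hz (assumed)
N_SAMPLES = 300    # 300 ms epoch starting at sibilant onset (assumed)

def simulated_epoch(peak_ms=75, trial=0):
    """One fake epoch: a Gaussian bump centred at `peak_ms` plus a tiny offset."""
    return [
        math.exp(-((t - peak_ms) ** 2) / (2 * 10 ** 2)) + 0.001 * trial
        for t in range(N_SAMPLES)
    ]

# Average across trials, then locate the largest deflection.
epochs = [simulated_epoch(trial=i) for i in range(20)]
erp = [sum(samples) / len(epochs) for samples in zip(*epochs)]
peak_idx = max(range(N_SAMPLES), key=lambda i: erp[i])
peak_ms = peak_idx * 1000 / FS
print(f"Peak response at {peak_ms:.0f} ms post-onset")
```

With this simulated waveform the peak falls at 75 ms, mirroring the early latency reported in the study; real EEG analyses add filtering, baseline correction, and artifact rejection before this averaging step.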
In short, the series of studies carried out in the current project indicates that the brain processes information differently when it can predict upcoming speech versus when it cannot, shedding new light on the predictive mechanisms underlying language comprehension.