VISCUESACQWO

The role of visual cues in speech segmentation and the acquisition of word order: a study of monolingual and bilingual adults and infants

 Coordinator: UNIVERSITE PARIS DESCARTES 

 Organization address: Rue de l'Ecole de Medecine 12
City: PARIS
Postcode: 75270

Contact info
Title: Dr.
First name: Rosaly
Last name: Datchi
Telephone: +33 1 76 53 20 33

 Coordinator nationality: France [FR]
 Total cost: 258,088 €
 EC contribution: 258,088 €
 Programme: FP7-PEOPLE
Specific programme "People" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013)
 Call code: FP7-PEOPLE-2013-IOF
 Funding scheme: MC-IOF
 Start year: 2014
 Period (year-month-day): 2014-06-01 to 2017-05-31

 Participants

#  participant                   country      role         EC contrib. [€]
1  UNIVERSITE PARIS DESCARTES    FR (PARIS)   coordinator  258,088.50

 Word cloud

Explore the word cloud to get a rough idea of the project.

syntactic    prosodic    infants    speech    word    movements    potentially    segmentation    auditory    facial    artificial    series    language    cues    gestures    experiments    preferences    phrasal    pitch    prominence    languages    visual

 Project objective

'Adults and infants make use of both auditory and visual information in speech perception. The available visual cues include oral-articulatory movements (e.g., lip movements) as well as non-verbal gestures (e.g., head movements). The present project investigates the role of visual cues as an aid to auditory prosody in speech segmentation and in bootstrapping syntactic development, a topic that remains as yet unexplored. The project focuses on one type of prosodic information, the acoustic realization of phrasal prominence, which has been proposed to allow prelexical infants to bootstrap the basic word order of the target language, a major syntactic property of natural languages. Phrasal prominence correlates systematically with word order: it is realized by means of pitch changes in OV languages and by changes in duration in VO languages. The series of experiments presented here aims to: (i) identify and measure the visual cues (facial gestures) that potentially accompany the prosodic cues (changes in pitch and duration) correlated with word order differences, and (ii) examine whether visual cues modulate or determine the segmentation preferences of monolingual and bilingual adults and infants exposed to an unknown language that additionally contains prosodic cues. In a series of artificial language learning experiments, participants will be familiarized with artificial languages containing either matching or mismatching auditory and visual cues, displayed by means of a computer-animated avatar, and will subsequently be tested on their segmentation preferences. This research will advance our understanding of the role of visual facial information in speech processing, as well as of the cognitive mechanisms involved in the acquisition of syntax.'
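The contrast between pitch-based and duration-based phrasal prominence can be made concrete with a small, purely illustrative example. The Python sketch below is not part of the project; all frequencies, durations, and function names are hypothetical assumptions. It synthesizes a toy weak-strong syllable stream in which the prominent syllable is cued either by a pitch change, as described for OV languages, or by a duration change, as described for VO languages.

# Minimal illustrative sketch (hypothetical parameters): a toy familiarization
# stream whose phrasal prominence is cued by pitch (OV-like) or duration (VO-like).
import numpy as np

SR = 16000  # sample rate in Hz (illustrative value)

def tone(freq_hz, dur_s, sr=SR):
    # Sine-tone "syllable" with 10 ms onset/offset ramps to avoid clicks.
    t = np.linspace(0.0, dur_s, int(sr * dur_s), endpoint=False)
    ramp = np.minimum(1.0, np.minimum(t, dur_s - t) / 0.01)
    return 0.5 * np.sin(2.0 * np.pi * freq_hz * t) * ramp

def unit(prominence):
    # One weak-strong unit; prominence is cued by pitch or by duration.
    if prominence == "pitch":
        weak, strong = tone(200.0, 0.25), tone(300.0, 0.25)  # strong syllable is higher-pitched
    else:
        weak, strong = tone(200.0, 0.20), tone(200.0, 0.35)  # strong syllable is longer
    return np.concatenate([weak, strong])

def familiarization_stream(n_units=20, prominence="pitch"):
    # Concatenate repeated units into a continuous familiarization stream.
    return np.concatenate([unit(prominence) for _ in range(n_units)])

if __name__ == "__main__":
    ov_like = familiarization_stream(prominence="pitch")
    vo_like = familiarization_stream(prominence="duration")
    print(len(ov_like) / SR, len(vo_like) / SR)  # stream durations in seconds

In the actual experiments described above, matching or mismatching visual cues, such as facial gestures produced by the computer-animated avatar, would be layered on top of streams of this kind before participants' segmentation preferences are tested.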

Other projects under the same programme (FP7-PEOPLE)

TEMESAMA (2011)

New production technology development for most efficient and more stable application of electro-optic and nonlinear optical crystalline materials


ASTRO-HD (2010)

Role of astrocytes in Huntington's Disease: characterization of a novel mouse model with targeted expression of mutant huntingtin in the striatum


NEMSMART (2010)

Development of High-Performance and High-Reliability NEMS Switches for Smart Antenna Structures
