Coordinator | CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE |
Coordinator nationality | France [FR] |
Total cost | 2,499,249 € |
EC contribution | 2,499,249 € |
Programme | FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013) |
Call code | ERC-2013-ADG |
Funding Scheme | ERC-AG |
Start year | 2014 |
Period (year-month-day) | 2014-09-01 - 2019-08-31 |
# | Organization | Address | Country (City) | Role | EC contribution (€)
---|---|---|---|---|---
1 | CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE | Rue Michel-Ange 3 | FR (PARIS) | hostInstitution | 2,499,249.00
This project focuses on the speech unification process associating the auditory, visual and motor streams in the human brain, in an interdisciplinary approach combining cognitive psychology, neurosciences, phonetics (both descriptive and developmental) and computational models. The framework is provided by the “Perception-for-Action-Control Theory (PACT)” developed by the PI.
PACT is a perceptuo-motor theory of speech communication, which connects in a principled way perceptual shaping and motor procedural knowledge in speech multisensory processing. The communication unit in PACT is neither a sound nor a gesture but a perceptually shaped gesture, that is, a perceptuo-motor unit. It is characterised by both articulatory coherence – provided by its gestural nature – and perceptual value – necessary for being functional. PACT considers two roles for the perceptuo-motor link in speech perception: online unification of the sensory and motor streams through audio-visuo-motor binding, and offline joint emergence of the perceptual and motor repertoires in speech development. This provides the basis for the two parts of the project.
In the “Extracting Units” action, we shall study how audio-visuo-motor speech units are extracted online in the human brain. This involves analysis of the joint properties of audio, video and motor stimuli gathered in a multimodal corpus; behavioural and neurophysiological data on the extraction of coherent streams within a speech scene and on the segmentation of streams into coherent audiovisual units; and elaboration of neurocomputational models of the audio-visuo-motor binding process.
In the “Developing Units” action, we shall gather phonetic data on the joint development of perception, action and phonology, and implement and test various kinds of computational models, to assess how perceptuo-motor speech units emerge and evolve in the course of acquisition, reacquisition, evolution or learning of a given phonological system.
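The “Extracting Units” action above mentions neurocomputational models of the audio-visuo-motor binding process. Purely as an illustration of what binding streams by temporal coherence can look like in code (not a description of the models the project will actually build), the following minimal Python sketch scores candidate visual streams by windowed correlation between a hypothetical audio amplitude envelope and lip-aperture traces, then binds the audio to the most coherent stream. All signal names, window sizes and the synthetic data are assumptions made for this example.

```python
# Toy illustration only: binding an audio stream to the most temporally coherent
# visual stream via windowed correlation. This is NOT the project's PACT model;
# the signals, window sizes and synthetic data below are assumptions.
import numpy as np

def windowed_correlation(audio_env, lip_aperture, win=50, hop=25):
    """Mean Pearson correlation between two 1-D signals over sliding windows."""
    n = min(len(audio_env), len(lip_aperture))
    scores = []
    for start in range(0, n - win + 1, hop):
        a = audio_env[start:start + win]
        v = lip_aperture[start:start + win]
        if a.std() > 0 and v.std() > 0:          # skip flat windows
            scores.append(np.corrcoef(a, v)[0, 1])
    return float(np.mean(scores)) if scores else 0.0

def bind_audio_to_stream(audio_env, candidate_lip_signals):
    """Return the index of the visual stream most coherent with the audio envelope."""
    scores = [windowed_correlation(audio_env, lip) for lip in candidate_lip_signals]
    return int(np.argmax(scores)), scores

# Synthetic usage: stream 0 co-varies with the audio envelope, stream 1 does not.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)
audio_env = np.abs(np.sin(2 * np.pi * 0.8 * t)) + 0.05 * rng.standard_normal(len(t))
lip_coherent = np.abs(np.sin(2 * np.pi * 0.8 * t + 0.1)) + 0.05 * rng.standard_normal(len(t))
lip_unrelated = np.abs(np.sin(2 * np.pi * 0.3 * t)) + 0.05 * rng.standard_normal(len(t))
best, scores = bind_audio_to_stream(audio_env, [lip_coherent, lip_unrelated])
print(f"audio bound to visual stream {best}; coherence scores: {scores}")
```

In this sketch, higher windowed correlation stands in for the “articulatory coherence” cue that links an auditory stream to a visual one; the project's actual binding models would of course operate on richer multimodal corpus data.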