Coordinator | THE UNIVERSITY OF EDINBURGH |
Coordinator Nationality | United Kingdom [UK] |
Total cost | 1,126,000 € |
EC contribution | 1,126,000 € |
Programme | FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013) |
Call code | ERC-2007-StG |
Funding Scheme | ERC-SG |
Start year | 2008 |
Period (year-month-day) | 2008-09-01 - 2014-08-31 |
# | Participant | Country (City) | Role | Contribution
---|---|---|---|---
1 | THE UNIVERSITY OF EDINBURGH (address: OLD COLLEGE, SOUTH BRIDGE) | UK (EDINBURGH) | hostInstitution | 0.00
'When humans process language, they rarely do so in isolation. Linguistic input often occurs synchronously with visual input, e.g., in everyday activities such as attending a lecture or following directions on a map. The visual context constrains the interpretation of the linguistic input, and vice versa, making processing more efficient and less ambiguous. Given the ubiquity of synchronous linguistic and visual processing, it is surprising that there is only a sparse experimental literature that deals with this topic, while virtually no computational models exist that capture the synchronous interpretation process. We propose an experimental research program that will investigate key features of synchronous processing by tracking participants' eye movements when they view a naturalistic scene and listen to a speech stimulus at the same time. The aim is to understand synchronous processing better by studying the interaction of saliency and ambiguity, and the role of incrementality, object context, and task factors. These experimental results will feed into a series of computational models that predict the eye-movement patterns that humans exhibit when they view a scene and listen to speech at the same time. The key modeling idea is to treat synchronous processing as an alignment problem, for which a rich literature exists in computational linguistics. Building on this literature, we will develop models that incrementally construct aligned linguistic and visual representations, and that can be evaluated against eye-tracking data.'
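The alignment idea in the abstract can be made concrete with a toy example. The following is a minimal illustrative sketch, not the project's actual model: it incrementally aligns the words of an unfolding utterance with labelled scene objects, combining a visual-salience prior with accumulated lexical evidence to predict which object is fixated after each word. All names, the lexical-match heuristic, and the weighting parameter `lam` are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: every name and heuristic here is an assumption,
# not the project's implementation.
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str       # object name, e.g. "map"
    salience: float  # visual salience prior in [0, 1]

def similarity(word: str, obj: SceneObject) -> float:
    """Toy lexical match: 1.0 if the word names the object, else 0.0.
    A real model would score semantic and visual feature overlap."""
    return 1.0 if word == obj.label else 0.0

def incremental_alignment(words, scene, lam=0.5):
    """After each word, score every object as a weighted mix of visual
    salience and lexical evidence accumulated so far; the argmax is the
    predicted fixation target, comparable word-by-word against
    eye-tracking data."""
    evidence = {obj.label: 0.0 for obj in scene}
    predictions = []
    for word in words:
        for obj in scene:
            evidence[obj.label] += similarity(word, obj)
        scores = {obj.label: lam * obj.salience
                  + (1 - lam) * evidence[obj.label] for obj in scene}
        predictions.append(max(scores, key=scores.get))
    return predictions

if __name__ == "__main__":
    scene = [SceneObject("mug", salience=0.8), SceneObject("map", salience=0.3)]
    print(incremental_alignment("point to the map".split(), scene))
    # ['mug', 'mug', 'mug', 'map']: salience dominates until "map" is heard
```

Running the sketch on a two-object scene shows the intended behaviour: the salience prior drives early fixation predictions, lexical evidence takes over once a referring word is heard, and the resulting prediction sequence is the kind of output that could be evaluated against recorded eye movements.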