Coordinator | QUEEN MARY UNIVERSITY OF LONDON
Organization address | 327 MILE END ROAD |
Coordinator nationality | United Kingdom [UK] |
Total cost | 203,049 € |
EC contribution | 203,049 € |
Programme | FP7-PEOPLE: Specific programme "People" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013) |
Call code | FP7-PEOPLE-2010-IEF |
Funding Scheme | MC-IEF |
Start year | 2011 |
Period (year-month-day) | 2011-03-01 to 2013-02-28 |
# | Participant | Organization address | Country (city) | Role | EC contribution (€)
---|---|---|---|---|---
1 | QUEEN MARY UNIVERSITY OF LONDON | 327 MILE END ROAD | UK (LONDON) | coordinator | 203,049.60
'Thanks to neural plasticity, the remaining sensory modalities can reorganise the human brain to compensate for the effects of blindness. Yet studies investigating the performance of visually impaired people in spatial tasks have reported mixed results, suggesting that vision might be crucial for spatial tasks. Other studies have reported that visual experience is necessary to establish an allocentric (external) reference frame for integrating multisensory inputs occurring within the peripersonal space. However, it is still unclear whether visual experience also affects the spatial representation of the extra-personal environment in which people perform their daily activities. Surprisingly, Mou and McNamara (2002) found that memory for regularly arranged, visually learned, room-sized sets of objects was better when the task required using an allocentric rather than an egocentric reference frame. In this project, the object sets will be learned through proprioception and audition, and by using the sensory substitution device called ‘vOICe’. Congenitally and late visually impaired participants and blindfolded sighted participants will be tested. If visual experience is necessary for allocentric spatial representation, then participants should perform better in the spatial task requiring the use of their ‘preferential’ spatial reference frame: allocentric for participants with visual experience and egocentric for those without. The use of the vOICe for spatial learning will assess whether, by mimicking visual input, it can trigger the use of an allocentric reference frame in people without visual experience. This issue will also be investigated using a tactile reference frame (similar to the ‘bumps’ on the floors of public buildings) that will surround the set to trigger a ‘global’ spatial representation. These studies will thus improve our understanding of the development of human spatial cognition and, moreover, will yield results with potential concrete applications in the development of aids for visually impaired people.'