Explore the word cloud of the VisNav project. It gives a very rough idea of what the "VisNav" project is about.
The following table provides information about the project.
| Field | Value |
|---|---|
| Coordinator | UNIVERSITY COLLEGE LONDON |
| Organization address / contact info | |
| Coordinator country | United Kingdom (UK) |
| Project website | http://data.cortexlab.net |
| Total cost | 195 454 € |
| EC max contribution | 195 454 € (100%) |
| Programme | H2020-EU.1.3.2. (Nurturing excellence by means of cross-border and cross-sector mobility) |
| Call | H2020-MSCA-IF-2015 |
| Funding scheme | MSCA-IF-EF-ST |
| Starting year | 2016 |
| Duration (yyyy-mm-dd) | from 2016-05-01 to 2018-04-30 |
Take a look at the project's partnership.
| # | Participant | Country (city) | Role | EC contribution (€) |
|---|---|---|---|---|
| 1 | UNIVERSITY COLLEGE LONDON | UK (London) | coordinator | 195 454.00 |
One of the main functions of the visual system is to help the animal navigate through its environment, and one of the main ways that animals navigate is by using visual landmarks. Visually guided navigation requires a change of coordinates from an eye-centered to a world-centered representation of the external environment. We know much about the two ends of this transformation, i.e. about the visual selectivity of neurons in primary visual cortex and the place selectivity of place cells in hippocampal area CA1. However, little is known about how this transition happens along the way. The recent development of virtual reality environments may help fill this gap: mice can navigate through an artificial environment while their head is fixed, enabling careful monitoring of the animal's behavior, accurate control of the displayed visual inputs, and stable recordings. In this project, we propose to investigate the transition from an eye-centered to a world-centered representation across the successive visual areas of the mouse cortex. Using a combination of electrophysiological and imaging techniques, we will record simultaneously in hippocampal CA1 and in different visual cortical areas while head-fixed mice navigate through a virtual maze to obtain a reward at specific locations. We will also take advantage of the virtual reality setup to modify visual and navigational cues so as to estimate how navigational information modulates visually driven responses. This approach will help us understand how visual processing in cortical areas is modulated by the animal's position in the environment. We believe it will also lay the foundations for a new field of research that investigates the visual cortex as part of the navigation system rather than as a simple feature analyzer of the visual scene.
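As a purely illustrative aside (not taken from the project documentation), the eye-centered to world-centered change of coordinates mentioned above can be sketched as a rigid 2D transform that combines a landmark's egocentric coordinates with the animal's position and heading in the world; the function and parameter names below are hypothetical.

```python
import numpy as np

def eye_to_world(x_eye, heading_rad, position):
    """Illustrative sketch: map a landmark from an eye-centered (egocentric)
    frame to a world-centered (allocentric) frame via a 2D rigid transform.

    x_eye       : (2,) landmark coordinates relative to the animal's gaze
    heading_rad : animal's heading in the world, in radians
    position    : (2,) animal's position in the world (e.g. along the maze)
    """
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    rotation = np.array([[c, -s],
                         [s,  c]])  # rotate by the animal's heading
    return rotation @ np.asarray(x_eye) + np.asarray(position)

# Example: a landmark 10 cm straight ahead, animal at (50, 20) facing 90 degrees
print(eye_to_world([10.0, 0.0], np.pi / 2, [50.0, 20.0]))  # ~ [50., 30.]
```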
Are you the coordinator (or a participant) of this project? Please send me more information about the "VISNAV" project.
For instance: the website URL (it has not been provided by the EU open data yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.
Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.
Thanks. And then please add a link to this page on your project's website.
The information about "VISNAV" is provided by the European Open Data Portal: CORDIS open data.