SweetVision (status: SIGNED)

Envisioning the Reward: Neuronal circuits for goal-directed learning


Project "SweetVision" data sheet

The following table provides information about the project.

Coordinator: THE UNIVERSITY OF EDINBURGH

Organization address:
address: OLD COLLEGE, SOUTH BRIDGE
city: EDINBURGH
postcode: EH8 9YL
website: www.ed.ac.uk

Contact info: n.a.

Coordinator country: United Kingdom [UK]
Total cost: 1,874,780 €
EC max contribution: 1,874,780 € (100%)
Programme: H2020-EU.1.1. (EXCELLENT SCIENCE - European Research Council (ERC))
Call code: ERC-2019-COG
Funding scheme: ERC-COG
Starting year: 2020
Duration: 2020-04-01 to 2025-03-31

Partnership

Take a look at the project's partnership.

#  Participant                  Country         Role         EC contrib. [€]
1  THE UNIVERSITY OF EDINBURGH  UK (EDINBURGH)  coordinator  1,874,780.00

Project objective

Our ability to learn relies on the potential of neuronal circuits to change through experience. The overall theme of this project is to understand how sensory cortical circuits are modified by experience and learning. Recent results have shown that learning the association of a visual stimulus with a reward modifies neuronal responses in primary visual cortex (V1). However, the cellular mechanisms underlying these experience-dependent changes remain largely unknown. Computational and experimental studies suggest that feedback pathways are crucial for adapting sensory processing to task demands, together with local interneurons that gate feedback through dendritic inhibition. I will test the hypothesis that feedback projections from higher-level areas selectively enhance task-relevant information in V1 and that this process depends on dorsomedial striatal (DMS) output. Toward this aim, I am using chronic two-photon calcium imaging to monitor the activity of neuronal sub-populations in mouse V1 before, during and after two types of visual experience: passive exposure to a visual stimulus and a rewarded visually-guided task. Published and preliminary results indicate that the representation of task-relevant features is enhanced and stabilised in V1 during learning, while responses to non-relevant stimuli are suppressed. This project is organized around three aims:

1. To characterize top-down inputs to V1 neurons during passive and rewarded visual experience.
2. To characterize local circuits and single-neuron computation of task-relevant features within V1.
3. To characterize the output of V1 neurons to higher cortical areas and DMS during goal-directed learning.

The expected results will show how behavioural training changes the neocortex to improve the encoding of behaviourally relevant visual objects. This project will uncover the circuits that are changed by learning and that, in turn, dynamically gate relevant sensory information while an animal is learning a goal-directed task.
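As a purely illustrative sketch (not the project's actual analysis pipeline), the Python snippet below shows one common way a claim like "task-relevant responses are enhanced while non-relevant responses are suppressed" can be quantified: a per-neuron selectivity index comparing mean calcium responses to a rewarded versus a non-relevant stimulus, before and after learning. All data here are simulated and all names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 20

# Simulated dF/F responses (arbitrary units). By construction, responses to
# the rewarded stimulus grow after learning while non-relevant ones shrink.
pre_rewarded  = rng.normal(1.0, 0.3, (n_neurons, n_trials))
post_rewarded = rng.normal(1.5, 0.3, (n_neurons, n_trials))
pre_other     = rng.normal(1.0, 0.3, (n_neurons, n_trials))
post_other    = rng.normal(0.7, 0.3, (n_neurons, n_trials))

def selectivity(rewarded, other):
    # Index in [-1, 1]; positive values mean stronger responses to the
    # task-relevant (rewarded) stimulus, averaged across trials.
    r = rewarded.mean(axis=1)
    o = other.mean(axis=1)
    return (r - o) / (r + o)

print("median selectivity before learning:",
      np.median(selectivity(pre_rewarded, pre_other)))
print("median selectivity after learning: ",
      np.median(selectivity(post_rewarded, post_other)))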

Are you the coordinator (or a participant) of this project? Please send me more information about the "SWEETVISION" project.

For instance: the website URL (it has not been provided by EU open data yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will add them to your project's page as soon as possible.

Thanks. And then please add a link to this page on your project's website.

The information about "SWEETVISION" is provided by the European Open Data Portal: CORDIS open data.
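For readers who want to pull a record like this programmatically, the sketch below filters a CORDIS H2020 projects CSV export by acronym. The download URL, the semicolon delimiter and the column names (acronym, title, totalCost, ecMaxContribution) are assumptions based on past CORDIS exports and may need adjusting to the current dataset layout.

import csv
import io
import urllib.request

# Assumed location of the CORDIS H2020 projects export; check the EU Open
# Data Portal for the current URL before relying on it.
CSV_URL = "https://cordis.europa.eu/data/cordis-h2020projects.csv"

with urllib.request.urlopen(CSV_URL) as resp:
    text = resp.read().decode("utf-8", errors="replace")

# CORDIS CSV exports have typically been semicolon-delimited (an assumption).
for row in csv.DictReader(io.StringIO(text), delimiter=";"):
    if row.get("acronym", "").upper() == "SWEETVISION":
        print(row.get("title"))
        print("Total cost:", row.get("totalCost"), "EUR")
        print("EC max contribution:", row.get("ecMaxContribution"), "EUR")
        break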

More projects from the same programme (H2020-EU.1.1.)

FatVirtualBiopsy (2020)

MRI toolkit for in vivo fat virtual biopsy

TransTempoFold (2019)

A need for speed: mechanisms to coordinate protein synthesis and folding in metazoans

MITOvTOXO (2020)

Understanding how mitochondria compete with Toxoplasma for nutrients to defend the host cell
