
RoboExNovo (SIGNED)

Robots learning about objects from externalized knowledge sources

Project "RoboExNovo" data sheet

The following table provides information about the project.

Coordinator
FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA 

Organization address
address: VIA MOREGO 30
city: GENOVA
postcode: 16163
website: www.iit.it

Contact info: not available

Coordinator country: Italy [IT]
Total cost: 1,496,277 €
EC max contribution: 1,496,277 € (100%)
Programme: H2020-EU.1.1. (EXCELLENT SCIENCE - European Research Council (ERC))
Call: ERC-2014-STG
Funding scheme: ERC-STG
Starting year: 2015
Duration: from 2015-06-01 to 2021-05-31

 Partnership

Take a look at the project's partnership.

#  participant  country (city)  role  EC contribution [€]
1  FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA  IT (GENOVA)  coordinator  1,084,873.00
2  UNIVERSITA DEGLI STUDI DI ROMA LA SAPIENZA  IT (ROMA)  participant  411,404.00

 Project objective

While today's robots are able to perform sophisticated tasks, they can only act on objects they have been trained to recognize. This is a severe limitation: any robot will inevitably face novel situations in unconstrained settings, and thus will always have knowledge gaps. This calls for robots able to learn continuously about objects by themselves. The learning paradigm of state-of-the-art robots is sensorimotor toil, i.e. the process of acquiring knowledge by generalization over observed stimuli. This is in line with cognitive theories that claim that cognition is embodied and situated, so that all knowledge acquired by a robot is specific to its sensorimotor capabilities and to the situation in which it has been acquired. Still, humans are also capable of learning from externalized sources (like books, illustrations, etc.) containing knowledge that is necessarily unembodied and unsituated. To overcome this gap, RoboExNovo proposes a paradigm shift. I will develop a new generation of robots able to acquire perceptual and semantic knowledge about objects from externalized, unembodied resources, to be used in situated settings. As the largest existing body of externalized knowledge, I will consider the Web as the source from which to learn. To achieve this, I propose to build a translation framework between the representations used by robots in their situated experience and those used on the Web, based on relational structures establishing links between related percepts and between percepts and the semantics they support. My leading expertise in machine learning applied to multimodal data and robot vision puts me in a strong position to realize this project. By enabling robots to use knowledge resources on the Web that were not explicitly designed to be accessed for this purpose, RoboExNovo will pave the way for ground-breaking technological advances in home and service robotics, driver assistance systems, and in general any Web-connected situated device.
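
The objective describes the proposed translation framework only at a conceptual level. As a purely illustrative Python sketch (all class names, feature vectors and labels below are hypothetical and not taken from the project), the relational structure can be pictured as a small graph that links Web-derived percept descriptors to semantic labels, onto which a robot's situated percept is mapped by feature similarity:

# Hypothetical sketch (not project code): a minimal "relational structure" that
# links Web-derived percept descriptors to semantic labels, and maps a robot's
# situated percept onto it by cosine similarity in a shared feature space.
from dataclasses import dataclass, field
import math


@dataclass
class Percept:
    """A descriptor extracted from either a Web image or the robot's camera."""
    source: str            # "web" or "robot"
    features: list[float]  # e.g. the output of a shared visual encoder


@dataclass
class KnowledgeGraph:
    percepts: list[Percept] = field(default_factory=list)
    semantics: dict[int, set[str]] = field(default_factory=dict)  # percept index -> labels

    def add_web_percept(self, percept: Percept, labels: set[str]) -> None:
        """Store a Web-derived percept together with the semantics it supports."""
        self.percepts.append(percept)
        self.semantics[len(self.percepts) - 1] = labels

    def translate(self, robot_percept: Percept) -> set[str]:
        """Return the semantics attached to the most similar Web percept."""
        best_idx = max(range(len(self.percepts)),
                       key=lambda i: _cosine(self.percepts[i].features,
                                             robot_percept.features))
        return self.semantics[best_idx]


def _cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


if __name__ == "__main__":
    kg = KnowledgeGraph()
    # Percepts harvested from the Web (toy, made-up feature vectors).
    kg.add_web_percept(Percept("web", [0.9, 0.1, 0.0]), {"mug", "graspable"})
    kg.add_web_percept(Percept("web", [0.0, 0.2, 0.9]), {"kettle", "hot-surface"})
    # A situated percept from the robot is translated into Web-grounded semantics.
    print(kg.translate(Percept("robot", [0.8, 0.2, 0.1])))  # prints {'mug', 'graspable'}

In the actual project the shared feature space and the relational links would of course be learned rather than hand-coded; the sketch only illustrates the direction of the translation, from a situated percept to Web-grounded semantics.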

 Publications

List of publications (year, authors, title, journal, pages, DOI; date of last update in brackets).

2017: Fabio Maria Carlucci, Lorenzo Porzi, Barbara Caputo, Elisa Ricci, Samuel Rota Bulò. "Just DIAL: Domain alignment layers for unsupervised domain adaptation." Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 357-369. DOI: 10.1007/978-3-319-68560-1_32 [2019-10-29]

2017: Barbara Caputo, Claudio Cusano, Martina Lanzi, Paolo Napoletano, Raimondo Schettini. "On the importance of domain adaptation in texture classification." Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), pages 380-390. DOI: 10.1007/978-3-319-68560-1_34 [2019-10-29]

2017: Igor Barros Barbosa, Marco Cristani, Barbara Caputo, Aleksander Rognhaugen, Theoharis Theoharis. "Looking beyond appearances: Synthetic training data for deep CNNs in re-identification." Computer Vision and Image Understanding, ISSN: 1077-3142. DOI: 10.1016/j.cviu.2017.12.002 [2019-10-29]

2017: Tatiana Tommasi, Novi Patricia, Barbara Caputo, Tinne Tuytelaars. "A deeper look at dataset bias." Pages 37-55. DOI: 10.1007/978-3-319-58347-1_2 [2019-10-29]

2017: Antonio D’Innocente, Fabio Maria Carlucci, Mirco Colosi, Barbara Caputo. "Bridging between computer and robot vision through data augmentation: A case study on object recognition." Pages 384-393. DOI: 10.1007/978-3-319-68345-4_34 [2019-10-29]

2016: Ilja Kuzborskij, Francesco Orabona, Barbara Caputo. "Scalable greedy algorithms for transfer learning." Computer Vision and Image Understanding, ISSN: 1077-3142. DOI: 10.1016/j.cviu.2016.09.003 [2019-10-29]

2017: Massimiliano Mancini, Samuel Rota Bulo, Elisa Ricci, Barbara Caputo. "Learning Deep NBNN Representations for Robust Place Categorization." IEEE Robotics and Automation Letters 2/3, pages 1794-1801, ISSN: 2377-3766. DOI: 10.1109/LRA.2017.2705282 [2019-10-29]

Are you the coordinator (or a participant) of this project? Please send me more information about the "ROBOEXNOVO" project.

For instance: the website URL (it has not been provided by the EU open data yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as picture files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.

Thanks. And please then add a link to this page on your project's website.

The information about "ROBOEXNOVO" is provided by the European Open Data Portal: CORDIS open data.

More projects from the same programme (H2020-EU.1.1.)

TransTempoFold (2019)

A need for speed: mechanisms to coordinate protein synthesis and folding in metazoans


DEEPTIME (2020)

Probing the history of matter in deep time


Mu-MASS (2019)

Muonium Laser Spectroscopy
