Coordinator | THE UNIVERSITY OF BIRMINGHAM |
Organization address | Edgbaston |
Coordinator nationality | United Kingdom (UK) |
Total cost | 4,490,320 € |
EC contribution | 3,420,000 € |
Programme | FP7-ICT (Specific Programme "Cooperation": Information and communication technologies) |
Call code | FP7-ICT-2011-9 |
Funding scheme | CP |
Start year | 2013 |
Period (year-month-day) | 2013-03-01 to 2016-02-29 |
# | Organization | Address | Country (City) | Role | |
---|---|---|---|---|---|
1 | THE UNIVERSITY OF BIRMINGHAM | Edgbaston | UK (BIRMINGHAM) | coordinator | 0.00 |
2 | UNIVERSITA DI PISA | Lungarno Pacinotti | IT (PISA) | participant | 0.00 |
3 | UNIVERSITAET INNSBRUCK | INNRAIN | AT (INNSBRUCK) | participant | 0.00 |
The challenge laid out in this call for proposals is to advance technologies for, and to understand the principles of, cognition and control in complex systems. We will meet this challenge by advancing methods for object perception, representation and manipulation, so that a robot can robustly manipulate objects even when those objects are unfamiliar and even though the robot's perception and action are unreliable.

The proposal is founded on two assumptions. The first is that the representation of an object's shape in particular, and of its other properties in general, benefits from being compositional (loosely, hierarchical and part-based). The second is that manipulation planning and execution benefit from explicit reasoning about uncertainty in object pose, shape and other properties, and about how that uncertainty changes under the robot's actions; the robot should therefore plan actions that not only achieve the task but also gather information that makes task achievement more reliable.

These two assumptions are mirrored in the structure of the proposed work, which has two main strands:

i) a multi-modal, compositional, probabilistic representation of object properties to support perception and manipulation, and
ii) algorithms for reasoning with this representation, which estimate object properties from visual and haptic data and plan how to actively gather information about shape and other object properties (frictional coefficients, mass) while achieving a task.

These two strands will be combined and tested on robots performing aspects of a dishwasher-loading task. The outcome will be robust manipulation (i.e. under unreliable perception and action) of unfamiliar objects from familiar categories or with familiar parts.
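The active information-gathering idea in strand (ii) can be illustrated with a minimal sketch. It is not the project's actual representation or planner: the shape hypotheses, the "poke" sensing action, the two grasp types (`handle_grasp`, `rim_grasp`) and all probabilities below are invented for illustration. The sketch compares the expected value of grasping on the current belief with a one-step lookahead value of sensing first, so the robot gathers information only when doing so is expected to make task achievement more reliable.

```python
# Hedged toy sketch: choosing between a sensing action and a task action
# under a discrete belief over object-shape hypotheses. All models and
# numbers are invented for illustration only.

HYPOTHESES = ("mug", "bowl", "plate")

# Invented haptic sensor model: P(observation | shape hypothesis).
SENSOR_MODEL = {
    "mug":   {"curved": 0.95, "flat": 0.05},
    "bowl":  {"curved": 0.30, "flat": 0.70},
    "plate": {"curved": 0.05, "flat": 0.95},
}

# Invented grasp models: P(success | shape hypothesis) for two grasp types.
GRASP_SUCCESS = {
    "handle_grasp": {"mug": 0.95, "bowl": 0.20, "plate": 0.10},
    "rim_grasp":    {"mug": 0.30, "bowl": 0.85, "plate": 0.75},
}

POKE_COST = 0.05  # invented fixed cost of one sensing ("poke") action


def bayes_update(belief, observation):
    """Posterior over shape hypotheses after one haptic observation."""
    post = {h: belief[h] * SENSOR_MODEL[h][observation] for h in belief}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}


def expected_success(belief, grasp):
    """Grasp success probability, marginalised over the shape belief."""
    return sum(belief[h] * GRASP_SUCCESS[grasp][h] for h in belief)


def value_of_grasping_now(belief):
    """Best achievable expected success if we commit to a grasp immediately."""
    return max(expected_success(belief, g) for g in GRASP_SUCCESS)


def value_of_poking_first(belief):
    """One-step lookahead: expected value of grasping after one more poke."""
    value = 0.0
    for obs in ("curved", "flat"):
        p_obs = sum(belief[h] * SENSOR_MODEL[h][obs] for h in belief)
        if p_obs > 0.0:
            value += p_obs * value_of_grasping_now(bayes_update(belief, obs))
    return value - POKE_COST


def choose_action(belief):
    """Gather information only when it is expected to pay off."""
    if value_of_poking_first(belief) > value_of_grasping_now(belief):
        return "poke"
    return "grasp"


if __name__ == "__main__":
    uniform = {h: 1.0 / len(HYPOTHESES) for h in HYPOTHESES}
    confident = {"mug": 0.90, "bowl": 0.08, "plate": 0.02}
    print(choose_action(uniform))    # uncertain belief -> "poke" (sense first)
    print(choose_action(confident))  # confident belief -> "grasp" (act now)
```

With a uniform (uncertain) belief the sketch prefers to sense before committing to a grasp, whereas with a confident belief it acts immediately; a full treatment would plan over longer horizons and over continuous pose and shape uncertainty, which this myopic toy deliberately omits.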