Coordinator | SYDDANSK UNIVERSITET |
Organization address | city: Odense M |
Coordinator nationality | Denmark [DK] |
Total cost | 3,899,781 € |
EC contribution | 2,959,592 € |
Programme | FP7-ICT Specific Programme "Cooperation": Information and communication technologies |
Call code | FP7-ICT-2009-6 |
Funding scheme | CP |
Start year | 2011 |
Period (year-month-day) | 2011-03-01 - 2014-02-28 |
# | Organization | Address | Country (city) | Role | Contribution
---|---|---|---|---|---
1 | SYDDANSK UNIVERSITET | Odense M | DK (Odense M) | coordinator | 0.00
2 | AGENCIA ESTATAL CONSEJO SUPERIOR DE INVESTIGACIONES CIENTIFICAS | CALLE SERRANO | ES (MADRID) | participant | 0.00
3 | GEORG-AUGUST-UNIVERSITAET GOETTINGEN STIFTUNG OEFFENTLICHEN RECHTS | WILHELMSPLATZ | DE (GOETTINGEN) | participant | 0.00
4 | JOZEF STEFAN INSTITUTE | Jamova | SI (LJUBLJANA) | participant | 0.00
5 | RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN | Templergraben | DE (AACHEN) | participant | 0.00
6 | UNIVERSITAET INNSBRUCK | INNRAIN | AT (INNSBRUCK) | participant | 0.00
IntellAct addresses the problem of understanding and exploiting the meaning (semantics) of manipulations in terms of objects, actions and their consequences, so that human actions can be reproduced by machines. This is required in particular for human-robot interaction, in which the robot has to understand a human action and then transfer it to its own embodiment. IntellAct will make this transfer possible not by copying the human's movements but by transferring the action on a semantic level. IntellAct will demonstrate the ability to understand scene and action semantics and to execute actions with a robot in two domains: first, in a laboratory environment (exemplified by a lab on the International Space Station (ISS)), and second, in an assembly process in an industrial context.

IntellAct consists of three building blocks: (1) Learning: abstract, semantic descriptions of manipulations are extracted from video sequences showing a human demonstrating the manipulations; (2) Monitoring: observed manipulations are evaluated against the learned semantic models; (3) Execution: based on the learned semantic models, equivalent manipulations are executed by a robot.

The analysis of low-level observation data for semantic content (Learning) and the synthesis of concrete behaviour (Execution) constitute the major scientific challenge of IntellAct. Based on the semantic interpretation and description, enhanced with low-level trajectory data for grounding, IntellAct addresses two major application areas: first, monitoring human manipulations for correctness (e.g., for training or in high-risk scenarios), and second, the efficient teaching of cognitive robots to perform manipulations in a wide variety of applications.

To achieve these goals, IntellAct brings together recent methods for (1) parsing scenes into spatio-temporal graphs and so-called "semantic Event Chains", (2) probabilistic models of objects and their manipulation, (3) probabilistic rule learning, and (4) dynamic motion primitives for trainable and flexible descriptions of robotic motor behaviour. Its implementation employs a concurrent-engineering approach that includes virtual-reality-enhanced simulation as well as physical robots. The project culminates in the demonstration of a robot understanding, monitoring and reproducing human action.
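The idea behind a semantic event chain can be sketched in a few lines: a manipulation is reduced to the sequence of *changes* in spatial relations between object pairs, discarding exact trajectories, so that two demonstrations match if their chains match regardless of speed or motion details. The following is a minimal illustrative sketch, not the project's actual representation or API; the relation labels and object names are assumptions for the example.

```python
# Illustrative sketch of compressing per-frame object-pair relations into a
# "semantic event chain": only frames where some relation changes are kept.
# Relation labels are hypothetical stand-ins (touching / apart / absent).
TOUCHING, APART, ABSENT = "T", "N", "A"

def compress_to_event_chain(frames):
    """Keep only the frames in which some object-pair relation changes.

    `frames` is a list of dicts mapping an object pair (a frozenset of two
    object names) to a relation label. The compressed chain is the semantic
    signature of the manipulation.
    """
    chain = []
    prev = None
    for frame in frames:
        if frame != prev:          # a relation changed -> semantic event
            chain.append(frame)
            prev = frame
    return chain

# Toy demonstration: a hand grasps a cup and places it on a saucer.
hand_cup = frozenset({"hand", "cup"})
cup_saucer = frozenset({"cup", "saucer"})

frames = [
    {hand_cup: APART,    cup_saucer: APART},     # approach
    {hand_cup: APART,    cup_saucer: APART},     # no change: dropped
    {hand_cup: TOUCHING, cup_saucer: APART},     # grasp
    {hand_cup: TOUCHING, cup_saucer: TOUCHING},  # place
    {hand_cup: APART,    cup_saucer: TOUCHING},  # release
]

chain = compress_to_event_chain(frames)
print(len(chain))  # 4 semantic events out of 5 raw frames
```

Because the chain abstracts away timing and trajectories, it gives the Monitoring block something stable to compare an observed manipulation against, and the Execution block a goal structure to reproduce with its own motor primitives.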