Coordinator | DEUTSCHES FORSCHUNGSZENTRUM FUER KUENSTLICHE INTELLIGENZ GMBH |
Organization address | Trippstadter Strasse 122 |
Coordinator nationality | Germany [DE] |
Total cost | 4,232,648 € |
EC contribution | 3,232,177 € |
Programme | FP7-ICT, Specific Programme "Cooperation": Information and communication technologies |
Call code | FP7-ICT-2009-4 |
Funding scheme | CP |
Start year | 2010 |
Period (year-month-day) | 2010-01-01 - 2012-12-31 |
# | Participant | Address | Country (City) | Role | Contribution
---|---|---|---|---|---
1 | DEUTSCHES FORSCHUNGSZENTRUM FUER KUENSTLICHE INTELLIGENZ GMBH | Trippstadter Strasse 122 | DE (KAISERSLAUTERN) | coordinator | 0.00
2 | Organisation name not available | Guimaraes | PT (Guimaraes) | participant | 0.00
3 | CENTRE NATIONAL DE LA RECHERCHE SCIENTIFIQUE | Rue Michel-Ange | FR (PARIS) | participant | 0.00
4 | TECHNOLOGIE INITIATIVE SMARTFACTORY KL E.V. | Gottlieb-Daimler-Strasse | DE (Kaiserslautern) | participant | 0.00
5 | TRIVISIO PROTOTYPING GMBH | KARCHERSTRASSE | DE (Dreieich) | participant | 0.00
6 | UNIVERSIDADE DO MINHO | Largo do Paco | PT (BRAGA) | participant | 0.00
7 | UNIVERSITE DE TECHNOLOGIE DE COMPIEGNE | Centre Benjamin Franklin, Rue Roger Couttolenc | FR (COMPIEGNE) | participant | 0.00
8 | UNIVERSITY OF BRISTOL | Senate House, Tyndall Avenue | UK (BRISTOL) | participant | 0.00
9 | UNIVERSITY OF LEEDS | Woodhouse Lane | UK (LEEDS) | participant | 0.00
The automatic capture, recognition and rendering of human sensory-motor activities are essential technologies in many diverse applications, ranging from 3D virtual manuals to training simulators and novel computer games. Although capture systems already exist on the market, they focus primarily on recording raw motion data matched to a coarse model of the human body. Moreover, the recorded data is organised as a single cinematic sequence, with little or no reference to the underlying task activity or workflow patterns exhibited by the human subject. The result is data that is difficult to use in all but the most straightforward applications, requiring extensive editing and user manipulation, especially when a cognitive understanding of human action is a key concern, such as in virtual manuals or training simulators.

The aim of the COGNITO project is to address these issues by advancing both the scope and the capability of human activity capture, recognition and rendering. Specifically, we propose to develop novel techniques that allow cognitive workflow patterns to be analysed, learnt, recorded and subsequently rendered in a user-adaptive manner. Our concern will be to map and closely couple both the afferent and efferent channels of the human subject, enabling activity data to be linked directly to workflow patterns and task completion. We will focus particularly on tasks involving the manipulation of objects and tools by hand, owing to their importance in many industrial applications.
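The abstract does not prescribe a concrete algorithm, but the core idea of linking a continuous capture stream to discrete workflow steps can be illustrated with a toy sketch. The example below is purely illustrative and is not the COGNITO method: it assumes hypothetical hand-motion feature vectors and step templates, labels each captured frame by nearest-centroid matching, smooths the labels over time, and collapses them into an ordered workflow.

```python
# Illustrative sketch only: segmenting a captured motion stream into workflow
# steps. All names (STEP_TEMPLATES, the feature layout, the step labels) are
# hypothetical assumptions, not taken from the COGNITO project documentation.
import numpy as np

# Hypothetical per-step reference feature vectors, e.g. averaged hand poses
# recorded once for each workflow step (reach, grasp, fasten).
STEP_TEMPLATES = {
    "reach":  np.array([0.9, 0.1, 0.0]),
    "grasp":  np.array([0.2, 0.8, 0.1]),
    "fasten": np.array([0.1, 0.2, 0.9]),
}

def label_frames(frames):
    """Assign each feature frame to the nearest step template."""
    names = list(STEP_TEMPLATES)
    centroids = np.stack([STEP_TEMPLATES[n] for n in names])
    labels = []
    for frame in frames:
        distances = np.linalg.norm(centroids - frame, axis=1)
        labels.append(names[int(np.argmin(distances))])
    return labels

def smooth(labels, window=3):
    """Majority-vote smoothing so brief misclassifications do not split steps."""
    half = window // 2
    out = []
    for i in range(len(labels)):
        chunk = labels[max(0, i - half): i + half + 1]
        out.append(max(set(chunk), key=chunk.count))
    return out

def to_workflow(labels):
    """Collapse consecutive identical labels into an ordered list of steps."""
    steps = []
    for label in labels:
        if not steps or steps[-1] != label:
            steps.append(label)
    return steps

if __name__ == "__main__":
    # Synthetic capture stream: noisy frames moving from reach to grasp to fasten.
    rng = np.random.default_rng(0)
    stream = np.concatenate([
        STEP_TEMPLATES["reach"] + rng.normal(0, 0.05, (20, 3)),
        STEP_TEMPLATES["grasp"] + rng.normal(0, 0.05, (20, 3)),
        STEP_TEMPLATES["fasten"] + rng.normal(0, 0.05, (20, 3)),
    ])
    print(to_workflow(smooth(label_frames(stream))))  # ['reach', 'grasp', 'fasten']
```

In practice, a system of the kind the abstract describes would replace the nearest-centroid step with learned models and richer sensor features; the sketch only shows how frame-level activity data can be tied back to a task-level workflow.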