Coordinator | UNIVERSITY COLLEGE LONDON
Coordinator nationality | United Kingdom [UK] |
Total cost | €1,478,208 |
EC contribution | €1,478,208 |
Programme | FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013) |
Call code | ERC-2007-StG |
Funding Scheme | ERC-SG |
Start year | 2008 |
Period (year-month-day) | 2008-11-01 to 2014-10-31 |
# | Organization | Address | Country (City) | Role | Contribution
---|---|---|---|---|---
1 | QUEEN MARY UNIVERSITY OF LONDON | 327 MILE END ROAD | UK (LONDON) | beneficiary | 0.00
2 | UNIVERSITY COLLEGE LONDON | GOWER STREET | UK (LONDON) | hostInstitution | 0.00
'Recent research has uncovered real potential for humans to interact with computers in natural ways by using their body motion, gestures and facial expressions. This has resulted in a huge surge of research within the Computer Vision community to develop algorithms able to understand, model and interpret human motion using visual information. Commercial motion capture solutions exist that can reconstruct the full motion of a human body or the deformations of a face. However, these systems are severely restricted by the need to use markers on the subject and multiple calibrated cameras, besides being costly and technically complex. Imagine instead the possibility of pointing a camera at a person for a few seconds and obtaining a fully parameterised detailed 3D model in a completely automated way. This 3D model could subsequently be used for animation tasks, to assist physiotherapists in the rehabilitation of patients with injuries, or ultimately to guide a robot in a surgical operation. The aim of this project is to bring this scenario closer to reality by conducting the ground-breaking research needed to crack some of the challenging open problems in visual human motion analysis. So far, visual human motion tracking systems have typically modelled the human body as a 3D skeleton, ignoring the fact that each of its articulated parts is not strictly rigid but can also deform, since they are surrounded by soft tissue, muscles and clothes. Think of a torso performing small twists, a bicep flexing, or a face performing different facial expressions. In this grant I am interested in recovering the full detailed 3D shape of the human body, including a model for the supporting 3D skeleton that captures its underlying articulated structure and a collection of deformable models to describe the non-rigid nature of each of its parts. Crucially, I plan to obtain these models without the use of markers, prior models or exemplars: purely from image measurements.'
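The abstract describes pairing a rigid articulated skeleton with per-part deformable models. A common way to represent such non-rigid deformation in structure-from-motion work is a linear low-rank shape model, where a part's shape is the rest shape plus a weighted sum of deformation basis shapes, observed through camera projection. The sketch below is only illustrative of that general idea, not the project's actual method; all names, dimensions and the random data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

P = 10  # number of 3D points on a body part (illustrative)
K = 3   # number of deformation basis shapes (illustrative)

mean_shape = rng.normal(size=(P, 3))  # rigid rest shape of the part
basis = rng.normal(size=(K, P, 3))    # deformation modes (e.g. flexing, twisting)

def deformed_shape(coeffs):
    """Low-rank model: shape = mean + weighted sum of basis shapes."""
    return mean_shape + np.tensordot(coeffs, basis, axes=1)

def project_orthographic(shape3d, rotation):
    """Orthographic camera: rotate the shape, then drop the depth axis."""
    return (shape3d @ rotation.T)[:, :2]

# A markerless pipeline would do the inverse: from 2D observations like
# `obs`, jointly estimate rotation, coefficients, and the basis itself.
coeffs = np.array([0.5, -0.2, 0.1])
S = deformed_shape(coeffs)
obs = project_orthographic(S, np.eye(3))
```

With zero coefficients the model reduces to the rigid rest shape, which is exactly the skeleton-only assumption the abstract says earlier systems were limited to.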