CAPREAL

Performance Capture of the Real World in Motion

Coordinator: MAX PLANCK GESELLSCHAFT ZUR FOERDERUNG DER WISSENSCHAFTEN E.V.


Coordinator nationality: Germany [DE]
Total cost: €1,480,800
EC contribution: €1,480,800
Programme: FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013)
Call code: ERC-2013-StG
Funding scheme: ERC-SG
Start year: 2013
Period (year-month-day): 2013-09-01 to 2018-08-31

Participants

#  Participant  Country  Role  EC contribution [€]

1  MAX PLANCK GESELLSCHAFT ZUR FOERDERUNG DER WISSENSCHAFTEN E.V.

Organization address: Hofgartenstrasse 8
City: MUENCHEN
Postcode: 80539

Contact info
Title: Mr.
First name: Volker Maria
Last name: Geiss
Phone: 4968190000000
Fax: 4968190000000

DE (MUENCHEN)  hostInstitution  1,480,800.00

2  MAX PLANCK GESELLSCHAFT ZUR FOERDERUNG DER WISSENSCHAFTEN E.V.

Organization address: Hofgartenstrasse 8
City: MUENCHEN
Postcode: 80539

Contact info
Title: Prof.
First name: Christian
Last name: Theobalt
Phone: 4968190000000
Fax: 4968190000000

DE (MUENCHEN)  hostInstitution  1,480,800.00


Project objective (Objective)

'Computer graphics technology for realistic rendering has improved dramatically; however, the technology to create the scene models to be rendered, e.g., for movies, has not developed at the same pace. In practice, the state of the art in model creation still requires months of complex manual design, and this is a serious threat to progress. To attack this problem, computer graphics and computer vision researchers have jointly developed methods that capture scene models from real-world examples. Of particular importance is the capture of moving scenes. The pinnacle of dynamic scene capture technology in research is marker-less performance capture: from multi-view video, these methods capture dynamic surface and texture models of the real world. Yet performance capture is hardly used in practice due to profound limitations: recording is usually limited to indoor studios, controlled lighting, and dense static camera arrays. Methods are often limited to single objects, and reconstructed shape detail is very limited. Assumptions about materials, reflectance, and lighting in a scene are simplistic, and captured data cannot easily be modified.

In this project, we will pioneer a new generation of performance capture techniques to overcome these limitations. Our methods will allow the reconstruction of dynamic surface models of unprecedented shape detail. They will succeed on general scenes outside of the lab and outdoors, scenes with complex material and reflectance distributions, and scenes in which lighting is general, uncontrolled, and unknown. They will capture dense and crowded scenes with complex shape deformations. They will reconstruct conveniently modifiable scene models. They will work with sparse and moving sets of cameras, ultimately even with mobile phones. This far-reaching, multi-disciplinary project will turn performance capture from a research technology into a practical technology, provide groundbreaking scientific insights, and open up revolutionary new applications.'

Other projects from the same programme (FP7-IDEAS-ERC)

HIGHZ (2009)

HIGHZ: Elucidating galaxy formation and evolution from very deep Near-IR imaging


CENTRIOLSTRUCTNUMBER (2011)

Control of Centriole Structure And Number


JUDGINGHISTORIES (2014)

"Experience, Judgement, and Representation of World War II in an Age of Globalization"
