Coordinator | INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE |
Coordinator nationality | France [FR] |
Total cost | 2,454,090 € |
EC contribution | 2,454,090 € |
Programme | FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013) |
Call code | ERC-2010-AdG_20100224 |
Funding Scheme | ERC-AG |
Start year | 2011 |
Period (year-month-day) | 2011-01-01 to 2016-12-31 |
# | Participant | Country (city) | Role | EC contribution (€)
---|---|---|---|---
1 | INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE (Domaine de Voluceau, Rocquencourt) | FR (LE CHESNAY Cedex) | hostInstitution | 2,454,090.00
'Digital video is everywhere, at home, at work, and on the Internet. Yet, effective technology for organizing, retrieving, improving, and editing its content is nowhere to be found. Models for video content, interpretation and manipulation inherited from still imagery are obsolete, and new ones must be invented. With a new convergence between computer vision, machine learning, and signal processing, the time is right for such an endeavor. Concretely, we will develop novel spatio-temporal models of video content learned from training data and capturing both the local appearance and nonrigid motion of the elements---persons and their surroundings---that make up a dynamic scene. We will also develop formal models of the video interpretation process that leave behind the architectures inherited from the world of still images to capture the complex interactions between these elements, yet can be learned effectively despite the sparse annotations typical of video understanding scenarios. Finally, we will propose a unified model for video restoration and editing that builds on recent advances in sparse coding and dictionary learning, and will allow for unprecedented control of the video stream. This project addresses fundamental research issues, but its results are expected to serve as a basis for groundbreaking technological advances for applications as varied as film post-production, video archival, and smart camera phones.'
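As a rough illustration of the patch-based sparse coding and dictionary learning techniques the abstract alludes to, the sketch below denoises a toy single frame with scikit-learn. It is only an assumption-laden example, not the project's actual video model: the toy data, patch size, and dictionary size are arbitrary choices made here for brevity.

```python
# Minimal sketch of patch-based denoising via sparse coding and dictionary
# learning (scikit-learn). Illustrative only: not the project's model; the
# toy frame, patch size, and dictionary size are assumptions of this sketch.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)

rng = np.random.default_rng(0)
clean = np.kron(rng.random((16, 16)), np.ones((8, 8)))   # 128x128 toy "frame"
noisy = clean + 0.1 * rng.standard_normal(clean.shape)   # add Gaussian noise

# 1. Cut the noisy frame into overlapping 8x8 patches and flatten them.
patches = extract_patches_2d(noisy, (8, 8))
X = patches.reshape(len(patches), -1)
mean = X.mean(axis=1, keepdims=True)
X0 = X - mean                                             # remove per-patch DC offset

# 2. Learn an overcomplete dictionary and compute a sparse code per patch.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0, random_state=0)
codes = dico.fit(X0).transform(X0)

# 3. Reconstruct each patch from its sparse code and reassemble the frame
#    by averaging the overlapping patches.
denoised_patches = (codes @ dico.components_ + mean).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, noisy.shape)

print("error before:", np.linalg.norm(noisy - clean))
print("error after: ", np.linalg.norm(denoised - clean))
```

The key design choice in this family of methods is that each patch is approximated as a sparse combination of learned dictionary atoms, so noise (which does not fit the learned structure) is largely discarded during reconstruction; extending such models from still images to the spatio-temporal, nonrigidly moving content of video is precisely the kind of gap the project statement above describes.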