Coordinator | RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN |
Coordinator Nationality | Germany [DE] |
Total Cost | 1 499 960 € |
EC Contribution | 1 499 960 € |
Programme | FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013) |
Call Code | ERC-2012-StG_20111012 |
Funding Scheme | ERC-SG |
Start Year | 2012 |
Period (year-month-day) | 2012-11-01 to 2017-10-31 |
# | Participant | Country (City) | Role | Contribution (€) |
---|---|---|---|---|
1 | RHEINISCH-WESTFAELISCHE TECHNISCHE HOCHSCHULE AACHEN (Templergraben 55) | DE (AACHEN) | hostInstitution | 1 499 960.00 |
'The goal of CV-SUPER is to create the technology to perform dynamic visual scene understanding from the perspective of a moving human observer. Briefly stated, we want to enable computers to see and understand what humans see when they navigate their way through busy inner-city locations. Our target scenario is dynamic visual scene understanding in public spaces, such as pedestrian zones, shopping malls, or other locations primarily designed for humans. CV-SUPER will develop computer vision algorithms that can observe the people populating those spaces, interpret and understand their actions and their interactions with other people and inanimate objects, and from this understanding derive predictions of their future behaviors within the next few seconds. In addition, we will develop methods to infer semantic properties of the observed environment and learn to recognize how those affect people’s actions. Supporting those tasks, we will develop a novel design of an object recognition system that scales up to potentially hundreds of categories. Finally, we will bind all those components together in a dynamic 3D world model, showing the world’s current state and facilitating predictions of how this state will most likely change within the next few seconds. These are crucial capabilities for the creation of technical systems that may one day assist humans in their daily lives within such busy spaces, e.g., in the form of personal assistance devices for elderly or visually impaired people or in the form of future generations of mobile service robots and intelligent vehicles.'
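The objective above outlines a pipeline of components (people observation, behavior prediction over a few seconds, semantic scene understanding, a dynamic 3D world model). Purely as an illustration of the kind of short-horizon behavior prediction mentioned there, the sketch below extrapolates a tracked pedestrian's position a few seconds ahead with a simple constant-velocity model; the class and function names and the model choice are assumptions for this example, not the project's actual method.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Track:
    """Observed 2D ground-plane positions (metres) of one pedestrian, oldest first."""
    track_id: int
    positions: List[Tuple[float, float]]
    dt: float  # sampling interval between observations, in seconds

def predict_positions(track: Track, horizon_s: float = 3.0) -> List[Tuple[float, float]]:
    """Extrapolate the last observed position a few seconds ahead using a
    constant-velocity model (an illustrative assumption, not CV-SUPER's method)."""
    if len(track.positions) < 2:
        return []  # not enough history to estimate a velocity
    (x0, y0), (x1, y1) = track.positions[-2], track.positions[-1]
    vx, vy = (x1 - x0) / track.dt, (y1 - y0) / track.dt
    steps = round(horizon_s / track.dt)
    return [(x1 + vx * k * track.dt, y1 + vy * k * track.dt) for k in range(1, steps + 1)]

if __name__ == "__main__":
    # One pedestrian walking roughly along the x-axis, sampled at 10 Hz.
    walker = Track(track_id=1, positions=[(0.0, 0.0), (0.14, 0.01), (0.28, 0.02)], dt=0.1)
    future = predict_positions(walker, horizon_s=2.0)
    print(f"Predicted position 2 s ahead: {future[-1]}")
```

In a full system of the kind the abstract describes, such a kinematic extrapolation would only be a baseline; the predicted paths would additionally be conditioned on the inferred semantics of the scene and on interactions with other people and objects.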