VISLIM

Visual Learning and Inference in Joint Scene Models

Coordinator: TECHNISCHE UNIVERSITAET DARMSTADT


Coordinator nationality: Germany [DE]
Total cost: 1,374,030 €
EC contribution: 1,374,030 €
Programme: FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013)
Call code: ERC-2012-StG_20111012
Funding scheme: ERC-SG
Start year: 2013
Period (year-month-day): 2013-06-01 to 2018-05-31

Participants

# participant  country  role  EC contrib. [€]

1  TECHNISCHE UNIVERSITAET DARMSTADT  DE (DARMSTADT)  hostInstitution  1,374,030.00
   Organization address: Karolinenplatz 5, 64289 DARMSTADT
   Contact: Dr. Melanie Meermann-Zimmermann
   Phone: +49 6151 1675972

2  TECHNISCHE UNIVERSITAET DARMSTADT  DE (DARMSTADT)  hostInstitution  1,374,030.00
   Organization address: Karolinenplatz 5, 64289 DARMSTADT
   Contact: Prof. Stefan Roth
   Phone: +49 6151 155668
   Fax: +49 6151 155669

 Word cloud

Explore the word cloud to get a rough overview of the project.

impact, images, image, joint, estimating, pertinent, mutual, multiple, scenes, attributes, vision, abstractions, representations, modeling, visual, jointly, attribute, scene, computer

Project objective

One of the principal difficulties in processing, analyzing, and interpreting digital images is that many attributes of visual scenes relate in complex ways. Despite that, the vast majority of today's top-performing computer vision approaches estimate a particular attribute (e.g., motion, scene segmentation, restored image, object presence, etc.) in isolation; other pertinent attributes are either ignored or crudely pre-computed while disregarding any mutual relation. But since estimating a single attribute of a visual scene from images is often highly ambiguous, there is substantial potential benefit in estimating several attributes jointly. The goal of this project is to develop the foundations of modeling, learning, and inference in rich, joint representations of visual scenes that naturally encompass several of the pertinent scene attributes. Importantly, this goes beyond combining multiple cues: it aims at modeling and inferring multiple scene attributes jointly so as to take advantage of their interplay and their mutual reinforcement, ultimately working toward a full(er) understanding of visual scenes. While the basic idea of using joint representations of visual scenes has a long history, it has only rarely come to fruition. VISLIM aims to significantly push the current state of the art by developing a more general and versatile toolbox for joint scene modeling that addresses heterogeneous visual representations (discrete and continuous, dense and sparse) as well as a wide range of levels of abstraction (from the pixel level to high-level abstractions). This is expected to take joint scene models beyond conceptual appeal toward practical impact and top-level application performance. No other endeavor in computer vision has attempted to develop a similarly broad foundation for joint scene modeling. In doing so we aim to move closer to image understanding, with significant potential impact in other disciplines of science, technology, and the humanities.
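As a rough illustration of what "estimating several attributes jointly" can mean in practice, the toy Python sketch below denoises a 1-D signal and segments it at the same time by alternating minimization of a single coupled energy, rather than denoising first and segmenting the result afterwards. It couples a continuous attribute (the restored signal) with a discrete one (the segment labels), echoing the heterogeneous representations mentioned above. This sketch is not taken from the project; all names, the energy, and the parameter choices are hypothetical and purely illustrative.

# Minimal, hypothetical sketch of joint estimation of two scene attributes:
# a restored (denoised) signal x (continuous) and a segmentation s (discrete),
# obtained by alternating minimization of one coupled energy.
import numpy as np

rng = np.random.default_rng(0)

# Toy observation: two constant segments plus noise.
y = np.concatenate([np.full(50, 1.0), np.full(50, 3.0)]) + 0.4 * rng.standard_normal(100)

K = 2        # number of segment labels (discrete attribute)
alpha = 2.0  # coupling strength between restored signal and segmentation

x = y.copy()                          # continuous attribute: restored signal
means = np.array([y.min(), y.max()])  # per-label means

for _ in range(20):
    # Discrete step: assign each position to the closest segment mean.
    s = np.argmin((x[:, None] - means[None, :]) ** 2, axis=1)
    # Continuous steps: update segment means and the restored signal,
    # each given the current value of the other attribute.
    for k in range(K):
        if np.any(s == k):
            means[k] = x[s == k].mean()
    x = (y + alpha * means[s]) / (1.0 + alpha)

print("label counts:", np.bincount(s, minlength=K))
print("segment means:", np.round(means, 2))

Because the restoration and the segmentation are updated against each other, the denoised signal is pulled toward the current segment means and the segmentation in turn reflects the cleaned signal; estimating either attribute in isolation would forgo this mutual reinforcement.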

Other projects from the same programme (FP7-IDEAS-ERC)

SAFERVIS (2012)

Uncertainty Visualization for Reliable Data Discovery


PHOTOMETA (2013)

Photonic Metamaterials: From Basic Research to Applications


INSILICO-CELL (2012)

Predictive modelling and simulation in mechano-chemo-biology: a computer multi-approach
