VISLIM

Visual Learning and Inference in Joint Scene Models

Coordinator: TECHNISCHE UNIVERSITAET DARMSTADT


Coordinator nationality: Germany [DE]
Total cost: €1,374,030
EC contribution: €1,374,030
Programme: FP7-IDEAS-ERC
Specific programme: "Ideas" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013)
Call code: ERC-2012-StG_20111012
Funding scheme: ERC-SG
Start year: 2013
Period (year-month-day): 2013-06-01 to 2018-05-31

Participants

#  Participant                          Country          Role             EC contrib. [€]

1  TECHNISCHE UNIVERSITAET DARMSTADT    DE (DARMSTADT)   hostInstitution  1,374,030.00
   Organization address: Karolinenplatz 5, 64289 Darmstadt
   Contact: Dr. Melanie Meermann-Zimmermann, phone +49 6151 1675972

2  TECHNISCHE UNIVERSITAET DARMSTADT    DE (DARMSTADT)   hostInstitution  1,374,030.00
   Organization address: Karolinenplatz 5, 64289 Darmstadt
   Contact: Prof. Stefan Roth, phone +49 6151 155668, fax +49 6151 155669

Word cloud

Explore the word cloud to get a rough idea of the project.

representations, pertinent, attributes, scene, abstractions, scenes, visual, modeling, estimating, computer, vision, impact, multiple, joint, images, jointly, image, mutual, attribute

Objective

One of the principal difficulties in processing, analyzing, and interpreting digital images is that many attributes of visual scenes are related in complex ways. Despite this, the vast majority of today's top-performing computer vision approaches estimate a particular attribute (e.g., motion, scene segmentation, restored image, object presence) in isolation; other pertinent attributes are either ignored or crudely pre-computed while ignoring any mutual relation. Since estimating a single attribute of a visual scene from images is often highly ambiguous, there is substantial potential benefit in estimating several attributes jointly.

The goal of this project is to develop the foundations of modeling, learning, and inference in rich, joint representations of visual scenes that naturally encompass several of the pertinent scene attributes. Importantly, this goes beyond combining multiple cues: the aim is to model and infer multiple scene attributes jointly in order to exploit their interplay and mutual reinforcement, ultimately working toward a full(er) understanding of visual scenes.

While the basic idea of using joint representations of visual scenes has a long history, it has only rarely come to fruition. VISLIM aims to significantly advance the state of the art by developing a more general and versatile toolbox for joint scene modeling that addresses heterogeneous visual representations (discrete and continuous, dense and sparse) as well as a wide range of levels of abstraction (from the pixel level to high-level abstractions). This is expected to take joint scene models beyond conceptual appeal toward practical impact and top-level application performance. No other endeavor in computer vision has attempted to develop a similarly broad foundation for joint scene modeling. In doing so, the project aims to move closer to image understanding, with significant potential impact on other disciplines of science, technology, and the humanities.
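
To make the contrast between isolated and joint estimation concrete, below is a minimal illustrative sketch in Python (NumPy). It is not the VISLIM model: the two attributes (a binary per-pixel segmentation and a continuous per-pixel depth map), the quadratic energy terms, the per-class depth prior, and the block-coordinate-descent inference are all invented for illustration only.

# Illustrative sketch only (not the project's method): joint MAP estimation of a
# discrete attribute (segmentation) and a continuous attribute (depth) that are
# coupled through a shared toy energy, so each estimate can reinforce the other.
import numpy as np

rng = np.random.default_rng(0)
H, W = 16, 16

# Toy "ground truth": a square object that is closer to the camera than the background.
true_seg = np.zeros((H, W), dtype=int)
true_seg[4:12, 4:12] = 1
true_depth = np.where(true_seg == 1, 1.0, 3.0)

# Noisy observations: an appearance cue for segmentation and a depth cue.
appearance = true_seg + 0.8 * rng.standard_normal((H, W))
depth_obs = true_depth + 0.5 * rng.standard_normal((H, W))

# Assumed per-class depth prior used by the coupling term (background far, object near).
class_depth = np.array([3.0, 1.0])
lam = 2.0  # strength of the segmentation/depth coupling

def joint_energy(seg, depth):
    """Joint energy: two data terms plus one term coupling both attributes."""
    e_seg = np.sum((appearance - seg) ** 2)
    e_depth = np.sum((depth_obs - depth) ** 2)
    e_couple = lam * np.sum((depth - class_depth[seg]) ** 2)
    return e_seg + e_depth + e_couple

# Independent initialization (what "isolated" estimators would return).
seg = (appearance > 0.5).astype(int)
depth = depth_obs.copy()

# Block coordinate descent: alternately re-estimate each attribute given the other,
# so the current depth estimate informs the segmentation and vice versa.
for it in range(10):
    cost = np.stack([(appearance - k) ** 2 + lam * (depth - class_depth[k]) ** 2
                     for k in (0, 1)])
    seg = np.argmin(cost, axis=0)                                # exact per-pixel label update
    depth = (depth_obs + lam * class_depth[seg]) / (1.0 + lam)   # closed-form depth update
    print(f"iteration {it}: joint energy = {joint_energy(seg, depth):.1f}")

print("segmentation errors vs. ground truth:", int(np.sum(seg != true_seg)))

The point of the sketch is that the coupling term lets a confident depth estimate correct ambiguous appearance evidence and vice versa, which is the kind of mutual reinforcement the objective above refers to; an actual joint scene model would use far richer representations, learned potentials, and more sophisticated inference.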

Other projects in the same programme (FP7-IDEAS-ERC)

CLAPO (2013)
The Coevolution of Life and Arsenic in Precambrian Oceans

FRRO (2012)
The Fragments of the Republican Roman Orators

BIDECASEOX (2009)
Bio-inspired Design of Catalysts for Selective Oxidations of C-H and C=C Bonds