VISION

Video Content Description System

 Coordinator: DUBLIN CITY UNIVERSITY

 Organization address: Glasnevin
City: DUBLIN
Postcode: 9

Contact info
Title: Prof.
First name: Noel E.
Surname: O'Connor
Telephone: +353 1 700 5078
Fax: +353 1 700 7995

 Coordinator nationality: Ireland [IE]
 Total cost: €204,587
 EC contribution: €204,587
 Programme: FP7-PEOPLE
Specific programme "People" implementing the Seventh Framework Programme of the European Community for research, technological development and demonstration activities (2007 to 2013)
 Call code: FP7-PEOPLE-2010-IEF
 Funding scheme: MC-IEF
 Start year: 2011
 Period (year-month-day): 2011-05-01 to 2013-04-30

 Participants

# participant                 country        role          EC contrib. [€]
1 DUBLIN CITY UNIVERSITY      IE (DUBLIN)    coordinator   204,587.20

 Word cloud

Explore the word cloud for a rough overview of the project.

date    unlike    designed    content    metadata    automatically    specifically    team    boxes    video    stored    extraction    broadcasts    algorithms    configurable    architecture    easily    genres    vision    device    real    supervisor    structuring    consuming    description    databases    sports    search    browsing    extract    football    flexible    hardware    performed    time    live    viewers    indexing   

 Objective

'To gain value from multimedia repositories and make them easily accessible, the content they contain must be indexed and structured for user searching and browsing. However, most, if not all, of the video content in these databases is stored without any sort of indexing or analysis, and without any associated metadata. Thus, locating clips and browsing content is difficult, time-consuming, and generally inefficient.

The proposed research aims to address this opportunity by providing hardware-centric tools for automatically indexing and structuring video content in real time. Unlike previous work, we propose to develop a flexible and configurable architecture for feature extraction and content structuring that can be adapted to implement the many different systems and approaches proposed in the literature to date for different content genres.

This project aims to design and verify a hardware system that can extract video features and use them as a basis to structure and index live sports broadcasts in real time, leveraging known characteristics of the content. The proposed system is conceived as a pure hardware device purpose-built for this task: all computation is performed on the device itself, without any kind of general-purpose computer. Within the framework of this project, and under the guidance of the supervisor and co-supervisor, the applicant will develop a hardware module that handles low-level feature extraction, and will propose a top layer for the system in which video content description is performed based on the extracted low-level features. Whilst sports content is specifically targeted, the system will be designed so that it can be easily extended to other genres.'
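To make the two-layer split described above concrete, the following minimal sketch mimics it in software: a low-level feature-extraction stage (standing in for the hardware module) feeding a top layer that maps those features to coarse content labels. The features, thresholds and label names are illustrative assumptions only, not the project's design, which targets dedicated hardware rather than software.

# Illustrative sketch: low-level feature extraction feeding a
# content-description top layer. Features, thresholds and labels
# are hypothetical and not taken from the VISION project.
import numpy as np

def extract_low_level_features(frames, audio):
    """Per-frame low-level features (stand-in for the hardware module).
    frames: (n, h, w) grayscale frames; audio: (n, samples_per_frame)."""
    brightness = frames.mean(axis=(1, 2))              # average luminance
    motion = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
    motion = np.concatenate([[0.0], motion])            # frame-difference "motion"
    audio_energy = (audio ** 2).mean(axis=1)            # short-term audio energy
    return np.stack([brightness, motion, audio_energy], axis=1)

def describe_content(features, motion_th=10.0, energy_th=0.5):
    """Top layer: map low-level features to coarse segment labels."""
    labels = []
    for brightness, motion, energy in features:
        if motion > motion_th and energy > energy_th:
            labels.append("action/highlight")
        elif motion > motion_th:
            labels.append("play")
        else:
            labels.append("break")
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.uniform(0, 255, size=(8, 36, 64))      # toy video frames
    audio = rng.normal(0, 1, size=(8, 1600))             # toy audio per frame
    feats = extract_low_level_features(frames, audio)
    print(describe_content(feats))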

Introduction (Teaser)

Relying on TV set-top boxes to search and access content can be burdensome and time-consuming for viewers, especially for sporting events. An EU initiative has designed a novel system that automatically structures live sports broadcasts in real time for on-the-spot viewing.

Project description (Article)

To date, research has primarily concentrated on implementing very specific video content description algorithms in video database search applications. Most of the stored video content in these databases lacks indexing, analysis and associated metadata. What is more, the majority of research on content structuring has focused on football at the expense of other sports or genres.

To address this need, the EU-funded 'Video content description system' (VISION) project developed a system that can extract video features and use them for structuring and indexing live sports broadcasts. It is designed for use beyond sports.

Project partners designed and then validated a hardware device with an underlying architecture that is both flexible and configurable. The system can be embedded into current set-top boxes and video recording devices.

The team developed algorithms for the hardware platform that analyse raw audio, video and images, specifically for sports played indoors or outdoors between two teams. In addition to football, this includes team sports such as rugby, cricket, basketball and baseball. Unlike previous approaches, the system's computations are performed entirely in dedicated hardware, without the use of general-purpose computers. It also takes advantage of freely available Internet libraries.
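As a hedged illustration of one widely used cue in sports-broadcast structuring, the short sketch below flags candidate highlight windows from peaks in short-term audio energy (for example, crowd and commentator excitement). The window length, threshold and overall approach are assumptions for illustration; they are not the project's actual algorithms.

# Minimal sketch of one common cue for structuring sports broadcasts:
# spotting candidate highlights from peaks in short-term audio energy.
# Window size and threshold are illustrative assumptions.
import numpy as np

def candidate_highlights(audio, sample_rate, window_s=1.0, k=2.0):
    """Return start times (seconds) of windows whose energy exceeds
    mean + k * std over the whole broadcast."""
    win = int(window_s * sample_rate)
    n_windows = len(audio) // win
    windows = audio[: n_windows * win].reshape(n_windows, win)
    energy = (windows ** 2).mean(axis=1)
    threshold = energy.mean() + k * energy.std()
    return [i * window_s for i, e in enumerate(energy) if e > threshold]

if __name__ == "__main__":
    sr = 8000
    rng = np.random.default_rng(1)
    audio = rng.normal(0, 0.1, size=sr * 10)    # ten quiet seconds
    audio[3 * sr : 4 * sr] *= 8                  # one loud "event"
    print(candidate_highlights(audio, sr))       # -> [3.0]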

Thanks to VISION, sports enthusiasts will now have devices at their disposal to better access content. Viewers can generate an automatic game summary if they tune in late, or watch highlights while a match is still being played.

Other projects from the same programme (FP7-PEOPLE)

PARTNERS (2008)

Comparative embryonic stem cell research in mammalians

DCGGEOPHYS (2014)

Subsurface conditions in Himalayan glaciers – implications for outburst flood risk prediction

BIOFORS (2013)

Elucidation of forskolin biosynthetic pathway in Coleus forskohlii
