VHIALab

Vision and Hearing In Action Laboratory

Project "VHIALab" data sheet

The following table provides information about the project.

Coordinator
INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE

Organization address
address: DOMAINE DE VOLUCEAU ROCQUENCOURT
city: LE CHESNAY CEDEX
postcode: 78153
website: www.inria.fr

contact info
title: n.a.
name: n.a.
surname: n.a.
function: n.a.
email: n.a.
telephone: n.a.
fax: n.a.

 Coordinator Country France [FR]
 Total cost 149,866 €
 EC max contribution 149,866 € (100%)
 Programme 1. H2020-EU.1.1. (EXCELLENT SCIENCE - European Research Council (ERC))
 Code Call ERC-2017-PoC
 Funding Scheme ERC-POC
 Starting year 2018
 Duration (year-month-day) from 2018-02-01   to  2019-01-31

 Partnership

Take a look at the project's partnership.

# participants  country  role  EC contrib. [€]
1  INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE  FR (LE CHESNAY CEDEX)  coordinator  149,866.00

 Project objective

The objective of VHIALab is the development and commercialization of software packages enabling a robot companion to robustly interact with multiple users. VHIALab builds on the scientific findings of ERC VHIA (February 2014 - January 2019). Solving the problems of audio-visual analysis and interaction opens the door to multi-party and multi-modal human-robot interaction (HRI). In contrast to well-investigated single-user spoken dialog systems, these problems are extremely challenging because of the noise, interference and reverberation present in far-field acoustic signals, overlap of speech signals from two or more different speakers, visual clutter due to complex situations, people appearing and disappearing over time, speakers turning their faces away from the robot, etc. For these reasons, today's companion robots have extremely limited capacities to naturally interact with a group of people. Current vision and speech technologies only enable single-user face-to-face interaction with a robot, benefiting from recent advances in speech recognition, face recognition, and lip reading based on close-field microphones and cameras facing the user. As a consequence, although companion robots have an enormous commercialization potential, they are not yet available on the consumer market. The goal of VHIALab is to further reduce the gap between VHIA's research activities and the commercialization of companion robots with HRI capabilities. We propose to concentrate on the problem of audio-visual detection and tracking of several speakers, to develop an associated software platform, to interface this software with a commercially available companion robot, and to demonstrate the project achievements based on challenging practical scenarios.
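To make the multi-speaker audio-visual tracking idea concrete, below is a minimal, hypothetical Python sketch. It is not the VHIALab software (whose design is not described in this data sheet); it only illustrates one simple ingredient, namely associating audio direction-of-arrival (DoA) estimates with visible faces by nearest-angle matching, so that a sound source with no matching face (for example a speaker facing away from the robot) stays unlabelled. The class and function names and the 15-degree matching threshold are illustrative assumptions.

# Hypothetical illustration only, not the VHIA/VHIALab software.
# Toy association of audio direction-of-arrival (DoA) estimates with
# visual face detections by nearest-angle matching.
from dataclasses import dataclass

@dataclass
class FaceDetection:
    person_id: str      # identity assumed to come from a visual tracker
    azimuth_deg: float  # horizontal angle of the face in the robot frame

def associate_speakers(face_detections, audio_doas_deg, max_gap_deg=15.0):
    """Label each audio DoA with the closest visible face within max_gap_deg.

    Returns a list of (doa, person_id or None); None means the sound source
    could not be matched to any visible face.
    """
    matches = []
    for doa in audio_doas_deg:
        best_id, best_gap = None, max_gap_deg
        for face in face_detections:
            # Signed angular difference wrapped to [-180, 180), then magnitude.
            gap = abs((face.azimuth_deg - doa + 180.0) % 360.0 - 180.0)
            if gap <= best_gap:
                best_id, best_gap = face.person_id, gap
        matches.append((doa, best_id))
    return matches

if __name__ == "__main__":
    faces = [FaceDetection("alice", -30.0), FaceDetection("bob", 20.0)]
    doas = [-28.0, 75.0]  # two detected sound sources
    print(associate_speakers(faces, doas))
    # [(-28.0, 'alice'), (75.0, None)] -> the second source has no visible face

A real multi-party HRI system would replace this frame-by-frame matching with probabilistic tracking over time, so that speaker identities persist through occlusions and overlapping speech.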

Are you the coordinator (or a participant) of this project? Please send me more information about the "VHIALAB" project.

For instance: the website URL (it has not been provided by EU open data yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.

Thanks. And then put a link to this page on your project's website.

The information about "VHIALAB" is provided by the European Open Data Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.1.)

REPLAY_DMN (2019)

A theory of global memory systems

E-DIRECT (2020)

Evolution of Direct Reciprocity in Complex Environments

HYDROGEN (2019)

HighlY performing proton exchange membrane water electrolysers with reinforceD membRanes fOr efficient hydrogen GENeration
