
ECOMODE

Event-Driven Compressive Vision for Multimodal Interaction with Mobile Devices


 ECOMODE project word cloud

Explore the word cloud of the ECOMODE project. It gives a rough idea of what the "ECOMODE" project is about.

battery    temporal    uncontrolled    elderly    excels    motor    groups    recognition    compressive    unrestricted    modal    visual    computers    interface    smart    ecomode    pave    inspired    platform    mobile    chin    technologies    pillars    hardware    auditory    ict    lighting    equipped    dynamics    components    generation    computer    combines    human    assisted    industrialization    modern    contrast    accessing    sensing    sensor    demonstrating    audio    exploits    command    integrability    power    software    encoding    disabilities    society    biologically    edc    energy    nature    input    sparse    designed    gain    background    everyday    efficiency    immunity    commercial    yielding    committed    relies    matured    phones    sensors    suffering    solution    realize    visually    lip    noise    reliably    event    tablet    vision    acquired    barrier    robustness    channels    mild    conventional    ideal    air    participate    gesture    interconnected    finger    platforms    experiencing    impaired    clear    advancing    outdoor    speech    communication    handling    services    subsequent    motion    powered   

Project "ECOMODE" data sheet

The following table provides information about the project.

Coordinator
UNIVERSITE PIERRE ET MARIE CURIE - PARIS 6 

No further information is available about this coordinator. Please contact Fabio for more information.

 Coordinator Country France [FR]
 Project website http://www.ecomode-project.eu/index.php
 Total cost 3˙798˙206 €
 EC max contribution 3˙798˙206 € (100%)
 Programme 1. H2020-EU.2.1.1.4. (Content technologies and information management: ICT for digital content, cultural and creative industries)
 Call code H2020-ICT-2014-1
 Funding Scheme RIA
 Starting year 2015
 Duration (year-month-day) from 2015-01-01   to  2018-12-31

 Partnership

Take a look at the project's partnership.

#    participant    country (city)    role    EC contrib. [€]
1    SORBONNE UNIVERSITE FR (PARIS) coordinator 758˙125.00
2    UNIVERSITE PIERRE ET MARIE CURIE - PARIS 6 FR (PARIS) coordinator 0.00
3    FONDAZIONE ISTITUTO ITALIANO DI TECNOLOGIA IT (GENOVA) participant 592˙708.00
4    AGENCIA ESTATAL CONSEJO SUPERIOR DEINVESTIGACIONES CIENTIFICAS ES (MADRID) participant 556˙278.00
5    FONDAZIONE BRUNO KESSLER IT (TRENTO) participant 488˙429.00
6    Streetlab FR (Paris) participant 449˙665.00
7    PROPHESEE FR (PARIS) participant 286˙250.00
8    FONDATION DE COOPERATION SCIENTIFIQUE VOIR ET ENTENDRE FR (PARIS) participant 257˙500.00
9    INNOVATI NETWORKS SL ES (MADRID) participant 232˙506.00
10    EXPERIS MANPOWERGROUP SL ES (MADRID) participant 176˙743.00


 Project objective

The visually impaired and the elderly, who often suffer from mild speech and/or motor disabilities, face a significant and growing barrier in accessing ICT technology and services. Yet, in order to participate in a modern, interconnected society that relies on ICT for handling everyday tasks, these user groups clearly need access to ICT as well, in particular to mobile platforms such as tablet computers and smartphones.

The project aims to develop and exploit the recently matured and rapidly advancing, biologically inspired technology of event-driven compressive (EDC) sensing of audio-visual information in order to realize a new generation of low-power, multi-modal human-computer interfaces for mobile devices. The project rests on two main technology pillars: (A) an air-gesture control set and (B) a vision-assisted speech recognition set. Pillar (A) exploits EDC vision for low- and high-level hand and finger gesture recognition and subsequent command execution; pillar (B) combines the temporal dynamics of lip and chin motion, acquired with EDC vision sensors, with auditory sensor input to improve the robustness and background-noise immunity of spoken command recognition and speech-to-text input.

In contrast to state-of-the-art technologies, both proposed human-computer communication channels are designed to work reliably under uncontrolled conditions. In particular, mobile devices equipped with the proposed interface technology will support unrestricted outdoor use under uncontrolled lighting and background-noise conditions. Furthermore, owing to the sparse nature of its information encoding, EDC outperforms conventional approaches in energy efficiency, making it an ideal solution for mobile, battery-powered devices. ECOMODE is committed to paving the way for the industrialization of commercial products by demonstrating the availability of the required hardware and software components and their integrability into a mobile platform.
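The core idea behind EDC sensing is that data are produced only where and when local contrast changes, rather than as full frames at a fixed rate. The following minimal Python sketch is purely illustrative and is not ECOMODE code: the sensor resolution, event rate, and all names are assumptions chosen for the example. It compares the data volume a conventional frame-based camera transmits with an event-driven stream for a scene in which only a small region moves, which is the source of the energy-efficiency argument made above.

from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    t_us: int      # timestamp in microseconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 = brightness increase, -1 = decrease

def frame_pixels(width: int, height: int, fps: int, seconds: float) -> int:
    """Pixels a conventional camera transmits, whether or not anything moves."""
    return int(width * height * fps * seconds)

def event_count(events: List[Event]) -> int:
    """Samples an event camera transmits: one per local contrast change."""
    return len(events)

if __name__ == "__main__":
    # Toy stream: a small moving edge produces a few thousand events per second,
    # while the static background produces none.
    events = [Event(t_us=i * 200,
                    x=(i % 40) + 60,
                    y=(i // 40) + 80,
                    polarity=1 if i % 2 else -1)
              for i in range(5000)]  # ~1 s of activity from a small moving region

    dense = frame_pixels(width=304, height=240, fps=30, seconds=1.0)  # assumed sensor geometry
    sparse = event_count(events)
    print(f"frame-based pixels/s:  {dense:,}")
    print(f"event-driven events/s: {sparse:,}")
    print(f"data reduction: ~{dense / sparse:.0f}x for this toy scene")

For this toy scene the event stream carries a few hundred times less data than the equivalent frame stream, which is the kind of sparsity the project exploits for battery-powered devices.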

 Deliverables

List of deliverables.
title                                      type                                       last update
Website and other ICT support online       Websites, patent filings, videos etc.     2020-03-11 12:13:10
Definition of Industrial Advisory Board    Other                                      2020-03-11 12:13:10
Open Workshop                              Other                                      2020-03-11 12:13:11

Take a look at the deliverables in detail: detailed list of ECOMODE deliverables.

 Publications

List of publications.

year    authors and title    journal    last update
2017 Evangelos Stromatias, Miguel Soto, Teresa Serrano-Gotarredona, Bernabé Linares-Barranco
An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data
published pages: , ISSN: 1662-453X, DOI: 10.3389/fnins.2017.00350
Frontiers in Neuroscience 11 2020-03-11
2017 Xavier Clady, Jean-Matthieu Maro, Sébastien Barré, Ryad B. Benosman
A Motion-Based Feature for Event-Based Pattern Recognition
published pages: , ISSN: 1662-453X, DOI: 10.3389/fnins.2016.00594
Frontiers in Neuroscience 10 2020-03-11
2017 Nadia Mana, Ornella Mich, and Michela Ferron
How to increase older adults’ accessibility to mobile technology? The new ECOMODE camera
published pages: , ISSN: , DOI:
ForItAAL2017 -- Italian Forum on Ambient Assisted Living 2020-03-11
2016 Badino L.
Phonetic Context Embeddings for DNN-HMM Phone Recognition
published pages: , ISSN: , DOI:
Proc. of Interspeech 2016 2020-03-11
2015 Teresa Serrano-Gotarredona, Bernabé Linares-Barranco
Poker-DVS and MNIST-DVS. Their History, How They Were Made, and Other Details
published pages: , ISSN: 1662-453X, DOI: 10.3389/fnins.2015.00481
Frontiers in Neuroscience 9 2020-03-11
2017 Badino, L., Franceschi, L., Donini, M., Pontil, M.
A Speaker Adaptive DNN Training Approach for Speaker-independent Acoustic Inversion
published pages: , ISSN: , DOI:
Proc. Of Interspeech 2020-03-11
2017 A. Yousefzadeh, M. Jablonski, T. Iakymchuk, A. Linares-Barranco, A. Rosado, L. A. Plana, S. Temple, T. Serrano-Gotarredona, S. Furber, and B. Linares-Barranco
On Multiple AER Handshaking Channels over High-Speed Bit-Serial Bi-Directional LVDS Links with Flow-Control and Clock-Correction on Commercial FPGAs for Scalable Neuromorphic Systems
published pages: , ISSN: , DOI:
IEEE Trans. on Biomedical Circuits and Systems 2020-03-11
2016 Michela Ferron, Ornella Mich, and Nadia Mana
Wizard of Oz Studies with Older Adults: A Methodological Note
published pages: 93-100, ISSN: , DOI:
Symposium on Challenges and Experiences in Designing for an Ageing Society 2020-03-11
2018 Luis A. Camuñas-Mesa, Yaisel L. Domínguez-Cordero, Alejandro Linares-Barranco, Teresa Serrano-Gotarredona, Bernabé Linares-Barranco
A Configurable Event-Driven Convolutional Node with Rate Saturation Mechanism for Modular ConvNet Systems Implementation
published pages: , ISSN: 1662-453X, DOI: 10.3389/fnins.2018.00063
Frontiers in Neuroscience 12 2020-03-11
2018 N. Mana, G. Schiavo, M. Ferron, O. Mich
Investigating redundancy in multimodal interaction with tablet devices for older adults
published pages: 183-183, ISSN: 1569-1101, DOI: 10.4017/gt.2018.17.s.178.00
Gerontechnology 17/s 2020-03-11
2018 Sio-Hoi Ieng, Eero Lehtonen, Ryad Benosman
Complexity Analysis of Iterative Basis Transformations Applied to Event-Based Signals
published pages: , ISSN: 1662-453X, DOI: 10.3389/fnins.2018.00373
Frontiers in Neuroscience 12 2020-03-11
2018 Amirreza Yousefzadeh, Garrick Orchard, Teresa Serrano-Gotarredona, Bernabe Linares-Barranco
Active Perception With Dynamic Vision Sensors. Minimum Saccades With Optimum Recognition
published pages: 927-939, ISSN: 1932-4545, DOI: 10.1109/tbcas.2018.2834428
IEEE Transactions on Biomedical Circuits and Systems 12/4 2020-03-11
2018 L. A. Camuñas-Mesa, T. Serrano-Gotarredona, S. Ieng, R. Benosman and B. Linares-Barranco
Event-Driven Stereo Visual Tracking Algorithm to Solve Object Occlusion
published pages: 4223-4237, ISSN: 2162-237X, DOI: 10.1109/tnnls.2017.2759326
IEEE Transactions on Neural Networks and Learning Systems 29/9 2020-03-11
2018 A. Savran, R. Tavarone, B. Higy, L. Badino and C. Bartolozzi
Energy and Computation Efficient Audio-Visual Voice Activity Detection Driven by Event-Cameras
published pages: , ISSN: , DOI: 10.1109/fg.2018.00055
2020-03-11
2017 Amirreza Yousefzadeh, Miroslaw Jablonski, Taras Iakymchuk, Alejandro Linares-Barranco, Alfredo Rosado, Luis A. Plana, Steve Temple, Teresa Serrano-Gotarredona, Steve B. Furber, Bernabe Linares-Barranco
On Multiple AER Handshaking Channels Over High-Speed Bit-Serial Bidirectional LVDS Links With Flow-Control and Clock-Correction on Commercial FPGAs for Scalable Neuromorphic Systems
published pages: 1133-1147, ISSN: 1932-4545, DOI: 10.1109/tbcas.2017.2717341
IEEE Transactions on Biomedical Circuits and Systems 11/5 2020-03-11
2018 N. Mana, O. Mich, M. Ferron
Are mid-air gestures perceived as strenuous when used to interact with mobile technology by older adults?
published pages: 85-85, ISSN: 1569-1101, DOI: 10.4017/gt.2018.17.s.085.00
Gerontechnology 17/s 2020-03-11
2018 Raffaele Tavarone, Leonardo Badino
Conditional-Computation-Based Recurrent Neural Networks for Computationally Efficient Acoustic Modelling
published pages: 1274-1278, ISSN: , DOI: 10.21437/interspeech.2018-2195
Interspeech 2018 2020-03-11

Are you the coordinator (or a participant) of this project? Please send me more information about the "ECOMODE" project.

For instance: the website URL (not yet provided by EU open data), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will add them to your project's page as soon as possible.

Thanks. And please add a link to this page on your project's website.

The information about "ECOMODE" is provided by the European Opendata Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.2.1.1.4.)

SEWA (2015)

Automatic Sentiment Estimation in the Wild

POPART (2015)

Previz for On-set Production - Adaptive Realtime Tracking

Film265 (2015)

Improving European VoD Creative Industry with High Efficiency Video Delivery
