
ULTRACEPT (SIGNED)

Ultra-layered perception with brain-inspired information processing for vehicle collision avoidance



Project "ULTRACEPT" data sheet

The following table provides information about the project.

Coordinator
UNIVERSITY OF LINCOLN 

Organization address
address: Brayford Pool
city: LINCOLN
postcode: LN6 7TS
website: www.lincoln.ac.uk

contact info
title: n.a.
name: n.a.
surname: n.a.
function: n.a.
email: n.a.
telephone: n.a.
fax: n.a.

 Coordinator Country United Kingdom [UK]
 Total cost 2,191,500 €
 EC max contribution 1,894,500 € (86%)
 Programme 1. H2020-EU.1.3.3. (Stimulating innovation by means of cross-fertilisation of knowledge)
 Code Call H2020-MSCA-RISE-2017
 Funding Scheme MSCA-RISE
 Starting year 2018
 Duration (year-month-day) from 2018-12-01   to  2022-11-30

 Partnership

Take a look at the project's partnership.

#   participant  country (city)  role  EC contrib. [€]
1   UNIVERSITY OF LINCOLN  UK (LINCOLN)  coordinator  891,000.00
2   UNIVERSITY OF NEWCASTLE UPON TYNE  UK (NEWCASTLE UPON TYNE)  participant  324,000.00
3   AGILE ROBOTS AG  DE (MUNCHEN)  participant  279,000.00
4   WESTFAELISCHE WILHELMS-UNIVERSITAET MUENSTER  DE (MUENSTER)  participant  171,000.00
5   UNIVERSITAET HAMBURG  DE (HAMBURG)  participant  162,000.00
6   VISOMORPHIC TECHNOLOGY LTD  UK (LONDON)  participant  58,500.00
7   DINO ROBOTICS GMBH  DE (KARLSRUHE)  participant  9,000.00
8   Guangzhou University  CN (GUANGZHOU)  partner  0.00
9   GUIZHOU UNIVERSITY  CN (Guiyang)  partner  0.00
10  HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY  CN (WUHAN)  partner  0.00
11  INSTITUTE OF AUTOMATION CHINESE ACADEMY OF SCIENCES  CN (BEIJING)  partner  0.00
12  LINGNAN NORMAL UNIVERSITY  CN (ZHANJIANG GUANGDONG)  partner  0.00
13  NATIONAL UNIVERSITY CORPORATION TOKYO UNIVERSITY OF AGRICULTURE AND TECHNOLOGY  JP (FUCHU SHI TOKYO)  partner  0.00
14  NORTHWESTERN POLYTECHNICAL UNIVERSITY  CN (XI AN)  partner  0.00
15  TSINGHUA UNIVERSITY  CN (BEIJING)  partner  0.00
16  UNIVERSIDAD DE BUENOS AIRES  AR (BUENOS AIRES)  partner  0.00
17  UNIVERSITI PUTRA MALAYSIA  MY (SELANGOR DARUL EHSAN)  partner  0.00
18  XI'AN JIAOTONG UNIVERSITY  CN (XI'AN)  partner  0.00

 Project objective

Autonomous vehicles, although still at an early stage, have demonstrated huge potential to shape future lifestyles for many of us. However, to be accepted by ordinary users, autonomous vehicles must solve one critical issue: trustworthy collision detection. No one wants an autonomous car that is doomed to a collision every few years or months. In the real world, collisions happen every second: more than 1.3 million people are killed in road accidents every single year. The current approaches to vehicle collision detection, such as vehicle-to-vehicle communication, radar, laser-based Lidar and GPS, are far from acceptable in terms of reliability, cost, energy consumption and size. For example, radar is too sensitive to metallic materials; Lidar is too expensive and does not work well on absorbing or reflective surfaces; GPS-based methods struggle in cities with tall buildings; vehicle-to-vehicle communication cannot detect pedestrians or any unconnected objects; segmentation-based vision methods are too power-hungry to be miniaturised; and normal vision sensors cannot cope with fog, rain and dim environments at night. To save people's lives and to make autonomous vehicles safer in serving human society, a new type of trustworthy, robust, low-cost and low-energy vehicle collision detection and avoidance system is badly needed.

This consortium proposes an innovative solution based on brain-inspired, multi-layered, multi-modal information processing for trustworthy vehicle collision detection. It takes advantage of the low-cost spatio-temporal and parallel computing capacity of bio-inspired visual neural systems, combined with multi-modal data inputs, to extract potential collision cues under complex weather and lighting conditions.
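The abstract does not name a specific model, but collision detectors inspired by insect visual neural systems are typically built around looming-sensitive neurons: local luminance change (excitation) competes with spatially spread lateral inhibition, so expanding edges dominate the response as an object approaches. A minimal, hypothetical NumPy sketch of that idea follows; the function names, the 3x3 inhibition average and the inhibition weight are illustrative assumptions, not the project's actual models:

```python
import numpy as np

def looming_step(prev_frame, frame, inhibition_weight=0.4):
    """One step of a simplified looming detector (LGMD-style).

    Excitation is the absolute luminance change between frames;
    inhibition is a spatially spread (3x3 averaged) copy of it,
    so only expanding edges survive the subtraction.
    """
    excitation = np.abs(frame.astype(float) - prev_frame.astype(float))
    # crude lateral inhibition: mean of the 3x3 neighbourhood
    padded = np.pad(excitation, 1, mode="edge")
    h, w = excitation.shape
    inhibition = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    s = np.maximum(excitation - inhibition_weight * inhibition, 0.0)
    k = s.sum() / s.size                      # mean surviving activity
    return np.tanh(k / 2.0)                   # membrane potential in [0, 1)

def looming_response(frames):
    """Membrane potential per frame pair; rising values signal approach."""
    return [looming_step(a, b) for a, b in zip(frames, frames[1:])]

# synthetic stimulus: a dark square expanding on a bright background
frames = []
for r in range(2, 20, 3):
    img = np.full((64, 64), 255, dtype=np.uint8)
    img[32 - r : 32 + r, 32 - r : 32 + r] = 0
    frames.append(img)

resp = looming_response(frames)
print([round(float(v), 3) for v in resp])
```

Because the changed-pixel ring grows with every frame of the approach, the response rises monotonically; a downstream layer would threshold it to trigger avoidance.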

 Deliverables

List of deliverables:
- Preliminary visual neural system models for collision cues extraction (Documents, reports) 2020-03-06 15:56:46
- Database for verification (Other) 2020-03-06 15:56:43
- Project website (Websites, patent filings, videos etc.) 2020-02-07 12:43:54

Take a look at the deliverables in detail: detailed list of ULTRACEPT deliverables.

 Publications

List of publications (year, authors and title, journal, last update).
2020 Jin Xiao, Yuhang Tian, Ling Xie, Xiaoyi Jiang, Jing Huang
A Hybrid Classification Framework Based on Clustering
published pages: 2177-2188, ISSN: 1551-3203, DOI: 10.1109/tii.2019.2933675
IEEE Transactions on Industrial Informatics 16/4 2020-03-05
2019 Qinbing Fu, Hongxin Wang, Cheng Hu, Shigang Yue
Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review
published pages: 263-311, ISSN: 1064-5462, DOI: 10.1162/artl_a_00297
Artificial Life 25/3 2020-03-05
2019 Qinbing Fu, Cheng Hu, Jigen Peng, F. Claire Rind, Shigang Yue
A Robust Collision Perception Visual Neural Network With Specific Selectivity to Darker Objects
published pages: 1-15, ISSN: 2168-2267, DOI: 10.1109/tcyb.2019.2946090
IEEE Transactions on Cybernetics 2019-12-17
2019 Daqi Liu, Nicola Bellotto, Shigang Yue
Deep Spiking Neural Network for Video-Based Disguise Face Recognition Based on Dynamic Facial Movements
published pages: 1-10, ISSN: 2162-237X, DOI: 10.1109/tnnls.2019.2927274
IEEE Transactions on Neural Networks and Learning Systems 19 July 2019 2019-12-16
2019 Hongxin Wang, Jigen Peng, Xuqiang Zheng, Shigang Yue
A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds
published pages: 1-15, ISSN: 2162-237X, DOI: 10.1109/TNNLS.2019.2910418
IEEE Transactions on Neural Networks and Learning Systems 01 May 2019 2019-12-16

Are you the coordinator (or a participant) of this project? Please send me more information about the "ULTRACEPT" project.

For instance: the website URL (not yet provided by EU open data), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.

Thanks. And please add a link to this page on your project's website.

The information about "ULTRACEPT" is provided by the European Open Data Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.3.3.)

TREND (2019)

Transition with Resilience for Evolutionary Development

Read More  

ENDORSE (2018)

Safe, Efficient and Integrated Indoor Robotic Fleet for Logistic Applications in Healthcare and Commercial Spaces

Read More  

VIDEC (2020)

Visualizing Death Inducing Protein Complexes

Read More