STEP2DYNA (SIGNED)

Spatial-temporal information processing for collision detection in dynamic environments

 STEP2DYNA project word cloud

Explore the word cloud of the STEP2DYNA project. It gives you a very rough idea of what the project "STEP2DYNA" is about.

dynamic    secondments    computer    reliability    farming    people    realized    jointly    innovative    conferences    modelers    robots    lives    bio    proposes    demonstrated    restricted    neural    environments    advantages    exploited    designers    human    step2dyna    lacking    day    millions    strengths    radar    sensors    breadth    precision    solution    leader    vision    miniaturized    market    vehicle    position    consumption    visual    demands    unless    computing    industry    readily    fatalities    east    autonomous    size    happens    unmanned    capability    expertise    area    3560    workshops    low    staff    spatial    laser    parallel    inspired    goods    detection    possessed    ladar    temporal    realizing    save    worldwide    badly    neurobiologists    safe    complement    society    delivering    acceptable    households    collision    biological    uavs    world    accidents    takes    energy    gps    chips    designed    serious    aerial    serving    chip    engineers    died    sme    secondly    asia    serve    multidisciplinary    robotics    vehicles    capacity   

Project "STEP2DYNA" data sheet

The following table provides information about the project.

Coordinator
UNIVERSITY OF LINCOLN 

Organization address
address: Brayford Pool
city: LINCOLN
postcode: LN6 7TS
website: www.lincoln.ac.uk

contact info
title: n.a.
name: n.a.
surname: n.a.
function: n.a.
email: n.a.
telephone: n.a.
fax: n.a.

 Coordinator Country United Kingdom [UK]
 Project website http://www.step2dyna.eu/
 Total cost 1˙228˙500 €
 EC max contribution 1˙008˙000 € (82%)
 Programme 1. H2020-EU.1.3.3. (Stimulating innovation by means of cross-fertilisation of knowledge)
 Code Call H2020-MSCA-RISE-2015
 Funding Scheme MSCA-RISE
 Starting year 2016
 Duration (year-month-day) from 2016-07-01   to  2020-06-30

 Partnership

Take a look at the project's partnership.

#    participant    country (city)    role    EC contribution [€]
1    UNIVERSITY OF LINCOLN UK (LINCOLN) coordinator 517˙500.00
2    UNIVERSITY OF NEWCASTLE UPON TYNE UK (NEWCASTLE UPON TYNE) participant 243˙000.00
3    AGILE ROBOTS AG DE (MUNCHEN) participant 126˙000.00
4    UNIVERSITAET HAMBURG DE (HAMBURG) participant 121˙500.00
5    Guangzhou University CN (GUANGZHOU) partner 0.00
6    HUAZHONG UNIVERSITY OF SCIENCE AND TECHNOLOGY CN (WUHAN) partner 0.00
7    NATIONAL UNIVERSITY CORPORATION KYUSHU UNIVERSITY JP (FUKUOKA) partner 0.00
8    TSINGHUA UNIVERSITY CN (BEIJING) partner 0.00
9    UNIVERSIDAD DE BUENOS AIRES AR (BUENOS AIRES) partner 0.00
10    UNIVERSITI PUTRA MALAYSIA MY (SELANGOR DARUL EHSAN) partner 0.00
11    XI'AN JIAOTONG UNIVERSITY CN (XI'AN) partner 0.00

 Project objective

In the real world, collisions happen every second, often resulting in serious accidents and fatalities. For example, more than 3,560 people die in vehicle collisions every day worldwide. In another sector, autonomous unmanned aerial vehicles (UAVs) have demonstrated great potential for serving human society, for example by delivering goods to households and enabling precision farming, but their use is restricted by the lack of collision detection capability. Current approaches to collision detection, such as radar, laser-based Ladar and GPS, are far from acceptable in terms of reliability, energy consumption and size. A new type of low-cost, low-energy, miniaturized collision detection sensor is badly needed, not only to save millions of people's lives but also to make autonomous UAVs and robots safe enough to serve human society. The STEP2DYNA consortium proposes an innovative bio-inspired solution for collision detection in dynamic environments. It takes advantage of the low-cost spatial-temporal and parallel computing capacity of visual neural systems and realizes it in a chip designed specifically for collision detection in dynamic environments.
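
The spatial-temporal processing described above can be pictured with a minimal sketch of a looming-sensitive (LGMD-style) visual neural model, the kind of model the consortium's publications listed further below build on. This is only an illustration under assumptions: the lateral-inhibition kernel, the inhibition gain, the spike threshold and the frame format are placeholder choices, not the project's parameters or its chip design.

```python
# Minimal, illustrative LGMD-style looming detector (a sketch, not the
# project's model): excitation from frame-to-frame change competes with
# delayed, laterally spread inhibition; a looming object keeps excitation
# ahead of inhibition and drives the cell's potential over the threshold.
import numpy as np
from scipy.ndimage import convolve

# Placeholder lateral-inhibition weights (how delayed excitation spreads to neighbours).
INHIBITION_KERNEL = np.array([[0.125, 0.25, 0.125],
                              [0.25,  0.00, 0.25],
                              [0.125, 0.25, 0.125]])
INHIBITION_GAIN = 2.0    # assumed weighting of inhibition against excitation
SPIKE_THRESHOLD = 0.88   # assumed membrane-potential threshold for a collision warning


def lgmd_step(frame, prev_frame, prev_excitation):
    """One spatial-temporal step on consecutive grayscale frames (2-D uint8 arrays)."""
    # Photoreceptor layer: luminance change between consecutive frames.
    excitation = np.abs(frame.astype(float) - prev_frame.astype(float))
    # Inhibition layer: the previous excitation, delayed one step and spread laterally.
    inhibition = convolve(prev_excitation, INHIBITION_KERNEL, mode="constant")
    # Summation layer: excitation minus weighted inhibition, rectified.
    summation = np.maximum(excitation - INHIBITION_GAIN * inhibition, 0.0)
    # LGMD cell: sigmoid membrane potential over the pooled activity.
    potential = 1.0 / (1.0 + np.exp(-summation.sum() / frame.size))
    spike = potential > SPIKE_THRESHOLD
    return spike, potential, excitation  # excitation feeds the next step's inhibition
```

Feeding consecutive camera frames through lgmd_step (with prev_excitation starting as an all-zero array) and treating a spike as a stop-or-avoid command is roughly how such a model would sit on a UAV or micro robot; the project's aim is to realize this kind of spatial-temporal, parallel processing directly in dedicated low-power chips.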

Realizing visual neural systems in chips demands multidisciplinary expertise in biological system modelling, computer vision, chip design and robotics. This breadth of expertise is not readily available within a single institution. Secondly, the market potential of the collision detection system cannot be fully exploited without a dedicated industrial partner. This consortium is therefore designed to bring together neurobiologists, neural system modelers, chip designers, robotics researchers and engineers from Europe and East Asia, so that they complement each other's research strengths via staff secondments and jointly organised workshops and conferences. Through this project, the partners will build up strong expertise in this exciting multidisciplinary area, and the European SME will be well positioned as a market leader in collision detection.

 Deliverables

List of deliverables (deliverable | type | last update).
Kick off workshop | Other | 2020-02-25 17:05:50
Preliminary demonstrator system for robust collision detection | Demonstrators, pilots, prototypes | 2020-02-25 17:05:50
Progress Report | Documents, reports | 2020-02-25 17:05:50
Neural vision chip structure identification | Demonstrators, pilots, prototypes | 2020-02-25 17:05:50
Identified visual neural systems for realization | Demonstrators, pilots, prototypes | 2020-02-25 17:05:50
Implementation of visual neural systems to a robotic platform | Demonstrators, pilots, prototypes | 2020-02-25 17:05:50
Visual neural systems integration for single cue extraction enhancements | Demonstrators, pilots, prototypes | 2020-02-25 17:05:50
The first and second training seminars | Other | 2020-02-25 17:05:50
Project website | Websites, patent filings, videos etc. | 2020-02-25 17:05:50
Preliminary visual neural system models for collision cues extraction | Demonstrators, pilots, prototypes | 2020-02-25 17:05:50

Take a look at the deliverables list in detail: detailed list of STEP2DYNA deliverables.

 Publications

List of publications (each entry: year and authors, title, pages/ISSN/DOI, journal and last update).
2019 Suda, R and Yamawaki, Y
Effects of luminance contrast on the looming-sensitive neuron of the praying mantis Tenodera aridifolia
published pages: 4-10, ISSN: , DOI:
ELCAS Journal 4, 2019-03 2020-03-05
2019 Qinbing Fu, Hongxin Wang, Cheng Hu, Shigang Yue
Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review
published pages: 263-311, ISSN: 1064-5462, DOI: 10.1162/artl_a_00297
Artificial Life 25/3 2020-02-27
2019 Daqi Liu, Nicola Bellotto, Shigang Yue
Deep Spiking Neural Network for Video-Based Disguise Face Recognition Based on Dynamic Facial Movements
published pages: 1-10, ISSN: 2162-237X, DOI: 10.1109/tnnls.2019.2927274
IEEE Transactions on Neural Networks and Learning Systems 19 July 2019 2020-02-25
2019 Qinbing Fu, Cheng Hu, Jigen Peng, F. Claire Rind, Shigang Yue
A Robust Collision Perception Visual Neural Network With Specific Selectivity to Darker Objects
published pages: 1-15, ISSN: 2168-2267, DOI: 10.1109/tcyb.2019.2946090
IEEE Transactions on Cybernetics 2020-02-25
2017 Fu, Qinbing; Hu, Cheng; Yue, Shigang
Collision Selective Visual Neural Network Inspired by LGMD2 Neurons in Juvenile Locusts
published pages: , ISSN: , DOI:
arXiv:1801.06452 [q-bio.NC] 6 2020-02-25
2019 Liang, Hongzhuo; Li, Shuang; Ma, Xiaojian; Hendrich, Norman; Gerkmann, Timo; Sun, Fuchun; Zhang, Jianwei
Making Sense of Audio Vibration for Liquid Height Estimation in Robotic Pouring
published pages: , ISSN: , DOI:
arXiv:1903.00650 [cs.RO] 1 2020-02-25
2018 Liang, Hongzhuo; Ma, Xiaojian; Li, Shuang; Görner, Michael; Tang, Song; Fang, Bin; Sun, Fuchun; Zhang, Jianwei
PointNetGPD: Detecting Grasp Configurations from Point Sets
published pages: , ISSN: , DOI:
arXiv:1809.06267 [cs.RO] 1 2020-02-25
2018 Zhao, Biao; Yue, Shigang
A Resilient Image Matching Method with an Affine Invariant Feature Detector and Descriptor
published pages: , ISSN: , DOI:
arXiv:1802.09623 [cs.CV] 7 2020-02-25
2018 Li, Shuang; Ma, Xiaojian; Liang, Hongzhuo; Görner, Michael; Ruppel, Philipp; Fang, Bing; Sun, Fuchun; Zhang, Jianwei
Vision-based Teleoperation of Shadow Dexterous Hand using End-to-End Deep Neural Network
published pages: , ISSN: , DOI:
arXiv:1809.06268 [cs.RO] 1 2020-02-25
2019 Fu, Qinbing; Hu, Cheng; Liu, Pengcheng; Yue, Shigang
Synthetic Neural Vision System Design for Motion Pattern Recognition in Dynamic Robot Scenes
published pages: , ISSN: , DOI:
arXiv:1904.07180 [cs.NE] 1 2020-02-25
2018 Fu, Qinbing; Bellotto, Nicola; Yue, Shigang
A Directionally Selective Neural Network with Separated ON and OFF Pathways for Translational Motion Perception in a Visually Cluttered Environment
published pages: , ISSN: , DOI:
arXiv:1808.07692 [cs.NE] 3 2020-02-25
2018 Qinbing Fu, Cheng Hu, Jigen Peng, Shigang Yue
Shaping the collision selectivity in a looming sensitive neuron model with parallel ON and OFF pathways and spike frequency adaptation
published pages: 127-143, ISSN: 0893-6080, DOI: 10.1016/j.neunet.2018.04.001
Neural Networks 106 2020-02-25
2018 Cheng Hu, Qinbing Fu, Tian Liu, Shigang Yue
A Hybrid Visual-Model Based Robot Control Strategy for Micro Ground Robots
published pages: 162-174, ISSN: , DOI: 10.1007/978-3-319-97628-0_14
From Animals to Animats 15 2020-02-25
2018 Junxiong Jia, Shigang Yue, Jigen Peng, Jinghuai Gao
Infinite-dimensional Bayesian approach for inverse scattering problems of a fractional Helmholtz equation
published pages: , ISSN: 0022-1236, DOI: 10.1016/j.jfa.2018.08.002
Journal of Functional Analysis 2020-02-25
2018 Qinbing Fu, Cheng Hu, Pengcheng Liu, Shigang Yue
Towards computational models of insect motion detectors for robot vision
published pages: , ISSN: , DOI:
19th Towards Autonomous Robotic Systems (TAROS) 2020-02-25
2018 Daqi Liu, Shigang Yue
Event-Driven Continuous STDP Learning With Deep Structure for Visual Pattern Recognition
published pages: 1-14, ISSN: 2168-2267, DOI: 10.1109/TCYB.2018.2801476
IEEE Transactions on Cybernetics 2020-02-25
2018 Jiannan Zhao, Cheng Hu, Chun Zhang, Zhihua Wang, Shigang Yue
A Bio-inspired Collision Detector for Small Quadcopter
published pages: , ISSN: , DOI:
International Joint Conference on Neural Networks (IJCNN) 2018 2020-02-25
2018 Cheng Hu, Qinbing Fu, Shigang Yue
Colias IV: The Affordable Micro Robot Platform with Bio-inspired Vision
published pages: 197-208, ISSN: , DOI: 10.1007/978-3-319-96728-8_17
19th Towards Autonomous Robotic Systems (TAROS) 2020-02-25
2018 Xuelong Sun, Michael Mangan, Shigang Yue
An Analysis of a Ring Attractor Model for Cue Integration
published pages: 459-470, ISSN: , DOI: 10.1007/978-3-319-95972-6_49
Living Machines 2018: Conference on Biomimetic and Biohybrid System 2020-02-25
2018 Shuang Li, Hongzhuo Liang, Jianwei Zhang
Path Planning for Wheeled Mobile Service Robots based on Improved Genetic Algorithm
published pages: Pages 249-252, ISSN: , DOI:
i-CREATe 2018 Proceedings of the 12th International Convention on Rehabilitation Engineering and Assistive Technology 2020-02-25
2019 Wang, Hongxin; Peng, Jigen; Fu, Qinbing; Wang, Huatian; Yue, Shigang
Visual Cue Integration for Small Target Motion Detection in Natural Cluttered Backgrounds
published pages: , ISSN: , DOI:
11 2020-02-25
2019 Yair Barnatan, Daniel Tomsic, Julieta Sztarker
Unidirectional Optomotor Responses and Eye Dominance in Two Species of Crabs
published pages: , ISSN: 1664-042X, DOI: 10.3389/fphys.2019.00586
Frontiers in Physiology 10 2020-02-25
2018 Hongzhuo Liang, Shuang Li, Michael Görner, and Jianwei Zhang
Generating Robust Grasps for Unknown Objects in Clutter Using Point Cloud Data
published pages: Pages 298-301, ISSN: , DOI:
i-CREATe 2018 Proceedings of the 12th International Convention on Rehabilitation Engineering and Assistive Technology 2020-02-25
2018 Hongxin Wang, Jigen Peng, Xuqiang Zheng, Shigang Yue
A Robust Visual System for Small Target Motion Detection Against Cluttered Moving Backgrounds
published pages: 1-15, ISSN: 2162-237X, DOI: 10.1109/tnnls.2019.2910418
IEEE Transactions on Neural Networks and Learning Systems 2020-02-25
2018 Hongxin Wang, Jigen Peng, Shigang Yue
A Directionally Selective Small Target Motion Detecting Visual Neural Network in Cluttered Backgrounds
published pages: 1-15, ISSN: 2168-2267, DOI: 10.1109/TCYB.2018.2869384
IEEE Transactions on Cybernetics 2020-02-25
2018 Huatian Wang, Jigen Peng, Paul Baxter, Chun Zhang, Zhihua Wang, Shigang Yue
A Model for Detection of Angular Velocity of Image Motion Based on the Temporal Tuning of the Drosophila
published pages: 37-46, ISSN: , DOI: 10.1007/978-3-030-01421-6_4
2020-02-25
2018 Wang, Hongxin; Peng, Jigen; Yue, Shigang
A Feedback Neural Network for Small Target Motion Detection in Cluttered Backgrounds
published pages: , ISSN: , DOI: 10.1007/978-3-030-01424-7_71
1 2020-02-25
2017 Daqi Liu, Shigang Yue
Fast unsupervised learning for visual pattern recognition using spike timing dependent plasticity
published pages: 212-224, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2017.04.003
Neurocomputing 249 2020-02-25
2017 Jiawei Xu, Shigang Yue, Federica Menchinelli, Kun Guo
What has been missed for predicting human attention in viewing driving clips?
published pages: e2946, ISSN: 2167-8359, DOI: 10.7717/peerj.2946
PeerJ 5 2020-02-25
2017 Qinbing Fu and Shigang Yue
Modeling Direction Selective Visual Neural Network with ON and OFF Pathways for Extracting Motion Cues from Cluttered Background
published pages: , ISSN: , DOI:
2017 International Joint Conference on Neural Networks, Anchorage, Alaska 30th anniversary 2020-02-25
2016 Xuqiang Zheng, Zhijun Wang, Fule Li, Feng Zhao, Shigang Yue, Chun Zhang, Zhihua Wang
A 14-bit 250 MS/s IF Sampling Pipelined ADC in 180 nm CMOS Process
published pages: 1381-1392, ISSN: 1549-8328, DOI: 10.1109/TCSI.2016.2580703
IEEE Transactions on Circuits and Systems I: Regular Papers 63/9 2020-02-25
2016 Qinbing Fu, Shigang Yue and Cheng Hu
Bio-inspired Collision Detector with Enhanced Selectivity for Ground Robotic Vision System
published pages: , ISSN: , DOI:
2016 British Machine Vision Conference (BMVC) 27th conference 2020-02-25

Are you the coordinator (or a participant) of this project? Please send me more information about the "STEP2DYNA" project.

For instance: the website URL (it has not been provided by EU-opendata yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as picture files, not embedded in any Word file), the Twitter account, the LinkedIn page, etc.

Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.

Thanks. And then please add a link to this page on your project's website.

The information about "STEP2DYNA" is provided by the European Opendata Portal: CORDIS opendata.

More projects from the same programme (H2020-EU.1.3.3.)

ROVER (2020)

RELIABLE TECHNOLOGIES AND MODELS FOR VERIFIED WIRELESS BODY-CENTRIC TRANSMISSION AND LOCALIZATION

OPEN (2019)

Outcomes of Patients’ Evidence With Novel, Do-It-Yourself Artificial Pancreas Technology

MAIL (2019)

Identifying Marginal Lands in Europe and strengthening their contribution potentialities in a CO2 sequestration strategy
