Explore the word cloud of the VisualGrasping project. It gives a very rough idea of what the project is about.
The following table provides information about the project.
Field | Value
---|---
Coordinator | JUSTUS-LIEBIG-UNIVERSITAET GIESSEN
Coordinator country | Germany [DE]
Total cost | 159,460 €
EC max contribution | 159,460 € (100%)
Programme | H2020-EU.1.3.2. (Nurturing excellence by means of cross-border and cross-sector mobility)
Call code | H2020-MSCA-IF-2017
Funding scheme | MSCA-IF-EF-ST
Starting year | 2018
Duration (year-month-day) | from 2018-04-02 to 2020-07-02
Take a look at the project's partnership.
# | Participant | Country (city) | Role | EC contribution
---|---|---|---|---
1 | JUSTUS-LIEBIG-UNIVERSITAET GIESSEN | DE (GIESSEN) | coordinator | 159,460.00 €
I ask how vision guides grasping and, conversely, how learning to grasp objects constrains visual processing. Grasping an object feels effortless, yet the computations underlying grasp planning are nontrivial, and there is an extensive literature describing the multifaceted features of visually guided grasping. I aim to bind this fragmented body of knowledge into a unified framework for understanding how humans visually select grasps. To do so, I will use motion-tracking hardware (already in place at the University of Giessen) to measure and model human grasping patterns for 3D objects. I will rely on Dr. Fleming's unique expertise in physical simulation to simulate human grasping of objects varying in shape and material. Joining behavioral measurements with computer simulations will provide a powerful data- and theory-driven approach to fully map out the space of human grasping behavior.

The complementary goal of this proposal is to understand how grasping constrains visual processing of object shape and material. I plan to tackle this goal by building a computational model of visual processing for grasp planning. Both Dr. Fleming and I have previous experience with computational modelling of visual function. I will exploit powerful machine learning techniques to infer what kinds of visual representations are necessary for grasp planning. I will train Deep Neural Nets (for which the hardware and software are already in place and in use by the Fleming lab) using extensive physics simulations. Dissecting the learned network architecture and comparing the network's performance to human behavior will tell us what information about shape, material, and objects the human visual system encodes to plan motor actions. In short, with this research I aim to determine how processing within the human visual system is shaped by and guides hand motor action.
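The abstract does not specify a model architecture, so the following is only a minimal illustrative sketch of the general approach it describes: training a network on simulated data to map object features to grasp parameters, whose learned representations could then be compared with human behavior. The feature dimensions, network layers, and synthetic data below are all assumptions made for illustration, not details from the project.

```python
# Hypothetical sketch of the kind of model the abstract describes:
# a network trained on simulated data to predict grasp points from
# object shape/material features. All sizes and data are illustrative.
import torch
import torch.nn as nn

# Synthetic stand-in for physics-simulation output:
# 64 objects, each described by 128 shape/material features,
# labelled with 3D contact points for thumb and index finger (6 values).
features = torch.randn(64, 128)
grasp_points = torch.randn(64, 6)

model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 6),  # predicted finger contact coordinates
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), grasp_points)
    loss.backward()
    optimizer.step()

# After training, the learned weights could be dissected and the model's
# predictions compared to human grasp data, as the abstract proposes.
print(f"final training loss: {loss.item():.4f}")
```

In the project's actual setup, the inputs would presumably come from physics simulations of 3D objects and the outputs would be compared against motion-tracked human grasps.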
year | authors and title | journal | last update
---|---|---|---
2019 | Guido Maiello, Vivian C. Paulun, Lina K. Klein, Roland W. Fleming, "Object Visibility, Not Energy Expenditure, Accounts For Spatial Biases in Human Grasp Selection", published pages: 204166951982760, ISSN: 2041-6695, DOI: 10.1177/2041669519827608 | i-Perception 10/1 | 2019-11-18
2018 | Guido Maiello, Vivian C. Paulun, Lina K. Klein, Roland W. Fleming, "The Sequential-Weight Illusion", published pages: 204166951879027, ISSN: 2041-6695, DOI: 10.1177/2041669518790275 | i-Perception 9/4 | 2019-11-18
Are you the coordinator (or a participant) of this project? Please send me more information about the "VISUALGRASPING" project.
For instance: the website URL (it has not been provided by EU open data yet), the logo, a more detailed description of the project (in plain text, as an RTF or Word file), some pictures (as image files, not embedded in a Word file), the Twitter account, the LinkedIn page, etc.
Send me an email (fabio@fabiodisconzi.com) and I will put them on your project's page as soon as possible.
Thanks. And then please add a link to this page on your project's website.
The information about "VISUALGRASPING" is provided by the European Open Data Portal: CORDIS open data.