Opendata, web and dolomites

Report

Teaser, summary, work performed and final results

Periodic Reporting for period 3 - CONT-ACT (Control of contact interactions for robots acting in the world)

Teaser

What are the algorithmic principles that would allow a robot to run across rocky terrain, lift a couch while reaching for an object that rolled under it, or manipulate a screwdriver while balancing on top of a ladder? By trying to answer these questions in CONT-ACT, we would...

Summary

What are the algorithmic principles that would allow a robot to run across rocky terrain, lift a couch while reaching for an object that rolled under it, or manipulate a screwdriver while balancing on top of a ladder? By trying to answer these questions in CONT-ACT, we would like to understand the fundamental principles of robot locomotion and manipulation and endow robots with the robustness and adaptability necessary to act efficiently and autonomously in an unknown and changing environment. It is a necessary step towards a new technological age: ubiquitous robots capable of helping humans with countless tasks.

Dynamic interaction of the robot with its environment through the creation of intermittent physical contacts is central to any locomotion or manipulation task. Indeed, in order to walk or to manipulate an object, a robot needs to constantly interact physically with the environment and surrounding objects. Our approach to motion generation and control in CONT-ACT therefore gives a central place to contact interactions. Our main hypothesis is that this focus will allow us to develop more adaptive and robust planning and control algorithms for locomotion and manipulation. The project is divided into three main objectives: 1) the development of a hierarchical receding horizon control architecture for multi-contact behaviors, 2) the development of algorithms to learn representations for motion generation through multi-modal sensing (e.g. force and touch sensing) and 3) the development of controllers based on multi-modal sensory information through optimal control and reinforcement learning.
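The core idea of receding horizon control (re-plan over a short horizon at every step, apply only the first planned input) can be sketched on a toy system. The example below is our own minimal illustration on a double integrator, not the project's multi-contact architecture; all names and parameters are invented for the sketch.

```python
import numpy as np

# Toy receding horizon loop: a point mass (double integrator) is driven to
# the origin by re-solving a short-horizon optimal control problem at every
# step and applying only the first input of each plan.

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])
Q = np.diag([10.0, 1.0])                 # state cost
R = np.array([[0.1]])                    # control cost
H = 20                                   # planning horizon (steps)

def plan_first_input(x0):
    """Backward Riccati recursion over the horizon; return the first input."""
    P = Q.copy()
    K_first = None
    for _ in range(H):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        K_first = K                      # after H sweeps: the gain at time 0
    return -K_first @ x0

x = np.array([1.0, 0.0])                 # start 1 m from the goal, at rest
for _ in range(100):
    u = plan_first_input(x)              # re-plan at every control step
    x = A @ x + B @ u                    # apply only the first planned input

print(np.linalg.norm(x))                 # approaches zero
```

The constant re-planning is what makes the scheme reactive: a disturbance simply changes the state from which the next plan starts.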

Work performed

In the first half of the project, we have developed the main components of the receding horizon control architecture (1st objective of the project). We have proposed new methods to plan whole-body multi-contact behaviors for legged robots in near real time. We are now able to plan complicated motions, for example a humanoid climbing up stairs, walking over stepping stones or using its hands and legs to climb onto an obstacle. An important part of our work was to study the mathematical structure of the optimization problems underlying multi-contact behaviors. Leveraging this structure, we proposed a novel algorithm based on second-order cone programming that computes dynamic motions significantly faster than the state of the art. Complementary to this work, we have studied how the timing of contact creation changes the dynamic capabilities of the robot. We have proposed a new algorithm able to quickly adapt step location and timing, as well as a computationally efficient way to include hand contacts when necessary, and we have demonstrated in simulation that this significantly improves the stability of the robot when it is pushed or when it slips on the ground.

In parallel to these optimal control problems, we have also made progress on fusing multiple sensor modalities (force, inertial and position sensors) to obtain good estimates of the state of the robot during contact tasks (2nd objective of the project). We recently demonstrated how unsupervised learning can be leveraged to learn contact modes and improve sensor fusion and state estimation during walking on uneven terrain.

Finally, we have studied how uncertainty in the knowledge of contact locations changes the optimal way of creating a contact with an object. We have used risk-sensitive optimal control techniques to propose a new algorithm able to handle contact uncertainty during contact interactions. As a result, the controller creates a very gentle touch with an object when its position is uncertain, which increases the safety and robustness of the interaction (3rd objective of the project).
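The connection between contact forces and second-order cone programming comes from the standard Coulomb friction model: a friction cone is exactly a second-order cone, ||(f_x, f_y)|| <= mu * f_z, so contact-force constraints fit solvers specialized for that structure. The sketch below shows the textbook cone test and its closed-form Euclidean projection; it is our own illustration, not the project's solver.

```python
import numpy as np

# A Coulomb friction cone is a second-order cone: the tangential force must
# stay within mu times the normal force, or the contact slips.

def in_friction_cone(f, mu):
    """True if the contact force f = (fx, fy, fz) lies in the friction cone."""
    fx, fy, fz = f
    return fz >= 0.0 and np.hypot(fx, fy) <= mu * fz

def project_to_cone(f, mu):
    """Euclidean projection of f onto the friction cone (closed form)."""
    fx, fy, fz = f
    n = np.hypot(fx, fy)
    if n <= mu * fz:                       # already inside the cone
        return np.array([fx, fy, fz])
    if mu * n + fz <= 0.0:                 # closest point is the cone's apex
        return np.zeros(3)
    t = (mu * n + fz) / (mu ** 2 + 1.0)    # boundary point: ||tangential|| = mu*t
    s = mu * t / n
    return np.array([s * fx, s * fy, t])

mu = 0.5
print(in_friction_cone(np.array([1.0, 0.0, 4.0]), mu))   # True: no slipping
print(in_friction_cone(np.array([3.0, 0.0, 4.0]), mu))   # False: would slip
print(project_to_cone(np.array([3.0, 0.0, 4.0]), mu))    # nearest non-slipping force
```

Projections like this are a basic building block inside first-order conic solvers, which is one reason posing the problem as an SOCP pays off in speed.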
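Why a risk-sensitive objective produces gentle touch can be seen in a toy comparison of our own construction (the cost numbers and the "fast"/"gentle" actions are invented, not the project's algorithm): replacing the expected cost E[L] by J = (1/theta) * log E[exp(theta * L)], which for theta > 0 behaves like E[L] + (theta/2) * Var[L], makes the controller penalize cost variance, i.e. unpredictable impacts.

```python
import numpy as np

# Hypothetical impact costs for two ways of touching an object whose exact
# position is uncertain: "fast" is cheaper on average but can slam into the
# object (high variance); "gentle" costs a bit more but is predictable.

rng = np.random.default_rng(0)
N = 100_000
fast   = 2.0 + 2.0 * rng.standard_normal(N)   # mean 2.0, std 2.0
gentle = 2.3 + 0.2 * rng.standard_normal(N)   # mean 2.3, std 0.2

def risk_sensitive_cost(L, theta):
    """Exponential-of-cost objective: (1/theta) * log E[exp(theta * L)]."""
    return np.log(np.mean(np.exp(theta * L))) / theta

print(fast.mean() < gentle.mean())            # True: risk-neutral picks "fast"
print(risk_sensitive_cost(fast, 0.5) >
      risk_sensitive_cost(gentle, 0.5))       # True: risk-averse picks "gentle"
```

A risk-neutral controller prefers the cheaper-on-average fast contact, while the risk-sensitive objective flips the preference to the low-variance gentle one.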

Final results

Thus far, we have gained a better understanding of the problems related to the motion of robots in contact with their environment, and we have proposed algorithms that compute complicated multi-contact locomotion patterns significantly faster than the state of the art. We are now able to plan complicated motions in near real time, for example a humanoid climbing up stairs, walking over stepping stones or using its hands and legs to climb onto an obstacle. This opens many possibilities for creating more reactive and robust behaviors. Moreover, these results significantly improve our understanding of the fundamental algorithmic principles of locomotion and manipulation. We hope that this will be useful for the development of autonomous legged robots able to locomote in unknown and challenging environments. Potential applications of such robots include disaster relief scenarios, construction and service robots.