Researchers at the MIT Improbable AI Lab and the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI) have reached a new milestone in fast-moving legged robots.

Legged robots struggle with agile maneuvers such as sprinting and sharp turns in the wild. The researchers present an end-to-end learned controller that drives the MIT Mini Cheetah faster than ever before, up to 3.9 m/s. The system runs fast and turns quickly on grass, ice, and gravel, and responds well to disturbances.

What was the challenge?

Running fast on natural terrain is hard. As a robot moves more quickly, variations in the landscape increasingly affect how well the controller works. One way to address this is to improve the hand-built models used in model-based control, and there has been considerable progress in this direction. But in model-based control, the robot's behavior and reliability are limited by the creativity and effort of the human designer, who must craft simplified, reduced-order models that let the robot decide what actions to take in real time.
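To make the idea of a reduced-order model concrete, the sketch below uses the classic linear inverted pendulum, in which the body is treated as a point mass at a fixed height above its support foot. This particular model, its parameters, and the foot-placement rule are illustrative assumptions and are not taken from the researchers' controller.

```python
# Illustrative reduced-order model: a linear inverted pendulum (LIP).
# The model choice, constants, and foot-placement rule below are assumptions
# for illustration only, not the controller described in the article.

G = 9.81      # gravity (m/s^2)
H = 0.29      # assumed constant body height for a small quadruped (m)
DT = 0.01     # integration step (s)

def lip_step(com_pos, com_vel, foot_pos, dt=DT):
    """Advance the point-mass body one step under LIP dynamics:
    com_accel = (g / h) * (com_pos - foot_pos)."""
    com_acc = (G / H) * (com_pos - foot_pos)
    com_vel = com_vel + com_acc * dt
    com_pos = com_pos + com_vel * dt
    return com_pos, com_vel

# Because the model is tiny, a controller can roll it out many times per
# control cycle to evaluate candidate foot placements in real time.
pos, vel = 0.0, 1.0
for _ in range(50):
    # Hypothetical heuristic: place the foot ahead of the body in the
    # direction of motion, which decelerates the pendulum.
    pos, vel = lip_step(pos, vel, foot_pos=pos + 0.3 * vel)
print(f"predicted body position after 0.5 s: {pos:.3f} m")
```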


Indoor Sprint (3.9 m/s)

Outdoor 10-Meter Dash (3.4 m/s)

Image source: MIT CSAIL

How can we achieve real-time control in complicated environments? One option is to compute the best way for the robot to move from a full physics model, but a complete model cannot be used to optimize a trajectory in real time for a task as complex as running fast on natural terrain. Reinforcement learning offers an alternative way to learn such a policy: instead of hand-designing a controller, a person specifies a set of training environments and reward functions that tell the robot what tasks to accomplish.
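As a rough sketch of what such a hand-specified reward might look like, the snippet below rewards the robot for tracking commanded body velocities while lightly penalizing actuator effort. The terms, weights, and function names are assumptions for illustration, not the researchers' actual reward.

```python
import numpy as np

# A minimal sketch of a velocity-tracking reward of the kind a designer might
# hand off to an RL algorithm. The specific terms and weights are assumptions
# for illustration; they are not the reward used by the MIT researchers.

def reward(base_lin_vel, base_ang_vel, cmd_lin_vel, cmd_ang_vel,
           joint_torques, lin_w=1.0, ang_w=0.5, torque_w=1e-4):
    """Reward the policy for matching commanded body velocities while
    lightly penalizing actuator effort."""
    lin_err = np.sum((base_lin_vel - cmd_lin_vel) ** 2)
    ang_err = (base_ang_vel - cmd_ang_vel) ** 2
    effort = np.sum(joint_torques ** 2)
    return lin_w * np.exp(-lin_err) + ang_w * np.exp(-ang_err) - torque_w * effort

# Example: the robot runs at 3.7 m/s forward when 3.9 m/s was commanded.
r = reward(base_lin_vel=np.array([3.7, 0.1]),
           base_ang_vel=0.0,
           cmd_lin_vel=np.array([3.9, 0.0]),
           cmd_ang_vel=0.0,
           joint_torques=np.array([12.0, -8.0, 15.0] * 4))
print(f"reward: {r:.3f}")
```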

Goal

The researchers' goal is to build a system that can move over diverse terrains at a wide range of linear and angular speeds. This can be framed as a multi-task RL setup, where running at each combination of linear and angular velocity is a separate task. Consistent with prior work, the researchers found that a robot can learn multiple behaviors when walking within a small range of speeds, but training fails when the range of commanded velocities includes high speeds.
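A minimal sketch of this multi-task framing follows: each sampled command of linear and angular velocity defines its own task, and commands are periodically resampled during training. The velocity ranges, resampling interval, and helper names (sample_command, run_episode) are assumptions for illustration, not the researchers' training setup.

```python
import numpy as np

# Sketch of the multi-task view: each commanded combination of linear and
# angular velocity is treated as a task. The command ranges and resampling
# interval below are illustrative assumptions only.

rng = np.random.default_rng(0)

# Assumed command ranges: forward speed, lateral speed (m/s), yaw rate (rad/s).
CMD_LOW  = np.array([-1.0, -0.6, -3.0])
CMD_HIGH = np.array([ 4.0,  0.6,  3.0])

def sample_command():
    """Draw one locomotion 'task': a target (v_x, v_y, omega_z)."""
    return rng.uniform(CMD_LOW, CMD_HIGH)

def run_episode(policy, env, steps=1000, resample_every=250):
    """Roll out a policy while periodically resampling the commanded velocity,
    so a single episode exposes the policy to several tasks."""
    obs = env.reset()
    for t in range(steps):
        if t % resample_every == 0:
            cmd = sample_command()                    # switch to a new task
        action = policy(np.concatenate([obs, cmd]))   # command is part of the input
        obs, _, done, _ = env.step(action)
        if done:
            obs = env.reset()

# Example: draw a few commands to see the span of tasks the policy must cover.
for _ in range(3):
    print(sample_command())
```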

Conclusion

The behaviors in this work are diverse but limited compared to the full space of conceivable locomotion tasks. For example, the system regulates only the robot's ground-plane body velocity. Other behaviors, such as jumping, crouching, coordinated dancing, and loco-manipulation, fell outside the scope of this study and may require an entirely different task specification. In addition, the system lacks vision, so it cannot perform activities that require planning, such as climbing stairs or avoiding obstacles.

Although the system demonstrates high speed, the researchers emphasize that its characteristic locomotion gait should not be regarded as "better" than the numerous alternatives. On the contrary, many users of legged robots want to optimize for objectives other than speed, such as energy efficiency or wear reduction. Moreover, body speed alone is an ambiguous target, as many equally preferable motions can achieve the same speed. Combining learned agile locomotion with additional requirements, such as auxiliary objectives or human preferences, is a promising area for future research.
