Researchers have developed an AI system that lets robots plan complex strategies for manipulating objects with their whole hands. The model can produce an effective plan on a typical laptop in approximately a minute.

For example, suppose you have to carry a heavy box up several flights of stairs. You might spread your fingers and lift the box with both hands, then prop it on your forearms and balance it against your chest, using your whole body to manipulate it.

Contact-rich manipulation planning

Humans excel at whole-body manipulation, but robots struggle with it. A robot must account for every possible contact event, such as the box touching its fingers, arms, or chest. With billions of potential contact events, planning rapidly becomes intractable. Contact-rich manipulation planning is a new method developed to streamline this procedure. It uses an AI technique called smoothing to reduce the vast number of contact events to a much smaller set of decisions, making it tractable to find a good manipulation plan for the robot.

Although the technique is in its infancy, it could eventually allow large robotic arms that grip only with their fingertips to be replaced by smaller, mobile robots that manipulate objects with their entire arms or bodies. That could reduce energy consumption and therefore lower costs. Because the approach can adapt quickly to a new environment using only an onboard computer, the technology is also promising for exploration robots on Mars or other solar system bodies.

Robot learning

Reinforcement learning is a machine-learning method in which a robot learns a task through trial and error, receiving rewards for progress. Researchers describe this form of learning as a black-box approach because the system must learn everything about the world through trial and error. It has been used successfully for contact-rich manipulation planning, where the robot tries to figure out the optimal way to move an object in a specific manner. However, this trial-and-error approach demands significant computation, because a robot may have billions of potential contact points to consider when deciding how to use its fingers, hands, arms, and body to interact with an object.
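The trial-and-error loop described above can be pictured with a toy sketch. This is not the researchers' system: the discrete contact points, the reward function, and all parameters below are invented purely for illustration of how a robot might discover a good contact point by repeated trials.

```python
import random

def reward(contact_point):
    # Invented reward: contact points nearer to index 7 move the object best.
    return 1.0 - abs(contact_point - 7) / 10.0

def trial_and_error(n_points=10, episodes=500, epsilon=0.1, seed=0):
    """Epsilon-greedy trial and error over a handful of contact points."""
    rng = random.Random(seed)
    estimates = [0.0] * n_points   # running estimate of each point's value
    counts = [0] * n_points
    for _ in range(episodes):
        # Occasionally explore a random contact point; otherwise exploit
        # the point currently believed to be best.
        if rng.random() < epsilon:
            a = rng.randrange(n_points)
        else:
            a = max(range(n_points), key=lambda i: estimates[i])
        r = reward(a)
        counts[a] += 1
        estimates[a] += (r - estimates[a]) / counts[a]  # incremental average
    return max(range(n_points), key=lambda i: estimates[i])

best = trial_and_error()
print("learned best contact point:", best)
```

With only ten candidate points this converges quickly; the article's point is that real whole-body manipulation has billions of such candidates, which is what makes pure trial and error so expensive.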

However, if researchers build a physics-based model using their knowledge of the system and the task they want the robot to complete, that model adds structure to the environment and makes planning more efficient.

Decision making

In the grand scheme of things, many of the choices a robot could make about moving an object are insignificant. For example, it matters little whether a tiny change in one finger's position causes it to touch the object or not. Smoothing averages away many of these small, unimportant choices, leaving only a few important ones.

Reinforcement learning performs smoothing implicitly, by trying many contact points and then computing a weighted average of the results. Drawing on this insight, the MIT researchers designed a simple model that performs a similar kind of smoothing, allowing it to focus on the core robot-object interactions and predict long-term behaviour. They showed that this method could generate complex plans just as well as reinforcement learning.

Training

Even though smoothing greatly simplifies the decisions, searching through the decisions that remain can still be difficult. So the researchers paired their model with an algorithm that can rapidly and efficiently explore all the choices the robot could make. With this combination, the calculations took about a minute on a standard laptop. They first tested their approach in simulations where robotic hands had to move a pen to a specific position, open a door, or pick up a plate. In each case, the model-based method matched reinforcement learning but ran much faster. They got the same results when testing the model on real robotic arms.
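The overall recipe can be hedged into a short sketch: a smoothed cost (as in the earlier averaging idea) paired with a plain search over the few decisions that remain. The cost function, target value, and candidate set here are invented for illustration and do not come from the paper.

```python
import random

def smoothed_cost(plan, target=0.7, noise=0.1, samples=2000, seed=0):
    # Smoothed stand-in for a hard contact cost: estimate the probability
    # that the (perturbed) plan makes contact, then score how far that
    # contact rate is from a desired target.
    rng = random.Random(seed)
    hits = sum(1.0 for _ in range(samples)
               if plan + rng.gauss(0.0, noise) >= 0.0)
    return abs(hits / samples - target)

def search(candidates):
    # Because the cost is smooth, a direct search over a small candidate
    # set is enough; no trial-and-error over billions of contact events.
    return min(candidates, key=smoothed_cost)

candidates = [x / 10.0 for x in range(-5, 6)]   # -0.5, -0.4, ..., 0.5
best_plan = search(candidates)
print("selected plan:", best_plan)
```

The division of labour mirrors the article's description: smoothing shrinks the decision space, and a fast search algorithm then picks among the handful of options left.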

Conclusion

However, because the model is based on a simplified approximation of the real world, it cannot handle highly dynamic motions, such as objects falling. While the method works well for slower manipulation tasks, it cannot yet produce a plan that would let a robot toss a can into a garbage bin. The researchers hope to extend their technique to handle such dynamic motions in the future.
