A person's body image is an important piece of information that shapes how they function in the real world. Though it may not be accurate or realistic, an individual might identify as athletic or fashion-conscious based on their body image.

When an athlete prepares for an event, their brain plans how to move without bumping into things, tripping or falling over. Humans acquire this ability as infants. What if machines could do that too?

A Columbia Engineering team recently announced that they had created a robot that can learn a model of its entire body without human assistance. Researchers described the development of this one-of-a-kind model in a new study published in Science Robotics.

According to the developers, their robot created a kinematic model of itself. It then used its self-model to plan motion, reach goals and avoid obstacles in various situations. It even automatically recognized and then compensated for damage to its body.  

Exploring like an infant 

The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. The observers remarked that the robot wiggled and contorted to learn exactly how its body moved in response to various motor commands, much as an infant might explore itself for the first time in a hall of mirrors.

After three hours, the robot's internal deep neural network had finished learning the relationship between its motor actions and the volume it occupied in its environment.  
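The learning loop described above can be sketched in miniature. The toy below is an illustrative stand-in, not the study's actual deep network: it "babbles" random motor commands on a simple two-link planar arm, records where the hand ends up (playing the role of the camera observations), and then uses that remembered experience as a crude self-model to reach a goal. The function names and the nearest-sample planner are assumptions for illustration only.

```python
import math
import random

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    # Ground-truth "physics": end-effector position of a 2-link planar arm.
    # The robot never sees this function directly; it only observes outcomes.
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def babble(n=5000, seed=0):
    # "Motor babbling": issue random commands and record what the cameras see,
    # building up (command -> observed position) experience.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        cmd = (rng.uniform(-math.pi, math.pi), rng.uniform(-math.pi, math.pi))
        data.append((cmd, forward_kinematics(*cmd)))
    return data

def plan(self_model, goal):
    # Query the learned self-model: pick the remembered command whose
    # observed hand position lies closest to the goal.
    return min(self_model,
               key=lambda d: (d[1][0] - goal[0])**2 + (d[1][1] - goal[1])**2)[0]

self_model = babble()
goal = (1.2, 0.8)
cmd = plan(self_model, goal)
x, y = forward_kinematics(*cmd)
```

A real system would replace the lookup with a trained neural network that generalizes beyond the babbled samples, but the workflow is the same: explore, observe, model, then plan.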

Hod Lipson, professor of mechanical engineering and director of Columbia's Creative Machines Lab, stated that they were curious to see how the robot imagined itself. After the team struggled with various visualization techniques, the self-image gradually emerged. The robot's self-model was accurate to about 1% of its workspace. 

Self-reliant autonomous systems 

The ability of robots to model themselves will not only save labor; it will also allow a robot to keep up with its own wear and tear, and even to detect and compensate for damage. The creators argue this is essential because it makes autonomous systems more self-reliant. For example, if a robot arm in a factory is not moving right, its self-model could detect the discrepancy and call for assistance.
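One way to picture this self-reliance is as a consistency check between the self-model's prediction and what the cameras actually observe. The minimal sketch below is an assumption about how such a check could work, using a one-link arm whose learned link length no longer matches reality (say, a bent joint); a large prediction error flags damage.

```python
import math

def self_model_predict(theta, learned_l1=1.0):
    # What the self-model, learned while the robot was healthy, expects to see.
    return (learned_l1 * math.cos(theta), learned_l1 * math.sin(theta))

def observe(theta, actual_l1):
    # What the cameras actually report; the link may have bent or broken,
    # so its effective length can differ from the learned one.
    return (actual_l1 * math.cos(theta), actual_l1 * math.sin(theta))

def check_for_damage(theta, actual_l1, tolerance=0.05):
    # Flag damage when prediction and observation disagree beyond tolerance.
    px, py = self_model_predict(theta)
    ox, oy = observe(theta, actual_l1)
    error = math.hypot(px - ox, py - oy)
    return error > tolerance

healthy = check_for_damage(0.7, actual_l1=1.0)  # model matches reality: False
damaged = check_for_damage(0.7, actual_l1=0.6)  # bent link: True
```

A robot that trips this check could then re-run its self-modeling routine to relearn its changed body, or call for human assistance, as the researchers suggest.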

According to the study's first author Boyuan Chen, somewhere inside the human brain, a notion of self exists. This self-model informs us what volume of our immediate surroundings we occupy and how that volume changes as we move. 

Self-awareness in robots 

The study is part of Hod Lipson's decades-long attempt to find ways to grant robots self-awareness. He explained that self-modelling is a primitive form of self-awareness. If a robot, animal or human has an accurate self-model, according to Lipson, it can function better in the world, make better decisions and gain an evolutionary advantage.

The researchers are aware of the limits, risks and controversies surrounding granting machines greater autonomy through self-awareness. Lipson noted that this robot's self-awareness is trivial compared to that of humans, but we have to start somewhere and go "slowly and carefully". In his opinion, that approach will let us reap the benefits while minimizing the risks.

 

Source: Columbia Engineering
