Engineers teach a robot to imagine itself, become self-aware

Our body image is not always accurate or realistic, but it's an important piece of information that determines how we function in the world. Knowing our bodies helps us in countless ways, from moving without bumping, tripping, or falling over, to getting dressed, to recognizing when an injury is hindering our abilities.
Now, a Columbia Engineering team has created a robot that, for the first time, is able to learn a model of its entire body from scratch without any human assistance. The robot created a kinematic model of itself and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.
The researchers achieved this by placing a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn exactly how its body moved in response to various motor commands.
After about three hours, the robot came to a stop, signaling that its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment.
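One way to picture what such a network learns: a function that takes a set of motor commands and a point in space and predicts whether the robot's body occupies that point. The sketch below (in PyTorch) illustrates that framing; the architecture, dimensions, and training details are illustrative assumptions, not the study's actual implementation.

```python
# A minimal sketch of an occupancy-style self-model, assuming supervision
# comes from voxelizing the robot's camera views at each sampled pose.
# All names and sizes here are hypothetical.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    """Maps (joint angles, 3D query point) -> logit that the point lies
    inside the robot's body at that configuration."""
    def __init__(self, num_joints: int = 4, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # occupied vs. free space
        )

    def forward(self, joints: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # joints: (B, num_joints), points: (B, 3)
        return self.net(torch.cat([joints, points], dim=-1)).squeeze(-1)

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(joints, points, occupied):
    """One gradient step; `occupied` is a (B,) float tensor of 0/1 labels
    derived (hypothetically) from the camera observations."""
    optimizer.zero_grad()
    loss = loss_fn(model(joints, points), occupied)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained, such a network can be queried at a grid of points for a fixed pose, and plotting where the predicted occupancy is high would render something like the "flickering cloud" Lipson describes below.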
“We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network; it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson.
In physical experiments, the engineers demonstrated that the visual self-model was accurate to within about 1% of the robot's workspace, enabling it to perform a variety of motion planning and control tasks. Visual self-modeling also allowed the robot to detect, localize, and recover from real-world damage, improving its resiliency.
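Continuing the hypothetical SelfModel sketch above, obstacle avoidance can be reduced to queries against the learned model: a candidate pose is safe only if the predicted body volume stays clear of every known obstacle point. The threshold and sampling strategy below are illustrative assumptions.

```python
import torch

def pose_is_safe(model, joints, obstacle_points, threshold=0.5):
    """joints: (num_joints,) tensor; obstacle_points: (N, 3) tensor of
    world coordinates known to be occupied by obstacles."""
    with torch.no_grad():
        j = joints.unsqueeze(0).expand(obstacle_points.shape[0], -1)
        occupancy = torch.sigmoid(model(j, obstacle_points))
    # Safe only if the body is predicted absent from every obstacle point.
    return bool((occupancy < threshold).all())

def sample_safe_poses(model, obstacle_points, num_joints=4, n_samples=1000):
    """A naive planner: sample random (normalized) joint configurations
    and keep those the self-model predicts are collision-free."""
    poses = torch.rand(n_samples, num_joints) * 2 - 1
    return [p for p in poses if pose_is_safe(model, p, obstacle_points)]
```

The same query mechanism suggests how damage detection could work: when the predicted body volume persistently disagrees with what the cameras observe, the model can be retrained to match the robot's changed shape.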
Engineers say giving robots the ability to model themselves without any human assistance could be an important step forward in automation. Not only does it save labor, it also allows a robot to keep up with its own wear and tear, and even to detect and compensate for damage. The authors of the study, published in Science Robotics, argue that this ability is increasingly important as we ask autonomous systems to become more self-reliant.
“We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain, we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”
The researchers are aware of the limits, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is "trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks."
Source: Tambay News