Imagine a future where disasters strike and packs of robotic rescue dogs come to the rescue.
That’s what AI researchers at Stanford University and Shanghai Qi Zhi Institute are working on. They have developed a new vision-based algorithm that helps robodogs scale tall objects, leap across gaps, crawl under thresholds, and squeeze through crevices – and then bolt to the next challenge. The algorithm serves as the brains of the robodog.
“The autonomy and range of complex skills that our quadruped robot learned is quite impressive,” said Chelsea Finn, assistant professor of computer science and senior author of a new peer-reviewed paper on the robodogs. “And we have created it using low-cost, off-the-shelf robots – actually, two different off-the-shelf robots.”
The robodog developed by the authors is autonomous, meaning it can assess physical challenges on its own and perform a variety of agility skills based on the obstacles it encounters.
“What we’re doing is combining both perception and control, using images from a depth camera mounted on the robot and machine learning to process all those inputs and move the legs in order to get over, under, and around obstacles,” said Zipeng Fu, first author of the study.
“Our robots have both vision and autonomy – the athletic intelligence to size up a challenge and to self-select and execute parkour skills based on the demands of the moment,” Fu said.
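The perception-to-control pipeline Fu describes can be sketched in highly simplified form: a learned policy maps a depth image plus the robot's joint state to target joint positions, which the legs then track. The sketch below is an illustrative toy, not the authors' actual network; all names, shapes, and the linear policy itself are assumptions made for clarity.

```python
import numpy as np

def policy(depth_image, joint_angles, weights):
    """Toy linear policy: flatten the depth image, concatenate the
    proprioceptive joint angles, and map to 12 target joint positions
    (one per actuator on a typical quadruped)."""
    features = np.concatenate([depth_image.ravel(), joint_angles])
    return weights @ features

# Usage with dummy data: a 16x16 depth image and 12 joint angles.
depth = np.zeros((16, 16))     # depth camera frame (placeholder)
joints = np.zeros(12)          # current joint angles (placeholder)
W = np.zeros((12, 16 * 16 + 12))  # learned weights (here: all zeros)
targets = policy(depth, joints, W)  # 12 target joint positions
```

In the real system this mapping is a neural network trained in simulation, but the interface is the same: pixels and joint state in, joint commands out.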
To achieve this, the researchers first synthesized and refined the algorithm using a computer model, then transferred it to the two real-world robots. The robots then underwent reinforcement learning: they attempted to move forward in any way possible and were rewarded according to how well they did. This is how the algorithm learned the best approach to each new challenge.
However, most existing reinforcement learning reward schemes involve many hand-tuned terms, which complicates training and can slow it down. The streamlined reward used for robodog parkour is therefore exceptional in how straightforward it is.
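To make the "streamlined reward" idea concrete, here is a minimal sketch of a reward that pays out mainly for forward progress, with a small energy penalty. The article does not give the actual reward terms, so the function, its names, and its coefficients are all illustrative assumptions.

```python
def parkour_reward(prev_pos, curr_pos, joint_torques,
                   dt=0.02, energy_coeff=1e-4):
    """Toy streamlined reward: forward velocity along the x-axis,
    minus a small penalty proportional to squared joint torques.
    prev_pos/curr_pos are (x, y) base positions at consecutive steps."""
    forward_velocity = (curr_pos[0] - prev_pos[0]) / dt
    energy_penalty = energy_coeff * sum(t * t for t in joint_torques)
    return forward_velocity - energy_penalty

# A robot that advances 2 cm in one 20 ms step, with zero torque,
# earns a reward equal to its forward velocity of 1 m/s.
r = parkour_reward((0.0, 0.0), (0.02, 0.0), [0.0] * 12)
```

A reward this sparse leaves the policy free to discover whatever gait clears the obstacle, rather than prescribing how each joint should move.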
The team conducted a series of experiments with real-world robodogs to demonstrate their new agility approach in highly challenging environments, relying solely on the robots' off-the-shelf computers, visual sensors, and power systems. The results revealed that the upgraded robodogs were capable of climbing obstacles more than one-and-a-half times their height, jumping over gaps greater than one-and-a-half times their length, crawling under barriers three-quarters of their height, and tilting themselves to pass through a slit narrower than their width.
Researchers now plan to take advantage of advancements in 3D vision and graphics to introduce real-world data to their simulated environments. This will bring a new level of real-world autonomy to their algorithm.