Robotics technology is advancing rapidly, and robots are becoming more versatile, capable of performing an ever-wider range of tasks. Many companies are now investing heavily in humanoid robots that can navigate existing workspaces and take over tasks from human workers.
For robots to become truly versatile and start taking over a wide variety of tasks, they need to be able to upskill themselves quickly based on human instructions or demonstrations.
Now, Toyota Research Institute (TRI) claims a breakthrough: a generative AI approach based on Diffusion Policy that can quickly and reliably teach robots new, dexterous skills. The advance marks a significant improvement in robot utility and could pave the way for Large Behavior Models (LBMs) for robots, analogous to the Large Language Models (LLMs) that have revolutionized conversational AI in recent years.
Previous methods for teaching robots new behaviors were inefficient and often limited to specific tasks performed in controlled environments. The process was slow and inconsistent, requiring roboticists to spend considerable time and effort writing complex code or running numerous trial-and-error cycles to program each behavior.
“Our research in robotics is aimed at amplifying people rather than replacing them,” said Gill Pratt, CEO of TRI and Chief Scientist for Toyota Motor Corporation. “This new teaching technique is both very efficient and produces very high performing behaviors, enabling robots to much more effectively amplify people in many ways.”
TRI has developed a robot behavior model that learns from haptic demonstrations along with a language description of the desired outcome. The skill is learned using an AI-based Diffusion Policy, enabling a new behavior to be deployed autonomously from dozens of demonstrations. This approach not only produces reliable, repeatable, and efficient results but does so at a remarkable speed.
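To give a rough intuition for how a diffusion policy generates actions, the toy sketch below shows the core idea: start from random noise and iteratively refine it into an action sequence using a learned noise-prediction model. This is not TRI's implementation; the `toy_predict_noise` function, the fixed 0.5 step size, and the 3-dimensional action are all hypothetical stand-ins (a real system would use a trained neural network conditioned on camera images and tactile input, and a proper noise schedule).

```python
import numpy as np

def denoise_actions(noisy_actions, predict_noise, n_steps=50):
    """Iteratively refine a noisy action vector toward a clean one.

    A diffusion policy produces robot actions by starting from Gaussian
    noise and repeatedly subtracting the noise predicted by a learned
    model. Here the step size is a fixed 0.5 for simplicity; real
    diffusion samplers follow a noise schedule instead.
    """
    actions = noisy_actions.copy()
    for step in range(n_steps, 0, -1):
        eps = predict_noise(actions, step)   # model's noise estimate
        actions = actions - 0.5 * eps        # simplified denoising update
    return actions

# Hypothetical stand-in for a trained noise-prediction network: it
# "knows" the demonstrated target action and predicts the residual.
target = np.array([0.2, -0.5, 0.9])  # e.g. a 3-DoF end-effector command

def toy_predict_noise(actions, step):
    return actions - target

rng = np.random.default_rng(0)
noisy = target + rng.normal(size=3)           # start from corrupted actions
refined = denoise_actions(noisy, toy_predict_noise)
```

In this toy setup the refined actions converge to the demonstrated target; the point is only to illustrate the iterative denoising loop that the Diffusion Policy approach is built on.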
The robot platform is custom-built for dexterous dual-arm manipulation tasks with a special focus on enabling haptic feedback and tactile sensing. TRI researchers have already taught robots more than 60 difficult, dexterous skills using the new approach, including pouring liquids, using tools, and manipulating deformable objects. TRI says the achievements were realized just by giving the robots new information without writing a single line of new code. They now hope to teach hundreds of new skills by the end of the year and 1,000 by the end of 2024.
“The tasks that I’m watching these robots perform are simply amazing – even one year ago, I would not have predicted that we were close to this level of diverse dexterity,” remarked Russ Tedrake, Vice President of Robotics Research at TRI, in an official release. “What is so exciting about this new approach is the rate and reliability with which we can add new skills. Because these skills work directly from camera images and tactile sensing, using only learned representations, they are able to perform well even on tasks that involve deformable objects, cloth, and liquids – all of which have traditionally been extremely difficult for robots.”