Wednesday, March 27, 2024

Low-cost, energy-efficient robot hand learns how not to drop the ball

Grasping objects of different sizes, shapes, and textures is challenging for a robot. Adjusting the grip when an object starts to slip is also difficult if many parts of the robot must be moved at once. Now, researchers at the University of Cambridge have designed a low-cost, energy-efficient robotic hand that can grasp a range of objects – and not drop them – using just the movement of its wrist and the feeling in its ‘skin.’

Using just the movement of its wrist and the feeling in its ‘skin,’ the soft, 3D-printed robot hand can carry out a range of complex movements. The hand was trained to grasp a variety of objects with the correct amount of pressure while using a minimal amount of energy, and it could predict whether it was about to drop an object from the information provided by the sensors placed on its skin.

The team used a 3D-printed anthropomorphic hand implanted with tactile sensors so that the hand could sense what it was touching. Researchers conducted more than 1200 tests with the robot hand, observing its ability to grasp small objects without dropping them. The robot was initially trained using small 3D-printed plastic balls and grasped them using a pre-defined action obtained through human demonstrations.
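To give a flavour of how drop prediction from tactile data can work, here is a minimal, hypothetical Python sketch: a classifier is trained on logged grasp trials to estimate, from a short window of taxel pressure readings, whether the current grasp is likely to fail. The sensor count, the features, and the synthetic data are all illustrative assumptions, not the model or data used in the Cambridge study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: each grasp trial yields a short window of readings from
# N_TAXELS tactile sensors, and the label says whether the object was later dropped.
# The feature choice (mean pressure plus its net change) is an assumption for
# illustration only.
N_TAXELS, WINDOW = 8, 20
rng = np.random.default_rng(0)

def features(window_readings):
    """Summarise a (WINDOW, N_TAXELS) block of pressures into a feature vector."""
    mean_p = window_readings.mean(axis=0)             # average pressure per taxel
    slope = window_readings[-1] - window_readings[0]  # net change over the window
    return np.concatenate([mean_p, slope])

def synth_trial(slipping):
    """Synthetic stand-in for a logged trial: slipping grasps show falling pressure."""
    base = rng.uniform(0.2, 1.0, N_TAXELS)
    drift = -0.4 if slipping else 0.0
    t = np.linspace(0, 1, WINDOW)[:, None]
    return base + drift * t + rng.normal(0, 0.02, (WINDOW, N_TAXELS))

labels = rng.integers(0, 2, 300)                      # 1 = grasp eventually fails
X = np.array([features(synth_trial(bool(y))) for y in labels])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# At run time the hand would score the current window and, when the failure
# probability climbs, adjust its grip (e.g. via a wrist movement) before the drop.
p_fail = clf.predict_proba(features(synth_trial(True)).reshape(1, -1))[0, 1]
print(f"predicted probability of dropping the object: {p_fail:.2f}")
```

In this toy version the prediction is just a probability threshold on recent sensor history; the real system learns from its roughly 1,200 recorded grasp attempts rather than synthetic data.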

The robot hand can grasp a variety of objects with the correct amount of pressure. Credit: University of Cambridge

After training with the balls, the hand attempted to grasp different objects, including a peach, a computer mouse, and a roll of bubble wrap, and successfully grasped 11 of the 14 objects.

“This kind of hand has a bit of springiness to it: it can pick things up by itself without any actuation of the fingers,” said first author Dr. Kieran Gilday, who is now based at EPFL in Lausanne, Switzerland. “The tactile sensors give the robot a sense of how well the grip is going, so it knows when it’s starting to slip. This helps it to predict when things will fail.”

“The sensors, which are sort of like the robot’s skin, measure the pressure being applied to the object,” said co-author Dr. Thomas George-Thuruthel, who is now based at University College London (UCL) East. “We can’t say exactly what information the robot is getting, but it can theoretically estimate where the object has been grasped and with how much force.”
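As a rough illustration of that idea (not the actual sensing pipeline), the pressure-weighted centroid of a grid of taxel readings gives an estimate of where the object is being held, and the summed readings give a proxy for overall grip force. The taxel layout and values below are invented for the example.

```python
import numpy as np

# Hypothetical illustration: treat the hand's tactile 'skin' as a small 4x4 grid of
# pressure-sensing taxels. The layout and readings are invented; the real hand's
# sensor arrangement is not described here.
taxel_xy = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)  # taxel positions, arbitrary units
pressures = np.random.default_rng(1).uniform(0.0, 1.0, len(taxel_xy))           # one reading per taxel

total_force = pressures.sum()                                               # proxy for overall grip force
contact_point = (pressures[:, None] * taxel_xy).sum(axis=0) / total_force   # pressure-weighted centroid

print(f"estimated grip force (arbitrary units): {total_force:.2f}")
print(f"estimated contact location on the skin: {contact_point}")
```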

The hand's passive design, which uses only a small number of sensors, makes it easier to control, provides a wide range of motion, and simplifies the learning process.

Future development of the system includes adding computer vision capabilities or teaching the robot to exploit its environment, which would enable it to grasp a wider range of objects.

Journal reference:

  1. K. Gilday, T. George-Thuruthel, F. Iida. Predictive Learning of Error Recovery with a Sensorised Passivity-based Soft Anthropomorphic Hand. Advanced Intelligent Systems, 2023. DOI: 10.1002/aisy.202200390