Friday, April 26, 2024

Bees’ waggle dance inspires alternative visual communication system for robots

In a noisy environment such as a factory floor, humans are adept at coordinating through non-verbal cues like visual gestures. We aren’t the only ones; honeybees take non-verbal communication to a whole new level. By waggling their backsides while parading through the hive, they can let other honeybees know about the location of food. The direction of this ‘waggle dance’ tells other bees the direction of the food with respect to the hive and the sun, and the duration of the dance tells them how far away it is. It is a simple but effective way to convey complex geographical coordinates.

Inspired by this phenomenon, an international team of researchers set out to devise a system for robot-robot communication that does not rely on digital networks.

A recent study in Frontiers in Robotics and AI presents a simple technique whereby robots view and interpret each other’s movements or a gesture from a human to communicate a geographical location. The first robot traces a shape on the floor, and the shape’s orientation and the time it takes to trace it tell the second robot the required direction and distance of travel. This technique could prove invaluable in situations where robot labor is required, but network communications are unreliable, such as in a disaster zone or in space.
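The mapping itself is simple enough to sketch in a few lines. Below is a minimal Python illustration of the idea, assuming the observing robot has already extracted the trace from its camera as timestamped (x, y) points on the floor; the constant converting trace duration to metres is an illustrative assumption, not a parameter reported in the study.

```python
import math

# Illustrative scale: how many metres of travel one second of tracing
# encodes. This value is an assumption, not one from the study.
METERS_PER_SECOND_OF_TRACE = 0.5

def decode_trace(points):
    """Decode an observed trace into (heading, distance).

    `points` is a list of (x, y, t) samples of the traced shape in the
    floor frame. The orientation of the trace (start -> end) gives the
    direction of travel; the elapsed time gives the distance, much as
    the duration of a bee's waggle dance encodes distance to food.
    """
    (x0, y0, t0) = points[0]
    (x1, y1, t1) = points[-1]
    heading = math.atan2(y1 - y0, x1 - x0)   # radians in the floor frame
    distance = (t1 - t0) * METERS_PER_SECOND_OF_TRACE
    return heading, distance

# Example: a 6-second trace pointing along +y tells the handling robot
# to travel 3 m at a 90-degree heading.
heading, distance = decode_trace([(0.0, 0.0, 0.0), (0.0, 1.0, 6.0)])
print(f"heading {math.degrees(heading):.0f} deg, distance {distance:.1f} m")
```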

Researchers tested the visual communication system using a simple task, where a package in a warehouse needs to be moved. The system allows a human to communicate with a ‘messenger robot,’ which supervises and instructs a ‘handling robot’ that performs the task.

In this situation, the human can communicate with the messenger robot using gestures, such as a raised hand with a closed fist. The robot recognizes the gesture using its onboard camera and skeletal tracking algorithms. Once the human has shown the messenger robot where the package is, the robot conveys this information to the handling robot: it positions itself in front of the handling robot and traces a specific shape on the ground. The orientation of the shape indicates the required direction of travel, while the length of time it takes to trace it indicates the distance.
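The study does not specify which skeletal-tracking pipeline the robots run, but the gesture-recognition step can be illustrated with an off-the-shelf pose estimator. The sketch below uses Google’s MediaPipe Pose purely as a stand-in and treats a ‘raised hand’ as the wrist landmark sitting above the shoulder; the closed-fist check and the authors’ actual algorithm are not shown.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def hand_is_raised(frame_bgr, pose) -> bool:
    """Crude 'raised hand' test: is the tracked person's right wrist
    above their right shoulder in the camera image?"""
    results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.pose_landmarks:
        return False  # no person detected in this frame
    lm = results.pose_landmarks.landmark
    wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
    shoulder = lm[mp_pose.PoseLandmark.RIGHT_SHOULDER]
    # Image y grows downward, so 'above' means a smaller y value.
    return wrist.y < shoulder.y

with mp_pose.Pose(static_image_mode=False) as pose:
    cap = cv2.VideoCapture(0)  # stand-in for the robot's onboard camera
    ok, frame = cap.read()
    if ok and hand_is_raised(frame, pose):
        print("gesture detected: raised hand")
    cap.release()
```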

The researchers put the system to the test both in a computer simulation and with real robots and human volunteers. The robots interpreted the gestures correctly 90% of the time in simulation and 93.3% of the time in the real-world trials, highlighting the potential of the technique.

“This technique could be useful in places where communication network coverage is insufficient and intermittent, such as robot search-and-rescue operations in disaster zones or in robots that undertake spacewalks,” said Prof Abhra Roy Chowdhury of the Indian Institute of Science, senior author on the study. “This method depends on robot vision through a simple camera, and therefore it is compatible with robots of various sizes and configurations and is scalable,” added Kaustubh Joshi of the University of Maryland, the first author of the study.