Wednesday, April 24, 2024

Soft robots detect human touch using camera and shadows

Currently, several groups are developing electronic skin for robots. Cornell University researchers, however, are taking a simpler approach, using shadow-imaging cameras to give robots a sense of touch.

Known as ShadowSense, the experimental system incorporates a USB camera located inside the robot that captures the shadows cast by hand gestures on the robot’s skin and classifies them with machine-learning software. This approach is more versatile than covering the robot with a large number of contact sensors, which would add weight and complex wiring.

The technology evolved from a project to create inflatable robots that could help guide people to safety during emergency evacuations, for example, through a smoke-filled building, where the robot could detect the touch of a hand and lead the person to an exit.

“By placing a camera inside the robot, we can infer how the person is touching it and what the person’s intent is just by looking at the shadow images,” said lead researcher Yuhan Hu, a doctoral student. “We think there is interesting potential there, because there are lots of social robots that are not able to detect touch gestures.”

The current robot prototype is essentially an inflatable bladder of nylon skin stretched around a cylindrical skeleton mounted on wheels. The researchers developed a neural network-based algorithm to distinguish six touch gestures (touching with a palm, punching, touching with two hands, hugging, pointing, and not touching at all) with an accuracy of 87.5 to 96%, depending on the lighting.
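The article does not describe the network itself, but the pipeline it implies — downsample a grayscale shadow frame, extract features, score six gesture classes — can be sketched as follows. This is a minimal illustration with an untrained linear classifier standing in for the researchers’ neural network; the gesture labels, image sizes, and weights are all hypothetical.

```python
import numpy as np

# The six gesture classes reported in the article (label strings are ours).
GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

def preprocess(frame: np.ndarray, size: int = 16) -> np.ndarray:
    """Downsample a grayscale shadow frame to size x size by block-averaging,
    then flatten and normalize it into a feature vector."""
    h, w = frame.shape
    frame = frame[: h - h % size, : w - w % size]  # crop to a multiple of `size`
    blocks = frame.reshape(size, frame.shape[0] // size,
                           size, frame.shape[1] // size)
    small = blocks.mean(axis=(1, 3))  # average each block into one pixel
    return small.ravel() / 255.0      # scale intensities to [0, 1]

def classify(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> str:
    """Linear layer + softmax; returns the most likely gesture label."""
    logits = weights @ features + bias
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return GESTURES[int(np.argmax(probs))]

# Untrained demo weights; a real system would learn these from labeled
# shadow images of each gesture.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(len(GESTURES), 16 * 16))
b = np.zeros(len(GESTURES))

# Stand-in for a frame from the camera inside the robot.
frame = rng.integers(0, 256, size=(120, 160)).astype(float)
print(classify(preprocess(frame), W, b))
```

In the actual system, the linear layer would be replaced by a trained neural network, but the surrounding camera-to-label flow is the same.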

The system can be programmed to respond to certain touches and gestures, such as rolling away or issuing a message through a loudspeaker, and the robot’s skin has the potential to be turned into an interactive screen. Researchers also point out that the ShadowSense technology’s applications are not limited to robotics, as it could also be used in touch screens or electronic gadgets.
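The gesture-to-response programming described above amounts to a dispatch table from classifier output to robot behavior. A minimal sketch, in which the gesture names, response functions, and messages are all illustrative rather than taken from the paper:

```python
# Hypothetical mapping from a detected gesture label to a robot response.
def roll_away() -> str:
    # Would command the drive wheels in a real robot.
    return "rolling away"

def play_message(text: str = "Follow me to the exit") -> str:
    # Would send audio to the loudspeaker in a real robot.
    return f"speaker: {text}"

RESPONSES = {
    "punch": roll_away,
    "palm": play_message,
    "hug": lambda: "speaker: thank you",
}

def respond(gesture: str) -> str:
    """Look up and run the response for a gesture; ignore unknown labels."""
    action = RESPONSES.get(gesture)
    return action() if action else "no response"

print(respond("punch"))  # → rolling away
```

Keeping the mapping in a table like this makes it easy to reprogram responses without touching the classifier.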

“While the technology has certain limitations, for example requiring a line of sight from the camera to the robot’s skin, these constraints could actually spark a new approach to social robot design that would support a visual touch sensor like the one we proposed,” said Guy Hoffman, associate professor. “In the future, we would like to experiment with using optical devices such as lenses and mirrors to enable additional form factors.”

In addition, the ShadowSense technology offers a comfort that is increasingly rare in these high-tech times: privacy. Using shadows and touch as the means of interaction also addresses some of the privacy concerns around voice and facial recognition.