Engineers taught a robot how to respond to human facial expressions

EVA practices random facial expressions while recording how it looks from a front-mounted camera. Credit: Columbia Engineering

Have you ever smiled at a robot that can smile at you in response? Unfortunately, this does not happen often. While our facial expressions play a big role in building trust, most robots still sport the blank and static face of a professional poker player.

A team of engineering researchers at New York City’s Columbia University has been working for five years to create EVA, a new autonomous robot with a soft and expressive face that responds to match the expressions of nearby humans. This ability will help strengthen the “relationship” between such robots and humans.

The robot is able to express six basic emotions (anger, disgust, fear, joy, sadness, and surprise) as well as an array of more nuanced expressions.

Creating such a robot is no trivial task. Robots have long been built from rigid materials that cannot deform to express emotion. To overcome this problem, EVA consists of a 3D-printed, adult-human-sized synthetic skull with a soft rubber face on the front.

It uses artificial ‘muscles’ (i.e., cables and motors) that pull on specific points on EVA’s face, mimicking the movements of the more than 42 tiny muscles attached at various points to the skin and bones of human faces. After weeks of tugging cables to make EVA smile, frown, or look upset, the team noticed that EVA’s blue, disembodied face could elicit emotional responses from their lab mates.

EVA uses deep learning to "read" and then mirror the expressions on nearby human faces. It learns to mimic a wide range of human facial expressions by trial and error, watching videos of itself.

The team filmed hours of footage of EVA making a series of random faces to teach EVA what its own face looked like. EVA's internal neural networks then learned to pair muscle motion with the video footage of its own face. Once EVA had a primitive sense of how its own face worked (known as a "self-image"), it used a second network to match that self-image with the image of a human face captured by its video camera. After several refinements and iterations, EVA acquired the ability to read human facial gestures from a camera and to respond by mirroring that human's expression.
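The two-stage idea above can be sketched in miniature. Everything in this snippet is a hypothetical simplification: the motor and landmark counts are invented, and a linear least-squares fit stands in for EVA's deep neural networks, but the loop is the same: babble random expressions, learn a self-model from motor commands to observed face shape, then invert that self-model to reproduce a face seen on camera.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 motor commands drive 20 facial-landmark coordinates.
N_MOTORS, N_LANDMARKS = 10, 20

# Unknown "true" face mechanics (a stand-in for EVA's cables and rubber skin).
true_mechanics = rng.normal(size=(N_MOTORS, N_LANDMARKS))

def face_from_motors(motors):
    """What the camera would see for a given motor command (toy stand-in)."""
    return motors @ true_mechanics

# Step 1: babble random expressions and record the resulting face shapes,
# mirroring how EVA filmed itself making random faces.
motor_samples = rng.normal(size=(500, N_MOTORS))
face_samples = face_from_motors(motor_samples)

# Step 2: learn a "self-image" -- a model from motor commands to landmarks.
# EVA uses a deep network; least squares keeps this sketch short.
self_model, *_ = np.linalg.lstsq(motor_samples, face_samples, rcond=None)

# Step 3: mirror a human. Given target landmarks observed on camera,
# solve the self-model for the motor command that reproduces them.
human_landmarks = face_from_motors(rng.normal(size=N_MOTORS))
motors_to_mirror, *_ = np.linalg.lstsq(self_model.T, human_landmarks, rcond=None)

error = np.linalg.norm(face_from_motors(motors_to_mirror) - human_landmarks)
print(f"mirroring error: {error:.6f}")
```

In this toy version the self-model can be inverted exactly, so the mirroring error is near zero; on the real robot both stages are approximate, which is why the team needed several refinements and iterations.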

Of course, so far this remains a laboratory experiment, and the robot's expressions are still far from a person's. The engineers note, however, that robots capable of responding to a wide variety of human body language would be useful in workplaces, hospitals, schools, and homes.