A group of engineers from Cornell University has developed an earphone capable of identifying facial expressions even when the wearer has a face mask on. The ear-mounted wearable sensing device, called C-Face, can continuously track full facial expressions by observing the contour of the cheeks, then translate them into emojis or silent speech commands.
C-Face is mounted on the ear alongside earphones, making it a useful communication tool at a time when we go everywhere with our faces covered by masks and must keep our distance and avoid meetings. The user could express emotions to online collaborators without holding a camera in front of their face. The technology is still in the research phase, but the prototypes work.
“This device is simpler, less obtrusive, and more capable than any existing ear-mounted wearable technologies for tracking facial expressions,” said Cheng Zhang, assistant professor of information science. “In previous wearable technology aiming to recognize facial expressions, most solutions needed to attach sensors on the face and even with so much instrumentation, they could only recognize a limited set of discrete facial expressions.”
The device uses two miniature RGB cameras, positioned below each ear on headphones or earphones, to continuously reconstruct facial expressions from the contours of the cheeks. The cameras record the changes in facial contours caused by facial muscle movements. These subtle changes are fed into a deep learning model, which continuously outputs 42 facial feature points representing the shapes and positions of the mouth, eyes, and eyebrows.
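To make the pipeline concrete, here is a minimal sketch of the landmark-regression step: a tiny randomly initialised MLP standing in for C-Face's deep learning model, mapping one frame's contour features to 42 (x, y) facial feature points. The input dimension, hidden size, and weights are illustrative assumptions; the real system is trained on paired camera/landmark data.

```python
import numpy as np

rng = np.random.default_rng(0)

IN_DIM = 128      # assumed size of the contour feature vector from one frame
HIDDEN = 64       # assumed hidden-layer width
N_POINTS = 42     # feature points for mouth, eyes, and eyebrows (per the article)

# Randomly initialised weights for illustration only; a trained model
# would learn these from labelled camera/landmark pairs.
W1 = rng.normal(0, 0.1, (IN_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, N_POINTS * 2))
b2 = np.zeros(N_POINTS * 2)

def predict_landmarks(contour_features: np.ndarray) -> np.ndarray:
    """Map one frame's cheek-contour features to 42 (x, y) feature points."""
    h = np.maximum(0, contour_features @ W1 + b1)   # ReLU hidden layer
    return (h @ W2 + b2).reshape(N_POINTS, 2)

frame = rng.normal(size=IN_DIM)   # stand-in for one camera frame's features
points = predict_landmarks(frame)
print(points.shape)               # (42, 2)
```

Running this per frame yields a continuous stream of 42-point face shapes, which downstream components can then classify.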
C-Face can translate those feature points into eight emojis, including neutral and angry faces, as well as eight silent speech commands designed to control a music device, such as “play,” “next song,” and “volume up.” Other possible uses include having avatars in games or other virtual settings express a person’s actual emotions.
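The expression-to-emoji step can be sketched as a simple nearest-centroid classifier over the 42 reconstructed feature points. This is an assumption for illustration: the article names only the neutral and angry classes, so the other class names here are placeholders, and C-Face's actual classifier is a deep model, not centroid matching.

```python
import numpy as np

rng = np.random.default_rng(1)

# Eight emoji classes; only "neutral" and "angry" are named in the article,
# the rest are hypothetical placeholders.
EMOJIS = ["neutral", "angry", "happy", "sad",
          "surprised", "wink", "smirk", "laughing"]

# One prototype landmark layout (42 x-y points, flattened to 84 values)
# per class; a real system would learn these from labelled training data.
prototypes = {name: rng.normal(size=84) for name in EMOJIS}

def classify_expression(landmarks: np.ndarray) -> str:
    """Return the emoji whose prototype is closest to the observed landmarks."""
    flat = landmarks.ravel()
    return min(prototypes, key=lambda name: np.linalg.norm(prototypes[name] - flat))

# A frame matching the "angry" prototype exactly should classify as "angry".
observed = prototypes["angry"].reshape(42, 2)
print(classify_expression(observed))  # angry
```

The same pattern applies to the eight silent speech commands: a second classifier over the same landmark stream, with command labels such as “play” instead of emoji labels.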
In a study with nine participants, the researchers found that emoji recognition was more than 88% accurate and silent speech recognition was nearly 85% accurate. Even with a mask on, the device can thus help users communicate in virtual environments, and in time it could control computer systems through facial expressions alone, for example to change a song or issue playback commands.
One limitation to C-Face is the earphones’ limited battery capacity, Zhang said. As its next step, the team plans to work on a sensing technology that uses less power.