
Wearable Sonar Uses Sound, Not Cameras, to Track Facial Expressions


Engineers at Cornell University have developed a new wearable device that can monitor a person’s facial expressions via sonar and reproduce them on a digital avatar. Removing the camera from the equation could alleviate privacy concerns.

The device, which the team calls EarIO, consists of an attachment with a speaker and microphone on either side that can be clipped onto any regular headset. Each speaker emits pulses of sound beyond the range of human hearing toward the wearer’s face, while the returning echoes are picked up by the microphone.
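The article does not specify the waveform, but a common choice in acoustic sensing of this kind is a short frequency-modulated chirp just above the audible range. A minimal sketch in Python, with illustrative parameters that are assumptions rather than the team's actual values:

```python
import numpy as np
from scipy.signal import chirp

# Hypothetical parameters; the paper's exact waveform is not given here.
SAMPLE_RATE = 48_000             # Hz, typical for phone/headset audio hardware
PULSE_LEN = 0.01                 # seconds per pulse
F_START, F_END = 18_000, 21_000  # sweep near the upper limit of human hearing

t = np.arange(0, PULSE_LEN, 1 / SAMPLE_RATE)
# Linear frequency sweep (an FMCW-style pulse), inaudible to most adults.
pulse = chirp(t, f0=F_START, f1=F_END, t1=PULSE_LEN, method="linear")
# Taper the edges to avoid audible clicks when the pulse starts and stops.
pulse *= np.hanning(len(t))
```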

As the user speaks or makes various facial expressions, the echo profile changes slightly because of the way the skin moves, stretches and crinkles. A specially trained algorithm recognizes these echo profiles, quickly reconstructs the corresponding expressions and displays them on the digital avatar.
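The article does not detail the algorithm, but a standard way to obtain the echo profile described above is to cross-correlate the received signal with the emitted pulse, so that peaks mark reflections arriving at different delays. A rough sketch, continuing the assumptions above:

```python
import numpy as np
from scipy.signal import correlate

def echo_profile(received, pulse):
    """Cross-correlate the microphone signal with the emitted pulse.

    Peaks in the result correspond to echoes arriving at different
    delays, i.e. reflections from parts of the face at different
    distances. Skin movement shifts and reshapes these peaks, which
    is the change the learned model picks up on.
    """
    return np.abs(correlate(received, pulse, mode="valid"))

# A trained model (for example, a small convolutional network) would then
# map a window of consecutive echo profiles to facial-expression
# parameters driving the avatar. The model architecture is an assumption;
# the article only says the algorithm is "specially trained".
```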

“Through the power of artificial intelligence, the algorithm has discovered complex connections between muscle movements and facial expressions that are not recognized by the human eye,” said study co-author Ke Li. “We can use this to infer complex information that is much harder to capture – the entire front of the face.”

The research team tested the EarIO system on 16 participants, with the algorithm running on an ordinary smartphone, and the device reconstructed facial expressions about as well as camera-based systems. Background noise such as wind, speech or street sounds did not interfere with its ability to track facial movements.
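This robustness is plausible because speech, wind and traffic sit well below the sensing band, so the microphone signal can simply be band-pass filtered around the pulse frequencies before correlation. A hedged sketch, with filter parameters that are illustrative rather than taken from the paper:

```python
from scipy.signal import butter, sosfiltfilt

# Keep only the (assumed) ~18-21 kHz sensing band; speech, wind and
# street noise sit far below it and are rejected before correlation.
sos = butter(8, [17_000, 22_000], btype="bandpass", fs=48_000, output="sos")

def denoise(received):
    return sosfiltfilt(sos, received)
```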

The team says sonar has some advantages over using a camera. The acoustic data requires much less energy and processing power, which also means the device can be smaller and lighter. Cameras can also capture a lot of other personal information that users may not intend to share, so sonar may be more private.

In terms of potential applications, the technology could offer a convenient way to reproduce a wearer’s real facial expressions on a digital avatar in games, VR or virtual worlds.

The team says further work is needed to tune out other distractions, such as when users turn their heads, and to simplify the training process for the AI algorithm.

The research was published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
