Cornell researchers created an earphone that can track facial expressions
The earphone uses two RGB cameras, one positioned below each ear, which record changes in cheek contour as the wearer's facial muscles move.
Once the images have been reconstructed using computer vision and a deep learning model, a convolutional neural network analyzes the 2D images and translates them into 42 facial feature points representing the position and shape of the wearer's mouth, eyes and eyebrows.
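To make that pipeline concrete, here is a minimal sketch of a landmark-regression CNN of the kind described: it takes a reconstructed 2D image and outputs 42 (x, y) feature points. The layer sizes, the 64x64 input resolution, and the name CheekLandmarkNet are illustrative assumptions, not details from the Cornell paper.

```python
import torch
import torch.nn as nn

class CheekLandmarkNet(nn.Module):
    """Maps one reconstructed cheek image to 42 (x, y) facial feature points.

    Architecture is a placeholder; the researchers' actual model is not
    specified in the article.
    """
    def __init__(self, num_points: int = 42):
        super().__init__()
        self.num_points = num_points
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.regressor = nn.Linear(32 * 16 * 16, num_points * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(start_dim=1)
        return self.regressor(h).view(-1, self.num_points, 2)

model = CheekLandmarkNet()
frame = torch.randn(1, 3, 64, 64)   # one RGB frame from an ear-mounted camera
points = model(frame)               # shape: (1, 42, 2), one (x, y) per point
```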
C-Face can translate those expressions into eight emoji, including ones representing neutral and angry faces. The system can also use facial cues to control playback options on a music app.
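The emoji step could plausibly be a small classifier over the flattened landmark vector. The sketch below works under that assumption; of the eight labels, only "neutral" and "angry" come from the article, and the other six are placeholders.

```python
import torch
import torch.nn as nn

# Assumed label set: only "neutral" and "angry" are named in the article.
EMOJI = ["neutral", "angry", "happy", "sad",
         "surprised", "wink", "laughing", "kissing"]

# Small classifier head over the flattened 42-point landmark vector
# (42 points x 2 coordinates = 84 inputs).
classifier = nn.Sequential(
    nn.Linear(42 * 2, 64), nn.ReLU(),
    nn.Linear(64, len(EMOJI)),
)

landmarks = torch.randn(1, 84)   # flattened (x, y) points from the CNN
label = EMOJI[classifier(landmarks).argmax(dim=1).item()]
print(label)   # arbitrary with untrained weights; shown for shape only
```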