Ruidong Zhang, a doctoral student in the field of information science, wearing EchoSpeech glasses. (Image credit: Cornell)

AI glasses with a silent-speech recognition interface use acoustic sensing and artificial intelligence to continuously recognize up to 31 unvocalized commands from lip and mouth movements. The low-power, wearable interface requires just a few minutes of user training data before it recognizes commands, and it can run on a smartphone.
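The article does not describe the recognition model itself; the sketch below only illustrates the general idea of calibrating a small command classifier with a few minutes of per-user examples. The 31-command vocabulary comes from the article, but the input shape, network layers, and training loop are illustrative assumptions, not the authors' published design.

```python
# Minimal sketch: a tiny classifier over echo-profile "images", briefly
# fine-tuned on a handful of labelled per-user examples. All dimensions
# and hyperparameters are hypothetical.
import torch
import torch.nn as nn

NUM_COMMANDS = 31                     # command-vocabulary size from the article
PROFILE_BINS, TIME_STEPS = 64, 100    # assumed echo-profile dimensions

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_COMMANDS),
)

# "A few minutes of user training data" -> a small batch of labelled examples.
x = torch.randn(8, 1, PROFILE_BINS, TIME_STEPS)   # placeholder echo profiles
y = torch.randint(0, NUM_COMMANDS, (8,))          # placeholder command labels

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                                # brief per-user calibration
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

predicted_commands = model(x).argmax(dim=1)       # recognized command indices
print(predicted_commands.tolist())
```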

Outfitted with a pair of microphones and speakers smaller than pencil erasers, the EchoSpeech glasses become a wearable AI-powered sonar system, sending and receiving sound waves across the face and sensing mouth movements. A deep learning algorithm then analyzes these echo profiles in real time, with about 95 percent accuracy.
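The sonar step can be illustrated with a rough sketch: the speaker repeatedly emits a short frequency sweep, and cross-correlating each received microphone frame with that sweep yields an echo profile whose peaks correspond to reflections arriving at different delays, i.e. from surfaces at different distances. The sample rate, sweep band, and frame length below are assumptions chosen for illustration, not specifications from the article.

```python
import numpy as np
from scipy.signal import chirp, correlate

FS = 48_000        # assumed speaker/microphone sample rate (Hz)
FRAME = 600        # assumed samples per transmitted sweep (~12.5 ms)

# Transmitted probe: one short near-ultrasonic frequency sweep per frame.
t = np.arange(FRAME) / FS
tx = chirp(t, f0=16_000, f1=20_000, t1=t[-1], method="linear")

def echo_profile(rx_frame: np.ndarray) -> np.ndarray:
    """Cross-correlate a received frame with the transmitted sweep.

    Peaks in the result mark reflections arriving at different delays,
    i.e. from surfaces (lips, cheeks) at different distances.
    """
    return correlate(rx_frame, tx, mode="full")

# Synthetic received frame: an attenuated echo delayed by 40 samples, plus noise.
delay = 40                                    # hypothetical echo delay
rx = np.zeros(FRAME)
rx[delay:] = 0.3 * tx[:FRAME - delay]
rx += 0.01 * np.random.randn(FRAME)

profile = echo_profile(rx)
lag = int(np.argmax(np.abs(profile))) - (FRAME - 1)   # lag 0 sits at index FRAME-1
print(f"strongest reflection at {lag} samples (~{lag / FS * 1000:.2f} ms)")
```

In a real-time system, a stream of such profiles (one per frame) would form the input that the deep learning model classifies into commands.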

Recently, the lab has shifted away from cameras and toward acoustic sensing to track face and body movements, citing improved battery life; tighter security and privacy; and smaller, more compact hardware. EchoSpeech builds on the lab’s earlier acoustic-sensing device, EarIO, a wearable earbud that tracks facial movements.
