A new necklace known as Speechin is designed to recognize its wearer's silent speech.
The experimental device is being developed by Cornell University’s Asst. Prof. Cheng Zhang and doctoral student Ruidong Zhang. It builds upon the NeckFace necklace that Cheng Zhang revealed last year, which monitored the wearer’s facial expressions.
Besides a microprocessor, battery, and Bluetooth module, Speechin also has an upward-facing infrared camera that images the underside of the wearer’s chin. It’s held in this orientation via a set of “wings” that extend out to either side, along with a coin that serves as a weight on its bottom. To address privacy concerns, it doesn’t point directly at the user’s face.
Utilizing machine-learning algorithms, the device determines which commands its wearer is silently speaking based on their chin movements, then forwards those commands to a paired smartphone.
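By way of illustration only, the sketch below shows one way such a recognition pipeline could be structured in Python: a classifier is trained on labelled chin-image sequences, then assigns each new sequence to the closest known command. The command names, image sizes, and the simple nearest-centroid model are hypothetical stand-ins, not the researchers' actual system.

```python
# Illustrative sketch only -- not the researchers' actual code.
# Assumes a classifier that maps a short sequence of infrared chin images
# (flattened to feature vectors) to one of a fixed set of command labels.
import numpy as np

COMMANDS = ["play music", "next track", "volume up"]  # hypothetical command set

def extract_features(frames: np.ndarray) -> np.ndarray:
    """Collapse a (num_frames, height, width) image sequence into a single
    feature vector by averaging the frames and flattening the result."""
    return frames.mean(axis=0).ravel()

class NearestCentroidRecognizer:
    """Stand-in for the necklace's machine-learning model: stores one averaged
    feature vector ('centroid') per command and labels new input with the
    command whose centroid is closest."""

    def __init__(self):
        self.centroids = {}

    def train(self, examples: dict) -> None:
        # examples maps command label -> list of (frames) arrays
        for command, sequences in examples.items():
            feats = np.stack([extract_features(s) for s in sequences])
            self.centroids[command] = feats.mean(axis=0)

    def predict(self, frames: np.ndarray) -> str:
        feat = extract_features(frames)
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(self.centroids[c] - feat))

# Toy usage with random arrays standing in for real infrared chin images.
rng = np.random.default_rng(0)
training = {cmd: [rng.random((8, 32, 32)) for _ in range(5)] for cmd in COMMANDS}
recognizer = NearestCentroidRecognizer()
recognizer.train(training)
print(recognizer.predict(rng.random((8, 32, 32))))  # prints one of COMMANDS
```

In practice the recognized label would simply be sent over the Bluetooth link to the paired smartphone, which carries out the command.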
The system was initially trained by monitoring the chin movements of 20 volunteers as they silently spoke known words and phrases – 10 of them spoke English, while the other 10 spoke Mandarin. In the tests that followed, the volunteers silently issued 54 commonly used English commands, along with 44 Mandarin words and phrases.
The necklace proved to be 90.5 and 91.6 percent accurate at recognizing the English and Mandarin speech, respectively. However, accuracy decreased significantly when the device was used while walking, as the wearers' individual gaits caused their heads to move in an unpredictable manner.
It is hoped that once the technology has been improved, it could also be used in noisy places, as well as by people who lack the power of speech.
A paper on the research was recently published in the journal Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.