
Synthetic Telepathy with Subvocal Recognition | NeuroTechSC

This is an update from the OpenBCI Discovery Program.

NeuroTechSC is developing a silent speech interface that uses deep learning on neuromuscular (EMG) signals to identify the phonemes a user is subvocalizing. Eight EMG electrodes placed along the user's jawline record the signals produced when the user subvocally articulates while reading or thinking of a specific word. These signals are fed into a machine learning model that predicts which phonemes the user is subvocalizing.
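The pipeline above (8-channel EMG windows in, phoneme labels out) can be sketched in miniature. This is an illustrative toy only: the signal is synthetic, the per-channel RMS features and nearest-centroid classifier are stand-ins for whatever features and deep network the team actually uses, and the phoneme labels are hypothetical.

```python
# Toy sketch of the described pipeline: 8 channels of EMG are cut into
# short windows, reduced to per-channel RMS features, and classified into
# phonemes. Synthetic data and a nearest-centroid classifier stand in for
# the team's real recordings and deep learning model.
import numpy as np

FS = 250           # Cyton sample rate (Hz)
N_CHANNELS = 8     # one per jawline electrode
WINDOW = 50        # 200 ms windows at 250 Hz

def rms_features(window):
    """Per-channel root-mean-square amplitude, shape (N_CHANNELS,)."""
    return np.sqrt(np.mean(window ** 2, axis=1))

def predict(window, centroids):
    """Return the phoneme whose feature centroid is nearest."""
    feats = rms_features(window)
    return min(centroids, key=lambda p: np.linalg.norm(feats - centroids[p]))

# Hypothetical "trained" centroids for two phoneme classes.
centroids = {"/p/": np.full(N_CHANNELS, 1.0), "/m/": np.full(N_CHANNELS, 3.0)}

# A fake 200 ms window whose amplitude profile resembles the /m/ class.
rng = np.random.default_rng(0)
window = rng.normal(0.0, 3.0, size=(N_CHANNELS, WINDOW))
print(predict(window, centroids))  # → /m/
```

In a real setup the windows would come from the Cyton board's EMG stream rather than a random generator, and the classifier would be trained on labeled subvocalization recordings.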

This project expands upon a binary question-answering silent speech interface that recently won 1st place in the U.S. in the 2020 NeuroTechX competition. In that project, the model predicted the answer to a yes-or-no question. A demonstration video of that project can be viewed here.

OpenBCI’s 8-channel Biosensing Cyton Board allows us to record the EMG signals a user makes when subvocalizing a word. Although we had already purchased a Cyton board for our original project, this second board will enable us to prototype and improve much more rapidly. Previously, only our hardware team had access to the board for recording; now our ML and UI teams can test with it concurrently.

We believe that an accurate subvocal phonemic recognition headset will spark the adoption of neurotechnology as a novel interface for AI assistants. Invasive approaches remain far from mainstream adoption and EEG approaches suffer from low information density, but subvocal recognition occupies a healthy medium. Ultimately, this second board will enable us to develop phonemic recognition capabilities, the first steps toward truly revolutionizing silent-speech interfaces.

We plan to collaborate with a professor soon to publish a research paper. A demonstration video will be shared on our YouTube and LinkedIn pages, which are both kept up-to-date. Links to all our pages can be found at 

Our Team (in alphabetical order)
Project Leads: Chris Toukmaji, Kate Voitiuk, Rohan Pandey
Team Representatives: Alex Soliz (Writing), Conrad Pereira (UI), Jessalyn Wang (Machine Learning), Sabrina Fong (Hardware), Sahil Gupta (Data)
Hardware Team: Kanybek Tashtankulov, Micaela House
Data Team: Avneesh Muralitharan, Neville Hiramanek, Vijay Chimmi, Xander Merrick
Machine Learning Team: Aarya Parekh, Dina Chen, Marc Fischer
UI Team: Eric Dong, Kevin Xu, Loveleen Bassi
Writing Team: Kiki Galvez, Michael Pasaye, Taylor Pistone



I came across the term locked-in syndrome for the first time. I understand that this is a very broad term. I would like to understand whether HM would fall under this category (HM is a famous patient who could only remember his life up to about age 27 due to a hippocampus removal). Stephen Hawking suffered from a form of locked-in syndrome as well, and a special communication system was built for him.

Do connect with me:-

I find your research amazing…


Could it be used by people suffering from locked-in syndrome? My PhD thesis director and a fellow researcher at my current center are working on a project, funded by the Spanish government, with people in that situation, and this technology could make an extremely positive difference in how they communicate! Please let me know; my email is [email protected]. The project is the following: Cheers, keep up the great work!


Could this technology be used for the nonverbal who understand language but have neurological deficiencies that make speaking difficult?
