This is an update from the OpenBCI Discovery Program.
NeuroTechSC is developing a silent speech interface that uses deep learning and neuromuscular (EMG) signals to identify the phonemes a user is subconsciously vocalizing. Eight EMG electrodes placed along the user’s jawline record the signals produced when the user subvocally articulates a word while reading or thinking of it. These signals are passed into a machine learning model that predicts which phonemes the user is subvocalizing.
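To make the pipeline concrete, here is a minimal sketch of the windowed-EMG-to-phoneme idea. The eight-channel count comes from the post; everything else is an illustrative assumption: the toy three-phoneme label set, the per-channel RMS features, and a nearest-centroid classifier standing in for the team's actual deep learning model.

```python
import numpy as np

N_CHANNELS = 8                # one feature stream per jawline electrode
PHONEMES = ["p", "t", "k"]    # toy label set; the real phoneme inventory is larger


def features(window: np.ndarray) -> np.ndarray:
    """Per-channel RMS amplitude for one EMG window of shape (N_CHANNELS, n_samples).

    RMS is a common time-domain EMG feature; the project's real
    feature extraction is not described in the post.
    """
    return np.sqrt(np.mean(window ** 2, axis=1))


def nearest_centroid(feat: np.ndarray, centroids: dict) -> str:
    """Predict the phoneme whose mean feature vector is closest (a stand-in
    for the team's deep model)."""
    dists = {p: np.linalg.norm(feat - c) for p, c in centroids.items()}
    return min(dists, key=dists.get)


# Synthetic training data: give each toy phoneme a distinct activation
# pattern (strong activity on one channel) so the example runs without hardware.
rng = np.random.default_rng(42)
centroids = {}
for i, p in enumerate(PHONEMES):
    base = np.zeros(N_CHANNELS)
    base[i] = 2.0  # channel i "fires" for phoneme p
    samples = [
        features(rng.normal(base[:, None], 0.1, (N_CHANNELS, 50)))
        for _ in range(20)
    ]
    centroids[p] = np.mean(samples, axis=0)
```

With well-separated synthetic patterns, classifying a fresh window generated from the "t" pattern recovers the "t" label; the real system replaces both the features and the classifier with a learned deep model.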
This project expands upon a binary question-answering silent speech interface that recently won 1st place in the U.S. in the 2020 NeuroTechX competition. In that project, the model predicted the answer to a yes-or-no question. A demonstration video of that project can be viewed here.
OpenBCI’s 8-channel Cyton Biosensing Board allows us to record the EMG signals a user produces when subvocalizing a word. Although we had already purchased a Cyton board for our original project, this second board will let us prototype and iterate much more rapidly. Previously, only our hardware team had access to the board for recording; now our ML and UI teams can test with it concurrently.
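Before recorded windows reach a model, raw EMG from the board typically needs some cleanup. The post does not describe the team's preprocessing, so the following is only a crude single-channel sketch: remove the DC offset, rectify, and smooth into an amplitude envelope. The 250 Hz rate is the Cyton's default sampling rate; the 50 ms smoothing window is an assumption, and real pipelines usually also band-pass and notch filter.

```python
import numpy as np

FS = 250  # Cyton's default sampling rate in Hz


def emg_envelope(raw: np.ndarray, smooth_ms: float = 50.0) -> np.ndarray:
    """Crude amplitude envelope for one EMG channel.

    Steps: subtract the mean (DC offset), full-wave rectify, then smooth
    with a moving average of roughly `smooth_ms` milliseconds. This is a
    stand-in for the team's actual (unpublished) preprocessing.
    """
    centered = raw - raw.mean()          # remove electrode DC offset
    rectified = np.abs(centered)         # full-wave rectification
    k = max(1, int(FS * smooth_ms / 1000.0))
    kernel = np.ones(k) / k              # moving-average smoothing window
    return np.convolve(rectified, kernel, mode="same")
```

A flat (constant) signal yields an all-zero envelope, while any oscillating muscle activity produces a positive one, which is what downstream windowing and feature extraction would consume.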
We believe that an accurate subvocal phonemic recognition headset will spark the adoption of neurotechnology as a novel interface for AI assistants. Invasive approaches are still far from mainstream adoption and EEG approaches suffer from low information density, but subvocal recognition strikes a healthy middle ground. Ultimately, this second board will enable us to develop phonemic recognition capabilities, the first step toward truly revolutionizing silent-speech interfaces.
We plan to collaborate with a professor soon to publish a research paper. A demonstration video will be shared on our YouTube and LinkedIn pages, which are both kept up-to-date. Links to all our pages can be found at neurotech.ucsc.edu.
Our Team (in alphabetical order)
Project Leads: Chris Toukmaji, Kate Voitiuk, Rohan Pandey
Team Representatives: Alex Soliz (Writing), Conrad Pereira (UI), Jessalyn Wang (Machine Learning), Sabrina Fong (Hardware), Sahil Gupta (Data)
Hardware Team: Kanybek Tashtankulov, Micaela House
Data Team: Avneesh Muralitharan, Neville Hiramanek, Vijay Chimmi, Xander Merrick
Machine Learning Team: Aarya Parekh, Dina Chen, Marc Fischer
UI Team: Eric Dong, Kevin Xu, Loveleen Bassi
Writing Team: Kiki Galvez, Michael Pasaye, Taylor Pistone