This is an update from the OpenBCI Discovery Program. Click here for details on how to apply.
NeuroTechSC is developing a silent speech interface that uses deep learning and neuromuscular (EMG) signals to identify the phonemes a user is subvocalizing. Eight EMG electrodes placed along the user's jawline record the muscle activity produced when the user subvocally articulates a specific word while reading or thinking of it. These signals are fed into a machine learning model that predicts which phonemes the user is subvocalizing.
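As a rough illustration of the signal-to-phoneme step described above, the sketch below maps a windowed 8-channel EMG recording to phoneme logits with a small 1D convolutional network. The window length, network shape, and phoneme inventory size are placeholder assumptions for illustration, not the team's actual model.

```python
# Minimal sketch (not the team's published model) of a phoneme classifier
# over windowed 8-channel EMG. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS = 8        # one per jawline electrode
WINDOW_SAMPLES = 250  # e.g. 1 s of EMG at the Cyton's 250 Hz sampling rate
N_PHONEMES = 44       # assumed size of the target phoneme inventory

class PhonemeNet(nn.Module):
    """1D CNN that maps one EMG window to phoneme logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(64, N_PHONEMES)

    def forward(self, x):              # x: (batch, channels, samples)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)      # (batch, N_PHONEMES) logits

model = PhonemeNet()
dummy_window = torch.randn(1, N_CHANNELS, WINDOW_SAMPLES)  # stand-in for real EMG
print(model(dummy_window).argmax(dim=1))  # index of the predicted phoneme
```

In practice, the raw windows would typically be band-pass filtered and normalized before being fed to such a model.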
This project expands upon a binary question-answering silent speech interface that recently won 1st place in the U.S. in the 2020 NeuroTechX competition. In that project, the model predicted the user's answer to a yes-or-no question. A demonstration video of that project can be viewed here.
OpenBCI’s 8-channel Cyton Biosensing Board allows us to record the EMG signals a user produces when subvocalizing a word. Although we had already purchased a Cyton board for our original project, this second board will let us prototype and iterate much more rapidly. Previously, only our hardware team had access to the board for recording; now our ML and UI teams can test with it concurrently.
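For context on what this acquisition step can look like in code, the sketch below streams a few seconds of data from a Cyton board using the open-source BrainFlow SDK; the serial port and recording duration are placeholder assumptions.

```python
# Minimal sketch of reading 8-channel EMG from a Cyton board via BrainFlow.
import time

from brainflow.board_shim import BoardIds, BoardShim, BrainFlowInputParams

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"   # assumption: set to your dongle's port

board = BoardShim(BoardIds.CYTON_BOARD.value, params)
board.prepare_session()
board.start_stream()
time.sleep(5)                         # record ~5 s of subvocalization
data = board.get_board_data()         # rows = board channels, cols = samples
board.stop_stream()
board.release_session()

# Keep only the eight analog channels, used here as EMG.
emg_rows = BoardShim.get_emg_channels(BoardIds.CYTON_BOARD.value)
emg = data[emg_rows, :]               # shape: (8, n_samples) at 250 Hz
print(emg.shape)
```

With two boards, one such recording session can run on the hardware side while a second board feeds live data to the ML or UI team.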
We believe that an accurate subvocal phonemic recognition headset will spark the adoption of neurotechnology as a novel interface for AI assistants. Invasive approaches are still a long way from mainstream adoption and EEG-based approaches suffer from low information density, but subvocal recognition occupies a healthy middle ground. Ultimately, this second board will enable us to develop phonemic recognition capabilities, the first step toward truly revolutionizing silent-speech interfaces.
We plan to collaborate with a professor soon to publish a research paper. A demonstration video will be shared on our YouTube and LinkedIn pages, which are both kept up-to-date. Links to all our pages can be found at neurotech.ucsc.edu.
Our Team (in alphabetical order)
Project Leads: Chris Toukmaji, Kate Voitiuk, Rohan Pandey
Team Representatives: Alex Soliz (Writing), Conrad Pereira (UI), Jessalyn Wang (Machine Learning), Sabrina Fong (Hardware), Sahil Gupta (Data)
Hardware Team: Kanybek Tashtankulov, Micaela House
Data Team: Avneesh Muralitharan, Neville Hiramanek, Vijay Chimmi, Xander Merrick
Machine Learning Team: Aarya Parekh, Dina Chen, Marc Fischer
UI Team: Eric Dong, Kevin Xu, Loveleen Bassi
Writing Team: Kiki Galvez, Michael Pasaye, Taylor Pistone
https://drive.google.com/file/d/1fs4nf4OBoDRPpdzZtJfAxJLj6gOYuzM0
This is an awesome technology; please restrict it to good people
Your comment reminds me of this paper: “Forbidden knowledge in machine learning – reflections on the limits of research and publication”
https://link-springer-com.sire.ub.edu/article/10.1007/s00146-020-01045-4
I came across the term locked-in syndrome for the first time. I understand that this is a very broad term. I would like to understand whether HM would fall under this category (HM is a famous patient who could only remember his life up to the age of about 27 due to a hippocampus removal). Stephen Hawking suffered from a form of locked-in syndrome as well, and a special communication system was built for him.
Do connect with me: https://www.linkedin.com/in/santanu-banerjee-093929150/
I find your research amazing…
I’m a victim of a hacked BCI, or neural dusting. Is there any help in shutting it down? They even have control of my vitals. 3085282488, please help
May it be used by people suffering from locked-in syndrome? My PhD thesis director and a fellow researcher at my current center are working on a project funded by the Spanish government with people in that situation, and this technology could make such an extremely positive difference in their way of communicating! Please let me know, my email is [email protected]. The project is the following: https://www.antropologia.urv.cat/en/research/projects/locked-in-syndrome/ Cheers, keep up the great work!
Could this technology be used for the nonverbal who understand language but have neurological deficiencies that make speaking difficult?
Are there any ways to defend against subvocal recognition, especially done remotely with a microphone?