Hi Everyone, Rishi here!
I just received the new OpenBCI equipment and started testing with it! If you have not read my first post, here is a link to it: https://openbci.com/community/enabling-oral-communication-through-subvocal-recognition-for-aphasia/
In summary, I am creating a Brain-Computer Interface to help patients with aphasia (a language disorder often caused by stroke) speak by translating subvocal communication. In my last experiment, all of my equipment was homemade and I got an average accuracy of 75%. With OpenBCI's new equipment I am hoping to increase this accuracy. Additionally, I am working on making the translation run in real-time.
Today I ran my first set of tests: checking whether the device can distinguish the word “hi” from background noise. I will run the recordings through the Machine Learning model and let all of you know how it goes!
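For anyone curious what a "hi" vs. background-noise classifier could look like, here is a minimal sketch of the general approach: window the signal, extract simple time-domain features, and train a binary classifier. This is not my actual pipeline; the synthetic data, feature choices, and scikit-learn SVM are all stand-in assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 1-second windows of a single channel sampled
# at 250 Hz. "hi" windows get a small injected burst; "noise" windows are
# baseline only. Real recordings would replace this synthetic data.
n_windows, n_samples = 100, 250
noise = rng.normal(0, 1, (n_windows, n_samples))
hi = rng.normal(0, 1, (n_windows, n_samples))
hi[:, 100:150] += rng.normal(0, 3, (n_windows, 50))  # simulated articulation burst

X_raw = np.vstack([noise, hi])
y = np.array([0] * n_windows + [1] * n_windows)  # 0 = background, 1 = "hi"

def features(windows):
    """Simple per-window time-domain features: RMS, variance, zero crossings."""
    rms = np.sqrt((windows ** 2).mean(axis=1))
    var = windows.var(axis=1)
    zc = (np.diff(np.sign(windows), axis=1) != 0).sum(axis=1)
    return np.column_stack([rms, var, zc])

X_train, X_test, y_train, y_test = train_test_split(
    features(X_raw), y, test_size=0.25, random_state=0, stratify=y)

# Scale features, then fit an RBF-kernel SVM as the binary classifier
clf = make_pipeline(StandardScaler(), SVC())
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

On real subvocal recordings, the feature step is usually where most of the work goes (bandpass filtering, artifact rejection, possibly frequency-domain features) before any model sees the data.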
If you have any questions, suggestions, or tips, please send me an email at [email protected]!