Can synthesized speech be produced by processing EEG?

vinnyMS1 Canada
edited April 2022 in Software

I'm trying to create an Android app that speaks through the gamma wave with sound. The gamma wave "speaks" by replacing EEG values (numbers) with words; each word is associated with an MP3 file, and the app plays the sound file (MP3) that belongs to the word.

The gamma wave becomes speech feedback.
It's going to be an Android app.
Is it a good idea to create?

Comments

  • wjcroft Mount Shasta, CA

    @vinnyMS1 said:
    ... the gamma wave "speaks" by replacing EEG values (numbers) with words; each word is associated with an MP3 file, and the app plays the sound file (MP3) that belongs to the word. The gamma wave becomes speech feedback. It's going to be an Android app.

    Vinny, hi.

    I'm sorry, but EEG cannot be decoded directly into words, no matter which waveband is used: gamma, beta, alpha, theta, delta.

    On this previous thread you made,

    https://openbci.com/forum/index.php?p=/discussion/3257/what-is-the-minimum-maximum-eeg-data-values

    I shared a link showing some typical BCIs (Brain-Computer Interfaces) and the concepts they use. None of these involves decoding words directly from EEG. However, there ARE BCIs that use VEPs (Visual Evoked Potentials), such as cVEP and SSVEP, which let the user place attention on an area of the screen that is 'flashing' at a certain rate. The measured visual response in the brain then allows a program to determine which 'letter' or 'command' on the screen is being selected.

    https://www.gtec.at/product/bcisystem/ [typical BCI paradigms]

    This system uses cVEP:

    https://openbci.com/forum/index.php?p=/discussion/2783/mindaffect-announces-open-source-release-of-their-cvep-bci-speller-games-etc
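    For readers curious how an SSVEP-style speller can tell which flashing target the user is attending to, here is a minimal sketch on synthetic data. The 250 Hz sample rate and the 8/10/12 Hz flicker frequencies are illustrative assumptions, not taken from any specific system: the idea is simply to correlate the EEG against reference sine and cosine waves at each candidate flicker rate and pick the strongest response.

```python
import math
import random

random.seed(0)

FS = 250                       # assumed sample rate (Hz)
DURATION = 2.0                 # seconds of synthetic "EEG"
FLICKERS = [8.0, 10.0, 12.0]   # hypothetical stimulus flicker rates (Hz)

def synth_eeg(attended_hz):
    """Toy EEG: a weak sinusoid at the attended flicker rate buried in noise."""
    n = int(FS * DURATION)
    return [0.3 * math.sin(2 * math.pi * attended_hz * t / FS) + random.gauss(0, 1.0)
            for t in range(n)]

def power_at(signal, hz):
    """Power of the signal's correlation with sine/cosine references at hz."""
    s = sum(x * math.sin(2 * math.pi * hz * t / FS) for t, x in enumerate(signal))
    c = sum(x * math.cos(2 * math.pi * hz * t / FS) for t, x in enumerate(signal))
    return (s * s + c * c) / len(signal)

eeg = synth_eeg(attended_hz=10.0)          # user "attends" the 10 Hz target
scores = {hz: power_at(eeg, hz) for hz in FLICKERS}
detected = max(scores, key=scores.get)     # frequency with the most power wins
print(detected)
```

    Note this works precisely because the stimulus frequency is known in advance and evokes a strong, phase-locked visual response; nothing analogous exists for free-form words.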

    Regards, William

  • vinnyMS1 Canada
    edited April 2022

    What happens if the Android app works as explained?
    Is it something OpenBCI can support?
    I want to join an OpenBCI team if possible, as an idea person.
    In theory the idea works already; I need some more time to create it.
    I have some Java programming experience.

  • wjcroft Mount Shasta, CA

    Vinny, hi.

    Not sure of your question. Are you referring to an Android version of MindAffect? That may be possible.

    If you are referring to an Android app that translates EEG directly into speech, no, that is impossible, as already explained. If you want to find teams to work on neurotech, take a look at NeuroTechX, which has local chapters and many online resources. You can also sign up for their Slack from links there.

    https://neurotechx.com/

    Regards, William

  • vinnyMS1 Canada
    edited April 2022

    It's possible for the app to decode the EEG, but I want to know who would be interested in it. The gamma wave will speak in real time with a neural voice.

    Then, while it speaks, the user will be able to control what it says with their brain by focusing more. Eventually, by training to control the wave, it will say exactly what the user thinks.

  • wjcroft Mount Shasta, CA
    edited April 2022

    As posted previously, this is a good list of proven BCI paradigms,

    https://www.gtec.at/product/bcisystem/ [typical BCI paradigms, including MindAffect cVEP types]

    There is no form of BCI that can deduce human thought / words directly from gamma EEG or any other type of EEG. The signal-to-noise ratio is just too low. Remember that EEG is the massed / summed activity of huge numbers of neurons. Even with cortical EEG (ECoG -- electrodes inserted through the skull and resting on the surface of the brain), this is difficult; see the paper posted below on April 24. There is no generalized scalp-EEG 'thought-to-speech' or transcription technology. The other forms of BCI listed in the link above are all that current technology supports at this time.
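    The 'massed / summed activity' point can be illustrated numerically. In the toy model below (my own construction, not from any paper), a sensor sees only the sum of many independent unit-variance sources; the power of one target source relative to the summed interference from the others falls off roughly as 1/(N-1):

```python
import random
import statistics

random.seed(1)

def snr_of_summed_sources(n_sources, n_samples=2000):
    """One 'target' source among n_sources independent unit-variance sources.
    The sensor only sees the grand sum, so the interference is the other
    n_sources - 1 sources; return target power / interference power."""
    target = [random.gauss(0, 1) for _ in range(n_samples)]
    interference = [sum(random.gauss(0, 1) for _ in range(n_sources - 1))
                    for _ in range(n_samples)]
    return statistics.pvariance(target) / statistics.pvariance(interference)

# SNR collapses roughly as 1 / (n_sources - 1)
for n in (2, 10, 100, 1000):
    print(n, round(snr_of_summed_sources(n), 5))
```

    Scalp EEG sums on the order of tens of millions of neurons per electrode, so the single-source SNR is vastly worse than even the 1000-source case here.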

    If you want to pursue practical, real-world types of BCI with other collaborators, I suggest checking out NeuroTechX.

  • Billh

    Vinny,

    Imagine a basement of a large ten-story office building filled with a thousand typists, all typing different reports at once. You put a decibel sound meter on the outside of the roof of the building. You want to use the readings of the decibel meter to read what a single typist in the middle of the basement is typing.

    That task is much easier than reading thoughts with scalp EEG gamma.

  • I created it already as a concept; it will work.

  • @Billh said:
    Vinny,

    Imagine a basement of a large ten-story office building filled with a thousand typists, all typing different reports at once. You put a decibel sound meter on the outside of the roof of the building. You want to use the readings of the decibel meter to read what a single typist in the middle of the basement is typing.

    That task is much easier than reading thoughts with scalp EEG gamma.

    the gamma will speak to the delta and will make the person sound through the brain as it speaks through voice

  • wjcroft Mount Shasta, CA

    If you post your BCI goals on the NeuroTechX Slack, you may get some feedback, comments and suggestions. Also people make posts to find potential collaborators for projects. Sign up for the Slack here:

    https://neurotechx.com/

    Regards, William

  • wjcroft Mount Shasta, CA

    Just found this paper, using cortical EEG (ECoG) to decode speech.

    https://www.frontiersin.org/articles/10.3389/fnins.2019.01267/full
    "Generating Natural, Intelligible Speech From Brain Activity in Motor, Premotor, and Inferior Frontal Cortices"

  • wjcroft Mount Shasta, CA

    Another ECoG study. It states that they achieved a 32% accuracy score, which is worse than a 50% chance rate. However, they did get better accuracy on phonemes.

    https://pubmed.ncbi.nlm.nih.gov/33836507/
    "Generalizing neural signal-to-text brain-computer interfaces"

  • @wjcroft said:
    Just found this paper, using cortical EEG (ECoG) to decode speech.

    https://www.frontiersin.org/articles/10.3389/fnins.2019.01267/full
    "Generating Natural, Intelligible Speech From Brain Activity in Motor, Premotor, and Inferior Frontal Cortices"

    thanks

    that's almost like my work, very interesting

  • wjcroft Mount Shasta, CA

    The subjects in this case had their skulls temporarily opened for brain surgery; not sure what type. This allowed placement of the temporary ECoG electrode array used during the experimental portion of the operation.

    They volunteered for the experiment, which I assume occurred after the primary surgery but before the ECoG array was removed and the skull was re-closed. For this type of work, ECoG was required. Normal EEG electrodes placed at the scalp would pick up too much noise to perform this experiment.

    I created something different: just with basic EEG electrodes, it can make the brain speak directly with software and a brain sensor.
    The gamma wave speaks in real time, one word every one or two seconds.

    I will use the NeuroSky MindWave.

  • wjcroft Mount Shasta, CA

    Vinny, thanks.

    However, given that the ECoG research papers listed previously required direct placement of electrodes on the cortical surface to achieve their results -- it seems very unlikely that those researchers would have opted for brain surgery if the experiment had been at all possible with surface EEG. The noise level is just too high with scalp EEG, as mentioned previously.

    Regards, William

  • In the noise there's still valuable data. Just as an audio editor can accurately separate the noise in a sound file and extract the singer's voice, the EEG noise can be filtered out with software. The key is advanced software, not an advanced sensor (the sensor stays basic), and no surgery is needed. The software splits and filters out the noise; the sensor is the basic version we already have.

  • wjcroft Mount Shasta, CA

    Did you look at the papers? They used EXTREMELY complex and sophisticated EEG signal processing to get the signal out of the noise. These are expert scientists with peer-reviewed publication records. If it had been possible for them to use scalp EEG, they would have done so. They could not, because the noise level is just way, way too high.
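    To make this concrete: software can certainly isolate the gamma band -- a windowed-sinc band-pass filter is a few dozen lines, sketched below on a synthetic two-tone signal with an assumed 250 Hz sample rate -- but what it recovers is the entire summed band, not any individual neural 'voice'. Filtering selects frequencies; it cannot un-mix thousands of overlapping sources occupying the same band.

```python
import math

FS = 250  # assumed sample rate (Hz)

def bandpass_fir(low_hz, high_hz, n_taps=101):
    """Windowed-sinc band-pass: high-cut low-pass minus low-cut low-pass."""
    def sinc_lowpass(cut_hz):
        fc, m = cut_hz / FS, n_taps - 1
        h = []
        for i in range(n_taps):
            x = i - m / 2
            v = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
            v *= 0.54 - 0.46 * math.cos(2 * math.pi * i / m)  # Hamming window
            h.append(v)
        total = sum(h)
        return [v / total for v in h]  # normalize to unity DC gain
    return [a - b for a, b in zip(sinc_lowpass(high_hz), sinc_lowpass(low_hz))]

def convolve(x, h):
    return [sum(h[k] * x[i - k] for k in range(len(h)) if 0 <= i - k < len(x))
            for i in range(len(x))]

def tone_strength(x, hz):
    """Relative strength of the sinusoidal component at hz."""
    s = sum(v * math.sin(2 * math.pi * hz * t / FS) for t, v in enumerate(x))
    c = sum(v * math.cos(2 * math.pi * hz * t / FS) for t, v in enumerate(x))
    return math.sqrt(s * s + c * c) / len(x)

# Synthetic "EEG": 10 Hz alpha + 40 Hz gamma, equal amplitude
sig = [math.sin(2 * math.pi * 10 * t / FS) + math.sin(2 * math.pi * 40 * t / FS)
       for t in range(1000)]
gamma_band = convolve(sig, bandpass_fir(30, 50))

# The 30-50 Hz filter keeps the 40 Hz component and rejects the 10 Hz one...
print(tone_strength(gamma_band, 40), tone_strength(gamma_band, 10))
# ...but everything inside that band still comes through summed together.
```

    This is exactly the separation the audio-editor analogy overstates: a filter can pull 'gamma' out of 'alpha', but it cannot pull one typist out of a thousand typing in the same room.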

    You stated above: "I created something different: just with basic EEG electrodes, it can make the brain speak directly with software and a brain sensor. The gamma wave speaks in real time, one word every one or two seconds." Is this actually a true statement, or is it your intention? You have worded it as if you have already accomplished this.

    Regards, William

  • I have created it already; I will share the results later. The brain changes how it speaks every time it hears speech through the gamma wave. When the person hears the gamma wave, he or she can accept or reject it. By rejecting, the brain changes what it's saying and can start saying the opposite, or other things, instead of what it hears. When you hear "today is a bad day", you could reject it, and when you reject it the brain can start saying "today is kind of a good day"; if you reject it again it will say "today is a good day" or "today is a VERY good day". Hearing your brain and rejecting what it says can make the brain talk differently. Accept what it says and you stay the same.

    I will share it when ready.
