Classify EEG of seeing two different objects [resolved]
I recently started a brain-signal capture project using a BCI Cyton module, dry electrodes, and the Mark4 helmet. My objective is to classify the brain signals produced when a subject sees two different objects. To that end, I ran an experiment in which a ball and a bottle appeared alternately every 10 seconds against a plain background. Unfortunately, I didn't get any spikes or differences in the signal when an object appeared. I repeated the experiment many times, and also tried blue and white colored paper instead of the ball and bottle, but every time I got the same kind of output with some random spikes. I do get a clear spike on eye opening and closing, however. So I am unable to classify the brain signals, and I don't know why I am getting only spurious signals. Are my electrodes not capable of detecting the signal properly? Is my experiment not designed well enough to produce any spikes? Can you help me run the experiment properly, or do you have any suggestions for me?
Comments
Hi Snobin,
EEG and BCI systems generally cannot do "thought detection" or "object detection", because the ensembles / networks of neurons that perform such operations do not produce clearly distinguishable signals at the scalp.
What is your goal with your BCI, in terms of how you want to use it in a project? Common BCI paradigms that you can look up online include: Motor Imagery, P300, and Visual Evoked Potentials (SSVEP and cVEP). An example of a cVEP (code-based VEP) is the free open-source MindAffect project:
Regards, William
Thank you @wjcroft for your immediate feedback. Actually, I want to use a neural network to classify the signals the brain outputs when it sees an object (bottle, ball) versus when it sees nothing (empty background). Right now I get only the same kind of output with random spikes, and the two conditions cannot be differentiated. Instead of random spikes, I expect spikes only when the object appears.
Hi Snobin, please try a web search on some of the BCI paradigms I mentioned above. A BCI has no ability to detect "object type". The number of neurons involved in recognizing a specific object is too small to register at scalp EEG locations; EEG requires large numbers of neurons firing simultaneously to overcome its poor signal-to-noise characteristics.
The P300 'oddball' BCI paradigm is slightly related to what you are attempting. However, as you can see from the previous video, cVEP speed and available features generally exceed those of P300 BCIs.
https://en.wikipedia.org/wiki/Oddball_paradigm
Regards, William
Another page describing the P300 oddball, and how such experiments are performed and measured:
https://backyardbrains.com/experiments/p300
You can also find related ML pages:
https://www.google.com/search?q=p300+oddball+machine+learning
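As a toy illustration of the kind of pipeline those search results describe, here is a hedged NumPy sketch that classifies averaged target vs. non-target epochs by nearest-template matching. Everything in it is synthetic and assumed (the 10 µV "P300-like" bump, the 20 µV noise level, the trial counts); it is not real EEG data and not the method of any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 250                      # Cyton sample rate in Hz (assumption)
t = np.arange(fs) / fs        # 1-second epochs
# Synthetic "P300-like" component: a 10 uV bump near 300 ms (made-up numbers)
p300 = 10.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def make_epochs(n, is_target):
    """Simulate n single-trial epochs: component (if target) + 20 uV noise."""
    base = p300 if is_target else 0.0
    return base + 20.0 * rng.standard_normal((n, t.size))

# "Training": build averaged templates from labelled epochs
tmpl_target = make_epochs(200, True).mean(axis=0)
tmpl_nontarget = make_epochs(200, False).mean(axis=0)

def classify(epoch):
    """Nearest-template decision (minimum Euclidean distance)."""
    d_t = np.linalg.norm(epoch - tmpl_target)
    d_nt = np.linalg.norm(epoch - tmpl_nontarget)
    return "target" if d_t < d_nt else "nontarget"

# "Testing": average fresh epochs before classifying, as real P300 work does,
# because single trials are dominated by noise
test_target = make_epochs(100, True).mean(axis=0)
test_nontarget = make_epochs(100, False).mean(axis=0)
print(classify(test_target), classify(test_nontarget))
```

Note that the test epochs are averaged before classification; on raw single trials this toy classifier would be near chance, which is the whole point William is making about signal-to-noise.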
Again, if you are mainly looking for a free, open-source BCI solution that operates much faster than P300, Motor Imagery, or SSVEP, consider the MindAffect project mentioned previously. It is superior in many ways.
I'm using a Cyton board and gold cup electrodes for my project.
I am trying to find differences in brain signals while viewing different visuals (e.g., a car, a lion, a tree, etc.), but so far I have not been able to see any changes in the signals I captured.
I've used positions O1, Oz, O2, PO3, POz, PO4, P3, and P4 as electrode sites.
I've been stuck on this issue for some weeks now, and I'm new to this area. I hope to get help from your side to find a solution.
Image description: the red box marks the time interval where I expect any changes in the signal.
@Nabeen, hi.
I've merged your new thread into this existing thread. See some of the previous comments.
Here are some other links to explore, the first a list of commonly used BCI paradigms,
https://www.gtec.at/product/bcisystem/
And this page, which is a tutorial on P300 with Cyton,
https://docs.openbci.com/Examples/VideoExperiment/
If you are looking for a BCI paradigm which has the most friendly and productive user experience, see the MindAffect links / video above.
Regards, William
Hi @wjcroft,
I've done some experiments with visual cortex signals using the Cyton board, trying to classify signals corresponding to different visuals, and I couldn't get any significant difference between them. Is it because the signals from the visual cortex are weak? Would signal amplification help solve this problem? Is it possible to add a pre-amplifier between the electrode output and the Cyton? If so, can you share any details on how to do that?
Razan, hi.
If you look back at the previous comments: it's generally not possible to classify EEG by 'type' of object ('car', 'door', 'tree', 'river', etc.), because the neurons and networks involved in that pattern recognition are so few that the signal is not detectable at the scalp. Amplification will not help, because the signal-to-noise ratio is so low. Re-read my November 5 comment and visit the "list of commonly used BCI paradigms" link.
Regards, William
Hi @wjcroft,
I did some tests with this tutorial on P300 with Cyton. Not the exact arrangement: I used only video as the stimulus, without the photoresistor providing an additional input, since I just wanted to analyze how the signals change across different images, or at least cat (or dog) versus a black image. Isn't it possible to see changes in the signals when an image (cat/dog) is shown versus nothing (a black image)?
Do you understand the P300 'oddball' paradigm?
https://en.wikipedia.org/wiki/Oddball_paradigm
P300 processing requires signal averaging, because the signal is so weak.
https://backyardbrains.com/experiments/p300
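To make the averaging point concrete, here is a hedged NumPy sketch using entirely synthetic numbers (a 2 µV evoked bump buried in 20 µV background noise, both made up for illustration): time-locked averaging of N trials shrinks the noise by roughly sqrt(N), which is why single trials look like random spikes while the ensemble average reveals the weak component.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 250                       # samples per second (assumption)
n_trials = 200
t = np.arange(fs) / fs         # 1-second epochs

# Weak synthetic evoked component: a 2 uV bump near 300 ms (made-up numbers)
signal = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# Each single trial is the component buried in 20 uV background noise
trials = signal + 20.0 * rng.standard_normal((n_trials, fs))

average = trials.mean(axis=0)                 # time-locked ensemble average
single_noise = (trials[0] - signal).std()     # around 20 uV
avg_noise = (average - signal).std()          # around 20 / sqrt(200) uV

print(f"single-trial noise: {single_noise:.1f} uV")
print(f"averaged noise:     {avg_noise:.1f} uV")
```

With a single trial the bump is invisible under the noise; after averaging 200 trials the residual noise drops by roughly a factor of 14, leaving the component visible.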
If you structure your experiment as an oddball paradigm, for example a succession of separate images such as:
cat, cat, cat, cat, dog, cat, cat, cat, dog, cat, ...
and tell the subject to watch for dogs, then the P300 will fire on the dogs, precisely because they are the 'odd' / oddball stimulus, not the norm. Read the above links. The subject's brain produces a P300 when the 'oddball' stimulus occurs.
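A minimal sketch of how one might generate such a stimulus sequence, assuming a ~20% target rate and a rule that targets are never back-to-back (common but not universal constraints in oddball designs; the "cat"/"dog" labels are just placeholders taken from this thread):

```python
import random

random.seed(42)

STANDARD, TARGET = "cat", "dog"     # placeholder stimuli from the thread

def oddball_sequence(n_stimuli=100, target_prob=0.2, min_gap=2):
    """Rare targets among frequent standards, with at least `min_gap`
    standards between consecutive targets."""
    seq, since_target = [], min_gap
    for _ in range(n_stimuli):
        if since_target >= min_gap and random.random() < target_prob:
            seq.append(TARGET)
            since_target = 0
        else:
            seq.append(STANDARD)
            since_target += 1
    return seq

seq = oddball_sequence()
print(f"{seq.count(TARGET)} targets in {len(seq)} stimuli")
```

Epochs time-locked to the rare "dog" stimuli would then be averaged separately from the frequent "cat" epochs to look for the P300 difference.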