How to differentiate 2 items in Cyton
Hi guys,
I am trying to classify two different types of hand movements: 1. open hand, 2. closed hand. Basically, when the user thinks about opening or closing the hand, the model should be able to detect that. That is the ideal situation. If that is not possible, then I would like to at least differentiate between two different items.
I am using an Ultracortex Mark IV headset, and I have modified it so that all 16 electrodes are focused on the frontal lobe, as shown in the attached image.
Thanks for any help in advance!
Comments
Hi Aaravsharma,
Normally, Motor Imagery BCI is done using left- vs. right-hand movements, because the lateralization of the motor centers makes the two classes easier to distinguish. Your open/close scheme does not have that advantage.
https://www.google.com/search?q=motor+imagery+bci
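To see why that lateralization matters: imagining left- vs. right-hand movement suppresses 8-12 Hz 'mu' power over the opposite hemisphere (roughly electrodes C3 and C4), and that asymmetry is what classifiers pick up. Here is a minimal sketch of the comparison, assuming you already have a band-pass-filtered epoch as a NumPy array; the C3/C4 channel indices are placeholders for your particular wiring:

```python
import numpy as np
from scipy.signal import welch

FS = 125                 # Cyton + Daisy sample rate (Hz)
C3_IDX, C4_IDX = 4, 5    # placeholder indices; depends on your montage

def mu_band_power(channel_1d, fs=FS, band=(8.0, 12.0)):
    """Mean PSD in the mu band for one channel of one epoch."""
    freqs, psd = welch(channel_1d, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def laterality_score(epoch_2d):
    """Positive -> right-hand imagery (C3 desynchronizes), negative -> left."""
    p_c3 = mu_band_power(epoch_2d[C3_IDX])
    p_c4 = mu_band_power(epoch_2d[C4_IDX])
    return (p_c4 - p_c3) / (p_c4 + p_c3)
```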
There is a (now deprecated) tutorial where you might get some ideas, but we don't recommend following it, because its creator is no longer responding to issues.
https://docs.openbci.com/Examples/EEGProjects/MotorImagery/
Another older example (by Jeremy Frey) is here:
https://openbci.com/community/openbci-crossing-swords-with-motor-imagery/
His website:
https://phd.jfrey.info/projects/
William
Also note: if you look at the example links, you can see that they concentrate most electrodes around the motor strip. That is where Motor Imagery classification is detected best.
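If it helps while you re-pin the headset, here is one way to describe a motor-strip-heavy 16-channel montage in MNE-Python. Every channel name below is an assumption; substitute the 10-20 positions you actually use:

```python
import mne

# Hypothetical 16-channel layout biased toward the motor strip;
# adjust to match where you actually place the Ultracortex electrodes.
ch_names = ['Fp1', 'Fp2', 'F3', 'F4',
            'FC1', 'FC2', 'FC5', 'FC6',
            'C3', 'Cz', 'C4', 'CP1',
            'CP2', 'CP5', 'CP6', 'Pz']
info = mne.create_info(ch_names=ch_names, sfreq=125.0, ch_types='eeg')
info.set_montage('standard_1020')  # maps names to standard scalp positions
```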
Yup, in the modified headset I made sure to add some electrodes around the motor strip as well. Thanks for confirming that! I'll also check out that older tutorial you sent. By the way, most of the electrodes are on the frontal lobe. Do you think there would be a significant accuracy increase if most of them were moved to the motor strip?
Thanks!!
I suggest patterning your electrode setup after ideas used in the tutorials or other BCI Motor Imagery papers.
https://www.google.com/search?q=motor+imagery+bci
All of these examples require 'training' a classifier. The first tutorial above only needed 8 channels, but Jeremy's uses 16.
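To make the 'training' step concrete, here is a minimal sketch of the usual pipeline (band-passed epochs -> CSP spatial filtering -> linear classifier) using MNE-Python and scikit-learn. The `epochs.npy` / `labels.npy` inputs are hypothetical stand-ins for data from your own recording session:

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Assumed inputs: epochs -> (n_trials, n_channels, n_samples) array,
# already band-passed to roughly 8-30 Hz; labels -> 0 = left, 1 = right.
epochs = np.load('epochs.npy')   # hypothetical files from your own session
labels = np.load('labels.npy')

clf = Pipeline([
    ('csp', CSP(n_components=4)),            # learn 4 spatial filters
    ('lda', LinearDiscriminantAnalysis()),   # classify the CSP features
])
scores = cross_val_score(clf, epochs, labels, cv=5)
print(f'Cross-validated accuracy: {scores.mean():.2f}')
```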
There are better BCI paradigms than Motor Imagery, for example VEP (Visual Evoked Potentials). Some recent community posts:
https://openbci.com/community/mind-controlled-robot-openbci-mindaffectbci-maqueen-v2/
https://openbci.com/community/mind-controlled-game-openbci-mindaffectbci-unity/
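The mindaffectBCI projects above use code-modulated VEP; as a simpler flavor of the same family, classic SSVEP decoders score each candidate flicker frequency by canonical correlation between the EEG and reference sinusoids. A rough sketch with scikit-learn, where the candidate frequencies and the 2-second window are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 125  # Cyton + Daisy sample rate (Hz)

def cca_score(eeg, freq, fs=FS, harmonics=2):
    """Canonical correlation between EEG (n_samples, n_channels)
    and sin/cos references at `freq` and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack(
        [f(2 * np.pi * (h + 1) * freq * t)
         for h in range(harmonics) for f in (np.sin, np.cos)])
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Pick whichever stimulus frequency correlates best with the EEG.
candidate_freqs = [8.0, 10.0, 12.0]          # hypothetical flicker rates
eeg_window = np.random.randn(2 * FS, 16)     # stand-in for 2 s of real data
best = max(candidate_freqs, key=lambda f: cca_score(eeg_window, f))
print('Detected target frequency:', best)
```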
Okay, I'll take a look at that. Right now I have the Cyton + Daisy, so I have 16 electrodes. I am going to move some of the electrodes from the frontal lobe to the motor strip (since the other tutorials do that). I'm also taking a look at VEP.
I just checked out Jeremy's tutorial, and I was wondering if there is an alternative to it, since he was detecting clenched teeth, closed eyes, and eye blinks. Since I am controlling an exoskeleton, I don't want the controls to be those; instead, I want the user to think about, say, their left or right hand moving.
I also checked the cVEP approach, and the main problem with it is that it requires the user to look at a screen to control the arm.
Are there any alternatives to such tutorials? I liked the initial tutorial I mentioned, but the author remains unresponsive.
I am also adding more electrodes around the motor strip (since I am trying to do Motor Imagery either way).
My impression is that Jeremy only used the 'artifact' signals (teeth clenches, blinks) as warm-up examples. He definitely goes on to train left/right Motor Imagery detection.
Some BCI setups use AR glasses with cVEP. That way you can be looking out at the environment while making selections from various menus.
https://www.google.com/search?q=bci+with+ar+glasses
Okay, I will add more electrodes over my motor strip and complete Jeremy's tutorial. Thank you!!
Another advantage of Jeremy's tutorial is that he would probably answer an email if you sent one. His email address is listed in the CV / resume on his website.
Oh yeah, that's great then. Thank you!!
Hi wjcroft, thank you for your help in directing me to Jeremy's tutorial. I am halfway through it, although I have run into some confusion. There are two files he talks about: 'mi-csp-0-acquisition.xml' and 'mi-csp-1-acquisition.xml'. I needed to open mi-csp-0-acquisition.xml in the OpenViBE Designer and modify some boxes, which he mentioned and explained (and which I completed correctly).
But right after that, he says 'You’ll have to do the same for the next scenario, mi-csp-1-acquisition.xml'. When I open mi-csp-1-acquisition.xml, its boxes are different from those in mi-csp-0-acquisition.xml...
Do you have any suggestions on what to do?
Jeremy's tutorial is just elaborating on the existing OpenViBE material, specifically:
https://openvibe.inria.fr/motor-imagery-bci-with-common-spatial-pattern-filter/
https://openvibe.inria.fr/category/documentation/user/existing-scenarios-documentation/
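For intuition about what those CSP scenarios are actually training: CSP looks for spatial filters that maximize variance for one class while minimizing it for the other, which boils down to a generalized eigenvalue problem on the two class covariance matrices. A bare-bones sketch (no regularization), assuming `trials_a` and `trials_b` hold band-passed epochs for the two classes:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """trials_* : (n_trials, n_channels, n_samples) band-passed epochs.
    Returns 2*n_pairs spatial filters, one row per filter."""
    def mean_cov(trials):
        covs = [np.cov(t) for t in trials]   # channel x channel covariance
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: ca @ w = lam * (ca + cb) @ w
    eigvals, eigvecs = eigh(ca, ca + cb)
    # Extreme eigenvalues give the most discriminative filters.
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T
```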
Also see the links on this search result:
https://www.google.com/search?q=mi-csp-1-acquisition.xml
I'm not an OpenViBE expert. They do have a discussion forum, on which you can post:
https://openvibe.inria.fr/forum/viewtopic.php?t=9964
https://openvibe.inria.fr/forum/
https://openvibe.inria.fr/forum/search.php?keywords=motor+imagery [search query on MI]
And again, you may be able to email Jeremy. But I would first suggest exploring the other OpenViBE resources. The post from Jeremy on the OpenBCI Community page is from 2015. TEN years ago!
Thanks for sending those links over. I'll check out the first couple of links you sent, and the forums if I still can't find a solution.
Wow, yeah, 2015 was long ago!!
Thanks for your help again!
Hey wjcroft,
I am running into another error and I can't find the solution to it. I am trying to post my question on the OpenViBE forum, but I need access to do so, and a board moderator hasn't approved my account yet.
Could you help me by providing Jeremy's email address? I looked all over and I can't find it. Thank you for your help in advance!
Press the 'CV' button at the top of his page,
https://phd.jfrey.info/
Thank you