Hello. I want to acquire P300 in real time, for example to type text with my mind. Is this possible with any of your OpenBCI kits? Or do I need to look for another interface?
Typical P300 spellers based on a row-column matrix use 8 or more sensors.
http://www.gtec.at/content/download/1829/11425/file/NSL26188.pdf
Fz Cz Pz Oz P3 P4 PO7 PO8 (8 sensors), with a 6 x 6 matrix flashed a row or column at a time. This appears to be similar to their commercial product, which uses a 10 x 5 matrix:
http://www.gtec.at/Products/Complete-Solutions/intendiX-Specs-Features
The OpenViBE CoAdapt P300 tutorial uses:
http://openvibe.inria.fr/coadapt-p300-stimulator-tutorial/
Fz C3 Cz C4 P7 P3 Pz P4 P8 O1 Oz O2 (12 sensors), with a 10 x 5 matrix.
https://www.google.com/search?q=openvibe+p300+speller

Chip has firmware that emulates the OpenEEG protocol (which OpenViBE already supports). But the downside is that OpenEEG is low resolution (10-bit samples) vs. our 24-bit samples, and OpenEEG carries only 6 channels.
I've started looking at the OpenViBE driver internals, but this would be an ideal project for a savvy C++ / GTK / Glade wizard. The C++ part is straightforward, and our BrainBay protocol parser is almost the same as the one used in OpenViBE. The GTK API is used for their acquisition server device control panel and is a bit more arcane; Glade is the GTK form layout configuration tool.
http://openvibe.inria.fr/documentation-index/
See the Acquisition Drivers section. There are also FOUR P300 spellers listed in the Existing Scenarios section.
William
Update (~ December 24): I just saw a post (on the LinkedIn Signal Processing forum) from Yann Renard, founder of Mensia and OpenViBE; they are currently working on a native OpenBCI driver. My guess is that it will be available in January 2015.
I know there are some people out there waiting for a P300 speller based on OpenBCI, so I did my best to document how to achieve that with OpenViBE:
The method can be improved -- I'm also waiting for the release of a real acquisition driver -- and I will shortly propose a dedicated page on the "docs" repo so everyone can contribute to the topic. It's time to kick off practical applications!
Your blog post really "spells out" all the steps concisely. :-)
I'm certain that this will get tweeted and featured all over the place. Great publicity for OpenBCI. @biomurph @conor_obci @chipaudette
Do you have an estimate of how well the Coadapt prototype speller setup performs, relative to, say, a commercial unit such as g.tec's Intendix? I have no idea how these spellers are benchmarked and rated. You mention a few accuracy figures in your blog, but it sounds like you are headed toward further refinements.
Also, can you tell us a little bit more about your headset setup shown in the blog? It looks like the g.tec g.GAMMAcap; you mentioned something about 3D printed holders. Do you inject gel into those? I see the small holes. You may want to do a separate post in the Electrodes category with more details on your electrode holders.
High Fives!
William
PS this is a nit pick, but in your 3rd image in the blog post, the Device Configuration / Generic Raw File Reader window shows a sample rate of 256 vs. 250. Is that a typo? I'm curious whether the accuracy might improve at 250 sps. Maybe the learning algorithm really just bypasses that issue(!) :-)
To be honest, it was the first time I had tried to set up a P300 speller system, so I have no idea how OpenBCI performs compared to commercial systems. Basically, I just stopped digging once I managed to spell my first letters, so there are lots of improvements to be made, I'm sure.
That said, in our lab we do possess a medical-grade unit, a g.tec system from which I borrowed the cap, as you noted with your sharp eyes. It's on my TODO list to compare both units; I already drafted the protocol... I just have to find some time (where are the interns??).
Concerning the headset setup, I shamelessly replicated how g.tec uses their electrodes: the holes in the cap stretch around the holders, in which the electrodes are enclosed. A hole in the holder lets us inject the conductive solution with a plastic syringe. To be fair, it's once again a g.tec branded product that I used here, but in the past we were short of this pricey solution and just made our own with aloe vera and salt, so I think it's a good solution (bah-dum tshh) for DIY wet electrodes.
Since I did not want to do anything permanent (e.g., use glue)... and also because I had enough trouble creating my first 3D-printable models, the gold cup electrodes could still move a little bit once attached; it's just the tension from the fabric that held them approximately in place in their enclosures. At least they seemed more tightly attached to my head than when I tried to use the Ten20 paste (how do you manage to hold anything onto your scalp with that non-sticky stuff??). Here again I have some ideas for improvement, but before I spend more time on the matter I want to see how the Spiderclaw v3 performs.
Sample rate: this is not a typo; this was in fact the *one* big issue for me. Currently the OpenViBE telnet reader only accepts powers of 2 as the incoming sampling rate -- and I suspect the whole software is a lot happier when it's powers of 2 all over the place. The "oversampling factor" option accepts only integers, and the drift correction creates nasty artifacts. The only practical solution: change the sampling rate of the signal before it's sent over TCP, i.e. in my modified Python (or Processing) code. I had two problems then:
1. How to detect when an extra sample is needed. Using a constant factor (i.e. 1.024) induces jitter in the long run with the slightest delay in computations -- and I did not optimize my code. Instead, I relied on timestamps to compute how many samples should be sent between two calls to achieve a 256 Hz sampling rate. In the Python script, it comes down to sending 2 samples instead of 1 every now and then in the callback function.
2. How to compute the new value(s). I chose linear interpolation. Very simple, *too* simple; I'm afraid it creates artifacts, even though it's better than the drift correction of the OpenViBE acquisition server. For instance, my alpha waves were not as steady in the OpenViBE plot as in the Processing GUI. (A sketch of this quick-and-dirty approach follows below.)
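Roughly, the idea looks like this -- a simplified sketch, not the exact script; it assumes each incoming sample is a list of channel values and the class/callback names are illustrative. It tracks, from wall-clock time, how many 256 Hz samples should already have gone out, and when it falls a full sample behind it emits one extra, linearly interpolated value:

```
import time

SRC_RATE = 250.0   # what the board actually delivers
DST_RATE = 256.0   # what the telnet reader wants to see

class NaiveResampler:
    """Quick-and-dirty 250 -> 256 Hz rate adaptation (illustrative only)."""

    def __init__(self):
        self.t0 = None      # time of the first sample
        self.sent = 0       # output samples emitted so far
        self.prev = None    # previous sample (one value per channel)

    def process(self, sample):
        """Call once per incoming sample; returns 1 or 2 output samples."""
        now = time.time()
        if self.t0 is None:
            self.t0 = now
            self.prev = sample
        out = []
        # how many output samples *should* have been sent by now
        due = (now - self.t0) * DST_RATE
        if due - self.sent >= 2.0:
            # a full sample behind: insert a linearly interpolated value
            out.append([(p + c) / 2.0 for p, c in zip(self.prev, sample)])
            self.sent += 1
        out.append(list(sample))
        self.sent += 1
        self.prev = sample
        return out

# e.g., inside the board callback (hypothetical names):
# for s in resampler.process(packet.channel_data):
#     send_over_tcp(s)
```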
In an ideal world we would have a big buffer -- say, 128 samples -- and perform clever signal processing to prevent artifacts such as aliasing: oversampling to 500 Hz, filtering, and *then* re-sampling to 256 Hz, from what I understand. While the Processing environment may not be a fit for such processing (bah-dum tshh... no, wait, it's not funny, it's a pain to search for signal processing in Processing), I found some leads for my Python streaming server; see the discussion in https://github.com/mne-tools/mne-python/issues/121 which uses http://code.google.com/p/upfirdn/
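If you do buffer a block of samples, that polyphase approach is essentially a one-liner with scipy -- a sketch only, assuming blocks arrive as a samples-by-channels array (scipy's resample_poly is built on the same upfirdn machinery linked above):

```
import numpy as np
from scipy.signal import resample_poly

def resample_block_250_to_256(block):
    """Resample a buffered EEG block from 250 Hz to 256 Hz.

    256/250 reduces to 128/125: upsample by 128, low-pass filter,
    then downsample by 125 (polyphase FIR, so no naive linear
    interpolation artifacts). `block` is shaped (n_samples, n_channels).
    """
    block = np.asarray(block, dtype=float)
    return resample_poly(block, up=128, down=125, axis=0)

# e.g. a 128-sample buffer comes out as roughly 131 samples at 256 Hz
```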
But then, even if I had not been lazy and had tried to implement that kind of algorithm, the required buffer would have produced a large and variable drift on the acquisition server side. Maybe there's a nice solution waiting inside the OpenViBE source code but... -- how can I phrase this politely? -- ...it takes a little effort to understand how it works under the hood. Long story short: if someone knows how to overcome this, then we already have a perfect OpenBCI-to-anything bridge; otherwise, let's see how Mensia implements it.
re: sample rate. I believe this will all be solved with the native acquisition server / driver for OpenBCI (using 250 sps). From all the info I could see online, there is no built-in restriction that drivers must use power-of-two sample rates. Any rate can be specified when using the skeleton generator. The telnet driver just happened to be compiled with a small set of rates (which usually suffice for many amps).
http://openvibe.inria.fr/tutorial-creating-a-new-driver-for-the-acquisition-server/
You can see references to a number of drivers that use non-power-of-two rates:
https://www.google.com/search?q=openvibe+acquisition+"sampling+rate"
Here's a thought I had regarding the telnet driver: if you can recompile it with a tweaked Glade specification replacing 256 with 250, then that selection would work. It may even be possible to do a binary patch of the executable(!) :-) I don't know how Glade GUI specs are stored in apps; on Windows, such specs are in resource files that then get compiled into the binary.
re: your 3D printed holder; this would be great to see as a post under the Electrodes category, with a few close-up photos. I recently priced the g.GAMMAcaps from the U.S. distributor; they are only about $230 -- actually less than Electro-Caps.
http://www.cortechsolutions.com/Products/EC/EC-HC
re: OpenBCI Twitter link to your blog tutorial. I think that will be forthcoming from Conor, as well as a feature on the Community page.
re: INRIA, your blog says that is your organization / department. Isn't this right where OpenViBE was created? :-) I guess Yann is mostly at Mensia now, but maybe we could nudge him with a link to your tutorial. Possibly he has a beta test version at this point.
Best,
William
There is no restriction on sampling frequencies. I started to modify the source code of OpenViBE and realized that it's even simpler than expected: the parameters of the acquisition drivers, such as the sampling frequencies, are stored in a plain .ui file. For instance, all it takes to add a "250" option to the telnet driver is to put one more value in "/dist/share/openvibe/applications/acquisition-server/interface-Generic-RawTelnetReader.ui". This should work on Windows with the downloadable binary.
And for those who don't even want to bother with the ".ui" file, it's possible to force this parameter in the regular configuration files; I already have a script that launches the acquisition driver with the right parameters. There's a similar one in my OpenBCI GUI Processing branch; I'll post the final version once I update my tutorial. I want to test this setup first to be sure -- maybe the problems I had in the past were related to chunk size instead of sampling frequencies, maybe not.
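For reference, a small script could add the entry instead of hand-editing the XML -- this is only a sketch and assumes the selectable rates are stored as GtkListStore rows in that .ui file (the usual GtkBuilder layout), so inspect the file in a text editor first:

```
import xml.etree.ElementTree as ET

# Path from the post above; adjust to your OpenViBE install.
UI_FILE = ("dist/share/openvibe/applications/acquisition-server/"
           "interface-Generic-RawTelnetReader.ui")

tree = ET.parse(UI_FILE)
for store in tree.getroot().iter("object"):
    if store.get("class") != "GtkListStore":
        continue
    data = store.find("data")
    if data is None or data.find("row") is None:
        continue
    # copy an existing row (e.g. the "256" one) and change its value to 250
    new_row = ET.fromstring(ET.tostring(data.find("row")))
    new_row.find("col").text = "250"
    data.append(new_row)
tree.write(UI_FILE, encoding="UTF-8", xml_declaration=True)
```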
If it's okay, then we *already* have our perfect "bridge". I don't know how many hours I lost because of that sampling rate; thanks for pointing me in the right direction :)
Maybe in the future it would be better to use an LSL stream. It seems more robust and versatile, and easier to configure on the client side (e.g. automatic channel names and sampling rate in the acquisition server). An LSL driver is pending in the git repo of OpenViBE; hopefully it'll be integrated in the next release.
And the reason why it may be more practical to use a streaming server -- even once the Mensia driver is out -- is that we will be able to configure the board on the fly, without relying on what is implemented on the client side -- especially handy if the firmware is updated. For example, to select the reference, the N or P pins, to enable or disable the accelerometer, to read the leftover analog or digital pins, etc.
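To give an idea of why LSL is attractive here, a minimal pylsl sketch -- the board callback and channel labels are illustrative, only the pylsl calls are real. The stream advertises its own sampling rate and channel labels, so the acquisition server can pick them up automatically:

```
from pylsl import StreamInfo, StreamOutlet

N_CHANNELS = 8
RATE = 250  # native OpenBCI rate -- no 256 Hz workaround needed

info = StreamInfo(name="OpenBCI_EEG", type="EEG",
                  channel_count=N_CHANNELS, nominal_srate=RATE,
                  channel_format="float32", source_id="openbci_board_1")

# channel labels travel with the stream, so the client needs no manual setup
channels = info.desc().append_child("channels")
for label in ["Fz", "Cz", "Pz", "Oz", "P3", "P4", "PO7", "PO8"]:
    channels.append_child("channel").append_child_value("label", label)

outlet = StreamOutlet(info)

def on_sample(sample):
    """Hypothetical per-sample callback from the OpenBCI Python driver."""
    outlet.push_sample(sample.channel_data)
```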
Community page: I'd be happy to contribute.
Inria: yes, you got me, my PhD takes place in one of its research centers, and I've been seated for 2 years next to an "official" OpenViBE engineer. Mensia has its own agenda and I don't want to interfere too much (read: bother them with my hackish attempts); I'll see if it's worth the private investigation.
I was planning to add an "applications" tab to the docs, so I definitely think that's the way to go. The nice SSVEP work that was published previously could go there along with the P300 speller example, so people could start to reproduce/extend "real" BCIs by themselves. (I don't want to sound like I'm sold on OpenViBE but... in theory there are ready-to-use SSVEP scenarios in it. I'll add them to my "TO-DOcument" list.)
Edit: I saw your comment about the community page, nice! I hope it is not too much trouble to integrate blog posts (since I use Jekyll, I have markdown "sources" if that helps for the future). Now I really have to hurry to update my tutorial.

> An LSL driver is pending in the git repo of OpenViBE. Hopefully it'll be integrated in the next release.

Ah, so the two elements: the LSL support in OpenViBE, and our own OpenBCI connection to LSL. I suspect their side is already working...
https://code.google.com/p/labstreaminglayer/wiki/OVAS
Great progress, thanks again Jeremy. Let us know how the 250 sps affects accuracy / repeatability etc. of the speller.
William
[can OpenViBE P300 speller invoke a C++ helper program?]
Actually, I'm trying to write an EEG-based cell phone control program, so I need to know: is it possible to combine this with another program written in C++?
I merged your question about calling C++ from OpenViBE speller into this existing thread for the speller.
When you post your questions on an existing thread of the same topic, then those folks who have the expertise get an email notification so you are more likely to get a response.
My impression is that changing the output code of the speller should be straightforward. Have you looked at the OpenViBE documentation? And the code for the speller? OpenViBE is in C++, so your helper would be compatible with that.
http://openvibe.inria.fr/coadapt-p300-stimulator-tutorial/
http://openvibe.inria.fr/coadapt-p300-stimulator-some-design-notes/
Thanks, William
Using VRPN, LSL, or some other protocol to trigger external events, it's possible to use OpenViBE only for the data acquisition / signal processing / machine learning part. Actually, one of the P300 examples shipped with OpenViBE, "CoAdapt", relies on an external program to show the matrix. But if you want to have everything on a phone... that's another story.
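To sketch that split: assuming the OpenViBE scenario exposes its selections as an LSL marker stream (stream names here are illustrative), the external program only has to listen for the classifier's decisions, e.g.:

```
from pylsl import StreamInlet, resolve_stream

# blocks until a marker stream (e.g. exported by the OpenViBE scenario) is found
streams = resolve_stream("type", "Markers")
inlet = StreamInlet(streams[0])

while True:
    marker, timestamp = inlet.pull_sample()
    letter = marker[0]
    print("P300 speller selected:", letter)
    # here the phone / helper program would act on the selection
```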
[original thread: Can BCI be used to type words?]
I'm new to BCI. My understanding is that one can use a BCI headset (e.g., OpenBCI) to control an arbitrary device by "teaching" it commands. My question is then as follows: is it theoretically possible for someone to grab a BCI headset and then write a program which allows a user to "teach" the device each and every word in the English language?
For example, the user could set the device to learn the word "cat" by thinking about the word "cat". Then, the next time the user wants to type the word "cat", he or she could just think of the word cat. I think there are about 10,000 words in the English language. As a theoretical thought experiment, do BCI headsets have the capacity to handle such an undertaking (I know it would probably take a ridiculously long time to teach the headset every English word, but I'm just curious)?
@knife121, hi. I merged your question into this existing thread on the P300 Speller, which uses OpenViBE. It is the type of BCI speller that is also found in commercial applications, such as g.tec's Intendix. Do some web searches for "P300 speller" and you'll find some videos.
I downloaded the P300 files from the net and tried to use them, but every time I run the speller program it selects letters on its own -- my OpenBCI device is not even connected to it -- and it starts writing FJZ4ZVIMDA. So how do I connect my OpenBCI 8-bit board to OpenViBE? Thank you.
Ayush, I merged your question into this existing thread on the speller. When you post on an existing thread, you get more response because previous posters (such as Jeremy @jfrey ) are email notified. Thanks.
You have to set up the Acquisition Server to receive from OpenBCI. Are you following Jeremy's tutorial? Did you follow the easier tutorial from Rodrigo? (Not on the speller, but on OpenViBE in general.)
@ayushmh , I already merged your question into this existing thread. Can you just be patient and see if someone on this existing speller thread will get back to you? You can also try sending a private message to @jfrey or Rodrigo, @Rceballos98 , who wrote the tutorial I mentioned on the previous post. Are you able to get his tutorial to run? That shows how to connect the OpenBCI to OpenViBE. What is your issue with not being able to connect?
It is my understanding that the P300 speller tutorial for OBCI might be a little outdated since the new OpenViBE release. I think @jfrey might have a bit more information on that.
Let us know if you manage to connect the board following the first tutorial.
Did you follow the suggestion from @Rceballos98 (August 13 post above)? What happens when you run his tutorial? Please reply in this thread, which is for helping those with P300 speller questions.
I've followed jfrey's blog up to the "Optional full screen mode" step and I'm not able to move any further, because I'm neither able to run the python script nor do I know which script to run, or how. I've run every single script provided in the OpenBCI_Python-master folder.
I've tried this on both Windows and Ubuntu, but I'm not able to move any further.
The Python scripts were necessary when there was no OpenBCI driver included in OpenViBE, in order to stream EEG signals through LSL. Now that you can select OpenBCI directly in the OpenViBE acquisition driver, you can get rid of the Python scripts. I'd say at the moment the simplest and most efficient solution would be to use the acquisition server of OpenViBE 1+ with the designer from OpenViBE 0.18, in order to use both the co-adapt speller and the OpenBCI driver. What, is it getting even more complicated?
Maybe first try to play around with OpenViBE -- e.g. display raw signals, watch for muscular artifacts -- before you venture into P300 speller applications.
Did you read through and try all the steps in this tutorial, as suggested previously by @Rceballos98?
http://docs.openbci.com/research tools/OpenViBE
It's no longer necessary to use the Python scripts at all, since Jeremy added OpenBCI directly to the OpenViBE Acquisition Server. Before you run OpenViBE, ensure that you can get good signals with the OpenBCI_GUI. That will verify that your COM ports are working, etc.
In the OpenViBE acquisition server, after setting everything according to the tutorial, when I click on the connect button it does nothing. I was also wondering: after this problem is solved, what do I have to do to get a P300 speller working? Thank you.
@ayushmh, hi. I merged your question about the P300 Speller into this thread, which is intended to help those with questions on the speller. Your other thread was regarding a ttyusb0-not-opening problem ( http://openbci.com/index.php/forum/#/discussion/748/openvibe-acquisition-server-dev-ttyusb0-could-not-open ), which it turned out you figured out yourself.
When you post in this thread, others with expertise, such as Jeremy @jfrey, can see and potentially answer.
My question for you is: are you using the current 1.2 release of OpenViBE, or the version mentioned in Jeremy's tutorial? Your best bet may be to try to get it converted and working under the current 1.2 version, as there could be bugs and enhancements in the intervening versions, such as Jeremy's OpenBCI acquisition mods that went in with 1.0. I think you are just going to have to dive into the source code and tweak whatever is necessary to get it to build under the current version. The current version does not have any pre-built modules for the speller, because that was dropped.
Does that make sense?
Regards,
William
PS please re-read Jeremy's post from June 7 above. He suggests using the 0.18 version that the tutorial was created for, but to grab the acquisition server from 1.2 that has his OpenBCI mods. Since the AS is relatively isolated from the rest of OpenViBE, this sounds straightforward to me. Let us know how that works out for you. You can either attempt to get the speller code running under 0.18, or get it to run under 1.2. The former may be easier.
I'm using the 1.2 build version; tell me if I should downgrade to another version. I followed @jfrey's blog up to the Calibration section, but I'm stuck on this step: "launch dist/openvibe-coadapt-p300-stimulator.sh. When you are ready to proceed, press the s key on the keyboard. Watch for the letters until the session ends."
I don't know what to do next, and I wanted to ask whether I should try running this much, and if yes, then how. Thank you.
There are no pre-built speller shell files or executables in 1.2, so you have two choices. Did you read the previous PS paragraph??
PS please re-read Jeremy's post from June 7 above. He suggests using the 0.18 version that the tutorial was created for, but to grab the acquisition server from 1.2 that has his OpenBCI mods. Since the AS is relatively isolated from the rest of OpenViBE, this sounds straightforward to me. Let us know how that works out for you. You can either attempt to get the speller code running under 0.18, or get it to run under 1.2. The former may be easier.
Yup, I read it @wjcroft. According to Jeremy's post on June 8 ("I'd say at the moment the simplest and most efficient solution would be to use the acquisition server of OpenViBE 1+ with the designer from OpenViBE 0.18, in order to use both the co-adapt speller and the OpenBCI driver"), what I should do is start the AS from OpenViBE 1.0.0 and load the designer from 0.18 in order to use them both, and this would run my P300 speller, right? Thank you.