Research study with Cyton stream + stimulus marker + button response
Hi community!
We're planning to use OpenBCI hardware (Cyton + Daisy) to run a visual perception study, and we have some doubts about the best approach to synchronizing the EEG data stream (ideally 16 channels) with EEG markers (stimuli + button responses). Our goal is to present visual stimuli to participants in a go/no-go task (e.g., green/red lights) using external software (e.g., PsychoPy, Psychtoolbox, or EEG-ExPy) and record the participants' responses. For that, we need to mark the EEG stream with both the stimulus onset and the button response.
So far, I have read various entries on this forum from past years and have come up with two potential ways of running the study that I would like to double-check with you (so, here we go!):
(1) ANALOG APPROACH: Use the OpenBCI GUI to record the EEG data + an external button attached to the analog input of the Cyton board + a photosensor attached to the screen to detect the stimulus changes, also plugged into the analog input of the board (as shown in this example and here). This option saves the data directly from the OpenBCI GUI in our preferred (.txt) format (see the first sketch after (2) below). Easy peasy.
(2) DIGITAL APPROACH: Use EEG-ExPy for on-screen stimulus presentation + BrainFlow Python + LSL to be able to use two different markers (stimulus + response) + LabRecorder to connect to both streams (they will be synchronized and merged into one XDF file). Here, I would need to do more research on whether EEG-ExPy/Psychtoolbox allows for the collection of responses (e.g., via keyboard) so that it can all be done in digital mode, or whether we need the external button for it, and also on the use of BrainFlow Python and/or LSL (see the second sketch below).
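For (1), this is roughly how I imagine extracting the markers offline from the GUI recording. This is only a sketch; the file name, the 'Analog Channel 0' column name, and the threshold are my guesses and would need checking against the '%'-prefixed header of an actual recording:

```python
# Sketch only: recover stimulus onsets from the photosensor signal in an
# OpenBCI GUI .txt recording. The file name and 'Analog Channel 0' column
# name are assumptions -- check the '%'-prefixed header of the real file.
import numpy as np
import pandas as pd

FS = 125.0  # effective sample rate with Cyton + Daisy (250.0 for Cyton alone)

# GUI recordings are comma-separated, with '%'-prefixed metadata lines on top
df = pd.read_csv('OpenBCI-RAW-session.txt', comment='%', skipinitialspace=True)

analog = df['Analog Channel 0'].to_numpy()     # assumed photosensor column
threshold = analog.mean() + 3 * analog.std()   # crude threshold, tune per setup

# rising edges through the threshold = stimulus onsets
onsets = np.flatnonzero((analog[1:] > threshold) & (analog[:-1] <= threshold)) + 1
print('stimulus onsets (s):', onsets / FS)
```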
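And for the marker side of (2), something like this minimal pylsl sketch is what I have in mind; the stream name and source_id are placeholders, not a tested configuration:

```python
# Minimal pylsl sketch for the marker stream in approach (2); the stream
# name and source_id are placeholders.
from pylsl import StreamInfo, StreamOutlet, local_clock

info = StreamInfo(name='GoNoGoMarkers', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='string',
                  source_id='gonogo-markers-001')
outlet = StreamOutlet(info)

# ...inside the trial loop, at the screen flip and at the key press:
outlet.push_sample(['stimulus_go'], local_clock())      # stimulus onset marker
outlet.push_sample(['response_button'], local_clock())  # keyboard/button response
```

LabRecorder would then see this marker outlet plus the EEG stream and write both into one XDF file.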
QUESTIONS
- Regarding response-time accuracy: Is it correct that the external trigger approach (1) is more accurate (vs. digital), since the trigger data recorded in the Cyton 'Aux' channels is sampled at the same time as the channel data (i.e., 250 Hz for the Cyton 8-channel alone, or 125 Hz for Cyton + Daisy)?
- Channel restrictions (Cyton + Daisy): In the analog approach (1), can we use Cyton + Daisy, or can we only use the Cyton because we are using the analog mode? In the digital approach (2), do we have this channel restriction?
- Best approach for computer(s): Would it be ideal to use one computer for stimulus presentation and another for EEG recording? Or would that depend on the approach we decide to use?
- Digital approach (2): BrainFlow Python + LSL + LabRecorder is recommended over the OpenBCI GUI Networking + LSL widget, right (see previous forum entry)?
- Is there a third approach more suitable for our needs that we are not considering?
- What do you think the best approach would be in our case?
Thank you beforehand and keep rocking & rolling with OpenBCI!
Irene Vigué-Guix
Comments
Hi Irene, nice to see you.
Thanks for your past contributions on your blog, OpenBCI Community page, and this forum.
https://sentipensarte.wordpress.com/ [Spanish site, but Google will translate if you are using Chrome]
https://irenevigueguix.wordpress.com/ [original English site]
The original site has a number of BCI tutorials.
re: highest time accuracy.
The highest accuracy is with the external trigger approach, both for the photodiode/resistor and for buttons. LSL depends on the EEG amplifier being very carefully characterized in terms of the total delay between a sample being taken and it arriving at the computer. This is tricky with the Cyton because there are sometimes variable delays induced by OS buffering, USB buffering, radio packet transmission collisions, etc. Most LSL labs use a hard-wired, USB-interfaced amp such as g.tec, which has been carefully characterized for delays.
re: Aux channels with Daisy
Yes, this works fine; there is always space in the packets for the Aux data. But note that with the Daisy, your sample rate drops from 250 Hz to 125 Hz, and there is a form of 'averaging' that happens between samples.
https://docs.openbci.com/Cyton/CytonDataFormat/#16-channel-data-with-daisy-module
This may have some slight impact on P300 timings.
re: one or two computers.
One computer is going to be substantially easier.
re: GUI widget
This will not be needed if you go with the external trigger approach, since in that case you are not using LSL.
re: labeling 'approaches'
Calling the external trigger approach 'analog' and the LSL approach 'digital' may not be the best terminology. Perhaps better would be non-LSL vs. LSL.
Best regards, William
With the non-LSL Aux data approach, the GUI is NOT needed. You can do all the work in your Python (or other language) program, as the same Aux data is available there.
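For example, something along these lines would give you the same Aux/analog samples the GUI shows. This is an untested sketch; the serial port is a placeholder, and '/2' is the Cyton SDK command that should switch the Aux bytes to analog read mode:

```python
# Untested sketch: stream Cyton EEG + analog Aux (trigger) data with BrainFlow.
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds

params = BrainFlowInputParams()
params.serial_port = '/dev/ttyUSB0'  # placeholder; e.g. 'COM3' on Windows

board = BoardShim(BoardIds.CYTON_BOARD, params)
board.prepare_session()
board.config_board('/2')       # Cyton SDK: put the Aux bytes into analog read mode
board.start_stream()
time.sleep(10)                 # capture ~10 seconds
data = board.get_board_data()  # 2-D array: rows are channels, columns are samples
board.stop_stream()
board.release_session()

# row indices of the EEG and analog (Aux) channels within 'data'
print('EEG rows:', BoardShim.get_eeg_channels(BoardIds.CYTON_BOARD))
print('analog rows:', BoardShim.get_analog_channels(BoardIds.CYTON_BOARD))
```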
Hi Irene!
I'm doing a similar university student project, and I'm just wondering whether you have already conducted the study and which setup you chose.
At the moment, my setup consists of an OpenBCI Gelfree BCI Cap + PsychToolbox visual trigger code + an EmotiBit. All streams are synchronized via LSL and saved as an XDF file via LabRecorder. The EmotiBit is just an add-on and is negligible for the ERP.
So far, for ERP visualization, my EEG data may be too noisy, because no accurate ERP image could be plotted, even when the trigger transmission latencies (about 0.002 s per trigger) are taken into account.
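For context, this is roughly how I read the recording back and align the triggers. The file and stream names below are placeholders from my own setup; use whatever LabRecorder actually saved:

```python
# Rough sketch of my offline alignment step; file and stream names are
# placeholders -- adjust to the names LabRecorder actually recorded.
import numpy as np
import pyxdf

streams, header = pyxdf.load_xdf('session.xdf')
by_name = {s['info']['name'][0]: s for s in streams}

eeg = by_name['obci_eeg']      # EEG stream name (placeholder)
markers = by_name['Markers']   # PsychToolbox trigger stream (placeholder)

eeg_t = eeg['time_stamps']     # one LSL timestamp per EEG sample
print('EEG shape:', np.asarray(eeg['time_series']).shape)  # samples x channels

# map each trigger to the nearest EEG sample, correcting for the
# ~0.002 s transmission latency mentioned above
for t, label in zip(markers['time_stamps'], markers['time_series']):
    idx = int(np.searchsorted(eeg_t, t - 0.002))
    print(label[0], '-> EEG sample', idx)
```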
I would be very grateful for any thoughts or ideas, and I would be particularly pleased to reopen the discussion on this topic.
Kind regards
Julluki
Hi Julluki,
Did you read the previous comment I left in January 2024, regarding the Cyton Aux data 'external trigger' approach? LSL triggers are subject to too much timing jitter and latency because of the factors listed there. See this tutorial:
https://docs.openbci.com/Examples/VideoExperiment/
Regards, William