Placing triggers in data with button press


Comments

  • If I use D11, D12, and D13, which columns is the data written to? It's definitely not AuxData, right? Is the connection the same as for D17? Many thanks.
  • @pushtheworld, in fact AuxData is x, y, z. If we use D11 for AuxData's z, how can we write the marker into the stream with D12 or D13? Thank you very much.
  • alitos México
    edited March 2018
    Data Columns are 2-9 and Digital inputs are Columns 10-14
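    As a rough illustration of that layout, here is a short Python sketch for pulling those columns out of a GUI text recording. The file name is a placeholder, and the exact column positions can vary between GUI versions, so check the header lines of your own file:

    ```python
    def load_openbci_txt(path):
        """Return rows of floats from an OpenBCI GUI text recording, skipping
        '%' comment lines and any non-numeric column-name row."""
        rows = []
        with open(path) as f:
            for line in f:
                if line.startswith('%') or not line.strip():
                    continue
                try:
                    rows.append([float(v) for v in line.split(',')])
                except ValueError:
                    continue
        return rows

    rows = load_openbci_txt('OpenBCI-RAW-recording.txt')   # placeholder file name
    for row in rows[:5]:
        sample_counter = row[0]    # column 1: sample counter
        eeg = row[1:9]             # columns 2-9: the 8 Cyton EEG channels
        digital = row[9:14]        # columns 10-14: aux / digital-input pins
        print(sample_counter, digital)
    ```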
  • @zhanglei please see above answer
  • @pushtheworld, I know that the digital inputs are columns 10-12. In fact, the digital input only includes three columns: AuxData x is column 10, and AuxData y and z are columns 11 and 12. So there are no columns 13 and 14?
  • edited April 2018
    Hi everyone! A while back, I said I was going to try to use external triggering with a P300 paradigm and a Cyton 8-channel, 32-bit board. My senior design group was able to do this, but we ended up using a VEP paradigm instead. Nonetheless, I have been documenting our entire project work and setup for external triggering (and analysis) at this link:

    https://docs.google.com/document/d/1rdp4mJKKu5mBQRHH0cVUDW4-uLGTncGEp0tp2nXT6yE/edit#heading=h.a8ayp95sr5wc

    Please provide suggestions on the document, and if I can contribute this to any of the OpenBCI webpages, that would be fantastic!

    In case anyone is curious, we are nearing the time for submission of our senior design project. Our original goal was to create an EEG-fNIRS based BCI for real-time computer cursor control, but we decided to focus on EEG only given our time constraints. As of now, we are using OpenViBE for data acquisition, recording training/testing data, and streaming via LSL to MATLAB. We've used MATLAB for preprocessing, creating offline feature extraction code (CSP), and creating offline SVM binary classifiers. Most importantly, we've been able to implement a 3-class ONLINE SVM classifier and interface it with a cursor control application we also made in MATLAB. Online accuracy doesn't seem to be great, but it's working nonetheless. We'll be trying to improve this over the next couple of weeks we have left before submitting our project.
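
    For anyone wondering what the LSL step looks like in code, here is a minimal sketch of the receiving side (shown in Python with pylsl purely for illustration; the pipeline above pulls the stream into MATLAB instead):

    ```python
    # Minimal LSL inlet sketch -- assumes something (OpenViBE, the OpenBCI GUI,
    # etc.) is already publishing an LSL stream of type 'EEG'.
    from pylsl import StreamInlet, resolve_stream

    streams = resolve_stream('type', 'EEG')      # blocks until an EEG stream is found
    inlet = StreamInlet(streams[0])

    while True:
        sample, timestamp = inlet.pull_sample()  # one multichannel sample + its LSL timestamp
        print(timestamp, sample)
    ```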

    Hopefully the document is at least somewhat helpful! I'm in the midst of organizing my Github as well, but will definitely get around to it after my group has submitted the project.

    @pushtheworld: I submitted a request for streaming markers via LSL on Github. In case you'd like to see the request:

  • Wow @faheemersh this is so cool! We should feature your project in the next newsletter!
  • @zhanglei: No problem! I'm happy to help!

    @pushtheworld: That'd be awesome! Can I write something up with my group and maybe it can be in the next newsletter? We'll officially be done with the project in early May, but will be demoing on April 27.
  • wjcroft Mount Shasta, CA
    Faheem, you can post a note on the Community section, pointing to your doc.


    William
  • @faheemersh Can you share your E-Prime 3 interface, or a picture of it? How do you set the marker into the data? In fact, D17 can only set one marker. Doesn't P300 need a lot of markers? Thank you very much.
  • @zhanglei

    I'm not exactly sure of your question, but I'll try to answer. If you're asking how to see the marker in the data, I wrote something like this in my shared document: if you open the text file produced from the OpenBCI GUI recording, there will now be 5 columns, each representing a digital input pin, after the samples column and the eight EEG channels. Find the column containing the D17 markers (it should be the 4th of these 5 columns) and use it for analysis.

    I do not have access to E-Prime as it is not on my computer; I had to borrow someone else's. The P300 paradigm we tried to use only had one event marker: basically, the rest phase was a dark screen, and the event was a large red X appearing in the middle of the screen for a short duration. This just repeats over and over. If you open the text file after inputting this marker into the stream, all the zeros in the D17 column correspond to the duration that the marker (the large X) was shown on the screen.
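
    To make that concrete, here is a rough Python sketch of pulling the D17 marker column out of a recording; the column index and file name are assumptions based on the description above, so double-check them against your own file:

    ```python
    # The description above puts D17 in the 4th of the 5 digital-input columns
    # that follow the sample counter and the 8 EEG channels, and the pin reads 0
    # while the stimulus is on screen. Adjust D17_COL if your layout differs.
    D17_COL = 1 + 8 + 3   # 0-based index 12

    rows = []
    with open('OpenBCI-RAW-recording.txt') as f:   # placeholder file name
        for line in f:
            if line.startswith('%') or not line.strip():
                continue
            try:
                rows.append([float(v) for v in line.split(',')])
            except ValueError:
                continue   # skip a column-name header row, if present

    marker_on = [row[D17_COL] == 0 for row in rows]   # True while the marker is asserted
    onsets = [i for i in range(1, len(marker_on))
              if marker_on[i] and not marker_on[i - 1]]
    print(f'found {len(onsets)} marker onsets')
    ```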

    Regardless, if your paradigm has more than one marker, I believe you will need a multichannel optoisolator, unlike the single-channel one we used in our project. We did buy a multichannel optoisolator but ended up not using it since we found OpenViBE. Here's the link to the data sheet of the quad-channel optoisolator we bought: https://www.vishay.com/docs/83526/83526.pdf

    Basically, you'll need to create the same circuit as for the single optoisolator. See the picture below:
    [image: optoisolator trigger circuit]
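
    Purely as an illustration of the multi-marker idea (not something that was actually implemented here), several digital-input columns can be read as bits of a single event code, one optoisolator channel per bit. The column positions in this rough Python sketch are placeholders:

    ```python
    # Hypothetical decoding of several active-low digital-input pins as one
    # event code: three pins give up to 7 distinct markers (0 = no marker).
    DIGITAL_COLS = [9, 10, 11]   # placeholder column indices for e.g. D11, D12, D13 (LSB first)

    def decode_marker(row):
        """Combine the digital-input bits of one sample into a single event code."""
        code = 0
        for bit, col in enumerate(DIGITAL_COLS):
            if row[col] == 0:     # pin pulled low -> bit is set
                code |= 1 << bit
        return code

    print(decode_marker([0] * 9 + [0, 1, 0]))   # -> 5 (bits 0 and 2 asserted)
    ```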

    @wjcroft, @pushtheworld, please correct me if I've made a mistake.
  • edited April 2018
    Or see this image if the previous one does not show up.
  • @faheemersh Thank you very much for explaining your paradigm. It only needs one marker per epoch. And I know about the multichannel optoisolator. [image] My paradigm is just like this.
  • [image] The picture is just like this. Maybe you have done some work on it. Its purpose is to output letters.
  • @zhanglei What software are you using for the paradigm? And how does your paradigm work exactly? It seems like it's just a variant of a P300 speller.
  • Yes, it is only a variant of a P300 speller. My software is Psychtoolbox, based on MATLAB. I can use a serial port to send the markers. Have you done this?
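    For what it's worth, sending a marker over a serial port can be as simple as writing one byte per event. A minimal sketch (Python with pyserial, just for illustration; Psychtoolbox/MATLAB has its own serial interface, and the port name and baud rate here are placeholders):

    ```python
    import serial                                              # pyserial

    ser = serial.Serial('COM3', baudrate=115200, timeout=1)    # placeholder port settings

    def send_marker(code):
        """Write one marker byte (1-255) at stimulus onset."""
        ser.write(bytes([code]))

    send_marker(5)    # e.g. tag the 5th stimulus type
    ser.close()
    ```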
  • alitos México
    edited April 2018

    What is "Marker mode" for?, 
    is equal to read digital mode?, 
    Can it be used as a trigger?

    Would you please explain it to me?

    Thanks so much for your attention

    Regards

    JMAT