Sending Trigger Pulse to PC port

yttuncel Ankara
edited November 2016 in Software
Hello,

I have recently got my hands on an OpenBCI R&D Kit and an Ultracortex v3. I am using an FPGA for my stimulus presentation (for more precise timing, especially in SSVEP and cVEP studies), and I generate a trigger signal from the FPGA as well. With other EEG measurement devices (I've used BIOPAC and Brain Products), I just feed the amplifiers with my generated trigger through a parallel port on the amplifier itself. There is no other way, as their software is not open source, nor do they have a way to stream data directly into Matlab. With OpenBCI there are guides on how to send a trigger signal to the board and from there to the PC, just like with these traditional amplifiers. But I wonder: is there a way to send the trigger pulses directly to the PC, simultaneous with the EEG data stream coming from the OpenBCI board?

Thank you for this awesome project!

Best,
Yigit

Comments

  • wjcroft Mount Shasta, CA
    edited November 2016
    Yigit, hi.

    LabStreamingLayer can do this. At one point they had a parallel port interface. You could still do this with the PC audio input as well.

    https://github.com/sccn/labstreaminglayer/wiki

  • Hello again, 

    I couldn't find their parallel port interface, but I think I will be able to meet my goal by recording the audio channel with LSL, as you suggested. One question that comes to my mind: can LSL listen to two (or more) different input channels simultaneously? I mean, can it pull samples from audio and EEG at the same time without losing any samples?

    Also, I have a couple of other questions and would be glad if you answered them briefly.

    Why are we first reading the stream in Python and then in Matlab?
    Where is the timestamp first generated?
    How does the GUI communicate with the dongle? Does it use LSL as well?


    On another level, a more hardware-related question: is there any possibility of disabling the bias circuitry (the ADS1299's way of equating the body potential to analog ground to reduce common-mode interference) on the board itself? I have a feeling it might be increasing 50 Hz interference in my case.

  • wjcroft Mount Shasta, CA
    edited November 2016
    The LSL forum for support questions is here,


    re: two or more channels; yes, that is what LSL is designed for. For example, you could use the left and right audio channels, or multiple EEG devices; a game controller might be another option for you. The source is available, so you could adapt it for other interfaces.
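
    Something like this pylsl sketch, for instance (untested; the stream names OpenBCI_EEG and OpenBCI_AUX are the ones the OpenBCI_Python app publishes, so substitute an audio stream name as needed). Each inlet keeps its own buffer, so neither pull drops samples while you service the other:

    # Minimal pylsl sketch: pull two LSL streams in one loop.
    import time
    from pylsl import StreamInlet, resolve_byprop

    eeg = StreamInlet(resolve_byprop('name', 'OpenBCI_EEG', timeout=10)[0])
    aux = StreamInlet(resolve_byprop('name', 'OpenBCI_AUX', timeout=10)[0])

    while True:
        # Non-blocking pulls; each chunk arrives with per-sample LSL
        # timestamps, so the streams can be aligned on a common clock later.
        eeg_chunk, eeg_ts = eeg.pull_chunk(timeout=0.0)
        aux_chunk, aux_ts = aux.pull_chunk(timeout=0.0)
        if eeg_chunk:
            print('EEG: %d samples up to t=%.4f' % (len(eeg_chunk), eeg_ts[-1]))
        if aux_chunk:
            print('AUX: %d samples up to t=%.4f' % (len(aux_chunk), aux_ts[-1]))
        time.sleep(0.01)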

    re: Python. The OpenBCI interface to LSL is currently done through Python.

    re: dongle. It emulates a serial port, which receives the data stream from the main board. Both the GUI and LSL use this same serial port; only one can use the port at a time. I think the V2 GUI (still in beta) will be able to feed the LSL stream simultaneously with displaying its familiar user interface.

    re: Bias. I would leave that connected; this same signal is called Ground on other EEG amps. It's necessary to center the differential amplifiers, and it automatically adapts to whatever mains frequency you have.

    William
  • Thank you for the answers. 

    I want to control the whole thing from Matlab; that's why I'm asking about Python. As far as I understand, the Python script listens to the emulated serial port and buffers incoming EEG data, and when requested it outputs a sample. Am I right? Is doing this whole process in Matlab possible?
    From the GUI you can actually send commands to the ADS1299. Is the same possible from Python? I want to turn lead-off detection mode on to measure electrode impedances.
    One other question: can we control the Python script from Matlab under Windows? I tried system/dos commands that run the script in the Matlab command window, but I can't get them to input a string, say "/start".

    You are right; I've read about the right-leg-drive bias circuitry. It should adapt to the mains frequency, so I'll keep it connected.

    Thank you for your time!
  • wjcroft Mount Shasta, CA
    edited November 2016
    Yigit, hi.

    If you look again at the link I mentioned before,


    The LSL distribution consists of:
      • The core transport library (liblsl) and its language interfaces (C, C++, Python, Java, C#, MATLAB). The library is general-purpose and cross-platform (Win/Linux/MacOS, 32/64) and forms the heart of the project.
      • A suite of tools built on top of the library, including a recording program, online viewers, importers, and apps that make data from a range of acquisition hardware available on the lab network (for example audio, EEG, or motion capture).
    The OpenBCI_Python package serves the purpose of the app that reads from the device and creates the LSL stream. That LSL stream can then be consumed by various library functions, including MATLAB.

    The V2 OpenBCI_GUI is projected to be able to generate the LSL stream directly, obviating the need for the Python step.

    re: commands, yes those can be sent from Python, it is documented.
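
    For instance, with nothing but pyserial (a bare sketch, not the OpenBCI_Python API itself; the port name is an assumption, 115200 baud is the dongle's rate, and 'b'/'s' are the documented ASCII commands to start/stop streaming):

    import serial
    import time

    ser = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)  # assumed port name
    time.sleep(2)      # give the board a moment after the port opens
    ser.write(b'b')    # 'b' = start streaming
    time.sleep(5)      # ...stream for five seconds...
    ser.write(b's')    # 's' = stop streaming
    ser.close()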


    re: impedance measurement. This requires two steps: turning on the ADS1299 feature, and then detecting that generated frequency in the FFT. The latter algorithm is done in the GUI.
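
    The detection half is just a narrow-band amplitude estimate. A numpy sketch of the idea (assuming the lead-off excitation is configured for the ADS1299's 31.2 Hz AC option and fs = 250 sps):

    import numpy as np

    def leadoff_amplitude(x, fs=250.0, f_drive=31.2):
        """Amplitude of the lead-off excitation in one channel's samples."""
        x = np.asarray(x, dtype=float) - np.mean(x)       # drop the DC offset
        spec = np.abs(np.fft.rfft(x)) * 2.0 / len(x)      # single-sided amplitude
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return spec[np.argmin(np.abs(freqs - f_drive))]   # nearest FFT bin

    Impedance then follows roughly from Ohm's law, dividing that amplitude by the configured lead-off drive current (6 nA is the chip default).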

    re: controlling Python from Matlab. I'll mention Jeremy @jfrey here in case he has any suggestions.

    William
  • Hello William,

    Your responses have been very helpful, thank you. As far as I understood, listening to the parallel port cannot be perfectly synchronous with reading the EEG data through the LSL stream (the LSL stream to be read has its own timer in MATLAB, and since it pulls chunks of data, the parallel port cannot be read at the same instants/with the same sampling rate). So I eliminated that approach. I now have an optocoupler between my FPGA and the OpenBCI board and feed the trigger through the analog inputs of the PIC32. I know it samples the analog input at the same time the ADS1299 samples the EEG. But in Matlab, when I create 2 LSL streams, one for AUX and the other for EEG, they do have a time shift. Is there any way to instantiate these two streams at the exact same time instant? What I mean is,

    in Matlab I call vis_stream twice (for testing purposes I read the same stream (OpenBCI_AUX) twice and observe the time difference between the two):

    vis_stream('streamname','OpenBCI_AUX','bufferrange',60,'timerange',10,'datascale',5,'channelrange',[1:4],'samplingrate',250,'refreshrate',20)
    vis_stream('streamname','OpenBCI_AUX','bufferrange',60,'timerange',10,'datascale',5,'channelrange',[1:4],'samplingrate',250,'refreshrate',20)

    On average the second stream is delayed by 247 samples. Is there a way to make them perfectly aligned?

    Thank you!
  • wjcroft Mount Shasta, CA
    Yigit, hi.

    I'm going to mention AJ Keller @pushtheworld here; he is the author of the V2 firmware. Can you look at the OpenBCI serial port output at startup and see which firmware you are running? This page shows how to identify your firmware.


    I'm not sure why your AUX channels are showing up almost a whole second behind the EEG data. Have you seen this post on using trigger signals? This thread was created originally in V1, but trigger ability exists in V2 as well.


    William

    PS: the whole idea of LSL is to allow syncing streams from different devices. Yes, there will be a slight skew. So if you need millisecond-range sync, the board's external trigger is the way to go.



  • There is an example of using LSL in the Node code, and you can do time syncing to get your time stamps from the node module (V2 firmware only). Check out the interfacing section in the readme!
  • William hi,

    My firmware version appears to be 2.0.0, as printed at the beginning of the Python script.

    My guess is that starting the visualizers takes some processing time (though on the order of a second seems too much). I'm not sure if this should be an issue, however.

    Yes, I have seen the tutorial for external triggering; that's why I mentioned using an optocoupler and the analog inputs of the PIC32. I have no problem observing the trigger signal; all pulses are sent to the PC alongside the EEG data. (I even tried applying 100 ms-101 ms-102 ms-103 ms-104 ms pulses as triggers and tested the accuracy of the sampling rate.) The problem I am currently facing is plotting/observing/visualizing the EEG data and trigger data simultaneously. Due to the delay/time shift between the visualizers, I cannot be sure which EEG sample a trigger pulse exactly corresponds to. I can take a wild guess by delaying the first visualizer by 250 samples, but that would not be the best way to solve this. I hope I'm making myself clear; if not, I can write a more thorough post with plots and so on.

    I will take a look at the node code and let you guys know if I come up with anything. @wjcroft @pushtheworld

    Thanks!
  • wjcroft Mount Shasta, CA
    It does not seem like you need LSL then. You already have the trigger signals in the raw data. It looks like your visualization is doing the distortion.
  • edited November 2016
    @yttuncel are you able to get the true time the visual stimulus appeared on the screen? That's always a tricky problem, and it can ruin ERP studies. With hardware triggers to the board, I was able to get time sync down to +/- 4 ms with node time sync. I use the time from the computer as the master clock and find a translation from the PIC32 clock to the master clock through a protocol I invented. You could then check out the code in the examples section of the new master branch. I think that code tutorial you saw was outdated, for V1 firmware. You would get a time stamp, and you would have two extra bytes to inject your pulse into!
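
    The flavor of that translation is the classic NTP-style estimate; this sketch is only the general technique, not my exact protocol, and the two callables are placeholders for whatever transport you use:

    import time

    def estimate_offset(send_request, read_board_time):
        """One round trip: stamp the request and reply on the PC clock,
        assume the board stamped its reply halfway through the round trip."""
        t1 = time.time()             # PC clock when the request leaves
        send_request()               # ask the board for its clock
        board_t = read_board_time()  # board clock embedded in the reply
        t2 = time.time()             # PC clock when the reply arrives
        return board_t - (t1 + t2) / 2.0   # pc_time ~= board_time - offset

    Averaging several round trips and re-estimating periodically keeps the offset valid as the two clocks drift.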
  • @wjcroft

    I do need LSL for viewing the stream in Matlab, and the Python script creates 2 LSL streams, so I call 2 visualizers (vis_stream) to observe/store them both.


    I am using an FPGA for stimulus generation, not a computer, so I'm 100% sure the trigger is generated at the same time instant the stimulus is generated. I'm getting confused here: are the PIC32 clock/timestamps transferred to the PC in the new firmware? I thought the timestamps were generated in the dongle; isn't that the case?

    [chunk, timestamp] = lsl.pull_chunk(); --> LSL reads timestamps this way. Where is this timestamp generated in the grand scheme of things? Also, the differences between these timestamps are not uniform (not 4 ms; sometimes as high as 40 ms). Is this something expected?


  • edited November 2016

    @yttuncel

    > So I'm 100% sure the trigger is generated at the same time instant the stimulus is generated

    There is no delay from when the stimulus is actually shown to the user, though? All monitors have refresh rates, usually at best 60 Hz, i.e. a new frame only every ~16.7 ms.

    onward...

    I believe the Python LSL implementation adds the time stamp as data arrives from the serial port, which happens in flushes once the buffer is full; then a whole chunk of data gets brought in at once.
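
    A common way to undo that flush jitter on the receiving side: the board's sample clock is steady at 250 sps, so you can regress the arrival timestamps onto the sample index and keep only the trend (numpy sketch):

    import numpy as np

    def dejitter(timestamps):
        """Replace bursty arrival stamps with an evenly spaced linear fit;
        the serial-flush jitter averages out of the regression."""
        t = np.asarray(timestamps, dtype=float)
        n = np.arange(len(t))
        slope, intercept = np.polyfit(n, t, 1)   # slope ~= 1/250 s
        return slope * n + intercept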

    Note: As far as I know OpenBCI_Python DOES NOT HAVE ANY FIRMWARE V2 FEATURES. 

    Check out the table called Firmware Version 2.0.0 (Fall 2016 to Now) under Binary Format. See all those different "Stop Bytes"? That's how I was able to solve the time sync issue. You can see in this easy-to-read node code section how I send the packets to different functions based on those final four bits.
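
    Concretely, the dispatch is just a nibble test (a sketch; the 0xC0-0xCF stop-byte range is from that table):

    def packet_type(packet):
        """Type nibble of a 33-byte V2 packet (stop byte 0xC0-0xCF)."""
        stop = packet[-1]
        if stop & 0xF0 != 0xC0:
            raise ValueError('not a V2 stop byte: 0x%02X' % stop)
        return stop & 0x0F   # the final four bits select the handler

    Type 0 is the plain sample-with-accelerometer packet; the other values mark the raw-aux and time-synced variants listed in that table.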

    The original V1 firmware was unable to send different stop bytes, so I rewrote the radio firmware to pass through an extra four bits which could designate what type of packet was coming through. 

    Now check out the node time sync example. There you can see the node is resyncing every second. You can also read about the time sync strategy in detail in this short write-up.

    The actual source code for making the time sync possible can be found here, in a private node code function. In the comments of that function I talk about that difference-in-time-stamps problem from the serial flush and how I overcame it, along with a pretty print.

    With the node code you would get your 16 bits of aux data in a nice buffer in a nice JSON object.

    Here is the code I used to validate the time syncing, it may help you. I used an Arduino to send a pulse to the Board, thinking that's something like your FPGA.

    Good luck

  • wjcroft Mount Shasta, CA
    Here's another (old) post where Fred @atom2626 wrote a Matlab routine to parse the OpenBCI stream directly. This potentially bypasses LSL and gives you direct access to the data stream, which includes your AUX samples.

  • You could also TCP into Matlab from node. Would love to see an example of that!

  • @wjcroft

    That's awesome. I'd like to hear @atom2626's comments on the subject as well, and would like to know if he has made any updates to his Matlab routine. If I can bypass Python/LSL in a reliable way, that would be awesome.

    Speaking of LSL, another approach could be modifying the Python script in the repo to generate a single stream instead of two. A single LSL stream could include the contents of both the OpenBCI_AUX and OpenBCI_EEG streams. Would this be feasible and applicable?

    The current trigger is more of an event marker. It does not trigger the start of data recording; it just puts a marker on the aux channel. I wonder if an actual trigger could be made, so that the PIC controls the start of the data recording upon arrival of the trigger coming from the FPGA. I cannot make up my mind whether this would solve the issue or not; what are your thoughts?


    I am not using a monitor in my experiments (I'm not studying ERP; I'm currently dealing with SSVEP and cVEP experiments). I blink a power LED in its linear region with precise timing, thanks to the FPGA. That's why I say I'm sure about the trigger part. Since cVEP includes heavy averaging, I do not want the slightest timing error between trigger and EEG (even 4 ms seems large to me, because in cVEP I will use a bit length of ~12-14 ms, and at 250 sps that is 3-4 samples per bit; with a 4 ms timing error the already small number of samples might get corrupted and make the experiment invalid). From here I have a question:

    Is Bluetooth the reason for running the ADS at 250 sps? It can support up to 16 ksps, and if possible I would want to use 1 ksps. With 1 ksps (a 1 ms sampling interval), the timing error would be less influential on the experiment.

    I will take a look in the node code and your previous comment to have a better understanding of your method. I will let you know asap.

    Thanks for your interest in the subject! You are awesome people.
  • wjcroft Mount Shasta, CA
    edited December 2016
    Yes, it sounds straightforward to mod the LSL script to provide a combined stream. Jeremy @jfrey is the author.
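
    In pylsl terms the combined outlet is only a few lines (a sketch, not a tested patch to the repo; the stream name and source id are made up, and 8 EEG + 3 AUX channels are assumed):

    from pylsl import StreamInfo, StreamOutlet

    info = StreamInfo('OpenBCI_Combined', 'EEG', 8 + 3, 250, 'float32', 'openbci_1')
    outlet = StreamOutlet(info)

    def on_sample(eeg, aux):
        # Called once per board sample; EEG and AUX land in the same LSL
        # sample, so the two can never drift apart downstream.
        outlet.push_sample(list(eeg) + list(aux))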

    Yes, the RFduinos are the limiting factor with the GZLL protocol. Faster Bluetooth modules exist (versions 2 and 3; the RFduino is essentially version 4 only, BLE and GZLL). You CAN up the sample rate. AJ is working on a wifi or Bluetooth 2 version. Winslow and I did a "wired USB" link that is opto-isolated; that code is based on the V1 firmware.


  • @wjcroft

    At http://docs.openbci.com/software/02-OpenBCI_Streaming_Data_Format#openbci-v3-data-format-room-for-improvement, the last paragraph states a possibility for improving the sampling rate. As far as I understood from your discussion with Winslow and other sources in the documentation, it is not possible to change the communication speed of the current Bluetooth connection (with the current hardware), so a sacrifice in data size is mandatory for higher sampling rates. Is modifying the firmware on the board enough to do that, or do I need to modify the Python script as well? From what I understand, I need to change the protocol to transmit fewer bytes, say 24 instead of 32, so that the Bluetooth connection can keep up with the sampling rate, and for this I believe a change is required at both ends.

    I'm asking this because I believe having 500sps or even 1ksps would greatly influence my experiment. 

    Is doing this possible in node, @pushtheworld? Btw, I have looked over your methods: you are making use of different stop bytes to distinguish time sync information from data, and use that information every X seconds/minutes to synchronize board and PC. What I don't understand is how exactly you achieve this. You call .syncClocksFull(), which returns a time sync object, and you check whether valid is true or not. You fill this object in the private function you shared. I couldn't follow what exactly is done inside that function, though.


  • wjcroft Mount Shasta, CA
    The increased sample rate that Winslow and I did was with the opto-isolated wired USB link.

    To push more samples per second through the RFduinos would require modifying the packet format to place more samples in each packet. In other words, you'd have a completely different packet format. How many channels do you need? If only 1 or 2, then your 500 or 1000 sps may be feasible. Still, this is not trivial to do.
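
    To make the byte budget concrete: the stock packet spends 33 bytes to carry one sample of 8 channels, so with only 2 channels at 3 bytes each, several samples fit in roughly the same space. A hypothetical layout, not existing firmware:

    import struct

    def pack_compact(sample_num, samples):
        """Hypothetical packet: 4 samples x 2 channels x 3 bytes = 24 data
        bytes, i.e. one packet carries 4x the sample rate of the stock format."""
        body = bytes([0xA0, sample_num & 0xFF])      # start byte + counter
        for ch1, ch2 in samples:                     # four (ch1, ch2) pairs
            for v in (ch1, ch2):
                body += struct.pack('>i', v)[1:]     # low 3 bytes = int24
        return body + bytes([0xC0])                  # 27 bytes total

    The receiving side (Python script or node parser) would need a matching unpacker, which is why the change has to happen at both ends.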

    AJ might have some comments on his wifi solution. That has essentially no speed limitations.
  • On another note, as I said, I'm using an FPGA for generating the stimulus, and the trigger signal is fed directly to the PIC32. So in my case, time sync between the PC and the OpenBCI board is not necessary (assuming I understood correctly that this is what you are doing in your time sync implementation, @pushtheworld). What I want to achieve is to start the sampling of the ADS at the exact moment I send the trigger pulse to the PIC. That is, I want to initialize the ADS but have it not sample anything until it is told to do so by the PIC. In the end, the first sample will be at the instant (not exactly at the instant, of course; there will be some processing delays) the PIC receives the trigger. With this I believe I would achieve the best averaging of the EEG data.
  • wjcroft Mount Shasta, CA
    re: starting the ADS1299 on your trigger pulse. No. The ADS has to run continuously. You can't start/stop it at high rates.

  • wjcroft Mount Shasta, CA
    I also agree that you do not need the exact time stamps. The sample packets are spaced at a constant rate of approximately one every 4 ms. That does drift very slightly over time, but since your triggers are in the AUX info, they are already synchronized.
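
    In that case alignment reduces to finding the trigger edges in the AUX column; the edge index is the EEG sample index, since both ride in the same packet (numpy sketch; the threshold is an assumption for your opto-coupled pulse):

    import numpy as np

    def trigger_onsets(aux, threshold=0.5):
        """Indices where the AUX trigger channel crosses threshold upward."""
        high = np.asarray(aux, dtype=float) > threshold
        return np.flatnonzero(high[1:] & ~high[:-1]) + 1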

  • Why don't you do it in reverse? Let the PIC sample rate drive the stimulus: set a pin high on the OpenBCI board, connect that to the FPGA, and when the FPGA wants to show a stimulus, do it then.