Different sampling rates between EEG and AUX signals when using LSL from the GUI [workaround]

Rai_sato Rochester, NY
edited March 2020 in Software

Hello,
I found an issue where the sampling rates of the EEG signal and the AUX signal (which carries my external trigger signals) end up being different when the data are transmitted via Lab Streaming Layer (LSL).

I tried a real-time EEG analysis, sending the data with the GUI and receiving it in Matlab via LSL; however, the AUX chunks contained only a tenth as many samples as the EEG chunks (GUI settings below). I used an 8-channel Cyton board for this test. While the EEG sampling rate should be 250 Hz, the AUX rate comes out at only 25 Hz.

The slower sampling rate, especially for the 'trigger' signals coming from the AUX channels, does not allow us to align EEG epochs. The onset time of an epoch can now fall between two trigger samples (since the EEG is sampled at 250 Hz while the triggers are sampled on a time grid ten times sparser). This can cause a serious 'jitter' problem, and we cannot guarantee the same onset time across collected epochs.

This issue occurs on both macOS and Windows 10. Could you find a way to use the same 250 Hz sampling frequency for both the EEG (time series) and the triggers (AUX)?
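
For context, a rough back-of-the-envelope sketch (plain Python arithmetic, not OpenBCI code) of the timing error this mismatch implies:

    # Worst-case epoch-onset error caused by the rate mismatch described above.
    eeg_rate = 250.0   # Hz, Cyton EEG sampling rate
    aux_rate = 25.0    # Hz, observed AUX/trigger rate from the GUI's LSL output

    print("EEG sample spacing: %.1f ms" % (1000.0 / eeg_rate))  # 4.0 ms
    print("AUX sample spacing: %.1f ms" % (1000.0 / aux_rate))  # 40.0 ms
    # A trigger can therefore be reported up to ~40 ms (10 EEG samples) away
    # from the EEG sample it actually belongs to, which is the jitter above.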

Comments

  • wjcroft Mount Shasta, CA

    Rai, hello.

    Richard @retiutut has already mentioned your issue to me. This mismatch in sample rates is apparently due to a bug in the current GUI/Hub implementation, more than likely in the Hub portion. Since the Hub will shortly be phased out and replaced with direct BrainFlow library access from the GUI, my guess is that the bug will be resolved within a couple of weeks at most.

    In the meantime, you could use one of the other means of generating an LSL stream without using the GUI, such as the Python repos, the BrainFlow examples, etc.

    Regards, William

  • retiutut Louisiana, USA
    edited February 2020

    I agree with William that removing the Hub may fix this issue, as we are simplifying data acquisition with Brainflow.

    I was able to see similar results using the following Python sketch. Some code is commented out to test a single stream, but I think this is a valid test. If anyone has any improvements to the following code, please share the updated code on this thread. Rai originally noticed this when using Matlab, I think.

    """Example program to demonstrate how to read a multi-channel time-series
    from LSL in a chunk-by-chunk manner (which is more efficient).
    
    Please restart this script if you change one of the data types.
    Also, the # Chan should match the data type (Examples: 1 for Focus, 3 for Accel)
    
    """
    
    from pylsl import StreamInlet, resolve_stream
    import time
    
    numStreams = 3
    # first resolve an EEG stream on the lab network
    print("looking for an EEG stream...")
    # stream1 = resolve_stream('type', 'EEG')
    # stream2 = resolve_stream('type', 'FFT')
    stream3 = resolve_stream('type', 'EEG')
    
    # create a new inlet to read from the stream
    # inlet = StreamInlet(stream1[0])
    # inlet2 = StreamInlet(stream2[0])
    inlet3 = StreamInlet(stream3[0])
    
    def testLSLSamplingRates():
        print( "Testing Sampling Rates..." )
        start = time.time()
        numSamples1 = 0
        numSamples2 = 0
        numSamples3 = 0
        while time.time() < start + 5:  # count samples for 5 seconds (must match the divisor in the prints below)
            # get a new sample (you can also omit the timestamp part if you're not
            # interested in it)
            for i in range(numStreams):
                if i == 0:
                    # chunk, timestamps = inlet.pull_chunk()
                    # if timestamps:
                        # numSamples1 += 1
                    numSamples1 = 0
                elif i == 1:
                    # chunk, timestamps2 = inlet2.pull_sample()
                    # if timestamps2:
                        # numSamples2 += 1
                    numSamples2 = 0
                elif i == 2:
                    chunk, timestamps3 = inlet3.pull_sample()
                    if timestamps3:
                        numSamples3 += 1
                # print("Stream", i + 1, " == ", chunk)
        print( "Stream 1 Sampling Rate == ", numSamples1 / 5 , " | Type : EEG")
        print( "Stream 2 Sampling Rate == ", numSamples2 / 5 , " | Type : FFT")
        print( "Stream 3 Sampling Rate == ", numSamples3 / 5 , " | Type : AUX")
    
    
    testLSLSamplingRates()
    

    If integrating BrainFlow does not make the EEG and AUX (analog) sample rates appear at ~250 Hz in the LSL output, I think the following code from W_Networking.pde may be the culprit:

        Boolean checkForData() {
            if (this.dataType.equals("TimeSeries")) {
                return dataProcessing.newDataToSend;
            } else if (this.dataType.equals("FFT")) {
                return dataProcessing.newDataToSend;
            } else if (this.dataType.equals("EMG")) {
                return dataProcessing.newDataToSend;
            } else if (this.dataType.equals("BandPower")) {
                return dataProcessing.newDataToSend;
            } else if (this.dataType.equals("Accel/Aux")) {
                return dataProcessing.newDataToSend;
            } else if (this.dataType.equals("Focus")) {
                return dataProcessing.newDataToSend;
            } else if (this.dataType.equals("Pulse")) {
                return dataProcessing.newDataToSend;
            } else if (this.dataType.equals("SSVEP")) {
                return dataProcessing.newDataToSend;
            }
            return false;
        }
    

    Which seems silly, because all cases return the same variable...

  • wjcroft Mount Shasta, CA

    Richard, what is the easiest way for Rai to generate his own LSL stream? Use one of the Python repos, or something done with BrainFlow examples? Link?

  • Rai_sato Rochester, NY
    edited February 2020

    Thank you William and Richard,
    I have already tried python version LSL sending system. These are the codes I used. I send the EEG sig and AUX from python side and receive them by Matlab. I used the Cython board and I did not connect any external pin on this test.
    python code

    from pyOpenBCI import OpenBCICyton, OpenBCIWiFi
    from pylsl import StreamInfo, StreamOutlet
    import numpy as np
    
    SCALE_FACTOR_EEG = (4500000)/24/(2**23-1) # uV/count
    SCALE_FACTOR_AUX = 0.002 / (2**4)         # G/count (accelerometer)
    
    print("Creating LSL stream for EEG. \nName: OpenBCIEEG\nID: OpenBCItestEEG\n")
    # Note: the nominal rate is declared here as 200 Hz; the Cyton actually streams at 250 Hz.
    info_eeg = StreamInfo('OpenBCIEEG', 'EEG', 8, 200, 'float32', 'OpenBCItestEEG')
    
    print("Creating LSL stream for AUX. \nName: OpenBCIAUX\nID: OpenBCItestAUX\n")
    info_aux = StreamInfo('OpenBCIAUX', 'AUX', 3, 200, 'float32', 'OpenBCItestAUX')
    
    outlet_eeg = StreamOutlet(info_eeg)
    outlet_aux = StreamOutlet(info_aux)
    
    def lsl_streamers(sample):
        outlet_eeg.push_sample(np.array(sample.channels_data)*SCALE_FACTOR_EEG)
        outlet_aux.push_sample(np.array(sample.aux_data)*SCALE_FACTOR_AUX)
    
    
    board = OpenBCICyton(port='/dev/cu.usbserial-DM00PUPR')
    board.start_stream(lsl_streamers)
    

    Matlab code

    % EEG connection
    % instantiate the library
    disp('Loading the library...');
    lib = lsl_loadlib();
    
    % resolve a stream...
    disp('Resolving an EEG stream...');
    result_EEG = {};
    while isempty(result_EEG)
        result_EEG = lsl_resolve_byprop(lib,'type','EEG');
    end
    
    % resolve the AUX stream
    result_AUX = {};
    while isempty(result_AUX)
        result_AUX = lsl_resolve_byprop(lib,'type','AUX');
    end
    
    % create a new inlet
    disp('Opening an inlet...');
    inlet_EEG = lsl_inlet(result_EEG{1});
    inlet_AUX = lsl_inlet(result_AUX{1});
    
    % pull the most recent chunks from both inlets
    [chunk_EEG,stamps1] = inlet_EEG.pull_chunk();
    [chunk_AUX,stamps2] = inlet_AUX.pull_chunk();
    

    The sizes of both chunks were almost the same, so at first glance both sampling rates also appeared to be the same. However, when I looked at the AUX data itself, the values were actually only updated once every 10 samples.

    So I suspect the AUX signals are still downsampled (or reduced in some way) even when using the Python system, although the nominal sampling rates of both streams are the same.
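
    A quick (hypothetical) way to confirm this on the receiving side is to pull a few seconds of AUX samples from the LSL inlet and count how often consecutive samples actually change value; the names below are placeholders, not part of the scripts above:

    import time
    import numpy as np
    from pylsl import StreamInlet, resolve_stream

    # open an inlet on the AUX stream created by the Python sender above
    aux_inlet = StreamInlet(resolve_stream('type', 'AUX')[0])

    samples = []
    start = time.time()
    while time.time() < start + 5:              # collect ~5 seconds of data
        sample, ts = aux_inlet.pull_sample(timeout=1.0)
        if ts is not None:
            samples.append(sample)

    data = np.array(samples)                    # shape: (n_samples, 3)
    print("AUX samples received:", len(data))
    if len(data) > 1:
        changed = np.any(np.diff(data, axis=0) != 0, axis=1).sum()
        print("AUX samples that changed value:", changed)
    # If only ~1 in 10 samples changes, the AUX data is effectively updated at
    # ~25 Hz even though it arrives at the nominal rate declared above.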

  • wjcroft Mount Shasta, CA

    Rai, thanks.

    When using the Python / LSL sending program, you must put the Cyton into either 'digital' or 'analog' aux board mode, via a serial-port SDK command:

    https://docs.openbci.com/docs/02Cyton/CytonSDK#board-mode

    The default "board mode", is to send the Accelerometer data at reduced rate.

    Regards, William

  • Rai_sato Rochester, NY

    Hi William, thank you.
    I tried to change the board mode, but I could not, because the command line said the command was not recognized.

    ------------user.py-------------
    Board type: OpenBCI Cyton (v3 API)
    Port:  /dev/cu.usbserial-DM00PUPR
    
    ------------SETTINGS-------------
    Notch filtering:True
    user.py: Logging Disabled.
    
    -------INSTANTIATING BOARD-------
    Connecting to V3 at port /dev/cu.usbserial-DM00PUPR
    Serial established...
    Warning: No Message
    No daisy:
    8 EEG channels and 3 AUX channels at 250.0 Hz.
    
    ------------PLUGINS--------------
    Found plugins:
    [ print ]
    [ csv_collect ]
    [ streamer_lsl ]
    [ noise_test ]
    [ streamer_tcp ]
    [ streamer_osc ]
    [ udp_server ]
    [ sample_rate ]
    
    
    
    Activating [ streamer_lsl ] plugin...
    Creating LSL stream for EEG. Name:OpenBCI_EEG- ID:openbci_eeg_id1- data type: float32.8channels at250.0Hz.
    Creating LSL stream for AUX. Name:OpenBCI_AUX- ID:openbci_aux_id1- data type: float32.3channels at250.0Hz.
    Plugin [ streamer_lsl] added to the list
    --------------INFO---------------
    User serial interface enabled...
    View command map at http://docs.openbci.com.
    Type /start to run (/startimp for impedance 
    checking, if supported) -- and /stop
    before issuing new commands afterwards.
    Type /exit to exit. 
    Board outputs are automatically printed as: 
    %  <tab>  message
    $$$ signals end of message
    
    -------------BEGIN---------------
    
    --> //
    Command not recognized...
    
    --> /2
    Command not recognized...
    
  • wjcroft Mount Shasta, CA

    You appear to be sending the 'command' to Python. You need to send the SDK string directly to the Cyton serial port.

  • retiutut Louisiana, USA
    edited February 2020

    I guess we would try to take a BrainFlow Python example and stream the data out to Matlab using pylsl (though this can also be tested with send/receive Python scripts).
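
    A rough, untested sketch of that idea (the port, stream names, and IDs are placeholders; it assumes the BrainFlow Python bindings expose get_eeg_channels / get_analog_channels / config_board for the Cyton, and that the '/2' SDK command is accepted via config_board):

    import time
    from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams
    from pylsl import StreamInfo, StreamOutlet

    params = BrainFlowInputParams()
    params.serial_port = '/dev/cu.usbserial-DM00PUPR'   # example port from this thread
    board_id = BoardIds.CYTON_BOARD.value

    eeg_chans = BoardShim.get_eeg_channels(board_id)
    aux_chans = BoardShim.get_analog_channels(board_id)
    srate = BoardShim.get_sampling_rate(board_id)       # 250 Hz for the Cyton

    outlet_eeg = StreamOutlet(StreamInfo('BrainFlowEEG', 'EEG', len(eeg_chans), srate, 'float32', 'bf_eeg'))
    outlet_aux = StreamOutlet(StreamInfo('BrainFlowAUX', 'AUX', len(aux_chans), srate, 'float32', 'bf_aux'))

    board = BoardShim(board_id, params)
    board.prepare_session()
    board.config_board('/2')      # switch the Cyton to analog board mode before streaming
    board.start_stream()

    try:
        while True:
            time.sleep(0.1)
            data = board.get_board_data()               # all samples since the last call
            for col in range(data.shape[1]):
                outlet_eeg.push_sample(data[eeg_chans, col])
                outlet_aux.push_sample(data[aux_chans, col])
    except KeyboardInterrupt:
        board.stop_stream()
        board.release_session()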

  • Rai_sato Rochester, NY

    @wjcroft said:
    You appear to be sending the 'command' to Python. You need to send the SDK string directly to the Cyton serial port.

    Thank you. I believe I set up analog mode correctly, but I could not start the LSL stream; the command line reported the error shown in the screenshots.

  • wjcroft Mount Shasta, CA

    The order in which you send the SDK commands is important. Did you send the analog mode command BEFORE starting the stream? There are also some timing guidelines for sending SDK commands, mentioned in the docs.

  • wjcroft Mount Shasta, CA
    edited February 2020

    Also, the second screenshot shows that you have not changed the value you scan for as the end byte / stop byte. The end byte in analog mode is 193 (0xC1).

    https://docs.openbci.com/docs/02Cyton/CytonDataFormat#binary-format
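
    In other words, the Cyton stop byte is 0xC0 plus a packet-type nibble, so a parser that hard-codes a single END_BYTE value only works in one board mode. One way to make a parser such as cyton.py's tolerant of any mode would be a range check; a sketch (not the actual pyOpenBCI code):

    def is_valid_stop_byte(b):
        # Cyton stop bytes are 0xC0-0xCF; the low nibble encodes the packet type
        # (0xC0 = default/accelerometer, 0xC1 = raw aux, e.g. analog mode).
        return 0xC0 <= b <= 0xCF

    # the two stop bytes that come up later in this thread
    for b in (0xC0, 0xC1):
        print(hex(b), "valid stop byte:", is_valid_stop_byte(b))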

  • wjcroft Mount Shasta, CA

    @Rai_sato, let us know if the proper end byte resolved your issue, and you are now streaming ok with Aux data.

  • Rai_sato Rochester, NY

    @wjcroft, I am sorry for the late reply; I could not work on this during the last 3 days.
    I still have not solved the problem, because it seems I am not changing the value properly.
    I first changed the board mode using the 'screen' command in a terminal and then launched user.py with the LSL stream plugin, as shown below.
    Could you let us know how you usually change the board mode in your environment?

    1. Run the screen command

      audio_studio:python audiostudio$ source /Users/audiostudio/.local/share/virtualenvs/python-M5SInwkT/bin/activate
      (python) audio_studio:python audiostudio$ screen /dev/cu.usbserial-DM00PUPR 115200
      
    2. Send '/2' to change the board mode
      Success: analog$$$

    3. Then launch user.py and start the LSL stream

      (python) audio_studio:python audiostudio$ pipenv shell
      Launching subshell in virtual environment…
      (python) audio_studio:python audiostudio$  . /Users/audiostudio/.local/share/virtualenvs/python-M5SInwkT/bin/activate
      (python) (python) audio_studio:python audiostudio$ python3 OpenBCI_Python/user.py -p /dev/cu.usbserial-DM00PUPR --add streamer_lsl --plugins-path OpenBCI_Python/openbci/plugins/
      ------------user.py-------------
      Board type: OpenBCI Cyton (v3 API)
      Port:  /dev/cu.usbserial-DM00PUPR
      
      ------------SETTINGS-------------
      Notch filtering:True
      user.py: Logging Disabled.
      
      -------INSTANTIATING BOARD-------
      Connecting to V3 at port /dev/cu.usbserial-DM00PUPR
      Serial established...
      �nBCI V3 8-16 channel
      On Board ADS1299 Device ID: 0x3E
      LIS3DH Device ID: 0x33
      Firmware: v3.1.2
      $$$
      No daisy:
      8 EEG channels and 3 AUX channels at 250.0 Hz.
      
      ------------PLUGINS--------------
      Found plugins:
      [ streamer_osc ]
      [ streamer_tcp ]
      [ sample_rate ]
      [ csv_collect ]
      [ print ]
      [ udp_server ]
      [ noise_test ]
      [ streamer_lsl ]
      
      
      
      Activating [ streamer_lsl ] plugin...                                                                                                                                                                                                 
      Creating LSL stream for EEG. Name:OpenBCI_EEG- ID:openbci_eeg_id1- data type: float32.8channels at250.0Hz.
      Creating LSL stream for AUX. Name:OpenBCI_AUX- ID:openbci_aux_id1- data type: float32.3channels at250.0Hz.
      Plugin [ streamer_lsl] added to the list
      --------------INFO---------------
      User serial interface enabled...
      View command map at http://docs.openbci.com.
      Type /start to run (/startimp for impedance 
      checking, if supported) -- and /stop
      before issuing new commands afterwards.
      Type /exit to exit. 
      Board outputs are automatically printed as: 
      %  <tab>  message
      $$$ signals end of message
      
      -------------BEGIN---------------
      
      --> /start
      
  • wjcroft Mount Shasta, CA

    The Python streaming programs contain examples of where SDK commands are sent to the board over the serial port, for example where the 'b' command is sent to start the board streaming.

    https://github.com/OpenBCI/pyOpenBCI/blob/master/pyOpenBCI/cyton.py

    In that file, look for lines like self.ser.write(b'b') (sending the single byte 'b'). You would simply write the string that turns on the analog or digital mode you want. You should NOT change the board mode with a separate unix shell command, since the serial port is opened and closed every time a separate command runs, potentially resetting the Cyton dongle. The mode command should be sent from inside the Python program, just before the 'b' command is sent.

    Similarly in the other Python repo, the serial write commands look like: board.ser_write(bytes(c))

    https://github.com/OpenBCI/OpenBCI_Python

    pyOpenBCI is the newer, better organized and maintained repo; OpenBCI_Python is deprecated.
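
    For example, a minimal sketch of that pattern using pyOpenBCI (its write_command() helper is the same one used in Rai's example further down; the callback here is just a placeholder):

    from pyOpenBCI import OpenBCICyton

    def handle_sample(sample):
        # replace with your LSL push / other processing
        print(sample.channels_data, sample.aux_data)

    board = OpenBCICyton(port='/dev/cu.usbserial-DM00PUPR')
    board.write_command('/2')           # switch the Cyton to analog (aux) mode first
    board.start_stream(handle_sample)   # then start streaming ('b' is sent internally)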

  • Rai_sato Rochester, NY

    @wjcroft, thank you for answering. I can now correctly get the AUX signals using the Python method.
    The AUX sampling rate is also the same as the EEG signal's.
    We look forward to the GUI issue being fixed.

  • retiutut Louisiana, USA
    edited February 2020

    @Rai_sato Can you please share with us exactly what you used? This is important for others who also want EEG and AUX data without using the GUI.

  • Rai_sato Rochester, NY
    edited February 2020

    lsl_example_analogBoardMode.py

    from pyOpenBCI import OpenBCICyton
    from pylsl import StreamInfo, StreamOutlet
    import numpy as np
    
    SCALE_FACTOR_EEG = (4500000)/24/(2**23-1) #uV/count
    SCALE_FACTOR_AUX = 0.002 / (2**4)
    
    
    print("Creating LSL stream for EEG. \nName: OpenBCIEEG\nID: OpenBCItestEEG\n")
    
    info_eeg = StreamInfo('OpenBCIEEG', 'EEG', 8, 250, 'float32', 'OpenBCItestEEG')
    
    print("Creating LSL stream for AUX. \nName: OpenBCIAUX\nID: OpenBCItestEEG\n")
    
    info_aux = StreamInfo('OpenBCIAUX', 'AUX', 3, 250, 'float32', 'OpenBCItestAUX')
    
    outlet_eeg = StreamOutlet(info_eeg)
    outlet_aux = StreamOutlet(info_aux)
    
    def lsl_streamers(sample):
        outlet_eeg.push_sample(np.array(sample.channels_data)*SCALE_FACTOR_EEG)
        outlet_aux.push_sample(np.array(sample.aux_data)*SCALE_FACTOR_AUX)
        #print(np.array(sample.aux_data)*SCALE_FACTOR_AUX)
    
    board = OpenBCICyton(port='/dev/cu.usbserial-DM00PUPR')
    
    board.write_command('/2') # change the cyton AUX board mode to Analog mode.
    
    board.start_stream(lsl_streamers)
    

    cyton.py

    import serial
    from serial import Serial
    
    from threading import Timer
    import time
    import logging
    import sys
    import struct
    import numpy as np
    import atexit
    import datetime
    import glob
    # Define variables
    SAMPLE_RATE = 250.0  # Hz
    START_BYTE = 0xA0  # start of data packet
    END_BYTE = 0xC1  # end of data packet; changed the variable from 0xC0 to 0xC1
    

    (the rest of cyton.py is unchanged from the original code)

  • retiutut Louisiana, USA
    edited March 2020

    RESOLUTION: FOR NOW, PLEASE USE @Rai_sato's EXAMPLE ABOVE

    On Windows, I was able to get this to work: I tested it using the Python send code above and received the data using the GUI LSL stream test. This is a working solution!

    On Mac, I get errors involving the end byte, even if I change 0xC0 to 0xC1 in cyton.py and then reinstall pyOpenBCI from my local folder:

    ID:<51> <Unexpected END_BYTE found <192> instead of <193>
    OR 
    ID:<114> <Unexpected END_BYTE found <193> instead of <192>
    
  • @retiutut I'm using the scripts (the LSL example and cyton.py) provided by @Rai_sato in combination with the files from https://github.com/openbci-archive/pyOpenBCI.
    I was not able to fix the jitter this way, as seen in the screenshot below. Is there anything else I need to do to get it working?

    I would really appreciate your support!
    Thanks,
    Alex

  • retiutut Louisiana, USA

    What is the picture above showing exactly? What is making the visualization? Can you share some code so that this can be replicated?
