sample rate drift or jitter

edited April 2017 in Hardware
[original post title: Actual sampling rate around 250.5 Hz??]

Hello,

While using the Python software library I noticed something odd: it seems that the board sends *too many* samples.

I have this script to test:

# test_sample_rate.py

import time
from threading import Thread

import open_bci_v3 as bci

# global counter used to estimate the sampling rate
nb_samples_out = -1


class Monitor(Thread):
    """Print the measured sampling rate every 10 seconds."""

    def __init__(self):
        Thread.__init__(self)
        self.nb_samples_out = -1
        # init time references to compute the sampling rate
        self.tick = time.time()
        self.start_tick = self.tick

    def run(self):
        while True:
            new_tick = time.time()
            elapsed_time = new_tick - self.tick
            current_samples_out = nb_samples_out
            print "--- at t: ", (new_tick - self.start_tick), " ---"
            print "elapsed_time: ", elapsed_time
            print "nb_samples_out: ", current_samples_out - self.nb_samples_out
            sampling_rate = (current_samples_out - self.nb_samples_out) / elapsed_time
            print "sampling rate: ", sampling_rate
            self.tick = new_tick
            # remember the snapshot we actually used, not the live global,
            # so samples arriving during the prints are not skipped
            self.nb_samples_out = current_samples_out
            time.sleep(10)


def count(sample):
    # callback invoked by the board library for each incoming sample
    global nb_samples_out
    nb_samples_out = nb_samples_out + 1


if __name__ == '__main__':
    # init board
    port = '/dev/ttyUSB0'
    baud = 115200
    monit = Monitor()
    # daemonize the thread so it terminates together with the main one
    monit.daemon = True
    monit.start()
    board = bci.OpenBCIBoard(port=port, baud=baud, filter_data=False)
    board.startStreaming(count)

This is the output once the board is connected:

--- at t:  20.0204308033  ---
elapsed_time:  10.0100939274
nb_samples_out:  2508
sampling rate:  250.547099577
--- at t:  30.0305190086  ---
elapsed_time:  10.0100882053
nb_samples_out:  2508
sampling rate:  250.547242797
--- at t:  40.036908865  ---
elapsed_time:  10.0063898563
nb_samples_out:  2507
sampling rate:  250.539908598
--- at t:  50.0413858891  ---
elapsed_time:  10.0044770241
nb_samples_out:  2507
sampling rate:  250.587811234
This is consistent with the drift observed in OpenViBE acquisition when I use my streaming server, so I don't think anything is wrong with the Python library itself.

I've got the ChipKIT version (16-channel kit, but no Daisy module attached at the moment) and I have not tried to modify the firmware.

A varying / non-round sampling rate could be quite troublesome for signal processing :\
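One software-side workaround, if the offset really is a steady +0.5 Hz, is to resample to an even 250 Hz. A minimal linear-interpolation sketch (illustrative only; a real pipeline would use a proper polyphase resampler such as scipy.signal.resample_poly):

```python
def resample_linear(x, rate_in, rate_out):
    """Resample a 1-D signal by linear interpolation between neighbours."""
    n_out = round(len(x) * rate_out / rate_in)
    out = []
    for j in range(n_out):
        pos = j * rate_in / rate_out        # fractional index into x
        i = int(pos)
        frac = pos - i
        if i + 1 < len(x):
            out.append(x[i] * (1 - frac) + x[i + 1] * frac)
        else:
            out.append(x[i])
    return out

# 2505 samples captured in 10 s at 250.5 Hz -> 2500 samples at an even 250 Hz
y = resample_linear(list(range(2505)), 250.5, 250.0)
```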

Comments

  • wjcroft Mount Shasta, CA
    Jeremy, hi.

    I think I recall that in the design stages there was a suggestion to use a separate clock oscillator to keep the ADS1299 sample rate very accurate. Joel @biomurph I'm sure will comment here. But it was likely dropped due to added board cost and space issues.

    Am I reading your program correctly, is it deriving sample rate based on the packet arrival times on the serial port? This does vary somewhat due to OS and RFduino buffering issues.

    My guess is that the actual measured 250.xx sample rate will be relatively constant, although offset from the desired exact 250.000 sps. So it looks like the drift adjustments let you set whatever that value is, as long as it is relatively constant?

  • edited February 2015
    The latency of the serial-port reads may vary, but the software is not creating *new* packets. My code just counts the number of packets received during a time window XX seconds long (time measured on the OS side). I chose a long window, 10 seconds, to average out buffering effects.

    The problem with OpenViBE's drift correction is that it does not correct a continuous offset: it measures the difference between the number of samples expected (e.g. 250) and the number received (e.g. 251). When the difference becomes too large it either removes or interpolates values (to be confirmed), creating an artifact. Once the drift is corrected, it starts over again. And with a 0.5 Hz offset we are talking about one artifact per second with the usual parameters (2 ms drift safeguard).
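The arithmetic behind that artifact rate can be checked in a few lines, using the values quoted in the discussion (250.5 Hz actual vs. 250 Hz nominal, 2 ms safeguard): the surplus accumulates at 2 ms of drift per second, so a 2 ms safeguard fires roughly once a second.

```python
# Back-of-the-envelope check of the artifact rate, assuming the values
# from the thread: 250.5 Hz actual vs. 250 Hz nominal, 2 ms safeguard.
nominal_hz = 250.0
actual_hz = 250.5

extra_samples_per_s = actual_hz - nominal_hz               # 0.5 extra samples/s
drift_ms_per_s = extra_samples_per_s / nominal_hz * 1000.0  # 2.0 ms of drift/s

safeguard_ms = 2.0
seconds_between_corrections = safeguard_ms / drift_ms_per_s  # ~1 s per artifact
```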
  • wjcroft Mount Shasta, CA
    > And with a 0.5Hz offset we are talking of one artifact per second with the usual parameters (2ms drift safeguard).

    Umm, isn't it more like every two seconds -- there is an extra sample to deal with? Is the removal operation sophisticated enough to look at the samples on either side (3 samples total) and do some type of three-way averaging? Or does it simply drop the sample?

    If it's the latter case, then the artifact generation should not be that serious, I would think. Better than the case where it didn't have 250 samples available every second.

    I don't know which way the onboard clock generator tends to be biased. It would be great if it were consistent on all boards. Or does it depend on component values that vary between boards? If it tends to be a constant across all OpenBCI boards, then you could have an extra selection on the menu of "250.55" sps, as kludgy as that sounds(!)  :-)

    William

  • biomurph Brooklyn, NY
    Interesting. I would think that if the serial port buffer + radio transfer were causing this, the drift would be in the other direction, more like 249.5 SPS...

    The internal CLK of the ADS1299 is 2.048 MHz, +/- 0.5% at 25C (+/- 2.5% over the -40C~85C range).
    The CLK pin is broken out on the diagonal Daisy header row, so it's possible to verify the CLK frequency on this pin.
    You can tell the ADS to output its CLK to this pin by writing to the CONFIG1 register.
    By default, the CONFIG1 register value is 0x96.
    If you change it to 0xB6, that tells the ADS to output its CLK.
    You can modify the library to do the following in the initialize_ads function:

    WREG(CONFIG1,0xB6,BOARD_ADS);

    That applies if you're using the Daisy library (if you received a 16-channel kit, your 32bit board is running the Daisy library).

    I did some extensive testing early on in this journey, and verified the CLK and DRDY pin behavior. Both of them were spot on the expected timing.
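Two of the numbers above are easy to sanity-check in a couple of lines: the CONFIG1 change from 0x96 to 0xB6 just sets the CLK_EN bit (bit 5 in the ADS1299 datasheet), and the quoted +/-0.5% clock tolerance is more than enough to account for a steady +0.5 Hz offset at 250 SPS.

```python
# CONFIG1: 0x96 -> 0xB6 is exactly the CLK_EN bit being set
CONFIG1_DEFAULT = 0x96
CLK_EN = 0x20   # bit 5 of CONFIG1 routes the internal clock to the CLK pin
CONFIG1_CLK_OUT = CONFIG1_DEFAULT | CLK_EN   # -> 0xB6, the value quoted above

# +/-0.5% on the 2.048 MHz oscillator scales the 250 SPS rate directly
clock_tolerance = 0.005
nominal_sps = 250.0
worst_case_offset_hz = nominal_sps * clock_tolerance   # 1.25 Hz either way
# so an observed +0.5 Hz offset is comfortably inside the datasheet tolerance
```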
  • wjcroft Mount Shasta, CA
    edited March 2016
    For reference, here are some posts regarding clock accuracy on the Emotiv EPOC,


    One of our forum members @ratlabguy (David Hairston) is an expert in this area and wrote this paper in 2012, comparing various commercial EEG amps,

    Accounting for Timing Drift and Variability in Contemporary Electroencephalography Systems

    William

  • OpenViBE seems to accommodate a 250.5 Hz sample rate better than I thought; visually I didn't see artifacts when a sample was removed here and there during acquisition. I'll have to test with an actual application, though.

    Thanks for the pointers. I will try to investigate the sampling rate issue further next week, when I'm back near my beloved OpenBCI :)
  • Hello,

    Back to business with OpenBCI. I won't try to touch the firmware right now; first I will solder the Daisy module and try the 16-channel configuration. In the meantime, I made a pull request for the Python script that checks the sampling rate. I'd be interested in results other than mine, including 8-bit and 32-bit / 8-channel boards.
  • Another interesting issue I noticed occurs when running the sample rate script on Macs with the latency problem. (http://openbci.com/forum/index.php?p=/discussion/199/latency-timer-os-x-new-info-plist)

    For some reason, the latency issue makes the measured sample rate higher!! Like this:

    --- at t:  220.094504118  ---
    elapsed_time:  10.0056400299
    nb_samples_out:  2525
    sampling rate:  252.35766952
    --- at t:  230.100144148  ---
    elapsed_time:  10.0056400299
    nb_samples_out:  2405
    sampling rate:  240.364433741
    --- at t:  240.105775118  ---
    elapsed_time:  10.00563097
    nb_samples_out:  2525
    sampling rate:  252.357898025
    --- at t:  250.111575127  ---
    elapsed_time:  10.0058000088
    nb_samples_out:  2525
    sampling rate:  252.35363467
    --- at t:  260.117279053  ---
    elapsed_time:  10.0057039261
    nb_samples_out:  2525
    sampling rate:  252.35605797
    --- at t:  270.12196207  ---
    elapsed_time:  10.0046830177
    nb_samples_out:  2525
    sampling rate:  252.381809151
    --- at t:  280.122792006  ---
    elapsed_time:  10.0008299351
    nb_samples_out:  2525
    sampling rate:  252.479045878


    I replicated the error on a Windows machine with the latency set to 16 ms and got the results above. Changing the latency then made the output:
  • elapsed_time:  10.0013157155
    nb_samples_out:  2505
    sampling rate:  250.467045663
    --- at t:  740.09013041  ---
    elapsed_time:  10.0012960105
    nb_samples_out:  2504
    sampling rate:  250.367552102
    --- at t:  750.091445304  ---
    elapsed_time:  10.0013148944
    nb_samples_out:  2505
    sampling rate:  250.467066225
    --- at t:  760.092758557  ---
    elapsed_time:  10.0013132524
    nb_samples_out:  2505
    sampling rate:  250.467107348
    --- at t:  770.09304551  ---
    elapsed_time:  10.0002869532
    nb_samples_out:  2505
    sampling rate:  250.492812028
    --- at t:  780.093372694  ---
    elapsed_time:  10.0003271842
    nb_samples_out:  2505
    sampling rate:  250.491804305
    --- at t:  790.094683483  ---
    elapsed_time:  10.0013107892
    nb_samples_out:  2505
    sampling rate:  250.467169033

    If a large driver latency can affect the calculated sampling rate so consistently as to increase it by 2 Hz, could some latency also be causing the small 0.5 Hz increase?
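A toy model helps answer that question: driver buffering only delays samples, it cannot create them, so it redistributes counts between measurement windows without changing the long-run average. The burst delay and stall below are made-up numbers for illustration, not measurements from the thread.

```python
# Samples produced at a steady 250.5 Hz; delivery adds a fixed 16 ms
# buffering delay, except for one long stall near a window boundary
# that pushes a batch of samples into the next 10 s counting window.
true_hz = 250.5
window_s = 10.0
n_windows = 8
total = int(true_hz * window_s * n_windows)

prod = [i / true_hz for i in range(total)]          # production times
deliv = [t + (0.6 if 29.5 <= t < 30.0 else 0.016)   # stall near t = 30 s
         for t in prod]

counts = [0] * n_windows
for t in deliv:
    w = int(t // window_s)
    if w < n_windows:
        counts[w] += 1

rates = [c / window_s for c in counts]
# per-window rates wobble around the boundary the stall crosses,
# but the mean over all windows stays close to the true 250.5 Hz
```

So buffering explains window-to-window wobble, but a persistent +0.5 Hz across every window has to come from the clock itself.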
  • wjcroft Mount Shasta, CA
    Rodrigo, hi.

    Take a look at the graphs on the thread below. (1 ms vs the default 16 ms latency.) With the default latency, there is a lot of pileup happening with samples inside the buffer of the driver / OS. 


    The other graph, showing 1 ms latency, demonstrates there is still some jitter even with 'optimal' latency. I'm expecting that when the Mac FTDI driver is fixed, performance will then be similar to Windows with the 1 ms latency setting.
  • edited March 2016
    [original post title: LSL sample rate jitter]

    Hi,

    I use OpenBCI_Python-master\plugins\streamer_lsl.py to send EEG data and LabRecorder from the LSL distribution to store it. For each run I get a different sample rate in the output .xdf files (125, 121.2038, 121.8009).
    I understand that Python is not good for time-critical tasks, but on the other hand the frequencies are not that high here. The board and USB dongle are within sight of each other, about a meter apart, so there shouldn't be many transfer errors.
    Has anybody met this problem?
    Any suggestions? Would it help to rewrite the streamer in C?

    Thanks,
    Sergei
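For jitter like this, a more robust estimate of the effective rate in a recording is a least-squares fit of timestamp against sample index, rather than dividing counts by elapsed time. A sketch with synthetic timestamps (the `effective_rate` helper and the 125 Hz test stream are illustrative, not part of any library):

```python
import random

def effective_rate(timestamps):
    """Samples per second from a linear fit of timestamp vs. index."""
    n = len(timestamps)
    idx = range(n)
    mean_i = sum(idx) / n
    mean_t = sum(timestamps) / n
    cov = sum((i - mean_i) * (t - mean_t) for i, t in zip(idx, timestamps))
    var = sum((i - mean_i) ** 2 for i in idx)
    return var / cov   # 1 / slope = samples per second

# synthetic stream: true rate 125 Hz with heavy +/-20 ms arrival jitter
random.seed(0)
ts = [i / 125.0 + random.uniform(-0.02, 0.02) for i in range(5000)]
rate = effective_rate(ts)
```

Because the fit averages over every sample, per-packet arrival jitter largely cancels out, which is why different runs should converge to the same number.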
  • wjcroft Mount Shasta, CA
    @Sergei, hi. I merged your post into this existing thread.

    See the previous posts regarding the jitter / offset you also see. It's a hardware issue, not software.
