Sample Rate for Raw Text (CSV) Output


I am trying to feed the output from the GUI (5.1) into a program (offline) that will be expecting the data to be regularly-spaced according to the sample rate.

I have used the "Session Data" from the GUI, in the "OpenBCI" format - the CSV data with headers marked with a '%' delimiter like so:

The files are named "OpenBCI-RAW-YYYY-MM-DD_HH-MM-SS.txt", as in older versions of the GUI, as opposed to the files output directly by the BrainFlow streamer since the GUI moved to that library for capture.

The sample rate for the session was set at 250 Hz, and this is what is marked in the header.

However, the timestamps in the individual entries don't correspond to a 250 Hz data stream. For the purposes of this discussion, I will remove the lines corresponding to IMU and "other" signal readings, as they are on a different clock - so we should only be considering data packets relating to the bio data from the 16 main channels.

As can be seen here, the difference between the Unix stamps of individual frames is on the order of 10^-5 seconds, and the spacing is not entirely regular:

For 250 Hz, a theoretically perfect output would increase by 4 ms per line.
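As a quick sanity check, the inter-sample deltas can be computed directly from the CSV rows. This is only a sketch: the timestamp column position and the '%' header convention are assumptions, so check the header of your own recording before relying on `ts_col`.

```python
import csv

def timestamp_deltas(lines, ts_col=-2):
    """Differences, in seconds, between consecutive timestamps.

    `lines` is an iterable of CSV rows as strings; '%'-prefixed header
    lines are skipped. `ts_col` (the timestamp column position) is an
    assumption - check the header of your own file.
    """
    stamps = [float(row[ts_col])
              for row in csv.reader(lines)
              if row and not row[0].startswith("%")]
    return [b - a for a, b in zip(stamps, stamps[1:])]
```

For a perfect 250 Hz stream, every delta returned would be exactly 0.004 s.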

I've seen several mentions of whether filters etc. are applied to the raw data, and know the answer is, of course, no: these are the voltages as they come off the device and over the air. However, I have not found any posts so far on the timestamps. Is this as would be expected? I would expect variation like this in the raw BrainFlow data, as getting a perfectly real-time stream with negligible latency is near impossible. But I would also have expected the output from the GUI to match the sample rate, especially when this is stated in the header, for ease of use.

Would someone be able to confirm, please, whether this is the designed behaviour, or whether I have an issue in my setup? And if it is by design, are there any specifically recommended algorithms for interpolating the stream so that I can get values regularly spaced at 250 Hz? I know there are a few options for how to interpolate time-series data like this, and I don't want to introduce any further problems into the data pipeline by picking an option that is poor for this specific sort of signal.
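For what it's worth, one common option is linear interpolation of each channel onto a uniform 250 Hz grid anchored at the first timestamp. This is a sketch, not a recommendation specific to this hardware; `np.interp` keeps it simple, and spline or band-limited (sinc) resampling are alternatives worth considering for EEG-type signals.

```python
import numpy as np

def resample_uniform(timestamps, values, fs=250.0):
    """Linearly interpolate one channel onto a regular grid at fs Hz.

    `timestamps` and `values` are equal-length 1-D arrays; the grid is
    anchored at the first timestamp.
    """
    t0, t1 = timestamps[0], timestamps[-1]
    n = int(np.floor((t1 - t0) * fs)) + 1   # samples that fit in the span
    grid = t0 + np.arange(n) / fs
    return grid, np.interp(grid, timestamps, values)
```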



  • Thank you for the link.

    So, the timestamp is the stamp on the packet when it hits the software side? Should I then assume that, if the sample index increases linearly without interruption, we can ignore the timestamp, and thus, if no packets were dropped, the data from the board are correctly spaced at 250 Hz?

    I originally was looking into this because, when loaded into Neuromore or EDFBrowser with the correct sampling rate, it seemed to me that the data were being displayed much more slowly than in the GUI. I recorded my screen whilst capturing, and the stream lasts several times longer in Neuromore than in the GUI.
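Assuming the answer is indeed "trust the sample index", that approach can be sketched as below. The counter wrap value of 256 and step of 1 are assumptions for a Cyton-style counter; check your board's documentation before using them.

```python
def check_and_rebase(sample_indices, fs=250.0, wrap=256, step=1):
    """Count dropped packets from the sample index and synthesize a
    regular time base at the nominal rate, ignoring the timestamps.

    `wrap` and `step` are assumptions about the board's counter.
    """
    dropped = 0
    for a, b in zip(sample_indices, sample_indices[1:]):
        gap = (b - a) % wrap              # handles counter wrap-around
        dropped += max(gap // step - 1, 0)
    times = [i / fs for i in range(len(sample_indices))]
    return dropped, times
```

If `dropped` comes back zero, the synthesized `times` should be a valid regular time base for the recording.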


  • wjcroft, Mount Shasta, CA

    The timestamp is assigned at the laptop side, and is influenced by CPU loading, buffering, etc. If you read the link above, you can see that the sample rate will be constant, but not exactly 250.000 Hz. It will be off by a few tenths of a Hz either way. This slight difference remains relatively constant over sessions.

    I can't comment on what is happening in Neuromore or EDFBrowser; it's unclear what is going on there. If all three programs are just reading the CSV file, loading the recording should be fast. Display, time-series scrolling, etc. will depend on the app. They may not be scrolling in real time.
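Following on from the point that the true rate is constant but a few tenths of a Hz off nominal: over a long recording, the effective rate can be estimated from the endpoint timestamps, since jitter on individual stamps averages out over many samples. A sketch, assuming Unix-second timestamps and no dropped packets:

```python
def effective_rate(timestamps):
    """Average sample rate in Hz implied by the first and last timestamps."""
    return (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])
```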

  • Thank you, I think that answers it. If it's off by a small fraction at the source I'm not worried, so I'll just ignore the timestamps column. Thanks for your help.
