Cyton + Daisy EEG data 'chunks' [resolved]

Aven East Lansing, MI
edited October 2020 in Cyton

Hello!

The EEG data collected by the Cyton+Daisy board doesn't seem to be uniformly spaced. What I mean is that the data looks like it's being collected in "chunks" rather than with equal separation. The result is plots that look like this (each of those vertical lines is a chunk of about a dozen data points):

I don't think this is a problem with the way I'm collecting the data, as using the Brainflow synthetic board works just fine. I also don't have much experience in EEG data analysis so I don't know if this is a common problem (or known limitation) with certain hardware, or if there is a way around it. My main concern is that this chunking will affect the results of the FFT analysis that I am performing with the data.

Any advice or insight is appreciated!

Comments

  • wjcroft Mount Shasta, CA

    Aven, hi.

    Please see the appropriate 'buffering' section in the Getting Started document.

    https://docs.openbci.com/docs/01GettingStarted/01-Boards/CytonGS#vi-fixing-ftdi-buffering-on-mac-os

    William

  • Aven East Lansing, MI

    Thank you so much! I completely forgot about this section in the docs!

  • retiutut Louisiana, USA

    You can also use the "Smoothing" feature in the GUI if you do not want to modify the FTDI buffer.

  • wjcroftwjcroft Mount Shasta, CA

    I think Aven is using Brainflow and Python for some type of realtime analysis. Mentioned above and in their other thread on Brainflow access to Pulse Sensor.

  • grahambriggs Corvallis
    edited October 2020

    I experienced the same problem when reading the data with my own code. It is caused by the way the observations get time tagged. Observations are time tagged by the brainflow library at the moment they are received in your code; they are NOT time tagged by the Cyton board at the time they are actually taken. It is described in this thread: https://openbci.com/forum/index.php?p=/discussion/2329/how-steady-is-cytons-sampling-rate-resolved

    From your graph, it looks like you are reading at about 20 Hz. If you zoom in, you should notice that each chunk contains about 12 readings, all with time tags within a few microseconds of each other. This is because if you are reading at 20 Hz, you will pull about 12 readings each cycle, and they will all be time tagged by the brainflow library code running on your device at the time they are received, not at the actual sampling time. For example, change your read rate to 1 Hz, and you will see 250 readings each with time tags within a few microseconds of each other, followed by almost a second of no data, followed by another chunk of 250 readings with nearly the same time tag.
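    The arithmetic above can be sketched in a few lines of plain Python (no board or brainflow needed; the 250 Hz sample rate and 20 Hz polling loop are the figures assumed in this post):

    ```python
    SAMPLE_RATE_HZ = 250  # Cyton sample rate assumed above
    POLL_RATE_HZ = 20     # how often the host code polls for new data

    def poll_time_stamps(n_polls):
        """Simulate time tags assigned at poll time rather than sample time.

        Every sample received in one poll gets (nearly) the same tag:
        the wall-clock time of the poll itself.
        """
        stamps = []
        for poll in range(n_polls):
            poll_time = poll / POLL_RATE_HZ
            samples_in_poll = SAMPLE_RATE_HZ // POLL_RATE_HZ  # ~12 samples queued up
            stamps.extend([poll_time] * samples_in_poll)
        return stamps

    stamps = poll_time_stamps(3)
    # ~12 samples from each poll share one time tag -> vertical "chunks" when plotted
    ```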

    This is only an issue if you are writing your own processing code where the time tag matters. Notice, for example, that all of the brainflow processing and filter functions take only an array of doubles (the time series of raw readings). They do not look at the time tag; they assume the readings are evenly spaced at the sample frequency, so the actual time tag is not important.
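    To see why the tags cannot matter, here is a stdlib-only illustration (a naive DFT standing in for brainflow's transforms, which likewise receive only the sample array):

    ```python
    import cmath

    def dft(samples):
        """Naive DFT over raw sample values only; time tags never enter the math."""
        n = len(samples)
        return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
                for k in range(n)]

    samples = [0.0, 1.0, 0.0, -1.0]              # hypothetical channel values from one chunk
    chunked_tags = [5.0, 5.0, 5.0, 5.0]          # all tagged at the poll instant
    corrected_tags = [5.0, 5.004, 5.008, 5.012]  # evenly respaced tags

    # The transform only sees `samples`; either tag array yields the same spectrum.
    spectrum = dft(samples)
    ```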

    FWIW: I have the following logic in my code to 'correct' the time tags so they are more accurate. Again, this does not affect the brainflow data filter or transform functions; it just makes it easier to look at the raw data in a graph:

    //  In the read data function
    private void ReadData()
    {
        var rawData = board.get_board_data();
        double oldestReadingTime, period;
        CalculateReadingPeriod(rawData, out oldestReadingTime, out period);

        for (int i = 0; i < rawData.Columns(); i++)
        {
            var nextReading = new OpenBciCyton8Reading(rawData, i);
            //  fix the time tag of the reading here
            nextReading.TimeStamp = oldestReadingTime + ((i + 1) * period);
            data.Add(nextReading);
        }
        ...
    }

    //  row 22 holds the time tags in this board's raw data layout
    private void CalculateReadingPeriod(double[,] rawData, out double oldestReadingTime, out double period)
    {
        double newestReadingTime = rawData[22, 0];
        oldestReadingTime = rawData[22, rawData.Columns() - 1];
        if (LastReadingTimestamp > 0)
        {
            oldestReadingTime = LastReadingTimestamp;
            LastReadingTimestamp = newestReadingTime;
        }
        else
        {
            //  first epoch won't be corrected, but all subsequent epochs should have corrected time
            LastReadingTimestamp = oldestReadingTime;
        }

        period = (newestReadingTime - oldestReadingTime) / rawData.Columns();
    }
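    For anyone doing the same correction from Python (as the original poster is), a rough stdlib sketch of the same idea; the function name and its caller-managed `last_timestamp` state are mine, and in real brainflow code the timestamp row should come from `BoardShim.get_timestamp_channel(board_id)` rather than a hard-coded index:

    ```python
    def respace_timestamps(timestamps, last_timestamp):
        """Evenly respace one chunk of poll-time-tagged samples.

        timestamps:     raw time-tag row from one poll, oldest sample first
                        (all tags fall within microseconds of the poll instant).
        last_timestamp: newest corrected tag from the previous chunk, or None
                        for the very first chunk, which is left uncorrected.
        Returns (corrected_tags, new_last_timestamp).
        """
        n = len(timestamps)
        newest = timestamps[-1]
        if last_timestamp is None:
            # First epoch can't be corrected; just remember where it ended.
            return list(timestamps), newest
        period = (newest - last_timestamp) / n
        return [last_timestamp + (i + 1) * period for i in range(n)], newest
    ```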
    