Failing to see any SSVEP. Is my raw data bad quality?


Comments

  • Actually, I meant just wrapping the Arduino in foil. It is physically separate from the OpenBCI.

  • Thanks for your feedback as always.

  • wjcroft Mount Shasta, CA

    Because of the oxide layer that forms on aluminum, making grounded connections to such foil is difficult.

  • Ah I understand. Maybe the stimulus box will have to have multiple layers.

  • wjcroft Mount Shasta, CA

    I do not think you will need any shielding. Try that first.

  • edited November 2022

    I've heard from other researchers that they needed shielding with OpenBCI, so I am curious about the circumstances in which one does or does not need shielding.

  • Maybe with high-frequency stimulation?

  • skidrowic United States
    edited June 2024

    Hi Wj and Matt, I find myself with a similar problem to yours, but mine was not resolved by making sure the pins on my board were in the down position. When I record eyes-closed data I fail to see a consistent peak at 10 Hz. Here is a video link of the OpenBCI GUI output:

    I'm not using the 4th channel, I'm sitting with my board fairly far from my laptop and desk, and I have tried many different combinations of filtering/denoising. Let me know if you have any insight into the issue. Thanks!

  • wjcroft Mount Shasta, CA

    @skidrowic said:
    ... When I record eyes-closed data I fail to see a consistent peak at 10 Hz,

    Hi SkidRow,

    I'm not sure why you are commenting on this SSVEP thread. Eyes closed alpha is not the same as 'visual evoked potential' (VEP) issues. Best alpha production is on electrodes in occipital or parietal sites. Some people generate more than others, so suggest trying with a friend you connect up.

    William

  • skidrowic United States

    Hi William,
    Thanks for your response. Apologies, I was doing the eyes-closed alpha test because I read earlier in this thread that you recommended Matt do the same to see if he gets a peak at 10 Hz. I've been using my computer monitor to flash at 13 Hz, and used a photoreceptor detailed on MindAffect's page to make sure my screen is flickering accurately. However, I still get charts that, like Matt's, have large jumps in amplitude. I will try the eyes-closed test on another individual to see if they have a more defined peak at 10 Hz. Here is a raw EEG plot, and the corresponding FFT plot:

  • wjcroft Mount Shasta, CA

    The 'fuzziness' shown in your red, blue, green, orange plot above looks to be mains noise. Yet your YouTube video shows you have your notch set at 50 Hz. Mains frequency here in the US is 60 Hz. You need to change the notch.
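For reference, the effect of a correctly centered notch is easy to verify offline on synthetic data. A minimal SciPy sketch, assuming the Cyton's 250 Hz sample rate (the OpenBCI GUI uses its own filter implementation; `iirnotch` here is only illustrative):

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 250.0    # Cyton sample rate in Hz
mains = 60.0  # US mains frequency; use 50.0 in Europe

# Synthetic signal: a 10 Hz "alpha" component plus strong 60 Hz mains noise.
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) + 3 * np.sin(2 * np.pi * mains * t)

# Second-order IIR notch centered on the mains frequency, applied zero-phase.
b, a = iirnotch(w0=mains, Q=30.0, fs=fs)
clean = filtfilt(b, a, sig)

# Compare spectral amplitude at 60 Hz before and after the notch.
freqs = np.fft.rfftfreq(len(t), 1 / fs)
idx = np.argmin(np.abs(freqs - mains))
before = np.abs(np.fft.rfft(sig))[idx]
after = np.abs(np.fft.rfft(clean))[idx]
print(f"60 Hz amplitude: {before:.1f} -> {after:.3f}")
```

A notch at 50 Hz would leave the 60 Hz line almost untouched, which matches the 'fuzziness' in the plots.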

  • skidrowic United States

    I've adjusted my notch to the correct value of 60 Hz, but still see no real peak. Here is a recording of data where my eyes are closed for 10 seconds. I've also attached a link to my Python script. Thank you again for any help; I greatly appreciate it.
    https://drive.google.com/file/d/1_cYsEqgHkOUW9NFnQC4UrJHF1C23ErCf/view?usp=sharing

  • wjcroft Mount Shasta, CA

    Suggest perhaps working with a friend and have them close their eyes while you watch time series and FFT. The above time series is 700 seconds long. So your 10 second eyes closed is lost in that ocean of eyes open. The blink artifact does not help.
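One way to keep a short eyes-closed window from being lost in a long recording is to analyze only that segment. A synthetic sketch, assuming 250 Hz sampling; the segment indices are illustrative, not taken from the actual recording:

```python
import numpy as np
from scipy.signal import welch

fs = 250.0
rng = np.random.default_rng(2)

# Synthetic stand-in for a long recording: noise everywhere, plus a
# 10 Hz "alpha" burst during a hypothetical 10 s eyes-closed window.
n_total = int(700 * fs)
eeg = rng.normal(0, 1, n_total)
start, stop = int(300 * fs), int(310 * fs)  # illustrative window bounds
t = np.arange(stop - start) / fs
eeg[start:stop] += 2 * np.sin(2 * np.pi * 10 * t)

# Welch PSD of only the eyes-closed segment, not the whole file.
f, psd = welch(eeg[start:stop], fs=fs, nperseg=1024)
alpha_peak = f[np.argmax(psd)]
print(f"Peak in eyes-closed segment: {alpha_peak:.1f} Hz")
```

Averaging over the full 700 s would dilute the same burst to near-invisibility; slicing first is the whole trick.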

  • skidrowic United States

    Ok, sure. Apologies for the confusion: the x axis is data points, so the recording is around 3-4 seconds. I will try with a friend and perform ICA to remove blink artifacts.

  • edited January 14

    Hi OpenBCI community,

    I have the same problem of 'Failing to see any SSVEP'. I'm working with the Cyton and Daisy boards to record SSVEP signals, but I'm failing to detect clear SSVEP responses in the frequency domain. The setup is described below:

    Setup Details
    Hardware: OpenBCI Cyton + Daisy boards
    Montage:
    Cyton Board: FP1, FP2, C3, C4, T3, T4, O1, O2
    Daisy Module: F7, F8, F3, F4, T5, T6, P3, P4 (not used in the test below)
    Software: Python, BrainFlow, PsychoPy, MNE
    Stimulus: Flickering white circle at 15 Hz for 10 seconds, alternating with 10 seconds of rest (5 cycles total).

    Despite preprocessing (Notch at 50 Hz, Bandpass 1–40 Hz, z-score normalization), I can’t observe distinct peaks at the stimulus frequency (15 Hz) in the FFT analysis.
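For what it's worth, that preprocessing chain can be checked offline on synthetic data to confirm the processing itself preserves a 15 Hz peak. A SciPy sketch, assuming the 125 Hz sample rate of Cyton + Daisy; the filter choices are illustrative, not necessarily the exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 125.0   # Cyton + Daisy sample rate; a plain Cyton runs at 250 Hz
stim = 15.0  # stimulus frequency

# Synthetic 10 s trial: a weak 15 Hz SSVEP buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * stim * t) + rng.normal(0, 1, t.size)

# 1) Notch at 50 Hz mains, as described in the post.
b, a = iirnotch(50.0, 30.0, fs=fs)
x = filtfilt(b, a, x)

# 2) Band-pass 1-40 Hz.
b, a = butter(4, [1.0, 40.0], btype='bandpass', fs=fs)
x = filtfilt(b, a, x)

# 3) z-score normalization.
x = (x - x.mean()) / x.std()

# FFT: with 10 s of data the resolution is 0.1 Hz, so 15 Hz falls on a bin.
freqs = np.fft.rfftfreq(x.size, 1 / fs)
amps = np.abs(np.fft.rfft(x))
peak_freq = freqs[np.argmax(amps)]
print(f"Strongest component: {peak_freq:.1f} Hz")
```

If a simulated peak survives this pipeline but the real recordings show nothing, the problem is more likely upstream (stimulus timing, electrodes, montage) than in the preprocessing.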

    Code

    # Imports needed by this script (not shown in the original post):
    import csv
    import time
    from threading import Thread

    from brainflow.board_shim import BoardIds, BoardShim, BrainFlowInputParams
    from psychopy import core, event, monitors, visual

    # NOTE: this uses BoardIds.CYTON_BOARD; if the Daisy module is physically
    # attached, BrainFlow needs BoardIds.CYTON_DAISY_BOARD to parse all 16 channels.
    class EEGSSVEPRecorder:
        def __init__(self, port):
            self.params = BrainFlowInputParams()
            self.params.serial_port = port
            self.board = BoardShim(BoardIds.CYTON_BOARD.value, self.params)

            self.is_running = True
            self.frequency = 15        # stimulus flicker frequency (Hz)
            self.duration_ssvep = 10   # seconds of stimulation per cycle
            self.duration_rest = 10    # seconds of rest per cycle
            self.total_cycles = 5
            self.current_label = "Rest"
            self.eeg_data = []

            monitor = monitors.Monitor("testMonitor", width=34.5, distance=60)
            monitor.setSizePix([1920, 1080])
            # Pass the calibrated monitor to the window so it is actually used.
            self.win = visual.Window(size=[1920, 1080], fullscr=True, color='black',
                                     units='pix', monitor=monitor)
            self.circle = visual.Circle(self.win, radius=100, fillColor='white', lineColor='white')

        def record_eeg(self):
            """Drain the BrainFlow ring buffer and tag each sample with the current label."""
            while self.is_running:
                try:
                    data = self.board.get_board_data()
                    timestamps = data[BoardShim.get_timestamp_channel(BoardIds.CYTON_BOARD.value)]
                    eeg_channels = BoardShim.get_eeg_channels(BoardIds.CYTON_BOARD.value)
                    for i in range(data.shape[1]):
                        row = [timestamps[i], self.current_label] + [data[channel][i] for channel in eeg_channels]
                        self.eeg_data.append(row)
                except Exception as e:
                    print(f"Error while recording EEG: {e}")
                time.sleep(0.1)  # poll periodically instead of spinning at full speed

        def flicker_circle(self, duration):
            flicker_interval = 1.0 / (2 * self.frequency)  # one on or off half-period
            next_toggle_time = time.time()
            is_circle_on = True
            start_time = time.time()
            self.current_label = "SSVEP"
            while time.time() - start_time < duration:
                if time.time() >= next_toggle_time:
                    is_circle_on = not is_circle_on
                    next_toggle_time += flicker_interval
                if is_circle_on:
                    self.circle.draw()
                self.win.flip()
                if event.getKeys(['escape', 'q']):
                    self.is_running = False
                    return

        def rest_period(self, duration):
            self.current_label = "Rest"
            start_time = time.time()
            while time.time() - start_time < duration:
                self.win.flip()
                if event.getKeys(['escape', 'q']):
                    self.is_running = False
                    return

        def run_experiment(self):
            self.board.prepare_session()
            print("BrainFlow session prepared successfully.")
            self.board.start_stream()
            print("Stream started successfully.")

            eeg_thread = Thread(target=self.record_eeg)
            eeg_thread.start()

            for cycle in range(self.total_cycles):
                print(f"Cycle {cycle + 1}/{self.total_cycles}")
                self.flicker_circle(self.duration_ssvep)
                if not self.is_running:
                    break
                self.rest_period(self.duration_rest)
                if not self.is_running:
                    break

            self.is_running = False
            self.board.stop_stream()
            self.board.release_session()
            eeg_thread.join()
            self.save_to_csv(r"C:\Users\galos\OneDrive\Desktop\eeg_ssvep_data_S1_1_BrainFlow.csv")
            self.cleanup()

        def save_to_csv(self, filename):
            print(f"Saving data to {filename}...")
            eeg_channels = BoardShim.get_eeg_channels(BoardIds.CYTON_BOARD.value)
            header = ["Timestamp", "Label"] + [f"Channel_{i}" for i in eeg_channels]
            try:
                with open(filename, mode='w', newline='') as file:
                    writer = csv.writer(file)
                    writer.writerow(header)
                    writer.writerows(self.eeg_data)
                print(f"EEG data saved to {filename}")
            except Exception as e:
                print(f"Error saving CSV: {e}")

        def cleanup(self):
            print("Experiment completed. Exiting.")
            self.win.close()
            core.quit()
    
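One timing detail worth checking in `flicker_circle` above: toggling on `time.time()` lets the toggle boundary drift relative to the monitor's refresh, since `win.flip()` only completes on a vsync. A common alternative is to lock the flicker to frame counts. A minimal sketch in plain Python (no PsychoPy; the 60 Hz refresh rate is an assumption):

```python
def frames_per_half_cycle(refresh_hz, stim_hz):
    """Number of refresh frames the stimulus stays on (or off).

    Frame-locking only works cleanly when twice the stimulus
    frequency divides the refresh rate evenly.
    """
    frames = refresh_hz / (2 * stim_hz)
    if frames != int(frames):
        raise ValueError(f"{stim_hz} Hz does not divide a {refresh_hz} Hz refresh evenly")
    return int(frames)

# On a 60 Hz monitor, a 15 Hz flicker means 2 frames on, 2 frames off:
print(frames_per_half_cycle(60, 15))  # 2

# In the drawing loop one would then toggle by frame count, not wall-clock:
#   for frame in range(total_frames):
#       if (frame // half) % 2 == 0:
#           circle.draw()
#       win.flip()
```

Note that a 13 Hz stimulus, as used earlier in this thread, cannot be frame-locked on a 60 Hz display; picking stimulus frequencies that divide the refresh rate avoids one source of jitter.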


    Are there specific adjustments required when using Cyton and Daisy boards for SSVEP detection, compared with the Ganglion board?
    Could there be issues with my preprocessing pipeline or stimulus configuration?
    Has anyone faced similar challenges with Cyton + Daisy boards, and how did you solve them?

    Thank you for your help!

  • wjcroft Mount Shasta, CA

    Hi Ghada,

    There are a number of other threads here on the forum mentioning issues with SSVEP detection. Have you looked over those?

    https://www.google.com/search?as_q=ssvep+cvep+calibration&as_sitesearch=openbci.com

    Some of the challenges with SSVEP are listed below. A counter-example is the success of the MindAffect cVEP system, which works robustly.

    • dry (passive) electrodes work poorly relative to wet electrode systems, due to weaker signal strength
    • square-wave light stimulation induces other harmonics besides the fundamental, confusing your FFT results
    • importance of accurate calibration with a photodiode measuring your monitor setup, as done by MindAffect
    • disabling of certain other display system parameters/settings that influence accuracy (see the MindAffect tutorial)
    • etc.
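The harmonics point is easy to see numerically: an ideal square-wave stimulus at 15 Hz puts energy at the odd harmonics (45 Hz, 75 Hz, ...) as well as the fundamental. A small numpy sketch (250 Hz sampling assumed):

```python
import numpy as np

fs, f0 = 250.0, 15.0
t = np.arange(0, 4, 1 / fs)
square = np.sign(np.sin(2 * np.pi * f0 * t))  # ideal square-wave stimulus

freqs = np.fft.rfftfreq(t.size, 1 / fs)
amps = np.abs(np.fft.rfft(square))

def amp_at(f):
    """Spectral amplitude at the FFT bin nearest frequency f."""
    return amps[np.argmin(np.abs(freqs - f))]

# Odd harmonics (45 Hz, 75 Hz) carry substantial energy; even ones do not.
for f in (15, 30, 45, 60, 75):
    print(f"{f} Hz: {amp_at(f):.0f}")
```

This is one reason an FFT of square-wave-driven SSVEP data can look confusing: a peak at 45 Hz is expected, not a bug, and sinusoidal luminance modulation gives a cleaner single-line spectrum.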

    Suggest looking at the cVEP tutorials posted recently on the Community page.

    https://openbci.com/community/
    https://openbci.com/community/mind-controlled-robot-openbci-mindaffectbci-maqueen-v2/

    https://mindaffect-bci.readthedocs.io/en/latest/index.html
    https://mindaffect-bci.readthedocs.io/en/latest/FAQ.html
    https://mindaffect-bci.readthedocs.io/en/latest/installation.html#osoptref
    https://mindaffect-bci.readthedocs.io/en/latest/installation.html#framerate-check [vsync setting]

    William

  • Thank you for sharing the resources. I appreciate the insights and suggestions regarding SSVEP detection and the challenges involved. I did go through some of the threads mentioned and have reviewed the links you shared. I wanted to address a few specific points:

    SSVEP vs. cVEP: I noticed in one of the shared posts that you mentioned, "MindAffect does NOT use SSVEP, (steady state VEP), instead it uses what is called cVEP, code based VEP. cVEP requires even more tight timing than the SSVEP you are using." My goal, however, is specifically to detect SSVEP signals using the Cyton and Daisy boards. As such, my focus remains on optimizing for SSVEP rather than transitioning to a cVEP-based approach.

    MindAffect Package: While I appreciate the suggestion to explore the MindAffect package, I’ve noticed a significant push towards using it across multiple posts. Unfortunately, the provided GitHub link (https://github.com/mindaffect/pymindaffectBCI) is broken, and I haven’t been able to clone the repository for further exploration. Additionally, I didn’t find any mention of this package in OpenBCI's official documentation under third-party tools, software, or developer resources. This raises concerns about its accessibility and integration with my current setup.

    Timing and Signal Peaks: Timing seems less likely to be the issue in my case, as I’m not observing any recognizable peaks, even at incorrect frequencies or durations. This makes me suspect that the issue might be related to electrode sensitivity, hardware configuration, or preprocessing strategies.

    Given these challenges, I would greatly appreciate any insights or suggestions tailored to improving SSVEP detection using the Cyton and Daisy boards. Specific advice on electrode placement, signal amplification, or preprocessing workflows would be especially helpful.

    Thank you again for your time, understanding, and assistance!

  • wjcroft Mount Shasta, CA
    edited January 15

    As mentioned in the Community tutorial I linked to, this is the current GitHub location of MindAffect.

    https://github.com/wjcroft/pymindaffectBCI

    And the docs,

    https://mindaffect-bci.readthedocs.io/en/latest/

    Re: timing / calibration

    Have you done a calibration with your monitor, using a photo sensor? The issue with your setup is likely NOT electrodes or 'hardware configuration'. Suggest using a similar electrode placement to what the MindAffect headset uses.

    "Timing seems less likely to be the issue in my case, as I’m not observing any recognizable peaks, even at incorrect frequencies or durations."

    The point I'm trying to make, and that MindAffect emphasizes, is that unless you take appropriate precautions, frequency instability in your monitor can introduce 'jitter', i.e. variable-frequency flicker, into the stimulation you are presenting. This smears out the fundamental frequency you are trying to detect. The MindAffect documentation shows several calibration tests they perform to ensure the timing is accurate.
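This smearing can be simulated directly: compare the spectrum of a stable 15 Hz oscillation with one whose instantaneous frequency wanders slightly, a crude stand-in for variable frame timing (the wander magnitude is illustrative):

```python
import numpy as np

fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Stable 15 Hz response: all energy lands in one FFT bin.
stable = np.sin(2 * np.pi * 15.0 * t)

# Jittered version: instantaneous frequency wanders around 15 Hz as a
# random walk, then is integrated into phase.
wander = np.cumsum(rng.normal(0.0, 0.02, t.size))  # random walk, in Hz
phase = 2 * np.pi * np.cumsum(15.0 + wander) / fs  # integrate frequency
jittered = np.sin(phase)

freqs = np.fft.rfftfreq(t.size, 1 / fs)
idx = np.argmin(np.abs(freqs - 15.0))
amp_stable = np.abs(np.fft.rfft(stable))[idx]
amp_jitter = np.abs(np.fft.rfft(jittered))[idx]
print(f"15 Hz bin amplitude: stable {amp_stable:.0f}, jittered {amp_jitter:.0f}")
```

The jittered signal's energy spreads across neighboring bins, so the 15 Hz line collapses even though the average frequency is still 15 Hz; this is what an uncalibrated monitor can do to an SSVEP peak.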

  • wjcroft Mount Shasta, CA

    Here is a December 2024 paper published on SSVEP research, using OpenBCI. You might find tips and suggestions to achieve the classification accuracy you desire.

    https://www.ascspublications.org/wp-content/uploads/woocommerce_uploads/2024/12/Jbins3Vol.Issue1_12_30_2024.pdf

    SSVEP Brain-Computer Interface Performance Examination Based on the Color and Shape Stimulus Configuration
    Abstract: The Brain-Computer Interface (BCI) has become an essential communication tool, providing a direct connection to the brain. This technology is particularly transformative for individuals with locked-in syndrome, such as those with Amyotrophic Lateral Sclerosis (ALS). It also promises direct communication without relying on typing or voice commands. One of the most effective BCI methods involves the use of Steady State Visual Evoked Potential (SSVEP), a specific type of brain signal. However, there is a need for new modalities to enhance performance, particularly due to its relatively slow response time compared to typical communication methods. Color and shape visual stimuli are known to yield distinguishable brain responses. Therefore, this study aimed to investigate how color and shape affect the performance of the SSVEP BCI. The research recorded Electroencephalogram (EEG) signals from eight channels as participants viewed visual stimuli displayed on a monitor. The stimuli included flashing lights with variations in frequency, color, and shape. The results showed the highest accuracy configuration of 72%, with an Information Transfer Rate (ITR) of 15.5 bits per minute. Although the findings were statistically insignificant, they suggest that color and shape may influence the performance of SSVEP BCIs.
    ...
    II. RELATED WORKS
    A. SSVEP-based Brain-Computer Interface
    The use of SSVEP in BCI systems has been explored in several studies. In 2009, Bin et al. [14] developed an online multi-channel SSVEP-based BCI system with high performance, achieving an average accuracy of 95.3% and an information transfer rate of 58 ± 9.6 bits/min. In 2014, Ko et al. [15] used the standard frequency pattern method to enhance the accuracy of an EEG-based SSVEP BCI system, achieving an accuracy of 95%. In 2016, Punsawad et al. [8] proposed a half-field SSVEP-based BCI system with a single flickering frequency, which increased the number of commands and reduced visual fatigue. A study in 2018 demonstrated a high-speed SSVEP-based BCI system using dry claw-like electrodes, achieving an average classification accuracy of 93.2% and an ITR of 92.35 bits/min using 1-second-long SSVEPs. The proposed dry electrode reduces system preparation time, increases wearing comfort, and improves the user experience, while the adopted TRCA algorithm improves the classification accuracy and ITR of the system [16].
