Cyton + WiFi, Packet Loss / Noise Issues

stellarpower Scotland
edited January 2023 in Other Platforms

Hi,

I have just received my 16-channel board today - what I believe to be equivalent to a Cyton + Daisy + WiFi Shield (I got it second-hand, and it's an unofficial, although common, board, so the layout is just a little different). The board comes without a dongle, so connecting via WiFi is the only way I have to communicate with it.

So far I have managed to follow the WiFi setup instructions perfectly fine. The Shield firmware is on 2.0.5 (the latest GitHub release). I'm not sure how I could find out the Cyton's firmware version.

I am new to this, so may not fully know what I am doing. I was trying to set up the basics and just get a signal, but I had some problems: extremely noisy signals, then not receiving a signal at all, and having all the inputs railed. It's not completely clear to me what railing actually means, but I am assuming the DC bias is measured as being too high, and the amplifier comes in at some point and attenuates the signal, or something like that. I've also failed to find a complete explanation of how the different groundings work / should be connected up (AGND, bias, SRB).
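
Another possibility I've been turning over: perhaps "railed" just means the samples are pinned at (or near) the ADC's full-scale output. As a quick sketch of what that would look like numerically, using the Cyton's documented ADS1299 scale factor (4.5 V reference, 24-bit signed samples) - the 90% threshold here is purely my guess, not a documented value:

```python
# Full-scale headroom of a Cyton channel, from the documented scale factor:
# uV_per_count = 4.5 / gain / (2**23 - 1) * 1e6
V_REF = 4.5  # ADS1299 reference voltage, volts
GAIN = 24    # Cyton default channel gain

full_scale_uV = V_REF / GAIN * 1e6  # ~187,500 uV of headroom at 24x gain

def is_near_railed(sample_uV: float, threshold: float = 0.9) -> bool:
    """Flag a sample pinned near the rails (threshold is an assumption)."""
    return abs(sample_uV) >= threshold * full_scale_uV
```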

However, whilst I was fiddling away, I noticed that there is a packet-loss indication in the status bar of the GUI, and figured it would make sense to get to the bottom of this first. The widget reports over 99% loss - which seems reason enough to me for seeing all sorts of weirdness in the trace itself.

I am a bit suspicious, as the packet loss is always 99.22%. It doesn't deviate from this by more than 0.05% - I've tried a few different ways to relieve this, but none has resulted in any improvement whatsoever.

Variables I have tried changing:

  • WiFi Direct vs using my existing Router (both AC/N and G bands on separate SSIDs)
  • Taking the device outside away from noise in the building (so WiFi Direct)
  • A low battery can apparently cause noise; I have tried charging. It hasn't yet reached full charge, but an hour or two on the charger hasn't seemed to make any difference.
  • I don't see an option in the 5.1.0 GUI to choose between TCP and UDP. I presume it's been removed in favour of always using UDPx3, but this would be an obvious thing to try if it can be changed.
  • Sample Rate

I have also seen some artefacts on the trace (with no cables connected) - I believe these are the cyclical noise pattern that others have mentioned, though I'm not completely sure that this is what I was seeing.

Could anyone provide any recommendations as to what I could do to diagnose this please?

Thanks a lot!


Comments

  • stellarpower Scotland
    edited January 2023

    Update: playing with a few settings whilst I find my feet. I know there's documentation for all of this, but I think a more in-depth tutorial guide or book, from pretty much first principles through to a reasonably thorough level, might be nice.

    Just working on the device with no electrodes attached for the time being, and ignoring the dropped packets - turning the gain down in the hardware settings stops all the channels from railing pretty much instantly, so the traces are starting to look a bit more like I would have expected. I am just inserting unconnected jumper wires, but there seems to be a massive 25 Hz spike. I thought to expect signals in the μV range - with the gain at 6×, the white noise in the spectrogram seems to be about 1 μV, but the 25 Hz peak is over 200 μV, which is where I would expect my EEG signal to be in terms of rough amplitude. Is all this normal? It must surely be related to mains frequency, but it seems pretty steep, and I don't think I've personally seen a half-line-frequency spike before:
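
    To put numbers on that 25 Hz peak, I could compare its PSD against the broadband floor - a minimal sketch, assuming I've first exported one channel's samples (in μV) to a hypothetical channel.txt:

    ```python
    # Sketch: compare the 25 Hz peak against the broadband noise floor.
    import numpy as np
    from scipy.signal import welch

    channel_uV = np.loadtxt("channel.txt")  # hypothetical export: one channel, in uV
    fs = 250                                # adjust to the configured sample rate

    freqs, psd = welch(channel_uV, fs=fs, nperseg=fs * 4)  # ~0.25 Hz resolution
    peak_25 = psd[np.argmin(np.abs(freqs - 25.0))]
    floor = np.median(psd[(freqs > 1) & (freqs < 45)])
    print(f"25 Hz power is {peak_25 / floor:.0f}x the 1-45 Hz median floor")
    ```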

    Originally, the noise seemed to be so large I thought something was wrong; but maybe I just need more time to get used to the typical amplitudes and orders of magnitude I should expect to be working with. This is what I get when connecting my ear clip to SRB1 and BIAS, and is similar to what I had before looking into the dropped packets issue:

  • Update 2: Looks like the packet-loss issue is resolved when connecting to the board as a 16-channel device. I had thought that, since the Daisy is an add-on, connecting as an 8-channel would simply disable the eight secondary channels - and I believed I'd seen this done somewhere online. Evidently not: the noise now looks a lot more under control, and whereas previously turning off a particular channel didn't seem to do anything, it now successfully zeroes that channel off, and I get a helpful text annotation.

  • Quoting from #231:

    Thanks for getting back to me so quickly.

    I assume you are using the UDPx3 mode, and separate battery power supplies (2 AA packs). This seemed to provide the most stable results. In part because the voltage is high enough (6V) to ride out any dips.

    I'm using an unofficial board, and both the wifi + daisy combined shield and the main board are powered by a rechargeable Li-ion (I think) battery. My inexpensive multimeter is reading 4V across the supply rail.

    According to the description, the manufacturers have improved upon some of the original issues around the noise - I have asked and am waiting for clarification.

    On the UDP mode - in the v5 GUI, this option has been removed. I believe UDPx3 is being used by default, but I am not sure how to verify that. Is there a raw command I can send?
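
    (One thing I might try in the meantime: the Shield appears to expose an HTTP API, so I could presumably query its current configuration directly. A sketch - the /all and /version endpoints are from my reading of the Shield docs for firmware 2.x, so may differ on other versions:)

    ```python
    # Sketch: probe the WiFi Shield's HTTP API for its current configuration.
    # Endpoint names are assumptions from the firmware 2.x docs.
    import requests

    SHIELD_IP = "192.168.4.1"  # WiFi-Direct default; use the LAN IP otherwise

    print(requests.get(f"http://{SHIELD_IP}/all", timeout=5).json())
    print(requests.get(f"http://{SHIELD_IP}/version", timeout=5).text)
    ```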

    Is there any known difference when using software other than the GUI? Running it on my laptop last night, the GUI was using a huge amount of CPU and drained the (laptop) battery within about 40 minutes. I have seen this bug, which suggested that similar artefacts occurred when the software on the local side lagged and wasn't able to keep up with the signal. Is there any chance that simply using a BrainFlow program to dump the data to e.g. CSV, and then analysing it later, would improve things at all? My next thought was to try this, as using the GUI to stream the data is one variable I haven't yet removed from the equation.
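
    For reference, this is the sort of thing I had in mind - a minimal BrainFlow sketch, where the board ID and connection parameters are my assumptions for a Cyton + Daisy + WiFi setup:

    ```python
    # Sketch: record ~30 s of raw data and dump it to a file, bypassing the GUI.
    import time
    from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
    from brainflow.data_filter import DataFilter

    params = BrainFlowInputParams()
    params.ip_address = "192.168.4.1"  # Shield IP (WiFi-Direct default)
    params.ip_port = 6677              # local port to receive the stream on

    board = BoardShim(BoardIds.CYTON_DAISY_WIFI_BOARD.value, params)
    board.prepare_session()
    board.start_stream()
    time.sleep(30)
    data = board.get_board_data()      # rows = channels, columns = samples
    board.stop_stream()
    board.release_session()

    DataFilter.write_file(data, "dump.csv", "w")  # read back with DataFilter.read_file
    ```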

    So if it can be powered from 6V instead - are both boards then indifferent to the voltage they're powered from, within reason? I would have assumed that this would need to be relatively exact; or, given that they are designed to run off batteries, do they in fact have internal voltage regulators that are more sophisticated than I would have assumed?

    In terms of diagnosing this correctly, is there anything else that spikes looking like this could likely be? Should I be seeing packet drops in the GUI widget when this occurs? It is currently not registering any at all, now that I've fixed a previous issue in that regard.

    And if this is definitely due to wifi issues, is there anything that can be done to mitigate this? One or two channels would be sufficient for the time being, if fully powering off some features reduced the draw causing the spikes. Otherwise it looks like I've bought an expensive doorstop.

    Thanks for your help.

  • wjcroft Mount Shasta, CA
    edited January 2023

    @stellarpower said:
    Quoting from #231:
    Thanks for getting back to me so quickly.

    I assume you are using the UDPx3 mode, and separate battery power supplies (2 AA packs). This seemed to provide the most stable results. In part because the voltage is high enough (6V) to ride out any dips.

    I'm using an unofficial board, and both the wifi + daisy combined shield and the main board are powered by a rechargeable Li-ion (I think) battery. My inexpensive multimeter is reading 4V across the supply rail.

    According to the description, the manufacturers have improved upon some of the original issues around the noise - I have asked and am waiting for clarification.

    On the UDP mode - in the v5 GUI, this option has been removed. I believe UDPx3 is being used by default, but I am not sure how to verify that. Is there a raw command I can send?

    Richard @retiutut, can you confirm that the latest GUI, when used with the Shield, defaults to UDPx3 with no user adjustment needed? You may also want to comment on GUI vs BrainFlow below. My impression is that this huge CPU load must be something unique to his system rather than the GUI itself, because loading like this is not an issue on other machines. Is it possible that his graphics setup is somehow eating lots of battery?

    Is there any known difference when using software other than the GUI? Running it on my laptop last night, the GUI was using a huge amount of CPU and drained the (laptop) battery within about 40 minutes. I have seen this bug, which suggested that similar artefacts occurred when the software on the local side lagged and wasn't able to keep up with the signal. Is there any chance that simply using a BrainFlow program to dump the data to e.g. CSV, and then analysing it later, would improve things at all? My next thought was to try this, as using the GUI to stream the data is one variable I haven't yet removed from the equation.

    No, the GUI is calling BrainFlow internally already. And these power-supply dips and ADS1299 transients are not related to CPU load or laptop battery usage. Can you try a test on another computer?

    So if it can be powered from 6V instead - are both boards then indifferent to the voltage they're powered from, within reason? I would have assumed that this would need to be relatively exact; or, given that they are designed to run off batteries, do they in fact have internal voltage regulators that are more sophisticated than I would have assumed?

    All battery-powered electronics boards, including the OpenBCI boards, contain onboard voltage-regulator chips. See the voltage limits printed on the boards. Yes, the 6V idea is to reduce the impact of the power dips, but it does not 'solve' them.

    In terms of diagnosing this correctly, is there anything else that spikes looking like this could likely be? Should I be seeing packet drops in the GUI widget when this occurs? It is currently not registering any at all, now that I've fixed a previous issue in that regard.

    The power dips can cause a range of glitches. Glad that you no longer have any packet loss. Even though the loss may be gone, this artifact has definitely been seen by other Cyton + Shield users.

    And if this is definitely due to wifi issues, is there anything that can be done to mitigate this? One or two channels would be sufficient for the time being, if fully powering off some features reduced the draw causing the spikes. Otherwise it looks like I've bought an expensive doorstop.

    OpenBCI has chosen to remove the Shield from the store for this reason. It is possible that an engineering firmware update may solve the issue, as discussed on the GitHub thread.

    William

  • wjcroft Mount Shasta, CA

    On the GitHub thread:

    If it seems from the forum post that this is in fact due to the power spikes then I will add any new relevant details here.

    No, please do not add further comments on your unique situation to the GitHub thread. Keep your comments on this thread. Thanks.

  • wjcroft Mount Shasta, CA

    re: your 50 Hz and 25 Hz FFT peaks, these are both mains noise, the 25 Hz being a subharmonic.

    In general, the only reliable way to connect channels is to body locations - without omitting the reference and ground/bias connections to the body either. When the channel and other pins are connected, this tends to cancel much of the environmental mains noise. But it could also be that you are in a high-EMF area. Position your setup away from extension cords, power supplies, lights, other equipment, power conduits in floors, walls, and ceilings, etc.
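
    If you are post-processing with BrainFlow anyway, recent BrainFlow versions also include a mains-removal helper. A minimal sketch, assuming a recording saved earlier via DataFilter.write_file and 50 Hz mains:

    ```python
    # Sketch: notch out 50 Hz mains from each EEG row, in place.
    from brainflow.board_shim import BoardShim, BoardIds
    from brainflow.data_filter import DataFilter, NoiseTypes

    board_id = BoardIds.CYTON_DAISY_WIFI_BOARD.value
    fs = BoardShim.get_sampling_rate(board_id)

    data = DataFilter.read_file("dump.csv")  # 2-D array as written by write_file
    for ch in BoardShim.get_eeg_channels(board_id):
        DataFilter.remove_environmental_noise(data[ch], fs, NoiseTypes.FIFTY.value)
    ```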

  • On graphics: I'm using Wayland, so I should be fine there, though I will have to have a look some time. I was also running this on a high-powered server before - I only moved to the laptop in an attempt to move into another room when I thought it might be power-line-related noise. It did seem better at first, being further from devices plugged into the mains, but then later it came straight back again. Given that packets don't seem to be registered as dropped, I was wondering if the GUI is essentially lagging in consuming the data, giving the impression of a similar problem but later in the chain.

    Certainly my graphs look different in Neuromore. I realised the data coming out of BrainFlow isn't normalised around zero, i.e. it seems the DC bias is included. I presume the GUI is doing something like a 1-45 Hz filter on the data before displaying it in the time-series widget. Is it doing any (other) preprocessing I should replicate if I want to graph in a different setup? I'll check the sources in a minute to see.

    I did find some issues today with one of my electrodes; I'm afraid all I have to hand right now is a low-end multimeter, with which I tested the DC resistance to the end of the lead. I will rectify this tomorrow and try again, in case not getting a full path from the body was part of the problem at some point.

    On that note, in general, should I expect to see a trace when there isn't a full circuit? I had thought that getting a reading on what is essentially an open circuit might be down to the very low voltages and high amplifier sensitivity involved (I guess the problem with moving cables is almost the same as with static: it's reading a voltage just from movement of electrons along the materials). However, it is quite different from the usual electrical common sense when using e.g. an oscilloscope in a normal setup, where, if you have your reference plugged into some circuit board, you should always get a zero trace if you haven't yet connected the signal electrode to anything.

    this artifact has definitely been seen by other Cyton + Shield users.

    So we have at least two known types of artefact, just to confirm - those resulting in packet loss, and those that don't. Are these known to have different recognisable shapes on the trace? Any others we have documented?

    So do you think it would be safe overall to disconnect the (I presume 3.7V) lithium battery and instead connect a 4×AA pack, providing 6V? The impression I have is that this board is pretty much the same as the ones in the store, just with the elements positioned and laid out differently; but I'm not an electronics guy, and I'm obviously a little hesitant about overvolting it blindly in case it causes damage. If it's standard, though, I may give it a try if nothing else ameliorates the situation.

    Thanks a lot!

  • wjcroft Mount Shasta, CA

    @stellarpower said:
    On graphics: I'm using Wayland, so I should be fine there, though I will have to have a look some time. I was also running this on a high-powered server before - I only moved to the laptop in an attempt to move into another room when I thought it might be power-line-related noise. It did seem better at first, being further from devices plugged into the mains, but then later it came straight back again. Given that packets don't seem to be registered as dropped, I was wondering if the GUI is essentially lagging in consuming the data, giving the impression of a similar problem but later in the chain.

    Can you clarify your OS and hardware? As you mention Wayland, which replaces X11, that might explain the CPU load and battery consumption. I would try some more tests with default graphics configurations. The GUI uses Processing (Java), which in turn uses OpenGL for graphics. I believe OpenGL is sensitive to the underlying layers.

    Certainly my graphs look different in Neuromore. I realised the data coming out of BrainFlow isn't normalised around zero, i.e. it seems the DC bias is included. I presume the GUI is doing something like a 1-45 Hz filter on the data before displaying it in the time-series widget. Is it doing any (other) preprocessing I should replicate if I want to graph in a different setup? I'll check the sources in a minute to see.

    Yes, the GUI filters out the DC offset, but recordings are unfiltered. You can adjust GUI filtering parameters via the menus.

    I did find some issues today with one of my electrodes; I'm afraid all I have to hand right now is a low-end multimeter, with which I tested the DC resistance to the end of the lead. I will rectify this tomorrow and try again, in case not getting a full path from the body was part of the problem at some point.

    On that note, in general, should I expect to see a trace when there isn't a full circuit?

    The amplifier is always sampling. But it will pick up a lot of noise if the channels, reference, and bias/ground are not making good contact. If you have paste and cup electrodes, I would suggest testing with those as a benchmark. Check your impedances; typically with paste you get below 5 or 10 kΩ.
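
    If you want to sanity-check impedances outside the GUI: my understanding of the GUI source is that it estimates impedance from the ADS1299 lead-off drive current, roughly as in this sketch (treat the constants and formula as my reading of the code, not a spec):

    ```python
    # Sketch: rough impedance estimate from one channel's samples recorded
    # while the lead-off (impedance) drive is enabled on that channel.
    import numpy as np

    DRIVE_A = 6.0e-9      # ADS1299 default lead-off current, amps
    SERIES_OHMS = 2200.0  # Cyton's in-series protection resistor per channel

    def impedance_ohms(samples_uV):
        v_peak = np.sqrt(2.0) * np.std(samples_uV) * 1e-6  # approx peak volts
        return max(v_peak / DRIVE_A - SERIES_OHMS, 0.0)
    ```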

    I had thought that getting a reading on what is essentially an open circuit might be down to the very low voltages and high amplifier sensitivity involved (I guess the problem with moving cables is almost the same as with static: it's reading a voltage just from movement of electrons along the materials). However, it is quite different from the usual electrical common sense when using e.g. an oscilloscope in a normal setup, where, if you have your reference plugged into some circuit board, you should always get a zero trace if you haven't yet connected the signal electrode to anything.

    Because the EEG amplifier is operating in the microvolt range, it is much more sensitive to environmental noise when disconnected. Typical oscilloscope probes are shielded.

    this artifact has definitely been seen by other Cyton + Shield users.

    So we have at least two known types of artefact, just to confirm - those resulting in packet loss, and those that don't. Are these known to have different recognisable shapes on the trace? Any others we have documented?

    The power supply dips seem to cause a range of symptoms. Depends on your situation.

    So do you think it would be safe overall to disconnect the (I presume 3.7V) lithium battery and instead connect a 4×AA pack, providing 6V? The impression I have is that this board is pretty much the same as the ones in the store, just with the elements positioned and laid out differently; but I'm not an electronics guy, and I'm obviously a little hesitant about overvolting it blindly in case it causes damage. If it's standard, though, I may give it a try if nothing else ameliorates the situation.

    I would not expect the dual 6V supplies (separate for mainboard and Shield) to solve your WiFi Shield issues; they have not for other users. But they may improve the situation somewhat, because the higher voltage rides out the dips better.

    Thanks a lot!

    Regards, William

  • For reference, this is the GUI CPU use on my server, before I have even started streaming:

    Then when I started streaming, it went up to ~130%. I emptied out some cache memory and restarted, and it dropped to a few percent, before rising back up over 100% while sitting idle. I will need to investigate; it is certainly oscillating between negligible use and more than one full core. Given that it is backed by the JVM, and was triggered by clearing the memory caches, I wonder if it is suddenly invoking the garbage collector. I know this can be an issue with Java-based applications.

    Also, this machine isn't using Wayland due to issues with the Nvidia 3090; only my laptop is, which is on Jammy Jellyfish, where graphics support on Wayland has come on in leaps and bounds over Focal.

    [inxi output attached]

  • So I've downloaded the 4.2 release of the GUI, and the 2.1 release of the Hub to go with it. I'm no longer seeing these periodic pulses in the data stream. I am seeing lots of hum-like noise, but that is also what I see when using Neuromore or mne-scan with BrainFlow as the backend, which would suggest that the pulses might be a software issue in v5 of the GUI. I can well believe the hum is due to some other issue in my setup, especially as this is all new to me, and it is consistent across all three software packages, so I will do my best to gather more precise details later - but it is looking like a possibility that these bursts of activity originate from a software issue.

  • wjcroft Mount Shasta, CA

    I'm going to let Richard @retiutut respond to your comments regarding GUI CPU utilization and laptop battery drain. Your point about the pulses disappearing with GUI 4.2 is very interesting. However, it seems doubtful that the Shield issues are due to GUI software, because the severe hardware power-supply dips have been measured by multiple engineers.

    The Hub architecture has long been abandoned, and the Hub predates BrainFlow being available.

  • Thanks - yes, I agree with you on the Shield power issues. To clarify, I'm thinking that those and what I'm seeing might be separate issues with similar symptomatology. The previously-identified bug with the head plot (#349) demonstrated that similar artefacts and packet losses could be seen when lag surfaced in the GUI, i.e. from a purely software-based cause. So this got me wondering whether this is also more of a software than a hardware issue, as the GUI graph is still the final point in the chain. Also, the originators of this board apparently claim to have resolved the WiFi power issues (possibly using that capacitor over the supply to the IC), and the previous owner seems to have got a trace out of it, though they could e.g. have been streaming into MATLAB.

    I'll keep plugging away over the coming week, and if Richard agrees then I can properly collate evidence, work up an MRE, and open a bug on GitHub.

  • Thanks for all your help. Using the v4 GUI as of just now, I can finally see myself blinking at least. There's some noise, but without those bursts dominating the trace I am at least able to see the results of what I'm doing and can narrow things down - e.g. identifying where I can go that will have less ambient noise. I'm using dry electrodes, but giving them a good lick before applying them and adding a bit more pressure has improved the SNR quite significantly.

    Richard, let me know what you want from me if you'd like me to investigate further. I was using the pre-compiled 5.x GUI, but I should be able to get Processing installed easily enough. Any reason it wouldn't build/run in a privileged container, or that I couldn't step through it with some extension in VS Code?

  • wjcroft Mount Shasta, CA
    edited January 2023

    Richard @retiutut, can you comment on any of Stellar's questions? He is asking / wondering:

    • Why he got huge CPU loads and his laptop battery draining when running the OpenBCI_GUI 5.1 with his graphics setup using the 'Wayland' window-system protocol in place of the more traditional X11. I asked him to try a run with a more vanilla setup, and never heard back. I assume he is using Linux with Wayland (which distribution?). His latest comment above mentions a "privileged container"; again, I urge him to just use the simplest and most vanilla configuration possible, maybe even trying booting into Windows.
    • He states that the WiFi + Cyton noise bursts seen with GUI 5.1 are much reduced or gone with the older GUI 4.2 - is that even possible, given the verified Shield hardware power dips? I've already stated that 4.2 + Hub is completely obsolete. Nonetheless, if the GUI / Hub timing in consuming packets results in less noise, that deserves some looking into.

    Regards, William

  • retiutut Louisiana, USA
    edited January 2023

    About an alternative Linux OS: we do our best to support Ubuntu, macOS, and Windows. I think the Processing GUI renders most graphics through the CPU, but that's really out of our hands and more a matter of how JavaFX interacts with the OS and machine. The HeadPlot widget still needs to be rewritten/reworked.

    The difference between GUI v4 and v5 is that GUI v5 uses BrainFlow. So the underlying library for connecting to all boards is different.

    If we all really want to continue debugging the Cyton + WiFi Shield, I think the next phase of attempts should be modifying the Cyton firmware, though this carries the risk of bricking a Cyton if the firmware flash fails. Anyone reading this, proceed at your own risk.

  • wjcroft Mount Shasta, CA

    @stellarpower, can you post before-and-after images showing the improvement with the Cyton + Shield 'noise' bursts? In other words, showing whether the 4.2 GUI makes these disappear ENTIRELY, or whether they are just reduced?

  • I'm using Ubuntu - as mentioned, Jammy on my laptop and Focal on the server. Wayland/X11 shouldn't be relevant to the equation, as the server does not have Wayland installed and is only running X11. Also, whilst Wayland is still new, it is significantly less resource-intensive than X, so that should work in the GUI's favour. The server has an RTX 3090 GPU installed, which we use for training neural nets. So I think graphics shouldn't be an issue in any case.

    In terms of BrainFlow, the bursts are not at all visible in the data I am getting out of mne-scan or Neuromore, both of which are backed by BrainFlow, nor in the 4.2-series GUI. So far they have appeared only in the 5.1-series GUI. I can attach a screenshot in a bit if you want.

  • wjcroft Mount Shasta, CA

    @stellarpower said:
    ...
    In terms of BrainFlow, the bursts are not at all visible in the data I am getting out of mne-scan or Neuromore, both of which are backed by BrainFlow, nor in the 4.2-series GUI. So far they have appeared only in the 5.1-series GUI. I can attach a screenshot in a bit if you want.

    Very interesting data point, @retiutut. This seems to point to GUI timing as being a factor in the noise bursts. And (miracles happen), maybe there is nothing wrong with the Shield + Cyton.

  • And (miracles happen), maybe there is nothing wrong with the Shield + Cyton.

    I think this is what I'm seeing. I presume the manufacturers placed that capacitor over the supply, or something else to that effect, to stabilise it during these points of high current draw. But despite bad everyday noise on the trace, I don't see any evidence of these artefacts outside of the v5 GUI. Admittedly, both mne-scan and Neuromore have different plots; maybe the artefacts are flying under my radar as I switch between different software, but I don't see anything that makes me suspicious. I am also not seeing significant dropped packets, and these artefacts have an extremely distinctive shape, which I'm not sure necessarily matches the shape others have seen when the issue has been down to hardware.

  • wjcroft Mount Shasta, CA

    Yeah, I forgot that your board may have an increased power-supply capacitor. Still, with that in place, one would expect ALL apps receiving the WiFi stream to be noise-burst free. Very curious indeed.

  • retiutut Louisiana, USA

    It's not hard for me to evaluate GUI screenshots with adequate detail. They usually tell me much more than qualitative descriptions, though I do appreciate the context.

  • wjcroft Mount Shasta, CA

    I hope further WiFi testing comparing results between GUI 4.2 and 5.1 is possible at the OpenBCI lab. Even though Richard does not have the extra capacitor, it certainly sounds like something odd is going on.

    @stellarpower, can you also send images of the front and back of your Shield board. The extra capacitor should be obvious.

  • I'll try and capture you some screen recordings and grab stdout/err - probably over the weekend, as I'm going to need to catch up with my day job for a bit. If you can refine what you need to see, I'll do my best to include it.

    The product details are as follows (although this listing is the 8-channel version); hopefully it will get you better images than my phone would. I see a largish capacitor marked 337C on my board.
    https://www.ebay.co.uk/itm/284876280716

    Also, whilst I'm at it: it seems the default filter for the time-series widget is a 4th-order Butterworth band-pass of 5-50 Hz. If I knock up the same in any other software that's capturing from BrainFlow, should I expect to see more or less the same trace as in the GUI? Or is there something else the widget is doing to the data that may result in a differing graph? It might help to compare like with like.
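
    Something like this is what I mean by knocking up the same filter elsewhere - a sketch; note that scipy's band-pass order convention may not match the GUI's "4th order" exactly, so the order here is an assumption to tune:

    ```python
    # Sketch: approximate the GUI's default time-series filter
    # (Butterworth band-pass, 5-50 Hz) on one channel of exported data.
    import numpy as np
    from scipy.signal import butter, sosfilt

    channel_uV = np.loadtxt("channel.txt")  # hypothetical export: one channel, in uV
    fs = 250                                # adjust to the configured sample rate

    sos = butter(4, [5.0, 50.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, channel_uV)     # causal, like a real-time display
    ```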

  • stellarpower Scotland
    edited January 2023

    I'm just now seeing spikes like the below - is this what you'd expect from the hardware issues?

    These are a lot more regular than those I first observed; they are happening right now, without fail, with the same period, and they look more like dips than the bursts above. This is in the v4 GUI.

  • retiutut Louisiana, USA
    edited January 2023

    I don't believe the GUI v4 or GUI v5 data pipelines would be related to what we see in this image. It's too odd, and is likely accurate data from the board. Both the spikes/dips and the small waves in between are not expected behavior. Floating channels should show more random waves and/or be railed.

  • wjcroft Mount Shasta, CA

    I thought you said the time series signal from the 4.2 GUI was perfectly clean?? This is certainly not.

    re: high frequency in the above looks mains related, 25 Hz? (subharmonic of 50 Hz)

    re: filtering, all apps have filter settings, if you want to compare time series graphs you need to set them the same.

    re: your clone setup. I don't understand the THREE boards. Cyton is one board, the Shield is one board; what is the THIRD board at the bottom of that stack?

  • I'm just now seeing spikes like this,

    Yes, it was clean before - at least, it didn't show artefacts like this (in 4.2). I think I have let go of the idea of a perfectly clean signal from one of these devices, but if something is halfway usable, that is enough for the interim.

    I can more easily believe these are from hardware, as they are so regularly spaced. The original ones above are irregular enough that I can believe they're more likely a software issue.

    I see a large 25 Hz peak in the FFT and have been meaning to take the rig outside to see if this is any better. It wouldn't connect last night (it seems I have to clear all my settings from the old GUI, due to #374 et al.), and it is now raining, so I haven't managed to get away from power supplies and successfully capture any data yet.

    Is there any documentation on how the bias works, without getting too deep into the electronics of the chip? The documentation on connecting a cap seems to suggest it could be used to filter ambient noise.

    The third board is used for charging and for holding the battery. The WiFi and Daisy are on the same board in my 16-channel unit.

  • wjcroft Mount Shasta, CA

    Bias is the same as 'ground' in other EEG systems. Identical. This connection both acts as a ground and injects a small anti-mains signal to improve the signal-to-noise ratio.

    Unless your pins (channels, reference, bias) are all connected to the scalp at decent impedance, you are going to see noise issues.

    When you have more time, I would like you to confirm that MNE, Neuromore, and GUI 4.2 all have ZERO evidence of noise bursts. The image above did not help to confirm that previous statement.

  • If the bias is used as the ground, then could you please explain the difference between that and SRB2? Common sense would tell me to connect the signal to one pin and the ground to whatever it's measured against, but given that the Getting Started guide says to connect the reference to SRB2 / SRB1&2 rather than bias, I feel there must be something more complicated going on.

    I have zero evidence of noise bursts like these, with this shape of three wavelets, in MNE, Neuromore, and 4.2 - only in GUI 5:

    I have more screenshots, but I'm not sure where I saved them. I can take some more, but I have a meeting in ten minutes. Bursts like these were flooding most of the channels in GUI 5. Unlike the spikes in GUI 4, they are not entirely regular: they came semi-regularly, the amplitude varied, the spacing varied, etc.

    However, I am now seeing regular spikes/dips like this in GUI 4:

    Thanks

  • wjcroft Mount Shasta, CA
    edited January 2023

    The period of the noise signals / bursts differs between the two images. The top period is about 3/4 second, the bottom about 1.3 seconds.

    If you are commenting on the quality of OpenBCI products on other websites, I suggest you include the fact that your manufacturer has significantly re-engineered the boards, and possibly compromised the performance. Otherwise readers of your posts may assume you are using original OpenBCI products.

    William

This discussion has been closed.