Setup for data acquisition during instrumental performance

edited March 2023 in Headware

Hello,

I would like to make real-time EEG measurements during live music performances. The biggest challenge seems to be gathering data from wind instrumentalists. As far as I can tell, the specific challenges are:

  1. Heavy involvement of the facial muscles while playing wind instruments leads to artifacts that overpower usable data.
  2. While the performer is on stage, they are often in the vicinity of badly shielded electrical equipment, which interferes with the measurements.

Are there any EEG setups that are particularly well suited for this scenario? Are some kinds of electrodes better for this than others? Which placement would work best?

I tried using a Muse S (mostly because of its low cost), but it is not very well suited for the task, since all its electrodes are very close to the face. I thought maybe the cEEGrid system might work (I heard they might be available soon through OpenBCI)?

I would also be grateful for any hints toward literature that might be helpful.

Thank you for your help!

Edit: I hope I have put this in the right category. Please move the thread, if that is not the case.

Comments

  • wjcroft Mount Shasta, CA

    Hi Michael,

    There are several academic / professional music performance experts here on the Forum who might have insights on the proper headset gear for performance EEG. Here are some I recall off the top of my head. By using the at-sign notation, they will get an email inviting them to comment further. On their profiles you can also see their past threads and comments, as well as a 'Message' button that will send them an email through the Forum.

    Kris Hofstadter @khofstadter
    https://openbci.com/forum/index.php?p=/profile/khofstadter
    https://khofstadter.com/

    Jeremy Deprisco @shivasongster
    https://openbci.com/forum/index.php?p=/profile/shivasongster
    https://www.jeremydeprisco.com/

    Joel Eaton @j_loe
    https://openbci.com/forum/index.php?p=/profile/j_loe
    http://joeleaton.co.uk

    https://openbci.com/forum/index.php?p=/discussion/98/musical-applications

    The Advanced Search button at upper right will let you search on arbitrary terms; there are a ton of posts on music applications. Just one sample search:

    https://www.google.com/search?as_q=music+performance&as_sitesearch=openbci.com

    You are right, EEG is very sensitive to artifacts induced by head and facial motion. In addition to the facial muscles, there are bands of muscles pretty much around the entire head. So I'm unclear how much those are activated by wind instrument performance and how much EEG artifact will result.

    William

  • wjcroft Mount Shasta, CA

    @brainmusicguy said:
    ...
    I tried using a Muse S (mostly because of its low cost), but it is not very well suited for the task, since all its electrodes are very close to the face. I thought maybe the cEEGrid system might work (I heard they might be available soon through OpenBCI)?

    re: cEEGrid availability in the OpenBCI Shop

    Please email to (contact at openbci.com) and ask customer support. I do see that the kit can be purchased online:

    https://www.google.com/search?q=cEEGrid+openbci
    https://exgtools.expeeeriments.io/ [cEEGrid kit]
    https://openbci.com/community/openbci-ceegrids/

    re: Muse S channel locations

    The original Muse and Muse S are essentially four channel EEG systems. The reference electrode(s) are near the center of the forehead, the two channels on the forehead are at AF7 AF8 (somewhat above and lateral to Fp1 and Fp2); and two channels over the ears, TP9 and TP10.

    Did you find that your TP9 and TP10 EEG signal quality was better than at AF7/AF8? It seems that the cEEGrid ear system might have similar signal quality, since it is around the ears. It depends on where their 'reference' electrode is located. EEG is differential: each channel is the voltage difference between its electrode and the reference. In the case of the Muse S, your reference is still on the forehead, and thus contaminated by facial movement. From the cEEGrid photos, it appears their reference is on one of the terminations around the ear semi-circle, and thus isolated from forehead movement.

    You know, you could try an experiment with OpenBCI and the headband kit, which is listed in the Shop. You could select electrode positions AWAY from the front of face area.

    re: Muse FIFTH electrode option

    Finally, with your Muse S, are you aware that you can attach a FIFTH electrode, using cup+paste? This could be positioned anywhere on your head. But I get the impression it may also be using the same forehead reference.

    https://openbci.com/forum/index.php?p=/discussion/2634/micro-usb-electrode-options-for-muse-extra-electrode-port

    re: simple "re-reference" to avoid forehead noise

    It's possible in some EEG systems to "re-reference" electrode positions, by doing simple subtraction operations between alternate channels. Thus it may be possible for you to use, for example, one ear as reference, and the other ear as your active channel. Or the external fifth channel electrode as reference, and the two ears as left / right channels, etc.
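    To make the subtraction concrete, here is a minimal sketch with synthetic numbers (not a real Muse recording; channel names follow the Muse layout discussed above). It shows how an artifact at the forehead reference leaks into every recorded channel, and how re-referencing to one ear cancels it exactly:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000  # samples

    # Hypothetical scalp potentials: small "brain" signals at the four
    # Muse sites, plus a large facial-muscle artifact at the Fpz reference.
    tp9, af7, af8, tp10 = (rng.normal(0, 1, n) for _ in range(4))
    artifact = 50 * np.sin(np.linspace(0, 20, n))  # forehead EMG, swamps the EEG
    fpz = rng.normal(0, 1, n) + artifact

    # What the headset records: each channel minus the Fpz reference,
    # so the forehead artifact appears (inverted) in every channel.
    rec_tp9, rec_af7, rec_af8, rec_tp10 = (ch - fpz for ch in (tp9, af7, af8, tp10))

    # Re-reference to TP9 by simple subtraction:
    # (tp10 - fpz) - (tp9 - fpz) = tp10 - tp9, and the noisy reference cancels.
    reref_tp10 = rec_tp10 - rec_tp9

    assert np.allclose(reref_tp10, tp10 - tp9)  # forehead artifact is gone
    ```

    The same algebra works for any pair of channels recorded against a common reference, which is why the choice of the new reference (one ear, or the external fifth electrode) is just a matter of which subtraction you perform.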

    William

  • wjcroft Mount Shasta, CA

    re: feasibility of re-referencing Muse channel data, to use a reference location OTHER THAN the default Fpz

    The paper below shows that re-referencing on Muse can be done. And in your case might re-reference to use one of the ears as reference, or to use the external fifth electrode as reference. In the first case, the opposite ear would provide a data stream without the forehead noise. In the second case (external electrode as reference), each ear would provide a valid data stream.

    "Validating the wearable MUSE headset for EEG spectral analysis and Frontal Alpha Asymmetry"
    https://www.biorxiv.org/content/10.1101/2021.11.02.466989v1.full

    C. EEG data preprocessing
    ...
    The traditional method to compute frontal alpha asymmetry (FAA) is to calculate the difference in log-transformed alpha power between the frontal electrodes F7 and F8 on 64-channel EEG data [47], [48]. While the linked-mastoids reference method has been used extensively in the EEG asymmetry literature, average-referencing was shown to be preferable to estimate FAA [47]. ... With 4 electrodes, an average reference is not meaningful for the MUSE system since it requires a whole-head electrode coverage. The default reference channel for the MUSE is Fpz which is close to the frontal channels AF7 and AF8, and leads to low signal amplitude on these channels. Thus, the MUSE frontal channels were re-referenced to the TP9/TP10 mastoid electrodes (the two other channels available on the MUSE), termed in this study the “mastoid-ref montage” (AF7 and AF8 with linked mastoid reference). This reference method has been widely used in the asymmetry literature (e.g., [47], [52]).
    ...
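    As a toy illustration of the FAA computation the excerpt describes (synthetic mastoid-referenced signals, and a crude FFT band-power estimate rather than the spectral estimators used in the paper):

    ```python
    import numpy as np

    def alpha_power(x, fs):
        """Mean power in the 8-13 Hz alpha band, estimated from the FFT."""
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        band = (freqs >= 8) & (freqs <= 13)
        return psd[band].mean()

    fs = 256                          # Muse sample rate
    t = np.arange(0, 4, 1 / fs)       # 4 seconds of data
    rng = np.random.default_rng(1)

    # Synthetic frontal channels with stronger 10 Hz alpha on the right (AF8).
    af7 = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)
    af8 = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, t.size)

    # FAA: difference of log-transformed alpha powers (positive here,
    # since we built in more right-frontal alpha).
    faa = np.log(alpha_power(af8, fs)) - np.log(alpha_power(af7, fs))
    ```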

  • edited March 2023

    Thank you so much for the links and the extremely detailed answers!!! I'll start by experimenting with re-referencing the data from the MUSE headset and go through all the forum threads to get a better understanding of what's possible and how.

  • wjcroft Mount Shasta, CA

    Michael, thanks.

    In the Muse S electrode diagram above, the 'central' electrode, Fpz, is the default hardware reference. The two electrodes immediately on either side of that are, I believe, part of the 'grounding' system.

    Below is some info on how re-referencing can be done inside of EEGLAB. Note that the first link, a tutorial from Irene, aims at an "average reference". That is not what you want; instead you will be re-referencing to one of the ears, either TP9 or TP10. The other ear will be your main channel to monitor during the performance. (Assuming you are not purchasing the external cup electrode.)

    https://irenevigueguix.wordpress.com/2016/07/07/eeglab-tutorial-re-refering-eeg-data/
    https://eeglab.org/tutorials/05_Preprocess/rereferencing.html

    We cannot guarantee that re-referencing will solve your facial noise issue, but there is a good chance it will reduce the magnitude of the noise. Possibly to an acceptable level. Besides EEGLAB, there are other methods of re-referencing. I believe it is a straightforward type of subtraction operation.

    https://www.google.com/search?q=eeg+re-referencing

  • wjcroft Mount Shasta, CA

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4160811/
    "A statistically robust EEG re-referencing procedure to mitigate reference effect"

    Abstract
    Background
    The electroencephalogram (EEG) remains the primary tool for diagnosis of abnormal brain activity in clinical neurology and for in vivo recordings of human neurophysiology in neuroscience research. In EEG data acquisition, voltage is measured at positions on the scalp with respect to a reference electrode. When this reference electrode responds to electrical activity or artifact all electrodes are affected. Successful analysis of EEG data often involves re-referencing procedures that modify the recorded traces and seek to minimize the impact of reference electrode activity upon functions of the original EEG recordings.

    New method
    We provide a novel, statistically robust procedure that adapts a robust maximum-likelihood type estimator to the problem of reference estimation, reduces the influence of neural activity from the re-referencing operation, and maintains good performance in a wide variety of empirical scenarios.

    Results
    The performance of the proposed and existing re-referencing procedures are validated in simulation and with examples of EEG recordings. To facilitate this comparison, channel-to-channel correlations are investigated theoretically and in simulation.

    Comparison with existing methods
    The proposed procedure avoids using data contaminated by neural signal and remains unbiased in recording scenarios where physical references, the common average reference (CAR) and the reference estimation standardization technique (REST) are not optimal.

    Conclusion
    The proposed procedure is simple, fast, and avoids the potential for substantial bias when analyzing low-density EEG data.

  • wjcroft Mount Shasta, CA

    Section 2 of this document has a clear explanation on how re-referencing is done, using the simple subtraction method.

    https://www.brainlatam.com/blog/choosing-your-reference-for-an-eeg-recording-and-the-advantage-of-use-analyzer-696

  • khofstadter Colchester, U.K.
    edited March 2023

    Thanks William.

    Hi Brainmusicguy.

    I used the Greentek gel-free cap with the Cyton board. Eyes were closed, body and head were not moving, only fingers on the frame drum. So, I didn't have issues with artefacts. The video of the performance is on YT.

    You might find something interesting in Section 5.5 Performance Setting of my thesis:
    https://www.researchgate.net/publication/368365376_Developing_Brain-Computer_Music_Interfaces_for_Meditation
    Also, Section 3.6 outlines the history of brain-computer music interfacing, in which you can find many more researchers (artists and academics) who experimented with mapping brain signals to sound.

    Any questions, please let me know!
    Cheers, k

  • edited March 2023

    @khofstadter said:
    ...
    Cheers, k

    Thank you, I will go through the thesis (I just skimmed over the table of contents, I think there will be a lot of stuff in there that's relevant for me). And the video of the performance is great as well!

  • khofstadter Colchester, U.K.

    Thanks!
    If you end up using SuperCollider for the sound/music part, consider https://github.com/khofstadter/OpenBCI-SuperCollider
    I will have to look into updating it soon ...

  • @khofstadter said:
    Thanks!
    If you end up using SuperCollider for the sound/music part, consider https://github.com/khofstadter/OpenBCI-SuperCollider
    I will have to look into updating it soon ...

    Oh wow, this alone might push me to use SuperCollider (instead of Csound, which I am currently using) for projects with generative sound.
