Setup for data acquisition during instrumental performance
Hello,
I would like to make real-time EEG measurements during live music performances. The biggest challenge seems to be gathering data from wind instrumentalists. As far as I can tell, the specific challenges are:
- Heavy involvement of the facial muscles while playing wind instruments leads to artifacts that overpower usable data.
- While the performer is on stage, they are often in the vicinity of poorly shielded electrical equipment, which interferes with the measurements.
Are there any EEG setups that are particularly well suited to these conditions? Are some kinds of electrodes better for this than others? Which placement would work best?
I tried using a Muse S (mostly because of its low cost), but it is not very well suited for the task, since all its electrodes are very close to the face. I thought maybe the cEEGrid system might work (I heard they might be available soon through OpenBCI)?
I would also be grateful for any hints towards literature that might be helpful.
Thank you for your help!
Edit: I hope I have put this in the right category. Please move the thread, if that is not the case.
Comments
Hi Michael,
There are several academic / professional music performance experts here on the Forum who might have insights on the proper headset gear for performance EEG. Here are some I recall off the top of my head. By using the at-sign notation, they will get an email inviting them to comment further. On their profiles you can also see past threads and comments they have made, as well as a 'Message' button which will send them an email through the Forum.
Kris Hofstadter @khofstadter
https://openbci.com/forum/index.php?p=/profile/khofstadter
https://khofstadter.com/
Jeremy Deprisco @shivasongster
https://openbci.com/forum/index.php?p=/profile/shivasongster
https://www.jeremydeprisco.com/
Joel Eaton @j_loe
https://openbci.com/forum/index.php?p=/profile/j_loe
http://joeleaton.co.uk
https://openbci.com/forum/index.php?p=/discussion/98/musical-applications
The Advanced Search button at upper right will let you search on arbitrary terms; there are a ton of posts on music applications. Just one sample search:
https://www.google.com/search?as_q=music+performance&as_sitesearch=openbci.com
You are right, EEG is very sensitive to artifacts induced by head and facial motion. In addition to the facial muscles, there are bands of muscles pretty much around the entire head. So I'm unclear how much those are activated by wind instrument performance and how much EEG artifact will result.
William
re: cEEGrid availability in the OpenBCI Shop
Please email (contact at openbci.com) and ask customer support. I do see that the kit can be purchased online:
https://www.google.com/search?q=cEEGrid+openbci
https://exgtools.expeeeriments.io/ [cEEGrid kit]
https://openbci.com/community/openbci-ceegrids/
re: Muse S channel locations
The original Muse and Muse S are essentially four-channel EEG systems. The reference electrode(s) are near the center of the forehead; the two channels on the forehead are at AF7 and AF8 (somewhat above and lateral to Fp1 and Fp2); and two channels sit over the ears, at TP9 and TP10.
Did you find that your TP9 and TP10 EEG signal quality was better than at AF7 and AF8? It seems that the cEEGrid ear system might have similar signal quality, since it is around the ears. It depends on where their 'reference' electrode is located. EEG is differential: each channel records the voltage difference between its electrode and the reference. In the case of the Muse S, that means your reference is still on the forehead, and thus contaminated by facial movement. From the cEEGrid photos, it appears their reference is on one of the terminations around the ear semicircle, and thus isolated from forehead movement.
You know, you could try an experiment with OpenBCI and the headband kit, which is listed in the Shop. You could select electrode positions AWAY from the front-of-face area.
re: Muse FIFTH electrode option
Finally, with your Muse S, are you aware that you can attach a FIFTH electrode, using cup+paste? This could be positioned anywhere on your head. But I get the impression it may also be using the same forehead reference.
https://openbci.com/forum/index.php?p=/discussion/2634/micro-usb-electrode-options-for-muse-extra-electrode-port
re: simple "re-reference" to avoid forehead noise
It's possible in some EEG systems to "re-reference" electrode positions, by doing simple subtraction operations between alternate channels. Thus it may be possible for you to use, for example, one ear as reference, and the other ear as your active channel. Or the external fifth channel electrode as reference, and the two ears as left / right channels, etc.
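The subtraction idea above can be sketched in a few lines. This is a minimal illustration with made-up microvolt values, not real Muse data: each recorded channel is already the difference between its electrode and the forehead reference Fpz, so subtracting two channels cancels the noisy Fpz term.

```python
# Re-referencing by subtraction (hypothetical sample values, not real data).
# Muse records each channel against the forehead reference Fpz:
#   tp9  = V(TP9)  - V(Fpz)
#   tp10 = V(TP10) - V(Fpz)
# Subtracting one from the other cancels the Fpz (forehead) term:
#   tp10 - tp9 = V(TP10) - V(TP9)

tp9  = [3.0, 4.0, 2.5]   # microvolts, referenced to Fpz (made-up numbers)
tp10 = [5.0, 1.0, 4.5]

# TP10 re-referenced to TP9; the forehead reference drops out.
tp10_vs_tp9 = [b - a for a, b in zip(tp9, tp10)]
print(tp10_vs_tp9)  # [2.0, -3.0, 2.0]
```

Whatever common-mode forehead noise was in both ear channels is removed by the subtraction, which is why this can help even though neither ear channel is clean on its own.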
William
re: feasibility of re-referencing Muse channel data, to use a reference location OTHER THAN the default Fpz
The paper below shows that re-referencing on Muse can be done. In your case, you might re-reference to use one of the ears as reference, or to use the external fifth electrode as reference. In the first case, the opposite ear would provide a data stream without the forehead noise; in the second case (external electrode as reference), each ear would provide a valid data stream.
"Validating the wearable MUSE headset for EEG spectral analysis and Frontal Alpha Asymmetry"
https://www.biorxiv.org/content/10.1101/2021.11.02.466989v1.full
Thank you so much for the links and the extremely detailed answers!!! I'll start by experimenting with re-referencing the data from the MUSE headset and go through all the forum threads to get a better understanding of what's possible and how.
Michael, thanks.
In the Muse S electrode diagram above, the 'central' electrode, Fpz, is the default hardware reference. The two electrodes immediately on either side of that, I believe are part of the 'grounding' system.
Below is some info on how re-referencing can be done inside of EEGLAB. Note that the first link, a tutorial from Irene, computes an "average reference". That is not what you want; instead, you will be re-referencing to one of the ears, either TP9 or TP10. The other ear will be your main channel to monitor during the performance. (Assuming you are not purchasing the external cup electrode.)
https://irenevigueguix.wordpress.com/2016/07/07/eeglab-tutorial-re-refering-eeg-data/
https://eeglab.org/tutorials/05_Preprocess/rereferencing.html
We cannot guarantee that re-referencing will solve your facial noise issue, but there is a good chance it will reduce the magnitude of the noise. Possibly to an acceptable level. Besides EEGLAB, there are other methods of re-referencing. I believe it is a straightforward type of subtraction operation.
https://www.google.com/search?q=eeg+re-referencing
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4160811/
"A statistically robust EEG re-referencing procedure to mitigate reference effect"
Section 2 of this document has a clear explanation on how re-referencing is done, using the simple subtraction method.
https://www.brainlatam.com/blog/choosing-your-reference-for-an-eeg-recording-and-the-advantage-of-use-analyzer-696
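As a sketch of that Section 2 subtraction method applied to a whole recording: the function below re-references every channel of a multichannel recording to one chosen channel (e.g. TP9). Channel names and sample values are illustrative only, and this stands in for what EEGLAB or other tools do internally.

```python
# Minimal sketch: re-reference every channel to one chosen channel (e.g. TP9)
# using the simple subtraction method. Values are illustrative, not real data.

def rereference(data, new_ref):
    """data: dict of channel name -> list of samples, all recorded against
    the original hardware reference. Returns the same channels referenced
    to new_ref instead."""
    ref = data[new_ref]
    return {ch: [s - r for s, r in zip(samples, ref)]
            for ch, samples in data.items()}

recording = {
    "AF7":  [10.0, 12.0],
    "AF8":  [11.0,  9.0],
    "TP9":  [ 2.0,  3.0],
    "TP10": [ 4.0,  1.0],
}

rr = rereference(recording, "TP9")
print(rr["TP10"])  # [2.0, -2.0]
print(rr["TP9"])   # [0.0, 0.0]  (the new reference channel is zero by definition)
```

After this operation, the old reference's contribution is identical in every channel and cancels out, so forehead noise common to all channels should be reduced. Note the new reference channel itself becomes all zeros, which is expected.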
Thanks William.
Hi Brainmusicguy.
I used the Greentek gel-free cap with the Cyton board. Eyes were closed, body and head were not moving, only fingers on the frame drum. So I didn't have issues with artefacts. The video of the performance is on YT.
You might find something interesting in Section 5.5 Performance Setting of my thesis:
https://www.researchgate.net/publication/368365376_Developing_Brain-Computer_Music_Interfaces_for_Meditation
Also, Section 3.6 outlines the history of brain-computer music interfacing, in which you can find many more researchers (artists and academics) who experimented with mapping brain signals to sound.
Any questions, please let me know!
Cheers, k
Thank you, I will go through the thesis (I just skimmed over the table of contents; I think there will be a lot of stuff in there that's relevant for me). And the video of the performance is great as well!
Thanks!
If you end up using SuperCollider for the sound/music part, consider https://github.com/khofstadter/OpenBCI-SuperCollider
I will have to look into updating it soon ...
Oh wow, this alone might push me to use SuperCollider (instead of Csound, which I am currently using) for projects with generative sound.