Musical applications

shivasongster Philadelphia
edited June 2014 in Software
My primary interest in getting involved with this project was to see about potential musical applications. I've seen some demos by Michael O'Bannon and others who work in this field, and I understand that the EEG levels are very low, difficult to map, and hard to predict, so at best we'd be talking generative music - not playing Mary Had A Little Lamb! But I think OpenBCI is an important step toward mind-music interfacing.

If you have similar interests, please contact me or perhaps we can have a dedicated section of the forum for this. I have rudimentary Arduino skills, have worked with some audio boards, and I have a long history of interaction with MIDI sequencers/controllers, including more recently the Percussa Audiocubes. I also work in the healthcare sphere (electronic health records), so am trying to strum up some interest in projects from that angle as well.

I also run the site MINDSPEAK.COM which is very much interested in publishing articles or stories related to this type of work, and implications for all facets of life beyond music.


  • Whoa, how did I miss this post?

    I'm totally into hacking EEG (my site: and I'm totally into hacking musical electronics (my other site:, but I've yet to find the best way to mash the two together.

    The problem is that EEG signals are so difficult to control consciously. So, anything recognizable as melodies, chord progressions, or controllable rhythm seems impossible (to me).

    One avenue that might work would be music that is more in the vein of droning noise soundscapes. Something might be able to work there...though an engaging and artful mapping from EEG signals (1-80 Hz...though mostly 1-20 Hz) to audio (20-20,000 Hz...though mostly 50-8,000Hz) is still not obvious.

    I'd love to see others' thoughts on this...especially demos!

  • shivasongster Philadelphia
    Right... I'd be happy to just get ambient soundscapes! Any fine control is pretty much out of the question, for the reasons you note. Guess we'll see after April.
  • Hey Shiva and Chip, I'm an EEG researcher at UCSD and I study musical rhythm. I'm not working on making music from brainwaves at the moment so much as probing how music perturbs them. Perhaps someday I could use brain responses to rhythmic events as drum triggers, but I have some basic science to do before I can get there.

    For now we may be working on some similar problems of integrating brainwaves and auditory signals. I have some experience mating MaxMSP and Pd with other EEG systems like BioSemi & Neuroscan.
  • Hi ProfHazMatt!

    In addition to my other EEG work, I've been looking at different algorithms for making EEG signals audible (besides just speeding up the playback speed) so that one can "hear" their brain. While I have plenty of tools myself for doing this, I'm having difficulty helping others figure out how to do this.

    One of the hurdles that I've identified is to convert a text file (like the log file produced by my OpenBCI GUI) into an audio format like WAV. I use Matlab, so it's not a problem for me. But, if you don't have $3K to buy Matlab, what tools are out there to generate a WAV file from a text file?

    I had hoped that Audacity would be able to read in a text file, but it does not. Audacity does have an "Import Raw" feature, but it requires a binary file, not a text file.

    Do you know of any free software that'll generate WAV files from text files?
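    One free route is plain Python, since its standard library can write WAV files directly. Here is a minimal sketch of the text-to-WAV step; the file names, the 250 Hz sample rate, the comment markers, and the assumption that the first comma-separated column holds the data are all illustrative guesses about the log format, not the actual OpenBCI GUI layout:

```python
import struct
import wave

def text_to_wav(txt_path, wav_path, sample_rate=250):
    # Read one floating-point sample per line (first comma-separated
    # column), skipping blank lines and '%'/'#' comment lines.
    samples = []
    with open(txt_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(("%", "#")):
                continue
            samples.append(float(line.split(",")[0]))

    # Normalize to the 16-bit integer range so the WAV uses full scale
    peak = max(abs(s) for s in samples) or 1.0
    ints = [int(32767 * s / peak) for s in samples]

    with wave.open(wav_path, "wb") as w:
        w.setnchannels(1)            # mono
        w.setsampwidth(2)            # 16-bit PCM
        w.setframerate(sample_rate)
        w.writeframes(struct.pack("<%dh" % len(ints), *ints))
```

    Octave or Pd would work just as well; this is only to show that no paid tools are required.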

  • Hey Chip, I'm in the same boat as you, relying on Matlab, at least as long as EEGlab toolbox is exclusively available for that environment. If someone wants to attack the .txt to .wav problem from that level of sophistication, Octave ( is a free alternative to Matlab. Python should also do what you speak of, but I have less experience with that.

    Perhaps the most accessible option I can think of for the less computationally inclined would be PureData (Pd - which is a free alternative to MaxMSP. They are both graphical programming environments and were originally designed with electronic musicians in mind. You connect programming modules that act like effect boxes with patch cords. I've written text files from Max/Pd, so I'm sure reading them would be easy enough, as well as recording and playing waves.

    I also have some hope that if the openBCI board does not have a way to integrate time stamping/event markers at the hardware level, I could do it in Max/Pd by reading in the raw EEG data stream and combining it with the .wav I record from a piezo mounted on my subject's drum head. This solution would also work well for someone interested in 'audifying' the EEG signal in real time rather than reading in data after it is recorded.
  • shivasongster Philadelphia
    This is one reason I joined... I think my audio experience will be very helpful to this crew.

    First of all, of course Audacity can read and create WAV files. If you need help here, I can be of assistance. It also has a native format, and an option for RAW.

    However, Audacity is not the best editor out there. So depending on what you are trying to do, there are other tools that might be better. How are you trying to create the audio file? What are the inputs? What type of connection are you using? Do you have a soundcard or audio interface?

    Looking forward to getting my board... Should be soon, right?
  • wjcroft Mount Shasta, CA
    The Bioexplorer and Bioera biofeedback / neurofeedback applications are easily used to produce MIDI output from EEG. One of the modules is a MIDI sound device. It can take trigger inputs from Threshold modules for example (connected to a Filter). The MIDI module can also select note or volume based on input pins, and these features can auto-scale.

    The Brainbay package that Chip has mentioned in one of his posts also supports this capability.
  • One way to start with EEG in musical applications is to simply listen to your brainwaves. Once you've recorded them, you could export them as WAV files and play them back on your computer. I just wrote a routine for doing this with OpenBCI data...

    Of course, brainwaves are too low frequency for hearing directly. Luckily, most audio programs let you play them back faster (such as by telling your audio program to use a faster sample rate). I recommend playing them back at something near 50x the original speed. It might not sound "musical", but I do find it interesting.
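    The sample-rate trick can also be done programmatically. This standard-library Python sketch copies a WAV file but declares its frame rate 50x higher, so a 250 Hz EEG recording lands in the audible range; the file names and the factor of 50 are just examples:

```python
import wave

def speed_up_wav(in_path, out_path, factor=50):
    # Read the original audio untouched
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())

    # Write the identical sample data with a frame rate `factor`
    # times higher, so playback is `factor` times faster/higher-pitched
    with wave.open(out_path, "wb") as dst:
        dst.setparams(params._replace(framerate=params.framerate * factor))
        dst.writeframes(frames)
```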

  • Just saw this...EEG control of hacked Russian folk instruments (read the whole article, or just hit Ctrl-F and search for "EEG")...

  • Hey, I found this post just now.
    I am a computer scientist and I would love to help. I have a few ideas about what might be done, but I need my OpenBCI set to test them. Even though I know next to nothing about music making, I have many friends who do. Anyway, if an arrangement comes up, be sure to count me in ^^
  • Another related device:

    Human Brain to Eurorack Interface

    Soundmachines' BI1brainterface is an exploration and performance tool for musicians, producers, actors, body performers and choreographers who want to connect their mental and emotional sphere directly with the performance.

  • Aaron Thomen has created a very nice brainwave-music tool called MindMIDI:
    MindMIDI is based on BioExplorer and uses some more MIDI tools.
    I had a chat with Aaron a while ago, he's a very nice guy and might be interested in creating an OpenBCI port as well.
  • wjcroft Mount Shasta, CA
    edited January 2015
    I took a look at the design schematic inside the Mindmidi bxd file. Whew. I would venture to say it is approaching the maximum complexity of interconnection and design that you would want to attempt in Bioexplorer. Since there is no support for nested function blocks in Bioexplorer, that might explain the hundreds of interconnection wires involved. I was going to post a screen snapshot of the design but gave up, as it was too large to fit my monitor.

    BioEra does support nesting.

    The basic idea is to have bandpass filters (band amplitudes) for the different bands: delta, theta, alpha, beta, gamma -- and the amplitudes of those bands then control your music generation algorithm. There are many ways to do this kind of thing. VPL's like Brainbay, Bioexplorer, BioEra, PureData, etc. -- if the interconnect is of modest size. Beyond that you would probably want to do this procedurally with some text based language. That way you could track modifications and diff between versions, source code control, etc. 

    Such as interfacing the OpenBCI data format (recordings or live) to one of the software suites mentioned here.
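    As a rough illustration of the procedural route, here is a standard-library Python sketch that estimates the average magnitude of each classic EEG band with a plain DFT and maps the dominant band to a MIDI note number. The band edges and the band-to-note mapping are made-up assumptions for the example, and a real implementation would use an FFT library and proper filters:

```python
import cmath

# Illustrative band edges in Hz (conventions vary across the literature)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}
# Arbitrary mapping: dominant band -> MIDI note (C octaves)
NOTE_FOR_BAND = {"delta": 36, "theta": 48, "alpha": 60,
                 "beta": 72, "gamma": 84}

def band_amplitudes(samples, fs=250):
    """Average DFT magnitude over the bins inside each band."""
    n = len(samples)
    amps = {}
    for name, (lo, hi) in BANDS.items():
        bins = range(int(lo * n / fs), int(hi * n / fs))
        mags = [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))) for k in bins]
        amps[name] = sum(mags) / len(mags) if mags else 0.0
    return amps

def note_from_eeg(samples, fs=250):
    """Pick a note from whichever band currently has the most energy."""
    amps = band_amplitudes(samples, fs)
    return NOTE_FOR_BAND[max(amps, key=amps.get)]
```

    Being ordinary text, a script like this diffs cleanly under version control, which is the advantage over a wall of patch cords once the design grows.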

  • edited January 2015
    Last year I worked with a friend to attempt this whole on-line EEG sonification using max/msp & my old emotiv epoc. Each electrode drives a filter that basically just cuts everything but a certain frequency band from white noise:

    I later built an installation with 1 speaker for each electrode, but due to budget constraints the whole thing sounds like applesauce. Also, one of the speakers started burning.

    Looking forward to trying it with my OpenBCI board
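    For anyone curious, the per-electrode "band-limited noise" idea can be sketched as a standard band-pass biquad (coefficients from the RBJ Audio-EQ Cookbook) applied to white noise, with the electrode's amplitude acting as a gain. The center frequency and Q below are arbitrary choices, not the values from the original patch:

```python
import math
import random

def bandpass_noise(gain, center_hz, n=4096, fs=44100, q=10.0):
    # RBJ band-pass biquad (0 dB peak gain) centered on center_hz
    w0 = 2 * math.pi * center_hz / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha

    x1 = x2 = y1 = y2 = 0.0
    out = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)          # white noise input
        # Direct Form I difference equation, normalized by a0
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(gain * y)                   # electrode amplitude as gain
    return out
```

    One such generator per electrode, each with its own center frequency, gives the multi-speaker drone described above.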
  • That's pretty sweet!

    The use of multiple speakers for the multiple EEG channels is a good idea. How did you get multiple audio streams for the different speakers? Did you have a multi-channel sound card for your computer?

  • Once you have the multiple audio streams, it is really easy to get those to different speakers. You could use something like a Focusrite audio interface to take the signals in, then route them into Ableton Live for processing and back out to a surround system. There are many possibilities.

    I'd like to thank all the folks who responded to my initial post. Lots of smart folks out there, and I'm looking forward to possibly collaborating with some of you. Sadly I have had limited time for my OpenBCI board, and I am concerned the learning curve will be significantly steeper without some local help.

    I look forward to exploring some of the links people have shared.
  • shivasongster Philadelphia
    edited January 2015
    @nightscape... checking out mindmidi

    Nice implementation, and maybe that is really all I am after. Finding that it may not be worth reinventing the wheel to get to what Aaron has done. I will send him a note.

    The MindMIDI video above showcases a controller designed for apps like Ableton. In terms of generative music, Ableton Live is a very capable platform, even without any EEG interface. It offers many ways to set rules, introduce randomness, or indeed reduce the randomness of the errant signals EEG might generate. It's here that I thought I might be of help.

    The only problem I have with this... and it is more of a creative thing... at some point, mapping to conventional sounds (marimba patches, bass lines, drums) gives the impression that the user has more control than they actually do. This is just a theory, and perhaps it is because these sounds are more familiar to many than, say, the more noise-ambient examples above. An observer of an OpenBCI+musical interface might mistakenly think the performer has some sort of control, when in fact the generative parameters are doing most of the "work". So there's the "raw" signals, and then the more polished layer possible with apps like Ableton.
  • Thanks @chipaudette !!
    We had only 50€ budget for the entire thing, but I found an extremely cheap external 7.1 surround card that works quite well (for the purpose of driving the tiny crappy speakers). More of a proof of concept, there are much better solutions out there...

    @shivasongster Our approach was really to go away from active BCI and treat the brain as a black box, ehrm, noise source. Especially because active BCIs usually require a lot of training. And the ones I've seen used in an art context didn't really work; they were, I guess, mostly working with blinks, head movement or confirmation bias... But yeah, in our blackbox/noise prototype, people who tried it started pretty quickly to use extreme facial expressions to get some active response from the system... I hope that this will change with a higher quality interface like the OpenBCI and better speakers.

    I'm also looking into passive BCIs at the moment (measuring fatigue, workload, attention,...), hopefully come up with an interesting application or two.
  • wjcroft Mount Shasta, CA
    @jfrey recently made a quick bridge between the OpenBCI_Python code and OSC (Open Sound Control): his branch with OSC driver

    What is OSC?

    OSC allows input of EEG (or other) data into a number of audio and video performance apps such as PureData, VVVV, MAX, etc.

    His twitter post and photo: 

    Found this good comparison and strengths of the various performance apps / languages:

    VVVV is free for non-commercial use. PD is free, but apparently development has slowed. Max is great, but expensive.
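    Since OSC is a small binary protocol, a bridge like this needs no heavy dependencies. The sketch below hand-packs a single OSC 1.0 message of float32 arguments and sends it over UDP, e.g. to a Pd or Max patch listening on a UDP port; the /openbci/eeg address pattern and port 9000 are arbitrary examples, not part of jfrey's driver:

```python
import socket
import struct

def osc_pad(data):
    # OSC strings are null-terminated and padded to a multiple of 4 bytes
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address, floats):
    # Address pattern, then type-tag string (one 'f' per argument),
    # then the arguments as big-endian float32
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for v in floats:
        msg += struct.pack(">f", v)
    return msg

def send_eeg_frame(values, host="127.0.0.1", port=9000):
    # One UDP datagram per frame of channel values
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(osc_message("/openbci/eeg", values), (host, port))
    sock.close()
```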


  • wjcroft Mount Shasta, CA
    edited April 2015
    Joel Eaton in the UK has some great videos and posts on his SSVEP / MI / aBCI music projects,

    String quartet
      (motif compositions by Eduardo Miranda)    /    (4 videos)

    Joybeat, BCI controlled drum machine

    1st International Workshop on Brain Computer Music Interfacing (BCMI)


    Guide to Brain-Computer Music Interfacing, edited by Eduardo R. Miranda and Julien Castet

    This book presents a world-class collection of Brain-Computer Music Interfacing (BCMI) tools. The text focuses on how these tools enable the extraction of meaningful control information from brain signals, and discusses how to design effective generative music techniques that respond to this information. Features: reviews important techniques for hands-free interaction with computers, including event-related potentials with P300 waves; explores questions of semiotic brain-computer interfacing (BCI), and the use of machine learning to dig into relationships among music and emotions; offers tutorials on signal extraction, brain electric fields, passive BCI, and applications for genetic algorithms, along with historical surveys; describes how BCMI research advocates the importance of better scientific understanding of the brain for its potential impact on musical creativity; presents broad coverage of this emerging, interdisciplinary area, from hard-core EEG analysis to practical musical applications.
  • edited September 2015
    Here's a well-produced video of a Berklee professor using an EEG headset to drive a modular synth. Not great sounds being demonstrated, but it is a nice little video...

  • OpenBCI just tweeted this video of someone using OpenBCI to drive percussion sounds...

  • wjcroft Mount Shasta, CA
    Just ran across EEGSynth, they are using OpenBCI and Python,

    These guys need to do a post on our Communities page(!)
