
I Rebuilt My First Brain-Computer Interface: Here Is What I Learned

A while ago, I wrote about 5 things I learned about brain-computer interfaces during the first years of my PhD in neurotechnology. In the intro of that piece, I describe a simple brain-computer interface (BCI) that plays a beep when I focus on my breath. Now, about six years later and four years into said PhD, I thought it would be interesting to see what would happen if I rebuilt that beep-playing BCI. Would I build it differently? Would I be quicker, or would I approach it from another angle? In what area(s) would my experience help me, if any?

To explore this question, the people over at OpenBCI were kind enough to lend me their Biosensing Starter Kit to measure my brain waves. This is a different device than the one I used the first time, but it definitely has enough capabilities to perform the task at hand.

To be able to reflect on the project and distill some learnings, I decided to write down every single thing I did. Initially, it was only intended as a diary to keep track of what I did. But when re-reading it, I realized it offered a unique view of my thought process, including all the little mistakes. So, in an effort to show that mistakes will always be made, and to preserve the little details that are usually left out of the final text, I decided to include these notes without editing (except for typos).

Before we get into it, I want to set the framework for how to approach this. First, I do not want to reinvent the wheel, so I will use all the tools available to reach the goal of a new beep-playing BCI. It is more about the process than the end goal. Secondly, I will try to keep it as simple as possible, so that anyone interested can learn from the project and use it as well. Lastly, you might not be interested in some of the reflections and just want to see the project. No worries, I got you: find the Git repository + short tutorial here.

From brain to beep

First, the components we have to work with:

  1. The Biosensing Starter Kit, including an amplifier (Ganglion Board), electrode cables, and dry and solid-gel electrodes that snap onto the ends of those cables.
  2. A laptop to put the Bluetooth dongle in and to run some code.
  3. Python + VSCode to write and test the code.

Overall, this project (like many other similar projects) poses a few major challenges to solve:

  1. Get to know the recording device.
  2. Get real-time access to the data.
  3. Process the data into an output signal.

You might have noticed that all these steps can be categorized as engineering problems; only a part of the first and third challenge involves a neuroscientific component. Especially in these smaller and simpler projects, the engineering part is much more important than the neuroscience part.

In any case, let us get to know our hardware. The setup was quite straightforward. I followed the instructions (here & here) to assemble the device and connected it to the computer. The software is standalone, so there was no need to run an installer (I like that). I started a data stream via the GUI, data came streaming in, and I was greeted with a nice FFT plot.

This plot shows the power, or strength, of the signal in many different frequency bands. The brain constantly emits brainwaves at many different frequencies, and this plot shows the strength (vertical axis) for each frequency (horizontal axis). The interesting part of these waves is that they are related to specific processes in the brain. For example, in the visual cortex (located at the back of your head), the power of the frequencies between 8 and 12 cycles per second, or Hz (also called “alpha” waves), changes based on whether you have your eyes open or closed. Or: “beta” wave power (between 12 and 30 Hz) decreases when you prepare for a movement. Admittedly, as always, it is not as clearly defined as described here. There are many different types of waves that are related to many different processes, and they also overlap a lot! But they can still be used, and now you have some basic intuition for the signal that we will be using.

Now, I could try to devise some algorithm that detects changes in the strength of alpha waves. Not only are they involved in the vision example described above, they are also related to attention, arousal, and mindfulness (and many more similar concepts). And, just as important, they are relatively easy to measure. But instead of engineering something suboptimal myself (including creating a multi-stage experimental paradigm), I can make use of OpenBCI’s built-in algorithm that detects either relaxation or concentration. It provides a value between 0 and 1, where 1 is fully relaxed and 0 is the opposite. (I was also simply curious how well this algorithm would work out of the box.) Now, I do not know exactly what the relaxation/concentration algorithm uses, but a quick look at the docs confirms that it is based on alpha, theta, and delta waves. These three bands together cover all the lower frequencies, from 0 to 12 Hz.
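To make that intuition concrete, here is a minimal sketch of how the power in one band (alpha, in this case) can be computed from a chunk of raw EEG using Welch’s method. The 200 Hz sampling rate matches the Ganglion; the random array is only a stand-in for real data.

```python
# Minimal band-power sketch: fake EEG in, alpha power out.
import numpy as np
from scipy.signal import welch

fs = 200  # Ganglion sampling rate in samples per second
eeg = np.random.default_rng(0).standard_normal(4 * fs)  # stand-in for 4 s of one-channel EEG

freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # power spectral density per frequency bin
alpha = (freqs >= 8) & (freqs <= 12)        # mask for the 8-12 Hz "alpha" band
alpha_power = psd[alpha].sum() * (freqs[1] - freqs[0])  # integrate power over the band
print(f"alpha power: {alpha_power:.4f}")
```

The same masking trick gives theta (4 to 8 Hz) or delta (0.5 to 4 Hz) power, which is all the raw material a relaxation metric needs.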

So, data is coming in and our signal is determined. Now we need to access the data. For this, we will use Lab Streaming Layer, or LSL for short. It is an easy-to-use software package that can send and synchronize data streams on your computer or even across your network. And LSL is integrated into a lot of other software, including the OpenBCI GUI. We can thus send data via the GUI (see the step-by-step tutorial for instructions). Now we need to solve the other side: receiving the data and doing something with it. For this, I wrote a simple Python script that performs four tasks: 1) get data from the LSL stream, 2) process that data into a control signal (currently simply the mean of all the data retrieved in the loop), 3) evaluate whether that signal should trigger a beep, and finally 4) play the beep.
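To give an idea of what that looks like, here is a minimal sketch of such a loop using pylsl. The stream type, the threshold, and the terminal-bell beep are all placeholder assumptions; the actual script lives in the repository linked above.

```python
# A sketch of the four-step loop: LSL in, beep out.
import numpy as np
from pylsl import StreamInlet, resolve_byprop  # pip install pylsl

THRESHOLD = 0.5  # hypothetical cutoff for "relaxed enough"; tune to your own signal

# 1) Find the LSL stream and open an inlet. The type is a placeholder:
#    match it to what your OpenBCI GUI networking widget advertises.
streams = resolve_byprop("type", "EEG", timeout=10.0)
if not streams:
    raise RuntimeError("No LSL stream found; is the GUI streaming?")
inlet = StreamInlet(streams[0])

while True:
    chunk, _timestamps = inlet.pull_chunk(timeout=1.0)
    if not chunk:
        continue
    # 2) Process the chunk into a control signal: the mean of all retrieved data
    control = float(np.mean(chunk))
    # 3) Evaluate whether the signal should trigger a beep
    if control > THRESHOLD:
        # 4) Play a beep (terminal bell as a stand-in for a proper sound library)
        print("\a", end="", flush=True)
```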

With all the components in place, it is time to test the system. First, I need to check that everything works in a technical sense. So, without wearing the headset, I turn it on, connect everything, and see if it works. Looking at my notes, I needed to fix 9 errors in the small Python script alone. Luckily, they were quick fixes, so I could rapidly progress to the actual testing phase. I strap the headset on my head, fiddle with it a bit to get the best signal, and then try to relax. Take deep breaths, try closing my eyes, try keeping my eyes open. Try not to think too much and just accept any incoming thoughts. After a while, I hear a beep. First a few times, then for longer stretches. Then, I try the opposite of relaxing. Sit up straighter, actively look at the screen. Type some stuff, without moving too much, to avoid noise from the movements. It takes a few moments again, but the beeps go away.

What I noticed is that it took a minute before my control over the signal became apparent. Since the algorithm uses predefined constants, it does not ‘learn’ over time. One possible cause is that I average all the data per chunk that I receive, so it might take a few moments for the value to stay high long enough for the beep to play. Since the code is simple, it should be very quick and not introduce lag between retrieving the data and processing it, but on a slow machine this might be a problem. The algorithm in the focus widget could also be averaging the signal, which is not uncommon because it decreases noise. Lastly, it could be a learning effect: after using the system for a while, you might (unconsciously) start to understand better how to control the signal. I tried the system again a few hours later, and again a few days later, and had the same experience. First some messing around, then it improved over time.
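If the per-chunk averaging is indeed part of the delay, one way to probe the noise-versus-lag trade-off is to smooth the control signal explicitly, for example with an exponential moving average. This is a sketch of an alternative, not what the script currently does, and the smoothing factor is a value to tune (not to be confused with the alpha band):

```python
def ema(previous: float, new: float, smoothing: float = 0.2) -> float:
    """Exponential moving average: lower `smoothing` means steadier but laggier."""
    return smoothing * new + (1 - smoothing) * previous

# Inside the main loop, the raw chunk mean could become:
#   control = ema(control, float(np.mean(chunk)))
```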

So, time to reflect. I remembered the process as straightforward and without any hiccups, but when re-reading my notes, it was not such smooth sailing after all. After reading the notes and letting them sit for a few days, this is what I took away from rebuilding my first BCI.

The progress of neurotech

I was pleasantly surprised by how much the neurotech field has improved in the last few years. The fact that I can get this to work without much manual programming is such an improvement. It allows for much easier development of home-made applications and lowers the bar to entry. In the scientific field, I see the same rapid development up close. The rate of breakthrough papers is increasing every year, medical companies are getting FDA approval for commercial applications in humans, and the growing investments in the neurotech field are mind-blowing. It seems like a great time to be in, or to get into, the neurotech field.

The importance of multidisciplinarity in neurotech

While solving all the smaller problems in this project, it became apparent to me how varied the required skill set is, even for such a small project. For example, I needed basic programming skills; neuroscience understanding to define a target signal and determine electrode placement; experimental knowledge to record data and improve signal quality; and neuro-engineering knowledge to understand the type of data, access it, and handle it. Now, I already knew this, because I use that knowledge all the time, but since I could now compare with my first attempt, I became more conscious of it. Of course, in a project like this, it is easy to learn these skills on the go. But in a broader sense, it underlines the multidisciplinarity the field requires. If you want to grow something into a larger project, it is essential to have people from many different fields contribute to it.

Experience gives direction

On a more personal level, I learned that six years of experience mostly benefited me by knowing what problems I needed to solve, leading me to make better choices early on. Six years ago, I would just go with whatever popped into my mind and see whether it worked, while now I had a structured approach to all the subproblems. To me, it illustrates the importance of those early projects, which taught me a thing or two about each problem. The fact that I knew I could use LSL, and how it roughly worked, comes from having struggled with it myself. In my first project, I tried to write my own connection between the device and my computer. I failed miserably trying to figure out the data format and to write something that continuously processed the data without lag. But it did teach me a lot about the whole data acquisition part. However, the fact that I immediately had ideas about some solutions also hindered me in a way:

Do not fall in love with your own solution

Having solutions pop into your head can be great, but it can make you lose sight of other options. When I read back my notes, I realized I had jumped at the first solution that came to mind, neglecting other, potentially better options. Best case, doing so saved me some time in this context, but I sense that on a broader scale this is important to be aware of. It reminded me of a saying in science: “Do not fall in love with your own solution”, which describes how you will overvalue your own approach and undervalue other, equally valid approaches, simply because you have been working on your own solution for a long time. I feel like this was a moment where I stopped being open to other solutions too soon. I might need to write this one down. On a side note, another variation of this saying, maybe even better, is: “Fall in love with the problem, not with the solution.”

In the end, I hope you take a few things away from my project. First, I hope the project can provide a starting point for you to develop something yourself; feel free to use it in any way you want. Second, I hope to have shown the value of messing around with early projects, and that mistakes are made at every level. In any case, for me it was a nice reflection on my own progress over the years.

I want to end with a call to action. If you are interested in neurotech, get some hands-on experience with a simple project! It will introduce you to many different engineering skills and might help you get into the neurotech field, for example by landing an internship in a scientific lab or company. We are going to need many smart people in the near future!

No inspiration? Here are two challenges:

  1. Can you change the beep into something else? For example, switch to the next or previous channel on your TV, or the next page on your e-reader, by blinking left or right.
  2. Can you re-engineer the focus function and compare it with the built-in algorithm? (See the sketch below for one possible starting point.)
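For the second challenge, a possible starting point is a hand-rolled score built from band powers. The bands below mirror the delta, theta, and alpha mentioned in the docs, but the formula itself is my own guess, not OpenBCI’s actual algorithm:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Integrate the power spectral density between lo and hi Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

def relaxation_score(eeg: np.ndarray, fs: float = 200.0) -> float:
    """Crude 0-to-1 score: the share of total power sitting in the low bands."""
    low = band_power(eeg, fs, 0.5, 12.0)    # delta + theta + alpha
    total = band_power(eeg, fs, 0.5, 45.0)  # everything below line noise
    return low / total
```

Compare its output against the GUI’s relaxation value while you relax and concentrate, and see which one tracks your state better.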
