Hack the Brain
On 20–22 March 2015, Hack the Brain, the first UK hackathon dedicated to projects that aim to expand, enhance and augment the mind and the senses, was held in London. On the first day, participants had the opportunity to pitch an idea for a project. The proposers of the 10 most-voted ideas could then form a group and had the next two days to develop a prototype.
As a PhD student working on Brain-Computer Interfaces (BCIs) at the University of Essex, I was very interested in this hackathon as a chance to apply the research we run every day in our lab to everyday life.
I pitched an idea to create a mobile BCI device to be used with a smartphone. In particular, the aim was to operate the smartphone without hands or voice: just with the brain! Our group, “Wink It”, was formed by four people: Davide Cultrera (an undergraduate student from Cambridge), Matej Káninský (a user experience designer based in London), Ana Matran-Fernandez (another PhD student from the BCI laboratory in Essex) and myself.
In the hackathon, we had the opportunity to work with many different devices and technologies that were made available for the projects. However, since Conor Russomanno had brought several OpenBCI kits to Hack the Brain, we decided to use OpenBCI as the core board of our system.
Our original plan was to record the brain signals and transform them into commands for the smartphone. Since current smartphones have quite powerful CPUs, it would be possible to perform all the processing (averaging, filtering, and so on) on the phone itself and make the BCI truly portable! So, our first aim was to get rid of the OpenBCI USB dongle and stream the EEG signals directly to the smartphone via Bluetooth.
OpenBCI vs Android
We spent an entire day trying to connect the OpenBCI board to Android via Bluetooth 4.0. Conor suggested we look into the dongle firmware to find out (a) how the connection with the board is established and (b) which operations are performed on the dongle (mainly parsing). We tried using the AndroidBLEStack library to establish a connection from Android (4.3+) to the Arduino-based board via its RFduino. We also tried the OpenBCI Android app.
However, we could not make it work. Even though the device was correctly discovered, it was impossible to establish a connection and receive data. Hence, we needed to change our approach. Since we had devoted most of the available time to getting the Bluetooth connection to work, we had to simplify the BCI part of the system, so we started looking at other types of signals that could be reliably recorded with OpenBCI.
From artifacts to commands
Instead of using EEG signals, which require training and several electrodes on the scalp, we used eye winks. These signals are usually treated as noise in BCI research and applications. However, they are large in amplitude and can easily be detected with a simple peak detector, without having to train a classifier (or a user) to recognize them. So, why not use them to operate the smartphone? Winking with the left eye could trigger one command, while winking with the right eye could trigger a different one (unless, like me, you can't wink with one eye, although I did eventually learn after quite a lot of self-training!).
We built a system that records data from two electrodes placed above the eyes of the user and sends them to the computer via the USB dongle (remember, this was a prototype built in 8 hours!). A Python program then processes the EOG signals by applying very basic filtering and a threshold detector. This allows us to detect eye winks (large potentials that appear in only one of the channels) and ignore eye blinks (large potentials in both channels). The Python program then sends the corresponding command to the smartphone via a TCP socket created over the WiFi connection (yes, the PC and the smartphone need to be connected to the same WiFi network).
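To give a feel for the pipeline, here is a minimal sketch of the detection logic in Python. The sample rate, threshold, filter window and phone address below are all hypothetical; the post does not give the actual values used in the prototype.

```python
import socket

import numpy as np

# Hypothetical parameters: the actual sample rate, threshold and
# phone address used in the prototype are not stated in this post.
SAMPLE_RATE = 250            # Hz, the OpenBCI board's default rate
THRESHOLD = 150.0            # peak amplitude threshold (arbitrary units)
PHONE_ADDR = ("192.168.0.10", 5000)

def remove_drift(data, window=50):
    """Very basic filtering: subtract a moving-average baseline so slow
    drift in the EOG signal does not trip the threshold detector."""
    kernel = np.ones(window) / window
    return data - np.convolve(data, kernel, mode="same")

def classify_window(left_ch, right_ch, threshold=THRESHOLD):
    """Classify one window of filtered samples.

    A wink shows up as a large peak on a single channel; a blink
    produces large peaks on both channels and is ignored.
    Returns "LEFT", "RIGHT", or None.
    """
    left_peak = np.max(np.abs(left_ch)) > threshold
    right_peak = np.max(np.abs(right_ch)) > threshold
    if left_peak and right_peak:
        return None                      # blink on both channels: ignore
    if left_peak:
        return "LEFT"
    if right_peak:
        return "RIGHT"
    return None

def send_command(cmd, addr=PHONE_ADDR):
    """Forward a detected wink to the phone over a plain TCP socket."""
    with socket.create_connection(addr, timeout=2) as sock:
        sock.sendall(cmd.encode())
```

On each new window of samples from the dongle, the program would filter both channels, call `classify_window`, and forward any non-`None` result with `send_command`.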
On the smartphone side, we created an app called WinkIt that allows the user to map each eye wink (right or left) to a specific action on the smartphone. In our first prototype, developed during the hackathon, we supported the commands a user is most likely to send to the music player: play/pause, next song, previous song, volume up and volume down.
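The app's dispatch logic can be illustrated with a small sketch. The real app runs on Android; the mapping table, command names and player interface below are hypothetical stand-ins for illustration, not the app's actual code:

```python
# Hypothetical sketch of WinkIt's wink-to-action mapping. The user can
# reassign which wink triggers which action; these are example defaults.
ACTIONS = {
    "LEFT": "NEXT_SONG",
    "RIGHT": "PLAY_PAUSE",
}

def handle_command(cmd, player):
    """Dispatch a received wink command ("LEFT"/"RIGHT") to the player.

    Unknown commands are ignored. Returns the action that was taken,
    or None if nothing matched.
    """
    action = ACTIONS.get(cmd)
    if action == "PLAY_PAUSE":
        player.toggle()
    elif action == "NEXT_SONG":
        player.next_song()
    return action
```

Keeping the mapping in a single table is what makes the app configurable: adding a new action (volume up/down, or even swiping in Tinder) only means adding an entry and a handler.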
Since we are not really using EEG signals but muscular movements, there is no need to train the system: it is ready to go!
We presented our first prototype in the final part of Hack the Brain. There was a lot of interest from the audience, especially in the wide range of actions that become possible once you let the user control the smartphone with eye winks (someone suggested Tinder!) and wink combinations.
In the end, we won the hackathon, and the following week we presented the prototype at the You Have Been Upgraded festival at the London Science Museum. The public was very excited about it and we got a lot of suggestions on how to improve it.
After the hackathon, everyone went back to their own lives. However, since two of us (Ana and myself) are PhD students in the same research group at Essex, we are thinking of taking WinkIt further and making the system more reliable and… wearable!
But this will be another post… 😉