
Huy’s OpenBCI Internship Experience

Hey everyone!

This summer I had the awesome opportunity to be a remote, part-time intern at OpenBCI. Although being on the opposite coast meant I couldn't fully experience life at the OpenBCI HQ, I met some amazing people and got to work on a challenging problem.

Prior to the internship, I had been interested in BCIs and virtual reality for a long time, but had never thought to combine the two interests. However, after hearing that Galea was being developed, I knew that I wanted to work at this intersection. If you didn't already know, Galea is a hardware and software platform that OpenBCI is developing, which merges next-generation biometrics with mixed reality.

In coming up with a project idea for my internship, my mentors suggested that I work on an indirect, rather than direct, form of human-computer interaction. While our minds naturally jump to direct human-computer interactions, like steering a drone by focusing, I think indirect human-computer interactions are just as powerful, if more subtle. One form of indirect interaction that I am particularly interested in is associated with a class of proposed devices called Affective Brain-Computer Interfaces (ABCIs), which recognize the emotions we are currently experiencing and have some capacity to elicit an emotional change. This could take the form of a VR environment that recognizes when we are anxious and adjusts itself so that we become calm. When reviewing the state of the art in the emotion recognition literature, I realized that there were still many challenges associated with building a working affective brain-computer interface:

Current Problems With the Emotion Recognition Task

(1) Current state-of-the-art emotion classification models achieve only moderately high accuracy on publicly available EEG-emotion datasets, and even then they can only distinguish between a very small number of emotional classes (sometimes 2, 4, or 6, depending on the methodology). That is not enough to capture the particular nuances of human emotion.
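To make that class-count limitation concrete, here is one common labeling scheme from the literature: continuous valence and arousal self-reports (rated 1 to 9 in datasets like DEAP) are thresholded at the midpoint, collapsing the richness of emotional experience into just four quadrant classes. This is a minimal sketch; the function name and quadrant labels are illustrative, not taken from any specific paper.

```python
# One common way continuous self-reports become a handful of classes:
# valence and arousal ratings (e.g., 1-9 as in DEAP) are split at the
# midpoint, yielding only four coarse "emotions". Labels are illustrative.
def quadrant_label(valence: float, arousal: float, midpoint: float = 5.0) -> str:
    """Map a (valence, arousal) rating pair to one of four quadrants."""
    high_v, high_a = valence > midpoint, arousal > midpoint
    return {
        (True, True): "excited/happy",   # high valence, high arousal
        (True, False): "calm/content",   # high valence, low arousal
        (False, True): "angry/afraid",   # low valence, high arousal
        (False, False): "sad/bored",     # low valence, low arousal
    }[(high_v, high_a)]

print(quadrant_label(7.5, 3.0))  # -> "calm/content"
```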

(2) Furthermore, emotional experience is highly subjective: each individual's reported emotions at any given time depend on their sociocultural context and immediate situation. Even more challenging, how these emotions correlate with measurable physiological responses varies greatly between individuals.

(3) Real-time emotion recognition requires processing a large amount of biometric data at every moment.
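To give a feel for what "a large amount of data" means in practice, here is a minimal sketch of the sliding-window buffering a real-time system has to do. The sampling rate, channel count, and window sizes below are placeholder assumptions, not Galea's actual specs.

```python
import numpy as np

# Minimal sliding-window buffer over a multichannel biosignal stream.
# All numbers are placeholder assumptions, not Galea's actual specs.
FS = 250          # samples per second per channel
N_CHANNELS = 16   # EEG plus other biosensor channels
WINDOW_S = 2.0    # seconds of context the classifier sees
WINDOW = int(FS * WINDOW_S)

buffer = np.zeros((N_CHANNELS, WINDOW))

def push_samples(chunk: np.ndarray) -> None:
    """Shift a new (channels, n) chunk into the rolling window."""
    global buffer
    n = chunk.shape[1]
    buffer = np.roll(buffer, -n, axis=1)
    buffer[:, -n:] = chunk

# If the model predicts every 250 ms, it must featurize and classify
# N_CHANNELS * WINDOW = 16 * 500 = 8000 samples four times per second.
push_samples(np.random.randn(N_CHANNELS, 63))  # ~250 ms of new data
```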

With these scientific and engineering problems in mind, my internship revolved around planning and constructing an end-to-end emotion recognition pipeline that could be applied to Galea in the future:

Gathering Experimental Data To Train Emotion Classification Models

State-of-the-art emotion recognition classifiers rely on deep learning models that learn to map labeled EEG signals to an individual's self-reported emotional response to a given stimulus, whether an emotional (or neutral) image, piece of music, or video. I constructed a virtual reality EEG experiment in which subjects view a stimulus designed to elicit a particular emotion while an electrode cap records their EEG data, and afterwards self-report their emotional response to that stimulus.
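To make that experimental loop concrete, here is a minimal sketch of a single trial using BrainFlow, the open-source acquisition library that supports OpenBCI hardware. It runs against BrainFlow's synthetic board; a real experiment would swap in the actual board ID and serial port, present the VR stimulus instead of sleeping, and collect a proper self-report rather than the placeholder label below.

```python
import time
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

# One trial: stream EEG, mark stimulus onset, grab the epoch, attach a
# self-reported label. Uses BrainFlow's synthetic board as a stand-in.
board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, BrainFlowInputParams())

board.prepare_session()
board.start_stream()

board.insert_marker(1)         # mark stimulus onset in the data stream
time.sleep(5)                  # placeholder for presenting the VR stimulus

data = board.get_board_data()  # (rows, samples) array of the whole trial
board.stop_stream()
board.release_session()

eeg_rows = BoardShim.get_eeg_channels(board_id)
epoch = data[eeg_rows, :]      # keep only the EEG channels

# Placeholder self-report; in the real experiment the subject rates the
# stimulus (e.g., valence/arousal) and the (epoch, label) pair is saved.
label = {"valence": 6.0, "arousal": 4.0}
print(epoch.shape, label)
```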

Developing Emotion Recognition Deep Learning Models

I researched the state-of-the-art models for emotion recognition and tried to recreate some of them. I was able to achieve similar accuracies by training on some of the public datasets that had been used in the papers describing those models.
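To give a flavor of what these models look like, below is a toy convolutional classifier in PyTorch, loosely in the spirit of compact EEG architectures like EEGNet. It is a sketch rather than a reproduction of any specific paper's model, and the channel count, window length, and number of classes are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Toy EEG emotion classifier, loosely EEGNet-style. All dimensions
# below are placeholder assumptions, not any paper's exact settings.
N_CHANNELS = 32    # EEG electrodes
N_SAMPLES = 256    # samples per window (e.g., 2 s at 128 Hz)
N_CLASSES = 4      # e.g., quadrants of the valence/arousal plane

class EmotionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution: learn frequency-like filters
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            # spatial convolution: mix information across electrodes
            nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1), bias=False),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(0.5),
        )
        self.classifier = nn.LazyLinear(N_CLASSES)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = EmotionCNN()
dummy = torch.randn(2, 1, N_CHANNELS, N_SAMPLES)
logits = model(dummy)   # (2, N_CLASSES), one score per emotion class
```

The two-stage convolution (temporal filters first, then a spatial filter across all electrodes) is the design choice these compact EEG architectures share; it keeps the parameter count small enough to train on the modest sizes of public EEG-emotion datasets.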

Continuing Work

There is a lot of work involved in creating an end-to-end real-time emotion classification system, and I only scratched the surface of what must be done to make it happen. After reading about and testing different models from the literature, I am currently working on developing novel protocols and architectures that might increase classification accuracy.
