
Affective Computing and Mixed Reality: An interview with Guillermo Bernal

Guillermo Bernal

Backstory

One of the advantages of being an open source company is that it enables unique opportunities for collaboration that would otherwise not be possible. Over the past two years, OpenBCI has been collaborating with MIT Media Lab PhD candidate Guillermo Bernal on exploring the intersection of human-computer interfaces and mixed reality (XR). Sign up for the OpenBCI newsletter for upcoming news on the product of our collaboration with Guillermo!

Within the Media Lab, Guillermo is part of the Fluid Interfaces Group, which aims to create systems and interfaces for cognitive enhancement by building upon insights from psychology and neuroscience. Guillermo’s work includes projects like Emotional Beasts (winner of the 2019 Schnitzer Prize) and PhysioHMD, both of which involve integrating biosensors into head-mounted displays (HMDs) as part of his overall aim to create new metrics for objectively measuring user experiences.

Emotional Beasts explored the creation of emotionally responsive digital avatars

In pursuit of these new objective metrics, Guillermo often identifies the need for an entirely new device or method for collecting the necessary data. New sensors and techniques developed for Emotional Beasts (2016) enabled Guillermo, Abhinandan Jain, and Tao Yang to detect aspects of the emotional state of VR users and use those measurements to augment their digital avatars. These techniques were incorporated into and further developed in the PhysioHMD (2017) project, where they were used to detect a VR user’s facial expressions while also giving researchers and developers the ability to “aggregate and interpret signals in real-time and use them to develop novel, personalized interactions and evaluate virtual experiences.”

PhysioHMD’s groundbreaking facepad included sensors for multiple types of biometric data

I sat down with Guillermo to discuss his work, motivations, and how his partnership with OpenBCI came about. The following is a lightly edited transcript of our conversation.

Interview

JA: So, pretend we are at a cocktail party (remember those?). How would you introduce the focus of your studies thus far?

GB: Well, I’ve touched on a very broad range of topics, so it’s always a little challenging to summarize. If I started looking for common threads, I would say that identifying objective metrics for quantifying a user’s experience is a big one. As an HCI researcher, you do a lot of user studies. One of the focuses of my work is removing subjectivity and bias in analyzing user experiences. How can I get closer to the “true experience” that the user is having?

Once you start diving into that you start looking at things like emotions, cognitive processes, facial expressions, and physical manifestations of subjective states. 

My goal is to help create technology that can adapt to a user’s state, and intervene or disappear as needed. I think the ideal cognitive support tool is one that gets you to a point where you no longer need it, not something you become highly dependent on. I want the metrics and devices that emerge from my studies to create new tools that aid the subjective process of creativity. 

JA: What motivated you to pursue this kind of thing during your time at the Media Lab?

GB: My master’s work—at MIT but before the Media Lab—marked a change of course for me academically. As an undergraduate I studied architecture at Pratt Institute, which was also what I had studied during high school in El Salvador. Once I came to MIT, I focused on electrical and computer engineering.

JA: Why the change from architecture?

GB: The part that I was always more interested in with architecture was the impact that you can have as a designer on the eventual inhabitants of the building. After being exposed to the reality of the field, I learned that, for the most part, what developers care about is the cost per sq. ft. The occupants are not at the top of the list, and the process is very slow. 

After my undergraduate studies, I was working at RPI on projects like Manta, where I was exploring robotic systems that change and adapt in response to a user’s gestures, controllers, and even brain signals. There, I was exposed to the immediate impact that technology can have on people’s lives. If you put the user first, positive things can happen. Plus, the development cycles are faster. At that point, it was clear to me that there were other areas where I could invest my energy and efforts, so I decided to focus more on tech for my master’s studies.

Manta (2012) is a surface that changes its form – and therefore acoustic character – in response to multimodal input including sound, stereoscopic vision, multi-touch, and brainwaves. Photo: Michael Villardi

JA: Ok, so you transitioned from architecture to a more tech-focused master’s program at MIT. How’d that lead to the Media Lab?

GB: While doing my master’s, I worked on a study about the pottery creation process of masters vs. amateurs. We used a sleeve with EMG and a few other sensors to analyze the muscle movements of experienced potters. This isn’t unique to pottery, but watching an instructor do a motion often doesn’t convey the subtle pressure, or soft touch, being used. We used our quantification of the “expert” movements and some LEDs to give immediate, ongoing feedback to a class of introductory users trying to perform the same motions. We found that they learned faster with that extra dimension of information about how the experts handled the clay. That study was definitely the precursor to the types of projects I’ve ended up doing at the Media Lab. How can technology help us learn faster, or be more creative?

I also should mention the book “The Creative Habit” by Twyla Tharp as another inspiration. Tharp is a dancer and choreographer whose book discussed the “rituals” of top performers. I loved it, and it always pops into my head when I start thinking about how to “get in the zone.” How do we enter that state? How can we quantify it?

EMG sleeve from Guillermo’s 2015 master’s work

JA: A lot of your recent work involves VR in some way. Was VR something you intentionally wanted to focus on from the start, or more of an accidental discovery for you?

GB: It was definitely more of a discovery. I remember watching early Oculus 360 videos and being inspired by the intense emotions they were able to elicit from users. I’m not sure who coined the term, but I remember VR being described as an “empathy machine” and that resonated with me. I wanted to be able to break down and prove how it worked, so I started incorporating VR into my HCI work. I quickly discovered how useful VR can be for this type of research. You can limit the scope of a user’s perceptions, and control more variables in the environment.  I think we’re still very far away from creating what I would call a totally immersive experience in VR, but I see the path to it, and I know that by getting involved early with VR, I could have more of a say in how that tool is developed. 

JA: That’s interesting. One thing I’ve always found cool about VR is how it’s able to create a “fake” scenario like stepping off a building, which our brain reacts to as if it was real.

GB: Yeah that’s where the art and design side of VR plays a big part in its usefulness as a research tool. If you have a beautiful and compelling experience, it will get you there. It will do a better job at provoking real responses and real emotions. If it’s clunky and ugly, your brain won’t buy it. 

Those Oculus videos I mentioned could be shown on a regular monitor instead of VR, but I’m willing to bet they wouldn’t do as good a job provoking those emotions as they do in VR, because the 360 view makes it that much more immersive. 

So once I started using it, I found myself wanting to make more compelling experiences, but then I’d find I didn’t have the right tools or the right data. So then I’d go and make the tools, and then many times creating those tools would become the main project and the actual experience or user study would come later. 

JA: Is that what happened with the PhysioHMD project? What motivated you to create that new type of device?

GB: That’s about what happened. PhysioHMD definitely emerged from my earlier work in VR, like Emotional Beasts. 

I found myself wanting more data beyond just interviewing users about their emotional reactions to an experience. Most of the time when you bring people into a nice shiny lab, they’ll tell you what they think you want to hear…and I wanted to figure out how to remove that bias from my work. I also wanted to enable communication between individuals in VR to be more like a video call. Skype was a favorite tool in my family because it let me communicate with my mother and my family back home. Adding the video element, the ability to communicate via facial expressions, makes video calls so much more engaging than text or audio only. Despite being a visual medium, I felt that VR was missing that type of emotional input because of how static the digital avatars are. 

One area where I found lots of research was how facial expressions could be used as classifiers for emotion. But VR headsets don’t allow for easy camera-based facial expression capture, so I started to explore how I could build on the type of facepad sensors used in Emotional Beasts and turn them into something that could detect facial expressions and give me more metrics to work with as a researcher beyond just user interviews. That eventually became the facepad interface you see in PhysioHMD.

JA: Any other inspirations that influenced your work?

GB: Definitely! I’m always looking at other companies, creators, and artists for ideas. Around 2016 this company AltspaceVR was trying to make VR a more social space. I think they’re still around, and I’m not sure how it’s changed since then, but at the time the avatars were very static and didn’t leave much room for users to express their emotions.

AltspaceVR avatars circa 2016 (source)

I wanted to improve that, and figure out how to enable users to be more expressive in VR, which eventually led to the Emotional Beasts project. 

Another major influence for Emotional Beasts was the costumes and sculptures of Nick Cave’s “Soundsuits” exhibition. Each outfit was really its own character, and I thought the way he was able to bring a personality to these roughly humanoid forms would translate well to VR. I think every artist’s work deals with capturing and communicating emotion, which is also a big part of my work.

Nick Cave’s “Soundsuits” served as inspiration for Emotional Beasts (source)

JA: How did you settle on the capabilities that PhysioHMD needed to have? Why did you pick the specific sensors you incorporated into the final design?

GB: First I looked at the real estate that an HMD gave me to work with, and what physiological sensors could be incorporated. The good thing is that the headset gives you lots of access and is kept in close contact with the user’s face. There’s a ton of interesting data you can get from the eyes, skin, and face muscles.

Being at MIT, I’m lucky enough to have people like Rosalind Picard two floors down from our lab. I saw how she was using electrodermal activity (EDA) to do all kinds of interesting things related to affective computing and knew that I should incorporate it into my next device. 

JA: What does affective computing mean?

GB: I’d paraphrase it as the intersection of computer science and the study of human emotions. How do we bring a user’s affect, or emotions, into human-computer interactions? 

JA: Got it, so you saw what Picard’s group was doing downstairs, and then what?

GB: I knew I would need to combine that with other sensors so I added a PPG (photoplethysmogram) to get the heart rate and used that to help quantify stress and excitement. The EMG sensors around the eyes were also needed to power my classification of how different combinations of muscle movements corresponded with different facial expressions. 

As part of an MIT program I got to travel to Shenzhen, and that’s where I learned about what’s possible with flexible electronics. That played a big part in enabling me to fit the PCB into the right shape for the facepad. My first prototype was a flex PCB sticker with EMG, EDA, and HRV (heart rate variability) sensing. Movement artifacts were an obstacle from the beginning, and the connection between the electrode and the skin is as important as, or even more important than, the electronics themselves. The mechanical design of the facepad and the use of flexible electronics helped solve those challenges.
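
As a rough illustration of the kind of signal fusion Guillermo describes (not the actual PhysioHMD pipeline), the sketch below classifies facial expressions from per-channel EMG amplitude features and estimates heart rate from PPG peak spacing. It runs on synthetic data; the sampling rate, window length, channel count, and expression labels are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of expression classification from windowed EMG,
# using synthetic data in place of a real facepad recording. Sampling rate,
# window length, channel count, and labels are assumptions, not PhysioHMD values.
import numpy as np
from scipy.signal import find_peaks
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 250          # assumed sampling rate (Hz)
WINDOW = FS // 2  # 0.5 s analysis windows
N_EMG = 4         # assumed number of EMG channels around the eyes

def emg_features(window: np.ndarray) -> np.ndarray:
    """Per-channel RMS amplitude of one EMG window (shape: channels x samples)."""
    return np.sqrt((window ** 2).mean(axis=1))

def heart_rate_bpm(ppg: np.ndarray, fs: int = FS) -> float:
    """Rough heart rate estimate from the average spacing of PPG peaks."""
    peaks, _ = find_peaks(ppg, distance=fs // 2)  # enforce >= 0.5 s between beats
    if len(peaks) < 2:
        return float("nan")
    return 60.0 * fs / np.diff(peaks).mean()

# --- Synthetic stand-in data: 200 windows, 3 expression classes ---
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)        # e.g. neutral / smile / frown
windows = rng.normal(size=(200, N_EMG, WINDOW))
windows += labels[:, None, None] * 0.3       # inject class-dependent amplitude

X = np.array([emg_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("expression accuracy:", clf.score(X_test, y_test))

# PPG side of the fusion: a synthetic ~72 bpm pulse wave
t = np.arange(0, 10, 1 / FS)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=t.size)
print("estimated heart rate:", round(heart_rate_bpm(ppg), 1), "bpm")
```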

JA: What do you want to see happen with PhysioHMD?

GB: I would like to see it become a robust, accurate, and accessible tool that researchers can build upon, and a common baseline for studies that combine VR and neurophysiology. I hope it can save time for other researchers who are looking to collect the types of data that inspired me to make the platform in the first place.

JA: Changing gears a bit, when were you first introduced to OpenBCI hardware?

GB: I was introduced to it at the Fluid Interfaces Group during my first year. I had seen it used in other MIT projects like AlterEgo and Serosa. I didn’t really get hands-on with OpenBCI until I started to work with Conor.

JA: How did you and Conor [OpenBCI’s CEO] meet?

GB: Conor stopped by the lab to visit Pattie [Maes] and also met with Scott Greenwald, another Fluid Interfaces alum and a mutual friend of mine. Scott was doing some interesting work at the time using VR for learning and teaching and Conor was apparently talking to him about his idea of incorporating VR into what OpenBCI was doing. I think it was Scott who first connected us. 

I showed Conor PhysioHMD and the other things I was working on and we found there was definitely a common interest, but it wasn’t until 2018 when we were both in San Francisco that we really started spending time together.

Conor was working at Meta then, and I was doing an internship at Samsung. We actually started hanging out by watching World Cup matches that summer. He tried to get me into Magic: The Gathering, but that didn’t happen.

JA: Haha yea, that sounds about right. How did it go from watching soccer, to collaborating with OpenBCI?

GB: Well, back when we met at the Media Lab he had floated the idea of working together to get a system like PhysioHMD into the hands of more people. We continued to discuss that while in SF, and it felt more and more like working with OpenBCI would be a way to expand the reach of what I was working on. Fortunately, MIT has some flexibility about collaborations between students and companies when the results will be open sourced, which was a big factor in making it happen.

JA: Why OpenBCI? I’m sure other companies would be interested in this kind of tech.

GB: My own learning relied heavily on open source tools, and I wanted to give back to open source. I actually knew Sean Montgomery by his GitHub icon before I ever met him via OpenBCI. After spending time with Conor, I decided that OpenBCI’s ethos aligned with my own goal to create tools for other affective computing researchers. It was all tied into a desire to get my work out of MIT and into the real world. I also started seeing other companies do PhysioHMD-type things, which made me want to get my version out and open sourced sooner rather than later.

JA: And the rest is history I guess. What comes next for Dr. Bernal once you’re done with the PhD?

GB: Oh man, right now I’m so focused on what has to happen before I can finish that it’s hard to see beyond it. I do enjoy the research side of things, in addition to electrical engineering. I’d like to continue with that in some way, but it doesn’t have to be in an academic context.

JA: Any predictions for HMDs and the future of mixed reality?

GB: I do think that HMDs will replace cell phones eventually. Not in their current clunky form though. Once the hardware gets down to a pair of glasses that are comfortable enough for a normal person to wear, that’s going to untether us from our current reliance on rectangular screens. I think the work we are doing right now will help us make sure that the merger of the real and the digital worlds is done in a way that is healthy for humans as users.

JA: Awesome — thanks Guillermo!

Conclusion

It’s been an incredible experience to collaborate with talented engineers like Guillermo. We’ll be announcing more about what we’ve been building together in the coming months. Sign up for the OpenBCI newsletter and be among the first to learn more!

For more information on Guillermo and his work at the MIT Media Lab, visit: https://www.media.mit.edu/people/gbernal/overview/ 
