
OpenBCI and Push The World

This post is written by AJ Keller of Push The World.

Over the past 18 months, I have been working to implement a futuristic neural sensing system called Thinker. My journey started shortly after I left The Boeing Company in August of 2015, and today the team I formed around Thinker is working to ship to its first few users.

The first part of the journey began in Charleston, South Carolina, where I was working as a robotics engineer for Boeing. In my spare time, I was writing iPhone apps and running Push The World. On what turned out to be a pivotal call with a close advisor, I started to envision myself building a system that would allow for seamless human-machine interaction. He advised me to follow my heart over my wallet. Within 24 hours, I had begun organizing a deal to sell my app and had let go of the team members I brought on to scale it.

Shortly after focusing on this vision, I stumbled upon a community called NeuroTechX that provided a sounding board of neurotech experts from around the world. I spent the next two months reading books, watching YouTube videos, and taking notes. I found out about OpenBCI through word of mouth on NeuroTechX and discovered they were based in New York City, a short two-hour commute from my parents’ house in Connecticut. On a Wednesday in November, I messaged Conor, the CEO of OpenBCI, and said something along the lines of “I’m an ex-Boeing robotics engineer, let me come help you for free, I’ll write code, put parts in small bags, anything…”. He replied almost immediately and told me to come into the lab Friday. Whoops… I had forgotten to tell him I lived in South Carolina, so on Thursday I packed my life up, rented a U-Haul, and drove the 13 hours to Connecticut. I showed up sometime early Friday morning, unloaded the truck, and caught a train into NYC. That was the first day I saw my own EEG.

I quickly linked up with a pair of neuroscientists from NYU when I overheard their aspirations to make an app for neurotech education using web technologies, specifically this new framework called Node.js. However, they lacked the low-level expertise I had acquired from my Computer Engineering degree, and within two weeks of that first meeting, we had brainwave data coming in through a newly formed project called the OpenBCI Node.js SDK. As of this writing, I’ve written over 35,000 lines of it and am continuously developing it.
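To give a flavor of what the SDK does, here is a minimal sketch of streaming samples from a board. It follows the pattern the SDK’s README used around that time; the method and event names (autoFindOpenBCIBoard, connect, streamStart, the 'sample' event) may have changed since, so treat it as illustrative rather than authoritative.

```js
// Minimal streaming sketch with the OpenBCI Node.js SDK (API of that era;
// check the repo's README for the current interface).
const OpenBCIBoard = require('openbci').OpenBCIBoard;
const ourBoard = new OpenBCIBoard();

ourBoard.autoFindOpenBCIBoard()
  .then(portName => ourBoard.connect(portName))
  .then(() => {
    ourBoard.on('ready', () => {
      ourBoard.streamStart(); // tell the board to start sending samples
    });
    // Each sample carries one scaled voltage reading per channel.
    ourBoard.on('sample', sample => {
      console.log(sample.channelData); // array of channel voltages, in volts
    });
  })
  .catch(err => console.error(err));
```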

From writing iPhone apps, I knew that web technologies hold the promise of being cross-platform (Mac, Windows, Linux, iPhone, Android) from a single, mostly shared code base. That made them a natural road toward my Thought Recognition ambitions, because I knew I would need a cross-platform app that worked with the OpenBCI. I selected OpenBCI because I had no idea where the electrodes would need to be placed in order to make this system start to work.

I quickly discovered a problem when showing stimuli to users with the OpenBCI system: there was no way, wirelessly and without additional hardware, to align the time a stimulus was shown with the brainwaves that were present. This was a fundamental firmware flaw. The neuroscientists and I sat down with Conor, and then Joel, and outlined the problem. A refactor of the firmware emerged as the clear solution, and we took the opportunity to address some bugs in the V1 firmware as well. For the next couple of months, I worked tirelessly to rewrite the OpenBCI_Radios code base, add a time-syncing strategy to the Node.js repo, and drastically simplify and stabilize the OpenBCI_32bit_Library. All OpenBCI Cyton boards now ship with this v2.x.x firmware.
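The shape of that time-syncing strategy, as the Node.js side exposes it, is roughly the following: sync the board’s clock to the host’s over the radio link, then tag stimuli with host timestamps so they can be compared against the timestamped samples. The names here (syncClocksFull, the valid flag, sample.timeStamp) are my best recollection of the v2-era SDK and should be treated as assumptions; recordSample and showStimulus are hypothetical helpers standing in for your experiment code.

```js
// Rough sketch of the v2.x.x time-sync flow. Names are assumptions from the
// SDK of that era; verify against the repo before relying on them.
ourBoard.once('ready', () => {
  ourBoard.syncClocksFull()            // round-trip clock sync over the radio
    .then(syncObj => {
      if (!syncObj.valid) throw new Error('clock sync failed');
      ourBoard.streamStart();
      ourBoard.on('sample', sample => {
        // sample.timeStamp is now in host-clock milliseconds, so it can be
        // compared directly against the time a stimulus was drawn.
        recordSample(sample.timeStamp, sample.channelData); // hypothetical helper
      });
      showStimulus(Date.now());        // hypothetical helper: note when drawn
    })
    .catch(err => console.error(err));
});
```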

Thinker has continually improved over these past 18 months and is in alpha testing! Users securely log in, add their own custom OpenBCI headset (whatever you have works!), and, much like enrolling a fingerprint on a phone, train neural commands. These neural commands range from imagining the movement of an arm to doing mental math. With a trained system, you can watch it react to your neural commands and see it get continuously better over time. By the beta release, users will be able to navigate computers by “tapping” neural push-button commands. Features requested by members of our Alpha Test Program (sign up here or send an email to [email protected]) will be implemented as fast as possible!

Every time you log into Thinker, your previous neural training data is used to rapidly adapt to your current brain state. On top of that, when a friend wants to use Thinker with your headset, Thinker will automatically use all previous neural training data to get them going quickly. Thinker gets better every time someone uses your headset!

Thinker will be free for Alpha Test Program members.

An LSL plug-in is coming based on requests (we have two so far!). Route requests through [email protected].

Sign Up Here for our Alpha Test Program: http://www.pushtheworldllc.com/thinker

The Thinker Team (left to right: Rahul, AJ, and Kun; not pictured: Daniela) at OpenBCI HQ

Much love,

AJ Keller
Follow me on Twitter @pushtheworld_aj
