Need advice/pointers on Unity + BCI workflow

English is not my first language, so apologies if anything here is unclear.

I am currently tinkering with integrating OpenBCI into Unity as part of my academic research. I have some basic experience with both Unity and EEG signal processing (using SciPy and existing datasets), but this is my first time implementing a BCI in Unity (using BrainFlow), so I'm... a little lost.

(What I aim to do is control a Flappy Bird-like game using BCI, but I won't delve too much into that for now. Instead, I want to ask about the general workflow for BCI-Unity integration.)

I've successfully connected the OpenBCI to Unity using BrainFlow and gotten some data from it, so that's a start. However, it's the next steps that I'm still confused about. For now, here are a few questions I'd like to ask:
1. Should I also integrate the training session into the game (i.e. implement it in Unity too)? If so, does that mean I should also do the training data recording in Unity (including training the classification model), or should I write a separate console C# program for that and only use the trained model in Unity? (Especially since I'm not sure about ML implementation in Unity yet.)
2. Say that with the existing model, I want to retrieve the signal (in real time/online) and classify it into a relevant input. Since Unity calls Update() and FixedUpdate() on a periodic basis, how am I supposed to handle this? Should the signal processing take place in every Update()/FixedUpdate() call, or should I set a timer/countdown and only do the signal processing and classification when that countdown runs out?

I'd like to learn more, so any pointers are appreciated. Thank you!

Comments

  • wjcroft Mount Shasta, CA

    Mentioning Richard @retiutut, who has built Unity apps in the past.

  • wjcroft Mount Shasta, CA

    Mentioning @philippitts @retiutut.

  • philippitts

    A lot of this will depend on your particular application, but below are a couple of pointers that might help you get started:

    Getting Data in Unity
    You can get data into Unity by using the BrainFlow C# package or by streaming it from the OpenBCI GUI using the networking widget. Unless you need a standalone application, I recommend streaming from the OpenBCI GUI, since you get the added benefit of its tools for connecting to your device(s) and validating their signals. If you need an application that works without starting other programs, then you should use the BrainFlow package, and you will need to build a way for users to connect and validate their data in Unity. This second approach is more streamlined for your users, but it is also more work on your end as the developer.
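
    For illustration, here is a minimal sketch of the standalone BrainFlow approach as a MonoBehaviour. The class name and serial port are placeholders, and a Cyton board is assumed; adjust both for your hardware:

        using UnityEngine;
        using brainflow;

        public class CytonStream : MonoBehaviour
        {
            private BoardShim board_shim;
            private readonly int board_id = (int)BoardIds.CYTON_BOARD;

            void Start()
            {
                var input_params = new BrainFlowInputParams();
                input_params.serial_port = "COM3"; // placeholder -- set this for your dongle

                board_shim = new BoardShim(board_id, input_params);
                board_shim.prepare_session();
                board_shim.start_stream();
            }

            void Update()
            {
                // Pull whatever samples have arrived since the last frame.
                double[,] data = board_shim.get_board_data();
                int[] eeg_channels = BoardShim.get_eeg_channels(board_id);
                // data[eeg_channels[i], sample] holds the EEG values to feed your pipeline.
            }

            void OnDestroy()
            {
                if (board_shim != null)
                {
                    board_shim.stop_stream();
                    board_shim.release_session();
                }
            }
        }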

    Training Sessions in Unity
    You asked if you should implement your training session in Unity. The answer really depends on what you are training and classifying. If the visual stimulus is important, Unity may be the right tool for your training environment. If the "game" portion of your application is independent of the metric you're classifying, I would recommend using a toolset more focused on machine learning. BrainFlow has bindings for Python, R, MATLAB, Java, and Julia, and each of these languages has machine learning toolsets associated with it, which may make training your model easier.

    Note that there is both Unity support and BrainFlow support for ONNX models. So you should be able to export your trained model from your platform of choice into a format digestible by your application.
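
    As a rough sketch of the inference side with Unity's Barracuda package (the model asset, input shape, and output interpretation below are all placeholders for whatever you end up training):

        using UnityEngine;
        using Unity.Barracuda;

        public class BciClassifier : MonoBehaviour
        {
            public NNModel modelAsset; // the imported .onnx asset, assigned in the Inspector
            private IWorker worker;

            void Start()
            {
                Model runtimeModel = ModelLoader.Load(modelAsset);
                worker = WorkerFactory.CreateWorker(WorkerFactory.Type.Auto, runtimeModel);
            }

            // features: one flattened, preprocessed EEG window (shape is a placeholder).
            public float Classify(float[] features)
            {
                using (var input = new Tensor(1, 1, 1, features.Length, features))
                {
                    worker.Execute(input);
                    Tensor output = worker.PeekOutput(); // owned by the worker; no need to dispose
                    return output[0]; // e.g. the probability of your "flap" class
                }
            }

            void OnDestroy()
            {
                worker?.Dispose();
            }
        }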

    Signal Processing Frequency
    This is a really good question to ask. There are several factors that can slow down your application, with garbage collection implications that may drag down your game's frame rate. If your model is large and has significant processing or memory requirements, it could affect the performance of your game. Similarly, if you are doing a lot of signal processing, it may also lower your frame rate. You'll need to test your specific model on your target hardware, together with the rest of your game's code and art assets, to determine how you want to spend your frame budget. The Unity profiler is your friend here.

    You may not need to perform signal processing on every frame. Using a countdown timer like you mentioned will reduce the processing load between evaluations at the cost of increased latency for your model. In many cases that increased latency is okay! However, keep in mind that even with a countdown timer, your FPS could drop sharply whenever you perform the evaluation, which can make your game appear to lag periodically. Evaluating portions of your signal processing pipeline across several frames can help spread that computational load out.
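
    A minimal sketch of the countdown approach (the interval value is arbitrary, and ClassifyLatestWindow is a hypothetical hook into your own pipeline):

        using UnityEngine;

        public class TimedEvaluator : MonoBehaviour
        {
            // Seconds between classifications -- tune against your latency budget.
            public float evaluationInterval = 0.25f;
            private float timer;

            void Update()
            {
                timer += Time.deltaTime;
                if (timer >= evaluationInterval)
                {
                    timer -= evaluationInterval;
                    ClassifyLatestWindow();
                }
            }

            // Hypothetical hook: pull the latest samples, filter, and run the model.
            private void ClassifyLatestWindow()
            {
            }
        }

    Unity coroutines (yielding between pipeline stages) are one idiomatic way to spread a single evaluation across multiple frames.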

    Depending on how many channels of data you are processing, you may be dealing with a relatively large amount of data. When you start dealing with large amounts of data, you need to be careful about how often you copy it, as copy operations can be expensive and slow. This is particularly important in C#, since its garbage collector can severely impact your frame rate if it is asked to do many allocations of large temporary data arrays. BrainFlow now has unsafe versions of many of its API calls, and you should use them if you are working in Unity. See https://github.com/brainflow-dev/brainflow/issues/613.
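
    The exact unsafe signatures are documented in the linked issue. Independent of those, a general pattern that avoids per-frame garbage is to allocate your working buffers once and reuse them, e.g.:

        using UnityEngine;

        public class FeatureBuffer
        {
            private readonly float[] window; // allocated once, reused every evaluation

            public FeatureBuffer(int windowSize)
            {
                window = new float[windowSize];
            }

            // Copy the newest samples for one channel into the reusable buffer
            // instead of allocating a fresh array each time.
            public float[] Fill(double[,] boardData, int channelRow)
            {
                int samples = Mathf.Min(window.Length, boardData.GetLength(1));
                for (int i = 0; i < samples; i++)
                {
                    window[i] = (float)boardData[channelRow, i];
                }
                return window;
            }
        }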

    Unity Update Functions
    One last note on the way out - Unity's Update and FixedUpdate calls do not come with timing guarantees. They have frequency targets that they will attempt to hit, but may miss if your performance is degraded. If you are doing ERPs or other studies with tight temporal coupling, and you are relying on FixedUpdate as a time step, you may see inconsistent or incorrect results. If accurately measuring elapsed time is important to your application, you should use something like the C# Stopwatch class.
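
    For example, a small timestamp helper along these lines (hypothetical; you would store these timestamps alongside your EEG markers):

        using System.Diagnostics;

        public class EventClock
        {
            // Stopwatch uses a high-resolution timer, unlike frame-based Time values.
            private readonly Stopwatch stopwatch = Stopwatch.StartNew();

            // Call at stimulus onset, response, etc., and log the value with your data.
            public double NowMilliseconds()
            {
                return stopwatch.Elapsed.TotalMilliseconds;
            }
        }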

  • wjcroft Mount Shasta, CA

    Philip, thanks for that excellent overview.

    It may make sense to post that page somewhere in the Docs tree, such as:

    https://docs.openbci.com/ForDevelopers/SoftwareDevelopment/

    Regards, William

  • @wjcroft @philippitts
    Thank you very much for your assistance and detailed response! Seems like there is a lot more for me to consider and learn.

    Right now I'm directly streaming the data from my Cyton to Unity using BrainFlow, but indeed I'm still rather lost on how to validate signal quality within Unity itself. I checked the documentation for the Networking widget... am I correct to assume the option there is to use LSL + LSL4Unity? Or can I still use BrainFlow together with the Networking widget? (with serial...?)

    I'm considering doing the training in a separate program; what I was unsure of, until your answer, was how to deploy the trained model. I'll see what I can do with the Barracuda package and an ONNX model. Right now I'm still weighing several options for what kind of activity to use, so I'm not sure how much the signal processing will affect performance. But I'll keep those points about performance in mind.

    Thank you again -- I'll see what I can mess around with now.

  • wjcroft Mount Shasta, CA

    @cyanindya said:
    ...
    Right now I'm directly streaming the data from my Cyton to Unity using BrainFlow, but indeed I'm still rather lost on how to validate signal quality within Unity itself.

    If you are talking about the impedance check feature in the OpenBCI_GUI, it might be easiest to just use the GUI initially for a few minutes with the Impedance widget. Then close that and open your Unity / BrainFlow program. Impedance checks cannot run simultaneously with data collection, so you are not losing anything with that approach.
