Seeking Advice on Parsing LSL Stream for Robot Control
in OpenBCI_GUI
Hello All,
I'm working on a project that streams data from a headset to control a robot via Lab Streaming Layer (LSL). The LSL setup itself is working, but I'm struggling to receive and parse the data in Python and then pass the parsed commands along to a separate script for robot control.
I would appreciate any suggestions, resources, or experiences you can share. Which Python libraries have you used for parsing LSL streams? And how did you translate the parsed data into commands for a separate script? Thanks in advance for any guidance you can provide.
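For context, here is a minimal sketch of the kind of thing I have in mind, assuming the `pylsl` library (`pip install pylsl`) and a local TCP socket for handing commands to the robot script. The two-channel layout, the threshold, the stream type `"EEG"`, and the port number are all placeholders for illustration:

```python
import socket

def classify_sample(sample, threshold=0.5):
    """Map one sample to a command string (hypothetical left/right mapping)."""
    left, right = sample[0], sample[1]
    if left - right > threshold:
        return "LEFT"
    if right - left > threshold:
        return "RIGHT"
    return "STOP"

def stream_commands(inlet, sock):
    """Pull samples from the LSL inlet; send one newline-terminated command each."""
    while True:
        sample, _timestamp = inlet.pull_sample(timeout=1.0)
        if sample is None:  # timeout expired, no data yet
            continue
        sock.sendall((classify_sample(sample) + "\n").encode())

def main(host="127.0.0.1", port=9999):
    """Wire everything up (requires a live stream and a listening robot script)."""
    from pylsl import StreamInlet, resolve_byprop
    streams = resolve_byprop("type", "EEG", timeout=5.0)
    inlet = StreamInlet(streams[0])
    with socket.create_connection((host, port)) as sock:
        stream_commands(inlet, sock)
```

The robot-control script would just run a small socket server that reads one command per line. Not sure if this decomposition (classify per sample, forward over a socket) is the right approach, which is why I'm asking.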
Best,
Jason
Comments
Jason, hi.
(Just one example): Motor Imagery has a long history and was one of the first BCI paradigms, but it is not as error-free or effective as some newer systems, such as cVEP, used by MindAffect. I suggest you investigate some of these newer paradigms for translating EEG activity into external 'commands'.
https://www.google.com/search?as_q=mindaffect&as_sitesearch=openbci.com
Good list of BCI paradigms at g.tec:
https://www.gtec.at/product/bcisystem/
William