
Integration of an Eye Gaze Interface and BCI with Biofeedback for Human-Robot Interaction

This hybrid gaze-BCI system was developed using the OpenBCI Cyton board together with the OpenViBE and LabVIEW software. The full text can be found here.

Background

The way children develop their cognitive and perceptual skills depends heavily on exploring objects and their physical surroundings. A child’s cognitive development refers to the ways they learn, feel, think, solve problems, and come to know their environment. Exploration of the environment through manipulating objects often occurs through play, which is essential to children’s development. The human-technology interface plays a fundamental role in controlling assistive technologies to perform functional play activities. Robots can serve as a means for children with physical impairments to perform these activities, and “human-robot interfaces” are used to access them. However, these interfaces generally require a certain degree of physical ability to access and operate. For users without voluntary, repeatable muscle control, who find it difficult to initiate accurate physical movements, operating such interfaces can be impossible. There are interfaces that do not require the ability to control body movement, such as those that use eye gaze data or brain signals, and in recent years their cost has become feasible for use in hospitals or homes.

Objective

The main objective of this study was to develop and test an integrated eye gaze and BCI-based human-robot interface that provides vibrotactile haptic feedback during eye gaze target selection and kinesthetic haptic feedback during motor imagery for robot control.

Methods

In this project, eye gaze and brain signals were integrated into a human-robot interface to control a Lego robot, and the system was tested with five adults without impairments and one adult with cerebral palsy. The task was to knock down one of two piles of blocks. The experimental setup consisted of four components: an eye tracking system, a BCI system, a haptic feedback system, and a mobile robot, as shown in Figure 1. A picture of the whole system is shown in Figure 2.

This human-robot interface enables a user to 1) directly select a desired target object in the physical environment based on eye gaze detected by a stationary eye tracker, the Tobii Eye Tracker 4C (Tobii Technology, Danderyd, Sweden), and 2) move a robot towards the target on the basis of their motor imagery, captured via a brain-computer interface, OpenBCI (OpenBCI, Inc., Brooklyn, NY, USA). A USB camera mounted over the task environment acquired image data of the entire workspace. Since the eye tracker is designed for use with a two-dimensional screen, the participant’s gaze was mapped onto the two-dimensional plane of the task environment using a projective homogeneous transformation (homography). Eight EEG channels over the pre-motor cortex (i.e., Cz, Cp, F3, C3, P3, F4, C4 and P4) were recorded at a sampling frequency of 250 Hz. After applying a 60 Hz notch filter for noise removal and a 7 to 30 Hz FIR band-pass filter to isolate the sensorimotor components of the EEG signals, a Common Spatial Pattern (CSP) filter was applied to extract a feature vector representing movement intention. The logarithmic power of the CSP features was then used as the input to a Linear Discriminant Analysis (LDA) classifier to discriminate between the participant’s motor imagery of “move” and “rest” in real time.
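The gaze-mapping step can be illustrated with a short sketch. The Python example below is an illustration only (the study's implementation used LabVIEW and OpenViBE): it assumes four calibration correspondences between eye-tracker coordinates and points in the workspace camera image, estimates the homography with OpenCV, and then projects new gaze samples into the task plane. All coordinate values and variable names are hypothetical.

```python
# Sketch: mapping 2-D gaze coordinates onto the task-environment plane
# with a projective (homography) transformation. Calibration values are
# made up for illustration; the study's actual pipeline used LabVIEW.
import numpy as np
import cv2

# Four calibration correspondences: gaze coordinates from the eye tracker
# (normalised units) and the same points in the workspace camera image (pixels).
gaze_pts = np.array([[0.1, 0.1], [0.9, 0.1], [0.9, 0.9], [0.1, 0.9]], dtype=np.float32)
workspace_pts = np.array([[40, 60], [600, 55], [610, 420], [35, 430]], dtype=np.float32)

H, _ = cv2.findHomography(gaze_pts, workspace_pts)

def gaze_to_workspace(gaze_xy):
    """Project one gaze sample into workspace (camera-image) coordinates."""
    u, v, w = H @ np.array([gaze_xy[0], gaze_xy[1], 1.0])
    return u / w, v / w

print(gaze_to_workspace((0.5, 0.5)))  # roughly the centre of the workspace
```

The EEG side of the pipeline (60 Hz notch, 7–30 Hz band-pass, CSP log-power features, LDA classification of move vs. rest) can likewise be sketched in Python with SciPy, MNE, and scikit-learn. This is a hedged illustration rather than the OpenViBE scenario used in the study; the epoch shapes, filter orders, and number of CSP components are assumptions, and synthetic data stand in for real calibration trials.

```python
# Sketch: EEG preprocessing and move/rest classification. Synthetic arrays of
# shape (n_trials, 8 channels, n_samples) at 250 Hz stand in for calibration
# data; the study implemented this pipeline in OpenViBE.
import numpy as np
from scipy.signal import iirnotch, firwin, filtfilt
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250  # sampling frequency in Hz

def preprocess(epochs):
    """60 Hz notch filter followed by a 7-30 Hz FIR band-pass filter."""
    b_notch, a_notch = iirnotch(w0=60, Q=30, fs=FS)
    b_band = firwin(numtaps=101, cutoff=[7, 30], pass_zero=False, fs=FS)
    x = filtfilt(b_notch, a_notch, epochs, axis=-1)
    return filtfilt(b_band, [1.0], x, axis=-1)

# Synthetic calibration data: 40 three-second trials, 8 channels, labelled move/rest.
rng = np.random.default_rng(0)
train_epochs = rng.standard_normal((40, 8, 750))
train_labels = np.repeat([0, 1], 20)  # 0 = rest, 1 = move

# CSP log-power features (four spatial filters assumed) feeding an LDA classifier.
csp = CSP(n_components=4, log=True)
lda = LinearDiscriminantAnalysis()
lda.fit(csp.fit_transform(preprocess(train_epochs), train_labels), train_labels)

# Online use: classify a new epoch as move (1) or rest (0).
new_epoch = rng.standard_normal((1, 8, 750))
prediction = lda.predict(csp.transform(preprocess(new_epoch)))[0]
print("move" if prediction == 1 else "rest")
```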

In addition, the human-robot interface provided two types of haptic biofeedback to the user. A visual representation of how well users are performing gaze control and motor imagery is generally crucial for operating such interfaces; however, visual feedback was difficult to provide in this experimental setup, which did not include a computer display. Two alternatives to visual feedback were therefore provided: 1) vibrotactile haptic feedback, which helped users sustain their eye gaze on the target object, and 2) kinesthetic haptic feedback, which passively moved the user’s hand through a haptic robot interface, the Novint Falcon (Novint Technologies, Inc., Albuquerque, NM, USA), based on the detected motor imagery.
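As a rough illustration of how the two feedback channels could be coordinated, the sketch below shows a simple dispatch loop: sustained gaze near the selected target turns the vibrotactile cue on, and a “move” classification from the BCI triggers the kinesthetic cue. The driver callables, dwell radius, and update rate are placeholders; the actual feedback in the study was implemented in LabVIEW with the Novint Falcon’s own drivers.

```python
# Sketch of a feedback dispatch loop. The callables passed in
# (read_gaze, read_bci_prediction, set_vibration, drive_falcon) are
# hypothetical stand-ins for the eye tracker, classifier output,
# vibrotactile motor, and Novint Falcon drivers used in the study.
import time

DWELL_RADIUS = 50.0   # px: how close the gaze must stay to the target (assumed)
LOOP_PERIOD = 0.05    # s: 20 Hz update rate (assumed)

def feedback_loop(target_xy, read_gaze, read_bci_prediction,
                  set_vibration, drive_falcon):
    """Run vibrotactile and kinesthetic feedback until interrupted."""
    while True:
        gx, gy = read_gaze()
        on_target = (gx - target_xy[0]) ** 2 + (gy - target_xy[1]) ** 2 <= DWELL_RADIUS ** 2

        # Vibrotactile feedback: confirm that gaze is being held on the target.
        set_vibration(on_target)

        # Kinesthetic feedback: passively move the user's hand whenever
        # the classifier reports "move" motor imagery.
        if read_bci_prediction() == "move":
            drive_falcon()

        time.sleep(LOOP_PERIOD)
```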

Results

Participants both with and without physical impairments successfully controlled the robot with the integrated interface. All adult participants without impairments completed the robot task faster in the haptic feedback condition than without feedback, and the difference was significant for two of them (p = 0.01).

Responses to a mental workload assessment, the NASA Task Load Index (NASA-TLX), showed that four of the five participants without impairments reported a lower workload in the haptic feedback condition than without feedback. Descriptive analysis also indicated that the individual with physical impairments performed the task faster and reported a lower workload in the haptic feedback condition.

A demo video of the hybrid gaze-BCI system is shown below.

Conclusions

It is important to note that the integrated human-robot interface in this study was developed with a low-cost consumer eye tracker, BCI, and haptic robot interface. The equipment typically used in eye gaze and BCI research for robot control consists of high-quality but expensive systems that are not affordable for many of the people who need them; such equipment generally costs at least ten times as much as the total cost of our system. Our proposed interface should therefore be substantially more affordable than the equipment reported in previous studies. The development of this system could be a step towards practical use of an integrated eye gaze and BCI interface in homes and hospitals.

Contact Information:

Isao Sakamaki

DynaBrain Inc.

Email: [email protected]
