
A New Gaze-BCI-Driven Control of an Upper Limb Exoskeleton for Rehabilitation in Real-World Tasks

Frisoli, Antonio; Loconsole, Claudio; Leonardis, Daniele; Barsotti, Michele; Bergamasco, Massimo
2012-01-01

Abstract

This paper proposes a new multimodal architecture for gaze-independent brain-computer interface (BCI)-driven control of a robotic upper limb exoskeleton for stroke rehabilitation, providing active assistance in the execution of reaching tasks in a real-world scenario. At the level of action planning, the patient's intention is decoded by means of an active vision system that combines a Kinect-based vision system, which can robustly identify and track 3-D objects online, with an eye-tracking system for object selection. At the level of action generation, a BCI is used to decode the patient's intention to move his/her own arm, on the basis of brain activity analyzed during motor imagery. The main kinematic parameters of the robot-assisted reaching movement (i.e., speed, acceleration, and jerk) are modulated by the output of the BCI classifier, so that the movement is performed under continuous control of the patient's brain activity. The system was experimentally evaluated in a group of three healthy volunteers and four chronic stroke patients. Experimental results show that all subjects were able to operate the exoskeleton by BCI with a classification accuracy of 89.4±5.0% in the robot-assisted condition, with no difference in performance observed between stroke patients and healthy subjects. This indicates the high potential of the proposed gaze-BCI-driven robotic assistance for the neurorehabilitation of patients with motor impairments after stroke, from the earliest phase of recovery.
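As an illustration of the control principle described in the abstract, the following Python sketch shows one plausible way the BCI output could modulate a robot-assisted reach: a minimum-jerk position profile whose progression is scaled by the classifier's motor-imagery confidence. This is a minimal sketch under stated assumptions, not the authors' implementation; all names (minimum_jerk_position, bci_modulated_reach, bci_output) and parameter values are hypothetical.

# Hypothetical sketch (not from the paper): a minimum-jerk reaching
# trajectory whose progression is gated by a motor-imagery BCI classifier.
import numpy as np

def minimum_jerk_position(x0, xf, tau):
    # Position along a minimum-jerk profile at normalized time tau in [0, 1];
    # the 10-15-6 polynomial is the classic minimum-jerk solution.
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s

def bci_modulated_reach(x0, xf, duration, dt, bci_output):
    # bci_output(t) is assumed to return a confidence in [0, 1] that the
    # patient is performing motor imagery of the movement. Low confidence
    # slows or halts the assisted movement, so execution stays under
    # continuous control of the patient's brain activity.
    tau, t, trajectory = 0.0, 0.0, []
    while tau < 1.0 and t < 10 * duration:  # guard against a stalled reach
        gain = np.clip(bci_output(t), 0.0, 1.0)     # modulation from the BCI
        tau = min(1.0, tau + gain * dt / duration)  # progress scaled by gain
        trajectory.append(minimum_jerk_position(x0, xf, tau))
        t += dt
    return np.array(trajectory)

if __name__ == "__main__":
    # Simulated classifier that alternates between confident and uncertain.
    positions = bci_modulated_reach(
        x0=np.array([0.0, 0.0, 0.0]),   # start of reach (m)
        xf=np.array([0.3, 0.1, 0.2]),   # selected object position (m)
        duration=3.0, dt=0.01,
        bci_output=lambda t: 0.9 if int(t) % 2 == 0 else 0.2,
    )
    print(positions.shape)

Gating the normalized time rather than the commanded velocity directly keeps the executed path on the smooth minimum-jerk profile, which matches the abstract's emphasis on modulating speed, acceleration, and jerk together.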


Use this identifier to cite or link to this document: https://hdl.handle.net/11382/382244

Citations
  • PMC: ND
  • Scopus: 176