Portable deep-learning decoder for motor imagery EEG signals based on a novel compact convolutional neural network incorporating a spatial-attention mechanism

In this study, we proposed a high-accuracy motor imagery (MI) EEG decoder by incorporating a spatial-attention mechanism into a convolutional neural network (CNN), and deployed it on a fully integrated single-chip microcontroller unit (MCU). After the CNN model was trained on a workstation computer using the GigaDB MI dataset (52 subjects), its parameters were extracted and converted to build a deep-learning architecture interpreter on the MCU. For comparison, the EEG-Inception model was also trained on the same dataset and deployed on the MCU. The results indicate that our deep-learning model can independently decode imagined left-/right-hand motions. The mean accuracy of the proposed compact CNN reaches 96.75 ± 2.41% (8 channels: Frontocentral3 (FC3), FC4, Central1 (C1), C2, Central-Parietal1 (CP1), CP2, C3, and C4), versus 76.96 ± 19.08% for EEG-Inception (6 channels: FC3, FC4, C1, C2, CP1, and CP2). To the best of our knowledge, this is the first portable deep-learning decoder for MI EEG signals. The findings demonstrate high-accuracy deep-learning decoding of MI EEG in a portable mode, which has great implications for hand-disabled patients. Our portable system can be used for developing artificial-intelligence wearable BCI devices, as it is less computationally expensive and convenient for real-life applications.
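The abstract does not specify the exact attention formulation used in the compact CNN; as a hedged illustration only, the sketch below shows one common way a spatial-attention block can reweight EEG channels (here scoring each channel by its mean power through a small learned linear map, a hypothetical choice not taken from the paper) before the signal enters the convolutional layers.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def spatial_attention(eeg, w, b):
    """Reweight EEG channels by attention scores.

    eeg : (n_channels, n_samples) array of band-passed EEG.
    w, b: hypothetical learned parameters of a per-channel linear scorer.
    Returns the reweighted signal (same shape) and the attention weights.
    """
    feat = (eeg ** 2).mean(axis=1)      # per-channel mean power as a summary feature
    scores = w * feat + b               # linear scoring (illustrative, not the paper's design)
    alpha = softmax(scores)             # weights sum to 1 across channels
    return eeg * alpha[:, None], alpha

# Example with 8 channels, matching the montage reported in the abstract
# (FC3, FC4, C1, C2, CP1, CP2, C3, C4), and an arbitrary window length.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 512))
weighted, alpha = spatial_attention(eeg, w=np.ones(8), b=0.0)
```

In a trained model, channels carrying stronger MI-related sensorimotor rhythms would receive larger weights, which is one plausible reason an attention-augmented compact CNN can outperform a fixed-montage baseline on fewer channels.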
Source: Medical and Biological Engineering and Computing - Category: Biomedical Engineering Source Type: research