Deep Learning With EEG Spectrograms in Rapid Eye Movement Behavior Disorder

We first describe a simple deep convolutional neural network (DCNN) with a five-layer architecture combining filtering and pooling, which we train using stacked multi-channel EEG spectrograms from patients with idiopathic RBD and from healthy controls (HC). We treat the data as in audio or image classification problems, where deep networks have proven successful by exploiting invariances and compositional features in the data. For comparison, we study a simple deep recurrent neural network (RNN) model using three stacked long short-term memory (LSTM) or gated recurrent unit (GRU) cells, with very similar results. The performance of these networks typically reaches 80\% ($\pm 1$\%) classification accuracy in the balanced classification problem of HCs vs. RBD patients who later converted to Parkinson's disease (PD). In particular, using data from the best single EEG channel, we obtain an area under the curve (AUC) of 87\% ($\pm 1$\%), while avoiding spectral feature selection. The trained classifier can also be used to generate synthetic spectrograms using the {\em DeepDream} algorithm to study which time-frequency features are relevant for classification. We find these to be bursts in the theta band, together with a decrease of bursting in the alpha band, in future RBD converters (i.e., those converting to PD or dementia with Lewy bodies, DLB, at follow-up) relative to HCs. From this first study, we conclude that deep networks may provide a useful tool for the analysis of EEG dynamics even from relatively small datasets, offering physiological...
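To make the convolutional architecture concrete, the following is a minimal sketch in PyTorch of a five-layer network combining filtering (convolution) and pooling over stacked multi-channel spectrograms. The channel counts, kernel sizes, and input dimensions are illustrative assumptions, not the values used in the study.

\begin{verbatim}
# Minimal sketch of a five-layer DCNN over stacked multi-channel EEG
# spectrograms. All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class SpectrogramDCNN(nn.Module):
    def __init__(self, in_channels=14, n_classes=2):
        super().__init__()
        def block(c_in, c_out):
            # each layer combines filtering (conv) and pooling
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(
            block(in_channels, 16),
            block(16, 32),
            block(32, 64),
            block(64, 64),
            block(64, 64),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, EEG channels, frequency bins, time frames)
        h = self.features(x)
        h = h.mean(dim=(2, 3))  # global average pooling
        return self.classifier(h)

model = SpectrogramDCNN()
logits = model(torch.randn(8, 14, 64, 128))  # dummy batch
\end{verbatim}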
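The recurrent variant can be sketched analogously: three stacked LSTM (or GRU) layers reading the spectrogram as a time sequence of per-frame feature vectors, with classification from the final hidden state. Again, the sizes are assumptions for illustration.

\begin{verbatim}
# Minimal sketch of a recurrent classifier with three stacked LSTM
# (or GRU) cells. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SpectrogramRNN(nn.Module):
    def __init__(self, n_features=14 * 64, hidden=128,
                 n_classes=2, cell="lstm"):
        super().__init__()
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        self.rnn = rnn_cls(n_features, hidden,
                           num_layers=3, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time frames, EEG channels * frequency bins)
        out, _ = self.rnn(x)
        return self.classifier(out[:, -1])  # last time step

rnn_model = SpectrogramRNN(cell="gru")
logits = rnn_model(torch.randn(8, 128, 14 * 64))
\end{verbatim}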
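Finally, the {\em DeepDream}-style synthesis amounts to gradient ascent on the input: starting from noise, the spectrogram is optimized to maximize the trained classifier's score for a chosen class, revealing time-frequency features the network responds to. A minimal sketch follows, reusing the hypothetical SpectrogramDCNN above; the step count and learning rate are arbitrary.

\begin{verbatim}
# Sketch of DeepDream-style input optimization: ascend the classifier
# score for class_idx starting from a random spectrogram.
import torch

def dream(model, class_idx, shape=(1, 14, 64, 128),
          steps=200, lr=0.05):
    model.eval()
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        score = model(x)[0, class_idx]
        (-score).backward()  # minimize -score, i.e. ascend score
        opt.step()
    return x.detach()

# e.g., synthesize a spectrogram for the assumed "converter" class
synthetic = dream(model, class_idx=1)
\end{verbatim}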