Aligning Semantic in Brain and Language: A Curriculum Contrastive Method for Electroencephalography-to-Text Generation

Electroencephalography-to-Text generation (EEG-to-Text), which aims to generate natural text directly from EEG signals, has drawn increasing attention in recent years due to its enormous potential for brain-computer interfaces. However, the substantial discrepancy between subject-dependent EEG representations and semantic-dependent text representations poses a great challenge to this task. To mitigate this, we devise a Curriculum Semantic-aware Contrastive Learning strategy (C-SCL), which effectively recalibrates subject-dependent EEG representations into semantic-dependent EEG representations, thereby reducing the discrepancy. Specifically, C-SCL pulls semantically similar EEG representations together while pushing apart dissimilar ones. In addition, we employ curriculum learning both to craft meaningful contrastive pairs and to make the learning progress gradually. We conduct extensive experiments on the ZuCo benchmark, and our method, combined with diverse models and architectures, shows stable improvements across three types of metrics while achieving new state-of-the-art results. Further investigation demonstrates not only its superiority in both the single-subject and low-resource settings but also its robust generalizability in the zero-shot setting. Our code is available at: https://github.com/xcfcode/contrastive_eeg2text.
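To make the core pull/push idea concrete, below is a minimal sketch (not the authors' implementation) of a semantic-aware contrastive objective: EEG representations whose paired sentences are semantically similar are treated as positives and pulled together, while dissimilar ones are pushed apart. The function name `semantic_contrastive_loss`, the `sim_threshold` heuristic for deciding positives, and the temperature value are illustrative assumptions; the curriculum scheduling of pairs from easy to hard is not shown.

```python
import torch
import torch.nn.functional as F

def semantic_contrastive_loss(eeg_emb, text_emb, sim_threshold=0.8, temperature=0.1):
    """eeg_emb:  (B, d) EEG representations from the encoder.
       text_emb: (B, d) sentence embeddings, used only to decide which
                 EEG pairs count as semantically similar (positives)."""
    eeg_emb = F.normalize(eeg_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Positive mask: pairs whose paired sentences are similar enough.
    text_sim = text_emb @ text_emb.t()                 # (B, B)
    pos_mask = (text_sim > sim_threshold).float()
    pos_mask.fill_diagonal_(0)                         # exclude self-pairs

    # InfoNCE-style logits over EEG representations.
    logits = (eeg_emb @ eeg_emb.t()) / temperature     # (B, B)
    logits.fill_diagonal_(-1e9)                        # remove self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # Average log-likelihood of positives for each anchor that has any.
    pos_counts = pos_mask.sum(dim=1)
    has_pos = pos_counts > 0
    if not has_pos.any():
        return eeg_emb.new_zeros(())                   # no semantic positives in this batch
    loss = -(pos_mask * log_prob).sum(dim=1)[has_pos] / pos_counts[has_pos]
    return loss.mean()
```

In a curriculum setting, such a loss would be applied first to batches containing only easy (clearly similar or clearly dissimilar) pairs and then to progressively harder ones, which is one way to realize the gradual learning described above.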
Source: IEEE Transactions on Neural Systems and Rehabilitation Engineering - Category: Neuroscience Source Type: research