Brain Decoding of Viewed Image Categories via Semi-Supervised Multi-View Bayesian Generative Model

Brain decoding has shown that viewed image categories can be estimated from evoked functional magnetic resonance imaging (fMRI) activity. Recent studies have attempted to estimate viewed image categories that were not previously used for training. Nevertheless, estimation performance remains limited because it is difficult to collect a large amount of fMRI data for training. This paper presents a method to accurately estimate viewed image categories not used for training via a semi-supervised multi-view Bayesian generative model. Our model focuses on the relationship between fMRI activity and multiple modalities, i.e., visual features extracted from viewed images and semantic features obtained from viewed image categories. Furthermore, to accurately estimate image categories not used for training, our semi-supervised framework incorporates visual and semantic features obtained from additional image categories beyond those in the training data. The proposed model outperforms existing state-of-the-art models in the brain decoding field and achieves more than 95% identification accuracy. The results also show that incorporating additional image category information is remarkably effective when the number of training samples is small. Our semi-supervised framework is valuable for brain decoding settings where brain activity data are scarce but visual stimuli are plentiful.
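To make the identification setup concrete, the following is a minimal sketch, not the paper's Bayesian generative model: it substitutes ridge regression for the multi-view model, uses synthetic data, and all names and sizes (`n_cats`, `d_sem`, `d_fmri`, the train/test split) are illustrative assumptions. It shows the zero-shot idea of mapping fMRI patterns into a semantic feature space and identifying an unseen category among candidates by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup (illustrative, not from the paper): each image category
# has a semantic feature vector, and fMRI responses are modeled as a noisy
# linear transform of those features.
n_cats, d_sem, d_fmri = 40, 20, 100
S = rng.normal(size=(n_cats, d_sem))           # semantic features per category
W_true = rng.normal(size=(d_sem, d_fmri))      # unknown brain "encoding"
X = S @ W_true + 0.5 * rng.normal(size=(n_cats, d_fmri))  # fMRI patterns

seen = np.arange(30)        # categories available for training
unseen = np.arange(30, 40)  # categories never seen during training

# Ridge regression from fMRI space to semantic space -- a simple stand-in
# for the semi-supervised multi-view Bayesian model.
lam = 1.0
A = np.linalg.solve(X[seen].T @ X[seen] + lam * np.eye(d_fmri),
                    X[seen].T @ S[seen])       # shape (d_fmri, d_sem)

# Zero-shot identification: for each unseen fMRI pattern, predict a
# semantic vector and pick the candidate category with the highest
# cosine similarity.
pred = X[unseen] @ A

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

correct = sum(
    unseen[np.argmax([cos(p, S[c]) for c in unseen])] == true_c
    for p, true_c in zip(pred, unseen)
)
print(f"identified {correct} / {len(unseen)} unseen categories")
```

Because the unseen categories share the same feature-to-fMRI relationship, the learned mapping generalizes to them; the paper's contribution is to learn such a shared relationship in a Bayesian multi-view model that can additionally exploit visual and semantic features from categories with no fMRI data at all.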
Source: IEEE Transactions on Signal Processing - Category: Biomedical Engineering - Source Type: research