MRI-Informed Multi-Modal Deep Learning for EEG Source Imaging with Subject-Specific Head Structure Recognition
Authors: Jung Hun Phee, Hyejin Park, Dajeong Kang, Minwoo Shin and Kyungho Yoon
Electroencephalography (EEG) source imaging aims to reconstruct the origins of neural activity within the brain from non-invasive scalp recordings. However, it remains fundamentally challenging due to the ill-posed nature of the inverse problem and substantial inter-subject anatomical variability. Traditional numerical methods and recent deep learning approaches often rely on strong priors or fixed head models, limiting their ability to generalize across diverse subjects. To overcome these limitations, we propose a multi-modal deep learning model for EEG source imaging that integrates temporal EEG signals with individual head structural information. Subject-specific head models derived from magnetic resonance (MR) images are incorporated into a deep neural network trained on simulated EEG datasets generated through realistic forward modeling. By jointly learning from the spatial information of MRI and the temporal dynamics of EEG, the model effectively recognizes individual morphological features without requiring subject-specific training. We systematically evaluate our model under various experimental conditions, demonstrating improved performance over uni-modal (EEG-only) baselines. Furthermore, experiments on previously unseen MRI data confirm that the proposed approach generalizes well to diverse anatomical structures, maintaining a localization accuracy of 5 mm with an inference time of 1.34 ms, enabling real-time application. This work highlights the potential of personalized, structure-aware deep learning models for enhancing the accuracy and clinical applicability of EEG source imaging.