Multimodal data fusion is a primary direction in current neuroimaging research, aiming to overcome the fundamental limitations of individual modalities by exploiting complementary information from different modalities. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are especially compelling modalities due to their potentially complementary features reflecting the electro-hemodynamic characteristics of neural responses. However, current multimodal studies lack a comprehensive, systematic approach to properly merging the complementary features of their multimodal data; identifying such an approach for fusing EEG-fNIRS data is crucial for exploiting their complementary potential and improving performance. This paper proposes a framework for classifying fused EEG-fNIRS data at the feature level, relying on a mutual information-based feature selection approach that accounts for the complementarity between features. The goal is to optimize the complementarity, redundancy, and relevance of multimodal features with respect to the class labels, i.e., pathological condition versus healthy control. Nine amyotrophic lateral sclerosis (ALS) patients and nine healthy controls underwent multimodal data recording during a visuo-mental task. Multiple spectral and temporal features were extracted and fed to a feature selection algorithm followed by a classifier, which selected the optimized subset of features through a cross-validation process. The results demonstrated considerably improved hybrid classification performance compared with the individual modalities and with conventional classification without feature selection, suggesting the potential efficacy of the proposed framework for wider neuro-clinical applications.
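To make the feature-selection criterion concrete, the sketch below implements a minimal greedy max-relevance/min-redundancy (mRMR-style) selection over discretized features, using mutual information with the class labels as relevance and mean pairwise mutual information with already-selected features as redundancy. This is an illustrative approximation of the kind of criterion the abstract describes, not the authors' implementation; the function names and the toy data are assumptions made for the example.

```python
# Minimal mRMR-style feature selection sketch (illustrative, not the paper's code).
import math
from collections import Counter

def mutual_information(x, y):
    """I(X;Y) in nats for two equal-length discrete sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    # I(X;Y) = sum over (a,b) of p(a,b) * log( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def mrmr_select(features, labels, k):
    """Greedily pick k feature indices maximizing
    relevance I(f; labels) minus mean redundancy I(f; f_selected)."""
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k and remaining:
        best, best_score = None, -float("inf")
        for i in remaining:
            relevance = mutual_information(features[i], labels)
            redundancy = (sum(mutual_information(features[i], features[j])
                              for j in selected) / len(selected)
                          if selected else 0.0)
            if relevance - redundancy > best_score:
                best, best_score = i, relevance - redundancy
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data (hypothetical): feature 1 duplicates feature 0, so despite its
# high relevance it is fully redundant; feature 2 is moderately relevant
# but carries complementary information, so it is selected second.
labels   = [0, 0, 0, 0, 1, 1, 1, 1]
features = [
    [0, 0, 0, 1, 1, 1, 1, 1],  # highly relevant
    [0, 0, 0, 1, 1, 1, 1, 1],  # redundant copy of feature 0
    [0, 0, 0, 0, 1, 1, 1, 0],  # relevant and complementary
]
print(mrmr_select(features, labels, 2))  # → [0, 2]
```

In the paper's setting, the discrete sequences would be replaced by discretized EEG/fNIRS spectral and temporal features, and the selected subset would be evaluated by a classifier inside a cross-validation loop.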