
Task-specific feature extraction and classification of fMRI volumes using a deep neural network initialized with a deep belief network: Evaluation using sensorimotor tasks



Abstract

Feedforward deep neural networks (DNNs), artificial neural networks with multiple hidden layers, have recently demonstrated record-breaking performance in multiple application areas of computer vision and speech processing. Following this success, DNNs have been applied to neuroimaging modalities including functional/structural magnetic resonance imaging (MRI) and positron-emission tomography data. However, no study has explicitly applied DNNs to 3D whole-brain fMRI volumes to extract hidden volumetric representations of fMRI that are discriminative for the task performed while the fMRI volume was acquired. Our study applied a fully connected feedforward DNN to fMRI volumes collected during four sensorimotor tasks (i.e., left-hand clenching, right-hand clenching, auditory attention, and visual stimulus) undertaken by 12 healthy participants. Using a leave-one-subject-out cross-validation scheme, a restricted Boltzmann machine-based deep belief network was pretrained and used to initialize the weights of the DNN. The pretrained DNN was fine-tuned while systematically controlling weight-sparsity levels across the hidden layers. Optimal weight-sparsity levels were determined from the minimum validation error rate of fMRI volume classification. A minimum error rate (mean ± standard deviation; %) of 6.9 ± 3.8 was obtained from the three-layer DNN with the sparsest weight condition across the three hidden layers. This error rate was lower than those of the single-layer network (9.4 ± 4.6) and the two-layer network (7.4 ± 4.1). The estimated DNN weights showed spatial patterns that are remarkably task-specific, particularly in the higher layers. The output values of the third hidden layer represented distinct patterns/codes of the 3D whole-brain fMRI volume and encoded task information, as evaluated by representational similarity analysis.
Our findings demonstrate the ability of the DNN to classify a single fMRI volume based on the extraction of task-related hidden representations across multiple hidden layers. Our study may be beneficial to the automatic classification/diagnosis of neuropsychiatric and neurological diseases and to the prediction of disease severity and recovery in (pre-)clinical settings using fMRI volumes, without requiring an estimation of activation patterns or ad hoc statistical evaluation. (C) 2016 Elsevier Inc. All rights reserved.
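The pretraining-then-fine-tuning pipeline described in the abstract can be illustrated with a minimal numpy sketch: an RBM is trained with one-step contrastive divergence (CD-1), its weights initialize a classifier's hidden layer, and fine-tuning adds an L1 penalty to encourage weight sparsity. This is a hedged, simplified illustration — a single RBM/hidden layer on toy data, not the authors' three-layer whole-brain implementation; all variable names, sizes, and the random data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for flattened fMRI volumes: 200 samples, 64 voxels, 4 task classes.
X = rng.random((200, 64))
y = rng.integers(0, 4, 200)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Step 1: unsupervised RBM pretraining via one-step contrastive divergence ---
n_hidden = 32
W = 0.01 * rng.standard_normal((64, n_hidden))
b_vis = np.zeros(64)
b_hid = np.zeros(n_hidden)
lr = 0.05
for epoch in range(20):
    h_prob = sigmoid(X @ W + b_hid)                       # positive phase
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_recon = sigmoid(h_sample @ W.T + b_vis)             # reconstruction
    h_recon = sigmoid(v_recon @ W + b_hid)                # negative phase
    W += lr * (X.T @ h_prob - v_recon.T @ h_recon) / len(X)
    b_vis += lr * (X - v_recon).mean(axis=0)
    b_hid += lr * (h_prob - h_recon).mean(axis=0)

# --- Step 2: initialize the classifier with the RBM weights, then fine-tune
#     with an L1 penalty standing in for the paper's weight-sparsity control ---
n_classes = 4
W_out = 0.01 * rng.standard_normal((n_hidden, n_classes))
b_out = np.zeros(n_classes)
l1 = 1e-4            # sparsity strength (the paper tunes such levels per layer
Y = np.eye(n_classes)[y]  # on validation error; here it is fixed for brevity)
for epoch in range(200):
    H = sigmoid(X @ W + b_hid)
    logits = H @ W_out + b_out
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)                     # softmax
    dlogits = (P - Y) / len(X)                            # cross-entropy gradient
    dW_out = H.T @ dlogits + l1 * np.sign(W_out)
    dH = dlogits @ W_out.T * H * (1 - H)
    dW = X.T @ dH + l1 * np.sign(W)
    W_out -= 0.5 * dW_out; b_out -= 0.5 * dlogits.sum(axis=0)
    W -= 0.5 * dW; b_hid -= 0.5 * dH.sum(axis=0)

# Training accuracy on the toy data (chance level would be ~0.25).
H = sigmoid(X @ W + b_hid)
acc = ((H @ W_out + b_out).argmax(axis=1) == y).mean()
```

In the actual study this scheme is stacked layer-by-layer (each RBM's hidden activations feed the next RBM) and evaluated with leave-one-subject-out cross-validation rather than training accuracy.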

Bibliographic information

  • Source
    NeuroImage | 2017, Issue 2 | 15 pages
  • Author affiliations

    Korea Univ, Dept Brain & Cognit Engn, Anam Dong 5ga, Seoul 02841, South Korea;

    Mind Res Network, 1101 Yale Blvd NE, Albuquerque, NM 87106, USA

  • Indexing
  • Format: PDF
  • Language: English
  • CLC classification: Diagnostics
  • Keywords

