Journal: Physical Review E

Mechanisms of dimensionality reduction and decorrelation in deep neural networks


Abstract

Deep neural networks are widely used across many domains, yet the nature of the computation performed at each layer remains poorly understood, making it important to increase their interpretability. Here, we construct a mean-field framework to understand how compact representations develop across layers, not only in deterministic deep networks with random weights but also in generative deep networks trained by unsupervised learning. Our theory shows that the deep computation implements dimensionality reduction while maintaining a finite level of weak correlations between neurons for possible feature extraction. Mechanisms of dimensionality reduction and decorrelation are unified in the same framework. This work may pave the way for understanding how a sensory hierarchy works.
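The dimensionality-reduction claim for random-weight deterministic networks can be illustrated numerically. The sketch below (not the paper's code; all parameter values are illustrative assumptions) propagates an ensemble of Gaussian inputs through a deep tanh network with i.i.d. random weights and biases in the ordered regime, and tracks the participation ratio of the activation covariance spectrum, a standard proxy for effective dimensionality. Sample-sample correlations grow with depth in this regime, so the participation ratio shrinks across layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(x):
    """Effective dimensionality (sum lambda)^2 / sum lambda^2 of the
    covariance spectrum of activations x (shape: neurons x samples)."""
    xc = x - x.mean(axis=1, keepdims=True)
    lam = np.linalg.eigvalsh(xc @ xc.T / x.shape[1])
    return lam.sum() ** 2 / (lam ** 2).sum()

# Illustrative sizes/parameters (assumptions, not taken from the paper)
N, P, L = 500, 200, 15        # neurons per layer, input samples, depth
sigma_w, sigma_b = 0.8, 0.5   # ordered phase: overlaps between inputs grow

x = rng.standard_normal((N, P))           # ensemble of random inputs
prs = [participation_ratio(x)]
for _ in range(L):
    W = rng.normal(0.0, sigma_w / np.sqrt(N), (N, N))
    b = rng.normal(0.0, sigma_b, (N, 1))  # bias shared across samples
    x = np.tanh(W @ x + b)
    prs.append(participation_ratio(x))

print(f"PR at input: {prs[0]:.1f}, PR at layer {L}: {prs[-1]:.1f}")
```

Because the shared biases pull all inputs toward a common direction while the ordered-phase weights contract differences between them, the covariance spectrum becomes increasingly dominated by a few modes, and the printed participation ratio at depth L is far below its input value.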

