International Conference on Advanced Computer Science and Information Systems

Combining normal sparse into discriminative deep belief networks

Abstract

Training deep models is time consuming and prone to many local minima. To deal with this problem, we can use a DBN (Deep Belief Network) trained with Contrastive Divergence (CD). Sparse representations are more efficient. This paper aims to find the best structure for incorporating sparsity into a discriminative DBN. We use a DBN architecture with 784 input units, two layers of 500 hidden units each, one layer of 2000 hidden units, and 10 output units. We argue that combining sparsity with a discriminative DBN may increase accuracy, but no previous study suggests the structure or configuration of that combination that gives the best accuracy. We carried out three stages of experiments to find the best configuration: preliminary, intermediate, and final. The analysis of each stage serves as background for the design of the next experiment. We use normal sparse for both the generative DBN and the discriminative DBN. Experimental studies on the MNIST dataset show that the best structure for combining normal sparse into deep belief networks is: input → generative (CD) → generative (CD) → normal sparse discriminative (CD).
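To make the reported architecture and training scenario concrete, the following is a minimal, illustrative NumPy sketch (not the authors' code) of greedy layer-wise CD-1 pre-training for the 784-500-500-2000 stack, with a simple sparsity penalty applied only to the top RBM to mirror the reported best scenario. The 10-way label layer and the full discriminative training objective are omitted, and the sparsity term is a generic stand-in for the paper's normal-sparse regularizer; all names and hyperparameters here are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann Machine trained with CD-1.

    If sparsity_target is set, a simple penalty nudges the mean hidden
    activation toward the target (an illustrative stand-in for the paper's
    normal-sparse regularizer, not its exact formulation).
    """
    def __init__(self, n_visible, n_hidden, lr=0.05,
                 sparsity_target=None, sparsity_cost=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr
        self.sparsity_target = sparsity_target
        self.sparsity_cost = sparsity_cost
        self.rng = rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_step(self, v0):
        # Positive phase
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer and up again
        v1 = sigmoid(h0_sample @ self.W.T + self.b_v)
        h1 = self.hidden_probs(v1)
        # Contrastive Divergence parameter updates
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        # Optional sparsity penalty on the mean hidden activation
        if self.sparsity_target is not None:
            self.b_h += self.lr * self.sparsity_cost * (
                self.sparsity_target - h0.mean(axis=0))

# Layer sizes from the abstract: 784-500-500-2000 (the 10-way classifier on
# top is omitted here). Best reported scenario: plain generative CD on the
# first two RBMs, sparsity only on the top (discriminative) RBM.
layers = [
    RBM(784, 500),                         # generative (CD)
    RBM(500, 500),                         # generative (CD)
    RBM(500, 2000, sparsity_target=0.05),  # normal sparse discriminative (CD)
]

# Greedy layer-wise pre-training on toy binary data (the paper's experiments
# use MNIST instead).
rng = np.random.default_rng(1)
data = (rng.random((256, 784)) < 0.2).astype(float)
for rbm in layers:
    for epoch in range(5):
        rbm.cd1_step(data)
    data = rbm.hidden_probs(data)  # feed activations into the next layer
print("top-layer representation shape:", data.shape)
```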
