Predicting Parameters in Deep Learning

Abstract

We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.
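The abstract does not spell out the prediction mechanism, so the following is a minimal sketch of the general idea rather than the authors' implementation. It assumes the kernel-interpolation view described in the paper: only the weights at a few anchor locations of each feature are learned, and the remaining weights are predicted with a smooth squared-exponential kernel over pixel coordinates. The helper names (se_kernel, predict_feature), the anchor count, and the lengthscale are illustrative choices, not values from the paper.

```python
import numpy as np

def se_kernel(a, b, lengthscale=2.0):
    """Squared-exponential kernel between two sets of 2-D pixel coordinates."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def predict_feature(anchor_coords, anchor_weights, all_coords, lengthscale=2.0, ridge=1e-4):
    """Predict one feature's full weight vector from a few learned anchor weights
    via kernel ridge regression over pixel locations."""
    K_aa = se_kernel(anchor_coords, anchor_coords, lengthscale)
    K_xa = se_kernel(all_coords, anchor_coords, lengthscale)
    alpha = np.linalg.solve(K_aa + ridge * np.eye(len(anchor_coords)), anchor_weights)
    return K_xa @ alpha

# 28x28 input -> 784 weights per feature, but only 50 anchor weights are stored.
side = 28
all_coords = np.stack(np.meshgrid(np.arange(side), np.arange(side)), axis=-1).reshape(-1, 2).astype(float)
rng = np.random.default_rng(0)
anchor_idx = rng.choice(side * side, size=50, replace=False)
anchor_weights = rng.normal(size=50)   # stands in for the learned parameters
full_weights = predict_feature(all_coords[anchor_idx], anchor_weights, all_coords)
print(full_weights.shape)              # (784,)
```

In this example only 50 of the 784 weights per feature would be learned; the remaining roughly 94% are filled in by the kernel ridge predictor, mirroring the paper's claim that most weights need not be learned at all.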

