IEEE Statistical Signal Processing Workshop

Simultaneous Sparsity and Parameter Tying for Deep Learning Using Ordered Weighted ℓ1 Regularization



Abstract

A deep neural network (DNN) usually contains millions of parameters, making both storage and computation extremely expensive. Although this high capacity allows DNNs to learn sophisticated mappings, it also makes them prone to over-fitting. To tackle this issue, we adopt a recently proposed sparsity-inducing regularizer called OWL (ordered weighted ℓ1), which has proven effective in sparse linear regression with strongly correlated covariates. Unlike conventional sparsity-inducing regularizers, OWL simultaneously eliminates unimportant variables by setting their weights to zero and explicitly identifies correlated groups of variables by tying the corresponding weights to a common value. We evaluate the OWL regularizer on several deep learning benchmarks, showing that it can dramatically compress the network with little or no loss in generalization accuracy.
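For reference, the OWL penalty on a weight vector θ is Σᵢ wᵢ·|θ|₍ᵢ₎, where |θ|₍₁₎ ≥ |θ|₍₂₎ ≥ … are the absolute values of θ sorted in decreasing order and w₁ ≥ w₂ ≥ … ≥ 0 is a fixed non-increasing weight sequence; the largest-magnitude parameters receive the largest penalties, which is what ties correlated weights toward a common value. The NumPy sketch below only evaluates this penalty; the OSCAR-style weight choice and the numeric values are illustrative assumptions, not taken from the paper.

    import numpy as np

    def owl_penalty(theta, w):
        """Ordered weighted l1 penalty: sum_i w[i] * |theta|_(i),
        where |theta|_(1) >= |theta|_(2) >= ... are the sorted absolute
        values and w is non-negative and non-increasing."""
        mags = np.sort(np.abs(theta))[::-1]   # magnitudes, largest first
        return float(np.dot(w, mags))

    # Illustrative (hypothetical) values only.
    theta = np.array([0.9, -0.91, 0.05, 0.0, -0.3])
    n = theta.size
    lam1, lam2 = 0.1, 0.05
    # OSCAR-style weights w_i = lam1 + lam2 * (n - i), a common OWL instance.
    w = lam1 + lam2 * np.arange(n - 1, -1, -1, dtype=float)
    print(owl_penalty(theta, w))

In training, such a penalty would be added to the task loss (or handled via its proximal operator); the sketch is only meant to make the ordering-and-weighting structure of OWL concrete.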

