
MODEL COMPRESSION BY SPARSITY-INDUCING REGULARIZATION OPTIMIZATION


Abstract

The performance of a neural network (NN) and/or deep neural network (DNN) can be limited by the number of operations being performed as well as by the management of data among the various memory components of the NN/DNN. A sparsity-inducing regularization optimization process is performed on a machine learning model to generate a compressed machine learning model. A machine learning model is trained using a first set of training data. A sparsity-inducing regularization optimization process is executed on the machine learning model. Based on the sparsity-inducing regularization optimization process, a compressed machine learning model is received. The compressed machine learning model is executed to generate one or more outputs.
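The abstract does not disclose implementation details. The following is a minimal sketch, not the patented method, assuming an L1 (lasso) penalty as the sparsity-inducing regularizer and magnitude thresholding as the compression step; it is written with PyTorch, and the names train_with_l1, compress, l1_lambda, and threshold are illustrative choices rather than terms from the patent.

```python
# Sketch: sparsity-inducing regularization followed by compression (PyTorch).
import torch
import torch.nn as nn

def train_with_l1(model, data_loader, epochs=5, lr=1e-2, l1_lambda=1e-4):
    """Train while adding an L1 penalty that pushes weights toward zero."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, targets in data_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            # Sparsity-inducing regularization term: sum of |w| over all weights.
            l1_penalty = sum(p.abs().sum() for p in model.parameters())
            (loss + l1_lambda * l1_penalty).backward()
            optimizer.step()
    return model

def compress(model, threshold=1e-3):
    """Zero out near-zero weights; the resulting sparse model can be stored
    and executed more cheaply (e.g., with sparse kernels)."""
    with torch.no_grad():
        for p in model.parameters():
            p.masked_fill_(p.abs() < threshold, 0.0)
    return model
```

In this sketch, the L1 term drives many weights toward zero during training, and the thresholding pass then removes them, yielding the compressed model that is subsequently executed to produce outputs.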

