Home > Foreign Journals > PLoS One > Group-based local adaptive deep multiple kernel learning with lp norm

Group-based local adaptive deep multiple kernel learning with lp norm



Abstract

The deep multiple kernel learning (DMKL) method has attracted wide attention because it achieves better classification performance than shallow multiple kernel learning. However, existing DMKL methods struggle to find global model parameters that improve classification accuracy across many datasets, and they do not account for inter-class correlation or intra-class diversity. In this paper, we present a group-based local adaptive deep multiple kernel learning (GLDMKL) method with an lp norm. GLDMKL divides the samples into multiple groups using the multiple kernel k-means clustering algorithm, and the learning process within each well-grouped local space is adaptive deep multiple kernel learning. Because the structure is adaptive, there is no fixed number of layers: the learning model in each group is trained independently, so the number of layers may differ from group to group. Within each local space, the model is adapted by alternately optimizing the SVM model parameter α and the local kernel weight β. The local kernel weight changes the proportion of each base kernel in the combined kernel at every layer and is constrained by the lp norm to avoid sparsity among the base kernels. The kernel hyperparameters are optimized by grid search. Experiments on the UCI and Caltech 256 datasets demonstrate that the proposed method achieves higher classification accuracy than other deep multiple kernel learning methods, especially on datasets with relatively complex data.
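The lp-norm-constrained combination of base kernels described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names (`rbf_kernel`, `lp_normalize`, `combined_kernel`), the choice of RBF base kernels at several bandwidths, and the simple rescaling used to enforce the lp constraint are all assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Base kernel: RBF with bandwidth parameter gamma.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def lp_normalize(beta, p=2.0):
    # Rescale non-negative weights onto the lp-norm unit sphere.
    # For p > 1 this keeps all base kernels active (non-sparse),
    # which is the motivation for the lp constraint in the abstract.
    beta = np.maximum(np.asarray(beta, dtype=float), 0.0)
    return beta / np.sum(beta**p) ** (1.0 / p)

def combined_kernel(X, gammas, beta, p=2.0):
    # One layer's combined kernel: a weighted sum of base kernels,
    # with the local kernel weight beta constrained by the lp norm.
    beta = lp_normalize(beta, p)
    K = sum(b * rbf_kernel(X, g) for b, g in zip(beta, gammas))
    return K, beta

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
K, beta = combined_kernel(X, gammas=[0.1, 1.0, 10.0], beta=[0.5, 1.0, 2.0], p=2.0)
```

In the full method, β would be updated in alternation with the SVM parameter α inside each group, and the candidate bandwidths (here `gammas`) would come from a grid search.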

