European Conference on Machine Learning and Knowledge Discovery in Databases

Learning with L_{q<1} vs L_1-Norm Regularisation with Exponentially Many Irrelevant Features



Abstract

We study the use of fractional norms for regularisation in supervised learning from high-dimensional data with a large number of irrelevant features, focusing on logistic regression. We develop a variational method for parameter estimation, and show an equivalence between two approximations recently proposed in the statistics literature. Building on previous work by A. Ng, we show that fractional-norm-regularised logistic regression enjoys a sample complexity that grows logarithmically with the data dimension and polynomially with the number of relevant dimensions. In addition, extensive empirical testing indicates that fractional-norm regularisation is more suitable than L1 when the number of relevant features is very small, and works very well despite a large number of irrelevant features.
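The abstract's setting can be illustrated with a toy experiment. The sketch below fits logistic regression with an L_q (q < 1) penalty on synthetic data where only a few of many features are relevant. The paper itself develops a variational estimation method; here we use the simpler and well-known smoothing (w^2 + eps)^(q/2) of |w|^q with plain gradient descent, purely as an illustrative stand-in. All names and parameter values are our own, not from the paper.

```python
import numpy as np

def fractional_norm_logreg(X, y, q=0.5, lam=0.05, eps=1e-6, lr=0.1, iters=2000):
    """Toy logistic regression with a smoothed L_q (q < 1) penalty.

    The penalty sum_j |w_j|^q is non-convex and non-differentiable at 0,
    so we use the common smooth surrogate (w_j^2 + eps)^(q/2) and run
    plain gradient descent. Illustrative only -- not the paper's
    variational method.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))                   # sigmoid predictions
        grad_loss = X.T @ (p - y) / n                      # logistic loss gradient
        grad_pen = lam * q * w * (w**2 + eps) ** (q / 2 - 1)  # smoothed penalty gradient
        w -= lr * (grad_loss + grad_pen)
    return w

# Synthetic data: 2 relevant features out of 50 (exponentially many
# irrelevant features in the paper's regime; here just "many").
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
true_w = np.zeros(50)
true_w[:2] = 3.0
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

w_q = fractional_norm_logreg(X, y, q=0.5)
```

With a sparse ground truth, the fractional penalty tends to keep the two relevant weights large while driving the irrelevant ones towards zero more aggressively than L1 would, which is the regime the abstract's empirical claim concerns.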

