
PAC Analogues of Perceptron and Winnow via Boosting the Margin

Abstract

We describe a novel family of PAC model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit complexity bounds remarkably similar to those of known online algorithms such as Perceptron and Winnow, thus suggesting that these well-studied online algorithms in some sense correspond to instances of boosting. We show that the new algorithms can be viewed as natural PAC analogues of the online p-norm algorithms which have recently been studied by Grove, Littlestone, and Schuurmans and by Gentile and Littlestone. As special cases of the algorithm, by taking p = 2 and p = ∞ we obtain natural boosting-based PAC analogues of Perceptron and Winnow respectively. The p = ∞ case of our algorithm can also be viewed as a generalization (with an improved sample complexity bound) of Jackson and Craven's PAC-model boosting-based algorithm for learning "sparse perceptrons". The analysis of the generalization error of the new algorithms relies on techniques from the theory of large margin classification.
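The abstract only outlines the approach, so the following is a minimal illustrative sketch of the general recipe it describes: boosting a very simple weak learner into a linear threshold function. This is not the paper's algorithm and does not implement its p-norm weighting or margin analysis; the single-signed-coordinate weak learner, the AdaBoost-style reweighting, the assumption of {-1, +1}-valued features and labels, and the names weak_learner and boost_linear_threshold are all choices made here for illustration.

import numpy as np

def weak_learner(X, y, dist):
    # Hypothetical weak learner for illustration: h(x) = s * x[i] for a single
    # coordinate i and sign s in {-1, +1}, chosen to maximize the edge under dist.
    corr = X.T @ (dist * y)              # per-coordinate correlation with the labels
    i = int(np.argmax(np.abs(corr)))
    s = 1.0 if corr[i] >= 0 else -1.0
    return i, s, float(s * corr[i])      # coordinate, sign, edge = sum_j dist_j * y_j * h(x_j)

def boost_linear_threshold(X, y, rounds=50):
    # AdaBoost-style loop; because each weak hypothesis is a single signed
    # coordinate, the final hypothesis sign(X @ w) is a linear threshold function.
    n, d = X.shape
    dist = np.full(n, 1.0 / n)
    w = np.zeros(d)
    for _ in range(rounds):
        i, s, edge = weak_learner(X, y, dist)
        edge = min(max(edge, -0.999), 0.999)        # keep the weight update finite
        alpha = 0.5 * np.log((1.0 + edge) / (1.0 - edge))
        w[i] += alpha * s
        dist *= np.exp(-alpha * y * s * X[:, i])    # upweight examples this round got wrong
        dist /= dist.sum()
    return w

# Toy usage: labels given by a sparse majority-style threshold function.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(200, 20))
target = np.zeros(20); target[:3] = 1.0
y = np.sign(X @ target + 0.5)
w = boost_linear_threshold(X, y)
print(np.mean(np.sign(X @ w) == y))                # training accuracy of sign(w . x)

Because every weak hypothesis here is a single signed coordinate, the boosted combination sign(w · x) is itself a linear threshold function, which is the structural point the abstract makes; the paper's actual algorithms, their p = 2 and p = ∞ instantiations, and their sample complexity bounds differ from this sketch.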

