International Journal of Wavelets, Multiresolution and Information Processing

GENERALIZATION BOUNDS OF REGULARIZATION ALGORITHMS DERIVED SIMULTANEOUSLY THROUGH HYPOTHESIS SPACE COMPLEXITY, ALGORITHMIC STABILITY AND DATA QUALITY


Abstract

A central issue in machine learning research is to analyze the generalization performance of a learning machine. Most classical results on the generalization performance of regularization algorithms are derived solely from the complexity of the hypothesis space or the stability of the learning algorithm. In practical applications, however, the performance of a learning algorithm is affected not by a single factor but by several factors at once, such as the complexity of the hypothesis space, the stability of the algorithm, and the quality of the data. In this paper, we therefore develop a framework for evaluating the generalization performance of regularization algorithms jointly in terms of hypothesis space complexity, algorithmic stability, and data quality. We establish new bounds on the learning rate of regularization algorithms, based on uniform stability and the empirical covering number, for a general class of loss functions. As applications of these generic results, we evaluate the learning rates of support vector machines and regularization networks, and propose a new strategy for setting the regularization parameter.
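The idea of choosing a regularization parameter from a stability-informed bound, as the abstract proposes, can be illustrated with a small sketch for a regularization network (ridge regression). The surrogate penalty form `C / (lam * n)` and the constant `C` below are illustrative assumptions reflecting the typical `O(1/(λn))` shape of uniform-stability bounds, not the paper's actual bound.

```python
import numpy as np

def select_lambda_by_stability_bound(X, y, lambdas, C=1.0):
    """Pick a ridge regularization parameter by minimizing a surrogate
    generalization bound: empirical risk + C / (lambda * n).

    The additive term C / (lambda * n) mimics the shape of uniform-stability
    bounds for regularized least squares; C is a hypothetical constant chosen
    for illustration, not a quantity derived in the paper.
    """
    n, d = X.shape
    best_score, best_lam, best_w = np.inf, None, None
    for lam in lambdas:
        # Ridge solution: (X^T X + lam * n * I) w = X^T y
        w = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)
        emp_risk = np.mean((X @ w - y) ** 2)
        score = emp_risk + C / (lam * n)  # surrogate bound
        if score < best_score:
            best_score, best_lam, best_w = score, lam, w
    return best_lam, best_w
```

In contrast to plain cross-validation, a bound-driven rule like this needs no held-out data: the stability term penalizes small `lambda` directly, trading empirical fit against the algorithm's sensitivity to single-sample perturbations.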
