Boosting for Unsupervised Domain Adaptation

Abstract

To cope with machine learning problems in which the learner receives data from different source and target distributions, a new learning framework named domain adaptation (DA) has emerged, opening the door to designing theoretically well-founded algorithms. In this paper, we present SLDAB, a self-labeling DA algorithm rooted in both the theory of boosting and the theory of DA. SLDAB works in the difficult unsupervised DA setting, where source and target training data are available but only the former are labeled. To deal with the absence of labeled target information, SLDAB jointly minimizes the classification error over the source domain and the proportion of margin violations over the target domain. To prevent the algorithm from inducing degenerate models, we introduce a divergence measure that penalizes hypotheses unable to decrease the discrepancy between the two domains. We present a theoretical analysis of our algorithm and show practical evidence of its efficiency compared to two widely used DA approaches.
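
The abstract only sketches the per-round objective at a high level. The following is a minimal, hypothetical illustration of a self-labeling boosting loop consistent with that description, not the authors' SLDAB: each round combines a weighted source classification error, a target margin-violation rate computed from pseudo-labels, and a simple divergence penalty. All names and choices here (weak learner, the margin threshold gamma, the divergence proxy, the equal weighting of the three terms) are assumptions made for illustration only.

```python
# Minimal, hypothetical sketch of a self-labeling boosting loop for
# unsupervised DA, loosely following the abstract. NOT the authors' SLDAB.
import numpy as np
from sklearn.tree import DecisionTreeClassifier


def domain_divergence(h, X_src, X_tgt):
    """Proxy divergence in [0, 1]: gap between the proportions of positive
    predictions on source vs. target. 0 means h behaves similarly on both
    domains. This proxy is an assumption, not the paper's measure."""
    p_src = (h.predict(X_src) == 1).mean()
    p_tgt = (h.predict(X_tgt) == 1).mean()
    return abs(p_src - p_tgt)


def boost_da(X_src, y_src, X_tgt, n_rounds=20, gamma=0.1):
    """y_src in {-1, +1}; X_tgt is unlabeled. Returns (hypotheses, alphas)."""
    w_src = np.full(len(X_src), 1.0 / len(X_src))   # source distribution
    w_tgt = np.full(len(X_tgt), 1.0 / len(X_tgt))   # target distribution
    hypotheses, alphas = [], []
    for _ in range(n_rounds):
        h = DecisionTreeClassifier(max_depth=1)
        h.fit(X_src, y_src, sample_weight=w_src)    # weak learner on source
        pred_src = h.predict(X_src)
        eps_src = np.sum(w_src * (pred_src != y_src))   # weighted source error
        # Self-labeling: treat current predictions as pseudo-labels and count
        # target points whose (pseudo) margin falls below gamma.
        margin_tgt = np.abs(h.predict_proba(X_tgt)[:, 1] - 0.5) * 2
        eps_tgt = np.sum(w_tgt * (margin_tgt < gamma))  # margin violations
        div = domain_divergence(h, X_src, X_tgt)        # divergence penalty
        # Combine the three terms into one pseudo-loss (equal weights are an
        # arbitrary illustrative choice), then derive an AdaBoost-style weight.
        loss = np.clip((eps_src + eps_tgt + div) / 3.0, 1e-10, 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - loss) / loss)
        # Reweight: emphasize misclassified source points and low-margin targets.
        w_src *= np.exp(alpha * (pred_src != y_src))
        w_tgt *= np.exp(alpha * (margin_tgt < gamma))
        w_src /= w_src.sum()
        w_tgt /= w_tgt.sum()
        hypotheses.append(h)
        alphas.append(alpha)
    return hypotheses, alphas
```

A final prediction would then be a weighted vote, sign of the alpha-weighted sum of the weak hypotheses' outputs; the divergence proxy and the equal weighting of the loss terms are placeholders standing in for the quantities the paper defines formally.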