Annual Conference on Neural Information Processing Systems

Active Learning for Probabilistic Hypotheses Using the Maximum Gibbs Error Criterion


Abstract

We introduce a new objective function for pool-based Bayesian active learning with probabilistic hypotheses. This objective function, called the policy Gibbs error, is the expected error rate of a random classifier drawn from the prior distribution on the examples adaptively selected by the active learning policy. Exact maximization of the policy Gibbs error is hard, so we propose a greedy strategy that maximizes the Gibbs error at each iteration, where the Gibbs error on an instance is the expected error of a random classifier selected from the posterior label distribution on that instance. We apply this maximum Gibbs error criterion to three active learning scenarios: non-adaptive, adaptive, and batch active learning. In each scenario, we prove that the criterion achieves near-maximal policy Gibbs error when constrained to a fixed budget. For practical implementations, we provide approximations to the maximum Gibbs error criterion for Bayesian conditional random fields and transductive Naive Bayes. Our experimental results on a named entity recognition task and a text classification task show that the maximum Gibbs error criterion is an effective active learning criterion for noisy models.
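The greedy criterion described above admits a compact illustration. The following is a minimal sketch, not the paper's implementation: it assumes a finite hypothesis set with an exactly maintained Bayesian posterior, a discrete label space, and a noiseless labeling oracle, and all names (`max_gec_select`, `bayes_update`, etc.) are hypothetical. At each step it computes the posterior label distribution p(y|x) on each pool instance, scores it by the Gibbs error 1 - Σ_y p(y|x)², and queries the maximizer.

```python
import numpy as np

def gibbs_error(label_dist):
    # Gibbs error of a posterior label distribution p(y | x):
    # the expected error of a classifier that predicts a label
    # drawn from p(y | x) itself, i.e. 1 - sum_y p(y | x)^2.
    return 1.0 - np.sum(label_dist ** 2)

def max_gec_select(posterior, predictions, pool_idx):
    """Pick the pool instance with maximal Gibbs error.

    posterior   -- shape (H,), weights over a finite hypothesis set
    predictions -- shape (H, N), predictions[h, i] = label that
                   hypothesis h assigns to instance x_i
    pool_idx    -- indices of still-unlabeled instances
    """
    n_labels = int(predictions.max()) + 1
    best_i, best_err = None, -1.0
    for i in pool_idx:
        # Posterior label distribution on x_i, marginalizing over hypotheses.
        label_dist = np.bincount(predictions[:, i], weights=posterior,
                                 minlength=n_labels)
        err = gibbs_error(label_dist)
        if err > best_err:
            best_i, best_err = i, err
    return best_i

def bayes_update(posterior, predictions, i, observed_label):
    # Noiseless-oracle assumption for this sketch: zero out hypotheses
    # inconsistent with the observed label and renormalize.
    consistent = (predictions[:, i] == observed_label).astype(float)
    new_post = posterior * consistent
    return new_post / new_post.sum()

# Toy usage: 3 hypotheses over 4 instances, binary labels.
posterior = np.full(3, 1.0 / 3.0)
predictions = np.array([[0, 0, 1, 1],
                        [0, 1, 1, 0],
                        [1, 1, 0, 0]])
i = max_gec_select(posterior, predictions, pool_idx=[0, 1, 2, 3])
posterior = bayes_update(posterior, predictions, i, observed_label=1)
```

Under the zero-one likelihood used here the Bayesian update simply discards inconsistent hypotheses; the approximations the paper develops for Bayesian conditional random fields and transductive Naive Bayes stand in for this exact posterior when maintaining it is intractable.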
