Journal: Machine Learning

On Discriminative Bayesian Network Classifiers and Logistic Regression



Abstract

Discriminative learning of the parameters in the naive Bayes model is known to be equivalent to a logistic regression problem. Here we show that the same fact holds for much more general Bayesian network models, as long as the corresponding network structure satisfies a certain graph-theoretic property. The property holds for naive Bayes but also for more complex structures such as tree-augmented naive Bayes (TAN) as well as for mixed diagnostic-discriminative structures. Our results imply that for networks satisfying our property, the conditional likelihood cannot have local maxima so that the global maximum can be found by simple local optimization methods. We also show that if this property does not hold, then in general the conditional likelihood can have local, non-global maxima. We illustrate our theoretical results by empirical experiments with local optimization in a conditional naive Bayes model. Furthermore, we provide a heuristic strategy for pruning the number of parameters and relevant features in such models. For many data sets, we obtain good results with heavily pruned submodels containing many fewer parameters than the original naive Bayes model.
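The equivalence stated in the abstract can be made concrete with a small sketch. For a naive Bayes model over binary features, maximizing the conditional likelihood P(y | x) amounts to fitting a logistic regression on the same features; because the objective is concave for such structures, plain gradient ascent reaches the global maximum. The following is an illustrative sketch (not the paper's code); the function name, learning rate, and toy data are assumptions for demonstration only.

```python
import numpy as np

def fit_conditional_nb(X, y, lr=0.1, steps=2000):
    """Gradient ascent on the average conditional log-likelihood of P(y | x).

    For naive-Bayes-style structures this objective is concave, so a simple
    local method converges to the global maximum (as the paper's result implies).
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # P(y=1 | x), sigmoid link
        grad_w = X.T @ (y - p) / n              # gradient of avg cond. log-lik
        grad_b = np.mean(y - p)
        w += lr * grad_w
        b += lr * grad_b
    return w, b

# Hypothetical toy data: feature 0 is informative about the class,
# the remaining features are noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = (rng.random((200, 3)) < 0.5).astype(float)
X[:, 0] = (rng.random(200) < 0.2 + 0.6 * y).astype(float)

w, b = fit_conditional_nb(X, y)
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = np.mean((p > 0.5) == y)
```

Because no local maxima exist here, the result does not depend on the starting point; the learned weight on the informative feature is positive, matching the log-odds ratio a generatively trained naive Bayes model would assign.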
