Learning Mixtures of MLNs

Abstract

Weight learning is a challenging problem in Markov Logic Networks (MLNs) due to the large size of the ground propositional probabilistic graphical model that underlies the first-order representation of MLNs. Although more sophisticated weight learning methods that use lifted inference have been proposed, such methods typically scale only in the absence of evidence, namely in generative weight learning. In discriminative learning, where the evidence typically destroys symmetries, existing approaches lack scalability. In this paper, we propose a novel, intuitive approach for learning MLNs discriminatively by utilizing approximate symmetries. Specifically, we reduce the size of the training database by clustering approximately symmetric atoms together and selecting a representative atom from each cluster. However, each choice of representatives from the clusters induces a different distribution, increasing the uncertainty in our learned model. To reduce this uncertainty, we learn a finite mixture model by stacking the different distributions, with the parameters of the model learned using an EM approach. Our results on several benchmarks show that our approach is significantly more scalable and accurate than existing state-of-the-art MLN learning methods.
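The abstract describes a two-step procedure: first compress the evidence database by clustering approximately symmetric atoms and keeping one representative per cluster, then learn mixture weights over the distributions induced by the different representative choices using EM. The sketch below is a minimal illustration of that pipeline under simplifying assumptions, not the paper's actual algorithm: the `signature` function standing in for approximate-symmetry detection is hypothetical, and the EM step assumes the per-component log-likelihoods of the training data (one MLN per representative choice) have already been computed by some external weight learner.

```python
import numpy as np
from collections import defaultdict

# Step 1 (sketch): shrink the evidence database by keeping one atom
# per cluster of approximately symmetric atoms. The `signature`
# function is a hypothetical stand-in for whatever approximate-symmetry
# criterion groups the atoms.
def select_representatives(atoms, signature):
    clusters = defaultdict(list)
    for atom in atoms:
        clusters[signature(atom)].append(atom)
    return [members[0] for members in clusters.values()]

# Step 2 (sketch): EM over the stacked component distributions.
# log_likes[i, k] = log-likelihood of training example i under the
# MLN learned from the k-th choice of cluster representatives.
def em_mixture_weights(log_likes, n_iters=200, tol=1e-8):
    n, k = log_likes.shape
    pi = np.full(k, 1.0 / k)                  # uniform initialization
    prev_ll = -np.inf
    for _ in range(n_iters):
        # E-step: responsibilities r[i, k] proportional to pi[k] * p_k(x_i),
        # computed in log space for numerical stability.
        log_joint = np.log(pi) + log_likes    # (n, k)
        m = log_joint.max(axis=1, keepdims=True)
        log_norm = m + np.log(np.exp(log_joint - m).sum(axis=1, keepdims=True))
        resp = np.exp(log_joint - log_norm)
        # M-step: closed-form update of the mixture weights.
        pi = resp.mean(axis=0)
        pi = np.clip(pi, 1e-12, None)
        pi /= pi.sum()
        ll = log_norm.sum()                   # total data log-likelihood
        if ll - prev_ll < tol:                # EM is monotone, so stop here
            break
        prev_ll = ll
    return pi

# Hypothetical usage: 3 representative choices, 5 training examples.
rng = np.random.default_rng(0)
log_likes = rng.normal(loc=-10.0, scale=2.0, size=(5, 3))
print(em_mixture_weights(log_likes))          # weights sum to 1
```

Because the M-step for the mixture weights is closed form, each EM iteration costs only O(nk); the expensive part in practice is computing the component log-likelihoods, which is where the paper's approximate-symmetry reduction of the training database would matter.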
