Multi-View Cross-Lingual Structured Prediction with Minimum Supervision

Abstract

In structured prediction problems, cross-lingual transfer learning is an efficient way to train quality models for low-resource languages, and further improvement can be obtained by learning from multiple source languages. However, not all source models are created equal, and some may hurt performance on the target language. Previous work has explored the similarity between source and target sentences as an approximate measure of strength for different source models. In this paper, we propose a multi-view framework that leverages a small number of labeled target sentences to effectively combine multiple source models into an aggregated source view at different granularity levels (language, sentence, or sub-structure), and transfers it to a target view based on a task-specific model. By encouraging the two views to interact with each other, our framework can dynamically adjust the confidence level of each source model and improve the performance of both views during training. Experiments for three structured prediction tasks on sixteen data sets show that our framework achieves significant improvement over all existing approaches, including those with access to additional source-language data.
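The core mechanism the abstract describes, weighting several source models by per-source confidence and updating those confidences from a small labeled target set, can be illustrated with a minimal sketch. This is not the paper's actual method; all function names (`aggregate_source_view`, `update_confidences`), the update rule, and the toy numbers are illustrative assumptions.

```python
# Hypothetical sketch: combine per-source label distributions into an
# aggregated "source view" using per-source confidence weights, then
# nudge each confidence toward the probability that source assigned to
# a gold label from a small labeled target set.

def aggregate_source_view(source_probs, confidences):
    """Confidence-weighted mixture of per-source label distributions."""
    total = sum(confidences)
    weights = [c / total for c in confidences]  # normalize to sum to 1
    n_labels = len(source_probs[0])
    return [sum(w * p[k] for w, p in zip(weights, source_probs))
            for k in range(n_labels)]

def update_confidences(confidences, source_probs, gold_label, lr=0.1):
    """Raise confidence for sources that put mass on the observed gold
    label; lower it for sources that do not."""
    return [c + lr * (p[gold_label] - c)
            for c, p in zip(confidences, source_probs)]

# Two source models predict a distribution over 3 labels for one item.
probs = [[0.7, 0.2, 0.1],   # source A: favors the gold label
         [0.1, 0.1, 0.8]]   # source B: favors a different label
conf = [1.0, 1.0]           # start with uniform confidence
agg = aggregate_source_view(probs, conf)
conf = update_confidences(conf, probs, gold_label=0)
```

After one update with gold label 0, source A's confidence exceeds source B's, so subsequent aggregations lean toward A; in the paper this adjustment happens dynamically during training and at multiple granularity levels.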

