Conference: SIGBioMed Workshop on Biomedical Natural Language Processing; Annual Meeting of the Association for Computational Linguistics

Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment



Abstract

Parallel deep learning architectures like fine-tuned BERT and MT-DNN have quickly become the state of the art, surpassing previous deep and shallow learning methods by a large margin. More recently, models pre-trained on large related datasets have been able to perform well on many downstream tasks after only fine-tuning on domain-specific datasets (similar to transfer learning). However, using powerful models on non-trivial tasks, such as ranking and large document classification, remains a challenge due to the input-size limitations of parallel architectures and extremely small datasets (insufficient for fine-tuning). In this work, we introduce an end-to-end system, trained in a multi-task setting, to filter and re-rank answers in the medical domain. We use task-specific pre-trained models as deep feature extractors. Our model achieves the highest Spearman's Rho (0.338) and Mean Reciprocal Rank (0.9622) on the MEDIQA question-answering shared task at the ACL-BioNLP workshop.
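The authors' released code is not reproduced here; as a rough illustration of the architecture the abstract describes, the sketch below wires a shared pre-trained encoder, used as a deep feature extractor, to two task heads: one that filters answers (binary relevance) and one that scores answers for re-ranking. It assumes the HuggingFace transformers library; the class name MultiTaskAnswerModel, the bert-base-uncased checkpoint, and the head shapes are illustrative assumptions, not the authors' choices.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskAnswerModel(nn.Module):
    """Shared encoder with a filtering head and a re-ranking head (sketch)."""

    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        # Shared pre-trained encoder acting as the deep feature extractor.
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Task 1: filtering -- is this answer relevant to the question?
        self.filter_head = nn.Linear(hidden, 2)
        # Task 2: re-ranking -- scalar relevance score per (question, answer) pair.
        self.rank_head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation of the pair
        return self.filter_head(cls), self.rank_head(cls).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskAnswerModel()
enc = tokenizer("What causes migraines?",
                "Migraines can be triggered by stress.",
                return_tensors="pt", truncation=True, max_length=512)
filter_logits, rank_score = model(enc["input_ids"], enc["attention_mask"])
# Answers whose filter probability passes a threshold are kept, then sorted
# by rank_score to produce the final re-ranked answer list.
```

Note the max_length=512 truncation: this is exactly the input-size limitation the abstract points to when it calls ranking over long documents a challenge for such architectures.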
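For reference, here is a small worked example of the two reported metrics, using made-up numbers rather than shared-task data: Spearman's Rho compares the ordering of predicted answer scores against reference scores, and Mean Reciprocal Rank averages 1/rank of the first relevant answer across questions.

```python
from scipy.stats import spearmanr

# Hypothetical reference scores and system scores for one question's answers.
reference = [4, 3, 2, 1]
predicted = [3.8, 3.1, 1.2, 2.0]
rho, _ = spearmanr(reference, predicted)  # rho = 0.80 for these values

# MRR: for each question, 1 / rank of the first relevant answer, averaged.
first_relevant_ranks = [1, 1, 2, 1]  # ranks from four hypothetical questions
mrr = sum(1.0 / r for r in first_relevant_ranks) / len(first_relevant_ranks)
print(f"Spearman's rho={rho:.2f}, MRR={mrr:.4f}")  # rho=0.80, MRR=0.8750
```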
