Conference proceedings: Machine learning for multimodal interaction

Detecting Action Items in Multi-party Meetings: Annotation and Initial Experiments



Abstract

This paper presents the results of initial investigation and experiments into automatic action item detection from transcripts of multi-party human-human meetings. We start from the flat action item annotations of [1], and show that automatic classification performance is limited. We then describe a new hierarchical annotation schema based on the roles utterances play in the action item assignment process, and propose a corresponding approach to automatic detection that promises improved classification accuracy while also enabling the extraction of useful information for summarization and reporting.
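For concreteness, the sketch below illustrates the kind of flat, per-utterance binary classification the abstract describes as the baseline over the annotations of [1]. The toy data, TF-IDF n-gram features, and linear SVM are illustrative assumptions for this sketch, not the paper's actual features, classifier, or corpus.

```python
# Hypothetical sketch of flat action-item detection: each utterance is
# classified independently as action-item-related or not.
# Data, features, and model here are assumptions, not the paper's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy meeting transcript: (utterance, is_action_item) pairs.
utterances = [
    ("so John will send out the revised agenda by Friday", 1),
    ("okay I can take care of booking the room", 1),
    ("yeah that sounds good to me", 0),
    ("let's move on to the next topic", 0),
]
texts = [u for u, _ in utterances]
labels = [y for _, y in utterances]

# TF-IDF word n-grams with a linear SVM: a common baseline for
# utterance-level classification tasks of this kind.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)

print(clf.predict(["could you write up the minutes by Monday"]))
```

The hierarchical schema the abstract proposes would instead assign utterances to distinct roles in the action item assignment process and combine those role-level decisions; the flat classifier above is only the baseline against which that approach is motivated.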
