SIGKDD Explorations
R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering

Abstract

Recently, Visual Question Answering (VQA) has emerged as one of the most significant tasks in multimodal learning, as it requires understanding both visual and textual modalities. Existing methods mainly rely on extracting image and question features and learning a joint feature embedding via multimodal fusion or attention mechanisms. Some recent studies utilize external, VQA-independent models to detect candidate entities or attributes in images, which serve as semantic knowledge complementary to the VQA task. However, these candidate entities or attributes may be unrelated to the VQA task and have limited semantic capacity. To better utilize semantic knowledge in images, we propose a novel framework to learn visual relation facts for VQA. Specifically, we build a Relation-VQA (R-VQA) dataset on top of the Visual Genome dataset via a semantic similarity module, in which each instance consists of an image, a corresponding question, a correct answer, and a supporting relation fact. A relation detector is then trained to predict question-related visual relation facts. We further propose a multi-step attention model that applies visual attention and semantic attention sequentially to extract related visual knowledge and semantic knowledge. Comprehensive experiments on two benchmark datasets demonstrate that our model achieves state-of-the-art performance and verify the benefit of considering visual relation facts.
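
To make the dataset-construction step concrete: the semantic similarity module matches each question-answer pair against the (subject, relation, object) fact triples annotated for the same image in Visual Genome and keeps the best-matching fact as the supporting one. The abstract does not specify the similarity measure, so the sketch below substitutes a bag-of-words cosine similarity for whatever learned embedding the paper uses; the function names and the threshold value are hypothetical.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector for a short string (toy stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_supporting_fact(question, answer, candidate_facts, threshold=0.3):
    """Pick the relation fact most similar to the QA pair.

    candidate_facts: iterable of (subject, relation, object) string triples
    annotated for the same image. Returns (fact, score), or None when no
    fact clears the threshold, in which case the QA pair would be dropped
    from the constructed dataset.
    """
    qa_vec = bow(question + " " + answer)
    scored = [(fact, cosine(qa_vec, bow(" ".join(fact)))) for fact in candidate_facts]
    if not scored:
        return None
    fact, score = max(scored, key=lambda x: x[1])
    return (fact, score) if score >= threshold else None

# Toy usage with hypothetical annotations:
facts = [("man", "riding", "horse"), ("sky", "above", "field")]
print(select_supporting_fact("What is the man riding?", "horse", facts))
```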
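
The multi-step attention model first lets the question attend over image region features (visual attention), then lets the question plus the attended visual context attend over relation-fact embeddings (semantic attention). Here is a minimal PyTorch sketch of that sequential scheme under stated assumptions: the additive scoring function, layer sizes, and final concatenation are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStepAttention(nn.Module):
    """Visual attention followed by semantic attention (sketch, not the authors' exact model)."""

    def __init__(self, q_dim, v_dim, f_dim, hid):
        super().__init__()
        # Step 1: the question attends over image region features.
        self.vis_q = nn.Linear(q_dim, hid)
        self.vis_v = nn.Linear(v_dim, hid)
        self.vis_score = nn.Linear(hid, 1)
        # Step 2: question + attended visual context attends over relation-fact embeddings.
        self.sem_q = nn.Linear(q_dim + v_dim, hid)
        self.sem_f = nn.Linear(f_dim, hid)
        self.sem_score = nn.Linear(hid, 1)

    def forward(self, q, v, facts):
        # q: (B, q_dim) question embedding
        # v: (B, R, v_dim) region features; facts: (B, K, f_dim) relation-fact embeddings
        a = self.vis_score(torch.tanh(self.vis_q(q).unsqueeze(1) + self.vis_v(v))).squeeze(-1)
        v_att = (F.softmax(a, dim=1).unsqueeze(-1) * v).sum(1)      # attended visual knowledge
        ctx = torch.cat([q, v_att], dim=-1)
        b = self.sem_score(torch.tanh(self.sem_q(ctx).unsqueeze(1) + self.sem_f(facts))).squeeze(-1)
        f_att = (F.softmax(b, dim=1).unsqueeze(-1) * facts).sum(1)  # attended semantic knowledge
        return torch.cat([ctx, f_att], dim=-1)  # joint embedding fed to an answer classifier

# Toy usage with random tensors:
model = MultiStepAttention(q_dim=512, v_dim=2048, f_dim=300, hid=256)
out = model(torch.randn(2, 512), torch.randn(2, 36, 2048), torch.randn(2, 10, 300))
print(out.shape)  # torch.Size([2, 2860]) -> 512 + 2048 + 300
```

Running the semantic step after the visual step lets the fact attention condition on what the model has already located in the image, which is the intuition behind the sequential design described in the abstract.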
