Image and Vision Computing

Position-guided transformer for image captioning


Abstract

Transformer-based frameworks have shown superiority in image captioning. However, such frameworks struggle to model geometric interrelations among the visual contents of an image, and they cannot prevent shifts in the distribution of each layer's input under self-attention. In this work, we first propose a Bi-Positional Attention (BPA) module, which incorporates absolute and relative position encoding to precisely capture the internal relations between objects and their geometric information in an image. Additionally, we apply a Group Normalization (GN) method inside BPA to mitigate distribution shift and to better exploit the channel dependence of visual features. To validate these proposals, we integrate BPA and GN into the original Transformer to form our Position-Guided Transformer (PGT) network, which learns more comprehensive positional representations that augment spatial interactions among objects for image captioning. We conduct extensive experiments to verify the effectiveness of our model. Compared with non-pretraining state-of-the-art methods, experimental results on the MSCOCO benchmark dataset demonstrate that PGT achieves competitive performance, reaching a 134.2% CIDEr score on the Karpathy split with a single model and a 136.2% CIDEr score on the official testing server with an ensemble configuration. (c) 2022 Elsevier B.V. All rights reserved.
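The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of one plausible reading of the BPA idea: absolute position embeddings are added to the queries and keys, a learned bias computed from pairwise box geometry (a common geometry-aware attention construction, assumed here rather than taken from the paper) is added to the attention logits, and GroupNorm is applied to the attended features. The class name `BiPositionalAttention`, the log-ratio geometry features, and all hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class BiPositionalAttention(nn.Module):
    """Single-head attention sketch combining absolute position encoding,
    a relative-geometry logit bias, and GroupNorm on the output features.
    An interpretation of the abstract, not the authors' implementation."""

    def __init__(self, d_model: int = 512, num_groups: int = 8):
        super().__init__()
        self.d_model = d_model
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Maps pairwise box geometry (dx, dy, dw, dh) to a scalar logit bias.
        self.rel_bias = nn.Sequential(
            nn.Linear(4, d_model), nn.ReLU(), nn.Linear(d_model, 1)
        )
        # GroupNorm over channels of each region (d_model divisible by groups).
        self.gn = nn.GroupNorm(num_groups, d_model)

    @staticmethod
    def relative_geometry(boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (B, N, 4) as (cx, cy, w, h) with w, h > 0 -> (B, N, N, 4)
        cx, cy, w, h = boxes.unbind(-1)
        dx = torch.log(torch.abs(cx.unsqueeze(-1) - cx.unsqueeze(-2))
                       / w.unsqueeze(-1) + 1e-3)
        dy = torch.log(torch.abs(cy.unsqueeze(-1) - cy.unsqueeze(-2))
                       / h.unsqueeze(-1) + 1e-3)
        dw = torch.log(w.unsqueeze(-1) / w.unsqueeze(-2))
        dh = torch.log(h.unsqueeze(-1) / h.unsqueeze(-2))
        return torch.stack([dx, dy, dw, dh], dim=-1)

    def forward(self, x, boxes, abs_pos):
        # x: (B, N, d) region features; boxes: (B, N, 4); abs_pos: (B, N, d)
        q = self.q_proj(x + abs_pos)  # absolute positions injected into Q/K
        k = self.k_proj(x + abs_pos)
        v = self.v_proj(x)
        logits = q @ k.transpose(-2, -1) / self.d_model ** 0.5  # (B, N, N)
        # Relative geometric bias added to the content-based logits.
        logits = logits + self.rel_bias(self.relative_geometry(boxes)).squeeze(-1)
        out = torch.softmax(logits, dim=-1) @ v  # (B, N, d)
        # GroupNorm expects (B, C, *): normalize channels per region.
        return self.gn(out.transpose(1, 2)).transpose(1, 2)


if __name__ == "__main__":
    attn = BiPositionalAttention(d_model=512)
    x = torch.randn(2, 36, 512)          # e.g. 36 detected region features
    boxes = torch.rand(2, 36, 4) + 0.1   # (cx, cy, w, h), strictly positive
    pos = torch.randn(2, 36, 512)        # absolute position embeddings
    print(attn(x, boxes, pos).shape)     # torch.Size([2, 36, 512])
```

In this reading, the absolute term lets attention see where each region sits in the image, while the relative bias makes the logits depend directly on pairwise geometry; normalizing the attended features with GroupNorm rather than LayerNorm is one way to act on the channel dependence the abstract mentions.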
