Journal: Complexity

Research on Discriminative Skeleton-Based Action Recognition in Spatiotemporal Fusion and Human-Robot Interaction

Abstract

A novel posture motion-based spatiotemporal fused graph convolutional network (PM-STGCN) is presented for skeleton-based action recognition. Existing methods for skeleton-based action recognition independently compute the joint information within a single frame and the motion information of joints between adjacent frames from the human body skeleton structure, and then combine the classification results. However, this ignores the complicated temporal and spatial relationships within a human action sequence, so such methods are not very effective at distinguishing similar actions. In this work, we enhance the ability to distinguish similar actions by focusing on spatiotemporal fusion and adaptive extraction of highly discriminative features. First, a local posture motion-based attention module (LPM-TAM) is proposed to suppress skeleton-sequence data with a low amount of motion in the temporal domain, concentrating the representation on motion-posture features. In addition, a local posture motion-based channel attention module (LPM-CAM) is introduced to exploit strongly discriminative representations between similar action classes. Finally, a posture motion-based spatiotemporal fusion module (PM-STF) is constructed, which fuses the spatiotemporal skeleton data by filtering out low-information sequences and adaptively enhances highly discriminative posture-motion features. Extensive experiments demonstrate that the proposed model is superior to commonly used action recognition methods. The designed human-robot interaction system based on action recognition achieves performance competitive with a speech interaction system.
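The abstract does not specify how the LPM-TAM module suppresses low-motion frames. As a rough illustration of the general idea only (motion-magnitude-driven temporal attention over a skeleton sequence), the following NumPy sketch is offered; the function name, the mean-joint-speed statistic, and the softmax weighting are all assumptions, not the paper's actual formulation.

```python
import numpy as np

def temporal_motion_attention(seq: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Reweight a skeleton sequence by per-frame motion magnitude.

    seq: (T, J, C) array of T frames, J joints, C coordinates.
    Frames with little inter-frame joint motion receive low attention,
    loosely mirroring the idea of suppressing low-motion sequence data.
    """
    # Inter-frame joint displacement; prepend the first frame so frame 0
    # gets zero motion and the output keeps length T.
    disp = np.diff(seq, axis=0, prepend=seq[:1])          # (T, J, C)
    motion = np.linalg.norm(disp, axis=-1).mean(axis=-1)  # (T,) mean joint speed
    # Softmax over time turns motion magnitudes into attention weights.
    w = np.exp(motion / tau)
    w = w / w.sum()
    # Scale each frame by its weight (broadcast over joints and coordinates).
    return seq * w[:, None, None]
```

With a static (constant) sequence every frame has zero motion, so the weights are uniform and each frame is simply scaled by 1/T; moving frames in a mixed sequence receive proportionally larger weights.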
