Multimedia Tools and Applications

Combining skeleton and accelerometer data for human fine-grained activity recognition and abnormal behaviour detection with deep temporal convolutional networks



Abstract

A single sensing modality has been widely adopted for human activity recognition (HAR) for decades and has made significant strides. However, it often suffers from challenges such as noise, obstacles, or dropped signals, which can degrade recognition performance. In this paper, we propose a multi-sensing-modality framework for human fine-grained activity recognition and abnormal behaviour detection that combines skeleton and acceleration data at the feature level (so-called feature-level fusion). First, deep temporal convolutional networks (TCNs), built from dilated causal convolution components, are used for feature learning and for handling temporal properties. The feature map learnt by the convolutional layers in the TCN is fed into two fully connected layers for prediction. Second, we conduct an empirical experiment to verify the proposed method. Experimental results show that the proposed method achieves an 83% F1-score and surpasses several single-modality models as well as early- and late-fusion methods on the Continuous Multimodal Multi-view Dataset of Human Fall (CMDFALL), which comprises 20 fine-grained normal and abnormal activities collected from 50 subjects. Moreover, the proposed architecture achieves 96.98% accuracy on the UTD-MHAD dataset, which has 8 subjects and 27 activities. These results indicate the effectiveness of the proposed method for classifying human fine-grained normal and abnormal activities, as well as its potential for HAR-based situated service applications.
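The core building block the abstract describes — a dilated causal 1-D convolution applied per modality, followed by feature-level fusion (concatenation) before fully connected classification layers — can be sketched as below. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the kernel weights, dilation factor, toy input signals, and single-channel setup are all hypothetical stand-ins for the paper's learned TCN layers.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Dilated causal 1-D convolution: y[t] depends only on x[t], x[t-d], x[t-2d], ...
    The input is left-padded with zeros so the output keeps the input length."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
                     for t in range(len(x))])

# Hypothetical single-channel stand-ins for skeleton / accelerometer feature streams.
skeleton = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
accel = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
w = np.array([1.0, 0.5])  # illustrative kernel weights (learned in the real TCN)

# One dilated causal layer per modality, then feature-level fusion by concatenation;
# in the paper, the fused representation feeds two fully connected layers for prediction.
skel_feat = dilated_causal_conv1d(skeleton, w, dilation=2)
acc_feat = dilated_causal_conv1d(accel, w, dilation=2)
fused = np.concatenate([skel_feat, acc_feat])
```

Causality is the key property: because padding is applied only on the left, the output at time t never uses samples after t, so stacking such layers with growing dilation enlarges the temporal receptive field without leaking future information.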


