IEEE International Conference on Automatic Face Gesture Recognition

Toward Marker-Free 3D Pose Estimation in Lifting: A Deep Multi-View Solution


Abstract

Lifting is a common manual material handling task performed in the workplace and is considered one of the main risk factors for work-related musculoskeletal disorders. To improve workplace safety, it is necessary to assess the musculoskeletal and biomechanical risk exposures associated with these tasks, which requires very accurate 3D pose estimation. Existing approaches mainly rely on marker-based sensors to collect 3D information. However, these methods are usually expensive to set up, time-consuming to operate, and sensitive to the surrounding environment. In this study, we propose a multi-view deep perceptron approach to address the aforementioned limitations. Our approach consists of two modules: a "view-specific perceptron" network independently extracts rich information from the image of each view, including both 2D shape and hierarchical texture information, while a "multi-view integration" network synthesizes the information from all available views to predict an accurate 3D pose. To fully evaluate our approach, we carried out comprehensive experiments comparing different variants of our design. The results show that our approach achieves performance comparable to that of earlier marker-based methods, i.e., an average error of 14.72 ± 2.96 mm on the lifting dataset. The results are also compared with state-of-the-art methods on the HumanEva-I dataset [1], demonstrating the superior performance of our approach.
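The two-module design described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the joint count, feature dimension, number of views, and the single-layer stand-ins for both networks are all illustrative assumptions; the paper's actual networks are deep models trained on image data.

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS = 17   # hypothetical joint count
FEAT_DIM = 64   # hypothetical per-view feature size
N_VIEWS = 3     # hypothetical number of camera views

def view_specific_perceptron(image, W):
    """Stand-in for the per-view module: flattens one view's image
    and applies a single linear layer with a ReLU."""
    h = W @ image.ravel()
    return np.maximum(h, 0.0)

def multi_view_integration(features, W_out):
    """Stand-in for the integration module: concatenates the
    per-view features and regresses 3D joint coordinates."""
    fused = np.concatenate(features)
    return (W_out @ fused).reshape(N_JOINTS, 3)

# Toy random weights and inputs, used here only to check shapes.
images = [rng.normal(size=(8, 8)) for _ in range(N_VIEWS)]
W_view = 0.1 * rng.normal(size=(FEAT_DIM, 64))
W_out = 0.1 * rng.normal(size=(N_JOINTS * 3, FEAT_DIM * N_VIEWS))

feats = [view_specific_perceptron(img, W_view) for img in images]
pose3d = multi_view_integration(feats, W_out)
print(pose3d.shape)  # one 3D coordinate per joint: (17, 3)
```

The key design point the sketch captures is the separation of concerns: features are extracted independently per view, and only the integration step reasons jointly across views to resolve depth.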
