Pattern Recognition Letters

Coupled HMM-based multi-sensor data fusion for sign language recognition



Abstract

Recent development of low-cost depth sensors such as the Leap Motion controller and the Microsoft Kinect sensor has opened up new opportunities for Human-Computer Interaction (HCI). In this paper, we propose a novel multi-sensor fusion framework for Sign Language Recognition (SLR) using a Coupled Hidden Markov Model (CHMM). The CHMM provides interaction in the state space, rather than between observation states as in a classical HMM, which fails to model the correlation between inter-modal dependencies. The framework has been used to recognize dynamic isolated sign gestures performed by hearing-impaired persons. The dataset has also been tested using existing data fusion approaches. The best recognition accuracy, as high as 90.80%, has been achieved with the CHMM. Our CHMM-based approach thus shows improved recognition performance over popular existing data fusion techniques. (C) 2016 Elsevier B.V. All rights reserved.
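The abstract's key point is that in a CHMM the two sensor chains interact in the state space: each chain's next state is conditioned on the previous states of *both* chains, which is what lets it model inter-modal correlation that two independent HMMs cannot. The paper's actual models and Leap Motion / Kinect feature streams are not given here, so the following is only a minimal toy sketch of a two-chain coupled-HMM forward pass with hypothetical discrete emissions; all parameter names are illustrative.

```python
# Minimal sketch of a two-chain coupled-HMM (CHMM) forward pass.
# Toy discrete model, NOT the paper's implementation: chain "a" and
# chain "b" stand in for two sensor streams. The coupling is in the
# transition tensors A and B, where each chain's next state depends
# on the previous states of BOTH chains.
import itertools

def chmm_forward(pi_a, pi_b, A, B, em_a, em_b, obs_a, obs_b):
    """Joint likelihood of two coupled observation streams.

    pi_a, pi_b : initial state distributions of chains a and b
    A[p][q][i] : P(a_t = i | a_{t-1} = p, b_{t-1} = q)   <- coupling
    B[p][q][j] : P(b_t = j | a_{t-1} = p, b_{t-1} = q)   <- coupling
    em_a[i][o] : P(chain-a observation o | a in state i), likewise em_b
    obs_a, obs_b : equal-length discrete observation sequences
    """
    Na, Nb = len(pi_a), len(pi_b)
    # Forward variable over the joint state (i, j) at t = 0.
    alpha = {(i, j): pi_a[i] * pi_b[j]
                     * em_a[i][obs_a[0]] * em_b[j][obs_b[0]]
             for i in range(Na) for j in range(Nb)}
    for t in range(1, len(obs_a)):
        new = {}
        for i, j in itertools.product(range(Na), range(Nb)):
            # Sum over all previous joint states (p, q): both chains'
            # transitions read the full previous joint state.
            s = sum(alpha[(p, q)] * A[p][q][i] * B[p][q][j]
                    for p in range(Na) for q in range(Nb))
            new[(i, j)] = s * em_a[i][obs_a[t]] * em_b[j][obs_b[t]]
        alpha = new
    return sum(alpha.values())
```

In an isolated-sign recognizer of the kind the abstract describes, one such CHMM would be trained per sign, and a test gesture would be assigned to the sign whose model gives the highest joint likelihood of the two sensor streams.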


