2014 IEEE International Symposium on Innovations in Intelligent Systems and Applications

A new tool for gestural action recognition to support decisions in emotional framework


Abstract

Introduction and objective: the purpose of this work is to design and implement an innovative tool that recognizes 16 different human gestural actions and uses them to predict 7 different emotional states. The solution proposed in this paper is based on the RGB and depth information of 2D/3D images acquired from a commercial RGB-D sensor, the Kinect. Materials: the dataset is a collection of several human actions performed by different actors. Each actor performs each action three times in each video. 20 actors perform 16 different actions, both seated and upright, totalling 40 videos per actor. Methods: human gestural actions are recognized by means of features, such as angles and distances related to the joints of the human skeleton, extracted from RGB and depth images. Emotions are selected according to the state of the art. Experimental results: despite the presence of very similar actions, the overall accuracy reached is approximately 80%. Conclusions and future works: the proposed approach appears to be background- and speed-independent, and it will be used in the future as part of a multimodal emotion recognition software based on facial expressions and speech analysis as well.
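The abstract does not specify the exact feature set, only that angles and distances between skeleton joints are used. As a rough illustration, the following Python sketch shows how such per-frame features could be computed from Kinect-style 3D joint positions; the joint names and coordinate values are hypothetical, not taken from the paper.

    import numpy as np

    def joint_distance(a, b):
        """Euclidean distance between two 3D joint positions."""
        return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

    def joint_angle(a, b, c):
        """Angle in degrees at joint b, formed by the segments b->a and b->c."""
        v1 = np.asarray(a) - np.asarray(b)
        v2 = np.asarray(c) - np.asarray(b)
        cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

    # Hypothetical skeleton frame: joint name -> (x, y, z) in metres,
    # as an RGB-D sensor SDK could report per tracked body.
    frame = {
        "shoulder_right": (0.20, 0.45, 2.10),
        "elbow_right":    (0.35, 0.20, 2.05),
        "wrist_right":    (0.35, 0.50, 1.90),
        "head":           (0.00, 0.60, 2.15),
    }

    # One feature vector per frame (elbow angle, wrist-to-head distance),
    # which could then be fed to an action classifier.
    features = [
        joint_angle(frame["shoulder_right"], frame["elbow_right"], frame["wrist_right"]),
        joint_distance(frame["wrist_right"], frame["head"]),
    ]
    print(features)

Such joint-relative features are commonly chosen because they depend on body pose rather than on the background or on the absolute position of the actor, which is consistent with the background- and speed-independence claimed above.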
