IEEE International Conference on Artificial Intelligence and Virtual Reality

Gesture and Action Discovery for Evaluating Virtual Environments with Semi-Supervised Segmentation of Telemetry Records

Abstract

In this paper, we propose a novel pipeline for semi-supervised behavioral coding of videos of users testing a device or interface, with an eye toward human-computer interaction evaluation for virtual reality. Our system applies existing statistical techniques for time-series classification, including e-divisive change point detection and "Symbolic Aggregate approXimation" (SAX) with agglomerative hierarchical clustering, to 3D pose telemetry data. These techniques create classes of short segments of single-person video data: short actions of potential interest called "micro-gestures." A long short-term memory (LSTM) layer then learns these micro-gestures from pose features generated purely from video via a pre-trained OpenPose convolutional neural network (CNN) to predict their occurrence in unlabeled test videos. We present and discuss the results from testing our system on the single-user pose videos of the CMU Panoptic Dataset.
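
To make the segmentation-and-clustering stage described above concrete, the following is a minimal illustrative sketch in Python, not the authors' implementation: it symbolizes fixed-length segments of a single pose-telemetry channel with SAX (piecewise aggregate approximation plus Gaussian breakpoints) and groups the resulting symbol strings with agglomerative hierarchical clustering from SciPy. The sax_word helper, the equal-length segmentation (standing in for e-divisive change point boundaries), and the Hamming distance between SAX words (standing in for the SAX MINDIST measure) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def sax_word(segment: np.ndarray, n_symbols: int = 8, alphabet_size: int = 4) -> np.ndarray:
    """Convert one telemetry segment into a SAX word (array of symbol indices)."""
    seg = (segment - segment.mean()) / (segment.std() + 1e-8)           # z-normalize
    paa = seg.reshape(n_symbols, -1).mean(axis=1)                        # piecewise aggregate approximation
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])   # equiprobable Gaussian breakpoints
    return np.digitize(paa, breakpoints)                                 # map PAA values to symbol indices

# Toy telemetry: one joint coordinate over time, cut into equal-length segments.
# (In the paper, segment boundaries come from e-divisive change point detection.)
rng = np.random.default_rng(0)
telemetry = np.concatenate(
    [np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.1 * rng.standard_normal(64) for _ in range(10)]
)
segments = telemetry.reshape(10, 64)

# Symbolize each segment as a SAX word.
words = np.array([sax_word(s) for s in segments])

# Agglomerative hierarchical clustering of SAX words (Hamming distance used here
# as a simple stand-in for the SAX MINDIST measure).
dist = pdist(words, metric="hamming")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=3, criterion="maxclust")   # candidate "micro-gesture" classes
print(labels)
```

In such a pipeline, the cluster labels produced here would serve as the weak segment-level classes that a downstream sequence model (the LSTM over OpenPose features in the abstract) is trained to predict on unlabeled test videos.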
