Workshop on Human Motion, 2000 (Proceedings)

Talking heads: introducing the tool of 3D motion fields in the study of action

Abstract

We demonstrate a method to compute three-dimensional (3D) motion fields on a face to serve as an intermediate representation for the study of actions. Twelve synchronized and calibrated cameras are positioned all around a talking person and observe the person's head in motion. We represent the head as a deformable mesh, which is fitted in a global optimization step to silhouette-contour and multi-camera stereo data derived from all images. The non-rigid displacement of the mesh from frame to frame, the 3D motion field, is determined from the normal-flow information in all the images. We integrate these cues over time, thus producing a spatio-temporal representation of the talking head. Our ability to estimate 3D motion fields points to a new framework for the study of action. Using multi-camera configurations we can estimate a sequence of evolving 3D motion fields representing specific actions. Then, by performing a geometric and statistical analysis on these structures, we can achieve dimensionality reduction and thus arrive at powerful representations of generic human action.
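The two computations sketched in the abstract, recovering a per-vertex 3D displacement from normal-flow measurements in several calibrated views and then compressing a sequence of such motion fields, can be illustrated with a short sketch. The snippet below is a minimal illustration under simplifying assumptions, not the authors' implementation: `estimate_vertex_displacement` and `action_basis` are hypothetical helpers, the projection Jacobians and normal-flow measurements are assumed to be given per camera, and a plain regularized least-squares solve stands in for the paper's global optimization.

```python
import numpy as np

def estimate_vertex_displacement(projection_jacobians, normal_dirs, normal_flows, reg=1e-6):
    """Least-squares 3D displacement of one mesh vertex from normal-flow cues.

    Each calibrated camera c contributes one scalar constraint
        n_c^T (J_c d) = u_c,
    where J_c (2x3) is the Jacobian of that camera's projection at the vertex,
    n_c (2,) is the unit image-gradient direction, and u_c is the measured
    normal-flow magnitude.  With many cameras the stacked system A d = b is
    over-determined and is solved for the displacement d in R^3.
    """
    A = np.array([n @ J for J, n in zip(projection_jacobians, normal_dirs)])  # (C, 3)
    b = np.asarray(normal_flows, dtype=float)                                 # (C,)
    # Tikhonov term keeps the normal equations well conditioned when a vertex
    # is visible in only a few cameras or the viewing directions are similar.
    return np.linalg.solve(A.T @ A + reg * np.eye(3), A.T @ b)

def action_basis(motion_fields, n_components=10):
    """Principal motion modes of a sequence of 3D motion fields.

    Each field (V vertices x 3) is flattened to a vector; PCA over the stacked
    sequence yields a low-dimensional basis, one simple way to realize the
    dimensionality reduction mentioned in the abstract.
    """
    X = np.stack([np.asarray(f).reshape(-1) for f in motion_fields])  # (T, 3V)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_components]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic check: 12 cameras observing one vertex displaced by d_true.
    d_true = np.array([0.5, -0.2, 0.1])
    Js = [rng.standard_normal((2, 3)) for _ in range(12)]
    ns = [v / np.linalg.norm(v) for v in rng.standard_normal((12, 2))]
    us = [n @ (J @ d_true) for J, n in zip(Js, ns)]
    print(estimate_vertex_displacement(Js, ns, us))  # recovers d_true up to noise/regularization
```

With twelve cameras each vertex is heavily over-constrained, which is why a simple least-squares fit suffices in this sketch; the paper's actual pipeline couples the vertices through the deformable mesh and the silhouette and stereo terms.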
