Asian Conference on Computer Vision

An Adaptation Framework for Head-Pose Classification in Dynamic Multi-view Scenarios

Abstract

Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move around freely in the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance and head-pose can be very expensive. Instead, a transfer learning approach is proposed in this work. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions vary. The adaptation framework incorporates the reliability of different face regions for pose estimation under positional variation by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state of the art by 9.5% under relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic, multi-view head-pose dataset with ground truth, publicly available with this paper.
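The abstract describes classifying head pose with a weighted-distance function over face-region features, where the per-region weights are adapted as the target's position in the scene changes. The sketch below is a rough, self-contained illustration of that idea only: nearest-neighbour pose classification under a region-weighted distance. All names, feature layouts and weight values are illustrative assumptions, not the authors' implementation, which additionally warps the target appearance to a canonical appearance at a reference scene location.

```python
import numpy as np

def weighted_distance(x, y, region_weights, region_size):
    """Sum of per-region Euclidean distances, scaled by learned region weights."""
    d = 0.0
    for r, w in enumerate(region_weights):
        a = x[r * region_size:(r + 1) * region_size]
        b = y[r * region_size:(r + 1) * region_size]
        d += w * np.linalg.norm(a - b)
    return d

def classify_pose(query, train_feats, train_poses, region_weights, region_size):
    """Nearest-neighbour head-pose classification under the weighted distance."""
    dists = [weighted_distance(query, t, region_weights, region_size)
             for t in train_feats]
    return train_poses[int(np.argmin(dists))]

# Toy usage: 4 hypothetical face regions, 8-dimensional features per region,
# 8 discrete pan classes. The weights would be adapted per scene position.
rng = np.random.default_rng(0)
region_size, n_regions = 8, 4
train_feats = rng.normal(size=(20, n_regions * region_size))
train_poses = rng.integers(0, 8, size=20)
region_weights = np.array([0.4, 0.3, 0.2, 0.1])
query = rng.normal(size=n_regions * region_size)
print(classify_pose(query, train_feats, train_poses, region_weights, region_size))
```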
