Cross-Pose Face Recognition — A Virtual View Generation Approach Using Clustering Based LVTM

Abstract

This paper presents an approach to cross-pose face recognition by virtual view generation using an appearance-clustering-based local view transition model. Previously, the traditional global-pattern-based view transition model (VTM) method was extended to a local version, called LVTM, which learns the linear transformation of pixel values between frontal and non-frontal image pairs from training images using a partial image in a small region at each location, instead of transforming the entire image pattern. In this paper, we show that the accuracy of the appearance transition model and the recognition rate can be further improved by better exploiting the inherent linear relationship between frontal and non-frontal face image patch pairs. This is based on the observation that variations in appearance caused by pose are closely related to the corresponding 3D structure; intuitively, frontal/non-frontal patch pairs drawn from more similar local 3D face structures should have a stronger linear relationship. Thus, for each location, instead of learning a common transformation as in the LVTM, the corresponding local patches are first clustered using an appearance-similarity distance metric, and the transition models are then learned separately for each cluster. In the testing stage, each local patch of the input non-frontal probe image is transformed using the learned local view transition model of the most visually similar cluster. Experimental results on a real-world face dataset demonstrate the superiority of the proposed method in terms of recognition rate.
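The training and testing stages described in the abstract can be sketched numerically. The following is a minimal illustration on synthetic data, not the paper's implementation: the toy patch generator, dimensions, plain k-means routine, and least-squares fitting are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy setup (illustrative only): one patch location, flattened 3x3 patches ---
d, k, n = 9, 3, 300          # patch dimension, number of clusters, training pairs
centers = rng.normal(scale=5.0, size=(k, d))   # well-separated appearance modes
labels = rng.integers(0, k, size=n)
X_nf = centers[labels] + rng.normal(scale=0.3, size=(n, d))  # non-frontal patches
W = rng.normal(size=(k, d, d))                 # per-cluster "true" linear transitions
X_f = np.einsum('nij,nj->ni', W[labels], X_nf) # corresponding frontal patches

def kmeans(X, k, iters=30):
    """Plain k-means with farthest-point initialisation (keeps the toy demo stable)."""
    C = [X[0]]
    for _ in range(k - 1):
        d2 = ((X[:, None] - np.array(C)[None]) ** 2).sum(-1).min(1)
        C.append(X[np.argmax(d2)])
    C = np.array(C)
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) for j in range(k)])
    return C, lab

# Training: cluster the non-frontal patches at this location by appearance,
# then fit one linear transition model per cluster (instead of one shared model).
C, lab = kmeans(X_nf, k)
models = [np.linalg.lstsq(X_nf[lab == j], X_f[lab == j], rcond=None)[0]
          for j in range(k)]   # M_j minimises ||X_nf_j @ M_j - X_f_j||

# Testing: route the probe patch to the most visually similar cluster,
# then apply that cluster's model to synthesise the virtual frontal patch.
probe = centers[1] + rng.normal(scale=0.3, size=d)
j = int(((probe - C) ** 2).sum(-1).argmin())
virtual_frontal = probe @ models[j]
err = np.linalg.norm(virtual_frontal - W[1] @ probe) / np.linalg.norm(W[1] @ probe)
print(f"relative error vs. ideal transform: {err:.2e}")
```

The point of the per-cluster fit is that a single least-squares map cannot represent several distinct linear relations at once; routing each patch by appearance similarity lets every cluster's map specialise to its own local structure.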

Bibliographic information

  • Source
    IEICE Transactions on Information and Systems, 2013, No. 3, pp. 531-537 (7 pages)
  • Author affiliations

    Graduate School of Information Science, Nagoya University, Nagoya-shi, 464-8601 Japan;

    Faculty of Economics and Information, Gifu Shotoku Gakuen University, Gifu-shi, 500-8288 Japan;

    Information and Communications Headquarters, Nagoya University, Nagoya-shi, 464-8601 Japan

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
  • Keywords

    face recognition; pose invariant; clustering; local view transition model;

