Journal of Supercomputing

3D Gaze tracking by combining eye- and facial-gaze vectors


Abstract

We propose a 3D gaze-tracking method that combines accurate 3D eye- and facial-gaze vectors estimated from a Kinect v2 high-definition face model. Using accurate 3D facial and ocular feature positions, gaze positions can be calculated more accurately than with previous methods. Considering the image resolution of the face and eye regions, the two gaze vectors are combined as a weighted sum, allocating more weight to the facial-gaze vector. Hence, the facial orientation mainly determines the gaze position, and the eye-gaze vector then performs minor refinements. The 3D facial-gaze vector is first defined, and the 3D rotational center of the eyeball is then estimated; together, these define the 3D eye-gaze vector. Finally, the intersection point between the combined 3D gaze vector and the physical display plane is calculated as the gaze position. Experimental results show that the average gaze-estimation root-mean-square error was approximately 23 pixels from the desired position at a resolution of 1920 x 1080.
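The two geometric steps described in the abstract — a weighted sum of the facial- and eye-gaze vectors followed by a ray–plane intersection with the display — can be sketched as below. This is a minimal illustration, not the paper's implementation: the weight value `w_face`, the gaze origin, and the display-plane parameters are all hypothetical placeholders (the paper only states that the facial-gaze vector receives the larger weight).

```python
import numpy as np

def combined_gaze_point(origin, face_gaze, eye_gaze, w_face=0.7,
                        plane_point=np.zeros(3),
                        plane_normal=np.array([0.0, 0.0, 1.0])):
    """Combine facial- and eye-gaze vectors as a weighted sum and
    intersect the resulting gaze ray with the display plane.

    w_face is a hypothetical weight; following the paper, more weight
    goes to the facial-gaze vector, so w_face > 0.5.
    Returns the 3D intersection point, or None if the gaze ray is
    parallel to the display plane.
    """
    # Normalize both direction vectors before weighting.
    face_gaze = face_gaze / np.linalg.norm(face_gaze)
    eye_gaze = eye_gaze / np.linalg.norm(eye_gaze)

    # Weighted sum: facial gaze dominates, eye gaze refines.
    g = w_face * face_gaze + (1.0 - w_face) * eye_gaze
    g /= np.linalg.norm(g)

    # Ray-plane intersection: origin + t * g lies on the plane when
    # dot(plane_normal, origin + t * g - plane_point) == 0.
    denom = np.dot(plane_normal, g)
    if abs(denom) < 1e-9:
        return None  # gaze direction parallel to the display plane
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * g
```

For example, a gaze originating 1 m in front of a display lying in the z = 0 plane and pointing straight at it intersects the plane at the point directly ahead. In a real system the resulting 3D point would then be mapped to pixel coordinates on the 1920 x 1080 screen via the display's known physical geometry.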
