Conference paper: Computer Vision/Computer Graphics Collaboration Techniques, Lecture Notes in Computer Science, vol. 4418

Generation of Expression Space for Realtime Facial Expression Control of 3D Avatar



Abstract

This paper describes an expression-space generation technique that enables animators to control the facial expressions of 3D avatars in real time by selecting a series of expressions from a facial expression space. In this system, approximately 2,400 facial expression frames are used to generate the expression space. The state of an expression is represented by a distance matrix holding the distances between facial feature points, and the set of these distance matrices is defined as the facial expression space. This space, however, is not one in which an expression can move to another along a straight line; instead, the route from one expression to another is inferred approximately from the captured facial expression data. First, two expressions are assumed to be close to each other when the distance between the distance matrices representing their states falls below a threshold. When two arbitrary expression states can be connected through a series of adjacent expressions, a route is assumed to exist between them, and the shortest such path is taken as the path along which one expression moves to the other. Dynamic programming is used to find this shortest path. The facial expression space, being a set of distance matrices, is high-dimensional, so multidimensional scaling is used to visualize it in two dimensions; animators control a 3D avatar's expression in real time by navigating this visualization. The paper concludes by evaluating the results of an experiment in which animators were asked to control facial expressions using the system.
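The pipeline described in the abstract (distance-matrix expression states, an adjacency graph thresholded at some ε, dynamic-programming shortest paths, and multidimensional scaling for 2D navigation) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Frobenius norm between distance matrices, the threshold value, and the choice of Floyd-Warshall (a dynamic-programming shortest-path algorithm) and classical MDS are all assumptions, and the landmark data here is synthetic.

```python
import numpy as np

def expression_state(landmarks):
    """One expression frame as the paper's distance-matrix representation:
    pairwise Euclidean distances between facial feature points."""
    diff = landmarks[:, None, :] - landmarks[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def frame_distance(a, b):
    """Distance between two expression states. The Frobenius norm is an
    assumption; the paper only says 'distance between distance matrices'."""
    return np.linalg.norm(a - b)

def geodesic_distances(frames, eps):
    """Build the adjacency graph of expressions closer than eps, then run
    Floyd-Warshall to get shortest-path distances between all expressions."""
    n = len(frames)
    d = np.full((n, n), np.inf)
    np.fill_diagonal(d, 0.0)
    for i in range(n):
        for j in range(i + 1, n):
            w = frame_distance(frames[i], frames[j])
            if w < eps:                      # adjacency threshold
                d[i, j] = d[j, i] = w
    for k in range(n):                       # DP relaxation over midpoints
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

def classical_mds(d, dim=2):
    """Classical (Torgerson) MDS: embed the geodesic distances in 2D so
    an animator could navigate the space on screen."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:dim]       # largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Demo on synthetic frames: 30 expressions, 10 hypothetical landmarks each.
rng = np.random.default_rng(0)
frames = [expression_state(rng.random((10, 3))) for _ in range(30)]
geo = geodesic_distances(frames, eps=10.0)   # eps chosen to keep graph connected
coords = classical_mds(geo, dim=2)           # 2D layout for navigation
```

Navigating from one expression to another would then follow the intermediate frames on the shortest path rather than a straight line in the raw matrix space, which is the point of the graph construction.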
