
Emotional Facial Expression Classification for Multimodal User Interfaces

Abstract

We present a simple and computationally feasible method for automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (a subset of the MPEG-4 feature points) to extract the relevant emotional information: essentially five distances, the presence of wrinkles, and the mouth shape. The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned on a database of 399 images. At present the method is applied to static images; its application to image sequences is now being developed. Extracting such information about the user is of great interest for the development of new multimodal user interfaces.
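The abstract does not specify which feature points, distances, or decision rules are used. The sketch below only illustrates the general idea it describes (rule-based emotion labeling from a few normalized facial distances plus wrinkle and mouth-shape cues); all landmark names, distance definitions, and thresholds here are hypothetical assumptions, not the authors' method.

import numpy as np

# Illustrative sketch only: the paper does not publish its exact feature points,
# distance definitions, or thresholds, so every name and number here is assumed.

def normalized_distances(points):
    # points: dict mapping assumed landmark names to (x, y) coordinates.
    # Distances are normalized by the inter-ocular distance to reduce
    # sensitivity to face size and camera distance.
    iod = np.linalg.norm(np.subtract(points["right_eye"], points["left_eye"]))
    def d(a, b):
        return np.linalg.norm(np.subtract(points[a], points[b])) / iod
    return {
        "brow_eye":    d("left_brow", "left_eye"),             # brow raise / frown
        "eye_open":    d("left_upper_lid", "left_lower_lid"),  # eye opening
        "mouth_width": d("left_lip_corner", "right_lip_corner"),
        "mouth_open":  d("upper_lip", "lower_lip"),
        "corner_eye":  d("left_lip_corner", "left_eye"),       # smile pulls corners up
    }

def classify(dist, brow_wrinkles=False, corners_up=False):
    # Toy rule-based labeling over the assumed features; the thresholds are
    # invented and would in practice be tuned on a labeled image set.
    if corners_up and dist["mouth_width"] > 0.9:
        return "joy"
    if brow_wrinkles and dist["brow_eye"] < 0.25:
        return "anger"
    if dist["eye_open"] > 0.35 and dist["mouth_open"] > 0.5:
        return "surprise"
    if dist["brow_eye"] > 0.45 and dist["mouth_open"] < 0.1:
        return "sadness"
    return "neutral"

A real system of this kind would cover all six basic emotions (including fear and disgust) and combine more cues; the point of the sketch is only the flow from point coordinates to distance features to a discrete emotion label.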
