
Emotional Facial Expression Classification for Multimodal User Interfaces

Abstract

We present a simple and computationally feasible method for the automatic emotional classification of facial expressions. We propose the use of 10 characteristic points (a subset of the MPEG-4 facial feature points) to extract the relevant emotional information: essentially five distances, the presence of wrinkles, and the mouth shape. The method defines and detects the six basic emotions (plus the neutral one) in terms of this information and has been fine-tuned on a database of 399 images. For the moment, the method is applied to static images; its application to image sequences is now under development. Extracting such information about the user is of great interest for the development of new multimodal user interfaces.
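The sketch below illustrates, in Python, the kind of distance-based, rule-driven classification the abstract describes: a few normalized facial distances are compared against the same subject's neutral face and mapped to one of the six basic emotions or neutral. The feature-point names, the five distance definitions, and all thresholds are illustrative assumptions, not the MPEG-4 point selection or the rules fine-tuned on the paper's 399-image database.

```python
import numpy as np

def facial_distances(points):
    """Compute five illustrative distances from a dict of named 2D points.

    `points` maps hypothetical names such as 'left_eyebrow_inner',
    'left_eye_top', 'left_eye_bottom', 'mouth_left', 'mouth_right',
    'mouth_top', 'mouth_bottom' to (x, y) coordinates, assumed to be
    normalized (e.g. by inter-ocular distance).
    """
    d = lambda a, b: float(np.linalg.norm(np.asarray(points[a]) - np.asarray(points[b])))
    return {
        "eyebrow_to_eye": d("left_eyebrow_inner", "left_eye_top"),   # eyebrow raise
        "eye_opening":    d("left_eye_top", "left_eye_bottom"),      # eye aperture
        "mouth_width":    d("mouth_left", "mouth_right"),            # smile / stretch
        "mouth_opening":  d("mouth_top", "mouth_bottom"),            # jaw drop
        "mouth_to_eye":   d("mouth_top", "left_eye_bottom"),         # cheek raise proxy
    }

def classify_emotion(distances, neutral):
    """Toy rule-based classifier: compare each distance with the subject's
    neutral face and return one of the six basic emotions or 'neutral'.
    All thresholds are placeholders, not the paper's fine-tuned values.
    """
    delta = {k: distances[k] - neutral[k] for k in distances}
    if delta["mouth_width"] > 0.05 and delta["mouth_opening"] > 0.02:
        return "joy"
    if delta["eyebrow_to_eye"] > 0.05 and delta["eye_opening"] > 0.03 and delta["mouth_opening"] > 0.05:
        return "surprise"
    if delta["eyebrow_to_eye"] < -0.04 and delta["mouth_width"] < -0.02:
        return "anger"
    if delta["mouth_opening"] > 0.04 and delta["eyebrow_to_eye"] > 0.03:
        return "fear"
    if delta["mouth_width"] < -0.03 and delta["mouth_opening"] < 0.01:
        return "sadness"
    if delta["mouth_to_eye"] < -0.03 and delta["eyebrow_to_eye"] < -0.02:
        return "disgust"
    return "neutral"
```

In this sketch the wrinkle-presence and mouth-shape cues mentioned in the abstract are omitted for brevity; they would enter as additional boolean or shape features alongside the five distances.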
