
Multimodal Engagement Classification for Affective Cinema


Abstract

This paper describes a multimodal approach to detecting viewers' engagement through psycho-physiological affective signals. We investigate the individual contributions of the different modalities and report experimental results obtained using several fusion strategies, in both per-clip and per-subject cross-validation settings. A sequence of clips from a short movie was shown to 15 participants, from whom we collected per-clip engagement self-assessments. Cues of the users' affective states were collected by means of (i) galvanic skin response (GSR), (ii) automatic facial tracking, and (iii) electroencephalogram (EEG) signals. The main findings of this study can be summarized as follows: (i) each individual modality significantly encodes the viewers' level of engagement in response to movie clips, (ii) the GSR and EEG signals provide comparable contributions, and (iii) the best performance is obtained when the three modalities are used together.
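The abstract does not specify the paper's actual features or classifier, but the experimental protocol it describes (per-subject cross-validation over three modalities, with fusion) can be illustrated. The sketch below uses synthetic stand-in data and a simple nearest-centroid classifier; all dimensions, signal strengths, and names are invented for illustration, and early (feature-level) fusion is just one of the several fusion strategies the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 15 subjects x 8 clips, three modalities
# (GSR, facial tracking, EEG); labels mimic binary engagement
# self-assessments. Dimensions and signal strengths are invented.
n_subjects, n_clips = 15, 8
labels = rng.integers(0, 2, size=(n_subjects, n_clips))

def make_modality(dim, strength):
    # Features weakly correlated with the engagement label.
    noise = rng.normal(size=(n_subjects, n_clips, dim))
    return noise + strength * labels[..., None]

modalities = {
    "gsr":  make_modality(4, 1.0),
    "face": make_modality(6, 0.8),
    "eeg":  make_modality(8, 1.0),
}

def loso_accuracy(X, y, test_subj):
    # Leave-one-subject-out fold: train on all other subjects' clips,
    # test on the held-out subject (the per-subject CV setting).
    train = np.arange(n_subjects) != test_subj
    Xtr = X[train].reshape(-1, X.shape[-1])
    ytr = y[train].reshape(-1)
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X[test_subj][:, None, :] - centroids[None], axis=-1)
    return float((dists.argmin(axis=1) == y[test_subj]).mean())

def per_subject_cv(X):
    # Mean accuracy across all leave-one-subject-out folds.
    return float(np.mean([loso_accuracy(X, labels, s)
                          for s in range(n_subjects)]))

# Early fusion: concatenate the per-clip feature vectors of all modalities.
fused = np.concatenate(list(modalities.values()), axis=-1)

for name, X in modalities.items():
    print(f"{name:5s} accuracy: {per_subject_cv(X):.2f}")
print(f"fused accuracy: {per_subject_cv(fused):.2f}")
```

On this synthetic data the fused representation typically matches or exceeds the individual modalities, loosely mirroring the abstract's finding (iii); per-clip cross-validation would instead hold out clips rather than subjects.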

