Published in: Sensors (Basel, Switzerland)

Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention



Abstract

Emotion recognition has been gaining attention in recent years due to its applications in artificial agents. To achieve good performance on this task, much research has been conducted on multi-modality emotion recognition models that leverage the different strengths of each modality. However, a research question remains: what exactly is the most appropriate way to fuse the information from different modalities? In this paper, we proposed audio sample augmentation and an emotion-oriented encoder-decoder to improve the performance of emotion recognition, and discussed an inter-modality, decision-level fusion method based on a graph attention network (GAT). Compared to the baseline, our model improved the weighted average F1-score from 64.18% to 68.31% and the weighted average accuracy from 65.25% to 69.88%.
