Pattern Recognition Letters

A multimodal hyperlapse method based on video and songs’ emotion alignment



Abstract

© 2022 Elsevier B.V. With the recent growth in the use of social media and new digital devices such as smartphones and wearable cameras, people often record long first-person videos of their daily activities. These videos are usually very long and tiring to watch, creating the need to speed them up. Recent fast-forward methods do not consider the background music to be inserted into the video, which could make the result even more enjoyable. In this paper, we present a new fast-forward method that considers the information present in both the video and the background music. We use neural networks to automatically recognize the emotions induced by the video and the song, and combine the contents in the accelerated video through a new frame-selection method whose main objective is to maximize the similarity of the induced emotions. We present quantitative and qualitative experiments on a large dataset with different videos and songs, showing that our method achieves the best performance in matching emotion similarity while also preserving the video's visual quality.
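The abstract describes selecting frames so that the emotions induced by the accelerated video align with those of the background song. The paper's actual algorithm is not reproduced here; the following is a minimal illustrative sketch under assumed inputs: per-frame and per-song-segment emotion vectors (e.g. valence/arousal) already extracted by some emotion recognizer, with a greedy choice of one frame per song segment inside a window around the nominal fast-forward position. The function names and windowing policy are hypothetical, not the authors' method.

```python
# Hypothetical sketch of emotion-aligned frame selection; NOT the paper's
# algorithm. Inputs are assumed precomputed emotion vectors per video frame
# and per song segment (e.g. [valence, arousal]).
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two emotion vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_frames(video_emotions, song_emotions, speedup):
    """Pick one frame per song segment, constrained to a window around the
    uniform fast-forward position j*speedup, greedily maximizing the
    emotion similarity between the chosen frame and the song segment."""
    n = len(video_emotions)
    selected = []
    prev = -1  # enforce strictly increasing frame indices
    for j, seg in enumerate(song_emotions):
        lo = max(prev + 1, j * speedup - speedup // 2)
        hi = min(n, j * speedup + speedup // 2 + 1)
        if lo >= hi:
            break  # ran out of frames for the remaining segments
        best = max(range(lo, hi), key=lambda i: cosine(video_emotions[i], seg))
        selected.append(best)
        prev = best
    return selected
```

For instance, with four "happy" frames followed by four "calm" frames and a song that starts calm and turns happy, the selector drifts toward frames whose emotion matches each segment while still respecting the target speed-up window. A dynamic-programming formulation (scoring all window choices jointly) would be the natural next step over this greedy sketch.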
