IEEE International Conference on Multimedia and Expo

Learn A Robust Representation For Cover Song Identification Via Aggregating Local And Global Music Temporal Context



Abstract

Recently, deep learning models have been proposed for cover song identification and designed to learn fixed-length feature vectors for music recordings. However, the temporal progression of music, which is important for measuring the melody similarity between two recordings, is not well exploited in those models. In this paper, we propose a new Siamese architecture to learn deep representations for cover song identification, where Dilated Temporal Pyramid Convolution is used to exploit the local temporal context and Temporal Self-Attention to exploit the global temporal context in music recordings. In addition to the traditional block which calculates the similarity between a pair of recordings, we add a classification block to classify the recordings into their respective cliques. By combining the regression loss and the classification loss, our model can learn more robust and discriminative latent representations. The representations extracted by our model show substantial superiority over existing hand-crafted features and learned deep features. Experimental results show that our approach far outperforms the state-of-the-art methods on several public datasets.
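The two temporal-context mechanisms named in the abstract can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the kernel size, the dilation rates (1, 2, 4), the channel widths, and the mean-pooling step that produces the fixed-length vector are all assumptions made for the sketch.

```python
import numpy as np

def dilated_conv1d(x, w, b, dilation):
    """Dilated 1-D convolution over time.
    x: (T, C_in), w: (K, C_in, C_out), b: (C_out,).
    Zero-pads so the output keeps temporal length T."""
    K, C_in, C_out = w.shape
    pad = dilation * (K - 1) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], C_out))
    for t in range(x.shape[0]):
        for k in range(K):
            out[t] += xp[t + k * dilation] @ w[k]
    return out + b

def temporal_pyramid(x, weights, biases, dilations=(1, 2, 4)):
    """Concatenate dilated convolutions at several rates: growing
    receptive fields capture the *local* temporal context."""
    return np.concatenate(
        [dilated_conv1d(x, w, b, d)
         for w, b, d in zip(weights, biases, dilations)], axis=1)

def self_attention(x):
    """Single-head scaled dot-product self-attention over time:
    every frame attends to every other frame (*global* context)."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ x

rng = np.random.default_rng(0)
T, C, K, C_out = 50, 12, 3, 8        # frames x chroma-like features
x = rng.standard_normal((T, C))
ws = [rng.standard_normal((K, C, C_out)) * 0.1 for _ in range(3)]
bs = [np.zeros(C_out) for _ in range(3)]
local = temporal_pyramid(x, ws, bs)   # (50, 24) local context
glob = self_attention(local)          # (50, 24) global context
embedding = glob.mean(axis=0)         # fixed-length vector, (24,)
```

A pair of such embeddings, one per branch of the Siamese network, would then feed the similarity and classification blocks.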
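The combined objective, a pairwise regression loss plus a clique-classification loss, can be sketched as below. The cosine-similarity regression target, the cross-entropy form, the number of cliques, and the weighting factor `lam` are assumptions for illustration; the abstract does not give the exact formulation.

```python
import numpy as np

def pairwise_regression_loss(za, zb, target):
    """Squared error between the cosine similarity of two branch
    embeddings and a 0/1 cover label (the 'traditional block')."""
    sim = za @ zb / (np.linalg.norm(za) * np.linalg.norm(zb))
    return (sim - target) ** 2

def classification_loss(logits, label):
    """Cross-entropy of clique-classification logits
    (the added classification block)."""
    logits = logits - logits.max()           # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[label]

rng = np.random.default_rng(1)
za, zb = rng.standard_normal(24), rng.standard_normal(24)
logits_a = rng.standard_normal(10)   # 10 hypothetical cliques
lam = 0.5                            # assumed weighting, not from the paper
loss = (pairwise_regression_loss(za, zb, 1.0)
        + lam * classification_loss(logits_a, 3))
```

Training on both terms pushes embeddings of the same clique together (classification) while calibrating pairwise similarity (regression).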
