Journal: Algorithms

A Robust Visual Tracking Algorithm Based on Spatial-Temporal Context Hierarchical Response Fusion



Abstract

Discriminative correlation filters (DCFs) have shown superior performance in visual object tracking. However, visual tracking remains challenging when target objects undergo complex scenarios such as occlusion, deformation, scale changes, and illumination changes. In this paper, we utilize the hierarchical features of convolutional neural networks (CNNs) and learn a spatial-temporal context correlation filter on the convolutional layers. The translation is then estimated by fusing the response scores of the filters on three convolutional layers. For scale estimation, we learn a discriminative correlation filter that estimates scale from the best confidence results. Furthermore, we propose a re-detection activation discrimination method to improve the robustness of visual tracking in the case of tracking failure, and an adaptive model update method to reduce tracking drift caused by noisy updates. We evaluate the proposed tracker with DCFs and deep features on the OTB benchmark datasets. The tracking results demonstrate that the proposed algorithm outperforms several state-of-the-art DCF methods in terms of accuracy and robustness.
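The translation step described in the abstract, fusing per-layer correlation-filter responses and taking the peak of the fused map, can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: the feature maps, filters, and per-layer fusion weights here are hypothetical placeholders, and the correlation is computed in the Fourier domain as is standard for DCF trackers.

```python
import numpy as np

def filter_response(feat, filt):
    """Correlation response of a learned filter on one conv-layer feature map.

    feat, filt: (H, W, C) arrays. The response is computed per channel in the
    Fourier domain and summed over channels, yielding an (H, W) response map.
    """
    F = np.fft.fft2(feat, axes=(0, 1))
    G = np.fft.fft2(filt, axes=(0, 1))
    resp = np.fft.ifft2(F * np.conj(G), axes=(0, 1))
    return np.real(resp).sum(axis=2)

def fuse_responses(responses, weights):
    """Hierarchical fusion: weighted sum of per-layer response maps.

    The estimated translation is the location of the peak of the fused map.
    Returns (fused_map, (row, col) of the peak).
    """
    fused = sum(w * r for w, r in zip(weights, responses))
    peak = np.unravel_index(np.argmax(fused), fused.shape)
    return fused, peak
```

For example, with three (H, W) response maps `r1, r2, r3` from three conv layers, `fuse_responses([r1, r2, r3], [0.25, 0.5, 1.0])` returns the fused map and the peak location used as the translation estimate; the weights shown are illustrative, not from the paper.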


