Journal: Engineering Applications of Artificial Intelligence

A novel Gait-Appearance-based Multi-Scale Video Covariance Approach for pedestrian (re)-identification



Abstract

In order to handle the complex databases of acquired images in the security area, a robust and adaptive framework for Video Surveillance Data Mining, as well as for multi-shot pedestrian (re)-identification, is required. The pedestrian's signature must be invariant and robust against noise and uncontrolled variation. In this paper, a new fast Gait-Appearance-based Multi-Scale Video Covariance (GAMS-ViCov) unsupervised approach is proposed to efficiently describe any image-sequence of a pedestrian, whether streamed or stored in a database, as a compact, fixed-size signature that exploits all the relevant spatiotemporal information. The proposed model is based on multi-scale features extracted from a novel data structure called the 'Two-Half-Video-Tree' (THVT), which represents pedestrians and allows uncontrolled variations to be discarded. THVT efficiently models the gait and appearance of the upper and lower parts of a person's silhouette as trees of multi-scale features, and can thus restructure the video data into new forms through a fast algorithm. Furthermore, the GAMS-ViCov approach is also competitive as a dynamic video summarization technique: k-means clustering models the signatures extracted from each person's image-sequences as a cluster center, and for each person's cluster the image-sequence whose signature is nearest to the centroid is selected and stored as that person's key image-sequence. The proposed approach was evaluated for person (re)-identification on the i-LIDS and PRID databases. The experimental results show that GAMS-ViCov outperforms most unsupervised approaches.
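The abstract's key property is that a variable-length image-sequence is mapped to a compact, fixed-size covariance-based signature. The paper's exact feature set and multi-scale construction are not given in the abstract, so the sketch below only illustrates the general covariance-descriptor idea with an arbitrary feature dimension; the feature choice and sample counts are assumptions, not the authors' method.

```python
import numpy as np

def covariance_signature(features):
    """Map per-sample spatiotemporal feature vectors (shape (n, d)) to a
    fixed-size signature: the upper triangle of the d x d covariance
    matrix. The signature length depends only on d, never on the number
    of frames or samples in the image-sequence."""
    cov = np.cov(features, rowvar=False)    # d x d covariance matrix
    iu = np.triu_indices(cov.shape[0])      # covariance is symmetric,
    return cov[iu]                          # so keep the upper triangle

# Hypothetical example: two sequences of different lengths, 5 features each
rng = np.random.default_rng(0)
short_seq = rng.normal(size=(120, 5))
long_seq = rng.normal(size=(900, 5))
print(covariance_signature(short_seq).shape)  # (15,)
print(covariance_signature(long_seq).shape)   # (15,) — same fixed size
```

Both sequences yield a 15-dimensional signature (5*(5+1)/2 entries), which is what makes such descriptors convenient for indexing and comparing sequences of arbitrary duration.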
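The summarization step described in the abstract picks, for each person's cluster, the image-sequence whose signature lies nearest to the cluster centroid. A minimal sketch of that selection, assuming one cluster per person (a single mean stands in for the k-means cluster center; the signature values are illustrative):

```python
import numpy as np

def key_sequence_index(signatures):
    """Given the signatures of all image-sequences belonging to one
    person's cluster, return the index of the sequence whose signature
    is nearest (Euclidean distance) to the cluster centroid. That
    sequence would be stored as the person's key image-sequence."""
    sigs = np.asarray(signatures, dtype=float)
    centroid = sigs.mean(axis=0)                     # cluster center
    dists = np.linalg.norm(sigs - centroid, axis=1)  # distance to centroid
    return int(np.argmin(dists))                     # nearest sequence

# Hypothetical 2-D signatures for one person's three image-sequences
sigs = [[0.0, 0.0], [1.0, 1.0], [0.4, 0.6]]
print(key_sequence_index(sigs))  # 2 — the middle-most signature wins
```

Selecting the medoid-like representative this way keeps one real sequence per person instead of a synthetic average, which is what makes the result usable as a video summary.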
