
MixedPeds: Pedestrian Detection in Unannotated Videos Using Synthetically Generated Human-Agents for Training



Abstract

We present a new method for training pedestrian detectors on an unannotated set of images. We produce a mixed reality dataset composed of real-world background images and synthetically generated static human-agents. Our approach is general, robust, and makes few assumptions about the unannotated dataset. We automatically extract from the dataset: i) the vanishing point, to calibrate the virtual camera, and ii) the pedestrians' scales, to generate a Spawn Probability Map, a novel concept that guides our algorithm to place pedestrians at appropriate locations. After placing synthetic human-agents in the unannotated images, we use these augmented images to train a pedestrian detector, with the annotations generated alongside the synthetic agents. We conducted our experiments using Faster R-CNN, comparing detection results on the unannotated dataset between the detector trained with our approach and detectors trained on other, manually labeled datasets. We show that our approach improves average precision by 5-13% over these detectors.
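The scale-extraction step described above can be illustrated with a small sketch. This is not the authors' code; it assumes, hypothetically, that a few seed pedestrian observations are available as (foot-row, pixel-height) pairs, fits a linear ground-plane scale model height = a*y + b by least squares, and normalizes the per-row predicted scale into a simple spawn probability map over the image:

```python
def fit_scale_model(seeds):
    """Least-squares fit of height = a*y + b from (y_foot, height) seeds.

    Under a pinhole camera and flat ground plane, pedestrian pixel
    height grows roughly linearly with the image row of the feet.
    """
    n = len(seeds)
    sy = sum(y for y, _ in seeds)
    sh = sum(h for _, h in seeds)
    syy = sum(y * y for y, _ in seeds)
    syh = sum(y * h for y, h in seeds)
    a = (n * syh - sy * sh) / (n * syy - sy * sy)
    b = (sh - a * sy) / n
    return a, b


def spawn_probability_map(height, width, a, b):
    """Build a per-pixel spawn probability map.

    Rows where the predicted pedestrian height is positive are treated
    as plausible ground locations, weighted by the predicted scale so
    that the whole map sums to 1.
    """
    row_weights = [max(a * y + b, 0.0) for y in range(height)]
    total = sum(row_weights) * width
    return [[w / total if total > 0 else 0.0 for _ in range(width)]
            for w in row_weights]
```

For example, seeds [(100, 50), (200, 100)] yield a = 0.5, b = 0: a pedestrian standing at row 200 is predicted to be twice as tall in pixels as one at row 100, and rows near the top of the image (small or negative predicted height) receive zero spawn probability. The actual method also uses the estimated vanishing point to orient the virtual camera, which this sketch omits.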
