IEEE International Conference on Advanced Video and Signal Based Surveillance

Improving Real-Time Pedestrian Detectors with RGB+Depth Fusion

Abstract

In this paper we investigate the benefit of using depth information on top of normal RGB for camera-based pedestrian detection. Indeed, depth data is easily acquired with depth cameras such as the Kinect or with stereo setups. We investigate the best way to perform this sensor fusion, with a special focus on lightweight single-pass CNN architectures that enable real-time processing on limited hardware. We implement several network architectures, each fusing depth at a different layer of the network. Our experiments show that midway fusion performs best, substantially outperforming a regular RGB-only detector in accuracy. Moreover, we show that our fusion network is better at detecting individuals in a crowd: it both localizes pedestrians more precisely and handles occluded persons better. The resulting network is computationally efficient and achieves real-time performance on both desktop and embedded GPUs.
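
To make the midway-fusion idea concrete, the sketch below shows a minimal PyTorch-style backbone in which separate convolutional streams process the RGB and depth inputs, their feature maps are concatenated partway through the network, and shared layers continue from the fused features. The layer names, channel widths, and exact fusion point are illustrative assumptions, not the architecture used in the paper.

    # Minimal sketch of a midway RGB+Depth fusion backbone (PyTorch).
    # All layer names, channel widths, and the fusion point are illustrative
    # assumptions, not the authors' exact architecture.
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # Simple conv -> batchnorm -> ReLU block used by both streams.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class MidwayFusionBackbone(nn.Module):
        def __init__(self):
            super().__init__()
            # Early layers: separate streams for RGB (3 channels) and depth (1 channel).
            self.rgb_stream = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
            self.depth_stream = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
            # Midway fusion: concatenate feature maps, then continue with shared layers.
            self.shared = nn.Sequential(conv_block(128, 128), conv_block(128, 256))

        def forward(self, rgb, depth):
            f_rgb = self.rgb_stream(rgb)        # (N, 64, H/4, W/4)
            f_depth = self.depth_stream(depth)  # (N, 64, H/4, W/4)
            fused = torch.cat([f_rgb, f_depth], dim=1)  # channel-wise concatenation
            return self.shared(fused)           # features for a detection head

    if __name__ == "__main__":
        backbone = MidwayFusionBackbone()
        rgb = torch.randn(1, 3, 256, 256)
        depth = torch.randn(1, 1, 256, 256)
        print(backbone(rgb, depth).shape)  # torch.Size([1, 256, 16, 16])

Early fusion would instead stack depth as a fourth input channel before the first convolution, and late fusion would keep the two streams separate until just before the detection head; midway fusion, as in this sketch, sits between the two.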