Sensors (Basel, Switzerland)

Robust Drivable Road Region Detection for Fixed-Route Autonomous Vehicles Using Map-Fusion Images



Abstract

Environment perception is one of the major issues in autonomous driving systems. In particular, effective and robust drivable road region detection remains a challenge for autonomous vehicles on multi-lane roads, at intersections and in unstructured road environments. In this paper, a computer vision and neural network-based drivable road region detection approach is proposed for fixed-route autonomous vehicles (e.g., shuttles, buses and other vehicles operating on fixed routes), using a vehicle-mounted camera, a route map and real-time vehicle location. The key idea of the proposed approach is to fuse an image with its corresponding local route map to obtain a map-fusion image (MFI), in which the image and the route map complement each other. The image information can be exploited in road regions with rich features, while the local route map provides critical heuristics that enable robust drivable road region detection in areas without clear lane markings or borders. A neural network model built upon Convolutional Neural Networks (CNNs), namely FCN-VGG16, is used to extract the drivable road region from the fused MFI. The proposed approach is validated using real-world driving scenario videos captured by an industrial camera mounted on a testing vehicle. Experiments demonstrate that the proposed approach outperforms the conventional approach using non-fused images in terms of detection accuracy and robustness, and that it achieves desirable robustness against adverse illumination conditions and pavement appearance, as well as against projection and map-fusion errors.
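The abstract describes fusing a camera frame with a projected local route map to form the MFI before feeding it to the segmentation network. The paper does not give the fusion code; the sketch below is a hypothetical minimal version, assuming the route map has already been projected into the image plane as a binary mask and that the fusion is a simple alpha blend of a colored route overlay into the RGB frame (the function name, overlay color and `alpha` parameter are illustrative, not from the paper):

```python
import numpy as np

def make_map_fusion_image(rgb: np.ndarray, route_mask: np.ndarray,
                          alpha: float = 0.5) -> np.ndarray:
    """Blend a projected route-map mask into a camera frame (hypothetical MFI).

    rgb        : H x W x 3 uint8 camera frame
    route_mask : H x W binary mask, nonzero where the projected route lies
    alpha      : blend weight of the route overlay
    """
    if rgb.shape[:2] != route_mask.shape:
        raise ValueError("image and route mask must share spatial size")
    mfi = rgb.astype(np.float32)
    # Illustrative choice: paint the route in pure green before blending.
    overlay = np.zeros_like(mfi)
    overlay[..., 1] = 255.0
    m = route_mask.astype(bool)
    # Alpha-blend only the pixels covered by the projected route.
    mfi[m] = (1.0 - alpha) * mfi[m] + alpha * overlay[m]
    return mfi.astype(np.uint8)
```

The fused image keeps the original appearance outside the route while marking the map-predicted corridor inside it, which is one plausible way the image and route-map cues could "complement each other" as the abstract states; the actual fusion scheme used by the authors may differ.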
