Sensors (Basel, Switzerland)

Object Detection Based on Faster R-CNN Algorithm with Skip Pooling and Fusion of Contextual Information


Abstract

Deep learning is currently the mainstream approach to object detection, and the faster region-based convolutional neural network (Faster R-CNN) holds a pivotal position among deep learning detectors. It achieves impressive detection results in ordinary scenes, but its performance can still be unsatisfactory under special conditions, such as objects that are occluded, deformed, or small. This paper proposes a novel, improved algorithm that extends the Faster R-CNN framework with skip pooling and fusion of contextual information, raising detection performance under these conditions. The improvement has three parts: the first part adds a contextual-information feature extraction model after the conv5_3 convolutional layer; the second part adds skip pooling, so that this context model can fully capture the contextual information of the object, which helps especially when the object is occluded or deformed; and the third part replaces the region proposal network (RPN) with a more efficient guided anchoring RPN (GA-RPN), which maintains the recall rate while improving detection performance. Skip pooling also draws more detailed information from different feature layers of the deep network, which particularly benefits scenes containing small objects. Compared with Faster R-CNN, the you-only-look-once series (e.g., YOLOv3), the single-shot detector (e.g., SSD512), and other object detection algorithms, the proposed algorithm achieves an average improvement of 6.857% in mean average precision (mAP) while maintaining a comparable recall rate, which demonstrates that it offers a higher detection rate and higher detection efficiency in these cases.
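
To make the architectural description above more concrete, the sketch below illustrates in PyTorch the two feature-side ideas mentioned in the abstract: a context branch appended after the last convolutional stage, and RoI features pooled from several stages ("skip pooling") that are L2-normalized, rescaled, and fused into a single descriptor. The layer names (conv3_3, conv4_3, conv5_3), channel counts, the dilated-convolution context branch, and the 1x1 fusion are illustrative assumptions about how such a design could look, not the authors' exact implementation; the GA-RPN proposal stage is not sketched here.

# Minimal sketch of skip pooling plus a context branch after conv5_3.
# Layer names, channel counts, and the dilated context branch are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import roi_align


class ContextBranch(nn.Module):
    """Hypothetical context extractor attached after conv5_3: dilated
    convolutions enlarge the receptive field so each position also sees
    its surroundings."""

    def __init__(self, channels=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=4, dilation=4),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)


class SkipPoolingHead(nn.Module):
    """RoI-align each proposal on several feature maps, L2-normalize and
    rescale each pooled tensor so magnitudes are comparable, then fuse
    them with a 1x1 convolution into a fixed-size descriptor."""

    def __init__(self, in_channels=(256, 512, 512, 512), out_channels=512,
                 output_size=7):
        super().__init__()
        self.output_size = output_size
        # Learnable per-source scale, a common trick after L2 normalization.
        self.scales = nn.ParameterList(
            [nn.Parameter(torch.tensor(10.0)) for _ in in_channels])
        self.fuse = nn.Conv2d(sum(in_channels), out_channels, kernel_size=1)

    def forward(self, feature_maps, rois, strides):
        pooled = []
        for feat, scale, stride in zip(feature_maps, self.scales, strides):
            x = roi_align(feat, rois, self.output_size,
                          spatial_scale=1.0 / stride, aligned=True)
            x = F.normalize(x, p=2, dim=1) * scale  # L2-normalize over channels
            pooled.append(x)
        return self.fuse(torch.cat(pooled, dim=1))


if __name__ == "__main__":
    # Fake VGG16-style backbone outputs for a 608x608 input at strides 4, 8, 16.
    conv3_3 = torch.randn(1, 256, 152, 152)
    conv4_3 = torch.randn(1, 512, 76, 76)
    conv5_3 = torch.randn(1, 512, 38, 38)
    context = ContextBranch(512)(conv5_3)  # context features after conv5_3

    # Two proposals in (batch_index, x1, y1, x2, y2) format.
    rois = torch.tensor([[0., 50., 50., 300., 300.],
                         [0., 10., 10., 80., 90.]])

    head = SkipPoolingHead()
    feats = head([conv3_3, conv4_3, conv5_3, context], rois,
                 strides=[4, 8, 16, 16])
    print(feats.shape)  # torch.Size([2, 512, 7, 7])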
