The International Journal of Robotics Research

Online self-supervised learning for dynamic object segmentation


Abstract

This paper proposes a novel technique for the automatic segmentation of dynamic objects, solely using information from a single uncalibrated moving camera and without the need for manual labeling (or any human intervention, for that matter). Matching pairs of sparse features are extracted from subsequent frames, and the resulting optical flow information is divided into two classes (static or dynamic) using the RANSAC algorithm. This initial classification is then used to incrementally train a Gaussian process (GP) classifier that is then able to segment dynamic objects in new images. The GP hyperparameters are optimized online during navigation, with new data being gradually incorporated into the non-parametric model as it becomes available while redundant data is discarded, to maintain a near-constant computational cost. The result is a vector containing the probability that each pixel in the image belongs to a dynamic object, along with the corresponding uncertainty estimate of this classification. Experiments conducted using different robotic platforms, ranging from modified cars (driving at speeds of up to 50 km/h) to portable cameras (with a full six-degree-of-freedom range of motion), show promising results even in highly unstructured environments with cars, buses and pedestrians as dynamic objects. We also show how it is possible to cluster individual dynamic pixels into different object instances, and then further cluster those into semantically meaningful categories without any prior knowledge of the environment. Finally, we provide visual odometry results that testify to the proposed algorithm's ability to correctly segment (and then remove) dynamic objects from a scene, and how this translates into a more accurate motion estimate between frames.
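The RANSAC-based labeling step described in the abstract lends itself to a compact illustration. The sketch below is one plausible instantiation in Python with OpenCV, not the authors' implementation: it assumes ORB features and a fundamental-matrix model for the camera-induced motion (the abstract specifies neither), and it treats RANSAC inliers as static and outliers as dynamic, which is the kind of self-generated labeling the paper then feeds to the GP classifier.

```python
import cv2
import numpy as np

def label_flow_ransac(prev_gray, curr_gray, ransac_thresh=1.0):
    """Label sparse feature matches between two frames as static/dynamic.

    A minimal sketch, assuming ORB features and a fundamental-matrix
    motion model; the feature detector, motion model, and threshold are
    illustrative choices, not taken from the paper.
    """
    # Detect and match sparse features between consecutive frames.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.empty((0, 2)), np.empty((0, 2))

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 8:  # fundamental-matrix estimation needs >= 8 points
        return np.empty((0, 2)), np.empty((0, 2))

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC fits the dominant (camera-induced) epipolar geometry:
    # inliers move consistently with the ego-motion and are labeled
    # static; outliers move independently and are labeled dynamic.
    F, mask = cv2.findFundamentalMat(
        pts1, pts2, cv2.FM_RANSAC, ransac_thresh, 0.99)
    if F is None:
        return np.empty((0, 2)), np.empty((0, 2))
    inliers = mask.ravel().astype(bool)

    static_pts = pts2[inliers]     # training label: static
    dynamic_pts = pts2[~inliers]   # training label: dynamic
    return static_pts, dynamic_pts
```

In the pipeline the abstract describes, these automatically labeled points would serve as incremental training data for the GP classifier, so no manual annotation enters the loop at any stage.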
