Sensors (Basel, Switzerland)

Robust Cylindrical Panorama Stitching for Low-Texture Scenes Based on Image Alignment Using Deep Learning and Iterative Optimization



Abstract

Cylindrical panorama stitching is able to generate high-resolution images of a scene with a wide field-of-view (FOV), making it a useful scene representation for applications like environmental sensing and robot localization. Traditional image stitching methods based on hand-crafted features are effective for constructing a cylindrical panorama from a sequence of images when the scene contains sufficient reliable features. However, these methods are unable to handle low-texture environments where no reliable feature correspondences can be established. This paper proposes a novel two-step image alignment method based on deep learning and iterative optimization to address the above issue. In particular, a lightweight end-to-end trainable convolutional neural network (CNN) architecture called ShiftNet is proposed to estimate the initial shifts between images, which are further optimized in a sub-pixel refinement procedure based on a specified camera motion model. Extensive experiments on a synthetic dataset, rendered photo-realistic images, and real images were carried out to evaluate the performance of our proposed method. Both qualitative and quantitative experimental results demonstrate that cylindrical panorama stitching based on our proposed image alignment method leads to significant improvements over traditional feature-based methods and recent deep-learning-based methods in challenging low-texture environments.
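Neither ShiftNet nor the paper's refinement procedure is given in this abstract, but the two-step idea (a coarse whole-image shift estimate, then a sub-pixel correction) can be illustrated with classical tools. The sketch below uses phase correlation as a stand-in for the learned coarse estimator and a closed-form parabolic peak fit as a proxy for the iterative sub-pixel refinement; all function names are illustrative, not the authors' API.

```python
import numpy as np

def coarse_shift(ref, img):
    """Integer-pixel shift of `img` relative to `ref` via phase correlation
    (a classical stand-in for the learned coarse estimator)."""
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dims = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    shift[shift > dims / 2] -= dims[shift > dims / 2]  # wrap to signed shifts
    return shift, corr, peak

def refine_subpixel(corr, peak):
    """Sub-pixel correction from a parabolic fit around the correlation peak
    (a closed-form proxy for the paper's iterative refinement step)."""
    def parabola(c_minus, c_zero, c_plus):
        denom = c_minus - 2.0 * c_zero + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom
    p0, p1 = peak
    n0, n1 = corr.shape
    d0 = parabola(corr[(p0 - 1) % n0, p1], corr[p0, p1], corr[(p0 + 1) % n0, p1])
    d1 = parabola(corr[p0, (p1 - 1) % n1], corr[p0, p1], corr[p0, (p1 + 1) % n1])
    return np.array([d0, d1])

# Demo: recover a known circular shift between two synthetic images.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, (7, -4), axis=(0, 1))  # img is ref shifted by (7, -4)
shift, corr, peak = coarse_shift(ref, img)
total = shift + refine_subpixel(corr, peak)
print(total)  # close to [7, -4]
```

In the paper the coarse step is learned precisely because correlation-style estimators degrade in low-texture scenes; the refinement step would additionally constrain the shifts with the cylindrical camera motion model rather than treat each image pair independently.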
