
Omnidirectional Multicamera Video Stitching Using Depth Maps



Abstract

Omnidirectional vision has recently attracted considerable attention within the computer vision community. The popularity of cameras able to capture 360-degree views has increased in the last few years. A significant number of these cameras are composed of multiple individual cameras that capture images or videos, which are stitched together in a later postprocessing stage. Stitching strategies have the complex objective of seamlessly joining the images, so that the viewer has the feeling that the panorama was captured from a single location. Conventional approaches either assume that the world is a simple sphere around the camera, which leads to visible misalignments in the final panoramas, or use feature-based stitching techniques that do not exploit the rigidity of multicamera systems. In this paper, we propose a new stitching pipeline based on state-of-the-art techniques for both online and offline applications. The goal is to stitch the images by taking advantage of the available information about the multicamera system and the environment. Exploiting the spatial information of the scene helps to achieve significantly better results. For the online case, sparse spatial data can be obtained from a simultaneous localization and mapping process; for the offline case, it is estimated from a 3-D reconstruction of the scene. The available information is represented as depth maps, which condense it into a compact form and allow complex shapes to be represented easily. The new pipelines proposed for both the online and offline cases are compared, visually and numerically, against conventional approaches using a real data set. The data set was collected in a challenging underwater scene with a custom-designed multicamera system. The results obtained surpass those of conventional approaches.
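
To illustrate the core idea the abstract describes, the sketch below (a minimal illustration, not the authors' implementation; the intrinsics `K`, camera pose `(R, t)`, panorama size, and depth values are all hypothetical) back-projects a pixel with its measured depth into the rig frame and maps it to equirectangular panorama coordinates. Contrasting it with a fixed assumed depth shows why the conventional "world is a sphere" assumption misplaces scene points that are closer or farther than the assumed radius.

```python
# Minimal sketch of depth-aware reprojection for panorama stitching.
# Assumes a pinhole camera with intrinsics K and pose (R, t) expressed
# in the multicamera rig frame, plus a per-pixel metric depth value.
import numpy as np

def backproject_to_rig(u, v, depth, K, R, t):
    """Back-project pixel (u, v) with metric depth into the rig frame."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    point_cam = ray_cam * (depth / ray_cam[2])          # scale ray to the measured depth
    return R @ point_cam + t                            # express the point in rig coordinates

def to_equirectangular(point, pano_width, pano_height):
    """Map a 3-D point in the rig frame to equirectangular panorama pixel coords."""
    x, y, z = point / np.linalg.norm(point)
    lon = np.arctan2(x, z)            # longitude in [-pi, pi]
    lat = np.arcsin(-y)               # latitude in [-pi/2, pi/2], y pointing down
    px = (lon / np.pi + 1.0) * 0.5 * pano_width
    py = (0.5 - lat / np.pi) * pano_height
    return px, py

# Hypothetical camera: 800 px focal length, 10 cm offset from the rig centre.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.10, 0.0, 0.0])

# With the true depth, the pixel lands where the scene point actually is;
# with a fixed assumed radius (the spherical-world assumption) it lands
# elsewhere, which is the source of the misalignments the paper targets.
p_depth  = backproject_to_rig(400, 260, depth=1.5, K=K, R=R, t=t)
p_sphere = backproject_to_rig(400, 260, depth=5.0, K=K, R=R, t=t)
print(to_equirectangular(p_depth, 4096, 2048))
print(to_equirectangular(p_sphere, 4096, 2048))
```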

