ISPRS International Journal of Geo-Information

An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

Abstract

Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. This technological development has, in turn, prompted efforts to enhance mechanisms for registering virtual objects in real-world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to accurately describe the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect geometric and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Convolutional Network (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in each video frame to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely with the real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real-world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.
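
The abstract describes a pipeline of key-point detection, FCN-based layout extraction, scene tracking, camera-pose estimation, and virtual-object fusion. The following is a minimal sketch of the tracking and registration step only, written with OpenCV rather than the authors' implementation; the function name, the camera intrinsics K and distortion coefficients dist, the physical target size, and the anchor point taken to stand in for an FCN-derived layout coordinate are all assumptions for illustration.

```python
# A rough, hypothetical sketch of the tracking/registration step: ORB key points
# matched between a planar reference image and the current video frame, camera
# pose recovered with RANSAC PnP, and a virtual anchor point projected into the
# frame. All inputs (K, dist, target size, anchor_3d) are assumed, not taken
# from the paper.
import cv2
import numpy as np

def register_virtual_anchor(ref_img, frame, K, dist,
                            target_w_m, target_h_m, anchor_3d):
    """ref_img, frame: 8-bit grayscale images; K, dist: camera intrinsics and
    distortion coefficients; target_w_m, target_h_m: physical size of the
    reference scene plane in metres; anchor_3d: a 3D point (e.g. a layout
    corner) expressed in the reference plane's coordinate system."""
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(ref_img, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)
    if des_ref is None or des_frm is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_frm),
                     key=lambda m: m.distance)[:200]
    if len(matches) < 6:
        return None

    # Map reference-image pixels to metric 3D points on the target plane (z = 0).
    h, w = ref_img.shape[:2]
    obj_pts = np.float32([(kp_ref[m.queryIdx].pt[0] / w * target_w_m,
                           kp_ref[m.queryIdx].pt[1] / h * target_h_m,
                           0.0) for m in matches])
    img_pts = np.float32([kp_frm[m.trainIdx].pt for m in matches])

    # Estimate the camera pose relative to the reference plane.
    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, dist)
    if not ok:
        return None

    # Project the virtual object's anchor point into the current frame,
    # where a renderer would place the 3D model.
    pt2d, _ = cv2.projectPoints(np.float32([anchor_3d]), rvec, tvec, K, dist)
    return tuple(pt2d.ravel())
```

In a full AR-GIS pipeline, rvec and tvec would instead drive the rendering engine's virtual camera so that the whole 3D model, not just a single anchor point, is drawn with a perspective consistent with the estimated pose.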