2019 International Conference on Robotics and Automation (ICRA)

2D3D-Matchnet: Learning To Match Keypoints Across 2D Image And 3D Point Cloud



Abstract

Large-scale point clouds generated from 3D sensors are more accurate than their image-based counterparts. However, they are seldom used in visual pose estimation due to the difficulty of obtaining 2D image to 3D point cloud correspondences. In this paper, we propose 2D3D-MatchNet, an end-to-end deep network architecture that jointly learns descriptors for 2D and 3D keypoints from images and point clouds, respectively. As a result, we are able to directly match and establish 2D-3D correspondences between a query image and a 3D point cloud reference map for visual pose estimation. We create our Oxford 2D-3D Patches dataset from the Oxford RobotCar dataset, with ground-truth camera poses and 2D image to 3D point cloud correspondences, for training and testing the deep network. Experimental results verify the feasibility of our approach.
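The core idea of the abstract is that image-patch descriptors and point-cloud descriptors are learned into a shared embedding space, so 2D-3D correspondences can be established by nearest-neighbor search over descriptors. The sketch below illustrates only that matching stage, with a Lowe-style ratio test to keep distinctive matches; it is a minimal illustration, not the paper's implementation, and the toy descriptors, dimensionality, and `ratio` threshold are assumptions. In a full pipeline the resulting 2D-3D pairs would feed a PnP solver (e.g. RANSAC-based) to recover the camera pose.

```python
import numpy as np

def match_descriptors(img_desc, pcl_desc, ratio=0.8):
    """Match 2D image-keypoint descriptors to 3D point-cloud descriptors.

    Both sets are assumed to live in the same learned embedding space
    (the premise of cross-domain matching). Returns (i, j) index pairs
    where image keypoint i matches point-cloud keypoint j.
    """
    # Pairwise Euclidean distances via broadcasting: shape (N_img, N_pcl)
    d = np.linalg.norm(img_desc[:, None, :] - pcl_desc[None, :, :], axis=-1)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]      # best and second-best 3D candidate
        if row[j1] < ratio * row[j2]:     # accept only distinctive matches
            matches.append((i, j1))
    return matches

# Toy example with 4-D descriptors (real networks use e.g. 128-D vectors):
rng = np.random.default_rng(0)
pcl = rng.normal(size=(5, 4))                        # 5 point-cloud descriptors
img = pcl[[2, 0]] + 0.01 * rng.normal(size=(2, 4))   # noisy copies of entries 2 and 0
print(match_descriptors(img, pcl))                   # → [(0, 2), (1, 0)]
```

The ratio test discards image keypoints whose best and second-best 3D candidates are nearly equidistant, which is the standard way to suppress ambiguous correspondences before pose estimation.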


