International Conference on Intelligent Science and Big Data Engineering

APAC-Net: Unsupervised Learning of Depth and Ego-Motion from Monocular Video

Abstract

We propose a novel unsupervised method, the Attention-Pixel and Attention-Channel Network (APAC-Net), for monocular learning of scene depth and ego-motion estimation. Our model uses only monocular image sequences and requires no additional sensor information, such as IMU or GPS, for supervision. An attention mechanism is employed in APAC-Net to improve the network's efficiency; specifically, three attention modules are proposed to adjust feature weights during training. Moreover, to minimize the effect of noise produced during reconstruction, an image-reconstruction loss based on PSNR, L_(PSNR), is used to evaluate reconstruction quality. In addition, because depth estimation tends to fail for objects close to the camera, a temporal-consistency loss L_(Temp) between adjacent frames and a scale-based loss L_(Scale) across different scales are proposed. Experimental results show that APAC-Net performs well on both the depth and ego-motion tasks and even outperforms prior methods on several metrics on KITTI and Cityscapes.
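
The abstract does not specify the three attention modules, but the network's name points to channel-wise and pixel-wise feature reweighting. As a minimal illustrative sketch only, here is a generic squeeze-and-excitation-style channel gate and a per-pixel spatial gate in PyTorch; these are not the authors' exact modules, and all class names are hypothetical:

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel attention: reweight each feature channel by a
    learned gate computed from globally pooled statistics."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # (B, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)   # per-channel weights, broadcast over H, W

class PixelAttention(nn.Module):
    """Generic pixel (spatial) attention: a 1x1 conv yields a per-pixel
    weight map that rescales all channels at each location."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)   # per-pixel weights, broadcast over C
```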
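The abstract names the losses L_(PSNR), L_(Temp), and L_(Scale) without giving their formulas. A minimal sketch of plausible forms, assuming L_(PSNR) maximizes the reconstruction's PSNR by minimizing its negative, L_(Temp) penalizes depth disagreement between adjacent frames, and L_(Scale) compares multi-scale depth predictions after upsampling to the finest resolution; the function names are hypothetical:

```python
import torch
import torch.nn.functional as F

def psnr_loss(recon, target, max_val=1.0, eps=1e-8):
    """Hypothetical L_PSNR: PSNR = 10*log10(max_val^2 / MSE), so a better
    reconstruction is encouraged by minimizing the negative PSNR."""
    mse = F.mse_loss(recon, target) + eps
    return -10.0 * torch.log10(max_val ** 2 / mse)

def temporal_consistency_loss(depth_t, depth_t1):
    """Hypothetical L_Temp: L1 difference between depth maps of adjacent
    frames. A full version would first warp depth_t1 into frame t using
    the predicted ego-motion; the direct comparison is a placeholder."""
    return F.l1_loss(depth_t, depth_t1)

def scale_consistency_loss(multi_scale_depths):
    """Hypothetical L_Scale: upsample each coarser depth map to the finest
    resolution and penalize its L1 disagreement with the finest map.
    Expects a list with the finest-resolution prediction first."""
    finest = multi_scale_depths[0]
    losses = [
        F.l1_loss(
            F.interpolate(d, size=finest.shape[-2:], mode="bilinear",
                          align_corners=False),
            finest,
        )
        for d in multi_scale_depths[1:]
    ]
    return torch.stack(losses).mean()
```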
