...
IEEE Transactions on Network Science and Engineering

Global Visual and Semantic Observations for Outdoor Robot Localization



Abstract

Most approaches to robot visual localization rely on local features, global visual descriptors, or semantic information as observations. In this paper, the combination of global visual and semantic information is used as the landmark in the observation model of a Bayesian filter. By introducing an improved Gaussian Process into the visual observation model, the GP-Localize algorithm is extended to high-dimensional data, so that the Bayesian filter can exploit all spatiotemporally correlated historical data while keeping constant time and memory cost for persistent outdoor robot localization. The other contribution of this paper is that all of the above components are combined into a complete robot visual localization system, which is evaluated on two real-world outdoor datasets collected by an unmanned ground vehicle (UGV) and an unmanned aerial vehicle (UAV). The experimental results show that using the combined global visual and semantic observations yields higher accuracy than using a single feature type alone. With the combined features and the improved Gaussian Process approximation in the Bayes filter, the proposed system is more robust and practical than existing localization systems such as ORB-SLAM.
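To make the abstract's idea concrete, the following is a minimal sketch (not the authors' code) of how a Gaussian Process observation model over a combined global-visual + semantic descriptor could be plugged into a Bayes filter, here instantiated as a particle-filter reweighting step. The feature construction, the isotropic likelihood with `sigma`, and the use of scikit-learn's exact `GaussianProcessRegressor` in place of the paper's improved GP approximation are all assumptions for illustration only.

```python
# A minimal sketch, assuming a GP that maps robot position (x, y) to a combined
# global-visual + semantic descriptor, used as the observation model of a
# particle filter. Training data and feature dimensions below are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# ---- Offline: fit the GP observation model on mapped data ------------------
# train_poses: (N, 2) ground positions; train_features: (N, D) concatenation of
# a global visual descriptor and a semantic descriptor (toy values here).
train_poses = rng.uniform(0.0, 50.0, size=(200, 2))
train_features = np.hstack([np.sin(train_poses / 10.0), np.cos(train_poses / 7.0)])

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=5.0) + WhiteKernel(1e-2),
    normalize_y=True,
)
gp.fit(train_poses, train_features)

# ---- Online: one Bayes-filter (particle filter) observation update ---------
def observation_update(particles, weights, observed_feature, sigma=0.3):
    """Reweight particles by the likelihood of the observed combined feature
    under the GP-predicted feature at each particle's position."""
    predicted = gp.predict(particles)                       # (M, D) predicted descriptors
    sq_err = np.sum((predicted - observed_feature) ** 2, axis=1)
    likelihood = np.exp(-0.5 * sq_err / sigma ** 2)         # isotropic Gaussian likelihood
    weights = weights * likelihood
    return weights / (weights.sum() + 1e-12)

# Example: 500 particles, observation generated near the true pose (25, 25).
particles = rng.uniform(0.0, 50.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
true_pose = np.array([[25.0, 25.0]])
observed = np.hstack([np.sin(true_pose / 10.0), np.cos(true_pose / 7.0)])[0]
weights = observation_update(particles, weights, observed)
print("weighted pose estimate:", weights @ particles)
```

In this sketch the GP replaces a hand-crafted landmark map: particles whose predicted descriptor matches the current global visual and semantic observation keep more weight. The paper's constant-time-and-memory property comes from its improved GP approximation over spatiotemporally correlated historical data, which is not reproduced here.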
