Proceedings of the 2016 IEEE/ION Position, Location and Navigation Symposium

A vision-based indoor positioning method with high accuracy and efficiency based on self-optimized-ordered visual vocabulary



Abstract

In this paper, we present a novel indoor positioning method with high accuracy and efficiency that requires only the camera of a mobile device. The proposed method takes advantage of a novel visual vocabulary, the Self-Optimized-Ordered (SOO) visual vocabulary, built under the Bag-of-Visual-Words framework to exploit deep connections between physical locations and feature clusters. Additionally, related techniques that improve positioning performance, such as feature selection and visual word filtering, are designed and examined. Evaluation results show that when the number of training images varies from 20 to 640, our method saves up to 80% of the processing time in both phases compared to two existing vision-based indoor positioning methods that use state-of-the-art image query techniques. Meanwhile, the average image query accuracy of our method across all evaluated indoor scenes is above 95%, which greatly improves positioning accuracy and makes the method a very suitable option for smartphone-based indoor positioning and navigation.
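
The abstract gives no implementation details of the SOO vocabulary itself, so the following is only a minimal sketch of the generic Bag-of-Visual-Words query pipeline that such a method builds on: local features are clustered into a visual vocabulary, each reference image tagged with a known indoor position is summarized as a visual-word histogram, and a query image is located by finding the nearest reference histogram. The ORB features, the k-means vocabulary size, and all file paths and location labels below are illustrative assumptions, not the authors' configuration.

```python
# Minimal Bag-of-Visual-Words location-query sketch (illustration only).
# This is NOT the paper's SOO vocabulary; it shows the baseline pipeline:
# local features -> k-means vocabulary -> per-image histograms ->
# nearest-histogram lookup of the reference location.
# Image paths and location labels are hypothetical placeholders.

import cv2
import numpy as np
from sklearn.cluster import KMeans

ORB = cv2.ORB_create(nfeatures=500)


def describe(image_path):
    """Extract ORB descriptors from one image (float32 for k-means)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, desc = ORB.detectAndCompute(img, None)
    return desc.astype(np.float32)


def build_vocabulary(training_paths, k=200):
    """Cluster all training descriptors into k visual words."""
    all_desc = np.vstack([describe(p) for p in training_paths])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)


def bovw_histogram(desc, vocab):
    """Quantize descriptors into a normalized visual-word histogram."""
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / (hist.sum() + 1e-9)


def locate(query_path, vocab, ref_hists, ref_locations):
    """Return the location of the reference image closest to the query."""
    q = bovw_histogram(describe(query_path), vocab)
    dists = [np.linalg.norm(q - h) for h in ref_hists]
    return ref_locations[int(np.argmin(dists))]


if __name__ == "__main__":
    # Hypothetical reference set: images tagged with known indoor positions.
    refs = [("corridor_a.jpg", (12.0, 3.5)), ("lobby.jpg", (0.0, 0.0))]
    paths, locs = zip(*refs)
    vocab = build_vocabulary(list(paths), k=50)
    hists = [bovw_histogram(describe(p), vocab) for p in paths]
    print(locate("query.jpg", vocab, hists, list(locs)))
```

The paper's contribution, per the abstract, lies in replacing the plain k-means vocabulary and exhaustive histogram comparison above with a self-optimized, ordered vocabulary plus feature selection and visual word filtering, which is what yields the reported speed and accuracy gains.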
