Annual Conference on Towards Autonomous Robotic Systems

Learning to Listen to Your Ego-(motion): Metric Motion Estimation from Auditory Signals

Abstract

This paper is about robot ego-motion estimation relying solely on acoustic sensing. By equipping a robot with microphones, we investigate the possibility of employing the noise generated by the motors and actuators of the vehicle to estimate its motion. Audio-based odometry is not affected by the scene's appearance, lighting conditions, or structure. This makes sound a compelling auxiliary source of information for ego-motion modelling in environments where more traditional methods, such as those based on visual or laser odometry, are particularly challenged. By leveraging multi-task learning and deep architectures, we provide a regression framework able to estimate the linear and angular velocity at which the robot has been travelling. Our experimental evaluation, conducted on approximately two hours of data collected with an unmanned outdoor field robot, demonstrated an absolute error lower than 0.07 m/s and 0.02 rad/s for the linear and angular velocity, respectively. When compared to a baseline approach that makes use of a single-task learning scheme, our system shows an improvement of up to 26% in ego-motion estimation.
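
The abstract does not include code, but the multi-task setup it describes (a shared deep encoder regressing both velocity components from audio) can be illustrated concretely. Below is a minimal PyTorch sketch assuming log-mel spectrogram inputs; the input shape, layer sizes, loss weighting, and all names (AudioEgoMotionNet, multitask_loss) are illustrative assumptions, not details from the paper.

```python
# Hypothetical multi-task ego-motion regressor over audio spectrograms.
# Architecture and shapes are illustrative; the paper does not specify them.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEgoMotionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared convolutional encoder over a 1-channel spectrogram.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )
        feat_dim = 32 * 4 * 4
        # One regression head per task: linear (m/s) and angular (rad/s) velocity.
        self.linear_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.angular_head = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, spec):
        z = self.encoder(spec)  # shared representation for both tasks
        return self.linear_head(z), self.angular_head(z)

def multitask_loss(pred_v, pred_w, v, w, alpha=1.0):
    # Sum of per-task regression losses; alpha balances the two tasks.
    return F.mse_loss(pred_v, v) + alpha * F.mse_loss(pred_w, w)

# Dummy batch: 8 audio snippets as 64 mel bins x 128 time frames.
model = AudioEgoMotionNet()
spec = torch.randn(8, 1, 64, 128)
v_true, w_true = torch.randn(8, 1), torch.randn(8, 1)
pred_v, pred_w = model(spec)
multitask_loss(pred_v, pred_w, v_true, w_true).backward()
```

The key design point, per the abstract, is that both velocity targets share one representation; in a single-task baseline each target is trained with its own separate network, which is the comparison behind the reported improvement of up to 26%.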
