IEEE Transactions on Automation Science and Engineering

Development of a Human–Robot Hybrid Intelligent System Based on Brain Teleoperation and Deep Learning SLAM



Abstract

To achieve better navigation performance for a mobile robot in unknown environments, this paper presents a novel human–robot hybrid system incorporating motor-imagery (MI)-based brain teleoperation control, in which deep-learning-based active perception is developed within a simultaneous localization and mapping (SLAM) framework. By applying deep-learning-based object recognition during red–green–blue–depth (RGB-D) data acquisition, the designed SLAM approach can select valid feature points effectively, and the speed of displacement tracking is improved by combining the oriented FAST and rotated BRIEF (ORB) SLAM algorithm with the optical flow method. The global trajectory map can also be refined using graph-based nonlinear error optimization. In addition, to flexibly connect human intentions to the control commands of the developed mobile robot, a common spatial pattern (CSP)-based support vector machine (SVM) classification algorithm is proposed, so that control commands can be obtained directly from human electroencephalograph (EEG) signals, which are preanalyzed and classified using the phenomena of event-related synchronization/desynchronization (ERS/ERD). Experiments involving several operators have verified the effectiveness of the proposed framework in actual unstructured environments.

Note to Practitioners: This paper is motivated by the exploration of mobile robots remotely controlled by people with disabilities in unstructured environments that include unknown moving objects in the background. In conventional approaches, image feedback and brain–computer-interface EEG signals evoked by visual stimuli are used to guide the motion of the robots. In this paper, the MI of the left or right hand is adopted as the input to the brain teleoperation system, and the EEG signals are classified by the designed algorithm into two categories representing two robot control commands.
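The paper does not include code for the CSP-based SVM stage, but the standard pipeline it names — CSP spatial filtering of two-class MI trials, log-variance features, then a binary SVM mapped to the two control commands — can be sketched roughly as follows. This is a minimal illustration on synthetic data; all function names, array shapes, and parameters here are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial pattern filters from two classes of EEG trials.
    trials_*: array of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T        # whitening matrix
    w, V = np.linalg.eigh(P @ Ca @ P.T)
    order = np.argsort(w)[::-1]
    W = V[:, order].T @ P                               # filters as rows
    # Keep the most discriminative filter pairs (largest/smallest eigenvalues).
    return W[np.r_[:n_pairs, -n_pairs:0]]

def log_var_features(trials, W):
    """Normalized log-variance of the spatially filtered signals."""
    Z = np.einsum('fc,ncs->nfs', W, trials)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

if __name__ == "__main__":
    # Synthetic left/right MI surrogate: each class has one high-power channel.
    rng = np.random.default_rng(0)
    def make_trials(strong_ch, n=30, c=6, s=200):
        X = rng.standard_normal((n, c, s))
        X[:, strong_ch] *= 4.0
        return X
    left, right = make_trials(0), make_trials(1)
    W = csp_filters(left, right)
    feats = np.vstack([log_var_features(left, W), log_var_features(right, W)])
    labels = np.r_[np.zeros(30), np.ones(30)]           # 0/1 -> two commands
    clf = SVC(kernel='linear').fit(feats, labels)
    print("training accuracy:", clf.score(feats, labels))
```

In a real BCI loop, the two predicted labels would be mapped to the two robot commands (e.g. turn left / turn right); the ERS/ERD preanalysis mentioned in the abstract would correspond to band-pass filtering the mu/beta rhythms before the CSP step.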
In addition, a deep-learning-based SLAM is designed to analyze the image and depth information of the environment provided by the robot's RGB-D sensor, and the results are converted into a 3-D map that localizes the robot and helps the operator understand the operating environment. It has been verified that the presented deep-learning-based SLAM is more efficient and robust than traditional SLAM. The feasibility of the system is demonstrated by a set of experiments in a corridor environment.
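The graph-based error optimization that refines the global trajectory can be illustrated in a simplified 2-D, translation-only form (the paper's full SLAM back end also optimizes rotations, which is what makes the problem nonlinear). The sketch below is a generic Gauss-Newton pose-graph solver, not the authors' code: poses are 2-D points, edges carry relative-displacement measurements from odometry plus a loop closure, and the loop closure redistributes accumulated drift.

```python
import numpy as np

def optimize_pose_graph(n_poses, edges, n_iters=10):
    """Gauss-Newton over a 2-D translation-only pose graph.
    edges: list of (i, j, z, w) where z is the measured displacement
    x_j - x_i and w is the edge weight (inverse measurement variance)."""
    x = np.zeros((n_poses, 2))
    # Initial guess by dead reckoning along sequential odometry edges.
    for i, j, z, _ in edges:
        if j == i + 1:
            x[j] = x[i] + z
    for _ in range(n_iters):
        H = np.zeros((2 * n_poses, 2 * n_poses))
        b = np.zeros(2 * n_poses)
        for i, j, z, w in edges:
            e = (x[j] - x[i]) - z                 # residual of this edge
            # Jacobians: d e / d x_i = -I, d e / d x_j = +I.
            for a, s_a in ((i, -1.0), (j, 1.0)):
                for c, s_c in ((i, -1.0), (j, 1.0)):
                    H[2*a:2*a+2, 2*c:2*c+2] += w * s_a * s_c * np.eye(2)
                b[2*a:2*a+2] += w * s_a * e
        # Anchor pose 0 at the origin to remove the gauge freedom.
        H[:2, :] = 0.0; H[:, :2] = 0.0
        H[:2, :2] = np.eye(2); b[:2] = 0.0
        x += np.linalg.solve(H, -b).reshape(-1, 2)
    return x

if __name__ == "__main__":
    # Square path with 0.2 m of odometry drift on the third leg;
    # the loop-closure edge (3 -> 0) pulls the trajectory back into shape.
    edges = [(0, 1, np.array([1.0, 0.0]), 1.0),
             (1, 2, np.array([0.0, 1.0]), 1.0),
             (2, 3, np.array([-1.2, 0.0]), 1.0),
             (3, 0, np.array([0.0, -1.0]), 1.0)]
    print(optimize_pose_graph(4, edges))
```

Because this toy problem is linear, Gauss-Newton converges in one iteration; with rotations included, the same loop relinearizes the residuals at every step, which is the "nonlinear" part of the error optimization the abstract refers to.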
