IFAC Conference on Sensing, Control and Automation Technologies for Agriculture

Towards Active Robotic Vision in Agriculture: A Deep Learning Approach to Visual Servoing in Occluded and Unstructured Protected Cropping Environments


Abstract

3D Move To See (3DMTS) is a multi-perspective visual servoing method for unstructured and occluded environments, such as those encountered in robotic crop harvesting. This paper presents Deep-3DMTS, a deep learning method that provides a single-perspective alternative to 3DMTS through the use of a Convolutional Neural Network (CNN). The method is developed and validated in simulation against the standard 3DMTS approach. Deep-3DMTS is shown to perform equivalently to the standard 3DMTS baseline in guiding the end effector of a robotic arm to improve the view of occluded fruit (sweet peppers): the final end effector position is within 11.4 mm of the baseline, and the fruit size in the image increases by a factor of 17.8, compared to 16.8 for the baseline (on average).
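To make the idea concrete, below is a minimal, hypothetical sketch of the single-perspective servoing step the abstract describes: a CNN takes one RGB image and predicts the direction in which to move the end effector, standing in for the multi-camera gradient estimate used by standard 3DMTS. The architecture, layer sizes, step size, and function names here are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation): a CNN that maps a single
# RGB image to a unit 3D direction for the end effector, in the spirit of
# replacing 3DMTS's multi-camera gradient estimate with a learned predictor.
import torch
import torch.nn as nn


class DirectionCNN(nn.Module):
    """Regresses a unit 3D motion direction from one camera image."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)  # x, y, z components of the direction

    def forward(self, img):
        x = self.features(img).flatten(1)
        v = self.head(x)
        return v / (v.norm(dim=1, keepdim=True) + 1e-8)  # normalize to unit length


def servo_step(model, image, step_size=0.01):
    """One servoing increment: predict a direction and scale it into a small
    Cartesian offset for the robot end effector."""
    with torch.no_grad():
        direction = model(image.unsqueeze(0))[0]
    return step_size * direction


# Example usage with a dummy 224x224 RGB image normalized to [0, 1].
model = DirectionCNN()
frame = torch.rand(3, 224, 224)
delta = servo_step(model, frame)
print(delta)  # small 3D move intended to improve the view of the target fruit
```

In this sketch the predicted direction plays the role of the objective gradient (e.g. increasing the visible fruit size in the image) that 3DMTS estimates from its camera array; iterating servo_step moves the end effector toward a less occluded view.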
