《光学精密工程》 (Optics and Precision Engineering)

Fast Hand-Eye Calibration Method for a Dual-Robot System (双机器人系统的快速手眼标定方法)


Abstract

For the hand-eye calibration of a dual-robot vision measurement system, a method based on machine vision is presented that computes the pose and center coordinates of the target robot's flange. The target robot moves its flange to a suitable pose, the camera on the vision robot captures an image, and the elliptical contour of the flange is extracted from the image to solve for the flange pose and circle-center coordinates in the camera coordinate system; the pin-hole location constraint on the flange then yields the transform H1 between the camera coordinate system and the target flange coordinate system. The transforms H2 and H4 between each robot's flange coordinate system and its base coordinate system are read from the respective robot controllers, and the transform H3 between the two robot base coordinate systems is derived from single-axis rotations of the robots. These transforms close a kinematic loop from which the hand-eye transform HCG is computed. Finally, the target flange is moved to several coplanar poses, an image is taken at each pose, and image fusion is used to improve the calibration accuracy. Experimental results indicate that the calibration precisions of a single image and of coplanar poses with image fusion are 0.345° and 0.187°, respectively, which satisfies the requirements of dual-robot systems for vision-guided measurement.
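The chained-transform step above can be illustrated with a small sketch. The Python snippet below (a minimal sketch, not the paper's implementation) composes 4×4 homogeneous transforms to close the loop and recover a hand-eye transform HCG. The frame conventions (H1: camera→target flange, H2: target flange→target base, H3: target base→vision base, H4: vision flange→vision base) and all function names are assumptions made for illustration; the actual composition order depends on the frame definitions in the paper.

```python
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.asarray(t).ravel()
    return H

def invert(H):
    """Invert a rigid-body transform using the rotation transpose (no general inverse)."""
    R, t = H[:3, :3], H[:3, 3]
    Hi = np.eye(4)
    Hi[:3, :3] = R.T
    Hi[:3, 3] = -R.T @ t
    return Hi

def hand_eye_from_loop(H1, H2, H3, H4):
    """Close the kinematic loop to estimate the camera pose in the vision-robot
    flange frame (HCG), under the assumed conventions:
      H1: camera        -> target flange   (from the ellipse / pin-hole fit)
      H2: target flange -> target base     (target robot controller reading)
      H3: target base   -> vision base     (from single-axis motion calibration)
      H4: vision flange -> vision base     (vision robot controller reading)
    """
    return invert(H4) @ H3 @ H2 @ H1
```

Repeating this estimate for several coplanar flange poses and fusing the images, as described above, is what reduces the reported orientation error from 0.345° to 0.187°.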
