
A novel approach to active vision systems: Modelling, control and real time tracking.


Abstract

Recent hardware developments have rendered controlled active vision a practical option for a broad range of problems, spanning applications as diverse as intelligent vehicle highway systems, robot-assisted surgery, 3D reconstruction, inspection, vision-assisted grasping, microassembly of micro-electromechanical systems (MEMS), and automated spacecraft docking. However, realizing this potential requires a framework for synthesizing robust active vision systems capable of moving beyond carefully controlled environments.

This thesis studies a novel approach to the modeling and control of active vision systems. We show how recently developed robust identification techniques can be used to find a family of models for an active vision system in a unified sense, treating the cameras, motors, and image processing hardware as a single system without requiring any calibration.

Moreover, we use this family of models to design a robust controller via μ-synthesis techniques. The resulting controller is shown to perform satisfactorily in the presence of uncertainty, noise, changing camera parameters (zoom values ranging from the minimum to nearly the maximum zoom level), uncertain time delays (largely due to the time required by the image processing algorithm to locate the object in the scene), unmodeled dynamics, blurring of the image, and similar disturbances.

These results are experimentally validated on a UniSight/BiSight robotic head-eye platform.
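As an illustrative sketch only (the standard multiplicative-uncertainty formulation from robust control, not necessarily the exact parameterization used in the thesis), an identified family of models with an uncertain input delay can be written as

    \mathcal{G} = \left\{ G_0(s)\,\bigl[1 + W_m(s)\,\Delta(s)\bigr]\, e^{-\tau s} \;:\; \|\Delta\|_\infty \le 1,\ \tau \in [0,\ \tau_{\max}] \right\},

and the corresponding μ-synthesis design objective as

    K^\star = \arg\min_{K\ \text{stabilizing}}\ \sup_{\omega}\ \mu_{\boldsymbol{\Delta}}\!\left( F_\ell\bigl(P(j\omega),\, K(j\omega)\bigr) \right),

where G_0 is a nominal identified model, W_m is an uncertainty weight covering the identified family, P is a generalized plant assembled from G_0, W_m, and performance weights, F_\ell denotes the lower linear fractional transformation, and robust performance is achieved when the peak structured singular value μ falls below 1. The symbols G_0, W_m, τ_max, and P are assumptions introduced here for illustration; they are not quantities reported in the abstract.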
