Wissenschaftliche Arbeiten der Fachrichtung Geodäsie und Geoinformatik der Leibniz Universität Hannover

Integrated Estimation of UAV Image Orientation with a Generalised Building Model

Abstract

The estimation of the position and attitude of a camera, referred to as image orientation in photogrammetry, is an important task for determining where a platform is located in the world or relative to objects. Unmanned aerial vehicles (UAVs), an increasingly popular platform, have led to new applications, some of which involve low flight altitudes and specific requirements such as low weight and low cost of the sensors. Image orientation requires additional information to retrieve not only relative measurements but also position and attitude in a world coordinate system. Given the requirements on the sensors, and especially for flights between obstacles in urban environments, the classically used information from Global Navigation Satellite Systems (GNSS) and Inertial Measurement Units (IMU), or from specially marked ground control points (GCP), is often inaccurate or unavailable. The idea addressed in this thesis is to improve UAV image orientation based on an existing generalised building model. Such models are increasingly available and provide ground control that helps to compensate for inaccurate or unavailable GNSS camera positions and for drift effects in the image orientation. Typically, for UAV applications in street corridors, the geometric accuracy and level of detail of such models are low compared to the high accuracy and high geometric resolution of the image measurements. Therefore, although the building model differs from the observed scene due to its generalisation, relations between the photogrammetric measurements and the building model are formulated and used in the determination of image orientation. Three approaches for assigning tie points to model planes in object space are presented, and both a sliding-window and a global hybrid bundle adjustment are set up for image orientation aided by the generalised building model. The assignments lead to fictitious observations of the distances of tie points to the model planes and are iteratively refined in the bundle adjustment. Experiments with an image sequence captured while flying between buildings show that the image orientation improves from the metre range, obtained with GNSS measurements alone, to the decimetre range when the generalised building model is used with the simplest assignment method based on point-to-plane distances. No improvement is observed when planes are searched for in the tie point cloud to indirectly establish the relations of tie points to model planes. The results are compared to those obtained with a building model of higher detail, and systematic effects are investigated. In summary, the developed method is found to significantly improve UAV image orientation using a generalised building model.
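To make the role of these fictitious observations concrete, the following minimal Python sketch (an illustration only, not code from the thesis) computes weighted point-to-plane distances for tie points assigned to a model plane; the plane parameters, the assumed standard deviation and all names are hypothetical. Residuals of this kind would be stacked alongside the reprojection errors in a hybrid bundle adjustment.

    import numpy as np

    def point_to_plane_residuals(points, normal, d, sigma_plane=0.5):
        """Weighted signed distances of tie points to a model plane n·x + d = 0.
        sigma_plane is an assumed standard deviation expressing how far the
        generalised building model may deviate from the real facade."""
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)          # ensure a unit normal vector
        distances = points @ n + d         # signed point-to-plane distances
        return distances / sigma_plane     # weighted fictitious observations

    # Example: three tie points assigned to a facade plane at x = 10 m (assumed data)
    tie_points = np.array([[10.05, 2.0, 3.0],
                           [ 9.90, 4.0, 5.0],
                           [10.20, 1.0, 8.0]])
    print(point_to_plane_residuals(tie_points, normal=[1.0, 0.0, 0.0], d=-10.0))

Choosing a larger standard deviation for these fictitious observations down-weights the model constraint relative to the image measurements, which is one plausible way to reflect the generalisation of the building model described in the abstract.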
