Robust and Fast Global Image Orientation

Abstract

The estimation of image orientation (also called pose) has always played a crucial role in photogrammetry, since it is a fundamental prerequisite for subsequent tasks such as multi-view dense matching and the generation of DEMs and DSMs. In the computer vision community, the task is well known as Structure-from-Motion (SfM), reflecting the fact that image poses and the positions of object points are determined interdependently. Despite considerable efforts over the last decades, the problem has recently regained the interest of photogrammetrists due to the fast-growing number and variety of image sources. New challenges arise in accurately and efficiently orienting diverse image datasets, e.g. unordered datasets with a large number of images, or datasets comprising critical stereo pairs. The goal of this thesis is to develop a new, fast and robust method for the estimation of image orientation that is capable of coping with different types of datasets. To this end, particular attention is paid to the two most time-consuming steps of image orientation: (a) image matching and (b) the estimation process.

To accelerate image matching, a new method employing a random k-d forest is proposed to quickly obtain pairs of overlapping images from an unordered image set. Image matching and the estimation of relative orientation parameters are then performed only for pairs found to be very likely overlapping. To estimate the image poses in a time-efficient manner, a global image orientation strategy is advocated: the basic idea is to first solve the poses of all available images simultaneously and then carry out a single final bundle adjustment for refinement. The conventional two-step global approach is pursued in this work, separating the determination of the rotation matrices from that of the translation parameters; the former are solved with the popular method of Chatterjee and Govindu [2013], while the latter are estimated globally using a newly developed method that integrates both the relative translations and tie points. Tie points within triplets are first used to compute globally unified scale factors for each available pairwise relative translation; then, analogously to rotation estimation, the translations are determined by averaging the scaled relative translations.

To improve the robustness of the solution, this thesis also addresses outliers in the relative orientations (ROs), to which global image orientation approaches are particularly sensitive. A general method based on triplet compatibility with respect to the loop closure errors of relative rotations and translations is presented for detecting blunders in the ROs. Although this procedure eliminates many gross errors in the input ROs, it typically cannot detect blunders caused by repetitive structures and critical configurations such as inappropriate baselines (a very short baseline or a baseline parallel to the viewing direction). Therefore, another new method is proposed to eliminate wrong ROs resulting from repetitive structures and very short baselines. Two corresponding criteria indicating the quality of the ROs are introduced: repetitive structure is detected based on the counts of conjugate points of the various image pairs, while very short baselines are found by inspecting the intersection angles of corresponding image rays.
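As a rough illustration (not the thesis' exact formulation), the following Python sketch evaluates two such checks for a single image pair: the median intersection angle of corresponding rays as an indicator of a very short baseline, and a simple threshold on the number of conjugate points as a crude stand-in for the count-based repetitive-structure criterion. The function names and the thresholds MIN_ANGLE_DEG and MIN_MATCHES are illustrative assumptions.

```python
import numpy as np

# Illustrative thresholds -- assumed values, not taken from the thesis.
MIN_ANGLE_DEG = 2.0   # median ray intersection angle below this => "very short baseline"
MIN_MATCHES = 100     # suspiciously few conjugate points for a supposedly overlapping pair

def median_intersection_angle_deg(rays_i, rays_j):
    """Median angle (degrees) between corresponding viewing rays.

    rays_i, rays_j: (N, 3) arrays of unit ray directions of the conjugate
    points of one image pair, expressed in a common coordinate frame.
    """
    cosines = np.clip(np.sum(rays_i * rays_j, axis=1), -1.0, 1.0)
    return np.degrees(np.median(np.arccos(cosines)))

def ro_is_suspicious(rays_i, rays_j):
    """Flag the relative orientation of an image pair if either criterion fires."""
    short_baseline = median_intersection_angle_deg(rays_i, rays_j) < MIN_ANGLE_DEG
    few_matches = rays_i.shape[0] < MIN_MATCHES   # placeholder for the
    return short_baseline or few_matches          # conjugate-point-count criterion
```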
By analyzing these two criteria, incorrect ROs are detected and eliminated. As correct ROs of image pairs with a wider baseline that is nearly parallel to both viewing directions can still be valuable, a method to identify and keep these ROs is also part of this research. The proposed method is thoroughly validated and evaluated on various benchmarks, including ordered and unordered image sets as well as images with repetitive structures and inappropriate baselines. In particular,
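The triplet-compatibility idea mentioned above can be illustrated for the rotation part of the ROs: chaining the three relative rotations around a triplet should return approximately the identity, and a large closure angle indicates at least one blunder among the three ROs. The minimal sketch below assumes the convention that R_ij rotates from the frame of image i to that of image j; the tolerance LOOP_TOL_DEG and the function names are illustrative assumptions, and the thesis applies analogous closure tests to the relative translations as well.

```python
import numpy as np

LOOP_TOL_DEG = 2.0  # illustrative threshold on the rotation loop closure error

def rotation_closure_error_deg(R_ij, R_jk, R_ki):
    """Angle (degrees) by which R_ki @ R_jk @ R_ij deviates from the identity."""
    R_loop = R_ki @ R_jk @ R_ij
    cos_angle = np.clip((np.trace(R_loop) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

def consistent_triplet(R_ij, R_jk, R_ki, tol_deg=LOOP_TOL_DEG):
    """Accept a triplet only if its rotation loop closes within tol_deg.

    ROs that never appear in any consistent triplet can then be treated as
    blunders and removed before the global estimation.
    """
    return rotation_closure_error_deg(R_ij, R_jk, R_ki) < tol_deg
```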

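To make the translation-averaging step concrete, the following minimal least-squares sketch determines camera centres C_i from the conditions C_j - C_i ≈ s_ij * R_i^T * t_ij, assuming the global rotations R_i (world-to-camera) and the scale factors s_ij (unified via tie points in triplets) are already available. The convention for t_ij, the unweighted solver, the gauge fixing and all names are simplifying assumptions for illustration, not the estimator developed in the thesis.

```python
import numpy as np

def average_translations(n_images, pairwise, rotations):
    """Solve camera centres from scaled relative translations (linear least squares).

    n_images : number of images
    pairwise : list of (i, j, t_ij, s_ij), where t_ij is the direction from
               camera i to camera j expressed in image i's camera frame and
               s_ij is the unified scale factor for that pair
    rotations: list of 3x3 global rotation matrices R_i (world-to-camera)
    """
    rows, rhs = [], []
    for i, j, t_ij, s_ij in pairwise:
        # One condition C_j - C_i ≈ s_ij * R_i^T * t_ij, split into 3 equations.
        b = s_ij * rotations[i].T @ np.asarray(t_ij, dtype=float)
        for k in range(3):
            row = np.zeros(3 * n_images)
            row[3 * j + k] = 1.0
            row[3 * i + k] = -1.0
            rows.append(row)
            rhs.append(b[k])
    # Fix the gauge by pinning the first camera centre to the origin.
    for k in range(3):
        row = np.zeros(3 * n_images)
        row[k] = 1.0
        rows.append(row)
        rhs.append(0.0)
    centres, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return centres.reshape(n_images, 3)
```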