
Motion segmentation and dense reconstruction of scenes containing moving objects observed by a moving camera.



Abstract

We investigate two fundamental issues in Computer Vision: 2D motion segmentation and 3D dense shape reconstruction of a dynamic scene observed from a moving camera. The scene contains multiple rigid objects moving in a static background, while the camera undergoes general 3D rotation and translation. Our goal is to segment the video frames into 2D motion regions and static background areas, and then to reconstruct the dense 3D shape of both parts of the scene.

Motion segmentation of image sequences shot by a moving camera is inherently difficult, as the camera motion induces a displacement for all the image pixels. This camera motion is compensated for by a number of geometric constraints estimated between video frames. The pixels that cannot be compensated for by these constraints are classified as motion regions. A novel 3-view constraint is proposed to handle the cases where existing constraints do not work well. The geometric constraints are combined in a decision-tree-based method for segmenting the motion regions from the background area in each video frame.

After motion segmentation, the sparse 3D structure of the static background and the 3D camera motion are estimated by well-developed "Structure and Motion" (SaM) methods. The same SaM methods are applied to recover the 3D shape of moving objects from a moving camera, based on their relative motion. The object scale and motion, however, can only be solved up to an unknown scale, unless additional assumptions are available. In our scenario, a planar-motion assumption is introduced: the object motion trajectory must be parallel to a plane. With the aid of this assumption, the 3D object motion trajectory can be uniquely determined.

The sparse 3D reconstruction of the dynamic scene is then extended to a dense volumetric one. The whole scene is divided into a set of volume elements, termed voxels. Each voxel is assigned an object label which may change over time. The task of dense reconstruction is accomplished by a novel voxel coloring method that finds the optimal label assignment for each voxel, minimizing photo-motion variance measures between the voxels and the original images.
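As an illustration of the compensation test described above, the sketch below checks tracked correspondences against a single two-view epipolar constraint and flags the pixels that the background geometry cannot explain. This is only a minimal NumPy sketch of the general idea, not the dissertation's decision-tree combination of two-view and three-view constraints; the fundamental matrix F is assumed to have been estimated robustly (e.g. by RANSAC) from background points, and the threshold tau is a hypothetical value in squared pixels.

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """Sampson (first-order geometric) distance of correspondences x1 <-> x2
    to the epipolar constraint x2^T F x1 = 0.
    x1, x2: (N, 2) pixel coordinates; F: (3, 3) fundamental matrix."""
    n = x1.shape[0]
    x1h = np.hstack([x1, np.ones((n, 1))])   # homogeneous coordinates
    x2h = np.hstack([x2, np.ones((n, 1))])
    Fx1 = x1h @ F.T                          # row i is F @ x1_i
    Ftx2 = x2h @ F                           # row i is F^T @ x2_i
    num = np.einsum('ij,ij->i', x2h, Fx1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / (den + 1e-12)

def label_moving_pixels(F, x1, x2, tau=4.0):
    """Pixels whose residual to the background epipolar geometry exceeds
    tau are flagged as candidate motion-region pixels."""
    return sampson_distance(F, x1, x2) > tau
```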
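The voxel-labelling step can likewise be sketched as choosing, for each voxel, the label whose registered cameras make its reprojected colours most consistent. The code below is a simplified photo-consistency variance under assumed inputs (per-label camera matrices and images); it ignores occlusion ordering and is not the dissertation's photo-motion variance measure or its optimisation, and the names cameras_per_label, images_per_label and threshold are hypothetical.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (3,) with a 3x4 camera matrix P; returns (u, v)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def photo_variance(X, cameras, images):
    """Colour variance of voxel centre X across the views that see it.
    cameras: list of 3x4 projection matrices; images: list of HxWx3 arrays."""
    samples = []
    for P, img in zip(cameras, images):
        u, v = project(P, X)
        iu, iv = int(round(u)), int(round(v))
        h, w = img.shape[:2]
        if 0 <= iv < h and 0 <= iu < w:   # crude in-image test, no occlusion handling
            samples.append(img[iv, iu].astype(float))
    if len(samples) < 2:
        return np.inf
    return float(np.var(np.stack(samples), axis=0).sum())

def assign_voxel_label(X, cameras_per_label, images_per_label, threshold=50.0):
    """Pick the label (background or a moving object) whose registered cameras
    give the lowest colour variance at X; return None if nothing is consistent."""
    scores = {lbl: photo_variance(X, cams, images_per_label[lbl])
              for lbl, cams in cameras_per_label.items()}
    best = min(scores, key=scores.get)
    return best if scores[best] < threshold else None
```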

Bibliographic record

  • Author: Yuan, Chang
  • Affiliation: University of Southern California
  • Degree-granting institution: University of Southern California
  • Subject: Engineering, Robotics; Computer Science
  • Degree: Ph.D.
  • Year: 2007
  • Pages: 156 p.
  • Total pages: 156
  • Original format: PDF
  • Language: English (eng)
  • Chinese Library Classification: Automation and computer technology
  • Keywords:
