
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model



Abstract

The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field during robotically-assisted operations, such as removal of tumor margins from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which had been reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured with a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each pose estimate), which can be improved by implementation in C++. Error analysis produced 3 mm of distance error and 2.5 degrees of orientation error on average. These errors stem from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of the endoscope's intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
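The pose-recovery and error-analysis steps described in the abstract can be illustrated with a short sketch. The Python code below (not the authors' MATLAB implementation) estimates the 6-DOF camera pose by minimizing the reprojection error of 2D image features matched to known 3D virtual-model points, with the model structure held fixed, and then compares the estimated pose against a reference (measured) pose. All function names, the intrinsics matrix K, and the choice of optimizer are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming known 3D model points and matched 2D image features.
# Structure is held fixed, so only the 6-DOF camera pose is optimized
# (a simplified, fixed-structure form of constrained bundle adjustment).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points_3d, rvec, tvec, K):
    """Project 3D model points into the image using pose (rvec, tvec) and intrinsics K."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points_3d @ R.T + tvec            # world frame -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]           # perspective divide
    return uv @ K[:2, :2].T + K[:2, 2]      # apply focal lengths and principal point

def residuals(pose, points_3d, points_2d, K):
    """Stacked reprojection residuals for the optimizer."""
    rvec, tvec = pose[:3], pose[3:]
    return (project(points_3d, rvec, tvec, K) - points_2d).ravel()

def estimate_pose(points_3d, points_2d, K, pose0=None):
    """Recover the camera pose by nonlinear least squares on reprojection error."""
    if pose0 is None:
        pose0 = np.zeros(6)
        pose0[5] = 1.0                      # start the camera in front of the model
    sol = least_squares(residuals, pose0, args=(points_3d, points_2d, K), method="lm")
    return sol.x[:3], sol.x[3:]             # rotation vector, translation vector

def pose_error(rvec_est, tvec_est, rvec_ref, tvec_ref):
    """Distance error between camera centers and orientation error in degrees,
    analogous to comparing the calculated pose to a measured reference pose."""
    R_est = Rotation.from_rotvec(rvec_est)
    R_ref = Rotation.from_rotvec(rvec_ref)
    c_est = -R_est.inv().apply(tvec_est)    # camera center in world coordinates
    c_ref = -R_ref.inv().apply(tvec_ref)
    dist_err = np.linalg.norm(c_est - c_ref)
    ang_err = np.degrees((R_est * R_ref.inv()).magnitude())
    return dist_err, ang_err
```

In this simplified setting the 3D points come from the previously reconstructed virtual model, so the optimization reduces to pose-only refinement; a full constrained bundle adjustment would additionally allow the model points to move within constraints while jointly refining the pose.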
