Privacy Preserving Dynamic Room Layout Mapping

International conference on image and signal processing


Abstract

We present a novel and efficient room layout mapping strategy that does not reveal people's identity. The system uses only a Kinect depth sensor instead of RGB cameras or a high-resolution depth sensor, so the users' facial details are neither captured nor recognized by the system. The system recognizes and localizes 3D objects in an indoor environment, including furniture and equipment, and generates a 2D map of the room layout. Our system accomplishes layout mapping in three steps. First, it converts a depth image from the Kinect into a top-view image. Second, it processes the top-view image by restoring information that is missing due to occlusion by moving people and due to random noise from the Kinect depth sensor. Third, it recognizes and localizes different objects in a given top-view image based on their shape and height. We evaluated this system in two challenging real-world application scenarios: a laboratory room with four people present and a trauma room with up to 10 people during actual trauma resuscitations. The system achieved 80% object recognition accuracy with 9.25 cm average layout mapping error for the laboratory furniture scenario, and 82% object recognition accuracy for the trauma resuscitation scenario during six actual trauma cases.
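The abstract outlines a three-step pipeline; the sketch below illustrates only the first step, projecting a Kinect depth image into a top-view height map, under simplifying assumptions. The intrinsics (FX, FY, CX, CY), the mounting height SENSOR_HEIGHT_M, the grid cell size CELL_SIZE_M, and the function depth_to_top_view are illustrative placeholders rather than values or names taken from the paper, and a straight-down sensor orientation is assumed.

# Minimal sketch of the depth-to-top-view step, assuming a downward-looking
# Kinect with known intrinsics and a known mounting height above the floor.
# All parameter values are illustrative, not taken from the paper.
import numpy as np

# Hypothetical Kinect intrinsics (focal lengths and principal point, in pixels).
FX, FY = 580.0, 580.0
CX, CY = 320.0, 240.0
SENSOR_HEIGHT_M = 2.5          # assumed mounting height above the floor
CELL_SIZE_M = 0.05             # 5 cm grid cells in the top-view map
MAP_SIZE = (120, 120)          # 6 m x 6 m floor area

def depth_to_top_view(depth_m: np.ndarray) -> np.ndarray:
    """Project a depth image (meters, HxW) into a top-view height map.

    Each cell of the returned map holds the maximum object height observed
    above the floor, i.e. the kind of shape-and-height cue that a top-view
    object classifier could use.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    valid = z > 0                          # Kinect reports 0 for missing depth

    # Back-project pixels to 3D camera coordinates (pinhole model).
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY

    # For a straight-down-looking sensor, height above the floor is simply
    # sensor height minus depth; an oblique mount would need a rotation here.
    height = SENSOR_HEIGHT_M - z

    # Bin the horizontal coordinates into top-view grid cells centered on
    # the point directly below the sensor.
    col = np.clip((x / CELL_SIZE_M + MAP_SIZE[1] // 2).astype(int), 0, MAP_SIZE[1] - 1)
    row = np.clip((y / CELL_SIZE_M + MAP_SIZE[0] // 2).astype(int), 0, MAP_SIZE[0] - 1)

    top_view = np.zeros(MAP_SIZE, dtype=np.float32)
    np.maximum.at(top_view, (row[valid], col[valid]), height[valid])
    return top_view

if __name__ == "__main__":
    fake_depth = np.full((480, 640), 2.0, dtype=np.float32)   # flat scene 2 m away
    print(depth_to_top_view(fake_depth).shape)                # (120, 120)

Keeping only the per-cell maximum height is one plausible way to obtain a top-view representation that suppresses identifying detail while preserving object shape and height; the paper's actual projection, occlusion restoration, and noise handling may differ.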
