Published in: 2018 IEEE/ACM 1st International Workshop on Software Engineering for AI in Autonomous Systems

How Machine Perception Relates to Human Perception: Visual Saliency and Distance in a Frame-by-Frame Semantic Segmentation Task for Highly/Fully Automated Driving



Abstract

In this paper, we investigate the link between machine perception and human perception for highly/fully automated driving. We compare the classification results of a camera-based frame-by-frame semantic segmentation model ("Machine") with those of a well-established visual saliency model ("Human") on the Cityscapes dataset. The results show that Machine classifies foreground objects better when they are more salient, indicating a similarity with the human visual system. For background objects, accuracy drops as saliency increases, supporting the assumption that Machine has an implicit concept of saliency.
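The reported relationship — a positive link between saliency and segmentation accuracy for foreground objects, and a negative one for background objects — can be illustrated with a minimal sketch. All data and trends below are synthetic assumptions for illustration, not the authors' code or measurements:

```python
import numpy as np

# Hypothetical per-object measurements (assumed, not from the paper):
# a saliency score per object ("Human") and a segmentation accuracy
# per object ("Machine"), for foreground and background groups.
rng = np.random.default_rng(0)
n = 200
saliency = rng.uniform(0.0, 1.0, n)

# Assumed trends matching the abstract: foreground accuracy rises with
# saliency, background accuracy falls with saliency (plus noise).
fg_accuracy = np.clip(0.5 + 0.4 * saliency + rng.normal(0, 0.05, n), 0, 1)
bg_accuracy = np.clip(0.9 - 0.3 * saliency + rng.normal(0, 0.05, n), 0, 1)

# Pearson correlation between saliency and accuracy for each group.
fg_corr = np.corrcoef(saliency, fg_accuracy)[0, 1]
bg_corr = np.corrcoef(saliency, bg_accuracy)[0, 1]

print(f"foreground corr: {fg_corr:.2f}")  # positive: more salient -> better
print(f"background corr: {bg_corr:.2f}")  # negative: more salient -> worse
```

With per-object accuracy and saliency in hand, the sign of the correlation per class group is enough to reproduce the qualitative finding described above.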


