International Conference on Machine Learning

Adversarial camera stickers: A physical camera-based attack on deep learning systems



Abstract

Recent work has documented the susceptibility of deep learning systems to adversarial examples, but most such attacks directly manipulate the digital input to a classifier. Although a smaller line of work considers physical adversarial attacks, in all cases these involve manipulating the object of interest, e.g., putting a physical sticker on an object to misclassify it, or manufacturing an object specifically intended to be misclassified. In this work, we consider an alternative question: is it possible to fool deep classifiers, over all perceived objects of a certain type, by physically manipulating the camera itself? We show that by placing a carefully crafted and mainly-translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet misclassify target objects as a different (targeted) class. To accomplish this, we propose an iterative procedure for both updating the attack perturbation (to make it adversarial for a given classifier), and the threat model itself (to ensure it is physically realizable). For example, we show that we can achieve physically-realizable attacks that fool ImageNet classifiers in a targeted fashion 49.6% of the time. This presents a new class of physically-realizable threat models to consider in the context of adversarially robust machine learning. Our demo video can be viewed at: https://youtu.be/wUVmL33Fx54.
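The abstract describes an alternating procedure: a gradient step that makes the perturbation adversarial for the classifier, and a threat-model step that keeps it physically realizable. The sketch below, in PyTorch, illustrates one way such a loop could look. Everything in it is an illustrative assumption rather than the authors' implementation: the Gaussian-dot alpha-blending model of the sticker, the clamping ranges used as the physical-realizability projection, and the names render_dots, project_physical, and attack are all placeholders.

import torch
import torch.nn.functional as F

def render_dots(images, centers, radii, colors, alphas):
    # Alpha-blend K soft translucent dots over a batch of images (B, 3, H, W).
    # Assumed sticker model: each dot is a Gaussian-shaped translucent blob.
    B, _, H, W = images.shape
    yy = torch.linspace(0.0, 1.0, H, device=images.device).view(H, 1)
    xx = torch.linspace(0.0, 1.0, W, device=images.device).view(1, W)
    out = images
    for k in range(centers.shape[0]):
        d2 = (yy - centers[k, 0]) ** 2 + (xx - centers[k, 1]) ** 2
        mask = alphas[k] * torch.exp(-d2 / (2.0 * radii[k] ** 2))  # (H, W)
        out = (1.0 - mask) * out + mask * colors[k].view(3, 1, 1)
    return out

def project_physical(centers, radii, colors, alphas):
    # Threat-model update as a projection: clamp parameters to ranges assumed
    # to be realizable on a translucent lens sticker (values are placeholders).
    with torch.no_grad():
        centers.clamp_(0.0, 1.0)
        radii.clamp_(0.02, 0.15)
        colors.clamp_(0.0, 1.0)
        alphas.clamp_(0.0, 0.3)  # keep the sticker mostly translucent

def attack(model, images, target, steps=200, lr=1e-2, K=10):
    # Alternate between (a) a gradient step on the dot parameters that pushes
    # the classifier toward the target class and (b) the physical projection.
    device = images.device
    centers = torch.rand(K, 2, device=device, requires_grad=True)
    radii = torch.full((K,), 0.05, device=device, requires_grad=True)
    colors = torch.rand(K, 3, device=device, requires_grad=True)
    alphas = torch.full((K,), 0.2, device=device, requires_grad=True)
    opt = torch.optim.Adam([centers, radii, colors, alphas], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(render_dots(images, centers, radii, colors, alphas))
        loss = F.cross_entropy(logits, target)  # targeted: minimize loss to target
        loss.backward()
        opt.step()
        project_physical(centers, radii, colors, alphas)
    return centers, radii, colors, alphas

Because the same dot parameters are applied to every image in the batch, the optimized perturbation is universal across the perceived scenes, matching the attack setting the abstract describes.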
