IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection

Abstract

Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to a completely different result. The first attacks did this by slightly changing the pixel values of an input image to fool a classifier into outputting the wrong class. Other approaches have tried to learn "patches" that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it. In this paper, we present an approach to generate adversarial patches for targets with lots of intra-class variety, namely persons. The goal is to generate a patch that can successfully hide a person from a person detector. Such an attack could, for instance, be used maliciously to circumvent surveillance systems: intruders can sneak around undetected by holding a small cardboard plate in front of their body, aimed towards the surveillance camera. From our results we can see that our system is able to significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge, we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.
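
The attack the abstract describes comes down to optimizing the pixels of a patch so that a person detector's confidence collapses whenever the patch is present in the image. Below is a minimal illustrative sketch of such an optimization loop in PyTorch; the detector interface, the fixed patch placement, and the loss are simplified assumptions made for this example rather than the authors' implementation, and a practical attack would additionally place and transform the patch relative to each detected person and constrain it to be printable.

import torch

def apply_patch(images, patch, top_left=(60, 80)):
    """Paste a (3, h, w) patch onto a batch of (B, 3, H, W) images at a
    fixed location. A fixed location keeps the sketch short; a real attack
    would position and warp the patch per detected person."""
    patched = images.clone()
    y, x = top_left
    h, w = patch.shape[1:]
    patched[:, :, y:y + h, x:x + w] = patch.clamp(0.0, 1.0)
    return patched

def attack_step(detector, images, patch, optimizer):
    """One optimization step: minimize the highest person-objectness score
    the (frozen) detector assigns to each patched image."""
    optimizer.zero_grad()
    scores = detector(apply_patch(images, patch))  # assumed shape: (B, num_boxes)
    loss = scores.max(dim=1).values.mean()         # suppress the most confident box
    loss.backward()
    optimizer.step()                               # only the patch is updated
    return loss.item()

# Usage (all names hypothetical):
#   patch = torch.rand(3, 120, 120, requires_grad=True)
#   optimizer = torch.optim.Adam([patch], lr=0.03)
#   for images in person_image_loader:
#       attack_step(person_detector, images, patch, optimizer)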
