IEEE Security and Privacy Workshops

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks



Abstract

Backdoors and poisoning attacks are a major threat to the security of machine-learning and vision systems. Often, however, these attacks leave visible artifacts in the images that can be visually detected and thus weaken their efficacy. In this paper, we propose a novel strategy for hiding backdoor and poisoning attacks. Our approach builds on a recent class of attacks against image scaling. These attacks enable manipulating images such that they change their content when scaled to a specific resolution. By combining poisoning and image-scaling attacks, we can conceal the trigger of backdoors as well as hide the overlays of clean-label poisoning. Furthermore, we consider the detection of image-scaling attacks and derive an adaptive attack. In an empirical evaluation, we demonstrate the effectiveness of our strategy. First, we show that backdoors and poisoning work equally well when combined with image-scaling attacks. Second, we demonstrate that current detection defenses against image-scaling attacks are insufficient to uncover our manipulations. Overall, our work provides a novel means for hiding traces of manipulations and is applicable to different poisoning approaches.
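The core idea behind image-scaling attacks is that a downscaling algorithm samples only a small fraction of the source pixels, so an attacker can overwrite exactly those pixels with a hidden image while leaving the rest untouched. The following is a minimal sketch of this principle for the simplest case, naive nearest-neighbour downscaling implemented directly in NumPy; it is not the authors' implementation, and real attacks against bilinear or bicubic scaling instead solve an optimization problem to keep the perturbation imperceptible.

```python
import numpy as np

def nearest_downscale(img, factor):
    # naive nearest-neighbour downscaling: keep every `factor`-th pixel
    return img[::factor, ::factor]

def craft_attack_image(source, target, factor):
    # overwrite only the pixels the scaler will sample, so the attack
    # image resembles `source` at full resolution but becomes `target`
    # after downscaling
    attack = source.copy()
    attack[::factor, ::factor] = target
    return attack

rng = np.random.default_rng(0)
source = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in benign image
target = rng.integers(0, 256, (8, 8), dtype=np.uint8)    # stand-in hidden trigger
attack = craft_attack_image(source, target, factor=8)

# at most 64 of 4096 pixels (~1.6%) differ from the benign source
changed = np.count_nonzero(attack != source)
# after downscaling, the hidden image appears exactly
assert np.array_equal(nearest_downscale(attack, 8), target)
```

In the paper's setting, `target` would carry a backdoor trigger or a clean-label poisoning overlay, which stays invisible in the full-resolution training image and only emerges in the model's input pipeline after scaling.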
