Pattern Recognition Letters

Perceptual quality-preserving black-box attack against deep learning image classifiers

Abstract

Deep neural networks provide unprecedented performance in image classification problems, including biometric recognition systems, which are key elements of smart-city environments. Recent studies, however, have shown their vulnerability to adversarial attacks, spawning intense research in this field. To improve system security, new countermeasures and ever stronger attacks are proposed by the day. On the attacker's side, there is growing interest in the realistic black-box scenario, in which the attacker has no access to the network parameters. The problem is to design efficient attacks that mislead the neural network without compromising image quality. In this work, we propose to perform the black-box attack along a high-saliency, low-distortion path, so as to improve both attack efficiency and image perceptual quality. Experiments on real-world systems prove the effectiveness of the proposed approach on both benchmark tasks and actual biometric applications. (c) 2021 Elsevier B.V. All rights reserved.
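As a rough illustration of the idea described in the abstract, the PyTorch sketch below implements a generic query-only black-box attack restricted to high-saliency pixels: each proposed perturbation touches one highly salient coordinate, and a step is kept only if it lowers the classifier's confidence in the true label. This is a minimal sketch in the spirit of the description, not the authors' actual algorithm; `model`, `saliency`, and all parameter names are hypothetical placeholders.

```python
import torch

def saliency_guided_black_box_attack(model, x, label, saliency,
                                     eps=0.05, max_queries=1000,
                                     saliency_quantile=0.9):
    """Query-only attack: perturb one high-saliency coordinate at a time.

    model    -- classifier mapping a (1, C, H, W) float tensor to logits
    x        -- input image tensor of shape (1, C, H, W), values in [0, 1]
    label    -- true class index (int)
    saliency -- precomputed (H, W) saliency map (any saliency method)
    """
    model.eval()
    x_adv = x.clone()
    # Search only over the most salient pixel locations.
    mask = saliency >= saliency.flatten().quantile(saliency_quantile)
    coords = mask.nonzero()  # (N, 2) tensor of (row, col) indices

    with torch.no_grad():
        best_prob = torch.softmax(model(x_adv), dim=1)[0, label]
        for _ in range(max_queries):
            # Propose a small signed change at a random salient coordinate.
            i, j = coords[torch.randint(len(coords), (1,)).item()]
            c = torch.randint(x.shape[1], (1,)).item()  # random channel
            sign = 1.0 if torch.rand(1).item() < 0.5 else -1.0
            candidate = x_adv.clone()
            candidate[0, c, i, j] = (candidate[0, c, i, j] + sign * eps).clamp(0, 1)

            # Keep the step only if it lowers confidence in the true class.
            prob = torch.softmax(model(candidate), dim=1)[0, label]
            if prob < best_prob:
                x_adv, best_prob = candidate, prob
            # Stop as soon as the classifier is fooled.
            if model(x_adv).argmax(dim=1).item() != label:
                break
    return x_adv
```

In this sketch the saliency mask stands in for the "high-saliency and low-distortion path": confining queries to pixels the classifier attends to tends to make each query count, while leaving the rest of the image untouched helps preserve perceptual quality.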
