Effect of Adversarial Examples on the Robustness of CAPTCHA

Abstract

A good CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) should be easy for humans to solve but hard for computers. This balance between security and usability is difficult to achieve. With the development of deep neural network techniques, an increasing number of CAPTCHAs have been cracked. Recent work has shown that deep neural networks are highly susceptible to adversarial examples, which can reliably fool a neural network by adding noise that is imperceptible to humans, a property that matches the needs of CAPTCHA design. In this paper, we study the effect of adversarial examples on CAPTCHA robustness, covering image-selection, click-based, and text-based CAPTCHAs. The experimental results demonstrate that adversarial examples have a positive effect on CAPTCHA robustness. Even if the neural network is fine-tuned, the impact of adversarial examples cannot be completely eliminated. The paper concludes with suggestions on how to use adversarial examples to improve CAPTCHA security.
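
For illustration, below is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.), one standard way to generate the kind of imperceptible adversarial noise the abstract describes. The toy classifier, random input, and epsilon value here are assumptions for the example, not the paper's actual models or CAPTCHA data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A tiny stand-in classifier; the paper attacks real CAPTCHA solvers,
# but any differentiable model works for this sketch.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Fast gradient sign method: take a small signed-gradient step that
    raises the classification loss while staying visually imperceptible."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, clamped to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage with a random stand-in for a CAPTCHA image batch.
x = torch.rand(1, 3, 64, 64)
y = torch.tensor([0])
x_adv = fgsm_perturb(x, y)
print(float((x_adv - x).abs().max()))  # perturbation bounded by epsilon
```

Because the perturbation is bounded per pixel by epsilon, the adversarial image remains legible to humans while degrading the solver's accuracy, which is exactly the asymmetry a CAPTCHA designer wants.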