Annual IFIP WG 11.3 Conference on Data and Applications Security and Privacy

Detecting Adversarial Attacks in the Context of Bayesian Networks



Abstract

In this research, we study data poisoning attacks against Bayesian network structure learning algorithms. We propose to detect such attacks using the distance between Bayesian network models and a measure of data conflict. We present a two-layered framework that detects both one-step and long-duration data poisoning attacks. Layer 1 enforces "reject on negative impacts" detection: input that changes the learned Bayesian network model is labeled potentially malicious. Layer 2 aims to detect long-duration attacks by flagging observations in the incoming data that conflict with the original Bayesian model. We show that for a typical small Bayesian network, only a few contaminated cases are needed to corrupt the learned structure. Our detection methods are effective not only against one-step attacks but also against sophisticated long-duration attacks. We also present empirical results.
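The abstract does not spell out which structure distance or conflict measure the authors use. The sketch below is an illustration only, under assumed representations: a structural Hamming-style distance over directed edge sets for the Layer 1 "reject on negative impacts" check, and a Jensen-style conflict measure, conf(e) = log(prod_i P(e_i) / P(e)), for the Layer 2 check. The function names, toy network, probabilities, and thresholds are hypothetical and not the paper's implementation.

import math
from typing import Dict, FrozenSet, Set, Tuple

Edge = Tuple[str, str]  # directed edge (parent, child)

def structure_distance(baseline: Set[Edge], learned: Set[Edge]) -> int:
    # Count edge additions, deletions, and reversals between two DAG edge sets
    # (a structural Hamming-style distance; one plausible Layer 1 signal).
    reversed_edges = {(v, u) for (u, v) in learned}
    reversals = len(baseline & reversed_edges)
    additions = len(learned - baseline) - reversals
    deletions = len(baseline - learned) - reversals
    return additions + deletions + reversals

def conflict_measure(marginals: Dict[str, Dict[str, float]],
                     joint: Dict[FrozenSet, float],
                     observation: Dict[str, str]) -> float:
    # Jensen-style conflict: log( prod_i P(x_i) / P(x_1, ..., x_n) ).
    # A clearly positive value suggests the observation conflicts with the model.
    # In practice P(x_1, ..., x_n) would come from inference in the original
    # Bayesian network; here it is just a lookup table for the sketch.
    key = frozenset(observation.items())
    p_joint = joint.get(key, 1e-12)  # guard against unseen configurations
    log_prod = sum(math.log(marginals[var][val]) for var, val in observation.items())
    return log_prod - math.log(p_joint)

# Hypothetical Layer 1 usage: a poisoned batch reverses one edge of a toy network.
baseline_edges = {("Smoking", "Cancer"), ("Cancer", "Xray")}
learned_edges = {("Cancer", "Smoking"), ("Cancer", "Xray")}
if structure_distance(baseline_edges, learned_edges) > 0:
    print("Layer 1: reject batch (negative impact on the learned structure)")

# Hypothetical Layer 2 usage: an observation the original model considers improbable.
marginals = {"Smoking": {"yes": 0.3, "no": 0.7}, "Cancer": {"yes": 0.1, "no": 0.9}}
joint = {frozenset({"Smoking": "yes", "Cancer": "yes"}.items()): 0.005}
observation = {"Smoking": "yes", "Cancer": "yes"}
conf = conflict_measure(marginals, joint, observation)  # log(0.03 / 0.005) ~ 1.79
if conf > 0:
    print(f"Layer 2: observation conflicts with the original model (conf = {conf:.2f})")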
