IEEE International Conference on Big Data and Smart Computing

Poisoning Attack on Show and Tell Model and Defense Using Autoencoder in Electric Factory



Abstract

Recently, deep neural network technology has developed rapidly and is used in many fields. Image recognition models can be used for automatic safety checks at an electric factory. However, as deep neural networks develop, security becomes increasingly important. A poisoning attack is one such security problem: an attack that degrades a model by injecting malicious data into its training data set. This paper generates adversarial data that shifts feature values toward a different target by manipulating only a small number of RGB values. We then mount a poisoning attack on one image recognition model, the Show and Tell model, and defend against the adversarial data using an autoencoder.
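The attack step described above, changing only a few RGB values by a small amount so the poisoned image still looks normal, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's method: the function name `perturb_few_rgb` and the parameters `n_pixels` and `max_delta` are hypothetical, and the perturbation here is random rather than optimized toward a target caption.

```python
import numpy as np

def perturb_few_rgb(image, n_pixels=50, max_delta=8, seed=0):
    """Nudge a small number of RGB values by at most `max_delta`.

    Hypothetical sketch of a low-visibility poisoning perturbation:
    only `n_pixels` individual channel values change, each by a small
    amount, so a human inspector is unlikely to notice the edit.
    """
    rng = np.random.default_rng(seed)
    poisoned = image.astype(np.int16).copy()  # avoid uint8 wraparound
    h, w, _ = image.shape
    ys = rng.integers(0, h, n_pixels)         # random pixel rows
    xs = rng.integers(0, w, n_pixels)         # random pixel columns
    chans = rng.integers(0, 3, n_pixels)      # random R/G/B channel
    deltas = rng.integers(-max_delta, max_delta + 1, n_pixels)
    poisoned[ys, xs, chans] += deltas
    return np.clip(poisoned, 0, 255).astype(np.uint8)

img = np.full((32, 32, 3), 128, dtype=np.uint8)   # flat grey test image
adv = perturb_few_rgb(img)
diff = np.abs(img.astype(int) - adv.astype(int))
print(int(np.count_nonzero(diff)), int(diff.max()))  # few values changed, each by <= 8
```

In a real poisoning pipeline the perturbation direction would be chosen to move the image's features toward the attacker's target rather than drawn at random; the point here is only the small perturbation budget.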
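The autoencoder defense can likewise be sketched as a reconstruction-error filter: train an autoencoder on clean data, then flag inputs the autoencoder reconstructs poorly. This is a minimal linear (PCA-based) stand-in, not the paper's architecture; the class name `LinearAutoencoder`, the component count `k`, and the 1.05 threshold margin are all illustrative assumptions.

```python
import numpy as np

class LinearAutoencoder:
    """Linear autoencoder fit in closed form via SVD (i.e., PCA).

    Illustrative reconstruction-based defense: encode to k principal
    components, decode back, and flag any input whose reconstruction
    error exceeds a threshold fitted on clean training data.
    """
    def __init__(self, k=2):
        self.k = k

    def fit(self, X):
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = vt[: self.k]            # shared encode/decode weights
        errs = self.reconstruction_error(X)
        self.threshold = errs.max() * 1.05        # small safety margin
        return self

    def reconstruct(self, X):
        Z = (X - self.mean) @ self.components.T   # encode to k dims
        return Z @ self.components + self.mean    # decode back

    def reconstruction_error(self, X):
        return np.mean((X - self.reconstruct(X)) ** 2, axis=1)

    def is_adversarial(self, X):
        return self.reconstruction_error(X) > self.threshold

rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 16))                  # clean data lies near a 2-D subspace
clean = rng.normal(size=(200, 2)) @ basis
ae = LinearAutoencoder(k=2).fit(clean)
poisoned = clean[:5] + rng.normal(scale=2.0, size=(5, 16))  # pushed off the subspace
print(ae.is_adversarial(clean[:5]).any())         # clean samples pass the filter
print(ae.is_adversarial(poisoned).all())          # off-manifold samples are flagged
```

A deep autoencoder trained on clean factory images would play the same role: adversarial inputs sit off the learned data manifold, so their reconstruction error is large, and they can be filtered out before training or inference.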


