
Diversity Regularized Adversarial Deep Learning


Abstract

The two key players in Generative Adversarial Networks (GANs), the discriminator and generator, are usually parameterized as deep neural networks (DNNs). On many generative tasks, GANs achieve state-of-the-art performance but are often unstable to train and sometimes miss modes. A typical failure mode is the collapse of the generator to a single parameter configuration where its outputs are identical. When this collapse occurs, the gradient of the discriminator may point in similar directions for many similar points. We hypothesize that some of these shortcomings are in part due to primitive and redundant features extracted by the discriminator, which can easily cause training to get stuck. We present a novel approach for regularizing adversarial models by enforcing diverse feature learning. To this end, both the generator and discriminator are regularized by penalizing both negatively and positively correlated features according to their differentiation, based on their relative cosine distances. In addition to the gradient information from the adversarial loss made available by the discriminator, diversity regularization also ensures that a more stable gradient is provided to update both the generator and discriminator. Results indicate our regularizer enforces diverse features, stabilizes training, and improves image synthesis.
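The abstract describes penalizing both positively and negatively correlated features by their relative cosine distances. The following is a minimal sketch of one way such a diversity penalty could look; the function name, the squared-cosine form, and the feature layout are assumptions for illustration, not the paper's exact regularizer.

```python
import numpy as np

def diversity_penalty(features, eps=1e-8):
    """Hypothetical diversity penalty on a bank of feature vectors.

    features: array of shape (n_features, dim), one row per feature vector.
    Returns the mean *squared* off-diagonal pairwise cosine similarity,
    so strongly positively AND strongly negatively correlated features
    are both penalized, while near-orthogonal features are not.
    """
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / (norms + eps)          # normalize rows to unit length
    cos = unit @ unit.T                      # pairwise cosine similarities
    n = cos.shape[0]
    off_diag = cos[~np.eye(n, dtype=bool)]   # drop the self-similarity terms
    return float(np.mean(off_diag ** 2))
```

Two identical (or sign-flipped) feature vectors drive the penalty toward 1, while an orthogonal set drives it toward 0, matching the abstract's point that redundancy in either direction is discouraged. In a GAN, a term like this would be added to both the generator's and the discriminator's losses.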
