International Joint Conference on Neural Networks

Conditional Transferring Features: Scaling GANs to Thousands of Classes with 30% Less High-Quality Data for Training


Abstract

Generative adversarial networks (GANs) can greatly improve the quality of unsupervised image generation. Previous GAN-based methods often require a large amount of high-quality training data. This work aims to reduce the use of high-quality data in training while scaling GANs up to thousands of classes. We propose an image generation method based on conditional transferring features, which can capture pixel-level semantic changes when transforming low-quality images into high-quality ones. Self-supervised learning is then integrated into our GAN architecture to provide additional label-free semantic supervisory information observed from the training data. As a result, training our GAN architecture requires far fewer high-quality images, plus a small number of additional low-quality images. Experiments show that even when 30% of the high-quality images are removed from the training set, our method still achieves better image synthesis quality on CIFAR-10, STL-10, ImageNet, and CASIA-HWDB1.0 than previous competitive methods. Experiments on ImageNet, with 1,000 image classes, and CASIA-HWDB1.0, with 3,755 classes of handwritten Chinese characters, also validate the scalability of our method across object classes. Ablation studies further validate the contributions of the conditional transferring features and self-supervised learning to the quality of the synthesized images.
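The abstract does not specify which auxiliary task supplies the label-free supervisory signal. One common instantiation of self-supervision in GAN training is rotation prediction, where each training image is rotated by a multiple of 90 degrees and the rotation index serves as a free pseudo-label for an auxiliary classifier; the sketch below illustrates that pseudo-label construction only, and the choice of rotation as the task is an assumption, not the paper's confirmed design.

```python
import numpy as np

def make_rotation_pseudolabels(images):
    """Build a self-supervised batch: every image is rotated by
    0/90/180/270 degrees, and the rotation index (0..3) becomes a
    label-free supervisory target for an auxiliary head.
    `images` is an array of shape (N, H, W)."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))  # rotate k * 90 degrees
            labels.append(k)                   # free pseudo-label
    return np.stack(rotated), np.array(labels)

# Toy batch of 2 grayscale 4x4 "images" (hypothetical data).
batch = np.arange(32, dtype=np.float32).reshape(2, 4, 4)
aug, y = make_rotation_pseudolabels(batch)
# aug has 4 rotated copies per input; y cycles through 0..3.
```

During GAN training, a discriminator-side classifier would be trained to predict `y` from `aug`, adding a semantic supervisory loss that requires no human labels.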
