Object transfiguration aims to translate objects in an image from one class to another, and is a subtask of image-to-image translation. Recently, researchers have proposed many effective approaches to object transfiguration. However, most of them ignore the difference between target objects and the background, which can cause background deformation, discoloration, and other artifacts. We propose a novel attention-based model for unsupervised object transfiguration called the Deep Attention Units Generative Adversarial Network (DAU-GAN). We exploit the spatial consistency of objects and background to enable the model to preserve the background of the image. This attention-based design allows DAU-GAN to enhance the expression of meaningful features and to distinguish specific objects from the background in images. Experimental results demonstrate that our approach improves the performance of object transfiguration while effectively preserving the background.
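The abstract does not spell out how the attention units gate the output, but attention-based background preservation in image translation is commonly realized as a convex blend between the translated image and the input, weighted by a learned soft mask. The following is a minimal sketch of that blending step only; the function name `attention_blend` and the NumPy formulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attention_blend(x, translated, mask):
    """Blend a translated image with its input using a soft attention mask.

    x, translated: arrays of shape (H, W, C), pixel values of the input
        and the generator's translated output.
    mask: array of shape (H, W, 1) with values in [0, 1]; values near 1
        mark the object to transfigure, values near 0 mark background.

    Where mask ~ 0, the output copies the input pixel unchanged, which is
    how the background is preserved; where mask ~ 1, the translated pixel
    is used.
    """
    return mask * translated + (1.0 - mask) * x
```

In a full model the mask would be produced by the attention units of the generator and trained jointly with the translation branch; here it is simply an argument.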