IEEE Transactions on Image Processing

Content-Adaptive Sketch Portrait Generation by Decompositional Representation Learning

Abstract

Sketch portrait generation benefits a wide range of applications, such as digital entertainment and law enforcement. Although considerable effort has been devoted to this task, several issues remain unsolved in generating vivid, detail-preserving sketch portraits. For example, artifacts often appear when synthesizing hairpins and glasses, and textural details tend to be lost in regions of hair or mustache. Moreover, the generalization ability of current systems is limited, since they usually require elaborately collected dictionaries of examples or carefully tuned features and components. In this paper, we present a novel representation learning framework that learns an end-to-end photo-to-sketch mapping through structure and texture decomposition. In the training stage, we first decompose the input face photo into components according to their representational content (i.e., structural and textural parts) using a pre-trained convolutional neural network (CNN). We then employ a branched fully convolutional network (FCN) to learn structural and textural representations, respectively. In addition, we design a sorted matching mean square error (SM-MSE) metric to measure texture patterns in the loss function. In the sketch rendering stage, our approach automatically generates structural and textural representations for the input photo and produces the final result via a probabilistic fusion scheme. Extensive experiments on several challenging benchmarks show that our approach outperforms example-based synthesis algorithms on both perceptual and objective metrics. Moreover, the proposed method generalizes better across data sets without additional training.
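The branched network described in the abstract can be pictured as a shared convolutional trunk feeding two task-specific heads, one per decomposed component. The following is a minimal PyTorch sketch of that branched-FCN idea; the layer counts, channel widths, and the `BranchedFCN` name are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class BranchedFCN(nn.Module):
    """Hypothetical sketch of a branched fully convolutional network:
    a shared trunk extracts features, then two branches predict the
    structural and textural sketch components, respectively."""

    def __init__(self):
        super().__init__()
        # Shared trunk over the RGB input photo (sizes are assumptions).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Structure branch: one-channel structural sketch map.
        self.structure_branch = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )
        # Texture branch: one-channel textural sketch map.
        self.texture_branch = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, photo):
        feat = self.trunk(photo)
        return self.structure_branch(feat), self.texture_branch(feat)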
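The sorted-matching loss and the probabilistic fusion step can likewise be sketched in a few lines of NumPy. In this hedged version, pixel values inside each non-overlapping patch are sorted before the squared error is taken, so the metric compares texture statistics rather than exact pixel placement; the fusion blends the two branch outputs with a pixel-wise probability map (e.g., from a face parser). The patch size, the non-overlapping tiling, and the source of `prob_map` are assumptions.

import numpy as np

def sorted_matching_mse(pred, target, patch=3):
    """Sketch of a sorted matching MSE (SM-MSE): values within each
    patch are sorted before comparison, making the loss insensitive
    to the spatial arrangement of texture inside a patch."""
    h, w = pred.shape
    total, count = 0.0, 0
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = np.sort(pred[y:y + patch, x:x + patch], axis=None)
            t = np.sort(target[y:y + patch, x:x + patch], axis=None)
            total += np.mean((p - t) ** 2)
            count += 1
    return total / max(count, 1)

def probabilistic_fusion(structural, textural, prob_map):
    """Pixel-wise blend of the two branch outputs; prob_map holds the
    assumed probability that a pixel belongs to a structural region."""
    return prob_map * structural + (1.0 - prob_map) * textural

For example, with `pred` and `target` as 96x96 grayscale sketch arrays in [0, 1], `sorted_matching_mse(pred, target)` returns a scalar texture loss; the names and sizes here are illustrative only.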
