IEEE International Conference on Machine Learning and Applications

GRAM: Gradient Rescaling Attention Model for Data Uncertainty Estimation in Single Image Super Resolution

Abstract

In this paper, a new learning method is proposed that quantifies data uncertainty in Single Image Super Resolution (SISR) without suffering performance degradation. Our work is motivated by the fact that the loss-design goals for capturing uncertainty and for solving SISR pull in opposite directions. To capture data uncertainty, the network output is commonly trained with the negative log-likelihood (NLL) of a Gaussian distribution, a Euclidean distance divided by a predictive variance, so that images with high variance have less impact on training. In the SISR domain, by contrast, recent works use attention models to assign larger weights to the losses of challenging images in order to improve performance. Nonetheless, this conflict must be resolved so that a neural network can predict the uncertainty of a super-resolved image without suffering performance degradation. We therefore propose a method called the Gradient Rescaling Attention Model (GRAM) that reconciles the two approaches. Since variance may reflect the difficulty of an image, we rescale the gradient of the NLL by the degree of variance; the network can thus focus on challenging images, as attention models do. We evaluate performance on standard SISR benchmarks in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The experimental results show that the proposed gradient rescaling method incurs negligible performance degradation compared to SISR outputs trained with the Euclidean loss, whereas the NLL without attention degrades SR quality.
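The abstract does not specify the exact rescaling function, but the mechanism it describes can be sketched numerically: the Gaussian NLL gradient with respect to the mean is shrunk by the predicted variance, and a GRAM-style correction multiplies that gradient back up by a monotone function of the variance. The sketch below (NumPy, with the exponent `gamma` as a hypothetical knob, not from the paper) illustrates this; note that with `gamma=1` the rescaled gradient coincides with the plain Euclidean-loss gradient, consistent with the reported negligible degradation versus the Euclidean baseline.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    # Per-pixel Gaussian negative log-likelihood (constants dropped):
    # (y - mu)^2 / (2 sigma^2) + 0.5 * log sigma^2, with log_var = log sigma^2
    return 0.5 * np.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var

def nll_grad_mu(y, mu, log_var):
    # d NLL / d mu = (mu - y) / sigma^2 :
    # high predicted variance shrinks the gradient, downweighting hard pixels.
    return (mu - y) * np.exp(-log_var)

def gram_grad_mu(y, mu, log_var, gamma=1.0):
    # Hypothetical GRAM-style rescaling: multiply the NLL gradient by
    # (sigma^2)^gamma so high-variance (challenging) pixels regain influence,
    # mimicking an attention weight. gamma=1 recovers the Euclidean gradient
    # (mu - y) while the variance is still learned via the log-variance term.
    return nll_grad_mu(y, mu, log_var) * np.exp(log_var) ** gamma
```

Usage: comparing `nll_grad_mu` at low and high variance shows the shrinking effect the abstract describes, while `gram_grad_mu(..., gamma=1.0)` matches the Euclidean-loss gradient exactly.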
