The basic idea of the system proposed in this paper is that an image usually contains regions of differing significance, which should therefore be coded differently to achieve an accurate reproduction. Our system divides an image into regions of varying importance, which it codes using wavelet transforms and neural networks for knowledge-based recognition. We explain how the functional relationship between intensity and spatial frequency at the limits of human visual perception, known as the Contrast Sensitivity Threshold (CST) curve, can guide the choice of the error-metric norm, the compression level in the wavelet hierarchy, and the coefficient quantization strategy so as to minimize the human perception of error. The CST curve is learned by a backpropagation neural network.
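To illustrate how a CST-style sensitivity curve can drive quantization in a wavelet hierarchy, the following minimal sketch uses the Mannos-Sakrison analytic contrast sensitivity model as a stand-in for the learned curve (the paper learns this curve with a neural network; the closed-form model, the level-to-frequency mapping, and all function names here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Contrast sensitivity vs. spatial frequency f (cycles/degree),
    using the Mannos-Sakrison analytic model as a proxy for the CST curve."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def subband_weights(levels, f_max=32.0):
    """Assign each wavelet decomposition level a perceptual weight.
    Level 1 holds the highest spatial frequencies; each coarser level
    halves the nominal band center (a hypothetical mapping)."""
    freqs = [f_max / (2 ** level) for level in range(1, levels + 1)]
    w = np.array([csf_mannos_sakrison(f) for f in freqs])
    return w / w.max()  # most visible band gets weight 1

def quantize(coeffs, level, base_step, weights):
    """Quantize more coarsely where the eye is less sensitive:
    the step size grows as the perceptual weight shrinks."""
    step = base_step / weights[level - 1]
    return np.round(coeffs / step) * step

weights = subband_weights(levels=4)
detail = np.array([0.8, -3.2, 12.5, 0.1])
q = quantize(detail, level=4, base_step=1.0, weights=weights)
```

The key design point is that the sensitivity curve, whether learned or analytic, is evaluated once per subband and converted into per-band quantizer step sizes, so coding error is pushed into frequencies where human vision is least likely to notice it.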