The Cascade Correlation algorithm introduced a sound basis for the self-scaling training of monolithic neural networks, but it suffers from a number of drawbacks. These include degradation of learning speed and quality as the network grows, and a tendency to develop deep networks with high fan-in to hidden units. The Iterative Atrophy algorithm preserves the good features of Cascade Correlation while eliminating its worst characteristics. In addition, we show that Iterative Atrophy extends naturally to the development of self-scaling hierarchical networks, which offer advantages in both training and representational efficiency.