Improving the convergence of the back-propagation algorithm
Van Ooyen, A., and Nienhuis, B. (1992). Neural Networks 5: 465-471.
We propose a modification to the back-propagation method. The modification consists of a simple change in the total error-of-performance function that the algorithm minimizes. The modified algorithm is slightly simpler than the original one, and the convergence of the network is accelerated in two ways. During learning with the original back-propagation method, the network goes through stages in which the improvement of the response is extremely slow; these periods of stagnation are much shorter, or even absent, in our modified method. Furthermore, the final approach to the desired response, when the network is already nearly correct, is accelerated by an amount that can be predicted analytically. We compare the original and modified methods in simulations on a variety of functions.
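The abstract does not spell out the change, but the effect it describes (removing plateaus when output units saturate) is what one obtains by replacing the quadratic error with a cross-entropy-style error, so that the sigmoid-derivative factor cancels from the output-layer delta. The sketch below illustrates that mechanism for a single sigmoid unit; it is a hypothetical reconstruction for illustration, not the paper's exact formulation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def delta_quadratic(y, t):
    # Output-layer delta for the quadratic error E = (y - t)^2 / 2:
    # the sigmoid derivative y * (1 - y) multiplies the raw error,
    # so the delta vanishes when the unit saturates (y near 0 or 1).
    return (y - t) * y * (1.0 - y)

def delta_modified(y, t):
    # With a cross-entropy-style error the sigmoid-derivative factor
    # cancels, leaving the raw error alone (assumed form of the
    # modification; see the paper for the actual error function).
    return y - t

# A badly saturated unit: output near 0 while the target is 1.
y = sigmoid(-8.0)   # roughly 3.4e-4
t = 1.0

print(delta_quadratic(y, t))  # vanishingly small: learning stagnates
print(delta_modified(y, t))   # close to -1: learning continues
```

The comparison shows why stagnation periods shorten: far from the target, the modified delta stays of order one, while the quadratic-error delta is suppressed by the near-zero sigmoid derivative.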