Optimization of recurrent neural networks for time series modeling

Morten With Pedersen

Abstract

The present thesis concerns the optimization of recurrent neural networks applied to time series modeling. In particular, it considers fully recurrent networks working from only a single external input, with one layer of nonlinear hidden units and a linear output unit, applied to the prediction of discrete time series. The overall objectives are to improve training by the application of second-order methods and to improve generalization ability by architecture optimization accomplished by pruning. The major topics covered in the thesis are:

1. The problem of training recurrent networks is analyzed from a numerical point of view. In particular, it is analyzed how numerical ill-conditioning of the Hessian matrix may arise.

2. Training is significantly improved by application of the damped Gauss-Newton method, which involves the Hessian matrix. This method is found to outperform gradient descent in terms of both the quality of the solutions obtained and the computation time required (a minimal sketch of the update is given after this list).

3. A theoretical definition of the generalization error for recurrent networks is provided. This definition justifies a commonly adopted approach for estimating generalization ability.

4. The viability of pruning recurrent networks by the Optimal Brain Damage (OBD) and Optimal Brain Surgeon (OBS) pruning schemes is investigated. OBD is found to be very effective, whereas OBS is severely affected by numerical problems, which lead to the pruning of important weights (see the saliency sketch after this list).

5. A novel operational tool for examining the internal memory of recurrent networks is proposed. The tool allows for assessment of the length of the effective memory of previous inputs built up in the recurrent network during application.
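As an illustration of the second-order update behind item 2, the following is a minimal sketch, assuming a sum-of-squares error E(w) = 1/2 * sum_k e_k(w)^2 with residual Jacobian J; the function names and the NumPy-based formulation are illustrative assumptions, not the implementation used in the thesis. The damped Gauss-Newton step solves (J^T J + lambda*I) dw = -J^T e, where the damping term counters the Hessian ill-conditioning discussed in item 1.

    import numpy as np

    def damped_gauss_newton_step(jacobian, residuals, damping):
        """One damped Gauss-Newton step for a sum-of-squares error.

        jacobian : (n_samples, n_weights) matrix of d e_k / d w_i
        residuals: (n_samples,) vector of model errors e_k
        damping  : scalar lambda added to the diagonal so that the
                   approximate Hessian J^T J stays well conditioned
        """
        JtJ = jacobian.T @ jacobian      # Gauss-Newton approximation to the Hessian
        grad = jacobian.T @ residuals    # gradient of 0.5 * sum(e_k ** 2)
        H_damped = JtJ + damping * np.eye(JtJ.shape[0])
        return -np.linalg.solve(H_damped, grad)

The OBD saliency used in item 4 admits an equally small sketch: with a diagonal Hessian approximation, removing weight w_i is estimated to increase the error by 0.5 * H_ii * w_i^2, and the weights with the smallest saliencies are candidates for pruning.

    def obd_saliencies(weights, hessian_diagonal):
        # Estimated error increase when each weight is set to zero (OBD).
        return 0.5 * hessian_diagonal * weights ** 2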

Time series modeling is also treated from a more general point of view, namely modeling of the joint probability distribution function of the observed series. Two recurrent models rooted in statistical physics are considered in this respect, namely the "Boltzmann chain" and the "Boltzmann zipper", and a comprehensive tutorial on these models is provided. Boltzmann chains and zippers are found to benefit from second-order training and architecture optimization by pruning as well, which is illustrated on artificial problems and a small speech recognition problem.
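As a rough illustration of how the likelihood of an observed sequence can be evaluated in such a chain-structured model, the following is a minimal sketch: it assumes hidden-to-hidden weights A, hidden-to-output weights B, and an HMM-style forward recursion over the resulting transfer matrices. The parameterization and names are assumptions made for illustration only, not the formulation used in the thesis.

    import numpy as np

    def boltzmann_chain_sequence_score(A, B, outputs):
        """Unnormalized score of an output sequence under a chain-structured
        Boltzmann model: the Boltzmann factor exp(sum of weights along the
        chain) summed over all hidden state paths via a forward recursion.

        A       : (n_hidden, n_hidden) weights between consecutive hidden states
        B       : (n_hidden, n_outputs) weights between hidden states and outputs
        outputs : sequence of observed output indices x_1, ..., x_T
        """
        transition = np.exp(A)            # factors exp(A[s_t, s_{t+1}])
        emission = np.exp(B)              # factors exp(B[s_t, x_t])
        alpha = emission[:, outputs[0]]   # start with the first observed output
        for x in outputs[1:]:
            alpha = (alpha @ transition) * emission[:, x]
        # Dividing by the partition function (the same sum taken over all
        # possible output sequences as well) would turn this into a probability.
        return alpha.sum()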

IMM Ph.D. Thesis 37, 1997


Last modified Nov 3, 1997

For further information, please contact Finn Kuno Christensen, IMM, Bldg. 321, DTU.
Phone: (+45) 4588 1433, Fax: (+45) 4588 2673, E-mail: fkc@imm.dtu.dk
