A Feedforward Neural Network Forecasting Exercise
The following was implemented in Maple by Marcus Davidsson (2008) davidsson_marcus@hotmail.com
and is based upon the work of Mercado, Kendrick & Amman (2006), Computational Economics, Princeton University Press
The basic structure of our feedforward neural network can be seen in the figure below.
We can see that our network has one output layer, one hidden layer and one input layer.
[Figure: diagram of the network's input layer, hidden layer and output layer]
We now note that the hidden layer, represented by S(z1) and S(z2), takes the form of a sigmoid function (also called a "squasher"):

S(z) = 1/(1 + exp(-z))
The sigmoid function compresses any input z into the interval (0, 1): when z is large and positive, S(z) is close to 1, and when z is large and negative, S(z) is close to 0.
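To make this concrete, here is a small Maple sketch (added for illustration; not from the original worksheet) that defines the squasher and checks this limiting behaviour:

> S := z -> 1/(1 + exp(-z)):      # the sigmoid "squasher"
> evalf(S(5));                    # approximately 0.993, close to 1
> evalf(S(-5));                   # approximately 0.007, close to 0
> plot(S(z), z = -6 .. 6);        # the familiar S-shaped curve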
We now note that in our particular case the sigmoid functions take the following form:

S(z1) = 1/(1 + exp(-z1))  and  S(z2) = 1/(1 + exp(-z2))

where z1 and z2 are linear functions of the input x, e.g.

z1 = w1*x + b1  and  z2 = w2*x + b2

We should also note that the output layer y(hat) is given by

y(hat) = v1*S(z1) + v2*S(z2)

where w1, b1, w2, b2, v1 and v2 are the parameter values that minimize the Norm, where the Norm is defined by:

Norm = sum( | y(t) - y(hat)(t) | , t = 1 .. T )
Note that the Norm is simply the sum of the absolute differences between y and the predicted value of y ( y(hat) ),
which means that we select the parameter values so that the error of our predictions is minimized.
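In Maple the model and the Norm can be written down directly. The sketch below assumes the parameterization given above (the names w1, b1, w2, b2, v1 and v2 are illustrative) and reuses the squasher S defined earlier; the procedure is called NormErr to avoid clashing with Maple's built-in Norm:

> yhat := (x, w1, b1, w2, b2, v1, v2) ->
>     v1*S(w1*x + b1) + v2*S(w2*x + b2):
> # sum of absolute prediction errors over a sample X, Y
> NormErr := (X, Y, w1, b1, w2, b2, v1, v2) ->
>     add(abs(Y[t] - yhat(X[t], w1, b1, w2, b2, v1, v2)), t = 1 .. nops(Y)):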
We can now load our dataset
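The original worksheet's data step is not reproduced in this excerpt; as a sketch, one way to read the series into Maple, assuming a hypothetical two-column text file data.txt holding the years 1990 to 2002 and the corresponding y values:

> data  := readdata("data.txt", 2):                 # hypothetical file: year, y
> years := [seq(data[t][1], t = 1 .. nops(data))]:
> y     := [seq(data[t][2], t = 1 .. nops(data))]: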
We now note that we will only use the data from 1990 to 1999 to train our neural network (i.e. find the optimal parameter values),
and then we will try to use these optimal parameter values to predict the data for y from 2000 to 2002.
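Assuming the series runs from 1990 to 2002, the split looks as follows. The network's actual input series is not shown in this excerpt, so a simple time index is used as the input x for illustration:

> xtrain := [seq(t, t = 1 .. 10)]:    # illustrative input: a time index for 1990-1999
> ytrain := y[1 .. 10]:               # training targets, 1990-1999
> xtest  := [seq(t, t = 11 .. 13)]:   # 2000-2002
> ytest  := y[11 .. 13]:              # held-out targets for the forecast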
We can now write a procedure that will accomplish this as follows:
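The original procedure is not reproduced in this excerpt; a sketch of the estimation step using Maple's Optimization package, building the objective from NormErr above (the abs() terms make the objective nonsmooth, so the derivative-free nonlinearsimplex method is the safer choice):

> with(Optimization):
> obj := NormErr(xtrain, ytrain, w1, b1, w2, b2, v1, v2):   # the Norm on the training data
> sol := NLPSolve(obj,
>                 initialpoint = {w1 = 0.1, b1 = 0.1, w2 = 0.1,
>                                 b2 = 0.1, v1 = 0.1, v2 = 0.1},
>                 method = nonlinearsimplex);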
These are the locally optimal parameter values that minimize the Norm. We can now do some forecasting:
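Finally, a sketch of the forecast itself: NLPSolve returns a list whose second element holds the optimal parameter values as equations, which can be substituted into the network and evaluated at the 2000-2002 inputs:

> pars := sol[2]:                     # e.g. [b1 = ..., v1 = ..., w1 = ..., ...]
> forecast := [seq(evalf(eval(yhat(xtest[t], w1, b1, w2, b2, v1, v2), pars)),
>                  t = 1 .. 3)];
> # compare forecast with ytest, the actual values for 2000-2002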