Is narnet <=> to feedforwardnet where the input(s) is(are) previous value(s) of the output?

Hi everyone.
I recently started my PhD and am therefore working with artificial neural networks (ANNs).
I'd like to try an architecture where each parameter (input/output) can have its own delay but first I decided to explore the NAR concept.
From what I understood, for NAR the function "narnet" lets you define the output (feedback) delays, and the function "preparets" applies those delays and structures the data vectors accordingly for training the ANN.
Is this equivalent to using the feedforwardnet function, where I prepare the input vector(s) as shifted version(s) of the output and remove the initial output values (so that all vectors have the same number of elements)?
Thanks in advance, Rodrigo

1 Comment

Inputs and outputs are variables, not parameters.
Delays, number of hidden nodes, weights and biases are parameters ... they don't change during the operation of the trained net.
All components of the input vector experience the same delays. Similarly for all components of the output feedback vector.
Nar and Narx open loop configurations have to be closed to become deployable.
The only deployable timeseries net that is structurally equivalent to fitnet(H) or feedforwardnet(H) is TIMEDELAYNET(0,H). However, the coding must be somewhat different because training from the same initial state of the RNG yields different results.
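That comparison can be sketched as follows (an illustrative sketch only, using the simplefit_dataset example data shipped with the toolbox and an arbitrary H = 10):

```matlab
% Sketch: TIMEDELAYNET(0,H) has the same structure as FEEDFORWARDNET(H),
% but the coding differs: timeseries nets take cell arrays, and the two
% trainings consume the random stream differently, so results differ
% even when started from the same RNG state.
[x, t] = simplefit_dataset;           % example data shipped with the toolbox
X = num2cell(x);  T = num2cell(t);    % timeseries nets expect cell arrays

rng(0);  netff = train(feedforwardnet(10), x, t);   % static net, matrices
rng(0);  nettd = train(timedelaynet(0, 10), X, T);  % zero-delay timeseries net
```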


 Accepted Answer

For deployment, NAR and NARNET must be in the CLOSELOOP configuration which, typically, takes very long to train because an error-free feedback signal is not available.
The use of the OPENLOOP configuration with known target feedback allows for much quicker training.
The performance of the openloop configuration can be simulated by a feedforward net using the target matrix as a delayed input. However, it would make absolutely no sense to do so, because that net is not deployable and cannot be converted to the deployable closeloop configuration.
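For illustration, the usual workflow can be sketched like this (assumptions: the simplenar_dataset example series shipped with the toolbox, arbitrary feedback delays 1:2 and 10 hidden neurons):

```matlab
T = simplenar_dataset;                 % example feedback series (cell array)
net = narnet(1:2, 10);                 % OPENLOOP NAR, feedback delays 1:2
[Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T);
net = train(net, Xs, Ts, Xi, Ai);      % quick: true targets feed the delays

netc = closeloop(net);                 % deployable CLOSELOOP configuration
[Xc, Xic, Aic] = preparets(netc, {}, {}, T);
yc = netc(Xc, Xic, Aic);               % multistep-ahead predictions
```

A plain feedforwardnet trained on shifted targets mimics only the open-loop step; it has no feedback loop to close, which is the point above.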
Hope this helps.
Thank you for formally accepting my answer
Greg

1 Comment

Thank you for the vocabulary correction, as it will allow me to be more clear in future topics.
Nonetheless, the main point of my question is whether a feedforward MLP, where I build the input by replicating and shifting the to-be-predicted variable vector accordingly, is equivalent to the NAR concept.
Example: the variable to be forecast is [1:10] and I want to apply a 1:2 delay:
input to MLP: [1:8;2:9], target: [3:10] (where each column is a sample).
Is this the same thing as narnet(1:2,hiddenNeurons) where the target is [1:10]?
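One way to check that intuition directly (a sketch, using the numbers above):

```matlab
T = num2cell(1:10);                    % target series [1:10] as a cell row
net = narnet(1:2, 10);                 % feedback delays 1:2, 10 hidden neurons
[Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T);
% Ts is T(3:10) = [3:10]; Xi holds T(1:2) as the initial delay states; Xs is
% T(3:10) fed back as input. The network's internal 1:2 tapped delay line
% then presents the t-1 and t-2 values at each step, i.e. the same
% [1:8;2:9] -> [3:10] arrangement built by hand above. So in OPEN LOOP the
% data seen is the same; the difference is that only the narnet can later
% be converted with closeloop into a deployable configuration.
```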



Asked: 24 Feb 2015
Edited: 26 Feb 2015
