Using trainbr with narxnet: the predicted value at time t seems to depend in part on the target value at time t. But we don't know that value yet!
Assume we have a time series several thousand points long where we know all of the inputs up to day t, and all of the correct target values up to day t-1.
In scenario one, we apply narxnet with trainbr to the entire series, and look at the predicted value at day t, call it P(t).
In scenario two, we apply narxnet with trainbr to the same series up to day t, and look at the predicted value at day t, call it P*(t).
In general, P(t) ~= P*(t). Why?
In fact, if we substitute in some fictitious value for the time series at day t before training, P*(t) can be wildly different from P(t).
However, also in scenario two, even if we supply the true value of the target at time t, P*(t) is still often not equal to P(t).
Is there a way to deal with this behavior? It seems odd indeed that the target value at time t, the very thing we are trying to predict, influences the network.
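For concreteness, here is roughly the setup I have in mind (X and T are cell-array series, and the delays, layer size, and variable names are only illustrative):
% X, T: 1-by-N cell arrays of exogenous inputs and targets (e.g. via con2seq)
net = narxnet(1:2, 1:2, 10);   % open-loop NARX net; delays/size illustrative
net.trainFcn = 'trainbr';      % Bayesian regularization
% Scenario one: train on the entire series
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
net1 = train(net, Xs, Ts, Xi, Ai);
Y1 = net1(Xs, Xi, Ai);         % the entry aligned with day t is P(t)
% Scenario two: train only on the series up to day t
% (T{t} holds whatever value gets substituted there, fictitious or true)
[Xs2, Xi2, Ai2, Ts2] = preparets(net, X(1:t), {}, T(1:t));
net2 = train(net, Xs2, Ts2, Xi2, Ai2);
Y2 = net2(Xs2, Xi2, Ai2);      % the last entry is P*(t)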
Answers (1)
Gagan Agarwal
on 26 Feb 2024
Hi Kevin,
The discrepancy between P(t) and P*(t) in the scenarios you've described can be attributed to the nature of the Nonlinear Autoregressive Network with Exogenous Inputs (NARX) and the training algorithm used, Bayesian regularization backpropagation (trainbr).
Here are some possible reasons for the discrepancy:
- Dynamic Nature of NARX Networks: NARX networks are dynamic systems that use delayed inputs and past target values to predict the next output. In open-loop (series-parallel) training, the measured target series itself is fed back into the network as an input, so every sample you include, including the one at day t, becomes part of the training data.
- Bayesian Regularization (trainbr): The trainbr algorithm updates the weights and biases according to Bayesian regularization, which minimizes a combination of an error term and a weight-regularization term over the whole training set. Adding, removing, or altering the sample at day t therefore changes the optimal weights, and with them the prediction P*(t).
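If you want the prediction for day t to be insulated from whatever value sits in the target series at day t, one option is to train in open loop only on data through day t-1 and then run the trained network one more step. A minimal sketch, assuming X and T are cell-array series and using placeholder delay and layer-size choices:
% Train in open loop on data through day t-1, so the sample at day t never
% enters the objective that trainbr minimizes.
net = narxnet(1:2, 1:2, 10);           % placeholders, not a recommendation
net.trainFcn = 'trainbr';
[Xs, Xi, Ai, Ts] = preparets(net, X(1:t-1), {}, T(1:t-1));
net = train(net, Xs, Ts, Xi, Ai);
% One-step-ahead prediction for day t: evaluate the trained open-loop net on
% data through day t. preparets needs a feedback value at day t, but with
% feedback delays of 1:2 that value is never tapped when computing the output
% for day t, so a dummy zero is harmless.
Tpad = [T(1:t-1) {0}];
[Xp, Xip, Aip] = preparets(net, X(1:t), {}, Tpad);
Yp = net(Xp, Xip, Aip);
Pt = Yp{end};                          % prediction for day t
With input and feedback delays of 1:2, the output for day t is computed only from values at days t-1 and t-2, so the prediction obtained this way does not move when you change what is stored at day t. The removedelay approach shown in the NARX documentation examples is another way to set up this kind of one-step-ahead prediction.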