How to train NARX neural network in closed loop
I am trying to use the Neural Network Toolbox to predict an internal temperature given a number of input conditions. I have used automatically generated code for a NARX network and made some small changes. I am aware that the typical workflow is to train in open loop and then convert to closed loop, but I would like to compare the results of that approach with training the network in closed-loop form from the start.
When the fourth input argument of the narxnet command is set to 'open', the network trains with no problems. When I change this to 'closed' I get the following error messages:
Error using network/subsasgn>network_subsasgn (line 91)
Index exceeds matrix dimensions.
Error in network/subsasgn (line 13)
net = network_subsasgn(net,subscripts,v,netname);
Error in narx_closed (line 28)
net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};
I'm not really sure what the problem is, as the Neural Network Toolbox User's Guide seems to suggest that this is all you need to do to create a closed-loop NARX network and train it directly. I have included my full code below:
%%Closed Loop NARX Neural Network
%%Load data and create input and output matrices
load('junior_class_data.mat');
U = [Outdoor_Temp, Position, Wind_Speed, Wind_Direction];
Y = [Zone_Temp];
inputSeries = tonndata(U,false,false);
targetSeries = tonndata(Y,false,false);
%%Create a Nonlinear Autoregressive Network with External Input
inputDelays = 0:2;
feedbackDelays = 1:2;
hiddenLayerSize = 10;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize,'closed');
%%Pre-Processing
% Choose Input and Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
% Customize input parameters at: net.inputs{i}.processParam
% Customize output parameters at: net.outputs{i}.processParam
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'divideblock';
% The property DIVIDEMODE set to TIMESTEP means that targets are divided
% into training, validation and test sets according to timesteps.
% For a list of data division modes type: help nntype_data_division_mode
net.divideMode = 'value'; % Divide up every value
net.divideParam.trainRatio = 80/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 5/100;
%%Training Function
% For a list of all training functions type: help nntrain
% Customize training parameters at: net.trainParam
net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
% Customize performance parameters at: net.performParam
net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
% Customize plot parameters at: net.plotParam
net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
'ploterrcorr', 'plotinerrcorr'};
%%Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
%%Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)
%%View the Network
view(net)
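For reference, the typical open-then-closed workflow I mentioned above would look roughly like this on the same data (my own sketch, not something I have validated on this data set; it reuses the inputDelays, feedbackDelays and hiddenLayerSize values defined above):
netOpen = narxnet(inputDelays,feedbackDelays,hiddenLayerSize,'open');
[xo,xio,aio,to] = preparets(netOpen,inputSeries,{},targetSeries);
netOpen = train(netOpen,xo,to,xio,aio);   % design the network in open loop
netClosed = closeloop(netOpen);           % convert the trained net to closed loop
[xc,xic,aic,tc] = preparets(netClosed,inputSeries,{},targetSeries);
yc = netClosed(xc,xic,aic);               % multi-step-ahead (closed-loop) prediction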
7 Comments
Greg Heath
on 14 Apr 2014
I can better help you if
1. You use a MATLAB data set
help nndata
doc nndata
2. You design an openloop net to prove that the narxnet inputs are sufficient for a good solution.
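For example, an openloop sanity check with a toolbox data set could look something like this (a sketch only, using the maglev example data and narxnet defaults; the normalized-MSE line is just one way to get an R^2 estimate):
[X,T] = maglev_dataset;                        % built-in example data (help nndata)
neto = narxnet(1:2,1:2,10);                    % openloop by default
[xo,xio,aio,to] = preparets(neto,X,{},T);
[neto,tro] = train(neto,xo,to,xio,aio);
yo = neto(xo,xio,aio);
NMSEo = mse(neto,to,yo)/var(cell2mat(to),1)    % R^2 = 1 - NMSEo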
Greg Heath
on 20 Apr 2014
I get 3 immediate errors:
U = [maglev_Inputs];
inputSeries = tonndata(U,false,false);
targetSeries = tonndata(Y,false,false);
Greg Heath
on 20 Apr 2014
Hmmm. Running my code I get almost perfect openloop performance and rotten closed-loop performance. In fact the closed-loop training doesn't get past 1 epoch.
No error messages but the stopping criterion is MAXIMUM MU REACHED
Hopefully, I'll solve the mystery.
Greg
Greg Heath
on 21 Apr 2014
The problem was eliminated by not using an input delay of 0. A rerun using the narxnet defaults yields R^2 ~ 1.0 for neto and R^2 ~ 0.82 for netc, where netc was initialized with neto. Training times are ~40 and ~140 sec, respectively.
I recommend reviewing my following NEWSGROUP post for a more detailed explanation of the procedure.
Obtaining a number of candidate designs for netc starting from random initial weights will take a ridiculously long time. Therefore, I will only design one to make sure that no error messages are obtained.
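In outline, the neto-to-netc procedure described above is roughly the following (a sketch only; maglev data and the narxnet default delays are assumed):
[X,T] = maglev_dataset;
neto = narxnet(1:2,1:2,10);                 % note: no zero input delay
[xo,xio,aio,to] = preparets(neto,X,{},T);
neto = train(neto,xo,to,xio,aio);           % openloop design
netc = closeloop(neto);                     % netc starts from neto's weights
[xc,xic,aic,tc] = preparets(netc,X,{},T);
[netc,trc] = train(netc,xc,tc,xic,aic);     % continue training in closeloop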
Joshua
on 22 Apr 2014
Muhammad Adil Raja
on 18 Mar 2020
Hi Greg,
The link to your newsgroup tutorial does not work anymore!
Best.
MA