Architecture of the neural network created by nftool?

I trained a 1-2-1 neural network on a cosine wave sampled at 0:0.1:2*pi, setting the nftool option for the number of hidden neurons to 2, and saved the network to the workspace as net1.
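For reference, a rough command-line equivalent of that setup (assuming fitnet with its defaults; the script nftool generates may differ in detail):
x = 0:0.1:2*pi;           % inputs
t = cos(x);               % cosine-wave targets
net1 = fitnet(2);         % fitting network with 2 hidden neurons (tansig hidden, purelin output)
net1 = train(net1, x, t);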
After that I changed the weights as:
net1.IW{1,1}=[1 5]';
net1.b{1,1}=[2 2]';
net1.LW{2,1}=[4 8];
net1.b{2,1}=7;
I didn't make any other changes to the network.
Then I executed the following code to check whether I had rightly understood the architecture of the NN:
sum(tansig(0.*net1.IW{1,1}'+net1.b{1,1}').*net1.LW{2,1})+net1.b{2,1}
sim(net1,0)
However, the two lines gave me different results:
ans =
18.5683
ans =
2.0855
Shouldn't the results of those two lines be the same? Is something wrong with the toolbox, or have I misunderstood the architecture of the generated network?

Accepted Answer

Mark Hudson Beale on 1 Mar 2012
You have correctly understood how the main part of the neural network works. However, the network's inputs and outputs also do some processing.
You can see the processing functions and settings in the processFcns and processSettings fields of the first input and the second layer's output:
net1.inputs{1}.processFcns
net1.inputs{1}.processSettings
net1.outputs{2}.processFcns
net1.outputs{2}.processSettings
The processSettings were set automatically when the network was first trained. For instance, MAPMINMAX's settings store the range of the training inputs X so that it can consistently map inputs into the range [-1, 1].
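A minimal sketch of how MAPMINMAX's apply/reverse modes reuse saved settings (with a stand-in data vector, not your network's actual settings):
x = 0:0.1:2*pi;
[xn, ps] = mapminmax(x);              % compute settings ps and map x into [-1, 1]
xn0 = mapminmax('apply', 0, ps);      % map a new input using the same settings
x0 = mapminmax('reverse', xn0, ps);   % undo the mapping; x0 is 0 again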
Type "help nnprocess" to see the list of processing functions you can assign to a network before training, beyond the ones your network already has.
If the first processing function for the input is MAPMINMAX, then you can process the input as follows:
x = 0;
for i = 1:numel(net1.inputs{1}.processFcns)
    x = feval(net1.inputs{1}.processFcns{i},'apply',x,net1.inputs{1}.processSettings{i});
end
At this point you can put X into your network equation above to calculate Y. Here is another notation for that calculation; BSXFUN makes it easy to add bias vectors to weighted matrices:
y = bsxfun(@plus, net1.LW{2,1}*tansig(bsxfun(@plus, net1.IW{1,1}*x, net1.b{1})), net1.b{2});
Then reverse process the output Y:
for i = numel(net1.outputs{2}.processFcns):-1:1
    y = feval(net1.outputs{2}.processFcns{i},'reverse',y,net1.outputs{2}.processSettings{i});
end
At that point Y should be the same as what you got from SIM(NET1,0).
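Putting the three steps together, here is a sketch of the complete manual forward pass (assuming the default processing setup on the single input and the layer-2 output):
x = 0;                                 % raw input, as in SIM(NET1,0)
% 1) forward-process the input
for i = 1:numel(net1.inputs{1}.processFcns)
    x = feval(net1.inputs{1}.processFcns{i},'apply',x,net1.inputs{1}.processSettings{i});
end
% 2) apply the 1-2-1 network equation
y = bsxfun(@plus, net1.LW{2,1}*tansig(bsxfun(@plus, net1.IW{1,1}*x, net1.b{1})), net1.b{2});
% 3) reverse-process the output
for i = numel(net1.outputs{2}.processFcns):-1:1
    y = feval(net1.outputs{2}.processFcns{i},'reverse',y,net1.outputs{2}.processSettings{i});
end
y - sim(net1,0)                        % should be numerically zero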
