Training the neural network and then maximising the outputs using a genetic algorithm
This is my code for training the network on my data:
% Build the training data: one column per sample; inputs and targets
% must have the same number of columns.
inputs  = [x1; x2; x3];
targets = [y1; y2];

% Create a fitting network with 10 hidden neurons.
hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize);

% Data division (note that these property names are case-sensitive).
net.divideParam.trainRatio = 85/100;
net.divideParam.valRatio   = 5/100;
net.divideParam.testRatio  = 10/100;

% Train and evaluate.
[net, tr] = train(net, inputs, targets);
outputs = net(inputs);
errors = gsubtract(outputs, targets);
performance = perform(net, targets, outputs);

% Inspect the network and the output-vs-target regression.
view(net);
figure, plotregression(targets, outputs);
Now, I want to maximise both outputs, y1 and y2, using the genetic algorithm (gamultiobj), but I don't understand how to obtain the objective function from the trained network above for use in the genetic algorithm.
Please also suggest a good training approach for these inputs and outputs.
Accepted Answer
Greg Heath
on 13 Jun 2015
Corrections:
1. Use x, t and y to represent input, target and output, respectively.
2. Your inputs and targets are incompatible because their numbers of columns differ.
3. Use the functional form of the net output in the fitness function:
y = repmat( b2, 1, N ) + LW * tanh( repmat( b1, 1, N ) + IW * x );
4. Either use the column-wise norm of the outputs, sqrt( y(1,:).^2 + y(2,:).^2 ), or its square as the GA objective to be maximized (a sketch follows this list).
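A minimal sketch of how this objective could be wired into ga/gamultiobj, assuming the trained net from the question is in the workspace. The bounds lb/ub and the choice to call the net object directly (which keeps fitnet's default mapminmax input/output processing, unlike the bare weight formula above) are my own assumptions, not part of the answer:
% Sketch only: 'net' is the trained network from the question.
% ga and gamultiobj minimize, so the outputs are negated to maximize them.
nvars = 3;                          % three input variables x1, x2, x3
lb = [0 0 0];                       % placeholder lower bounds - use your data range
ub = [1 1 1];                       % placeholder upper bounds - use your data range

% Single objective (item 4): maximize the norm of the two outputs.
% ga supplies a 1-by-nvars row, so transpose it into the column the net expects.
fitnorm = @(x) -sqrt( sum( net(x').^2 ) );
xbest = ga(fitnorm, nvars, [], [], [], [], lb, ub);
ybest = net(xbest')                 % outputs at the best input found

% Multi-objective alternative with gamultiobj: maximize y1 and y2 separately.
fitboth = @(x) -( net(x') )';       % 1-by-2 row of negated outputs
[xpareto, ypareto] = gamultiobj(fitboth, nvars, [], [], [], [], lb, ub);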
Hope this helps.
*Thank you for formally accepting my answer*
Greg