Feedforward Network and Backpropagation
Hi,
I'm very new to MATLAB and neural networks. I've done a fair amount of reading (the neural network FAQ, the MATLAB user guide, LeCun, Hagan, various others) and feel like I have some grasp of the concepts - now I'm trying to get the practical side down. I am working through a neural net example/tutorial very similar to the Cancer Detection MATLAB example (<http://www.mathworks.co.uk/help/nnet/examples/cancer-detection.html?prodcode=NN&language=en>). In my case I am trying to achieve a binary classification with a 16-feature input set, and am evaluating the effect of varying the number of nodes in the single hidden layer on training and generalisation. For reference below, x (double) is my feature matrix and t is my target vector (binary); the training sample size is 200 and the test sample size is approx 3700.
My questions are: 1) I'm using the patternnet default 'tansig' in both the hidden and output layers with 'mapminmax' and 'trainlm'. I'm interpreting the output by thresholding at y >= 0.5. The MATLAB user guide suggests using 'logsig' to constrain the output to [0 1]. Should I change the output layer transfer function to 'logsig' or not? I've read some conflicting suggestions on this; 'softmax' is sometimes suggested, but it can't be used for training without configuring your own derivative function (which I don't feel confident in doing).
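To make question 1) concrete, here is a minimal sketch of what I mean by switching the output layer to 'logsig' - the hidden-layer size of 10 is just a placeholder, and x (16-by-N features) / t (1-by-N binary targets) are as described above:

```matlab
% Sketch: patternnet with the output layer switched to 'logsig' so the
% output is constrained to [0 1]. Hidden size (10) is a placeholder.
net = patternnet(10);                     % defaults: 'tansig' in both layers
net.layers{2}.transferFcn = 'logsig';     % output layer -> [0 1]
net.trainFcn = 'trainlm';
net = train(net, x, t);
y = net(x);
cls = y >= 0.5;                           % threshold as before
```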
2) The tutorial provides a training and a test dataset, directing the use of the full training set for training (i.e. dividetrain), and at the same time directs stopping training once the network achieves x% success in classifying patterns.
a) Is this an achievable goal without a validation set, or are these conflicting directions?
b) If achievable, how do I set 'trainParam.goal' to evaluate at x% success? Webcrawling has led me to the answer of setting performFcn = 'mse' and trainParam.goal = (1-x%)*var(t) - does this make sense (it seems to rely on mse = var(err))?
c) Assuming my intuition above is correct - is there an automated way of applying cross-validation to a neural network in MATLAB, or will I effectively have to program a loop?
d) Is there any point to this, or would a simple dividerand(200, 0.8, 0.2, 0.0) achieve the same thing?
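To frame b) and c), this is roughly what I have in mind - the 95% success rate and the 5 folds are example values, the goal formula is the var(t) heuristic mentioned above (not something I've verified), and cvpartition comes from the Statistics Toolbox:

```matlab
% b) Full-set training with an MSE goal derived from a target success
%    rate; relies on the unverified heuristic mse = (1 - x%)*var(t,1)
pct = 0.95;                                % example target success rate
net = patternnet(10);                      % hidden size is a placeholder
net.divideFcn = 'dividetrain';             % use all 200 samples for training
net.performFcn = 'mse';
net.trainParam.goal = (1 - pct) * var(t, 1);
net = train(net, x, t);

% c) No built-in cross-validation for patternnet, so a manual k-fold
%    loop with cvpartition (Statistics Toolbox); k = 5 is an example
k = 5;
cv = cvpartition(numel(t), 'KFold', k);
foldErr = zeros(k, 1);
for i = 1:k
    trIdx = training(cv, i);               % logical training indices
    teIdx = test(cv, i);                   % logical test indices
    net = patternnet(10);
    net.divideFcn = 'dividetrain';
    net = train(net, x(:, trIdx), t(trIdx));
    yte = net(x(:, teIdx)) >= 0.5;
    foldErr(i) = mean(yte ~= t(teIdx));    % fold classification error
end
cvErr = mean(foldErr);                     % cross-validated error estimate
```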
3) Is there an automated way in the nntoolbox of establishing the optimum number of nodes in the hidden layer?
Thanks in advance for any and all help
More Answers (2)
mangood UK
on 14 Apr 2013
3) Is there an automated way in the nntoolbox of establishing the optimum number of nodes in the hidden layer?
No - it depends on you. You could use a genetic algorithm to find it, but that is quite complex; start from a small number of hidden nodes and then increase as you like.
Greg Heath
on 16 Apr 2013
There is no MATLAB neural-network training algorithm that directly minimizes discontinuous classification percentage error rate (PctErr). Furthermore, there is no known analytic relationship between PctErr and the typical training goals ( MSE, MSEREG, MAE ) of the MATLAB training algorithms. Therefore, if a classification error rate goal is specified, train with a standard minimization function using an inner loop over random weight initializations and an outer loop over increasing number of hidden nodes. Calculate PctErr at the end of each inner loop.
You can either
1. Choose from a Ntrials X numH tabulation of results
2. Break at the end of an H = constant loop if the PctErr goal is achieved within that inner loop.
3. Break as soon as the PctErr goal is achieved.
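A sketch of the double loop described above, assuming the setup from the question (x, t, thresholding at 0.5); Ntrials, the range of H, and the 5% goal are example values:

```matlab
% Outer loop over hidden-layer sizes H, inner loop over random weight
% initializations; PctErr tabulated per trial (option 1), with a break
% as soon as the goal is met (option 3). All numbers are examples.
Ntrials = 10;  Hvec = 2:2:20;  goalPct = 5;
PctErr = nan(Ntrials, numel(Hvec));
for j = 1:numel(Hvec)
    for i = 1:Ntrials
        net = patternnet(Hvec(j));
        net = configure(net, x, t);
        net = init(net);                        % new random weights
        net = train(net, x, t);
        y = net(x) >= 0.5;
        PctErr(i, j) = 100 * mean(y ~= t);      % classification error rate
        if PctErr(i, j) <= goalPct, break, end  % goal achieved
    end
end
```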
Hope this helps.
Thank you for formally accepting my answer
Greg
2 Comments
Greg Heath
on 16 Apr 2013
I don't think the powers that be want you to post the same question in the NEWSGROUP and ANSWERS.
Greg
Dink
on 16 Apr 2013