Exporting model to classify new data

Hi,
I have attached the code I use to classify my data. I use 16 different models. What I want to do is the following:
  1. I want to save/export the model sort of like the Classification Learner app does in order to make predictions on new data.
  2. I want to make a ROC curve with AUC results for each of the models
How can I do that?

 Accepted Answer

1. Save: (assuming you want to save/export each classifier in a separate file) use save().
2. ROC curve: use perfcurve() and plot() with hold on;
% Linear SVM
% (initialize these accumulators once, before the cross-validation loop)
targetsLinSVM_all = [];
predsLinSVM_all = [];
tic
classificationLinearSVM = fitcsvm( ...
    trainingData(train,1:end-1), ...
    trainingData(train,end), ...
    'KernelFunction', 'linear', ...
    'PolynomialOrder', [], ...
    'KernelScale', 'auto', ...
    'BoxConstraint', 1, ...
    'Standardize', true, ...
    'ClassNames', [0; 1]);
[predsLinSVM,~] = predict(classificationLinearSVM, trainingData(test,1:end-1));
targetLinSVM = trainingData(test,end);
targetsLinSVM_all = [targetsLinSVM_all; squeeze(targetLinSVM)];
predsLinSVM_all = [predsLinSVM_all; squeeze(predsLinSVM)];
t1 = toc;
save('classificationLinearSVM.mat','classificationLinearSVM','-v7.3');
% you need to declare posclass (the positive class label, e.g. posclass = 1)
[~,scoresLinSVM] = resubPredict(fitPosterior(classificationLinearSVM));
[xLinSVM,yLinSVM,~,aucLinSVM] = perfcurve(trainingData(train,end), scoresLinSVM(:,2), posclass);
plot(xLinSVM,yLinSVM); hold on;
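To overlay the ROC curves of all 16 models on one figure with their AUC values, the same perfcurve()/plot() pattern can be repeated per model. A minimal sketch, assuming the score variables, the label vector targetsAll, and the model names below (all hypothetical):

```matlab
posclass = 1;                                        % assumed positive-class label
modelNames = {'Linear SVM', 'kNN', 'Random Forest'}; % hypothetical model names
scoreSets  = {scoresLinSVM(:,2), scoresKNN(:,2), scoresRF(:,2)}; % hypothetical scores
figure; hold on;
legendEntries = cell(1, numel(modelNames));
for k = 1:numel(modelNames)
    % targetsAll: true labels corresponding to each model's scores (assumed)
    [x, y, ~, auc] = perfcurve(targetsAll, scoreSets{k}, posclass);
    plot(x, y);
    legendEntries{k} = sprintf('%s (AUC = %.3f)', modelNames{k}, auc);
end
plot([0 1], [0 1], 'k--');                           % chance diagonal
xlabel('False positive rate'); ylabel('True positive rate');
legend([legendEntries, {'Chance'}], 'Location', 'southeast');
hold off;
```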
Hope this helps!

9 Comments

Uerm on 22 Dec 2019
Edited: Uerm on 22 Dec 2019
Hi,
The code seems to work. However, fitPosterior only works for SVM. I also use k-Nearest Neighbor and Random Forest and this function will not work on those classifiers. Are there "fitPosterior" versions for these classifiers as well?
I get the following error:
Undefined function 'fitPosterior' for input arguments of type 'ClassificationKNN'.
I have tried by simply removing fitPosterior for the kNN and Random Forest classifiers and it seems to work, but I am not sure that it is correctly implemented.
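Removing fitPosterior() is indeed the right idea there: for ClassificationKNN (and for ensemble classifiers), the second output of predict() already contains class scores, so no calibration step is needed before perfcurve(). A sketch with the variable names assumed from this thread:

```matlab
% predict() on a ClassificationKNN returns [labels, scores]; scores(:,2)
% is the posterior probability of the second class in ClassNames.
[predsKNN, scoresKNN] = predict(classificationKNN, trainingData(test,1:end-1));
[xKNN, yKNN, ~, aucKNN] = perfcurve(trainingData(test,end), scoresKNN(:,2), posclass);
plot(xKNN, yKNN); hold on;
```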
Another thing: when we use 'save' to save each of the trained classifiers, how do we use them to make predictions on a new data set (code-wise)?
Ridwan Alam on 22 Dec 2019
Edited: Ridwan Alam on 22 Dec 2019
Sure. You can find more details about using perfcurve() here:
After save, you can simply load those classifiers just like any variable, and use predict() or model.predictFcn() as you prefer. More details here:
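A minimal sketch of the load-and-predict step, assuming newData is a matrix or table with the same predictor columns (in the same order) as the training data:

```matlab
% load() returns a struct whose fields are the saved variables
loaded = load('classificationLinearSVM.mat');
mdl = loaded.classificationLinearSVM;
% Predict labels (and scores) for previously unseen data
[newLabels, newScores] = predict(mdl, newData);
```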
Cheers, it helped a lot! Is there a difference between saving the model in the for loop or after the loop? Will it make a difference? I have the save function of each model inside the loop now.
Good question. That depends entirely on the purpose of the loop. If the loop is meant to find the best-performing model among these different types, you don't need to save the models in every iteration, only their performance metrics. After the loop, compare those results, find the best model, retrain that kind, and save it. But if the purpose is different and you want all intermediate models, you can save those inside the loop. Good luck!
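The "keep only per-fold performance, then retrain and save once" pattern described above could look like this sketch (X, labels, and the SVM settings are assumptions, not the asker's exact code):

```matlab
cv = cvpartition(labels, 'KFold', 10);       % 10-fold split, as in the thread
acc = zeros(cv.NumTestSets, 1);
for fold = 1:cv.NumTestSets
    mdl = fitcsvm(X(cv.training(fold),:), labels(cv.training(fold)), ...
                  'KernelFunction', 'linear', 'Standardize', true);
    preds = predict(mdl, X(cv.test(fold),:));
    acc(fold) = mean(preds == labels(cv.test(fold)));   % keep performance only
end
fprintf('Mean CV accuracy: %.3f\n', mean(acc));
% Retrain once on all training data and save a single final model:
finalModel = fitcsvm(X, labels, 'KernelFunction', 'linear', 'Standardize', true);
save('finalModel.mat', 'finalModel', '-v7.3');
```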
Uerm on 5 Jan 2020
Edited: Uerm on 5 Jan 2020
Hi again,
I just want to save the "full" models which will be used to classify new data. It should do the same as the "Export Model" button when using the Classification Learner app.
Note: It seems to save it correctly. When I load the exported model and look at the predictors (features) and response (labels), the number of elements in each is approximately 90% of the input data, which makes sense, since it trains on 90% of the input and tests on the remaining 10% (10-fold cross-validation).
Another thing: The way I have used tic and toc... Will they only show the elapsed time for one intermediate result? I want the elapsed time for each individual classifier (full models).
As far as I remember, your loop iterates over the folds of the cross-validation, right? In that case, if you put the save() command inside the loop, it will keep overwriting every iteration, and at the end you will only have the model (of each kind, e.g. SVM, random forest, etc.) trained during the last iteration. That would be the same if you use save() outside the loop, since you are using the same model name in each iteration. Hope this makes sense.
About tic-toc: if you want to see the amount of time it takes to train each model, put the toc before the predict() part. Otherwise, you are getting time difference including time to predict and squeeze and so on.
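The tic/toc placement described above could be sketched like this, using a timer handle so nested timings don't interfere (the training call and variable names are hypothetical):

```matlab
tStart = tic;
mdl = fitcknn(Xtrain, ytrain);   % hypothetical training call
trainTime = toc(tStart);         % elapsed training time only
preds = predict(mdl, Xtest);     % prediction time is NOT included in trainTime
```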
Ok, it makes total sense. Regarding the save part... Does it mean that I have to have 10 save commands for each model since it keeps over-writing?
Ridwan Alam on 6 Jan 2020
Edited: Ridwan Alam on 6 Jan 2020
Say, for the SVM models, if you really want to save the 10 SVM models from each iteration, you can give them a new name in each iteration (e.g. mySvm_1, mySvm_2, ...) and save all of them after exiting the loop. But, again, I don't think it's very common to save the intermediate models from all the iterations of the cross-validation. Good luck.
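Alternatively, a sketch of giving each fold's saved file a distinct name inside the loop, so later iterations do not overwrite earlier ones (fold and model variable names assumed):

```matlab
% Inside the cross-validation loop, after training classificationLinearSVM:
fname = sprintf('classificationLinearSVM_fold%02d.mat', fold);
save(fname, 'classificationLinearSVM', '-v7.3');
```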
Btw, if you liked the conversation, please vote up the response. Thanks!
Hi Ridwan,
Thanks a lot, I voted up the response!
I have run into another problem (I have attached the code). When I plot the confusion matrix and ROC curve, it seems that the results from the training and validation are combined into one. What I mean is that, for instance, when the numbers in the confusion matrix are summed, the total is exactly equal to all the samples (training samples + validation samples). I want to have two confusion matrices (and two ROC curves, and thus 2 AUC values) for every model: one for the training and one for the validation. Is that possible?
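Separating the two evaluations is possible by scoring the training and validation splits independently. A sketch of what that could look like for one fold (mdl, the split variables, and posclass are assumptions):

```matlab
% Separate confusion matrices for training and validation splits
predsTrain = predict(mdl, Xtrain);
predsVal   = predict(mdl, Xval);
cmTrain = confusionmat(ytrain, predsTrain);   % training confusion matrix
cmVal   = confusionmat(yval,   predsVal);     % validation confusion matrix

% Separate ROC curves and AUC values from each split's scores
[~, scoresTrain] = predict(mdl, Xtrain);
[~, scoresVal]   = predict(mdl, Xval);
[xT, yT, ~, aucTrain] = perfcurve(ytrain, scoresTrain(:,2), posclass);
[xV, yV, ~, aucVal]   = perfcurve(yval,   scoresVal(:,2),   posclass);
```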


Release: R2019b
Asked: 18 Dec 2019
Commented: 10 Jan 2020
