How can I assess the reliability of my machine learning model on unseen data?
MathWorks Support Team on 14 Jun 2018
I have a model of a system that can detect certain abnormalities and then react accordingly.
Now I want to analyze how reliable our model is at predicting these abnormalities.
So far, I have manually analyzed certain situations and assessed whether the system reacted correctly or incorrectly. This is very time-consuming, and I would like to know how we could adopt supervised machine learning to train a neural network to make this assessment automatically.
Accepted Answer
MathWorks Support Team on 14 Jun 2018
In general, to create a machine learning model, you would:
1. Collect data.
2. Split the data into training, test and validation sets.
3. Train and tune a machine learning model using the training and test sets.
4. Validate your trained model on the validation set to verify that it can still reliably predict "unseen" data.
5. Use the model to predict real world data.
From the workflow above, you can see that we can only assess the accuracy of the model (before actually using it in the real world) by evaluating the predictions it outputs on the validation set.
If the predicted values on the validation set are within some reasonable accuracy that you desire, then you can use the model to predict real-world data, under the assumption that it will also predict these new data with the same level of accuracy.
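For example, here is a minimal MATLAB sketch of steps 2-4, assuming your labeled examples live in a table named data with a categorical response column data.Label (both names, and the choice of a classification tree, are illustrative placeholders, not part of this answer):

% Minimal sketch: split the labeled data, train a model, and check it
% on the held-out validation set. "data" and "Label" are hypothetical.
rng(0);                                      % for reproducibility
cv      = cvpartition(height(data), 'HoldOut', 0.15);
trainTb = data(training(cv), :);             % step 2: training rows
valTb   = data(test(cv), :);                 %         held-out validation rows
mdl     = fitctree(trainTb, 'Label');        % step 3: fit a classification tree
predLabels = predict(mdl, valTb);            % step 4: predict the held-out rows
valAcc  = mean(predLabels == valTb.Label);   % fraction predicted correctly
fprintf('Validation accuracy: %.1f%%\n', 100*valAcc);

If valAcc clears whatever threshold you consider acceptable, you would then apply mdl to real-world data with that accuracy as your expectation.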
Yet, the validation set itself had to first be manually collected and labeled.
Furthermore, it is counter-intuitive (if not impossible) to *automatically* assess the accuracy of your model on new, unseen (and unlabeled) data. If you had another model that could assess whether your existing model is predicting new data correctly or incorrectly, you would certainly have used that model instead.
More Answers (1)
Greg Heath on 22 Jun 2018
THE ABOVE IS INCORRECT FOR NEURAL NETWORKS. FOR NNs:
DESIGN = TRAIN + VALIDATE
1. Collect data.
2. a. Split the data into DESIGN and TEST subsets.
b. Split the design data into TRAINING and VALIDATION subsets.
i. Weight values are calculated from the TRAINING subset.
ii. The VALIDATION subset is used to verify good performance on NONTRAINING DATA via "EARLY STOPPING": if, DURING TRAINING, VALIDATION subset performance decreases for 6 (the default) CONSECUTIVE EPOCHS, TRAINING IS STOPPED!
FOR OBVIOUS REASONS I prefer the term "VALIDATION STOPPING"!
3. UNBIASED ESTIMATES of performance are obtained using the TEST subset, which, of course, was not used in any way for design.
4. MATLAB default values for the trn/val/tst split are 0.7/0.15/0.15
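For example, a minimal sketch of this split and of validation stopping with the (2018-era) Neural Network Toolbox, using the shipped simplefit_dataset and an arbitrary 10-neuron hidden layer (both choices are illustrative assumptions, not part of the answer above):

% Minimal sketch of the DESIGN/TEST workflow. The dataset and the
% 10-neuron hidden layer are arbitrary illustrative choices.
[x, t] = simplefit_dataset;                 % small sample dataset shipped with MATLAB
net = feedforwardnet(10);

% Step 2: 'dividerand' with the 0.7/0.15/0.15 defaults made explicit.
net.divideFcn              = 'dividerand';
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;

% Validation stopping: training halts if validation performance fails
% to improve for 6 consecutive epochs (max_fail, default 6).
net.trainParam.max_fail = 6;

% Weights are fit on the TRAINING subset only; the VALIDATION subset
% just decides when to stop. The TEST subset is untouched during design.
[net, tr] = train(net, x, t);

% Step 3: unbiased performance estimate from the held-out TEST subset.
y       = net(x);
testMSE = perform(net, t(tr.testInd), y(tr.testInd));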
Hope this helps
Thank you for formally accepting my answer
Greg