Use inherently interpretable classification models, such as linear models, decision trees, and generalized additive models, or use interpretability features to interpret complex classification models that are not inherently interpretable.
To learn how to interpret classification models, see Interpret Machine Learning Models.
Interpret Trained Model
Local Interpretable Model-Agnostic Explanations (LIME)
lime | Local interpretable model-agnostic explanations (LIME) (Since R2020b)
fit | Fit simple model of local interpretable model-agnostic explanations (LIME) (Since R2020b)
plot | Plot results of local interpretable model-agnostic explanations (LIME) (Since R2020b)
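A minimal sketch of the LIME workflow with these three functions, assuming the `fisheriris` sample data shipped with MATLAB and a `fitcensemble` model as the complex classifier to explain:

```matlab
% Explain one prediction of a complex classifier with LIME
load fisheriris
mdl = fitcensemble(meas, species);    % complex model to interpret

explainer = lime(mdl);                % create a LIME explainer from the model
queryPoint = meas(1,:);               % observation to explain
explainer = fit(explainer, queryPoint, 2);  % fit a simple model using 2 predictors
plot(explainer)                       % bar chart of local predictor importance
```

The third input to `fit` is the number of important predictors to include in the simple (interpretable) model.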
Shapley Values
shapley | Shapley values (Since R2021a)
fit | Compute Shapley values for query point (Since R2021a)
plot | Plot Shapley values (Since R2021a)
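The Shapley workflow mirrors the LIME one. A minimal sketch, again assuming the `fisheriris` sample data and an ensemble classifier:

```matlab
% Compute and plot Shapley values for a single query point
load fisheriris
mdl = fitcensemble(meas, species);    % trained model containing predictor data

explainer = shapley(mdl);             % create a shapley object
explainer = fit(explainer, meas(1,:));  % Shapley values for this query point
plot(explainer)                       % bar chart of per-predictor contributions
```

The resulting plot shows how much each predictor contributes to the deviation of the prediction for the query point from the average prediction.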
- Interpret Machine Learning Models
Explain model predictions using the lime and shapley objects and the plotPartialDependence function.
- Shapley Values for Machine Learning Model
Compute Shapley values for a machine learning model using an interventional algorithm or a conditional algorithm.
- Introduction to Feature Selection
Learn about feature selection algorithms and explore the functions available for feature selection.
- Explain Model Predictions for Classifiers Trained in Classification Learner App
To understand how trained classifiers use predictors to make predictions, use global and local interpretability tools, such as partial dependence plots, LIME values, and Shapley values.
- Use Partial Dependence Plots to Interpret Classifiers Trained in Classification Learner App
Determine how features are used in trained classifiers by creating partial dependence plots.
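Outside the app, the same analysis is available at the command line. A short sketch, assuming the `fisheriris` sample data and a decision tree classifier (the class label and predictor name below are illustrative):

```matlab
% Partial dependence of a class posterior probability on one predictor
load fisheriris
mdl = fitctree(meas, species, ...
    'PredictorNames', {'SL','SW','PL','PW'});

% How the posterior probability of class 'virginica' varies with predictor 'PL'
plotPartialDependence(mdl, 'PL', 'virginica')
```

For classification models, `plotPartialDependence` takes the class label of interest as the third input; for regression models it is omitted.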
- Train Generalized Additive Model for Binary Classification
Train a generalized additive model (GAM) with optimal parameters, assess predictive performance, and interpret the trained model.
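A condensed command-line sketch of that workflow, assuming the `ionosphere` sample data (`X`, `Y`) for binary classification:

```matlab
% Train, assess, and interpret a GAM for binary classification
load ionosphere                    % X: predictors, Y: class labels 'b'/'g'
mdl = fitcgam(X, Y);               % GAM with default shape functions

cvmdl = crossval(mdl);             % 10-fold cross-validation
err = kfoldLoss(cvmdl);            % estimated misclassification rate

% Interpret a single prediction: local effect of each predictor
plotLocalEffects(mdl, X(1,:))
```

Because a GAM is a sum of per-predictor shape functions, each predictor's contribution to a prediction can be read off directly, which is what makes the model inherently interpretable.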
- Train Decision Trees Using Classification Learner App
Create and compare classification trees, and export trained models to make predictions for new data.
- Classification Using Nearest Neighbors
Categorize data points based on their distance to points in a training data set, using a variety of distance metrics.
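A minimal nearest-neighbor sketch, assuming the `fisheriris` sample data; the neighbor count and distance metric below are illustrative choices:

```matlab
% k-nearest-neighbor classification with an explicit distance metric
load fisheriris
mdl = fitcknn(meas, species, ...
    'NumNeighbors', 5, ...         % classify by majority vote of 5 neighbors
    'Distance', 'euclidean');      % other metrics: 'cityblock', 'cosine', ...

label = predict(mdl, [5.9 3.0 5.1 1.8]);  % classify a new observation
```

Changing the `'Distance'` name-value argument swaps the metric without altering the rest of the workflow.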