Use inherently interpretable classification models, such as linear models, decision trees, and generalized additive models, or use interpretability features to explain the predictions of complex classification models that are not inherently interpretable.
To learn how to interpret classification models, see Interpret Machine Learning Models.
Interpret Trained Model
Local Interpretable Model-Agnostic Explanations (LIME)
|`lime`|Local interpretable model-agnostic explanations (LIME)|
|`fit`|Fit simple model of local interpretable model-agnostic explanations (LIME)|
|`plot`|Plot results of local interpretable model-agnostic explanations (LIME)|
Explain model predictions using LIME.
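To make the LIME idea concrete, here is a minimal language-neutral sketch in Python (not the MATLAB `lime` object this page links to): sample perturbations around a query point, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as local feature importances. The `black_box` function and all parameter values are illustrative assumptions.

```python
import math
import random

# Illustrative black-box model to explain (stand-in for any trained classifier's score).
def black_box(x1, x2):
    return 1.0 / (1.0 + math.exp(-(x1 * x1 - 2.0 * x2)))

def solve3(a, b):
    # Gauss-Jordan elimination for a 3x3 linear system a @ x = b.
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def lime_explain(f, query, n_samples=500, width=0.75):
    # Sample perturbations around the query point, weight each by a
    # proximity kernel, and fit the local surrogate f(x) ~ b0 + b1*x1 + b2*x2.
    random.seed(0)
    rows, targets, weights = [], [], []
    for _ in range(n_samples):
        x1 = query[0] + random.gauss(0, 1)
        x2 = query[1] + random.gauss(0, 1)
        d2 = (x1 - query[0]) ** 2 + (x2 - query[1]) ** 2
        rows.append([1.0, x1, x2])
        targets.append(f(x1, x2))
        weights.append(math.exp(-d2 / (width ** 2)))
    # Weighted normal equations: (X^T W X) beta = X^T W y.
    a = [[sum(w * r[i] * r[j] for r, w in zip(rows, weights))
          for j in range(3)] for i in range(3)]
    b = [sum(w * r[i] * t for r, t, w in zip(rows, targets, weights))
         for i in range(3)]
    return solve3(a, b)

b0, b1, b2 = lime_explain(black_box, (1.0, 0.5))
print(b1, b2)  # local coefficients: influence of x1 and x2 near the query
```

Near the query point the model score rises with x1 and falls with x2, so the surrogate's coefficients come out with those signs; that sign-and-magnitude summary is what a LIME explanation reports.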
Compute Shapley values for a machine learning model using two algorithms: kernelSHAP and the extension to kernelSHAP.
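For intuition about what those algorithms approximate, the following Python sketch computes exact Shapley values by enumerating all feature coalitions, replacing "absent" features with background values (the interventional convention). KernelSHAP approximates this same quantity with a weighted regression instead of full enumeration, which needs 2^n model evaluations. The toy `model` and `BACKGROUND` point are assumptions for illustration.

```python
import itertools
import math

# Toy model with an interaction between features 1 and 2.
def model(x):
    return 3.0 * x[0] + 2.0 * x[1] * x[2]

BACKGROUND = [0.0, 0.0, 0.0]   # reference values for "absent" features

def value(coalition, x):
    # Evaluate the model with features outside the coalition replaced
    # by their background values.
    z = [x[i] if i in coalition else BACKGROUND[i] for i in range(len(x))]
    return model(z)

def shapley_values(x):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in itertools.combinations(others, k):
                # Shapley weight for a coalition of size k.
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                phi[i] += w * (value(set(s) | {i}, x) - value(set(s), x))
    return phi

x = [1.0, 2.0, 3.0]
phi = shapley_values(x)
print(phi)  # per-feature contributions; the interaction splits evenly
print(sum(phi), model(x) - model(BACKGROUND))  # efficiency property
```

The efficiency property in the last line, that the contributions sum exactly to the gap between the prediction and the background prediction, is what makes Shapley values a complete attribution of a single prediction.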
Learn about feature selection algorithms and explore the functions available for feature selection.
Train a generalized additive model (GAM) with optimal parameters, assess predictive performance, and interpret the trained model.
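What makes a GAM interpretable is that each feature gets its own shape function, fitted so the predictions are a sum of per-feature effects. The Python sketch below, with assumed synthetic data and a crude binned smoother rather than the spline smoothers a real GAM uses, shows the backfitting idea: re-estimate each shape function on the residuals of the others until the additive fit stabilizes.

```python
import random

random.seed(0)
N = 400
X1 = [random.uniform(-1, 1) for _ in range(N)]
X2 = [random.uniform(-1, 1) for _ in range(N)]
# Additive ground truth: a curved effect of x1 plus a linear effect of x2.
Y = [x1 ** 2 + 2.0 * x2 + random.gauss(0, 0.1) for x1, x2 in zip(X1, X2)]

def binned_smoother(x, resid, bins=10):
    # Piecewise-constant smoother: mean residual within each bin of x.
    lo, hi = min(x), max(x)
    which = [min(bins - 1, int((v - lo) / (hi - lo) * bins)) for v in x]
    total, count = [0.0] * bins, [0] * bins
    for b, r in zip(which, resid):
        total[b] += r
        count[b] += 1
    mean = [t / c if c else 0.0 for t, c in zip(total, count)]
    return [mean[b] for b in which]

# Backfitting: alternate between the two shape functions.
mu = sum(Y) / N
f1 = [0.0] * N
f2 = [0.0] * N
for _ in range(10):
    f1 = binned_smoother(X1, [y - mu - b for y, b in zip(Y, f2)])
    f1 = [v - sum(f1) / N for v in f1]   # center for identifiability
    f2 = binned_smoother(X2, [y - mu - a for y, a in zip(Y, f1)])
    f2 = [v - sum(f2) / N for v in f2]

# Interpretation: each shape function is one feature's partial effect.
hi2 = max(range(N), key=lambda i: X2[i])
lo2 = min(range(N), key=lambda i: X2[i])
print(f2[hi2] - f2[lo2])   # roughly 2 * (max(X2) - min(X2)): the linear effect
```

Plotting f1 against X1 and f2 against X2 would recover the quadratic and linear shapes directly, which is exactly how a trained GAM is interpreted.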
Create and compare classification trees, and export trained models to make predictions for new data.
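A classification tree is interpretable because every prediction is a short chain of threshold tests. This minimal Python sketch (a generic greedy CART-style grower on assumed toy data, not a production implementation) builds a tree by choosing the split that minimizes weighted Gini impurity, then uses it to classify new points.

```python
from collections import Counter

# Toy training data: (feature vector, class label).
DATA = [((2.0, 3.0), "a"), ((1.0, 2.0), "a"), ((3.0, 1.0), "a"),
        ((6.0, 5.0), "b"), ((7.0, 8.0), "b"), ((8.0, 6.0), "b")]

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows):
    # Try every (feature, threshold) pair; keep the split with the
    # lowest weighted Gini impurity of the two children.
    best = None
    for f in range(len(rows[0][0])):
        for x, _ in rows:
            t = x[f]
            left = [r for r in rows if r[0][f] <= t]
            right = [r for r in rows if r[0][f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini([l for _, l in left]) +
                     len(right) * gini([l for _, l in right])) / len(rows)
            if best is None or score < best[0]:
                best = (score, f, t, left, right)
    return best

def grow(rows):
    labels = [l for _, l in rows]
    split = best_split(rows)
    if len(set(labels)) == 1 or split is None:
        return Counter(labels).most_common(1)[0][0]   # leaf: majority class
    _, f, t, left, right = split
    return (f, t, grow(left), grow(right))

def predict(node, x):
    # Follow threshold tests until a leaf label is reached.
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if x[f] <= t else right
    return node

tree = grow(DATA)
print(predict(tree, (2.5, 2.5)), predict(tree, (7.5, 7.0)))
```

Because the fitted tree is just nested (feature, threshold) tuples, it can be printed or exported as human-readable rules, which is what makes tree predictions easy to audit.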
Categorize data points based on their distance to points in a training data set, using a variety of distance metrics.
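The nearest-neighbor idea can be sketched in a few lines of Python (an illustrative toy, with a made-up training set and two of the common distance metrics): rank the training points by distance to the query and take a majority vote among the k closest.

```python
import math
from collections import Counter

# Tiny labeled training set: (features, class label).
TRAIN = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"),
         ((4.0, 4.2), "b"), ((3.8, 4.0), "b")]

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def cityblock(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def knn_classify(x, k=3, metric=euclidean):
    # Rank training points by distance to x and vote among the k nearest.
    nearest = sorted(TRAIN, key=lambda pt: metric(x, pt[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_classify((1.1, 1.0)))                    # votes: a, a, b -> "a"
print(knn_classify((3.9, 4.1), metric=cityblock))  # votes: b, b, a -> "b"
```

Swapping `metric` changes which points count as "near" without touching the rest of the algorithm, which is why nearest-neighbor classifiers expose the distance metric as a tunable choice.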