CompactClassificationECOC

Compact multiclass model for support vector machines (SVMs) and other classifiers

Description

CompactClassificationECOC is a compact version of the multiclass error-correcting output codes (ECOC) model. The compact classifier does not include the data used for training the multiclass ECOC model. Therefore, you cannot perform certain tasks, such as cross-validation, using the compact classifier. Use a compact multiclass ECOC model for tasks such as classifying new data (predict).

Creation

You can create a CompactClassificationECOC model in two ways:

• Create a compact ECOC model from a trained ClassificationECOC model by using the compact object function.

• Create a compact ECOC model by using the fitcecoc function and specifying the 'Learners' name-value pair argument as 'linear', 'kernel', a templateLinear or templateKernel object, or a cell array of such objects.

Properties


After you create a CompactClassificationECOC model object, you can use dot notation to access its properties. For an example, see Train and Cross-Validate ECOC Classifier.

ECOC Properties

Trained binary learners, specified as a cell vector of model objects. The number of binary learners depends on the number of classes in Y and the coding design.

The software trains BinaryLearner{j} according to the binary problem specified by CodingMatrix(:,j). For example, for multiclass learning using SVM learners, each element of BinaryLearners is a CompactClassificationSVM classifier.

Data Types: cell

Binary learner loss function, specified as a character vector representing the loss function name.

The default BinaryLoss value depends on the score ranges returned by the binary learners. This table identifies what some default BinaryLoss values are when you use the default score transform (ScoreTransform property of the model is 'none').

Assumption | Default Value
All binary learners are any of the following: classification decision trees, discriminant analysis models, k-nearest neighbor models, linear or kernel classification models of logistic regression learners, or naive Bayes models. | 'quadratic'
All binary learners are SVMs or linear or kernel classification models of SVM learners. | 'hinge'
All binary learners are ensembles trained by AdaBoostM1 or GentleBoost. | 'exponential'
All binary learners are ensembles trained by LogitBoost. | 'binodeviance'
You specify to predict class posterior probabilities by setting 'FitPosterior',true in fitcecoc. | 'quadratic'
Binary learners are heterogeneous and use different loss functions. | 'hamming'

To check the default value, use dot notation to display the BinaryLoss property of the trained model at the command line.

To potentially increase accuracy, specify a binary loss function other than the default during a prediction or loss computation by using the BinaryLoss name-value argument of predict or loss. For more information, see Binary Loss.

Data Types: char
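The named losses in the table correspond to standard binary loss formulas. As an illustrative sketch (in Python rather than MATLAB, with the conventional scalings from the ECOC literature, so treat the exact constants as an assumption), where y is a dichotomous class label in {–1, +1} and s is the binary-learner score:

```python
import numpy as np

# Sketch of the binary loss formulas behind the defaults in the table above.
# y is the coding-matrix entry in {-1, +1}; s is the binary-learner score.
binary_losses = {
    "hinge":        lambda y, s: np.maximum(0.0, 1.0 - y * s) / 2.0,
    "exponential":  lambda y, s: np.exp(-y * s) / 2.0,
    "binodeviance": lambda y, s: np.log1p(np.exp(-2.0 * y * s)) / (2.0 * np.log(2.0)),
    "quadratic":    lambda y, s: (1.0 - y * (2.0 * s - 1.0)) ** 2 / 2.0,
    "hamming":      lambda y, s: (1.0 - np.sign(y * s)) / 2.0,
}

print(binary_losses["hinge"](1, 2.0))  # a confidently correct SVM score incurs zero loss
```

Note that 'quadratic' assumes scores in [0, 1] (posterior probabilities), while 'hinge' and 'exponential' assume margin-style scores, which is why the default tracks the score range of the binary learners.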

Class assignment codes for the binary learners, specified as a numeric matrix. CodingMatrix is a K-by-L matrix, where K is the number of classes and L is the number of binary learners.

The elements of CodingMatrix are –1, 0, and 1, and the values correspond to dichotomous class assignments. This table describes how learner j assigns observations in class i to a dichotomous class corresponding to the value of CodingMatrix(i,j).

Value | Dichotomous Class Assignment
–1 | Learner j assigns observations in class i to a negative class.
0 | Before training, learner j removes observations in class i from the data set.
1 | Learner j assigns observations in class i to a positive class.

Data Types: double | single | int8 | int16 | int32 | int64
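The coding matrix also drives prediction: the predicted class is the one whose row of CodingMatrix minimizes the average binary loss over the binary learners. A minimal Python sketch of this decoding rule, using hinge loss and a hypothetical one-versus-one coding matrix and scores (an illustration, not the MATLAB implementation):

```python
import numpy as np

def hinge_loss(y, s):
    # Binary hinge loss for SVM-style scores: max(0, 1 - y*s) / 2
    return np.maximum(0.0, 1.0 - y * s) / 2.0

def ecoc_predict(M, scores, binary_loss=hinge_loss):
    # M: K-by-L coding matrix with entries in {-1, 0, 1} (rows = classes).
    # scores: length-L vector of binary-learner scores for one observation.
    K, _ = M.shape
    losses = np.empty(K)
    for k in range(K):
        used = M[k] != 0  # learners whose binary problem involved class k
        losses[k] = binary_loss(M[k, used], scores[used]).sum() / np.abs(M[k]).sum()
    return int(np.argmin(losses))  # index of the class with minimum average loss

# Hypothetical one-versus-one design for 3 classes (columns: 1v2, 1v3, 2v3)
M = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]])
scores = np.array([2.0, 1.5, -0.3])  # hypothetical SVM scores for one observation
print(ecoc_predict(M, scores))       # → 0 (the first class minimizes average loss)
```

Zero entries contribute nothing to a row's loss, which is why learners that excluded a class during training do not vote on it.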

Binary learner weights, specified as a numeric row vector. The length of LearnerWeights is equal to the number of binary learners (length(Mdl.BinaryLearners)).

LearnerWeights(j) is the sum of the observation weights that binary learner j uses to train its classifier.

The software uses LearnerWeights to fit posterior probabilities by minimizing the Kullback-Leibler divergence. The software ignores LearnerWeights when it uses the quadratic programming method of estimating posterior probabilities.

Data Types: double | single

Other Classification Properties

Categorical predictor indices, specified as a vector of positive integers. CategoricalPredictors contains index values indicating that the corresponding predictors are categorical. The index values are between 1 and p, where p is the number of predictors used to train the model. If none of the predictors are categorical, then this property is empty ([]).

Data Types: single | double

Unique class labels used in training, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors. ClassNames has the same data type as the class labels Y. (The software treats string arrays as cell arrays of character vectors.) ClassNames also determines the class order.

Data Types: categorical | char | logical | single | double | cell

This property is read-only.

Misclassification costs, specified as a square numeric matrix. Cost has K rows and columns, where K is the number of classes.

Cost(i,j) is the cost of classifying a point into class j if its true class is i. The order of the rows and columns of Cost corresponds to the order of the classes in ClassNames.

Data Types: double
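In general, a cost matrix of this shape is used to choose the class with minimum expected misclassification cost given class posterior probabilities. The following is a generic Python sketch of that rule with a hypothetical cost matrix; it is not specific to how fitcecoc folds Cost into training the binary learners:

```python
import numpy as np

def min_expected_cost(posterior, cost):
    # posterior: length-K class posterior probabilities for one observation.
    # cost[i, j]: cost of predicting class j when the true class is i.
    expected = posterior @ cost   # expected cost of each candidate prediction
    return int(np.argmin(expected))

cost = np.array([[0, 1, 1],
                 [1, 0, 1],
                 [5, 1, 0]])      # hypothetical: mistaking class 3 for class 1 is expensive
posterior = np.array([0.2, 0.3, 0.5])
print(min_expected_cost(posterior, cost))  # → 2 (third class has lowest expected cost)
```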

Predictor names in order of their appearance in the predictor data, specified as a cell array of character vectors. The length of PredictorNames is equal to the number of variables in the training data X or Tbl used as predictor variables.

Data Types: cell

Expanded predictor names, specified as a cell array of character vectors.

If the model uses encoding for categorical variables, then ExpandedPredictorNames includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames is the same as PredictorNames.

Data Types: cell

This property is read-only.

Prior class probabilities, specified as a numeric vector. Prior has as many elements as the number of classes in ClassNames, and the order of the elements corresponds to the order of the classes in ClassNames.

fitcecoc incorporates prior probabilities differently among different types of binary learners.

Data Types: double

Response variable name, specified as a character vector.

Data Types: char

Score transformation function to apply to predicted scores, specified as a function name or function handle.

To change the score transformation function to function, for example, use dot notation.

• For a built-in function, enter this code and replace function with a value in the table.

Mdl.ScoreTransform = 'function';

Value | Description
"doublelogit" | 1/(1 + e^(–2x))
"invlogit" | log(x / (1 – x))
"ismax" | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to 0
"logit" | 1/(1 + e^(–x))
"none" or "identity" | x (no transformation)
"sign" | –1 for x < 0; 0 for x = 0; 1 for x > 0
"symmetric" | 2x – 1
"symmetricismax" | Sets the score for the class with the largest score to 1, and sets the scores for all other classes to –1
"symmetriclogit" | 2/(1 + e^(–x)) – 1

• For a MATLAB® function or a function that you define, enter its function handle.

Mdl.ScoreTransform = @function;

function must accept a matrix (the original scores) and return a matrix of the same size (the transformed scores).

Data Types: char | function_handle
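The built-in transforms are simple elementwise functions of the score matrix. A hedged Python sketch of a few of them, using the same formulas as the table above (an illustration of the math, not MATLAB code):

```python
import numpy as np

# Elementwise score transforms matching the formulas in the table above.
transforms = {
    "logit":          lambda x: 1.0 / (1.0 + np.exp(-x)),
    "doublelogit":    lambda x: 1.0 / (1.0 + np.exp(-2.0 * x)),
    "symmetric":      lambda x: 2.0 * x - 1.0,
    "symmetriclogit": lambda x: 2.0 / (1.0 + np.exp(-x)) - 1.0,
    "sign":           np.sign,
}

x = np.array([-1.0, 0.0, 1.0])        # hypothetical raw scores
print(transforms["logit"](x))         # elementwise sigmoid of the scores
```

A user-defined transform plays the same role: it maps the matrix of raw scores to a same-size matrix of transformed scores.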

Object Functions

compareHoldout — Compare accuracies of two classification models using new data
discardSupportVectors — Discard support vectors of linear SVM binary learners in ECOC model
edge — Classification edge for multiclass error-correcting output codes (ECOC) model
gather — Gather properties of Statistics and Machine Learning Toolbox object from GPU
incrementalLearner — Convert multiclass error-correcting output codes (ECOC) model to incremental learner
lime — Local interpretable model-agnostic explanations (LIME)
loss — Classification loss for multiclass error-correcting output codes (ECOC) model
margin — Classification margins for multiclass error-correcting output codes (ECOC) model
partialDependence — Compute partial dependence
plotPartialDependence — Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
predict — Classify observations using multiclass error-correcting output codes (ECOC) model
shapley — Shapley values
selectModels — Choose subset of multiclass ECOC models composed of binary ClassificationLinear learners
update — Update model parameters for code generation

Examples


Reduce the size of a full ECOC model by removing the training data. Full ECOC models (ClassificationECOC models) hold the training data. To improve efficiency, use a smaller classifier.

Load Fisher's iris data set. Specify the predictor data X, the response data Y, and the order of the classes in Y.

X = meas;
Y = categorical(species);
classOrder = unique(Y);

Train an ECOC model using SVM binary classifiers. Standardize the predictor data using an SVM template t, and specify the order of the classes. During training, the software uses default values for empty options in t.

t = templateSVM('Standardize',true);
Mdl = fitcecoc(X,Y,'Learners',t,'ClassNames',classOrder);

Mdl is a ClassificationECOC model.

Reduce the size of the ECOC model.

CompactMdl = compact(Mdl)
CompactMdl =
CompactClassificationECOC
ResponseName: 'Y'
CategoricalPredictors: []
ClassNames: [setosa    versicolor    virginica]
ScoreTransform: 'none'
BinaryLearners: {3x1 cell}
CodingMatrix: [3x3 double]


CompactMdl is a CompactClassificationECOC model. CompactMdl does not store all of the properties that Mdl stores. In particular, it does not store the training data.

Display the amount of memory each classifier uses.

whos('CompactMdl','Mdl')
Name            Size            Bytes  Class                                                  Attributes

CompactMdl      1x1             15116  classreg.learning.classif.CompactClassificationECOC
Mdl             1x1             28357  ClassificationECOC

The full ECOC model (Mdl) is approximately double the size of the compact ECOC model (CompactMdl).

To label new observations efficiently, you can remove Mdl from the MATLAB® Workspace, and then pass CompactMdl and new predictor values to predict.

Train and cross-validate an ECOC classifier using different binary learners and the one-versus-all coding design.

Load Fisher's iris data set. Specify the predictor data X and the response data Y. Determine the class names and the number of classes.

X = meas;
Y = species;
classNames = unique(species(~strcmp(species,''))) % Remove empty classes
classNames = 3x1 cell
{'setosa'    }
{'versicolor'}
{'virginica' }

K = numel(classNames) % Number of classes
K = 3

You can use classNames to specify the order of the classes during training.

For a one-versus-all coding design, this example has K = 3 binary learners. Specify templates for the binary learners such that:

• Binary learners 1 and 2 are naive Bayes classifiers. By default, each predictor is conditionally normally distributed given its label.

• Binary learner 3 is an SVM classifier. Specify to use the Gaussian kernel.

rng(1);  % For reproducibility
tNB = templateNaiveBayes();
tSVM = templateSVM('KernelFunction','gaussian');
tLearners = {tNB tNB tSVM};

tNB and tSVM are template objects for naive Bayes and SVM learning, respectively. The objects indicate which options to use during training. Most of their properties are empty, except those specified by name-value pair arguments. During training, the software fills in the empty properties with their default values.

Train and cross-validate an ECOC classifier using the binary learner templates and the one-versus-all coding design. Specify the order of the classes. By default, naive Bayes classifiers use posterior probabilities as scores, whereas SVM classifiers use distances from the decision boundary. Therefore, to aggregate the binary learners, you must specify to fit posterior probabilities.

CVMdl = fitcecoc(X,Y,'ClassNames',classNames,'CrossVal','on',...
'Learners',tLearners,'FitPosterior',true);

CVMdl is a ClassificationPartitionedECOC cross-validated model. By default, the software implements 10-fold cross-validation. The scores across the binary learners have the same form (that is, they are posterior probabilities), so the software can aggregate the results of the binary classifications properly.

Inspect one of the trained folds using dot notation.

CVMdl.Trained{1}
ans =
CompactClassificationECOC
ResponseName: 'Y'
CategoricalPredictors: []
ClassNames: {'setosa'  'versicolor'  'virginica'}
ScoreTransform: 'none'
BinaryLearners: {3x1 cell}
CodingMatrix: [3x3 double]


Each fold is a CompactClassificationECOC model trained on 90% of the data.

You can access the results of the binary learners using dot notation and cell indexing. Display the trained SVM classifier (the third binary learner) in the first fold.

CVMdl.Trained{1}.BinaryLearners{3}
ans =
CompactClassificationSVM
ResponseName: 'Y'
CategoricalPredictors: []
ClassNames: [-1 1]
ScoreTransform: '@(S)sigmoid(S,-4.016619e+00,-3.243499e-01)'
Alpha: [33x1 double]
Bias: -0.1345
KernelParameters: [1x1 struct]
SupportVectors: [33x4 double]
SupportVectorLabels: [33x1 double]


Estimate the generalization error.

genError = kfoldLoss(CVMdl)
genError = 0.0333

On average, the generalization error is approximately 3%.


References

[1] Fürnkranz, Johannes. “Round Robin Classification.” J. Mach. Learn. Res., Vol. 2, 2002, pp. 721–747.

[2] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recog. Lett., Vol. 30, Issue 3, 2009, pp. 285–297.

Version History

Introduced in R2014b