# loss

Find classification error for support vector machine (SVM) classifier

## Description

L = loss(SVMModel,TBL,ResponseVarName) returns the classification error (see Classification Loss), a scalar representing how well the trained support vector machine (SVM) classifier (SVMModel) classifies the predictor data in table TBL compared to the true class labels in TBL.ResponseVarName.

loss normalizes the class probabilities in TBL.ResponseVarName to the prior class probabilities that fitcsvm used for training, stored in the Prior property of SVMModel.

The classification loss (L) is a generalization or resubstitution quality measure. Its interpretation depends on the loss function and weighting scheme, but, in general, better classifiers yield smaller classification loss values.

L = loss(SVMModel,TBL,Y) returns the classification error for the predictor data in table TBL and the true class labels in Y.

loss normalizes the class probabilities in Y to the prior class probabilities that fitcsvm used for training, stored in the Prior property of SVMModel.


L = loss(SVMModel,X,Y) returns the classification error based on the predictor data in matrix X compared to the true class labels in Y.


L = loss(___,Name,Value) specifies options using one or more name-value pair arguments in addition to the input arguments in previous syntaxes. For example, you can specify the loss function and the classification weights.

## Examples

### Determine Test Sample Classification Error

Load the ionosphere data set.

load ionosphere
rng(1); % For reproducibility

Train an SVM classifier. Specify a 15% holdout sample for testing, standardize the data, and specify that 'g' is the positive class.

CVSVMModel = fitcsvm(X,Y,'Holdout',0.15,'ClassNames',{'b','g'},...
'Standardize',true);
CompactSVMModel = CVSVMModel.Trained{1}; % Extract the trained, compact classifier
testInds = test(CVSVMModel.Partition);   % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);

CVSVMModel is a ClassificationPartitionedModel classifier. It contains the property Trained, which is a 1-by-1 cell array holding a CompactClassificationSVM classifier that the software trained using the training set.

Determine how well the algorithm generalizes by estimating the test sample classification error.

L = loss(CompactSVMModel,XTest,YTest)
L = 0.0787

The SVM classifier misclassifies approximately 8% of the test sample.

### Determine Test Sample Hinge Loss

Load the ionosphere data set.

load ionosphere
rng(1); % For reproducibility

Train an SVM classifier. Specify a 15% holdout sample for testing, standardize the data, and specify that 'g' is the positive class.

CVSVMModel = fitcsvm(X,Y,'Holdout',0.15,'ClassNames',{'b','g'},...
'Standardize',true);
CompactSVMModel = CVSVMModel.Trained{1}; % Extract the trained, compact classifier
testInds = test(CVSVMModel.Partition);   % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);

CVSVMModel is a ClassificationPartitionedModel classifier. It contains the property Trained, which is a 1-by-1 cell array holding a CompactClassificationSVM classifier that the software trained using the training set.

Determine how well the algorithm generalizes by estimating the test sample hinge loss.

L = loss(CompactSVMModel,XTest,YTest,'LossFun','hinge')
L = 0.2998

The hinge loss is approximately 0.3. Classifiers with hinge losses close to 0 are preferred.

## Input Arguments


**SVMModel**

SVM classification model, specified as a ClassificationSVM model object or CompactClassificationSVM model object returned by fitcsvm or compact, respectively.

**TBL**

Sample data, specified as a table. Each row of TBL corresponds to one observation, and each column corresponds to one predictor variable. Optionally, TBL can contain additional columns for the response variable and observation weights. TBL must contain all of the predictors used to train SVMModel. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If TBL contains the response variable used to train SVMModel, then you do not need to specify ResponseVarName or Y.

If you trained SVMModel using sample data contained in a table, then the input data for loss must also be in a table.

If you set 'Standardize',true in fitcsvm when training SVMModel, then the software standardizes the columns of the predictor data using the corresponding means in SVMModel.Mu and the standard deviations in SVMModel.Sigma.

Data Types: table

**ResponseVarName**

Response variable name, specified as the name of a variable in TBL.

You must specify ResponseVarName as a character vector or string scalar. For example, if the response variable Y is stored as TBL.Y, then specify ResponseVarName as 'Y'. Otherwise, the software treats all columns of TBL, including Y, as predictors when training the model.

The response variable must be a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.

Data Types: char | string

**X**

Predictor data, specified as a numeric matrix.

Each row of X corresponds to one observation (also known as an instance or example), and each column corresponds to one variable (also known as a feature). The variables in the columns of X must be the same as the variables that trained the SVMModel classifier.

The length of Y and the number of rows in X must be equal.

If you set 'Standardize',true in fitcsvm to train SVMModel, then the software standardizes the columns of X using the corresponding means in SVMModel.Mu and the standard deviations in SVMModel.Sigma.

Data Types: double | single

**Y**

Class labels, specified as a categorical, character, or string array, logical or numeric vector, or cell array of character vectors. The data type of Y must be the same as the data type of SVMModel.ClassNames. (The software treats string arrays as cell arrays of character vectors.)

The length of Y must equal the number of rows in TBL or the number of rows in X.

### Name-Value Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: loss(SVMModel,TBL,Y,'Weights',W) weighs the observations in each row of TBL using the corresponding weight in each row of the variable W in TBL.

**LossFun**

Loss function, specified as the comma-separated pair consisting of 'LossFun' and a built-in loss function name or a function handle.

• This table lists the available loss functions. Specify one using its corresponding character vector or string scalar.

| Value | Description |
| --- | --- |
| 'binodeviance' | Binomial deviance |
| 'classiferror' | Misclassified rate in decimal |
| 'exponential' | Exponential loss |
| 'hinge' | Hinge loss |
| 'logit' | Logistic loss |
| 'mincost' | Minimal expected misclassification cost (for classification scores that are posterior probabilities) |

'mincost' is appropriate for classification scores that are posterior probabilities. You can specify to use posterior probabilities as classification scores for SVM models by setting 'FitPosterior',true when you cross-validate the model using fitcsvm.

• Specify your own function by using function handle notation.

Suppose that n is the number of observations in X, and K is the number of distinct classes (numel(SVMModel.ClassNames)) used to create the input model (SVMModel). Your function must have this signature:

lossvalue = lossfun(C,S,W,Cost)

where:

• The output argument lossvalue is a scalar.

• You choose the function name (lossfun).

• C is an n-by-K logical matrix with rows indicating the class to which the corresponding observation belongs. The column order corresponds to the class order in SVMModel.ClassNames.

Construct C by setting C(p,q) = 1 if observation p is in class q, for each row. Set all other elements of row p to 0.

• S is an n-by-K numeric matrix of classification scores, similar to the output of predict. The column order corresponds to the class order in SVMModel.ClassNames.

• W is an n-by-1 numeric vector of observation weights. If you pass W, the software normalizes the weights to sum to 1.

• Cost is a K-by-K numeric matrix of misclassification costs. For example, Cost = ones(K) - eye(K) specifies a cost of 0 for correct classification and 1 for misclassification.

Specify your function using 'LossFun',@lossfun.
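
For example, this sketch of a custom loss function (the name weightedClassifError is illustrative; save it in a file of the same name) reproduces the weighted misclassification rate:

function lossvalue = weightedClassifError(C,S,W,~)
% Custom loss: weighted misclassification rate.
[~,predIdx] = max(S,[],2);  % predicted class = column with the maximal score
[~,trueIdx] = max(C,[],2);  % true class = column of C containing the 1
lossvalue = sum(W .* (predIdx ~= trueIdx));  % W is already normalized to sum to 1
end

You can then pass the handle to loss, for example L = loss(SVMModel,X,Y,'LossFun',@weightedClassifError).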

For more details on loss functions, see Classification Loss.

Example: 'LossFun','binodeviance'

Data Types: char | string | function_handle

**Weights**

Observation weights, specified as the comma-separated pair consisting of 'Weights' and a numeric vector or the name of a variable in TBL. The software weighs the observations in each row of X or TBL with the corresponding weight in Weights.

If you specify Weights as a numeric vector, then the size of Weights must be equal to the number of rows in X or TBL.

If you specify Weights as the name of a variable in TBL, you must do so as a character vector or string scalar. For example, if the weights are stored as TBL.W, then specify Weights as 'W'. Otherwise, the software treats all columns of TBL, including TBL.W, as predictors.

If you do not specify your own loss function, then the software normalizes Weights so that, within each class, the weights sum to the prior probability of that class.

Example: 'Weights','W'

Data Types: single | double | char | string

## More About

### Classification Loss

Classification loss functions measure the predictive inaccuracy of classification models. When you compare the same type of loss among many models, a lower loss indicates a better predictive model.

Consider the following scenario.

• L is the weighted average classification loss.

• n is the sample size.

• For binary classification:

• yj is the observed class label. The software codes it as –1 or 1, indicating the negative or positive class (or the first or second class in the ClassNames property), respectively.

• f(Xj) is the positive-class classification score for observation (row) j of the predictor data X.

• mj = yjf(Xj) is the classification score for classifying observation j into the class corresponding to yj. Positive values of mj indicate correct classification and do not contribute much to the average loss. Negative values of mj indicate incorrect classification and contribute significantly to the average loss.

• For algorithms that support multiclass classification (that is, K ≥ 3):

• yj* is a vector of K – 1 zeros, with 1 in the position corresponding to the true, observed class yj. For example, if the true class of the second observation is the third class and K = 4, then y2* = [0 0 1 0]′. The order of the classes corresponds to the order in the ClassNames property of the input model.

• f(Xj) is the length K vector of class scores for observation j of the predictor data X. The order of the scores corresponds to the order of the classes in the ClassNames property of the input model.

• mj = yj*′f(Xj). Therefore, mj is the scalar classification score that the model predicts for the true, observed class.

• The weight for observation j is wj. The software normalizes the observation weights so that they sum to the corresponding prior class probability. The software also normalizes the prior probabilities so they sum to 1. Therefore,

$\sum _{j=1}^{n}{w}_{j}=1.$
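
To make the scenario concrete, the following sketch (assuming a trained binary classifier SVMModel with the default empirical priors and unit observation weights, its predictor data X, and labels Y stored as a cell array of character vectors) computes the hinge loss by hand from the scores that predict returns:

[~,score] = predict(SVMModel,X);             % column 2 holds the positive-class scores f(X_j)
y = 2*strcmp(Y,SVMModel.ClassNames{2}) - 1;  % code labels as -1 (first class) or 1 (second class)
m = y .* score(:,2);                         % margins m_j = y_j*f(X_j)
w = ones(numel(y),1)/numel(y);               % under these assumptions, the normalized weights are 1/n
Lhand    = sum(w .* max(0,1-m));             % weighted hinge loss, computed by hand
Lbuiltin = loss(SVMModel,X,Y,'LossFun','hinge');  % should agree with Lhand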

Given this scenario, the following table describes the supported loss functions that you can specify by using the 'LossFun' name-value pair argument.

| Loss Function | Value of LossFun | Equation |
| --- | --- | --- |
| Binomial deviance | 'binodeviance' | $L=\sum_{j=1}^{n}w_j\log\left\{1+\exp\left[-2m_j\right]\right\}$ |
| Misclassified rate in decimal | 'classiferror' | $L=\sum_{j=1}^{n}w_jI\left\{\hat{y}_j\ne y_j\right\}$, where $\hat{y}_j$ is the class label corresponding to the class with the maximal score, and $I\{\cdot\}$ is the indicator function |
| Cross-entropy loss | 'crossentropy' | Appropriate only for neural network models. The weighted cross-entropy loss is $L=-\sum_{j=1}^{n}\frac{\tilde{w}_j\log(m_j)}{Kn}$, where the weights $\tilde{w}_j$ are normalized to sum to $n$ instead of 1 |
| Exponential loss | 'exponential' | $L=\sum_{j=1}^{n}w_j\exp\left(-m_j\right)$ |
| Hinge loss | 'hinge' | $L=\sum_{j=1}^{n}w_j\max\left\{0,1-m_j\right\}$ |
| Logit loss | 'logit' | $L=\sum_{j=1}^{n}w_j\log\left(1+\exp\left(-m_j\right)\right)$ |
| Minimal expected misclassification cost | 'mincost' | Appropriate only if classification scores are posterior probabilities. See the procedure after this table |
| Quadratic loss | 'quadratic' | $L=\sum_{j=1}^{n}w_j\left(1-m_j\right)^2$ |

For 'mincost', the software computes the weighted minimal expected classification cost using this procedure for observations j = 1,...,n.

1. Estimate the expected misclassification cost of classifying the observation Xj into the class k:

$\gamma_{jk}=\left(f(X_j)^{\prime}C\right)_k.$

f(Xj) is the column vector of class posterior probabilities for binary and multiclass classification for the observation Xj. C is the cost matrix stored in the Cost property of the model.

2. For observation j, predict the class label corresponding to the minimal expected misclassification cost:

$\hat{y}_j=\underset{k=1,...,K}{\operatorname{argmin}}\gamma_{jk}.$

3. Using C, identify the cost incurred (cj) for making the prediction.

The weighted average of the minimal expected misclassification cost loss is

$L=\sum_{j=1}^{n}w_jc_j.$

If you use the default cost matrix (whose element value is 0 for correct classification and 1 for incorrect classification), then the 'mincost' loss is equivalent to the 'classiferror' loss.
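
A quick check of this equivalence (a sketch; Mdl is a placeholder for a model whose classification scores are posterior probabilities, for example one cross-validated with 'FitPosterior',true, and XTest and YTest are test data):

Lmincost  = loss(Mdl,XTest,YTest,'LossFun','mincost');
Lclasserr = loss(Mdl,XTest,YTest,'LossFun','classiferror');
% With the default cost matrix, the two values are equal.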

This figure compares the loss functions (except 'crossentropy' and 'mincost') over the score m for one observation. Some functions are normalized to pass through the point (0,1).
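
The following sketch (normalization choices and axis limits are illustrative) reproduces a comparison of this kind:

m = linspace(-3,3,200);
plot(m, log(1+exp(-2*m))/log(2), ...  % binomial deviance, scaled to pass through (0,1)
     m, double(m<0), ...              % misclassified rate
     m, exp(-m), ...                  % exponential loss
     m, max(0,1-m), ...               % hinge loss
     m, log(1+exp(-m))/log(2), ...    % logit loss, scaled to pass through (0,1)
     m, (1-m).^2)                     % quadratic loss
legend('binodeviance','classiferror','exponential','hinge','logit','quadratic')
xlabel('Score m'); ylabel('Loss')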

### Classification Score

The SVM classification score for classifying observation x is the signed distance from x to the decision boundary ranging from -∞ to +∞. A positive score for a class indicates that x is predicted to be in that class. A negative score indicates otherwise.

The positive class classification score $f\left(x\right)$ is the trained SVM classification function. $f\left(x\right)$ is also the numerical predicted response for x, or the score for predicting x into the positive class.

$f\left(x\right)=\sum _{j=1}^{n}{\alpha }_{j}{y}_{j}G\left({x}_{j},x\right)+b,$

where $\left({\alpha }_{1},...,{\alpha }_{n},b\right)$ are the estimated SVM parameters, $G\left({x}_{j},x\right)$ is the dot product in the predictor space between x and the support vectors, and the sum includes the training set observations. The negative class classification score for x, or the score for predicting x into the negative class, is –f(x).

If G(xj,x) = xj′x (the linear kernel), then the score function reduces to

$f(x)=(x/s)^{\prime}\beta+b,$

where s is the kernel scale and β is the vector of fitted linear coefficients.
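
For a linear kernel, you can evaluate this expression directly from the stored parameters (a sketch; Mdl is a placeholder for a linear SVM trained without 'Standardize',true, and Xnew is a placeholder for new predictor data):

f = (Xnew/Mdl.KernelParameters.Scale)*Mdl.Beta + Mdl.Bias;
% f(k) is the positive-class score for observation k; -f(k) is the
% corresponding negative-class score.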

For more details, see Understanding Support Vector Machines.
