# ocsvm

Fit one-class support vector machine (SVM) model for anomaly detection

## Syntax

```
Mdl = ocsvm(Tbl)
Mdl = ocsvm(X)
Mdl = ocsvm(___,Name=Value)
[Mdl,tf] = ocsvm(___)
[Mdl,tf,scores] = ocsvm(___)
```

## Description

Use the `ocsvm` function to fit a one-class support vector machine (SVM) model for outlier detection and novelty detection.

• Outlier detection (detecting anomalies in training data) — Use the output argument `tf` of `ocsvm` to identify anomalies in training data.

• Novelty detection (detecting anomalies in new data with uncontaminated training data) — Create a `OneClassSVM` object by passing uncontaminated training data (data with no outliers) to `ocsvm`. Detect anomalies in new data by passing the object and the new data to the object function `isanomaly`.

`Mdl = ocsvm(Tbl)` returns a `OneClassSVM` object (one-class SVM model object) for predictor data in the table `Tbl`.

`Mdl = ocsvm(X)` uses predictor data in the matrix `X`.

`Mdl = ocsvm(___,Name=Value)` specifies options using one or more name-value arguments in addition to any of the input argument combinations in the previous syntaxes. For example, `ContaminationFraction=0.1` instructs the function to process 10% of the training data as anomalies.

`[Mdl,tf] = ocsvm(___)` also returns the logical array `tf`, whose elements are `true` when an anomaly is detected in the corresponding row of `Tbl` or `X`.


`[Mdl,tf,scores] = ocsvm(___)` also returns an anomaly score in the range `(-Inf,Inf)` for each observation in `Tbl` or `X`. A negative score value with large magnitude indicates a normal observation, and a large positive value indicates an anomaly.

## Examples


### Detect Outliers

Detect outliers (anomalies in training data) by using the `ocsvm` function.

Load the sample data set `NYCHousing2015`.

`load NYCHousing2015`

The data set includes 10 variables with information on the sales of properties in New York City in 2015. Display a summary of the data set.

`summary(NYCHousing2015)`
```
Variables:

    BOROUGH: 91446×1 double
        Values:  Min 1,  Median 3,  Max 5

    NEIGHBORHOOD: 91446×1 cell array of character vectors

    BUILDINGCLASSCATEGORY: 91446×1 cell array of character vectors

    RESIDENTIALUNITS: 91446×1 double
        Values:  Min 0,  Median 1,  Max 8759

    COMMERCIALUNITS: 91446×1 double
        Values:  Min 0,  Median 0,  Max 612

    LANDSQUAREFEET: 91446×1 double
        Values:  Min 0,  Median 1700,  Max 2.9306e+07

    GROSSSQUAREFEET: 91446×1 double
        Values:  Min 0,  Median 1056,  Max 8.9422e+06

    YEARBUILT: 91446×1 double
        Values:  Min 0,  Median 1939,  Max 2016

    SALEPRICE: 91446×1 double
        Values:  Min 0,  Median 3.3333e+05,  Max 4.1111e+09

    SALEDATE: 91446×1 datetime
        Values:  Min 01-Jan-2015,  Median 09-Jul-2015,  Max 31-Dec-2015
```

The `SALEDATE` column is a `datetime` array, which is not supported by `ocsvm`. Create columns for the month and day numbers of the `datetime` values, and delete the `SALEDATE` column.

```matlab
[~,NYCHousing2015.MM,NYCHousing2015.DD] = ymd(NYCHousing2015.SALEDATE);
NYCHousing2015.SALEDATE = [];
```

Train a one-class SVM model for `NYCHousing2015`. Specify the fraction of anomalies in the training observations as 0.1, and specify the first variable (`BOROUGH`) as a categorical predictor. The first variable is a numeric array, so `ocsvm` assumes it is a continuous variable unless you specify it as a categorical variable. In addition, specify `StandardizeData` as `true` to standardize the input data, because the predictors have widely different scales.

```matlab
rng("default") % For reproducibility
[Mdl,tf,scores] = ocsvm(NYCHousing2015,ContaminationFraction=0.1, ...
    CategoricalPredictors=1,StandardizeData=true);
```

`Mdl` is a `OneClassSVM` object. `ocsvm` also returns the anomaly indicators (`tf`) and anomaly scores (`scores`) for the training data `NYCHousing2015`.

Plot a histogram of the score values. Create a vertical line at the score threshold corresponding to the specified fraction.

```matlab
histogram(scores)
xline(Mdl.ScoreThreshold,"r-",["Threshold" Mdl.ScoreThreshold])
```

If you want to identify anomalies with a different contamination fraction (for example, 0.01), you can train a new one-class SVM model.

```matlab
rng("default") % For reproducibility
[newMdl,newtf,scores] = ocsvm(NYCHousing2015, ...
    ContaminationFraction=0.01,CategoricalPredictors=1);
```

If you want to identify anomalies with a different score threshold value (for example, 0.65), you can pass the `OneClassSVM` object, the training data, and a new threshold value to the `isanomaly` function.

```matlab
[newtf,scores] = isanomaly(Mdl,NYCHousing2015,ScoreThreshold=0.65);
```

Note that changing the contamination fraction or score threshold changes the anomaly indicators only, and does not affect the anomaly scores. Therefore, if you do not want to compute the anomaly scores again by using `ocsvm` or `isanomaly`, you can obtain a new anomaly indicator with the existing score values.

Change the fraction of anomalies in the training data to 0.01.

`newContaminationFraction = 0.01;`

Find a new score threshold by using the `quantile` function.

`newScoreThreshold = quantile(scores,1-newContaminationFraction)`
```
newScoreThreshold = 0.0480
```

Obtain a new anomaly indicator.

`newtf = scores > newScoreThreshold;`

### Detect Novelties

Create a `OneClassSVM` object for uncontaminated training observations by using the `ocsvm` function. Then detect novelties (anomalies in new data) by passing the object and the new data to the object function `isanomaly`.

Load the 1994 census data stored in `census1994.mat`. The data set consists of demographic data from the US Census Bureau to predict whether an individual makes over $50,000 per year.

`load census1994`

`census1994` contains the training data set `adultdata` and the test data set `adulttest`.

`ocsvm` does not use observations with missing values. Remove missing values in the data sets to reduce memory consumption and speed up training.

```matlab
adultdata = rmmissing(adultdata);
adulttest = rmmissing(adulttest);
```

Train a one-class SVM for `adultdata`. Assume that `adultdata` does not contain outliers. Specify `StandardizeData` as `true` to standardize the input data, and set `KernelScale` to `"auto"` to let the function select an appropriate kernel scale parameter using a heuristic procedure.

```matlab
rng("default") % For reproducibility
[Mdl,~,s] = ocsvm(adultdata,StandardizeData=true,KernelScale="auto");
```

`Mdl` is a `OneClassSVM` object. If you do not specify the `ContaminationFraction` name-value argument as a value greater than 0, then `ocsvm` treats all training observations as normal observations. The function sets the score threshold to the maximum score value. Display the threshold value.

`Mdl.ScoreThreshold`
```
ans = 0.0322
```

Find anomalies in `adulttest` by using the trained one-class SVM model.

`[tf_test,s_test] = isanomaly(Mdl,adulttest);`

The `isanomaly` function returns the anomaly indicators `tf_test` and scores `s_test` for `adulttest`. By default, `isanomaly` identifies observations with scores above the threshold (`Mdl.ScoreThreshold`) as anomalies.

Create histograms for the anomaly scores `s` and `s_test`. Create a vertical line at the threshold of the anomaly scores.

```matlab
h1 = histogram(s,NumBins=50,Normalization="probability");
hold on
h2 = histogram(s_test,h1.BinEdges,Normalization="probability");
xline(Mdl.ScoreThreshold,"r-",join(["Threshold" Mdl.ScoreThreshold]))
h1.Parent.YScale = 'log';
h2.Parent.YScale = 'log';
legend("Training Data","Test Data",Location="north")
hold off
```

Display the observation index of the anomalies in the test data.

`find(tf_test)`
```
ans = 0x1 empty double column vector
```

The anomaly score distribution of the test data is similar to that of the training data, so `isanomaly` does not detect any anomalies in the test data with the default threshold value. You can specify a different threshold value by using the `ScoreThreshold` name-value argument. For an example, see Specify Anomaly Score Threshold.

## Input Arguments


Predictor data, specified as a table. Each row of `Tbl` corresponds to one observation, and each column corresponds to one predictor variable. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

To use a subset of the variables in `Tbl`, specify the variables by using the `PredictorNames` name-value argument.

Data Types: `table`
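As a minimal sketch of selecting a subset of table variables (the variable names and data here are illustrative, not from the shipped examples):

```matlab
% Sketch: fit using only two of three table variables.
rng(0)
Tbl = array2table(randn(50,3),VariableNames=["a" "b" "c"]);
Mdl = ocsvm(Tbl,PredictorNames=["a" "c"]);   % variable "b" is ignored
```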

Predictor data, specified as a numeric matrix. Each row of `X` corresponds to one observation, and each column corresponds to one predictor variable.

You can use the `PredictorNames` name-value argument to assign names to the predictor variables in `X`.

Data Types: `single` | `double`

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: `NumExpansionDimensions=2^15,KernelScale="auto"` maps the predictor data to the `2^15`-dimensional space using feature expansion with a kernel scale parameter selected by a heuristic procedure.

#### Anomaly Detection Options

Fraction of anomalies in the training data, specified as a numeric scalar in the range `[0,1]`.

• If the `ContaminationFraction` value is 0 (default), then `ocsvm` treats all training observations as normal observations, and sets the score threshold (`ScoreThreshold` property value of `Mdl`) to the maximum value of `scores`.

• If the `ContaminationFraction` value is in the range (0,1], then `ocsvm` determines the threshold value so that the function detects the specified fraction of training observations as anomalies.

Example: `ContaminationFraction=0.1`

Data Types: `single` | `double`
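As a minimal sketch of this behavior with random illustrative data, the fraction of training observations flagged in `tf` tracks the specified `ContaminationFraction` value:

```matlab
% Sketch: detected anomaly fraction follows ContaminationFraction.
rng(0)
X = randn(1000,3);
[Mdl,tf] = ocsvm(X,ContaminationFraction=0.05);
mean(tf)   % close to 0.05
```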

#### Kernel Classification Options

Maximum amount of allocated memory (in megabytes), specified as a positive scalar.

If `ocsvm` requires more memory than the value of `BlockSize` to hold the transformed predictor data, then the software uses a block-wise strategy. For details about the block-wise strategy, see Algorithms.

Example: `BlockSize=1e4`

Data Types: `single` | `double`

Kernel scale parameter, specified as `"auto"` or a positive scalar. The software obtains a random basis for random feature expansion by using the kernel scale parameter. For details, see Random Feature Expansion.

If you specify `"auto"`, then the software selects an appropriate kernel scale parameter using a heuristic procedure. This heuristic procedure uses subsampling, so estimates can vary from one call to another. Therefore, to reproduce results, set a random number seed by using `rng` before training.

Example: `KernelScale="auto"`

Data Types: `char` | `string` | `single` | `double`
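Because the `"auto"` heuristic subsamples the data, a minimal sketch of reproducing its result is to seed the generator before training (the data here is illustrative):

```matlab
% Sketch: seed the generator so the "auto" kernel scale heuristic,
% which subsamples the data, selects the same scale on every call.
rng("default")
X = randn(300,4);
Mdl = ocsvm(X,KernelScale="auto");
```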

Regularization term strength, specified as `"auto"` or a nonnegative scalar.

If you specify `"auto"`, then the software selects an appropriate regularization parameter using a heuristic procedure.

Example: `Lambda=0.01`

Data Types: `char` | `string` | `single` | `double`

Number of dimensions of the expanded space, specified as `"auto"` or a positive integer.

If you specify `"auto"`, then the software selects an appropriate number of dimensions using a heuristic procedure.

Example: `NumExpansionDimensions=2^15`

Data Types: `char` | `string` | `single` | `double`

Random number stream for reproducibility of data transformation, specified as a random stream object. For details, see Random Feature Expansion.

Use `RandomStream` to reproduce the random basis functions used by `ocsvm` to transform the predictor data to a high-dimensional space. For details, see Managing the Global Stream Using RandStream and Creating and Controlling a Random Number Stream.

Example: `RandomStream=RandStream("mlfg6331_64")`

#### Other Classification Options

List of categorical predictors, specified as one of the values in this table.

| Value | Description |
| --- | --- |
| Vector of positive integers | Each entry in the vector is an index value indicating that the corresponding predictor is categorical. The index values are between 1 and `p`, where `p` is the number of predictors used to train the model. If `ocsvm` uses a subset of input variables as predictors, then the function indexes the predictors using only the subset. The `CategoricalPredictors` values do not count any variables that the function does not use. |
| Logical vector | A `true` entry means that the corresponding predictor is categorical. The length of the vector is `p`. |
| Character matrix | Each row of the matrix is the name of a predictor variable. The names must match the entries in `PredictorNames`. Pad the names with extra blanks so each row of the character matrix has the same length. |
| String array or cell array of character vectors | Each element in the array is the name of a predictor variable. The names must match the entries in `PredictorNames`. |
| `"all"` | All predictors are categorical. |

By default, if the predictor data is in a table (`Tbl`), `ocsvm` assumes that a variable is categorical if it is a logical vector, categorical vector, character array, string array, or cell array of character vectors. If the predictor data is a matrix (`X`), `ocsvm` assumes that all predictors are continuous. To identify any other predictors as categorical predictors, specify them by using the `CategoricalPredictors` name-value argument.

For the identified categorical predictors, `ocsvm` creates dummy variables using two different schemes, depending on whether a categorical variable is unordered or ordered. For an unordered categorical variable, `ocsvm` creates one dummy variable for each level of the categorical variable. For an ordered categorical variable, `ocsvm` creates one less dummy variable than the number of categories. For details, see Automatic Creation of Dummy Variables.

Example: `CategoricalPredictors="all"`

Data Types: `single` | `double` | `logical` | `char` | `string` | `cell`
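As a minimal sketch of flagging an integer-coded variable as categorical by its column index (the data here is illustrative):

```matlab
% Sketch: mark the first table variable as categorical by index.
rng(0)
Tbl = table(randi(3,100,1),randn(100,1));   % Var1 holds category codes 1-3
Mdl = ocsvm(Tbl,CategoricalPredictors=1);   % dummy variables for Var1 levels
```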

Predictor variable names, specified as a string array of unique names or cell array of unique character vectors. The functionality of `PredictorNames` depends on how you supply the predictor data.

• If you supply `Tbl`, then you can use `PredictorNames` to specify which predictor variables to use. That is, `ocsvm` uses only the predictor variables in `PredictorNames`.

• `PredictorNames` must be a subset of `Tbl.Properties.VariableNames`.

• By default, `PredictorNames` contains the names of all predictor variables in `Tbl`.

• If you supply `X`, then you can use `PredictorNames` to assign names to the predictor variables in `X`.

• The order of the names in `PredictorNames` must correspond to the column order of `X`. That is, `PredictorNames{1}` is the name of `X(:,1)`, `PredictorNames{2}` is the name of `X(:,2)`, and so on. Also, `size(X,2)` and `numel(PredictorNames)` must be equal.

• By default, `PredictorNames` is `{'x1','x2',...}`.

Example: `PredictorNames=["SepalLength" "SepalWidth" "PetalLength" "PetalWidth"]`

Data Types: `string` | `cell`

Flag to standardize the predictor data, specified as a logical `1` (`true`) or `0` (`false`).

If you set `StandardizeData=true`, the `ocsvm` function centers and scales each predictor variable (`X` or `Tbl`) by the corresponding column mean and standard deviation. The function does not standardize the data contained in the dummy variable columns generated for categorical predictors.

Example: `StandardizeData=true`

Data Types: `logical`
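As a minimal sketch with illustrative data whose columns differ by orders of magnitude:

```matlab
% Sketch: standardize predictors with very different scales before fitting.
rng(0)
X = [randn(200,1), 1e4*randn(200,1)];   % second column has a much larger scale
Mdl = ocsvm(X,StandardizeData=true);    % each column is centered and scaled
```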

Verbosity level, specified as `0` or `1`. `Verbose` controls the display of diagnostic information at the command line.

| Value | Description |
| --- | --- |
| `0` | `ocsvm` does not display diagnostic information. |
| `1` | `ocsvm` displays the value of the objective function, gradient magnitude, and other diagnostic information. |

Example: `Verbose=1`

Data Types: `single` | `double`

#### Convergence Options

Relative tolerance on the linear coefficients and the bias term (intercept), specified as a nonnegative scalar.

Let $B_t = [\beta_t' \; b_t]$, that is, the vector of the coefficients and the bias term at optimization iteration $t$. If $\left\| \frac{B_t - B_{t-1}}{B_t} \right\|_2 < \text{BetaTolerance}$, then optimization terminates.

If you also specify `GradientTolerance`, then optimization terminates when the software satisfies either stopping criterion.

Example: `BetaTolerance=1e-6`

Data Types: `single` | `double`

Absolute gradient tolerance, specified as a nonnegative scalar.

Let $\nabla \mathcal{L}_t$ be the gradient vector of the objective function with respect to the coefficients and bias term at optimization iteration $t$. If $\left\| \nabla \mathcal{L}_t \right\|_\infty = \max \left| \nabla \mathcal{L}_t \right| < \text{GradientTolerance}$, then optimization terminates.

If you also specify `BetaTolerance`, then optimization terminates when the software satisfies either stopping criterion.

Example: `GradientTolerance=1e-5`

Data Types: `single` | `double`

Maximum number of optimization iterations, specified as a positive integer.

The default value is 1000 if the transformed data fits in memory, as specified by the `BlockSize` name-value argument. Otherwise, the default value is 100.

Example: `IterationLimit=500`

Data Types: `single` | `double`

## Output Arguments


Trained one-class SVM model, returned as a `OneClassSVM` object.

You can use the object function `isanomaly` with `Mdl` to find anomalies in new data.

Anomaly indicators, returned as a logical column vector. An element of `tf` is `true` when the observation in the corresponding row of `Tbl` or `X` is an anomaly, and `false` otherwise. `tf` has the same length as `Tbl` or `X`.

`ocsvm` identifies observations with `scores` above the threshold (`ScoreThreshold` property value of `Mdl`) as anomalies. The function determines the threshold value to detect the specified fraction (`ContaminationFraction` name-value argument) of training observations as anomalies.
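As a minimal sketch of this relationship with random illustrative data, the indicators can be recovered by comparing the scores against the threshold:

```matlab
% Sketch: anomaly indicators are the scores compared against the threshold.
rng(0)
X = randn(500,2);
[Mdl,tf,scores] = ocsvm(X,ContaminationFraction=0.1);
tfFromScores = scores > Mdl.ScoreThreshold;   % should match tf
```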

Anomaly scores, returned as a numeric column vector whose values are between `–Inf` and `Inf`. `scores` has the same length as `Tbl` or `X`, and each element of `scores` contains an anomaly score for the observation in the corresponding row of `Tbl` or `X`. A negative score value with large magnitude indicates a normal observation, and a large positive value indicates an anomaly.

## More About

### One-Class SVM

One-class SVM, or unsupervised SVM, is an algorithm used for anomaly detection. The algorithm tries to separate data from the origin in the transformed high-dimensional predictor space. `ocsvm` finds the decision boundary based on the primal form of SVM with the Gaussian kernel approximation method.

### Random Feature Expansion

Random feature expansion, such as Random Kitchen Sinks[1] or Fastfood[2], is a scheme to approximate Gaussian kernels of the kernel classification algorithm to use for big data in a computationally efficient way. Random feature expansion is more practical for big data applications that have large training sets, but can also be applied to smaller data sets that fit in memory.

The kernel classification algorithm searches for an optimal hyperplane that separates the data into two classes after mapping features into a high-dimensional space. Nonlinear features that are not linearly separable in a low-dimensional space can be separable in the expanded high-dimensional space. All the calculations for hyperplane classification use only dot products. You can obtain a nonlinear classification model by replacing the dot product $x_1 x_2'$ with the nonlinear kernel function $G(x_1,x_2) = \langle \varphi(x_1), \varphi(x_2) \rangle$, where $x_i$ is the $i$th observation (row vector) and $\varphi(x_i)$ is a transformation that maps $x_i$ to a high-dimensional space (called the "kernel trick"). However, evaluating $G(x_1,x_2)$ (Gram matrix) for each pair of observations is computationally expensive for a large data set (large $n$).

The random feature expansion scheme finds a random transformation so that its dot product approximates the Gaussian kernel. That is,

$$G(x_1,x_2) = \langle \varphi(x_1), \varphi(x_2) \rangle \approx T(x_1)T(x_2)',$$

where $T(x)$ maps $x$ in $\mathbb{R}^p$ to a high-dimensional space ($\mathbb{R}^m$). The Random Kitchen Sinks scheme uses the random transformation

$$T(x) = m^{-1/2}\exp(iZx')',$$

where $Z \in \mathbb{R}^{m \times p}$ is a sample drawn from $N(0,\sigma^{-2})$ and $\sigma$ is a kernel scale. This scheme requires $O(mp)$ computation and storage.

The Fastfood scheme introduces another random basis $V$ instead of $Z$ using Hadamard matrices combined with Gaussian scaling matrices. This random basis reduces the computation cost to $O(m \log p)$ and reduces storage to $O(m)$.

You can specify values for $m$ and $\sigma$ using the `NumExpansionDimensions` and `KernelScale` name-value arguments of `ocsvm`, respectively.

The `ocsvm` function uses the Fastfood scheme for random feature expansion, and uses linear classification to train a one-class Gaussian kernel classification model.
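As a sketch of the approximation above, the widely used real-valued variant of Random Kitchen Sinks (cosine features with random phases; the dimensions, kernel scale, and data here are illustrative) approximates the Gaussian kernel by a plain dot product:

```matlab
% Sketch: real-valued random features approximating a Gaussian kernel
% exp(-||x1-x2||^2/(2*sigma^2)).
rng(1)
p = 5; m = 2000; sigma = 2;            % predictor dim, expansion dim, kernel scale
x1 = randn(1,p); x2 = randn(1,p);
Z = randn(m,p)/sigma;                  % rows drawn from N(0,sigma^-2)
b = 2*pi*rand(m,1);                    % random phases
T = @(x) sqrt(2/m)*cos(Z*x' + b);      % T(x) is an m-by-1 feature vector
G = exp(-norm(x1-x2)^2/(2*sigma^2));   % exact kernel value
Gapprox = T(x1)'*T(x2);                % dot product approximates G
```

As $m$ grows, `Gapprox` concentrates around `G`, which is why a modest expansion dimension often suffices in practice.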

## Algorithms

• `ocsvm` considers `NaN`, `''` (empty character vector), `""` (empty string), `<missing>`, and `<undefined>` values in `Tbl` and `NaN` values in `X` to be missing values.

• `ocsvm` removes observations with all missing values.

• `ocsvm` does not use observations with some missing values. The function assigns an anomaly score of `NaN` and an anomaly indicator of `false` (logical 0) to these observations.

• `ocsvm` minimizes the regularized objective function using a Limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) solver with ridge (L2) regularization. If `ocsvm` requires more memory than the value of `BlockSize` to hold the transformed predictor data, then the function uses a block-wise strategy.

• When `ocsvm` uses a block-wise strategy, it implements LBFGS by distributing the calculation of the loss and gradient among different parts of the data at each iteration. Also, `ocsvm` refines the initial estimates of the linear coefficients and the bias term by fitting the model locally to parts of the data and combining the coefficients by averaging. If you specify `Verbose=1`, then `ocsvm` displays diagnostic information for each data pass.

• When `ocsvm` does not use a block-wise strategy, the initial estimates are zeros. If you specify `Verbose=1`, then `ocsvm` displays diagnostic information for each iteration.
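The missing-value behavior described above can be sketched as follows (the data is illustrative):

```matlab
% Sketch: an observation containing NaN gets a NaN score and a false indicator.
rng(0)
X = [randn(20,2); NaN 1];   % last observation has a missing value
[Mdl,tf,scores] = ocsvm(X);
scores(end)                 % NaN
tf(end)                     % logical 0 (false)
```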

## Alternative Functionality

You can also use the `fitcsvm` function to train a one-class SVM model for anomaly detection.

• The `ocsvm` function provides a simpler and preferred workflow for anomaly detection than the `fitcsvm` function.

• The `ocsvm` function returns a `OneClassSVM` object, anomaly indicators, and anomaly scores. You can use the outputs to identify anomalies in training data. To find anomalies in new data, you can use the `isanomaly` object function of `OneClassSVM`. The `isanomaly` function returns anomaly indicators and scores for the new data.

• The `fitcsvm` function supports both one-class and binary classification. If the class label variable contains only one class (for example, a vector of ones), `fitcsvm` trains a model for one-class classification and returns a `ClassificationSVM` object. To identify anomalies, you must first compute anomaly scores by using the `resubPredict` or `predict` object function of `ClassificationSVM`, and then identify anomalies by finding observations that have negative scores.

• Note that a large positive anomaly score indicates an anomaly in `ocsvm`, whereas a negative score indicates an anomaly in `predict` of `ClassificationSVM`.

• The `ocsvm` function finds the decision boundary based on the primal form of SVM, whereas the `fitcsvm` function finds the decision boundary based on the dual form of SVM.

• The solver in `ocsvm` is computationally less expensive than the solver in `fitcsvm` for a large data set (large n). Unlike the solvers in `fitcsvm`, which require computation of the n-by-n Gram matrix, the solver in `ocsvm` only needs to form a matrix of size n-by-m. Here, m is the number of dimensions of the expanded space, which is typically much less than n for big data.
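The `fitcsvm` workflow described above can be sketched as follows (random illustrative data):

```matlab
% Sketch: one-class anomaly detection through fitcsvm instead of ocsvm.
rng(0)
X = randn(200,2);
MdlSVM = fitcsvm(X,ones(200,1));   % single-class labels -> one-class SVM
[~,score] = resubPredict(MdlSVM);
tfSVM = score < 0;                 % negative score indicates an anomaly
```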

## References

[1] Rahimi, A., and B. Recht. “Random Features for Large-Scale Kernel Machines.” Advances in Neural Information Processing Systems. Vol. 20, 2008, pp. 1177–1184.

[2] Le, Q., T. Sarlós, and A. Smola. “Fastfood — Approximating Kernel Expansions in Loglinear Time.” Proceedings of the 30th International Conference on Machine Learning. Vol. 28, No. 3, 2013, pp. 244–252.

## Version History

Introduced in R2022b