isInNetworkDistribution

Determine whether data is within the distribution of the network

Since R2023a

    Description

    tf = isInNetworkDistribution(net,X) returns a logical array that indicates which observations in X are in-distribution (ID) and which observations are out-of-distribution (OOD). If an observation is ID, then the corresponding element of tf is 1 (true). Otherwise, the corresponding element of tf is 0 (false).

    The function computes the distribution confidence score for each observation using the baseline method. For more information, see Softmax-Based Methods. The function classifies any observation with a score less than or equal to the threshold as OOD. To use the default threshold value, use this syntax.

    To set the threshold, use the Threshold name-value argument. Alternatively, use the networkDistributionDiscriminator function to create a discriminator object that automatically finds an optimal threshold, and use that object as the first input argument instead of net. You can also use the discriminator object to specify a different method for computing the distribution confidence scores.

    tf = isInNetworkDistribution(discriminator,X) determines which observations in X are ID and which observations are OOD using discriminator. To create a discriminator object, use the networkDistributionDiscriminator function. This syntax uses the threshold stored in the Threshold property of discriminator. Use a discriminator when you want to specify additional options for computing the distribution confidence scores or to automatically find a suitable threshold. For example, when you create a discriminator, you can specify whether to use a target true positive rate or a target false positive rate to pick the threshold. For more information, see networkDistributionDiscriminator.

    tf = isInNetworkDistribution(net,X1,...,XN) determines whether the data is in distribution for networks with multiple inputs using the specified in-memory data.

    tf = isInNetworkDistribution(discriminator,X1,...,XN) determines whether the data is in distribution for a discriminator constructed with a network with multiple inputs using the specified in-memory data.

    tf = isInNetworkDistribution(___,Name=Value) sets the Threshold and VerbosityLevel options using one or more name-value arguments in addition to the input arguments in previous syntaxes.

    Examples

    Determine Whether Data Is In-Distribution

    Load a pretrained classification network.

    load("digitsClassificationMLPNetwork.mat")

    Load data. Convert the data to a dlarray object.

    X = digitTrain4DArrayData;
    X = dlarray(X,"SSCB");

    Determine if the data is ID.

    tf = isInNetworkDistribution(net,X);

    Find the proportion of observations that the function classifies as OOD.

    oodProportion = sum(1-tf)/numel(tf)
    oodProportion = 0.0026
    

    Specify Distribution Threshold

    Load a pretrained classification network.

    load("digitsClassificationMLPNetwork.mat")

    Load data and convert the data to a dlarray object.

    X = digitTrain4DArrayData;
    X = dlarray(X,"SSCB");

    Determine if the data is ID using a threshold of 0.9.

    tf = isInNetworkDistribution(net,X,Threshold=0.9);

    Find the proportion of observations that the function classifies as OOD.

    oodProportion = sum(1-tf)/numel(tf)
    oodProportion = 0
    

    Determine In-Distribution Data Using Distribution Discriminator

    Load a pretrained classification network.

    load("digitsClassificationMLPNetwork.mat")

    Load ID data. Convert the data to a dlarray object.

    X = digitTrain4DArrayData;
    X = dlarray(X,"SSCB");

    Create a discriminator using the networkDistributionDiscriminator function. Set the method to "odin" and the true positive goal to 0.975. The software finds the threshold that satisfies the true positive goal.

    method = "odin";
    discriminator = networkDistributionDiscriminator(net,X,[],method, ...
        TruePositiveGoal=0.975);

    Determine if the data is ID.

    tf = isInNetworkDistribution(discriminator,X);

    Find the true positive rate.

    truePositives = sum(tf);
    falseNegatives = sum(1-tf); 
    truePositiveRate = truePositives/(truePositives + falseNegatives)
    truePositiveRate = 0.9750
    

    Input Arguments

    net — Neural network

    Neural network, specified as a dlnetwork object with a single softmax output.

    The software uses the baseline method to compute the distribution confidence scores. To use another method, such as ODIN or energy, specify discriminator as the first input argument. For more information about methods for computing distribution confidence scores, see Distribution Confidence Scores.

    For networks without a single softmax layer, create a discriminator object using the networkDistributionDiscriminator function with method set to "hbos" and use this object as the first input argument instead.
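
    For example, here is a minimal sketch of this density-based workflow, assuming a network net and formatted ID data X. The layer name is a hypothetical placeholder, not a value from this page.

    % Sketch: density-based (HBOS) discriminator for a network without a
    % single softmax layer. The layer name is a hypothetical placeholder.
    discriminator = networkDistributionDiscriminator(net,X,[],"hbos", ...
        LayerNames="relu_1");
    tf = isInNetworkDistribution(discriminator,X);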

    X — Input data

    Input data, specified as a formatted dlarray object or a minibatchqueue object that returns a formatted dlarray. For more information about dlarray formats, see the fmt input argument of dlarray.

    Use a minibatchqueue object for a network with multiple inputs when the data does not fit in memory. If you have data that fits in memory and does not require additional processing, then it is usually easiest to specify the input data as in-memory arrays. For more information, see X1,...,XN.
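
    For example, here is a minimal sketch of streaming data through a minibatchqueue object. The datastore location, mini-batch size, and data format are assumptions for illustration.

    % Sketch: stream data that does not fit in memory from a datastore.
    % The folder name, mini-batch size, and format are placeholder values.
    ds = imageDatastore("path/to/images");
    mbq = minibatchqueue(ds, ...
        MiniBatchSize=128, ...
        MiniBatchFormat="SSCB");
    tf = isInNetworkDistribution(net,mbq);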

    X1,...,XN — In-memory data for multi-input network

    In-memory data for a multi-input network, specified as formatted dlarray objects. The input Xi corresponds to the network input net.InputNames(i) if net is the first input argument, or discriminator.Network.InputNames(i) if discriminator is the first input argument.

    For multi-input networks, if you have data that fits in memory and does not require additional processing, then it is usually easiest to specify the input data as in-memory arrays. If you want to make predictions with data stored on disk, then specify the input data as a minibatchqueue object instead.
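
    As an illustration, here is a minimal sketch of a multi-input call with in-memory data. The network net2 and both inputs are hypothetical; each input format must match the corresponding network input.

    % Sketch: multi-input call with in-memory data. net2, X1, and X2 are
    % hypothetical. Each Xi must be a formatted dlarray matching
    % net2.InputNames(i).
    X1 = dlarray(rand(28,28,1,64),"SSCB");   % first network input
    X2 = dlarray(rand(10,64),"CB");          % second network input
    tf = isInNetworkDistribution(net2,X1,X2);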

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

    Example: Threshold=0.9,VerbosityLevel="off"

    Threshold — Distribution threshold

    Distribution threshold, specified as a scalar in the range [0, 1]. The software uses this value to separate the ID and OOD data.

    Dependency

    You can specify this argument only when the first input argument is net. If the first argument is discriminator, then the software uses the threshold stored in the Threshold property of discriminator. For more information, see networkDistributionDiscriminator.

    VerbosityLevel — Verbosity level

    Verbosity level of the Command Window output, specified as one of these values (see the example after this list):

    • "off" — Do not display progress information.

    • "summary" — Display a summary of the progress information.

    • "detailed" — Display detailed information about the progress. This option prints the mini-batch progress. If you do not specify the input data as a minibatchqueue object, then the "detailed" and "summary" options print the same information.

    More About

    In-Distribution and Out-of-Distribution Data

    In-distribution (ID) data refers to any data that you use to construct and train your model. Additionally, any data that is sufficiently similar to the training data is also said to be ID.

    Out-of-distribution (OOD) data refers to data that is sufficiently different from the training data, for example, data collected in a different way, at a different time, under different conditions, or for a different task than the data on which the model was originally trained. Models can receive OOD data when you deploy them in an environment other than the one in which you train them. For example, suppose you train a model on clear X-ray images but then deploy the model on images taken with a lower-quality camera.

    OOD data detection is important for assigning confidence to the predictions of a network. For more information, see OOD Data Detection.

    OOD Data Detection

    OOD data detection is a technique for assessing whether the inputs to a network are OOD. For methods that you apply after training, you can construct a discriminator that acts as an additional output of the trained network and classifies an observation as ID or OOD.

    The discriminator works by finding a distribution confidence score for an input. You can then specify a threshold. If the score is less than or equal to that threshold, then the input is OOD. Two groups of metrics for computing distribution confidence scores are softmax-based and density-based methods. Softmax-based methods use the softmax layer to compute the scores. Density-based methods use the outputs of layers that you specify to compute the scores. For more information about how to compute distribution confidence scores, see Distribution Confidence Scores.
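
    For instance, here is a minimal sketch of this score-and-threshold logic using the distributionScores function, assuming a discriminator and formatted input data X such as those in the examples above.

    % Sketch: classify observations by comparing distribution confidence
    % scores to the stored threshold. An observation is ID when its score
    % exceeds the threshold.
    scores = distributionScores(discriminator,X);
    tf = scores > discriminator.Threshold;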

    These figure descriptions show how a discriminator acts as an additional output of a trained neural network.

    Example of Softmax-Based Discriminator — Diagram of a deep neural network with an additional discriminator output. The discriminator takes the softmax values and computes the distribution confidence score. If the score is greater than the threshold, then the input is predicted as in-distribution; otherwise, the input is predicted as out-of-distribution. For more information, see Softmax-Based Methods.

    Example of Density-Based Discriminator — Diagram of a deep neural network with an additional discriminator output. The discriminator takes the values from specified network layers and computes the distribution confidence score. If the score is greater than the threshold, then the input is predicted as in-distribution; otherwise, the input is predicted as out-of-distribution. For more information, see Density-Based Methods.

    Distribution Confidence Scores

    Distribution confidence scores are metrics for classifying data as ID or OOD. If an input has a score less than or equal to a threshold value, then you can classify that input as OOD. You can use different techniques for finding the distribution confidence scores.

    Softmax-Based Methods

    ID data usually corresponds to a higher softmax output than OOD data [1]. Therefore, you can define distribution confidence scores as functions of the softmax scores. These methods are called softmax-based methods, and they work only for classification networks with a single softmax output.

    Let $a_i(X)$ be the input to the softmax layer for class $i$. The output of the softmax layer for class $i$ is given by this equation:

    $$P_i(X;T) = \frac{e^{a_i(X)/T}}{\sum_{j=1}^{C} e^{a_j(X)/T}},$$

    where $C$ is the number of classes and $T$ is the temperature scaling. When the network predicts the class label of $X$, the temperature is set to $T = 1$.

    The baseline, ODIN, and energy methods each define distribution confidence scores as functions of the softmax input; a minimal sketch after this list shows how to compute each score.

    • The baseline method [1] uses the maximum of the unscaled softmax scores:

      $$\mathrm{confidence}(X) = \max_i P_i(X;1).$$

    • The out-of-distribution detector for neural networks (ODIN) method [2] uses the maximum of the temperature-scaled softmax scores:

      $$\mathrm{confidence}(X;T) = \max_i P_i(X;T), \quad T > 0.$$

    • The energy method [3] uses the scaled denominator of the softmax output:

      $$\mathrm{energy}(X;T) = -T \log\left(\sum_{j=1}^{C} e^{a_j(X)/T}\right), \quad T > 0,$$

      $$\mathrm{confidence}(X;T) = -\mathrm{energy}(X;T), \quad T > 0.$$
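
    Here is a minimal sketch of these three scores computed from a vector of softmax inputs. The logit values and temperature are arbitrary example values, not values from this page.

    % Sketch: baseline, ODIN, and energy confidence scores from the
    % softmax inputs a_i(X). The values of a and T are arbitrary examples.
    a = [2.1 0.3 -1.2 0.8];                    % hypothetical softmax inputs
    T = 10;                                    % temperature, T > 0

    softmaxT = @(a,T) exp(a/T)/sum(exp(a/T));  % P_i(X;T)

    baselineScore = max(softmaxT(a,1))         % baseline: T = 1
    odinScore = max(softmaxT(a,T))             % ODIN: temperature-scaled
    energyScore = T*log(sum(exp(a/T)))         % confidence = -energy(X;T)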

    Density-Based Methods

    Density-based methods compute the distribution scores by modeling the underlying features learned by the network with probabilistic models. Observations falling into areas of low density correspond to OOD observations.

    To model the distributions of the features, you can describe the density function for each feature using a histogram. This technique is based on the histogram-based outlier score (HBOS) method [4]. This method uses a data set of ID data, such as training data, to construct histograms representing the density distributions of the ID features. The method has three stages (a minimal sketch follows the list):

    1. Find the principal component features for which to compute the distribution confidence scores:

      1. For each specified layer, find the activations using the n data set observations. Flatten the activations across all dimensions except the batch dimension.

      2. Compute the principal components of the flattened activations matrix. Normalize the eigenvalues such that the largest eigenvalue is 1 and corresponds to the principal component that carries the greatest variance through the layer. Denote the matrix of principal components for layer $l$ by $Q^{(l)}$.

        The principal components are linear combinations of the activations and represent the features that the software uses to compute the distribution scores. To compute the score, the software uses only the principal components whose eigenvalues are greater than the variance cutoff value σ.

        Note

        The HBOS algorithm assumes that the features are statistically independent. The principal component features are pairwise linearly independent but they can have nonlinear dependencies. To investigate feature dependencies, you can use functions such as corr (Statistics and Machine Learning Toolbox). For an example showing how to investigate feature dependence, see Out-of-Distribution Data Discriminator for YOLO v4 Object Detector. If the features are not statistically independent, then the algorithm can return poor results. Using multiple layers to compute the distribution scores can increase the number of statistically dependent features.

    2. For each of the principal component features with an eigenvalue greater than σ, construct a histogram. For each histogram:

      1. Dynamically adjust the width of the bins to create n bins of approximately equal area.

      2. Normalize the bin heights such that the largest height is 1.

    3. Find the distribution score for an observation by summing the logarithmic height of the bin containing the observation for each of the feature histograms, over each of the layers.

      Let $f^{(l)}(X)$ denote the output of layer $l$ for input $X$. Use the principal components to project the output into a lower-dimensional feature space using this equation: $\hat{f}^{(l)}(X) = (Q^{(l)})^{\top} f^{(l)}(X)$.

      Compute the confidence score using this equation:

      $$\mathrm{HBOS}(X;\sigma) = -\sum_{l=1}^{L} \left( \sum_{k=1}^{N^{(l)}(\sigma)} \log\left(\mathrm{hist}_k\left(\hat{f}_k^{(l)}(X)\right)\right) \right),$$

      $$\mathrm{confidence}(X;\sigma) = -\mathrm{HBOS}(X;\sigma),$$

      where $N^{(l)}(\sigma)$ is the number of principal components with an eigenvalue greater than $\sigma$ and $L$ is the number of layers. A larger score corresponds to an observation that lies in an area of higher density. If the observation lies outside the range of any of the histograms, then the bin height for those histograms is 0 and the confidence score is -Inf.

      Note

      The distribution scores depend on the properties of the data set used to construct the histograms [6].
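
      Here is a minimal sketch of this procedure for a single layer. The activation matrices acts and actNew, the cutoff sigma, and the bin count are assumed example values; pca and quantile require Statistics and Machine Learning Toolbox.

      % Sketch of HBOS-style scoring for one layer. acts is an assumed
      % n-by-d matrix of flattened ID activations; actNew is a 1-by-d
      % activation vector for a new observation.
      [Q,~,eigvals] = pca(acts);           % principal components of ID activations
      eigvals = eigvals/max(eigvals);      % normalize largest eigenvalue to 1
      sigma = 0.001;                       % assumed variance cutoff
      Q = Q(:,eigvals > sigma);            % keep components above the cutoff

      mu = mean(acts,1);                   % pca centers the data
      feats = (acts - mu)*Q;               % project ID data onto the components
      featNew = (actNew - mu)*Q;           % project the new observation

      numBins = 10;                        % assumed bin count
      confidence = 0;
      for k = 1:size(feats,2)
          % Approximately equal-area bins: use quantiles of the ID feature.
          edges = unique(quantile(feats(:,k),linspace(0,1,numBins+1)));
          heights = histcounts(feats(:,k),edges);
          heights = heights/max(heights);  % normalize largest height to 1
          bin = discretize(featNew(k),edges);
          if isnan(bin)                    % outside the histogram range
              confidence = -Inf;
              break
          end
          confidence = confidence + log(heights(bin));
      end
      % confidence now equals -HBOS(X;sigma) restricted to this layer.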

    References

    [1] Shalev, Gal, Gabi Shalev, and Joseph Keshet. "A Baseline for Detecting Out-of-Distribution Examples in Image Captioning." In Proceedings of the 30th ACM International Conference on Multimedia, 4175–84. Lisboa, Portugal: ACM, 2022. https://doi.org/10.1145/3503161.3548340.

    [2] Liang, Shiyu, Yixuan Li, and R. Srikant. "Enhancing the Reliability of Out-of-Distribution Image Detection in Neural Networks." arXiv:1706.02690 [cs.LG], August 30, 2020. http://arxiv.org/abs/1706.02690.

    [3] Liu, Weitang, Xiaoyun Wang, John D. Owens, and Yixuan Li. "Energy-Based Out-of-Distribution Detection." arXiv:2010.03759 [cs.LG], April 26, 2021. http://arxiv.org/abs/2010.03759.

    [4] Goldstein, Markus, and Andreas Dengel. "Histogram-Based Outlier Score (HBOS): A Fast Unsupervised Anomaly Detection Algorithm." KI-2012: Poster and Demo Track 9 (2012).

    [5] Yang, Jingkang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. "Generalized Out-of-Distribution Detection: A Survey." arXiv:2110.11334, August 3, 2022. http://arxiv.org/abs/2110.11334.

    [6] Lee, Kimin, Kibok Lee, Honglak Lee, and Jinwoo Shin. "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks." arXiv:1807.03888, October 27, 2018. http://arxiv.org/abs/1807.03888.

    Version History

    Introduced in R2023a
