
extractReidentificationFeatures

Extract object re-identification (ReID) features from image

Since R2024a

Description

features = extractReidentificationFeatures(reID,I) extracts object re-identification features from an image or a batch of images I.

Note

This functionality requires Deep Learning Toolbox™.
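A minimal usage sketch for the single-image syntax, assuming reID is an existing trained reidentificationNetwork object (for example, one returned by trainReidentificationNetwork) and "person.jpg" is a placeholder image file whose channel count matches the network input:

% Read one image and extract its re-identification feature vector.
I = imread("person.jpg");
features = extractReidentificationFeatures(reID,I);

% For a single image, features is an M-by-1 vector.
disp(size(features))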

[features,labels] = extractReidentificationFeatures(reID,ds) extracts features and their respective labels from the images returned by the read function of the input datastore ds. Specify this syntax to obtain all the feature vectors from a datastore, or to evaluate ReID network performance using the evaluateReidentificationNetwork function.
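A sketch of the datastore syntax, assuming reID is as above and the gallery images are stored in a placeholder folder "reidGallery" organized by object identity, so that folder names act as labels:

% Labeled image datastore: the Labels property is populated from folder names.
imds = imageDatastore("reidGallery", ...
    IncludeSubfolders=true,LabelSource="foldernames");

% Extract one feature vector per image, along with its label.
[features,labels] = extractReidentificationFeatures(reID,imds);

% features is M-by-N and labels is 1-by-N, where N is the number of images.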

[___] = extractReidentificationFeatures(___,Name=Value) specifies options using one or more name-value arguments, in addition to any combination of arguments from previous syntaxes. For example, MiniBatchSize=4 sets the mini-batch size for inference to 4.

Input Arguments


Re-identification network, specified as a reidentificationNetwork object.

Images, specified as an H-by-W-by-C or H-by-W-by-C-by-B numeric array. You must specify real, nonsparse grayscale or RGB images.

  • H — Height of the input images.

  • W — Width of the input images.

  • C — Number of channels. The channel size of each image must be equal to the input channel size of the network. For example, for grayscale images, C must be 1. For RGB color images, it must be 3.

  • B — Number of test images in the batch. The extractReidentificationFeatures function extracts features for each test image in the batch, as shown in the batching sketch after this argument description.

Data Types: uint8 | uint16 | int16 | double | single
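A sketch of forming an image batch, assuming "query1.jpg" and "query2.jpg" are placeholder RGB images with the same size and channel count as the network input:

% Concatenate images along the fourth dimension to get an H-by-W-by-C-by-B array.
I1 = imread("query1.jpg");
I2 = imread("query2.jpg");
batch = cat(4,I1,I2);

% One feature vector is returned per image in the batch (M-by-2 here).
features = extractReidentificationFeatures(reID,batch);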

Datastore of RGB or grayscale images, specified as an imageDatastore object with a populated Labels property, or a datastore whose read function returns a B-by-2 cell array, where B is the number of images read. Each row of the cell array is of the form {Image Class}, with these columns (see the datastore sketch after the column descriptions):

  • Image — RGB image, stored as an H-by-W-by-3 numeric array, or grayscale image, stored as an H-by-W matrix.

  • Class — String or categorical that contains the object class name of the corresponding image in Image. All categorical data returned by the datastore must have the same categories.
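One way to build such a datastore, sketched under the assumption that combining an imageDatastore with an arrayDatastore of labels yields read output in the required {Image Class} form; imageFiles and classNames are placeholder lists of file names and matching class names:

% Pair each image with its class label in a combined datastore.
imds  = imageDatastore(imageFiles);
lblds = arrayDatastore(categorical(classNames),OutputType="same");
ds = combine(imds,lblds);

[features,labels] = extractReidentificationFeatures(reID,ds);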

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: MiniBatchSize=4 sets the mini-batch size for inference to 4.

Mini-batch size, specified as a positive integer. Adjusting the MiniBatchSize value can help you process a large collection of images. The extractReidentificationFeatures function groups images into mini-batches of the specified size and processes them together, which can improve computational efficiency at the cost of increased memory demand. Increase the mini-batch size to decrease processing time. Decrease the mini-batch size to use less memory.
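A sketch of limiting memory use when extracting features from a large labeled datastore, assuming imds and reID are as in the earlier sketches; smaller mini-batches reduce peak memory, larger ones can reduce total processing time:

% Process the datastore in mini-batches of 8 images.
[features,labels] = extractReidentificationFeatures(reID,imds, ...
    MiniBatchSize=8);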

Hardware resource on which to run the re-identification network feature extraction, specified as one of these values:

  • "auto" — Use a GPU if Parallel Computing Toolbox™ is installed and a supported GPU device is available. Otherwise, use the CPU.

  • "gpu" — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA®-enabled NVIDIA® GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

  • "cpu" — Use the CPU.

Performance optimization, specified as one of these options:

  • "auto" — Automatically apply compatible optimizations suitable for the input network and hardware resource.

  • "none" — Disable all acceleration.

The Acceleration option "auto" can improve performance on subsequent calls with compatible parameters, at the expense of an increased initial run time. Use performance optimization when you plan to call the function multiple times using new input data.

Visible progress display, specified as a numeric or logical 1 (true) or 0 (false).

Output Arguments


Features extracted from the re-identification network, returned as one of these options:

  • M-by-1 vector — I is a single image. M is the length of the feature vector, given by the FeatureLength property of the re-identification network.

  • M-by-B matrix — I is a batch of images. B is the number of images in the batch.

  • M-by-N matrix — I is a datastore, ds. N is the number of images in the datastore.
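The extracted features can be compared directly to re-identify objects. As an illustrative sketch only (cosine similarity is one common choice, not necessarily the metric used elsewhere in the toolbox), assuming galleryFeatures (M-by-N) and galleryLabels come from an earlier datastore call and Iquery is a placeholder query image:

% Extract the feature vector for the query image (M-by-1).
queryFeature = extractReidentificationFeatures(reID,Iquery);

% Cosine similarity: normalize columns, then take dot products.
qn = queryFeature./norm(queryFeature);
gn = galleryFeatures./vecnorm(galleryFeatures);
similarity = gn'*qn;          % N-by-1 similarity scores

% The gallery entry with the highest similarity is the best match.
[~,bestIdx] = max(similarity);
bestMatchLabel = galleryLabels(bestIdx);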

Labels corresponding to the object ID or class name of each feature vector in features, returned as a 1-by-N vector of strings. N is the number of images in the input datastore, ds.

Version History

Introduced in R2024a