
trainACFObjectDetector

Train ACF object detector

Description

detector = trainACFObjectDetector(trainingData) returns a trained aggregate channel features (ACF) object detector. The function uses positive instances of objects in the images specified by trainingData, which can be a table or a datastore, and automatically collects negative instances from those images during training. To create a ground truth table, use the Image Labeler or Video Labeler app.

detector = trainACFObjectDetector(trainingData,Name=Value) specifies options using one or more name-value arguments in addition to any combination of arguments from previous syntaxes. For example, ObjectTrainingSize=[100,100] sets the height and width of objects during training.
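For illustration, a minimal sketch of both syntaxes follows, assuming trainingData has already been prepared as described under Input Arguments; the option values shown are placeholders.

% Train with default options.
detector = trainACFObjectDetector(trainingData);

% Train with name-value options, for example a fixed object size
% and a custom number of boosting stages.
detector = trainACFObjectDetector(trainingData, ...
    ObjectTrainingSize=[100 100],NumStages=5);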


Examples


Use the trainACFObjectDetector function with training images to create an ACF object detector that can detect stop signs. Test the detector on a separate image.

Load the training data.

load('stopSignsAndCars.mat')

Prefix the full path to the stop sign images.

stopSigns = fullfile(toolboxdir('vision'),'visiondata',stopSignsAndCars{:,1});

Create datastores to load the ground truth data for stop signs.

imds = imageDatastore(stopSigns);
blds = boxLabelDatastore(stopSignsAndCars(:,2));

Combine the image and box label datastores.

ds = combine(imds,blds);

Train the ACF detector. Set the negative samples factor to 2, so that the number of negative samples used at each stage is twice the number of positive samples. You can turn off the training progress output by specifying Verbose=false as a name-value argument.

acfDetector = trainACFObjectDetector(ds,NegativeSamplesFactor=2);
ACF Object Detector Training
The training will take 4 stages. The model size is 34x31.
Sample positive examples(~100% Completed)
Compute approximation coefficients...Completed.
Compute aggregated channel features...Completed.
--------------------------------------------
Stage 1:
Sample negative examples(~100% Completed)
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 84 negative examples...Completed.
The trained classifier has 19 weak learners.
--------------------------------------------
Stage 2:
Sample negative examples(~100% Completed)
Found 84 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 84 negative examples...Completed.
The trained classifier has 20 weak learners.
--------------------------------------------
Stage 3:
Sample negative examples(~100% Completed)
Found 84 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 84 negative examples...Completed.
The trained classifier has 54 weak learners.
--------------------------------------------
Stage 4:
Sample negative examples(~100% Completed)
Found 84 new negative examples for training.
Compute aggregated channel features...Completed.
Train classifier with 42 positive examples and 84 negative examples...Completed.
The trained classifier has 61 weak learners.
--------------------------------------------
ACF object detector training is completed. Elapsed time is 16.0231 seconds.

Test the ACF detector on a test image.

img = imread('stopSignTest.jpg');
[bboxes,scores] = detect(acfDetector,img);

Display the detection results and insert the bounding boxes for objects into the image.

for i = 1:length(scores)
   annotation = sprintf('Confidence = %.1f',scores(i));
   img = insertObjectAnnotation(img,'rectangle',bboxes(i,:),annotation);
end

figure
imshow(img)

The resulting figure shows the test image with bounding boxes and confidence scores for the detected stop signs.

Input Arguments


trainingData

Labeled ground truth, specified as a datastore or a table.

  • If you use a datastore, your data must be set up so that calling the datastore with the read and readall functions returns a cell array or table with at least two columns. The columns must contain the following data:

    Images: Cell vector of grayscale or RGB images.

    Boxes: M-by-4 matrices of bounding boxes of the form [x, y, width, height], where [x, y] represents the top-left coordinates of the bounding box.

    Labels (optional): Cell array that contains an M-element categorical vector of object class names. All categorical data returned by the datastore must contain the same categories.

    When you provide labels, the function uses the class label to fill the ModelName property of the trained detector, which is returned as an acfObjectDetector object. Otherwise, class labels are not required for training, because the ACF object detector is a single-class detector.

  • If you use a table, the table must have two or more columns. The first column must contain image file names with paths. The images must be grayscale or truecolor (RGB), and they can be in any format supported by imread. Each of the remaining columns must be a cell vector that contains M-by-4 matrices representing a single object class, such as vehicle, flower, or stop sign. Each matrix contains M bounding boxes in the format [x,y,width,height], which specifies the upper-left corner location and the size of the bounding box in the corresponding image. To create a ground truth table, you can use the Image Labeler app or Video Labeler app. To create a table of training data from the generated ground truth, use the objectDetectorTrainingData function. A sketch of such a table appears after this list.
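As an illustration only, a small training table could also be assembled by hand, as in the following sketch; the file names and box coordinates here are hypothetical placeholders, and in practice you would typically generate the table from labeled ground truth with objectDetectorTrainingData.

% Hypothetical image files and stop sign boxes, for illustration only.
imageFilename = {'stopSign01.jpg';'stopSign02.jpg'};
stopSign = {[120 60 50 50];[200 80 45 45; 30 40 48 48]};
trainingData = table(imageFilename,stopSign);

% Alternatively, from an exported groundTruth object gTruth:
% trainingData = objectDetectorTrainingData(gTruth);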

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: ObjectTrainingSize=[100,100] sets the height and width of objects during training.

ObjectTrainingSize

Size of objects during training, specified as a 2-element vector of the form [height width] in pixels. The minimum training size is [8 8]. During the training process, the function resizes objects to the height and width specified by ObjectTrainingSize. Increasing the size can improve detection accuracy, but also increases training and detection times.

When you specify 'Auto', the size is set based on the median width-to-height ratio of the positive instances.

Example: [100,100]

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
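For instance, a call that fixes the training size might look like the following sketch, assuming ds is a training datastore such as the one created in the example above.

detector = trainACFObjectDetector(ds,ObjectTrainingSize=[100 100]);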

NumStages

Number of training stages for the iterative training process, specified as a positive integer. Increasing this number can improve the detector and reduce training errors, at the expense of longer training time.

Data Types: double

NegativeSamplesFactor

Negative sample factor, specified as a real-valued scalar. The number of negative samples to use at each stage is equal to

NegativeSamplesFactor × number of positive samples used at each stage
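For example, in the training output shown earlier, each stage uses 42 positive samples and NegativeSamplesFactor is 2, so the function samples roughly 84 negative examples per stage.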

Data Types: double

MaxWeakLearners

Maximum number of weak learners for the last stage, specified as a positive integer scalar or a vector of positive integers. If the input is a scalar, MaxWeakLearners specifies the maximum number of weak learners for the last stage. If the input is a vector, MaxWeakLearners specifies the maximum number for each stage and must have a length equal to NumStages. These values typically increase through the stages. The ACF object detector uses boosting to create an ensemble of weak learners. Higher values can improve detection accuracy, at the expense of slower detection speed. Recommended values range from 300 to 5000.

Data Types: double
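For instance, a per-stage cap could be specified as in this sketch, assuming ds is a training datastore like the one in the example above; the vector length matches NumStages, and the specific values are illustrative only.

detector = trainACFObjectDetector(ds,NumStages=4, ...
    MaxWeakLearners=[64 256 1024 2048]);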

Verbose

Option to display progress information for the training process, specified as true or false.

Data Types: logical

Output Arguments


detector

Trained ACF-based object detector, returned as an acfObjectDetector object.

References

[1] Dollár, Piotr, Ron Appel, Serge Belongie, and Pietro Perona. "Fast Feature Pyramids for Object Detection." IEEE Transactions on Pattern Analysis and Machine Intelligence 36, no. 8 (August 2014): 1532–1545. https://doi.org/10.1109/TPAMI.2014.2300479.

Version History

Introduced in R2017a