Main Content

Code Generation for Object Detection by Using Single Shot Multibox Detector

This example shows how to generate CUDA® code for an SSD network (ssdObjectDetector object) and take advantage of the NVIDIA® cuDNN and TensorRT libraries. An SSD network is based on a feed-forward convolutional neural network that detects multiple objects in an image in a single shot. An SSD network can be thought of as having two sub-networks: a feature extraction network followed by a detection network.

This example generates code for the network trained in the Object Detection Using SSD Deep Learning example from Computer Vision Toolbox™. For more information, see Object Detection Using SSD Deep Learning (Computer Vision Toolbox). The Object Detection Using SSD Deep Learning example uses ResNet-50 for feature extraction. The detection sub-network is a small CNN compared to the feature extraction network and is composed of a few convolutional layers and layers specific to SSD.

Third-Party Prerequisites

Required

This example generates CUDA MEX and has the following third-party requirements.

  • CUDA-enabled NVIDIA GPU and compatible driver.

Optional

For non-MEX builds such as static and dynamic libraries or executables, this example has the following additional requirements.

  • CUDA toolkit and cuDNN libraries. For information on the supported versions of the compilers and libraries, see Third-Party Hardware (GPU Coder).
  • Environment variables for the compilers and libraries. For setting up the environment variables, see Setting Up the Prerequisite Products (GPU Coder).

Verify GPU Environment

Use the coder.checkGpuInstall (GPU Coder) function to verify that the compilers and libraries necessary for running this example are set up correctly.

envCfg = coder.gpuEnvConfig('host');
envCfg.DeepLibTarget = 'cudnn';
envCfg.DeepCodegen = 1;
envCfg.Quiet = 1;
coder.checkGpuInstall(envCfg);
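
This example targets the cuDNN library. To target the NVIDIA TensorRT library instead, you can run the same check against that target. The following is a minimal sketch, assuming the TensorRT libraries and the corresponding environment variables are installed and set up:

envCfgTRT = coder.gpuEnvConfig('host');
envCfgTRT.DeepLibTarget = 'tensorrt'; % check the TensorRT setup instead of cuDNN
envCfgTRT.DeepCodegen = 1;
envCfgTRT.Quiet = 1;
coder.checkGpuInstall(envCfgTRT);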

Get Pretrained DAG Network

This example uses the ssdResNet50VehicleExample_20a MAT-file containing the pretrained SSD network. This file is approximately 44 MB in size. Download the file from the MathWorks® website.

ssdNetFile = matlab.internal.examples.downloadSupportFile('vision/data','ssdResNet50VehicleExample_20a.mat');

The DAG network contains 180 layers, including convolution, ReLU, and batch normalization layers, as well as anchor box, SSD merge, focal loss, and other layers. To display an interactive visualization of the deep learning network architecture, use the analyzeNetwork function.

load(ssdNetFile);
analyzeNetwork(detector.Network);
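
You can also query the loaded detector to confirm the classes it detects and the expected network input size. This is a minimal sketch that assumes the standard ssdObjectDetector properties; the detector variable is loaded from the MAT-file above:

disp(detector.ClassNames)                  % classes the detector was trained on
disp(detector.Network.Layers(1).InputSize) % network input size, for example [300 300 3]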

The ssdObj_detect Entry-Point Function

The ssdObj_detect.m entry-point function takes an image input and runs the detector on the image using the deep learning network saved in the ssdResNet50VehicleExample_20a.mat file. The function loads the network object from the ssdResNet50VehicleExample_20a.mat file into a persistent variable ssdObj and reuses the persistent object on subsequent detection calls.

type('ssdObj_detect.m')
function outImg = ssdObj_detect(in,matFile)

%   Copyright 2019-2022 The MathWorks, Inc.

persistent ssdObj;

if isempty(ssdObj)
    ssdObj = coder.loadDeepLearningNetwork(matFile);
end

% Pass in input
[bboxes,~,labels] = detect(ssdObj,in,'Threshold',0.5);

% Convert categorical labels to cell array of character vectors for
% execution
labels = cellstr(labels);

% Annotate detections in the image.
if ~isempty(labels)
    outImg = insertObjectAnnotation(in,'rectangle',bboxes,labels);
else
    outImg = in;
end
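
Before generating code, you can optionally call the entry-point function directly in MATLAB to confirm that it runs as expected. This is a minimal sketch that assumes the vehicle test images used later in this example are already unzipped into the vehicleImages folder:

imgFiles = dir(fullfile(pwd,'vehicleImages','*.jpg'));
I = imread(fullfile(imgFiles(1).folder,imgFiles(1).name));
out = ssdObj_detect(imresize(I,[300,300]),ssdNetFile); % run the detector in MATLAB
imshow(out)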

Run MEX Code Generation

To generate CUDA code for the ssdObj_detect.m entry-point function, create a GPU code configuration object for a MEX target and set the target language to C++. Use the coder.DeepLearningConfig (GPU Coder) function to create a cuDNN deep learning configuration object and assign it to the DeepLearningConfig property of the GPU code configuration object. Run the codegen command, specifying an input size of 300-by-300-by-3. This value corresponds to the input layer size of the SSD network.

cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
inputArgs = {ones(300,300,3,'uint8'),coder.Constant(ssdNetFile)};
codegen -config cfg ssdObj_detect -args inputArgs -report
Code generation successful: View report
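
For the non-MEX builds mentioned in the Third-Party Prerequisites section, you can reuse the same entry-point function with a library or executable code configuration object. The following is a minimal sketch of a static library build, assuming the optional library prerequisites are installed:

cfgLib = coder.gpuConfig('lib');
cfgLib.TargetLang = 'C++';
cfgLib.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
codegen -config cfgLib ssdObj_detect -args inputArgs -report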

Run Generated MEX

To test the generated MEX, the example uses a small vehicle data set that contains 295 images. Many of these images come from the Caltech Cars 1999 and 2001 data sets, available at the Caltech Research Data Repository website, created by Pietro Perona and used with permission.

Load the vehicle data set and randomly select 10 images to test the generated code.

unzip vehicleDatasetImages.zip
imageNames = dir(fullfile(pwd,'vehicleImages','*.jpg'));
imageNames = {imageNames.name}';
rng(0);
imageIndices = randi(length(imageNames),1,10);

Read the test images one at a time, resize each image to the network input size, and detect the vehicles in the images using the generated MEX function.

for idx = 1:10
    testImage = imread(fullfile(pwd,'vehicleImages',imageNames{imageIndices(idx)}));
    resizedImage = imresize(testImage,[300,300]);
    detectorOutput = ssdObj_detect_mex(resizedImage,ssdNetFile);
    imshow(detectorOutput);
    pause(0.5)
end
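
To get a rough estimate of the detection throughput, you can time repeated calls to the generated MEX function. This is a minimal sketch; the measured time depends on your GPU and library versions:

numRuns = 50;
tic
for k = 1:numRuns
    ssdObj_detect_mex(resizedImage,ssdNetFile);
end
avgTimeInMs = toc/numRuns*1000

% Clear the static network object loaded into memory by the generated MEX
clear mex;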

References

[1] Liu, Wei, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C. Berg. "SSD: Single Shot MultiBox Detector." In 14th European Conference on Computer Vision, ECCV 2016. Springer Verlag, 2016.