Processor-in-the-Loop Verification of JPDA Tracker for Automotive Applications

This example shows you how to generate embedded code for a joint probabilistic data association (JPDA) tracker and verify it using processor-in-the-loop (PIL) simulations on an STM32 Nucleo board with 1 MB of RAM and 2 MB of flash memory. In this example, you configure the JPDA tracker to process detections from a camera and a radar sensor mounted at the front of an ego vehicle in highway scenarios. For PIL simulations, you use simulated detections to verify the tracking and computational performance of the generated code.

Setup Tracking Algorithm for Embedded Code Generation

Because it balances computational requirements against tracking performance, the JPDA tracker is a suitable choice for embedded systems. At every step, the JPDA tracker splits the detection-to-track data association problem into multiple clusters per sensor. Each cluster contains a set of detections and tracks that can be assigned to each other after gating. The exact separation of detections and tracks into clusters, the size of each cluster, and the number of feasible data association events per cluster are typically determined by run-time inputs and are not known at compilation time. For more information about the JPDA tracking algorithm, refer to the Algorithms section of trackerJPDA.

When generating embedded code from trackers for safety-critical applications such as highway lane following, dynamic memory allocation is typically discouraged. This means that the amount of memory allocated to the tracker must be known at compilation time. Further, the generated code must fit within the memory offered by the embedded device. To efficiently manage the memory footprint of a tracker without dynamic memory allocation, you must specify certain bounds on the tracker. These bounds are typically defined using prior knowledge about the targeted application. To bound the number of feasible events per cluster, you use a K-best JPDA tracker by specifying a finite value for the MaxNumEvents property. This allows the tracker to use at most K data association events per cluster without enumerating all feasible events. You use the MaxNumDetectionsPerCluster and MaxNumTracksPerCluster properties to bound the size of each cluster. For highway driving scenarios, the cluster size can be bounded using prior knowledge about the maximum number of closely spaced vehicles. You choose an appropriate value for the AssignmentThreshold property to gate the detection-to-track association. A large AssignmentThreshold value can cause the gate size to be much larger than desired, which can result in the formation of large clusters. To guard against large clusters, you set the ClusterViolationHandling property to 'Terminate', which causes the tracker to error out if the cluster size bounds are violated. You set the MaxNumDetections and MaxNumDetectionsPerSensor properties using information from the simulated or actual sensors. In this example, the radar outputs a maximum of 36 object-level detections and the camera outputs a maximum of 10 object-level detections.

Finally, embedded code generation using Embedded Coder™ requires the MATLAB® code to be written in the form of a function, typically referred to as an entry-point function. To rewrite the tracking algorithm as a function, you define the tracker inside the entry-point function as a persistent variable to preserve its state between function calls. For this example, the tracking algorithm is wrapped in the entry-point function trackingAlgorithm, shown below and attached with this example.

type trackingAlgorithm.m
function tracks = trackingAlgorithm(detections, time)

% Define the tracker as a persistent variable
persistent tracker

% Initialize the tracker on first call using isempty
if isempty(tracker)
    tracker = trackerJPDA(FilterInitializationFcn=@helperInitFVSFFilter,...
         MaxNumEvents=5,...% 5 best events per cluster 
         MaxNumSensors=2,...% Only 2 sensors feed data to tracker
         MaxNumTracks=36,...% should be at least MaxNumDetectionsPerSensor
         MaxNumDetections=46,...% maximum number of detections from all sensors
         ClutterDensity=1e-9,...% False alarm rate per unit measurement volume
         AssignmentThreshold=50,...% Threshold for gating assignments
         ConfirmationThreshold=[5 6],...% Confirm a track with 5 hits out of 6
         DeletionThreshold=[4 5],...% Delete a track with 4 misses out of 5
         HitMissThreshold=0.5,...% Probability of assignment resulting in hit/miss
         EnableMemoryManagement=true,...% Enable memory management for reducing footprint
         MaxNumDetectionsPerCluster=5,...% Maximum detections per cluster
         MaxNumTracksPerCluster=5,...% Maximum tracks per cluster
         MaxNumDetectionsPerSensor=36,...% Maximum detections per sensor
         ClusterViolationHandling='Terminate'...% Error if cluster sizes are violated
        );
end

% Update the tracker every step using current detections and time stamp
tracks = tracker(detections, time);

end

Setup the Test Bench

To test the tracking algorithm, you use the drivingScenario (Automated Driving Toolbox) object to simulate a highway driving scenario. You use the drivingRadarDataGenerator (Automated Driving Toolbox) and visionDetectionGenerator (Automated Driving Toolbox) objects to simulate detections from the radar and camera sensors, respectively. The scenarios and sensor configurations used in this example are similar to those shown in the Forward Vehicle Sensor Fusion (Automated Driving Toolbox) example and are applicable to automotive applications such as Highway Lane Following (Automated Driving Toolbox). The process of scenario and sensor model generation is wrapped in the helper function helperCreateFVSFPILScenario, attached with this example. This function accepts the name of the scenario as an input. For compatible scenario names, see the Explore Other Scenarios section at the bottom of this example.

The target board used in this example supports floating-point operations in both single and double precision. To reduce the memory footprint of the tracker, you use single-precision inputs to the tracker, which allows it to use strict single-precision arithmetic in the generated code. To cast detections to single precision, you use the helperCastDetections function attached with this example. You can configure the tracking algorithm to use double-precision inputs by changing the dataType variable to 'double'.
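
The snippet below is a minimal sketch of what a casting helper such as helperCastDetections might do, shown here only for illustration; the attached helper may differ in detail (for example, it may also cast numeric fields inside MeasurementParameters).

function castDets = castDetectionsSketch(dets, dataType)
% Cast the numeric fields of each objectDetection to the requested data type
castDets = dets; % dets is a cell array of objectDetection objects
for k = 1:numel(dets)
    d = dets{k};
    castDets{k} = objectDetection(cast(d.Time, dataType), cast(d.Measurement, dataType), ...
        MeasurementNoise=cast(d.MeasurementNoise, dataType), ...
        SensorIndex=d.SensorIndex, ...
        ObjectClassID=d.ObjectClassID, ...
        ObjectAttributes=d.ObjectAttributes, ...
        MeasurementParameters=d.MeasurementParameters);
end
end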

You evaluate the performance of the tracking algorithm using the generalized optimal subpattern assignment (GOSPA) metric, computed by the trackGOSPAMetric object. The GOSPA metric uses the available ground truth from the scenario simulation and captures the accuracy of a tracking algorithm as a scalar distance per step. This property of the metric also makes it an attractive way to assess the equivalency of a tracking algorithm during PIL simulation. In this example, you verify that the generated code on the target hardware produces the same results by comparing the GOSPA values from the MATLAB simulation and the PIL simulation.

% Create scenario.
scenarioName = 'scenario_FVSF_01_Curve_FourVehicles';
[scenario, egoVehicle, radar, camera] = helperCreateFVSFPILScenario(scenarioName);

% Create GOSPA metric object
gospaObj = trackGOSPAMetric(Distance='posabserr');

Next, you run the test bench on this particular scenario by running the tracker in the MATLAB environment to ensure that the test bench and tracking algorithm produce the expected results. You also capture the GOSPA metric during the MATLAB execution.

% Capture the GOSPA metric
gospa = zeros(0,1);

% Create display
scope = HelperJPDATrackerPILDisplay;

% Clear persistent variable before every run
clear trackingAlgorithm;

% Choose data type
dataType = 'single';

while advance(scenario)
    % Get current simulation time
    time = cast(scenario.SimulationTime, dataType);

    % Collect detections from both radar and camera sensors
    detections = helperCollectDetections(egoVehicle, radar, camera, time);

    % Cast detections to single precision
    detections = helperCastDetections(detections, dataType);

    % Feed detections to the tracking algorithm
    tracks = trackingAlgorithm(detections, time);
    
    % Find detectable targets for truth
    truths = helperFilterWithinCoverage(egoVehicle, radar, camera);

    % Calculate the GOSPA metric
    gospa(end+1,1) = gospaObj(tracks, truths); %#ok<SAGROW> 

    % Visualize the results
    scope(scenario, egoVehicle, {radar;camera}, detections, tracks);
end

To conveniently rerun this test bench during PIL simulations, you also wrap the test bench in a separate function, helperJPDATrackerPILTestBench. You can call this function with the following syntax:

gospa = helperJPDATrackerPILTestBench(scenarioName, trackingAlgorithmName, dataType); % No visualization
gospa = helperJPDATrackerPILTestBench(scenarioName, trackingAlgorithmName, dataType, true); % Enable visualization

Generate Code for PIL

In this section, you generate standalone C code for the tracking algorithm as a static library. You then verify the code by running PIL simulations on an STM32 Nucleo H743ZI2 target board. This target board has an ARM® Cortex®-M7 CPU with 1 MB of RAM and 2 MB of flash memory. For more information about PIL simulations on this board, refer to the Processor-in-the-Loop Verification of MATLAB Functions Using STMicroelectronics Nucleo Boards (Simulink Coder) example. Although this example discusses PIL simulation on the Nucleo target hardware, you can use this approach on any supported hardware. See the Embedded Coder Supported Hardware (Embedded Coder) page for more information about supported hardware boards.

To generate code for the tracking algorithm, you must define the types of the inputs to the entry-point function. An easy way to define these inputs is by using the -args option of the codegen (MATLAB Coder) command. You use the detections captured during the MATLAB execution to define the input types for the entry-point function. Note that the data type of the inputs cannot be changed after code generation. Therefore, if the embedded code is generated with single-precision measurements, the test bench must provide single-precision measurements as inputs to the tracking algorithm. Because the number of detections changes between calls to the tracking algorithm, you define the detection input type as a variable-sized cell array with a maximum of 46 elements using the coder.typeof (MATLAB Coder) function. You also define the type of the time input using the data type chosen in the Setup the Test Bench section.

sampleDetection = detections{1};
detectionsInput = coder.typeof({sampleDetection},[46 1],[1 0]);
timeInput = cast(0,dataType);

You define the code generation configuration for PIL verification by creating a coder.EmbeddedCodeConfig (MATLAB Coder) object. You set the VerificationMode property to 'PIL' and specify certain hardware properties on the configuration. To profile the generated code on the target hardware, you also set the CodeExecutionProfiling property to true.

cfg = coder.config('lib','ecoder',true); % Create a coder.EmbeddedCodeConfig object
cfg.VerificationMode = 'PIL'; % Enable PIL for verification
cfg.DynamicMemoryAllocation = 'off'; % Turn off dynamic memory allocation
cfg.Toolchain = 'GNU Tools for ARM Embedded Processors'; % Specify toolchain
cfg.Hardware = coder.hardware('STM32 Nucleo H743ZI2'); % Specify hardware board
cfg.StackUsageMax = 512; % (Bytes) Limit stack usage
cfg.Hardware.PILCOMPort = 'COM3'; % Specify the port for connecting with hardware board
cfg.CodeExecutionProfiling = true; % Enable code execution profiling

Generate code using the codegen function. This function produces a MEX file named trackingAlgorithm_pil in the current working directory. This MEX file provides a wrapper that sends inputs from the MATLAB environment to the target hardware and collects outputs from the target hardware back into MATLAB.

codegen('trackingAlgorithm.m','-args',{detectionsInput,timeInput},'-config',cfg);
### Connectivity configuration for function 'trackingAlgorithm': 'STM32 Microcontroller'
### COM port: COM3
### Baud rate: 115200
Code generation successful.

PIL Simulation and Results

In this section, you use the MEX file generated in the previous section to run PIL simulations on the target hardware. To reuse the test bench created in the Setup the Test Bench section, you specify the tracking algorithm name as trackingAlgorithm_pil.

trackingAlgorithmName = 'trackingAlgorithm_pil';
gospaPIL = helperJPDATrackerPILTestBench(scenarioName, trackingAlgorithmName, dataType);
### Starting application: 'codegen\lib\trackingAlgorithm\pil\trackingAlgorithm.elf'
    To terminate execution: clear trackingAlgorithm_pil
### Downloading executable to the hardware on Drive: S:
H:\MATLAB\Examples\driving_fusion_nucleo-ex84625170\codegen\lib\trackingAlgorithm\pil\..\..\..\..\trackingAlgorithm.bin 
1 File(s) copied 
    Execution profiling data is available for viewing. Open Simulation Data Inspector.
    Execution profiling report available after termination.

The plots below show the GOSPA metric captured during the MATLAB run and during the PIL simulation. Note that the GOSPA metrics captured during both runs are the same, which assures that the generated code running on the target hardware produces the same results as MATLAB.

figure;
plot(gospa,'LineWidth',2); 
hold on;
plot(gospaPIL,'LineWidth',2,'LineStyle','--');
legend('MATLAB Simulation','PIL Simulation');
title("GOSPA Metric"); 
xlabel('Time step'); 
ylabel('Metric'); 
grid on;
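
In addition to the visual comparison, you could verify equivalence programmatically. The check below is a small illustrative addition (the tolerance value is an assumption), not part of the shipped example.

% Check that the GOSPA histories from the MATLAB and PIL runs match
assert(max(abs(gospa - gospaPIL)) < 1e-4, ...
    'GOSPA metrics from the MATLAB and PIL simulations differ.');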

In addition to tracking performance, you also use the profiling results captured by the PIL simulation to check the computational performance of the tracking algorithm on the target hardware. The plots below show the run-time performance of the tracking algorithm on the target hardware. Note that the tracker is able to run at a rate faster than 100 Hz, which confirms the capability of real-time computation on this particular board.

clear(trackingAlgorithmName); % Results available after PIL ends
    Execution profiling report: report(getCoderExecutionProfile('trackingAlgorithm'))
% Plot execution profile
executionProfile = getCoderExecutionProfile('trackingAlgorithm')
 
Code execution profiling data for trackingAlgorithm. To open a report,
enter the command report(executionProfile).
 
figure;
stepSection = executionProfile.Sections(2);
execTime = stepSection.ExecutionTimeInSeconds;
plot(1e3*execTime,'LineWidth',2); 
title('Tracker Execution Time (Simulated Data)'); 
xlabel('Time step'); 
ylabel('Time (ms)'); 
grid on;
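
To relate the profile to the 100 Hz claim, you could also compute the worst-case step time directly from execTime. This is a small illustrative addition, not part of the shipped example.

% Worst-case execution time per tracker step and the equivalent update rate
maxExecTime = max(execTime); % seconds
fprintf('Worst-case step time: %.2f ms (about %.0f Hz)\n', 1e3*maxExecTime, 1/maxExecTime);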

Real-Time Performance Verification on Recorded Data

In the previous sections, you verified the tracking and computational performance of the tracking algorithm on the Nucleo target hardware. The scenario simulation allows you to define a variety of situations and verify the performance of the tracker in those situations. However, it is also critical to verify the performance of the tracker on a real data set, to ensure that the tracking algorithm can handle the challenges and complexity of real-world situations.

In this section, you verify the computational performance of the tracker using data recorded from a camera and a radar in a highway scenario. The radar used in this recording is a multimode radar, which provides wide coverage at mid range and narrow but high-resolution coverage at long range. In addition to providing detections from target objects, the radar also outputs detections from the road infrastructure, making the tracking algorithm susceptible to many false tracks. You filter out the infrastructure detections using the helper function helperFilterStaticDetections. This helper function uses the recorded speed and yaw rate of the ego vehicle, as well as the Doppler (range-rate) information from the radar, to filter out detections from static objects in the environment.
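
The idea behind this filtering can be sketched as follows. The code below is a simplified illustration, assuming radar measurements whose first element is the azimuth angle (in degrees) and whose last element is the range rate (in m/s); it ignores the yaw rate and sensor mounting offsets that the actual helperFilterStaticDetections accounts for.

function [targetDets, staticDets] = filterStaticDetectionsSketch(dets, egoSpeed, tol)
% Separate detections into moving targets and static (infrastructure) returns
isStatic = false(numel(dets),1);
for k = 1:numel(dets)
    meas = dets{k}.Measurement;
    az = meas(1);   % azimuth angle of the detection (deg)
    rr = meas(end); % measured range rate (m/s)
    % A stationary object appears to approach the ego vehicle with a range
    % rate of approximately -egoSpeed*cosd(az)
    isStatic(k) = abs(rr + egoSpeed*cosd(az)) < tol;
end
targetDets = dets(~isStatic);
staticDets = dets(isStatic);
end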

videoFile = '05_highway_lanechange_25s.mp4';
sensorFile = '05_highway_lanechange_25s_sensor.mat';

% Load the data
recording = load(sensorFile);
numSteps = numel(recording.radar);

% Visualize the scenario using a camera recording
videoReader = VideoReader(videoFile);

% Initialize display
scope = HelperJPDATrackerPILDisplay('UseRecordedData',true);

% Timer at 20 Hz
time = cast(0,dataType);
timeStep = cast(0.05,dataType);

% Reinitialize tracker
clear(trackingAlgorithmName);

for currentStep = 1:numSteps
    % Update time
    time = time + timeStep;

    % Collect detections from recording
    [radarTotalDetections, visionDetections, laneData, imuData] = helperCollectDetectionsFromRecording(recording, time, currentStep); 
    
    % Radar detections from the targets and clutter
    [radarDetections, staticDetections] = helperFilterStaticDetections(radarTotalDetections, imuData);

    % Concatenate detections
    detections = [radarDetections;visionDetections];

    detections = helperCastDetections(detections,dataType);

    % Run the tracker on hardware
    tracks = feval(trackingAlgorithmName,detections, time); %#ok<FVAL> 

    % Visualize
    vidImage = readFrame(videoReader);
    scope(vidImage, laneData, detections, staticDetections, tracks);
end
### Connectivity configuration for function 'trackingAlgorithm': 'STM32 Microcontroller'
### COM port: COM3
### Baud rate: 115200
### Starting application: 'codegen\lib\trackingAlgorithm\pil\trackingAlgorithm.elf'
    To terminate execution: clear trackingAlgorithm_pil
### Downloading executable to the hardware on Drive: S:
H:\MATLAB\Examples\driving_fusion_nucleo-ex84625170\codegen\lib\trackingAlgorithm\pil\..\..\..\..\trackingAlgorithm.bin 
1 File(s) copied 
    Execution profiling data is available for viewing. Open Simulation Data Inspector.
    Execution profiling report available after termination.

clear(trackingAlgorithmName); % Results available after PIL ends
    Execution profiling report: report(getCoderExecutionProfile('trackingAlgorithm'))
% Plot execution profile
executionProfile = getCoderExecutionProfile('trackingAlgorithm')
 
Code execution profiling data for trackingAlgorithm. To open a report,
enter the command report(executionProfile).
 
figure;
stepSection = executionProfile.Sections(2);
execTime = stepSection.ExecutionTimeInSeconds;
plot(1e3*execTime,'LineWidth',2); 
title('Tracker Execution Time (Recorded Data)'); 
xlabel('Time step'); 
ylabel('Time (ms)'); 
grid on;

Note that the tracker is able to track targets within the field of view of the sensors and runs faster than 60 Hz on this particular hardware board. This verifies the real-time tracking capability of the algorithm in the denser traffic scenarios captured in the recording.

Explore Other Scenarios

It is important to assess the performance of the tracking algorithm under different scenarios. You can use the simulation environment in this example to explore other scenarios that are compatible with the test bench defined by helperJPDATrackerPILTestBench. You can use any of these five compatible scenarios by specifying the scenarioName input as one of the following:

  • 'scenario_FVSF_01_Curve_FourVehicles'

  • 'scenario_FVSF_02_Straight_FourVehicles'

  • 'scenario_FVSF_03_Curve_SixVehicle'

  • 'scenario_FVSF_04_Straight_FourVehicles'

  • 'scenario_FVSF_05_Straight_TwoVehicles'

Summary

In this example, you learned how to generate code from a tracking algorithm for PIL simulations. You verified the generated code on an STM32 Nucleo board using simulated data as well as recorded data from highway driving scenarios. You further assessed the computational performance and real-time capability of the tracking algorithm in such scenarios on the chosen target hardware.