Overview of Scenario Generation from Recorded Sensor Data
The Scenario Builder for Automated Driving Toolbox™ support package enables you to create virtual driving scenarios from vehicle data recorded using various sensors, such as a global positioning system (GPS), inertial measurement unit (IMU), camera, and lidar. To create virtual driving scenarios, you can use raw sensor data as well as recorded actor track lists or lane detections. Using these virtual driving scenarios, you can mimic real-world driving conditions and evaluate autonomous driving systems in a simulation environment.
Scenario generation from recorded sensor data involves these steps:
1. Preprocess input data.
2. Extract ego vehicle information.
3. Extract scene information.
4. Extract non-ego actor information.
5. Create, simulate, and export the scenario.
Preprocess Input Data
Scenario Builder for Automated Driving Toolbox supports a variety of sensor data. You can load recorded data from GPS, IMU, camera, or lidar sensors into MATLAB®. To use recorded sensor data in scenario generation workflows, you must represent it in the specified coordinate systems. For more information, see Coordinate Systems for Scenario Generation. You can represent the recorded sensor data using these sensor data objects:
- GPSData object — Stores GPS data with timestamps.
- Trajectory object — Creates a trajectory using timestamps and waypoints.
- CameraData object — Stores a sequence of camera data with timestamps.
- LidarData object — Stores a sequence of lidar data with timestamps.
- ActorTrackData object — Stores recorded actor track data with timestamps.
For more information on how to use sensor data objects to create a scenario, see the Generate RoadRunner Scenario from Recorded Sensor Data example.
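For example, this sketch shows one way to construct a GPSData object from numeric vectors of recorded readings. The name-value arguments shown (TimeStamp, Latitude, Longitude, Altitude) are assumptions based on typical usage; confirm the exact syntax on the GPSData reference page.

    % Column vectors of recorded GPS readings and their timestamps.
    lat = [42.3000; 42.3001; 42.3002];     % latitude, in degrees
    lon = [-71.3500; -71.3499; -71.3498];  % longitude, in degrees
    alt = zeros(3,1);                      % altitude, in meters
    ts  = [0; 0.1; 0.2];                   % timestamps, in seconds

    % Store the readings in a GPSData object.
    % The argument names below are assumptions; see the GPSData reference page.
    gpsData = GPSData(TimeStamp=ts,Latitude=lat,Longitude=lon,Altitude=alt);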
You can also create sensor data objects from the recorded sensor data by using the recordedSensorData function. Additionally, the synchronize object function enables you to synchronize different sensor data by rearranging the data into a common timestamp range. For more information, see the Synchronize GPS, Camera, and Actor Track Data for Scenario Generation example.
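For instance, assuming you have GPSData and CameraData objects recorded over overlapping time ranges, a synchronize call might look like the sketch below. The two-output pattern is an assumption; see the synchronize reference page for the exact syntax.

    % Synchronize GPS and camera data to a common timestamp range.
    % The two-output pattern is an assumption; check the reference page.
    [syncedGPSData,syncedCameraData] = synchronize(gpsData,cameraData);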
You can specify the region of interest (ROI) in the GPS data for which you want to create a scenario. Use the getMapROI function to get the coordinates of a geographic bounding box from the GPS data. To visualize geographic data, use the geoplayer object.
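For example, given latitude and longitude vectors from the recorded GPS data, you can compute a geographic bounding box and stream the route on a map:

    % Get a geographic bounding box that covers the recorded route.
    mapParameters = getMapROI(lat,lon);

    % Visualize the route with a streaming geographic player.
    zoomLevel = 16;
    player = geoplayer(lat(1),lon(1),zoomLevel);
    plotRoute(player,lat,lon);
    for i = 1:numel(lat)
        plotPosition(player,lat(i),lon(i));
    end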
Extract Ego Vehicle Information
The trajectory object function of the GPSData object enables you to create trajectories from GPS data. Because these trajectories are extracted directly from raw GPS data, they often suffer from GPS noise due to multipath propagation. You can smooth this data to remove noise and better localize the ego vehicle by using the smooth object function of the Trajectory object. For more information, see the Generate Scenario from Actor Track Data and GPS Data example.
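A minimal sketch of this step, assuming the default syntaxes (both functions also accept options that control trajectory fitting and smoothing; see their reference pages):

    % Create an ego trajectory from the recorded GPS data.
    egoTrajectory = trajectory(gpsData);

    % Smooth the trajectory to reduce GPS noise from multipath propagation.
    smoothedTrajectory = smooth(egoTrajectory);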
To improve road-level localization of the ego vehicle, you can fuse the information from GPS and IMU sensors. For more information, see Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation. To get lane-level localization of the ego vehicle, you can use lane detections and HD map data. For more information, see Ego Localization Using Lane Detections and HD Map for Scenario Generation.
Extract Scene Information
To extract scene information, you must have road parameters and lane information. Use the roadprops function to extract road parameters from the desired geographic ROI. You can extract road parameters from these sources:
- OpenStreetMap®
- HERE HD Live Map¹
- Zenrin Japan Map API 3.0 (Itsumo NAVI API 3.0)²
The function extracts parameters for any road within the ROI. To generate a scenario, you need only the roads on which the ego vehicle is traveling. Use the selectActorRoads function to get the ego-specific roads.
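As a sketch, assuming OpenStreetMap data saved to a local file (the file name is a placeholder, and the exact roadprops and selectActorRoads argument lists are assumptions; see their reference pages):

    % Extract road properties for roads inside the geographic ROI.
    roadData = roadprops("OpenStreetMap","mapData.osm");

    % Keep only the roads that the ego vehicle travels on.
    egoRoadData = selectActorRoads(roadData,egoTrajectory);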
The ego-specific roads contain lanes, which are essential for navigation in an autonomous system. To generate roads with lanes, you must have lane information. Use these objects and functions to extract lane information from the recorded sensor data.
- laneBoundaryDetector object — Detects lane boundaries in camera images.
- laneBoundaryTracker System object™ — Tracks multiple lane boundary detections as parabolicLaneBoundary, cubicLaneBoundary, and clothoidLaneBoundary objects.
- laneData object — Stores the recorded lane boundary data with timestamps.
- updateLaneSpec function — Updates the lane specifications using the recorded lane detections.
- egoToWorldLaneBoundarySegments function — Generates lane boundary segments in world coordinates from tracked lane boundaries in ego coordinates.
- laneBoundarySegment object — Stores lane boundary information of a road segment.
- laneBoundaryGroup object — Groups lane boundaries in lane boundary segment objects.
- localizeEgoUsingLanes function — Localizes the ego trajectory on a map using lane detections.
For information on how to extract lane information from raw camera data, see Extract Lane Information from Recorded Camera Data for Scene Generation. You can also generate scenes containing add or drop lanes with junctions by using pre-labeled lanes from camera images, raw lidar data, and GPS waypoints. For more information, see Generate RoadRunner Scene Using Labeled Camera Images and Raw Lidar Data.
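As an illustration of how the detector and tracker chain together, here is a hypothetical per-frame pipeline. All of the call syntaxes below are assumptions; consult each reference page before use.

    % Hypothetical lane pipeline; call syntaxes are assumptions.
    % frames is assumed to be a cell array of camera images.
    detector = laneBoundaryDetector;   % detects lane boundaries in images
    tracker = laneBoundaryTracker;     % System object that tracks detections
    for k = 1:numel(frames)
        detections = detect(detector,frames{k});  % assumed object function
        trackedBoundaries = tracker(detections);  % standard System object call
    end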
You can convert custom scene data into the RoadRunner HD Map data model and import your data into RoadRunner. To generate a RoadRunner HD Map with lane information from your custom lane boundary points, use the getLanesInRoadRunnerHDMap or roadrunnerLaneInfo function. Along with roads and lanes, a real-world scene also contains various static objects, such as buildings, trees, cones, barriers, and electric poles, which are useful to recreate in virtual scenarios. Use the roadrunnerStaticObjectInfo function to generate static object information in the RoadRunner HD Map format.
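A sketch of this conversion, assuming custom lane boundary points and labeled static-object data are already in the workspace (the variable names and argument lists are assumptions; see the reference pages):

    % Generate lane information and a RoadRunner HD Map from custom
    % lane boundary points. The output pattern is an assumption.
    [laneInfo,rrMap] = getLanesInRoadRunnerHDMap(laneBoundaryPoints);

    % Generate static object information in the RoadRunner HD Map format.
    staticObjectInfo = roadrunnerStaticObjectInfo(staticObjectLabels);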
You can generate a high-definition scene containing static objects by using labeled lidar data. For more information, see Generate RoadRunner Scene with Trees and Buildings Using Recorded Lidar Data. In addition to lidar data, you can also use aerial hyperspectral data to generate a high-definition scene containing static objects such as trees and buildings. For more information, see Generate RoadRunner Scene Using Aerial Hyperspectral and Lidar Data.
You can also generate a high-definition scene containing traffic signs extracted from labeled camera and lidar sensor data. For more information, see Generate RoadRunner Scene with Traffic Signs Using Recorded Sensor Data.
Processing large point clouds is often time-consuming. To reduce processing time, extract smaller local point clouds of interest. You can use the egoPointCloudExtractor object to extract local point clouds from a larger point cloud around the waypoints of a specified ego trajectory.
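A minimal sketch, assuming a constructor that accepts the ego trajectory and an extract object function (both the property name and the call syntax are assumptions; see the egoPointCloudExtractor reference page):

    % Configure the extractor with the ego trajectory whose waypoints
    % define where local point clouds are extracted. Property name is assumed.
    extractor = egoPointCloudExtractor(EgoTrajectory=egoTrajectory);

    % Extract local point clouds around the trajectory waypoints.
    localPointClouds = extract(extractor,largePointCloud);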
Extract Non-Ego Actor Information
After extracting ego vehicle information and road parameters, you need non-ego actor information to create a driving scenario. Use the actorTracklist object to store recorded actor track list data with timestamps. You can use the actorprops function to extract non-ego actor parameters from the actorTracklist object. The function extracts various non-ego actor parameters, including waypoints, speed, roll, pitch, yaw, and entry and exit times.
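For example, assuming track data organized as timestamped tracks (the constructor argument names and the actorprops argument list are assumptions; see the reference pages):

    % Store the recorded actor tracks with timestamps.
    % The argument names below are assumptions.
    trackList = actorTracklist(TimeStamp=ts,TrackIDs=trackIDs, ...
        ClassIDs=classIDs,Position=positions);

    % Extract non-ego actor parameters, such as waypoints, speed,
    % and entry and exit times, relative to the ego trajectory.
    nonEgoActorData = actorprops(trackList,egoTrajectory);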
For information on how to extract an actor track list from camera data, see Extract Vehicle Track List from Recorded Camera Data for Scenario Generation. You can also extract a vehicle track list from recorded lidar data. For more information, see Extract Vehicle Track List from Recorded Lidar Data for Scenario Generation.
From raw camera data, you can extract the accurate vehicle position, orientation, and dimension information required to generate scenarios. For more information, see Extract 3D Vehicle Information from Recorded Monocular Camera Data for Scenario Generation.
Create, Simulate, and Export Scenario
Create a driving scenario using a drivingScenario object. Use this object to add a road network and to specify actors and their trajectories from your extracted parameters. For more information on how to create and simulate a scenario, see Generate Scenario from Actor Track Data and GPS Data.
You can export the generated scenario to the ASAM OpenSCENARIO® file format by using the export function of the drivingScenario object.
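A small end-to-end sketch of this step, using the standard Automated Driving Toolbox drivingScenario workflow. The road geometry, vehicle, and file name are illustrative placeholders, and the export call follows the documented export(scenario,format,filename) pattern.

    % Create a scenario with a 100-meter straight road and one vehicle.
    scenario = drivingScenario;
    road(scenario,[0 0 0; 100 0 0]);
    egoVehicle = vehicle(scenario,ClassID=1);
    trajectory(egoVehicle,[1 0 0; 99 0 0],15);   % follow the road at 15 m/s

    % Export the scenario to an ASAM OpenSCENARIO file.
    export(scenario,"OpenSCENARIO","myScenario.xosc");

    % Simulate the scenario, advancing until the trajectory completes.
    while advance(scenario)
    end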
Using a roadrunnerHDMap object, you can also create a RoadRunner HD Map from road network data that you have updated using lane detections. The RoadRunner HD Map enables you to build a RoadRunner scene. For more information, see the Generate RoadRunner Scene from Recorded Lidar Data example.
You can create multiple variations of a generated scenario to perform additional testing of automated driving functionalities. For more information, see Get Started with Euro NCAP Test Suite and the Generate Variants of Scenario Created from Recorded Sensor Data example.
See Also
Functions
actorprops | getMapROI | roadprops | selectActorRoads | updateLaneSpec | roadrunnerLaneInfo | recordedSensorData | localizeEgoUsingLanes
Objects
actorTracklist | laneData | laneBoundaryDetector | laneBoundaryTracker | roadrunnerHDMap | drivingScenario | GPSData | Trajectory | CameraData | LidarData | egoPointCloudExtractor
Topics
- Generate RoadRunner Scenario from Recorded Sensor Data
- Generate RoadRunner Scene Using Processed Camera Data and GPS Data
- Generate RoadRunner Scene from Recorded Lidar Data
- Generate RoadRunner Scene Using Aerial Lidar Data
- Generate High Definition Scene from Lane Detections and OpenStreetMap
- Georeference Sequence of Point Clouds for Scene Generation
- Transform Aerial Point Cloud for Scene Generation
- Ego Vehicle Localization Using GPS and IMU Fusion for Scenario Generation
- Ego Localization Using Lane Detections and HD Map for Scenario Generation
- Preprocess Lane Detections for Scenario Generation
- Extract Lane Information from Recorded Camera Data for Scene Generation
- Generate Scenario from Actor Track Data and GPS Data
- Fuse Prerecorded Lidar and Camera Data to Generate Vehicle Track List for Scenario Generation
- Smooth GPS Waypoints for Ego Localization
- Extract Key Scenario Events from Recorded Sensor Data
1 You need to enter into a separate agreement with HERE in order to gain access to the HDLM services and to get the required credentials (access_key_id and access_key_secret) for using the HERE Service.
2 To gain access to the Zenrin Japan Map API 3.0 (Itsumo NAVI API 3.0) service and get the required credentials (a client ID and secret key), you must enter into a separate agreement with ZENRIN DataCom CO., LTD.