Lidar Toolbox provides algorithms, functions, and apps for designing, analyzing, and testing lidar processing systems. You can perform object detection and tracking, semantic segmentation, shape fitting, lidar registration, and obstacle detection. The toolbox provides workflows and an app for lidar-camera cross-calibration.
The toolbox lets you stream data from Velodyne®, Ouster®, and Hokuyo™ lidars and read data recorded by Velodyne, Ouster, and Hesai® lidar sensors. The Lidar Viewer app enables interactive visualization and analysis of lidar point clouds. You can train detection, semantic segmentation, and classification models using machine learning and deep learning algorithms such as PointPillars, SqueezeSegV2, and PointNet++. The Lidar Labeler app supports manual and semi-automated labeling of lidar point clouds for training deep learning and machine learning models.
Lidar Toolbox provides lidar processing reference examples for perception and navigation workflows. Most toolbox algorithms support C/C++ code generation for integrating with existing code, desktop prototyping, and deployment.
Streaming and Reading Lidar Data
Stream live lidar point clouds from Velodyne lidar sensors. Read lidar data in different file formats, including PCAP, LAS, Ibeo, PCD, and PLY.
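As a minimal sketch of the reading workflow, the following MATLAB code loads recorded lidar data from a PCAP, PCD, and LAS file; the file names and the VLP-16 device model are placeholder assumptions.

```matlab
% Read frames from a recorded Velodyne PCAP file (file name is a placeholder).
veloReader = velodyneFileReader("lidarData.pcap","VLP16");
ptCloud = readFrame(veloReader,1);        % first organized point cloud frame

% Read a point cloud stored in PCD (or PLY) format.
ptCloudFile = pcread("sample.pcd");

% Read an aerial lidar LAS file.
lasReader = lasFileReader("terrain.las");
ptCloudLas = readPointCloud(lasReader);

pcshow(ptCloud)
```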
Lidar Preprocessing
Apply functions and algorithms for unorganized-to-organized conversion, ground segmentation, downsampling, point cloud transformation, and feature extraction on lidar point clouds.
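A minimal preprocessing sketch is shown below. It assumes a placeholder point cloud file and a recent MATLAB release that supports rigidtform3d; the commented-out organization step assumes an HDL-64E sensor preset for lidarParameters.

```matlab
ptCloud = pcread("sample.pcd");             % placeholder file name

% Downsample with a 3-D box grid filter (0.2 m grid step).
ptCloudDown = pcdownsample(ptCloud,"gridAverage",0.2);

% Segment the ground by fitting a plane roughly normal to the z-axis.
maxDistance = 0.3;                          % inlier tolerance in meters
referenceVector = [0 0 1];
[~,groundIdx,nonGroundIdx] = pcfitplane(ptCloudDown,maxDistance,referenceVector);
nonGround = select(ptCloudDown,nonGroundIdx);

% Apply a rigid transformation (45-degree rotation about the z-axis).
theta = 45;
R = [cosd(theta) -sind(theta) 0; sind(theta) cosd(theta) 0; 0 0 1];
tform = rigidtform3d(R,[0 0 0]);
ptCloudTformed = pctransform(nonGround,tform);

% Unorganized-to-organized conversion (sensor preset is an assumption):
% params = lidarParameters("HDL64E",1024);
% ptCloudOrg = pcorganize(ptCloud,params);
```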
Visualize and Analyze Lidar Data
Visualize, analyze, and perform preprocessing operations on lidar data using the Lidar Viewer app. Use built-in or custom preprocessing algorithms for ground removal, denoising, median filtering, cropping, and downsampling lidar data.
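The app workflow is interactive, but the same preprocessing operations are available programmatically. The sketch below assumes a placeholder file; the median-filtering step is commented out because it expects an organized point cloud.

```matlab
ptCloud = pcread("sample.pcd");             % placeholder file name

% Denoise: remove outliers based on distance to neighboring points.
ptCloudDenoised = pcdenoise(ptCloud);

% Median filtering for organized point clouds, as offered in the app:
% ptCloudMedian = pcmedian(ptCloudOrganized);

% Crop to a region of interest [xmin xmax ymin ymax zmin zmax] in meters.
roi = [-20 20 -10 10 -2 5];
indices = findPointsInROI(ptCloudDenoised,roi);
ptCloudCropped = select(ptCloudDenoised,indices);

% Randomly downsample to 50% of the points and display.
ptCloudOut = pcdownsample(ptCloudCropped,"random",0.5);
pcshow(ptCloudOut)
```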
Lidar Semantic Segmentation
Apply deep learning algorithms to segment lidar point clouds. Train, test, and evaluate semantic segmentation networks, including PointNet++, PointSeg, and SqueezeSegV2, on lidar data. Generate C/C++ or CUDA® code for target hardware.
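As a hedged sketch of the network-creation step, the code below builds a SqueezeSegV2 layer graph for organized lidar data projected to a 64-by-1024, 5-channel range image; the class count, training datastore, and training options are placeholder assumptions, and the training call is left commented out.

```matlab
% Create a SqueezeSegV2 layer graph for 5-channel range images
% (x, y, z, intensity, range).
inputSize = [64 1024 5];
numClasses = 3;                             % example class count (assumption)
lgraph = squeezesegv2Layers(inputSize,numClasses);

% Training options; trainingData is a placeholder datastore of range images
% and pixel labels prepared from labeled lidar data.
options = trainingOptions("adam",MaxEpochs=10,MiniBatchSize=4);
% net = trainNetwork(trainingData,lgraph,options);
```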
Object Detection on Lidar Point Clouds
Detect objects in lidar point clouds, fit oriented bounding boxes around them, and use the detections in object tracking or lidar labeling workflows. Design, train, and evaluate robust detectors such as PointPillars networks, and generate C/C++ or CUDA code for target hardware.
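One way to obtain oriented bounding boxes without a trained detector is to cluster non-ground points and fit a cuboid to each cluster, as sketched below with a placeholder point cloud file; a trained PointPillars detector would instead return boxes from a single detect call, shown as a comment.

```matlab
ptCloud = pcread("sample.pcd");             % placeholder file name

% Remove the ground plane, then cluster the remaining points.
[~,~,outlierIdx] = pcfitplane(ptCloud,0.3,[0 0 1]);
nonGround = select(ptCloud,outlierIdx);
minDistance = 0.5;                          % cluster separation in meters
[labels,numClusters] = pcsegdist(nonGround,minDistance);

% Fit an oriented cuboid (9-parameter bounding box) to each cluster.
models = cell(numClusters,1);
for k = 1:numClusters
    clusterPts = select(nonGround,find(labels == k));
    models{k} = pcfitcuboid(clusterPts);    % returns a cuboidModel
end

% With a trained PointPillars detector (not created here):
% [bboxes,scores,labels] = detect(pointPillarsDetector,ptCloud);
```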
Lidar Labeling
Label lidar point clouds for training deep learning models. Apply built-in or custom algorithms to automate lidar point cloud labeling with the Lidar Labeler app, and evaluate automation algorithm performance.
Lidar-Camera Calibration
Cross-calibrate lidar and camera sensors to fuse camera and lidar data. Use the Lidar Camera Calibrator app to detect, extract, and visualize checkerboard features from images and lidar point clouds. Estimate the rigid transformation matrix between the camera and the lidar using feature detection results.
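The same calibration can be scripted with the toolbox checkerboard-based workflow. The sketch below is a rough outline only: the file lists, square size, and camera intrinsics are placeholders, and exact function signatures vary by release.

```matlab
squareSize = 81;                            % checkerboard square size in mm (assumption)
imageFiles = ["img1.png" "img2.png"];       % calibration images (placeholders)
pcFiles    = ["scan1.pcd" "scan2.pcd"];     % corresponding point clouds (placeholders)

% Estimate 3-D checkerboard corners in the camera frame; intrinsics is a
% cameraIntrinsics object assumed to come from camera calibration.
[imageCorners3d,boardDimension] = estimateCheckerboardCorners3d(imageFiles, ...
    intrinsics,squareSize);

% Detect the checkerboard plane in each point cloud.
lidarPlanes = detectRectangularPlanePoints(pcFiles,boardDimension);

% Estimate the rigid transformation between the lidar and the camera.
[tform,errors] = estimateLidarCameraTransform(lidarPlanes,imageCorners3d);

% Use the result, for example to project lidar points into an image:
% imPts = projectLidarPointsOnImage(ptCloud,intrinsics,tform);
```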
Lidar Registration and Simultaneous Localization and Mapping (SLAM)
Register lidar point clouds by extracting and matching fast point feature histogram (FPFH) descriptors or using segment matching. Implement 3D SLAM algorithms by stitching together lidar point cloud sequences from ground and aerial lidar data.
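A minimal FPFH-based registration sketch follows; the scan file names are placeholders, and the estgeotform3d and pcregistericp calls assume a recent Computer Vision Toolbox release.

```matlab
fixed  = pcread("scan1.pcd");               % placeholder file names
moving = pcread("scan2.pcd");

% Extract FPFH descriptors and match them between the two scans.
fixedFeatures  = extractFPFHFeatures(fixed);
movingFeatures = extractFPFHFeatures(moving);
indexPairs = pcmatchfeatures(movingFeatures,fixedFeatures,moving,fixed);

% Estimate an initial rigid transformation from the matched points.
matchedMoving = select(moving,indexPairs(:,1));
matchedFixed  = select(fixed,indexPairs(:,2));
tformInit = estgeotform3d(matchedMoving.Location,matchedFixed.Location,"rigid");

% Refine with ICP and stitch the scans into a common frame.
tform = pcregistericp(moving,fixed,InitialTransform=tformInit);
merged = pcmerge(fixed,pctransform(moving,tform),0.05);
```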
2D Lidar Processing
Implement SLAM algorithms from 2D lidar scans. Estimate positions and create binary or probabilistic occupancy grids using real or simulated sensor readings.
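A minimal 2-D SLAM loop is sketched below using the lidarSLAM interface (provided by Navigation Toolbox); the sensor range, map resolution, loop-closure settings, and the scans cell array are placeholder assumptions.

```matlab
maxLidarRange = 8;                          % meters (assumed sensor range)
mapResolution = 20;                         % grid cells per meter
slamAlg = lidarSLAM(mapResolution,maxLidarRange);
slamAlg.LoopClosureThreshold = 210;
slamAlg.LoopClosureSearchRadius = 8;

% scans is assumed to be a cell array of lidarScan objects, for example
% scans{i} = lidarScan(ranges,angles) built from real or simulated readings.
for i = 1:numel(scans)
    addScan(slamAlg,scans{i});
end

% Recover optimized scans and poses, then build an occupancy grid map.
[scansOpt,poses] = scansAndPoses(slamAlg);
occMap = buildMap(scansOpt,poses,mapResolution,maxLidarRange);
show(occMap)
```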