Point Cloud Registration Overview

A point cloud is a set of points in 3-D space. Point clouds are typically obtained from 3-D scanners, such as a lidar or Kinect® device. They have applications in robot navigation and perception, depth estimation, stereo vision, visual registration, and advanced driver assistance systems (ADAS). Computer Vision Toolbox™ provides functions that are integral to the point cloud registration workflow. The workflow uses the point cloud functions pcmerge, pcdownsample, pctransform, and pcdenoise, and the registration functions pcregistericp, pcregistercpd, and pcregisterndt.

Point cloud registration is the process of aligning two or more 3-D point clouds of the same scene. It enables you to integrate 3-D data from different sources into a common coordinate system. The registration process can include reconstructing a 3-D scene from a Kinect device, building a map of a roadway for automobiles, and deformable motion tracking.

Point Cloud Registration Process

The point cloud registration process includes these three steps.

  1. Preprocessing — Remove noise or unwanted objects in each point cloud. Downsample the point clouds for a faster and more accurate registration.

  2. Registration — Register two or more point clouds.

  3. Alignment and stitching — Optionally stitch the point clouds by transforming and merging them.
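The three steps above can be sketched with the toolbox functions named earlier. This is a minimal sketch, assuming ptCloudFixed and ptCloudMoving are pointCloud objects already loaded (for example, with pcread); the grid sizes are illustrative, not recommended values.

```matlab
% 1. Preprocessing: denoise, then downsample for speed and stability.
fixed  = pcdownsample(pcdenoise(ptCloudFixed),  "gridAverage", 0.1);
moving = pcdownsample(pcdenoise(ptCloudMoving), "gridAverage", 0.1);

% 2. Registration: estimate the rigid transform that aligns moving to fixed.
tform = pcregistericp(moving, fixed);

% 3. Alignment and stitching: apply the transform to the full-resolution
%    cloud, then merge the two clouds into one scene.
movingAligned = pctransform(ptCloudMoving, tform);
ptCloudScene  = pcmerge(ptCloudFixed, movingAligned, 0.01);
```

Note that registration runs on the downsampled copies, while pctransform is applied to the original cloud so that no detail is lost in the stitched result.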

Point Cloud Registration Methods

You can use the pcregistericp, pcregistercpd, or pcregisterndt function to register a moving point cloud to a fixed point cloud. The registration algorithms used by these functions are based on the iterative closest point (ICP) algorithm, the coherent point drift (CPD) algorithm, and the normal-distributions transform (NDT) algorithm, respectively. For more information on these algorithms, see References.
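The three functions share a similar calling pattern, sketched below with default options; moving and fixed are assumed to be pointCloud objects, and gridStep is an assumed NDT voxel size.

```matlab
tformIcp = pcregistericp(moving, fixed);            % ICP: rigid
tformCpd = pcregistercpd(moving, fixed);            % CPD: non-rigid by default,
                                                    % returns a displacement field
gridStep = 1;                                       % NDT voxel size (meters)
tformNdt = pcregisterndt(moving, fixed, gridStep);  % NDT: rigid
```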

When registering a point cloud, you can choose the type of transformation that represents how objects in the scene change between point clouds.

Rigid: The rigid transformation preserves the shape and size of objects in the scene. Objects in the scene can undergo translations, rotations, or both. The same transformation is applied to all points.

Affine: The affine transformation allows the objects to shear and change scale, in addition to translations and rotations.

Non-rigid: The non-rigid transformation allows the shape of objects in the scene to change. Points are transformed differently, and a displacement field is used to represent the transformation.
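As an illustration, each transformation type can be expressed in MATLAB as follows. This is a sketch; the numeric values are arbitrary assumptions.

```matlab
pts = ptCloudMoving.Location;     % N-by-3 point locations

% Rigid: a rotation (ZYX Euler angles, degrees) plus a translation;
% shape and size are preserved.
tformRigid = rigidtform3d([0 0 30], [0.5 0 0]);

% Affine: a general 3-by-3 linear part A allows scale and shear as well.
A = [1.2 0.1 0; 0 1.0 0; 0 0 0.9];
tformAffine = affinetform3d([A, [0; 0; 0]; 0 0 0 1]);

% Non-rigid: every point gets its own displacement, so D is N-by-3.
% movedPts = pts + D;   % D could come from pcregistercpd
```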

This table compares the point cloud registration function options, their transformation types, and their performance characteristics. Use this table to select the appropriate registration function for your use case.

pcregisterndt (rigid)

  • Local registration method that relies on an initial transform estimate

  • Robust to outliers

  • Better with point clouds of differing resolutions and densities

Performance: Fast registration method, but generally slower than ICP

pcregistericp (rigid)

  • Local registration method that relies on an initial transform estimate

Performance: Fastest registration method

pcregistercpd (rigid, affine, and non-rigid)

  • Global method that does not rely on an initial transformation estimate

Performance: Slowest registration method


Tips

  • To improve the accuracy and computation speed of registration, downsample the point clouds using the pcdownsample function before registration.

  • Remove unnecessary features and unwanted objects from the point cloud before registration.

  • Local registration methods, such as those that use NDT or ICP (pcregisterndt or pcregistericp, respectively), require initial estimates. To obtain an initial estimate, use another sensor, such as an inertial measurement unit (IMU), or other forms of odometry. Improving the initial estimate helps the registration algorithm converge faster.

  • Increase the 'MaxIterations' value or decrease the 'Tolerance' value for more accurate registration results, at the cost of slower registration speeds.
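The tips above can be combined into a single registration call. This is a sketch: the odometry guess and parameter values are illustrative assumptions, not recommendations.

```matlab
% Downsample both clouds before registration (tip 1).
moving = pcdownsample(ptCloudMoving, "gridAverage", 0.2);
fixed  = pcdownsample(ptCloudFixed,  "gridAverage", 0.2);

% Seed the local method with an initial estimate, e.g. from an IMU (tip 3).
tformInit = rigidtform3d([0 0 5], [1.0 0 0]);   % hypothetical odometry guess
gridStep  = 1;                                   % NDT voxel size (meters)

% Trade speed for accuracy via MaxIterations and Tolerance (tip 4).
tform = pcregisterndt(moving, fixed, gridStep, ...
        "InitialTransform", tformInit, ...
        "MaxIterations", 60, ...
        "Tolerance", [0.005 0.05]);   % [translation (m), rotation (deg)]
```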


References

[1] Myronenko, A., and X. Song. "Point Set Registration: Coherent Point Drift." IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Number 12, December 2010, pp. 2262–2275.

[2] Chen, Y., and G. Medioni. "Object Modelling by Registration of Multiple Range Images." Image and Vision Computing. Butterworth-Heinemann. Vol. 10, Issue 3, April 1992, pp. 145–155.

[3] Besl, P. J., and N. D. McKay. "A Method for Registration of 3-D Shapes." IEEE Transactions on Pattern Analysis and Machine Intelligence. Los Alamitos, CA: IEEE Computer Society. Vol. 14, Issue 2, 1992, pp. 239–256.

[4] Biber, P., and W. Straßer. “The Normal Distributions Transform: A New Approach to Laser Scan Matching.” Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Las Vegas, NV. Vol. 3, November 2003, pp. 2743–2748.

[5] Magnusson, M. “The Three-Dimensional Normal-Distributions Transform — an Efficient Representation for Registration, Surface Analysis, and Loop Detection.” Ph.D. Thesis. Örebro University, Örebro, Sweden, 2013.
