estimateWorldCameraPose


Estimate camera pose from 3-D to 2-D point correspondences



[worldOrientation,worldLocation] = estimateWorldCameraPose(imagePoints,worldPoints,cameraParams) returns the orientation and location of a calibrated camera in a world coordinate system. The input worldPoints must be defined in the world coordinate system.

This function solves the perspective-n-point (PnP) problem using the perspective-three-point (P3P) algorithm [1]. The function also eliminates spurious correspondences using the M-estimator sample consensus (MSAC) algorithm [2].

[___,inlierIdx] = estimateWorldCameraPose(imagePoints,worldPoints,cameraParams) returns the indices of the inliers used to compute the camera pose, in addition to the arguments from the previous syntax.

[___,status] = estimateWorldCameraPose(imagePoints,worldPoints,cameraParams) additionally returns a status code to indicate whether there were enough points.

[___] = estimateWorldCameraPose(___,Name,Value) uses additional options specified by one or more Name,Value pair arguments, using any of the preceding syntaxes.


Examples

Load previously calculated world-to-image correspondences.

data = load('worldToImageCorrespondences.mat');

Estimate the world camera pose.

[worldOrientation,worldLocation] = estimateWorldCameraPose( ...
     data.imagePoints,data.worldPoints,data.cameraParams);

Plot the world points.

 pcshow(data.worldPoints,'VerticalAxis','Y','VerticalAxisDir','down', ...
      'MarkerSize',30);
 hold on
 plotCamera('Size',10,'Orientation',worldOrientation,'Location', ...
      worldLocation);
 hold off

The figure shows the world points plotted in 3-D along with the estimated camera pose.

Input Arguments


imagePoints

Coordinates of undistorted image points, specified as an M-by-2 array of [x,y] coordinates. The number of image points, M, must be at least four.

The function does not account for lens distortion. You can either undistort the images using the undistortImage function before detecting the image points, or you can undistort the image points themselves using the undistortPoints function.
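For example, points detected in the original (distorted) image can be corrected before estimating the pose. A sketch, assuming imagePointsRaw, worldPoints, and cameraParams are already in the workspace:

```matlab
% Undistort the detected points, then estimate the pose.
% imagePointsRaw, worldPoints, and cameraParams are assumed to exist.
undistortedPoints = undistortPoints(imagePointsRaw, cameraParams);
[orientation, location] = estimateWorldCameraPose( ...
    undistortedPoints, worldPoints, cameraParams);
```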

Data Types: single | double

worldPoints

Coordinates of world points, specified as an M-by-3 array of [x,y,z] coordinates.

Data Types: single | double

cameraParams

Camera parameters, specified as a cameraParameters or cameraIntrinsics object. You can return the cameraParameters object using the estimateCameraParameters function. The cameraParameters object contains the intrinsic, extrinsic, and lens distortion parameters of a camera.

Name-Value Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'MaxNumTrials',1000

MaxNumTrials

Maximum number of random trials, specified as the comma-separated pair consisting of 'MaxNumTrials' and a positive integer scalar. The actual number of trials depends on the number of image and world points, and on the values of the MaxReprojectionError and Confidence arguments. Increasing the number of trials improves the robustness of the output at the expense of additional computation.

Confidence

Confidence for finding the maximum number of inliers, specified as the comma-separated pair consisting of 'Confidence' and a scalar in the range (0,100). Increasing this value improves the robustness of the output at the expense of additional computation.

MaxReprojectionError

Reprojection error threshold for finding outliers, specified as the comma-separated pair consisting of 'MaxReprojectionError' and a positive numeric scalar in pixels. Increasing this value makes the algorithm converge faster, but can reduce the accuracy of the result.
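The three name-value arguments can be combined in a single call. A sketch, assuming imagePoints, worldPoints, and cameraParams already exist:

```matlab
% Trade robustness for speed by tuning the MSAC parameters.
[orientation, location] = estimateWorldCameraPose( ...
    imagePoints, worldPoints, cameraParams, ...
    'MaxNumTrials', 2000, ...      % more trials: more robust, slower
    'Confidence', 99, ...          % percent confidence of finding inliers
    'MaxReprojectionError', 2);    % inlier threshold in pixels
```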

Output Arguments


worldOrientation

Orientation of the camera in world coordinates, returned as a 3-by-3 matrix.

Data Types: double

worldLocation

Location of the camera in world coordinates, returned as a 1-by-3 vector.

Data Types: double
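If you need the extrinsics (the transformation from world coordinates to camera coordinates), you can convert the returned pose with the cameraPoseToExtrinsics function, for example:

```matlab
% Convert the camera pose (world coordinates) to extrinsics.
[rotationMatrix, translationVector] = cameraPoseToExtrinsics( ...
    worldOrientation, worldLocation);
```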

inlierIdx

Indices of inlier points, returned as an M-by-1 logical vector. A logical true value in the vector corresponds to an inlier in imagePoints and worldPoints.
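The logical vector can index directly into the input arrays to keep only the correspondences that support the returned pose. A sketch, assuming the inputs already exist:

```matlab
[orientation, location, inlierIdx] = estimateWorldCameraPose( ...
    imagePoints, worldPoints, cameraParams);
% Keep only the correspondences flagged as inliers.
inlierImagePoints = imagePoints(inlierIdx, :);
inlierWorldPoints = worldPoints(inlierIdx, :);
```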

status

Status code, returned as 0, 1, or 2.

Status code    Status
0              No error
1              imagePoints and worldPoints do not contain enough points. A minimum of four points is required.
2              Not enough inliers found. A minimum of four inliers is required.
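When the status output is requested, the function reports these failures through the code rather than raising an error, so the caller can branch on it. A sketch, assuming the inputs already exist:

```matlab
[orientation, location, inlierIdx, status] = estimateWorldCameraPose( ...
    imagePoints, worldPoints, cameraParams);
switch status
    case 0
        % Pose estimated successfully; orientation and location are valid.
    case 1
        warning('Not enough input points; at least four are required.');
    case 2
        warning('Not enough inliers found; at least four are required.');
end
```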


References

[1] Gao, X.-S., X.-R. Hou, J. Tang, and H.-F. Cheng. “Complete Solution Classification for the Perspective-Three-Point Problem.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Volume 25, Issue 8, August 2003, pp. 930–943.

[2] Torr, P. H. S., and A. Zisserman. “MLESAC: A New Robust Estimator with Application to Estimating Image Geometry.” Computer Vision and Image Understanding. Volume 78, Issue 1, April 2000, pp. 138–156.


Introduced in R2016b