estrelpose

Calculate relative rotation and translation between camera poses

Since R2022b

Description

relativePose = estrelpose(M,intrinsics,inlierPoints1,inlierPoints2) returns the pose of a calibrated camera relative to its previous pose. The two poses are related by the fundamental, essential, or homography matrix M. The function calculates the camera location up to scale.

relativePose = estrelpose(M,intrinsics1,intrinsics2,inlierPoints1,inlierPoints2) returns the pose of the second camera relative to the first one.

[relativePose,validPointsFraction] = estrelpose(___) additionally returns the fraction of the inlier points that project in front of both cameras.
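The two-camera syntax can be sketched as follows. This is a hedged illustration, not a worked example: the variable names (`intrinsics1`, `intrinsics2`, `inlierPoints1`, `inlierPoints2`) are placeholders for your own calibration and matched-point data.

```matlab
% Hypothetical sketch: estimate the pose of camera 2 relative to camera 1.
% intrinsics1 and intrinsics2 are cameraIntrinsics objects for each camera;
% inlierPoints1 and inlierPoints2 are matched inlier points from the two views.
E = estimateEssentialMatrix(inlierPoints1,inlierPoints2,intrinsics1,intrinsics2);
relPose = estrelpose(E,intrinsics1,intrinsics2,inlierPoints1,inlierPoints2);
```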

Examples

Load a previously calculated fundamental matrix, camera parameters, and image points for a single camera.

load("relCameraPoseData.mat")
intrinsics = cameraParams.Intrinsics;

Calculate the camera pose relative to its previous pose.

relPose = estrelpose(M,intrinsics,inlierPoints1,inlierPoints2)
relPose = 
  rigidtform3d with properties:

    Dimensionality: 3
       Translation: [0.9952 -0.0887 -0.0412]
                 R: [3x3 double]

                 A: [ 0.9911    0.0203   -0.1314    0.9952
                     -0.0196    0.9998    0.0065   -0.0887
                      0.1315   -0.0039    0.9913   -0.0412
                           0         0         0    1.0000]

Input Arguments

Fundamental, essential, or homography matrix, specified as a 3-by-3 matrix, or as an affinetform2d, projtform2d, or simtform2d object containing a homography matrix. You can obtain the 3-by-3 matrix using a function such as estimateFundamentalMatrix or estimateEssentialMatrix.

Data Types: single | double

Camera intrinsics, specified as a cameraIntrinsics object.

Camera intrinsics for camera 1, specified as a cameraIntrinsics object.

Camera intrinsics for camera 2, specified as a cameraIntrinsics object.

Coordinates of corresponding points in view 1, specified as an M-by-2 matrix of [x y] coordinates, or as one of the point feature objects described in Point Feature Types. You can obtain these points using the estimateFundamentalMatrix or estimateEssentialMatrix function.

Coordinates of corresponding points in view 2, specified as an M-by-2 matrix of [x y] coordinates, or as one of the point feature objects described in Point Feature Types. You can obtain these points using the estimateFundamentalMatrix or estimateEssentialMatrix function.

Output Arguments

Relative camera pose in world coordinates, returned as a rigidtform3d object. The "R" and "Translation" properties of the object represent the orientation and location of the camera. If you use only one camera, the properties describe the pose of the second camera relative to the first camera pose. If you use two cameras, the properties describe the orientation and location of camera 2 relative to camera 1.

Fraction of valid inlier points that project in front of both cameras, returned as a scalar. If validPointsFraction is too small, typically less than 0.9, it can indicate that the fundamental matrix is incorrect.
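Checking this second output before trusting the pose can be sketched as follows. This is a hedged illustration using the 0.9 guideline above; the input variables are assumed to match the single-camera syntax, and the threshold is a rule of thumb, not a hard limit.

```matlab
% Hypothetical sketch: flag a pose estimate when too few inlier points
% project in front of both cameras, which can indicate a bad matrix M.
[relPose,validFraction] = estrelpose(M,intrinsics,inlierPoints1,inlierPoints2);
if validFraction < 0.9
    warning("Low valid-points fraction; M may be inaccurate.")
end
```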

Tips

  • You can calculate the camera extrinsics according to:

    relativeOrientation = relativePose.R;
    relativeLocation = relativePose.Translation;
    camPose = rigidtform3d(relativeOrientation,relativeLocation);
    extrinsics = pose2extr(camPose)

  • The estrelpose function uses the inlierPoints1 and inlierPoints2 arguments to determine which of the multiple possible solutions is physically realizable. If the input M is a projtform2d object, there could be up to two solutions that are equally realizable.

References

[1] Hartley, Richard, and Andrew Zisserman. Multiple View Geometry in Computer Vision. 2nd ed. Cambridge, UK ; New York: Cambridge University Press, 2003.

[2] Torr, P.H.S., and A. Zisserman. "MLESAC: A New Robust Estimator with Application to Estimating Image Geometry." Computer Vision and Image Understanding. 78, no. 1 (April 2000): 138–56. https://doi.org/10.1006/cviu.1999.0832.

[3] Faugeras, O., and F. Lustman. "Motion and Structure from Motion in a Piecewise Planar Environment." International Journal of Pattern Recognition and Artificial Intelligence 2, no. 3 (1988): 485–508.

Version History

Introduced in R2022b