# How to use "triangulateMultiview" to reconstruct the same world coordinate point under multiple different views?

cui on 8 Apr 2021
Commented: cui on 9 Apr 2021
How can "triangulateMultiview" be used to reconstruct the same world coordinate point from multiple different views? The "triangulate" function seems to work with only a single pair of point sets; it cannot combine corresponding points from more than two views.
The catch is that the camera parameters (intrinsics and extrinsics) are not known; only the "cameraMatrix" (i.e. the camera projection matrix, a 4-by-3 matrix) for each view is available. How can the world coordinate point be solved from these?
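For background, here is the linear system that underlies this kind of reconstruction (a standard direct-linear-transformation argument; the notation m_ij follows the projection-matrix layout used in the code below):

```latex
\[
s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
= \begin{pmatrix}
m_{11} & m_{12} & m_{13} & m_{14}\\
m_{21} & m_{22} & m_{23} & m_{24}\\
m_{31} & m_{32} & m_{33} & m_{34}
\end{pmatrix}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
\]
Eliminating the unknown scale $s$ leaves two linear equations per view:
\begin{align*}
(u\,m_{31}-m_{11})X + (u\,m_{32}-m_{12})Y + (u\,m_{33}-m_{13})Z &= m_{14}-u\,m_{34}\\
(v\,m_{31}-m_{21})X + (v\,m_{32}-m_{22})Y + (v\,m_{33}-m_{23})Z &= m_{24}-v\,m_{34}
\end{align*}
Stacking these over $n \ge 2$ views gives an overdetermined system
$A\,\mathbf{x}=\mathbf{b}$ in $\mathbf{x}=(X,Y,Z)^{\top}$,
which can be solved in the least-squares sense.
```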

I wrote an algorithm that reconstructs multiple points with the least-squares method. The code below can serve as a reference; MATLAB does not provide an equivalent documented function.
function points3D = TriangluateLS(matchedPoints1, camera_matrix1, ...
    matchedPoints2, camera_matrix2, ...
    varargin)
% Purpose: least-squares 3-D point reconstruction from n (2 <= n <= 4) sets
%          of matched image points.
% Inputs : matchedPoints1, m-by-2 double [x,y] image coordinates; the other
%          matchedPoints inputs are analogous, must be the same size, and
%          must correspond row by row.
%          camera_matrix1, 3-by-4 double camera matrix P of the form
%          P = [m11,m12,m13,m14; m21,m22,m23,m24; m31,m32,m33,m34];
%          the other camera_matrix inputs are analogous.
% Output : points3D, m-by-3 double [x,y,z] reconstructed 3-D coordinates.
%
% reference: https://blog.csdn.net/tiemaxiaosu/article/details/51734667
% author: cuixingxing
% email: cuixingxing150@gmail.com
% 2018.7.31
%
minArgs = 4; % at least 2 sets of matched points
maxArgs = 8; % at most 4 sets of matched points
narginchk(minArgs, maxArgs)
if mod(nargin, 2) ~= 0
    error('Inputs must come in matched-points/camera-matrix pairs!');
end
m = size(matchedPoints1, 1); % number of points
points3D = zeros(m, 3);
for i = 1:m
    [A1, b1] = GetCoff(matchedPoints1(i,:), camera_matrix1);
    [A2, b2] = GetCoff(matchedPoints2(i,:), camera_matrix2);
    A = [A1; A2];
    b = [b1; b2];
    if length(varargin) == 2 % 3 views
        [A3, b3] = GetCoff(varargin{1}(i,:), varargin{2});
        A(5:6,:) = A3;
        b(5:6,:) = b3;
    end
    if length(varargin) == 4 % 4 views
        [A3, b3] = GetCoff(varargin{1}(i,:), varargin{2});
        [A4, b4] = GetCoff(varargin{3}(i,:), varargin{4});
        A(5:8,:) = [A3; A4];
        b(5:8,:) = [b3; b4];
    end
    sol = A\b; % least-squares solution of the stacked system
    points3D(i,1:3) = sol;
end
GetCoff.m:
function [A, b] = GetCoff(matchedPoints, camera_matrix)
% Purpose: build the linear-system coefficients contributed by one view.
% Inputs : matchedPoints, 1-by-2 double [x,y] image point.
%          camera_matrix, 3-by-4 camera matrix.
% Output : A, 2-by-3 double
%          b, 2-by-1 double
%
% reference: https://blog.csdn.net/tiemaxiaosu/article/details/51734667
%            https://blog.csdn.net/yangdashi888/article/details/51356385
% author: cuixingxing
% email: cuixingxing150@gmail.com
% 2018.7.31
%
u1 = matchedPoints(:,1); v1 = matchedPoints(:,2);
m11 = camera_matrix(1,1); m12 = camera_matrix(1,2); m13 = camera_matrix(1,3); m14 = camera_matrix(1,4);
m21 = camera_matrix(2,1); m22 = camera_matrix(2,2); m23 = camera_matrix(2,3); m24 = camera_matrix(2,4);
m31 = camera_matrix(3,1); m32 = camera_matrix(3,2); m33 = camera_matrix(3,3); m34 = camera_matrix(3,4);
A = [u1.*m31-m11, u1.*m32-m12, u1.*m33-m13;
     v1.*m31-m21, v1.*m32-m22, v1.*m33-m23];
b = [m14-u1.*m34;
     m24-v1.*m34];
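A quick sanity check for the two functions above (the camera matrices and the world point are made up for illustration; no realistic intrinsics are involved): project a known 3-D point through two synthetic cameras, then hand the resulting image points back to TriangluateLS.

```matlab
% Synthetic sanity check (all values made up): project a known world point
% through two simple camera matrices, then reconstruct it.
P1 = [eye(3), zeros(3,1)];                  % camera 1 at the origin
P2 = [eye(3), [-1; 0; 0]];                  % camera 2 shifted along x
Xw = [0.5; -0.2; 4];                        % ground-truth world point
x1 = P1*[Xw; 1];  x1 = (x1(1:2)/x1(3)).';   % its image point in view 1
x2 = P2*[Xw; 1];  x2 = (x2(1:2)/x2(3)).';   % its image point in view 2
points3D = TriangluateLS(x1, P1, x2, P2);   % recovers [0.5, -0.2, 4]
```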

Qu Cao on 8 Apr 2021
Edited: Qu Cao on 8 Apr 2021
triangulateMultiview requires both camera poses and intrinsic parameters as inputs to compute the 3-D world positions corresponding to point tracks across different images. Internally, the camera projection matrices are computed from these two inputs. You can look at the implementation of triangulateMultiview and reuse some of its pieces to write a function for your workflow; just be aware that this approach is not recommended, as the internal code is neither documented nor tested.
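For completeness, a hedged sketch of the documented triangulateMultiview call as of the R2021a-era API (all numbers are placeholders, and the exact layout of the cameraPoses table is worth double-checking against the doc page):

```matlab
% Sketch only: one point tracked across two views, with hand-built inputs.
intrinsics = cameraIntrinsics([800 800], [320 240], [480 640]); % fx/fy, cx/cy, image size
camPoses = table(uint32([1; 2]), ...                 % ViewId
    {eye(3); eye(3)}, ...                            % Orientation, 3-by-3 each
    {[0 0 0]; [1 0 0]}, ...                          % Location, 1-by-3 each
    'VariableNames', {'ViewId', 'Orientation', 'Location'});
track = pointTracks([1 2], [320 240; 120 240]);      % same point seen in views 1 and 2
xyzPoint = triangulateMultiview(track, camPoses, intrinsics);
```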
cui on 9 Apr 2021
Looking through the source code, I found the internal call I was after. It would be better if this were available as an open, documented function.
worldPoints = vision.internal.triangulateMultiViewPoints(pointTracks, cameraMatrices);

Release: R2021a
