3D surface reconstruction from 2D image recorded through a prism

Patrick Schlegel on 4 Aug 2021
Edited: Patrick Schlegel on 19 Aug 2021
Hi all,
I am currently working on a project that involves the reconstruction of a 3D surface from a 2D image. The 2D image is recorded through a prism, generating 2 views from different perspectives. So what I want to know are the 3D-positions of the black markers in the image below:
For calibration, an object of known dimensions was recorded with the same camera setup (see below). So technically there should be a way to combine the known 2D positions from the calibration image, its known 3D positions, and the 2D positions of the markers in the first image to get the 3D positions of those markers.
I have already tried the triangulate function together with estimateCameraParameters, but that does not work, because the calibration images are expected to be taken from several different angles (and it also seems to be a slightly different use case). Since I am new to 3D reconstruction, I would appreciate any help with this.
What I need are the 3D positions of the black markers on their surface in the upper picture, i.e. I need some way to get the transformation function T from the bottom and top points of the bottom picture, like this:
T(calib2D_bottom, calib2D_top) = calib3D | T=?
To then apply it to the upper image to get the 3D positions of the actual points on the actual surface
T(points2D_bottom, points2D_top) = points3D | points3D=?
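As an illustration of what such a T could be in the simplest case: if T is assumed to be linear, it can be fitted from the calibration correspondences by least squares. This is only a sketch under that assumption (N and M stand for the numbers of calibration and measurement points; this linearity assumption may well not hold for a prism setup):
% sketch, assuming T is a linear map from stacked 2D coordinates to 3D
A = [calib2D_bottom, calib2D_top, ones(N,1)];   % N-by-5 design matrix
F = A \ calib3D;                                % 5-by-3 least-squares solution
% apply the fitted map to the measurement points
points3D = [points2D_bottom, points2D_top, ones(M,1)] * F;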
Thank you,
Patrick

Patrick Schlegel on 19 Aug 2021
Edited: Patrick Schlegel on 19 Aug 2021
OK, I will answer this question myself, since I believe I have solved it:
There is an algorithm for 3D reconstruction based on cuboids/cubes described in this paper:
So what I did first was to "estimate" a 3D cuboid shape based on the known positions. For this I estimated and averaged the x, y and z vectors. I obtained the z vectors indirectly by constructing a point p' that lies exactly above the points in the gouges of the plate, like this:
This gave me two artificial cubes based on the top and bottom views, from which I computed the average 2D side lengths of the cubes like this:
From there I could follow the procedure described in the paper and created a first version of the 3D shape:
As this was still far from optimal, the next step was to create a 3D grid of real "world coordinates" and minimize the error between the reconstructed 3D points and the real points with a function like this:
% objective: returns the squared reconstruction error for a given F
function squareError = optimizeWithGlobals(F)
% refinement with fminsearch
Refined_F = fminsearch(@optimizeWithGlobals, ForRefinement_F, options);
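For reference, a minimal sketch of how such an objective might be structured (reconstruct3D and the global variable names here are placeholders, not my actual implementation):
% sketch only -- the helper and the globals are assumed names
function squareError = optimizeWithGlobals(F)
    global calib2D_bottom calib2D_top world3D      % shared via globals, hence the name
    recon3D = reconstruct3D(calib2D_bottom, calib2D_top, F);  % hypothetical helper
    squareError = sum((recon3D(:) - world3D(:)).^2);          % sum of squared errors
end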
I also took some measures to prevent the optimization from getting stuck in local minima, and got a good reconstruction in the end (green x-es are world points, red ones are reconstructed):
Notice that some of the points on the edge are slightly off (but none by more than 0.2 mm). This is mostly because I could not place these points as precisely as the others in the grid image (this was fixed later):
Also a special thanks to Michael, who helped me with some advice on how to optimize my reconstruction.
If someone wants to know more details about this approach, it will probably be explained in more detail in a future publication (I will link it here if I remember). There is also always the option to comment here with questions (but do not expect fast response times).
Edit: I forgot to mention that this whole process is used to optimize the F matrix, which is basically the "function T" I mentioned in the original question:
T(calib2D_bottom, calib2D_top) = calib3D | T=?
So with this F matrix I should now be able to reconstruct surfaces, as long as the recording settings stay exactly the same.
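As a sketch, assuming F acts as a linear map on the stacked bottom/top 2D coordinates (the exact structure of F may differ in the actual method):
% illustrative only: apply the refined F to a new recording
M = size(points2D_bottom, 1);
points3D = [points2D_bottom, points2D_top, ones(M,1)] * Refined_F;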

darova on 5 Aug 2021
Edited: darova on 5 Aug 2021
Try something like this:
% [xw3,yw3,zw3] - 3D positions of white markers
% [xw2, yw2] - 2D positions of white markers
% [xb2, yb2] - 2D positions of black markers
xb3 = griddata(xw2,yw2,xw3,xb2,yb2);
yb3 = griddata(xw2,yw2,yw3,xb2,yb2);
zb3 = griddata(xw2,yw2,zw3,xb2,yb2);
Let me know if it works
Patrick Schlegel on 5 Aug 2021
First: thank you for replying. Unfortunately this does not work: griddata fits a surface to the white markers' 2D and 3D positions, i.e. it estimates the surface of the calibration object, and then returns the 3D positions the other 2D points would have if they lay on that object, but they do not.
What I need are the 3D positions of the black markers on their surface in the upper picture, i.e. I need some way to get the transformation function T from the bottom and top points of the bottom picture, like this:
T(calib2D_bottom, calib2D_top) = calib3D | T=?
To then apply it to the upper image to get the 3D positions of the actual points on the actual surface
T(points2D_bottom, points2D_top) = points3D | points3D=?
I also added this in the post