Obtaining corresponding pixel indices in perspective-corrected image

I am attempting to warp an image taken from an off-center camera so that it resembles the view an on-center camera would have produced. To do this, I locate four points in the pixel plane whose real-world distances from one another are known, and build a projective transformation structure by passing my "real" and "pixel" coordinates to the fitgeotrans function. I then use this transformation with the imwarp function to produce a perspective-corrected image.
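Here is a minimal sketch of that workflow (the file name and coordinates are made-up placeholders, not my actual data):
img = imread('myImage.png'); %Hypothetical input image
pixelPts = [102 87; 955 64; 990 710; 75 742]; %Four reference points in the raw image (pixels)
worldPts = [0 0; 200 0; 200 150; 0 150]; %The same points in real-world units
tform = fitgeotrans(pixelPts, worldPts, 'projective'); %Moving (pixel) -> fixed (world) mapping
[warped, RB] = imwarp(img, tform); %RB is an imref2d describing the output grid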
I would also like to obtain the pixel locations of my four reference points in the new, warped image. I have attempted to do so using the transformPointsForward function, but while the resulting points match the shape of the output reference image, they are scaled differently and do not match the real-world coordinates that I pass into fitgeotrans.
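Roughly, what I tried looks like this (again with placeholder names); the last line is only my guess at how the output's spatial referencing object might convert the result into pixel indices, so treat it as an assumption:
[wx, wy] = transformPointsForward(tform, pixelPts(:,1), pixelPts(:,2)); %Points in the tform's output (world) system
[col, row] = worldToIntrinsic(RB, wx, wy); %My assumption: convert to pixel indices of the warped image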
Is this the proper way of going about finding my new coordinates, or must it be done in some other way? Note that I have very limited knowledge of homography and machine vision concepts, but I am willing to learn if you know of any resources that would be helpful for this task.
If it helps, I can add figures to describe what I am attempting.
  1 Comment
Max on 12 Dec 2014
This seems like a good approach. If somebody has ideas on this, I would also be interested. I am trying to do the same thing to remove keystone distortion from a camera in a live image, also using the Image Acquisition Toolbox.


Accepted Answer

Evan on 1 Jul 2015
Edited: Evan on 1 Jul 2015
I managed to find a solution to this problem. First, note that the following applies to MATLAB R2014b and earlier. I can't confirm it, because I don't have access to the Computer Vision System Toolbox for R2015a, but it looks as if the pointsToWorld function does exactly what is needed.
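If that function is available to you, the documented call looks roughly like this (I haven't been able to test it myself; cameraParams, rotationMatrix, and translationVector would come from a prior camera calibration):
worldPoints = pointsToWorld(cameraParams, rotationMatrix, translationVector, imagePoints);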
For those who, like me, don't have access to that function, I found a Stack Exchange post whose approach can be used in MATLAB with almost no modification. I was able to get it working as below:
hCoordsW = [cornW ones(4,1)]'; %Put x,y world points in homogeneous form
hCoordsP = [cornP ones(4,1)]'; %Put x,y image points in homogeneous form
pDefW = hCoordsW(:,1:3) \ hCoordsW(:,4); %Coefficients expressing the 4th world point in terms of the first 3
pDefP = hCoordsP(:,1:3) \ hCoordsP(:,4); %Same for the 4th pixel point
mW = ones(3,1) * pDefW' .* hCoordsW(:,1:3); %Scale world basis columns by those coefficients
mP = ones(3,1) * pDefP' .* hCoordsP(:,1:3); %Scale pixel basis columns by those coefficients
mapWP = mW/mP; %Homography mapping pixel coordinates to world coordinates
pixel = [pixelX pixelY ones(numel(pixelX),1)]'; %Organize pixel coordinates
world = mapWP * pixel; %Map all pixel locations to world coordinates
world = world ./ (ones(3,1)*world(3,:)); %Dehomogenize (remove scaling)
world = world(1:2,:); %Put back in (x,y) form
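For reference, the inputs I feed it look like this (the numbers here are just an illustrative example, not my real data):
cornW = [0 0; 2 0; 2 1.5; 0 1.5]; %Corner locations in world units (e.g. meters), 4x2
cornP = [102 87; 955 64; 990 710; 75 742]; %The same corners in the original image (pixels), 4x2
pixelX = [500; 250]; %Any pixel x-coordinates to convert
pixelY = [400; 300]; %Corresponding pixel y-coordinates
To go the other direction (world to pixel), the same mapping can simply be inverted: solve mapWP * pixel = world for pixel (i.e. pixel = mapWP \ world in homogeneous form) and dehomogenize in the same way.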
