plotCamera: ButtonDownFcn usage

Amarbold Purev on 3 Feb 2021
Commented: Mario Malic on 3 Feb 2021
Hi,
I have plotted a point cloud and camera poses from structure from motion, following the MATLAB documentation below.
I want to show the input image of a camera when I click on that camera.
In my case I have 6 cameras.
I think I am doing something wrong with the callback function itself; can someone help me find a solution?
Thank you,
Amarbold
camPoses = poses(vSet);
figure;
plotCamera(camPoses, 'Size', 0.2, 'ButtonDownFcn', @imshowOrig);
hold on
function imshowOrig(camPoses, ~)
    idx = camPoses.ViewId;
    imshow(pics{idx});
end
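For reference, one likely issue: MATLAB passes the clicked graphics object and event data as the first two callback inputs, not the pose table, so `camPoses.ViewId` fails inside the callback; and a single `plotCamera(camPoses, ...)` call gives every camera the same callback with no way to tell which one was clicked. A sketch (untested, assuming the usual source/event callback signature; the `'AbsolutePose'` name-value pair needs a recent release, roughly R2020b or later, otherwise use `'Location'`/`'Orientation'`):

```matlab
% Sketch: plot each camera separately so the clicked camera knows its own
% view index; bind the index and image array as extra callback arguments.
camPoses = poses(vSet);
figure;
hold on
for k = 1:height(camPoses)
    plotCamera('AbsolutePose', camPoses.AbsolutePose(k), 'Size', 0.2, ...
        'ButtonDownFcn', {@imshowOrig, camPoses.ViewId(k), pics});
end
hold off

function imshowOrig(~, ~, viewId, pics)
% The clicked object and event data arrive as the first two inputs; the
% view id and image cell array were bound when the callback was created.
figure;
imshow(pics{viewId});
end
```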
  3 Comments
Amarbold Purev on 3 Feb 2021
Edited: Amarbold Purev on 3 Feb 2021
Thank you for your quick response and attention; it is very helpful.
Maybe my full code so far will help you understand the problem better.
My question is: how should I set up plotCamera and its ButtonDownFcn callback so that clicking any camera displays that camera's .jpg input image?
Thank you,
Amarbold
%% Read and display image sequence
tic;
% Use |imageDatastore| to get a list of all image file names in a
% directory.
imageDir = fullfile('C:\..\.jpg');
imds = imageDatastore(imageDir);
% Display the images.
figure
montage(imds.Files, 'Size', [3, 2]);
% Convert the images to grayscale.
images = cell(1, numel(imds.Files));
pics = cell(1,numel(imds.Files));
for i = 1:numel(imds.Files)
    I = readimage(imds, i);
    images{i} = rgb2gray(I);
    pics{i} = I; % RGB image of the input
    %images{i} = I;
end
title('Input Image Sequence');
disp('Read images');
toc
%% Load Camera Parameters
tic;
% Define camera parameters without lens distortion or skew
focalLength = [1244.1392 1244.1392];
principalPoint = [725.3600236827378466841764748 544.9886704354310040798736736];
imageSize = [1080 1440];
radialDistortion = [1.74629521328660997e-01 -6.27830216229319005e-01 6.41314156207050901e-01];
tangentialDistortion = [-1.09278686452936001e-02 -8.10542255598644605e-05];
intrinsics = cameraIntrinsics(focalLength, principalPoint, imageSize);
cameraParams = cameraParameters('IntrinsicMatrix',intrinsics.IntrinsicMatrix,...
'NumRadialDistortionCoefficients',3,...
'RadialDistortion', radialDistortion,...
'TangentialDistortion', tangentialDistortion);
disp('Loaded camera parameters');
toc
%% Create a view set containing the First View
tic;
% Get intrinsic parameters of the camera
%intrinsics = cameraParams.Intrinsics;
% Undistort the first image.
I = undistortImage(images{1}, intrinsics);
% Detect features. Increasing 'NumOctaves' helps detect large-scale
% features in high-resolution images. Use an ROI to eliminate spurious
% features around the edges of the image.
border = 50;
roi = [border, border, size(I, 2)- 2*border, size(I, 1)- 2*border];
prevPoints = detectSURFFeatures(I, 'NumOctaves', 8, 'ROI', roi);
% Extract features. Using 'Upright' features improves matching, as long as
% the camera motion involves little or no in-plane rotation.
prevFeatures = extractFeatures(I, prevPoints, 'Upright', true);
% Create an empty imageviewset object to manage the data associated with each
% view.
vSet = imageviewset;
% Add the first view. Place the camera associated with the first view
% at the origin, oriented along the Z-axis.
viewId = 1;
vSet = addView(vSet, viewId, rigid3d, 'Points', prevPoints);
disp('First view set');
toc
%% Add the rest of the views
tic;
numPixels = imageSize(1,1) * imageSize(1,2);
for i = 2:numel(images)
    % Undistort the current image.
    I = undistortImage(images{i}, intrinsics);
    % Detect, extract and match features.
    currPoints = detectSURFFeatures(I, 'NumOctaves', 8, 'ROI', roi, 'MetricThreshold', 1000);
    currFeatures = extractFeatures(I, currPoints, 'Upright', true);
    indexPairs = matchFeatures(prevFeatures, currFeatures, ...
        'MaxRatio', .7, 'Unique', true);
    % Select matched points.
    matchedPoints1 = prevPoints(indexPairs(:, 1));
    matchedPoints2 = currPoints(indexPairs(:, 2));
    % Estimate the camera pose of current view relative to the previous view.
    % The pose is computed up to scale, meaning that the distance between
    % the cameras in the previous view and the current view is set to 1.
    % This will be corrected by the bundle adjustment.
    [relativeOrient, relativeLoc, inlierIdx] = helperEstimateRelativePose(...
        matchedPoints1, matchedPoints2, intrinsics);
    % Get the table containing the previous camera pose.
    prevPose = poses(vSet, i-1).AbsolutePose;
    relPose = rigid3d(relativeOrient, relativeLoc);
    % Compute the current camera pose in the global coordinate system
    % relative to the first view.
    currPose = rigid3d(relPose.T * prevPose.T);
    % Add the current view to the view set.
    vSet = addView(vSet, i, currPose, 'Points', currPoints);
    % Store the point matches between the previous and the current views.
    vSet = addConnection(vSet, i-1, i, relPose, 'Matches', indexPairs(inlierIdx,:));
    % Find point tracks across all views.
    tracks = findTracks(vSet);
    % Get the table containing camera poses for all views.
    camPoses = poses(vSet);
    % Triangulate initial locations for the 3-D world points.
    xyzPoints = triangulateMultiview(tracks, camPoses, intrinsics);
    % Refine the 3-D world points and camera poses.
    [xyzPoints, camPoses, reprojectionErrors] = bundleAdjustment(xyzPoints, ...
        tracks, camPoses, intrinsics, 'FixedViewId', 1, ...
        'PointsUndistorted', true);
    % Store the refined camera poses.
    vSet = updateView(vSet, camPoses);
    prevFeatures = currFeatures;
    prevPoints = currPoints;
    % % Get the color of each reconstructed point
    % allColors = reshape(pics{i}, [numPixels,3]);
    % colorIdx = sub2ind([size(pics{i}, 1), size(pics{i}, 2)], round(matchedPoints1(:, 2)),...
    %     round(matchedPoints1(:, 1)));
    % color = allColors(colorIdx, :);
end
disp('Added the rest of the views');
toc
%% Display camera poses
tic;
% Display camera poses.
camPoses = poses(vSet);
figure;
plotCamera(camPoses, 'Size', 0.2, 'ButtonDownFcn', {@imshowOrig, camPoses});
hold on
% Exclude noisy 3-D points.
goodIdx = (reprojectionErrors < 5);
xyzPoints = xyzPoints(goodIdx, :);
% Display the 3-D points.
pcshow(xyzPoints, 'VerticalAxis', 'y', 'VerticalAxisDir', 'down', ...
'MarkerSize', 45);
grid on
hold off
% Specify the viewing volume.
loc1 = camPoses.AbsolutePose(1).Translation;
xlim([loc1(1)-5, loc1(1)+4]);
ylim([loc1(2)-5, loc1(2)+4]);
zlim([loc1(3)-1, loc1(3)+20]);
camorbit(0, -30);
title('Refined Camera Poses');
toc
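As an aside, if the commented-out color-recovery block inside the loop is re-enabled, note that SURFPoints coordinates live in the Location property ([x y]), and sub2ind expects rows (y) before columns (x). A sketch (untested), assuming the goal is the color under each matched point of the current image:

```matlab
% Sketch: recover the RGB color under each matched point of the current
% image. Point coordinates are [x y]; sub2ind takes (rows, cols) = (y, x).
locs = round(currPoints(indexPairs(:, 2)).Location);   % [x y] pixel coords
allColors = reshape(pics{i}, [], 3);                   % one row per pixel
colorIdx = sub2ind([size(pics{i}, 1), size(pics{i}, 2)], ...
    locs(:, 2), locs(:, 1));                           % may need clamping to image bounds
color = allColors(colorIdx, :);
```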
Mario Malic on 3 Feb 2021
It was unnecessary; I have edited the code in my comment. Read it carefully, as you'll need to do a few things for it to work.


Answers (0)
