Problem finding the centroids of objects in an image

I'm working on a motion-tracking script and I'm having trouble finding the centroids of the objects I'm tracking. I'm trying to find the centroids of the two white dots in the image below (a bike rider's ankle and knee).
The problem is that the centroid of the lower dot (ankle) is calculated as (1003.6, 545.1), while the upper dot (knee) has a centroid of (1282.6, 155.4). This is a problem because the upper dot is quite obviously above the lower dot, yet its calculated y value is smaller. I'm tracking the right objects, and everything behaves exactly as expected except the y value of the knee. When displayed on top of their images, the given coordinates land on the centroids.
i = 1;
disp('calculating')
while hasFrame(video)
    % load current frame as image
    x = readFrame(video);
    % image processing is only done on the first frame
    if i == 1
        % convert image to grayscale
        u = rgb2gray(x);
        % determine threshold value
        w = graythresh(u);
        % convert grayscale image to binary image
        q = im2bw(x, w);
        % delete small objects
        y = bwareaopen(q, 1000, 4);
        % label remaining objects
        L = bwlabel(y);
        % find the centroids of ankle and knee;
        % ankle and knee labels determined experimentally using imshow
        ankle_cent = regionprops(L==7, 'centroid');
        knee_cent  = regionprops(L==10, 'centroid');
        % extract centroids from structs
        ankle_cent = ankle_cent.Centroid;
        knee_cent  = knee_cent.Centroid;
        % initialize ankle tracker
        ankle_tracker = vision.PointTracker;
        initialize(ankle_tracker, ankle_cent, x);
        % initialize knee tracker
        knee_tracker = vision.PointTracker;
        initialize(knee_tracker, knee_cent, x);
        % record initial positions of knee and ankle
        ankle(i,:) = ankle_cent;
        knee(i,:)  = knee_cent;
    else
        % track knee and ankle across all remaining frames
        [ankle(i,:), point_validity]  = step(ankle_tracker, x);
        [knee(i,:),  point_validity2] = step(knee_tracker, x);
    end
    i = i + 1;
end
I've eliminated the portion that loads the video, as well as some graphing code.

 Accepted Answer

imshow() puts the origin of the image at the top left. For example, if you had a line in the second row of the array, the line would show up in the second row from the top. Therefore something with a lower row number appears higher on the screen, and likewise something that is higher on the screen must have a lower row number.
For example,
foo = ones(50,40);foo(5,:) = 0;
imagesc(foo)
The dark line has a low row index, which translates to higher up on the display. You can see the coordinates numbered along the axes.
You are thinking in terms of a higher y value being higher on the screen, which would be the case if the origin were in the lower left instead of the upper left. The origin is in the lower left for ordinary plots, but if you call image(), imagesc(), or imshow() before you hold on, the image*() routine automatically sets the axis YDir property to 'reverse'.
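To make this concrete, here is a minimal sketch (the blob and its position are made up for illustration, not taken from the original video) showing that regionprops() reports y in image coordinates, and how to flip a centroid y value to a bottom-left convention if that is what you want to compare against:

```matlab
% regionprops returns centroids in image coordinates:
% row 1 is the TOP of the image, so y grows downward.
img = zeros(100, 100);
img(20:24, 40:44) = 1;        % a small blob near the top of the image
s = regionprops(logical(img), 'Centroid');
c = s.Centroid;               % c(2) is small even though the blob is "high"
% convert to a bottom-left origin by flipping the y value:
y_flipped = size(img, 1) + 1 - c(2);   % large value = high on the screen
```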

6 Comments

David Streng on 2 Jun 2016
Edited: David Streng on 2 Jun 2016
But imshow isn't being used in the calculation. Does regionprops put the origin in the top left as well?
Edit: apparently regionprops does put the origin in the upper left. I put in a temporary workaround, ankle(:,2) = 1080 - ankle(:,2). Is there a way to change the origin in the regionprops command?
but imshow isn't being used in the calculation
How do you know that one centroid is "higher" or "lower" than the other? The answer is that you looked at the image, and you put the image on the display using imshow() which sets the origin at the top left.
regionprops() uses normal MATLAB indexing of arrays, which is typically thought of in terms of the origin being at the top left. That is an abstraction, since memory is linear rather than rectangular.
What I suggest you do is flipud() your image array, imshow() that, and then set(gca, 'YDir', 'normal'); that will give you a consistent representation with the origin at the bottom left.
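A sketch of that suggestion, assuming an arbitrary stand-in image rather than the poster's video frame:

```matlab
% display an image with the origin at the bottom left
img = imread('peppers.png');   % any demo image works as a stand-in
imshow(flipud(img));           % flip rows so row 1 ends up at the bottom
set(gca, 'YDir', 'normal');    % make y increase upward on the axes
axis on                        % show the coordinates along the axes
```

With this, a centroid computed on the flipped array has a y value that agrees with the intuitive "higher y = higher on screen" reading.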
David Streng on 2 Jun 2016
Edited: David Streng on 2 Jun 2016
I know that one centroid is "higher" (with the origin in bottom left) by looking at both the raw video footage (which I didn't show) and the first processed frame (the picture in my original post).
I guess it makes sense that regionprops() would put the origin in top left when you look at the image as an array. Mentally, I was thinking of it as a picture, not an array so I expected it to place the origin as it would in a graph.
I don't have access to the Image Processing Toolbox at the moment (I'm a student and have to use school PCs if I want to use it), but I'll try out your recommendation when I get the chance.
Each of those "lookings" probably involved display routines that set the origin to the top left.
I thought the Student Suite automatically came with the Image Processing Toolbox. Type ver on the command line to check what toolboxes you have installed.
Turns out I have it; it's just not installed.


More Answers (1)

See my Image Segmentation Tutorial http://www.mathworks.com/matlabcentral/fileexchange/25157-image-segmentation-tutorial and see how it gets the centroids.
Beyond that, though, you're making a BIG mistake in assuming that the ankle will always be label #7 and the knee will always be label #10. You also don't need to call regionprops twice, and you shouldn't. I know you might think it's faster since you're computing the centroids of just two regions instead of all of them, but it doesn't take much time, and you're not guaranteed that 7 and 10 will be the correct regions. What you need to do is a feature analysis on the blobs and determine which blob is the knee and which is the ankle based on features, not just label ID number.
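As a rough sketch of what such a feature analysis could look like: the selection rule below (pick the two roundest blobs, then take the higher one as the knee) is a made-up placeholder, not something derived from the original video, and x stands for the first frame as in the question's code.

```matlab
% call regionprops once and pick blobs by measured features
u  = rgb2gray(x);
bw = bwareaopen(im2bw(u, graythresh(u)), 1000, 4);
stats = regionprops(bw, 'Centroid', 'Area', 'Eccentricity');
centroids = vertcat(stats.Centroid);
% hypothetical rule: the two tracked dots are the roundest blobs,
% and the knee is the one with the smaller y (higher in the image)
[~, idx] = sort([stats.Eccentricity]);   % roundest first
dots = centroids(idx(1:2), :);
[~, top] = min(dots(:, 2));              % smaller y = higher on screen
knee_cent  = dots(top, :);
ankle_cent = dots(3 - top, :);
```

The point is that the blobs are identified by properties that survive small frame-to-frame changes, instead of by label numbers that do not.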

3 Comments

I downloaded your tutorial and I'll take a look at it when I get some free time. Image and video processing is definitely something I want to learn how to do. This is my first time doing any kind of image processing, so I know I have a lot to learn.
As I understand my code, I'm only assuming that the labels are true for the first frame. After that, the PointTracker tracks the objects. In a single frame, wouldn't MATLAB consistently label identical objects the same? So I'm not exactly sure why I need to do a feature analysis (or how I would even do that).
No, it wouldn't. Labeling is done on an individual-image basis, top to bottom and then left to right, according to where the first pixel of each connected region is encountered. Even a slight movement by one row or column could drastically change the label number. Label number is not a reliable tracking indicator, except in situations where you have a few large, well-separated blobs, which is not what you have.
I know that the labels will change based on what frame I'm working with, but I'm only doing this for one frame. Every time I run my code, it loads the exact same file and analyzes the exact same frame. So unless the first frame is somehow different every time I run the code, the labeling should be consistent. The labels are only used to identify things in the first frame; PointTracker doesn't rely on the labels, just the initial positions.

