Two programs have been uploaded here. The first, 'keypointsdetectionprogram', gives you the SIFT keypoints and their descriptors. The second, 'imagekeypointsmatchingprogram', lets you check the robustness of the method by changing some image properties (such as intensity, rotation, etc.) and then computing the matching percentage of keypoints between the input image and the property-changed image, based on the keypoint locations. You can select the images and properties from the options presented in the command window.
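To illustrate the matching-percentage idea, here is a minimal Python/NumPy sketch (not the author's MATLAB code): descriptors from the two images are compared by Euclidean distance with Lowe's ratio test, and the fraction of descriptors that find a match is reported. The function name and the 0.8 ratio are assumptions for illustration.

```python
import numpy as np

def match_percentage(desc_a, desc_b, ratio=0.8):
    """Percentage of descriptors in desc_a with a match in desc_b.

    desc_a, desc_b: (N, 128) and (M, 128) arrays of SIFT descriptors.
    A match passes Lowe's ratio test: nearest distance < ratio * second-nearest.
    """
    matched = 0
    for d in desc_a:
        dists = np.linalg.norm(desc_b - d, axis=1)
        if len(dists) < 2:
            continue
        i1, i2 = np.argsort(dists)[:2]
        if dists[i1] < ratio * dists[i2]:
            matched += 1
    return 100.0 * matched / len(desc_a)
```

Matching an image's descriptors against themselves should give 100%, while unrelated descriptor sets will mostly fail the ratio test.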
Naveen Cheggoju (2021). SIFT (Scale invariant Feature Transform) Algorithm (https://www.mathworks.com/matlabcentral/fileexchange/43723-sift-scale-invariant-feature-transform-algorithm), MATLAB Central File Exchange. Retrieved .
Suggestions for validation would be welcome, but after experimenting with it for a week, the code appears to be correct.
I question the offset calculation in the "accurate keypoint localization" section:
offset = [2; 2; Sigma * ScaleFactor^(DiffMinMaxMapIndx) - Sigma * ScaleFactor^(DiffMinMaxMapIndx-2)];
Since a central difference is used for the x and y derivatives, shouldn't it be
offset = [.5; .5; Sigma * Scal.....]; ?
I haven't checked the ScaleDiff expression for the derivative with respect to sigma, so I cannot comment on it. I will do this later, but if someone other than the author has already done this check, it would be nice to know.
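The 0.5 point can be checked numerically. With unit-spaced samples and central differences, the Newton step for the subpixel extremum is -D'/D'', and when the sampled point is the nearest sample to the true extremum this step lies in [-0.5, 0.5], which supports using .5 rather than 2 as the normalization. A 1-D Python sketch (the function name and test parabola are illustrative, not from the code):

```python
import numpy as np

def subpixel_offset(f_m1, f_0, f_p1):
    """Subpixel offset of an extremum near sample 0, via central differences.

    First derivative:   (f[+1] - f[-1]) / 2
    Second derivative:   f[+1] - 2*f[0] + f[-1]
    Newton step:        -f' / f''  (in [-0.5, 0.5] when 0 is the nearest sample)
    """
    d1 = (f_p1 - f_m1) / 2.0
    d2 = f_p1 - 2.0 * f_0 + f_m1
    return -d1 / d2

# Parabola peaking at x = 0.3, sampled at x = -1, 0, 1
g = lambda x: -(x - 0.3) ** 2
offset = subpixel_offset(g(-1.0), g(0.0), g(1.0))
```

For a parabola the step is exact, so `offset` recovers the true peak location 0.3.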
Sir, I want to ask why we take only these keypoints, when there are neighbourhood pixels that could be more accurate for matching than these keypoints.
How can I save a file with a .SIFT extension?
Hi, in the code the store2 and store3 variables are used to store the 2nd and 3rd levels of the pyramid, but those values are never used. Why do we need to compute them?
Hi sir, I need a detailed explanation of where the keypoint performance is improved in your keypoint detection and keypoint matching code.
Please give me a detailed description of how the SIFT algorithm works on an image.
I need to ask how I can create a training matrix from many images of the same dimensions. I am having trouble concatenating the descriptors, since a different number of descriptors is produced for each image.
Hi Mr. Naveen,
Sir, can you please add comments to the code? I really can't understand it.
Thanks for your great work.
I have several problems here. Firstly, Lowe mentioned that each keypoint should be an extremum in comparison with its 26 neighbours, which are selected from the previous, current and next scales, but you have used only the current and next scales. Moreover, your formula for finding the gradient magnitude and orientation is somewhat different from Lowe's paper. Can you enlighten me here?
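For reference, the 26-neighbour test the commenter describes can be sketched as follows in Python/NumPy (an illustration of Lowe's criterion, not the submission's MATLAB code): the candidate must be a strict maximum or minimum over the 3x3x3 cube spanning the previous, current and next DoG scales.

```python
import numpy as np

def is_extremum(dog_prev, dog_cur, dog_next, i, j):
    """True if dog_cur[i, j] is a strict max or min over its 26 neighbours
    in the previous, current and next DoG scales (3x3x3 cube minus the centre)."""
    cube = np.stack([dog_prev[i-1:i+2, j-1:j+2],
                     dog_cur[i-1:i+2, j-1:j+2],
                     dog_next[i-1:i+2, j-1:j+2]])
    centre = dog_cur[i, j]
    neighbours = np.delete(cube.ravel(), 13)  # flat index 13 is the centre sample
    return bool(centre > neighbours.max() or centre < neighbours.min())
```

Dropping one of the three scales, as the commenter suggests the code does, reduces the test to 17 neighbours and can admit points that are not true scale-space extrema.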
Thanks for the great work! I just want to know how to find the final number of keypoints from your keypointsdetectionprogram.m.
Great work Naveen.
Is this code suitable for a signature recognition system? Please advise.
Hi Mr Naveen,
You calculate the DoG images (store1, store2, store3), but you use only store1 (the DoGs in the first octave) to determine the extrema and, as a result, the keypoints.
In the original paper it is mentioned that keypoints in all octaves should be considered.
Nice work. I have one question. In your 'keypointsdetectionprogram' code, kpmag and kpori are calculated in the "Forming key point neighbourhoods" part, but they are not used in creating the descriptors. However, a similar process is carried out in the descriptor creation part. I'm confused about that.
In the SIFT algorithm, a histogram of 36 bins is created in orientation assignment but a histogram of 8 bins is created for each sub region. What is the use of the former one?
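To answer the question in general terms: the 36-bin histogram is computed once per keypoint to find its dominant orientation, which is then subtracted so that the 8-bin sub-region histograms (the descriptor) are rotation invariant. A Python/NumPy sketch of the 36-bin step, assuming gradient magnitude and orientation have already been computed for the patch (function and variable names are illustrative):

```python
import numpy as np

def dominant_orientation(mag, ori, n_bins=36):
    """Dominant orientation (degrees) of a keypoint neighbourhood.

    mag, ori: gradient magnitude and orientation (degrees, in [0, 360))
              over the patch around the keypoint.
    Each pixel votes into one of 36 ten-degree bins, weighted by magnitude;
    the peak bin gives the keypoint's reference orientation.
    """
    bins = (ori // (360 // n_bins)).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist.argmax() * (360 // n_bins)
```

In Lowe's paper the votes are additionally Gaussian-weighted by distance from the keypoint, and secondary peaks above 80% of the maximum spawn extra keypoints; those refinements are omitted here for brevity.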
Thanks for the reply. Actually, I'm kind of new to SIFT. I need your advice on the details described below:
1) You have only used the information from the 1st octave (store1) to find the keypoints. Why didn't you use the information in store2 and store3 to find keypoints?
2) For finding the keypoints, you have only used my, mz, miy and miz. Why didn't you include mx and mix, for example: (i2(i,j)>my && i2(i,j)>mz && i2(i,j)>mx) || (i2(i,j)<miy && i2(i,j)<miz && i2(i,j)<mix)?
Please advise. Thanks for your help.
It's great work.
Could you please tell me where to find the equations you used in the program?
No, I didn't include that in the program. Try the PPT available at the link below; I think it may help you with eliminating edges.
Anyway, did you include the accurate keypoint localization and edge elimination code in this program? Please advise.
Hi Cheggoju. Well, I sort of understand what you mean about not using built-in functions, but you can still speed things up by preallocating memory and by using matrix operations instead of for loops. Even if you want to keep the for loops, the inner loop should always be the one with more iterations.
Anyway, some numerical calculations have optimized solutions that have existed for many years, and at some point, at least professionally, it will be almost impossible to use only your own code. Such is the case with the FFT. Best wishes.
Thank you for your comment @Ricardo.
I prefer writing my own code rather than relying on built-in functions, so I used my own implementation. Thank you for your suggestion.
Nice work, but it could be polished a bit. Most of the for loops used for convolution should be replaced by calls to conv2() or some DFT/FFT/FFTW-based algorithm (convolution becomes a pointwise product in the frequency domain and back). Preallocating memory (c = zeros(m,n), etc.) will also help. The code will become much faster (about 60 times on my laptop). Any relation between your code and the one submitted by vijay anand? They appear to be the same.
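The convolution-theorem point is easy to verify. Here is a Python/NumPy sketch (the same idea carries over to conv2 and fft2 in MATLAB; function names here are illustrative): a loop-based 'full' 2-D convolution, written with preallocation as the commenter suggests, agrees with the FFT-based version to numerical precision.

```python
import numpy as np

def conv2_loop(img, ker):
    """'Full' 2-D convolution with explicit loops (like the original code)."""
    m, n = img.shape
    p, q = ker.shape
    out = np.zeros((m + p - 1, n + q - 1))  # preallocate, as suggested
    for i in range(m):
        for j in range(n):
            out[i:i+p, j:j+q] += img[i, j] * ker
    return out

def conv2_fft(img, ker):
    """Same result via the convolution theorem: pointwise product in frequency."""
    shape = (img.shape[0] + ker.shape[0] - 1, img.shape[1] + ker.shape[1] - 1)
    return np.real(np.fft.ifft2(np.fft.fft2(img, shape) * np.fft.fft2(ker, shape)))
```

The FFT version replaces the double loop with three transforms, which is where the large speedup on bigger images comes from.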