
detectLoop

Detect loop closure using visual features

Since R2024b

Description

[loopViewIDs,similarityScores] = detectLoop(loopDetector,features) compares oriented FAST and rotated BRIEF (ORB) feature descriptors, features, against the features in the prepopulated bag-of-words (BoW) database loopDetector. The function returns loopViewIDs, the indices of the database images that resemble the feature descriptors in features, and similarityScores, a quantitative measure of how similar each detected image is to the query.

Using detectLoop, you can search the database for images that are similar to a query image and, optionally, ignore images that are closely connected to it.


[loopViewIDs,similarityScores] = detectLoop(___,connectedViewIds,relativeThreshold) finds images in the BoW database that are similar to the image represented by the ORB feature descriptors features, while ignoring images that are closely connected, as specified by connectedViewIds. Use relativeThreshold to fine-tune the sensitivity of the search so that only the most relevant, non-connected similar images are returned.

[___] = detectLoop(___,NumResults=Value) sets the maximum number of results to return, in addition to the input arguments from the previous syntaxes. Specify Value as a positive integer. The default value is 20.
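For reference, here is a minimal sketch of the three syntaxes. It assumes a populated loopDetector and query descriptors features; the view IDs and threshold shown are illustrative values, not part of the original syntax descriptions.

% Basic query against the database.
[loopViewIDs,similarityScores] = detectLoop(loopDetector,features);

% Ignore views connected to the query and keep only strong candidates.
connectedViewIds = [2 5 7];
relativeThreshold = 0.75;
loopViewIDs = detectLoop(loopDetector,features,connectedViewIds,relativeThreshold);

% Limit the number of returned candidates.
loopViewIDs = detectLoop(loopDetector,features,NumResults=10);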

Examples


Initialize a loop detector using the dbowLoopDetector object to identify loop closures based on visual features.

loopDetector = dbowLoopDetector;

Set the ID of the view (image) to be queried for the loop closure detection.

viewId = 100;

Read an image that represents the 100th view in a sequence.

I = imread("cameraman.tif");

Detect ORB features in the image.

points = detectORBFeatures(I);

Extract ORB features from the detected points in the image. The extractFeatures function returns the features and their corresponding locations. This example uses only the features for loop closure detection.

[features,~] = extractFeatures(I,points);

Detect loop closures by comparing the current view's features against previously seen views. The detectLoop function returns the IDs of up to 10 views that are most similar to the current view, indicating potential loop closures.

loopViewIds = detectLoop(loopDetector,features,NumResults=10);
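You can also request the second output to obtain the similarity score of each candidate. This call is a brief sketch, not part of the original example.

[loopViewIds,similarityScores] = detectLoop(loopDetector,features,NumResults=10);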

Load a pre-existing binary vocabulary for feature description.

bag = bagOfFeaturesDBoW("bagOfFeatures.bin.gz");

Initialize the loop detector with the loaded vocabulary.

loopDetector = dbowLoopDetector(bag);

Use a single image to simulate adding different views.

I = im2gray(imread("cameraman.tif"));
points = detectORBFeatures(I);
[features,points] = extractFeatures(I,points);

Initialize an image view set to manage and store views.

vSet = imageviewset;

Add the first view with zero-valued placeholder features for initialization.

zeroFeatures = binaryFeatures(zeros(size(points,1),32,"like",uint8(0)));
vSet = addView(vSet,1,"Features",zeroFeatures,"Points",points);
addVisualFeatures(loopDetector,1,zeroFeatures);

Sequentially add three views with actual features to simulate potential loop candidates.

for viewId = 2:4
    vSet = addView(vSet,viewId,"Features",features,"Points",points);
    addVisualFeatures(loopDetector,viewId,features);
end

Add two new connected views to the image sequence. First, add a previous view using a cropped section of the original image to represent a camera view. The cropped image represents the previous frame in the sequence.

prevViewId = 5;
prevView = I(100:200,100:200);

Detect ORB features in the cropped frame with the pyramid level set to 3. Add the features to the image viewset as a new view.

prevPoints = detectORBFeatures(prevView,NumLevels=3);
[prevFeatures,prevPoints] = extractFeatures(prevView,prevPoints);
vSet = addView(vSet,prevViewId,"Features",prevFeatures,"Points",prevPoints);
addVisualFeatures(loopDetector,prevViewId,prevFeatures);

Add a current view, connected to the previous one with another cropped section.

currViewId = 6;
currView = I(50:200,50:200);
currPoints = detectORBFeatures(currView,NumLevels=3);
[currFeatures,currPoints] = extractFeatures(currView,currPoints);
vSet = addView(vSet,currViewId,"Features",currFeatures,"Points",currPoints);
vSet = addConnection(vSet,prevViewId,currViewId,"Matches",[1:10; 1:10]');

Identify views connected to the current key frame.

covisViews = connectedViews(vSet,currViewId);
covisViewsIds = covisViews.ViewId;

Perform loop closure detection by searching the database for views similar to the current view, while excluding the connected views. Use 75% of the minimum similarity score among the connected views as the threshold.

relativeThreshold = 0.75; 
loopViewIds = detectLoop(loopDetector,currFeatures,covisViewsIds,relativeThreshold);
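As an optional follow-up, and assuming you want to verify the candidates further, you can retrieve the candidate views from the view set. This sketch is not part of the original example.

if ~isempty(loopViewIds)
    loopViews = findView(vSet,loopViewIds); % table of the candidate views and their attributes
end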

Input Arguments


loopDetector — BoW loop detector database, specified as a dbowLoopDetector object. The function searches this prepopulated database of images to find similar images.

features — ORB feature descriptors, specified as a binaryFeatures object. The function uses these features to search the database for similar images.

connectedViewIds — Connected view identifiers, specified as an M-element vector, where M is the number of images considered connected at the time of the query. For example, if connectedViewIds = [2 5 7], then the images (views) with indices 2, 5, and 7 in the database are considered connected to the current query image. The function ignores these indices during its search for similar images, focusing instead on finding matches among the non-connected images.

relativeThreshold — Relative threshold for filtering search results, specified as a scalar value between 0 and 1. Use this threshold to exclude less similar images by setting a minimum similarity score based on the similarity scores of the connected images.

A value of 0.75 (75%) is recommended for relativeThreshold. With this setting, the function returns only images with similarity scores greater than 75% of the minimum score among the connected views, ensuring high relevance of the detected loops.

Output Arguments


loopViewIDs — Image indices in the loop detector database, returned as an M-element vector, where M is the number of detected loop closure candidates. Each index points to an image in the database that is similar to the query image, excluding images marked as connected. Use these indices to locate the matching images within the database.

similarityScores — Similarity score between each matched image and the query image, returned as an M-element vector, where M is the number of detected loop closure candidates. Higher scores indicate closer matches.

The function calculates the minimum similarity score among the connected views and multiplies it by relativeThreshold to establish a cutoff score. Only non-connected images with similarity scores above this cutoff are considered significant and included in the results.
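For illustration only, this sketch shows how the cutoff follows from hypothetical connected-view scores; it is not the function's internal code.

% Hypothetical similarity scores of the connected views.
connectedScores = [0.30 0.42 0.55];
relativeThreshold = 0.75;

% Cutoff: non-connected candidates must score above this value to be returned.
cutoff = relativeThreshold*min(connectedScores) % 0.225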

References

[1] Gálvez-López, D., and J. D. Tardós. “Bags of Binary Words for Fast Place Recognition in Image Sequences.” IEEE Transactions on Robotics, vol. 28, no. 5, Oct. 2012, pp. 1188–1197. https://doi.org/10.1109/TRO.2012.2197158.

Version History

Introduced in R2024b