Selecting important features from a very large pool
Hello everyone.
I've been struggling with a pedestrian detection algorithm and I was wondering if MATLAB has a toolbox that could help me somehow.
I extract a very large number of features from images and then classify them as pedestrian or non-pedestrian. What I need is a way to determine which of the features are actually worth extracting, for the sake of speed.
Imagine I have a text file with all the data. The first column holds the class (1 or 2), and the following columns hold the features.
So the file could look like
1,1000,2000,1500,3000 ...
1,1200,1000,1600,3000 ...
2,3000,4000,120,10000 ...
2,4900,7300,3000,100 ...
and so on.
I have 30.000 features (columns) and around 10.000 samples (rows), and I would like to know if MATLAB has any toolbox or function that could return the indices of the features that are actually relevant for the classification.
Training the classifier isn't a problem; the main issue is determining which of these features are worth extracting.
Thanks in advance.
Greg Heath
on 6 May 2013
Divide and conquer. You are never going to find the best combination anyway. So just find one that works well.
Answers (4)
Image Analyst
on 3 May 2013
Principal components analysis is the first thing that comes to my mind - well, after wondering how you can have 30,000 features. Does 30.000 mean thirty thousand, or thirty? I can't see how you could possibly have 30,000 features unless you're treating whole chunks of the image (groups of pixels) as features. But that doesn't make sense - maybe for neural networks, but not for feature analysis. I'm sure you can boil it down to a few dozen at most. Surely you must have some idea which of the 30 thousand features are most meaningful and which are useless.
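For reference, a minimal sketch of the PCA route (assuming X is the samples-by-features matrix from the question, with the class column already stripped off; note that PCA returns linear combinations of features rather than indices of individual original features):

% Minimal PCA sketch; X is assumed to hold one sample per row.
[coeff, score, latent] = pca(X);            % principal component analysis
explained = cumsum(latent) / sum(latent);   % cumulative variance explained
k = find(explained >= 0.95, 1);             % components covering 95% of variance
Xreduced = score(:, 1:k);                   % projected, lower-dimensional data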
Image Analyst
on 3 May 2013
I've never used that many. How did Viola and Jones do it? The paper you cited says they used AdaBoost. Can't you do it the same way? I haven't used AdaBoost, so I can't help you any further.
Ilya
on 6 May 2013
You may find these two posts useful:
I don't really know what Viola and Jones did, but you could simply train AdaBoost (by calling fitensemble from the Statistics Toolbox) and then call the predictorImportance method on the trained ensemble.
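A minimal sketch of that workflow (the file name is a placeholder; the data layout follows the question, with the class in column 1):

% Sketch of the fitensemble / predictorImportance route.
% 'features.txt' is a placeholder for the file described in the question.
data = csvread('features.txt');
Y = data(:, 1);                  % class labels (1 or 2)
X = data(:, 2:end);              % one column per feature

% Boost 2000 decision stumps with AdaBoost (Statistics Toolbox).
ens = fitensemble(X, Y, 'AdaBoostM1', 2000, 'Tree');
imp = predictorImportance(ens); % one importance value per feature

% Indices of the features the ensemble actually used, most important first.
[val, idx] = sort(imp, 'descend');
relevant = idx(val > 0);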
Ilya
on 7 May 2013
Each binary stump selects the single best feature, based on a split criterion such as the Gini index. Some features can be selected many times, and some may never be selected. If you have only one informative feature and the remaining 14,999 features are noise, that one feature may be selected by all 2000 stumps and the rest may not be selected for any stump.
As I wrote earlier, if you grow 2000 trees with fitensemble at the default settings, you will get at most 2000 features with non-zero importance. If you expect at least 2000 useful features, that means you expect each stump to select a feature different from all the other stumps. What might that expectation be based on?
I agree - you should invest some time into understanding what AdaBoost and fitensemble do.
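To see this concretely, a quick check (assuming the ensemble ens and matrix X from the sketch above):

% Count how many distinct features the 2000 stumps actually used.
nUsed = nnz(predictorImportance(ens));   % features with non-zero importance
fprintf('%d of %d features selected by at least one stump\n', nUsed, size(X, 2));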
Anand
on 8 May 2013
If you have the latest release of the Computer Vision System Toolbox, there's a way to train a classifier using the Viola-Jones approach, i.e. Haar-like features with an AdaBoost learning framework. You might want to look at this:
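Presumably this refers to trainCascadeObjectDetector. A hedged sketch of that workflow (all file names here are placeholders; positiveInstances is assumed to be a struct array with imageFilename and objectBoundingBoxes fields):

% Sketch of cascade training in the Computer Vision System Toolbox.
% 'pedestrianLabels.mat' and the folder name are placeholders.
load('pedestrianLabels.mat');            % assumed to provide positiveInstances
trainCascadeObjectDetector('pedestrianDetector.xml', positiveInstances, ...
    'nonPedestrianImages', 'FeatureType', 'Haar', 'NumCascadeStages', 10);

% Run the trained detector on a test image.
detector = vision.CascadeObjectDetector('pedestrianDetector.xml');
bboxes = step(detector, imread('testImage.jpg'));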
fatima qureshi
on 14 Jan 2016
How does AdaBoost decide which features are relevant and which are irrelevant?