Selecting important features from a very large pool

Hello everyone.
I've been struggling with a pedestrian detection algorithm and was wondering if MATLAB has a toolbox that could help me somehow.
I extract a very large number of features from images and then I want to classify them as pedestrian or non-pedestrian. What I need is a way to determine which of the features are actually worth extracting for the sake of speed.
Imagine I have a text file with all the data. The first column holds the class (1 or 2), and the following columns hold the features.
So the file could look like
1,1000,2000,1500,3000 ...
1,1200,1000,1600,3000 ...
2,3000,4000,120,10000 ...
2,4900,7300,3000,100 ...
and so on.
I have 30,000 features (columns) and around 10,000 samples (rows), and I would like to know if MATLAB has any toolbox or function that could return the indices of the features that are actually relevant for the classification.
Training the classifier isn't a problem; the main issue is determining which of these features are worth extracting.
Thanks in advance.
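For reference, a minimal sketch of loading a file in the format described above into MATLAB; the file name 'features.csv' is just illustrative.
% Load the comma-separated file described above (illustrative file name).
M = csvread('features.csv');   % numeric matrix, one sample per row
labels   = M(:, 1);            % first column: class label (1 or 2)
features = M(:, 2:end);        % remaining columns: the extracted features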
  3 Comments
Pedro Silva on 3 May 2013
There are in fact some algorithms made specifically for handling very high-dimensional data; I was hoping MATLAB had a toolbox prepared to work with them.
Greg Heath on 6 May 2013
Divide and conquer. You are never going to find the best combination anyway. So just find one that works well.


Answers (4)

Image Analyst on 3 May 2013
Well, principal components analysis is the first thing that comes to my mind - well, after wondering how you can have 30,000 features. Does 30.000 mean thirty thousand, or thirty? I can't see how you could possibly have 30,000 features unless you're having whole chunks of the image (groups of pixels) be features. But that doesn't make sense. Maybe for neural networks, but not for feature analysis. I'm sure you can boil it down to a few dozen at most. Surely you must have some idea which of the 30 thousand features are most meaningful and which are useless.
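A rough sketch of that idea with the Statistics Toolbox pca function is below; note that PCA produces linear combinations of the original features rather than a subset of them, so it reduces dimensionality but does not by itself say which original features you can stop extracting. Variable names are illustrative.
% Rough sketch of PCA-based reduction (Statistics Toolbox).
% X is the samples-by-features matrix (e.g. 10000-by-30000).
[coeff, score, ~, ~, explained] = pca(X);
nKeep    = find(cumsum(explained) >= 95, 1);  % components explaining 95% of the variance
Xreduced = score(:, 1:nKeep);                 % reduced representation for the classifier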
  3 Comments
Image Analyst on 3 May 2013
I've never used that many. How did Viola and Jones do it? The paper you cited says they used AdaBoost. Can't you do it the same way? I haven't used AdaBoost, so I can't help you any further.
Pedro Silva on 3 May 2013 (edited 3 May 2013)
Yes, they used AdaBoost to select the features, but I've gone around in circles for too long and can't find any way to do it, or anywhere that explains it. So I'm starting to look at other solutions for selecting relevant features out of a large pool, and only then using AdaBoost to train and test.



Ilya on 6 May 2013
You may find these two posts useful:
I don't really know what Viola and Jones did, but you could simply train AdaBoost (by calling fitensemble from the Statistics Toolbox) and then execute the predictorImportance method for the trained ensemble.
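A minimal sketch of that suggestion, assuming the labels and feature matrix are already in memory (variable names are illustrative):
% Boost decision stumps and rank the features (Statistics Toolbox).
% Y is the vector of class labels (1 or 2), X the samples-by-features matrix.
ens = fitensemble(X, Y, 'AdaBoostM1', 2000, 'Tree');  % default trees for boosting are stumps
imp = predictorImportance(ens);                       % one importance value per feature (column)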
  8 Comments
Ilya on 7 May 2013
Each binary stump selects the single best feature, based on a split criterion such as the Gini index. Some features can be selected many times, and some may never be selected. If you have only one informative feature and the remaining 14,999 features are noise, that one feature may be selected for all 2000 stumps and the rest may not be selected for any stump.
As I wrote earlier, if you grow 2000 trees by fitensemble at the default settings, you will get at most 2000 features with non-zero importance. If you expect at least 2000 useful features, this means you expect that each stump selects a feature different from all other stumps. What might this expectation be based on?
I agree - you should invest some time into understanding what AdaBoost and fitensemble do.
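Continuing the fitensemble sketch above, the feature indices the original question asks for would simply be the predictors with non-zero importance (names illustrative):
% Indices of features actually used by at least one stump.
selectedIdx  = find(imp > 0);
[~, ranking] = sort(imp, 'descend');   % features ranked from most to least important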
Pedro Silva on 8 May 2013
I went and watched some lectures on the matter, and I understand the topic a little better now.
My AdaBoost implementation (which is in OpenCV/C++) already chooses the best features out of the large pool I extract. It takes in the 15K features and creates 2000 decision stumps based on the best combination of features it finds. And, as you said, the same feature can participate in the decision any number of times.
What I really wanted was to achieve the same classifier but with fewer features in order to speed up the algorithm, so I tried to find ways to eliminate redundant or irrelevant features using MATLAB tools, but with no success.
I figured out what my problem is. Right now I am creating only one boosted classifier to make all the decisions. So I need to feed this classifier 15K features every time I want it to make a decision. This is far from the implementation I was looking for.
What I need is a large number of boosted classifiers (a cascade) that would increase in complexity. This way I could stop the evaluation when one of those boosted classifiers gives a negative response.
But this kind of implementation is way over my programming skills.
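For illustration only, a very rough sketch of such a cascade evaluation in MATLAB, assuming a cell array of already-trained ensembles ordered by increasing complexity and a hypothetical per-stage threshold (all names illustrative):
% Rough sketch of cascade-style evaluation: cheap stages first, stop at the
% first negative response. 'stages' and 'thresholds' are hypothetical,
% already-trained classifiers ordered by increasing complexity.
function isPedestrian = evaluateCascade(stages, thresholds, x)
isPedestrian = false;
for k = 1:numel(stages)
    [~, score] = predict(stages{k}, x);   % per-class scores from stage k
    if score(2) < thresholds(k)           % column 2 assumed to be the pedestrian class
        return;                           % rejected by this stage, stop early
    end
end
isPedestrian = true;                      % passed every stage
end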



Anand on 8 May 2013
If you have the latest release of the Computer Vision System Toolbox, there's a way to train a classifier using the Viola-Jones approach, i.e. Haar-based features with an AdaBoost learning framework. You might want to look at this:
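For reference, an illustrative call to that trainer (file names, bounding-box data, and parameter values below are assumptions, not from this thread):
% Illustrative use of trainCascadeObjectDetector (Computer Vision System Toolbox).
% positiveInstances: struct array with fields imageFilename and objectBoundingBoxes;
% 'negativeImages' is a folder of images containing no pedestrians.
trainCascadeObjectDetector('pedestrianDetector.xml', positiveInstances, 'negativeImages', ...
    'FeatureType', 'HOG', 'NumCascadeStages', 10, 'FalseAlarmRate', 0.2);
detector = vision.CascadeObjectDetector('pedestrianDetector.xml');
bboxes   = step(detector, imread('testImage.jpg'));   % detect pedestrians in a test image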
  2 Comments
Pedro Silva on 8 May 2013
A cascade classifier is exactly what I need. The problem is that my features are custom, and that implementation only allows three types of features.



fatima qureshi on 14 Jan 2016
How does AdaBoost decide which features are relevant and which are irrelevant?
