How is predictor importance for classification trees calculated?
I am using MATLAB's predictorImportance function to evaluate the usefulness of features I am extracting from 360° images.
I don't fully understand how predictor importance estimates are calculated and was hoping for a mathematical explanation for the algorithm used.
I have read the MATLAB documentation on this, however, I am unsure about a few things.
Firstly, what is risk? I have assumed it to be the impurity reduction if using the Gini index as the splitting criterion.
Secondly, what does "this sum is taken over best splits found at each branch node" mean when surrogate splits aren't used?
Finally, I don't understand why the estimates change when you reorder the columns in the feature matrix.
Thank you in advance to anyone able to shed light on this for me.
Answers (1)
Gaurav Garg
on 27 Jan 2021
Hi Ryan,
Yes, risk here means the impurity reduction when the Gini index is used as the splitting criterion. You can also specify 'twoing' or 'deviance' as the split criterion by following the doc here.
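If it helps, here is a minimal sketch of how the split criterion is set when growing the tree and how predictorImportance is then called. X (an n-by-p feature matrix) and Y (an n-by-1 label vector) are placeholder names, not from your post:

% Placeholder data: X is the feature matrix, Y the class labels.
tree = fitctree(X, Y, 'SplitCriterion', 'gdi');   % 'gdi' = Gini diversity index (default)
% Alternatives: 'twoing' or 'deviance', e.g.
% tree = fitctree(X, Y, 'SplitCriterion', 'twoing');

imp = predictorImportance(tree);   % one importance estimate per column of X

% Visualize which predictors the tree relied on most
bar(imp);
xlabel('Predictor index');
ylabel('Importance estimate');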
To understand why the estimates change when you reorder the columns, you can go through the doc here to see the algorithm behind how nodes are selected and how each branch node is split.
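As a quick check (again using the placeholder names X and Y), you can compare the estimates before and after permuting the feature columns; if two predictors give equally good splits at some node, the choice between them can be resolved differently for a different column order, which would shift the estimates:

% Quick check (placeholder data): compare importance estimates
% before and after permuting the feature columns.
rng(1);                          % for reproducibility
perm = randperm(size(X, 2));     % random column order

tree1 = fitctree(X, Y);
tree2 = fitctree(X(:, perm), Y);

imp1 = predictorImportance(tree1);
imp2 = predictorImportance(tree2);

imp2(perm) = imp2;   % map the permuted estimates back to the original column order

disp([imp1; imp2]);  % rows differ only where the trees chose different splits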