Reducing dimensionality of features with PCA

Sepp on 4 Jun 2015
Commented: Clinton Kayson on 27 May 2021
I'm totally confused regarding PCA. I have a 4D image of size 90 x 60 x 12 x 350. That means that each voxel is a vector of size 350 (time series).
Now I divide the 3D volume (90 x 60 x 12) into cubes. So let's say a cube contains n voxels; then I have n vectors of size 350. I want to reduce these n vectors to a single vector and then calculate the correlations between all vectors of all cubes.
So for a cube I can construct the matrix M where I just put each voxel after each other, i.e. M = [v1 v2 v3 ... vn] and each v is of size 350.
Now I can apply PCA in MATLAB by using [coeff, score, latent, ~, explained] = pca(M); and taking the first component. And now my confusion begins.
  1. Should I transpose the matrix M, i.e. PCA(M')?
  2. Should I take the first column of coeff or of score?
  3. This third question is a bit unrelated. Let's assume we have a matrix A = rand(30,100) where the rows are the data points and the columns are the features. Now I want to reduce the dimensionality of the feature vectors while keeping all data points. How can I do this with PCA? When I do [coeff, score, latent, ~, explained] = pca(A); then coeff is of size 100 x 29 and score is of size 30 x 29. I'm totally confused.
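
As a sketch of the shapes involved in question 3 (rows are observations, columns are features; output sizes assume MATLAB's Statistics and Machine Learning Toolbox pca):

```matlab
% Rows are data points, columns are features.
A = rand(30, 100);

[coeff, score, latent, ~, explained] = pca(A);

% With 30 observations, the mean-centered data have rank at most 29,
% so pca returns at most 29 components:
size(coeff)   % 100-by-29: each column is a principal direction in feature space
size(score)   % 30-by-29:  each row is a data point in the new coordinates

% To keep e.g. 10 features per data point, take the first 10 columns of score:
A_reduced = score(:, 1:10);   % 30-by-10
```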

Answers (3)

Matlaber on 19 Feb 2019
Is there any setting of the input arguments of
coeff = pca(X)
coeff = pca(X,Name,Value)
[coeff,score,latent] = pca(___)
[coeff,score,latent,tsquared] = pca(___)
[coeff,score,latent,tsquared,explained,mu] = pca(___)
for reducing a matrix of size 400 x 40 to 400 x 20?
Thanks
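
One hedged sketch of that reduction, either via the 'NumComponents' name-value argument or by keeping the first 20 columns of score:

```matlab
X = rand(400, 40);

% Option 1: ask pca for 20 components directly.
[coeff, score] = pca(X, 'NumComponents', 20);
Xred = score;             % 400-by-20

% Option 2: compute all components and keep the first 20 columns of score.
[coeff2, score2] = pca(X);
Xred2 = score2(:, 1:20);  % 400-by-20, the same result
```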

Alfonso Nieto-Castanon on 5 Jun 2015
If you use:
[coeff,score] = pca(M);
Comp_PCA1 = score(:,1);
where M is a (350 by n) matrix of voxel timeseries, and you keep the first column of the resulting matrix score, that will be the (350 by 1) timeseries/vector of component scores most representative of the timeseries variance within your cube.
Note that pca(X) first subtracts the mean effect mean(X,1) from X and then performs SVD on the residuals to decompose the resulting covariance into its principal components. You do not want to use pca(M') because then you would be disregarding the average timeseries across all your voxels within each cube (which often contains useful information). Using pca(M) will instead disregard the average signal across all your timepoints for each voxel, which is fine if you are planning to use this for correlation analyses (since correlations are invariant to the average value of the timeseries).
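
A minimal sketch of that recipe on synthetic data (sizes follow the cube example; variable names are illustrative):

```matlab
M = randn(350, 18);            % timepoints-by-voxels for one cube

[coeff, score] = pca(M);       % pca centers each column of M internally
Comp_PCA1 = score(:, 1);       % 350-by-1 representative timeseries

% score is the centered data projected onto the principal directions:
Mc = M - mean(M, 1);
max(abs(Mc * coeff(:, 1) - Comp_PCA1))   % numerically zero

% Correlations are invariant to the per-timepoint offsets pca removes:
v = randn(350, 1);
corr(v, Comp_PCA1);            % unchanged if a constant is added to v
```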
  3 Comments
Alfonso Nieto-Castanon on 5 Jun 2015
Regarding normalization of your features, that depends on the classifier you are planning to use. Many classifiers (e.g. random forests, SVMs) will be invariant to this form of scaling, while others (e.g. logistic regression, Gaussian mixture models) will not.
Regarding your second question, assuming that your feature vectors are mean-centered, the two methods (keeping the first 10 columns of score, or multiplying A by the first 10 columns of coeff) are exactly equivalent.
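
That equivalence can be checked directly; a sketch assuming the centering is done explicitly:

```matlab
A = rand(30, 100);
[coeff, score] = pca(A);

method1 = score(:, 1:10);                      % first 10 columns of score
method2 = (A - mean(A, 1)) * coeff(:, 1:10);   % project the centered data yourself

max(abs(method1(:) - method2(:)))              % numerically zero
```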
Last, if you are planning to use all of this in a machine learning context, please be aware that you need to define your features consistently across your training and validation datasets. That typically means that you do not want to apply PCA to the validation dataset, but rather store the coeff matrices computed from the training set and use those to project the validation data. This is particularly important with PCA since the resulting coefficients/scores are only defined up to sign/reflection (e.g. you could arbitrarily get -coeff and -score instead of coeff and score as the coefficients/scores resulting from PCA).
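
A sketch of that train/validation discipline (Xtrain and Xval are hypothetical datasets with the same features):

```matlab
Xtrain = rand(200, 40);
Xval   = rand(50, 40);

% Fit PCA on the training data only; mu is the training-set mean.
[coeff, scoreTrain, ~, ~, ~, mu] = pca(Xtrain, 'NumComponents', 10);

% Project the validation data with the stored training-set coeff and mu;
% never call pca on the validation set itself.
scoreVal = (Xval - mu) * coeff;   % 50-by-10
```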
Sepp on 6 Jun 2015
Thank you for the answer.
I have now also tried taking the first column of "coeff" instead of "score", and the result is much better (66% compared to 54% in a 4-class classification).
But I now have a problem with the sizes of the vectors. Let me explain.
What I'm currently doing is to take only cubes which lie fully within the brain, so my "coeff" vectors are all the same size, but this way I'm losing information from the border of the brain.
The problem is the following. Let's say a cube has 18 voxels; then I get a matrix M (time x voxels) of size 350 x 18. When I do PCA and extract the first column of coeff, I get a vector of size 18.
Now, let's say we have a border cube with only 4 voxels (all other voxels of the cube lie outside the brain). Then my matrix M is of size 350 x 4 and the first column of coeff is of size 4.
To calculate the correlations I of course need vectors of the same size.
How would you solve this problem?
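
Note that the first column of score (rather than coeff) already side-steps this size mismatch, since it has one entry per timepoint rather than per voxel; a sketch with the two cube sizes above:

```matlab
Mbig   = randn(350, 18);   % interior cube, 18 voxels
Msmall = randn(350, 4);    % border cube, only 4 voxels inside the brain

[~, scoreBig]   = pca(Mbig);
[~, scoreSmall] = pca(Msmall);

v1 = scoreBig(:, 1);       % 350-by-1
v2 = scoreSmall(:, 1);     % 350-by-1: same length regardless of voxel count

corr(v1, v2);              % correlations between cubes are now well-defined
```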



Bhuvana P on 25 Jan 2018
I need MATLAB code for converting a 2D image into a 1D vector.
