# corr

Linear or rank correlation

## Syntax

```
rho = corr(X)
rho = corr(X,Y)
[rho,pval] = corr(X,Y)
[rho,pval] = corr(___,Name,Value)
```

## Description

`rho = corr(X)` returns a matrix of the pairwise linear correlation coefficients between each pair of columns in the input matrix `X`.


`rho = corr(X,Y)` returns a matrix of the pairwise correlation coefficients between each pair of columns in the input matrices `X` and `Y`.


`[rho,pval] = corr(X,Y)` also returns `pval`, a matrix of p-values for testing the hypothesis of no correlation against the alternative hypothesis of a nonzero correlation.


`[rho,pval] = corr(___,Name,Value)` specifies options using one or more name-value pair arguments in addition to the input arguments in the previous syntaxes. For example, `'Type','Kendall'` specifies computing Kendall's tau correlation coefficient.

## Examples


Find the correlation between two matrices and compare it to the correlation between two column vectors.

Generate sample data.

```
rng('default')
X = randn(30,4);
Y = randn(30,4);
```

Introduce correlation between column two of the matrix `X` and column four of the matrix `Y`.

`Y(:,4) = Y(:,4)+X(:,2);`

Calculate the correlation between columns of `X` and `Y`.

`[rho,pval] = corr(X,Y)`
```
rho = 4×4

   -0.1686   -0.0363    0.2278    0.3245
    0.3022    0.0332   -0.0866    0.7653
   -0.3632   -0.0987   -0.0200   -0.3693
   -0.1365   -0.1804    0.0853    0.0279
```
```
pval = 4×4

    0.3731    0.8489    0.2260    0.0802
    0.1045    0.8619    0.6491    0.0000
    0.0485    0.6039    0.9166    0.0446
    0.4721    0.3400    0.6539    0.8837
```

As expected, the correlation coefficient between column two of `X` and column four of `Y`, `rho(2,4)`, is the highest, and it represents a high positive correlation between the two columns. The corresponding p-value, `pval(2,4)`, is zero to the four digits shown. Because the p-value is less than the significance level of `0.05`, it indicates rejection of the hypothesis that no correlation exists between the two columns.

Calculate the correlation between `X` and `Y` using `corrcoef`.

`[r,p] = corrcoef(X,Y)`
```
r = 2×2

    1.0000   -0.0329
   -0.0329    1.0000
```
```
p = 2×2

    1.0000    0.7213
    0.7213    1.0000
```

The MATLAB® function `corrcoef`, unlike the `corr` function, converts the input matrices `X` and `Y` into the column vectors `X(:)` and `Y(:)` before computing the correlation between them. Therefore, the correlation introduced between column two of matrix `X` and column four of matrix `Y` is lost, because those two columns occupy different segments of the converted column vectors.

The value of the off-diagonal elements of `r`, which represents the correlation coefficient between `X` and `Y`, is low. This value indicates little to no correlation between `X` and `Y`. Likewise, the value of the off-diagonal elements of `p`, which represents the p-value, is much higher than the significance level of `0.05`. This value indicates that not enough evidence exists to reject the hypothesis of no correlation between `X` and `Y`.
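The same contrast can be sketched outside MATLAB. The following Python example (an illustration using NumPy, not the `corr` implementation) computes the column-pairwise correlations and the flattened-vector correlation for data generated like the example above; the variable names mirror the MATLAB code but the random draws differ.

```python
# Sketch (not MathWorks code): pairwise column correlation, as corr(X,Y)
# computes it, versus flattened-vector correlation, as corrcoef(X,Y) does.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
Y = rng.standard_normal((30, 4))
Y[:, 3] = Y[:, 3] + X[:, 1]      # correlate column 2 of X with column 4 of Y

# Pairwise column correlations: rho[a, b] correlates X[:, a] with Y[:, b].
rho = np.array([[np.corrcoef(X[:, a], Y[:, b])[0, 1]
                 for b in range(Y.shape[1])]
                for a in range(X.shape[1])])

# Flattened correlation: stack the columns of each matrix into one long
# vector (column-major, like MATLAB's X(:)) and correlate the two vectors.
r_flat = np.corrcoef(X.ravel(order='F'), Y.ravel(order='F'))[0, 1]

print(rho[1, 3])   # strong positive: the injected correlation survives
print(r_flat)      # near zero: the column alignment is lost
```

The injected correlation shows up only in the pairwise matrix, because flattening places the two correlated columns in non-matching segments of the long vectors.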

Test alternative hypotheses for positive, negative, and nonzero correlation between the columns of two matrices. Compare values of the correlation coefficient and p-value in each case.

Generate sample data.

```
rng('default')
X = randn(50,4);
Y = randn(50,4);
```

Introduce positive correlation between column one of the matrix `X` and column four of the matrix `Y`.

`Y(:,4) = Y(:,4)+0.7*X(:,1);`

Introduce negative correlation between column two of `X` and column two of `Y`.

`Y(:,2) = Y(:,2)-2*X(:,2);`

Test the alternative hypothesis that the correlation is greater than zero.

`[rho,pval] = corr(X,Y,'Tail','right')`
```
rho = 4×4

    0.0627   -0.1438   -0.0035    0.7060
   -0.1197   -0.8600   -0.0440    0.1984
   -0.1119    0.2210   -0.3433    0.1070
   -0.3526   -0.2224    0.1023    0.0374
```
```
pval = 4×4

    0.3327    0.8405    0.5097    0.0000
    0.7962    1.0000    0.6192    0.0836
    0.7803    0.0615    0.9927    0.2298
    0.9940    0.9397    0.2398    0.3982
```

As expected, the correlation coefficient between column one of `X` and column four of `Y`, `rho(1,4)`, has the highest positive value, representing a high positive correlation between the two columns. The corresponding p-value, `pval(1,4)`, is zero to the four digits shown, which is lower than the significance level of `0.05`. These results indicate rejection of the null hypothesis that no correlation exists between the two columns and lead to the conclusion that the correlation is greater than zero.

Test the alternative hypothesis that the correlation is less than zero.

`[rho,pval] = corr(X,Y,'Tail','left')`
```
rho = 4×4

    0.0627   -0.1438   -0.0035    0.7060
   -0.1197   -0.8600   -0.0440    0.1984
   -0.1119    0.2210   -0.3433    0.1070
   -0.3526   -0.2224    0.1023    0.0374
```
```
pval = 4×4

    0.6673    0.1595    0.4903    1.0000
    0.2038    0.0000    0.3808    0.9164
    0.2197    0.9385    0.0073    0.7702
    0.0060    0.0603    0.7602    0.6018
```

As expected, the correlation coefficient between column two of `X` and column two of `Y`, `rho(2,2)`, is the negative value with the largest magnitude (`-0.8600`), representing a high negative correlation between the two columns. The corresponding p-value, `pval(2,2)`, is zero to the four digits shown, which is lower than the significance level of `0.05`. Again, these results indicate rejection of the null hypothesis and lead to the conclusion that the correlation is less than zero.

Test the alternative hypothesis that the correlation is not zero.

`[rho,pval] = corr(X,Y)`
```
rho = 4×4

    0.0627   -0.1438   -0.0035    0.7060
   -0.1197   -0.8600   -0.0440    0.1984
   -0.1119    0.2210   -0.3433    0.1070
   -0.3526   -0.2224    0.1023    0.0374
```
```
pval = 4×4

    0.6654    0.3190    0.9807    0.0000
    0.4075    0.0000    0.7615    0.1673
    0.4393    0.1231    0.0147    0.4595
    0.0120    0.1206    0.4797    0.7964
```

The p-values, `pval(1,4)` and `pval(2,2)`, are both zero to the four digits shown. Because the p-values are lower than the significance level of `0.05`, the correlation coefficients `rho(1,4)` and `rho(2,2)` are significantly different from zero. Therefore, the null hypothesis is rejected; the correlation is not zero.
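For a single column pair, the three `'Tail'` choices have a direct analogue in SciPy's `alternative` argument (available in SciPy 1.9 and later). This Python sketch is an illustration of the same three tests, not MATLAB's implementation:

```python
# Sketch, assuming a recent SciPy (1.9+): one-sided and two-sided tests of
# the no-correlation hypothesis for a single pair of variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_normal(50)
y = rng.standard_normal(50) + 0.7 * x     # positively correlated pair

r, p_right = stats.pearsonr(x, y, alternative='greater')    # like 'Tail','right'
_, p_left  = stats.pearsonr(x, y, alternative='less')       # like 'Tail','left'
_, p_both  = stats.pearsonr(x, y, alternative='two-sided')  # like 'Tail','both'

# For a positive correlation, the right-tailed p-value is small and the
# left-tailed p-value is near 1.
```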

## Input Arguments


### X — Input matrix

Input matrix, specified as an n-by-k matrix. The rows of `X` correspond to observations, and the columns correspond to variables.

Example: `X = randn(10,5)`

Data Types: `single` | `double`

### Y — Input matrix

Input matrix, specified as an n-by-k2 matrix when `X` is specified as an n-by-k1 matrix. The rows of `Y` correspond to observations, and the columns correspond to variables.

Example: `Y = randn(20,7)`

Data Types: `single` | `double`

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.

Example: `corr(X,Y,'Type','Kendall','Rows','complete')` returns Kendall's tau correlation coefficient using only the rows that contain no missing values.

Type of correlation, specified as the comma-separated pair consisting of `'Type'` and one of these values.

| Value | Description |
| --- | --- |
| `'Pearson'` | Pearson's Linear Correlation Coefficient (default) |
| `'Kendall'` | Kendall's Tau Coefficient |
| `'Spearman'` | Spearman's Rho |

`corr` computes the p-values for Pearson's correlation using a Student's t distribution for a transformation of the correlation. This approximation is exact when `X` and `Y` come from a normal distribution. `corr` computes the p-values for Kendall's tau and Spearman's rho using either the exact permutation distributions (for small sample sizes) or large-sample approximations.

Example: `'Type','Spearman'`
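The three correlation types have single-pair analogues in SciPy. This Python sketch (an illustration, not MATLAB's implementation) computes each coefficient and its two-sided p-value for one pair of variables:

```python
# Sketch, assuming SciPy: the three 'Type' values correspond to
# scipy.stats.pearsonr, kendalltau, and spearmanr for a single column pair.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.standard_normal(40)
y = 0.7 * x + rng.standard_normal(40)   # linearly related pair

r_pearson, p1 = stats.pearsonr(x, y)    # like 'Type','Pearson' (the default)
tau,       p2 = stats.kendalltau(x, y)  # like 'Type','Kendall'
rho_s,     p3 = stats.spearmanr(x, y)   # like 'Type','Spearman'

# All three coefficients are positive here, and all three p-values are small.
```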

Rows to use in computation, specified as the comma-separated pair consisting of `'Rows'` and one of these values.

| Value | Description |
| --- | --- |
| `'all'` | Use all rows of the input regardless of missing values (`NaN`s). This value is the default. |
| `'complete'` | Use only rows of the input with no missing values. |
| `'pairwise'` | Compute `rho(i,j)` using rows with no missing values in column `i` or `j`. |

The `'complete'` value, unlike the `'pairwise'` value, always produces a positive definite or positive semidefinite `rho`. Also, the `'complete'` value generally uses fewer observations to estimate `rho` when rows of the input (`X` or `Y`) contain missing values.

Example: `'Rows','pairwise'`
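As a rough Python analogue (an illustration assuming pandas, not MATLAB's implementation): `DataFrame.corr` ignores `NaN`s pair by pair, which resembles `'pairwise'`, while dropping every incomplete row before correlating resembles `'complete'`.

```python
# Sketch: pairwise-complete versus listwise-complete handling of NaNs.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame(rng.standard_normal((20, 3)), columns=list('abc'))
df.loc[0, 'a'] = np.nan            # one missing value, in column 'a' only

rho_pairwise = df.corr()           # each entry uses the rows valid for that pair
rho_complete = df.dropna().corr()  # every entry uses the same 19 complete rows

# Pairs involving 'a' drop row 0 either way, so those entries agree.
# The (b, c) entry differs: 'pairwise' still uses row 0 for that pair,
# 'complete' discards it.
```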

Alternative hypothesis, specified as the comma-separated pair consisting of `'Tail'` and one of the values in the table. `'Tail'` specifies the alternative hypothesis against which to compute p-values for testing the hypothesis of no correlation.

| Value | Description |
| --- | --- |
| `'both'` | Test the alternative hypothesis that the correlation is not `0`. This value is the default. |
| `'right'` | Test the alternative hypothesis that the correlation is greater than `0`. |
| `'left'` | Test the alternative hypothesis that the correlation is less than `0`. |

`corr` computes the p-values for the two-tailed test by doubling the more significant of the two one-tailed p-values.

Example: `'Tail','left'`
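The doubling rule can be checked numerically in Python (a sketch assuming SciPy 1.9+, whose `alternative` argument mirrors `'Tail'`):

```python
# Sketch: the two-sided p-value equals twice the smaller one-sided p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.standard_normal(30)
y = rng.standard_normal(30)

_, p_right = stats.pearsonr(x, y, alternative='greater')
_, p_left  = stats.pearsonr(x, y, alternative='less')
_, p_both  = stats.pearsonr(x, y, alternative='two-sided')

# The null distribution of the coefficient is symmetric about 0, so
# doubling the more significant one-sided p-value gives the two-sided one.
assert np.isclose(p_both, 2 * min(p_right, p_left))
```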

Observation weights, specified as `Weights` and an n-by-1 vector of nonnegative values, where n is the number of observations. For more information, see Algorithms.

Example: `Weights=[300 457 200]`

Data Types: `single` | `double`

## Output Arguments


### rho — Pairwise linear correlation coefficient

Pairwise linear correlation coefficient, returned as a matrix.

• If you input only a matrix `X`, `rho` is a symmetric k-by-k matrix, where k is the number of columns in `X`. The entry `rho(a,b)` is the pairwise linear correlation coefficient between column a and column b in `X`.

• If you input matrices `X` and `Y`, `rho` is a k1-by-k2 matrix, where k1 and k2 are the number of columns in `X` and `Y`, respectively. The entry `rho(a,b)` is the pairwise linear correlation coefficient between column a in `X` and column b in `Y`.

### pval — p-values

p-values, returned as a matrix. Each element of `pval` is the p-value for the corresponding element of `rho`.

If `pval(a,b)` is small (less than `0.05`), then the correlation `rho(a,b)` is significantly different from zero.

## More About

### Pearson's Linear Correlation Coefficient

Pearson's linear correlation coefficient is the most commonly used linear correlation coefficient. For column $X_a$ in matrix $X$ and column $Y_b$ in matrix $Y$, having means $\overline{X}_a = \sum_{i=1}^{n} X_{a,i}/n$ and $\overline{Y}_b = \sum_{j=1}^{n} Y_{b,j}/n$, Pearson's linear correlation coefficient $rho(a,b)$ is defined as:

$$rho(a,b)=\frac{\sum_{i=1}^{n}\left(X_{a,i}-\overline{X}_a\right)\left(Y_{b,i}-\overline{Y}_b\right)}{\left[\sum_{i=1}^{n}\left(X_{a,i}-\overline{X}_a\right)^{2}\,\sum_{j=1}^{n}\left(Y_{b,j}-\overline{Y}_b\right)^{2}\right]^{1/2}},$$

where $n$ is the length of each column.

Values of the correlation coefficient can range from `–1` to `+1`. A value of `–1` indicates perfect negative correlation, while a value of `+1` indicates perfect positive correlation. A value of `0` indicates no correlation between the columns.
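The formula above translates directly into code. This Python sketch (an illustration, not MATLAB's implementation) computes it term by term and checks the result against NumPy's built-in correlation:

```python
# Sketch: Pearson's coefficient from its definition, checked against NumPy.
import numpy as np

def pearson(x, y):
    """Sum of products of deviations, normalized by the root of the
    product of the two sums of squared deviations."""
    dx = x - x.mean()
    dy = y - y.mean()
    return (dx @ dy) / np.sqrt((dx @ dx) * (dy @ dy))

rng = np.random.default_rng(5)
x = rng.standard_normal(100)
y = rng.standard_normal(100)

assert np.isclose(pearson(x, y), np.corrcoef(x, y)[0, 1])
```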

### Kendall's Tau Coefficient

Kendall's tau is based on counting the number of (i,j) pairs, for i<j, that are concordant—that is, for which ${X}_{a,i}-{X}_{a,j}$ and ${Y}_{b,i}-{Y}_{b,j}$ have the same sign. The equation for Kendall's tau includes an adjustment for ties in the normalizing constant and is often referred to as tau-b.

For column Xa in matrix X and column Yb in matrix Y, Kendall's tau coefficient is defined as:

$$\tau = \frac{2K}{n(n-1)},$$

where $K=\sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \xi^{*}\!\left(X_{a,i},X_{a,j},Y_{b,i},Y_{b,j}\right)$, and

$$\xi^{*}\!\left(X_{a,i},X_{a,j},Y_{b,i},Y_{b,j}\right)=\begin{cases}1 & \text{if } \left(X_{a,i}-X_{a,j}\right)\left(Y_{b,i}-Y_{b,j}\right)>0\\ 0 & \text{if } \left(X_{a,i}-X_{a,j}\right)\left(Y_{b,i}-Y_{b,j}\right)=0\\ -1 & \text{if } \left(X_{a,i}-X_{a,j}\right)\left(Y_{b,i}-Y_{b,j}\right)<0\end{cases}\,.$$

Values of the correlation coefficient can range from `–1` to `+1`. A value of `–1` indicates that one column ranking is the reverse of the other, while a value of `+1` indicates that the two rankings are the same. A value of `0` indicates no relationship between the columns.
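The pairwise-count definition above can be implemented directly. This Python sketch omits the tie correction, so it matches SciPy's `kendalltau` (tau-b) only when all values are distinct, as with continuous random draws:

```python
# Sketch: Kendall's tau from concordant/discordant pair counts.
import numpy as np
from scipy import stats

def kendall_tau(x, y):
    n = len(x)
    # K sums +1 for each concordant pair and -1 for each discordant pair.
    K = sum(np.sign((x[i] - x[j]) * (y[i] - y[j]))
            for i in range(n - 1) for j in range(i + 1, n))
    return 2 * K / (n * (n - 1))

rng = np.random.default_rng(6)
x = rng.standard_normal(25)   # continuous draws: ties essentially impossible
y = rng.standard_normal(25)

assert np.isclose(kendall_tau(x, y), stats.kendalltau(x, y)[0])
```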

### Spearman's Rho

Spearman's rho is equivalent to Pearson's Linear Correlation Coefficient applied to the rankings of the columns Xa and Yb.

If all the ranks in each column are distinct, the equation simplifies to:

$$rho(a,b)=1-\frac{6\sum d^{2}}{n\left(n^{2}-1\right)},$$

where d is the difference between the ranks of the two columns, and n is the length of each column.
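The simplified rank-difference formula can be checked against SciPy's `spearmanr`. This Python sketch assumes distinct values, so all ranks are distinct and the simplification applies:

```python
# Sketch: Spearman's rho from the rank-difference formula.
import numpy as np
from scipy import stats

def spearman(x, y):
    d = stats.rankdata(x) - stats.rankdata(y)   # per-observation rank differences
    n = len(x)
    return 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))

rng = np.random.default_rng(7)
x = rng.standard_normal(50)   # continuous draws: ranks are distinct
y = rng.standard_normal(50)

assert np.isclose(spearman(x, y), stats.spearmanr(x, y)[0])
```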

## Tips

The difference between `corr(X,Y)` and the MATLAB® function `corrcoef(X,Y)` is that `corrcoef(X,Y)` returns a matrix of correlation coefficients for two column vectors `X` and `Y`, whereas `corr(X,Y)` computes the correlation between each pair of columns. If `X` and `Y` are not column vectors, `corrcoef(X,Y)` converts them to column vectors before computing the correlation.

## Algorithms

When you specify the `Weights` name-value argument, `corr` calculates the Pearson correlation by weighting the variance and covariance calculations. For the Spearman correlation (which is based on ranks), `corr` calculates weighted ranks as proposed by [5]. To calculate the Kendall correlation (which is based on counts of permutations), `corr` extends the weighted counts algorithm in [6] to account for ties.
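To illustrate the Pearson case only, here is a sketch of a weighted Pearson coefficient, with the weights applied to the mean, variance, and covariance terms. This is an illustration of the general idea, not MathWorks' code, and the weighted-rank and weighted-count schemes of [5] and [6] for Spearman and Kendall are not reproduced here.

```python
# Sketch: weighted Pearson correlation via weighted variance and covariance.
import numpy as np

def weighted_pearson(x, y, w):
    w = w / w.sum()                 # normalize weights to sum to 1
    dx = x - np.sum(w * x)          # deviations from the weighted means
    dy = y - np.sum(w * y)
    cov = np.sum(w * dx * dy)       # weighted covariance
    return cov / np.sqrt(np.sum(w * dx**2) * np.sum(w * dy**2))

rng = np.random.default_rng(8)
x = rng.standard_normal(60)
y = 0.5 * x + rng.standard_normal(60)
w = rng.uniform(0.5, 2.0, size=60)

# With equal weights this reduces to the ordinary Pearson coefficient.
assert np.isclose(weighted_pearson(x, y, np.ones(60)), np.corrcoef(x, y)[0, 1])
```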

## References

[1] Gibbons, J.D. Nonparametric Statistical Inference. 2nd ed. M. Dekker, 1985.

[2] Hollander, M., and D.A. Wolfe. Nonparametric Statistical Methods. Wiley, 1973.

[3] Kendall, M.G. Rank Correlation Methods. Griffin, 1970.

[4] Best, D.J., and D.E. Roberts. "Algorithm AS 89: The Upper Tail Probabilities of Spearman's rho." Applied Statistics, 24:377-379.

[5] Bailey, Paul, and Ahmad Emad (2023). wCorr: Weighted Correlations. R package version 1.9.7, https://american-institutes-for-research.github.io/wCorr.

[6] Van Doorn, Johnny, et al. "Using the Weighted Kendall Distance to Analyze Rank Data in Psychology." The Quantitative Methods for Psychology, vol. 17, no. 2, June 2021, pp. 154–65.

## Version History

Introduced before R2006a
