## Contents

```
% Multivariate calculus demo script

% This script file is designed to be used in cell mode
% from the MATLAB editor, or, best of all, with the publish
% to HTML feature of the MATLAB editor. In older versions
% of MATLAB, you can copy and paste entire blocks of code
% into the MATLAB command window.

% Typical uses of the gradient and Hessian arise in
% optimization problems, where one might check an analytically
% derived gradient for correctness against a numerical estimate,
% or use the Hessian matrix to compute confidence interval
% estimates on parameters in a maximum likelihood estimation.
```
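To make the confidence-interval use case concrete, here is a small sketch (not part of the original demo) that applies `hessian` to a normal negative log-likelihood; the synthetic data, the likelihood, and the 1.96 normal quantile are all illustrative assumptions.

```
% Sketch: approximate standard errors from the Hessian of a negative
% log-likelihood at the maximum likelihood estimate. The Hessian
% approximates the observed information matrix, and its inverse
% approximates the covariance matrix of the estimates.
% (Synthetic data and a normal model are assumed for illustration.)
x = randn(100,1) + 3;                          % synthetic sample, true mean 3
nll = @(p) sum(log(p(2)) + 0.5*((x - p(1))/p(2)).^2);
phat = [mean(x), std(x,1)];                    % ML estimates of [mu, sigma]
H = hessian(nll,phat);                         % observed information matrix
se = sqrt(diag(inv(H)));                       % approximate standard errors
ci = [phat(:) - 1.96*se, phat(:) + 1.96*se]    % rough 95% intervals
```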

## Gradient of the Rosenbrock function at [1,1], the global minimizer

```
rosen = @(x) (1-x(1)).^2 + 105*(x(2)-x(1).^2).^2;

% The gradient should be zero (within floating point noise)
[grad,err] = gradest(rosen,[1 1])
```
```
grad =
3.6989e-20            0
err =
1.4545e-18            0
```
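The header comments mention checking an analytic gradient against a numerical estimate; here is a minimal sketch of that comparison for this Rosenbrock variant (the hand-derived `rosengrad` and the test point are my additions, not part of the demo).

```
% Hand-derived gradient of rosen (note the 105 coefficient, so the
% factors are 420 and 210 rather than the classic 400 and 200)
rosengrad = @(x) [2*(x(1)-1) - 420*x(1)*(x(2)-x(1)^2), ...
                  210*(x(2)-x(1)^2)];
x0 = [1.5 2];
max(abs(gradest(rosen,x0) - rosengrad(x0)))   % should be negligible
```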

## The Hessian matrix at the minimizer should be positive definite

```
H = hessian(rosen,[1 1])
% The eigenvalues of H should be positive
eig(H)
```
```
H =
842         -420
-420          210
ans =
0.39939
1051.6
```
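An equivalent test, sketched here as an aside: `chol` succeeds exactly when its argument is (numerically) positive definite, so its second output gives a yes/no answer without computing eigenvalues.

```
% p == 0 indicates the Cholesky factorization succeeded,
% i.e. H is positive definite
[~,p] = chol(H);
if p == 0
    disp 'H is positive definite'
end
```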

## Gradient of a function of five variables

```
[grad,err] = gradest(@(x) sum(x.^2),[1 2 3 4 5])
```
```
grad =
2            4            6            8           10
err =
1.2533e-14   2.5067e-14   3.4734e-14   5.0134e-14    2.836e-14
```
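Since the gradient of sum(x.^2) is exactly 2*x, the estimate above is easy to verify; a one-line check (added here, not in the original demo):

```
x0 = [1 2 3 4 5];
norm(gradest(@(x) sum(x.^2),x0) - 2*x0)   % approximately zero
```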

## Simple Hessian matrix of a problem with 3 independent variables

```
[H,err] = hessian(@(x) x(1) + x(2)^2 + x(3)^3,[1 2 3])
```
```
H =
0     0     0
0     2     0
0     0    18
err =
0            0            0
0   4.6563e-15            0
0            0   3.3318e-14
```
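The exact Hessian of this separable function is diag([0, 2, 6*x(3)]), so the result can be verified directly (an added check, not in the original demo):

```
% At x = [1 2 3] the exact Hessian is diag([0 2 18])
norm(H - diag([0 2 18]))   % approximately zero
```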

## A semi-definite Hessian matrix

```
H = hessian(@(xy) cos(xy(1) - xy(2)),[0 0])
% one of these eigenvalues will be zero (approximately)
eig(H)
```
```
H =
-1            1
1           -1
ans =
-2
-3.8795e-07
```
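Because the zero eigenvalue is only approximate, a robust semi-definiteness test needs a tolerance; one possible sketch (the 1e-5 threshold is an arbitrary choice of mine, not from the demo):

```
% Flag eigenvalues that are zero to within a tolerance
lambda = eig(H);
nearZero = abs(lambda) < 1e-5   % true for the (numerically) zero eigenvalue
```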

## Directional derivative of the Rosenbrock function at the solution

This should be zero. OK, it's a trivial test case.

```
[dd,err] = directionaldiff(rosen,[1 1],[1 2])
```
```
dd =
0
err =
0
```

## Directional derivative at other locations

```
[dd,err] = directionaldiff(rosen,[2 3],[1 -1])

% We can test this example
v = [1 -1];
v = v/norm(v);

% Compute the gradient at the same point
grad = gradest(rosen,[2 3]);

% The directional derivative will be the dot product of the gradient with
% the (unit normalized) vector. So this difference will be (approx) zero.
dot(grad,v) - dd
```
```
dd =
743.88
err =
1.5078e-12
ans =
1.5916e-12
```

## Jacobian matrix of a scalar function is just the gradient

```
[jac,err] = jacobianest(rosen,[2 3])

% Compare to the gradient at the same point
gradest(rosen,[2 3])
```
```
jac =
842         -210
err =
2.8698e-12   1.0642e-12
ans =
842         -210
```

## Jacobian matrix of a linear system will reduce to the design matrix

```
A = rand(5,3);
b = rand(5,1);
fun = @(x) (A*x-b);

x = rand(3,1);
[jac,err] = jacobianest(fun,x)

disp 'This should be essentially zero at any location x'
jac - A
```
```
jac =
0.81472      0.09754      0.15761
0.90579       0.2785      0.97059
0.12699      0.54688      0.95717
0.91338      0.95751      0.48538
0.63236      0.96489      0.80028
err =
3.9634e-15   3.8376e-16   5.4271e-16
3.9634e-15   8.8625e-16   5.0134e-15
7.0064e-16   3.0701e-15   5.3175e-15
3.76e-15   4.8542e-15   2.4271e-15
3.0701e-15   4.3417e-15   3.9634e-15
This should be essentially zero at any location x
ans =
0  -3.1919e-16            0
1.1102e-16  -1.6653e-16            0
2.7756e-17            0  -1.1102e-16
-1.1102e-16  -3.3307e-16   1.6653e-16
0   2.2204e-16   1.1102e-16
```

## The Jacobian matrix of a nonlinear transformation of variables

Evaluated at the arbitrary location [-2, -3].

```
fun = @(xy) [xy(1).^2, cos(xy(1) - xy(2))];
[jac,err] = jacobianest(fun,[-2 -3])
```
```
jac =
-4            0
-0.84147      0.84147
err =
2.5067e-14            0
6.1478e-14   1.9658e-14
```
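As a final check (an added sketch, not in the original demo), the exact Jacobian of this map is [2*x, 0; -sin(x-y), sin(x-y)], which matches the estimate at (-2, -3):

```
% Exact Jacobian at (-2,-3); note sin(-2-(-3)) = sin(1) = 0.84147
Jexact = [2*(-2), 0; -sin(1), sin(1)];
norm(jac - Jexact)   % approximately zero
```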