# scale/normalize parameter vector for optimization



In my optimization problem, the parameters naturally vary by several orders of magnitude because they represent interpolation values of a B-spline function F = spapi(k,x,y). For instance, some of the y values may be close to zero while others are orders of magnitude larger. The step tolerance is just a constant and does not account for these differences in scale between the parameters. In my case, the objective function responds differently to changes in the "smaller" y's than in the larger y's, making parameter scaling or normalization potentially beneficial.

1. In machine learning literature, I often encounter [0,1] scaling as a common technique. Would this approach be suitable for my problem as well? Or can you suggest more appropriate scaling techniques given the parameters represent interpolation values?

2. This might be a separate question, but it also relates to parameter scaling/transformation. My Jacobian matrix J (hence, Hessian \approx J^T*J) tends to be poorly conditioned. I have considered switching to a different basis for the parameter vector. So far, my parameters y_i are simply multipliers for the standard unit vectors e_i: x = sum_i y_i*e_i. I vaguely recall a discussion where a teacher suggested using the normalized eigenvectors of the Hessian as the basis vectors instead: x = sum_i c_i*v_i, where v_i are the (normalized) eigenvectors of the Hessian and c_i are the new parameters.

My questions are: In theory, would parameterization in terms of the eigenvectors be effective in improving the conditioning of the problem? If so, is this approach compatible with the presence of bounds and linear constraints on the parameters?

Thank you!

##### 6 Comments

John D'Errico
on 7 Oct 2024

You don't understand. The eigenvalues and eigenvectors of the Hessian matrix are continually changing, and will be doing so dramatically as you move around the parameter space. In some places, it is even likely the eigenvalues change sign, indicating there may be regions of both simultaneously positive and negative curvature. As such, those eigenvectors are constantly pointing in very different directions as you move. Is it truly likely your initial guess is such a great guess? I doubt that is true. Why not?

BECAUSE you already told us the problem is poorly conditioned.

A characteristic of poorly conditioned problems is you have a long valley where if you move along the bottom of that valley, the objective changes relatively little. I.e., there is some linear combination of the parameters where relatively large changes in that direction do little to the objective. And of course, since the objective is nonlinear, almost always that valley lies along a curve. Not even a long straight valley.

Next, does it improve the conditioning? No. Not at all. You will still have the same fundamental valley to chase along, but as I said, almost always, that valley is not even a long perfectly straight one.
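The point about the eigenvectors rotating as you move can be sketched numerically. This is an illustration I am adding (in NumPy, not from the original thread), using the classic Rosenbrock function as a stand-in for a curved valley: the dominant curvature direction at one point on the valley floor is very different from the one at another point.

```python
import numpy as np

def rosenbrock_hessian(x, y):
    """Analytic Hessian of f(x, y) = (1 - x)^2 + 100*(y - x^2)^2."""
    return np.array([[2 + 1200*x**2 - 400*y, -400*x],
                     [-400*x,                 200.0]])

def leading_eigvec(H):
    """Unit eigenvector of the largest eigenvalue (eigh sorts ascending)."""
    _, V = np.linalg.eigh(H)
    return V[:, -1]

# Two points on the valley floor y = x^2
v0 = leading_eigvec(rosenbrock_hessian(0.0, 0.0))  # at the origin: axis-aligned
v1 = leading_eigvec(rosenbrock_hessian(1.0, 1.0))  # at the minimum: tilted

# Angle between the dominant curvature directions (sign of eigenvectors is
# arbitrary, so compare absolute inner products)
angle = np.degrees(np.arccos(np.clip(abs(v0 @ v1), 0.0, 1.0)))
print(f"angle between leading eigenvectors: {angle:.1f} degrees")
```

A basis built from the eigenvectors at the starting point would already be the wrong basis a few steps later.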

### Accepted Answer

Bruno Luong
on 7 Oct 2024

Edited: Bruno Luong
on 7 Oct 2024

> 1. In machine learning literature, I often encounter [0,1] scaling as a common technique. Would this approach be suitable for my problem as well? Or can you suggest more appropriate scaling techniques given the parameters represent interpolation values?

> 2. This might be a separate question, but it also relates to parameter scaling/transformation. My Jacobian matrix J (hence, Hessian \approx J^T*J) tends to be poorly conditioned. I have considered switching to a different basis for the parameter vector. So far, my parameters y_i are simply multipliers for the standard unit vectors e_i: x = sum_i y_i*e_i. I vaguely recall a discussion where a teacher suggested using the normalized eigenvectors of the Hessian as the basis vectors instead: x = sum_i c_i*v_i, where v_i are the (normalized) eigenvectors of the Hessian and c_i are the new parameters.

This is probably what your teacher suggested, for the unconstrained case. Observe the smaller number of iterations needed for convergence when the basis is well chosen, which renders the Hessian better conditioned.

```matlab
% Create ill-conditioned matrix J
n = 4;
J = sqrtm(hilb(n));
xt = rand(n,1)
y = J*xt;
x0 = zeros(size(J,2),1);
[x,fval,exitflag,output] = fminunc(@(x)sum((J*x-y).^2),x0);
output

% Hessian (the model is linear, so there is no second-order derivative here)
H = J'*J;
H = 1/2*(H + H'); % make the matrix numerically symmetric
fprintf('cond(H) = %g\n', cond(H));
[V,lambda] = eig(H,'vector');
lambda = reshape(lambda,1,[]);

% New basis (each column is a basis vector, a "normalized" eigenvector of H)
B = V*diag(1./sqrt(lambda)); % careful: we assume H is an SPD matrix in order to be able to select such a basis

% Model with respect to coefficients in the new basis
JB = J*B;
% fprintf('cond(JB''*JB) = %g\n', cond(JB'*JB)); % NOTE: close to 1

[xB,fval,exitflag,outputPrecond] = fminunc(@(xB)sum((JB*xB-y).^2),x0);
xPrecond = B*xB
outputPrecond
```

One could make the Hessian matrix better conditioned by changing the basis of x, as shown here, or the basis of y. These correspond to the so-called right and left preconditioning of the model matrix J.
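For readers outside MATLAB, here is a minimal NumPy sketch of the same right-preconditioning step (my translation, not from the thread; the Hilbert matrix itself stands in for an ill-conditioned J, and H = J'*J is assumed SPD):

```python
import numpy as np

n = 4
# Hilbert matrix: entry (i, j) is 1/(i + j + 1), classically ill-conditioned
J = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)

H = J.T @ J                # Gauss-Newton Hessian of the linear model
w, V = np.linalg.eigh(H)   # H is SPD here, so all eigenvalues w are positive
B = V / np.sqrt(w)         # right preconditioner: column j is v_j / sqrt(lambda_j)

JB = J @ B                 # model matrix in the new basis
# (JB)'(JB) = B' H B = I in exact arithmetic, so cond(JB) is close to 1
print(f"cond(J)  = {np.linalg.cond(J):.3g}")
print(f"cond(JB) = {np.linalg.cond(JB):.3g}")
```

Any gradient-based solver applied to the coefficients in the B basis then sees an essentially spherical quadratic, and the original parameters are recovered as x = B @ xB.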

For the constrained case, you need to consider some sort of Hessian projected onto the active set. The topic is large; better to get a good book on optimization rather than post questions here.

##### 3 Comments


### More Answers (2)

Matt J
on 7 Oct 2024

Edited: Matt J
on 7 Oct 2024

The transformation is non-differentiable, and probably not a good idea for Optimization Toolbox optimizers, which are smooth solvers. In machine learning, smoothness doesn't matter so much, (a) because stochastic gradient solvers are typically used, which aren't as influenced by non-differentiability, and (b) because precise minimization isn't really a high priority in machine learning.

A common transformation strategy would be to take the diagonal of the Hessian, or of its approximation J.'*J, at the initial point x0 and use it to pre-normalize the variables. The idea is to scale things so that the Hessian diagonals at x0 are approximately equal to 1. This strategy is more effective the more diagonally dominant the Hessian is and the better the initial guess x0.
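A small NumPy sketch of that diagonal-scaling idea (my own made-up example with deliberately mis-scaled columns, not data from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
# Columns of J deliberately given wildly different scales (1, 1e3, 1e-3)
J = rng.standard_normal((20, 3)) * np.array([1.0, 1e3, 1e-3])

H = J.T @ J                  # Hessian approximation at x0
d = np.sqrt(np.diag(H))      # square roots of the Hessian diagonal
Js = J / d                   # scaled model; new variables are x_scaled = d * x

Hs = Js.T @ Js               # diag(Hs) is now exactly all ones
print(f"cond(H)  = {np.linalg.cond(H):.3g}")
print(f"cond(Hs) = {np.linalg.cond(Hs):.3g}")
```

When the ill-conditioning comes purely from mismatched parameter scales, as here, this cheap Jacobi-style scaling removes most of it; when it comes from strong off-diagonal coupling, it helps much less.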

##### 9 Comments

Bruno Luong
on 8 Oct 2024

Edited: Bruno Luong
on 8 Oct 2024

I did read that and understood it.

In practice I have rarely, if ever, encountered a diagonally dominant Hessian; the form of the Hessian is something we undergo rather than choose. The most diagonally dominant example that comes to mind spontaneously is from the Laplace/Poisson equation. That one usually already has a constant diagonal, and people still derive more sophisticated preconditioning techniques for it (e.g., scaled FFT basis, multigrid techniques, ...).

I simply showed the convergence result and how to code such preconditioning, using the model matrix J without forming the Hessian H, on a simple example.

Feel free to give an example where the assumption you stipulate is true and effective.

