Tried to optimize the function but the output is infinite

Need to minimize the equation below:
F = (1/2) x^T (Ks + Kn)^(-1) x + (1/2) log det(Ks + Kn) + (M/2) log(2π)
where Ks = s^2 exp( -(x - x')(x - x')' / (2 l^2) )
I have to find the parameters s and l.
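For concreteness, here is that objective evaluated numerically, sketched in Python/NumPy rather than MATLAB (an illustration only, not the original code). It treats the inputs as scalars, whereas the code below uses length-N windows, and the small jitter term is my addition for numerical stability:

```python
import numpy as np

def neg_log_marginal_likelihood(x, s, ell):
    """F = 0.5 x^T K^{-1} x + 0.5 log det K + (M/2) log(2*pi)
    with the squared-exponential kernel K[i,j] = s^2 exp(-(x_i-x_j)^2 / (2 ell^2))."""
    x = np.asarray(x, dtype=float).ravel()
    M = x.size
    D2 = (x[:, None] - x[None, :]) ** 2            # pairwise squared distances
    K = s**2 * np.exp(-D2 / (2.0 * ell**2))
    K += 1e-8 * np.eye(M)                          # jitter so K stays invertible (my addition)
    sign, logdet = np.linalg.slogdet(K)            # stable log det, avoids overflow
    alpha = np.linalg.solve(K, x)                  # K \ x, cheaper and safer than inv(K) @ x
    return 0.5 * x @ alpha + 0.5 * logdet + 0.5 * M * np.log(2 * np.pi)
```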
Used Gradient Descent algorithm:
load PCG_1.mat
syms x1 x2 positive
N = 5; % length of the window
n = 5; % length of the signal
%a = mecg(1:n+N+2)';
a = env(1:n+N+2);
a0 = [10 1]; % initial point
k1 = sym(zeros(n)); % pre-allocate the symbolic covariance matrix
for i = 1:n
    a1 = a(i:i+N-1);
    for j = 1:n
        a2 = a(1+j:1+j+N-1);
        k1(i,j) = (x1^2)*exp(-((a1-a2)*(a1-a2)')*0.5/(x2^2)); % Equation (5) from [1]: squared-exponential covariance function
    end
end
a = a(1:n)';
x0 = a0;
K = k1;
tol = 1e-6;     % tolerance
maxiter = 1000; % maximum iterations
dxmin = 1e-6;   % minimum step difference
alpha = 0.1;    % step size
gnorm = inf; x = x0'; niter = 0; dx = inf;
f = 0.5*a'*inv(K)*a ...
    + 0.5*log(det(K)) ...
    + n/2*log(2*pi);
g1 = jacobian(f, [x1, x2]);
while and(gnorm >= tol, and(niter <= maxiter, dx >= dxmin))
    g = double(subs(g1, {x1, x2}, x'))'; % numeric gradient at the current point
    gnorm = norm(g);
    xnew = x - alpha*g;
    if any(~isfinite(xnew))
        error('x is inf or NaN')
    end
    niter = niter + 1;
    dx = norm(xnew - x);
    x = xnew;
end
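The same loop can be sketched in Python with a finite-difference gradient in place of the symbolic Jacobian (an illustration of the fixed-step gradient descent algorithm, not a fix for the symbolic code):

```python
import numpy as np

def grad_descent(f, x0, alpha=0.1, tol=1e-6, dxmin=1e-6, maxiter=1000, h=1e-6):
    """Minimize f(x) with a fixed step size, stopping on gradient norm,
    step size, or iteration count, mirroring the MATLAB loop above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        # central-difference approximation of the gradient
        g = np.array([(f(x + h*e) - f(x - h*e)) / (2*h)
                      for e in np.eye(x.size)])
        if np.linalg.norm(g) < tol:
            break
        xnew = x - alpha * g
        if not np.all(np.isfinite(xnew)):
            raise FloatingPointError('x is inf or NaN')
        if np.linalg.norm(xnew - x) < dxmin:
            x = xnew
            break
        x = xnew
    return x

# quadratic test problem: minimum of (x-3)^2 + (y+1)^2 is at (3, -1)
xmin = grad_descent(lambda v: (v[0]-3)**2 + (v[1]+1)**2, [0.0, 0.0])
```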

1 Comment

Please recheck your equation F. It appears to be missing symbols as you have (Ks +) with nothing between the + and the close bracket.
Is Kn a constant or related to Ks? A description of what each name is and its size would help


Answers (2)

So what is your problem? No optimization is guaranteed to converge to the local solution you may desire. We don't know whether your data is the problem, whether you have mis-implemented this algorithm, or whether you mis-wrote the objective function.
Instead, you will be far better off using a good optimization code, written by professionals. At least then the question will not be whether there is a problem in the code.
Don't write your own optimizers. Better codes are already written; use them. Start with those in the Optimization Toolbox, or other optimization tools such as the genetic algorithm toolbox. Others exist too.
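For comparison, the Python analogue of reaching for a professionally written optimizer is scipy.optimize.minimize; Nelder-Mead below is the same simplex algorithm that MATLAB's fminsearch uses, demonstrated here on the standard Rosenbrock test function (this example is mine, not from the thread):

```python
import numpy as np
from scipy.optimize import minimize

# Rosenbrock function: a standard difficult test case with its minimum at (1, 1)
rosen = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2

# Nelder-Mead = the simplex method behind MATLAB's fminsearch
res = minimize(rosen, x0=[-1.2, 1.0], method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8})
```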

1 Comment

Hi, I tried to use the Optimization Toolbox. When I run this code, the system gets stuck. I don't know what's wrong.
clc
close all
load PCG_1.mat
syms x1 x2 positive
N=1500; % Length of the window
n=1500; % Length of the signal
a = env(1:n+N+2);
a0=[1 1]; % Initial point
tic
for i=1:n
a1= a(i:i+N-1);
for j=1:n
a2=a(1+j:1+j+N-1);
K(i,j) = (x1^2)* exp(-((a1-a2)*(a1-a2)')*0.5/(x2^2)); % Equation (5) from [1]: squared-exponential covariance function
end
end
t=toc;
f= 0.5*a'*inv(K)*a...
+ 0.5 * log(det(K))...
+ n/2 *log(2*pi);
x = fminsearch(f,a0)


You should be pre-allocating your K.
But even if you do that, setting the values in your K matrix is quite slow. Slow enough that you should probably make it a completely separate phase that creates K, builds f from it, and saves f to a file, so that you then have f available to try different minimization algorithms. You should also be using matlabFunction to generate a function handle to pass to fminsearch, since fminsearch does not expect symbolic expressions.
I put in a waitbar() to monitor how far along in i it had gotten; it is taking a few minutes per i. You should consider using parfor.

11 Comments

The creation of f from K involves finding the symbolic inv() of a 1500 x 1500 symbolic matrix. That is not even a little realistic.
The inv() of a symbolic matrix involves computations with the determinant, and the number of terms in the determinant of an N x N matrix is factorial(N). For a 1500 x 1500 matrix that would be factorial(1500) which is over 10^4000 terms.
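That factorial(1500) claim is easy to check numerically (Python here purely for the big-integer arithmetic):

```python
import math

# The Leibniz expansion of an N x N determinant has N! terms.
n_terms = math.factorial(1500)
digits = len(str(n_terms))   # number of decimal digits, i.e. roughly log10(1500!)
```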
In your first code, after you calculated k1, you had
a= a(1:n)'
but you do not have that in your optimize_covariance.m file, so you still have
a = env(1:n+N+2);
so a is 1 by n+N+2 which for your 1500 would be 1 x 3002 .
Your calculation of f includes
0.5*a'*inv(K)*a
where your K is n x n . But if you group the two right terms together,
0.5 * a' * (inv(K) * a)
then we see that inv(K) has to be matrix-multipliable by a. inv(K) is the same size as K, so that part is (n x n), and your a is 1 x (n+N+2) . The two simply cannot be reconciled for matrix multiplication because the respective inner dimensions are n and 1 . If you were to take the transpose of a so that it was (n x n) * (n+N+2 x 1) then that could only work if n == n+N+2 which can only happen if N == -2 . This suggests that you should not have dropped the a= a(1:n)' from your code. Question: are you sure you want conjugate transpose and not plain transpose?
The inv(K) * a can be rewritten as K\a . This does not make much difference when K is a symbolic expression, but it does not hurt, and it is much more efficient if you received a numeric K.
But either way, even with just the 6 x 6 subset of symbolic K, K\a is slow, too much so for practical work.
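The same advice holds in NumPy terms, shown here as an illustrative sketch (test matrix and sizes are mine): np.linalg.solve(K, a) plays the role of K\a and avoids forming the explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)          # symmetric positive-definite test matrix
a = rng.standard_normal(n)

x_solve = np.linalg.solve(K, a)      # the K \ a form: one factorization, no inverse
x_inv = np.linalg.inv(K) @ a         # explicit inverse: slower and less accurate

# on a well-conditioned matrix the two agree to high precision
err = np.max(np.abs(x_solve - x_inv))
```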
Hi, but computing the K matrix is necessary. Is there any way of decreasing the time?
No, it is completely and utterly infeasible to construct inv() of a 1500 by 1500 symbolic matrix. We are talking Heat Death Of The Universe Is Barely Getting Started timeframes.
The approach you are going to have to adopt is a purely numeric one: instead of finding the symbolic formula and optimizing using it, you will need to let the optimization supply specific numeric x1 and x2, build the numeric K matrix, and proceed from there.
On my system, inv() of a numeric 1500 x 1500 matrix takes about 1/7 of a second -- but when you do the numeric code, switch to using A\B instead of inv(A)*B
You can build a matrix of -((a1-a2)*(a1-a2)')*0.5 ahead of time and pass it in to the objective function by parameterizing the function and then index it.
A12 = zeros(n); % pre-allocate
for i=1:n
    a1 = a(i:i+N-1);
    for j=1:n
        a2 = a(1+j:1+j+N-1);
        A12(i,j) = -((a1-a2)*(a1-a2)')*0.5;
    end
end
then pass in A12, and in the objective function, no loop would be required to build K:
K = x1.^2 * exp(A12 ./ x2.^2 );
and you would not search at this level:
x = 0.5 .* a' * (K\a) ...
+ 0.5 * log(det(K)) ...
+ n/2 *log(2*pi);
Instead, you would wrap those pair of lines in a function (and possibly another line or two of initialization) and you would fminsearch on that:
fminsearch( @(x12) obj(x12(1), x12(2), A12), a0 )
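The whole recipe can be sketched in Python/NumPy, with SciPy's Nelder-Mead standing in for fminsearch (synthetic data, symmetric windowing, small sizes, and a jitter term are my assumptions here, not from the thread):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, N = 40, 5                        # small sizes so the demo runs quickly
a = rng.standard_normal(n + N + 2)  # synthetic stand-in for env(1:n+N+2)

# Precompute A12[i, j] = -0.5 * ||window_i - window_j||^2 once, outside the objective.
windows = np.stack([a[i:i + N] for i in range(n)])
diff = windows[:, None, :] - windows[None, :, :]
A12 = -0.5 * np.sum(diff**2, axis=2)

y = a[:n]                           # the data vector (a(1:n)' in the thread)

def obj(x12):
    """Negative log marginal likelihood as a function of (x1, x2) only."""
    x1, x2 = x12
    K = x1**2 * np.exp(A12 / x2**2) + 1e-8 * np.eye(n)  # jitter: my addition
    if not np.all(np.isfinite(K)):
        return np.inf
    sign, logdet = np.linalg.slogdet(K)
    if sign <= 0:
        return np.inf               # reject non-positive-definite K
    return 0.5 * y @ np.linalg.solve(K, y) + 0.5 * logdet + 0.5 * n * np.log(2 * np.pi)

res = minimize(obj, x0=[1.0, 1.0], method='Nelder-Mead')
```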
It's working. Thank you.
I have another question: is there a better way to choose a good initial point for the optimization?
Unfortunately there is seldom a way to know a good initial point, except if you have information about what the range of results "probably" is.
Warning: Matrix is singular to working precision. > In obj (line 4) In optimize_conv2>@(x12)obj(x12(1),x12(2),A,a3,n) In fminsearch (line 189) In optimize_conv2 (line 23)
Exiting: Maximum number of function evaluations has been exceeded - increase MaxFunEvals option. Current function value: NaN
I'm getting this warning. Is it because of a wrong choice of initial point?
Please show your obj code.
At the moment it seems more likely that you have a / operation where you need a ./ operation.
Your K becomes all 0 if x1 goes to 0, and that is not invertible. You need a bounded search, but fminsearch is always unconstrained. Or you could fake it with max(eps, x1.^2)
There might be other circumstances in which your K has bad conditioning or low rank.
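One way to get a bounded search in a Python sketch: SciPy's L-BFGS-B accepts box bounds directly, which avoids the max(eps, ...) workaround (the toy objective and the bound values below are illustrative assumptions, not from the thread):

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective that blows up as x1 -> 0, mimicking log det K when the
# kernel amplitude collapses to zero:
def f(v):
    x1, x2 = v
    return (x1 - 2)**2 + (x2 - 3)**2 - np.log(x1)

# Nelder-Mead (fminsearch) cannot enforce x1 > 0; L-BFGS-B takes box bounds.
res = minimize(f, x0=[1.0, 1.0], method='L-BFGS-B',
               bounds=[(1e-6, None), (1e-6, None)])
```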
I think it's because of the covariance matrix: it should be positive definite, which it is not in my case.
Is the way I implemented the covariance matrix correct? Here's the link to the paper.

