MATLAB Answers

Getting different results with fmincon in different runs

Ali Aghaeifar on 1 Jun 2019
Edited: Matt J on 5 Jun 2019
I use the following code to find an X that minimizes myFun:
problem.x0 = zeros(3,20);
problem.ub = 10*ones(3,20);
problem.lb = -problem.ub;
problem.objective = @(x) myFun(x);
problem.solver = 'fmincon';
problem.options = optimoptions(@fmincon, 'Algorithm', 'sqp', 'MaxFunEvals', 1e6, 'MaxIter', 1e5);
xFinal = fmincon(problem);
The cost function, myFun, is not convex, and the gradient is not specified either, so there is no guarantee of reaching the global minimum (no problem so far).
My problem is that the number of iterations differs when I run this code on different computers, and I get a different X as the local minimum. The start value, constraints, etc. are all the same. The MATLAB version is also the same.
Any idea why I cannot get the same results on different PCs?
I tried to add
at the beginning, but it didn't solve the issue.
After iteration No. 0, I get different f(x) (Fval) values; iteration No. 0 itself is the same for all runs.
The F-count also differs after the first iteration (for runs on different computers).

  1 Comment

Ali Aghaeifar on 3 Jun 2019
Not sure, but it seems the problem traces back to the OS. I got the same results when I ran on two PCs with Windows 10. However, different OSes yield different results (tested with Windows 7, Windows 10, and Ubuntu 14).


Answers (1)

Matt J on 3 Jun 2019
Edited: Matt J on 3 Jun 2019
Does myFun have any randomization commands in it? It shouldn't.
Otherwise, it is possible that you have a continuum of solutions, similar to how f(x,y) = (x-y)^2 has a global minimum at every point where x = y. In such cases, the result is numerically unstable, and different machines with different processors can give very different results.
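A minimal sketch of this effect (the objective f(x,y) = (x-y)^2 here is an illustrative stand-in, not your myFun): starting fmincon from different points yields different minimizers, all lying on the line x = y with essentially the same objective value.

% Illustrative only: a degenerate objective whose minimum is the whole line x = y.
f = @(v) (v(1) - v(2))^2;
opts = optimoptions(@fmincon, 'Algorithm', 'sqp', 'Display', 'none');

x1 = fmincon(f, [ 3; -1], [], [], [], [], [], [], [], opts);
x2 = fmincon(f, [-2;  5], [], [], [], [], [], [], [], opts);

% Both are valid minimizers (f is ~0 at each), yet x1 and x2 are different points.
disp([x1, x2])

With a continuum like this, tiny floating-point differences between machines decide which point on the line the solver settles on.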
It is also conceivable that one machine converges toward a saddle point, remaining in regions of positive curvature, while the other machine, which does the numerical calculations slightly differently, lands in a region of negative curvature and achieves greater descent.


Ali Aghaeifar on 3 Jun 2019
Thanks for your reply.
No, there are no randomization commands.
Then it's bad news for the portability of programs if different processors can give different results. However, I doubt this is the main reason. All the function evaluations for the zeroth iteration yield the same results on different computers (e.g., the F-count is 25 for the zeroth iteration, and all 25 function evaluations are also the same).
I also don't have a continuum of solutions, because evaluating the objective function at the different solutions gives different results.
I just saw two more questions with a problem similar to mine:
Matt J on 4 Jun 2019
Then it's bad news for the portability of programs if different processors can give different results.
No, it's not a broad issue of portability. Different processors can give significantly different results only for optimization problems with certain unstable properties, such as those I described in the thread you referenced.
The first thing I would verify is whether both of the solutions you are getting are stable. One way to do this is to add small random noise to a solution and re-run the optimization using this noisy point as your x0. If the optimization returns to the same solution, it is probably stable.
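A sketch of that stability check, assuming xFinal and the problem struct from the original code are still in the workspace (the noise scale 1e-3 is an arbitrary choice and should be small relative to your bounds):

% Perturb the candidate solution slightly and restart fmincon from it.
noiseScale = 1e-3;                                   % arbitrary small perturbation
problem.x0 = xFinal + noiseScale*randn(size(xFinal));
xCheck = fmincon(problem);

% If xCheck returns to (approximately) xFinal, the solution is likely stable.
fprintf('max deviation after re-run: %g\n', max(abs(xCheck(:) - xFinal(:))));

If instead the re-run drifts to a different point with a similar objective value, that supports the continuum-of-solutions explanation above.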
