How to know if a minimum found with Global Search is actually the Global minimum?
I am trying to find a global minimum with fmincon() inside a GlobalSearch. I gave it a maximum runtime of 30 minutes, but the solver ended long before that with the following message:
gs = GlobalSearch('MaxTime',1800);
[global_min,fg,exitflag,output,solutions] = run(gs,problem)
GlobalSearch stopped because it analyzed all the trial points.
3 out of 19 local solver runs converged with a positive local solver exit flag.
Elapsed time is 507.457135 seconds.
exitflag = 2
I know that these 3 found solutions are local minima, but is the best of those minima, the one saved in 'global_min', actually the global minimum? How can I find out whether it is? The documentation for GlobalSearch() does not seem to cover this.
John D'Errico on 27 Jan 2021
Bjorn and Stephen should be encouraged to make their comments into Answers. But let me expand on what they have both said.
The point is that it is impossible to ensure that a numerical solver applied to a fully general nonlinear optimization problem (a complete black box) will find the global solution. This is because the objective may always contain a local minimum that is essentially a delta function, a virtually infinitely narrow hole. Even if the function is technically fully differentiable, such an effective numerical singularity will make any solver fail almost 100% of the time to find the true global solution.
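To make that concrete, here is a toy "needle in a haystack" objective: a smooth quadratic bowl with a virtually infinitely narrow dip added at x = 2. (The function and the start point here are invented for illustration, not from the original question.)

```matlab
% A smooth bowl, plus an extremely narrow Gaussian "needle" at x = 2
% that is the true global minimum. The needle's width is ~1e-3.
needle = @(x) x.^2 - 5*exp(-1e6*(x - 2).^2);

% Started anywhere except essentially on top of x = 2, the gradient of
% the needle term is numerically zero, so fmincon slides into the broad
% basin near x = 0 and never sees the true global minimum.
[xsol, fsol] = fmincon(needle, 3, [], [], [], [], -10, 10);
```

Plot this function and you will see why no downhill solver, and no finite sampling scheme, can be guaranteed to land in a basin that narrow.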
Can you succeed in general with the symbolic toolbox? NO! There is not even any assurance a symbolic solver will work, because that problem reduces to finding all roots of a general nonlinear function. Again, this is impossible on almost all such nasty functions you will throw at it. And you have told us nothing about your problem, except that it is nasty.
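For a simple, smooth, low-order objective the symbolic route can work, precisely because all critical points can be found in closed form. Here is a sketch on a toy quartic (this function is an invented example, not the poster's problem):

```matlab
% Symbolically find all critical points of a tractable 1-D objective,
% then pick the smallest. This only works because f is a polynomial.
syms x
f = x^4 - 3*x^2 + x;
crit = solve(diff(f, x) == 0, x);    % all roots of f'(x)
vals = double(subs(f, x, crit));     % objective value at each one
[fmin, idx] = min(vals);             % the global minimum among them
```

For a general nonlinear objective, solve() has no hope of returning all roots of the gradient, which is the point above.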
The point is, you can WANT a solver to always give a global minimum, but if wishes were horses, beggars would ride.
The best you can do with a deterministic tool like fmincon is to use good starting guesses, or failing that, to use a multi-start method, with many start points. That INCREASES the probability you will find the global solution. The same thing applies with any "global search" tool. Give it as good a chance as you can of finding its way into the basin of attraction of the global minimizer. That means you need a well sampled search space. Anyway, I said PROBABILITY above. I meant that. At best, you can try to improve the probability you will succeed, driving that probability closer to 1.
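A multi-start run along those lines might look like this. The objective myObjective and the bounds here are placeholders, assumed for the sketch; substitute your own problem:

```matlab
% Sketch of a multi-start search over a bounded region.
rng default                              % reproducible start points
lb = [-5 -5];  ub = [5 5];               % assumed search box
problem = createOptimProblem('fmincon', ...
    'objective', @(x) myObjective(x), ...
    'x0', (lb + ub)/2, 'lb', lb, 'ub', ub);
ms = MultiStart;
startPts = RandomStartPointSet('NumStartPoints', 200);  % many starts
[xBest, fBest, eflag, out, allMins] = run(ms, problem, startPts);
% allMins lists the distinct local minima found. If many well-spread
% starts keep converging to the same best point, that raises (but never
% proves) confidence that it is the global minimum.
```

The more start points, and the better they cover the feasible region, the closer that probability of success gets to 1.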
With high dimensional problems, things go to hell fast. Densely sampling a high dimensional search space becomes nearly impossible. The curse of dimensionality makes things really bad, and really fast. Can you improve things there? Sometimes...
The point is that just as good starting values are a huge benefit for any deterministic downhill solver in getting into the basin of attraction of the global min, you need to constrain the solver as tightly as possible. If you know something about the solution, then that information needs to go into bound or inequality constraints of some form. Such constraint sets can turn an ill-posed problem into a well-posed one.
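Folding that knowledge into the problem definition is straightforward. Suppose (hypothetically) you knew the solution lies in [0,10] in each variable and that x(1) + x(2) <= 12; the bounds, the linear constraint, and myObjective below are all assumptions for the sketch:

```matlab
% Build a constrained problem so the search space is as small as possible.
lb = [0 0];  ub = [10 10];           % bound constraints from prior knowledge
A  = [1 1];  b = 12;                 % linear inequality A*x <= b
problem = createOptimProblem('fmincon', ...
    'objective', @(x) myObjective(x), ...
    'x0', [5 5], 'lb', lb, 'ub', ub, ...
    'Aineq', A, 'bineq', b);
gs = GlobalSearch('MaxTime', 1800);
[xg, fg, eflag] = run(gs, problem);
```

Every constraint you can justify shrinks the region GlobalSearch must sample, which directly improves the odds that one of its trial points lands in the right basin.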