How does 'fmincon' work?

Nathalie on 9 May 2014
Edited: Matt J on 10 May 2014
Can fmincon, run with different algorithms such as 'active-set' or 'sqp', produce different estimated parameter values at the end, all other things being equal?

Matt J on 9 May 2014
Edited: Matt J on 9 May 2014
Yes, different algorithms can give different results. No iterative optimization algorithm ever yields the exact solution (unless perhaps your initial guess x0 is lucky enough to be that solution). Rather, they do successive approximation until some stopping criteria are reached. The path each algorithm takes toward the true solution is a trait of that algorithm. So, when the stopping criteria are applied, different algorithms can stop in different places in the neighborhood of the true solution.
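As a minimal sketch of this (assuming the Optimization Toolbox is available; the objective, bounds, and starting point below are made up for illustration), the same problem can be handed to fmincon twice with only the Algorithm option changed:

```matlab
% Illustrative objective with a known curved valley (Rosenbrock-style).
fun = @(x) (x(1)-1)^2 + 100*(x(2)-x(1)^2)^2;
x0  = [-1; 2];                 % same starting point for both runs
lb  = [-2; -2];  ub = [2; 2];  % simple bound constraints

optSQP = optimoptions('fmincon','Algorithm','sqp','Display','off');
optAS  = optimoptions('fmincon','Algorithm','active-set','Display','off');

xSQP = fmincon(fun,x0,[],[],[],[],lb,ub,[],optSQP);
xAS  = fmincon(fun,x0,[],[],[],[],lb,ub,[],optAS);

% The two columns typically agree only to within the stopping tolerances,
% not to full machine precision.
disp([xSQP, xAS])
```

Tightening the stopping tolerances (OptimalityTolerance, StepTolerance) usually shrinks, but does not eliminate, the gap between the two answers.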
Nathalie on 9 May 2014
So, which solution should I accept? I tried both algorithms, and of my four estimated parameters, three are almost the same while one differs drastically between the two algorithms. Which algorithm should I trust?
Matt J on 10 May 2014
Edited: Matt J on 10 May 2014
You can test which algorithm did the better job by comparing the objective function values at the two solutions. If both have similar objective function values, it probably means you have multiple distinct solutions. If one is much lower than the other, it could mean the other got stuck in a local minimum. Getting stuck at a local minimum isn't exclusively the algorithm's fault, however. It could be a matter of luck, and that algorithm might have performed just as well given a different initial guess x0.
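One way to sketch that comparison (again assuming the Optimization Toolbox; the objective and bounds here are placeholders for your own problem): ask fmincon for the objective value as its second output, and rerun from a few random feasible starts to see whether a lower minimum turns up.

```matlab
fun = @(x) (x(1)-1)^2 + 100*(x(2)-x(1)^2)^2;   % placeholder objective
x0  = [-1; 2];
lb  = [-2; -2];  ub = [2; 2];

optSQP = optimoptions('fmincon','Algorithm','sqp','Display','off');
optAS  = optimoptions('fmincon','Algorithm','active-set','Display','off');

% Second output fval is the objective value at the returned solution.
[~, fvalSQP] = fmincon(fun,x0,[],[],[],[],lb,ub,[],optSQP);
[~, fvalAS ] = fmincon(fun,x0,[],[],[],[],lb,ub,[],optAS);
fprintf('sqp: %g, active-set: %g\n', fvalSQP, fvalAS);

% A crude multistart: if any random start reaches a clearly lower fval,
% the original runs were probably caught in a local minimum.
for k = 1:5
    xr = lb + rand(size(lb)).*(ub - lb);       % random feasible start
    [~, fr] = fmincon(fun,xr,[],[],[],[],lb,ub,[],optSQP);
    fprintf('random start %d: fval = %g\n', k, fr);
end
```

The Global Optimization Toolbox, if you have it, automates this idea with MultiStart and GlobalSearch.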
Even if one algorithm outperformed all the others, it still doesn't mean any of the solutions are to be trusted. You should get acquainted with the other fmincon outputs.
In particular, exitflag tells you whether fmincon thinks it succeeded and why. Values of exitflag <= 0 usually mean the optimization failed. Even when exitflag > 0, it only means fmincon thinks it got close enough to a minimum by its own standards, which might not be yours.
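A minimal sketch of requesting those extra outputs (the objective and bounds are placeholders):

```matlab
fun = @(x) sum(x.^2);                          % placeholder objective
[x, fval, exitflag, output] = fmincon(fun, [1;1], [],[],[],[], [0;0], [2;2]);

% Treat non-positive exit flags as failures; output.message explains why
% fmincon stopped, and output.iterations shows how hard it had to work.
if exitflag <= 0
    warning('fmincon did not converge: %s', output.message);
end
```

The output structure also reports the first-order optimality measure and constraint violation, which are worth checking before comparing the two algorithms' answers.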
To really build trust in the result of an iterative optimization, you have to run it on test problems simulating your actual problem as closely as possible, but where you know the solution in advance. Then you can directly compare the output to the known solution and see if the result is close enough to satisfy you.
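A sketch of such a test problem (the true parameter vector xTrue and the quadratic objective below are invented for illustration; in practice you would simulate data from known parameters and fit your actual model to it):

```matlab
% Synthetic problem whose minimizer is known in advance.
xTrue = [0.5; 1.5];
fun   = @(x) sum((x - xTrue).^2);              % minimum is exactly at xTrue
x0    = [0; 0];

xHat = fmincon(fun, x0, [],[],[],[], [-2;-2], [2;2]);

% Direct check of the recovered parameters against the known answer.
fprintf('max abs parameter error: %g\n', max(abs(xHat - xTrue)));
```

If the recovery error on problems like this is acceptable across several random xTrue and x0 choices, you have grounds to trust the estimates on your real data.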