Unconstrained Minimization Using fminunc
This example shows how to use fminunc to solve the nonlinear minimization problem

    min_x f(x),  where f(x) = e^(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1).

To solve this two-dimensional problem, write a function that returns f(x). Then, invoke the unconstrained minimization routine fminunc starting from the initial point x0 = [-1,1].
The helper function objfun at the end of this example calculates f(x).
To find the minimum of f(x), set the initial point and call fminunc.
x0 = [-1,1];
[x,fval,exitflag,output] = fminunc(@objfun,x0);
Local minimum found.

Optimization completed because the size of the gradient is less than
the value of the optimality tolerance.
View the results, including the first-order optimality measure in the output structure.
disp(x)
0.5000 -1.0000
disp(fval)
3.6609e-15
disp(exitflag)
1
disp(output.firstorderopt)
1.2284e-07
The exitflag output indicates whether the algorithm converges. exitflag = 1 means fminunc finds a local minimum.
The output structure gives more details about the optimization. For fminunc, the structure includes:
output.iterations, the number of iterations
output.funcCount, the number of function evaluations
output.stepsize, the final step size
output.firstorderopt, a measure of first-order optimality (which, in this unconstrained case, is the infinity norm of the gradient at the solution)
output.algorithm, the type of algorithm used
output.message, the reason the algorithm stopped
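As a cross-check outside MATLAB (a sketch, not part of the original example), the same problem can be solved with SciPy's BFGS quasi-Newton method; the fields of the returned OptimizeResult roughly parallel the fminunc outputs listed above:

```python
import numpy as np
from scipy.optimize import minimize

# Objective translated from the objfun helper:
# f(x) = exp(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)
def objfun(x):
    return np.exp(x[0]) * (4*x[0]**2 + 2*x[1]**2 + 4*x[0]*x[1] + 2*x[1] + 1)

res = minimize(objfun, x0=[-1, 1], method="BFGS")

print(res.x)      # minimizer, close to [0.5, -1.0]
print(res.fun)    # minimum value, close to 0
print(res.nit)    # iterations (cf. output.iterations)
print(res.nfev)   # function evaluations (cf. output.funcCount)
print(np.linalg.norm(res.jac, np.inf))  # inf-norm of gradient (cf. output.firstorderopt)
```

Field names on the MATLAB and SciPy sides differ, but both report the iterate, objective value, evaluation counts, and a first-order optimality measure.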
Helper Function
This code creates the objfun helper function.
function f = objfun(x)
f = exp(x(1)) * (4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
end
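The near-zero fval is not only numerical: the polynomial factor in objfun can be written as the sum of squares (2*x1 + x2)^2 + (x2 + 1)^2, which vanishes exactly at x = [0.5, -1], so the global minimum value of f is exactly 0. A quick check (sketched in Python; the Python translation is an assumption outside the original MATLAB example):

```python
import math

# f(x) = exp(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1), as in objfun
x1, x2 = 0.5, -1.0
poly = 4*x1**2 + 2*x2**2 + 4*x1*x2 + 2*x2 + 1  # equals (2*x1 + x2)^2 + (x2 + 1)^2
f = math.exp(x1) * poly

print(poly)  # 0.0 -- the polynomial factor vanishes exactly at [0.5, -1]
print(f)     # 0.0
```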