# fmincon

Find minimum of constrained nonlinear multivariable function

## Equation

Finds the minimum of a problem specified by

$$\min_{x} f(x) \ \text{ such that } \ \begin{cases} c(x) \le 0 \\ ceq(x) = 0 \\ A \cdot x \le b \\ Aeq \cdot x = beq \\ lb \le x \le ub \end{cases}$$

b and beq are vectors, A and Aeq are matrices, c(x) and ceq(x) are functions that return vectors, and f(x) is a function that returns a scalar. f(x), c(x), and ceq(x) can be nonlinear functions.

x, lb, and ub can be passed as vectors or matrices; see Matrix Arguments.

## Syntax

```
x = fmincon(fun,x0,A,b)
x = fmincon(fun,x0,A,b,Aeq,beq)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
x = fmincon(problem)
[x,fval] = fmincon(...)
[x,fval,exitflag] = fmincon(...)
[x,fval,exitflag,output] = fmincon(...)
[x,fval,exitflag,output,lambda] = fmincon(...)
[x,fval,exitflag,output,lambda,grad] = fmincon(...)
[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...)
```

## Description

`fmincon` attempts to find a constrained minimum of a scalar function of several variables starting at an initial estimate. This is generally referred to as constrained nonlinear optimization or nonlinear programming.

Note: Passing Extra Parameters explains how to pass extra parameters to the objective function and nonlinear constraint functions, if necessary.

`x = fmincon(fun,x0,A,b)` starts at `x0` and attempts to find a minimizer `x` of the function described in `fun` subject to the linear inequalities `A*x ≤ b`. `x0` can be a scalar, vector, or matrix.

`x = fmincon(fun,x0,A,b,Aeq,beq)` minimizes `fun` subject to the linear equalities `Aeq*x = beq` and `A*x ≤ b`. If no inequalities exist, set `A = []` and `b = []`.

`x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)` defines a set of lower and upper bounds on the design variables in `x`, so that the solution is always in the range `lb ≤ x ≤ ub`. If no equalities exist, set `Aeq = []` and `beq = []`. If `x(i)` is unbounded below, set `lb(i) = -Inf`, and if `x(i)` is unbounded above, set `ub(i) = Inf`.
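For instance, a minimal sketch with bound constraints only (the objective, start point, and bounds here are hypothetical; the empty matrices stand in for the unused linear constraints):

```
fun = @(x) (x(1)-2)^2 + (x(2)-3)^2;   % hypothetical objective
x0 = [0.5; 0.5];                      % starting point inside the box
lb = [0; 0];                          % lower bounds
ub = [1; 2];                          % upper bounds
x = fmincon(fun,x0,[],[],[],[],lb,ub) % minimizer clipped to the box: [1; 2]
```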

Note: If the specified input bounds for a problem are inconsistent, the output `x` is `x0` and the output `fval` is `[]`. Components of `x0` that violate the bounds `lb ≤ x ≤ ub` are reset to the interior of the box defined by the bounds. Components that respect the bounds are not changed.

`x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)` subjects the minimization to the nonlinear inequalities `c(x)` or equalities `ceq(x)` defined in `nonlcon`. `fmincon` optimizes such that `c(x) ≤ 0` and `ceq(x) = 0`. If no bounds exist, set `lb = []` and/or `ub = []`.

`x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)` minimizes with the optimization options specified in `options`. Use `optimoptions` to set these options. If there are no nonlinear inequality or equality constraints, set `nonlcon = []`.

`x = fmincon(problem)` finds the minimum for `problem`, where `problem` is a structure described in Input Arguments. Create the `problem` structure by exporting a problem from the Optimization app, as described in Exporting Your Work.

`[x,fval] = fmincon(...)` returns the value of the objective function `fun` at the solution `x`.

`[x,fval,exitflag] = fmincon(...)` returns a value `exitflag` that describes the exit condition of `fmincon`.

`[x,fval,exitflag,output] = fmincon(...)` returns a structure `output` with information about the optimization.

`[x,fval,exitflag,output,lambda] = fmincon(...)` returns a structure `lambda` whose fields contain the Lagrange multipliers at the solution `x`.

`[x,fval,exitflag,output,lambda,grad] = fmincon(...)` returns the value of the gradient of `fun` at the solution `x`.

`[x,fval,exitflag,output,lambda,grad,hessian] = fmincon(...)` returns the value of the Hessian at the solution `x`. See fmincon Hessian.
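For instance, a minimal sketch that requests every output for a toy problem (the objective, constraint, and start point are hypothetical):

```
% Minimize ||x||^2 subject to x(1) + x(2) <= 1, starting from [1;1].
[x,fval,exitflag,output,lambda,grad,hessian] = ...
    fmincon(@(x)norm(x)^2,[1;1],[1 1],1);
% exitflag and output.message report why fmincon stopped;
% lambda.ineqlin holds the multiplier for the linear inequality.
```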

## Input Arguments

Function Arguments describes the arguments passed to `fmincon`. Options provides the function-specific details for the `options` values. This section provides function-specific details for `fun`, `nonlcon`, and `problem`.

`fun`

The function to be minimized. `fun` is a function that accepts a vector `x` and returns a scalar `f`, the objective function evaluated at `x`. `fun` can be specified as a function handle for a file:

`x = fmincon(@myfun,x0,A,b)`

where `myfun` is a MATLAB® function such as

```
function f = myfun(x)
f = ...   % Compute function value at x
```

`fun` can also be a function handle for an anonymous function:

`x = fmincon(@(x)norm(x)^2,x0,A,b);`

If the gradient of `fun` can also be computed and the `GradObj` option is `'on'`, as set by

`options = optimoptions('fmincon','GradObj','on')`
then `fun` must return the gradient vector `g(x)` in the second output argument.
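For instance, a minimal sketch of an objective that also returns its gradient (the function itself is hypothetical):

```
function [f,g] = myfun(x)
f = x(1)^2 + 3*x(1)*x(2);      % objective value
if nargout > 1                 % gradient requested
    g = [2*x(1) + 3*x(2);      % df/dx(1)
         3*x(1)];              % df/dx(2)
end
```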

If the Hessian matrix can also be computed and the `Hessian` option is set to `'user-supplied'` via `options = optimoptions('fmincon','Hessian','user-supplied')`, and the `Algorithm` option is `'trust-region-reflective'`, `fun` must return the Hessian value `H(x)`, a symmetric matrix, in a third output argument. `fun` can give a sparse Hessian. See Writing Objective Functions for details.
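Extending the same hypothetical function with a symmetric Hessian as the third output, for use with the `trust-region-reflective` algorithm:

```
function [f,g,H] = myfun(x)
f = x(1)^2 + 3*x(1)*x(2);
g = [2*x(1) + 3*x(2); 3*x(1)];
if nargout > 2
    H = [2 3; 3 0];            % symmetric (here constant) Hessian of f
end
```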

If the Hessian matrix can be computed and the `Algorithm` option is `interior-point`, there are several ways to pass the Hessian to `fmincon`. For more information, see Hessian.

`A`, `b`, `Aeq`, `beq`

Linear constraint matrices `A` and `Aeq`, and their corresponding vectors `b` and `beq`, can be sparse or dense. The `trust-region-reflective` and `interior-point` algorithms use sparse linear algebra. If `A` or `Aeq` is large, with relatively few nonzero entries, save running time and memory in the `trust-region-reflective` or `interior-point` algorithms by using sparse matrices.

`nonlcon`

The function that computes the nonlinear inequality constraints `c(x) ≤ 0` and the nonlinear equality constraints `ceq(x) = 0`. `nonlcon` accepts a vector `x` and returns the two vectors `c` and `ceq`. `c` is a vector that contains the nonlinear inequalities evaluated at `x`, and `ceq` is a vector that contains the nonlinear equalities evaluated at `x`. `nonlcon` should be specified as a function handle to a file or to an anonymous function, such as `mycon`:

`x = fmincon(@myfun,x0,A,b,Aeq,beq,lb,ub,@mycon)`

where `mycon` is a MATLAB function such as

```
function [c,ceq] = mycon(x)
c = ...   % Compute nonlinear inequalities at x.
ceq = ... % Compute nonlinear equalities at x.
```
If the gradients of the constraints can also be computed and the `GradConstr` option is `'on'`, as set by
`options = optimoptions('fmincon','GradConstr','on')`
then `nonlcon` must also return, in the third and fourth output arguments, `GC`, the gradient of `c(x)`, and `GCeq`, the gradient of `ceq(x)`. `GC` and `GCeq` can be sparse or dense. If `GC` or `GCeq` is large, with relatively few nonzero entries, save running time and memory in the `interior-point` algorithm by representing them as sparse matrices. For more information, see Nonlinear Constraints.
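A minimal sketch of such a constraint function with gradients (the constraints are hypothetical; gradients are returned one column per constraint):

```
function [c,ceq,GC,GCeq] = mycon(x)
c   = x(1)^2 + x(2)^2 - 1;    % nonlinear inequality c(x) <= 0
ceq = x(1) - x(2)^2;          % nonlinear equality ceq(x) = 0
if nargout > 2                % gradients requested
    GC   = [2*x(1); 2*x(2)];  % gradient of c, one column per inequality
    GCeq = [1; -2*x(2)];      % gradient of ceq, one column per equality
end
```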

 Note   Because Optimization Toolbox™ functions only accept inputs of type `double`, user-supplied objective and nonlinear constraint functions must return outputs of type `double`.

`problem`

Structure with these fields:

| Field | Description |
| --- | --- |
| `objective` | Objective function |
| `x0` | Initial point for `x` |
| `Aineq` | Matrix for linear inequality constraints |
| `bineq` | Vector for linear inequality constraints |
| `Aeq` | Matrix for linear equality constraints |
| `beq` | Vector for linear equality constraints |
| `lb` | Vector of lower bounds |
| `ub` | Vector of upper bounds |
| `nonlcon` | Nonlinear constraint function |
| `solver` | `'fmincon'` |
| `options` | Options created with `optimoptions` |
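A minimal sketch of assembling such a structure by hand (the objective and bounds are hypothetical), rather than exporting it from the Optimization app:

```
problem.objective = @(x) x(1)^2 + x(2)^2;  % hypothetical objective
problem.x0 = [3;3];
problem.Aineq = []; problem.bineq = [];
problem.Aeq = [];   problem.beq = [];
problem.lb = [1;1]; problem.ub = [];
problem.nonlcon = [];
problem.solver = 'fmincon';
problem.options = optimoptions('fmincon');
x = fmincon(problem);   % returns [1;1], the bound-constrained minimizer
```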

## Output Arguments

Function Arguments describes arguments returned by `fmincon`. This section provides function-specific details for `exitflag`, `lambda`, and `output`:

`exitflag`

Integer identifying the reason the algorithm terminated. The following lists the values of `exitflag` and the corresponding reasons the algorithm terminated.

All algorithms:

• `1` First-order optimality measure was less than `options.TolFun`, and maximum constraint violation was less than `options.TolCon`.
• `0` Number of iterations exceeded `options.MaxIter` or number of function evaluations exceeded `options.MaxFunEvals`.
• `-1` Stopped by an output function or plot function.
• `-2` No feasible point was found.

`trust-region-reflective`, `interior-point`, and `sqp` algorithms:

• `2` Change in `x` was less than `options.TolX` and maximum constraint violation was less than `options.TolCon`.

`trust-region-reflective` algorithm only:

• `3` Change in the objective function value was less than `options.TolFun` and maximum constraint violation was less than `options.TolCon`.

`active-set` algorithm only:

• `4` Magnitude of the search direction was less than `2*options.TolX` and maximum constraint violation was less than `options.TolCon`.
• `5` Magnitude of directional derivative in search direction was less than `2*options.TolFun` and maximum constraint violation was less than `options.TolCon`.

`interior-point` and `sqp` algorithms:

• `-3` Objective function at current iteration went below `options.ObjectiveLimit` and maximum constraint violation was less than `options.TolCon`.

`grad`

Gradient at `x`

`hessian`

Hessian at `x`

`lambda`

Structure containing the Lagrange multipliers at the solution `x` (separated by constraint type). The fields of the structure are:

• `lower` Lower bounds `lb`
• `upper` Upper bounds `ub`
• `ineqlin` Linear inequalities
• `eqlin` Linear equalities
• `ineqnonlin` Nonlinear inequalities
• `eqnonlin` Nonlinear equalities

`output`

Structure containing information about the optimization. The fields of the structure are:

• `iterations` Number of iterations taken
• `funcCount` Number of function evaluations
• `lssteplength` Size of line search step relative to search direction (`active-set` algorithm only)
• `constrviolation` Maximum of constraint functions
• `stepsize` Length of last displacement in `x` (`active-set` and `interior-point` algorithms)
• `algorithm` Optimization algorithm used
• `cgiterations` Total number of PCG iterations (`trust-region-reflective` and `interior-point` algorithms)
• `firstorderopt` Measure of first-order optimality
• `message` Exit message

### Hessian

`fmincon` uses a Hessian as an optional input. This Hessian is the second derivatives of the Lagrangian (see Equation 3-1), namely,

$\nabla_{xx}^{2} L(x,\lambda) = \nabla^{2} f(x) + \sum \lambda_{i} \nabla^{2} c_{i}(x) + \sum \lambda_{i} \nabla^{2} ceq_{i}(x).$ (14-1)

The various `fmincon` algorithms handle input Hessians differently:

• The `active-set` and `sqp` algorithms do not accept a user-supplied Hessian. They compute a quasi-Newton approximation to the Hessian of the Lagrangian.

• The `trust-region-reflective` algorithm can accept a user-supplied Hessian as the final output of the objective function. Since this algorithm has only bounds or linear constraints, the Hessian of the Lagrangian is same as the Hessian of the objective function. See Writing Scalar Objective Functions for details on how to pass the Hessian to `fmincon`. Indicate that you are supplying a Hessian by

`options = optimoptions('fmincon','Algorithm','trust-region-reflective','Hessian','user-supplied');`
If you do not pass a Hessian, the algorithm computes a finite-difference approximation.

• The `interior-point` algorithm can accept a user-supplied Hessian as a separately defined function—it is not computed in the objective function. The syntax is

`hessian = hessianfcn(x, lambda)`

`hessian` is an n-by-n matrix, sparse or dense, where n is the number of variables. If `hessian` is large and has relatively few nonzero entries, save running time and memory by representing `hessian` as a sparse matrix. `lambda` is a structure with the Lagrange multiplier vectors associated with the nonlinear constraints:

```
lambda.ineqnonlin
lambda.eqnonlin
```

`fmincon` computes the structure `lambda`. `hessianfcn` must calculate the sums in Equation 14-1. Indicate that you are supplying a Hessian by

```
options = optimoptions('fmincon','Algorithm','interior-point',...
    'Hessian','user-supplied','HessFcn',@hessianfcn);
```

For an example, see fmincon Interior-Point Algorithm with Analytic Hessian.
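As a complement, a minimal sketch of a `hessianfcn` for a hypothetical problem with objective f(x) = x(1)^2 + x(2)^2 and one nonlinear inequality c(x) = x(1)*x(2) - 1 ≤ 0:

```
function H = hessianfcn(x, lambda)
Hf = 2*eye(2);                      % Hessian of the objective
Hc = [0 1; 1 0];                    % Hessian of the inequality constraint
H  = Hf + lambda.ineqnonlin(1)*Hc;  % the sum in Equation 14-1
end
```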

The `interior-point` algorithm has several more options for Hessians, see Choose Input Hessian for interior-point fmincon:

• `options = optimoptions('fmincon','Hessian','bfgs');`

`fmincon` calculates the Hessian by a dense quasi-Newton approximation. This is the default.

• `options = optimoptions('fmincon','Hessian','lbfgs');`

`fmincon` calculates the Hessian by a limited-memory, large-scale quasi-Newton approximation. The default memory, 10 iterations, is used.

• `options = optimoptions('fmincon','Hessian',{'lbfgs',positive integer});`

`fmincon` calculates the Hessian by a limited-memory, large-scale quasi-Newton approximation. The positive integer specifies how many past iterations should be remembered.

• `options = optimoptions('fmincon','Hessian','fin-diff-grads','SubproblemAlgorithm','cg','GradObj','on','GradConstr','on');`

`fmincon` calculates a Hessian-times-vector product by finite differences of the gradient(s). You must supply the gradient of the objective function, and also gradients of nonlinear constraints if they exist.

#### Hessian Multiply Function

The `interior-point` and `trust-region-reflective` algorithms allow you to supply a Hessian multiply function. This function gives the result of a Hessian-times-vector product, without computing the Hessian directly. This can save memory.

The syntax differs between the two algorithms:

• For the `interior-point` algorithm, the syntax is

`W = HessMultFcn(x,lambda,v);`

The result `W` should be the product `H*v`, where `H` is the Hessian of the Lagrangian at `x` (see Equation 14-1), `lambda` is the Lagrange multiplier (computed by `fmincon`), and `v` is a vector of size n-by-1. Set options as follows:

```
options = optimoptions('fmincon','Algorithm','interior-point',...
    'Hessian','user-supplied','SubproblemAlgorithm','cg',...
    'HessMult',@HessMultFcn);
```

Supply the function `HessMultFcn`, which returns an n-by-1 vector, where n is the number of dimensions of x. The `HessMult` option enables you to pass the result of multiplying the Hessian by a vector without calculating the Hessian.

• The `trust-region-reflective` algorithm does not involve `lambda`:

`W = HessMultFcn(H,v);`

The result `W = H*v`. `fmincon` passes `H` as the value returned in the third output of the objective function (see Writing Scalar Objective Functions). `fmincon` also passes `v`, a vector or matrix with n rows. The number of columns in `v` can vary, so write `HessMultFcn` to accept an arbitrary number of columns. `H` does not have to be the Hessian; rather, it can be anything that enables you to calculate `W = H*v`.

Set options as follows:

```
options = optimoptions('fmincon','Algorithm','trust-region-reflective',...
    'Hessian','user-supplied','HessMult',@HessMultFcn);
```

For an example using a Hessian multiply function with the `trust-region-reflective` algorithm, see Minimization with Dense Structured Hessian, Linear Equalities.
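For the `interior-point` form, a hedged sketch (the two-variable matrix `Hfixed` is a hypothetical stand-in for the Lagrangian Hessian at `x` and `lambda`; a real multiply function would exploit problem structure instead):

```
function W = HessMultFcn(x, lambda, v)
Hfixed = [4 1; 1 2];   % hypothetical stand-in for the Lagrangian Hessian
W = Hfixed * v;        % n-by-1 product H*v, without forming a larger H
end
```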

## Options

Optimization options used by `fmincon`. Some options apply to all algorithms, and others are relevant for particular algorithms. Use `optimoptions` to set or change the values in `options`. See Optimization Options Reference for detailed information.

### All Algorithms

All four algorithms use these options:

`Algorithm`

Choose the optimization algorithm:

• `'interior-point'` (default)
• `'trust-region-reflective'`
• `'sqp'`
• `'active-set'`

For information on choosing the algorithm, see Choosing the Algorithm. The `trust-region-reflective` algorithm requires:

• A gradient to be supplied in the objective function
• `GradObj` to be set to `'on'`
• Either bound constraints or linear equality constraints, but not both

If you select the `'trust-region-reflective'` algorithm and these conditions are not all satisfied, `fmincon` throws an error. The `'active-set'` and `'sqp'` algorithms are not large-scale. See Large-Scale vs. Medium-Scale Algorithms.

`DerivativeCheck`

Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. The choices are `'on'` or the default, `'off'`.

`Diagnostics`

Display diagnostic information about the function to be minimized or solved. The choices are `'on'` or the default, `'off'`.

`DiffMaxChange`

Maximum change in variables for finite-difference gradients (a positive scalar). The default is `Inf`.

`DiffMinChange`

Minimum change in variables for finite-difference gradients (a positive scalar). The default is `0`.

`Display`

Level of display:

• `'off'` or `'none'` displays no output.
• `'iter'` displays output at each iteration, and gives the default exit message.
• `'iter-detailed'` displays output at each iteration, and gives the technical exit message.
• `'notify'` displays output only if the function does not converge, and gives the default exit message.
• `'notify-detailed'` displays output only if the function does not converge, and gives the technical exit message.
• `'final'` (default) displays just the final output, and gives the default exit message.
• `'final-detailed'` displays just the final output, and gives the technical exit message.

`FinDiffRelStep`

Scalar or vector step size factor. When you set `FinDiffRelStep` to a vector `v`, forward finite differences `delta` are

`delta = v.*sign(x).*max(abs(x),TypicalX);`

and central finite differences are

`delta = v.*max(abs(x),TypicalX);`

A scalar `FinDiffRelStep` expands to a vector. The default is `sqrt(eps)` for forward finite differences, and `eps^(1/3)` for central finite differences.

`FinDiffType`

Finite differences, used to estimate gradients, are either `'forward'` (default) or `'central'` (centered). `'central'` takes twice as many function evaluations but should be more accurate. `fmincon` is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds. However, for the `interior-point` algorithm, `'central'` differences might violate bounds during their evaluation if the `AlwaysHonorConstraints` option is set to `'none'`.

`FunValCheck`

Check whether objective function and constraint values are valid. `'on'` displays an error when the objective function or constraints return a value that is complex, `Inf`, or `NaN`. The default, `'off'`, displays no error.

`GradConstr`

Gradient for nonlinear constraint functions defined by the user. When set to `'on'`, `fmincon` expects the constraint function to have four outputs, as described in `nonlcon` in the Input Arguments section. When set to the default, `'off'`, gradients of the nonlinear constraints are estimated by finite differences. The `trust-region-reflective` algorithm does not accept nonlinear constraints.

`GradObj`

Gradient for the objective function defined by the user. See the preceding description of `fun` to see how to define the gradient in `fun`. Set to `'on'` to have `fmincon` use a user-defined gradient of the objective function. The default, `'off'`, causes `fmincon` to estimate gradients using finite differences. You must provide the gradient, and set `GradObj` to `'on'`, to use the trust-region-reflective method.

`MaxFunEvals`

Maximum number of function evaluations allowed, a positive integer. The default value for all algorithms except `interior-point` is `100*numberOfVariables`; for the `interior-point` algorithm the default is `3000`.

`MaxIter`

Maximum number of iterations allowed, a positive integer. The default value for all algorithms except `interior-point` is `400`; for the `interior-point` algorithm the default is `1000`.

`OutputFcn`

Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none (`[]`). See Output Function.

`PlotFcns`

Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none (`[]`).

• `@optimplotx` plots the current point
• `@optimplotfunccount` plots the function count
• `@optimplotfval` plots the function value
• `@optimplotconstrviolation` plots the maximum constraint violation
• `@optimplotstepsize` plots the step size
• `@optimplotfirstorderopt` plots the first-order optimality measure

For information on writing a custom plot function, see Plot Functions.

`TolCon`

Tolerance on the constraint violation, a positive scalar. The default is `1e-6`.

`TolFun`

Termination tolerance on the function value, a positive scalar. The default is `1e-6`.

`TolX`

Termination tolerance on `x`, a positive scalar. The default value for all algorithms except `'interior-point'` is `1e-6`; for the `'interior-point'` algorithm the default is `1e-10`.

`TypicalX`

Typical `x` values. The number of elements in `TypicalX` is equal to the number of elements in `x0`, the starting point. The default value is `ones(numberofvariables,1)`. `fmincon` uses `TypicalX` for scaling finite differences for gradient estimation. The `'trust-region-reflective'` algorithm uses `TypicalX` only for the `DerivativeCheck` option.

`UseParallel`

When `true`, estimate gradients in parallel. Disable by setting to the default, `false`. `trust-region-reflective` requires a gradient in the objective, so `UseParallel` does not apply. See Parallel Computing.
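For instance, a minimal sketch (hypothetical objective and start point) that sets several of these options and runs the solver:

```
fun = @(x) (x(1)-1)^2 + (x(2)-2)^2;   % hypothetical objective
x0 = [0;0];
options = optimoptions('fmincon','Algorithm','sqp', ...
    'Display','iter','TolFun',1e-8,'TolX',1e-8);
x = fmincon(fun,x0,[],[],[],[],[],[],[],options);
```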

### Trust-Region-Reflective Algorithm

The `'trust-region-reflective'` algorithm uses these options:

`Hessian`

If `'on'` or `'user-supplied'`, `fmincon` uses a user-defined Hessian (defined in `fun`), or Hessian information (when using `HessMult`), for the objective function. If `'off'` (default), `fmincon` approximates the Hessian using finite differences.

`HessMult`

Function handle for Hessian multiply function. For large-scale structured problems, this function computes the Hessian matrix product `H*Y` without actually forming `H`. The function is of the form

`W = hmfun(Hinfo,Y)`

where `Hinfo` contains a matrix used to compute `H*Y`.

The first argument must be the same as the third argument returned by the objective function `fun`, for example:

`[f,g,Hinfo] = fun(x)`

`Y` is a matrix that has the same number of rows as there are dimensions in the problem. `W = H*Y`, although `H` is not formed explicitly. `fmincon` uses `Hinfo` to compute the preconditioner. See Passing Extra Parameters for information on how to supply values for any additional parameters that `hmfun` needs.

Note: `Hessian` must be set to `'on'` or `'user-supplied'` for `fmincon` to pass `Hinfo` from `fun` to `hmfun`.

See Minimization with Dense Structured Hessian, Linear Equalities for an example.
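A hedged sketch of such a multiply function, assuming a structured Hessian H = Hinfo + V*V' where `V` is a known n-by-k matrix passed in as an extra parameter:

```
function W = hmfun(Hinfo, Y, V)
W = Hinfo*Y + V*(V'*Y);   % computes H*Y without ever forming H
end
```

Supply `V` through an anonymous function, as described in Passing Extra Parameters, for example `'HessMult', @(Hinfo,Y) hmfun(Hinfo,Y,V)`.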

`HessPattern`

Sparsity pattern of the Hessian for finite differencing. Set `HessPattern(i,j) = 1` when you can have ∂²`fun`/∂`x(i)`∂`x(j)` ≠ 0. Otherwise, set `HessPattern(i,j) = 0`.

Use `HessPattern` when it is inconvenient to compute the Hessian matrix `H` in `fun`, but you can determine (say, by inspection) when the `i`th component of the gradient of `fun` depends on `x(j)`. `fmincon` can approximate `H` via sparse finite differences (of the gradient) if you provide the sparsity structure of `H` — i.e., locations of the nonzeros — as the value for `HessPattern`.

In the worst case, when the structure is unknown, do not set `HessPattern`. The default behavior is as if `HessPattern` is a dense matrix of ones. Then `fmincon` computes a full finite-difference approximation in each iteration. This can be very expensive for large problems, so it is usually better to determine the sparsity structure.
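For instance, a sketch (hypothetical 100-variable problem) where the Hessian is known to be tridiagonal:

```
n = 100;
e = ones(n,1);
options = optimoptions('fmincon','Algorithm','trust-region-reflective', ...
    'GradObj','on','HessPattern',spdiags([e e e],-1:1,n,n));
```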

`MaxPCGIter`

Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is `max(1,floor(numberOfVariables/2))`. For more information, see Preconditioned Conjugate Gradient Method.

`PrecondBandWidth`

Upper bandwidth of preconditioner for PCG, a nonnegative integer. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. Setting `PrecondBandWidth` to `Inf` uses a direct factorization (Cholesky) rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step towards the solution.

`TolPCG`

Termination tolerance on the PCG iteration, a positive scalar. The default is `0.1`.

### Active-Set Algorithm

The `'active-set'` algorithm uses these options:

`MaxSQPIter`

Maximum number of SQP iterations allowed, a positive integer. The default is `10*max(numberOfVariables, numberOfInequalities + numberOfBounds)`.

`RelLineSrchBnd`

Relative bound (a real nonnegative scalar value) on the line search step length such that the total displacement in x satisfies |Δx(i)| ≤ relLineSrchBnd·max(|x(i)|,|typicalx(i)|). This option provides control over the magnitude of the displacements in x for cases in which the solver takes steps that are considered too large. The default is no bounds (`[]`).

`RelLineSrchBndDuration`

Number of iterations for which the bound specified in `RelLineSrchBnd` should be active (default is `1`).

`TolConSQP`

Termination tolerance on inner iteration SQP constraint violation, a positive scalar. The default is `1e-6`.

### Interior-Point Algorithm

The `'interior-point'` algorithm uses these options:

`AlwaysHonorConstraints`

The default, `'bounds'`, ensures that bound constraints are satisfied at every iteration. Disable by setting to `'none'`.

`HessFcn`

Function handle to a user-supplied Hessian (see Hessian). This is used when the `Hessian` option is set to `'user-supplied'`.

`Hessian`

Chooses how `fmincon` calculates the Hessian (see Hessian). The choices are:

• `'bfgs'` (default)
• `'fin-diff-grads'`
• `'lbfgs'`
• `{'lbfgs',Positive Integer}`
• `'user-supplied'`

`HessMult`

Handle to a user-supplied function that gives a Hessian-times-vector product (see Hessian). This is used when the `Hessian` option is set to `'user-supplied'`.

`InitBarrierParam`

Initial barrier value, a positive scalar. Sometimes it might help to try a value above the default `0.1`, especially if the objective or constraint functions are large.

`InitTrustRegionRadius`

Initial radius of the trust region, a positive scalar. On badly scaled problems it might help to choose a value smaller than the default $\sqrt{n}$, where n is the number of variables.

`MaxProjCGIter`

A tolerance (stopping criterion) for the number of projected conjugate gradient iterations; this is an inner iteration, not the number of iterations of the algorithm. This positive integer has a default value of `2*(numberOfVariables - numberOfEqualities)`.

`ObjectiveLimit`

A tolerance (stopping criterion) that is a scalar. If the objective function value goes below `ObjectiveLimit` and the iterate is feasible, the iterations halt, since the problem is presumably unbounded. The default value is `-1e20`.

`ScaleProblem`

`'obj-and-constr'` causes the algorithm to normalize all constraints and the objective function. Disable by setting to the default, `'none'`.

`SubproblemAlgorithm`

Determines how the iteration step is calculated. The default, `'ldl-factorization'`, is usually faster than `'cg'` (conjugate gradient), though `'cg'` might be faster for large problems with dense Hessians.

`TolProjCG`

A relative tolerance (stopping criterion) for the projected conjugate gradient algorithm; this is for an inner iteration, not the algorithm iteration. This positive scalar has a default of `0.01`.

`TolProjCGAbs`

Absolute tolerance (stopping criterion) for the projected conjugate gradient algorithm; this is for an inner iteration, not the algorithm iteration. This positive scalar has a default of `1e-10`.
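For example, a hedged sketch combining several of these choices (the values are illustrative only):

```
% Illustrative only: interior-point with a limited-memory Hessian
% approximation remembering 20 iterations, and strict bound honoring.
options = optimoptions('fmincon','Algorithm','interior-point', ...
    'Hessian',{'lbfgs',20},'AlwaysHonorConstraints','bounds');
```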

### SQP Algorithm

The `'sqp'` algorithm uses these options:

`ObjectiveLimit`

A tolerance (stopping criterion) that is a scalar. If the objective function value goes below `ObjectiveLimit` and the iterate is feasible, the iterations halt, since the problem is presumably unbounded. The default value is `-1e20`.

`ScaleProblem`

`'obj-and-constr'` causes the algorithm to normalize all constraints and the objective function. Disable by setting to the default, `'none'`.

## Examples

Find values of x that minimize $f(x) = -x_1 x_2 x_3$, starting at the point `x = [10;10;10]`, subject to the constraints:

$0 \le x_1 + 2x_2 + 2x_3 \le 72.$

1. Write a file that returns a scalar value `f` of the objective function evaluated at `x`:

```
function f = myfun(x)
f = -x(1) * x(2) * x(3);
```
2. Rewrite the constraints as both less than or equal to a constant:

$-x_1 - 2x_2 - 2x_3 \le 0$
$x_1 + 2x_2 + 2x_3 \le 72$

3. Since both constraints are linear, formulate them as the matrix inequality A·x ≤ b, where

```
A = [-1 -2 -2; ...
      1  2  2];
b = [0;72];
```
4. Supply a starting point and invoke an optimization routine:

```
x0 = [10;10;10];    % Starting guess at the solution
[x,fval] = fmincon(@myfun,x0,A,b);
```
5. After `fmincon` stops, the solution is

```
x

x =
   24.0000
   12.0000
   12.0000
```

where the function value is

```
fval

fval =
  -3.4560e+03
```

and the linear inequality constraints evaluate to less than or equal to `0`:

```
A*x-b

ans =
  -72.0000
   -0.0000
```

## Notes

### Trust-Region-Reflective Optimization

To use the trust-region-reflective algorithm, you must

• Supply the gradient of the objective function in `fun`.

• Set `GradObj` to `'on'` in `options`.

• Specify the feasible region using one, but not both, of the following types of constraints:

• Upper and lower bounds constraints

• Linear equality constraints, in which the equality constraint matrix `Aeq` cannot have more rows than columns

You cannot use inequality constraints with the trust-region-reflective algorithm. If the preceding conditions are not met, `fmincon` reverts to the active-set algorithm.

`fmincon` returns a warning if you do not provide a gradient and the `Algorithm` option is `'trust-region-reflective'`. `fmincon` permits an approximate gradient to be supplied, but this option is not recommended; the numerical behavior of most optimization methods is considerably more robust when the true gradient is used.

The trust-region-reflective method in `fmincon` is in general most effective when the matrix of second derivatives, i.e., the Hessian matrix H(x), is also computed. However, evaluation of the true Hessian matrix is not required. For example, if you can supply the Hessian sparsity structure (using the `HessPattern` option in `options`), `fmincon` computes a sparse finite-difference approximation to H(x).

If `x0` is not strictly feasible, `fmincon` chooses a new strictly feasible (centered) starting point.

If components of x have no upper (or lower) bounds, `fmincon` prefers that the corresponding components of `ub` (or `lb`) be set to `Inf` (or `-Inf` for `lb`) as opposed to an arbitrary but very large positive (or negative in the case of lower bounds) number.

Take note of these characteristics of linearly constrained minimization:

• A dense (or fairly dense) column of matrix `Aeq` can result in considerable fill and computational cost.

• `fmincon` removes (numerically) linearly dependent rows in `Aeq`; however, this process involves repeated matrix factorizations and therefore can be costly if there are many dependencies.

• Each iteration involves a sparse least-squares solution with matrix

$\overline{Aeq} = Aeq^{T} R^{T},$

where $R^{T}$ is the Cholesky factor of the preconditioner. Therefore, there is a potential conflict between choosing an effective preconditioner and minimizing fill in $\overline{Aeq}$.

### Active-Set Optimization

If equality constraints are present and dependent equalities are detected and removed in the quadratic subproblem, `'dependent'` appears under the `Procedures` heading (when you ask for output by setting the `Display` option to `'iter'`). The dependent equalities are only removed when the equalities are consistent. If the system of equalities is not consistent, the subproblem is infeasible and `'infeasible'` appears under the `Procedures` heading.

## Limitations

`fmincon` is a gradient-based method that is designed to work on problems where the objective and constraint functions are both continuous and have continuous first derivatives.

When the problem is infeasible, `fmincon` attempts to minimize the maximum constraint value.

The `'trust-region-reflective'` algorithm does not allow equal upper and lower bounds. For example, if `lb(2)==ub(2)`, `fmincon` gives this error:

```
Equal upper and lower bounds not permitted in trust-region-reflective algorithm.
Use either interior-point or SQP algorithms instead.
```

There are two different syntaxes for passing a Hessian, and there are two different syntaxes for passing a `HessMult` function; one for `trust-region-reflective`, and another for `interior-point`.

For `trust-region-reflective`, the Hessian of the Lagrangian is the same as the Hessian of the objective function. You pass that Hessian as the third output of the objective function.

For `interior-point`, the Hessian of the Lagrangian involves the Lagrange multipliers and the Hessians of the nonlinear constraint functions. You pass the Hessian as a separate function that takes into account both the position `x` and the Lagrange multiplier structure `lambda`.

Trust-Region-Reflective Coverage and Requirements

• Must provide gradient for `f(x)` in `fun`.

• Provide sparsity structure of the Hessian or compute the Hessian in `fun`.

• The Hessian should be sparse.

• `Aeq` should be sparse.

## Algorithms

### Trust-Region-Reflective Optimization

The `'trust-region-reflective'` algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [3] and [4]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See the trust-region and preconditioned conjugate gradient method descriptions in fmincon Trust Region Reflective Algorithm.

### Active-Set Optimization

`fmincon` uses a sequential quadratic programming (SQP) method. In this method, the function solves a quadratic programming (QP) subproblem at each iteration. `fmincon` updates an estimate of the Hessian of the Lagrangian at each iteration using the BFGS formula (see `fminunc` and references [7] and [8]).

`fmincon` performs a line search using a merit function similar to that proposed by [6], [7], and [8]. The QP subproblem is solved using an active set strategy similar to that described in [5]. fmincon Active Set Algorithm describes this algorithm in detail.

### Interior-Point Optimization

This algorithm is described in fmincon Interior Point Algorithm. There is a more extensive description in [1], [2], and [9].

### SQP Optimization

The `fmincon` `'sqp'` algorithm is similar to the `'active-set'` algorithm described in Active-Set Optimization. fmincon SQP Algorithm describes the main differences between the two algorithms.

## References

[1] Byrd, R.H., J. C. Gilbert, and J. Nocedal, "A Trust Region Method Based on Interior Point Techniques for Nonlinear Programming," Mathematical Programming, Vol. 89, No. 1, pp. 149–185, 2000.

[2] Byrd, R.H., Mary E. Hribar, and Jorge Nocedal, "An Interior Point Algorithm for Large-Scale Nonlinear Programming," SIAM Journal on Optimization, Vol. 9, No. 4, pp. 877–900, 1999.

[3] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418–445, 1996.

[4] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189–224, 1994.

[5] Gill, P.E., W. Murray, and M.H. Wright, Practical Optimization, London, Academic Press, 1981.

[6] Han, S.P., "A Globally Convergent Method for Nonlinear Programming," Journal of Optimization Theory and Applications, Vol. 22, p. 297, 1977.

[7] Powell, M.J.D., "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," Numerical Analysis, ed. G.A. Watson, Lecture Notes in Mathematics, Springer Verlag, Vol. 630, 1978.

[8] Powell, M.J.D., "The Convergence of Variable Metric Methods For Nonlinearly Constrained Optimization Calculations," Nonlinear Programming 3 (O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, eds.), Academic Press, 1978.

[9] Waltz, R. A., J. L. Morales, J. Nocedal, and D. Orban, "An interior algorithm for nonlinear optimization that combines line search and trust region steps," Mathematical Programming, Vol. 107, No. 3, pp. 391–408, 2006.