The following table is designed to help you choose a solver. It does not address multiobjective optimization or equation solving. There are more details on all the solvers in Problems Handled by Optimization Toolbox Functions.
Use the table as follows:
Identify your objective function as one of five types:
Linear
Quadratic
Sum of squares (least squares)
Smooth nonlinear
Nonsmooth
Identify your constraints as one of five types:
None (unconstrained)
Bound
Linear (including bound)
General smooth
Discrete (integer)
Use the table to identify a relevant solver.
In this table:
* means relevant solvers are found in Global Optimization Toolbox functions (licensed separately from Optimization Toolbox™ solvers).
fmincon applies to most smooth objective functions with smooth constraints. It is not listed as a preferred solver for least squares or linear or quadratic programming because the listed solvers are usually more efficient.
The table has suggested functions, but it is not meant to unduly restrict your choices. For example, fmincon can be effective on some nonsmooth problems.
The Global Optimization Toolbox ga function can address mixed-integer programming problems.
Solvers by Objective and Constraint
Constraint Type | Linear | Quadratic | Least Squares | Smooth nonlinear | Nonsmooth
None | n/a (f = const, or min = −∞) | quadprog | \ (matrix left division), lsqcurvefit, lsqnonlin | fminsearch, fminunc | fminsearch, *
Bound | linprog | quadprog | lsqcurvefit, lsqlin, lsqnonlin, lsqnonneg | fminbnd, fmincon, fseminf | fminbnd, *
Linear | linprog | quadprog | lsqlin | fmincon, fseminf | *
General smooth | fmincon | fmincon | fmincon | fmincon, fseminf | *
Discrete | intlinprog | * | * | * | *
Note: This table does not list multiobjective solvers or equation solvers. See Problems Handled by Optimization Toolbox Functions for a complete list of problems addressed by Optimization Toolbox functions. 
Note: Some solvers have several algorithms. For help choosing, see Choosing the Algorithm. 
fmincon has four algorithm options:
'interior-point' (default)
'trust-region-reflective'
'sqp'
'active-set'
Use optimoptions to set the Algorithm option at the command line.
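For example, selecting the 'sqp' algorithm in place of the default looks like this (a minimal sketch; the objective here is Rosenbrock's function, chosen purely for illustration):

```matlab
% Minimize Rosenbrock's function subject to lower bounds,
% selecting the 'sqp' algorithm instead of the default 'interior-point'.
fun = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
options = optimoptions('fmincon','Algorithm','sqp');
x0 = [-1 2];            % starting point
lb = [0 0];             % lower bounds on both variables
x = fmincon(fun,x0,[],[],[],[],lb,[],[],options)
```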
Recommendations 


Reasoning Behind the Recommendations.
'interior-point' handles large, sparse problems, as well as small dense problems. The algorithm satisfies bounds at all iterations, and can recover from NaN or Inf results. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques for large-scale problems. For details, see Interior-Point Algorithm.
'sqp' satisfies bounds at all iterations. The algorithm can recover from NaN or Inf results. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.
'active-set' can take large steps, which adds speed. The algorithm is effective on some problems with nonsmooth constraints. It is not a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms.
'trust-region-reflective' requires you to provide a gradient, and allows only bounds or linear equality constraints, but not both. Within these limitations, the algorithm handles both large sparse problems and small dense problems efficiently. It is a large-scale algorithm; see Large-Scale vs. Medium-Scale Algorithms. The algorithm can use special techniques to save memory, such as a Hessian multiply function. For details, see Trust-Region-Reflective Algorithm.
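Because 'trust-region-reflective' needs a user-supplied gradient, the objective must return the gradient as a second output. A minimal sketch (the quadratic objective and the bounds here are made up for illustration):

```matlab
% Objective function file returning both value and analytic gradient.
function [f,g] = myobj(x)
f = x(1)^2 + 2*x(2)^2;
g = [2*x(1); 4*x(2)];   % gradient of f
end
```

```matlab
% Tell fmincon a gradient is supplied, and restrict constraints to bounds,
% as 'trust-region-reflective' requires.
options = optimoptions('fmincon','Algorithm','trust-region-reflective',...
    'GradObj','on');
x = fmincon(@myobj,[1;1],[],[],[],[],[0;0],[2;2],[],options)
```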
fsolve has three algorithms:
'trust-region-dogleg' (default)
'trust-region-reflective'
'levenberg-marquardt'
Use optimoptions to set the Algorithm option at the command line.
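A brief sketch of fsolve on a two-equation system (this pair of coupled exponential equations is a standard illustrative example, not from the table above):

```matlab
% Solve  2*x1 - x2 = exp(-x1)  and  -x1 + 2*x2 = exp(-x2)
% with fsolve's default 'trust-region-dogleg' algorithm.
F = @(x) [2*x(1) - x(2) - exp(-x(1));
          -x(1) + 2*x(2) - exp(-x(2))];
options = optimoptions('fsolve','Algorithm','trust-region-dogleg');
x0 = [-5; -5];          % starting point
x = fsolve(F,x0,options)
```

Both components of the returned solution should be near 0.5671, where the two equations intersect.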
Recommendations 


Reasoning Behind the Recommendations.
'trust-region-dogleg' is the only algorithm that is specially designed to solve nonlinear equations. The others attempt to minimize the sum of squares of the function.
The 'trust-region-reflective' algorithm is effective on sparse problems. It can use special techniques such as a Jacobian multiply function for large-scale problems.
fminunc has two algorithms:
'trust-region' (formerly LargeScale = 'on'), the default
'quasi-newton' (formerly LargeScale = 'off')
Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
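A minimal fminunc sketch (the quadratic objective is made up for illustration; without a supplied gradient, 'quasi-newton' is the natural choice):

```matlab
% Unconstrained minimization with the 'quasi-newton' algorithm.
fun = @(x) 3*x(1)^2 + 2*x(1)*x(2) + x(2)^2;
options = optimoptions('fminunc','Algorithm','quasi-newton');
[x,fval] = fminunc(fun,[1;1],options)
```

The minimizer of this positive definite quadratic is the origin, so x should be close to [0;0] with fval near 0.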
lsqlin. lsqlin has three algorithms:
'trust-region-reflective' (formerly LargeScale = 'on'), the default
'active-set' (formerly LargeScale = 'off')
'interior-point'
Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
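A minimal lsqlin sketch using the default 'trust-region-reflective' algorithm (C, d, and the bounds are made-up data for illustration):

```matlab
% Bound-constrained linear least squares: min ||C*x - d||^2, lb <= x <= ub.
C = [1 1; 2 1; 1 3];    % 3 equations, 2 unknowns
d = [2; 3; 4];
lb = [0; 0];
ub = [2; 2];
x = lsqlin(C,d,[],[],[],[],lb,ub)
```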
lsqcurvefit and lsqnonlin. lsqcurvefit and lsqnonlin have two algorithms:
'trust-region-reflective' (default)
'levenberg-marquardt'
Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
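A minimal lsqcurvefit sketch with the default 'trust-region-reflective' algorithm (the exponential model and the synthetic data are made up for illustration):

```matlab
% Fit y = x(1)*exp(x(2)*t) to noisy synthetic data.
t = (0:9)';
y = 2*exp(-0.5*t) + 0.05*randn(10,1);   % data generated from known parameters
model = @(x,t) x(1)*exp(x(2)*t);
x0 = [1; -1];                           % initial parameter guess
x = lsqcurvefit(model,x0,t,y)
```

The fitted parameters should land near the generating values [2; -0.5], up to the noise.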
linprog has four algorithms:
'interior-point' (formerly LargeScale = 'on'), the default
'dual-simplex'
'active-set' (formerly LargeScale = 'off')
'simplex' (formerly LargeScale = 'off', Simplex = 'on')
Use optimoptions to set the Algorithm option at the command line.
Recommendations 

Use the 'interior-point' algorithm or the 'dual-simplex' algorithm. For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
Reasoning Behind the Recommendations.
Both the 'interior-point' and 'dual-simplex' algorithms are large-scale algorithms, while the other two are not. See Large-Scale vs. Medium-Scale Algorithms.
Generally, the 'interior-point' and 'dual-simplex' algorithms are faster and use less memory than the other two algorithms.
The 'active-set' and 'simplex' algorithms will be removed in a future release.
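A minimal linprog sketch with the recommended 'dual-simplex' algorithm (the objective and constraint data are made up for illustration):

```matlab
% Minimize -x1 - x2 subject to x1 + 2*x2 <= 4 and x >= 0.
f = [-1; -1];
A = [1 2];
b = 4;
lb = [0; 0];
options = optimoptions('linprog','Algorithm','dual-simplex');
x = linprog(f,A,b,[],[],lb,[],[],options)
```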
quadprog has three algorithms:
'interior-point-convex' (default)
'trust-region-reflective' (formerly LargeScale = 'on')
'active-set' (formerly LargeScale = 'off')
Use optimoptions to set the Algorithm option at the command line.
Recommendations 

For help if the minimization fails, see When the Solver Fails or When the Solver Might Have Succeeded. 
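A minimal quadprog sketch using the default 'interior-point-convex' algorithm (H, c, and the constraint are made up; H must be convex for this algorithm):

```matlab
% Minimize (1/2)*x'*H*x + c'*x subject to x1 + 2*x2 <= 2.
H = [2 0; 0 2];         % positive definite Hessian
c = [-2; -5];
A = [1 2];
b = 2;
x = quadprog(H,c,A,b)
```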
An optimization algorithm is large scale when it uses linear algebra that does not need to store or operate on full matrices. This may be done internally by storing sparse matrices, and by using sparse linear algebra for computations whenever possible. Furthermore, the internal algorithms either preserve sparsity, such as a sparse Cholesky decomposition, or do not generate matrices, such as a conjugate gradient method.
In contrast, medium-scale methods internally create full matrices and use dense linear algebra. If a problem is sufficiently large, full matrices take up a significant amount of memory, and the dense linear algebra may take a long time to execute.
Don't let the name "large scale" mislead you: you can use a large-scale algorithm on a small problem. Furthermore, you do not need to specify any sparse matrices to use a large-scale algorithm. Choose a medium-scale algorithm to access extra functionality, such as additional constraint types, or possibly for better performance.
Interior-point algorithms in fmincon, quadprog, and linprog have many good characteristics, such as low memory usage and the ability to solve large problems quickly. However, their solutions can be slightly less accurate than those from other algorithms. The reason for this potential inaccuracy is that the (internally calculated) barrier function keeps iterates away from inequality constraint boundaries.
For most practical purposes, this inaccuracy is usually quite small.
To reduce the inaccuracy, try to:
Rerun the solver with smaller TolX, TolFun, and possibly TolCon tolerances (but keep the tolerances sensible); see Tolerances and Stopping Criteria.
Run a different algorithm, starting from the interior-point solution. This can fail, because some algorithms can use excessive memory or time, and some linprog and quadprog algorithms do not accept an initial point.
For example, try to minimize the function x when bounded below by 0. Using the fmincon interior-point algorithm:
options = optimoptions(@fmincon,'Algorithm','interior-point','Display','off');
x = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)

x = 2.0000e-08
Using the fmincon sqp algorithm:
options.Algorithm = 'sqp';
x2 = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)
x2 = 0
Similarly, solve the same problem using the linprog interior-point algorithm:
opts = optimoptions(@linprog,'Display','off','Algorithm','interior-point');
x = linprog(1,[],[],[],[],0,[],1,opts)

x = 2.0833e-13
Using the linprog simplex algorithm:
opts.Algorithm = 'simplex';
x2 = linprog(1,[],[],[],[],0,[],1,opts)
x2 = 0
In these cases, the interior-point algorithms are less accurate, but the answers are quite close to the correct answer.
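As a sketch of the first remedy above, tightening the tolerances can push the interior-point iterate closer to the boundary (tolerance names follow this documentation's usage; the specific values are arbitrary):

```matlab
% Rerun the same bounded problem with much smaller TolX and TolFun.
options = optimoptions(@fmincon,'Algorithm','interior-point',...
    'TolX',1e-12,'TolFun',1e-12,'Display','off');
x = fmincon(@(x)x,1,[],[],[],[],0,[],[],options)
```

The returned x should be smaller in magnitude than the 2.0000e-08 obtained with default tolerances, though still not exactly 0.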
The following tables show the functions available for minimization, equation solving, multiobjective optimization, and solving leastsquares or datafitting problems.
Minimization Problems
Type | Formulation | Solver
Scalar minimization | $$\underset{x}{\mathrm{min}}f(x)$$ such that lb < x < ub (x is scalar) | fminbnd
Unconstrained minimization | $$\underset{x}{\mathrm{min}}f(x)$$ | fminsearch, fminunc
Linear programming | $$\underset{x}{\mathrm{min}}{f}^{T}x$$ such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | linprog
Mixed-integer linear programming | $$\underset{x}{\mathrm{min}}{f}^{T}x$$ such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub, x(intcon) is integer-valued | intlinprog
Quadratic programming | $$\underset{x}{\mathrm{min}}\frac{1}{2}{x}^{T}Hx+{c}^{T}x$$ such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | quadprog
Constrained minimization | $$\underset{x}{\mathrm{min}}f(x)$$ such that c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | fmincon
Semi-infinite minimization | $$\underset{x}{\mathrm{min}}f(x)$$ such that K(x,w) ≤ 0 for all w, c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | fseminf
Multiobjective Problems
Type | Formulation | Solver
Goal attainment | $$\underset{x,\gamma}{\mathrm{min}}\gamma $$ such that F(x) – w·γ ≤ goal, c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | fgoalattain
Minimax | $$\underset{x}{\mathrm{min}}\underset{i}{\mathrm{max}}{F}_{i}(x)$$ such that c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | fminimax
Equation Solving Problems
Type | Formulation | Solver
Linear equations | C·x = d, n equations, n variables | \ (matrix left division)
Nonlinear equation of one variable | f(x) = 0 | fzero
Nonlinear equations | F(x) = 0, n equations, n variables | fsolve
LeastSquares (ModelFitting) Problems
Type | Formulation | Solver
Linear least squares | $$\underset{x}{\mathrm{min}}\frac{1}{2}{\Vert C\cdot x-d\Vert}_{2}^{2}$$, m equations, n variables | \ (matrix left division)
Nonnegative linear least squares | $$\underset{x}{\mathrm{min}}\frac{1}{2}{\Vert C\cdot x-d\Vert}_{2}^{2}$$ such that x ≥ 0 | lsqnonneg
Constrained linear least squares | $$\underset{x}{\mathrm{min}}\frac{1}{2}{\Vert C\cdot x-d\Vert}_{2}^{2}$$ such that A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub | lsqlin
Nonlinear least squares | $$\underset{x}{\mathrm{min}}{\Vert F(x)\Vert}_{2}^{2}=\underset{x}{\mathrm{min}}{\displaystyle \sum _{i}{F}_{i}^{2}(x)}$$ such that lb ≤ x ≤ ub | lsqnonlin
Nonlinear curve fitting | $$\underset{x}{\mathrm{min}}{\Vert F(x,xdata)-ydata\Vert}_{2}^{2}$$ such that lb ≤ x ≤ ub | lsqcurvefit