Solve nonlinear least-squares (nonlinear data-fitting) problems
Solves nonlinear least-squares curve fitting problems of the form
$$\underset{x}{\mathrm{min}}{\Vert f(x)\Vert}_{2}^{2}=\underset{x}{\mathrm{min}}\left({f}_{1}{(x)}^{2}+{f}_{2}{(x)}^{2}+\mathrm{...}+{f}_{n}{(x)}^{2}\right)$$
with optional lower and upper bounds lb and ub on the components of x.
x, lb, and ub can be vectors or matrices; see Matrix Arguments.
x = lsqnonlin(fun,x0)
x = lsqnonlin(fun,x0,lb,ub)
x = lsqnonlin(fun,x0,lb,ub,options)
x = lsqnonlin(problem)
[x,resnorm] = lsqnonlin(...)
[x,resnorm,residual] = lsqnonlin(...)
[x,resnorm,residual,exitflag] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output,lambda] = lsqnonlin(...)
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(...)
lsqnonlin solves nonlinear least-squares problems, including nonlinear data-fitting problems.
Rather than compute the value $${\Vert f(x)\Vert}_{2}^{2}$$ (the sum of squares), lsqnonlin requires the user-defined function to compute the vector-valued function
$$f(x)=\left[\begin{array}{c}{f}_{1}(x)\\ {f}_{2}(x)\\ \vdots \\ {f}_{n}(x)\end{array}\right]$$
Then, in vector terms, you can restate this optimization problem as
$$\underset{x}{\mathrm{min}}{\Vert f(x)\Vert}_{2}^{2}=\underset{x}{\mathrm{min}}\left({f}_{1}{(x)}^{2}+{f}_{2}{(x)}^{2}+\mathrm{...}+{f}_{n}{(x)}^{2}\right)$$
where x is a vector or matrix and f(x) is a function that returns a vector or matrix value. For details of matrix values, see Matrix Arguments.
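This identity between the squared 2-norm and the explicit sum of squares can be checked numerically. A minimal Python/NumPy sketch (the residual function f and the point x below are made-up illustrations, not part of lsqnonlin):

```python
import numpy as np

# Hypothetical two-component residual vector f(x), for illustration only.
def f(x):
    return np.array([x[0] - 1.0, 2.0 * x[1] + 3.0])

x = np.array([0.5, -0.5])
sum_of_squares = np.sum(f(x) ** 2)         # f_1(x)^2 + f_2(x)^2
norm_squared = np.linalg.norm(f(x)) ** 2   # ||f(x)||_2^2
# The two quantities agree up to floating-point rounding.
```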
x = lsqnonlin(fun,x0) starts at the point x0 and finds a minimum of the sum of squares of the functions described in fun. fun should return a vector of values and not the sum of squares of the values. (The algorithm implicitly computes the sum of squares of the components of fun(x).)
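lsqnonlin is a MATLAB function; as a rough analogue for readers working in Python, SciPy's least_squares takes the same kind of input: a function returning the residual vector, never the scalar sum of squares. The Rosenbrock residuals below are a standard test problem chosen for illustration, not part of this reference page:

```python
import numpy as np
from scipy.optimize import least_squares

# fun returns the residual VECTOR; the solver forms the sum of squares
# internally, just as lsqnonlin does with fun(x).
def fun(x):
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])  # Rosenbrock residuals

res = least_squares(fun, x0=np.array([-1.2, 1.0]))
# res.x approaches [1, 1], the minimizer of the sum of squares.
```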
Note: Passing Extra Parameters explains how to pass extra parameters to the vector function f, if necessary. 
x = lsqnonlin(fun,x0,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub. You can fix the solution component x(i) by specifying lb(i) = ub(i).
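A hedged SciPy sketch of bound-constrained solving (the residual function is an illustrative test problem, not from this page). Note one difference: least_squares requires each lower bound to be strictly less than its upper bound, so fixing a component via equal bounds, as lsqnonlin allows, is not available there:

```python
import numpy as np
from scipy.optimize import least_squares

def fun(x):
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

# Keep the solution inside lb <= x <= ub; np.inf marks an unbounded side.
res = least_squares(fun, x0=np.array([2.0, 2.0]),
                    bounds=(np.array([1.5, -np.inf]), np.array([3.0, np.inf])))
# The unconstrained minimizer [1, 1] is infeasible, so x[0] lands on its
# lower bound 1.5, with x[1] = x[0]^2 = 2.25.
```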
x = lsqnonlin(fun,x0,lb,ub,options) minimizes with the optimization options specified in options. Use optimoptions to set these options. Pass empty matrices for lb and ub if no bounds exist.
x = lsqnonlin(problem) finds the minimum for problem, where problem is a structure described in Input Arguments. Create the problem structure by exporting a problem from the Optimization app, as described in Exporting Your Work.
[x,resnorm] = lsqnonlin(...) returns the value of the squared 2-norm of the residual at x: sum(fun(x).^2).
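The resnorm quantity is easy to reproduce by hand. A SciPy sketch (the inconsistent two-equation system is a made-up example; SciPy's least_squares reports half this value as res.cost):

```python
import numpy as np
from scipy.optimize import least_squares

def fun(x):
    return np.array([x[0] - 3.0, x[0] + 1.0])  # inconsistent: no zero residual exists

res = least_squares(fun, x0=np.array([0.0]))
resnorm = np.sum(fun(res.x) ** 2)  # squared 2-norm of the residual, like resnorm
# Minimum at x = 1, residuals [-2, 2], so resnorm = 8 and res.cost = 4.
```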
[x,resnorm,residual] = lsqnonlin(...) returns the value of the residual fun(x) at the solution x.
[x,resnorm,residual,exitflag] = lsqnonlin(...) returns a value exitflag that describes the exit condition.
[x,resnorm,residual,exitflag,output] = lsqnonlin(...) returns a structure output that contains information about the optimization.
[x,resnorm,residual,exitflag,output,lambda] = lsqnonlin(...) returns a structure lambda whose fields contain the Lagrange multipliers at the solution x.
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(...) returns the Jacobian of fun at the solution x.
Note: If the specified input bounds for a problem are inconsistent, the output x is x0 and the outputs resnorm and residual are [].
Function Arguments contains general descriptions of arguments passed into lsqnonlin. This section provides function-specific details for fun, options, and problem:
fun | The function whose sum of squares is minimized. fun is a function that accepts a vector x and returns a vector F, the objective functions evaluated at x. The function fun can be specified as a function handle to a file:

x = lsqnonlin(@myfun,x0)

where myfun is a MATLAB function such as

function F = myfun(x)
F = ...            % Compute function values at x

fun can also be a function handle for an anonymous function:

x = lsqnonlin(@(x)sin(x.*x),x0);

If the user-defined values for x and F are matrices, they are converted to a vector using linear indexing.

If the Jacobian can also be computed and the Jacobian option is 'on', set by

options = optimoptions('lsqnonlin','Jacobian','on')

then the function fun must return, in a second output argument, the Jacobian value J (a matrix) at x:

function [F,J] = myfun(x)
F = ...            % Objective function values at x
if nargout > 1     % Two output arguments
    J = ...        % Jacobian of the function evaluated at x
end

If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, the Jacobian J is an m-by-n matrix where J(i,j) is the partial derivative of F(i) with respect to x(j). (The Jacobian J is the transpose of the gradient of F.)

options | Options provides the function-specific details for the options values.

problem | Structure with the fields:
objective | Objective function
x0 | Initial point for x
lb | Vector of lower bounds
ub | Vector of upper bounds
solver | 'lsqnonlin'
options | Options created with optimoptions
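Supplying an analytic Jacobian alongside the residual, as the two-output myfun above does, has a direct SciPy analogue via the jac argument (a sketch with the same illustrative Rosenbrock residuals; the solver checks the m-by-n shape of J against the residual length):

```python
import numpy as np
from scipy.optimize import least_squares

def fun(x):
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def jac(x):
    # J[i, j] = d f_i / d x_j, an m-by-n matrix, playing the role of the
    # second output of fun when the Jacobian option is 'on'.
    return np.array([[-20.0 * x[0], 10.0],
                     [-1.0, 0.0]])

res = least_squares(fun, x0=np.array([-1.2, 1.0]), jac=jac)
# With the exact Jacobian the solver avoids finite-difference evaluations.
```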
Function Arguments contains general descriptions of arguments returned by lsqnonlin. This section provides function-specific details for exitflag, lambda, and output:
exitflag | Integer identifying the reason the algorithm terminated. The following lists the values of exitflag and the corresponding exit conditions:
1 | Function converged to a solution x.
2 | Change in x was less than the specified tolerance.
3 | Change in the residual was less than the specified tolerance.
4 | Magnitude of search direction was smaller than the specified tolerance.
0 | Number of iterations exceeded options.MaxIter or number of function evaluations exceeded options.MaxFunEvals.
-1 | Output function terminated the algorithm.
-2 | Problem is infeasible: the bounds lb and ub are inconsistent.
-4 | Line search could not sufficiently decrease the residual along the current search direction.

lambda | Structure containing the Lagrange multipliers at the solution x. The fields are:
lower | Lower bounds lb
upper | Upper bounds ub

output | Structure containing information about the optimization. The fields of the structure are:
firstorderopt | Measure of first-order optimality
iterations | Number of iterations taken
funcCount | The number of function evaluations
cgiterations | Total number of PCG iterations (trust-region-reflective algorithm only)
stepsize | Final displacement in x
algorithm | Optimization algorithm used
message | Exit message
Optimization options. Set or change options using the optimoptions function. Some options apply to all algorithms, some are only relevant when you are using the trust-region-reflective algorithm, and others are only relevant when you are using the Levenberg-Marquardt algorithm. See Optimization Options Reference for detailed information.
Both algorithms use the following options:
Algorithm | Choose between 'trust-region-reflective' (default) and 'levenberg-marquardt'.
DerivativeCheck | Compare user-supplied derivatives (gradients of objective or constraints) to finite-differencing derivatives. The choices are 'on' or 'off' (default).
Diagnostics | Display diagnostic information about the function to be minimized or solved. The choices are 'on' or 'off' (default).
DiffMaxChange | Maximum change in variables for finite-difference gradients (a positive scalar). The default is Inf.
DiffMinChange | Minimum change in variables for finite-difference gradients (a positive scalar). The default is 0.
Display | Level of display: 'off' or 'none' displays no output; 'iter' displays output at each iteration; 'final' (default) displays just the final output.
FinDiffRelStep | Scalar or vector step size factor. When you set FinDiffRelStep to a vector v, forward finite differences delta are
delta = v.*sign(x).*max(abs(x),TypicalX);
and central finite differences are
delta = v.*max(abs(x),TypicalX);
Scalar FinDiffRelStep expands to a vector. The default is sqrt(eps) for forward finite differences, and eps^(1/3) for central finite differences.
FinDiffType | Finite differences, used to estimate gradients, are either 'forward' (default) or 'central' (centered). 'central' takes twice as many function evaluations, but should be more accurate. The algorithm is careful to obey bounds when estimating both types of finite differences. So, for example, it could take a backward, rather than a forward, difference to avoid evaluating at a point outside bounds.
FunValCheck | Check whether function values are valid. 'on' displays an error when the function returns a value that is complex, Inf, or NaN. The default, 'off', displays no error.
Jacobian | If 'on', lsqnonlin uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobMult), for the objective function. If 'off' (default), lsqnonlin approximates the Jacobian using finite differences.
MaxFunEvals | Maximum number of function evaluations allowed, a positive integer. The default is 100*numberOfVariables.
MaxIter | Maximum number of iterations allowed, a positive integer. The default is 400.
OutputFcn | Specify one or more user-defined functions that an optimization function calls at each iteration, either as a function handle or as a cell array of function handles. The default is none ([]).
PlotFcns | Plots various measures of progress while the algorithm executes; select from predefined plots or write your own. Pass a function handle or a cell array of function handles. The default is none ([]). For information on writing a custom plot function, see Plot Functions.
TolFun | Termination tolerance on the function value, a positive scalar. The default is 1e-6.
TolX | Termination tolerance on x, a positive scalar. The default is 1e-6.
TypicalX | Typical x values. The number of elements in TypicalX is equal to the number of elements in x0, the starting point. The default value is ones(numberofvariables,1).
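The forward-versus-central trade-off behind FinDiffType is easy to see numerically. A plain NumPy sketch (the function exp and the step size are illustrative choices, not taken from this page):

```python
import numpy as np

# Forward and central difference approximations to d/dx exp(x) at x = 1.
# Central differences are O(h^2) accurate versus O(h) for forward, at the
# cost of twice as many function evaluations.
f, x, h = np.exp, 1.0, 1e-5
forward = (f(x + h) - f(x)) / h
central = (f(x + h) - f(x - h)) / (2.0 * h)
exact = np.exp(1.0)
# The central estimate is markedly closer to the exact derivative.
```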
The trust-region-reflective algorithm uses the following options:
JacobMult | Function handle for the Jacobian multiply function. For large-scale structured problems, this function computes a Jacobian matrix product J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form
W = jmfun(Jinfo,Y,flag)
where Jinfo contains the matrix used to compute the product. Jinfo must be the same as the second argument returned by the objective function, for example, by
[F,Jinfo] = fun(x)
Y is a matrix with the same number of rows as there are dimensions in the problem. flag determines which product to compute: if flag == 0, W = J'*(J*Y); if flag > 0, W = J*Y; if flag < 0, W = J'*Y. In each case, J is not formed explicitly. See Minimization with Dense Structured Hessian, Linear Equalities and Jacobian Multiply Function with Linear Least Squares for similar examples.
JacobPattern | Sparsity pattern of the Jacobian for finite differencing. Set JacobPattern(i,j) = 1 when fun(i) depends on x(j); otherwise, set JacobPattern(i,j) = 0. Use JacobPattern when it is inconvenient to compute the Jacobian matrix J in fun, though you can determine (say, by inspection) when fun(i) depends on x(j). In the worst case, if the structure is unknown, do not set JacobPattern; the default behavior is as if JacobPattern is a dense matrix of ones, and a full finite-difference approximation is computed in each iteration, which can be very expensive for large problems.
MaxPCGIter | Maximum number of PCG (preconditioned conjugate gradient) iterations, a positive scalar. The default is max(1,floor(numberOfVariables/2)).
PrecondBandWidth | Upper bandwidth of preconditioner for PCG, a nonnegative integer. The default is Inf, which means a direct factorization (Cholesky) is used rather than the conjugate gradients (CG). The direct factorization is computationally more expensive than CG, but produces a better quality step towards the solution.
TolPCG | Termination tolerance on the PCG iteration, a positive scalar. The default is 0.1.
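The JacobPattern idea, telling the solver which residuals depend on which variables so sparse finite differencing can be used, has a SciPy analogue in the jac_sparsity argument. A hedged sketch on a made-up banded problem where residual i couples only x[i] and x[i+1]:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.sparse import lil_matrix

n = 20  # number of variables; n-1 residuals with banded structure

def fun(x):
    return x[:-1] ** 2 + x[1:] - 1.0  # residual i uses only x[i] and x[i+1]

# Sparsity pattern: pattern[i, j] = 1 where residual i depends on x[j],
# playing the role of JacobPattern for finite differencing.
pattern = lil_matrix((n - 1, n), dtype=int)
for i in range(n - 1):
    pattern[i, i] = 1
    pattern[i, i + 1] = 1

res = least_squares(fun, x0=np.full(n, 0.5), jac_sparsity=pattern)
# The system is solvable exactly, so the residual is driven near zero.
```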
The Levenberg-Marquardt algorithm uses the following options:
InitDamping | Initial value of the Levenberg-Marquardt parameter, a positive scalar. The default is 1e-2.


Find x that minimizes
$$\sum _{k=1}^{10}{\left(2+2k-{e}^{k{x}_{1}}-{e}^{k{x}_{2}}\right)}^{2},$$
starting at the point x = [0.3, 0.4].
Because lsqnonlin assumes that the sum of squares is not explicitly formed in the user-defined function, the function passed to lsqnonlin should instead compute the vector-valued function
$${F}_{k}(x)=2+2k-{e}^{k{x}_{1}}-{e}^{k{x}_{2}},$$
for k = 1 to 10 (that is, F should have 10 components).
First, write a file to compute the 10-component vector F.
function F = myfun(x)
k = 1:10;
F = 2 + 2*k - exp(k*x(1)) - exp(k*x(2));
Next, invoke an optimization routine.
x0 = [0.3 0.4]                      % Starting guess
[x,resnorm] = lsqnonlin(@myfun,x0); % Invoke optimizer
After about 24 function evaluations, this example gives the solution
x,resnorm

x =
    0.2578    0.2578

resnorm =
  124.3622
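The same fit can be reproduced outside MATLAB. A hedged SciPy version of this example (same residuals, same starting point; SciPy's solver and stopping rules differ from lsqnonlin's, but it reaches the same symmetric minimizer):

```python
import numpy as np
from scipy.optimize import least_squares

# The 10-component residual F_k(x) = 2 + 2k - exp(k*x1) - exp(k*x2).
def myfun(x):
    k = np.arange(1, 11)
    return 2.0 + 2.0 * k - np.exp(k * x[0]) - np.exp(k * x[1])

res = least_squares(myfun, x0=np.array([0.3, 0.4]))
resnorm = np.sum(res.fun ** 2)
# Both components converge to about 0.2578, with resnorm about 124.36.
```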
You can use the trust-region-reflective algorithm in lsqnonlin, lsqcurvefit, and fsolve with small- to medium-scale problems without computing the Jacobian in fun or providing the Jacobian sparsity pattern. (This also applies to using fmincon or fminunc without computing the Hessian or supplying the Hessian sparsity pattern.) How small is small to medium-scale? No absolute answer is available, as it depends on the amount of virtual memory in your computer system configuration.
Suppose your problem has m equations and n unknowns. If the command J = sparse(ones(m,n)) causes an Out of memory error on your machine, then this is certainly too large a problem. If it does not result in an error, the problem might still be too large. You can only find out by running it and seeing if MATLAB runs within the amount of virtual memory available on your system.
The trust-region-reflective method does not allow equal upper and lower bounds. For example, if lb(2)==ub(2), lsqnonlin gives the error
Equal upper and lower bounds not permitted.
(lsqnonlin does not handle equality constraints, which is another way to formulate equal bounds. If equality constraints are present, use fmincon, fminimax, or fgoalattain for alternative formulations where equality constraints can be included.)
The function to be minimized must be continuous. lsqnonlin might only give local solutions.
lsqnonlin can solve complex-valued problems directly with the 'levenberg-marquardt' algorithm. However, this algorithm does not accept bound constraints. For a complex problem with bound constraints, split the variables into real and imaginary parts, and use the 'trust-region-reflective' algorithm. See Fit a Model to Complex-Valued Data.
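The split into real and imaginary parts works in any least-squares solver. A hedged SciPy sketch fitting a complex exponential z = c1*exp(c2*t) to synthetic, noise-free data (the model, data, and starting point are made-up for illustration); the complex residual is stacked into one real vector:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic complex data from c1 = 1 + 0.5i, c2 = -0.4 + 0.3i.
t = np.linspace(0.0, 1.0, 8)
z = (1.0 + 0.5j) * np.exp((-0.4 + 0.3j) * t)

def fun(c):
    # c = [Re c1, Im c1, Re c2, Im c2]: the complex unknowns split into
    # real and imaginary parts so the residual vector is real-valued.
    pred = (c[0] + 1j * c[1]) * np.exp((c[2] + 1j * c[3]) * t)
    r = pred - z
    return np.concatenate([r.real, r.imag])

res = least_squares(fun, x0=np.array([1.0, 0.4, -0.3, 0.2]))
# The fit recovers [1.0, 0.5, -0.4, 0.3] to high accuracy.
```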
The trust-region-reflective algorithm for lsqnonlin does not solve underdetermined systems; it requires that the number of equations, i.e., the row dimension of F, be at least as great as the number of variables. In the underdetermined case, the Levenberg-Marquardt algorithm is used instead.
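The equations-versus-variables restriction surfaces in other solvers too, though the roles differ: in SciPy it is the 'lm' method that requires at least as many residuals as variables, while its trust-region 'trf' method accepts underdetermined systems. A minimal made-up example with one residual and two unknowns:

```python
import numpy as np
from scipy.optimize import least_squares

def fun(x):
    return np.array([x[0] + x[1] - 2.0])  # m = 1 residual, n = 2 unknowns

try:
    least_squares(fun, x0=np.zeros(2), method='lm')  # m < n is rejected
except Exception as e:
    print("lm rejects m < n:", e)

res = least_squares(fun, x0=np.zeros(2), method='trf')  # underdetermined OK
# Any point on the line x0 + x1 = 2 gives a zero residual.
```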
The preconditioner computation used in the preconditioned conjugate gradient part of the trustregionreflective method forms J^{T}J (where J is the Jacobian matrix) before computing the preconditioner; therefore, a row of J with many nonzeros, which results in a nearly dense product J^{T}J, can lead to a costly solution process for large problems.
If components of x have no upper (or lower) bounds, lsqnonlin prefers that the corresponding components of ub (or lb) be set to Inf (or -Inf for lower bounds) as opposed to an arbitrary but very large positive (or negative for lower bounds) number.
Trust-Region-Reflective Problem Coverage and Requirements
For Large Problems:
- Provide the sparsity structure of the Jacobian, or compute the Jacobian in fun.
- The Jacobian should be sparse.
The Levenberg-Marquardt algorithm does not handle bound constraints. Since the trust-region-reflective algorithm does not handle underdetermined systems and the Levenberg-Marquardt algorithm does not handle bound constraints, problems with both these characteristics cannot be solved by lsqnonlin.
[1] Coleman, T.F. and Y. Li, "An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds," SIAM Journal on Optimization, Vol. 6, pp. 418–445, 1996.
[2] Coleman, T.F. and Y. Li, "On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds," Mathematical Programming, Vol. 67, Number 2, pp. 189–224, 1994.
[3] Dennis, J.E., Jr., "Nonlinear Least-Squares," State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269–312, 1977.
[4] Levenberg, K., "A Method for the Solution of Certain Problems in Least-Squares," Quarterly Applied Math. 2, pp. 164–168, 1944.
[5] Marquardt, D., "An Algorithm for Least-Squares Estimation of Nonlinear Parameters," SIAM Journal Applied Math., Vol. 11, pp. 431–441, 1963.
[6] Moré, J.J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer-Verlag, pp. 105–116, 1977.