
How the Software Formulates Parameter Estimation as an Optimization Problem

Overview of Parameter Estimation as an Optimization Problem

When you perform parameter estimation, the software formulates an optimization problem. The solution of this optimization problem is the set of estimated parameter values. The optimization problem consists of:

  • x — Design variables. The model parameters and initial states to be estimated.

  • F(x) — Objective function. A function that calculates a measure of the difference between the simulated and measured responses. Also called cost function or estimation error.

  • (Optional) x̲ ≤ x ≤ x̄ — Bounds. Limits on the estimated parameter values.

  • (Optional) C(x) — Constraint function. A function that specifies restrictions on the design variables.

The optimization solver tunes the values of the design variables to satisfy the specified objectives and constraints. The exact formulation of the optimization depends on the optimization method that you use.

Cost Function

The software tunes the model parameters to obtain a simulated response (ysim) that tracks the measured response or reference signal (yref). To do so, the solver minimizes the cost function or estimation error, a measure of the difference between the simulated and measured responses. The cost function, F(x), is the objective function of the optimization problem.

Types

The raw estimation error, e(t), is defined as:

e(t) = yref(t) − ysim(t)

e(t) is also referred to as the error residuals or, simply, residuals.

Simulink® Design Optimization™ software provides the following cost functions for processing e(t):

  • Sum squared error (default) — 'SSE'

    F(x) = Σ_{t=0}^{t_N} e(t) × e(t)

    N is the number of samples.

  • Sum absolute error — 'SAE'

    F(x) = Σ_{t=0}^{t_N} |e(t)|

    N is the number of samples.

  • Raw error — 'Residuals' (available only at the command line)

    F(x) = [e(0) … e(N)]

    N is the number of samples.

  • Custom function — N/A (available only at the command line)
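As an illustration of how these cost functions process the residuals, here is a minimal NumPy sketch; the measured and simulated response values are hypothetical sample data, not toolbox output:

```python
import numpy as np

# Hypothetical measured and simulated responses on a common time base.
y_ref = np.array([1.0, 0.8, 0.6, 0.5])
y_sim = np.array([0.9, 0.85, 0.55, 0.5])

# Raw estimation error: e(t) = yref(t) - ysim(t).
e = y_ref - y_sim

sse = np.sum(e * e)        # 'SSE': sum squared error (default)
sae = np.sum(np.abs(e))    # 'SAE': sum absolute error
residuals = e              # 'Residuals': the raw error vector itself
```

Note that 'SSE' and 'SAE' reduce the error to a scalar, while 'Residuals' keeps the full error vector, which is the form a least-squares solver consumes.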

Time Base

The software evaluates the cost function over a specific time interval. This interval depends on the measured signal time base and the simulated signal time base.

  • The measured signal time base consists of all the time points for which the measured signal is specified. If there are multiple measured signals, this time base is the union of their time points.

  • The simulated signal time base consists of all the time points for which the model is simulated.

If the model uses a variable-step solver, then the simulated signal time base can change from one optimization iteration to another. The simulated and measured signal time bases can be different. The software evaluates the cost function for only the time interval that is common to both. By default, the software uses only the time points specified by the measured signal in the common time interval.

  • In the GUI, you can specify the simulation start and stop times in the Simulation time area of the Simulation Options dialog box.

  • At the command line, the software specifies the simulation stop time as the last point of the measured signal time base. For example, the following code simulates the model until the end time of the longest running output signal of exp, an sdo.Experiment object:

    sim_obj = createSimulator(exp);
    sim_obj = sim(sim_obj);

    sim_obj contains the simulated response for the model associated with exp.
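The intersection logic described above can be sketched in Python; the time vectors below are hypothetical stand-ins for the measured time base and a variable-step simulated time base:

```python
import numpy as np

# Hypothetical time bases: a measured signal and a variable-step simulation.
t_meas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
t_sim = np.array([0.0, 0.3, 0.7, 1.2, 1.8, 2.5])

# Interval common to both time bases.
t_start = max(t_meas[0], t_sim[0])
t_stop = min(t_meas[-1], t_sim[-1])

# By default the cost is evaluated at the measured time points that fall
# inside the common interval.
t_cost = t_meas[(t_meas >= t_start) & (t_meas <= t_stop)]
```

Because the variable-step time base can change between iterations, evaluating at the measured time points keeps the residual vector at a fixed length across iterations.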

Bounds and Constraints

You can specify bounds for the design variables (estimated model parameters), based on your knowledge of the system. Bounds are expressed as:

x̲ ≤ x ≤ x̄

x̲ and x̄ are the lower and upper bounds for the design variables.

For example, in a battery discharging experiment, the estimated battery initial charge must be greater than zero and less than Inf. These bounds are expressed as:

0 < x < ∞

For an example of how to specify these types of bounds, see Estimate Model Parameters and Initial States (Code).

You can also specify other constraints, C(x), on the design variables at the command line. C(x) can be linear or nonlinear and can describe equalities or inequalities. C(x) can also specify multiparameter constraints. For example, for a simple friction model, C(x) can specify that the static friction coefficient must be greater than or equal to the dynamic friction coefficient. One way of expressing this constraint is:

C(x): x1 − x2 ≤ 0

x1 and x2 are the dynamic and static friction coefficients, respectively.

For an example of how to specify a constraint, see Estimate Model Parameters with Parameter Constraints (Code).

Optimization Methods and Problem Formulations

An optimization problem can be one of the following types:

  • Minimization problem — Minimizes an objective function, F(x). You specify the measured signal that you want the model output to track. You can optionally specify bounds for the estimated parameters.

  • Mixed minimization and feasibility problem — Minimizes an objective function, F(x), subject to specified bounds and constraints, C(x). You specify the measured signal that you want the model to track and bounds and constraints for the estimated parameters.

  • Feasibility problem — Finds a solution that satisfies the specified constraints, C(x). You specify only bounds and constraints for the estimated parameters. This type of problem is not common in parameter estimation.

The optimization method that you specify determines the formulation of the estimation problem. The software provides the following optimization methods:

  • User interface: Nonlinear Least Squares

  • Command line: 'lsqnonlin'

Minimizes the sum of squares of the residuals. This is the recommended method for parameter estimation.

This method requires a vector of error residuals, computed using a fixed time base. Do not use this approach if you have a scalar cost function or if the number of error residuals can change from one iteration to another.

This method uses the Optimization Toolbox™ function, lsqnonlin.

  • Minimization Problem

    min_x ||F(x)||₂² = min_x (f1(x)² + f2(x)² + … + fn(x)²)

    s.t.  x̲ ≤ x ≤ x̄

    f1(x), f2(x),...,fn(x) represent residuals. n is the number of samples.

  • Mixed Minimization and Feasibility Problem (since R2023b)

    min_x ||F(x)||₂² = min_x (f1(x)² + f2(x)² + … + fn(x)²)

    s.t.  C(x) ≤ 0,  x̲ ≤ x ≤ x̄

  • Feasibility Problem

    Not supported.
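This least-squares formulation can be sketched with SciPy's least_squares function, a rough analogue of lsqnonlin; the exponential model and data below are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical model y(t) = exp(-a*t); estimate a from noise-free data.
t = np.linspace(0.0, 2.0, 20)
a_true = 1.5
y_ref = np.exp(-a_true * t)

def residuals(x):
    # One residual per sample: f_i(x) = yref(t_i) - ysim(t_i).
    # The residual vector has a fixed length, as the method requires.
    return y_ref - np.exp(-x[0] * t)

# Minimize f1(x)^2 + ... + fn(x)^2 subject to bounds 0 <= a <= 10.
result = least_squares(residuals, x0=[0.5], bounds=([0.0], [10.0]))
```

The solver receives the residual vector itself, not a scalar cost, which is why this method cannot be used with a scalar custom cost function.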

  • User interface: Gradient Descent

  • Command line: 'fmincon'

A general nonlinear solver that uses the cost function gradient.

Use this approach if you want to specify one or any combination of the following:

  • Custom cost functions

  • Parameter-based constraints

  • Signal-based constraints

This method uses the Optimization Toolbox function, fmincon.

For information on how the gradient is computed, see Gradient Computations.

  • Minimization Problem

    min_x F(x)

    s.t.  x̲ ≤ x ≤ x̄

  • Mixed Minimization and Feasibility Problem

    min_x F(x)

    s.t.  C(x) ≤ 0,  x̲ ≤ x ≤ x̄

    Note

    When tracking a reference signal, the software ignores the maximally feasible solution option.

  • Feasibility Problem

    • If you select the maximally feasible solution option (i.e., the optimization continues after an initial feasible solution is found), the software uses the following problem formulation:

      min_{x,γ} γ

      s.t.  C(x) ≤ γ,  x̲ ≤ x ≤ x̄,  γ ≥ 0

      γ is a slack variable that permits a feasible solution with C(x) ≤ γ rather than C(x) ≤ 0.

    • If you do not select the maximally feasible solution option (i.e., the optimization terminates as soon as a feasible solution is found), the software uses the following problem formulation:

      min_x 0

      s.t.  C(x) ≤ 0,  x̲ ≤ x ≤ x̄
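The mixed minimization and feasibility formulation can be sketched with SciPy's minimize function, a rough analogue of fmincon; the quadratic cost and the friction-style targets below are hypothetical:

```python
from scipy.optimize import minimize

# Hypothetical quadratic cost whose unconstrained minimum (0.4, 0.2)
# violates the friction-style constraint x1 - x2 <= 0.
def cost(x):
    return (x[0] - 0.4) ** 2 + (x[1] - 0.2) ** 2

# SciPy expects inequality constraints as g(x) >= 0, so
# C(x) = x1 - x2 <= 0 becomes g(x) = x2 - x1 >= 0.
constraints = [{"type": "ineq", "fun": lambda x: x[1] - x[0]}]
bounds = [(0.0, 1.0), (0.0, 1.0)]

result = minimize(cost, x0=[0.1, 0.5], method="SLSQP",
                  bounds=bounds, constraints=constraints)
```

Because the unconstrained minimum is infeasible, the constrained optimum lands on the boundary x1 = x2 = 0.3, halfway between the two targets.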

  • User interface: Simplex Search

  • Command line: 'fminsearch'

Based on the Nelder-Mead algorithm, this approach does not use the cost function gradient.

Use this approach if your cost function or constraints are not continuous or differentiable.

This method uses the Optimization Toolbox functions fminsearch and fminbnd. fminbnd is used if one scalar parameter is being optimized; otherwise, fminsearch is used. You cannot specify parameter bounds, x̲ ≤ x ≤ x̄, with fminsearch.

  • Minimization Problem

    min_x F(x)

  • Mixed Minimization and Feasibility Problem

    The software formulates the problem in two steps:

    1. Finds a feasible solution.

      min_x max(C(x))

    2. Minimizes the objective. The software uses the results from step 1 as initial guesses. It maintains feasibility by introducing a discontinuous barrier in the optimization objective.

      min_x Γ(x), where Γ(x) = ∞ if max(C(x)) > 0, and Γ(x) = F(x) otherwise.

  • Feasibility Problem

      min_x max(C(x))
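The two-step gradient-free scheme can be sketched with SciPy's Nelder-Mead solver, a rough analogue of fminsearch; the disk constraint and quadratic cost below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def C(x):
    # Hypothetical single constraint (unit disk); with one constraint,
    # max(C(x)) is simply C(x). The point is feasible when C(x) <= 0.
    return x[0] ** 2 + x[1] ** 2 - 1.0

def F(x):
    # Hypothetical cost pulling toward (2, 0), outside the feasible disk.
    return (x[0] - 2.0) ** 2 + x[1] ** 2

# Step 1: find a feasible point by minimizing max(C(x)).
step1 = minimize(C, x0=[2.0, 2.0], method="Nelder-Mead")

# Step 2: minimize F(x) from the feasible point, with a discontinuous
# barrier that assigns an infinite cost to any infeasible trial point.
def barrier_cost(x):
    return np.inf if C(x) > 0 else F(x)

x_feas = step1.x
# Explicit initial simplex so the step-2 search does not start degenerate
# near the origin.
sim0 = np.vstack([x_feas, x_feas + [0.5, 0.0], x_feas + [0.0, 0.5]])
step2 = minimize(barrier_cost, x_feas, method="Nelder-Mead",
                 options={"initial_simplex": sim0})
```

The barrier keeps every accepted simplex vertex feasible, so the final point approaches the constrained optimum on the disk boundary near (1, 0).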

  • User interface: Pattern Search

  • Command line: 'patternsearch'

A direct search method based on the generalized pattern search algorithm. This method does not use the cost function gradient.

Use this approach if your cost function or constraints are not continuous or differentiable.

This method uses the Global Optimization Toolbox function, patternsearch (Global Optimization Toolbox).

  • Minimization Problem

    min_x F(x)

    s.t.  x̲ ≤ x ≤ x̄

  • Mixed Minimization and Feasibility Problem

    The software formulates the problem in two steps:

    1. Finds a feasible solution.

      min_x max(C(x))

      s.t.  x̲ ≤ x ≤ x̄

    2. Minimizes the objective. The software uses the results from step 1 as initial guesses. It maintains feasibility by introducing a discontinuous barrier in the optimization objective.

      min_x Γ(x)

      s.t.  x̲ ≤ x ≤ x̄

      where Γ(x) = ∞ if max(C(x)) > 0, and Γ(x) = F(x) otherwise.

  • Feasibility Problem

      min_x max(C(x))

      s.t.  x̲ ≤ x ≤ x̄
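The pattern search idea can be sketched in a few lines of Python; this is a simplified illustration of a generalized pattern search with bounds, not the patternsearch implementation, and the bounded quadratic cost is hypothetical:

```python
import numpy as np

def pattern_search(f, x0, lb, ub, step=0.5, tol=1e-6, max_iter=1000):
    # Poll the 2n coordinate directions around the current point; accept
    # any improvement, and halve the mesh size when no poll point improves.
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(x.size):
            for d in (+1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(trial[i] + d * step, lb[i], ub[i])
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step *= 0.5
    return x, fx

# Hypothetical bounded quadratic: minimum (1, -0.5) inside [0,2] x [-1,1].
x_opt, f_opt = pattern_search(lambda z: (z[0] - 1.0) ** 2 + (z[1] + 0.5) ** 2,
                              x0=[0.0, 0.0],
                              lb=np.array([0.0, -1.0]),
                              ub=np.array([2.0, 1.0]))
```

Only cost comparisons drive the search: the mesh is halved whenever no poll point improves, which is what lets the method handle non-differentiable costs.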
