Interpreting Results of Robust Tuning

When you tune a control system with systune or Control System Tuner, the software reports on the tuning progress and results as described in Interpret Numeric Tuning Results. When you tune a control system with parameter uncertainty, the results contain additional information about the progress of the tuning algorithm toward tuning for the worst-case parameter values.

Robust Tuning Algorithm

The software begins the robust tuning process by tuning for the nominal plant model. Then, the software performs the following steps iteratively:

  1. Identifies a parameter combination within the uncertainty ranges that violates the design requirements (analysis step).

  2. Adds a model evaluated at these parameter values to the set of models over which the software is tuning.

  3. Repeats tuning for the expanded model set (tuning step).

This process terminates when the analysis step is unable to find a parameter combination that yields a significantly worse performance index than the value obtained in the last iteration of the tuning step. The performance index is a weighted combination of the soft constraint value fSoft and the hard constraint value gHard. (See Interpret Numeric Tuning Results for more information.)
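The following sketch shows the kind of setup that triggers this algorithm. The plant, goal, and parameter values here are illustrative assumptions: an uncertain second-order plant described with ureal parameters, closed around a tunable PI controller. When the model being tuned contains uncertain blocks, systune runs the robust tuning iteration described above.

% Uncertain plant (illustrative parameter values)
wn   = ureal('wn',3,'Percentage',20);      % uncertain natural frequency
zeta = ureal('zeta',0.6,'Percentage',30);  % uncertain damping ratio
G    = tf(wn^2,[1 2*zeta*wn wn^2]);        % uncertain plant model
G.InputName = 'u';  G.OutputName = 'y';

% Tunable controller and closed-loop model
C = tunablePID('C','pi');
C.InputName = 'e';  C.OutputName = 'u';
Sum = sumblk('e = r - y');
CL0 = connect(G,C,Sum,'r','y');            % contains tunable and uncertain blocks

% Soft requirement and robust tuning
Req = TuningGoal.Tracking('r','y',1);      % track r with about 1 s response time
[CL,fSoft,gHard] = systune(CL0,Req);       % robust tuning over the uncertainty range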

Displayed Results

Each iteration of this process therefore yields a range of values for fSoft and for gHard. The minimum is the best value achieved in that iteration by tuning the controller parameters over all the models in the expanded model set. The maximum is the worst value the software can find in the uncertainty range using that design (that set of tuned controller-parameter values). These ranges appear in the default display at the command line and in the Tuning Report in Control System Tuner. For example, the following is a typical report for robust tuning of an uncertain system using only soft constraints.

Soft: [0.906,18.3], Hard: [-Inf,-Inf], Iterations = 106
Soft: [1.02,3.77], Hard: [-Inf,-Inf], Iterations = 55
Soft: [1.25,1.85], Hard: [-Inf,-Inf], Iterations = 67
Soft: [1.26,1.26], Hard: [-Inf,-Inf], Iterations = 24
Final: Soft = 1.26, Hard = -Inf, Iterations = 252

Each of the first four lines corresponds to one iteration in the robust tuning process. In the first iteration, the soft goals are satisfied for the nominal system (fSoft < 1). That design is not robust against the entire uncertainty range, as shown by the worst-case fSoft = 18.3. Adding that worst-case model to the expanded model set, the algorithm finds a new design with fSoft = 1.02. Testing that design over the uncertainty range yields a worst case of fSoft = 3.77. With each iteration, the gap between the performance of the model set used for tuning and the worst-case performance narrows. In the final iteration, the worst-case performance matches the multi-model performance. The multi-model values typically increase as the algorithm tunes the controller against a larger set of models, so that the robust fSoft and gHard values are typically larger than the nominal values. systune returns the final values as output arguments.
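Continuing the earlier sketch (names and values remain illustrative), the second and third output arguments of systune hold these final robust values, and viewGoal lets you inspect the tuned design against the requirement:

[CL,fSoft,gHard,info] = systune(CL0,Req); % fSoft, gHard correspond to the Final line
fSoft                                     % final robust soft-constraint value
viewGoal(Req,CL)                          % visualize the tuned response against the goal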

Robust Tuning with Random Starts

When you use systuneOptions to set RandomStart > 0, the tuning software performs nominal tuning from each of the random starting points. It then performs the robust tuning process on each nominal design, starting with the best design. The “robustification” of any particular design is aborted when the minimum value of fSoft (the lower bound on robust performance) becomes much higher than the best robust performance achieved so far.

The default display includes the fSoft and gHard values for all the nominal designs and the results of each robust-tuning iteration. The software selects the best result of robust tuning from among the randomly started designs.
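As a sketch (the number of starting points here is arbitrary), you request random starts through systuneOptions and pass the options to systune; the [] placeholder indicates that this example has no hard requirements:

opts = systuneOptions('RandomStart',4);         % 4 additional random starting points
[CL,fSoft,gHard,info] = systune(CL0,Req,[],opts);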

Validation

The robust-tuning algorithm finds locally optimal designs that meet your design requirements. However, identifying the worst-case parameter combinations for a given design is a difficult process. Although it rarely happens in practice, it is possible for the algorithm to miss a worst-case parameter combination. Therefore, independent confirmation of robustness, such as using μ-analysis, is recommended.
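One simple, informal check is to sample the remaining uncertainty in the tuned closed loop and examine the responses; a formal certificate calls for a mu-based analysis tool. The sketch below assumes the tuned model CL from the earlier examples and requires Robust Control Toolbox for robstab:

CLs = usample(CL,20);         % 20 random parameter combinations from the uncertainty
step(CLs), grid on            % sampled responses should meet the design requirements
[stabmarg,wcu] = robstab(CL)  % mu-based robust stability margin and worst-case values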
