Validating Models After Estimation

Ways to Validate Models

You can use the following approaches to validate models (command-line sketches of several of these approaches follow the list):

  • Comparing simulated or predicted model output to measured output.

    See Simulate and Predict Identified Model Output.

    To simulate identified models in the Simulink® environment, see Simulate Identified Model in Simulink.

  • Analyzing the autocorrelation of the residuals and their cross-correlation with the input.

    See What Is Residual Analysis?

  • Analyzing the model response.

    For information about the response of the noise model, see Noise Spectrum Plots.

  • Plotting the poles and zeros of the linear parametric model.

    For more information, see Pole and Zero Plots.

  • Comparing the response of nonparametric models, such as impulse-, step-, and frequency-response models, to parametric models, such as linear polynomial models, state-space models, and nonlinear parametric models.

    Note

    Do not use this comparison when feedback is present in the system because feedback makes nonparametric models unreliable. To test if feedback is present in the system, use the advice command on the data.

  • Comparing models using the Akaike Information Criterion (AIC) or Akaike's Final Prediction Error (FPE).

    For more information, see the aic and fpe reference pages.

  • Plotting the linear and nonlinear blocks of Hammerstein-Wiener and nonlinear ARX models.
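
The following sketch compares simulated and predicted model output to measured output. It assumes you have already split your data into iddata objects ze (estimation) and zv (validation); the ARMAX orders are placeholders, not a recommendation.

    % Estimate a model from the estimation data (orders are illustrative).
    sys = armax(ze, [2 2 2 1]);

    % Overlay the simulated model output on the measured validation output.
    compare(zv, sys)

    % Compare five-step-ahead predictions instead of a pure simulation.
    compare(zv, sys, 5)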
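
For residual analysis, a corresponding sketch, again assuming validation data zv and an estimated model sys:

    % Plot the autocorrelation of the residuals and their cross-correlation
    % with the input, together with confidence bounds.
    resid(zv, sys)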
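
To examine the model response and the noise spectrum at the command line, you can use, for example, bode and spectrum (a sketch, assuming a linear identified model sys):

    bode(sys)       % frequency response of the measured input/output dynamics
    spectrum(sys)   % spectrum of the disturbance (noise) component of the model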
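
A pole-zero sketch with confidence regions, assuming sys is a linear identified model:

    h = iopzplot(sys);     % pole-zero map of the identified model
    showConfidence(h, 3)   % overlay 3-standard-deviation confidence regions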
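
To test for feedback and then compare a nonparametric estimate against a parametric model, a sketch along these lines applies (impulseest is assumed here as the nonparametric estimator):

    advice(ze)              % reports, among other things, possible feedback in the data
    imp = impulseest(ze);   % nonparametric impulse-response model
    step(imp, sys)          % overlay the step responses of the two models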
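
To compare candidate models by AIC or FPE, for example:

    sys1 = arx(ze, [2 2 1]);   % illustrative low-order model
    sys2 = arx(ze, [4 4 1]);   % illustrative higher-order model
    aic(sys1, sys2)            % smaller values indicate a better trade-off
    fpe(sys1, sys2)            % between fit quality and model complexity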
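
For Hammerstein-Wiener models, the plot command displays the model blocks; a sketch, with illustrative orders:

    mhw = nlhw(ze, [2 2 1]);   % Hammerstein-Wiener model with default nonlinearities
    plot(mhw)                  % plot the input nonlinearity, linear block,
                               % and output nonlinearity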

Displaying confidence intervals on supported plots helps you assess the uncertainty of model parameters. For more information, see Compute Model Uncertainty.
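
For example, a sketch that displays parameter uncertainties and a confidence region on a response plot, assuming an identified linear model sys:

    present(sys)           % parameter values with 1-standard-deviation uncertainties
    h = stepplot(sys);
    showConfidence(h, 3)   % show a 3-standard-deviation confidence region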

Data for Model Validation

For plots that compare model response to measured response and perform residual analysis, you designate two types of data sets: one for estimating the models (estimation data) and one for validating the models (validation data). Although you can use the same data set for both estimation and validation, you risk overfitting your data. Validating a model against an independent data set is called cross-validation.
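
For example, one minimal way to split a data set for cross-validation, assuming z is an iddata object containing 1000 samples:

    ze = z(1:500);      % estimation data: first half of the records
    zv = z(501:1000);   % validation data: second half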

Note

Validation data should have the same frequency content as the estimation data. If you detrended the estimation data, you must remove the same trend from the validation data. For more information about detrending, see Handling Offsets and Trends in Data.
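
A sketch of removing the same trend from both data sets, using detrend and the TrendInfo object it returns (ze and zv as above):

    [ze_d, Tr] = detrend(ze, 1);   % remove a linear trend from the estimation data
    zv_d = detrend(zv, Tr);        % remove that same trend from the validation data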