Discrete Time Systems Identification and Control M

(Master in Automation Engineering)

Prof. Roberto Diversi


Learning outcomes
The course aims to introduce the main techniques for the identification of discrete-time systems, with particular reference to the family of equation error models used for prediction and control. At the end of the course, students are able to run basic identification algorithms for modeling real processes and to evaluate the quality of the obtained models. Stochastic optimal estimation, Kalman prediction and filtering in the discrete-time setting, and advanced digital control schemes will also be introduced.

Course contents

Introduction
Systems and models. Mathematical models. Classification of models by modeling objectives. Physical modeling and system identification. Identification steps.

Brief review of stochastic processes
Random (stochastic) processes. First and second order moments: mean, variance, autocorrelation, autocovariance. Stationary and weakly stationary processes. Gaussian processes. Ergodic processes. Sample estimates of first and second order moments. White noise. Cross-correlation and cross-covariance of two stochastic processes. Independence, uncorrelatedness and orthogonality. Vector stochastic processes and their first and second order moments. Spectral density.
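
As a minimal illustration of the sample estimates listed above, the following Python/NumPy sketch (names and values are illustrative, not course material) estimates the mean, variance and autocovariance of a white noise realization; for white noise the normalized autocovariance should be close to 1 at lag 0 and close to 0 elsewhere:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
e = rng.normal(0.0, 1.0, N)   # realization of Gaussian white noise

# Sample estimates of the first and second order moments
m_hat = e.mean()
var_hat = e.var()

def autocov(x, tau):
    """Sample autocovariance of the 1-D array x at lag tau."""
    xc = x - x.mean()
    return xc[tau:] @ xc[:len(x) - tau] / len(x)

r = np.array([autocov(e, tau) for tau in range(10)])
print(m_hat, var_hat, r / r[0])   # r/r[0] ~ [1, 0, 0, ...] for white noise
```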

Stochastic models
Modeling disturbances by filtering white noise: ARMA processes, AR processes, MA processes. System representation by means of backward and forward shift operators. Equation error models: ARX, ARARX, ARMAX, ARARMAX. FIR models as approximations of impulse responses. Time series models: AR and ARMA models. Output error models and Box-Jenkins models.
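
As a sketch of how these model classes generate data by filtering white noise, the following Python fragment (all coefficients are illustrative) simulates an ARMAX process A(q^-1) y(t) = B(q^-1) u(t) + C(q^-1) e(t):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
N = 2000
u = rng.normal(size=N)               # input sequence (here: white noise)
e = rng.normal(scale=0.5, size=N)    # disturbance: white noise

a = [1.0, -1.5, 0.7]   # A(q^-1) = 1 - 1.5 q^-1 + 0.7 q^-2
b = [0.0, 1.0, 0.5]    # B(q^-1) = q^-1 + 0.5 q^-2 (leading zero = one-step delay)
c = [1.0, 0.4]         # C(q^-1) = 1 + 0.4 q^-1

# A y = B u + C e  <=>  y = (B/A) u + (C/A) e
y = lfilter(b, a, u) + lfilter(c, a, e)
```

Setting b to zero yields an ARMA time series model; setting c = [1.0] yields an ARX model.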

The identification problem
Definition of the identification problem. Parameter estimation and model order estimation. Identifiability and the concept of the true model. Estimator properties: unbiasedness, asymptotic unbiasedness, consistency, efficiency. Covariance of the estimate and its use as a performance index.
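
In formulas, with \hat{\theta}_N the estimate computed from N data points and \theta^0 the true parameter, the properties above read:

```latex
\begin{aligned}
&\text{unbiasedness:} && E[\hat{\theta}_N] = \theta^0 \quad \forall N,\\
&\text{asymptotic unbiasedness:} && \lim_{N\to\infty} E[\hat{\theta}_N] = \theta^0,\\
&\text{consistency:} && \hat{\theta}_N \to \theta^0 \ \text{(with probability 1) as } N\to\infty .
\end{aligned}
```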

The least squares method
Introduction to the least squares (LS) method: the linear regression form. Derivation of the LS estimate. Identifiability conditions. Geometrical interpretation of the LS estimate. Derivation of the LS estimate in the geometrical framework: the pseudoinverse of a matrix. Statistical properties of the LS estimator in the static case. Covariance of the estimate and its use as a performance index. Weighted least squares. The best linear unbiased estimator.
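
A minimal sketch of the static LS estimate in the linear regression form y = Phi theta + e (Python/NumPy; data, dimensions and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 200, 3
theta0 = np.array([1.0, -2.0, 0.5])            # "true" parameters (illustrative)
Phi = rng.normal(size=(N, n))                  # regressor matrix
y = Phi @ theta0 + 0.1 * rng.normal(size=N)    # noisy observations

# LS estimate via the pseudoinverse: theta = (Phi^T Phi)^-1 Phi^T y
theta_ls = np.linalg.pinv(Phi) @ y

# Covariance of the estimate: sigma^2 (Phi^T Phi)^-1, with sigma^2
# replaced by the sample variance of the residuals
res = y - Phi @ theta_ls
sigma2_hat = res @ res / (N - n)
cov_theta = sigma2_hat * np.linalg.inv(Phi.T @ Phi)
```
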
Least squares identification of dynamic equation error models. Hankel matrices. LS identification of FIR models and its statistical properties. Asymptotic properties. Quasi-stationary deterministic signals. LS identification of ARX models. Consistency of the estimate. Statistical properties of the LS estimate of ARX models: consistency, asymptotic distribution, covariance of the estimate. ARX optimal predictor. LS identification of autoregressive models.
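
For the dynamic case, the sketch below (Python; arx_ls is a hypothetical helper name) builds the regressors of an ARX(na, nb) model from recorded input/output data and computes the LS estimate:

```python
import numpy as np

def arx_ls(y, u, na, nb):
    """LS estimate of an ARX model
    y(t) + a1 y(t-1) + ... + a_na y(t-na) = b1 u(t-1) + ... + b_nb u(t-nb) + e(t),
    with y, u 1-D NumPy arrays. Returns [a1 ... a_na, b1 ... b_nb]."""
    n = max(na, nb)
    N = len(y)
    # phi(t) = [-y(t-1) ... -y(t-na), u(t-1) ... u(t-nb)], for t = n ... N-1
    Phi = np.column_stack(
        [-y[n - k: N - k] for k in range(1, na + 1)]
        + [u[n - k: N - k] for k in range(1, nb + 1)]
    )
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    return theta
```
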
Identifiability properties of LS estimates: persistency of excitation (PE) of input signals. PE properties of some possible input signals: white noise, step function, impulse function, ARMA signals, sum of sinusoids, pseudo-random binary sequence. Identifiability conditions for FIR, ARX and AR models.
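
PE of order n can be checked empirically by testing whether the sample covariance matrix of n delayed samples of the signal is nonsingular; a Python sketch (the tolerance and the function name are illustrative):

```python
import numpy as np

def is_pe_of_order(u, n, tol=1e-8):
    """Empirical check of persistency of excitation of order n:
    the n x n sample covariance of [u(t-1), ..., u(t-n)] must be positive definite."""
    N = len(u)
    Phi = np.column_stack([u[n - k: N - k] for k in range(1, n + 1)])
    R = Phi.T @ Phi / (N - n)
    return np.linalg.eigvalsh(R).min() > tol

rng = np.random.default_rng(3)
print(is_pe_of_order(rng.normal(size=5000), 10))   # white noise: PE of any order
print(is_pe_of_order(np.ones(5000), 2))            # step: PE of order 1 only -> False
```
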
Recursive least squares (RLS) identification. RLS algorithms: standard form, covariance form, standard form with inverse matrix updating, covariance form with inverse matrix updating. Choice of the initial values. Tracking parameter variations: weighted least squares and the forgetting factor. Recursive weighted least squares. Choice of the forgetting factor. Asymptotic behavior of RLS algorithms.
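
A single update of the RLS algorithm with forgetting factor, in the form with inverse matrix updating, can be sketched as follows (Python; the function name and the value lam = 0.98 are only illustrative):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One RLS step with forgetting factor lam:
    theta, P are the current estimate and (scaled) covariance,
    phi the current regressor, y the current output sample."""
    K = P @ phi / (lam + phi @ P @ phi)   # gain vector
    eps = y - phi @ theta                 # prediction error
    theta = theta + K * eps               # parameter update
    P = (P - np.outer(K, phi) @ P) / lam  # inverse-matrix (covariance) update
    return theta, P

# Typical choice of the initial values: theta(0) = 0, P(0) = alpha * I, alpha large
n = 4
theta, P = np.zeros(n), 1e4 * np.eye(n)
```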

Model order estimation and model validation
The chi-square distribution and its properties. Statistical hypothesis testing. Type I and type II errors. Example of a statistical hypothesis test. Asymptotic properties of the residual of the least squares estimation. Model order estimation methods: F-test, final prediction error criterion. Criteria with complexity terms: Akaike information criterion, minimum description length criterion. Model validation: whiteness tests on the LS residual sequence, tests of uncorrelatedness between input and residual, cross validation.
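
As a sketch of the order estimation criteria in their common textbook form (variants exist), with J the per-sample LS loss of the fitted model, N the number of data and n the number of parameters:

```python
import numpy as np

def order_criteria(J, N, n):
    """FPE, AIC and MDL for a model with n parameters, loss J, N data points.
    The candidate order minimizing the chosen criterion is selected."""
    fpe = J * (N + n) / (N - n)          # final prediction error
    aic = N * np.log(J) + 2 * n          # Akaike information criterion
    mdl = N * np.log(J) + n * np.log(N)  # minimum description length
    return fpe, aic, mdl
```

The values of J for increasing candidate orders would come from repeated LS fits, e.g. ARX models of growing order.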

The prediction error method
Inconsistency of the LS estimate for ARARX, ARMAX and ARMA models. The prediction error method. Optimal ARMAX and ARMA predictors. Introduction to the Newton-Raphson algorithm. PEM identification of ARMAX and ARMA models by simplifying the Newton-Raphson algorithm: the Gauss-Newton algorithm. Evaluation of the gradient of the residual. Choice of the initial estimate. Statistical properties of the PEM estimator. Applying PEM to equation error models, output error models and Box-Jenkins models.
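
A hedged sketch of PEM for an ARMAX model: the prediction errors eps(t) = C(q)^-1 [A(q) y(t) - B(q) u(t)] are minimized numerically. Here SciPy's least_squares (a trust-region/Levenberg-Marquardt solver, from the same family as the Gauss-Newton iteration) stands in for a hand-coded algorithm; data, orders and coefficients are illustrative, and in practice C(q) must be kept stable during the iterations:

```python
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import least_squares

def armax_residuals(params, y, u, na, nb, nc):
    """Prediction errors eps(t) = C(q)^-1 [A(q) y(t) - B(q) u(t)] of an ARMAX model."""
    a = np.concatenate(([1.0], params[:na]))
    b = np.concatenate(([0.0], params[na:na + nb]))   # leading zero: one-step delay
    c = np.concatenate(([1.0], params[na + nb:]))
    v = lfilter(a, [1.0], y) - lfilter(b, [1.0], u)   # A(q) y - B(q) u
    return lfilter([1.0], c, v)                       # filter by 1/C(q)

# y, u: recorded data (here simulated from an illustrative ARMAX model)
rng = np.random.default_rng(4)
u = rng.normal(size=2000)
e = 0.3 * rng.normal(size=2000)
y = (lfilter([0.0, 1.0, 0.5], [1.0, -1.5, 0.7], u)
     + lfilter([1.0, 0.4], [1.0, -1.5, 0.7], e))

na, nb, nc = 2, 2, 1
theta0 = np.zeros(na + nb + nc)   # in practice: initialize from an LS or IV estimate
sol = least_squares(armax_residuals, theta0, args=(y, u, na, nb, nc))
print(sol.x)   # estimated [a1 a2 b1 b2 c1]
```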

The instrumental variable method
Introduction to the instrumental variable (IV) method. Identification of ARMAX models by using the IV method. Consistency conditions. Choice of instruments. Identification of ARMA models by means of the IV method. Extended IV methods and Yule-Walker equations for ARMA and AR models. Statistical properties of the IV estimator. Identification of ARARX models by using the IV method and the LS estimation of AR models. Recursive IV algorithms. Identification of MA models. Approximation of MA models with high-order AR models. Identification of the moving average part of ARMAX and ARMA models.
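
The basic IV estimate replaces Phi^T in the normal equations with an instrument matrix Z, correlated with the regressors but uncorrelated with the noise: theta_IV = (Z^T Phi)^-1 Z^T y. A Python sketch using delayed inputs as instruments (one common choice; iv_arx is a hypothetical name):

```python
import numpy as np

def iv_arx(y, u, na, nb):
    """IV estimate for a model with ARX structure and colored equation noise.
    Regressors: [-y(t-1) ... -y(t-na), u(t-1) ... u(t-nb)];
    instruments: delayed inputs [u(t-1) ... u(t-na-nb)] (one common choice)."""
    n = na + nb
    N = len(y)
    Phi = np.column_stack(
        [-y[n - k: N - k] for k in range(1, na + 1)]
        + [u[n - k: N - k] for k in range(1, nb + 1)]
    )
    Z = np.column_stack([u[n - k: N - k] for k in range(1, na + nb + 1)])
    # theta = (Z^T Phi)^-1 Z^T y
    return np.linalg.solve(Z.T @ Phi, Z.T @ y[n:])
```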

Maximum likelihood
Introduction to maximum likelihood estimation. Maximum likelihood identification. The Gaussian case: equivalence between ML identification and PEM (or LS) identification. Covariance of the estimate: the Cramér-Rao lower bound. Example: application of the Cramér-Rao lower bound to the LS identification of static models.
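
In formulas, the Cramér-Rao lower bound states that, for any unbiased estimator,

```latex
\operatorname{cov}(\hat{\theta}) \;\ge\; M^{-1},
\qquad
M = E\!\left[
\left(\frac{\partial \ln p(y;\theta)}{\partial \theta}\right)
\left(\frac{\partial \ln p(y;\theta)}{\partial \theta}\right)^{T}
\right];
\quad
\text{for } y = \Phi\theta + e,\ e \sim N(0,\sigma^2 I):\
M = \frac{\Phi^T \Phi}{\sigma^2}
\ \Rightarrow\
\operatorname{cov}(\hat{\theta}) \,\ge\, \sigma^2 (\Phi^T \Phi)^{-1},
```

where M is the Fisher information matrix. The bound in the static linear regression case coincides with the covariance of the LS estimate, so LS is efficient there.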

Optimal filtering and prediction of stochastic signals
Optimal k-step-ahead predictors for ARMAX and ARMA models. The fundamental theorem of estimation theory: optimal estimator and optimal linear estimator. Properties of the optimal (linear) estimator. Optimal estimation of signals: the innovation sequence and its properties.
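
In formulas, the fundamental theorem of estimation theory and its linear counterpart read:

```latex
\hat{x} = E[x \mid y]
\quad\text{(optimal MSE estimator)},
\qquad
\hat{x}_{\mathrm{lin}} = E[x] + \Sigma_{xy}\,\Sigma_{yy}^{-1}\bigl(y - E[y]\bigr)
\quad\text{(optimal linear estimator)},
```

and the two coincide when x and y are jointly Gaussian.
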
Brief review of the Luenberger observer. Stochastic state space models. Kalman filtering: standard assumptions. Derivation of the Kalman filter equations by means of the properties of the optimal linear estimator. The predictor-corrector form. The Kalman predictor and the difference Riccati equation. Convergence of the difference Riccati equation. The steady-state (suboptimal) Kalman predictor. Some extensions of the standard Kalman filter: nonzero-mean noises, time-varying models. Dealing with colored noises: state space representations of ARMA, AR and MA models and their use in the augmented state space model of the plant.
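
One step of the Kalman one-step-ahead predictor, including the difference Riccati equation, can be sketched as follows (Python; matrix names follow the standard state space model and are not specific course notation):

```python
import numpy as np

def kalman_predictor_step(A, C, Q, R, xhat, P, y):
    """One step of the Kalman predictor for
    x(t+1) = A x(t) + w(t),  y(t) = C x(t) + v(t),
    with w ~ (0, Q), v ~ (0, R) white and mutually uncorrelated."""
    S = C @ P @ C.T + R                     # innovation covariance
    K = A @ P @ C.T @ np.linalg.inv(S)      # predictor gain
    innov = y - C @ xhat                    # innovation sample
    xhat_next = A @ xhat + K @ innov        # state prediction update
    P_next = A @ P @ A.T + Q - K @ S @ K.T  # difference Riccati equation
    return xhat_next, P_next
```

Iterating the Riccati recursion to convergence yields the constant gain of the steady-state (suboptimal) Kalman predictor.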

Introduction to optimal LQG control

Course slides - academic year 2018/2019

Introduction

Stochastic models

Linear regression

Statistical hypothesis testing

Least squares: some MATLAB examples

Prediction error methods

Instrumental variable methods

Seminar on condition monitoring

Optimal estimation of signals

System Identification with the SYSID MATLAB toolbox: some examples

Readings
R. Guidorzi, "Multivariable System Identification: From Observations to Models", Bononia University Press, Bologna, 2003.

T. Söderström and P. Stoica, "System Identification", Prentice Hall, Englewood Cliffs, N.J., 1989.
This book is now out of print; the authors have made an electronic copy freely available for download.

L. Ljung, "System Identification: Theory for the User", Prentice Hall, Englewood Cliffs, N.J., 1987.

B. D. O. Anderson and J. B. Moore, "Optimal filtering", Prentice Hall, Englewood Cliffs, N.J., 1979.

S. Bittanti, "Identificazione dei modelli e sistemi adattativi", Pitagora Editrice Bologna, 2005 (in italian).

S. Bittanti, "Teoria della predizione e del filtraggio", Pitagora Editrice Bologna, 2005 (in italian).

Assessment methods
The final evaluation is based on a written exam. The exam rules can be found in the file below.

Exam rules

Some examples of exam questions can be found in the file below.
Examples of exam questions

Exam dates (registration on AlmaEsami)

Written test: 17 June 2019, 14:00, room 2.3

Written test: 2 July 2019, 10:00, room 6.1

Written test: 19 July 2019, 10:00, room 6.1

Written test: 16 September 2019, 10:00, room 0.6

Registration sessions: see AlmaEsami