Week nr. | Date | Subject / Theme / Topic
34 | 24.8. | Introduction, notations, cost function, constraints, prediction model. Ch. 2.1
35 | 31.8. | Prediction model (PM) from state space models. Rewriting the cost function in order to remove the summation. Explicit unconstrained solution of the MPC problem. Ch. 2, Ch. 3
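A minimal sketch (assumed notation, not taken from the lecture notes) of the explicit unconstrained MPC solution, for a prediction model y = F*u + p0 and the cost J = (y - r)'*Q*(y - r) + u'*P*u; all matrices below are illustrative placeholders:

    L  = 4;                              % prediction horizon (illustrative)
    F  = tril(ones(L));                  % toy prediction matrix
    p0 = 0.5*ones(L,1);                  % free response from the current state
    r  = ones(L,1);                      % reference over the horizon
    Q  = eye(L);  P = 0.1*eye(L);        % output and input weights
    u  = -(F'*Q*F + P)\(F'*Q*(p0 - r));  % minimiser of the quadratic cost
    u_apply = u(1);                      % receding horizon: apply only the first move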
36 | 7.9. | MPC in terms of control rate of change variables. Prediction models using rate of change variables, Ch. 2.3.2. Prediction models using models with non-zero mean values, Ch. 2.3.4.
37 | 14.9. | How to handle process constraints, Ch. 9 and 10. More general criterion, Ch. 6. Solving QP problems using MATLAB software, etc. Ch. 3. Optimization.
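A hedged illustration of solving a QP with MATLAB software (not the course's own scripts): a problem on the standard form min 0.5*z'*H*z + f'*z subject to A*z <= b can be passed to quadprog from the Optimization Toolbox; the numbers below are placeholders.

    H = [2 0; 0 2];             % positive definite Hessian
    f = [-2; -5];               % linear cost term
    A = [1 1; -1 0; 0 -1];      % inequality constraints A*z <= b
    b = [3; 0; 0];
    z = quadprog(H, f, A, b);   % constrained minimiser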
38 | 21.9. | Constraints (e.g. output constraints) in the MPC problem. Model Predictive Control with integral action (item 2 in the syllabus list) or Ch. 10 in the lecture notes. Prediction models using models with non-zero mean values, Ch. 2.3.4.
39 | 28.9. | Model Predictive Control with integral action. How to handle time delay. More on optimization of non-linear functions and QP problems. Overview of numerical optimization. Search methods. Newton's method. Steepest descent method. Line search parameter. Newton's method for quadratic problems. LQ control with integral action: mic-journal
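A minimal sketch of Newton's method with a line search parameter on a quadratic function f(z) = 0.5*z'*H*z + f'*z (illustrative data, not from the lecture notes); for a quadratic, one full Newton step (alpha = 1) reaches the minimum, while steepest descent would instead search along the negative gradient.

    H = [3 1; 1 2];  f = [-1; 1];   % illustrative quadratic
    z = [5; -5];                    % starting point
    alpha = 1;                      % line search (step length) parameter
    for k = 1:5
        g  = H*z + f;               % gradient (steepest descent direction is -g)
        dz = -H\g;                  % Newton search direction
        z  = z + alpha*dz;          % alpha < 1 gives a damped Newton step
    end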
40 | 5.10. | Model Predictive Control with integral action. Exercise with integrator and time delay: exercise_integrator_mpc.pdf. Implementation. Models used in MPC. New method for PI controller tuning of integrating plus time delay systems.
41 | 12.10. | No lecture. Work on the exercise project.
42 | 19.10. | MPC overview: 1) system model, 2) control objective, 3) prediction model, 4) optimization method, 5) state observer. LQ optimal control with integral action: mic-journal. Discrete LQ optimal control. Presenting Exercise 6. State observers: Note. Syllabus: Exercises 1-5. Lecture notes Ch. 1, Ch. 2 (skip Ch. 2.3.5, skip Example 2.6), Ch. 3, Ch. 4, Ch. 5 (overview), Ch. 8 and 9.
43 | 26.10. | Discrete LQ optimal control. Presenting Exercise 6. Syllabus. Main topic: MPC vs. optimal control.
44 | 2.11. | LQG control, state estimation, relations to MPC. Section 2.5 in the Note (mentioned briefly). The separation principle; robustness must be checked separately. 1. How to incorporate integral action in MPC. Discussion. 2. State estimation and observers, paper. MATLAB m-files in the paper: lpe2.m, lpe.m, ss2ocf.m
45 | 9.11. | No lecture. Work as described below:
1) Exercises and the 5-exercise project.
2) Read the paper with a historical overview of MPC (Qin 2000 paper) and methods such as DMC and MAC.
3) Lecture notes Ch. 2.3.5, Example 2.6. Generalized Predictive Control (GPC). CARIMA and ARIMAX models. Diophantine equation. Software: poly2gpcpm.m, demo_gpcpm.m, demo_gpcpm2.m. m-files used: ss2essm.m, htilde2.m
4) LQ and MPC of an inverted pendulum. Some continuous LQ optimal control theory and similarities with MPC control. In connection with Example 5.7 in the lecture notes. Software: main_empc_pendel.m, ss2h.m
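As a sketch of item 4 above (not the course file main_empc_pendel.m): continuous LQ control of a pendulum linearised around the upright position, with an assumed model x = [angle; angular rate] and illustrative weights.

    g = 9.81; l = 0.5;               % gravity and pendulum length (assumed)
    A = [0 1; g/l 0];  B = [0; 1];   % linearised inverted pendulum
    Q = diag([10 1]);  R = 0.1;      % state and input weights
    K = lqr(A, B, Q, R);             % continuous LQ gain (Control System Toolbox)
    u = @(x) -K*x;                   % state feedback law u = -K*x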
46 | 16.11. | 1. More on MPC with integral action. Exercise 6 and simulation results. Control horizon, Ch. 4. MATLAB m-file implementation main_exercise6_mpc.m of Exercise 6 (MPC of a non-linear chemical reactor). 2. Discrete LQ optimal control with integral action and the connection to a standard PID controller (Paper) and (MIC-paper). Software for computing the LQ controller: dlqdu_pi. Example m-files: Example 5.1: dlq_ex4_du.m; Example 5.2: dlq_ex3_du.m. 3. MPC using local PID/PI controllers. Robustness of standard feedback systems.
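A hedged sketch of item 2 above (not the course function dlqdu_pi): discrete LQ control with integral action can be formulated in the control deviation du_k = u_k - u_{k-1} with the augmented state z_k = [x_k - x_{k-1}; y_k]; feeding back du_k = -G*z_k and integrating u_k = u_{k-1} + du_k gives integral action. The first-order model below is illustrative.

    A = 0.9; B = 0.1; C = 1;          % illustrative first-order model
    At = [A, 0; C*A, 1];              % augmented system matrix
    Bt = [B; C*B];                    % augmented input matrix
    Q = diag([0, 1]);  R = 0.1;       % weight on the output and on du
    G = dlqr(At, Bt, Q, R);           % discrete LQ gain (Control System Toolbox)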
47 | 23.11. | 1) Computing the present state, x_k: Kalman filter, in terms of past inputs and outputs. Lecture notes, Ch. 2. 2) Estimating the present state, x_k: Extended Kalman Filter (EKF). Kalman filter on prediction form, and on a priori/a posteriori form. 3) Unscented Kalman Filter (UKF). UKF note
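A minimal sketch of one Kalman filter step on a priori/a posteriori form for x_{k+1} = A*x_k + B*u_k + v_k, y_k = C*x_k + w_k; the model, noise covariances and measurement below are illustrative assumptions, not values from the lecture notes.

    A = 0.9; B = 0.1; C = 1;               % illustrative model
    Qv = 0.01; Rw = 0.1;                   % process and measurement noise covariances
    xpri = 0; Ppri = 1;                    % a priori estimate and covariance
    u = 1; y = 0.5;                        % current input and measurement
    K = Ppri*C'/(C*Ppri*C' + Rw);          % Kalman gain (measurement update)
    xpost = xpri + K*(y - C*xpri);         % a posteriori state estimate
    Ppost = (eye(size(Ppri)) - K*C)*Ppri;  % a posteriori covariance
    xpri = A*xpost + B*u;                  % time update: next a priori estimate
    Ppri = A*Ppost*A' + Qv;                % next a priori covariance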
48 | 30.11. | Summing up lecture. Exam tasks.
49 | 7.12. |