model predictive control

63
Model Predictive Control Table of Contents 1. CONFIGURATION OF MULTIVARIABLE PROCESS CONTROL SYSTEMS ........................................................................... 2 2. MODEL PREDICTIVE CONTROL (MPC) FOR MULTIVARIABLE SYSTEMS .......................................................................... 3 2.1. WHY MPC? .................................................................................................................................................................... 3 2.2. BASIC CONCEPTS .............................................................................................................................................................. 3 2.3. PROTOTYPICAL MPC FORMULATION .................................................................................................................................... 6 2.4. ELEMENTS OF MPC THEORY (RAWLINGS AND MAYNE, 2009) ................................................................................................ 15 2.4.1. Models (RM 1.2) ................................................................................................................................................. 15 2.4.2. The linear quadratic regulator (LQR) (RM 1.3)................................................................................................... 16 2.4.3. Using the LQR formula in a moving horizon context (RM 1.3.4) ........................................................................ 20 2.4.4. Using the LQR formula in an infinite moving horizon context (RM 1.3.4) to ensure stability ............................ 25 2.4.5. Sidebar: Lyapunov functions and stability of dynamic systems ........................................................................ 29 2.4.6. Designing finite-horizon LQR for closed-loop stability: Lyapunov function approach ....................................... 30 2.4.6.1. Idea 1: Finite-horizon LQR with terminal constraint ....................................................................................................... 32 2.4.6.1. Idea 2: Finite-horizon LQR with terminal penalty ........................................................................................................... 33 2.4.7. Nonlinear systems .............................................................................................................................................. 34 2.4.8. LQR with states not measured directly (RM 1.4)................................................................................................ 34 2.4.9. MPC = LQR with constraints (RM 2.4.3) ............................................................................................................. 35 3. REFERENCES ............................................................................................................................................................... 36 4. APPENDICES ............................................................................................................................................................... 37 4.1. APPENDIX A LINEAR AND QUADRATIC PROGRAMMING EXAMPLES ......................................................................................... 37 4.1.1. Linear Programming .......................................................................................................................................... 37 4.1.2. Linear Programming in General ......................................................................................................................... 40 4.1.1. 
Linear Programming Sensitivity ......................................................................................................................... 41 4.1.2. Quadratic Programming .................................................................................................................................... 45 4.1.2.1. Case 1 ............................................................................................................................................................................... 46 4.1.2.2. Case 2 ............................................................................................................................................................................... 50 4.1.3. Quadratic Programming in General ................................................................................................................... 54 4.2. APPENDIX B SAMPLE MATLAB CODE FOR MULTIVARIABLE MPC OF HIGH-PURITY DISTILLATION COLUMN ................................... 55 4.3. APPENDIX C MATLAB MPC CODE FOR EXAMPLE 2 ........................................................................................................... 62

Upload: -

Post on 06-Sep-2015

30 views

Category:

Documents


11 download

DESCRIPTION

Summary with some examples

TRANSCRIPT

  • Model Predictive Control

    Table of Contents 1. CONFIGURATION OF MULTIVARIABLE PROCESS CONTROL SYSTEMS ........................................................................... 2

    2. MODEL PREDICTIVE CONTROL (MPC) FOR MULTIVARIABLE SYSTEMS .......................................................................... 3

    2.1. WHY MPC? .................................................................................................................................................................... 3 2.2. BASIC CONCEPTS .............................................................................................................................................................. 3 2.3. PROTOTYPICAL MPC FORMULATION .................................................................................................................................... 6 2.4. ELEMENTS OF MPC THEORY (RAWLINGS AND MAYNE, 2009) ................................................................................................ 15

    2.4.1. Models (RM 1.2) ................................................................................................................................................. 15 2.4.2. The linear quadratic regulator (LQR) (RM 1.3) ................................................................................................... 16 2.4.3. Using the LQR formula in a moving horizon context (RM 1.3.4) ........................................................................ 20 2.4.4. Using the LQR formula in an infinite moving horizon context (RM 1.3.4) to ensure stability ............................ 25 2.4.5. Sidebar: Lyapunov functions and stability of dynamic systems ........................................................................ 29 2.4.6. Designing finite-horizon LQR for closed-loop stability: Lyapunov function approach ....................................... 30

    2.4.6.1. Idea 1: Finite-horizon LQR with terminal constraint ....................................................................................................... 32 2.4.6.1. Idea 2: Finite-horizon LQR with terminal penalty ........................................................................................................... 33

    2.4.7. Nonlinear systems .............................................................................................................................................. 34 2.4.8. LQR with states not measured directly (RM 1.4) ................................................................................................ 34 2.4.9. MPC = LQR with constraints (RM 2.4.3) ............................................................................................................. 35

    3. REFERENCES ............................................................................................................................................................... 36

    4. APPENDICES ............................................................................................................................................................... 37

    4.1. APPENDIX A LINEAR AND QUADRATIC PROGRAMMING EXAMPLES ......................................................................................... 37 4.1.1. Linear Programming .......................................................................................................................................... 37 4.1.2. Linear Programming in General ......................................................................................................................... 40 4.1.1. Linear Programming Sensitivity ......................................................................................................................... 41 4.1.2. Quadratic Programming .................................................................................................................................... 45

    4.1.2.1. Case 1 ............................................................................................................................................................................... 46 4.1.2.2. Case 2 ............................................................................................................................................................................... 50

    4.1.3. Quadratic Programming in General ................................................................................................................... 54 4.2. APPENDIX B SAMPLE MATLAB CODE FOR MULTIVARIABLE MPC OF HIGH-PURITY DISTILLATION COLUMN ................................... 55 4.3. APPENDIX C MATLAB MPC CODE FOR EXAMPLE 2 ........................................................................................................... 62

  • Model Predictive Control Michael Nikolaou

    - 2 -

    1. Configuration of multivariable process control systems - Digital sensors, transmission lines, controllers, valves - Advanced control systems are inherently multivariable - Modular decomposition over time scales.

    Figure 1 Multiscale hierarchy of decision making for process operations. - Planning focuses on economics and forecasts, and answers such questions as what feedstock to select, which products

    to make, and how much of each product to make. In most refineries and larger chemical plants, a linear program (LP) or successive LP (SLP) is used for planning and is based on a plant profit objective function.

    - Scheduling addresses the timing of actions and events necessary to execute the chosen plan, with the key consideration being feasibility. Scheduling deals with such issues as the timing of the deliveries of feeds, product liftings and operating mode changes, and avoiding storage problems (overflow or shortage). Unlike planning, a range of tools are used across the industry for scheduling; from whiteboards and spreadsheets to tools involving simulation models, rules, and optimization.

    - Real-time optimization (RTO) is concerned with implementing business decisions in real time (time-scale of hours) based on a fundamental, steady-state model of the plant. RTO is implemented where economically justified. Like planning, it is driven based on a profit function of the plant, but unlike planning, it seeks additional profit based on what is happening in the plant now using additional knowledge represented in the form of a calibrated nonlinear model of the process, and other details that the planning model and MPC do not have.

    - Model-predictive control (MPC) ensures that the optimal setpoints selected by the RTO layer are followed by the plant. In some implementations (e.g. DMC) an additional LP optimization is performed to fine-tune optimal setpoint values based on linear models, followed by a moving-horizon optimization to ensure that setpoints are followed.

    - Regulatory control (mostly PID) usually ensures that the commands issued by MPC (usually flowrate setpoints) are actually implemented. When not used in the multivariable hierarchy shown here, PID controllers can obviously be used for many other control purposes.

    Focus of this text: MPC and regulatory control.

    (Disclaimer: Calculations in all examples are shown with more significant digits than practically meaningful, only to make it possible for the reader to reproduce numerical results.)

    Planning & Scheduling

    Real-time optimization

    Model predictive control

    Regulatory control

    Process

    >days

    hours

    minutes

    seconds

  • Model Predictive Control Michael Nikolaou

    - 3 -

    2. Model predictive Control (MPC) for multivariable systems

    2.1. WHY MPC?

    The only approach that can systematically handle control of multivariable systems subject to inequality constraints.

    2.2. BASIC CONCEPTS

    Figure 2 MPC finite moving horizon at time point k

    - Control algorithm relies on real-time optimization. - Real-time optimization is performed over a finite moving horizon. - At time k , after the optimal input sequence over the moving horizon has been computed,

    o the first input is implemented, o the process runs until the next time point 1k + , and o the real-time optimization problem is set up and solved again. o The cycle is repeated indefinitely.

    - A model of the controlled process (effect of manipulated inputs and disturbances on outputs) is explicitly used to provide predictions of future output values, such that optimal input values can be selected over the finite horizon.

    - Although FIR models have been used with MPC for a long time, any model structure can be employed. - Controller design for MPC refers to how one sets up the real-time optimization problem, i.e.

    o How future predictions are made (recall: model and future disturbances are uncertain), o How long the output and input horizons are, o What the objective function looks like (usually includes penalties on future output errors and future input

    moves) o What the constraints look like (usually include quality, safety, and operability bounds, e.g. valve openings

    between 0 and 100%, or upper bounds on pressures and temperatures, vapor and liquid flow rate bounds in distillation columns, etc.)

    - MPC outputs are usually flow rate setpoints cascaded to PI flow controllers.

  • Model Predictive Control Michael Nikolaou

    - 4 -

    Figure 3 Comparison of decentralized control and MPC with flow control cascades

    Steam

    Vapor Distillation column

    Reboiler

    Liquid

    Surge tank

    Cooling heat exchanger

    FC

    LC

    LC

    Steam

    Vapor Distillation column

    Reboiler

    Liquid

    Surge tank

    Cooling heat exchanger

    FC

    FC

    FC

    FC

    MPC

  • Model Predictive Control Michael Nikolaou

    - 5 -

    Figure 4 Vapor and liquid flow rate constraints that must be satisfied for a distillation column to operate normally.

  • Model Predictive Control Michael Nikolaou

    - 6 -

    2.3. PROTOTYPICAL MPC FORMULATION

    Consider a process modeled as

    11 1

    n+

    + +

    = +

    =

    x Ax Buy Cx

    (1)

    If the above process is stable, a MIMO pulse-response model with dead time n may be developed for this process as

    1 1

    n N

    i i i ii n i n

    +

    = + = +

    = y H u H u

    (2)

    where noy

    , niu

    , no niH

    . At time k , future predictions of the output y can be made assuming a zeroth-order disturbance model as

    | | |1

    n N

    k n j k i k n j i k k ki n

    +

    + + + + = +

    = +y H u d , 1,2,3,...j = (3) where

    |1

    n N

    k k k i k ii n

    +

    = +

    = d y H u (4) and the notation |i j refers to prediction made at time j about a future value at time i j> . At time k , the following objective function must be minimized

    ( ) ( )| | 1| 1|1 1

    p mTSP SP T

    k n j k k n j k k j k k j kj j

    + + + + + + = =

    + y y Q y y u R u (5)

    subject to the following constraints min | max k n j k+ + y y y , 1, ,j p= (6) min 1| maxk j k+ u u u , 1, ,j m= . (7) | 1|k i k k m k+ + =u u , , , 1i m p= (8)

    where no noQ is a symmetric positive semidefinite matrix, and ni niR is a symmetric positive definite matrix. Eqn. (3) for 1,2,...,j p= and eqn. (4) imply that future predictions are

    1| 1 |1 1

    n N n N

    k n k i k n i k k i k ii n i n

    + +

    + + + + = + = +

    = + y H u y H u (9)

    | |1 1

    n N n N

    k n p k i k n p i k k i k ii n i n

    + +

    + + + + = + = +

    = + y H u y H u (10) or, in vector/matrix form, F F F P P k= + + y H u H u (11) where

    1|

    | ( . ) 1

    k n k

    F

    k n p k p no

    + +

    + +

    =

    yy

    y (12)

    1

    1 ( . ) ( . )

    n

    F n N

    n N n p no p ni

    +

    +

    + +

    =

    H

    H H

    H H

    (13)

  • Model Predictive Control Michael Nikolaou

    - 7 -

    |

    1| ( . ) 1

    k k

    F

    k p k p ni+

    =

    uu

    u (14)

    12

    1( . ) ([ ]. ) ( . ) ([ ]. )

    n N nn N n

    Pn N

    n N np no n N ni p no n N ni

    + ++ +

    +

    + + + +

    =

    H HH H

    HH

    H H

    (15)

    1

    1

    1 ([ ]. ) 1

    k n N

    k N

    P

    k n

    k n N ni

    +

    +

    =

    u

    uu

    u

    u

    (16)

    ( . ) 1

    k

    k

    k p no

    =

    y

    y ,

    ( . ) 1

    SP

    SP

    SP

    p no

    =

    y

    y (17)

    ( . ) ([ ]. )

    ( . ) ( . )

    ( . ) ([ ]. )( . ) ( . )

    F m ni p m ni F P

    m ni m ni

    m ni n N nim ni p ni

    +

    = +

    FD

    I 0 0 II 0 0

    u 0 u u

    I I 0 0

    (18)

    Substituting eqn. (11) into equations (5), (6), (7), and (8) creates a quadratic objective

    ( ) ( )( ) ( )

    ( ) ( )

    TSP SP T

    F F F F

    TSP SPF F P P k F F P P k

    TF P F P

    diag diag

    diag

    diag

    +

    = + + + +

    + + + =

    y y Q y y u R u

    H u H u Q H u H u

    Du Fu R Du Fu

    ( ) ( )2 ...T T T T T SP TF F F F F F P P k Pdiag diag diag diag + + + + + u H QH D RD u u H Q H u y y D RFu

    ( ) ( ) ( )

    { }

    min 2

    min 2

    F

    QP k

    QP

    F

    T T T T T T T SPF F F F F F P P F k

    T TF QP F F QP

    diag diag diag diag diag

    + + +

    = +

    uH

    f

    u

    u H QH D RD u u H QH D RF u H Q

    u H u u f

    (19)

    and linear constraints min maxk P P F F k P P H u H u H u (20) min maxF U u U (21) F =Uu 0 (22)

  • Model Predictive Control Michael Nikolaou

    - 8 -

    where

    ([ ]. ) ([ 1]. ) ([ ]. ) ([ 1]. )p m ni m ni p m ni p m ni +

    =

    I IU

    I I (23)

    (Note: The above choice for U reduces the number of decision variables to .m ni from .p ni . Other choices for U can be used to impose equality constraints on Fu that further reduce that actual number of decision variables.) The above quadratic programming problem can be solved at each time k. The part |

    optk ku of the optimal solution

    optFu

    is sent to the process, and the process is left to run until time 1k + , at which point the optimization problem is set up and solved again. The procedure is repeated indefinitely. Theorem 1 MPC is equivalent to linear time-invariant control if there are no inequality constraints The solution of the minimization in eqn. (19) subject to eqn. (22) is optF P P e k= + u M u M (24)

    Proof: For the matrix U in eqn. (23), we can make substitutions to retain the first m of the p components of the vector Fu in eqn. (19). Then, the minimization becomes { }min 2

    Fm

    T TFm m Fm Fm m

    J

    +u

    u H u u f

    , (25)

    which yields (by setting Fm

    J=

    0

    u)

    1Fm m m

    = K

    u H f . (26)

    from which the result follow. In general, to minimize 2T TF QP F F QP+u H u u f in eqn. (19) subject to eqn. (22), we can use the standard Lagrange-multiplier method, namely find

    ,

    1min2F

    T T T TF QP F F QP F

    L

    + +

    uu H u u f u U

    (27)

    without any constraints. At the optimum the Jacobians of L must satisfy optF opt

    L L

    = =

    0

    u. The second equality is simply

    eqn. (22). The first equality implies ( )1opt T opt opt T optQP F QP F QP QP + + = = +H u f U 0 u H f U (28) Substituting Fu into eqn. (22) yields

    ( ) ( ) 11 1 1T opt opt TQP QP QP QP QP + = = UH f U 0 UH U UH f (29)

    Substituting opt back to eqn. (28) yields

    ( )( )

    ( )( ) ( ) ( )

    11 1 1

    11 1 1 1

    eP k

    QP

    opt T TF QP QP QP QP QP

    T T T T T SPQP QP QP QP F P P F k

    P P e

    diag diag diag

    =

    = + +

    =

    KKK

    f

    u H f U UH U UH f

    H H U UH U UH H QH D RF u H Q

    KK u KK

    k

    P P e k= + M u M

  • Model Predictive Control Michael Nikolaou

    - 9 -

    EXAMPLE 1 MIMO MPC FOR HIGH-PURITY DISTILLATION

    The Matlab sample code in the Appendix provides an example for a high-purity distillation column. The code has been debugged and found to run well for a simple multivariable system. Simulations show the closed-loop response to setpoint change 1 ( ) 1

    SPy t = , while keeping 2 ( ) 0SPy t = .

    When constraints are not active, eqn. (26) is applicable. For 1, 10m p= = we get 1.6002 1.60251.6025 1.6052m

    =

    H with

    eigenvalues 3.205 and 0.0002, indicating very high sensitivity to model errors. The corresponding quadratic surface is shown in Figure 5.

    Figure 5 Quadratic surface for MPC optimization with 1, 10m p= = .

    604020 0 20 40 60

    604020

    0204060

    u1

    u 2

    JuTMu

  • Model Predictive Control Michael Nikolaou

    - 10 -

    Figure 6 Closed-loop response of high-purity distillation column (Morari and Zafiriou, 1989) controlled by MIMO MPC. Note the initial coupling of the outputs y1 and y2, as well as the collinearity of the two inputs u1 and u2. Note, also, the effect of the move suppression coefficient r (cf. R = r*I in eqn. (5)).

  • Model Predictive Control Michael Nikolaou

    - 11 -

    Figure 7 Closed-loop response of high-purity distillation column (Morari and Zafiriou, 1989) controlled by MIMO MPC. Note the almost complete decoupling of the outputs y1 and y2 for small value of the move suppression coefficient r (cf. R = r*I in eqn. (5)).

    Figure 8 Closed-loop response of high-purity distillation column (Morari and Zafiriou, 1989) controlled by MIMO MPC in the presence of uncertainty: The true steady-state matrix is assumed

    to be 0.878 0.8641. 96. 0

    = 1 120P .

  • Model Predictive Control Michael Nikolaou

    - 12 -

    EXAMPLE 2 MPC WITHOUT CONSTRAINTS IS EQUIVALENT TO LINEAR TIME-INVARIANT CONTROL

    Consider a stable process modeled as

    1

    n N

    i ii n

    y h u d+

    = +

    = +

    (30)

    where 2n = , 3N = , 1 2 3 4 5{ , , , , } {0,0,6,8,1}h h h h h = . The process is to be controlled by a model predictive controller (MPC), that performs the following real-time optimization at every time point k:

    ( )1

    2 2| 1|,..., 1 1

    mink k m

    p mSP

    k n j k k j ku u j jy y r u

    + + + +

    = =

    +

    (31)

    subject to the constraints min 1| maxk j ku u u+ , 1, ,j m= . (32) | 1|k i k k m ku u+ + = , , , 1i m p= . (33) where min max 4u u = = , 6p = , 2m = , and r will be used as a tunable parameter. If the inequality constraints, are absent, the minimum subject to equality constraints is attained for

    ( )( ) ( ) ( )11 1 1 1k

    QP

    opt T T T T T SPF QP QP QP QP F P P F k

    = + +

    e

    K

    f

    u H H U UH U UH H H D RF u H y y

    (34)

    Based on eqn. (34), we can write an explicit feedback law (i.e. how the current input value ku is calculated by the controller in terms of past input values k iu and the current error

    SPk ke y y= ) for the corresponding linear time-invariant controller

    that is equivalent to MPC.

    [ ] ( ) ( )

    [ ] [ ]

    opt 1 0 0

    1 0 0 1 0 0

    k

    P

    T T T SPk F P P F k

    TP P F k

    u = +

    =

    e

    K

    K H H D RF u H y y

    KK u KH e

    For 0.00001r = , Matlab >> K = -HQP^(-1)+HQP^(-1)*U'*(U*HQP^(-1)*U')^(-1)*U*HQP^(-1) K = -0.0103 0.0007 0.0007 0.0007 0.0007 0.0007 0.0007 -0.0012 -0.0012 -0.0012 -0.0012 -0.0012 0.0007 -0.0012 -0.0012 -0.0012 -0.0012 -0.0012 0.0007 -0.0012 -0.0012 -0.0012 -0.0012 -0.0012 0.0007 -0.0012 -0.0012 -0.0012 -0.0012 -0.0012 0.0007 -0.0012 -0.0012 -0.0012 -0.0012 -0.0012 >> KP = HF'*HP+D'*R*F

  • Model Predictive Control Michael Nikolaou

    - 13 -

    KP = -15.0000 -120.0000 -90.0000 6.0000 56.0000 -15.0000 -120.0000 -90.0000 0 6.0000 -15.0000 -120.0000 -90.0000 0 0 -15.0000 -120.0000 -90.0000 0 0 -14.0000 -112.0000 -84.0000 0 0 -6.0000 -48.0000 -36.0000 0 0 >> [1 0 0 0 0 0]*K*KP ans = 0.1091 0.8728 0.6546 -0.0620 -0.5745 >> [1 0 0 0 0 0]*K*HF' ans = -0.0620 -0.0784 -0.0004 0.0106 0.0106 0.0106 i.e., [ ]1 0 0 [0.1091 0.8728 0.6546 - 0.0620 - 0.5745]P =KK [ ]1 0 0 [-0.0620 - 0.0784 - 0.0004 0.0106 0.0106 0.0106]TF =KH which implies

    [ ] [ ]opt

    -5 -4 -3 -2 -1

    1 0 0 1 0 0

    0.1091 0.8728 0.6546 0.0620 0.5745 0.1091

    Tk P P F k

    k k k k k k k

    u

    u u u u u u e

    =

    = + + +

    KK u KH e

    Note: The controller has integral action, as expected: The controller transfer function C(z) satisfies

    5

    2 3 4 5

    ( )

    0.1091( ) ( )0.1091 0.8728 0.6546 0.0620 0.5745

    C z

    zu z e zz z z z z

    = + + +

    which has a pole at 1: >> Gofz = tf([1 0 0 0 0 0],[1 0.5745 0.0620 -0.6546 -0.8728 -0.1091],1) Transfer function: z^5 ------------------------------------------------------------- z^5 + 0.5745 z^4 + 0.062 z^3 - 0.6546 z^2 - 0.8728 z - 0.1091 Sampling time: 1 >> zpk(Gofz) Zero/pole/gain: z^5 ----------------------------------------------------- (z-1) (z+0.8354) (z+0.1396) (z^2 + 0.5995z + 0.9354) Similarly, for r = 1, Matlab [ ]1 0 0 [0.0067 0.0538 0.0403 0.0005 0.8997]P =KK - [ ]1 0 0 [-0.0005 - 0.0012 - 0.0012 - 0.0012 - 0.0012 - 0.0012]TF =KH which implies

    [ ] [ ]opt

    -5 -4 -3 -2 -1

    1 0 0 1 0 0

    0.0067 0.0538 0.0403 0.0005 0.8997 0.0067

    Tk P P F k

    k k k k k k k

    u

    u u u u u u e

    =

    = + + +

    KK u KH e

  • Model Predictive Control Michael Nikolaou

    - 14 -

    Note: This controller also has integral action, as expected.

    a. Assuming the true process follows the equation

    1

    n N

    i ii n

    y g u d+

    = +

    = +

    (35)

    with 1 2 3 4 5{ , , , , } {0,0,5,10,2}g g g g g = , the following simulations show how MPC rejects a step disturbance 2d = for 1r = and 0.00001r = .

    r = 0.00001, r = 1, (also r = 10000 for comparison purposes)

    Figure 9 Closed-loop response of FIR process by MPC for different tunings (weight of move suppression term). Matlab code available in Appendix C.

    0 5 10 15 20 25 30 35 400

    50

    time step, k

    d k

    0 5 10 15 20 25 30 35 40-50

    0

    50

    time step, k

    y k

    0 5 10 15 20 25 30 35 40-4

    -2

    0

    time step, k

    u k

    0 5 10 15 20 25 30 35 400

    50

    time step, k

    d k

    0 5 10 15 20 25 30 35 40-50

    0

    50

    time step, k

    y k

    0 5 10 15 20 25 30 35 40-4

    -2

    0

    time step, k

    u k

    0 5 10 15 20 25 30 35 400

    50

    time step, k

    d k

    0 5 10 15 20 25 30 35 400

    50

    time step, k

    y k

    0 5 10 15 20 25 30 35 40-4

    -2

    0

    time step, k

    u k

  • Model Predictive Control Michael Nikolaou

    - 15 -

    2.4. ELEMENTS OF MPC THEORY (RAWLINGS AND MAYNE, 2009)

    2.4.1. Models (RM 1.2)

    - Basic model:

    1+= +

    = +

    x Ax Buy Cx Du

    (36)

    - Why state-space model? o General: Represents both stable and unstable processes, with delays or not. o A lot of theory developed in terms of state-space models. o Can go back and forth from state-space models to other forms (e.g. FIR or transfer matrix)

    State-space model transfer matrix Transfer matrix state space: Formulas somewhat more involved, but straightforward

    - Stochastic version of basic model:

    1+= + +

    = + +

    x Ax Bu Gwy Cx Du v

    (37)

  • Model Predictive Control Michael Nikolaou

    - 16 -

    2.4.2. The linear quadratic regulator (LQR) (RM 1.3)

    - Why bother? o Because a lot of work on LQR before MPC is relevant to MPC and has contributed to both inadvertent

    confusion and improved understanding (as well as improvement) of MPC. - Why is it called linear quadratic?

    o Because LQR is an optimization problem with linear constraints on the state and input and the cost is a quadratic function of the state and input.

    - Why is it called regulator? o Because LQR is supposed to regulate the state of system to its setpoint. The setpoint itself is not supposed

    to change, namely it stays at its nominal value of 0 (regulator problem). (Versions of LQR that handle setpoint changes (servo problem) and persistent disturbances (load changes) also exist.)

    - How does LQR work? o LQR has a long history and it comes in a number of variants that must be interpreted very carefully to make

    sure that LQR is used correctly. o In its basic form, LQR finds the solution of the optimization problem

    1

    opt 1 1 10 0 0 1 2 2 2

    00 1 0 1

    ( ) min ( , ,..., ) min ( ) ,..., ,...,

    ( , ) ( )

    NT T T

    N N N N f N

    N NN N

    V V

    l l

    =

    = = + +

    x x u u x Qx u Ru x P xu u u u x u x

    (38)

    given 0x and subject to 1+ = +x Ax Bu (39) where 0>Q , 0>R (Figure 10).

    Figure 10. Graphical representation of LQR optimization. Total cost is sum of individual costs at each stage. Cost at each stage depends on input into this stage and state out of previous stage.

    - The model in eqn. (39) does not include disturbances. Is this realistic? o No, but exposure of the theory is simplified a lot under this assumption, and then extension to the case of

    the counterpart of eqn. (39) with disturbances becomes a lot easier. - The model in eqn. (39) and eqn. (38) assume that the system state 0x is known. Is this realistic?

    o Usually not, but exposure of the theory is simplified a lot under this assumption, and then extension to the case where reconstruction of the state x from the system output y becomes a lot easier.

    - The model in eqn. (39) is linear. Is this essential? o No, if the control law relies on real-time solution of a corresponding optimization problem, but exposure

    of the theory is simplified a lot under this assumption.

  • Model Predictive Control Michael Nikolaou

    - 17 -

    EXAMPLE 3 LQR FOR A SIMPLE SISO CASE

    System: 1x ax bu+ = + (40) Objective:

    1

    2 2 21 10 0 1 2 2

    0( , ,..., ) ( )

    ( , ) ( )

    N

    N N f N

    N N

    V x u u qx ru p xl x u l x

    =

    = + +

    (41)

    with 0.4a = , 0.6b = , 1fq r p= = = , 5N = , and 0 1x = . Potential input and output sequences { , }u x

    along with resulting values of 0 0 1( , ,..., )N NV x u u are shown in Figure 11.

    Figure 11. Samples of sequences { , }u x

    along with resulting values of 0 0 1( , ,..., )NV x u u .

    -0.8

    -0.6

    -0.4

    -0.2

    0

    0.2

    0 1 2 3 4 5

    u(k)

    k

    V(x(0),u(0),...,u(N-1)) =1.44

    -1.5

    -0.5

    0.5

    1.5

    0 1 2 3 4 5

    x(k)

    k

    V(x(0),u(0),...,u(N-1)) =1.44

    -0.8

    -0.6

    -0.4

    -0.2

    0

    0.2

    0 1 2 3 4 5

    u(k)

    k

    V(x(0),u(0),...,u(N-1)) =1.19

    -1.5

    -0.5

    0.5

    1.5

    0 1 2 3 4 5

    x(k)

    k

    V(x(0),u(0),...,u(N-1)) =1.19

    -0.8

    -0.6

    -0.4

    -0.2

    0

    0.2

    0 1 2 3 4 5

    u(k)

    k

    V(x(0),u(0),...,u(N-1)) =1.16

    -1.5

    -0.5

    0.5

    1.5

    0 1 2 3 4 5

    x(k)

    k

    V(x(0),u(0),...,u(N-1)) =1.16

    -0.8

    -0.6

    -0.4

    -0.2

    0

    0.2

    0 1 2 3 4 5

    u(k)

    k

    V(x(0),u(0),...,u(N-1)) =1.13

    -1.5

    -0.5

    0.5

    1.5

    0 1 2 3 4 5

    x(k)

    k

    V(x(0),u(0),...,u(N-1)) =1.13

  • Model Predictive Control Michael Nikolaou

    - 18 -

    o Optimal solution of eqns. (38), (39) can be expressed in many forms. (Of course, it is always the same!)

    Theorem 2. Backward/forward LQR solution in terms of Riccati equation (RM 1.3.3)

    0 0 1

    0 1

    min ( , ,..., ),...,

    N N

    N

    V

    x u uu u

    attained for

    1

    1 1 1 1( )T T T T

    N f

    + + + + = + +

    =

    Q A A A B B B R B AP

    (42)

    opt opt

    11 1( )

    T T+ +

    =

    = +

    u K xK B B R B A

    (43)

    where 0,1,..., 1N= . Also, optimal cost, eqn. (38), is

    opt 0 0 0 01( )2

    TNV = x x x (44)

    EXAMPLE 4 LQR FOR A SIMPLE SISO CASE, EXAMPLE 3 CONTD

    Figure 12. Optimal sequence 0 1 2 3 4{ , , , , } { 0.19258, 0.19257, 0.19248, 0.19128, 0.17647}K K K K K = , resulting in optimal sequences opt opt opt opt opt0 1 2 3 4{ , , , , } { 0.19258, 0.05478, 0.01557, 0.0044, 0.00116}u u u u u = and opt opt opt opt opt0 1 2 3 4 5{ , , , , , } {1.000,0.284,0.081,0.023,0.007,0.002}x x x x x x = and optimal cost opt opt0 0 4( , ,..., ) 1.13V x u u = corresponding to the last sample choice in Figure 11.

    -0.195

    -0.19

    -0.185

    -0.18

    -0.1750 1 2 3 4 5

    K(k

    )

    k

    V(x(0),u(0),...,u(N-1)) =1.13

  • Model Predictive Control Michael Nikolaou

    - 19 -

    o Where did eqn. (43) come from? (Hint: The dynamic programming approach.) o This looks quite involved! What is the final solution for opt opt0 1{ ,..., }N u u in an explicit form?

    Eqn. (43) 1 2 1 0{ , ,..., , }N N known 1 2 1 0{ , ,..., , }N N K K K K known

    opt opt opt opt

    1opt

    1 0 0

    ( )

    ( ) ( )+

    +

    = + = +

    = + +

    x Ax BK x A BK xx A BK A BK x

    (45)

    and opt 1 0 0( ) ( )= + +u K A BK A BK x , 0,1,..., 1N= (46)

    The optimal input sequence 0 1{ ,..., }N u u is a linear function of the initial state.

    o The formula opt opt=u K x

    is sometimes called the feedback solution of the LQR problem. This is misleading.

    (Why? Hint: p. 113 in (Kalman, 1960).)

  • Model Predictive Control Michael Nikolaou

    - 20 -

    2.4.3. Using the LQR formula in a moving horizon context (RM 1.3.4)

    Eqn. (46) opt0 0 0=u K x (47)

    This is the feedback form of the LQR solution. Only opt0u out of the optimal input sequence opt opt0 1{ ,..., }N u u is implemented in

    a moving-horizon implementation. After opt0u is implemented, the controlled process is left to run, and at the next sampling point the new opt0u out of the new optimal input sequence

    opt opt0 1{ ,..., }N u u for the new 0x is implemented, and so on. (Note

    that 1 1,..., N K K do not appear anywhere.) Closed-loop for system 1k k k+ = +x Ax Bu (48) with feedback as in eqn. (47) is 1 0 0( )k k k k+ = + = +x Ax BK x A BK x (49)

    o Is eqn. (49) stable? (i.e., does this choice of 0K stabilize the controlled system? After all, this choice is optimal!)

    EXAMPLE 5 OPTIMAL MAY BE A POOR CHOICE! (RM, 1.3.4)

    Eqn. (48) with

    [ ]4 3 2 3 1

    , , 2 3 11 0 0

    = = =

    A B C . (50)

    and

    2 2 1 14 / 9 2 / 3

    , 02 / 3 1

    Tf

    = = = = P Q C C R (51)

    1yields 1 2 1...T

    N N N = = = = = =Q C C

    1 10 1 1 1 1

    1 1

    ... ( ) ( )

    ( ) ( )

    T T T T T TN k k

    T T

    + +

    = = = = + =

    =

    K K K B B R B A B C CB B C CA

    CB B C T TB C1( )

    [1 6 2 3]

    = =

    CA

    CB CA (52)

    closed loop is

    1 0 0

    1

    CL

    ( )

    ( ( ) )

    1.5 0

    1 0

    k k k k

    k

    k

    +

    = + = +

    =

    =

    x Ax BK x A BK xA B CB CA x

    A

    x

    (53)

    closed loop is unstable.

    1 Note that neither Q nor R is positive definite in this example. The case 0Q is handled by standard LQR theory when ( , )A Q is detectable.

  • Model Predictive Control Michael Nikolaou

    - 21 -

    o Why is the closed loop in EXAMPLE 5, eqn. (53), unstable? =R 0 LQR inverts the controlled system:

    1 1

    1 1

    1 11 1

    ( )

    ( ) ( )

    k k

    k k

    k k

    k k k

    =

    = +

    = +

    =

    y CxC Ax BuCAx CBu

    u CB y CB CAx

    (54)

    Substituting into eqn. (48) yields

    1 1

    1 1

    1 11

    ( ) ( )

    ( ) ( )

    k k k k

    k k

    = +

    = +

    x Ax B CB y CB CAx

    A B CB CA x CB y

    1 11 1

    CL1 1

    1

    ( ( ) ) ( )

    ( ) ( )

    k k k

    k k k

    + +

    +

    = +

    = +

    x A B CB CA x CB yA

    u CB CAx CB y

    (55)

    Eqn. (55) is the inverse system of eqn. (48). (Note that the inverse is non-causal, namely a future value of y is needed for calculation of the current value of u .) If the system has a zero outside the unit disk, the inverse of the system will be unstable.

  • Model Predictive Control Michael Nikolaou

    - 22 -

    EXAMPLE 6 WHY OPTIMAL MAY BE A POOR CHOICE (RM, 1.3.4) EXAMPLE 5, CONTD

    EXAMPLE 5 and eqn. Error! Reference source not found.

    [ ]

    [ ]

    [ ]

    1

    1

    2

    2

    2

    ( ) ( )

    4 3 2 3 12 3 1

    1 0

    2 311 4 3

    2 3 10(4 3) 2 3

    12 3 1

    (4 3) 2 3( 2 3) 1

    (4 3) 2 3

    G z z

    zz

    zz

    z zz

    z zz

    z z

    =

    =

    = +

    =

    + +

    = +

    C I A B

    zero at 3 / 2 LQR results in unstable closed loop (because it inverts ( )G z ).

    Note also that the poles of ( )G z are at 2 2 2(4 3) 2 3 0 0.67 0.473jz z z j + = = = ( )G z is stable.

    Figure 13. Pulse and step responses of the system in EXAMPLE 6.

  • Model Predictive Control Michael Nikolaou

    - 23 -

    o OK, what if >R 0 ? Does LQR stabilize the closed loop? Not necessarily!

    EXAMPLE 7 OPTIMAL STILL A POOR CHOICE (RM, 1.3.4) EXAMPLE 5, CONTD

    EXAMPLE 5 with 2 2 1 10.001 , 0.001Tf

    = = + = P Q C C I R . (56) eigenvalues of 0( )+A BK are 0( ) {1.307,0.001}i + =A BK . (57) closed loop unstable again.

  • Model Predictive Control Michael Nikolaou

    - 24 -

    o So, how to design LQR to ensure closed loop is stable? Large 0>R Large N

    EXAMPLE 8 OPTIMAL MAY BE A GOOD CHOICE (RM, 1.3.4) EXAMPLE 5, CONTD 2 2 1 1, Tf c c = = + = P Q C C I R

    0c = N 0K 0( )i +A BK

    2 0.1667 0[ .6667] {1.5,0.} 5 0.1667 0[ .6667] {1.5,0.}

    10 0.1667 0[ .6667] {1.5,0.} 20 0.1667 0[ .6667] {1.5,0.} 50 0.1667 0[ .6667] {1.5,0.}

    We knew that! (Why? Hint: =R 0 LQR inverts controlled system.)

    0.001c = N 0K 0( )i +A BK

    2 0.1518 0[ .6652] {1.484,0.0009988} 5 0.02561 0.6[ ]654 {1.307,0.0009988}

    10 0.6253 0.6[ 660] {0.7070,0.0009988} 20 0.6683 0.6[ 660] {0.6641,0.0009988} 50 0.6683 0.6[ 660] {0.6641,0.0009988}

    Note that the feedback law for 5N = , corresponding to the last sample choice in Figure 11, is destabilizing, despite the fact that within the moving horizon the state is driven very close to zero, i.e. opt5 0.002x = (Figure 12).

    0.01c = N 0K 0( )i +A BK

    2 0.03416[ 0.6532] {1.358,0.009888} 5 0.4989 0.6[ ]585 {0.8245,0.009889}

    10 0.6791 0.6[ ]603 {0.6444,0.009889} 20 0.6817 0.6[ ]603 {0.6417,0.009889} 50 0.6817 0.6[ ]603 {0.6417,0.009889}

    0.1c =

    N 0K 0( )i +A BK

    2 0.4657 0.5[ ]949 {0.7751,0.09258} 5 0.7511 0.6[ ]209 {0.4887,0.09355}

    10 0.7558 0.6[ ]214 {0.4839,0.09355} 20 0.7558 0.6[ ]214 {0.4839,0.09355} 50 0.7558 0.6[ ]214 {0.4839,0.09355}

    How large should N be for the horizon to be long enough?

  • Model Predictive Control Michael Nikolaou

    - 25 -

    2.4.4. Using the LQR formula in an infinite moving horizon context (RM 1.3.4) to ensure stability

    Eqn. (38) with N

    opt 1 10 0 0 2 200 0

    ( ) min ( , ,..., ) min ( ) ,..., ,...,

    ( , )

    T TV Vl

    =

    = = +x x u u x Qx u Ruu u u u x u

    (58)

    subject to eqn. (39). Theorem 3. LQR solution for N in terms of the discrete algebraic Riccati equation (DARE), (RM 1.3.6)

    0 0

    0

    min ( , ,..., ),...,

    V

    x u uu u

    attained for

    1( )T T T T = + + Q A A A B B B R B A (59)

    opt opt

    1( )T T=

    = +

    u KxK B B R B A (60)

    with optimal cost, eqn. (38),

    opt 0 0 01( )2

    TV = x x x (61)

    - When can eqn. (59) be solved?

    o When ( , )A B controllable2. - Is the solution unique?

    o The relevant solution (corresponding to the optimal LQR solution) yes. - How can eqn. (59) be solved?

    o Reliable numerical methods available. - Does the solution define a stabilizing control law?

    o Yes.

    o So, how to design LQR to ensure closed loop is stable?

    Large Large

    2 n nA , n mB ( , )A B controllable iff 1rank n n = B AB A B . (Why? Hint: An input sequence exists that can drive the state to the origin from any initial condition.)

    0>RN

  • Model Predictive Control Michael Nikolaou

    - 26 -

    EXAMPLE 9 CLOSED-LOOP POLES FROM INFINITE-HORIZON LQR (KAILATH, 1980, P. 244) 1 1 1 1 1, , , , , , , ( ) ( )n n n n T n nfN r G z z

    = = = = = A B C P 0 Q C C R C I A B It can be shown (Kailath, 1980, p. 244), that 0( )i +A BK are the n roots inside the unit disk of the 2n -degree polynomial

    1( ) ( )( ) 1

    G z G zzr

    = + (62)

    where 1( ) [ ]G z z = C I A B , eqn. Error! Reference source not found.. (What does T=Q C C mean?) EXAMPLE 5 and EXAMPLE 6

    22 21

    2 1

    1 ( 2 3) 1 ( 2 3) 1( ) 1 1(4 3) 2 3 (4

    ( 3 2 )( 2 3 )(33) 2 4 2 )(2 4 33 )

    z z zr

    z zzr z z z z zz z z

    + +

    + + = + = + + ++

    (63)

    Therefore

    r 1 0 2 0{ ( ), ( )} + + =A BK A BK stable poles of 1( ) ( )G z G z = 2 2 0.67 0.47

    3j j = .

    0r 1 0 2 0{ ( ), ( )} + + =A BK A BK stable zeros of 1( ) ( )G z G z = {2 / 3,0} (Why?). where the four roots of ( ) 0z = are (Figure 14)

    2 2 2

    1 2 3 43 10 9 18 2 2 9 21 23 ( 3 10 ) 9 18 212

    r r r r r r r rr

    + + + + + + (64)

    with 1 2 3 41{ , , , } {1, 1,1,1}, {1,1,1,1}, { 1, 1, 1, 1}, { 1,1, 1, 1} = (65)

    Figure 14. Roots of 1

    2 1 2

    1 ( 2 3) 1 ( 2 3) 11 0(4 3) 2 3 (4 3) 2 3

    z zr z z z z

    + ++ = + +

    for 0 10r< , eqn. (64). Note that

    two are stable (a,c) and two are unstable (b,d) for any value of 0r > . Roots start at {0, , 2 / 3,3 / 2} as 0r , and continue on the real axis until 0.47r = (solution of 29 18 2 0,r r = cf. eqn. (64)) at which point the roots become complex conjugate of each other, to reach the poles of 1( ) ( )G z G z , i.e. {0.67 0.47, 1 0.71, 0.67 0.47, 1 0.71}j j j j as r .

  • Model Predictive Control Michael Nikolaou

    - 27 -

    Theorem 4. The infinite-horizon LQR solution via DARE defines a stabilizing feedback control law ( , )A B controllable, 0>Q , 0>R , and N a. There exists a unique symmetric 0 > that solves the DARE, eqn. (59). b. The corresponding unique K , eqn. (60), defines a stabilizing control law, i.e. ( ) 1i +

  • Model Predictive Control Michael Nikolaou

    - 28 -

    Figure 16. Graphical representation of the relationships in eqn. (66). Each sequence represents input, state, and cost elements within a moving horizon starting at state kx (top row) and the subsequent state 1k k k+ = +x x BuA (rows 2 and 3).

    opt{ ( )}kV x is a decreasing sequence. (67) But opt ( ) 0kV x . (68) Eqns. (67) and (68) opt{ ( )}kV x converges. (69) Eqns. (69) and (66) ( )opt opt10 lim ( ) ( ) lim ( , ) 0k k kk k kV V l + = = x x x u (70) lim ( , ) 0kk kl =x u (71)

    (Why?) lim 0kk =x (72)

    and lim 0kk =u (73)

    (Why?)

    opt ( )V x is a Lyapunov function (RM 2.4.1) for the closed-loop system:

    Inifinite moving horizon at time

    Infinite moving horizon at time

    Optimal inputs at time

    Optimal inputs at time

  • Model Predictive Control Michael Nikolaou

    - 29 -

    2.4.5. Sidebar: Lyapunov functions and stability of dynamic systems

    Is the equilibrium state ss ss( )n= x g x of the dynamic system 1 ( )k k+ =x g x stable? Namely, is ss sslim ( )kk = =x x g x for all

    0x in a neighborhood of ssx ? Lyapunovs theorem asserts that this is so if and only if there is a scalar-valued function : : ( )nU U x x that satisfies the following properties along the trajectories { }kx :

    1 20 ( ) ( ) ( )k k kU x x x (74)

    1 3( ) ( ) ( ) 0k k kU U + x x x (75) where 1 2 3, , are continuous, strictly increasing, zero at zero, and unbounded functions. The energy of the dynamic system is frequently a Lyapunov function. There may be others, but there is no recipe for finding them. As a trivial example, a Lyapunov function for the dynamic system 1 0.3k kx x+ = is

    2( )U x x= , with 2 2 2 2 2 21 0.3 0.7 0k k k k kx x x x x+ = = (76)

    - What if

    o N < ? o there are inequality constraints?

  • Model Predictive Control Michael Nikolaou

    - 30 -

    2.4.6. Designing finite-horizon LQR for closed-loop stability: Lyapunov function approach

    - How to extend the stability proof of Theorem 4 to the case of finite horizon ( N < ) o Follow proof and determine how to modify LQR objective in order to satisfy eqn. (75):

    Figure 17. Graphical representation of closed loop with finite-horizon LQR feedback at initial state kx . Note that, in general, opt opt1| 1| 1k k k k+ + +u u . (Why? How does this figure compare to Figure 15?) Implement moving horizon control law at times k and 1k +

    1

    opt opt opt opt1 | | |

    01

    ( ) min ( , ,..., ) ( , ) ( ),...,

    N

    N N k k N k k k k N k Nk k

    k

    k

    k N

    V V l l

    + + + +=+

    = = +u u xx x u xu u

    (77)

    and

    1

    opt1 1 1

    opt opt opt1 | 1 1 |

    01 1 1

    1 1 1 | 11( ) min ( , ,..., ) ( , ) ( ),...,

    k k

    N

    N N k k N N

    k k N

    k k k Nk kkV V l l+ + + + + + + + ++ +

    + + + =+ + +

    = = +x x u uu

    xu

    u x

    (78)

    By construction 1

    opt11 1 1( ) ( , ,..., )N N k k Nk kV V+ + + + + x ux u (79)

    for any 1 1 1{ ,..., }k k N+ + + u u . Set the first 1N elements of 1 1 2 1 1{ ,..., , }k k N k N+ + + + + u u u equal to the optimal sequence from the previous time step k and the last element equal to any feasible value: opt opt1 1 2 1 1 1| | 11|{ ,..., , } { ,..., , }k k N k N k k k N k Nk k+ + + + + + + + +=u u u u uu

    (80)

    Now (assuming no modeling uncertainty) opt1 1|k k k++ =x x (81) (Why?) (See Figure 18)

    opt opt opt1| 1|

    opt opt op

    1 1

    op|

    t opt opt1| 1| 1| 1| 1 1||

    | 1

    | 1

    ( ) ( , ,..., , ) (Why?)

    ( , ) ... ( , ) ( , ) ( )

    ( ,

    N N k k k N k

    k k k k k N k

    k N k

    k

    k k

    k k

    k N k k N k k

    k

    N kN k N

    V V

    l l l l

    l

    + +

    + + + + +

    + +

    +

    +

    + + + +

    +

    = + + + +

    =

    x x uu u

    x u

    x u xx u x u

    t

    opt|

    o

    opt opt opt opt opt1| 1| 1| 1| |

    o

    pt

    pt opt| |

    opt o||

    |

    p

    | 1 1 1

    ) ( , ) ... ( , ) ( )

    ( , ) ( , ) ( )] ( )

    ( ) ( , ) (k k k

    k k k

    k k k k k N k k N k N k N k

    k N k N N k N kk

    N k

    N k N k

    k N k

    k

    l l l

    l l l l

    V l l

    + + + + +

    + +

    +

    + + + + +

    + + + +

    + +

    = +

    x u

    x x

    x u x u

    x

    u

    x

    x

    x

    u x

    t opt|| 1 1| 1, ) ( ) ( )N N k N kk N k k N kl l+ + + + + ++ u x x

    (82)

    | 1 1| 1

    opt opt opt opt| |

    opt1 |( ) ( ) ( , ) ( , ) ( ) ( )N N k N k k N kk k k k k NN N k Nkk kV V l l l l+ + ++ + ++ + + + x x u ux xx x (83)

  • Model Predictive Control Michael Nikolaou

    - 31 -

    Eqn. (82) suggests how to shape the LQR optimization so that closed-loop stability can be guaranteed: Make opt opt| || 1 1| 1( , ) ( ) ( ) 0k N k N N kk N k N N kk kl l l++ ++ + + ++ u xx x (84) to ensure opt opt1( ) ( ) ( , )k kN N k kV V l+ x x x u (85) and use Lyapunov arguments. Various options have appeared in literature, with the most classic being opt |k N k+ =x 0 .

    Figure 18. Graphical representation of the relationships in eqn. (82). Each sequence represents input, state, and cost elements within a moving horizon starting at state kx (top row) and the subsequent state 1k k k+ = +x x BuA (rows 2 and 3).

    Finite moving horizon at time

    Finite moving horizon at time

    Finite moving horizon at time

    Optimal inputs at time

    Optimal inputs at time

    Feasible inputs at time

  • Model Predictive Control Michael Nikolaou

    - 32 -

    2.4.6.1. Idea 1: Finite-horizon LQR with terminal constraint Eqn. (84) At time k explicitly require |k N k+ =x 0 (86)

    Then eqn. (82) with opt |k N k+ =x 0 (assumed feasible) and | 1k N k+ + = 0u (which is feasible) 1| 1( ) 0k NN kl + + + =x opt opt| || 1 1| 1( , ) ( ) ( ) 0k N k N N kk N k N N kk kl l l++ ++ + + ++ =u xx x (87) opt opt1( ) ( ) ( , ) 0k kN N k kV V l+ x x x u (88)

    opt ( )NV x is a Lyapunov function for finite horizon LQR with terminal constraint, i.e. (as before) opt{ ( )}N kV x is a decreasing sequence, (89) But opt ( ) 0N kV x . (90) Eqns. (88) and (90) opt{ ( )}N kV x converges. (91) Eqns. (91) and (88) ( )opt opt10 lim ( ) ( ) lim ( , ) 0k kN Nk kk kV V l + = x x x u (92) lim ( , ) 0kk kl =x u (93)

    (Why?) lim 0kk =x (94)

    and lim 0kk =u (95)

    (Why?)

    - Summary: Overall optimization at time k

    1

    1 1| | | |2 2

    01

    min ( ),...,

    NT Tk k k k k k k k

    k k N

    + + + +=+

    + x Qx u Ruu u

    (96)

    subject to |k k k=x x (97) 1| | |k k k k k k+ + + += +x Ax Bu (98) |k N k+ =x 0 (99)

  • Model Predictive Control Michael Nikolaou

    - 33 -

    2.4.6.1. Idea 2: Finite-horizon LQR with terminal penalty Eqn. (84) Select terminal penalty ( )Nl x to be a control Lyapunov function (CLF), i.e. such that There exists u with ( ) ( ) ( , )N Nl l l+ Ax Bu x x u (100) for all x .

    - A number of possibilities o Minimum cost-to-go from k N+ to

    12( )T

    Nl = x x x (101)

    (Why? Hint: Eqns. (61) and (66).) Summary: Overall optimization at time k

    1

    1 1| | | | | |2 2

    01

    min [ ( ) ],...,

    NT T Tk k k k k k k k k N k k N k

    k k N

    + + + + + +=+

    + + x Qx u Ru x xu u

    (102)

    subject to (103) (104) where satisfies DARE, eqn. (59).

    o For stable systems: Cost-to-go from k N+ to with 1 ...N N += = =u u 0

    12( )T

    Nl =x x Px (105) where 0>P with T =P A PA Q (106) (Why? Hint: ( ) ( ) ( ) ( , )T T TN Nl l l+ = = =Ax B0 x x A PA P x x Qx x 0 which is eqn. (84), and

    ( )0 0 0( ) ( )T T T j j T T j j Tk N j k N j k N k N k N k N k N k Nj j j + + + + + + + + + += = == = = x Qx x A QA x x A QA x x Px where

    0 1( ) ( )T j j T T j j T

    j j

    = == = = P A QA A PA A QA P A PA Q

    ) Summary: Overall optimization at time k

    1

    1 1| | | | | |2 2

    01

    min [ ( ) ],...,

    NT T Tk k k k k k k k k N k k N k

    k k N

    + + + + + +=+

    + + x Qx u Ru x Pxu u

    (107)

    subject to (108)

    (109) where P satisfies, eqn. (106).

    |k k k=x x

    1| | |k k k k k k+ + + += +x Ax Bu

    |k k k=x x

    1| | |k k k k k k+ + + += +x Ax Bu

  • Model Predictive Control Michael Nikolaou

    - 34 -

    2.4.7. Nonlinear systems

    - Stability theory for linear systems carries over to nonlinear systems almost intact. o Replace eqn. (39) by

    1 ( , )+ =x f x u (110) - Difficulties in solving nonlinear optimization problems, e.g., ensuring the counterpart of eqn. (99).

    o Replace eqn. (99) by less restrictive constraint: Require |k N k+x to be in a neighborhood of zero, e.g.

    |k N k fX+ x (111)

    2.4.8. LQR with states not measured directly (RM 1.4)

    - Given model in eqn. (36), measurement of y , and knowledge of u , what is x ? o Tempted to say

    1( )k k k k k k= + = y Cx Du x C y Du (112)

    But C is not square 1C does not exist. - State can be estimated from output measurements (and knowledge of input) using an observer

    Luenberger state observer structure: 1 ( ) ( ) ( )k k k k k k k k k k+ = + + = + + x Ax Bu L y Cx Du A LC x Bu L y Du (113)

    - Luenberger observer can be designed by selecting L to make ( )A LC stable. - Even if C is square, for a model in eqn. (37) estimation of the state (and of the output) can be improved be

    including the equality constraints posed by eqn. (37) in the estimation process. - Most celebrated observer for model in eqn. (37): Kalman filter3

    o Select L using DARE. o Celebrated because

    Computationally easy to implement: Recursive, fast; Optimal and stable.

    - State estimation trivial for FIR models (Why?)

    FIR

    0

    N

    k i k ii

    =

    = y H u (114)

    FIR FIRFIR FIR

    FIR

    1, 1 2,1, 1 1,

    2, 1 3,

    1, 1 ,, 1 ,

    , 11

    k kk k

    k k

    N k N kN k N k

    N k kk k

    +

    ++

    ++

    ++

    = = = + = =

    z zz z0 I 0 0z z

    0I 0

    z zz z0 0 I

    z uBAx x

    FIR

    FIR

    1,

    1 0

    ,

    [ ]

    k

    k

    k N k

    N k

    k

    = +

    u

    zy H H H u

    z DCx

    3 http://www.cs.unc.edu/~welch/kalman/

  • Model Predictive Control Michael Nikolaou

    - 35 -

    2.4.9. MPC = LQR with constraints (RM 2.4.3)

    The preceding analysis on LQR/MPC that guarantees stability is valid in the presence of inequality constraints, assuming feasibility of the on-line optimization problem. Feasibility Stability

    - Ensuring feasibility not trivial, but a number of approaches are available, relying on numerical optimization. - From a practical viewpoint, making the horizon long enough works well for stable systems. - More complicated for unstable systems.

  • Model Predictive Control Michael Nikolaou

    - 36 -

    3. References Dewilde, P. and E. F. Deprettere (1988). Singular Value Decomposition: An

    Introduction. SVD and signal processing: algorithms, applications, and architectures. E. F. Deprettere, North-Holland: 3-41.

    Kailath, T. (1980). Linear systems Prentice-Hall Kalman, R. E. (1960). "Contributions to the theory of optimal control." Bull. Soc.

    Math. Mex. 5: 102-119. King, M. (2011). Process Control: A Practical Approach, Wiley. Morari, M. and E. Zafiriou (1989). Robust Process Control, Prentice-Hall. Rawlings, J. B. and D. Q. Mayne (2009). Model Predictive Control: Theory and

    Design, Nob Hill Publishing. Stephanopoulos, G. (1984). Chemical Process Control - An Introduction to Theory and

    Practice, Prentice-Hall.

  • Model Predictive Control Michael Nikolaou

    - 37 -

    4. Appendices

    4.1. APPENDIX A LINEAR AND QUADRATIC PROGRAMMING EXAMPLES

    4.1.1. Linear Programming4

    EXAMPLE 10 PRODUCTION PLANNING IN A REFINERY

    A refinery has available two crude oils that have the yields shown in the following table.

    Product Volume percent yields Maximum allowable product rate (bbl/day) Crude #1 Crude #2

    Gasoline 70 31 24000 Kerosene 6 9 2400 Fuel oil 24 60 12000

    Because of equipment and storage limitations, production of gasoline, kerosene, and fuel oil must be limited as also shown in this table. There are no plant limitations on the production of other products such as gas oils. The profit on processing crude #1 is $1.00/bbl and on crude #2 it is $0.70/bbl. What are the optimal daily feed rates of the two crudes?

    - First thoughts

    Because crude #1 brings higher profit than crude #2, one might be inclined to use crude #1 only. However, the proportions of products from crude #1 (70:6:24) are different from the proportions of maximum allowable product rates (24000:2400:12000). If crude #1 alone were used, one could use up to

    24000 2400 12000min{ , , } min{34286,40000,50000} 342860.70 0.06 0.24

    = = bbl/day

    of crude #1 (and no crude #2), because product sales are limited. In particular, the sale of gasoline (with 70% yield from crude #1) is limited to 24000 bbl/day. The resulting profit from production using crude #1 alone would be 34286(bbl/d) 1($/bbl) 34286($/d) = (115) Using a little less crude #1 and some crude #2 would drastically reduce gasoline production and would significantly increase production of fuel oil (which has the highest yield for crude #2, and of which up to 12000 bbl/day can be sold). Would this increase profit? If so, what is the combination of crude #1 and #2 that would maximize profit?

    - Mathematical problem formulation

    Definition of variables: Crude #1: x bbl/day Crude #2: y bbl/day Profit: 0.7x y+ Optimization problem:

    ,max( 0.7 )

    x yx y+ (116)

    subject to

    4 Adapted from Edgar and Himmelblau, Optimization of Chemical Processes, McGraw-Hill, 2001.

  • Model Predictive Control Michael Nikolaou

    - 38 -

    0.70 0.31 240000.06 0.09 24000.24 0.60 12000

    00

    x yx yx y

    xy

    +

    + +

    (117)

    (Note: This minimization cannot be handled by setting partial derivatives to zero, as learned in calculus. In fact, ( 0.7 ) / 1 0x y x + = and ( 0.7 ) / 0.7 0x y y + = .)

    - Graphical solution

    Figure 19. Graphical solution of the linear programming problem in eqns. (116) and (117). The optimum is at (31891.9, 5405.41) with optimal profit 35676. Cost remains constant along the dotted lines. Optimal profit: 31892(bbl/d) 1($/bbl)+5405.4(bbl/d) 0.70($/bbl) 35676($/d) = (118) Compare to eqn. (115).


    - Numerical solution

    Figure 20. Excel sheet to solve the linear programming problem in eqns. (116) and (117). Optimum is at (31891.9, 5405.41) with an optimal profit 35676.


    Figure 21. Excel Solver for solution of the linear programming problem in eqns. (116) and (117).
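The same LP can also be solved in MATLAB. Below is a minimal sketch (not part of the original notes) using the Optimization Toolbox function linprog; since linprog minimizes its objective, the profit of eqn. (116) is negated.

% Refinery LP of eqns. (116)-(117) solved with linprog (linprog minimizes f'*x)
f  = -[1; 0.7];                          % maximize x + 0.7*y  <=>  minimize -(x + 0.7*y)
A  = [0.70 0.31; 0.06 0.09; 0.24 0.60];  % product-yield constraints of eqn. (117)
b  = [24000; 2400; 12000];               % maximum allowable product rates (bbl/day)
lb = [0; 0];                             % x >= 0, y >= 0
xy = linprog(f, A, b, [], [], lb, []);   % xy = [crude #1; crude #2] feed rates
profit = [1 0.7]*xy;                     % approximately 35676 $/day at (31891.9, 5405.41)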

    4.1.2. Linear Programming in General

max_x  c^T x                                                              (119)

subject to

A x ≤ b                                                                   (120)

or, in detail,

max_{x_1,...,x_n}  (c_1 x_1 + ... + c_n x_n)                              (121)

subject to

a_11 x_1 + ... + a_1n x_n ≤ b_1
   ...
a_m1 x_1 + ... + a_mn x_n ≤ b_m                                           (122)

- Easy to solve numerically via the Simplex method or other algorithms.
- Software readily available.


    4.1.1. Linear Programming Sensitivity

    EXAMPLE 11 HOW DOES DEMAND AFFECT PRODUCTION PLANNING IN A REFINERY?

Product     Yield from crude #1 (vol %)   Yield from crude #2 (vol %)   Maximum allowable product rate (bbl/day)
Gasoline    70                            31                            30000
Kerosene     6                             9                             2400
Fuel oil    24                            60                            12000

Figure 22. Graphical solution of the linear programming problem in eqns. (116) and (117), with the gasoline limit raised to 30000 bbl/day (table above). Optimum is at (40000, 0) with optimal profit 40000 $/day. Profit remains constant along the dotted lines.

Product     Yield from crude #1 (vol %)   Yield from crude #2 (vol %)   Maximum allowable product rate (bbl/day)
Gasoline    70                            31                            16000
Kerosene     6                             9                             2400
Fuel oil    24                            60                            12000

Figure 23. Graphical solution of the linear programming problem in eqns. (116) and (117), with the gasoline limit reduced to 16000 bbl/day (table above). Optimum is at (17013.9, 13194.4) with optimal profit 26250 $/day. Profit remains constant along the dotted lines.
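These sensitivity results can also be reproduced numerically. The following is a minimal MATLAB sketch (not part of the original notes) that re-solves the LP of eqns. (116)-(117) with the two modified gasoline limits.

% Effect of the gasoline demand limit on the refinery LP (sketch)
f  = -[1; 0.7];                          % negated profit (linprog minimizes)
A  = [0.70 0.31; 0.06 0.09; 0.24 0.60];
lb = [0; 0];
for gasLimit = [30000 16000]             % limits used in Figures 22 and 23
    b  = [gasLimit; 2400; 12000];
    xy = linprog(f, A, b, [], [], lb, []);
    fprintf('gasoline limit %5d: x = %8.1f, y = %8.1f, profit = %7.0f $/day\n', ...
        gasLimit, xy(1), xy(2), [1 0.7]*xy);
end
% Expected optima: (40000, 0) with profit 40000, and (17013.9, 13194.4) with profit 26250.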


EXAMPLE 12 RUNNING A C3/C4 SPLITTER AT OPTIMAL CONDITIONS (ADAPTED FROM KING, 2011)

Figure 24 shows a C3/C4 splitter. There are several variables to control, but we will focus on the following two controlled variables (CV): the % of C4 in the distillate, y_D, and the % of C3 in the bottom product, x_B. To control these two CV, two manipulated variables (MV) are used: the reflux and reboil rates, L and V, respectively. The feed flow rate, F, is the main disturbance.

Figure 24. C3/C4 splitter, i.e. a distillation column that separates hydrocarbons with three carbon atoms from hydrocarbons with four carbon atoms (PT = Pressure Transmitter, LT = Level Transmitter, AT = Composition Analyzer Transmitter).

Optimal values of L, V are to be determined, such that the total operating cost

J = p_1 y_D + p_2 x_B + p_3 L + p_4 V                                     (123)

is minimized. The decision variables L, V affect y_D, x_B as

y_D = -0.962 L + 4.17 V + 0.973
x_B = 0.806 L - 5.32 V + 33.94                                            (124)

Because of purity constraints in both the distillate and bottom streams, the corresponding values of y_D, x_B in eqns. (123) and (124) must satisfy the constraints

0 ≤ y_D ≤ 5,   0 ≤ x_B ≤ 5                                                (125)

Finally, the flow rates L, V are constrained (by process equipment size and operability considerations) as

50 ≤ L ≤ 80,   13 ≤ V ≤ 17.5                                              (126)

Assume p_1 = p_2 = -1 and p_3 = p_4 = 1.

- Substitute the expressions of y_D, x_B from eqn. (124) into eqns. (123) and (125) and combine the results with eqn. (126), to obtain a linear programming problem with decision variables L, V alone:


min_{L,V} (p_1 y_D + p_2 x_B + p_3 L + p_4 V)
   = min_{L,V} [ -(-0.962 L + 4.17 V + 0.973) - (0.806 L - 5.32 V + 33.94) + L + V ]
   = min_{L,V} (1.156 L + 2.15 V) - 34.9                                  (127)

    subject to

0 ≤ -0.962 L + 4.17 V + 0.973 ≤ 5
0 ≤ 0.806 L - 5.32 V + 33.94 ≤ 5
50 ≤ L ≤ 80
13 ≤ V ≤ 17.5                                                             (128)

Figure 25. Feasible region for the C3/C4 splitter. The minimum cost is 60.5, attained at (V_opt, L_opt) = (14, 56.5). Cost remains constant along the dotted lines.


Figure 26. Cost surface for feasible values of V, L (3D counterpart of Figure 25). The feasible region of V, L values is shown on the (V, L)-plane, where cost remains constant along the dotted lines (projections of the continuous lines from the cost surface). The minimum cost is 60.5, attained at (V_opt, L_opt) = (14, 56.5).
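The same LP can be set up in MATLAB. The sketch below (not part of the original notes) uses linprog with the coefficients of eqns. (127)-(128) as reconstructed above; the signs should be checked against eqn. (124) before use.

% C3/C4 splitter LP of eqns. (127)-(128) solved with linprog (sketch)
f  = [1.156; 2.15];              % cost coefficients of [L; V] from eqn. (127)
A  = [ 0.962 -4.17;              % y_D >= 0
      -0.962  4.17;              % y_D <= 5
      -0.806  5.32;              % x_B >= 0
       0.806 -5.32];             % x_B <= 5
b  = [0.973; 4.027; 33.94; -28.94];
lb = [50; 13];                   % flow-rate bounds of eqn. (126)
ub = [80; 17.5];
LV   = linprog(f, A, b, [], [], lb, ub);   % LV = [L; V]
cost = f'*LV - 34.9;             % approximately 60.5 at (L, V) = (56.5, 14)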


    4.1.2. Quadratic Programming

    EXAMPLE 13 REACHING TOP AND BOTTOM SPECIFICATIONS IN A DISTILLATION COLUMN

    Figure 27. Low-purity distillation column. For the low-purity binary distillation column shown in Figure 27, the following model captures the effect of the reflux and boil-up flow rates on the top and bottom concentrations.

y = [y_D; x_B] = G m,   where   G = [0.7 0.9; 1.0 0.9],   m = [L; V]      (129)

(All variables are in deviations from the normal-operation steady state.) It is desired to select values for the reflux and boil-up flow rates L and V, respectively, such that the vector y = [y_D; x_B] of top and bottom concentrations approaches the setpoint vector y_SP = [1; -1] as closely as possible. Setting y = y_SP in eqn. (129) and solving for m gives

m = [L; V] = G^{-1} y_SP = [-6.67; 6.30]                                  (130)


    4.1.2.1. Case 1 The flow rates L and V cannot be moved arbitrarily, but must satisfy the constraints

-5 ≤ L ≤ 20
-20 ≤ V ≤ 10                                                              (131)

    Unfortunately, the above L and V in eqn. (130) do not satisfy the constraints, eqn. (131). What is the best that can be achieved in this case?

    - Mathematical problem formulation

min_{L,V} f(y_D, x_B) = min ||y - y_SP||_2^2 = min [(y_1 - y_1^SP)^2 + (y_2 - y_2^SP)^2]          (132)

subject to the constraints in eqn. (131). Using eqn. (129), write f(y_D, x_B) in terms of m = [L V]^T to restate eqn. (132) as

min_{L,V} g(L,V) = min_m (m^T H m + 2 m^T f + c) = min_{L,V} (h_11 L^2 + h_22 V^2 + 2 h_12 L V + 2 f_1 L + 2 f_2 V + c)          (133)

where

H = G^T G = [1.49 1.53; 1.53 1.62]                                        (134)

f = -G^T y_SP = [0.3; 0]                                                  (135)

(Why? Hint:

||y - y_SP||_2^2 = (y - y_SP)^T (y - y_SP)
                 = (G m - y_SP)^T (G m - y_SP)
                 = m^T G^T G m - 2 (G^T y_SP)^T m + y_SP^T y_SP

so H = G^T G, f = -G^T y_SP, and c = y_SP^T y_SP.)

- Graphical solution

Figure 28. Surface plot of g(L,V) = m^T H m + 2 m^T f + c.

5 The 2-norm of a vector x in R^n is defined as ||x||_2 = sqrt(x_1^2 + ... + x_n^2) = sqrt(x^T x).


Figure 29. Contour plot of g(L,V) = m^T H m + 2 m^T f + c and graphical solution: m_1opt = L_opt = 5 and m_2opt = V_opt = 4.7222, resulting in y_1opt = 0.75 (instead of 1) and y_2opt = 0.75 (instead of -1).


    - Numerical solution using Excel Solver

Figure 30. Excel sheet to solve the quadratic programming problem. Optimum is at m_1opt = L_opt = 5 and m_2opt = V_opt = 4.7222, resulting in y_1opt = 0.75 (instead of 1) and y_2opt = 0.75 (instead of -1).


    Figure 31. Excel Solver for solution of the quadratic programming problem.
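MATLAB's quadprog (the same function used in the appendices below) solves Case 1 directly. This is a minimal sketch, not part of the original notes; the bound values and the signs of G are assumptions read from eqns. (129) and (131) as reconstructed above.

% Case 1 as a bound-constrained QP solved with quadprog (sketch)
G   = [0.7 0.9; 1.0 0.9];        % steady-state gain matrix of eqn. (129)
ySP = [1; -1];                   % setpoint vector
H   = G'*G;                      % quadratic term, eqn. (134)
f   = -G'*ySP;                   % linear term, eqn. (135)
lb  = [-5; -20];                 % assumed bounds on [L; V] from eqn. (131)
ub  = [20;  10];
% quadprog minimizes (1/2)*m'*Q*m + q'*m, so pass Q = 2*H and q = 2*f
m = quadprog(2*H, 2*f, [], [], [], [], lb, ub);
y = G*m;                         % achieved compositions; compare with Figure 29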


    4.1.2.2. Case 2 The flow rates L and V cannot be moved arbitrarily, but must satisfy the constraints

-10 ≤ L ≤ 20
-20 ≤ V ≤ 10                                                              (136)

    Luckily, the above L and V satisfy the constraints, eqn. (136).

    - Mathematical problem formulation

    The same formulation as in Case 1 can be used again:

min_{L,V} f(y_D, x_B) = min ||y - y_SP||_2^2 = min [(y_1 - y_1^SP)^2 + (y_2 - y_2^SP)^2]          (137)

subject to the constraints in eqn. (136). Using eqn. (129), write f(y_D, x_B) in terms of m = [L V]^T to restate eqn. (137) as

min_{L,V} g(L,V) = min_m (m^T H m + 2 m^T f + c) = min_{L,V} (h_11 L^2 + h_22 V^2 + 2 h_12 L V + 2 f_1 L + 2 f_2 V + c)          (138)

where

H = G^T G = [1.49 1.53; 1.53 1.62]                                        (139)

f = -G^T y_SP = [0.3; 0]                                                  (140)

    - Graphical solution

Figure 32. Surface plot of g(L,V) = m^T H m + 2 m^T f + c. Same as in Figure 28.


Figure 33. Contour plot of g(L,V) = m^T H m + 2 m^T f + c and graphical solution for L_opt and V_opt. Optimum is at m_1opt = L_opt = 6.6667 and m_2opt = V_opt = 6.2963, resulting in y_1opt = 1 and y_2opt = -1, as desired. Note similarities with and differences from Figure 29.


    - Numerical solution using Excel Solver

Figure 34. Excel sheet to solve the quadratic programming problem. Optimum is at m_1opt = L_opt = 6.6669 and m_2opt = V_opt = 6.2965, resulting in y_1opt = 1 and y_2opt = -1, as desired. Note similarities with and differences from Figure 30.


    Figure 35. Excel Solver for solution of the quadratic programming problem. Note similarities with and differences from Figure 31.
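For comparison, the same sketch with the relaxed bounds of eqn. (136) (again an assumption-laden illustration, not part of the original notes) returns the exact solution of eqn. (130), so the setpoint is reached exactly.

% Case 2: with the relaxed bounds the setpoint is reachable (sketch)
G   = [0.7 0.9; 1.0 0.9];
ySP = [1; -1];
lb  = [-10; -20];                % assumed bounds on [L; V] from eqn. (136)
ub  = [ 20;  10];
m = quadprog(2*(G'*G), -2*G'*ySP, [], [], [], [], lb, ub);
G*m                              % equals ySP to numerical precision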


    4.1.3. Quadratic Programming in General

min_x (x^T H x + 2 x^T f)                                                 (141)

subject to

A x ≤ b                                                                   (142)

or, in detail,

min_{x_1,...,x_n} (h_11 x_1^2 + ... + h_nn x_n^2 + 2 h_12 x_1 x_2 + ... + 2 h_{n-1,n} x_{n-1} x_n + 2 f_1 x_1 + ... + 2 f_n x_n)          (143)

subject to

a_11 x_1 + ... + a_1n x_n ≤ b_1
   ...
a_m1 x_1 + ... + a_mn x_n ≤ b_m                                           (144)

- Easy to solve numerically when H ≥ 0, i.e. when all eigenvalues of the symmetric matrix H are non-negative (H positive semidefinite).
- General numerical solution via a number of methods.
- Software readily available.
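As a concrete illustration (not part of the original notes), the general problem of eqns. (141)-(144) maps directly to MATLAB's quadprog, which minimizes (1/2) x^T Q x + q^T x subject to A x ≤ b; the factor of 2 below accounts for the form of eqn. (141). The numerical values are placeholders.

% General QP of eqns. (141)-(142) solved with quadprog (illustrative data)
H = [2 0.5; 0.5 1];              % symmetric positive semidefinite matrix
f = [-1; -2];
A = [1 1];                       % single linear inequality x1 + x2 <= 1
b = 1;
x = quadprog(2*H, 2*f, A, b);    % minimizes x'*H*x + 2*f'*x subject to A*x <= b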


    4.2. APPENDIX B SAMPLE MATLAB CODE FOR MULTIVARIABLE MPC OF HIGH-PURITY DISTILLATION COLUMN % MPC of high-purity distillation column % Companion m-files: % MPCoptimization_Construct_Matrices.m % and % MPCoptimization_MIMO_2.m % % Details of the algorithm are in the document MPC_Simulation_Notes_MIMO.doc % clear A clear B clear d clear y clear u clear h clear HP clear HF clear U clear D clear F clear R clear r clear HQP clear fQP clear ySetpoint global R HF HP D F U bzero HQP fQP kSimulation = 100; % Simulation time % Specify setpoints ySetpoint(1,1) = 1; ySetpoint(2,1) = 0; % Specify constant (load) disturbances d(1,1) = 0.0; d(2,1) = 0.0; % Need to specify: ni,no,h, n, N, p, m, r, umin, umax % ni = number of input variables % no = number of output variables % h = CELL ARRAY of FIR coefficient MATRICES (see example below) % n = smallest time delay % N = number of elements (MATRICES) in h (memory length of FIR model) % p = prediction horizon % m = control horizon % r = weight matrix on input moves % ySetpoint = vector of setpoints % umin = lower bound of control variables % umax = upper bound of control variables % Specify upper and lower bounds for manipulated inputs umin = -7500*[1; 1]; % lower bounds for manipulated inputs umax = 7500*[1; 1]; % upper bounds for manipulated inputs % >> Gofs=tf(1,[75 1]) % % Transfer function: % 1


    % -------- % 75 s + 1 % % >> GofsD = c2d(Gofs,25) % % Transfer function: % 0.2835 % ---------- % z - 0.7165 % % Sampling time: 25 % >> step(GofsD) % >> GofsD = c2d(Gofs,50) % % Transfer function: % 0.4866 % ---------- % z - 0.5134 % Constract pulse-response MODEL as a CELL ARRAY hFIR = [0.4866 0.2498 0.1283 0.0659 0.0338 0.0174 0.0089 0.0046 0.0023 0.0012]; clear h Mtilde = [0.878 -0.864; 1.082 -1.096]; % MODEL matrix % M = [0.878 -0.864; 1.12 -1.096]; % %Pulse-response model coefficients are a CELL ARRAY h ={hFIR(1)*Mtilde... hFIR(2)*Mtilde... hFIR(3)*Mtilde... hFIR(3)*Mtilde... hFIR(5)*Mtilde... hFIR(6)*Mtilde... hFIR(7)*Mtilde... hFIR(8)*Mtilde... hFIR(9)*Mtilde... hFIR(10)*Mtilde}'; % Specify time delay n = 0; % time delay %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Controller tuning %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% p = 10; % prediction horizon length m = 1; % manipulated input horizon length clear rr rr = 0.01; r = rr*diag([1 1]); % input move suppression weight matrix %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % Construct "real" system in state-space form pole = 0.5134; M = [0.878 -0.864; 1.082 -1.096]; % REAL matrix A = pole*eye(2); % "Real" system: y(k) = A*y(k-1) + B*u(k-1-n) + d B = (1-pole)*M;


    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % No need to change anything below this line for any system %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% N = size(hFIR,1)-n; % number of nonzero coeff's in kernel of FIR model ni = size(umin,1); % number of inputs no = size(ySetpoint,1); % number of outputs % Initialization of simulation clear y for i = 1:n+N y(:,i) = zeros(no,1); end clear u for i = 1:n+N u(:,i) = zeros(ni,1); end clear uP for i = 1 : n+N uP(i,1) = {u(:,i)}; end uP = cell2mat(uP); % Set up optimization matrices that do no change at each time step [R, HF, HP, D, F, U, bzero, HQP, fQP] = ... MPCoptimization_Construct_Matrices(ni,no,h, n, N, p, m, r, umin, umax); % Simulation for k = n+N+1 : n+N+kSimulation y(:,k) = A*y(:,k-1) + B*u(:,k-1-n) + d; u(:,k) = MPCoptimization_MIMO_2(ni,no,h, n, N, p, m, r, ... y(:,k), ySetpoint, umin, umax, uP); clear uP uP = []; for i = n+N : -1 : 1 uP = [uP; u(:,k+1-i)]; end end % plot output figure(1) plot(y(:,N+1:end)') grid on xlabel('time step, k') ylabel('output, y_k') title(['p = ', num2str(p), ', m = ', num2str(m), ', r = ', num2str(rr)]) legend('y_1','y_2',0) % plot input figure(2) stairs(u(:,N+1:end)') grid on xlabel('time step, k') ylabel('input, u_k') title(['p = ', num2str(p), ', m = ', num2str(m), ', r = ', num2str(rr)]) legend('u_1','u_2',0) function [R, HF, HP, D, F, U, bzero, HQP, fQP] = ... MPCoptimization_Construct_Matrices(ni,no,h, n, N, p, m, r, umin, umax)


    % This function constructs matrices needed for MPC optimization % that do not change from time-step to time-step % Notation % ni = number of input variables % no = number of output variables % h = CELL ARRAY of FIR coefficient MATRICES (see example below) % n = smallest time delay % N = number of elements (MATRICES) in h % p = prediction horizon % m = control horizon % r = weight matrix on input moves % yMeasured = vector of latest measurements % ySetpoint = vector of setpoints % umin = lower bound of control variables % umax = upper bound of control variables % uP = vector of past inputs, of dimension (n+N)*ni x 1 % = [u_(k-n-N) ... u_(k-1)]' global R HF HP D F U bzero HQP fQP % Construct diagR, first as cell array, then as matrix clear R for i = 1:m for j = 1:m if i == j R(i,j) = {r}; else R(i,j) = {zeros(ni,ni)}; end end end R = cell2mat(R); % convert R from cell array to matrix % Construct HF, first as cell array, then as matrix clear HF for i = 1 : p for j = 1 : p if j > i HF(i,j) = {zeros(no,ni)}; elseif i-j >= N HF(i,j) = {zeros(no,ni)}; else HF(i,j) = {h{n+i-j+1,1}}; end end end HF = cell2mat(HF); % convert HF from cell array to matrix % Construct HP clear HP clear temp1 clear temp2 for i = 1 : p for j = 1 : n+N if j >= n+i+1 temp1(i,j) = {h{n+i-j+1+n+N,1}}; else temp1(i,j) = {zeros(no,ni)}; end end


    end temp1 = cell2mat(temp1); % convert temp1 from cell array to matrix % for i = 1 : p for j = 1 : n+N if j


    % Construct right-hand side bzero of equality constraints bzero = zeros((p-m)*ni, 1); % Set up quadratic programming problem HQP = HF'*HF + D'*R*D; function [uk] = MPCoptimization_MIMO_2(ni,no,h, n, N, p, m, r, ... yMeasured, ySetpoint, umin, umax, uP) % Notation % ni = number of input variables % no = number of output variables % h = CELL ARRAY of FIR coefficient MATRICES (see example below) % n = smallest time delay % N = number of elements (MATRICES) in h % p = prediction horizon % m = control horizon % r = weight matrix on input moves % yMeasured = vector of latest measurements % ySetpoint = vector of setpoints % umin = lower bound of control variables % umax = upper bound of control variables % uP = vector of past inputs, of dimension (n+N)*ni x 1 % = [u_(k-n-N) ... u_(k-1)]' global R HF HP D F U bzero HQP fQP % This m-file sets up and solves the MPC on-line optimization problem % for a MIMO system % % Sample input data % ni = 2; % no = 2; % h(1,1) = {[0 0; 0 0]}; % h(2,1) = {[1 2; 3 4]}; % h(3,1) = {[0.1 0.2; 0.3 0.4]}; % n = 1; % N = 2; % p = 7; % m = 2; % r = [1 0; 0 2]; % yMeasured = [1;2]; % ySetpoint = [0; 0]; % umin = -1; % umax = 1; % uP = ones((n+N)*ni, 1); % temporary past u % end of input data % Construct yk yk = repmat(yMeasured, p, 1); % Construct ySP ySP = repmat(ySetpoint, p, 1); % Set up quadratic programming problem fQP = HF'*(HP*uP + yk - ySP) + D'*R*F*uP; % Solve quadratic programming problem using the QUADPROG command


    uF = quadprog(HQP, fQP, ones(1,p*ni), Inf, U, bzero, ... repmat(umin,p,1), repmat(umax,p,1)); % umin*ones(p*ni,1), umax*ones(p*ni,1)); uk = uF(1:ni,1);


4.3. APPENDIX C MATLAB MPC CODE FOR EXAMPLE 2

clear a
clear b
clear disturbance
clear y
clear u
global HF HP D U R F HQP fQP
kSimulation = 40; % Simulation time
ySetpoint = 0;
d = 50; % constant disturbance
umin = -4; % lower bound for manipulated input
umax = 4; % upper bound for manipulated input
h = [0 0 6 8 1];
n = 2; % time delay
N = size(h,2) - n; % number of nonzero coefficients in kernel of pulse-response model
p = 6; % prediction horizon length
m = 2; % manipulated input horizon length
r = 10000; % input move suppression weight
g3 = 5; % "Real" system: y(k) = g3*u(k-3) + g4*u(k-4) + g5*u(k-5) + d
g4 = 10;
g5 = 2;
% Initialization of simulation
for i = 1:n+N+1
    y(i) = 0;
end
for i = 1:n+N
    u(i) = 0;
    disturbance(i) = 0;
end
uP = u(1 : n+N)';
for k = n+N+1 : kSimulation
    disturbance(k) = d;
    y(k) = g3*u(k-3) + g4*u(k-4) + g5*u(k-5) + d;
    u(k) = MPCoptimization(h, n, N, p, m, r, y(k), ySetpoint, umin, umax, uP);
    uP = u(k-n-N+1 : k)';
end
% plot disturbance
figure(1)
subplot(3,1,1)
plot(disturbance,'o-')
xlabel('time step, k')
ylabel('d_k')
% plot output
subplot(3,1,2)
plot(y,'o-')
xlabel('time step, k')
ylabel('y_k')
% plot input
subplot(3,1,3)
stairs(u)
xlabel('time step, k')
ylabel('u_k')

function [uk] = MPCoptimization(h, n, N, p, m, r, yMeasured, ySetpoint, umin, umax, uP)

global HF HP D U R F HQP fQP
R = r*eye(p); % input move suppression weight matrix
% Construct HF
HF = toeplitz([h(n+1:n+N) zeros(1, p-N)], [h(n+1) zeros(1, p-1)]);
% Construct HP
clear temp
temp = repmat([h(n+N:-1:n+1), zeros(1,n)], p, 1);
HP = toeplitz(zeros(1, p), [zeros(1, n+1) h(n+N:-1:n+2)]) - temp;
% Construct D
temp = toeplitz([1 -1 zeros(1, m-2)], [1 zeros(1, m-1)]);
D = [temp, zeros(m, p-m); zeros(p-m, m), zeros(p-m, p-m)];
% Construct F
F = [zeros(1,n+N-1) -1; zeros(p-1, n+N)];
% Construct yk
yk = repmat(yMeasured, p, 1);
% Construct ySP
ySP = repmat(ySetpoint, p, 1);
% Construct U
U = toeplitz(zeros(1,p-m), [zeros(1, m-1) [-1 1] zeros(1, p-m-1)]);
% Construct bzero
bzero = zeros(p-m, 1);
% Set up quadratic programming problem
HQP = HF'*HF + D'*R*D;
fQP = HF'*(HP*uP + yk - ySP) + D'*R*F*uP;
% Solve quadratic programming problem using the quadprog command
uF = quadprog(HQP, fQP, ones(1,p), Inf, U, bzero, umin*ones(p,1), umax*ones(p,1));
uk = uF(1);
