
  • Stochastic Averaging Level Control and Its Application to Broke Management in Paper Machines

    by SHIRO OGAWA

    B. Eng., The University of Tokyo, 1967; M. Eng., The University of Tokyo, 1969

    A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

    DOCTOR OF PHILOSOPHY

    in

    THE FACULTY OF GRADUATE STUDIES (Department of Electrical and Computer Engineering)

    We accept this thesis as conforming to the required standard

    THE UNIVERSITY OF BRITISH COLUMBIA December 12, 2003

    © Shiro Ogawa, 2003

  • Abstract

    Averaging level control refers to liquid level control of storage tanks, where the objective is to keep

    the outlet flow u(t) as smooth as possible against the fluctuating inlet flow, while at the same time

    keeping the tank level y(t) within high and low limits. The thesis treats the stochastic averaging

    level control problem, where the input disturbance is a stochastic process.

    The problem is formulated to minimize a weighted sum of Var[u(t)] and Var[u̇(t)] subject to

    the target Var[y(t)]. The state-space linear quadratic optimal control method is used, resulting

    in a linear state feedback controller. When the input disturbance is modelled as the output of a

    first-order low-pass filter driven by white noise, the optimal controller is a phase-lag network.

    Broke storage tank level control is important in stabilizing the paper machine wet end. It is

    treated as a special type of averaging level control, where the input disturbance Fb(t) is a two-state

    continuous-time Markov process. The spectrum of Fb(t) is obtained and the linear optimal con-

    troller is designed with the same methodology as for the general averaging level control problem.

    Taking advantage of this very specific nature of Fb(t), a new nonlinear control scheme called the minimum

    overflow probability controller (MOPC) is designed, and tested against data collected from a paper

    machine. The MOPC performs better than the optimal linear controller and manual control.

    A new theorem on the state probability distribution of a continuous-time Markov jump system

    is presented, which leads to new methods for evaluating the mean and the variance of the state of

    a linear jump system, and a new reliable numerical method to calculate the state distributions of

    jump systems. These results are utilized to evaluate the overflow probabilities of controllers.


  • Contents

    Abstract ii

    List of Tables vii

    List of Figures ix

    Acknowledgements x

    1 Introduction and Overview 1

    1.1 Background and Motivations 1 1.2 Overview and Structure of the Thesis 2

    1.2.1 Format and Notation 4 1.3 Contributions of the Thesis 4

    2 Averaging Level Control 7 2.1 Introduction 7 2.2 Mathematical Models 8

    2.2.1 Process Model 8 2.2.2 Disturbance Model 9 2.2.3 Flow Smoothness/Roughness Model 11

    2.3 Historical Perspective and Literature Review 13 2.3.1 P/PI Detuning 13 2.3.2 Deterministic Constrained Optimization 16 2.3.3 Constrained Minimum Variance Controller 17

    3 Linear Optimal Controllers 19 3.1 Introduction 19 3.2 State-Space Method 20 3.3 Evaluation of Variances 22 3.4 Stationary Input 23


    3.4.1 System Equation and Performance Index 23 3.4.2 State-Space Solution 24 3.4.3 Noise-Free Observer 25 3.4.4 Controller Parameterization 26 3.4.5 Comparison to PI Controller 29

    3.5 Minimum Variance of Outlet Flow 32 3.5.1 Comparison to P Controller 33

    3.6 Random Walk Input 35 3.6.1 Effects of Damping Factor 37

    3.7 Summary 38 3.8 Mathematical Details 39

    3.8.1 Solution of Section 3.4.2 39 3.8.2 Rv(y) and Rv(u) in 3.4.4 39 3.8.3 Rv(y) and Rv(u) in 3.4.5 41

    4 Downstream Processes 43 4.1 Introduction 43 4.2 Downstream Process Model 44 4.3 State-Space Solution 47 4.4 Performance Comparison 49

    4.4.1 Effects of v(t) weight 50 4.4.2 Var[y2] reduction versus ωp 52

    4.5 Summary 54

    5 Markov Jump Systems 55 5.1 Introduction 55 5.2 Jump Markov Systems 56 5.3 Holding Time Density 57 5.4 State Distribution 59

    5.4.1 Entry Probabilities 63 5.4.2 Examples 63

    5.5 State Moments of Linear Jump Systems 71 5.5.1 Example 73

    5.6 Linear Quadratic Optimal Control 75 5.7 Summary 77 5.8 Appendix - Mathematical Details 78

    5.8.1 Covariance of x(t) 78 5.8.2 Linear Quadratic Control 80


  • 6 Broke Storage Tank Level Control 84 6.1 Introduction 84

    6.1.1 Paper Machine 84 6.1.2 Benefits of Wet End Stabilization 85 6.1.3 Broke Handling 87 6.1.4 Literature Review 87

    6.2 Characteristics of Broke Flow 88 6.2.1 Spectrum of Broke Flow 89

    6.3 Flow Smoothness Requirements 92 6.4 Linear Optimal Controller 92

    6.4.1 Jump System Theory 94 6.4.2 Implementation Issues 95

    6.5 Nonlinear Controllers 95 6.5.1 Minimum Overflow Probability Controller 96

    6.6 Optimal Change of u(t) 100 6.6.1 Asymptotic Analysis 101 6.6.2 Linear u(t) 103

    6.7 Summary 103 6.8 Maximum Principle 104

    7 Simulation With Industrial Data 110 7.1 Introduction 110 7.2 Data 110

    7.2.1 Process 110 7.2.2 Data Set C 112 7.2.3 Incoming Flow Estimate 113 7.2.4 Statistical Analysis 115 7.2.5 Probability Plots 117 7.2.6 Chi-Square Test 118 7.2.7 Summary 119

    7.3 Controller Design 120 7.3.1 QMOPC 120 7.3.2 Linear Controller 122

    7.4 Simulation 124 7.4.1 Manual Control 125 7.4.2 Linear Controller 125 7.4.3 QMOPC 127

    7.5 Summary 127


  • 8 Conclusions 130 8.1 Summary 130 8.2 Future Research 132

    Bibliography 134


  • List of Tables

    7.1 Chi-square test of break and normal durations 119

    7.2 Overflow probabilities for QMOPC and MOPC 120

    7.3 Overflow probabilities for QMOPC 121

    7.4 The controller parameter for

  • List of Figures

    1.1 Averaging level control problem 1

    2.1 Storage tank material balance 8

    3.1 Parameters of CL vs. K 27

    3.2 Gain of CL for K = 0.1, 1 and 10 28

    3.3 Magnitude of S for CL and CPI 28

    3.4 Comparison of CL and PI controller with η = 0.7 30

    3.5 Comparison of CL and the PI controller with various η 31

    3.6 The ratio of the variance of u(t) between CPD and Cp. x-axis is Rv(y) 34

    3.7 Performance change by η for PI controller 37

    4.1 Downstream process and Fu 44

    4.2 Gain plot of the downstream process transfer function 46

    4.3 System for tank level control loop and downstream process 47

    4.4 Gains of Co and CL 51

    4.5 Magnitude of S for CD and CL 51

    4.6 Variance reduction of y2(t) vs variance of v(t) 52 4.7 Effects of ωp on variance of y2(t) 52

    4.8 Effects of ωp on variance of y2(t) with normalized downstream process cut-off

    frequency 53

    5.1 Symmetrical binary signal d(t) is filtered by a low-pass filter. 64

    5.2 Probability densities of filtered binary signal for β = 0.5, 1, and 2 65

    5.3 Probability density of filtered binary signal for β = 10 66


  • 5.4 θ1, θ2 and

    Acknowledgements

    During the research of this thesis, I have been fortunate to receive help from many people, and I

    would like to express my gratitude to them.

    First, to my supervisors, Prof. Guy Dumont, Prof. Michael Davies, and Dr. Bruce Allison.

    They warmly encouraged me to take up the challenge, and provided me with steady guidance

    throughout my study. Their patience with, and critiques of, my ever-changing ideas are very much

    appreciated. Dr. Allison introduced me to a fruitful area of averaging level control, and kick-

    started the research. It was one of the highlights of my student life to exchange ideas with him on

    many subjects. Also thanks to Dr. Maryam Khanbaghi for introducing Markov jump systems and

    sharing her knowledge on broke tank level control with me.

    To the Network of Centres of Excellence, Mechanical Wood-Pulps, the Science Council of

    British Columbia, and the Pulp and Paper Institute of Canada, I am grateful for their financial assistance.

    I would like to thank Messrs. Rick Harper and Leo Pelletier of NorskeCanada for supplying

    the industrial data. Without their help, this thesis could not have been completed.

    To my fellow graduate students, Dr. Junqiang Fan, Stevo Mijanovic, Manny Sidhu, and Dr.

    Michael Chong Ping, I would like to express my sincere gratitude for their friendship, and kindness

    to share knowledge with me. I appreciate that they treated me as their peer even though I am much

    older than them. I felt that I was young again. I thank Dr. Leonardo Kammer of PAPRICAN for

    his critique and advice on my work.

    The library services at UBC and the Pulp and Paper Centre were essential to my research,

    and I thank the people there. Special thanks goes to Ms. Reta Penco for helping and teaching me in

    literature searches.

    I am fortunate to have many friends who have supported me with their warm friendship. To my

    old MacBlo gang, Glenn Swanlund, Eric Lofkrantz, and Ted Carlson, I am grateful for their continuing


  • friendship. It is comforting to know that my old friends in Japan are always ready to help me. I

    would like to thank Dr. Toshio Kojima and Toshinori Watanabe for their encouragement.

    I would like to thank two of my friends, Bruce and Dr. Alf Isaksson, who have been an inspiration

    for me to pursue the study of control engineering. Bruce rekindled my enthusiasm for control

    theory when he implemented digester level control at Port Alberni more than 20 years ago. Alf's

    enthusiasm and dedication to control engineering is contagious.

    I would like to thank my supervisor at the University of Tokyo, where I studied 35 years ago as

    a master student, Prof. Mitsuru Terao who instilled the spirit of control engineering which I have

    kept ever since.

    Prof. Karl Astrom, after his retirement, told us that he had the best times in his life during his

    graduate study and his retirement. No wonder I enjoyed myself so much, because I was a graduate student

    and a retiree at the same time! However, one drawback of being an old graduate student is that

    I cannot share the joy of completing the research with my parents, whose memory sustained me

    during tough times.


  • Chapter 1

    Introduction and Overview

    1.1 Background and Motivations

    The research presented in this thesis started with a goal of devising a practical and effective auto-

    matic control scheme for broke storage tank level, which was identified as the most influential and

    difficult process for stabilizing a paper machine wet end. The importance of stabilizing a paper

    machine wet end and the role of the broke storage tank are detailed in Chapter 6. It was soon

    realized that the broke storage tank level control problem is a special case of the averaging level

    control problem depicted in Figure 1.1. Here, the incoming flow to the storage tank Fd(t) changes

    randomly and the tank level y(t) must be kept between the high limit yH and the low limit yL.

    Figure 1.1: Averaging level control problem. A storage tank receives a randomly varying inflow Fd(t) (disturbance); its outlet flow Fu(t) feeds the downstream process, and the level must stay between yH and yL.

    The goal of averaging level control is to manipulate the outlet flow Fu(t) to minimize its adverse ef-

    fects on the downstream process(es), while at the same time keeping y(t) between yH and yL. The

    broke storage tank level control problem has a very specific input disturbance, but the downstream

    processes are diffuse and difficult to pin down by a simple mathematical model.

    Before embarking on the task of designing a broke storage tank level controller, available tech-

    niques for averaging level control, stochastic and deterministic, were reviewed. Averaging level

    control can benefit many chemical processes that include surge tanks. Also, many areas in the pa-

    per machine wet end can be improved by proper applications of stochastic averaging level control.

    Chapters 2, 3 and 4 treat the stochastic averaging level control problem in a unified approach with

    optimization methods emphasizing the importance of the input disturbance characteristics and the

    downstream processes.

    The fact that the input disturbances to the broke storage tank can be modelled by a continuous

    time Markov process (Khanbaghi, 1998) motivated the study of Markov jump systems. Although

    it was found that jump linear quadratic theory as applied to the broke storage tank level control

    problem has no advantages over linear quadratic optimal control theory, a new theorem and nu-

    merical methods for evaluating the state variances and the state distributions are devised. The new

    methods are utilized in the synthesis and analysis of control schemes in Chapter 7.

    With the techniques of stochastic averaging level control and Markov jump system theory,

    linear and nonlinear controllers for the broke storage tank level control problem are designed and

    analyzed. The new control schemes are tested in simulations with industrial data.

    1.2 Overview and Structure of the Thesis

    The thesis has three main parts: (1) stochastic averaging level control, (2) Markov jump systems,

    and (3) broke storage tank level control. The first two parts provide technical tools necessary to

    attack the problem treated in the third part.

    Averaging Level Control

    Three consecutive chapters, 2, 3, and 4, cover the stochastic averaging level control problem. In Chapter

    2, the stochastic averaging level control problem is defined. The importance of the input distur-

    bance characteristics and downstream processes is highlighted. Existing techniques for determin-

    istic and stochastic averaging level control are reviewed. The popular random-walk assumption on


    the input disturbances is scrutinized. A simple first-order low-pass disturbance model is preferred.

    Chapter 3 treats the problem with linear quadratic control methods. A state-space approach

    with noise-free observer is utilized. The small dimension of the problem makes it possible to obtain

    solutions in closed form. Thus, straightforward algorithms and good perspectives are obtained.

    In Chapter 4, the downstream process is included explicitly in the problem formulation. The

    performance of the resulting controller is compared with ones designed without downstream pro-

    cesses. A convenient parameterization is devised to streamline the performance comparison.

    Markov Jump Systems

    Chapter 5 presents new theorems on state probability distributions and their applications to variance evaluation and state distribution calculations. This chapter is self-contained and

    can be read independently from the rest of the thesis. The new numerical

    method for calculating the state distribution is utilized in designing controllers for broke storage

    tank level.

    Broke Storage Tank Level Control

    Chapter 6 provides motivation and background for broke storage tank level control. Difficulties

    arise from the input disturbance characteristics, which are quite different from those encountered

    in usual averaging level control problems, and defining appropriate outlet flow smoothness re-

    quirements. A new nonlinear control scheme called the minimum overflow probability controller

    (MOPC) is introduced. Also, the optimal linear controller is designed for comparison.

    In Chapter 7, industrial data collected from a paper machine where the broke storage tank level

    is controlled manually, is used to estimate the incoming disturbance Fd(t) from a six-month history

    of the broke storage tank level y(t) and outlet flow Fu(t). The two controllers designed in Chapter

    6 are tuned for Fd(t) and tested by simulation. Comparisons are made among the two controllers

    and manual control.


    1.2.1 Format and Notation

    As the three parts of this thesis deal with subjects more or less independent from each other,

    overviews and literature reviews are placed at the beginning of each part, not at the beginning

    of the thesis. Detailed mathematical derivations are included at the end of each chapter rather

    than at the end of the thesis. This is a compromise between two goals: that the thesis should be

    comprehensive and that mathematical details should not impede the smooth logical flow.

    Most of the mathematical symbols are standard in the literature. The same symbol is used

    with different meanings, depending on the context, if there is no danger of confusion. Economy in

    notation is obtained by using the same symbol with different arguments to represent a time signal

    in continuous time, discrete time, and in the Laplace transform. For instance, y(t) denotes a process

    variable in continuous time, and y(s) the Laplace transform of the same variable. For discrete time,

    the standard integer symbols k etc. are used, i.e. y(i) denotes y(t), t = iΔt, Δt being a sample

    period. The notation ":=" is used to signify "is defined to equal". Blackboard bold letters are used for probability and expectation: P[event]

    means the probability of an event and E[X] means the expectation of a random variable X.

    All the controllers in the thesis are meant to manipulate the outlet flow Fu(t) to control the

    tank level. The formal controller output is denoted with u(t), which may be u(t) = Fu(t) or u(t) =

    Fu(t) − E[Fd(t)]. In either case, Var[Fu(t)] = Var[u(t)] and Ḟu(t) = u̇(t). The rate of change in the

    outlet flow Fu(t) appears often, and is given a special symbol v(t):

    v(t) := d/dt Fu(t) = d/dt u(t).

    This convention is used throughout the thesis, but the definition of v(t) reappears a few times as a

    reminder. Sometimes, u(t) is used as the manipulated variable of an abstract problem, such as

    ẋ(t) = Ax(t) + Bu(t). The meaning of u(t) should be clear from the context.

    1.3 Contributions of the Thesis

    Averaging Level Control

    The stochastic averaging level control problem is put on a solid theoretical basis by iden-

    tifying the importance of input disturbance characteristics and the outlet flow smoothness


    definition. Exploiting the continuous time state-space approach, clear perspectives are ob-

    tained for synthesis and analysis of the linear optimal controllers.

    The random walk input disturbance assumption is given a detailed analysis. Advantages of

    stationary disturbances are presented. The optimal linear controller for a first-order low-pass

    input disturbance, which is a phase lag network, is synthesized.

    A new problem formulation to include the downstream process explicitly is presented. This

    helps to clarify the meaning of outlet flow smoothness. With a simplified downstream pro-

    cess model, practical controllers are obtained.

    A proper state-space formulation is presented when the input disturbance is a random walk.

    The key is to make the state vector x(t) stationary, so that Var[x(t)] exists.

    Markov Jump System

    A new theorem on the state probability distributions is given in Chapter 5, which leads to

    new methods for evaluating the mean and the variance of the state of a linear jump system,

    and a new reliable numerical method to calculate the state distributions. Unlike approaches

    based on simulations, the new method does not have time-discretization approximations

    nor randomness due to computer generated random numbers. These results are utilized to

    evaluate the overflow probabilities of controllers.

    Linear quadratic optimal theory for jump systems, where the state has discontinuity, is ex-

    tended to include load-change disturbances in a natural way. This formulation provides a

    more flexible means to include Fu(t) in the performance index.

    Broke Storage Tank Level Control

    The actual incoming flow to a broke storage tank Fb(t) was estimated from the tank level

    and outlet flow measurements. This is the first published study of its kind. It was found that

    break and normal durations follow the exponential distribution quite well. This is the second

    confirmation of the published results of Khanbaghi et al. (1997). However, Fb(t) is not truly

    binary as was assumed by the previous studies based on material balance.


    A practical nonlinear controller with hard constraints on the outlet flow Fu(t) and its rate of

    change Fu(t) is presented. The controller minimizes the overflow probability and guarantees

    that the level will stay above the low limit yi. The new method for calculating the state dis-

    tribution of Markov jump systems makes the design process straightforward without relying

    on simulations. The nonlinear controller is compared to the optimum linear controller and

    manual control with the industrial data, and is found to control the tank level better and with

    a smaller maximum outlet flow.

    The optimal change strategy of Fu(t), where Fu(t) is changed from its initial value u0 to the

    final value u1 in a given time T, is investigated. The performance index is the integrated

    square error of the downstream process variable. With a simple downstream process model,

    the problem is solved with the Maximum Principle's singular control. A closed form solution

    is obtained, which makes it possible to compare linear change strategy to the theoretical

    optimum.

  • Chapter 2

    Averaging Level Control

    2.1 Introduction

    Averaging level control refers to liquid level control of storage tanks, where the objective is to

    keep the outlet flow as smooth as possible against the fluctuating inlet flow, while at the same time

    keeping the tank level within high and low limits (Figure 1.1). The term "averaging" indicates that

    the average of the tank level is significant rather than the level itself. For example, Shinskey (1967)

    (page 147) states, "This application is often referred to as 'averaging level control,' because it is

    desired that the manipulated flow follow the average level in the tank". The reason that the outlet

    flow Fu(t) should be smooth is to minimize its adverse effect on downstream processes. This is

    quite different from "normal" regulatory control problems, where the goal is to keep the controlled

    process variable y(t) close to its (constant) target yr. Even for regulatory problems, excessive

    movements in the manipulated variable u(t) are usually avoided for a number of reasons, including

    to avoid saturation in u(t) and to make a controller robust against modelling errors. However, smoothing

    of u(t) is never a principal goal of control. An underlying idea for investigating the averaging level

    control problem in the thesis is constrained optimization, where the "best" controller is one that minimizes some performance index J subject to a constraint condition g(x, u) < 0. This idea

    provides systematic procedures to design controllers for given situations, which are characterized

    by J and g. Also this methodology makes it possible to compare many published control schemes

    in a consistent and unified way.



    2.2 Mathematical Models

    Optimization problems are tackled either analytically or numerically. In either case, mathemat-

    ical models for the problem are necessary. Mathematical models for practical problems are al-

    ways a compromise between fidelity to the target objects and mathematical tractability. To judge

    the fidelity and usefulness of the mathematical models, intimate knowledge of the subject area is

    necessary. Thus, construction of practically useful mathematical models is one of the important

    activities that distinguish control engineering from applied mathematics. The three main models

    needed for averaging level control are:

    1. disturbance model

    2. process model

    3. flow smoothness or roughness model

    In the following sections, the mathematical models for averaging level control are discussed in

    detail. This is important because historically mathematical models for the input disturbance and the

    flow roughness have been treated rather indifferently, although it will be found that they influence

    controller design greatly.

    2.2.1 Process Model

    Figure 2.1 shows a simplified diagram of a material balance around a storage tank, where V(t)

    denotes the liquid volume in the tank, and y(t) the tank level. Fd(t) denotes the incoming flow, and Fu(t) the outgoing flow, both expressed in volume/time.

    Figure 2.1: Storage tank material balance. V(t) is the volume of liquid in the tank, and y(t) is the level.

    Assuming that the liquid is incompressible, we obtain the following equation for V(t):

    V(t) = V(0) + ∫₀ᵗ [Fd(t') − Fu(t')] dt'.  (2.1)

    Dividing both sides of (2.1) by the cross-sectional area of the tank, A, yields

    y(t) = y(0) + Kp ∫₀ᵗ [Fd(t') − Fu(t')] dt',  (2.2)

    where Kp = 1/A is the process gain. This simple integrator model works well in practice, and is

    used throughout the thesis. When the cross-sectional area varies with height, Kp is not constant,

    and (2.2) becomes nonlinear. However, linearity can be kept by treating the problem in terms of (2.1).

    The level high limit is replaced by the volume high limit. A storage tank for incompressible

    material is a rare example in process control of a simple mathematical model derived from the

    first principles that works well. In a physical process, many dynamical subsystems, including

    manipulating valves, and measurement sensors with noise and delay, are involved. However, they

    can be ignored, at least for liquid storage tanks in a paper machine wet end because the execution

    time of level control can be slower than other loops. For instance, a broke storage tank level can be

    controlled using control intervals of 30 seconds to 5 minutes, while most pressure or flow loops are

    executed at 1 second intervals. Therefore, these dynamics can be ignored by the level controller

    that is executing every 30 seconds to a few minutes. This fact is generalized in Marlin (1995), page 586:

    The process has no dead time and a phase lag of only 90°, indicating that feedback

    control would be straightforward for tight level control. This is actually the case in

    many systems, since the sensor and valve dynamics are usually negligible.

    The system parameter Kp can be obtained with good accuracy from experiments or the physical

    dimensions of the tank. Thus, robustness problems due to process uncertainty are not a concern.
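    To make the integrator model (2.2) concrete, the following sketch (my own illustration, not from the thesis; the tank area, sample interval and flows are made-up values) simulates the level response with a simple Euler discretization.

```python
import numpy as np

def simulate_level(y0, Fd, Fu, A, dt):
    """Euler discretization of y(t) = y(0) + Kp * integral of [Fd - Fu] dt, with Kp = 1/A."""
    Kp = 1.0 / A                      # process gain = 1 / cross-sectional area
    y = np.empty(len(Fd) + 1)
    y[0] = y0
    for k in range(len(Fd)):
        y[k + 1] = y[k] + Kp * (Fd[k] - Fu[k]) * dt
    return y

# Hypothetical numbers: 100 m^2 tank, 30 s interval, constant 0.02 m^3/s flow mismatch.
dt, A = 30.0, 100.0
Fd = np.full(200, 0.52)               # inlet flow, m^3/s
Fu = np.full(200, 0.50)               # outlet flow, m^3/s
y = simulate_level(2.0, Fd, Fu, A, dt)
print(y[-1])                          # 3.2: the level drifts up by 1.2 m over 200 steps
```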

    2.2.2 Disturbance Model

    The role of the disturbance model was rather neglected in the early optimum control literature.

    However, its importance, especially in process control, is now well accepted. Still this is not

    always the case as stated in Harris and MacGregor (1987):



    All optimal controllers (i.e., those optimizing a performance index) must be based

    on a specified disturbance (or set-point change) model. Often in the design of such

    controllers the disturbance model is not explicitly stated, but rather indirectly implied

    in the problem formulation.

    In averaging level control, very few papers have paid detailed attention to the input disturbance.

    The term "averaging" itself suggests some stationary stochastic process because non-stationary

    process cannot be averaged. However, many papers accept the need to bring back the tank level to

    a setpoint to maximize the surge capacity. This suggests that the incoming flow is non-stationary

    since if Fd(t) is stationary there is no need to bring back the tank level to its setpoint, as stated in Shinskey (1988). Probably the authors of these papers envisioned the input disturbance as a

    combination of stationary rapid fluctuations, which can be averaged out, and less frequent step-type

    load changes. This kind of disturbance poses interesting control problems that involve statistical

    detection of abrupt changes or hidden Markov processes. However, this is left as one of the future

    endeavours.

    Most of the performance analyses are based on responses to step disturbances, probably be-

    cause of mathematical tractability. If the input disturbance is step-like and stays at a constant

    value long enough for the closed loop system to settle down to a quiescent state, then the analyses

    based on the step responses would be valid. However many real disturbances do not have this

    characteristic and care must be taken in interpreting analyses based on the step assumption.

    In the paper machine wet end, disturbances for storage tanks are mostly stochastic. Most of

    the studies in a stochastic context assume that the input disturbance is a random walk, which is

    non-stationary. The random walk is a popular choice for process control because it introduces

    integrating action in the controller. However, in reality, all disturbances are bounded and most

    of them can be deemed stationary if observed for a long enough time. In this thesis, stationary

    disturbance models are preferred because there is no strong need to eliminate offset for averaging

    level control. A simple stationary process with first-order dynamics is used as a disturbance model.

    Let d(t) denote the input disturbance deviation variable d(t) := Fd(t) − E[Fd(t)] and w(t) white noise.

    It is assumed that d(t) is the output of a first-order low-pass filter with the cutoff frequency ωd:

    d(s) = ωd/(s + ωd) · w(s).  (2.3)


    This model was used because most disturbances have some kind of roll-off frequency. Also the

    input disturbance model for broke tanks, which is a binary-valued random process, can be charac-

    terized with respect to the frequency power spectrum with this model.
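    As a rough illustration of this disturbance model (again my own sketch, not the thesis's code), a signal with the first-order spectrum (2.3) can be generated in discrete time by passing white noise through a zero-order-hold discretization of the filter; the noise scaling below is arbitrary.

```python
import numpy as np

def lowpass_disturbance(omega_d, sigma_w, dt, n, seed=0):
    """Simulate d_dot = -omega_d*d + omega_d*w with w approximated by discrete white noise."""
    rng = np.random.default_rng(seed)
    a = np.exp(-omega_d * dt)                 # discrete pole of the first-order filter
    d = np.zeros(n)
    for k in range(1, n):
        d[k] = a * d[k - 1] + (1.0 - a) * sigma_w * rng.standard_normal()
    return d

d = lowpass_disturbance(omega_d=0.01, sigma_w=1.0, dt=30.0, n=5000)
print(d.std())    # coloured (first-order low-pass) noise with cut-off 0.01 rad/s
```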

    2.2.3 Flow Smoothness/Roughness Model

    All the literature in averaging level control states that one of the control objectives is to keep the

    outlet flow smooth. However, the mathematical definition of flow smoothness varies from author

    to author. Furthermore, some papers contain no mathematical definition of flow smoothness at

    all, so it must be guessed from context. To apply optimization based methods, which minimize

    some performance index, it is more convenient to express flow "roughness" or "variability",

    denoted with r(u(t)). In deterministic cases, the controller tries to minimize r(u(t)) subject to the level constraints. In stochastic cases, a natural choice for the flow roughness is the mean value of

    r(u(t)), E[r(u(t))]. Since the purpose of outlet flow smoothing is to minimize its adverse effect on the downstream

    processes, the flow roughness indicator should be selected taking into account the downstream

    process. As shown in Section 4.4, for downstream processes that can be approximated by a first-

    order system, Var[u̇(t)] = Var[v(t)] works well if the downstream process time constant is much

    shorter than that of the closed loop tank level system. If the downstream process mixes various

    material streams, and it is desired to keep mixing ratios constant, then Var[u(t)] rather than Var[v(t)]

    might be the better choice. However, if the input disturbance is non-stationary, Var[u(t)] does not

    exist, and cannot be used. For broke tank level control, Bonhivers et al. (1999a) and Dabros et al.

    (2003) expand the idea of downstream process to a certain paper quality index.

    Crisafulli and Peirce (1999) contains an excellent concrete example of the downstream process

    and industrial data. The process is a raw cane sugar factory, where crushed cane fibre and water,

    called "mixed juice", is heated and fed to a clarifier to extract clear juice. This clear juice is

    subsequently crystallized to produce the final product, sugar. The mixed juice goes through two

    surge tanks connected in series before it is fed to the clarifier. The paper reports a new feedforward

    control scheme to maximize the combined surge capacity of the two tanks resulting in smooth

    inlet flow and reduced turbidity of the clear juice. The paper says that, "the main technical control


    objective was to minimize the rate of change of the secondary mixed juice flow." Since the inlet

    flow to the surge tank fluctuates randomly, E[|v(t)|], or the more mathematically tractable E[v(t)²] =

    Var[v(t)], would be an appropriate mathematical measure for flow roughness in this case.

    MRCO

    The maximum rate of change in the outlet flow (MRCO) is mathematically defined as

    MRCO := max_t |v(t)|.
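    Both roughness measures are straightforward to evaluate from a sampled outlet-flow record; the short sketch below (illustrative only, with Δt the sample period and made-up data) computes the MRCO and Var[v] by first differencing.

```python
import numpy as np

def flow_roughness(u, dt):
    """Return (MRCO, Var[v]) for sampled outlet flow u(k), k = 0..N-1."""
    v = np.diff(u) / dt          # v(t) = du/dt approximated by first differences
    mrco = np.max(np.abs(v))     # MRCO := max |v(t)|
    var_v = np.var(v)            # stochastic roughness indicator Var[v(t)]
    return mrco, var_v

u = np.array([0.50, 0.51, 0.53, 0.52, 0.55])   # made-up outlet flow samples, m^3/s
print(flow_roughness(u, dt=30.0))
```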


    2.3 Historical Perspective and Literature Review

    It was recognized from the early stage of process control technology development that surge tanks

    require different control objectives from normal regulation loops, and many papers and sections of

    books have been written for averaging level control since the 1960s. They may be categorized into

    the following three groups: (1) P/PI Detuning; (2) Deterministic Constrained Optimization; and (3)

    Stochastic Constrained Optimization. Each approach is discussed in detail in the sections below.

    2.3.1 P/PI Detuning

    Studies published up to the early 1980's constitute this group. The ideas expressed in these papers

    reflect control hardware prevalent at that time, pneumatic or electronic single loop controllers.

    They are summarized as follows:

    1. no mathematical definitions of the input disturbance nor outlet flow smoothness are given.

    2. performance analyses are mostly non-mathematical, depending heavily on visual in-

    spection of responses to a step disturbance.

    3. controller synthesis is mostly ad hoc or heuristic.

    This lack of clear definitions, both of the input disturbance and of the meaning of flow smoothness,

    leads to a number of different schemes claiming good performance.

    Proportional Only Control

    This is advocated by Shinskey in Shinskey (1988) and Shinskey (1994), although originally he

    preferred nonlinear controllers (Shinskey, 1967). Since there is no integrating action, the tank

    level tends to be away from its setpoint (off-set). To some authors, this off-set is a shortcoming of

    the P controller. However, when the input disturbance is stationary or bounded, the off-set is not

    disadvantageous. This point is clearly stated in (Shinskey, 1988): "Not only is there no need to

    return the level to some set point such as 50 percent, but this practice actually reduces the effective

    capacity of the vessel." Although the meaning of flow smoothness is not stated clearly, it appears

    to be defined as a smaller variation of u(t). It is shown later that the P controller is close to optimal

    when the input disturbance is a first-order low-pass stationary process and the flow roughness

    is represented by Var[u(t)]. Thus, if the input disturbance is such that the downstream process


    operates better with a smaller Var[u(t)], then the P controller performs well. However, care must

    be taken since for some downstream processes, Var[v(t)] is a better indicator of the flow roughness.

    Then a phase-lag network or low-pass filter is a better choice.

    PI Tuning Rules

    In Cheung and Luyben (1979b), tuning rules for PI controllers are given based on analyses of

    step responses. The design parameters are the maximum peak height (MPH) and the MRCO. No

    justification for using the MRCO as a flow roughness indicator was given. It seems that the MRCO

    was selected for its mathematical tractability in step response analysis. The tuning procedure in

    the paper determines the PI controller settings from two design specifications, the MPH and the

    MRCO. This procedure potentially leads to a very lightly damped system depending on how the

    MPH and the MRCO are selected. A better procedure would be to fix the damping factor η to some

    reasonable value between 1 and 0.7, specify the desired MPH, and accept the resulting MRCO as

    a price to pay for obtaining the target MPH. This is done in Marlin (1995) with η = 1.

    Nonlinear PI Controllers

    In Shunta and Fehervari (1976), two kinds of nonlinear PI controllers were proposed, one with

    two settings switched by the level error (when the level error is small, slower tuning makes the

    outlet flow change more smoothly, and faster tuning kicks in when the level error exceeds a limit

    to prevent level limit violation), and the other called "wide range" which changes the proportional

    gain and reset time continuously. No tuning guidelines were given in the paper. To estimate the

    behaviour of a nonlinear controller against random input disturbance is a difficult problem even

    if the input disturbance distribution is available. Tuning of these controllers must be done on a

    trial-and-error basis. It seems these controllers did not gain much popularity.

    The basic weakness of nonlinear schemes based on the level error is that a large step-like input

    disturbance is not detected until the level error exceeds the threshold. It is best to act immediately

    against a large input disturbance to minimize any flow roughness indicator based on v(t) = u̇(t).

    As discussed later, the model predictive control (MPC) methods address this problem properly.


    PL Level Controller

    A Proportional-Lag (PL) level controller was proposed in Luyben and Buckley (1977) and further

    studied in Cheung and Luyben (1979a). It is claimed that their approach maintains most of the

    desirable features of proportional-only control but eliminates off-set. It is shown below that the PL

    control is a PI controller when measurement noise or delay is negligible. In the PL controller, the

    outlet flow u(t) is a sum of a proportional term of the level error and a filtered inlet flow w(t):

    u(t) = KL e(t) + w(t),  (2.4)

    where KL is the proportional gain and e(t) is the level error (y(t) − yr). The filter is a first-order low-pass

    filter KF/(TF s + 1) (Cheung and Luyben, 1979a). In the following analysis, KF is set to 1 to avoid an off-set in the level. Also there is no advantage in setting KF ≠ 1. Without loss of generality,

    it is assumed that the level setpoint yr is zero, so e(t) = y(t). Using the same symbols for Laplace

    transforms of time signals, we get

    u(s) = KL y(s) + w(s)/(TF s + 1).  (2.5)

    The process is

    y(s) = (Kp/s)[w(s) − u(s)].  (2.6)

    Then, the transfer function from w to y, GPL, is

    GPL(s) := y(s)/w(s) = Kp TF s / [(TF s + 1)(KL Kp + s)] = s / [Kp⁻¹ s² + s(KL + (Kp TF)⁻¹) + KL TF⁻¹].  (2.7)

    Now, we close the loop with a standard PI controller Kc(1 + 1/(Tr s)), where Kc is the proportional gain and Tr the reset time. Then, the transfer function from w to y, GPI, is

    GPI(s) := y(s)/w(s) = s / [Kp⁻¹ s² + Kc s + Kc Tr⁻¹].  (2.8)

    From (2.7) and (2.8), for a given KL and TF, we can make GPL = GPI by selecting Kc and Tr as follows:

    Kc = KL + 1/(Kp TF),  (2.9)

    and

    Tr = TF Kc / KL = TF + 1/(Kp KL).  (2.10)

    Thus, the PL controller has no special advantages over the PI controller. The performance may be

    different from the PI controller if dead-time or large measurement noise is involved.
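    Equations (2.9) and (2.10) amount to a direct recipe for translating PL settings into the equivalent PI settings; a minimal sketch (function name and numbers are mine, for illustration only):

```python
def pl_to_pi(KL, TF, Kp):
    """Equivalent PI settings (2.9)-(2.10) for a PL controller with gain KL and filter time TF."""
    Kc = KL + 1.0 / (Kp * TF)        # (2.9)
    Tr = TF + 1.0 / (Kp * KL)        # (2.10), equivalently TF * Kc / KL
    return Kc, Tr

# Hypothetical values: Kp = 0.01 (1/m^2), KL = 2, TF = 600 s
print(pl_to_pi(KL=2.0, TF=600.0, Kp=0.01))   # -> (2.1666..., 650.0)
```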


    2.3.2 Deterministic Constrained Optimization

    Supervisory computers and computer based distributed control systems (DCS) became widely

    available in the 1980s, and computation intensive control algorithms such as model predictive

    control (MPC) became practical as a result. Papers reviewed in this section are characterized by

    their handling of level constraints in a deterministic context (step input disturbance).

    DMC

    In Cutler (1982), the Dynamic Matrix Control (DMC) algorithm was applied for averaging level

    control. The paper does not give mathematical details, but the comments here can be inferred.

    In normal DMC, the future trajectory of the controlled variable in discrete time, y([k, k + Np]) (Np, a

    positive integer, is the prediction horizon) is compared to the desired trajectory yr([k, k + Np]), and the squared sum of the error, Σ_{i=1..Np} (y(k+i) − yr(k+i))², as a function of the control increment Δu(k), is used in the performance index to be minimized. For averaging level control, the desired trajectory is set

    as a funnel shaped region centred around the level setpoint. The tank level error is deemed to be

    zero if it is within the funnel shaped region. Thus, the controller makes no move (Δu(k) = 0) if the

    level error is inside the funnel shaped region, or makes a move just large enough to bring the level back into

    the region if the level error is outside. The paper does not mention level constraint handling, which

    is one of the strengths of DMC. It reported good filtering characteristics of the algorithm when

    the input disturbance is small so that the predicted level stays inside the funnel shaped region.

    Although a number of simulation results were included, actual applications of this DMC algorithm

    to averaging level control problems have not been reported (McDonald and McAvoy, 1986).

    Optimal Predictive Controller

    In McDonald and McAvoy (1986), the averaging level control problem was studied in continuous

    time for a step input disturbance of known amplitude. The control objective is to minimize the

    MRCO while keeping the level within constraints. Due to the simplicity of the plant dynamics

    and the MRCO criteria, analytical solutions, which are nonlinear, were obtained. The amplitude of

    the input disturbance is estimated from the level and the outlet flow, and the implemented control

    algorithm takes a form of predictive control, and thus, was named the optimal predictive controller


    (OPC). The OPC does not have the weakness of the level error based nonlinear control schemes.

    The authors thought that it was necessary to bring the level to its setpoint, and combined a PI

    algorithm with the nonlinear solutions. The matter of how quickly the level should be brought

    back to the setpoint can only be discussed with stochastic input disturbances. The MRCO is

    difficult to use in a stochastic scenario. However, the paper treated the input disturbance as steps

    of undefined length. This makes the tuning procedure ad hoc, heavily depending on simulations.

    The contribution of the paper is to put the averaging level control problem in the MPC framework.

    MPC

    The problem discussed (above) by McDonald and McAvoy (1986) was treated in discrete time in

    Campo and Morari (1989). As in continuous time, if u(t) or Δu(t) is not constrained, analytical

    solutions are available. The need to bring the level back to its setpoint was acknowledged without

    discussion. The problem was posed as a finite horizon constrained optimization. To bring the level

    to its setpoint, a terminal condition that the predicted level at the end of the control horizon, y(te), is equal to the setpoint yr was included. This makes the control algorithm cleaner than adding

    integral action in ad hoc way. The horizon length becomes a tuning parameter. The algorithm is

    implemented as a receding horizon controller. With the finite control horizon, the problem can be

    solved by linear programming when |u(t)| and |Δu(t)| are constrained. As was discussed for the case of OPC, the proper selection of the control horizon requires

    stochastic arguments, but the paper treats the problem in a purely deterministic context. Neverthe-

    less, the paper provides the most complete treatment of the problem for step-like input disturbances

    and the MRCO criterion. When |u(t)| and |Δu(t)| are not constrained, the control algorithm is implementable on any DCS, as was reported in Allison and Khanbaghi (2001).
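    To make the linear-programming formulation concrete, the sketch below is my own minimal illustration (not Campo and Morari's exact algorithm): over a horizon of N steps with a known inlet-flow estimate, it minimizes the largest move |Δu| subject to level limits and a terminal condition y(N) = yr, using scipy.optimize.linprog.

```python
import numpy as np
from scipy.optimize import linprog

def min_mrco_plan(y0, yr, yL, yH, d, Kp, dt, u_prev, u_max):
    """Plan outlet flows minimizing max |du| s.t. level limits and terminal y(N) = yr."""
    N = len(d)
    g = Kp * dt                                  # level change per unit flow per step
    c = np.zeros(N + 1); c[-1] = 1.0             # variables: u(0..N-1) and t = max |du|
    A_ub, b_ub = [], []
    cum_d = np.cumsum(d)
    for k in range(1, N + 1):                    # y(k) = y0 + g*(sum d - sum u)
        row = np.zeros(N + 1); row[:k] = -g
        A_ub.append(row);  b_ub.append(yH - y0 - g * cum_d[k - 1])   # y(k) <= yH
        A_ub.append(-row); b_ub.append(y0 + g * cum_d[k - 1] - yL)   # y(k) >= yL
    for k in range(N):                           # |u(k) - u(k-1)| <= t
        row = np.zeros(N + 1); row[k] = 1.0; row[-1] = -1.0
        if k > 0:
            row[k - 1] = -1.0
        A_ub.append(row.copy()); b_ub.append(u_prev if k == 0 else 0.0)
        row[:N] = -row[:N]
        A_ub.append(row); b_ub.append(-u_prev if k == 0 else 0.0)
    A_eq = [np.r_[np.full(N, g), 0.0]]           # terminal condition y(N) = yr
    b_eq = [y0 + g * cum_d[-1] - yr]
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, u_max)] * N + [(0.0, None)])
    return res.x[:N], res.x[-1]                  # planned outlet flows, max move size

# Made-up example: constant inlet estimate 0.5, level brought from 0.6 back to 0.5 in 20 steps.
u, t = min_mrco_plan(0.6, 0.5, 0.2, 0.8, np.full(20, 0.5), 0.01, 60.0, 0.5, 1.0)
print(round(t, 5), u[:3])
```

    In a receding-horizon implementation only the first planned move would be applied and the problem re-solved at the next control interval.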

    2.3.3 Constrained Minimum Variance Controller

    This methodology was introduced by Kelly (1998) for tuning PI controllers, and extended to more

    general cases including dead time by Foley et al. (2000). Both papers used a discrete-time for-

    mulation to investigate averaging level control as a stochastic optimal control problem. The input

    disturbance is assumed to be a random walk. The controller is synthesized to minimize the outlet


    flow roughness (usually expressed as the variance of the outlet flow rate change Var[v(t)] in con-

    tinuous time or Var[Δu(k)] in discrete time) while keeping the variance of the tank level Var[y(t)] at a

    target value. The performance index J in discrete time is set up as follows:

    J = Var[y(k)] + ρ Var[Δu(k)].

    In discrete time, ρ can be set to zero to yield the minimum variance controller, where Δu(k) is not

    constrained. The term "constrained" stems from the fact that ρ > 0 constrains Var[Δu(k)], and does

    not imply that the level is constrained. Constrained optimization of stochastic systems is a very

    difficult problem, and no practical algorithm for present day control hardware is available (Batina

    et al., 2002). The level is indirectly constrained by specifying its variance. Then, there is some

    probability that the level will violate its limits. For most practical applications, this method pro-

    vides satisfactory results because input disturbances in the real world are bounded. The Gaussian

    assumption, which makes the input disturbance unbounded, is a fiction to expedite the mathemati-

    cal derivations. However, if the input disturbance happens to be larger than the design specification,

    the level will exceed the limit because there is no hard constraint mechanism. This aspect is differ-

    ent from OPC and MPC of the previous section, where closed loop hard level constraints are built

    into the algorithm.

    When there is no deadtime in the process, the optimal controller is a PI controller that makes

    the closed loop system second order with the damping factor η = √2/2. This fact was made

    clear by treating the problem in continuous time (Ogawa et al., 2002). This thesis extends this

    methodology to include stationary input disturbance and to include the downstream process in the

    performance index.
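    For the dead-time-free integrator plant Kp/s under a PI controller Kc(1 + 1/(Tr s)), the closed-loop characteristic polynomial is s² + KpKc s + KpKc/Tr; matching it to s² + 2ηωn s + ωn² gives one way to realize the η = √2/2 damping mentioned above. The sketch below is standard pole-placement algebra (my illustration, not the constrained-minimum-variance derivation itself).

```python
import math

def pi_for_damping(Kp, omega_n, eta=math.sqrt(2.0) / 2.0):
    """PI settings placing the closed-loop poles of plant Kp/s at damping eta, natural freq omega_n."""
    Kc = 2.0 * eta * omega_n / Kp       # from Kp*Kc = 2*eta*omega_n
    Tr = 2.0 * eta / omega_n            # from Kp*Kc/Tr = omega_n**2
    return Kc, Tr

print(pi_for_damping(Kp=0.01, omega_n=0.005))   # slow, averaging-type tuning (made-up numbers)
```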

  • Chapter 3

    Linear Optimal Controllers

    3.1 Introduction

    In this chapter, linear optimal controllers for averaging level control are studied. Although linear

    controllers cannot guarantee that the level constraints are met for random walk disturbances, they

    are considered for the following reasons:

    1. For some problems, it is sufficient to detune existing linear controllers (PI or PID) in order

    to make the outlet flow smooth. The study here provides tuning guidelines.

    2. The performance of linear controllers can be calculated analytically (and for some simple

    problems, in closed form). Usually this is not the case for nonlinear controllers, where

    simulation is the only reliable way to assess performance. The linear optimum controllers

    provide a reliable benchmark, which any non-linear controller must surpass.

    The input disturbance Fd(t) is assumed to be either a non-stationary or stationary stochastic pro-

    cess. The performance index is chosen to express the smoothness of the outlet flow Fu(t) in math-

    ematically tractable form, usually the variance of Ḟu(t) or a weighted sum of the variances of Ḟu(t)

    and Fu(t). Note that the latter choice is only available for stationary input disturbances. The output

    of a controller is denoted by u(t) and either u(t) = Fu(t), or u(t) = Fu(t) − Fm (Fm = E[Fd(t)]). In

    either case, Ḟu(t) = u̇(t) = v(t), and Var[Fu(t)] = Var[u(t)] and Var[Ḟu(t)] = Var[v(t)]. Thus in the

    sequel, u(t) is used for the variance expressions. Let y(t) denote the tank level and yr its setpoint. A



    linear controller is sought that minimizes the following performance index J,

    J := E[(y(t) − yr)²] + ρ²E[v(t)² + μ(u(t) − E[u(t)])²]

    = Var[y(t)] + ρ²(Var[v(t)] + μVar[u(t)]),  (3.1)

    where μ = 0 when the disturbance is non-stationary. The weight ρ² is a Lagrange multiplier

    (Newton et al., 1957) so that Var[y(t)] is minimized subject to a constraint

    Var[v(t)] + μVar[u(t)] ≤ c,  (3.2)

    where a constant c is the upper limit of the flow variability. In practice, c is not specified, but

    ρ² is adjusted to obtain an acceptable Var[y(t)]. Then the resulting optimum controller provides

    the minimum flow variability (or maximum smoothness). This approach was pioneered by Kelly

    (1998) and Foley et al. (2000) using discrete-time polynomial methods. In the thesis, the prob-

    lems are treated in continuous time using state-space methods, so that some characteristics of the

    resulting controllers, such as the closed loop damping factor, become evident. One advantage of

    the discrete-time approach is its ease of handling dead-time. However, the target applications are

    such that dead-time can be safely ignored.

    3.2 State-Space Method

    The problem is solved using the state-space linear quadratic (LQ) optimization methods combined

    with noise-free observers. Since most of the problems in the thesis are single-input-single-output, the

    transfer function approach (Wiener-Hopf method) is a viable method, since there is no need for

    observer design. However, the state-space approach was adopted for the following reasons:

    1. The algebraic Riccati equation (ARE) can be solved analytically for this size of problem,

    and controller parameters, therefore, can be expressed in closed form. The procedure is

    more straightforward than the factorization which is necessary for the Wiener-Hopf method.

    When the problem dimension increases, stable computer algorithms and software for solving

    the ARE numerically are widely available. Presently, this is not the case for the Wiener-Hopf

    approach.



    2. From the nature of the problem, noise-free observers are practical. Thus, the requirement

    of an observer is not really a disadvantage for the state-space approach. Rather it may,

    in some cases, add flexibility in controller design by providing a framework for selecting

    proper (or ad hoc) observers when the measurement noise cannot be ignored (Allison and

    Khanbaghi, 2001). Of course, by adding a noise model, the transfer function approach can

    produce the optimum controller that includes the observer. However, then the problem size

    becomes too large to accept easy factorization by hand and the approach loses the edge.

    The basic equations and notation which will be used throughout this chapter are introduced here.

    The plant is expressed in the following state-space form:

    ẋ(t) = Ax(t) + Bu(t) + Dẇ(t),  (3.3)

    where x(t) is the state vector, u(t) is the manipulated variable (not Fu(t) here), w(t) is a Wiener

    process, and ẇ(t) white noise. A, B, D are constant matrices of appropriate size. The white noise is

    expressed as a formal derivative of the Wiener process w(t). The derivative of the Wiener process

    does not exist in a strict mathematical sense, and the ordinary differential equation (ODE) (3.3)

    must be written as a stochastic differential equation (SDE). However, the SDE does not provide

    any advantage over the (formal) ODE in the development below. Thus the SDE notation is not

    used here. Generally, w(t) can be a vector process, but in the thesis w(t) and ẇ(t) are always scalar

    processes. The performance index J is

    J := E[x(t)ᵀ Q x(t) + u(t)ᵀ R u(t)],  (3.4)

    where Q is a positive semi-definite matrix and R a positive definite matrix. It is well known that

    in stochastic LQ problems where the white noise enters the system equation additively, the linear

    state feedback solutions are identical to those for the deterministic LQ problem (for example see

    Fleming and Rishel (1975)). The optimum control u(t)* that minimizes J is in feedback form,

    u(t)* = −Kx(t).  (3.5)

    The feedback gain K is related to a positive-definite solution P of the following algebraic Riccati

    equation (ARE): P A + Aᵀ P − P B R⁻¹ Bᵀ P + Q = 0,  (3.6)


    and

    K = R⁻¹ Bᵀ P.  (3.7)

    A number of stable computer algorithms and programs are available for numerically solving the

    ARE. However, the low dimensionality of the problems here makes it possible to obtain closed

    form solutions easily. Thus, most of the AREs are solved analytically, so that the equivalence of

    the state-space approach and the Wiener-Hopf method can be clearly seen.

    3.3 Evaluation of Variances

    It is necessary to calculate the variances of process variables to evaluate and compare the con-

    trollers. When a variable is the output of a rational transfer function driven by a signal with a

    rational spectrum, its variance can be calculated analytically. If y(t) has a rational power spectral density

    Gy(s) := N(s)N(−s) / [D(s)D(−s)],  (3.8)

    where

    N(s) = b_{n−1} s^{n−1} + ··· + b1 s + b0,   D(s) = an sⁿ + a_{n−1} s^{n−1} + ··· + a1 s + a0,

    its variance Var[y(t)] is calculated as

    Var[y(t)] = (1/2πj) ∫_{−j∞}^{j∞} N(s)N(−s) / [D(s)D(−s)] ds.  (3.9)

    Let In denote the above integral when D(s) is an n-th order polynomial. Newton et al. (1957)

    contains a table that lists In up to n = 10. The first three In are listed below.

    I1 = b0² / (2 a0 a1)

    I2 = (b1² a0 + b0² a2) / (2 a0 a1 a2)  (3.10)

    I3 = [b2² a0 a1 + (b1² − 2 b0 b2) a0 a3 + b0² a2 a3] / [2 a0 a3 (a1 a2 − a0 a3)]


    It becomes unwieldy for n > 3 for hand calculations. Astrom (1970) includes a recursive algorithm

    and a FORTRAN program for In. In this chapter the following short-hand notation is used:

    I(N(s)/D(s)) := (1/2πj) ∫_{−j∞}^{j∞} N(s)N(−s) / [D(s)D(−s)] ds.  (3.11)
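    The table entries (3.10) are easy to verify numerically; the sketch below (illustration only) evaluates I2 for a second-order example both from the closed form and by integrating the spectrum along the imaginary axis.

```python
import numpy as np
from scipy.integrate import quad

def I2(b0, b1, a0, a1, a2):
    """Closed-form I2 from (3.10) for N(s) = b1*s + b0, D(s) = a2*s^2 + a1*s + a0."""
    return (b1**2 * a0 + b0**2 * a2) / (2.0 * a0 * a1 * a2)

def I_numeric(b, a):
    """(1/2*pi) * integral of |N(jw)|^2 / |D(jw)|^2 dw; b, a in ascending powers of s."""
    f = lambda w: abs(np.polyval(b[::-1], 1j * w))**2 / abs(np.polyval(a[::-1], 1j * w))**2
    val, _ = quad(f, -np.inf, np.inf)
    return val / (2.0 * np.pi)

b, a = [1.0, 0.5], [2.0, 3.0, 1.0]     # N(s) = 0.5 s + 1, D(s) = s^2 + 3 s + 2
print(I2(b[0], b[1], a[0], a[1], a[2]), I_numeric(b, a))   # both give 0.125
```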

    3.4 Stationary Input

    In this section, the input disturbance Fd(t) is assumed to be the output of a first-order low-pass

    filter driven by white noise plus a bias. The state-space formulation is straightforward when the

    input disturbance is zero-mean. Let Fm denote the mean of Fd(t), Fm := E[Fd(t)]. The deviation

    variable d(t) for the input disturbance is defined:

    d(t) := Fd(t) − Fm.  (3.12)

    Then,

    d(s) = ωd/(s + ωd) · w(s),  (3.13)

    where ωd is the cut-off frequency and w(s) the Laplace transform of white noise.

    3.4.1 System Equation and Performance Index

    Since the process is an integrator, the mean of the outlet flow must be equal to Fm to keep the level

    stationary. Thus, u(t) is set as a deviation from its mean Fm.

    x1(t) = y(t) − yr   tank level error

    x2(t) = d(t)   input disturbance

    u(t) = Fu(t) − Fm   outgoing flow deviation (manipulated variable)

    As shown below, x2(t) and u(t) appear as the input disturbance and outlet flow in the system

    equation: ẋ1(t) = Kp[Fd(t) − Fu(t)] = Kp[x2(t) + Fm − (u(t) + Fm)] = Kp[x2(t) − u(t)].  (3.14)

Since u(t) is stationary, not only the variance of u̇(t) = v(t) but also that of u(t) can be included in the performance index:

J := E[(y(t) − y_r)²] + ρ²E[v(t)² + μu(t)²]   (3.15)
   = Var[y(t)] + ρ²(Var[v(t)] + μVar[u(t)]),


where μ is a weight for the u(t) variation. The control weight is set to ρ² rather than ρ to simplify the calculations for the optimal gain below. When μ > 0, the resulting controller tries to minimize the variances of both v(t) and u(t). Those cases are studied later in Chapter 6. Here, cases where μ = 0 are investigated, to compare with PI controllers for random-walk disturbances. To include u̇(t) = v(t) in the performance index in a straightforward way, v(t) is treated as a formal control input, and u(t) as an element of the state vector x(t),

x(t) := [x₁(t) x₂(t) u(t)]^T.

Then the performance index J is expressed with the following two matrices Q and R:

J = E{x(t)^T Q x(t) + R v(t)²},   (3.16)

Q = [1 0 0; 0 0 0; 0 0 μρ²],   R = ρ².   (3.17)

As x₂(t) is the output of a low-pass filter with the cut-off frequency ω_d,

x₂(s) = ω_d/(s + ω_d) w(s)  ⟹  ẋ₂(t) = −ω_d x₂(t) + ω_d w(t).   (3.18)

Then the following system equation is obtained:

d/dt [x₁(t); x₂(t); u(t)] = A [x₁(t); x₂(t); u(t)] + B v(t) + D w(t),
A = [0 K_p −K_p; 0 −ω_d 0; 0 0 0],  B = [0; 0; 1],  D = [0; ω_d; 0].   (3.19)

3.4.2 State-Space Solution

Analytical Solution

The solution P of (3.6) is expressed with its elements as

P = [p₁₁ p₁₂ p₁₃; p₁₂ p₂₂ p₂₃; p₁₃ p₂₃ p₃₃].   (3.20)


Substitution of A, B, Q, and R of (3.19) and (3.17) into (3.6) yields six second-order polynomial equations in p_ij, from which a positive-definite P can be obtained in closed form (algebraic details are included in Section 3.8.1). Since

K = B^T P/ρ² = [p₁₃ p₂₃ p₃₃]/ρ²,   (3.21)

only p₁₃, p₂₃ and p₃₃, which are listed below (for μ = 0), are required for K:

p₁₃ = −ρ,   p₂₃ = −K_pρ(ω_d + √(2K_p/ρ))/(ω_d² + ω_d√(2K_p/ρ) + K_p/ρ),   p₃₃ = ρ√(2ρK_p).   (3.22)

Thus, K = [k₁ k₂ k₃] is obtained as follows:

k₁ = −1/ρ,   k₂ = −(K_p/ρ)(ω_d + √(2K_p/ρ))/(ω_d² + ω_d√(2K_p/ρ) + K_p/ρ),   k₃ = √(2K_p/ρ).   (3.23)
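The closed-form gains (3.23) can be cross-checked numerically. The following Python sketch (not from the thesis) builds A and B of (3.19) and Q, R of (3.17) with μ = 0 for assumed example values of K_p, ω_d and ρ, solves the ARE with SciPy, and prints both gain sets for comparison.

    # Minimal check of (3.23): numerical ARE solution versus the closed-form gains.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    Kp, wd, rho = 1.0, 0.5, 2.0                      # assumed test values
    A = np.array([[0.0,  Kp, -Kp],
                  [0.0, -wd, 0.0],
                  [0.0, 0.0, 0.0]])
    B = np.array([[0.0], [0.0], [1.0]])
    Q = np.diag([1.0, 0.0, 0.0])                     # mu = 0
    R = np.array([[rho**2]])

    P = solve_continuous_are(A, B, Q, R)
    K = (B.T @ P / rho**2).ravel()                   # K = B^T P / rho^2, (3.21)

    k1 = -1.0 / rho
    k2 = -(Kp / rho) * (wd + np.sqrt(2 * Kp / rho)) / (wd**2 + wd * np.sqrt(2 * Kp / rho) + Kp / rho)
    k3 = np.sqrt(2 * Kp / rho)
    print(K, [k1, k2, k3])                           # the two sets should agree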

    3.4.3 Noise-Free Observer

    In most applications, x2(t) is not measured, and an observer is necessary. A simple noise-free

observer is constructed by differentiating x₁(t). Since ẋ₁(t) = K_p[x₂(t) − u(t)]  ⟹  sx₁(s) = K_p[x₂(s) − u(s)],

x₂(s) = sK_p⁻¹x₁(s) + u(s).   (3.24)

    Usually, constructing an observer by differentiating the measured variable is not a good practice

due to measurement noise. However, the problems here are formulated with a formal manipulated variable u̇(t), and differentiating once does not introduce real differentiation when the real manipulated variable is u(t). Even if real differentiation is necessary, usually level measurements have

    low noise. Also, most applications can be executed with long sampling periods, say 5 seconds to a

    few minutes, which make it possible to filter the level measurement adequately if noise is not neg-

    ligible. Thus a noise-free observer is practical. See Allison and Khanbaghi (2001) for a Kalman

    filtering approach for noisy cases.

The equation u̇(t) = v(t) = −Kx(t) is expressed in its Laplace transform,

su(s) = −k₁x₁(s) − k₂x₂(s) − k₃u(s).   (3.25)

Substitution of (3.24) into (3.25) yields

u(s)(s + k₂ + k₃) = x₁(s)(−k₁ − sk₂K_p⁻¹).   (3.26)



Let C_L(s) denote the transfer function of the controller,

u(s) = C_L(s)(y(s) − y_r) = C_L(s)x₁(s).   (3.27)

Then

C_L(s) = (−k₁ − sk₂K_p⁻¹)/(s + k₂ + k₃) = −k₂K_p⁻¹ (s + k₁K_p/k₂)/(s + k₂ + k₃) = K_c (s + b)/(s + a),   (3.28)

where

K_c = −k₂/K_p,   a = k₂ + k₃,   b = k₁K_p/k₂.   (3.29)

The sensitivity function S is

S = 1/(1 + K_c((s + b)/(s + a))(K_p/s)) = s(s + a)/(s² + s(a + K_pK_c) + K_pK_cb).   (3.30)

From (3.29) and (3.23),

a + K_pK_c = k₂ + k₃ + K_p(−k₂/K_p) = k₃ = √(2K_p/ρ),  and
K_pK_cb = K_p(−k₂/K_p)(k₁K_p/k₂) = −k₁K_p = K_p/ρ.

Thus,

S = s(s + a)/(s² + s√(2K_p/ρ) + K_p/ρ) = s(s + a)/(s² + √2ω_c s + ω_c²),   (3.31)

where ω_c = √(K_p/ρ). The closed-loop system is a second-order system with damping factor η = √2/2. The second-order maximally flat (Butterworth) filter has the same damping factor. This fact does not come out clearly in the discrete-time approach (Foley et al., 2000), where the sampling period introduces additional complexity.

    3.4.4 Controller Parameterization

    For controller design, the following parameters are given:

K_p : process gain
ω_d : input disturbance cut-off frequency


As a tuning parameter, the variance ratio R_v(y) between the input disturbance F_d(t) and the level y(t) is selected:

R_v(y) := Var[y(t)]/Var[F_d(t)] = Var[x₁(t)]/Var[d(t)].   (3.32)

R_v(y) is specified to meet the level constraint requirements (the standard deviation of the tank level is √(R_v(y)) times the standard deviation of the incoming flow rate). A convenient way of characterizing the controller is by the bandwidth of the closed-loop system relative to ω_d,

κ := ω_c/ω_d.   (3.33)

Substitution of √(K_p/ρ) = ω_c = κω_d into (3.23) yields k₂ and k₃ in terms of κ and ω_d. Then a and b are expressed with κ and ω_d as follows:

a = (κ² + √2κ)/(κ² + √2κ + 1) ω_d < ω_d,
b = (κ² + √2κ + 1)/(√2κ + 1) ω_d > ω_d.

This shows that a < b; thus, the controller C_L = K_c(s + b)/(s + a) is a phase-lag network.¹ Figure 3.1 shows a and b for κ between 0.01 and 100 for ω_d = 1. Figure 3.2 shows the gain of C_L for κ = 0.1, 1, and 10 for K_p = 1. Figure 3.3 shows the magnitude of the sensitivity function S for κ = 0.1, 1 and 10.

Figure 3.1: Plots of a and b of C_L = K_c(s + b)/(s + a) for κ between 0.01 and 100.

¹This is why the controller is denoted with C_L, where L stands for "lag".


Figure 3.2: Gains of C_L for κ = 0.1, 1 and 10. (K_p = 1 and ω_d = 1)

In addition, for comparison purposes, the magnitude of S for a PI controller, whose shape is a function of the damping factor η only and does not change with κ (see Section 3.4.5), is also plotted in Figure 3.3. The plot marked PI is for η = 0.7 and κ = 1.

Figure 3.3: Magnitude of the sensitivity function S for C_L for κ = 0.1, 1 and 10. |S| for C_PI with η = 0.7 is also plotted. (K_p = 1 and ω_d = 1)


The controller performance is represented with the variance ratio between d(t) and u̇(t), R_v(u̇):

R_v(u̇) := Var[u̇(t)]/Var[d(t)].   (3.34)

Thus, for a given R_v(y), a smaller R_v(u̇) indicates a better controller. With the use of the formulae in (3.10), R_v(y) and R_v(u̇) can be expressed in terms of κ (algebraic details are in Section 3.8.2):

R_v(y) = (K_p²/ω_d²)(κ⁴ + 3√2κ³ + 9κ² + 6√2κ + 3)/(√2κ(κ² + √2κ + 1)³),
R_v(u̇) = ω_d² κ⁴((1 + √2κ)²(κ² + √2κ) + (κ² + √2κ + 1)²)/(√2κ(κ² + √2κ + 1)³).   (3.35)

It is observed that R_v(y) ∝ K_p²/ω_d² and R_v(u̇) ∝ ω_d². The variance ratio of the level is proportional to the square of the process gain (this is easy to understand), and is inversely proportional to the square of the input disturbance cutoff frequency. When ω_d is high, more energy in the input disturbance is contained in higher frequencies, and it is easier to filter it out with the same closed-loop system, which is a low-pass filter. When ω_d is high, the input disturbance is "busy", which increases R_v(u̇). To subsume the two parameters K_p and ω_d in the design parameter, the "standardized" variance ratios R̄_v(y) and R̄_v(u̇) are defined as follows:

R̄_v(y) := R_v(y)ω_d²/K_p² = (κ⁴ + 3√2κ³ + 9κ² + 6√2κ + 3)/(√2κ(κ² + √2κ + 1)³),
R̄_v(u̇) := R_v(u̇)/ω_d² = κ⁴((1 + √2κ)²(κ² + √2κ) + (κ² + √2κ + 1)²)/(√2κ(κ² + √2κ + 1)³).   (3.36)

R̄_v(y) and R̄_v(u̇) are convenient in comparing controllers as they are independent of K_p and ω_d. Figure 3.4 shows R̄_v(y) and R̄_v(u̇) for C_L and for a PI controller that makes the closed-loop damping factor η = √2/2.
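The curves of Figure 3.4 can be regenerated from the closed-form expressions. The Python sketch below (not from the thesis) evaluates the standardized ratios of (3.36) for C_L and, for the PI controller, the corresponding expressions derived in Section 3.8.3, at a few values of κ; the damping factor η = 0.7 is an assumed choice matching the figure.

    # Minimal sketch: standardized variance ratios of C_L (3.36) and of a PI controller (Sec. 3.8.3).
    import numpy as np

    def ratios_CL(k):
        d = np.sqrt(2) * k * (k**2 + np.sqrt(2) * k + 1)**3
        Rv_y = (k**4 + 3 * np.sqrt(2) * k**3 + 9 * k**2 + 6 * np.sqrt(2) * k + 3) / d
        Rv_du = k**4 * ((1 + np.sqrt(2) * k)**2 * (k**2 + np.sqrt(2) * k)
                        + (k**2 + np.sqrt(2) * k + 1)**2) / d
        return Rv_y, Rv_du

    def ratios_PI(k, eta=0.7):
        den = 2 * eta * k * (k**2 + 2 * eta * k + 1)
        Rv_y = 1.0 / den
        Rv_du = k**3 * ((1 + 4 * eta**2) * k + 8 * eta**3) / den
        return Rv_y, Rv_du

    for k in (0.1, 1.0, 10.0):
        print(k, ratios_CL(k), ratios_PI(k))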

3.4.5 Comparison to PI Controller

A phase-lag network, necessary to implement the optimal controller C_L = K_c(s + b)/(s + a), is usually not readily available in present-day control system hardware, and it must be base-loaded with the mean incoming flow F_m. On the other hand, PI controllers are a standard feature of every process control system, and they do not have to be base-loaded with F_m. In short, the PI controller has a practical advantage in implementation.


Figure 3.4: R̄_v(y) (decreasing graphs) and R̄_v(u̇) (increasing graphs) for C_L and a PI controller with η = 0.7. The x-axis is κ = ω_c/ω_d.


If the optimum tuning of the PI controller is sought, it becomes the minimization of R̄_v(u̇) subject to R̄_v(y) = c₁ (a positive constant c₁ is the design specification). R̄_v(y) = c₁ means 2ηκ(κ² + 2ηκ + 1) = 1/c₁. Since 2ηκ(κ² + 2ηκ + 1) is the denominator of R̄_v(u̇), the minimization of R̄_v(u̇) becomes minimization of its numerator κ³[(1 + 4η²)κ + 8η³]. So the optimum PI controller is characterized by a pair (κ*, η*):

(κ*, η*) = argmin{κ³[(1 + 4η²)κ + 8η³]}  subject to  2ηκ(κ² + 2ηκ + 1) = 1/c₁.

This is a mathematically tractable problem, at least numerically (a minimal numerical sketch is given below). However, since (κ*, η*) is a function of R̄_v(y), the solution may be impractical. Practically more meaningful cases result when the PI controller is tuned to a "moderate" mode (meaning that the closed-loop system is neither too oscillatory nor too slow), say η between 0.5 and 2. When the input disturbance is a random walk, the optimum controller is a PI controller with η = √2/2, as shown in Section 3.6. The critical damping η = 1 is also often used. These two values of η, and η = 2 for a slow mode, are used for comparison. Figure 3.5 shows the ratio of R̄_v(u̇) of the PI controller to R̄_v(u̇) of C_L. The performance of the PI controller deteriorates when R̄_v(y) = R_v(y)ω_d²/K_p² is high. Thus C_L is worth consideration for cases of high R̄_v(y).
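The constrained minimization mentioned above can be posed directly in SciPy. The following Python sketch (not from the thesis) uses the SLSQP solver; the value of c₁ is an assumed design specification.

    # Minimal sketch: numerical search for the optimum PI pair (kappa*, eta*).
    import numpy as np
    from scipy.optimize import minimize

    c1 = 1.0                                           # assumed specification for the level variance ratio

    def objective(x):
        k, eta = x
        return k**3 * ((1 + 4 * eta**2) * k + 8 * eta**3)

    def constraint(x):
        k, eta = x
        return 2 * eta * k * (k**2 + 2 * eta * k + 1) - 1.0 / c1

    res = minimize(objective, x0=[1.0, 0.7],
                   constraints=[{"type": "eq", "fun": constraint}],
                   bounds=[(1e-3, None), (1e-3, None)])
    print("kappa* = %.4f, eta* = %.4f" % tuple(res.x))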



    3.5 Minimum Variance of Outlet Flow

For some types of downstream processes, the adverse effect of F_u(t) is proportional to the deviation of F_u(t) from its mean value. Then the natural choice of the performance index is

J = E[x₁(t)² + ρ²u(t)²].   (3.40)

Now, u̇(t) is no longer included in J, so the system equation is constructed with u(t) as the control input:

d/dt [x₁(t); x₂(t)] = A [x₁(t); x₂(t)] + B u(t) + D w(t),   A = [0 K_p; 0 −ω_d],  B = [−K_p; 0],  D = [0; ω_d].   (3.41)

The optimal control u*(t) is a linear state feedback with the feedback gain K = [k₁ k₂],

u*(s) = −k₁x₁(s) − k₂x₂(s).   (3.42)

With the noise-free observer as before (x₂(s) = K_p⁻¹s x₁(s) + u(s)),

u*(s) = x₁(s)[−k₁ − k₂K_p⁻¹s] − k₂u*(s)  ⟹  u*(s) = [(−k₁ − k₂K_p⁻¹s)/(1 + k₂)] x₁(s) = (c₀ + c₁s)x₁(s),   (3.43)

where

c₀ = −k₁/(1 + k₂),   c₁ = −k₂/(K_p(1 + k₂)).   (3.44)

This is a proportional plus derivative (PD) controller. Let P = {p_ij} denote the solution of the ARE. Then, the feedback gain K is

K = [k₁ k₂] = B^T P/ρ² = [−K_pp₁₁/ρ²  −K_pp₁₂/ρ²].   (3.45)

The ARE is solved analytically with the following result:

p₁₁ = ρ/K_p,   p₁₂ = ρ²/(ρω_d + K_p).   (3.46)

Then

c₀ = (K_p/ρ + ω_d)/(ρω_d),   c₁ = 1/(ρω_d).   (3.47)

The controller C_PD(s) = c₀ + c₁s is not proper, and is implemented with a low-pass filter a/(s + a) to make the controller proper, as

C_PD(s) ≈ c₁(s + c₀/c₁) a/(s + a) = K_c (s + b)/(s + a),   (3.48)

where K_c = ac₁ and b = c₀/c₁. Thus, the PD controller is implemented as a lead-lag controller.
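As a concrete illustration, the Python sketch below (not from the thesis) computes the PD coefficients (3.47) and the lead-lag realization (3.48) for assumed values of K_p, ω_d, ρ and an assumed filter pole a.

    # Minimal sketch: PD coefficients (3.47) and their lead-lag realization (3.48).
    Kp, wd, rho = 1.0, 0.5, 2.0          # assumed process gain, disturbance cutoff, weight
    a = 10.0 * wd                        # assumed low-pass filter pole for properness

    c0 = (Kp / rho + wd) / (rho * wd)    # proportional term, (3.47)
    c1 = 1.0 / (rho * wd)                # derivative term, (3.47)
    Kc = a * c1                          # lead-lag gain, (3.48)
    b = c0 / c1                          # lead-lag zero, (3.48)
    print(f"c0 = {c0:.4f}, c1 = {c1:.4f} -> C(s) = {Kc:.4f} (s + {b:.4f})/(s + {a:.4f})")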


    3.5.1 Comparison to P Controller

    In Shinskey (1994), the proportional only (P) controller is recommended for averaging level con-

    trol. Although flow smoothness was not mathematically defined, it can be understood from the

    context that the author sought to reduce the variance of u(t). In the following, the PD controller is

compared to P controllers. Although the PD controller is implemented as a lead-lag network, its performance is evaluated as C_PD(s) = c₀ + c₁s to simplify the calculation and to provide the best-case scenario for the PD controller.

    PD Controller Performance

The sensitivity function S is

S = 1/(1 + (c₀ + c₁s)K_p/s) = s/(s(1 + K_pc₁) + K_pc₀) = [1/(1 + K_pc₁)] s/(s + K_pc₀/(1 + K_pc₁)).   (3.49)

It is convenient to define a parameter ω_c as

ω_c := K_p/ρ.   (3.50)

From (3.47),

1 + K_pc₁ = (ω_c + ω_d)/ω_d,  and  K_pc₀/(1 + K_pc₁) = ω_c.   (3.51)

So

S = [ω_d/(ω_c + ω_d)] s/(s + ω_c).   (3.52)

As before, define κ by ω_c = κω_d; then

S = [1/(1 + κ)] s/(s + κω_d).   (3.53)

After some calculations, R_v(y) is obtained:

R_v(y) = Var[x₁(t)]/Var[d(t)] = K_p²/(ω_d²κ(κ + 1)³),   R̄_v(y) = 1/(κ(κ + 1)³).   (3.54)

The variance ratio between Var[u(t)] and Var[d(t)], R_v(u), is

R_v(u) = Var[u(t)]/Var[d(t)] = κ(κ² + 3κ + 1)/(κ + 1)³.   (3.55)


    P Controller

The transfer function of the P controller C_P(s) is

C_P(s) = K_c.   (3.56)

Then the sensitivity function with this controller is

S = 1/(1 + K_cK_p/s) = s/(s + K_cK_p) = s/(s + ω_c),   (3.57)

where ω_c = K_cK_p = κω_d. Then R_v(y) and R_v(u) for C_P are obtained:

R_v(y) = K_p²/(ω_d²κ(κ + 1)),   R̄_v(y) = 1/(κ(κ + 1)),   (3.58)
R_v(u) = κ/(κ + 1).   (3.59)

The reduction of R_v(u) by C_PD compared to C_P is plotted against R̄_v(y) in Figure 3.6. The maximum reduction is 0.8638, when R̄_v(y) = 0.7077. This reduction is 0.93 in terms of standard deviation. So the PD controller reduces the standard deviation of u(t) by 7% compared to the P controller at best. In reality, the reduction is less than 7% due to the low-pass filter. This small reduction

    probably means that for practical purposes, the PD controller is no better than the P controller.

Figure 3.6: The ratio of the variance of u(t) between C_PD and C_P. The x-axis is R̄_v(y).
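The curve of Figure 3.6 can be recomputed from (3.54), (3.55), (3.58) and (3.59). The Python sketch below (not from the thesis) solves for the κ of each controller that meets a given R̄_v(y) and reports the smallest variance ratio found, which should be close to the 0.8638 quoted above.

    # Minimal sketch: variance of u(t) with the ideal PD controller relative to the P controller.
    import numpy as np
    from scipy.optimize import brentq

    def kappa_P(target):      # solve 1/(k(k+1)) = target, from (3.58)
        return brentq(lambda k: 1.0 / (k * (k + 1)) - target, 1e-6, 1e6)

    def kappa_PD(target):     # solve 1/(k(k+1)^3) = target, from (3.54)
        return brentq(lambda k: 1.0 / (k * (k + 1)**3) - target, 1e-6, 1e6)

    targets = np.logspace(-3, 1, 200)                 # range of standardized level variance ratios
    ratios = []
    for t in targets:
        kP, kD = kappa_P(t), kappa_PD(t)
        Ru_P = kP / (kP + 1)                          # (3.59)
        Ru_PD = kD * (kD**2 + 3 * kD + 1) / (kD + 1)**3   # (3.55)
        ratios.append(Ru_PD / Ru_P)
    i = int(np.argmin(ratios))
    print("largest reduction %.4f at Rv(y) = %.4f" % (ratios[i], targets[i]))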


    3.6 Random Walk Input

    Although stationary processes are preferred as a model of the input disturbance, the random-walk

case is treated here for two reasons: (a) to show that the optimal controller is the same PI controller as for deterministic cases where the input disturbance is a step, and (b) to show a proper way to choose

    the state variables for a non-stationary disturbance. Doss et al. (1983) considered the control of

    liquid levels in three tanks in series by a state-space LQ approach without explicitly including the

input disturbance in the state, but resorted to other, more intuitive approaches that they felt led to

    a simpler, more physically acceptable scheme. Harris and MacGregor (1987) reworked the same

    problem, and showed that the same satisfactory result can be obtained by the transfer function

    method with a disturbance model. If the input disturbance had been included in Doss's work as

    shown below, the state-space approach would have produced a satisfactory result.

A continuous-time random walk is a Wiener process w(t). Since w(t) is non-stationary, the difference between the incoming flow and the outgoing flow, which is stationary, is chosen as a state variable. With a Wiener process w(t) as the input disturbance, the manipulated variable u(t) = F_u(t), and the tank level error x₁(t) = y(t) − y_r, the process is expressed as

x₁(t) = x₁(0) + K_p ∫₀ᵗ (w(τ) − u(τ)) dτ.   (3.60)

With x₂(t) = w(t) − u(t), ẇ(t) = dw(t)/dt, and v(t) = u̇(t), the system equation is

d/dt [x₁(t); x₂(t)] = A [x₁(t); x₂(t)] + B v(t) + D ẇ(t),   A = [0 K_p; 0 0],  B = [0; −1],  D = [0; 1].   (3.61)

The performance index J is

J := E[x₁(t)² + ρ²v(t)²] = E{x(t)^T Q x(t) + R v(t)²},   Q = [1 0; 0 0],  R = ρ².   (3.62)

Note that the outlet flow u(t) itself cannot be included in the performance index because it is non-stationary and may not have a well-defined variance. The optimum control v(t)* that minimizes J is a feedback form,

v(t)* = −Kx(t).   (3.63)


As before, the solution of the ARE, P = {p_ij}, is obtained analytically. Substitution of A, B, Q, and R of (3.61) and (3.62) into (3.6) yields the following three equations:

−p₁₂²/ρ² + 1 = 0,
p₁₁K_p − p₁₂p₂₂/ρ² = 0,   (3.64)
2p₁₂K_p − p₂₂²/ρ² = 0.

The above three equations and the positive definiteness of P determine p_ij as follows:

p₁₂ = ρ,   p₂₂ = ρ√(2ρK_p),   p₁₁ = √(2ρ/K_p).   (3.65)

And

K = R⁻¹B^T P = [−p₁₂/ρ²  −p₂₂/ρ²] = [−1/ρ  −√(2ρK_p)/ρ].   (3.66)

Since u̇(t) = v(t) = −Kx(t),

su(s) = x₁(s)/ρ + x₂(s)√(2ρK_p)/ρ.   (3.67)

Usually x₂(t) is not available for measurement and an observer is necessary. A simple noise-free observer is designed below. Since ẋ₁(t) = K_px₂(t), in Laplace transform, sx₁(s) = K_px₂(s). Thus,

x₂(s) = sx₁(s)/K_p.   (3.68)

Substitution of (3.68) into (3.67) yields

su(s) = x₁(s)[1/ρ + s√(2ρK_p)/(ρK_p)] = x₁(s)[1/ρ + s√(2/(ρK_p))].   (3.69)

Thus

u(s) = x₁(s)[1/(sρ) + √(2/(ρK_p))].   (3.70)

This is a PI controller with the proportional gain K_c = √(2/(ρK_p)) and the reset time T_r = √(2ρ/K_p). The sensitivity function S = 1/(1 + PC) is

S = 1/(1 + K_c(1 + 1/(T_r s))K_p/s) = s²/(s² + s√(2K_p/ρ) + K_p/ρ) = s²/(s² + √2ω_c s + ω_c²),   (3.71)

where ω_c = √(K_p/ρ). So the optimum controller is a PI controller that makes the closed-loop system a second-order low-pass filter with damping factor √2/2.
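In practice, the PI settings follow directly from K_p and the chosen weight ρ. The short Python sketch below (not from the thesis) evaluates (3.70) for assumed example values and confirms that K_cK_p = √2 ω_c, consistent with (3.71).

    # Minimal sketch: PI tuning (3.70) for a random-walk input disturbance.
    import math

    Kp, rho = 0.8, 5.0                      # assumed process gain and control weight
    Kc = math.sqrt(2.0 / (rho * Kp))        # proportional gain
    Tr = math.sqrt(2.0 * rho / Kp)          # reset time
    wc = math.sqrt(Kp / rho)                # closed-loop natural frequency
    print(f"Kc = {Kc:.4f}, Tr = {Tr:.4f}, wc = {wc:.4f}")
    print(f"check: Kc*Kp = {Kc*Kp:.4f}, sqrt(2)*wc = {math.sqrt(2)*wc:.4f}")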


    3.6.1 Effects of Damping Factor

Some PI tuning rules recommend the damping factor η = 1 for integrating processes, for example Morari and Zafiriou (1989). The effect of η on Var[v(t)] is investigated when η deviates from the optimum value η* = √2/2. The PI controller is tuned so that the sensitivity function becomes

S = s²/(s² + 2ηω_c s + ω_c²).   (3.72)

Then Var[y(t)] and Var[v(t)] become (denoting the incremental variance of w(t) by σ_w²)

Var[y(t)] = σ_w²K_p²/(4ηω_c³),   Var[v(t)] = σ_w²(4η² + 1)ω_c/(4η).   (3.73)

For a given Var[y(t)] target, ω_c is adjusted for each η ≠ η* to keep Var[y(t)] at its target. Then Var[v(t)] is compared to the optimum value Var*[v(t)]:

Var[v(t)]/Var*[v(t)] = ((4η² + 1)/(4η)) (2√2/3) (1/(√2η))^(1/3).   (3.74)

This is plotted for η ∈ [0.5, 1.0] in Figure 3.7. Var[v(t)] increases by 5% when η = 1.

Figure 3.7: Comparison of the optimum controller (η = √2/2) and non-optimum controllers (η ∈ [0.5, 1]). The x-axis is the damping factor η and the y-axis is Var[v(t)] divided by its optimum value.
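The penalty for detuning the damping factor can be evaluated directly from (3.74). The Python sketch below (not from the thesis) tabulates the ratio over η ∈ [0.5, 1.0] and reproduces the roughly 5% increase at η = 1.

    # Minimal sketch: evaluate the detuning penalty (3.74) over a range of damping factors.
    import numpy as np

    eta = np.linspace(0.5, 1.0, 6)
    ratio = ((4 * eta**2 + 1) / (4 * eta)) * (2 * np.sqrt(2) / 3) * (1.0 / (np.sqrt(2) * eta))**(1.0 / 3.0)
    for e, r in zip(eta, ratio):
        print(f"eta = {e:.2f}  Var[v]/Var*[v] = {r:.4f}")   # equals 1 at eta = sqrt(2)/2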


    3.7 Summary

The averaging level control problem is formulated as a minimization of Var[u̇(t)] subject to the target tank level variance Var[y(t)]. The problem is solved by the state-space method combined with a noise-free observer. When the input disturbance d(t) is modelled as a stationary random process (output of a first-order low-pass filter with a cutoff frequency ω_d driven by white noise), the optimum controller C_L is a phase-lag network,

C_L = K_c (s + b)/(s + a).

When the input disturbance is modelled as a random walk, the optimum controller is a PI controller,

C_PI = K_c (s + b)/s.

For both C_L and C_PI, the closed-loop system is a second-order system with damping factor √2/2.

A single parameter R̄_v(y) that characterizes the controller is introduced,

R̄_v(y) := (Var[y(t)]/Var[d(t)]) (ω_d/K_p)².

For a stationary input disturbance, C_L and C_PI are compared by the ratio of Var[u̇(t)] with the two controllers set up to give the same tank level variance. C_L is not much different from C_PI when R̄_v(y) is small (below 1), but C_L performs better for higher R̄_v(y). For instance, for R̄_v(y) = 10, C_L's Var[u̇(t)] is 1/2.6 of that of C_PI.

When the flow smoothness is represented by the variance of u(t) instead of u̇(t), the optimum controller, which minimizes Var[u(t)] subject to a target tank level variance, is a proportional plus derivative (PD) controller C_PD,

C_PD = c₀ + c₁s.

However, the performance difference between C_PD and a proportional-only controller C_P is small (7% in standard deviation of u(t) at best), and for practical purposes, C_P is recommended.


    3.8 Mathematical Details

    3.8.1 Solution of Section 3.4.2

The ARE yields the following six equations:

(a) −p₁₃²/ρ² + 1 = 0
(b) p₁₁K_p − p₁₂ω_d − p₁₃p₂₃/ρ² = 0
(c) −p₁₁K_p − p₁₃p₃₃/ρ² = 0
(d) 2p₁₂K_p − 2p₂₂ω_d − p₂₃²/ρ² = 0
(e) −p₁₂K_p + p₁₃K_p − p₂₃ω_d − p₂₃p₃₃/ρ² = 0
(f) −2p₁₃K_p − p₃₃²/ρ² + μρ² = 0

First, from (a), p₁₃ = ±ρ, but from (f), p₁₃ must be −ρ to make p₃₃ real:

(a) −p₁₃²/ρ² + 1 = 0  ⟹  p₁₃ = −ρ
(f) −2p₁₃K_p − p₃₃²/ρ² + μρ² = 0  ⟹  p₃₃ = ρ√(2ρK_p + μρ²)

Then, using (c),

(c) −p₁₁K_p − p₁₃p₃₃/ρ² = 0  ⟹  −p₁₁K_p + √(2ρK_p + μρ²) = 0  ⟹  p₁₁ = √(2ρ/K_p + μ(ρ/K_p)²)

Using (b),

(b) p₁₁K_p − p₁₂ω_d − p₁₃p₂₃/ρ² = 0  ⟹  p₁₂ = ω_d⁻¹(p₁₁K_p − p₁₃p₂₃/ρ²) = ω_d⁻¹(√(2ρK_p + μρ²) + p₂₃/ρ)

Then, substitute the known elements into (e):

(e) −p₁₂K_p + p₁₃K_p − p₂₃ω_d − p₂₃p₃₃/ρ² = 0  ⟹  −p₁₂K_p − ρK_p − p₂₃ω_d − p₂₃√(2ρK_p + μρ²)/ρ = 0

So

p₂₃ = −K_p(ρ + ω_d⁻¹√(2ρK_p + μρ²))/(ω_d + √(2K_p/ρ + μ) + K_pω_d⁻¹ρ⁻¹) = −K_p(ω_dρ + √(2ρK_p + μρ²))/(ω_d² + ω_d√(2K_p/ρ + μ) + K_pρ⁻¹).

3.8.2 R_v(y) and R_v(u̇) in Section 3.4.4

As the sensitivity function is

S = s(s + a)/(s² + √2ω_c s + ω_c²),

the transfer functions from d(s) to x₁(s), G_DY, and from d(s) to u̇(s), G_DU, are

G_DY = K_p(s + a)/(s² + √2ω_c s + ω_c²),
G_DU = K_pK_c(s² + bs)/(s² + √2ω_c s + ω_c²).

Thus

Var[x₁(t)] = I_v( (1/(s + ω_d)) K_p(s + a)/(s² + √2ω_c s + ω_c²) )
           = K_p² I_v( (s + a)/(s³ + s²(ω_d + √2ω_c) + s(√2ω_cω_d + ω_c²) + ω_dω_c²) ).

We express ω_c as κω_d; then a and b are expressed as follows:

a = (κ² + √2κ)/(κ² + √2κ + 1) ω_d = K_aω_d < ω_d,
b = (κ² + √2κ + 1)/(1 + √2κ) ω_d = K_bω_d > ω_d.

So

Var[x₁(t)] = K_p² I_v( (s + K_aω_d)/(s³ + s²(1 + √2κ)ω_d + s(√2κ + κ²)ω_d² + κ²ω_d³) )
           = K_p² (κ² + K_a²(1 + √2κ))/(2√2κ³ω_d³(κ² + √2κ + 1))
           = K_p² (κ⁴ + 3√2κ³ + 9κ² + 6√2κ + 3)/(2√2κω_d³(κ² + √2κ + 1)³).

Since Var[d(t)] = I_v(1/(s + ω_d)) = 1/(2ω_d),³

R_v(y) = Var[x₁(t)]/Var[d(t)] = (K_p²/ω_d²)(κ⁴ + 3√2κ³ + 9κ² + 6√2κ + 3)/(√2κ(κ² + √2κ + 1)³).

So

R̄_v(y) := R_v(y)ω_d²/K_p² = (κ⁴ + 3√2κ³ + 9κ² + 6√2κ + 3)/(√2κ(κ² + √2κ + 1)³).   (3.75)

Similarly for Var[u̇(t)] and R_v(u̇),

Var[u̇(t)] = I_v( (1/(s + ω_d)) K_pK_c(s² + bs)/(s² + √2ω_c s + ω_c²) )
          = (K_pK_c)² I_v( (s² + bs)/(s³ + s²(ω_d + √2ω_c) + s(√2ω_cω_d + ω_c²) + ω_dω_c²) ).

Since K_pK_c = −k₂,

K_pK_c = κ²(1 + √2κ)/(κ² + √2κ + 1) ω_d.

³Here, the variance ratio is concerned, so the input variance can be set to any convenient value.


So,

Var[u̇(t)] = (κ²(1 + √2κ)ω_d/(κ² + √2κ + 1))² I_v( (s² + bs)/(s³ + s²(1 + √2κ)ω_d + s(√2κ + κ²)ω_d² + κ²ω_d³) ).

Substituting b = (κ² + √2κ + 1)/(1 + √2κ) ω_d, and after some algebra, we get

R_v(u̇) = ω_d² κ⁴((1 + √2κ)²(κ² + √2κ) + (κ² + √2κ + 1)²)/(√2κ(κ² + √2κ + 1)³),

R̄_v(u̇) := R_v(u̇)/ω_d² = κ⁴((1 + √2κ)²(κ² + √2κ) + (κ² + √2κ + 1)²)/(√2κ(κ² + √2κ + 1)³).   (3.76)

3.8.3 R_v(y) and R_v(u̇) in Section 3.4.5

The PI controller is

C_PI(s) = K_c(1 + 1/(T_r s)) = K_c (s + T_r⁻¹)/s = K_c (s + b)/s,   b = T_r⁻¹.

Then

S = 1/(1 + K_c((s + b)/s)(K_p/s)) = s²/(s² + sK_pK_c + K_pK_cb) = s²/(s² + 2ηω_c s + ω_c²).

Thus,

K_pK_c = 2ηω_c,  and  K_pK_cb = ω_c².

Substitution of K_pK_c = 2ηω_c into K_pK_cb = ω_c² yields b = ω_c/(2η). So

G_DY = K_p s/(s² + 2ηω_c s + ω_c²) = K_p s/(s² + 2ηκω_d s + κ²ω_d²),
G_DU = K_pK_c(s² + bs)/(s² + 2ηω_c s + ω_c²) = (2ηκω_d s² + κ²ω_d² s)/(s² + 2ηκω_d s + κ²ω_d²).

Then

Var[x₁(t)] = I_v( (1/(s + ω_d)) K_p s/(s² + 2ηκω_d s + κ²ω_d²) )
           = K_p² I_v( s/(s³ + s²(1 + 2ηκ)ω_d + s(2ηκ + κ²)ω_d² + κ²ω_d³) )
           = K_p²/(4ηκω_d³(κ² + 2ηκ + 1)).


R_v(y) = Var[x₁(t)]·2ω_d = K_p²/(ω_d²·2ηκ(κ² + 2ηκ + 1)),

R̄_v(y) := R_v(y)ω_d²/K_p² = 1/(2ηκ(κ² + 2ηκ + 1)).   (3.77)

Similarly,

Var[u̇(t)] = I_v( (1/(s + ω_d)) (2ηκω_d s² + κ²ω_d² s)/(s² + 2ηκω_d s + κ²ω_d²) )
          = I_v( (2ηκω_d s² + κ²ω_d² s)/(s³ + s²(1 + 2ηκ)ω_d + s(2ηκ + κ²)ω_d² + κ²ω_d³) )
          = ω_d κ²((1 + 4η²)κ + 8η³)/(4η(κ² + 2ηκ + 1)),

R_v(u̇) = Var[u̇(t)]·2ω_d = ω_d² κ³((1 + 4η²)κ + 8η³)/(2ηκ(κ² + 2ηκ + 1)),

R̄_v(u̇) := R_v(u̇)/ω_d² = κ³((1 + 4η²)κ + 8η³)/(2ηκ(κ² + 2ηκ + 1)).

Chapter 4

Downstream Processes

    4.1 Introduction

In the previous chapter, the smoothness of the outlet flow F_u(t) is mathematically captured as the variance of Ḟ_u(t) = v(t) and/or F_u(t), and the problem is solved by optimization with the following performance index (3.15):

J₁ = Var[y₁(t)] + ρ²(Var[v(t)] + μVar[u(t)]),

where y₁(t) is the tank level and u(t) is the deviation of F_u(t) from its mean value F_m. The reason

    for requiring outlet flow smoothness is to minimize adverse effects of tank level control on the

    downstream processes. In this chapter, the downstream process variable is explicitly included in

    the problem formulation. The averaging level control problem is considered as the minimization

    of the downstream variable variance while keeping the tank level between high and low limits. Let

y₂(t) denote the downstream process variable. The minimization of Var[y₂(t)] makes the problem mathematically tractable, and it also makes practical sense in some cases. For example, y₂(t) may represent a product quality parameter, and y₂(t) must be in a specified range for the product to be acceptable. Minimizing Var[y₂(t)] then maximizes the product yield.

    In this chapter, the following performance index is used.

J := Var[y₁(t)] + ρ₁²Var[y₂(t)] + ρ₂²Var[v(t)].   (4.1)

As in Chapter 3, ρ₁ and ρ₂ serve as Lagrange multipliers so that Var[y₁(t)] is minimized subject to ρ₂²Var[v(t)] + ρ₁²Var[y₂(t)] = c, where c is some constant. Actually, c is not given, and ρ₁ and ρ₂ are



adjusted to obtain the required Var[y₁(t)], which is specified as one of the design parameters. Since the problems are treated in continuous time, a non-zero weight on Var[v(t)] is necessary to keep the feedback gain K and Var[v(t)] finite.

4.2 Downstream Process Model

Figure 4.1 depicts a model of a combined system of the tank level control loop and the downstream process, where the outlet flow F_u(t) enters at the output of the downstream process P₂.

Figure 4.1: The tank level control loop (C₁ and P₁) and the downstream process P₂. The outlet flow F_u(t) enters at the output of P₂.

The purpose of the model is to investigate how the downstream process influences tank level controller design when the statistical relation between F_u(t) and y₂(t) is taken into account. The model is not meant for designing a multivariable system, where F_u(t) would be determined from both y₁(t) and y₂(t). Thus, the interaction between F_u(t) and y₂(t) is simplified, excluding any deadtime or gain units that may exist between F_u(t) and y₂(t). Therefore, it is assumed that the downstream process variable y₂(t) is not available to the tank level control loop. If F_u(t) affects more than one downstream process, y₂(t) is a hypothetical "representative" process variable that has no real physical entity.

Downstream processes are represented here by a first-order plus deadtime model,

P₂ = K₂/(T₂s + 1) e^(−τs) = K₂ω₂/(s + ω₂) e^(−τs),

where K₂ is the steady-state gain, T₂ the time constant, ω₂ = 1/T₂ the cutoff frequency, and τ the


deadtime. It is assumed that P₂ is closed with a PI controller C₂,

C₂ = K_c(1 + 1/(T_r s)) = K_c (s + ω_r)/s,   (4.2)

where K_c is the proportional gain, T_r the integrating time, and ω_r = 1/T_r. Then the sensitivity function S₂ is S₂ = 1/(1 + P₂C₂).


Figure 4.2: The magnitude of G_vi (solid line) and two approximations, for ω₂ = K₂ = τ = 1 and K_c = 0.3. The dashed line is for ω_p = ω_p′ and the dotted line is for ω_p = 1.4ω_p′.

Figure 4.2 shows the true G_vi and two approximations with different choices of ω_p.

When u enters at the input of the downstream process, G_vi has an additional turn-over frequency ω₂, above which |G_vi| decreases at −40 dB/decade. If such a model is used, the resulting

    controller, which is more complex, will produce outputs that have more high frequency compo-

    nents. Therefore, in the sequel, the downstream process is assumed to be a simple low-pass filter

driven by v(t). Let x₃(t) denote the deviation of y₂(t) from its mean value:

x₃(t) := y₂(t) − E[y₂(t)].   (4.10)

Since x₃(t) is the output of a low-pass filter driven by v(t),

ẋ₃(t) = −ω_p x₃(t) + v(t).   (4.11)

In reality, there is some "gain" involved between v(t) and x₃(t), depending on where and how u(t) enters the downstream process. Then the downstream model is

x₃(s) = K_d/(s + ω_p) v(s),   (4.12)

where K_d is the gain. The downstream process variable x₃(t) is assumed not to be available to the tank level control loop, and is recovered by a noise-free observer from x₁(t) and u(t) using the downstream process model (4.12). Therefore,

u(s) = C_D(s) x₁(s),

where C_D(s) is the controller transfer function. C_D is calculated to attain the required Var[y₁(t)] and Var[v(t)] with the minimum Var[y₂(t)] by adjusting ρ₁ and ρ₂ in (4.1). Let C_D denote the optimum controller for K_d = 1, and C_D′ for K_d ≠ 1, both with the same Var[y₁(t)] and Var[v(t)]. Let σ² denote Var[y₂(t)] for K_d = 1 and σ′² for K_d ≠ 1. From (4.12), σ′² = K_d²σ² if the same controller is used for both cases. If C_D ≠ C_D′, σ′² ≠ K_d²σ². This contradicts the fact that C_D and C_D′ are the optimum controllers for each case. Thus, C_D must be equal to C_D′. This means that K_d does not influence the optimum controller. However, it does influence the weight values needed to obtain the desired Var[y₁(t)] and Var[v(t)]. Thus, K_d is set to a convenient value for calculation.

¹The exact value depends on the plant uncertainty and τ (Ogawa, 1995), but the IMC rules recommend similar values (Morari and Zafiriou, 1989).

    4.3 State-Space Solution

The downstream process variable x₃(t) is added to the state variables of Chapter 3 (a minimal numerical sketch of the resulting set-up follows the list below):

x₁(t) = y₁(t) − y_r : tank level error
x₂(t) = F_d(t) − F_m : input disturbance deviation
x₃(t) = y₂(t) − E[y₂(t)] : downstream deviation variable
u(t) = F_u(t) − F_m : outgoing flow deviation (manipulated variable)
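The following Python sketch (not from the thesis) assembles the four-state design problem suggested by the list above. It assumes the same first-order disturbance model as in Chapter 3, K_d = 1, and the weights of (4.1); the parameter values are arbitrary examples, and in practice ρ₁ and ρ₂ would be adjusted to meet the Var[y₁(t)] target.

    # Minimal sketch: LQ set-up with the downstream state x3 appended (assumptions noted above).
    import numpy as np
    from scipy.linalg import solve_continuous_are

    Kp, wd, wp = 1.0, 0.5, 2.0          # process gain, disturbance cutoff, downstream cutoff (assumed)
    rho1, rho2 = 1.0, 0.5               # weights on Var[y2] and Var[v] in (4.1) (assumed)

    # states x = [x1, x2, x3, u], formal control input v = du/dt
    A = np.array([[0.0,  Kp, 0.0, -Kp],     # x1' = Kp (x2 - u)
                  [0.0, -wd, 0.0,  0.0],    # x2' = -wd x2 + wd w
                  [0.0, 0.0, -wp,  0.0],    # x3' = -wp x3 + v   (Kd = 1)
                  [0.0, 0.0, 0.0,  0.0]])   # u'  = v
    B = np.array([[0.0], [0.0], [1.0], [1.0]])
    Q = np.diag([1.0, 0.0, rho1**2, 0.0])   # penalizes Var[x1] and rho1^2 Var[x3]
    R = np.array([[rho2**2]])               # penalizes rho2^2 Var[v]

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    print("state feedback gain K =", K)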

    In this chapter, the input disturban