Electric Power Systems Research 37 (1996) 231-240
Simulated hardware design of artificial neural networks for adaptive plant control

C.S. Chang*, F. Wang, A.C. Liew
Power Systems Laboratory, National University of Singapore, Kent Ridge Crescent, Singapore 0511, Singapore

* Corresponding author.

Received 29 March 1996

Abstract

In this paper, an artificial neural network (ANN) hardware circuit design for implementing online plant parameter identification and plant control is presented. The parallel structure of the ANN hardware is that of a feedforward network with a real-time back-propagation training algorithm. The circuit is designed to implement energy function minimization and the gradient descent algorithm. Different schemes of the hardware design are discussed for realizing adaptive control functions. Simulation results show that the proposed ANN circuit design fulfils the required performance objectives.

Keywords: Artificial neural networks; Hardware circuit design; Controller design

1. Introduction

Applying artificial neural network (ANN) techniques to implement adaptive control systems has become increasingly popular in the past few years. Networks have been constructed to perform various tasks, such as dynamic system modeling and control optimization. ANN hardware and software implementation are the two major branches in research.

ANNs consist of highly connected simple processing units. The synthesis of individual processing units can be exploited to generate complex network performance. The characteristics and properties of ANNs are being actively explored [1,2] for: (a) function approximation: the ability of the ANN to approximate arbitrary mappings, especially of nonlinear functions, which stems from its collective computation structure; (b) parallel/distributed processing: constituted by simple processing units, the ANN is capable of implementing a parallel structure, which leads to fast processing ability; (c) hardware implementation: the simple-structured computation can easily be implemented in hardware, with additional speed and in a smaller physical space, using VLSI technology; (d) learning and adaptation: through various online or offline learning rules, ANNs are capable of establishing an intelligent mapping between input and output. Added to other features, such as data fusion and the handling of multivariable systems, these properties have earned ANNs increasing interest in various fields.

In the area of control, most research on applying ANNs focuses upon nonlinear systems and parallel implementation. Owing to the advantages stated above, the ANN is widely considered to be an ideal tool for realizing adaptive control functions. For this purpose, various architectures for applying ANNs to implement adaptive control have been proposed, including models for process estimation and control of dynamic systems [3-5].

The use of microprocessor technology for implementing conventional adaptive control has been widely researched in the past twenty years, and has reached a considerable degree of maturity. In adaptive control, some or all control settings are made adjustable, based on certain estimation algorithms, in response to real-time measurements of operating conditions. These control algorithms derive data from the estimated system state and evaluate optimal controls for each sample time interval.

The above control functions demand a tremendous amount of computation. There has been an upsurge of interest in applying ANNs in this area, since ANNs can be implemented with multiprocessors or massively parallel processors. Such architectures have significant advantages over conventional computers [6]. One area of interest in ANN hardware is the implementation of the gradient descent algorithm for back-propagation learning. Two approaches are commonly used: (i) the direct control approach, which maps the ANN structure directly onto the hardware; and (ii) the indirect control approach, which exploits matrix processing units in order to simplify the hardware implementation.

From the viewpoint of control, the implemented ANN should ideally behave like a dynamic component in the overall system. One common approach for training the ANN in a dynamic process is with the use of the gradient descent method for parameter optimization based on partial derivatives [7,8]. The ANN performance is evaluated by the gradient of a set of energy functions which constitutes the moving surface of the dynamic model of the ANN. Such a concept may be applied to optimize the control parameters of linear time-invariant dynamic systems, and may be extended to certain nonlinear dynamic systems.

Adaptive damping control has been proposed for static var sources (SVSs) in a weakly coupled power system [9-11]. The application of ANN techniques to such a problem has been studied [1] as an alternative means of exploiting the potential of SVS functions. Two feedforward ANNs have been applied to perform the required parameter estimation and optimization functions.

In this paper, an ANN hardware circuit design for implementing the software ANN functions from Ref. [1] is presented. This paper will demonstrate how such simple ANN hardware is suited to implementing adaptive controllers. Section 2 outlines the method of minimum-variance (MV) control and several schemes for implementing the control strategy. Section 3 presents the ANN techniques for performing the gradient descent method for estimating system parameters. Section 4 deals with the analog circuit for parameter estimation; the performance of the hardware design based on MATLAB/SIMULINK simulation is described. The ANN hardware for adaptive controller implementation and its simulated performance are discussed in Section 5. A summary of the achievements of the present work and concluding remarks are contained in Section 6.

2. Layout of adaptive control schemes

In this section, a brief review of the properties of minimum-variance (MV) control is given, in preparation for applying the ANNs in the following sections.

MV control is based upon optimization methods. The goal of MV control is to generate a control signal u(t) at each time step t which will minimize the variance of the output y(t). The criterion can be written in energy function form:

J = E[y^2(t + k)]   (1)

Note that the input u(t) can only affect y(t') for t' > t + k, so that the energy function involves the time delay k.

Assume that a single-input single-output (SISO) system is described as

A(z^-1)y(t) = z^-k B(z^-1)u(t) + C(z^-1)e(t)   (2)

y(t) = A*(z^-1)y(t) + z^-k B(z^-1)u(t) + C(z^-1)e(t)   (3)

where

A(z^-1) = 1 + a1 z^-1 + a2 z^-2 + ... + an z^-n

A*(z^-1) = -a1 z^-1 - a2 z^-2 - ... - an z^-n

B(z^-1) = b0 + b1 z^-1 + ... + bm z^-m

where y(t) is the output and u(t) is the input of the process, e(t) is a sequence of independent Gaussian-distributed noise, and the time delay is k (= n - m).

If the process in Eqs. (2) and (3) is minimum phase, the MV regulator is given by [12]

G(z^-1)u(t) = -F(z^-1)y(t)   (4)

where the polynomials G and F are the minimum-degree solution to the Diophantine equation

A(z^-1)G(z^-1) + z^-k B(z^-1)F(z^-1) = B(z^-1)C(z^-1)   (5)

Two methods have been applied to implement the MV control, namely, the direct and the indirect schemes [13].

2.1. Direct control

The direct self-tuning control algorithm [12] can be defined as a regression model:

y(t) = F(z^-1)y'(t - k) + G(z^-1)u'(t - k) + e(t)   (6)

where y'(t) and u'(t) are the filtered signals y(t)/A0 and u(t)/A0, and A0 is the desired observer polynomial.

If the relationship between the measured signal y(t) and u(t) is deterministic, the control law can be eval- uated directly.
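For illustration, the regression model of Eq. (6) could be fitted online with a simple gradient (LMS-type) update of its coefficients. The Python sketch below is not the paper's circuit; the polynomial orders, learning rate and pre-filtered signal values are illustrative assumptions.

```python
import numpy as np

def direct_mv_update(theta, y_filt, u_filt, y_now, mu=0.05):
    """One LMS-style update of the regression model of Eq. (6):
    y(t) = F(z^-1) y'(t-k) + G(z^-1) u'(t-k) + e(t).
    theta stacks the coefficients of F and G; y_filt/u_filt hold the
    delayed filtered samples they multiply (layout is illustrative)."""
    phi = np.concatenate([y_filt, u_filt])   # regressor vector
    err = y_now - phi @ theta                # prediction error
    return theta + mu * err * phi            # delta-rule correction

# usage sketch: assumed orders nf = ng = 2 (not taken from the paper)
theta = np.zeros(4)
theta = direct_mv_update(theta,
                         y_filt=np.array([0.10, 0.05]),   # y'(t-k), y'(t-k-1)
                         u_filt=np.array([0.20, 0.10]),   # u'(t-k), u'(t-k-1)
                         y_now=0.15)
```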

2.2. Indirect control

The control law may also be derived through the indirect method [13]. The process parameters A and B are first identified using a certain parameter estimation method, and the Diophantine equation is then solved for F and G:


A(z^-1)G(z^-1) + z^-k B(z^-1)F(z^-1) = B(z^-1)C(z^-1)

The control signal u(t), therefore, can be derived through the following equation:

0 = F(z^-1)y(t) + G(z^-1)u(t)   (7)
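Once F and G have been obtained, Eq. (7) can be evaluated as a simple difference equation for the current control value. The sketch below assumes a monic G and illustrative polynomial orders and coefficients, none of which come from the paper.

```python
import numpy as np

def mv_control_step(f, g, y_hist, u_hist):
    """Solve Eq. (7), 0 = F(z^-1) y(t) + G(z^-1) u(t), for u(t).
    f = [f0, f1, ...] acts on y(t), y(t-1), ...;
    g = [1, g1, ...] acts on u(t), u(t-1), ... (monic G assumed).
    y_hist = [y(t), y(t-1), ...]; u_hist = [u(t-1), u(t-2), ...]."""
    return -(np.dot(f, y_hist) + np.dot(g[1:], u_hist))

# usage with illustrative coefficients (not the paper's values)
u_t = mv_control_step(f=np.array([0.8, -0.3]),
                      g=np.array([1.0, 0.5]),
                      y_hist=np.array([0.2, 0.1]),
                      u_hist=np.array([0.05]))
```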

The above two methods may be extended to a wider range of applications, as described in the following.

2.3. Servo control

Both the direct and indirect MV control methods may be extended to servo control, in which the setpoint signal r(t) is also considered in the scheme. The corresponding model is

y(t) = A*(z^-1)y(t) + z^-k B(z^-1)u(t) + R(z^-1)r(t)   (8)

where r(t) is a measurable setpoint. For servo performance, however, the energy function should include the error terms, and is of the form [14]:

J = E{[y(t - k) - r(t + k)]^2}   (9)

2.4. Generalized MV control

For nonminimum-phase systems, or for a process with dead time, the generalized minimum-variance (GMV) control was introduced, which consists of the minimization of a generalized energy function [15]. The approach introduces a pseudo-output defined by

φ(t) = Py(t) + Qu(t - k) + Rr(t - k)   (10)

where P is a specified transfer function defining an auxiliary output ψ(t) = P(z^-1)y(t), with P(z^-1) = Pn(z^-1)/Pd(z^-1), and Q is another user-specified transfer function for reducing the excessive control activity associated with minimum-variance control with a small sample interval k.

The GMV control may also be implemented in the direct or the indirect scheme. The difference is that the control criterion for the GMV is to minimize the energy function of the variance of the pseudo-output:

J = E[φ^2(t + k)]   (11)

The control law for GMV can, therefore, be written as

G(z^-1)u(t) = -F(z^-1)y(t) + H(z^-1)r(t)   (12)

where G, F and H are polynomials which will be estimated through the adaptive control algorithm:

G(z^-1) = 1 + g1 z^-1 + ... + gng z^-ng   (13)

F(z^-1) = f0 + f1 z^-1 + ... + fnf z^-nf   (14)

H(z^-1) = h0 + h1 z^-1 + ... + hnh z^-nh   (15)

3. System parameter estimation and optimization

A system model is a description of the properties of a certain process. System model parameter estimation establishes algorithms to search for the most suitable parameters of the model. With an established system model, optimal control schemes can be implemented. In this section, algorithms for parameter estimation and optimization suitable for ANN implementation are discussed.

3.1. Estimation algorithms

Consider a system represented by a discrete-time transfer function with an output signal y(t), where t denotes the time step for each sample collection. The control input signal is u(t) and the Gaussian-distributed random noise signal is e(t).

A(z^-1)y(t) = B(z^-1)u(t - 1) + C(z^-1)e(t)   (16)

where

A(z^-1) = 1 + a1 z^-1 + ... + an z^-n

B(z^-1) = b0 + b1 z^-1 + ... + bm z^-m

C(z^-1) = 1 + c1 z^-1 + ... + cl z^-l

To estimate the coefficients of the functions A(·), B(·) and C(·), suppose that all the input-output pairs are measurable and are expressed by the vector data pairs:

{u(t), y(t)},   t ∈ [0, N]

where N is made sufficiently large for estimation purposes. To emphasize the object to be estimated and the data available, we rewrite the system model (Eq. (16)) in a backward time step form:

[y(1), ..., y(N)]^T = [x^T(1); ...; x^T(N)] θ + e   (17)

where θ represents the unknown system parameters:

θ^T = [a1, ..., an, b0, ..., bm, c1, ..., cl]   (18)

and x(t) is a data vector to be measured:

x^T(t) = [-y(t - 1), ..., -y(t - n), u(t - 1), ..., u(t - m), e(t - 1), ..., e(t - l)]   (19)

Note that Eqs. (17)-(19) contain the noise values e(t - 1), ..., e(t - l), which are unobservable. One way of eliminating the unknown values e(t) from Eq. (19) is to replace them by the prediction error ε(t), as in the following equation:


ε(t) = y(t) - X^T(t)θ(t - 1)   (20)

where

X^T(t) = [-y(t - 1), ..., -y(t - n), u(t - 1), ..., u(t - m), ε(t - 1), ..., ε(t - l)]   (21)

To estimate the parameters θ, Eq. (17) is rewritten in the following form:

ε = y - Xθ   (23)

We construct an energy function J with respect to Eq. (23), and minimize J to obtain the parameters θ:

J(θ) = Σ_{i=1}^{N} σ[ε_i(θ)]   (24)

The weighting function σ can be one of the following types:

quadratic function:

σ[ε] = (1/2) ε^T ε   (25)

logistic function:

σ[ε] = β^2 ln[cosh(ε/β)]   (26)

Applying the gradient approach for the minimization of the energy function:

dθ_j/dt = μ Σ_{i=1}^{N} ψ(ε_i) x_j(i),   j = 1, 2, ..., N   (27)

where μ is known as the learning rate in ANN implementation and ψ(·) denotes the derivative of the weighting function σ(·).

The above system of differential equations may be converted into difference equations by Euler's rule [16]:

θ_j^(k+1) = θ_j^(k) + μ Σ_{i=1}^{N} ψ(ε_i^(k)) x_j(i),   j = 1, 2, ..., N   (28)

which may be solved iteratively to estimate the system parameters.
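A numerical sketch of the iterative update of Eq. (28) with the quadratic weighting of Eq. (25), for which ψ(ε) = ε, is given below in Python. The second-order test plant, noise level, learning rate and iteration count are assumptions chosen only to illustrate the algorithm; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed second-order test plant (not the paper's):
# y(t) = 1.0*y(t-1) - 0.25*y(t-2) + 0.5*u(t-1) + 0.3*u(t-2) + e(t)
N = 400
u = rng.uniform(-1.0, 1.0, N)
e = 0.01 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):
    y[t] = 1.0*y[t-1] - 0.25*y[t-2] + 0.5*u[t-1] + 0.3*u[t-2] + e[t]

# Regressors x(t) = [-y(t-1), -y(t-2), u(t-1), u(t-2)], as in Eq. (19)
X = np.column_stack([-y[1:N-1], -y[:N-2], u[1:N-1], u[:N-2]])
Y = y[2:N]

theta = np.zeros(4)      # estimates of [a1, a2, b0, b1]
mu = 5e-4                # learning rate of Eq. (28); assumed value
for _ in range(500):     # iterative batch update, Eq. (28)
    eps = Y - X @ theta               # prediction errors, Eq. (20)
    theta = theta + mu * (X.T @ eps)  # quadratic weighting: psi(eps) = eps
print(theta)             # should move toward [-1.0, 0.25, 0.5, 0.3]
```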

3.2. Gradient approach for MV control

In Section 2, several schemes for implementing the MV control are reviewed. In this section, both the direct and indirect MV schemes are implemented by ANN techniques.

It has been shown that a measurable function can be approximated to any desired accuracy by ANNs. Accordingly, we can rewrite Eq. (8) as

Ĝ(x, wg) = N(Σ wg x)   (29)

F̂(x, wf) = N(Σ wf x)   (30)

Ĥ(x, wh) = N(Σ wh x)   (31)

where N denotes a mapping. The coefficients wg, wf and wh represent the parameters of G, F and H, in such a way that the estimates Ĝ(x, wg), F̂(x, wf) and Ĥ(x, wh) can approximate G, F and H to a certain accuracy. w(t) denotes the estimate of w at time step t, and x are the measurable system states. Therefore, the control u(t) can be defined as follows:

Ĝ(x, wg)u(t) = -F̂(x, wf)y(t) + Ĥ(x, wh)r(t)   (32)

Recalling the generalized system by Eq. (10), the control action can be expressed as

φ(t + 1) = Ĝu(t) + F̂y(t) - Ĥr(t)   (33)

We define the energy function by

J = E[σ(φ)]   (34)

Then the weights wg, wf and wh are adjusted according to the gradient descent method:

w^(t+1) = w^(t) - μ ∂J/∂w^(t),   for w = wg, wf, wh   (35)

where the scalar μ specifies the learning rates. By using the method in Section 2, the weights G, F and H can be calculated. The control signal u(t+1) will then be generated for the next time step from Eq. (32).

For a linear time-invariant process model:

Ay(t) = Bu(t - 1) + e(t)   (36)

The indirect control scheme can be designed as follows. Assume that the process parameters A and B can be identified separately; the control function is then

Gu(t) = -Fy(t) + Hr(t)

By substituting Eq. (32) into Eq. (36):

(AG + z^-1 BF)y(t) = BHr(t) + Ge(t)   (37)

For servo control,

y(t) - r(t) = [B(H - z^-1 F) - AG] / (AG + z^-1 BF) r(t) + G / (AG + z^-1 BF) e(t)   (38)
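The closed-loop polynomials appearing in Eqs. (37) and (38) are just polynomial products and sums, which can be formed numerically by convolution. The coefficients in the sketch below are illustrative only, not values from the paper.

```python
import numpy as np

# Illustrative polynomials in ascending powers of z^-1 (not the paper's values)
A = np.array([1.0, -0.9])        # A(z^-1)
B = np.array([0.5])              # B(z^-1)
G = np.array([1.0, 0.2])         # G(z^-1)
F = np.array([0.7])              # F(z^-1)

# Closed-loop denominator of Eq. (37): A*G + z^-1 * B*F
AG = np.convolve(A, G)
BF = np.convolve(B, F)
zBF = np.concatenate([[0.0], BF])            # multiply B*F by z^-1 (shift by one)
n = max(len(AG), len(zBF))
den = np.pad(AG, (0, n - len(AG))) + np.pad(zBF, (0, n - len(zBF)))
print(den)                                    # coefficients of AG + z^-1 BF
```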

We construct the energy function from Eq. (38):

J = E{[y(t) - r(t)]^2}   (39)

Applying the gradient descent method to this energy function, we can minimize J in order to derive the control parameters G, F and H.
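As a minimal sketch of one gradient step of Eq. (35), take σ(φ) = φ^2/2, so that the partial derivative of the energy with respect to each controller coefficient is φ times the signal that coefficient multiplies in Eq. (33). The coefficient layouts, signal histories and learning rate below are assumptions for illustration.

```python
import numpy as np

def gmv_gradient_step(wg, wf, wh, u_hist, y_hist, r_hist, mu=0.01):
    """One gradient-descent step of Eq. (35) on sigma(phi) = 0.5*phi^2, with the
    pseudo-output of Eq. (33): phi = G(z^-1)u(t) + F(z^-1)y(t) - H(z^-1)r(t).
    u_hist = [u(t), u(t-1), ...]; y_hist = [y(t), ...]; r_hist = [r(t), ...]."""
    phi = wg @ u_hist + wf @ y_hist - wh @ r_hist
    # d(0.5*phi^2)/dw = phi * (signal multiplied by that coefficient)
    wg_new = wg - mu * phi * u_hist
    wf_new = wf - mu * phi * y_hist
    wh_new = wh + mu * phi * r_hist
    return wg_new, wf_new, wh_new, phi

# usage with illustrative histories and initial weights
wg, wf, wh, phi = gmv_gradient_step(wg=np.array([1.0, 0.2]),
                                    wf=np.array([0.5, 0.1]),
                                    wh=np.array([0.4]),
                                    u_hist=np.array([0.1, 0.05]),
                                    y_hist=np.array([0.3, 0.2]),
                                    r_hist=np.array([0.25]))
```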


Fig. 1. Plant model parameter estimation.

4. Circuit design for ANN estimation and control

One of the significant features of the ANN based estimation and control is the hardware implementation. In this section, the analog circuit design is discussed. Simulation results are also presented.

4.1. Parameter estimation implementation

Feedforward neural networks may consist of one or several layers of processing units, or neurons. The feedforward network topology may be varied according to the application requirements. In fact, a network topology with only one hidden layer using a typical squashing function is capable of approximating any function to any desired degree of accuracy [17]. If the relationship between the input and output is deterministic and the number of processing units is sufficient, the neural network is always capable of approximating any prescribed function. This consideration provides a basis for applying ANNs for dynamic system identification.

A schematic diagram of system identification is presented in Fig. 1. Based on the sampled input u and output y, the back-propagation algorithm is implemented by updating the ANN weights using errors between the estimated and measured outputs, where the block T refers to the time delay unit.

Fig. 2. Circuit for plant parameter estimation using an ANN.


Fig. 3. Basic artificial neuron cell.

Fig. 4. Signal mapping through threshold functions.

The gradient descent procedure, or the delta rule, is widely used in back-propagation algorithms for network learning. As presented in Eqs. (27), (28) and (35), the learning rate μ plays an important role in the efficiency of the learning procedure. In the following part of this section, a circuit design and some considerations on the parameter settings are presented.

A circuit design for the ANN based scheme of parameter estimation is illustrated in Fig. 2. The block circuit is designed and simulated on the SIMULINK tool from the MATLAB software package [18]. The circuit consists of three main function blocks, the time delay unit (T), the processing unit (NEU), and the weight updating unit (BP). The topology of the circuit is a parallel structure of processing units.

In the structure, the processing unit NEU is a typical McCulloch-Pitts model, which is one of the most widely applied models, and is illustrated in Fig. 3. The signal s is the weighted sum of the inputs {xi}, i = 1, 2, ..., N, which is modified through a squashing function to form the output y. The variables x0 and w0 refer to the bias input and its corresponding weight, with w0 = 1. The squashing function can take many forms. For this work, the derivative of Eq. (26) is used:

f(x) = (1 - exp(-βx)) / (1 + exp(-βx))   (40)

The function saturates within the range (-1, +1). The coefficient β in Eq. (40) is the steepness parameter, which gives an indication of the sensitivity of the processing units to the input stimulus. Several ways of choosing the steepness parameter β have been studied [19,20]. In the following design, it is set manually as a gain whose value is related to the possible distribution of the summed input signals shown in Fig. 4:

β ≤ Prob(||s|| < 1) / max||s||   (41)

According to Eq. (41), β determines the sensitivity of the ANN to the signal s. A large value of β results in a small portion of the summed signals being distributed within the range (-1, +1); the processing unit NEU then behaves like a gate, switching between -1 and +1. If Prob(||s|| < 1) = 0.99, it means that 99% of the signal s will be mapped within the range (-1, +1), and the processing unit largely performs like a linear function within limits.
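A sketch of the squashing function of Eq. (40), together with one possible reading of the steepness rule of Eq. (41). The percentile-based choice of β below is an assumption used only to illustrate the idea of keeping most summed signals within the near-linear region; it is not the paper's procedure.

```python
import numpy as np

def squash(x, beta):
    """Squashing function of Eq. (40): f(x) = (1 - exp(-beta*x)) / (1 + exp(-beta*x)).
    Algebraically equal to tanh(beta*x/2); saturates within (-1, +1)."""
    return (1.0 - np.exp(-beta * x)) / (1.0 + np.exp(-beta * x))

def choose_beta(summed_signals, coverage=0.99):
    """Heuristic reading of Eq. (41): pick beta so that roughly `coverage`
    of the observed summed inputs s fall in the near-linear region of f."""
    s_max = np.percentile(np.abs(summed_signals), 100.0 * coverage)
    return 1.0 / s_max if s_max > 0 else 1.0

# example: beta chosen from a batch of assumed summed signals
s = np.random.default_rng(1).normal(0.0, 2.0, 1000)
beta = choose_beta(s)
out = squash(s, beta)
```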

A processing unit NEU is illustrated in Fig. 5, which is a simulation of the McCulloch-Pitts model. In the NEU design, the steepness parameter β is prespecified as shown in Eq. (40). The input signals are represented by a content vector 'phi' and a weight vector 'weight'. The dimension of the content array may be changed by adding phi-weight pairs at the summation node.
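A behavioural sketch of the NEU block: a weighted sum of the content vector 'phi' plus a bias term, passed through the squashing function of Eq. (40). The bias convention and vector length below are assumptions based on the description above.

```python
import numpy as np

def neu(phi, weight, beta, x0=1.0, w0=1.0):
    """McCulloch-Pitts processing unit (NEU): s = w0*x0 + sum_i weight_i*phi_i,
    followed by the squashing function f(s) of Eq. (40)."""
    s = w0 * x0 + np.dot(weight, phi)                             # summation node
    return (1.0 - np.exp(-beta * s)) / (1.0 + np.exp(-beta * s))  # threshold block

# usage with an assumed 4-element content vector
y_out = neu(phi=np.array([0.2, -0.1, 0.4, 0.0]),
            weight=np.array([0.5, 0.3, -0.2, 0.1]),
            beta=0.25)
```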

Weight update in the ANN training process is implemented by the back-propagation (BP) block of the overall structure, shown in Fig. 6, where the function of the BP block is to carry out the following calculation:

α Σ_{i=1}^{N} ψ_i φ_j(i)

where α is the learning rate, ψ is the output of the processing unit and φ represents the input signal. In Fig. 6, 'alpha' represents the learning rate, 'o' is the processing unit output ψ, and 'phi' is the input signal φ.

In the overall structure of Fig. 2, 'T' is a time delay function block. It temporarily stores the data samples, and updates its contents at each time step. Block 'I' represents the integration function, which updates the 'weights' of the ANN.
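Behavioural sketches of the 'T' and 'I' blocks as described above; the buffer length, the newest-first ordering and the update convention are assumptions for illustration only.

```python
import numpy as np

class DelayLine:
    """'T' block: stores the most recent n samples and shifts in a new
    sample at every time step (newest sample first)."""
    def __init__(self, n):
        self.buf = np.zeros(n)
    def step(self, x_new):
        self.buf = np.roll(self.buf, 1)
        self.buf[0] = x_new
        return self.buf.copy()

class WeightIntegrator:
    """'I' block: accumulates the increments produced by the BP block,
    i.e. w <- w + dw at each time step."""
    def __init__(self, n_weights):
        self.w = np.zeros(n_weights)
    def step(self, dw):
        self.w = self.w + dw
        return self.w.copy()

# usage: delay a sampled signal and accumulate weight increments
delay = DelayLine(4)
integ = WeightIntegrator(4)
phi = delay.step(0.3)          # delayed samples feeding the NEU/BP blocks
w = integ.step(0.01 * phi)     # accumulate an illustrative increment
```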

4.2. Parameter estimation results

To demonstrate the performance of the circuit design, consider a simple process given by the following transfer function:


Fig. 5. A neuron simulated on SIMULINK (NEU).

Gp(s) = 1.0667 / (s^2 + 0.667s + 0.10667)   (42)

which may be discretized using the z-transform as

Gp(z) = (1.387z + 0.889) / (z^2 - 1.036z + 0.2636)   (43)
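The paper does not state the sampling period used to obtain Eq. (43). Assuming a zero-order hold and a sampling time of about 2 s, a quick numerical check with scipy (an assumption; the original work used MATLAB/SIMULINK) should give coefficients close to those quoted:

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time plant of Eq. (42)
num = [1.0667]
den = [1.0, 0.667, 0.10667]

# Zero-order-hold discretization; Ts = 2.0 s is an assumed sampling period
num_d, den_d, _ = cont2discrete((num, den), dt=2.0, method='zoh')
print(np.round(np.squeeze(num_d), 3))  # expected near [0, 1.39, 0.89]
print(np.round(den_d, 4))              # expected near [1, -1.036, 0.2635]
```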

Fig. 7(a)-(d) shows the convergence of the parameter estimation using the ANN circuit proposed in Fig. 2. Four case studies have been carried out with (a) α = 0.1, (b) α = 0.2, (c) α = 0.3, and (d) α = 0.2, where β remains constant at 0.25 in (a)-(c) and is equal to 0.5 in (d). Increasing the learning rate α speeds up learning, but causes numerical oscillations before convergence.

The above learning performance is based on the gradient descent approach which may be extended for more complicated ANN designs [8].

5. Control implementation

The MV control algorithms discussed in Section 3 are developed to return the plant to the steady-state operating condition. This is the major aim of the SVS damping control loop [1]. Other situations exist, however, where the control requirement is to regulate y(t) against varying operating conditions. Several ways have been proposed and implemented to accommodate such a time-varying setpoint r(t). The most obvious way is to apply the MV control algorithms with the input signal y(t) - r(t). As a result, the plant output should then be r(t) plus the regulation error E[e(t)]. Thus, the energy function of the MV control algorithms is of the same form as Eq. (8). The corresponding control law is written in incremental form as

F Δu(t) = [r(t) - y(t)]   (44)

The control configuration is illustrated in Fig. 8.
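A sketch of how the incremental law of Eq. (44) and the integrator of Fig. 8 might be evaluated at one time step, assuming a monic regulator polynomial F; the coefficients and signal values are illustrative only.

```python
import numpy as np

def incremental_mv_step(f, err, du_hist):
    """Eq. (44): F(z^-1) * du(t) = r(t) - y(t), solved for du(t).
    f = [1, f1, f2, ...] (monic F assumed); err = r(t) - y(t);
    du_hist = [du(t-1), du(t-2), ...]."""
    return err - np.dot(f[1:], du_hist)

# one step of the loop in Fig. 8 with illustrative values
f = np.array([1.0, -0.4])
du = incremental_mv_step(f, err=0.3, du_hist=np.array([0.05]))
u_prev = 0.0
u = u_prev + du          # integrator: u(t) = u(t-1) + du(t)
```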

5.1. Controller circuit design

The method developed for the ANN-based parameter estimation presented in Section 4 can also be employed in designing control systems. Two control schemes are designed, namely, the direct adaptive and the indirect adaptive control algorithms.

5.1.1. Direct control scheme

A SIMULINK program is illustrated in Fig. 9. It is designed to simulate the analog ANN circuit implementation. In the diagram, the circuit is mainly integrated into two blocks, namely, the estimation block for estimating the controller parameters, and the controller block for control action implementation. The detailed circuits for each block are given in Ref. [1].

5.1.2. Indirect ANN control scheme

The indirect control scheme has a similar structure to that of the direct control scheme. However, three integrated circuit blocks are employed in this control scheme, for the plant model estimation, control parameter optimization, and controller implementation (Fig. 10). The detailed circuits are also given in Ref. [1].

5.2. Numerical simulation

We take the same plant model as in Section 4.2 as an example.

5.2.1. Simulation results with the direct control design

Fig. 11(a) and (b) shows the plant output y(t) and the control input u(t) of the proposed direct ANN controller in response to a square-wave reference signal r(t). The ANN controller starts self-learning from the beginning, and is activated at t = 100. The ANN weights are given very small random initial values. The simulation period takes 400 time steps. If more steps of calculation are carried out, the plant output will track the reference signal more closely.

Fig. 6. The weighted sum function.

Fig. 7. Simulated results for parameter estimation: (a) α = 0.1, β = 0.25; (b) α = 0.2, β = 0.25; (c) α = 0.3, β = 0.25; (d) α = 0.2, β = 0.5.

Fig. 8. MV control with incremental action and nonzero reference signal r(t).

Fig. 9. A direct ANN adaptive control scheme.

Fig. 10. An indirect ANN adaptive control scheme.

Fig. 12(a) and (b) shows the effectiveness of the ANN control under noise. In this case, the adaptive control is activated at t = 100. During the initial time, t = 0 to 100, the plant is severely disturbed by noise. Later on, the ANN controller reduces the disturbances as soon as the controller has self-learned (see the plant output at t > 150).

5.2.2. Simulation results with the indirect ANN control design

With the same study parameters and initial conditions, the ANN controller is triggered at the 100th time step and achieves a similar effectiveness (Figs. 13 and 14) to that of the previous direct control design.

6. Conclusions

This paper reports the progress of a preliminary controller design, in which ANNs perform the plant parameter identification and control optimization functions. A key element of the design is a feedforward ANN with dynamic learning capability. The ANN structure is mainly made up of a number of basic neuron components and integrators. The learning rate and steepness parameter are assumed to be predetermined gains.


One of the advantages of using the ANN to implement adaptive control is its parallel computation characteristic. In this paper, a design using straightforward mapping of the ANN structure to circuit blocks is presented. The circuit blocks are arranged in a parallel form. However, a drawback of the design is its large number of connections, which tends to become a limiting factor for the size of the ANNs.

Examples are presented to demonstrate the effectiveness of the design. Both the indirect and direct control schemes achieved similar effectiveness under varying operating conditions.

Fig. 11. The direct ANN adaptive control performance with square-wave reference input: (a) plant response y(t) with reference signal r(t); (b) incremental control signal Δu(t).

Fig. 12. The direct ANN control with noise input: (a) plant output y(t) with noise input; (b) control signal u(t), starting from the 100th step.

Fig. 13. The indirect ANN adaptive control performance with square-wave reference input: (a) plant output y(t) with reference signal r(t); (b) incremental control signal Δu(t).

Fig. 14. The indirect ANN control with noise input: (a) plant output y(t) and noise input; (b) ANN control signal u(t), starting from the 100th step.

References

[1] F. Wang, Application of artificial neural networks to SVS control for improving power systems damping, Thesis, Department of Electrical Engineering, National University of Singapore, 1995.

[2] K.J. Hunt, D. Sbarbaro, R. Zbikowski and P.J. Gawthrop, Neural networks for control systems: a survey, Automatica, 28 (6) (1992) 1083-1112.

[3] B. Widrow and M.A. Lehr, 30 years of adaptive neural networks: perceptron, madaline, and backpropagation, Proc. IEEE, 78 (1990) 1415-1442.

[4] K.S. Narendra and K. Parthasarathy, Identification and control of dynamical systems using neural networks, IEEE Trans. Neural Networks, 1 (1990) 4-27.

[5] F. Chen, Back-propagation neural networks for nonlinear self-tuning adaptive control, IEEE Control Syst. Mag., 10 (Apr.) (1990) 44-48.

[6] M. Kawato, Y. Uno, M. Isobe and R. Suzuki, Hierarchical neural network model for voluntary movement with application to robotics, IEEE Control Syst. Mag., 8 (Apr.) (1988) 8-16.

[7] K.W. Przytula and V.K. Prasanna, Parallel Digital Implementations of Neural Networks, Prentice-Hall, Englewood Cliffs, NJ, 1993.

[8] K.S. Narendra and K. Parthasarathy, Gradient methods for the optimization of dynamical systems containing neural networks, IEEE Trans. Neural Networks, 2 (2) (1991) 252-262.

[9] J.G. Kuschewski, S. Hui and S.H. Zak, Application of feedforward neural networks to dynamical system identification and control, IEEE Trans. Control Syst., 1 (1) (1993) 37-49.

[10] P.K. Dash, A.M. Sharaf and E.F. Hill, An adaptive stabilizer for thyristor controlled static var compensators for power systems, IEEE Trans. Power Syst., 4 (2) (1989) 403-410.

[11] J.R. Smith et al., An enhanced LQ adaptive var unit controller for power system damping, IEEE Trans. Power Syst., 4 (2) (1989) 443-451.

[12] K.J. Åström and B. Wittenmark, Adaptive Control, Addison-Wesley, New York, 1989.

[13] M.J. Grimble, Weighted minimum-variance self-tuning control, Int. J. Control, 36 (4) (1982) 597-609.

[14] P.E. Wellstead and M.B. Zarrop, Self-tuning Systems: Control and Signal Processing, Wiley, Chichester, UK, 1991.

[15] D.W. Clarke, Self-tuning control of nonminimum-phase systems, Automatica, 20 (5) (1984) 501-517.

[16] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, Wiley, Chichester, UK, 1993.

[17] K. Hornik, M. Stinchcombe and H. White, Multilayer feedforward networks are universal approximators, Neural Networks, 2 (1989) 359-366.

[18] Simulink User Guide, MathWorks, Natick, MA, 1992.

[19] A. Sperduti and A. Starita, Speed up learning and network optimization with extended back propagation, Neural Networks, 6 (1993) 365-383.

[20] J.K. Kruschke and J.R. Movellan, Benefits of gain: speeded learning and minimal hidden layers in back-propagation networks, IEEE Trans. Syst. Man Cybern., 21 (1) (1991) 273-280.