
International Journal of Neural Systems, Vol. 22, No. 1 (2012) 21–35. © World Scientific Publishing Company

DOI: 10.1142/S0129065712003067

A NOVEL EFFICIENT LEARNING ALGORITHM FOR SELF-GENERATING FUZZY NEURAL NETWORK WITH APPLICATIONS

FAN LIU∗ and MENG JOO ER†

School of EEE, Nanyang Technological University, Singapore 639798, Singapore

∗[email protected] †[email protected]

∗Corresponding author.

In this paper, a novel efficient learning algorithm for a self-generating fuzzy neural network (SGFNN) is proposed based on ellipsoidal basis functions (EBF); the network is functionally equivalent to a Takagi-Sugeno-Kang (TSK) fuzzy system. The proposed algorithm is simple and efficient and is able to generate a fuzzy neural network with high accuracy and compact structure. The structure learning algorithm of the proposed SGFNN combines criteria for fuzzy-rule generation with a pruning technique. The Kalman filter (KF) algorithm is used to adjust the consequent parameters of the SGFNN. The SGFNN is employed in a wide range of applications, ranging from function approximation and nonlinear system identification to a chaotic time-series prediction problem and a real-world fuel consumption prediction problem. Simulation results and comparative studies with other algorithms demonstrate that a more compact architecture with high performance can be obtained by the proposed algorithm. In particular, this paper presents an adaptive modeling and control scheme for a drug delivery system based on the proposed SGFNN. A simulation study demonstrates the ability of the proposed approach to estimate the drug's effect and regulate blood pressure at a prescribed level.

Keywords: Self-generating fuzzy neural network; ellipsoidal basis function (EBF); criteria of generation and pruning; Kalman filter (KF) algorithm.

1. Introduction

Over the past decade, many novel artificial intelligence and machine learning algorithms have been successfully developed and widely applied, for example, neural network models for earthquake magnitude prediction using multiple seismicity indicators [1], estimation of freeway work zone capacity based on a neuro-fuzzy logic model [2], nonparametric identification of structures based on fuzzy wavelet neural networks using the nonlinear autoregressive moving average with exogenous inputs approach [3], and nonlinear complex system identification based on internal recurrent neural networks (IRNN) [4].

Recently, many researchers have focused on combining evolutionary algorithms (EA) with machine learning algorithms. Hung and Adeli [5] combined a genetic algorithm (GA) with an adaptive conjugate gradient neural network learning algorithm for training feedforward neural networks. Elragal [6] adopted the particle swarm optimization (PSO) [7] algorithm to update the weights and biases of a neural network to improve its prediction accuracy.

Another well-known development is the fuzzy neural network (FNN), which has been proven to reap the benefits of both fuzzy logic and neural networks [1, 8-12]. It has been proven theoretically that fuzzy logic systems and neural networks can approximate any function to any prescribed accuracy provided that sufficient fuzzy rules or hidden neurons are available [13, 14]. In FNN systems, standard neural networks are designed to approximate a fuzzy inference system through the structure of neural networks, while the parameters of the fuzzy system are modified by means of learning algorithms used in neural networks [15]. Twin issues associated with a fuzzy system are (1) parameter estimation, which involves determining the parameters of premises and consequents, and (2) structure identification, which involves partitioning the input space and determining the number of fuzzy rules for a specific performance [16].

FNN systems have been found to be very efficient and of widespread use in several areas such as adaptive control [17-19], signal processing [20], nonlinear system identification [21, 22], pattern recognition [23] and so on. Besides the well-known adaptive-network-based fuzzy inference system (ANFIS) [24], many FNN algorithms have been presented. Juang and Lin [9] proposed an online self-constructing neural fuzzy inference network, which is a modified Takagi-Sugeno-Kang (TSK) type fuzzy system possessing the learning ability of a neural network. Leng et al. [25] proposed a self-organizing fuzzy neural network (SOFNN) employing the optimal brain surgeon (OBS) approach as the pruning method. A novel hybrid learning approach, termed self-organizing fuzzy neural networks based on genetic algorithms (SOFNNGA), which is used to design a growing FNN and to implement Takagi-Sugeno (TS) type fuzzy models, has also been proposed by Leng et al. [26]. However, like most online learning algorithms, it suffers from slow learning due to its growing and pruning criteria and from a complicated learning process due to the use of a GA to optimize the topology of the initial network structure. Another significant work in the development of FNNs is the online sequential learning algorithm known as the resource allocating network (RAN) [27], which dynamically determines the number of hidden-layer neurons based on the properties of the input samples. An enhancement of the RAN, named RANEKF [28], was proposed in which the extended Kalman filter (EKF) method rather than the least mean squares (LMS) algorithm is used to update the parameters of the network. Another improvement of the RAN, developed in Ref. 23, employs a pruning method whereby inactive hidden neurons can be detected and removed during the learning process. A further improvement of the RAN in Ref. 29 takes into consideration orthogonal techniques such as QR factorization and singular value decomposition (SVD) to determine the appropriate input structure of the RBF network and prune the irrelevant neurons within the same network. Another significant development of FNNs was made in Ref. 3, where a new dynamic time-delay fuzzy wavelet neural network was proposed. The model consists of a dynamic time-delay neural network, wavelets, fuzzy logic and the reconstructed state space concept [30, 31] from chaos theory, and can be used for many applications such as structural system identification [3] and nonlinear system control [32].

Recently, the idea of the self-generating method has been introduced in FNN systems. The well-known growing and pruning radial basis function network (GAP-RBF) [33] algorithm generates an FNN automatically based on growing and pruning approaches. The generalized GAP-RBF (GGAP-RBF) [34] algorithm can be used for an arbitrary sampling density of the training samples. A fast and accurate online self-organizing scheme for parsimonious fuzzy neural networks (FAOS-PFNN) [35] based on RBF neural networks has been proposed to accelerate the learning speed and increase the approximation accuracy by incorporating a pruning strategy into new growth criteria. Unfortunately, as in most RBF-based online learning algorithms, all the widths of the Gaussian membership functions of the input variables in a rule are the same due to the use of RBF neural networks. This usually does not coincide with reality, especially when the input variables have significantly different operating intervals.

Based on the key idea of the self-generating method, this paper presents an efficient algorithm for constructing a self-generating fuzzy neural network (SGFNN) that identifies a TSK-type fuzzy model. The structure learning algorithm for generating new ellipsoidal basis function (EBF) neurons is based on the system error criterion and the ε-completeness of fuzzy rules [36]. The salient features of the approach can be summarized as follows.

• By using criteria for the generation and pruning of neurons, the SGFNN can recruit or remove EBF neurons automatically so as to achieve optimal system performance.


• All the widths of the Gaussian membership functions of the input variables in a rule are different and can be adjusted due to the use of the EBF neural network.

• Overlapping of membership functions can be reduced significantly since the number of membership functions of every input variable is defined separately; the numbers can be the same or different.

• The KF algorithm is adopted as the learning algorithm for consequent parameter adjustment. The linear least squares (LLS) method is employed to adjust the weights of many other FNNs [10, 12]; although it is computationally simple and fast for determining weights online, it is expensive in computation when dealing with matrix inversion. The KF algorithm shows good performance and robustness in noisy environments. To strike a compromise between learning speed and system robustness, the KF algorithm rather than the LLS method is used to adjust the consequent parameters of the SGFNN.

The effectiveness of the proposed SGFNN algorithm is demonstrated via benchmark problems in the areas of function approximation, nonlinear dynamic system identification, chaotic time-series prediction and a real-world benchmark regression problem. Comprehensive comparisons with other popular learning algorithms have been made. In particular, an adaptive modeling and control scheme based on the SGFNN for a drug delivery system is presented. The proposed SGFNN is a novel intelligent modeling tool which can model the unknown nonlinearities of the complex drug delivery system and adapt online to changes and uncertainties of the system.

This paper is organized as follows. Section 2 introduces the architecture of the proposed SGFNN. The structure learning algorithm, which includes the criteria for generating and pruning neurons, is given in detail in Sec. 3; the KF algorithm for parameter learning is also presented in that section. Section 4 presents simulation results and comparative studies with other popular learning algorithms, as well as the adaptive modeling and control scheme of the drug delivery system using the proposed SGFNN. A detailed discussion of the merits and working principle of the SGFNN algorithm is presented in Sec. 5. Finally, conclusions are drawn in Sec. 6.

[Figure 1 depicts the four-layer SGFNN: Layer 1 (input layer) with inputs x_1, ..., x_r; Layer 2 (membership function layer) with fuzzy sets A_ij; Layer 3 (rule layer) with rules R_1, ..., R_u; and Layer 4 (output layer) combining the rule outputs through weights w_1, ..., w_u to produce y.]

Fig. 1. Structure of the SGFNN.

2. The Proposed Self-Generating Fuzzy Neural Network

The SGFNN is constructed based on EBF neural networks, which are functionally equivalent to the TSK fuzzy model. The SGFNN has a total of four layers, as shown in Fig. 1. Layer one transmits the values of the input linguistic variables xi (i = 1, 2, . . . , r) to the next layer directly, where r is the number of input variables. Each input variable xi has u membership functions Aij (j = 1, 2, . . . , u), as shown in layer two, which are in the form of a Gaussian function given by

A_{ij} = \exp\left(-\frac{(x_i - c_{ij})^2}{\sigma_{ij}^2}\right),  i = 1, 2, \ldots, r;  j = 1, 2, \ldots, u    (1)

where Aij is the jth membership function of the ith input variable xi, and cij and σij are the center and width of the jth membership function of the ith input variable, respectively. Layer three is the rule layer. Each node in this layer represents a possible IF-part of the fuzzy rules. If the T-norm operator used to compute each rule's firing strength is multiplication, the output of the jth rule Rj (j = 1, 2, . . . , u) is given by

\varphi_j(x_1, x_2, \ldots, x_r) = \exp\left(-\sum_{i=1}^{r} \frac{(x_i - c_{ij})^2}{\sigma_{ij}^2}\right),  j = 1, 2, \ldots, u    (2)

Layer four is the output layer and each node represents an output linguistic variable. The weighted summation of the incoming signals is given by

y(x_1, x_2, \ldots, x_r) = \sum_{j=1}^{u} w_j \varphi_j    (3)

where y is the output variable, wj is the THEN-part or connection weight of the jth rule, and φj is obtained from (2).

For the TSK model, the weights are polynomials of the input variables given by

w_j = \mathbf{a}_j \cdot \mathbf{b} = a_{0j} + a_{1j} x_1 + \cdots + a_{rj} x_r,  j = 1, 2, \ldots, u    (4)

where \mathbf{a}_j = [a_{0j} \; a_{1j} \; a_{2j} \; \cdots \; a_{rj}] is the weight vector of the input variables with respect to rule j and \mathbf{b} = [1 \; x_1 \; x_2 \; \cdots \; x_r]^T is a column vector.
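To make the forward computation of the four-layer network concrete, the following is a minimal sketch in Python/NumPy (the paper's own simulations are MATLAB-based); the function name sgfnn_forward and the example parameters are illustrative, not taken from the paper.

import numpy as np

def sgfnn_forward(x, centers, widths, a):
    # x: (r,) input vector; centers, widths: (r, u) EBF parameters c_ij,
    # sigma_ij; a: (u, r+1) TSK consequent coefficients [a_0j, ..., a_rj].
    # Layer 2: one-dimensional Gaussian membership grades, Eq. (1).
    mu = np.exp(-((x[:, None] - centers) ** 2) / widths ** 2)
    # Layer 3: rule firing strengths via the product T-norm, Eq. (2).
    phi = np.prod(mu, axis=0)
    # TSK weights as first-order polynomials of the inputs, Eq. (4).
    b = np.concatenate(([1.0], x))
    w = a @ b
    # Layer 4: weighted summation of the incoming signals, Eq. (3).
    return float(np.dot(w, phi))

# Example with r = 2 inputs and u = 3 rules.
rng = np.random.default_rng(0)
y = sgfnn_forward(rng.standard_normal(2), rng.standard_normal((2, 3)),
                  np.ones((2, 3)), rng.standard_normal((3, 3)))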

3. Learning Algorithm

The learning algorithm of the SGFNN comprises structure learning and parameter learning, which construct the FNN automatically and dynamically. In this section, the learning process of the SGFNN, including structure learning and parameter learning, is presented. In structure learning, an FNN with high accuracy and compact structure is constructed; the EBF neurons are generated and pruned dynamically during the learning process. In parameter learning, the KF algorithm is used to adjust the consequent parameters of the SGFNN.

3.1. Criteria of fuzzy-rule generation

3.1.1. System errors

The output error of the SGFNN system with respect to the reference signal is an important criterion for determining whether a new rule should be recruited. Consider the ith observation (xi, di), where xi is the input vector and di is the desired output. The overall output of the SGFNN with the existing structure is denoted by yi. The system error is defined as follows:

\|e_i\| = \|d_i - y_i\|.    (5)

If

\|e_i\| > k_e    (6)

where ke is a predefined error tolerance, a new fuzzy rule should be considered, provided that the other criteria of generation are satisfied simultaneously. The term ke decays during the learning process as follows:

k_e = \begin{cases} e_{\max}, & 1 < i < n/3 \\ \max[e_{\max} \times \beta^i, e_{\min}], & n/3 \le i \le 2n/3 \\ e_{\min}, & 2n/3 < i \le n \end{cases}    (7)

where emax is the maximum error chosen, emin is the desired accuracy of the SGFNN output, n is the total number of learning iterations and β is a convergence constant given by

\beta = \left(\frac{e_{\min}}{e_{\max}}\right)^{3/n}.    (8)
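As a concrete reading of Eqs. (7)-(8), a small Python sketch of the decaying tolerance; the function name is ours and the default values are borrowed from the Example 1 settings in Sec. 4.1.

def error_tolerance(i, n, e_max=0.5, e_min=0.03):
    # Decaying error tolerance k_e of Eq. (7) at observation i (1-based).
    beta = (e_min / e_max) ** (3.0 / n)   # convergence constant, Eq. (8)
    if i < n / 3:
        return e_max                      # coarse learning phase
    elif i <= 2 * n / 3:
        return max(e_max * beta ** i, e_min)
    else:
        return e_min                      # fine learning phase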

3.1.2. ε-completeness of fuzzy rules

The ε-completeness of fuzzy rules requires that, for any input within the operating range, there exists at least one fuzzy rule such that the match degree (or firing strength) is not less than ε. The minimum value of ε is usually selected as εmin = 0.5 [37]. The firing strength of each rule shown in (2) can be regarded as a function of the regularized Mahalanobis distance (M-distance), i.e.,

\varphi(x_1, x_2, \ldots, x_r) = \exp[-md^2(j)]    (9)

where

md(j) = \sqrt{(X - C_j)^T \Sigma_j^{-1} (X - C_j)}    (10)

is the M-distance, where X = (x_1, x_2, \ldots, x_r)^T \in R^r, C_j = (c_{1j}, c_{2j}, \ldots, c_{rj})^T \in R^r and \Sigma_j^{-1} is the diagonal matrix

\Sigma_j^{-1} = \mathrm{diag}\left(\frac{1}{\sigma_{1j}^2}, \frac{1}{\sigma_{2j}^2}, \ldots, \frac{1}{\sigma_{rj}^2}\right),  j = 1, 2, \ldots, u.    (11)

According to the ε-completeness criterion of fuzzy rules, when a new observation (Xi, di), i = 1, 2, . . . , n, arrives, the M-distance mdi(j) between the observation Xi and the center vector Cj (j = 1, 2, . . . , u) of each existing EBF unit is calculated according to (9) and (10).


Find

J = \arg\min_{1 \le j \le u} (md_i(j)).    (12)

If

md_{i,\min} = md_i(J) > k_d    (13)

then the existing FNN does not satisfy ε-completeness and a new rule should be considered. Here, kd is a predefined threshold that is chosen as follows:

k_d = \begin{cases} d_{\max} = \sqrt{\ln(1/\varepsilon_{\min})}, & 1 < i < n/3 \\ \max[d_{\max} \times \gamma^i, d_{\min}], & n/3 \le i \le 2n/3 \\ d_{\min} = \sqrt{\ln(1/\varepsilon_{\max})}, & 2n/3 < i \le n \end{cases}    (14)

where γ ∈ (0, 1) is a decay constant given by

\gamma = \left(\frac{d_{\min}}{d_{\max}}\right)^{3/n} = \left(\sqrt{\frac{\ln(1/\varepsilon_{\max})}{\ln(1/\varepsilon_{\min})}}\right)^{3/n}.    (15)

The idea behind this choice of ke and kd is called coarse learning. The rationale is to first find and cover the more troublesome regions which have large errors between the desired and actual outputs but are not properly covered by the existing rules [10, 21].
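Combining the two criteria, a hedged sketch of the per-observation generation check; the function name is illustrative, and the reduction of Eq. (10) to a width-weighted Euclidean distance follows from the diagonal Σ_j of Eq. (11).

import numpy as np

def should_add_rule(x, d, y_hat, centers, widths, k_e, k_d):
    # x: (r,) input; centers, widths: (r, u) for the u existing rules.
    # Criterion 1: system error, Eqs. (5)-(6).
    error_fires = abs(d - y_hat) > k_e
    # Criterion 2: epsilon-completeness via the minimum M-distance,
    # Eqs. (10), (12) and (13).
    md = np.sqrt(np.sum(((x[:, None] - centers) / widths) ** 2, axis=0))
    distance_fires = md.min() > k_d
    # A new rule is considered only when both criteria fire together.
    return error_fires and distance_fires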

3.2. Criteria of fuzzy-rule pruning

If inactive hidden neurons can be deleted during the learning process, a more parsimonious network topology can be achieved. In the SGFNN learning algorithm, the pruning strategy is the same as that of the GDFNN [38], which is based on the error reduction ratio (ERR) method of Ref. 39. The ERR method is used to calculate the sensitivity and significance of the fuzzy rules in order to determine which rules should be deleted.

Suppose that for n observations, (3) can be written as a linear regression model in the following compact form:

D = \Phi W + E    (16)

where D ∈ R^n is the desired output vector and E is the error vector.

The regressor matrix Φ can be rewritten as

\Phi = KA    (17)

where K is an n × v (v = u × (r + 1)) matrix with orthogonal columns and A is a v × v upper triangular matrix. Substituting (17) into (16), we obtain

D = KAW + E = KG + E.    (18)

The orthogonal least squares solution G is given by G = (K^T K)^{-1} K^T D, or equivalently

g_i = \frac{k_i^T D}{k_i^T k_i},  1 \le i \le v.    (19)

The ERR due to ki, as defined in Ref. 39, is given by

err_i = \frac{g_i^2 k_i^T k_i}{D^T D},  1 \le i \le v.    (20)

Substituting (19) into (20) yields

err_i = \frac{(k_i^T D)^2}{k_i^T k_i \, D^T D},  1 \le i \le v.    (21)

Define the ERR matrix ERR = (ρ1, ρ2, . . . , ρu) ∈ R^{(r+1)×u}, whose elements are obtained from (21) and whose jth column corresponds to the jth rule. Furthermore, define

\eta_j = \sqrt{\frac{\rho_j^T \rho_j}{r + 1}},  j = 1, 2, \ldots, u.    (22)

Then ηj represents the significance of the jth rule. If

\eta_j < k_{err},  j = 1, 2, \ldots, u    (23)

where kerr is a predefined parameter, then the jth rule is pruned.
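A sketch of the significance computation via QR factorization; it assumes n ≥ v and that the columns of Φ are grouped rule by rule ((r+1) consecutive columns per rule), which matches the reshaping into the (r+1) × u ERR matrix, although the exact column ordering is not stated in the text.

import numpy as np

def rule_significance(Phi, D, r):
    # Phi: (n, v) regressor matrix with v = u*(r+1); D: (n,) desired output.
    # Orthogonalize the regressors: Phi = K A, Eq. (17).
    K, _ = np.linalg.qr(Phi)
    # Error reduction ratio of each orthogonal column, Eq. (21).
    err = (K.T @ D) ** 2 / (np.sum(K * K, axis=0) * (D @ D))
    # Group the v ratios into the (r+1) x u ERR matrix, one column per rule.
    rho = err.reshape(-1, r + 1).T
    # Rule significance eta_j, Eq. (22); rules with eta_j < k_err
    # are pruned according to Eq. (23).
    return np.sqrt(np.sum(rho ** 2, axis=0) / (r + 1))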

3.3. Determination of premise parameters

When a new rule has been generated, the problem is how to allocate the parameters of its Gaussian membership functions, namely the centers and widths.

Firstly, suppose that u neurons have been generated. A new neuron is generated when the ith observation Xi (i = 1, 2, . . . , n) arrives and satisfies the criteria of rule generation. Next, the incoming multidimensional input vector Xi is projected onto the corresponding one-dimensional membership functions for each input variable xk (k = 1, 2, . . . , r), and we define the Euclidean distance (E-distance) between the datum x^i_k and the boundary set Φk as follows:

ed_k(j) = |x_k^i - \Phi_k(j)|,  j = 1, 2, \ldots, u + 2    (24)

where u is the number of generated neurons and \Phi_k = \{x_{k,\min}, c_{k1}, c_{k2}, \ldots, c_{ku}, x_{k,\max}\}.


We define

j_{\min} = \arg\min_{j = 1, 2, \ldots, u+2} (ed_k(j)).    (25)

If

ed_k(j_{\min}) \le k_m    (26)

where km is a predefined constant, the new incoming datum x^i_k can be represented by the existing fuzzy set A_{k j_{\min}}(c_{k j_{\min}}, \sigma_{k j_{\min}}) (k = 1, 2, . . . , r) without generating a new membership function. Otherwise, a new Gaussian membership function is allocated, whose width and center are defined as follows:

\sigma_k = \frac{\max\{|c_k - c_{k-1}|, |c_k - c_{k+1}|\}}{\sqrt{\ln(1/\varepsilon)}}    (27)

c_k(u + 1) = x_k^i    (28)

where c_{k-1} and c_{k+1} are the centers of the two membership functions neighboring the new membership function.
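A sketch of the allocation step for one input dimension; treating the nearest boundary values below and above the datum as the neighboring centers c_{k-1} and c_{k+1} of Eq. (27) is our reading of the text, and the names are illustrative.

import numpy as np

def allocate_membership(x_k, centers_k, x_min, x_max, k_m, eps=0.5):
    # Returns None when the datum is covered by an existing fuzzy set,
    # otherwise a (center, width) pair for a new Gaussian.
    boundary = np.concatenate(([x_min], np.asarray(centers_k, float), [x_max]))
    ed = np.abs(x_k - boundary)           # E-distance, Eq. (24)
    if ed.min() <= k_m:                   # Eqs. (25)-(26): reuse nearest set
        return None
    below = boundary[boundary <= x_k]
    above = boundary[boundary >= x_k]
    c_prev = below.max() if below.size else x_min
    c_next = above.min() if above.size else x_max
    # Width from the farther neighbor, Eq. (27); center on the datum, Eq. (28).
    sigma = max(x_k - c_prev, c_next - x_k) / np.sqrt(np.log(1.0 / eps))
    return x_k, sigma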

3.4. Determination of consequent parameters

After the premise parameters and structure of the SGFNN are determined, it is important to determine the consequent parameters. In this paper, the KF algorithm [11] is used to adjust the consequent parameters.

Firstly, we suppose that u neurons are generated for n observations with r input variables. Rewriting (3) in the compact form

Y = W\Psi,    (29)

the KF algorithm consists of the recursive formulas

S_i = S_{i-1} - \frac{S_{i-1} \Psi_i \Psi_i^T S_{i-1}}{1 + \Psi_i^T S_{i-1} \Psi_i},  i = 1, 2, \ldots, n    (30)

W_i = W_{i-1} + S_i \Psi_i (d_i - \Psi_i^T W_{i-1}),  i = 1, 2, \ldots, n    (31)

with the initial conditions W_0 = 0 and S_0 = \delta I, where S_i is the error covariance matrix for the ith observation, \Psi_i is the ith column of \Psi, W_i is the weight matrix after the ith iteration, \delta is a large positive number and I is the identity matrix.
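A single-output sketch of one recursive step of Eqs. (30)-(31); reading the target term as the scalar desired output d_i is an assumption where the extracted text is ambiguous.

import numpy as np

def kf_update(W, S, psi, d):
    # W: (v,) consequent parameters; S: (v, v) error covariance;
    # psi: (v,) regressor column Psi_i; d: desired output d_i.
    S = S - (S @ np.outer(psi, psi) @ S) / (1.0 + psi @ S @ psi)  # Eq. (30)
    W = W + S @ psi * (d - psi @ W)                               # Eq. (31)
    return W, S

# Initialization per the text: W0 = 0 and S0 = delta * I, delta large.
v, delta = 12, 300.0
W, S = np.zeros(v), delta * np.eye(v)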

4. Illustrative Examples

In this section, the effectiveness of the proposed algorithm is demonstrated by MATLAB-based simulation studies on five examples: the two-input nonlinear sinc function approximation [24], nonlinear dynamic system identification [21], the Mackey-Glass time-series prediction problem [21], a real-world benchmark regression problem [40] and a real-world drug delivery system [41]. Simulation results are compared with those of other learning algorithms, such as the RBF-AFS [21], the OLS [39], the MRAN [23], the ANFIS [24], the DFNN [10], the GDFNN [38], the SOFNN [25], the SOFNNGA [26], the RAN [27], the RANEKF [28], the GAP-RBF [33], the OS-ELM (RBF) [42] and the FAOS-PFNN [35].

4.1. Example 1: Two-input nonlinear sinc function

This function was used to demonstrate the efficiency of the ANFIS [24]. The sinc function is defined as follows:

z = \operatorname{sinc}(x, y),  x \in [-10, 10],  y \in [-10, 10].    (32)

A total of 121 two-input data samples and the corresponding target data are used as the training data. The parameters of the SGFNN are chosen as follows:

β = 0.9, emax = 0.5, emin = 0.03, kerr = 0.00015, km = 0.5, εmax = 0.8, εmin = 0.5 and δ = 300.

In order to determine the effect of noise, the training data are mixed with Gaussian white noise sequences which have zero mean and different variances, as shown in Table 1. The results are illustrated in Table 1 and Fig. 2.
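For concreteness, a sketch that builds the 121-point training grid; the two-input sinc is assumed to be sin(x) sin(y)/(xy), as in the ANFIS study [24], with the removable singularity at zero handled explicitly.

import numpy as np

def sinc1(t):
    # sin(t)/t with the removable singularity sinc1(0) = 1.
    safe = np.where(t == 0, 1.0, t)
    return np.where(t == 0, 1.0, np.sin(safe) / safe)

grid = np.linspace(-10, 10, 11)          # 11 x 11 = 121 samples
X, Y = np.meshgrid(grid, grid)
Z = sinc1(X) * sinc1(Y)

# Optionally corrupt the targets with zero-mean Gaussian white noise.
Z_noisy = Z + 0.1 * np.random.default_rng(0).standard_normal(Z.shape)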

Table 1. Results of two-input function with noise.

Variance (σ)    Number of fuzzy rules    Number of parameters    RMSE
σ = 0           9                        59                      0.0139
σ = 0.01        8                        56                      0.0175
σ = 0.05        9                        59                      0.0217
σ = 0.1         10                       70                      0.0229

Fig. 2. Root Mean Squared Error (RMSE) versus sample patterns during training.

For the same variance (e.g. σ = 0.1), the SGFNN generates ten fuzzy rules with ten membership functions for the input variables x and y, respectively. The total number of parameters is 70 and the root mean squared error (RMSE) is 0.0229. The number of parameters is less than those of the ANFIS [24], i.e. 72, and the SOFNNGA [26], i.e. 76, but more than that of the SOFNN [25], i.e. 68. The RMSE of the SGFNN is less than that of the SOFNN [25] (0.0767) but more than that of the SOFNNGA [26]. The SGFNN thus has better performance than the ANFIS [24] and the SOFNN [25] in terms of network structure and RMSE, and its training RMSE is close to that of the SOFNNGA [26], i.e. 0.0173, while using fewer parameters.

4.2. Example 2: Nonlinear dynamic system identification

The nonlinear dynamic system to be identified is described as follows:

y(t+1) = \frac{y(t)\, y(t-1)\, [y(t) + 2.5]}{1 + y^2(t) + y^2(t-1)} + u(t),  t \in [1, 200],
y(0) = 0,  y(1) = 0,  u(t) = \sin(2\pi t / 25).    (33)

To identify the plant, a series-parallel identification model governed by the following equation is used:

y(t+1) = f(y(t), y(t-1), u(t))    (34)

where f is the function implemented by the SGFNN with a three-input, one-output model. There are 200 input-target data pairs chosen as training data. The parameters of the SGFNN are set as follows:

β = 0.9, emax = 0.5, emin = 0.03, kerr = 0.0015, km = 0.5, εmax = 0.8, εmin = 0.5 and δ = 320.
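A sketch generating the 200 training pairs from the plant of Eq. (33); the variable names are illustrative.

import numpy as np

T = 200
y = np.zeros(T + 2)                       # y(0) = y(1) = 0
u = np.sin(2 * np.pi * np.arange(T + 1) / 25)
for t in range(1, T + 1):                 # simulate Eq. (33)
    y[t + 1] = (y[t] * y[t - 1] * (y[t] + 2.5)
                / (1.0 + y[t] ** 2 + y[t - 1] ** 2) + u[t])

# Inputs [y(t), y(t-1), u(t)] and target y(t+1) for the model of Eq. (34).
X = np.column_stack([y[1:T + 1], y[0:T], u[1:T + 1]])
d = y[2:T + 2]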

Simulation results are shown in Figs. 3 and 4. The membership functions of the input variables y(t), y(t − 1) and u(t) are shown in Figs. 5 to 7.

Fig. 3. Fuzzy rule generation (number of fuzzy rules versus sample patterns).

Fig. 4. Root Mean Squared Error (RMSE) versus sample patterns.


Fig. 5. Membership functions of input y(t).


Fig. 6. Membership functions of input y(t − 1).


Fig. 7. Membership functions of input u(t).

A set of six fuzzy rules is generated, with five, four and four membership functions for the inputs y(t), y(t − 1) and u(t), respectively. It can be seen that the number of membership functions is not the same for every input variable. Table 2 shows a comparison of structure and performance for different algorithms. The RMSE of the SGFNN is 0.0228, which is less than that of the other algorithms, and the total number of parameters is 44, which is also less than that of the other algorithms. As seen in Table 2, for the nonlinear dynamic system identification problem, the proposed SGFNN algorithm outperforms the other learning algorithms. The SGFNN provides satisfactory RMSE performance in spite of a simpler network structure.

Table 2. Results of nonlinear dynamic system identification.

Algorithms      Number of fuzzy rules    Number of parameters    RMSE
OLS [39]        65                       326                     0.0288
RBF-AFS [21]    35                       280                     0.1384
DFNN [10]       6                        48                      0.0283
GDFNN [38]      6                        48                      0.0241
SGFNN           6                        44                      0.0228

4.3. Example 3: Mackey-Glass time-series prediction

The Mackey-Glass time-series prediction problem [21] is a benchmark which has been considered by many researchers. The time series is generated by

x(t+1) = (1 - a)\, x(t) + \frac{b\, x(t - \tau)}{1 + x^{10}(t - \tau)}.    (35)

The same parameters as in Refs. 10 and 21, i.e. a = 0.1, b = 0.2, τ = 17, and the initial condition x(0) = 1.2 are chosen. The prediction model is also the same as in Refs. 10 and 21, i.e.

x(t+6) = f[x(t), x(t-6), x(t-12), x(t-18)].    (36)

For the purpose of training and testing, 4000 samples are generated between t = 0 and t = 4000 from (35) with the initial conditions x(t) = 0 for t < 0 and x(0) = 1.2. We choose 1000 data points between t = 124 and t = 1123 for preparing the input and output sample data of (36). In order to demonstrate the prediction ability of the SGFNN approach, another 1000 data points between t = 1124 and t = 2123 are used for testing. Simulation results and comparisons with the OLS [39], the RBF-AFS [21] and the DFNN [10] are presented in Table 3.
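A sketch generating the series of Eq. (35) and the data pairs of Eq. (36); the unit-step discretization with x(t) = 0 for t < 0 follows the description above.

import numpy as np

a, b, tau, T = 0.1, 0.2, 17, 4000
x = np.zeros(T + 7)
x[0] = 1.2
for t in range(T + 6):                     # iterate Eq. (35)
    xd = x[t - tau] if t >= tau else 0.0   # delayed term, zero for t < tau
    x[t + 1] = (1 - a) * x[t] + b * xd / (1.0 + xd ** 10)

# Six-step-ahead training pairs of Eq. (36) for t = 124, ..., 1123;
# the testing pairs for t = 1124, ..., 2123 are built the same way.
ts = np.arange(124, 1124)
X = np.column_stack([x[ts], x[ts - 6], x[ts - 12], x[ts - 18]])
d = x[ts + 6]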

Table 3. Comparisons of structure and performance with different algorithms.

Algorithms      Number of fuzzy rules    Training RMSE    Testing RMSE
OLS [39]        13                       0.0158           0.0163
RBF-AFS [21]    21                       0.0107           0.0128
DFNN [10]       5                        0.0132           0.0131
SGFNN           7                        0.0112           0.0113

From the simulation results, it is clear that the SGFNN obtains lower RMSE for training and testing even though it generates more rules than the DFNN [10]. Moreover, the SGFNN shows superiority over the OLS [39] and the RBF-AFS [21] in terms of testing RMSE and network structure, respectively.

4.4. Example 4: Fuel consumption prediction of automobiles

In order to further validate the performance of the proposed SGFNN algorithm, comparisons of the SGFNN with other popular learning algorithms [22, 23, 27, 28, 33, 35, 42] are presented for the benchmark prediction problem named auto-mpg prediction [40]. All the simulation results are averaged over 50 trials. The average RMSE values for training and testing are calculated and compared in this section.

The auto-mpg problem is to predict the fuel consumption (miles per gallon) of different models of cars based on the displacement, horsepower, weight and acceleration of the cars. A total of 392 observations are collected for the prediction problem. Each observation consists of seven inputs (four continuous: displacement, horsepower, weight and acceleration; and three discrete: cylinders, model year and origin) and one continuous output (the fuel consumption). For simplicity, the seven input attributes and the output have been normalized to the range [0, 1]. For the sake of comparison with other learning algorithms, 320 training data and 72 testing data are randomly chosen from the auto-mpg database in each trial of the simulation studies. Table 4 summarizes the results for the auto-mpg regression problem in terms of training RMSE, testing RMSE and the number of generated fuzzy rules. The number of generated fuzzy rules for the OS-ELM (RBF) [42] was determined by a model selection process, while for the other algorithms it is generated automatically. As observed from Table 4, the average number of fuzzy rules of the SGFNN is 5.15, which is slightly more than the other algorithms except the OS-ELM (RBF) [42]. The average training RMSE of the SGFNN is 0.0613, which is less than that of the other algorithms except the FAOS-PFNN [35]; that is, the approximation performance of the SGFNN is better than that of the other algorithms except the FAOS-PFNN [35]. It should be highlighted that the SGFNN has the lowest testing RMSE and the best generalization performance among all the learning algorithms.

Table 4. Comparisons of the SGFNN with different algorithms on the auto-mpg problem.

Algorithms           Number of fuzzy rules    Training RMSE    Testing RMSE
RAN [27]             4.44                     0.2923           0.3080
RANEKF [28]          5.14                     0.1088           0.1387
MRAN [22, 23]        4.46                     0.1086           0.1376
GAP-RBF [33]         3.12                     0.1144           0.1028
OS-ELM (RBF) [42]    25                       0.0696           0.0759
FAOS-PFNN [35]       2.9                      0.0321           0.0775
SGFNN                5.15                     0.0613           0.0658
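A sketch of the preprocessing described above; the array layout (seven input columns followed by the output column) and the function name are assumptions for illustration.

import numpy as np

def prepare_auto_mpg(data, n_train=320, seed=0):
    # data: (392, 8) array, columns 0-6 the input attributes, column 7 mpg.
    lo, hi = data.min(axis=0), data.max(axis=0)
    scaled = (data - lo) / (hi - lo)       # normalize everything to [0, 1]
    # Randomly draw 320 training and 72 testing observations per trial.
    idx = np.random.default_rng(seed).permutation(len(scaled))
    train, test = scaled[idx[:n_train]], scaled[idx[n_train:]]
    return (train[:, :7], train[:, 7]), (test[:, :7], test[:, 7])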

4.5. Example 5: Real-world drug delivery system

For the real-world application, we employ the SGFNN to model the unknown nonlinearities of a complex blood pressure system. We investigate the use of the fuzzy neural network technique for the modeling and automatic control of mean arterial pressure (MAP) through the intravenous infusion of sodium nitroprusside (SNP).

Control of the MAP in many clinical situations, such as certain operating procedures for hypertensive patients, is one attractive application of postsurgical drug delivery systems. A powerful medication for control of the MAP is SNP, which has emerged as an effective vasodilator drug [43]. A model of the MAP [41] of a patient under the influence of SNP is given as follows:

\mathrm{MAP}(t) = p_0 + \Delta p(t) + p_d(t) + n(t)    (37)

where MAP(t) is the mean arterial pressure, p0 is the initial blood pressure, Δp(t) is the change in pressure due to the infusion rate of SNP, pd(t) is the change in pressure due to the renin reflex action, which is the body's reaction to the use of a vasodilator drug, and n(t) is a stochastic background noise.

A nominal discrete-time model of the MAP of a patient under the influence of SNP is given as follows:

y(k) = f[y(k-1), u(k-d), u(k-m)] = a_0 y(k-1) + b_0 u(k-d) + b_1 u(k-m) + n(k)    (38)

where y(k) is the output of the system, which represents the change in MAP from the initial blood pressure at discrete time k; u(k) is the input of the system, which represents the infusion rate of SNP at discrete time k; d and m are integer delays which represent the initial transport delay and the recirculation time delay, respectively; a0, b0 and b1 are parameters which may vary considerably from patient to patient or within the same patient under different conditions; and n(k) is an unknown disturbance term which may contain unmodeled dynamics, disturbance, measurement noise, effects due to sampling of continuous-time signals, etc. The model is also known as an autoregressive with exogenous inputs (ARX) model.

Using linear modeling techniques, the parameters a0, b0 and b1 are assumed to be constant, thus resulting in a linear system, and the time delays d and m are constant integers in (38). This is a restrictive assumption, since in practical systems these values may vary from patient to patient or within the same patient under different conditions. It is suggested that d and m have a general range between 30 s and 80 s [44].

In the context of using the SGFNN for blood pressure control, the FNN is viewed as a modeling method; the knowledge about the system dynamics and mapping characteristics is stored in the network. Here, the direct inverse control method is used to control the blood pressure system. The direct inverse control method is based on the reference model of the system: the FNN is used to learn and approximate the inverse dynamics of the drug delivery system, and the resulting FNN is then used to estimate the drug infusion rate given the desired blood pressure level r(t). When the FNN is used as a controller in a drug delivery system, the control objective is to obtain the appropriate control input u(t) to make the output of the system y(t) approximate the desired blood pressure level r(t). The control procedure consists of two stages: (1) a learning stage and (2) an application stage. In the learning stage, the FNN is used to identify the inverse dynamics of the drug delivery system, while in the application stage the FNN is used as a controller to generate the appropriate control input. The direct inverse control method is shown in Fig. 8. It can easily be derived from (38) that the inverse model of the dynamic system is given by

u(k) = f^{-1}[y(k+d), y(k-1+d), \ldots, u(k-m+d)].    (39)

Fig. 8. Control structure of the drug delivery system.

The generation of u(k) requires knowledge of the future values y(k + d) and y(k − 1 + d). To overcome this problem, they are usually replaced by their reference values r(k + d) and r(k − 1 + d). Another problem is that the inverse function f^{-1} may not always exist. Instead of considering the existence of the function f^{-1}, the inverse model of the dynamic system can always be configured as a nonlinear regression model as follows:

u(k) = g[y(k), y(k-1), \ldots, y(k - m + d)] = G(z, k)    (40)

where z = [y(k), y(k-1), \ldots, y(k-m+d)]^T. The SGFNN is trained to obtain an estimate of the inverse dynamics, as illustrated in Fig. 8. The output of the SGFNN is calculated as

u_{SGFNN}(z, k) = D(z).    (41)

The inverse dynamics of the drug delivery system is identified by the SGFNN, which is then used as a controller to generate the control output. The objective of the simulation studies is to demonstrate the capability of the SGFNN to approximate a dynamic system and to control a drug delivery system based on a sensitive patient model.

Without any loss of generality, we assume the integer delays d = 3, m = 6 and a sampling time of 15 s. We use the sensitive model of the drug delivery system, which is described as

y(k) = 0.606\, y(k-1) + 3.5\, u(k-3) + 1.418\, u(k-6).    (42)

In the simulation study, the FNN is trained to model the inverse dynamics of the drug delivery system. The input signal to the system (the SNP infusion rate) for SGFNN training is set as u(k) = |A sin(2πk/250)|, where A is set to 10. For the purpose of training, 200 training samples are generated from (42) with the initial conditions y(k) = 0 and u(k) = 0 for k ≤ 0.
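A sketch generating the training data from Eq. (42) and the inverse-model pairs, taking Eq. (43) below literally as the map [y(k), y(k-3)] -> u(k); the variable names are illustrative.

import numpy as np

N, A = 200, 10.0
u = np.abs(A * np.sin(2 * np.pi * np.arange(N + 1) / 250))
y = np.zeros(N + 1)
for k in range(1, N + 1):                  # simulate Eq. (42)
    y[k] = (0.606 * y[k - 1]
            + 3.5 * (u[k - 3] if k >= 3 else 0.0)
            + 1.418 * (u[k - 6] if k >= 6 else 0.0))

ks = np.arange(3, N + 1)
X = np.column_stack([y[ks], y[ks - 3]])    # inverse-model inputs, Eq. (43)
d = u[ks]                                  # inverse-model target u(k)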


The inverse model of (42) is given by

u(k) = f(y(k), y(k-3)).    (43)

The parameters of the SGFNN are set as follows: β = 0.9, γ = 0.95, emax = 0.2, emin = 0.02, kerr = 0.0001, km = 0.95, εmax = 0.8, εmin = 0.5 and δ = 400.

The results are illustrated in Figs. 9 and 10. A total of seven fuzzy rules are generated during the training process.

Fig. 9. Fuzzy rule generation versus sample patterns.

Fig. 10. Root Mean Squared Error (RMSE) during the training process.

The RMSE at the end of the training process is shown in Fig. 10. It can be seen from these figures that the proposed FNN can model the drug delivery system very well, as the RMSE is 0.0134 at the end of the training process. After training, the SGFNN is tested for online adaptive control of the system. The reference trajectory represents a reduction of the MAP from 140 mmHg to 100 mmHg, followed by maintenance of the level at 100 mmHg. As a further test condition in the simulation study, the output measurement y(k) is corrupted by white noise n(k) with a variance level of 1 mmHg (resulting in a peak-to-peak noise amplitude of approximately ±4 mmHg), which is considered a moderate noise level in the physiological system. Simulation results are shown in Figs. 11 to 13. Figure 11 depicts a comparison between the actual change and the desired change of blood pressure. The long-dash curve denotes the desired change of blood pressure, which is maintained at 0 mmHg initially and then increases to 40 mmHg at ts = 500; the solid curve denotes the actual change of blood pressure in the drug delivery system. Figure 12 demonstrates that the SGFNN controller is able to regulate the MAP to the desired set-point even in the presence of the noise. It can be seen from Fig. 12 that, for the sensitive condition (k = −2.88 mmHg/ml/h), the overshoot of the MAP is 3.76%, which is less than that of the fuzzy controller of Ref. 45, i.e. 13.2%. The actual infusion rate of the SNP is shown in Fig. 13.

Fig. 11. (−) Actual change and (−−) desired change of blood pressure versus sample number.



Fig. 12. Real MAP using the SGFNN controller.


Fig. 13. Actual infusion rate of the SNP.

5. Discussions

The basic idea of the proposed SGFNN algorithm is to construct a TSK fuzzy system based on EBF neural networks. The motivation of this paper is to provide a simple and efficient algorithm for configuring a fuzzy neural network so that (1) the system can be used as a modeling tool to model and control a nonlinear dynamic system, and (2) the system can be used for real-world benchmark prediction problems. Many related learning algorithms have been developed by other researchers, as shown in Sec. 1. Here, we give a comparative study between other state-of-the-art algorithms and the proposed SGFNN.

5.1. Structure identification

The structure identification of the proposed algorithm is self-adaptive. The resulting structure depends critically on the generation and pruning criteria of the learning algorithm.

5.2. Parameter adjustment

The method of parameter adjustment has a great impact on the learning speed of the proposed learning algorithm. In the proposed algorithm, the nonlinear parameters (premise parameters) are directly determined during the learning process. The linear parameters (consequent parameters), on the other hand, are modified in each step by the KF method, for which the solution is globally optimal. The learning speed is therefore much faster than that of algorithms [9, 15, 21] in which the back-propagation (BP) algorithm is employed; the BP method is well known to be slow and easily trapped in local minima.

If the SGFNN is employed for an online identification or control process, the adaptive capability of the KF algorithm decreases as more sample data are collected, especially if the identified system is to account for time-varying characteristics of the incoming data. Therefore, the effect of old training data should decay when new data arrive. One commonly used approach is to add a forgetting factor λ to (30):

S_i = \frac{1}{\lambda}\left(S_{i-1} - \frac{S_{i-1} \Psi_i \Psi_i^T S_{i-1}}{1 + \Psi_i^T S_{i-1} \Psi_i}\right),  i = 1, 2, \ldots, n    (44)

where 0 < λ < 1.
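A sketch of the modified step; it follows Eq. (44) literally (the whole Eq. (30) bracket divided by λ), which differs slightly from the textbook RLS-with-forgetting recursion, and λ = 0.98 is only an illustrative value.

import numpy as np

def kf_update_forgetting(W, S, psi, d, lam=0.98):
    # Old data are down-weighted geometrically so the filter can track
    # time-varying consequent parameters.
    S = (S - (S @ np.outer(psi, psi) @ S) / (1.0 + psi @ S @ psi)) / lam
    W = W + S @ psi * (d - psi @ W)
    return W, S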

5.3. Generalization

Another important issue for FNNs is generalization capability. Note that the approximation and generalization capability of the resulting FNN depends on the structure and parameters of the system. In this paper, two criteria are used to create the fuzzy rules and the KF method is adopted to update the consequent parameters of the SGFNN. It follows that a parsimonious network structure and suitable consequent weights can be obtained simultaneously. Consequently, the resulting fuzzy neural network is able to achieve good generalization accuracy. As seen from the auto-mpg prediction, the SGFNN obtains the best generalization performance of all the learning algorithms.

6. Conclusions

In this paper, a novel efficient algorithm for constructing a self-generating fuzzy neural network (SGFNN) that implements TSK fuzzy systems based on EBF neural networks has been proposed. Structure and parameter identification of the SGFNN are carried out automatically and simultaneously. Structure learning is based on criteria for the generation and pruning of neurons, and the KF algorithm is used to adjust the consequent parameters of the SGFNN. The effectiveness of the proposed algorithm has been demonstrated on nonlinear function approximation, nonlinear dynamic system identification, time-series prediction and a real-world benchmark prediction problem, and comprehensive comparisons with other well-known learning algorithms have been presented. Simulation results show that an efficient fuzzy neural network with high accuracy and compact structure can be self-generated by the proposed SGFNN; in summary, the SGFNN is a very efficient algorithm for these tasks. In particular, an adaptive modeling and control scheme based on the SGFNN for a drug delivery system has been presented. The proposed SGFNN is a novel intelligent modeling tool which can model the unknown nonlinearities of the complex drug delivery system and adapt online to changes and uncertainties in the system.

References

1. A. Panakkat and H. Adeli, Recurrent neural network for approximate earthquake time and location prediction using multiple seismicity indicators, Computer-Aided Civil and Infrastructure Engineering 24(4) (2009) 280–292.

2. H. Adeli and X. Jiang, Neuro-fuzzy logic model for freeway work zone capacity estimation, Journal of Transportation Engineering 129(5) (2003) 484–493.

3. H. Adeli and X. Jiang, Dynamic fuzzy wavelet neural network model for structural system identification, Journal of Structural Engineering 132(1) (2006) 102–111.

4. G. Puscasu and B. Codres, Nonlinear system identification based on internal recurrent neural networks, International Journal of Neural Systems 19(2) (2009) 115–125.

5. S. L. Hung and H. Adeli, A parallel genetic/neural network learning algorithm for MIMD shared memory machines, IEEE Transactions on Neural Networks 5(6) (1994) 900–909.

6. H. M. Elragal, Improving neural networks prediction accuracy using particle swarm optimization combiner, International Journal of Neural Systems 19(5) (2009) 387–393.

7. D. Wu, K. Warwick, Z. Ma, M. N. Gasson, J. G. Burgess, S. Pan and T. Z. Aziz, Prediction of Parkinson's disease tremor onset using a radial basis function neural network based on particle swarm optimization, International Journal of Neural Systems 20(2) (2010) 109–116.

8. J. S. Wang and C. S. G. Lee, Efficient neuro-fuzzy control systems for autonomous underwater vehicle control, in Proceedings of the IEEE International Conference on Robotics and Automation 3 (2001) 2986–2991.

9. C. F. Juang and C. T. Lin, An on-line self-constructing neural fuzzy inference network and its applications, IEEE Transactions on Fuzzy Systems 6(1) (1998) 12–32.

10. S. Wu and M. J. Er, Dynamic fuzzy neural networks: A novel approach to function approximation, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 30(2) (2000) 358–364.

11. C. T. Lin and C. S. G. Lee, Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems (Prentice Hall, Upper Saddle River, 1996).

12. D. Nauck, Neuro-fuzzy systems: Review and prospects, in Proceedings of the 5th European Congress on Intelligent Techniques and Soft Computing (EUFIT'97) (1997) 1044–1053.

13. J. S. R. Jang and C. T. Sun, Functional equivalence between radial basis function networks and fuzzy inference systems, IEEE Transactions on Neural Networks 4(1) (1993) 156–159.

14. A. M. Schaefer and H. G. Zimmermann, Recurrent neural networks are universal approximators, International Journal of Neural Systems 17(4) (2007) 253–263.

15. D. A. Linkens and H. O. Nyongesa, Learning systems in intelligent control: An appraisal of fuzzy, neural and genetic algorithm control applications, IEE Proceedings on Control Theory and Applications 143 (1996) 367–386.

16. M. Sugeno and G. T. Kang, Structure identification of fuzzy model, Fuzzy Sets and Systems 28(1) (1988) 15–33.


17. K. Tanaka, M. Sano and H. Watanabe, Modeling and control of carbon monoxide concentration using a neuro-fuzzy technique, IEEE Transactions on Fuzzy Systems 3(3) (1995) 271–279.

18. Y. Gao and M. J. Er, An intelligent adaptive control scheme for postsurgical blood pressure regulation, IEEE Transactions on Neural Networks 16(2) (2005) 475–483.

19. D. C. Theodoridis, Y. S. Boutalis and M. A. Christodoulou, Indirect adaptive control of unknown multivariable nonlinear systems with parametric and dynamic uncertainties using a new neuro-fuzzy system description, International Journal of Neural Systems 20(2) (2010) 129–148.

20. J. P. Deng, N. Sundararajan and P. Saratchandran, Communication channel equalization using complex-valued minimal radial basis function neural networks, IEEE Transactions on Neural Networks 13(6) (2002) 687–696.

21. K. B. Cho and B. H. Wang, Radial basis function based adaptive fuzzy systems and their applications to system identification and prediction, Fuzzy Sets and Systems 83(3) (1996) 325–339.

22. Y. Lu, N. Sundararajan and P. Saratchandran, A sequential learning scheme for function approximation using minimal radial basis function neural networks, Neural Computation 9(2) (1997) 461–478.

23. Y. Lu, N. Sundararajan and P. Saratchandran, Performance evaluation of a sequential minimal radial basis function (RBF) neural network learning algorithm, IEEE Transactions on Neural Networks 9(2) (1998) 308–318.

24. J. S. R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man, and Cybernetics 23(3) (1993) 665–684.

25. G. Leng, G. Prasad and T. M. McGinnity, An on-line algorithm for creating self-organizing fuzzy neural networks, Neural Networks 17(10) (2004) 1477–1493.

26. G. Leng and T. M. McGinnity, Design for self-organizing fuzzy neural networks based on genetic algorithms, IEEE Transactions on Fuzzy Systems 14(6) (2006) 755–765.

27. J. Platt, A resource-allocating network for function interpolation, Neural Computation 3(2) (1991) 213–225.

28. V. Kadirkamanathan and M. Niranjan, A function estimation approach to sequential learning with neural networks, Neural Computation 5(6) (1993) 954–975.

29. M. Salmeron, J. Ortega, C. G. Puntonet and A. Prieto, Improved RAN sequential prediction using orthogonal techniques, Neurocomputing 41(1) (2001) 153–172.

30. A. Samant and H. Adeli, Enhancing neural network incident detection algorithms using wavelets, Computer-Aided Civil and Infrastructure Engineering 16(4) (2001) 239–245.

31. A. Karim and H. Adeli, Comparison of the fuzzy-wavelet RBFNN freeway incident detection model with the California algorithm, Journal of Transportation Engineering 128(1) (2002) 21–30.

32. X. Jiang and H. Adeli, Dynamic fuzzy wavelet neuroemulator for nonlinear control of irregular highrise building structures, International Journal for Numerical Methods in Engineering 74(7) (2008) 1045–1066.

33. G. B. Huang, P. Saratchandran and N. Sundararajan, An efficient sequential learning algorithm for growing and pruning RBF (GAP-RBF) networks, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 34(6) (2004) 2284–2292.

34. G. B. Huang, P. Saratchandran and N. Sundararajan, A generalized growing and pruning RBF (GGAP-RBF) neural network for function approximation, IEEE Transactions on Neural Networks 16(1) (2005) 57–67.

35. N. Wang, M. J. Er and X. Y. Meng, A fast and accurate online self-organizing scheme for parsimonious fuzzy neural networks, Neurocomputing 72(16-18) (2009) 3818–3829.

36. C. C. Lee, Fuzzy logic in control systems: Fuzzy logic controller, Parts I and II, IEEE Transactions on Systems, Man, and Cybernetics 20(2) (1990) 404–436.

37. L. X. Wang, A Course in Fuzzy Systems and Control (Prentice Hall, Englewood Cliffs, 1997).

38. S. Wu, M. J. Er and Y. Gao, A fast approach for automatic generation of fuzzy rules by generalized dynamic fuzzy neural networks, IEEE Transactions on Fuzzy Systems 9(4) (2001) 578–594.

39. S. Chen, C. F. N. Cowan and P. M. Grant, Orthogonal least squares learning algorithm for radial basis function networks, IEEE Transactions on Neural Networks 2(2) (1991) 302–309.

40. A. Frank and A. Asuncion, UCI Machine Learning Repository [http://archive.ics.uci.edu/ml], University of California, School of Information and Computer Science, Irvine, CA (2010).

41. J. B. Slate, Model-Based Design of a Controller for Infusing Sodium Nitroprusside During Postsurgical Hypertension, Ph.D. dissertation (University of Wisconsin, Madison, WI, 1980).

42. N. Y. Liang, G. B. Huang, P. Saratchandran and N. Sundararajan, A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17(6) (2006) 1411–1423.

43. J. G. Reves, L. G. Sheppard, R. Wallach and W. A. Lell, Therapeutic uses of sodium nitroprusside and an automated method of administration, International Anesthesiology Clinics 16(2) (1978) 51–88.

44. S. Isaka and A. V. Sebald, Control strategies for arterial blood pressure regulation, IEEE Transactions on Biomedical Engineering 40(4) (1993) 353–363.

45. H. Ying, M. McEachern, D. W. Eddleman and L. C. Sheppard, Fuzzy control of mean arterial pressure in postsurgical patients with sodium nitroprusside infusion, IEEE Transactions on Biomedical Engineering 39(10) (1992) 1060–1070.
