Research Article
Online Regularized and Kernelized Extreme Learning Machines with Forgetting Mechanism
Xinran Zhou, Zijian Liu, and Congxu Zhu
School of Information Science and Engineering, Central South University, Changsha 410083, China
Correspondence should be addressed to Zijian Liu; liuzijian2015@126.com
Received 8 April 2014; Revised 25 May 2014; Accepted 17 June 2014; Published 10 July 2014
Academic Editor: Ching-Ter Chang
Copyright © 2014 Xinran Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
To apply single hidden-layer feedforward neural networks (SLFN) to identify time-varying systems, the online regularized extreme learning machine (ELM) with forgetting mechanism (FORELM) and the online kernelized ELM with forgetting mechanism (FOKELM) are presented in this paper. The FORELM updates the output weights of SLFN recursively by using the Sherman-Morrison formula, and it combines the advantages of the online sequential ELM with forgetting mechanism (FOS-ELM) and the regularized online sequential ELM (ReOS-ELM); that is, it can capture the latest properties of the identified system by studying a certain number of the newest samples and can also avoid the issue of ill-conditioned matrix inversion by regularization. The FOKELM tackles the problem of matrix expansion of the kernel based incremental ELM (KB-IELM) by deleting the oldest sample according to the block matrix inverse formula when samples occur continually. The experimental results show that the proposed FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments; moreover, FORELM and FOKELM have a time-efficiency superiority over the dynamic regression extreme learning machine (DR-ELM) under certain conditions.
1. Introduction
Plenty of research work has shown that single hidden-layer feedforward neural networks (SLFN) can approximate any function and form decision boundaries with arbitrary shapes if the activation function is chosen properly [1-3]. However, most traditional approaches (such as the BP algorithm) for training SLFN are slow due to their iterative steps. To train SLFN fast, Huang et al. proposed a learning algorithm called extreme learning machine (ELM), which randomly assigns the hidden node parameters (the input weights and hidden layer biases of additive networks, or the centers and impact factors of RBF networks) and then determines the output weights by the Moore-Penrose generalized inverse [4, 5]. The original ELM is a batch learning algorithm.
For some practical fields where the training data are generated gradually, online sequential learning algorithms are preferred over batch learning algorithms, as sequential learning algorithms do not require retraining whenever a new sample is received. Hence, Liang et al. developed a kind of online sequential ELM (OS-ELM) using recursive least squares [6]. OS-ELM for SLFN produced better generalization performance at faster learning speed compared with the previous sequential learning algorithms. Moreover, for time-varying environments, several incremental sequential ELMs have recently been presented; they apply a constant or adaptive forgetting factor [7, 8] or an iteration approach [9] to strengthen a new sample's contribution to the model. Theoretically speaking, they cannot eliminate old samples' effect on the model thoroughly. To let ELM study the latest properties of the identified object, Zhao et al. developed the online sequential ELM with forgetting mechanism (FOS-ELM) [10]. The fixed-memory extreme learning machine (FM-ELM) of Zhang and Wang [11] can be thought of as a special case of FOS-ELM with the parameter p in [10] being 1. Although experimental results show FOS-ELM has higher accuracy [10], it may encounter the matrix singularity problem and run unstably.
As a variant of ELM, the regularized ELM (RELM) [12-14], which is mathematically equivalent to the constrained optimization based ELM [15, 16] and absorbs the thought of structural risk minimization from statistical learning theory [17], can overcome the overfitting problem of ELM and provides better generalization ability than the original ELM when noises or outliers exist in the dataset [12]. Furthermore, the
Hindawi Publishing Corporation, Mathematical Problems in Engineering, Volume 2014, Article ID 938548, 11 pages, http://dx.doi.org/10.1155/2014/938548
regularized OS-ELM (ReOS-ELM) developed by Huynh and Won [13], which is essentially equivalent to the sequential regularized ELM (SRELM) [14] and the least square incremental ELM (LS-IELM) [18], can avoid the singularity problem.
If the feature mapping in SLFN is unknown to users, the kernel based ELM (KELM) can be constructed [15, 16]. For applications where samples arrive gradually, Guo et al. developed the kernel based incremental ELM (KB-IELM) [18].
However, in time-varying or nonstationary applications, the newer training data usually carry more information about the system, and the older ones possibly carry less information or even misleading information; that is, the training samples usually have timeliness. The ReOS-ELM and KB-IELM cannot reflect the timeliness of sequential training data well. On the other hand, if a huge number of samples emerge, the storage space KB-IELM requires for its matrix will increase without bound as learning goes on and new samples arrive ceaselessly, and at last storage overflow will necessarily happen; so KB-IELM cannot be utilized at all under these circumstances.
In this paper, we combine the advantages of FOS-ELM and ReOS-ELM and propose the online regularized ELM with forgetting mechanism (FORELM) for time-varying applications. FORELM can overcome the potential matrix singularity problem by using regularization and eliminate the effects of outdated data on the model by incorporating the forgetting mechanism. As in FOS-ELM, the ensemble technique may also be employed in FORELM to enhance its stability; that is, FORELM comprises p (p >= 1) ReOS-ELMs with forgetting mechanism, each of which trains an SLFN, and the average of their outputs represents the final output of the ensemble of these SLFNs. Additionally, the forgetting mechanism is also incorporated into KB-IELM, and the online kernelized ELM with forgetting mechanism (FOKELM) is presented, which can deal with the problem of matrix expansion of KB-IELM. The designed FORELM and FOKELM update the model recursively. The experimental results show the better performance of the FORELM and FOKELM approaches in nonstationary environments.
It should be noted that our methods adjust the output weights of SLFN due to addition and deletion of samples one by one, namely, learning and forgetting samples sequentially, while the network architecture is fixed. They are completely different from those offline incremental ELMs (I-ELM) [19-21] and the incremental RELM [22], which seek an optimal network architecture by adding hidden nodes one by one and learn the data in batch mode.
The rest of this paper is organized as follows. Section 2 gives a brief review of the basic concepts and related works of ReOS-ELM and KB-IELM. Section 3 proposes the new online learning algorithms, namely, FORELM and FOKELM. Performance evaluation is conducted in Section 4. Conclusions are drawn in Section 5.
2. Brief Review of the ReOS-ELM and KB-IELM

For simplicity, the ELM based learning algorithm for SLFN with multiple inputs and a single output is discussed.
The output of an SLFN with L hidden nodes (additive or RBF nodes) can be represented by

f(x) = Σ_{i=1}^{L} β_i G(a_i, b_i, x) = h(x)β,  x ∈ R^n, a_i ∈ R^n,  (1)

where a_i and b_i are the learning parameters of the hidden nodes, β = [β_1, β_2, ..., β_L]^T is the vector of the output weights, and G(a_i, b_i, x) denotes the output of the ith hidden node with respect to the input x, that is, the activation function. h(x) = [G(a_1, b_1, x), ..., G(a_L, b_L, x)] is a feature mapping from the n-dimensional input space to the L-dimensional hidden-layer feature space. In ELM, a_i and b_i are randomly determined first.

For a given set of distinct training data {(x_i, y_i)}_{i=1}^{N} ⊂ R^n × R, where x_i is an n-dimensional input vector and y_i is the corresponding scalar observation, the RELM, that is, the constrained optimization based ELM, can be formulated as
Minimize: L_P^ELM = (1/2)‖β‖² + c(1/2)‖ξ‖²,  (2a)
Subject to: Hβ = Y − ξ,  (2b)

where ξ = [ξ_1, ξ_2, ..., ξ_N]^T denotes the training error, Y = [y_1, y_2, ..., y_N]^T indicates the target values of all the samples, H_{N×L} = [h(x_1)^T, ..., h(x_N)^T]^T is the mapping matrix for the inputs of all the samples, and c is the regularization parameter (a positive constant).
Based on the KKT theorem, the constrained optimization of (2a) and (2b) can be transferred to the following dual optimization problem:

L_D^ELM = (1/2)‖β‖² + c(1/2)‖ξ‖² − α(Hβ − Y + ξ),  (3)

where α = [α_1, α_2, ..., α_N] is the vector of Lagrange multipliers. Using the KKT optimality conditions, the following equations can be obtained:

∂L_D^ELM/∂β = 0 → β = H^T α,  (4a)
∂L_D^ELM/∂ξ = 0 → α = cξ,  (4b)
∂L_D^ELM/∂α = 0 → Hβ − Y + ξ = 0.  (4c)
Ultimately, β can be obtained as follows [12, 15, 16]:

β = (c^{−1}I + H^T H)^{−1} H^T Y,  (5a)

or

β = H^T (c^{−1}I + H H^T)^{−1} Y.  (5b)

In order to reduce computational costs, when N > L one may prefer to apply solution (5a), and when N < L one may prefer to apply solution (5b).
If the feature mapping h(x) is unknown, one can apply Mercer's conditions on RELM. The kernel matrix is defined as Ω = HH^T with Ω_{ij} = h(x_i)h(x_j)^T = K(x_i, x_j). Then the output of the SLFN by kernel based RELM can be given as

f(x) = h(x)β = h(x)H^T (c^{−1}I + HH^T)^{−1} Y = [K(x, x_1), K(x, x_2), ..., K(x, x_N)] (c^{−1}I + Ω)^{−1} Y.  (6)
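For concreteness, the batch kernel RELM prediction of (6) can be sketched in a few lines of NumPy. The Gaussian kernel, the function name, and the parameter values here are illustrative assumptions, not prescribed by the paper.

```python
import numpy as np

def kelm_fit_predict(X, y, X_new, c=100.0, sigma=1.0):
    """Batch kernel RELM, eq. (6): f(x) = [K(x, x_1..N)] (c^{-1} I + Omega)^{-1} Y."""
    def gram(A, B):
        # Gaussian kernel K(a, b) = exp(-||a - b||^2 / sigma) (assumed choice)
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / sigma)

    Omega = gram(X, X)                                      # N x N kernel matrix
    alpha = np.linalg.solve(np.eye(len(X)) / c + Omega, y)  # (c^{-1} I + Omega)^{-1} Y
    return gram(X_new, X) @ alpha
```

For a large c and well-separated inputs, the fit reproduces the training targets almost exactly, since the ridge term c^{−1}I is then only a small perturbation of the kernel matrix.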
2.1. ReOS-ELM. The ReOS-ELM, that is, SRELM and LS-IELM, can be restated as follows.

For time k − 1, let h_i = h(x_i), H_{k−1} = [h_s^T, h_{s+1}^T, ..., h_{k−1}^T]^T, and Y_{k−1} = [y_s, y_{s+1}, ..., y_{k−1}]^T; then, according to (5a), the solution of RELM can be expressed as

P_{k−1} = (c^{−1}I + H_{k−1}^T H_{k−1})^{−1},  (7a)
β_{k−1} = P_{k−1} H_{k−1}^T Y_{k−1}.  (7b)

For time k, the new sample (x_k, y_k) arrives; thus H_k = [H_{k−1}^T, h_k^T]^T and Y_k = [Y_{k−1}^T, y_k]^T. Applying the Sherman-Morrison-Woodbury (SMW) formula [23], the current P and β can be computed as

P_k = (c^{−1}I + H_k^T H_k)^{−1} = (c^{−1}I + H_{k−1}^T H_{k−1} + h_k^T h_k)^{−1} = (P_{k−1}^{−1} + h_k^T h_k)^{−1}
    = P_{k−1} − P_{k−1} h_k^T h_k P_{k−1} / (1 + h_k P_{k−1} h_k^T),  (8a)

β_k = P_k H_k^T Y_k = P_k (H_{k−1}^T Y_{k−1} + h_k^T y_k) = P_k (P_{k−1}^{−1} P_{k−1} H_{k−1}^T Y_{k−1} + h_k^T y_k)
    = P_k ((P_k^{−1} − h_k^T h_k) β_{k−1} + h_k^T y_k) = β_{k−1} + P_k h_k^T (y_k − h_k β_{k−1}).  (8b)
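The recursions (8a) and (8b) can be checked numerically against the batch solution (5a). The sketch below uses a hypothetical sigmoid random-feature map and small dimensions; all names and parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
L, c = 4, 100.0                       # hidden nodes and regularization parameter (assumed)

# hypothetical fixed feature map standing in for h(x); 2-dimensional inputs
W = rng.standard_normal((2, L))
b = rng.standard_normal(L)
h = lambda x: 1.0 / (1.0 + np.exp(-(x @ W + b)))   # row vector h(x)

P = c * np.eye(L)                     # P_0 = (c^{-1} I)^{-1} = c I
beta = np.zeros(L)
rows, targets = [], []                # kept only to cross-check the batch solution

for _ in range(20):
    x, y = rng.standard_normal(2), rng.standard_normal()
    hk = h(x)
    # (8a): P_k = P_{k-1} - P_{k-1} h_k^T h_k P_{k-1} / (1 + h_k P_{k-1} h_k^T)
    P = P - np.outer(P @ hk, hk @ P) / (1.0 + hk @ P @ hk)
    # (8b): beta_k = beta_{k-1} + P_k h_k^T (y_k - h_k beta_{k-1})
    beta = beta + P @ hk * (y - hk @ beta)
    rows.append(hk)
    targets.append(y)

# batch solution (5a) over all received samples, for comparison
H = np.array(rows)
Y = np.array(targets)
beta_batch = np.linalg.solve(np.eye(L) / c + H.T @ H, H.T @ Y)
```

After any number of updates, `beta` agrees with the batch regularized least-squares solution; that agreement is exactly what the SMW-based recursion guarantees.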
2.2. KB-IELM. For time k − 1, let

A_{k−1} = (c^{−1}I + Ω_{k−1}) =
  [ c^{−1} + K(x_s, x_s)    K(x_s, x_{s+1})            ...  K(x_s, x_{k−1})
    K(x_{s+1}, x_s)         c^{−1} + K(x_{s+1}, x_{s+1}) ...  K(x_{s+1}, x_{k−1})
    ...                     ...                              ...
    K(x_{k−1}, x_s)         K(x_{k−1}, x_{s+1})          ...  c^{−1} + K(x_{k−1}, x_{k−1}) ].  (9)

For time k, the new sample (x_k, y_k) arrives; thus

A_k = c^{−1}I + Ω_k = [ A_{k−1}  v_k
                        v_k^T    ν_k ],  (10)

where v_k = [K(x_k, x_s), K(x_k, x_{s+1}), ..., K(x_k, x_{k−1})]^T and ν_k = K(x_k, x_k) + 1/c. Using the block matrix inverse formula [23], A_k^{−1} can be calculated from A_{k−1}^{−1} as

A_k^{−1} = [ A_{k−1}  v_k
             v_k^T    ν_k ]^{−1}
         = [ A_{k−1}^{−1} + A_{k−1}^{−1} v_k ρ_k^{−1} v_k^T A_{k−1}^{−1}    −A_{k−1}^{−1} v_k ρ_k^{−1}
             −ρ_k^{−1} v_k^T A_{k−1}^{−1}                                    ρ_k^{−1} ],  (11)

where ρ_k = ν_k − v_k^T A_{k−1}^{−1} v_k.
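A short numerical check of the growth formula (11): the code below builds A_k^{−1} sample by sample and compares it with a direct inverse. The Gaussian kernel and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
c, sigma = 100.0, 1.0
K = lambda a, b: np.exp(-np.sum((a - b) ** 2) / sigma)   # assumed Gaussian kernel

X = [rng.standard_normal(2) for _ in range(5)]
A_inv = np.array([[1.0 / (K(X[0], X[0]) + 1.0 / c)]])    # inverse for a single sample

for m, xk in enumerate(X[1:], start=1):
    v = np.array([K(xk, xj) for xj in X[:m]])            # vector v_k
    nu = K(xk, xk) + 1.0 / c                             # scalar nu_k
    rho = nu - v @ A_inv @ v                             # rho_k = nu_k - v_k^T A^{-1} v_k
    Av = A_inv @ v
    # (11): grow the inverse with the block matrix inverse formula
    A_inv = np.block([[A_inv + np.outer(Av, Av) / rho, -(Av / rho)[:, None]],
                      [-(Av / rho)[None, :],           np.array([[1.0 / rho]])]])

# direct inverse of c^{-1} I + Omega, for comparison
Omega = np.array([[K(a, b) for b in X] for a in X])
A_direct = np.linalg.inv(np.eye(5) / c + Omega)
```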
3. The Proposed FORELM and FOKELM

When an SLFN is employed to model a time-varying system online, the training samples are not only generated one by one but also often have the property of timeliness; that is, the training data have a period of validity. Therefore, during the learning process of an online sequential learning algorithm, the older or outdated training data, whose effectiveness decreases or is lost after several unit times, should be abandoned; this is the idea of the forgetting mechanism [10]. ReOS-ELM (i.e., SRELM or LS-IELM) and KB-IELM cannot reflect the timeliness of sequential training data. In this section, the forgetting mechanism is added to them to eliminate the outdated data that might have a misleading or bad effect on the built SLFN. On the other hand, for KB-IELM, abandoning samples can prevent the matrix A^{−1} of (11) from expanding infinitely. The computing procedures for deleting a sample are given, and the complete online regularized ELM and kernelized ELM with forgetting mechanism are presented.
3.1. Decremental RELM and FORELM. After RELM has studied a given number z of samples and the SLFN has been applied for prediction, RELM should discard the oldest sample (x_s, y_s) from the samples set.

Let H = [h_{s+1}^T, h_{s+2}^T, ..., h_k^T]^T and Y = [y_{s+1}, y_{s+2}, ..., y_k]^T; then H_k = [h_s^T, H^T]^T, Y_k = [y_s, Y^T]^T, and

P = (c^{−1}I + H^T H)^{−1} = (c^{−1}I + H_k^T H_k − h_s^T h_s)^{−1} = (P_k^{−1} − h_s^T h_s)^{−1}.  (12)

Furthermore, using the SMW formula,

P = P_k + P_k h_s^T h_s P_k / (1 − h_s P_k h_s^T).  (13)

Moreover,

β = P H^T Y = P (H_k^T Y_k − h_s^T y_s) = P (P_k^{−1} P_k H_k^T Y_k − h_s^T y_s)
  = P ((P^{−1} + h_s^T h_s) β_k − h_s^T y_s) = β_k + P h_s^T (h_s β_k − y_s).  (14)

Next time, P_{k+1} and β_{k+1} can be calculated from P (viewed as P_k) and β (viewed as β_k) according to (8a) and (8b), respectively.
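The deletion step (13)-(14) can be verified directly: starting from the batch quantities over z samples, downdating by the oldest sample must reproduce the batch solution over the remaining z − 1. The feature matrix below is random, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
L, c, z = 4, 100.0, 10
H = rng.standard_normal((z, L))       # rows h_s, ..., h_k of an assumed feature map
Y = rng.standard_normal(z)

# state after learning all z samples: P_k = (c^{-1} I + H^T H)^{-1}, beta_k = P_k H^T Y
Pk = np.linalg.inv(np.eye(L) / c + H.T @ H)
beta_k = Pk @ H.T @ Y

hs, ys = H[0], Y[0]                   # oldest sample (x_s, y_s)
# (13): P = P_k + P_k h_s^T h_s P_k / (1 - h_s P_k h_s^T)
P = Pk + np.outer(Pk @ hs, hs @ Pk) / (1.0 - hs @ Pk @ hs)
# (14): beta = beta_k + P h_s^T (h_s beta_k - y_s)
beta = beta_k + P @ hs * (hs @ beta_k - ys)

# batch solution on the remaining z - 1 samples, for comparison
P_ref = np.linalg.inv(np.eye(L) / c + H[1:].T @ H[1:])
beta_ref = P_ref @ H[1:].T @ Y[1:]
```

Note that the denominator 1 − h_s P_k h_s^T in (13) is always positive, since h_s is one of the rows already absorbed into P_k, so the downdate never divides by zero.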
Suppose that FORELM consists of p ReOS-ELMs with forgetting mechanism, by which the trained SLFNs have the same hidden node output function G(a, b, x) and the same number of hidden nodes L. In the following FORELM algorithm, the variables and parameters with superscript r are relevant to the rth SLFN, trained by the rth ReOS-ELM with forgetting mechanism. Synthesizing ReOS-ELM and the decremental RELM, we can get FORELM as follows.

Step 1. Initialization:
(1) Choose the hidden output function G(a, b, x) of the SLFN with a certain activation function and the number of hidden nodes L. Set the value of p.
(2) Randomly assign the hidden parameters (a_i^r, b_i^r), i = 1, 2, ..., L, r = 1, 2, ..., p.
(3) Determine c; set P_{k−1}^r = cI_{L×L}.

Step 2. Incrementally learn the initial z − 1 samples; that is, repeat the following procedure z − 1 times:
(1) Get the current sample (x_k, y_k).
(2) Calculate h_k^r and P_k^r: h_k^r = [G(a_1^r, b_1^r, x_k), ..., G(a_L^r, b_L^r, x_k)]; calculate P_k^r by (8a).
(3) Calculate β_k^r: if the sample (x_k, y_k) is the first one, then β_k^r = P_k^r (h_k^r)^T y_k; else calculate β_k^r by (8b).

Step 3. Online modeling and prediction; repeat the following procedure at every step:
(1) Acquire the current y_k, form the new sample (x_k, y_k), and calculate h_k^r, P_k^r, and β_k^r by (8a) and (8b).
(2) Prediction: form x_{k+1}; the output of the rth SLFN, that is, the prediction ŷ_{k+1}^r of y_{k+1}, can be calculated by (1): ŷ_{k+1}^r = h^r(x_{k+1}) β_k^r; the final prediction is ŷ_{k+1} = (Σ_{r=1}^{p} ŷ_{k+1}^r)/p.
(3) Delete the oldest sample (x_s, y_s): calculate P^r and β^r by (13) and (14), respectively.
3.2. Decremental KELM and FOKELM. After KELM has studied the given number z of samples and the SLFN has been applied for prediction, KELM should discard the oldest sample (x_s, y_s) from the samples set.

Let Ω_{ij} = K(x_{s+i}, x_{s+j}), i, j = 1, 2, ..., k − s, A = Ω + c^{−1}I, v = [K(x_s, x_{s+1}), ..., K(x_s, x_k)], and ν = K(x_s, x_s) + 1/c; then A_k can be written in the following partitioned matrix form:

A_k = Ω_k + c^{−1}I = [ ν    v
                        v^T  A ].  (15)

Moreover, using the block matrix inverse formula, the following equation can be obtained:

A_k^{−1} = [ ν    v
             v^T  A ]^{−1}
         = [ ρ^{−1}              −ρ^{−1} v A^{−1}
             −A^{−1} v^T ρ^{−1}   A^{−1} + A^{−1} v^T ρ^{−1} v A^{−1} ],  (16)

where ρ = ν − v A^{−1} v^T.

Rewrite A_k^{−1} in the partitioned matrix form as

A_k^{−1} = [ A_k^{−1}(1,1)        A_k^{−1}(1, 2:end)
             A_k^{−1}(2:end, 1)   A_k^{−1}(2:end, 2:end) ].  (17)

Comparing (16) and (17), A^{−1} can be calculated as

A^{−1} = A_k^{−1}(2:end, 2:end) − A^{−1} v^T ρ^{−1} v A^{−1}
       = A_k^{−1}(2:end, 2:end) − (−A^{−1} v^T ρ^{−1})(−ρ^{−1} v A^{−1}) / ρ^{−1}
       = A_k^{−1}(2:end, 2:end) − A_k^{−1}(2:end, 1) A_k^{−1}(1, 2:end) / A_k^{−1}(1, 1).  (18)
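The downdate (18) is a rank-one Schur-complement correction on the retained block of A_k^{−1}, using only entries of A_k^{−1} itself. A sketch with an assumed Gaussian kernel:

```python
import numpy as np

rng = np.random.default_rng(3)
c, sigma = 100.0, 1.0
K = lambda a, b: np.exp(-np.sum((a - b) ** 2) / sigma)   # assumed Gaussian kernel

X = [rng.standard_normal(2) for _ in range(6)]
Omega = np.array([[K(a, b) for b in X] for a in X])
Ak_inv = np.linalg.inv(np.eye(6) / c + Omega)            # A_k^{-1} before deletion

# (18): drop the oldest sample x_s (first row/column) using only entries of A_k^{-1}
A_inv = Ak_inv[1:, 1:] - np.outer(Ak_inv[1:, 0], Ak_inv[0, 1:]) / Ak_inv[0, 0]

# direct inverse on the remaining samples, for comparison
A_ref = np.linalg.inv(np.eye(5) / c + Omega[1:, 1:])
```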
Next time, A_{k+1}^{−1} can be computed from A^{−1} (viewed as A_k^{−1}) according to (11).

Integrating KB-IELM with the decremental KELM, we can obtain FOKELM as follows.
Step 1. Initialization: choose the kernel K(x_i, x_j) with corresponding parameter values and determine c.

Step 2. Incrementally learn the initial z − 1 samples: calculate A_k^{−1}; if only one sample exists, then A_k^{−1} = 1/(K(x_k, x_k) + c^{−1}); else calculate A_k^{−1} by (11).

Step 3. Online modeling and prediction:
(1) Acquire the new sample (x_k, y_k) and calculate A_k^{−1} by (11).
(2) Prediction: form x_{k+1} and calculate the prediction ŷ_{k+1} of y_{k+1} by (6), namely,

ŷ_{k+1} = [K(x_{k+1}, x_{k−z+1}), ..., K(x_{k+1}, x_k)] A_k^{−1} [y_{k−z+1}, ..., y_k]^T.  (19)

(3) Delete the oldest sample (x_s, y_s): calculate A^{−1} by (18).
4. Performance Evaluation

In this section, the performance of the presented FORELM and FOKELM is verified via time-varying nonlinear process identification simulations. The simulations are designed to evaluate the accuracy, stability, and computational complexity of the proposed FORELM and FOKELM
by comparison with FOS-ELM, ReOS-ELM (i.e., SRELM or LS-IELM), and the dynamic regression extreme learning machine (DR-ELM) [24]. DR-ELM is also a kind of online sequential RELM, designed by Zhang and Wang using solution (5b) of RELM and the block matrix inverse formula.

All the performance evaluations were executed in the MATLAB 7.01 environment running on Windows XP with an Intel Core i3-3220 3.3 GHz CPU and 4 GB RAM.
Simulation 1. The unknown identified system is a modified version of the one addressed in [25]: the constant and the coefficients of the variables are changed to form a time-varying system, as done in [11]:

y(k + 1) = { y(k)/(1 + y(k)²) + u(k)³,        k ≤ 100,
             2y(k)/(2 + y(k)²) + u(k)³,       100 < k ≤ 300,
             y(k)/(1 + 2y(k)²) + 2u(k)³,      300 < k.  (20)
The system (20) can be expressed as follows:

y(k) = f(x(k)),  (21)

where f(·) is a nonlinear function and x(k) is the regression input data vector

x(k) = [y(k − 1), ..., y(k − n_y), u(k − n_d), ..., u(k − n_u)],  (22)

with n_d, n_u, and n_y being the model structure parameters. An SLFN is applied to approximate (20); accordingly, (x(k), y(k)) is the learning sample (x_k, y_k) of the SLFN.
Denote k_0 = z + max(n_y, n_u) − n_d and k_1 = k_0 + 350. The input is set as follows:

u(k) = { rand(·),                  k ≤ k_0,
         sin(2π(k − k_0)/120),     k_0 < k ≤ k_1,
         sin(2π(k − k_1)/50),      k_1 < k,  (23)

where rand(·) generates random numbers uniformly distributed in the interval (0, 1).
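The data-generating process of (20) with input (23) can be sketched as follows. For simplicity, the switch points k_0 and k_1 are fixed at 0 and 350 here (in the paper they depend on z and the model orders), so this is an assumption-laden illustration rather than the exact experimental setup.

```python
import numpy as np

def simulate(n_steps=650, seed=0):
    """Generate u(k) per (23) and y(k) per the time-varying system (20),
    with placeholder switch points k0 = 0 and k1 = 350 (assumed)."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_steps)
    y = np.zeros(n_steps + 1)
    for k in range(n_steps):
        if k <= 0:
            u[k] = rng.random()                       # rand(.) in (0, 1)
        elif k <= 350:
            u[k] = np.sin(2 * np.pi * k / 120)
        else:
            u[k] = np.sin(2 * np.pi * (k - 350) / 50)
        # time-varying plant (20)
        if k <= 100:
            y[k + 1] = y[k] / (1 + y[k] ** 2) + u[k] ** 3
        elif k <= 300:
            y[k + 1] = 2 * y[k] / (2 + y[k] ** 2) + u[k] ** 3
        else:
            y[k + 1] = y[k] / (1 + 2 * y[k] ** 2) + 2 * u[k] ** 3
    return u, y
```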
In all experiments, the output of a hidden node with respect to the input x of an SLFN in (1) is set to the sigmoidal additive function, that is, G(a, b, x) = 1/(1 + exp(a · x + b)); the components of a, that is, the input weights, and the bias b are randomly chosen from the range [−2, 2]. In FOKELM the Gaussian kernel function is applied, namely, K(x_i, x_j) = exp(−‖x_i − x_j‖²/σ).
The root-mean-square error (RMSE) of prediction and the maximal absolute prediction error (MAPE) are regarded as measure indices of model accuracy and stability, respectively:

RMSE = sqrt( (1/(t_2 − t_1 + 1)) Σ_{i=t_1}^{t_2} (y_i − ŷ_i)² ),
MAPE = max_{i=t_1,...,t_2} |y_i − ŷ_i|,  (24)

where t_1 = k_0 + 1 and t_2 = 650. The simulation is carried on for 650 instances. In model (21), n_d = 1, n_u = 2, and n_y = 3. Due to the randomness of the parameters a, b and of u(k) during the initial stage (k ≤ k_0), the simulation results necessarily vary; for each approach the results are averaged over 5 trials. ReOS-ELM does not discard any old sample; thus it has z → ∞.

In offline modeling the training samples set is fixed; thus one may search for relatively optimal values of the model parameters of the ELMs. Nevertheless, in online modeling of a time-varying system the training samples set is changing, and it is difficult to choose optimal parameter values in practice. Therefore we manually set the same parameter of these ELMs to the same value, for example, their regularization parameters c all set to 250, and then compare their performances.
RMSE and MAPE of the proposed ELMs and the other aforementioned ELMs are listed in Tables 1 and 2, respectively, and the corresponding running time (i.e., training time plus predicting time) of these various ELMs is given in Table 3.

From Tables 1-3 one can see the following results.
(1) RMSE and MAPE of FORELM are smaller than those of FOS-ELM with the same z and L values. The reason is that (H^T H) in FOS-ELM may be (nearly) singular at some instances; thus (H^T H)^{−1} calculated recursively is meaningless and unreliable, and when L ≥ z FOS-ELM cannot work at all owing to its too large RMSE or MAPE. Accordingly, "×" represents nullification in Tables 1-3, whereas FORELM does not suffer from such a problem. In addition, RMSE and MAPE of FOKELM are also smaller than those of FOS-ELM with the same z values.

(2) RMSE of FORELM and FOKELM is smaller than that of ReOS-ELM with the same L and c when the parameter z, namely, the length of the sliding time window, is set properly. The reason is that ReOS-ELM neglects the timeliness of samples of the time-varying process and does not get rid of the effects of old samples; contrarily, FORELM and FOKELM stress the actions of the newest ones. It should be noticed that z is relevant to the characteristics of the specific time-varying system and its inputs. In Table 1, when z = 50, 70, or 100 the effect is good.

(3) When z is fixed, with L increasing, the RMSEs of these ELMs tend to decrease at first, but later the changes are not obvious.

(4) FORELM requires nearly the same time as FOS-ELM but more time than ReOS-ELM. This is because both FORELM and FOS-ELM involve p incremental and
Table 1: RMSE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

                 z = 50   z = 70   z = 100  z = 150  z = 200  z = ∞
L = 15
FOS-ELM (p = 2)  0.3474   0.0967   0.0731   0.0975   0.0817   —
ReOS-ELM         —        —        —        —        —        0.0961
DR-ELM           0.0681   0.0683   0.0712   0.0833   0.0671   —
FORELM (p = 2)   0.0850   0.0708   0.0680   0.0816   0.0718   —
L = 25
FOS-ELM          0.9070   0.1808   0.1324   0.0821   0.0883   —
ReOS-ELM         —        —        —        —        —        0.0804
DR-ELM           0.0543   0.0670   0.0789   0.0635   0.0656   —
FORELM           0.0578   0.0697   0.0833   0.0653   0.0600   —
L = 50
FOS-ELM          ×        ×        ×        9.9541   0.4806   —
ReOS-ELM         —        —        —        —        —        0.0529
DR-ELM           0.0377   0.0351   0.0345   0.0425   0.0457   —
FORELM           0.0344   0.0320   0.0324   0.0457   0.0456   —
L = 100
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.0359
DR-ELM           0.0298   0.0306   0.0308   0.0391   0.0393   —
FORELM           0.0312   0.0263   0.0288   0.0405   0.0378   —
L = 150
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.0351
DR-ELM           0.0268   0.0270   0.0281   0.0417   0.0365   —
FORELM           0.0270   0.0276   0.0284   0.0378   0.0330   —
L = 200
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.0344
DR-ELM           0.0259   0.0284   0.0272   0.0363   0.0351   —
FORELM           0.0263   0.0289   0.0274   0.0355   0.0325   —
L = 400
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.0296
DR-ELM           0.0240   0.0255   0.0249   0.0362   0.0327   —
FORELM           0.0231   0.0248   0.0256   0.0372   0.0310   —
L = 800
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.0292
DR-ELM           0.0233   0.0253   0.0270   0.0374   0.0320   —
FORELM           0.0223   0.0256   0.0252   0.0407   0.0318   —
L = 1000
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.0298
DR-ELM           0.0234   0.0263   0.0278   0.0418   0.0323   —
FORELM           0.0232   0.0238   0.0259   0.0372   0.0310   —
FOKELM           0.0236   0.0251   0.0267   0.0386   0.0331   —

"—" represents nondefinition or inexistence in the case; "×" represents nullification owing to too large RMSE or MAPE.
decremental learning procedures, while ReOS-ELM performs only one incremental learning procedure.
(5) Both FORELM and DR-ELM use the regularization trick, so they should theoretically obtain the same or similar prediction effect, and Tables 1 and 2 also show they have statistically similar simulation results. However, their time efficiencies differ. From Table 3 it can be seen that, for the case where L is small or L ≤ z, FORELM takes less time than DR-ELM. Thus, when L is small and modeling speed is preferred to accuracy, one may try to employ FORELM.

(6) When z is fixed, if L is large enough, there are no significant differences between the RMSE of FORELM and DR-ELM and the RMSE of FOKELM with appropriate parameter values. In Table 3 it is obvious that with L increasing, DR-ELM costs more and more time, but the time cost by FOKELM is irrelevant to L. According to the procedures of DR-ELM and FOKELM, if L is large enough to make calculating h(x_i)h(x_j)^T more
Table 2: MAPE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

                 z = 50   z = 70   z = 100  z = 150  z = 200  z = ∞
L = 15
FOS-ELM (p = 2)  5.8327   0.6123   0.5415   1.0993   0.7288   —
ReOS-ELM         —        —        —        —        —        0.3236
DR-ELM           0.3412   0.4318   0.3440   0.7437   0.4007   —
FORELM (p = 2)   0.2689   0.3639   0.3135   0.8583   0.4661   —
L = 25
FOS-ELM          5.0473   1.7010   2.4941   0.9537   0.9272   —
ReOS-ELM         —        —        —        —        —        0.4773
DR-ELM           0.3418   0.3158   0.4577   0.5680   0.3127   —
FORELM           0.3391   0.2568   0.3834   0.6228   0.2908   —
L = 50
FOS-ELM          ×        ×        ×        209.8850 3.9992   —
ReOS-ELM         —        —        —        —        —        0.2551
DR-ELM           0.3876   0.3109   0.3808   0.5486   0.3029   —
FORELM           0.3478   0.2960   0.3145   0.5504   0.3514   —
L = 100
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.2622
DR-ELM           0.2473   0.2748   0.2163   0.5471   0.3381   —
FORELM           0.3229   0.2506   0.2074   0.5501   0.2936   —
L = 150
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.3360
DR-ELM           0.2725   0.2365   0.2133   0.5439   0.3071   —
FORELM           0.2713   0.2418   0.2050   0.5432   0.3133   —
L = 200
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.2687
DR-ELM           0.2539   0.2454   0.2069   0.5443   0.3161   —
FORELM           0.2643   0.2370   0.2029   0.5443   0.3148   —
L = 400
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.2657
DR-ELM           0.2520   0.2456   0.2053   0.5420   0.3260   —
FORELM           0.2277   0.2382   0.2285   0.5430   0.3208   —
L = 800
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.3784
DR-ELM           0.2388   0.2401   0.3112   0.5415   0.3225   —
FORELM           0.2128   0.2325   0.3006   0.5416   0.3247   —
L = 1000
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.4179
DR-ELM           0.2230   0.2550   0.3412   0.5431   0.3283   —
FORELM           0.2228   0.2525   0.3347   0.5412   0.3293   —
FOKELM           0.2397   0.2345   0.3125   0.5421   0.3178   —
complex than calculating K(x_i, x_j), then FOKELM will take less time than DR-ELM.
To intuitively observe and compare the accuracy and stability of these ELMs with the same a, b values and the same initial u(k) (k ≤ k_0) signal, the absolute prediction error (APE) curves of one trial of every approach (L = 25, z = 70) are shown in Figure 1. Clearly, Figure 1(a) shows that at a few instances the prediction errors of FOS-ELM are much greater, although the prediction errors are very small at other instances; thus FOS-ELM is unstable. Comparing Figures 1(b), 1(c), 1(d), and 1(e), we can see that at most instances the prediction errors of DR-ELM, FORELM, and FOKELM are smaller than those of ReOS-ELM, and the prediction effect of FORELM is similar to that of DR-ELM.

On the whole, FORELM and FOKELM have higher accuracy than FOS-ELM and ReOS-ELM.
Simulation 2. In this subsection, the proposed methods are tested on modeling a second-order bioreactor process described by the following differential equations [26]:

ċ_1(t) = −c_1(t)u(t) + c_1(t)(1 − c_2(t)) e^{c_2(t)/γ},
ċ_2(t) = −c_2(t)u(t) + c_1(t)(1 − c_2(t)) e^{c_2(t)/γ} (1 + β)/(1 + β − c_2(t)),  (25)

where c_1(t) is the cell concentration, which is considered as the output of the process (y(t) = c_1(t)), c_2(t) is the amount of nutrients per unit volume, and u(t) represents the flow rate, taken as the control input (the excitation signal for modeling); c_1(t) and c_2(t) can take values between zero and one, and u(t) is allowed a magnitude in the interval [0, 2].
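A single Euler step of (25), together with the parameter schedule (26) below, can be sketched as follows; the step size Ts and the function names are assumptions for illustration, not the paper's simulation code.

```python
import numpy as np

def bioreactor_step(c1, c2, u, gamma, beta=0.02, Ts=1e-3):
    """One Euler step of the bioreactor model (25); Ts is an assumed step size."""
    growth = c1 * (1.0 - c2) * np.exp(c2 / gamma)
    dc1 = -c1 * u + growth
    dc2 = -c2 * u + growth * (1.0 + beta) / (1.0 + beta - c2)
    return c1 + Ts * dc1, c2 + Ts * dc2

def gamma_of(t):
    """Time-varying nutrient inhibition parameter gamma per (26)."""
    return 0.3 if t <= 20.0 else (0.48 if t <= 60.0 else 0.6)
```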
Table 3: Running time (seconds) comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

                 z = 50   z = 70   z = 100  z = 150  z = 200  z = ∞
L = 15
FOS-ELM (p = 2)  0.0620   0.0620   0.0621   0.0621   0.0622   —
ReOS-ELM         —        —        —        —        —        0.0160
DR-ELM           0.2650   0.3590   0.5781   1.0790   1.6102   —
FORELM (p = 2)   0.0620   0.0620   0.0620   0.0621   0.0622   —
L = 25
FOS-ELM          0.0621   0.0621   0.0621   0.0623   0.0623   —
ReOS-ELM         —        —        —        —        —        0.0167
DR-ELM           0.2500   0.3750   0.6094   1.0630   1.6250   —
FORELM           0.0621   0.0621   0.0621   0.0622   0.0622   —
L = 50
FOS-ELM          ×        ×        ×        0.1250   0.1250   —
ReOS-ELM         —        —        —        —        —        0.0470
DR-ELM           0.2650   0.3906   0.6250   1.1100   1.6560   —
FORELM           0.1400   0.1562   0.1560   0.1250   0.1250   —
L = 100
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.1250
DR-ELM           0.2813   0.4219   0.6719   1.2031   1.7380   —
FORELM           0.3750   0.4063   0.3440   0.3440   0.3280   —
L = 150
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.2344
DR-ELM           0.3125   0.4540   0.7500   1.2970   1.9220   —
FORELM           0.7810   0.8003   0.6560   0.6720   0.6570   —
L = 200
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        0.3901
DR-ELM           0.3290   0.5160   0.7810   1.3440   1.9690   —
FORELM           1.2702   1.2750   1.1250   1.0320   0.9375   —
L = 400
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        1.7350
DR-ELM           0.3906   0.5781   0.8750   1.4680   2.2820   —
FORELM           6.6410   6.4530   6.2030   5.6090   5.1719   —
L = 800
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        8.7190
DR-ELM           0.5470   0.7970   1.2660   2.0938   2.9840   —
FORELM           32.7820  31.6720  30.1400  27.5780  27.7500  —
L = 1000
FOS-ELM          ×        ×        ×        ×        ×        —
ReOS-ELM         —        —        —        —        —        13.6094
DR-ELM           0.6250   1.0000   1.4530   2.4380   3.4060   —
FORELM           51.4530  49.3440  46.8750  43.1880  38.9060  —
FOKELM           0.2811   0.4364   0.6951   1.1720   1.7813   —
In the simulation, with the growth rate parameter β = 0.02, the nutrient inhibition parameter γ is considered as the time-varying parameter; that is,

  γ = 0.3,   0 s < t ≤ 20 s;
      0.48,  20 s < t ≤ 60 s;
      0.6,   60 s < t.   (26)
Let Ts indicate the sampling interval. Denote t0 = (z + max(n_y, n_u) − n_d)Ts and t1 = t0 + 350Ts. The input is set below:

  u(t) = 0.5 rand(·) + 0.75,            t ≤ t0;
         0.25 sin(2π(t − t0)/24) + 1,   t0 < t ≤ t1;
         0.25 sin(2π(t − t1)/10) + 1,   t1 < t,   (27)

where at every sampling instance rand(·) generates random numbers which are uniformly distributed in the interval (0, 1).
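As a concrete reading of (26) and (27), the parameter schedule and the excitation can be written as plain piecewise functions. The sketch below is illustrative, not the authors' code; the values t0 = 14.0 and t1 = 84.0 passed to the check are assumed stand-ins for the quantities defined above.

```python
import numpy as np

def gamma(t):                      # nutrient inhibition parameter, (26)
    if t <= 20.0:
        return 0.3
    if t <= 60.0:
        return 0.48
    return 0.6

def u_input(t, t0, t1, rng):       # excitation signal, (27)
    if t <= t0:
        return 0.5 * rng.random() + 0.75          # uniform on (0.75, 1.25)
    if t <= t1:
        return 0.25 * np.sin(2 * np.pi * (t - t0) / 24) + 1
    return 0.25 * np.sin(2 * np.pi * (t - t1) / 10) + 1

rng = np.random.default_rng(0)
assert gamma(10.0) == 0.3 and gamma(30.0) == 0.48 and gamma(100.0) == 0.6
# u(t) always stays inside the admissible interval [0, 2]
for t in np.arange(0.0, 120.0, 0.2):
    assert 0.0 <= u_input(t, t0=14.0, t1=84.0, rng=rng) <= 2.0
```

Note that every branch of (27) keeps u(t) within [0.75, 1.25], well inside the admissible magnitude interval [0, 2] stated above.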
Set Ts = 0.2 s. With the same a, b, and initial u(t), the APE curves of every approach (L = 20, z = 100) for one trial are drawn in Figure 2. Clearly, on the whole, the APE curves of FORELM and FOKELM are lower than those of FOS-ELM and ReOS-ELM, and FORELM has nearly the same prediction effect as DR-ELM. Further, the RMSEs of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM are 0.096241, 0.012203, 0.007439, 0.007619, and 0.007102, respectively.
Through many comparative trials we may attain the same results as the ones in Simulation 1.
[Figure 1: APE curves of (a) FOS-ELM, (b) ReOS-ELM, (c) DR-ELM, (d) FORELM, and (e) FOKELM on process (20) with input (23); horizontal axis: sample sequence, vertical axis: APE.]

5 Conclusions

ReOS-ELM (i.e., SRELM or LS-IELM) can yield good generalization models and does not suffer from matrix singularity or ill-posed problems, but it is unsuitable for time-varying applications. On the other hand, FOS-ELM, thanks to its forgetting mechanism, can reflect the timeliness of data and train SLFN in nonstationary environments, but it may encounter the matrix singularity problem and run unstably.
In this paper the forgetting mechanism is incorporated into ReOS-ELM, and we obtain FORELM, which blends the advantages of ReOS-ELM and FOS-ELM. In addition, the forgetting mechanism is also added to KB-IELM; consequently FOKELM is obtained, which can overcome the matrix expansion problem of KB-IELM.
Performance comparison between the proposed ELMs and other ELMs was carried out on identification of time-varying systems in the aspects of accuracy, stability, and computational complexity. The experimental results show that, statistically, FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments. When the number L of hidden nodes is small, or L ≤ the length z of the sliding time window, FORELM has a time efficiency superiority over DR-ELM. On the other hand, if L is large enough, FOKELM will be faster than DR-ELM.
[Figure 2: APE curves of (a) FOS-ELM, (b) ReOS-ELM, (c) DR-ELM, (d) FORELM, and (e) FOKELM on process (25) with time-varying γ in (26); horizontal axis: time (Ts), vertical axis: APE.]
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
This work was supported by the Hunan Provincial Science and Technology Foundation of China (2011FJ6033).
References
[1] J. Park and I. W. Sandberg, "Universal approximation using radial basis function networks," Neural Computation, vol. 3, no. 2, pp. 246–257, 1991.
[2] G. B. Huang, Y. Q. Chen, and H. A. Babri, "Classification ability of single hidden layer feedforward neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 799–801, 2000.
[3] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[4] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985–990, July 2004.
[5] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[6] N. Y. Liang, G. B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[7] J. S. Lim, "Partitioned online sequential extreme learning machine for large ordered system modeling," Neurocomputing, vol. 102, pp. 59–64, 2013.
[8] J. S. Lim, S. Lee, and H. S. Pang, "Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations," Neural Computing and Applications, vol. 22, no. 3-4, pp. 569–576, 2013.
[9] Y. Gu, J. F. Liu, Y. Q. Chen, X. L. Jiang, and H. C. Yu, "TOSELM: timeliness online sequential extreme learning machine," Neurocomputing, vol. 128, pp. 119–127, 2014.
[10] J. W. Zhao, Z. H. Wang, and D. S. Park, "Online sequential extreme learning machine with forgetting mechanism," Neurocomputing, vol. 87, pp. 79–89, 2012.
[11] X. Zhang and H. L. Wang, "Fixed-memory extreme learning machine and its applications," Control and Decision, vol. 27, no. 8, pp. 1206–1210, 2012.
[12] W. Y. Deng, Q. H. Zheng, and L. Chen, "Regularized extreme learning machine," in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 389–395, April 2009.
[13] H. T. Huynh and Y. Won, "Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks," Pattern Recognition Letters, vol. 32, no. 14, pp. 1930–1935, 2011.
[14] X. Zhang and H. L. Wang, "Time series prediction based on sequential regularized extreme learning machine and its application," Acta Aeronautica et Astronautica Sinica, vol. 32, no. 7, pp. 1302–1308, 2011.
[15] G. B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[16] G. B. Huang, H. M. Zhou, X. J. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[17] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
[18] L. Guo, J. H. Hao, and M. Liu, "An incremental extreme learning machine for online sequential learning problems," Neurocomputing, vol. 128, pp. 50–58, 2014.
[19] G. B. Huang, L. Chen, and C. K. Siew, "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
[20] G. B. Huang and L. Chen, "Convex incremental extreme learning machine," Neurocomputing, vol. 70, no. 16–18, pp. 3056–3062, 2007.
[21] G. B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine," Neurocomputing, vol. 71, no. 16–18, pp. 3460–3468, 2008.
[22] X. Zhang and H. L. Wang, "Incremental regularized extreme learning machine based on Cholesky factorization and its application to time series prediction," Acta Physica Sinica, vol. 60, no. 11, Article ID 110201, 2011.
[23] G. H. Golub and C. F. van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[24] X. Zhang and H. L. Wang, "Dynamic regression extreme learning machine and its application to small-sample time series prediction," Information and Control, vol. 40, no. 5, pp. 704–709, 2011.
[25] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[26] M. O. Efe, E. Abadoglu, and O. Kaynak, "A novel analysis and design of a neural network assisted nonlinear controller for a bioreactor," International Journal of Robust and Nonlinear Control, vol. 9, no. 11, pp. 799–815, 1999.
regularized OS-ELM (ReOS-ELM), developed by Huynh and Won [13], which is essentially equivalent to sequential regularized ELM (SRELM) [14] and least square incremental ELM (LS-IELM) [18], can avoid the singularity problem.

If the feature mapping in SLFN is unknown to users, the kernel based ELM (KELM) can be constructed [15, 16]. For the application where samples arrive gradually, Guo et al. developed kernel based incremental ELM (KB-IELM) [18].

However, in time-varying or nonstationary applications, the newer training data usually carry more information about systems, and the older ones possibly carry less information or even misleading information; that is, the training samples usually have timeliness. ReOS-ELM and KB-IELM cannot reflect the timeliness of sequential training data well. On the other hand, if a huge number of samples emerge, the storage space that KB-IELM requires for its matrix will increase without bound as learning goes on and new samples arrive ceaselessly; at last storage overflow will necessarily happen, so KB-IELM cannot be utilized at all under these circumstances.

In this paper we combine the advantages of FOS-ELM and ReOS-ELM and propose online regularized ELM with forgetting mechanism (FORELM) for time-varying applications. FORELM can overcome the potential matrix singularity problem by using regularization and eliminate the effects of outdated data on the model by incorporating the forgetting mechanism. As in FOS-ELM, the ensemble skill may also be employed in FORELM to enhance its stability; that is, FORELM comprises p (p ≥ 1) ReOS-ELMs with forgetting mechanism, each of which trains a SLFN, and the average of their outputs represents the final output of the ensemble of these SLFN. Additionally, the forgetting mechanism is also incorporated into KB-IELM, and online kernelized ELM with forgetting mechanism (FOKELM) is presented, which can deal with the matrix expansion problem of KB-IELM. The designed FORELM and FOKELM update the model recursively. The experimental results show the better performance of the FORELM and FOKELM approaches in nonstationary environments.

It should be noted that our methods adjust the output weights of SLFN due to addition and deletion of samples one by one, namely, learn and forget samples sequentially, and the network architecture is fixed. They are completely different from those offline incremental ELMs (I-ELM) [19–21] and incremental RELM [22], which seek an optimal network architecture by adding hidden nodes one by one and learning the data in batch mode.

The rest of this paper is organized as follows. Section 2 gives a brief review of the basic concepts and related works of ReOS-ELM and KB-IELM. Section 3 proposes the new online learning algorithms, namely, FORELM and FOKELM. Performance evaluation is conducted in Section 4. Conclusions are drawn in Section 5.
2 Brief Review of the ReOS-ELM and KB-IELM

For simplicity, the ELM based learning algorithm for SLFN with multiple inputs and a single output is discussed.
The output of a SLFN with L hidden nodes (additive or RBF nodes) can be represented by

  f(x) = Σ_{i=1}^{L} β_i G(a_i, b_i, x) = h(x)β,  x ∈ R^n, a_i ∈ R^n,   (1)

where a_i and b_i are the learning parameters of the hidden nodes, β = [β_1, β_2, ..., β_L]^T is the vector of the output weights, and G(a_i, b_i, x) denotes the output of the ith hidden node with respect to the input x, that is, the activation function; h(x) = [G(a_1, b_1, x), ..., G(a_L, b_L, x)] is a feature mapping from the n-dimensional input space to the L-dimensional hidden-layer feature space. In ELM, a_i and b_i are randomly determined first.

For a given set of distinct training data {(x_i, y_i)}_{i=1}^{N} ⊂ R^n × R, where x_i is an n-dimensional input vector and y_i is the corresponding scalar observation, the RELM, that is, constrained optimization based ELM, can be formulated as

  Minimize  L_P^ELM = (1/2)‖β‖² + c(1/2)‖ξ‖²,   (2a)
  Subject to  Hβ = Y − ξ,   (2b)

where ξ = [ξ_1, ξ_2, ..., ξ_N]^T denotes the training error, Y = [y_1, y_2, ..., y_N]^T indicates the target values of all the samples, H_{N×L} = [h(x_1)^T, ..., h(x_N)^T]^T is the mapping matrix for the inputs of all the samples, and c is the regularization parameter (a positive constant).
Based on the KKT theorem, the constrained optimization of (2a) and (2b) can be transferred to the following dual optimization problem:

  L_D^ELM = (1/2)‖β‖² + c(1/2)‖ξ‖² − α(Hβ − Y + ξ),   (3)

where α = [α_1, α_2, ..., α_N] is the Lagrange multipliers vector. Using the KKT optimality conditions, the following equations can be obtained:

  ∂L_D^ELM/∂β = 0 → β = H^T α^T,   (4a)
  ∂L_D^ELM/∂ξ = 0 → α^T = cξ,   (4b)
  ∂L_D^ELM/∂α = 0 → Hβ − Y + ξ = 0.   (4c)

Ultimately, β can be obtained as follows [12, 15, 16]:

  β = (c^{−1}I + H^T H)^{−1} H^T Y   (5a)

or

  β = H^T (c^{−1}I + HH^T)^{−1} Y.   (5b)

In order to reduce computational costs, when N > L one may prefer to apply solution (5a), and when N < L one may prefer to apply solution (5b).
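The equivalence of (5a) and (5b) follows from the matrix "push-through" identity and is easy to confirm numerically. The sketch below uses random stand-in data, not data from any experiment in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, c = 8, 5, 250.0              # samples, hidden nodes, regularization c
H = rng.standard_normal((N, L))    # stand-in mapping matrix
Y = rng.standard_normal((N, 1))    # stand-in targets

# (5a): beta = (c^-1 I + H^T H)^-1 H^T Y  (an L x L solve, cheaper when N > L)
beta_a = np.linalg.solve(np.eye(L) / c + H.T @ H, H.T @ Y)
# (5b): beta = H^T (c^-1 I + H H^T)^-1 Y  (an N x N solve, cheaper when N < L)
beta_b = H.T @ np.linalg.solve(np.eye(N) / c + H @ H.T, Y)

assert np.allclose(beta_a, beta_b)
```

The two solves differ only in the size of the linear system (L × L versus N × N), which is exactly the cost trade-off described above.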
If the feature mapping h(x) is unknown, one can apply Mercer's conditions on RELM. The kernel matrix is defined as Ω = HH^T with Ω_ij = h(x_i)h(x_j)^T = K(x_i, x_j). Then the output of SLFN by kernel based RELM can be given as

  f(x) = h(x)β = h(x)H^T (c^{−1}I + HH^T)^{−1} Y
       = [K(x, x_1), K(x, x_2), ..., K(x, x_N)] (c^{−1}I + Ω)^{−1} Y.   (6)
2.1 ReOS-ELM. The ReOS-ELM, that is, SRELM and LS-IELM, can be retold as follows.

For time k − 1, let h_i = h(x_i), H_{k−1} = [h_s^T, h_{s+1}^T, ..., h_{k−1}^T]^T, and Y_{k−1} = [y_s, y_{s+1}, ..., y_{k−1}]^T; then according to (5a) the solution of RELM can be expressed as

  P_{k−1} = (c^{−1}I + H_{k−1}^T H_{k−1})^{−1},   (7a)
  β_{k−1} = P_{k−1} H_{k−1}^T Y_{k−1}.   (7b)

For time k, the new sample (x_k, y_k) arrives; thus H_k = [H_{k−1}^T, h_k^T]^T and Y_k = [Y_{k−1}^T, y_k]^T. Applying the Sherman-Morrison-Woodbury (SMW) formula [23], the current P and β can be computed as

  P_k = (c^{−1}I + H_k^T H_k)^{−1} = (c^{−1}I + H_{k−1}^T H_{k−1} + h_k^T h_k)^{−1}
      = (P_{k−1}^{−1} + h_k^T h_k)^{−1}
      = P_{k−1} − P_{k−1} h_k^T h_k P_{k−1} / (1 + h_k P_{k−1} h_k^T),   (8a)

  β_k = P_k H_k^T Y_k = P_k (H_{k−1}^T Y_{k−1} + h_k^T y_k)
      = P_k (P_{k−1}^{−1} P_{k−1} H_{k−1}^T Y_{k−1} + h_k^T y_k)
      = P_k ((P_k^{−1} − h_k^T h_k) β_{k−1} + h_k^T y_k)
      = β_{k−1} + P_k h_k^T (y_k − h_k β_{k−1}).   (8b)
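The recursions (8a) and (8b) can be checked against the batch solution (5a). The sketch below (random stand-in data, not the authors' code) starts from P = cI and an all-zero β, which corresponds to zero seen samples, and reproduces the batch RELM solution after every update:

```python
import numpy as np

rng = np.random.default_rng(1)
L, c = 4, 250.0
P = c * np.eye(L)                  # P for zero samples: (c^-1 I)^-1 = c I
beta = np.zeros((L, 1))
H_rows, Y_rows = [], []

for k in range(30):                # samples arrive one by one
    h = rng.standard_normal((1, L))        # stand-in for the row h_k = h(x_k)
    y = rng.standard_normal((1, 1))
    # (8a): P_k = P_{k-1} - P_{k-1} h^T h P_{k-1} / (1 + h P_{k-1} h^T)
    P = P - (P @ h.T @ h @ P) / (1.0 + h @ P @ h.T)
    # (8b): beta_k = beta_{k-1} + P_k h^T (y_k - h beta_{k-1})
    beta = beta + P @ h.T @ (y - h @ beta)
    H_rows.append(h); Y_rows.append(y)

# the recursion reproduces the batch solution (5a) on all data seen so far
H = np.vstack(H_rows); Y = np.vstack(Y_rows)
beta_batch = np.linalg.solve(np.eye(L) / c + H.T @ H, H.T @ Y)
assert np.allclose(beta, beta_batch)
```

Each update costs O(L²) instead of the O(L³) of a fresh batch solve, which is the point of the recursive form.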
2.2 KB-IELM. For time k − 1, let

  A_{k−1} = c^{−1}I + Ω_{k−1}
          = [ c^{−1} + K(x_s, x_s)    K(x_s, x_{s+1})              ···  K(x_s, x_{k−1})
              K(x_{s+1}, x_s)         c^{−1} + K(x_{s+1}, x_{s+1}) ···  K(x_{s+1}, x_{k−1})
              ⋮                       ⋮                                 ⋮
              K(x_{k−1}, x_s)         K(x_{k−1}, x_{s+1})          ···  c^{−1} + K(x_{k−1}, x_{k−1}) ].   (9)

For time k, the new sample (x_k, y_k) arrives; thus

  A_k = c^{−1}I + Ω_k = [ A_{k−1}  v_k
                          v_k^T    v̄_k ],   (10)

where v_k = [K(x_k, x_s), K(x_k, x_{s+1}), ..., K(x_k, x_{k−1})]^T and v̄_k = K(x_k, x_k) + 1/c. Using the block matrix inverse formula [23], A_k^{−1} can be calculated from A_{k−1}^{−1} as

  A_k^{−1} = [ A_{k−1}^{−1} + A_{k−1}^{−1} v_k ρ_k^{−1} v_k^T A_{k−1}^{−1}   −A_{k−1}^{−1} v_k ρ_k^{−1}
               −ρ_k^{−1} v_k^T A_{k−1}^{−1}                                  ρ_k^{−1} ],   (11)

where ρ_k = v̄_k − v_k^T A_{k−1}^{−1} v_k.
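The growing-inverse update (11) can be verified against direct inversion. In this sketch (illustrative only; the Gaussian kernel and the value c = 250 mirror the experimental settings of Section 4, and the data are random stand-ins) the kernel matrix inverse is built one sample at a time:

```python
import numpy as np

def K(a, b, sigma=1.0):            # Gaussian kernel, as used in Section 4
    return float(np.exp(-np.sum((a - b) ** 2) / sigma))

rng = np.random.default_rng(2)
c = 250.0
X = [rng.standard_normal(3) for _ in range(12)]

# one sample: A_1^{-1} is the scalar 1/(K(x,x) + 1/c)
Ainv = np.array([[1.0 / (K(X[0], X[0]) + 1.0 / c)]])
for k in range(1, len(X)):
    v = np.array([[K(X[k], xi)] for xi in X[:k]])      # column vector v_k
    vbar = K(X[k], X[k]) + 1.0 / c
    rho = float(vbar - v.T @ Ainv @ v)                 # scalar rho_k
    # (11): grow A^{-1} by one row/column with the block inverse formula
    Ainv = np.block([
        [Ainv + (Ainv @ v @ v.T @ Ainv) / rho, -(Ainv @ v) / rho],
        [-(v.T @ Ainv) / rho,                  np.array([[1.0 / rho]])],
    ])

A = np.array([[K(xi, xj) for xj in X] for xi in X]) + np.eye(len(X)) / c
assert np.allclose(Ainv, np.linalg.inv(A))
```

Because ρ_k is a scalar, each growth step costs O(k²), but the stored inverse itself keeps growing, which is exactly the matrix expansion problem addressed in Section 3.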
3 The Proposed FORELM and FOKELM
When SLFN is employed to model a time-varying system online, training samples are not only generated one by one but also often have the property of timeliness; that is, training data have a period of validity. Therefore, during the learning process of an online sequential learning algorithm, the older or outdated training data, whose effectiveness is lower or lost after several unit times, should be abandoned, which is the idea of the forgetting mechanism [10]. ReOS-ELM (i.e., SRELM or LS-IELM) and KB-IELM cannot reflect the timeliness of sequential training data. In this section the forgetting mechanism is added to them to eliminate the outdated data that might have a misleading or bad effect on the built SLFN. On the other hand, for KB-IELM, abandoning samples can prevent the matrix A^{−1} of (11) from expanding infinitely. The computing procedures for deleting a sample are given, and the completed online regularized ELM and kernelized ELM with forgetting mechanism are presented.
3.1 Decremental RELM and FORELM. After RELM has studied a given number z of samples and the SLFN has been applied for prediction, RELM would discard the oldest sample (x_s, y_s) from the samples set.

Let H = [h_{s+1}^T, h_{s+2}^T, ..., h_k^T]^T and Y = [y_{s+1}, y_{s+2}, ..., y_k]^T; then H_k = [h_s^T, H^T]^T, Y_k = [y_s, Y^T]^T, and

  P = (c^{−1}I + H^T H)^{−1} = (c^{−1}I + H_k^T H_k − h_s^T h_s)^{−1} = (P_k^{−1} − h_s^T h_s)^{−1}.   (12)

Furthermore, using the SMW formula,

  P = P_k + P_k h_s^T h_s P_k / (1 − h_s P_k h_s^T).   (13)

Moreover,

  β = P H^T Y = P (H_k^T Y_k − h_s^T y_s) = P (P_k^{−1} P_k H_k^T Y_k − h_s^T y_s)
    = P ((P^{−1} + h_s^T h_s) β_k − h_s^T y_s) = β_k + P h_s^T (h_s β_k − y_s).   (14)

Next time, P_{k+1} and β_{k+1} can be calculated from P (viewed as P_k) and β (viewed as β_k) according to (8a) and (8b), respectively.
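The deletion formulas (13) and (14) can be verified the same way as the incremental ones. The following sketch (random stand-in data, illustrative only) removes the oldest sample and checks the result against a batch solve on the remaining window:

```python
import numpy as np

rng = np.random.default_rng(3)
L, c, N = 4, 250.0, 20
H = rng.standard_normal((N, L))
Y = rng.standard_normal((N, 1))

# batch RELM solution on the full window, as in (7a)-(7b)
P = np.linalg.inv(np.eye(L) / c + H.T @ H)
beta = P @ H.T @ Y

# forget the oldest sample (h_s, y_s) via (13) and (14)
h_s, y_s = H[:1, :], Y[:1, :]
P_new = P + (P @ h_s.T @ h_s @ P) / (1.0 - h_s @ P @ h_s.T)
beta_new = beta + P_new @ h_s.T @ (h_s @ beta - y_s)

# reference: batch solution recomputed without the oldest sample
P_ref = np.linalg.inv(np.eye(L) / c + H[1:].T @ H[1:])
assert np.allclose(P_new, P_ref)
assert np.allclose(beta_new, P_ref @ H[1:].T @ Y[1:])
```

Note that the denominator 1 − h_s P_k h_s^T in (13) is strictly positive whenever h_s is one of the samples already absorbed into P_k, since P_k^{−1} − h_s^T h_s = c^{−1}I + Σ_{i≠s} h_i^T h_i is positive definite.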
Suppose that FORELM consists of p ReOS-ELMs with forgetting mechanism, by which the SLFNs trained have the same output of hidden node G(a, b, x) and the same number of hidden nodes L. In the following FORELM algorithm, the variables and parameters with superscript r are relevant to the rth SLFN, which is trained by the rth ReOS-ELM with forgetting mechanism. Synthesizing ReOS-ELM and the decremental RELM, we get FORELM as follows.

Step 1 (initialization).
(1) Choose the hidden output function G(a, b, x) of SLFN with a certain activation function and the number of hidden nodes L; set the value of p.
(2) Randomly assign the hidden parameters (a_i^r, b_i^r), i = 1, 2, ..., L, r = 1, 2, ..., p.
(3) Determine c; set P_{k−1}^r = cI_{L×L}.

Step 2 (incrementally learn the initial z − 1 samples); that is, repeat the following procedure z − 1 times.
(1) Get the current sample (x_k, y_k).
(2) Calculate h_k^r and P_k^r: h_k^r = [G(a_1^r, b_1^r, x_k), ..., G(a_L^r, b_L^r, x_k)]; calculate P_k^r by (8a).
(3) Calculate β_k^r: if the sample (x_k, y_k) is the first one, then β_k^r = P_k^r (h_k^r)^T y_k; else calculate β_k^r by (8b).

Step 3 (online modeling and prediction); repeat the following procedure during every step.
(1) Acquire the current y_k, form the new sample (x_k, y_k), and calculate h_k^r, P_k^r, and β_k^r by (8a) and (8b).
(2) Prediction: form x_{k+1}; the output of the rth SLFN, that is, the prediction ŷ_{k+1}^r of y_{k+1}, can be calculated by (1): ŷ_{k+1}^r = h^r(x_{k+1}) β_k^r; the final prediction is ŷ_{k+1} = Σ_{r=1}^{p} ŷ_{k+1}^r / p.
(3) Delete the oldest sample (x_s, y_s): calculate P^r and β^r by (13) and (14), respectively.
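Putting (8a)-(8b) and (13)-(14) together gives the core of a single-network (p = 1) FORELM loop. In this sketch (illustrative, not the authors' code; the data are random, while the sigmoid hidden nodes and parameter ranges follow Section 4) the recursive β always matches the batch RELM solution on the current sliding window:

```python
import numpy as np

rng = np.random.default_rng(4)
L, c, z = 5, 250.0, 30
a = rng.uniform(-2, 2, (L, 2))     # random input weights, as in Section 4
b = rng.uniform(-2, 2, L)          # random biases

def h(x):                          # sigmoidal hidden-layer output h(x)
    return (1.0 / (1.0 + np.exp(a @ x + b))).reshape(1, L)

P = c * np.eye(L)
beta = np.zeros((L, 1))
window = []                        # sliding window of the z newest samples

for k in range(100):
    x = rng.standard_normal(2)
    y = rng.standard_normal((1, 1))
    hk = h(x)
    P = P - (P @ hk.T @ hk @ P) / (1.0 + hk @ P @ hk.T)     # learn: (8a)
    beta = beta + P @ hk.T @ (y - hk @ beta)                # learn: (8b)
    window.append((hk, y))
    if len(window) > z:                                     # forget oldest
        hs, ys = window.pop(0)
        P = P + (P @ hs.T @ hs @ P) / (1.0 - hs @ P @ hs.T) # (13)
        beta = beta + P @ hs.T @ (hs @ beta - ys)           # (14)

# beta equals the batch RELM solution on the current window
Hw = np.vstack([hw for hw, _ in window])
Yw = np.vstack([yw for _, yw in window])
assert np.allclose(beta, np.linalg.solve(np.eye(L) / c + Hw.T @ Hw, Hw.T @ Yw))
```

The full FORELM runs p such loops in parallel with independent random (a^r, b^r) and averages their predictions, as described in Step 3(2).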
3.2 Decremental KELM and FOKELM. After KELM has studied the given number z of samples and the SLFN has been applied for prediction, KELM would discard the oldest sample (x_s, y_s) from the samples set.

Let Ω_ij = K(x_{s+i}, x_{s+j}), i, j = 1, 2, ..., k − s, A = Ω + c^{−1}I, v = [K(x_s, x_{s+1}), ..., K(x_s, x_k)], and v̄ = K(x_s, x_s) + 1/c; then A_k can be written in the following partitioned matrix form:

  A_k = [ v̄    v
          v^T  A ].   (15)

Moreover, using the block matrix inverse formula, the following equation can be obtained:

  A_k^{−1} = [ ρ^{−1}             −ρ^{−1} v A^{−1}
               −A^{−1} v^T ρ^{−1}   A^{−1} + A^{−1} v^T ρ^{−1} v A^{−1} ],   (16)

where ρ = v̄ − v A^{−1} v^T.

Rewrite A_k^{−1} in the partitioned matrix form as

  A_k^{−1} = [ A_k^{−1}(1,1)       A_k^{−1}(1,2:end)
               A_k^{−1}(2:end,1)   A_k^{−1}(2:end,2:end) ].   (17)

Comparing (16) and (17), A^{−1} can be calculated as

  A^{−1} = A_k^{−1}(2:end,2:end) − A^{−1} v^T ρ^{−1} v A^{−1}
         = A_k^{−1}(2:end,2:end) − (−A^{−1} v^T ρ^{−1})(−ρ^{−1} v A^{−1}) / ρ^{−1}
         = A_k^{−1}(2:end,2:end) − A_k^{−1}(2:end,1) × A_k^{−1}(1,2:end) / A_k^{−1}(1,1).   (18)
Next time, A_{k+1}^{−1} can be computed from A^{−1} (viewed as A_k^{−1}) according to (11).

Integrating KB-IELM with the decremental KELM, we can obtain FOKELM as follows.

Step 1 (initialization). Choose the kernel K(x_i, x_j) with corresponding parameter values and determine c.

Step 2 (incrementally learn the initial z − 1 samples). Calculate A_k^{−1}: if only one sample exists, then A_k^{−1} = 1/(K(x_k, x_k) + c^{−1}); else calculate A_k^{−1} by (11).

Step 3 (online modeling and prediction).
(1) Acquire the new sample (x_k, y_k) and calculate A_k^{−1} by (11).
(2) Prediction: form x_{k+1} and calculate the prediction ŷ_{k+1} of y_{k+1} by (6), namely,

  ŷ_{k+1} = [K(x_{k+1}, x_{k−z+1}), ..., K(x_{k+1}, x_k)] A_k^{−1} [y_{k−z+1}, ..., y_k]^T.   (19)

(3) Delete the oldest sample (x_s, y_s): calculate A^{−1} by (18).
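Formula (18) deletes the oldest sample directly on the stored inverse, using only the blocks of A_k^{−1} itself. A small numerical check (Gaussian kernel as in Section 4, random stand-in data, illustrative only):

```python
import numpy as np

def K(a, b, sigma=1.0):            # Gaussian kernel, as used in Section 4
    return float(np.exp(-np.sum((a - b) ** 2) / sigma))

rng = np.random.default_rng(5)
c = 250.0
X = [rng.standard_normal(3) for _ in range(10)]

A = np.array([[K(xi, xj) for xj in X] for xi in X]) + np.eye(len(X)) / c
Ainv = np.linalg.inv(A)

# (18): inverse after deleting the oldest sample x_s (first row/column),
# computed from the blocks of A_k^{-1} alone
Ainv_new = Ainv[1:, 1:] - np.outer(Ainv[1:, 0], Ainv[0, 1:]) / Ainv[0, 0]

# reference: direct inversion without the oldest sample
A_ref = np.array([[K(xi, xj) for xj in X[1:]] for xi in X[1:]]) \
        + np.eye(len(X) - 1) / c
assert np.allclose(Ainv_new, np.linalg.inv(A_ref))
```

Combined with the growth step (11), this keeps the stored inverse at a fixed z × z size, which is how FOKELM avoids the matrix expansion of KB-IELM.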
4 Performance Evaluation
In this section the performance of the presented FORELM and FOKELM is verified via time-varying nonlinear process identification simulations. The simulations are designed to assess the accuracy, stability, and computational complexity of the proposed FORELM and FOKELM by comparison with FOS-ELM, ReOS-ELM (i.e., SRELM or LS-IELM), and dynamic regression extreme learning machine (DR-ELM) [24]. DR-ELM also is a kind of online sequential RELM and was designed by Zhang and Wang using solution (5b) of RELM and the block matrix inverse formula.

All the performance evaluations were executed in a MATLAB 7.01 environment running on Windows XP with an Intel Core i3-3220 3.3 GHz CPU and 4 GB RAM.
Simulation 1. The unknown identified system is a modified version of the one addressed in [25], obtained by changing the constant and the coefficients of variables to form a time-varying system, as done in [11]:

  y(k + 1) = y(k)/(1 + y(k)²) + u(k)³,        k ≤ 100;
             2y(k)/(2 + y(k)²) + u(k)³,       100 < k ≤ 300;
             y(k)/(1 + 2y(k)²) + 2u(k)³,      300 < k.   (20)

The system (20) can be expressed as follows:

  y(k) = f(x(k)),   (21)

where f(·) is a nonlinear function and x(k) is the regression input data vector

  x(k) = [y(k − 1), ..., y(k − n_y), u(k − n_d), ..., u(k − n_u)],   (22)
with n_d, n_u, and n_y being model structure parameters. Apply SLFN to approximate (20); accordingly, (x(k), y(k)) is the learning sample (x_k, y_k) of SLFN.

Denote k_0 = z + max(n_y, n_u) − n_d and k_1 = k_0 + 350. The input is set as follows:

  u(k) = rand(·),                k ≤ k_0;
         sin(2π(k − k_0)/120),   k_0 < k ≤ k_1;
         sin(2π(k − k_1)/50),    k_1 < k,   (23)

where rand(·) generates random numbers which are uniformly distributed in the interval (0, 1).
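For reference, generating the identification data from (20) and (23) is a short loop. The sketch below is illustrative, not the authors' code; the initial condition y(0) = 0 is an assumption, and the structure parameter values match those stated later in this section.

```python
import numpy as np

def u_input(k, k0, k1, rng):       # excitation (23)
    if k <= k0:
        return rng.random()                       # uniform on (0, 1)
    if k <= k1:
        return np.sin(2 * np.pi * (k - k0) / 120)
    return np.sin(2 * np.pi * (k - k1) / 50)

def step(y, u, k):                 # time-varying process (20): y(k+1) from y(k)
    if k <= 100:
        return y / (1 + y ** 2) + u ** 3
    if k <= 300:
        return 2 * y / (2 + y ** 2) + u ** 3
    return y / (1 + 2 * y ** 2) + 2 * u ** 3

rng = np.random.default_rng(6)
z, n_y, n_u, n_d = 50, 3, 2, 1     # window length and structure parameters
k0 = z + max(n_y, n_u) - n_d
k1 = k0 + 350
y = [0.0]                          # assumed initial condition y(0) = 0
for k in range(650):
    y.append(step(y[-1], u_input(k, k0, k1, rng), k))
assert len(y) == 651 and np.all(np.isfinite(y))
```

Each regression vector x(k) of (22) is then assembled from the delayed y and u values, and (x(k), y(k)) is fed to the online learners one pair at a time.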
In all experiments the output of a hidden node with respect to the input x of a SLFN in (1) is set to the sigmoidal additive function, that is, G(a, b, x) = 1/(1 + exp(a · x + b)); the components of a, that is, the input weights, and the bias b are randomly chosen from the range [−2, 2]. In FOKELM the Gaussian kernel function is applied, namely, K(x_i, x_j) = exp(−‖x_i − x_j‖²/σ).
The root-mean-square error (RMSE) of prediction and the maximal absolute prediction error (MAPE) are regarded as measure indices of model accuracy and stability, respectively. Consider

  RMSE = sqrt( (1/(t_2 − t_1 + 1)) Σ_{i=t_1}^{t_2} (y_i − ŷ_i)² ),
  MAPE = max_{i=t_1,...,t_2} |y_i − ŷ_i|,   (24)
where t_1 = k_0 + 1 and t_2 = 650. Simulation is carried on for 650 instances. In model (21), n_d = 1, n_u = 2, and n_y = 3. Due to the randomness of the parameters a, b, and u(t) during the initial stage (k ≤ k_0), the simulation results must possess variation; for each approach the results are averaged over 5 trials.

ReOS-ELM does not discard any old sample; thus it has z → ∞.

In offline modeling the training samples set is fixed; thus one may search relatively optimal values for the model parameters of ELMs. Nevertheless, in online modeling for a time-varying system the training samples set is changing, and it is difficult to choose optimal parameter values in practice. Therefore we set the same parameter of these ELMs to the same value manually; for example, their regularization parameters c are all set to 250. Then we compare their performances.
The RMSE and MAPE of the proposed ELMs and the other aforementioned ELMs are listed in Tables 1 and 2, respectively, and the corresponding running time (i.e., training time plus predicting time) of these various ELMs is given in Table 3.

From Tables 1-3 one can see the following results.

(1) The RMSE and MAPE of FORELM are smaller than those of FOS-ELM with the same z and L values. The reason is that H^T H in FOS-ELM may be (nearly) singular at some instances; thus (H^T H)^{−1} calculated recursively is meaningless and unreliable, and when L ≥ z FOS-ELM cannot work owing to its too large RMSE or MAPE. Accordingly, "×" represents nullification in Tables 1-3, whereas FORELM does not suffer from such a problem. In addition, the RMSE and MAPE of FOKELM also are smaller than those of FOS-ELM with the same z values.

(2) The RMSE of FORELM and FOKELM is smaller than that of ReOS-ELM with the same L and c when the parameter z, namely, the length of the sliding time window, is set properly. The reason is that ReOS-ELM neglects the timeliness of samples of the time-varying process and does not get rid of the effects of old samples; contrarily, FORELM and FOKELM stress the actions of the newest ones. It should be noticed that z is relevant to the characteristics of the specific time-varying system and its inputs. In Table 1, when z = 50, 70, or 100, the effect is good.

(3) When z is fixed, the RMSEs of these ELMs tend to decrease as L increases at first, but later the changes are not obvious.

(4) FORELM requires nearly the same time as FOS-ELM but more time than ReOS-ELM. It is because both FORELM and FOS-ELM involve p incremental and
6 Mathematical Problems in Engineering
Table 1: RMSE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

             Method             z = 50   z = 70   z = 100  z = 150  z = 200  z = ∞
L = 15       FOS-ELM (p = 2)    0.3474   0.0967   0.0731   0.0975   0.0817   —
             ReOS-ELM           —        —        —        —        —        0.0961
             DR-ELM             0.0681   0.0683   0.0712   0.0833   0.0671   —
             FORELM (p = 2)     0.0850   0.0708   0.0680   0.0816   0.0718   —
L = 25       FOS-ELM            0.9070   0.1808   0.1324   0.0821   0.0883   —
             ReOS-ELM           —        —        —        —        —        0.0804
             DR-ELM             0.0543   0.0670   0.0789   0.0635   0.0656   —
             FORELM             0.0578   0.0697   0.0833   0.0653   0.0600   —
L = 50       FOS-ELM            ×        ×        ×        9.9541   0.4806   —
             ReOS-ELM           —        —        —        —        —        0.0529
             DR-ELM             0.0377   0.0351   0.0345   0.0425   0.0457   —
             FORELM             0.0344   0.0320   0.0324   0.0457   0.0456   —
L = 100      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.0359
             DR-ELM             0.0298   0.0306   0.0308   0.0391   0.0393   —
             FORELM             0.0312   0.0263   0.0288   0.0405   0.0378   —
L = 150      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.0351
             DR-ELM             0.0268   0.0270   0.0281   0.0417   0.0365   —
             FORELM             0.0270   0.0276   0.0284   0.0378   0.0330   —
L = 200      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.0344
             DR-ELM             0.0259   0.0284   0.0272   0.0363   0.0351   —
             FORELM             0.0263   0.0289   0.0274   0.0355   0.0325   —
L = 400      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.0296
             DR-ELM             0.0240   0.0255   0.0249   0.0362   0.0327   —
             FORELM             0.0231   0.0248   0.0256   0.0372   0.0310   —
L = 800      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.0292
             DR-ELM             0.0233   0.0253   0.0270   0.0374   0.0320   —
             FORELM             0.0223   0.0256   0.0252   0.0407   0.0318   —
L = 1000     FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.0298
             DR-ELM             0.0234   0.0263   0.0278   0.0418   0.0323   —
             FORELM             0.0232   0.0238   0.0259   0.0372   0.0310   —
             FOKELM             0.0236   0.0251   0.0267   0.0386   0.0331   —

"—" represents nondefinition or inexistence in the case; "×" represents nullification owing to too large RMSE or MAPE.
decremental learning procedures, whereas ReOS-ELM performs only one incremental learning procedure.

(5) Both FORELM and DR-ELM use the regularization trick, so theoretically they should obtain the same or similar prediction effect, and Tables 1 and 2 show that their simulation results are indeed statistically similar. However, their time efficiencies differ. From Table 3 it can be seen that when L is small or L <= z, FORELM takes less time than DR-ELM. Thus, when L is small and modeling speed is preferred to accuracy, one may try to employ FORELM.

(6) When z is fixed, if L is large enough there are no significant differences among the RMSE of FORELM, the RMSE of DR-ELM, and the RMSE of FOKELM with appropriate parameter values. In Table 3 it is obvious that with L increasing DR-ELM costs more and more time, but the time cost of FOKELM is irrelevant to L. According to the procedures of DR-ELM and FOKELM, if L is large enough to make calculating h(x_i)h(x_j)^T more
Table 2: MAPE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

             Method             z = 50   z = 70   z = 100  z = 150   z = 200  z = ∞
L = 15       FOS-ELM (p = 2)    5.8327   0.6123   0.5415   1.0993    0.7288   —
             ReOS-ELM           —        —        —        —         —        0.3236
             DR-ELM             0.3412   0.4318   0.3440   0.7437    0.4007   —
             FORELM (p = 2)     0.2689   0.3639   0.3135   0.8583    0.4661   —
L = 25       FOS-ELM            5.0473   1.7010   2.4941   0.9537    0.9272   —
             ReOS-ELM           —        —        —        —         —        0.4773
             DR-ELM             0.3418   0.3158   0.4577   0.5680    0.3127   —
             FORELM             0.3391   0.2568   0.3834   0.6228    0.2908   —
L = 50       FOS-ELM            ×        ×        ×        209.8850  3.9992   —
             ReOS-ELM           —        —        —        —         —        0.2551
             DR-ELM             0.3876   0.3109   0.3808   0.5486    0.3029   —
             FORELM             0.3478   0.2960   0.3145   0.5504    0.3514   —
L = 100      FOS-ELM            ×        ×        ×        ×         ×        —
             ReOS-ELM           —        —        —        —         —        0.2622
             DR-ELM             0.2473   0.2748   0.2163   0.5471    0.3381   —
             FORELM             0.3229   0.2506   0.2074   0.5501    0.2936   —
L = 150      FOS-ELM            ×        ×        ×        ×         ×        —
             ReOS-ELM           —        —        —        —         —        0.3360
             DR-ELM             0.2725   0.2365   0.2133   0.5439    0.3071   —
             FORELM             0.2713   0.2418   0.2050   0.5432    0.3133   —
L = 200      FOS-ELM            ×        ×        ×        ×         ×        —
             ReOS-ELM           —        —        —        —         —        0.2687
             DR-ELM             0.2539   0.2454   0.2069   0.5443    0.3161   —
             FORELM             0.2643   0.2370   0.2029   0.5443    0.3148   —
L = 400      FOS-ELM            ×        ×        ×        ×         ×        —
             ReOS-ELM           —        —        —        —         —        0.2657
             DR-ELM             0.2520   0.2456   0.2053   0.5420    0.3260   —
             FORELM             0.2277   0.2382   0.2285   0.5430    0.3208   —
L = 800      FOS-ELM            ×        ×        ×        ×         ×        —
             ReOS-ELM           —        —        —        —         —        0.3784
             DR-ELM             0.2388   0.2401   0.3112   0.5415    0.3225   —
             FORELM             0.2128   0.2325   0.3006   0.5416    0.3247   —
L = 1000     FOS-ELM            ×        ×        ×        ×         ×        —
             ReOS-ELM           —        —        —        —         —        0.4179
             DR-ELM             0.2230   0.2550   0.3412   0.5431    0.3283   —
             FORELM             0.2228   0.2525   0.3347   0.5412    0.3293   —
             FOKELM             0.2397   0.2345   0.3125   0.5421    0.3178   —
complex than calculating K(x_i, x_j), FOKELM will take less time than DR-ELM.
To intuitively observe and compare the accuracy and stability of these ELMs, with the same a, b values and the same initial u(k) (k <= k_0) signal, the absolute prediction error (APE) curves of one trial of every approach (L = 25, z = 70) are shown in Figure 1. Clearly, Figure 1(a) shows that at a few instants the prediction errors of FOS-ELM are much greater, although at the other instants they are very small; thus FOS-ELM is unstable. Comparing Figures 1(b), 1(c), 1(d), and 1(e), we can see that at most instants the prediction errors of DR-ELM, FORELM, and FOKELM are smaller than those of ReOS-ELM, and the prediction effect of FORELM is similar to that of DR-ELM.

On the whole, FORELM and FOKELM have higher accuracy than FOS-ELM and ReOS-ELM.
Simulation 2. In this subsection the proposed methods are tested on modeling a second-order bioreactor process described by the following differential equations [26]:

$$\dot{c}_{1}(t) = -c_{1}(t)u(t) + c_{1}(t)\left(1 - c_{2}(t)\right)e^{c_{2}(t)/\gamma},$$
$$\dot{c}_{2}(t) = -c_{2}(t)u(t) + c_{1}(t)\left(1 - c_{2}(t)\right)e^{c_{2}(t)/\gamma}\,\frac{1 + \beta}{1 + \beta - c_{2}(t)}, \quad (25)$$

where c_1(t) is the cell concentration, which is considered as the output of the process (y(t) = c_1(t)); c_2(t) is the amount of nutrients per unit volume; and u(t) represents the flow rate, the control input (the excitation signal for modeling). Both c_1(t) and c_2(t) can take values between zero and one, and u(t) is allowed a magnitude in the interval [0, 2].
Table 3: Running time (seconds) comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

             Method             z = 50   z = 70   z = 100  z = 150  z = 200  z = ∞
L = 15       FOS-ELM (p = 2)    0.0620   0.0620   0.0621   0.0621   0.0622   —
             ReOS-ELM           —        —        —        —        —        0.0160
             DR-ELM             0.2650   0.3590   0.5781   1.0790   1.6102   —
             FORELM (p = 2)     0.0620   0.0620   0.0620   0.0621   0.0622   —
L = 25       FOS-ELM            0.0621   0.0621   0.0621   0.0623   0.0623   —
             ReOS-ELM           —        —        —        —        —        0.0167
             DR-ELM             0.2500   0.3750   0.6094   1.0630   1.6250   —
             FORELM             0.0621   0.0621   0.0621   0.0622   0.0622   —
L = 50       FOS-ELM            ×        ×        ×        0.1250   0.1250   —
             ReOS-ELM           —        —        —        —        —        0.0470
             DR-ELM             0.2650   0.3906   0.6250   1.1100   1.6560   —
             FORELM             0.1400   0.1562   0.1560   0.1250   0.1250   —
L = 100      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.1250
             DR-ELM             0.2813   0.4219   0.6719   1.2031   1.7380   —
             FORELM             0.3750   0.4063   0.3440   0.3440   0.3280   —
L = 150      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.2344
             DR-ELM             0.3125   0.4540   0.7500   1.2970   1.9220   —
             FORELM             0.7810   0.8003   0.6560   0.6720   0.6570   —
L = 200      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        0.3901
             DR-ELM             0.3290   0.5160   0.7810   1.3440   1.9690   —
             FORELM             1.2702   1.2750   1.1250   1.0320   0.9375   —
L = 400      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        1.7350
             DR-ELM             0.3906   0.5781   0.8750   1.4680   2.2820   —
             FORELM             6.6410   6.4530   6.2030   5.6090   5.1719   —
L = 800      FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        8.7190
             DR-ELM             0.5470   0.7970   1.2660   2.0938   2.9840   —
             FORELM             32.7820  31.6720  30.1400  27.5780  27.7500  —
L = 1000     FOS-ELM            ×        ×        ×        ×        ×        —
             ReOS-ELM           —        —        —        —        —        13.6094
             DR-ELM             0.6250   1.0000   1.4530   2.4380   3.4060   —
             FORELM             51.4530  49.3440  46.8750  43.1880  38.9060  —
             FOKELM             0.2811   0.4364   0.6951   1.1720   1.7813   —
In the simulation, the growth rate parameter is β = 0.02, and the nutrient inhibition parameter γ is considered the time-varying parameter; that is,

$$\gamma = \begin{cases} 0.3, & 0\,\text{s} < t \le 20\,\text{s}, \\ 0.48, & 20\,\text{s} < t \le 60\,\text{s}, \\ 0.6, & 60\,\text{s} < t. \end{cases} \quad (26)$$

Let T_s indicate the sampling interval. Denote t_0 = (z + max(n_y, n_u) − n_d)T_s and t_1 = t_0 + 350T_s. The input is set below:

$$u(t) = \begin{cases} 0.5\,\operatorname{rand}(\cdot) + 0.75, & t \le t_{0}, \\ 0.25\sin\left(\dfrac{2\pi(t - t_{0})}{24}\right) + 1, & t_{0} < t \le t_{1}, \\ 0.25\sin\left(\dfrac{2\pi(t - t_{1})}{10}\right) + 1, & t_{1} < t, \end{cases} \quad (27)$$

where at every sampling instant rand(·) generates random numbers uniformly distributed in the interval (0, 1).
Set T_s = 0.2 s. With the same a, b, and initial u(t), the APE curves of every approach (L = 20, z = 100) for one trial are drawn in Figure 2. Clearly, on the whole, the APE curves of FORELM and FOKELM lie below those of FOS-ELM and ReOS-ELM, and FORELM has nearly the same prediction effect as DR-ELM. Further, the RMSEs of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM are 0.096241, 0.012203, 0.007439, 0.007619, and 0.007102, respectively.

Through many comparative trials we may attain the same results as those in Simulation 1.
5. Conclusions

ReOS-ELM (i.e., SRELM or LS-IELM) can yield good generalization models and will not suffer from matrix singularity
[Figure 1: APE curves of ELMs on process (20) with input (23). Panels (a)-(e) show the APE curves of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM, respectively, versus the sample sequence.]
or ill-posed problems, but it is unsuitable for time-varying applications. On the other hand, FOS-ELM, thanks to its forgetting mechanism, can reflect the timeliness of data and train an SLFN in nonstationary environments, but it may encounter the matrix singularity problem and run unstably.

In this paper the forgetting mechanism is incorporated into ReOS-ELM, and we obtain FORELM, which blends the advantages of ReOS-ELM and FOS-ELM. In addition, the forgetting mechanism also is added to KB-IELM; consequently, FOKELM is obtained, which overcomes the matrix expansion problem of KB-IELM.

Performance comparison between the proposed ELMs and other ELMs was carried out on identification of time-varying systems in the aspects of accuracy, stability, and computational complexity. The experimental results show that, statistically, FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments. When the number L of hidden nodes is small, or L <= the length z of the sliding time window, FORELM has a time efficiency superiority over DR-ELM. On the other hand, if L is large enough, FOKELM will be faster than DR-ELM.
[Figure 2: APE curves of ELMs on process (25) with time-varying γ in (26). Panels (a)-(e) show the APE curves of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM, respectively, versus time (in units of T_s).]
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The work is supported by the Hunan Provincial Science and Technology Foundation of China (2011FJ6033).
References
[1] J. Park and I. W. Sandberg, "Universal approximation using radial basis function networks," Neural Computation, vol. 3, no. 2, pp. 246-257, 1991.
[2] G. B. Huang, Y. Q. Chen, and H. A. Babri, "Classification ability of single hidden layer feedforward neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 799-801, 2000.
[3] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24-38, 2005.
[4] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985-990, July 2004.
[5] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.
[6] N. Y. Liang, G. B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411-1423, 2006.
[7] J. S. Lim, "Partitioned online sequential extreme learning machine for large ordered system modeling," Neurocomputing, vol. 102, pp. 59-64, 2013.
[8] J. S. Lim, S. Lee, and H. S. Pang, "Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations," Neural Computing and Applications, vol. 22, no. 3-4, pp. 569-576, 2013.
[9] Y. Gu, J. F. Liu, Y. Q. Chen, X. L. Jiang, and H. C. Yu, "TOSELM: timeliness online sequential extreme learning machine," Neurocomputing, vol. 128, pp. 119-127, 2014.
[10] J. W. Zhao, Z. H. Wang, and D. S. Park, "Online sequential extreme learning machine with forgetting mechanism," Neurocomputing, vol. 87, pp. 79-89, 2012.
[11] X. Zhang and H. L. Wang, "Fixed-memory extreme learning machine and its applications," Control and Decision, vol. 27, no. 8, pp. 1206-1210, 2012.
[12] W. Y. Deng, Q. H. Zheng, and L. Chen, "Regularized extreme learning machine," in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 389-395, April 2009.
[13] H. T. Huynh and Y. Won, "Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks," Pattern Recognition Letters, vol. 32, no. 14, pp. 1930-1935, 2011.
[14] X. Zhang and H. L. Wang, "Time series prediction based on sequential regularized extreme learning machine and its application," Acta Aeronautica et Astronautica Sinica, vol. 32, no. 7, pp. 1302-1308, 2011.
[15] G. B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107-122, 2011.
[16] G. B. Huang, H. M. Zhou, X. J. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513-529, 2012.
[17] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
[18] L. Guo, J. H. Hao, and M. Liu, "An incremental extreme learning machine for online sequential learning problems," Neurocomputing, vol. 128, pp. 50-58, 2014.
[19] G. B. Huang, L. Chen, and C. K. Siew, "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006.
[20] G. B. Huang and L. Chen, "Convex incremental extreme learning machine," Neurocomputing, vol. 70, no. 16-18, pp. 3056-3062, 2007.
[21] G. B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine," Neurocomputing, vol. 71, no. 16-18, pp. 3460-3468, 2008.
[22] X. Zhang and H. L. Wang, "Incremental regularized extreme learning machine based on Cholesky factorization and its application to time series prediction," Acta Physica Sinica, vol. 60, no. 11, Article ID 110201, 2011.
[23] G. H. Golub and C. F. van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[24] X. Zhang and H. L. Wang, "Dynamic regression extreme learning machine and its application to small-sample time series prediction," Information and Control, vol. 40, no. 5, pp. 704-709, 2011.
[25] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.
[26] M. O. Efe, E. Abadoglu, and O. Kaynak, "A novel analysis and design of a neural network assisted nonlinear controller for a bioreactor," International Journal of Robust and Nonlinear Control, vol. 9, no. 11, pp. 799-815, 1999.
as Ω = HH^T and Ω_{ij} = h(x_i)h(x_j)^T = K(x_i, x_j). Then the output of the SLFN trained by kernel-based RELM can be given as

$$f(\mathbf{x}) = \mathbf{h}(\mathbf{x})\boldsymbol{\beta} = \mathbf{h}(\mathbf{x})\mathbf{H}^{T}\left(c^{-1}\mathbf{I} + \mathbf{H}\mathbf{H}^{T}\right)^{-1}\mathbf{Y} = \left[K(\mathbf{x},\mathbf{x}_{1}),\ K(\mathbf{x},\mathbf{x}_{2}),\ \ldots,\ K(\mathbf{x},\mathbf{x}_{N})\right]\left(c^{-1}\mathbf{I} + \boldsymbol{\Omega}\right)^{-1}\mathbf{Y}. \quad (6)$$
2.1. ReOS-ELM. The ReOS-ELM, that is, SRELM and LS-IELM, can be restated as follows.

For time k − 1, let h_i = h(x_i), H_{k−1} = [h_s^T, h_{s+1}^T, …, h_{k−1}^T]^T, and Y_{k−1} = [y_s, y_{s+1}, …, y_{k−1}]^T; then, according to (5a), the solution of RELM can be expressed as

$$\mathbf{P}_{k-1} = \left(c^{-1}\mathbf{I} + \mathbf{H}_{k-1}^{T}\mathbf{H}_{k-1}\right)^{-1}, \quad (7a)$$
$$\boldsymbol{\beta}_{k-1} = \mathbf{P}_{k-1}\mathbf{H}_{k-1}^{T}\mathbf{Y}_{k-1}. \quad (7b)$$
For time k, the new sample (x_k, y_k) arrives; thus H_k = [H_{k−1}^T, h_k^T]^T and Y_k = [Y_{k−1}^T, y_k]^T. Applying the Sherman-Morrison-Woodbury (SMW) formula [23], the current P and β can be computed as

$$\mathbf{P}_{k} = \left(c^{-1}\mathbf{I} + \mathbf{H}_{k}^{T}\mathbf{H}_{k}\right)^{-1} = \left(c^{-1}\mathbf{I} + \mathbf{H}_{k-1}^{T}\mathbf{H}_{k-1} + \mathbf{h}_{k}^{T}\mathbf{h}_{k}\right)^{-1} = \left(\mathbf{P}_{k-1}^{-1} + \mathbf{h}_{k}^{T}\mathbf{h}_{k}\right)^{-1} = \mathbf{P}_{k-1} - \frac{\mathbf{P}_{k-1}\mathbf{h}_{k}^{T}\mathbf{h}_{k}\mathbf{P}_{k-1}}{1 + \mathbf{h}_{k}\mathbf{P}_{k-1}\mathbf{h}_{k}^{T}}, \quad (8a)$$

$$\boldsymbol{\beta}_{k} = \mathbf{P}_{k}\mathbf{H}_{k}^{T}\mathbf{Y}_{k} = \mathbf{P}_{k}\left(\mathbf{H}_{k-1}^{T}\mathbf{Y}_{k-1} + \mathbf{h}_{k}^{T}y_{k}\right) = \mathbf{P}_{k}\left(\mathbf{P}_{k-1}^{-1}\mathbf{P}_{k-1}\mathbf{H}_{k-1}^{T}\mathbf{Y}_{k-1} + \mathbf{h}_{k}^{T}y_{k}\right) = \mathbf{P}_{k}\left(\left(\mathbf{P}_{k}^{-1} - \mathbf{h}_{k}^{T}\mathbf{h}_{k}\right)\boldsymbol{\beta}_{k-1} + \mathbf{h}_{k}^{T}y_{k}\right) = \boldsymbol{\beta}_{k-1} + \mathbf{P}_{k}\mathbf{h}_{k}^{T}\left(y_{k} - \mathbf{h}_{k}\boldsymbol{\beta}_{k-1}\right). \quad (8b)$$
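The recursions (8a)-(8b) transcribe directly into code. Below is a minimal NumPy sketch (the function names `relm_init` and `relm_increment` are my own, not the authors'): starting from P = cI and β = 0 before any sample is learned, each arriving sample triggers one rank-one SMW update, so P and β stay exactly equal to the batch RELM solution.

```python
import numpy as np

def relm_init(L, c):
    """Before any sample: P = (c^{-1} I)^{-1} = c I, beta = 0."""
    return c * np.eye(L), np.zeros((L, 1))

def relm_increment(P, beta, h, y):
    """Learn one sample via eqs. (8a)-(8b).
    h : 1 x L hidden-layer output row for the new input x_k;
    y : scalar target y_k."""
    Ph = P @ h.T                                           # L x 1
    P_new = P - (Ph @ (h @ P)) / (1.0 + float(h @ Ph))     # (8a)
    beta_new = beta + P_new @ h.T * (y - float(h @ beta))  # (8b)
    return P_new, beta_new
```

Because the updates are algebraically exact, after learning rows H and targets Y the recursion reproduces (c^{-1}I + H^T H)^{-1} H^T Y to machine precision.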
2.2. KB-IELM. For time k − 1, let

$$\mathbf{A}_{k-1} = \left(c^{-1}\mathbf{I} + \boldsymbol{\Omega}_{k-1}\right) = \begin{bmatrix} c^{-1} + K(\mathbf{x}_{s},\mathbf{x}_{s}) & K(\mathbf{x}_{s},\mathbf{x}_{s+1}) & \cdots & K(\mathbf{x}_{s},\mathbf{x}_{k-1}) \\ K(\mathbf{x}_{s+1},\mathbf{x}_{s}) & c^{-1} + K(\mathbf{x}_{s+1},\mathbf{x}_{s+1}) & \cdots & K(\mathbf{x}_{s+1},\mathbf{x}_{k-1}) \\ \vdots & \vdots & \ddots & \vdots \\ K(\mathbf{x}_{k-1},\mathbf{x}_{s}) & K(\mathbf{x}_{k-1},\mathbf{x}_{s+1}) & \cdots & c^{-1} + K(\mathbf{x}_{k-1},\mathbf{x}_{k-1}) \end{bmatrix}. \quad (9)$$
For time k, the new sample (x_k, y_k) arrives; thus

$$\mathbf{A}_{k} = c^{-1}\mathbf{I} + \boldsymbol{\Omega}_{k} = \begin{bmatrix} \mathbf{A}_{k-1} & \mathbf{v}_{k} \\ \mathbf{v}_{k}^{T} & \nu_{k} \end{bmatrix}, \quad (10)$$

where v_k = [K(x_k, x_s), K(x_k, x_{s+1}), …, K(x_k, x_{k−1})]^T and the scalar ν_k = K(x_k, x_k) + 1/c. Using the block matrix inverse formula [23],
A_k^{-1} can be calculated from A_{k−1}^{-1} as

$$\mathbf{A}_{k}^{-1} = \begin{bmatrix} \mathbf{A}_{k-1} & \mathbf{v}_{k} \\ \mathbf{v}_{k}^{T} & \nu_{k} \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{A}_{k-1}^{-1} + \mathbf{A}_{k-1}^{-1}\mathbf{v}_{k}\rho_{k}^{-1}\mathbf{v}_{k}^{T}\mathbf{A}_{k-1}^{-1} & -\mathbf{A}_{k-1}^{-1}\mathbf{v}_{k}\rho_{k}^{-1} \\ -\rho_{k}^{-1}\mathbf{v}_{k}^{T}\mathbf{A}_{k-1}^{-1} & \rho_{k}^{-1} \end{bmatrix}, \quad (11)$$

where ρ_k = ν_k − v_k^T A_{k−1}^{-1} v_k.
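A minimal NumPy sketch of the growth step (11) follows (the function names and the Gaussian-kernel helper are my own; the update exploits the symmetry of A_{k−1}, which holds here since it is a regularized kernel matrix):

```python
import numpy as np

def gaussian_kernel(xi, xj, sigma=1.0):
    """Gaussian kernel used later in Section 4: K(xi, xj) = exp(-||xi - xj||^2 / sigma)."""
    d = xi - xj
    return float(np.exp(-np.dot(d, d) / sigma))

def kb_ielm_increment(A_inv, v, nu):
    """Grow A_{k-1}^{-1} into A_k^{-1} by the block matrix inverse formula (11).
    A_inv : (m, m) inverse for the m stored samples;
    v     : (m, 1) kernel values of the new sample against the stored samples;
    nu    : scalar K(x_k, x_k) + 1/c."""
    Av = A_inv @ v                          # A_{k-1}^{-1} v_k
    rho = float(nu - v.T @ Av)              # rho_k = nu_k - v_k^T A_{k-1}^{-1} v_k
    top_left = A_inv + (Av @ Av.T) / rho    # uses symmetry: (A^{-1} v)^T = v^T A^{-1}
    return np.block([[top_left, -Av / rho],
                     [-Av.T / rho, np.array([[1.0 / rho]])]])
```

Growing the inverse one sample at a time reproduces the direct inverse of K + c^{-1}I exactly, which is what KB-IELM relies on.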
3. The Proposed FORELM and FOKELM

When an SLFN is employed to model a time-varying system online, training samples are not only generated one by one but also often have the property of timeliness; that is, training data have a period of validity. Therefore, during the learning process of an online sequential learning algorithm, the older or outdated training data, whose effectiveness lessens or is lost after several unit times, should be abandoned; this is the idea of the forgetting mechanism [10]. ReOS-ELM (i.e., SRELM or LS-IELM) and KB-IELM cannot reflect the timeliness of sequential training data. In this section, the forgetting mechanism is added to them to eliminate the outdated data that might have a misleading or bad effect on the built SLFN. On the other hand, for KB-IELM, abandoning samples prevents the matrix A^{-1} of (11) from expanding infinitely. The computing procedures for deleting a sample are given, and the completed online regularized ELM and kernelized ELM with forgetting mechanism are presented.
3.1. Decremental RELM and FORELM. After RELM has studied a given number z of samples and the SLFN has been applied for prediction, RELM discards the oldest sample (x_s, y_s) from the sample set.

Let H̃ = [h_{s+1}^T, h_{s+2}^T, …, h_k^T]^T and Ỹ = [y_{s+1}, y_{s+2}, …, y_k]^T; then H_k = [h_s^T, H̃^T]^T, Y_k = [y_s, Ỹ^T]^T, and

$$\tilde{\mathbf{P}} = \left(c^{-1}\mathbf{I} + \tilde{\mathbf{H}}^{T}\tilde{\mathbf{H}}\right)^{-1} = \left(c^{-1}\mathbf{I} + \mathbf{H}_{k}^{T}\mathbf{H}_{k} - \mathbf{h}_{s}^{T}\mathbf{h}_{s}\right)^{-1} = \left(\mathbf{P}_{k}^{-1} - \mathbf{h}_{s}^{T}\mathbf{h}_{s}\right)^{-1}. \quad (12)$$

Furthermore, using the SMW formula,

$$\tilde{\mathbf{P}} = \mathbf{P}_{k} + \frac{\mathbf{P}_{k}\mathbf{h}_{s}^{T}\mathbf{h}_{s}\mathbf{P}_{k}}{1 - \mathbf{h}_{s}\mathbf{P}_{k}\mathbf{h}_{s}^{T}}. \quad (13)$$

Moreover,

$$\tilde{\boldsymbol{\beta}} = \tilde{\mathbf{P}}\tilde{\mathbf{H}}^{T}\tilde{\mathbf{Y}} = \tilde{\mathbf{P}}\left(\mathbf{H}_{k}^{T}\mathbf{Y}_{k} - \mathbf{h}_{s}^{T}y_{s}\right) = \tilde{\mathbf{P}}\left(\mathbf{P}_{k}^{-1}\mathbf{P}_{k}\mathbf{H}_{k}^{T}\mathbf{Y}_{k} - \mathbf{h}_{s}^{T}y_{s}\right) = \tilde{\mathbf{P}}\left(\left(\tilde{\mathbf{P}}^{-1} + \mathbf{h}_{s}^{T}\mathbf{h}_{s}\right)\boldsymbol{\beta}_{k} - \mathbf{h}_{s}^{T}y_{s}\right) = \boldsymbol{\beta}_{k} + \tilde{\mathbf{P}}\mathbf{h}_{s}^{T}\left(\mathbf{h}_{s}\boldsymbol{\beta}_{k} - y_{s}\right). \quad (14)$$

Next time, P_{k+1} and β_{k+1} can be calculated from P̃ (viewed as P_k) and β̃ (viewed as β_k) according to (8a) and (8b), respectively.
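The deletion step (13)-(14) can be sketched analogously (again, the function name is my own; h_s is the stored 1 × L hidden-layer output row of the sample being forgotten, y_s its target):

```python
import numpy as np

def relm_decrement(P_k, beta_k, h_s, y_s):
    """Forget the oldest sample (x_s, y_s) via eqs. (13)-(14)."""
    Ph = P_k @ h_s.T                                                # L x 1
    P = P_k + (Ph @ (h_s @ P_k)) / (1.0 - float(h_s @ Ph))          # (13)
    beta = beta_k + P @ h_s.T * (float(h_s @ beta_k) - y_s)         # (14)
    return P, beta
```

Like the incremental step, this update is exact: removing the first row of H from the batch solution and applying (13)-(14) give identical P and β.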
Suppose that FORELM consists of p ReOS-ELMs with forgetting mechanism, by which the trained SLFNs have the same hidden-node output function G(a, b, x) and the same number of hidden nodes L. In the following FORELM algorithm, the variables and parameters with superscript r are relevant to the rth SLFN trained by the rth ReOS-ELM with forgetting mechanism. Synthesizing ReOS-ELM and the decremental RELM, we get FORELM as follows.

Step 1. Initialization.

(1) Choose the hidden output function G(a, b, x) of the SLFN with a certain activation function and the number of hidden nodes L. Set the value of p.

(2) Randomly assign the hidden parameters (a_i^r, b_i^r), i = 1, 2, …, L, r = 1, 2, …, p.

(3) Determine c; set P_{k−1}^r = cI_{L×L}.

Step 2. Incrementally learn the initial z − 1 samples; that is, repeat the following procedure z − 1 times.

(1) Get the current sample (x_k, y_k).

(2) Calculate h_k^r and P_k^r: h_k^r = [G(a_1^r, b_1^r, x_k), …, G(a_L^r, b_L^r, x_k)]; calculate P_k^r by (8a).

(3) Calculate β_k^r: if the sample (x_k, y_k) is the first one, then β_k^r = P_k^r (h_k^r)^T y_k; else calculate β_k^r by (8b).

Step 3. Online modeling and prediction; repeat the following procedure at every step.

(1) Acquire the current y_k, form the new sample (x_k, y_k), and calculate h_k^r, P_k^r, and β_k^r by (8a) and (8b).

(2) Prediction: form x_{k+1}; the output of the rth SLFN, that is, the prediction ŷ_{k+1}^r of y_{k+1}, can be calculated by (1): ŷ_{k+1}^r = h^r(x_{k+1})β_k^r; the final prediction is ŷ_{k+1} = (1/p) Σ_{r=1}^{p} ŷ_{k+1}^r.

(3) Delete the oldest sample (x_s, y_s); calculate P̃^r and β̃^r by (13) and (14), respectively.
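To make the flow of Steps 1-3 concrete, here is a compact single-network (p = 1) sketch of FORELM combining the incremental updates (8a)-(8b) with the forgetting updates (13)-(14); the class name and interface are my own, while the sigmoidal additive node and the [−2, 2] parameter range follow Section 4.

```python
import numpy as np

class FORELMSketch:
    """Single-SLFN (p = 1) sketch of FORELM: learn each new sample by
    (8a)-(8b), forget the oldest one by (13)-(14) once the window exceeds z."""

    def __init__(self, n_in, L=20, c=250.0, z=50, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.uniform(-2.0, 2.0, (L, n_in))   # random input weights
        self.b = rng.uniform(-2.0, 2.0, (L, 1))      # random hidden biases
        self.P = c * np.eye(L)                       # P before any sample
        self.beta = np.zeros((L, 1))
        self.z = z                                   # sliding-window length
        self.window = []                             # stored (h, y) pairs

    def _h(self, x):
        """Sigmoidal additive hidden-layer output, a 1 x L row."""
        return (1.0 / (1.0 + np.exp(self.a @ x.reshape(-1, 1) + self.b))).T

    def learn(self, x, y):
        h = self._h(x)
        Ph = self.P @ h.T
        self.P = self.P - (Ph @ (h @ self.P)) / (1.0 + float(h @ Ph))      # (8a)
        self.beta = self.beta + self.P @ h.T * (y - float(h @ self.beta))  # (8b)
        self.window.append((h, y))
        if len(self.window) > self.z:                # forget the oldest sample
            hs, ys = self.window.pop(0)
            Ph = self.P @ hs.T
            self.P = self.P + (Ph @ (hs @ self.P)) / (1.0 - float(hs @ Ph))       # (13)
            self.beta = self.beta + self.P @ hs.T * (float(hs @ self.beta) - ys)  # (14)

    def predict(self, x):
        return float(self._h(x) @ self.beta)
```

Because every update is exact, after any number of steps β equals the batch RELM solution computed on the current window only, which is precisely what the forgetting mechanism is meant to achieve.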
3.2. Decremental KELM and FOKELM. After KELM has studied the given number z of samples and the SLFN has been applied for prediction, KELM discards the oldest sample (x_s, y_s) from the sample set.

Let Ω̃_{ij} = K(x_{s+i}, x_{s+j}), i, j = 1, 2, …, k − s, Ã = Ω̃ + c^{-1}I, v = [K(x_s, x_{s+1}), …, K(x_s, x_k)], and ν = K(x_s, x_s) + 1/c; then A_k can be written in the following partitioned matrix form:

$$\mathbf{A}_{k} = \boldsymbol{\Omega} + c^{-1}\mathbf{I} = \begin{bmatrix} \nu & \mathbf{v} \\ \mathbf{v}^{T} & \tilde{\mathbf{A}} \end{bmatrix}. \quad (15)$$

Moreover, using the block matrix inverse formula, the following equation can be obtained:

$$\mathbf{A}_{k}^{-1} = \begin{bmatrix} \nu & \mathbf{v} \\ \mathbf{v}^{T} & \tilde{\mathbf{A}} \end{bmatrix}^{-1} = \begin{bmatrix} \rho^{-1} & -\rho^{-1}\mathbf{v}\tilde{\mathbf{A}}^{-1} \\ -\tilde{\mathbf{A}}^{-1}\mathbf{v}^{T}\rho^{-1} & \tilde{\mathbf{A}}^{-1} + \tilde{\mathbf{A}}^{-1}\mathbf{v}^{T}\rho^{-1}\mathbf{v}\tilde{\mathbf{A}}^{-1} \end{bmatrix}, \quad (16)$$

where ρ = ν − vÃ^{-1}v^T.

Rewrite A_k^{-1} in the partitioned matrix form as
$$\mathbf{A}_{k}^{-1} = \begin{bmatrix} \mathbf{A}_{k(1,1)}^{-1} & \mathbf{A}_{k(1,2:\text{end})}^{-1} \\ \mathbf{A}_{k(2:\text{end},1)}^{-1} & \mathbf{A}_{k(2:\text{end},2:\text{end})}^{-1} \end{bmatrix}. \quad (17)$$

Comparing (16) and (17), Ã^{-1} can be calculated as

$$\tilde{\mathbf{A}}^{-1} = \mathbf{A}_{k(2:\text{end},2:\text{end})}^{-1} - \tilde{\mathbf{A}}^{-1}\mathbf{v}^{T}\rho^{-1}\mathbf{v}\tilde{\mathbf{A}}^{-1} = \mathbf{A}_{k(2:\text{end},2:\text{end})}^{-1} - \frac{\left(-\tilde{\mathbf{A}}^{-1}\mathbf{v}^{T}\rho^{-1}\right)\left(-\rho^{-1}\mathbf{v}\tilde{\mathbf{A}}^{-1}\right)}{\rho^{-1}} = \mathbf{A}_{k(2:\text{end},2:\text{end})}^{-1} - \frac{\mathbf{A}_{k(2:\text{end},1)}^{-1}\,\mathbf{A}_{k(1,2:\text{end})}^{-1}}{\mathbf{A}_{k(1,1)}^{-1}}. \quad (18)$$
Next time, A_{k+1}^{-1} is computed from Ã^{-1} (viewed as A_k^{-1}) according to (11).

Integrating KB-IELM with the decremental KELM, we obtain FOKELM as follows.

Step 1. Initialization: choose the kernel K(x_i, x_j) with corresponding parameter values, and determine c.

Step 2. Incrementally learn the initial z − 1 samples: calculate A_k^{-1}; if there exists only one sample, then A_k^{-1} = 1/(K(x_k, x_k) + c^{-1}); else calculate A_k^{-1} by (11).

Step 3. Online modeling and prediction.

(1) Acquire the new sample (x_k, y_k) and calculate A_k^{-1} by (11).

(2) Prediction: form x_{k+1} and calculate the prediction ŷ_{k+1} of y_{k+1} by (6), namely,

$$\hat{y}_{k+1} = \left[K\left(\mathbf{x}_{k+1}, \mathbf{x}_{k-z+1}\right), \ldots, K\left(\mathbf{x}_{k+1}, \mathbf{x}_{k}\right)\right]\mathbf{A}_{k}^{-1}\left[y_{k-z+1}, \ldots, y_{k}\right]^{T}. \quad (19)$$

(3) Delete the oldest sample (x_s, y_s); calculate Ã^{-1} by (18).
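The deletion step (18) amounts to a single rank-one correction of a submatrix of A_k^{-1}, so no re-inversion is needed. A one-function NumPy sketch (name mine):

```python
import numpy as np

def kelm_decrement(A_inv):
    """Drop the oldest sample via eq. (18):
    A~^{-1} = A_k^{-1}[1:,1:] - A_k^{-1}[1:,0] A_k^{-1}[0,1:] / A_k^{-1}[0,0]."""
    return A_inv[1:, 1:] - (A_inv[1:, :1] @ A_inv[:1, 1:]) / A_inv[0, 0]
```

Shrinking the stored inverse this way matches inverting the submatrix of A_k with the first row and column removed, which is exactly what discarding (x_s, y_s) requires.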
4. Performance Evaluation

In this section the performance of the presented FORELM and FOKELM is verified via time-varying nonlinear process identification simulations, designed to assess the accuracy, stability, and computational complexity of the proposed FORELM and FOKELM
by comparison with FOS-ELM, ReOS-ELM (i.e., SRELM or LS-IELM), and the dynamic regression extreme learning machine (DR-ELM) [24]. DR-ELM also is a kind of online sequential RELM, designed by Zhang and Wang using solution (5b) of RELM and the block matrix inverse formula.

All the performance evaluations were executed in the MATLAB 7.01 environment running on Windows XP with an Intel Core i3-3220 3.3 GHz CPU and 4 GB RAM.
Simulation 1. The unknown identified system is a modified version of the one addressed in [25]; the constant and the coefficients of the variables are changed to form a time-varying system, as done in [11]:

$$y(k+1) = \begin{cases} \dfrac{y(k)}{1 + y(k)^{2}} + u(k)^{3}, & k \le 100, \\[2mm] \dfrac{2y(k)}{2 + y(k)^{2}} + u(k)^{3}, & 100 < k \le 300, \\[2mm] \dfrac{y(k)}{1 + 2y(k)^{2}} + 2u(k)^{3}, & 300 < k. \end{cases} \quad (20)$$
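The piecewise system (20) transcribes directly into code; a minimal sketch (the function name `process_output` is my own, and the switching thresholds follow (20)):

```python
def process_output(k, y_k, u_k):
    """Next output y(k+1) of the time-varying system (20),
    given the instant k, current output y(k), and input u(k)."""
    if k <= 100:
        return y_k / (1.0 + y_k ** 2) + u_k ** 3
    if k <= 300:
        return 2.0 * y_k / (2.0 + y_k ** 2) + u_k ** 3
    return y_k / (1.0 + 2.0 * y_k ** 2) + 2.0 * u_k ** 3
```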
The system (20) can be expressed as

$$y(k) = f(\mathbf{x}(k)), \quad (21)$$

where f(·) is a nonlinear function and x(k) is the regression input data vector

$$\mathbf{x}(k) = \left[y(k-1), \ldots, y(k-n_{y}), u(k-n_{d}), \ldots, u(k-n_{u})\right], \quad (22)$$

with n_d, n_u, and n_y being model structure parameters. An SLFN is applied to approximate (20); accordingly, (x(k), y(k)) is the learning sample (x_k, y_k) of the SLFN.
Denote k_0 = z + max(n_y, n_u) − n_d and k_1 = k_0 + 350. The input is set as follows:

$$u(k) = \begin{cases} \operatorname{rand}(\cdot), & k \le k_{0}, \\[1mm] \sin\left(\dfrac{2\pi(k - k_{0})}{120}\right), & k_{0} < k \le k_{1}, \\[1mm] \sin\left(\dfrac{2\pi(k - k_{1})}{50}\right), & k_{1} < k, \end{cases} \quad (23)$$
where rand(sdot) generates random numbers which are uni-formly distributed in the interval (0 1)
In all experiments the output of hidden nodewith respectto the input x of a SLFN in (1) is set to the sigmoidal additivefunction that is 119866(a 119887 x) = 1(1 + exp(a sdot x + 119887)) thecomponents of a that is the input weights and bias 119887 arerandomly chosen from the range [minus2 2] In FOKELM theGaussian kernel function is applied namely 119870(x
119894 x119895) =
exp(minus119909119894minus 1199091198952120590)
The root-mean-square error (RMSE) of prediction and the maximal absolute prediction error (MAPE) are regarded as measure indices of model accuracy and stability, respectively:

$$\text{RMSE} = \sqrt{\frac{1}{t_{2} - t_{1} + 1}\sum_{i=t_{1}}^{t_{2}}\left(y_{i} - \hat{y}_{i}\right)^{2}}, \qquad \text{MAPE} = \max_{i = t_{1}, \ldots, t_{2}}\left|y_{i} - \hat{y}_{i}\right|, \quad (24)$$
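The two indices in (24) are straightforward to compute; below is a small NumPy sketch (function names `rmse` and `mape` are mine; note that MAPE here means the maximal absolute prediction error, as the paper defines it, not the more common mean absolute percentage error):

```python
import numpy as np

def rmse(y_true, y_hat, t1, t2):
    """RMSE of eq. (24) over instants t1..t2 (1-based, inclusive)."""
    e = np.asarray(y_true, dtype=float)[t1 - 1:t2] - np.asarray(y_hat, dtype=float)[t1 - 1:t2]
    return float(np.sqrt(np.mean(e ** 2)))

def mape(y_true, y_hat, t1, t2):
    """Maximal absolute prediction error of eq. (24) over the same range."""
    e = np.asarray(y_true, dtype=float)[t1 - 1:t2] - np.asarray(y_hat, dtype=float)[t1 - 1:t2]
    return float(np.max(np.abs(e)))
```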
where t_1 = k_0 + 1 and t_2 = 650. The simulation is carried on for 650 instants. In model (21), n_d = 1, n_u = 2, and n_y = 3. Owing to the randomness of the parameters a, b and of u(k) during the initial stage (k <= k_0), the simulation results vary; for each approach, the results are averaged over 5 trials.

ReOS-ELM does not discard any old samples; thus it has
119911 rarr infinIn offline modeling the training samples set is fixed thus
one may search relative optimal values for model parameterof ELMs Nevertheless in online modeling for time-varyingsystem the training samples set is changing it is difficult tochoose optimal values for parameters in practice Thereforewe set the same parameter of these ELMs with the same valuemanually for example their parameters 119862 all are set to 250then we compare their performances
RMSE and MAPE of the proposed ELMs and other aforementioned ELMs are listed in Tables 1 and 2, respectively, and the corresponding running time (i.e., training time plus predicting time) of these various ELMs is given in Table 3.

From Tables 1–3 one can see the following results.

(1) RMSE and MAPE of FORELM are smaller than those of FOS-ELM with the same z and L values. The reason is that H^T H in FOS-ELM may be (nearly) singular at some instances; thus (H^T H)^{-1} calculated recursively is meaningless and unreliable, and when L ≥ z FOS-ELM cannot work owing to its too large RMSE or MAPE. Accordingly, "×" represents nullification in Tables 1–3, whereas FORELM does not suffer from such a problem. In addition, RMSE and MAPE of FOKELM also are smaller than those of FOS-ELM with the same z values.

(2) RMSE of FORELM and FOKELM is smaller than that of ReOS-ELM with the same L and C when the parameter z, namely, the length of the sliding time window, is set properly. The reason is that ReOS-ELM neglects the timeliness of samples of the time-varying process and does not get rid of the effects of old samples; contrarily, FORELM and FOKELM stress the actions of the newest ones. It should be noticed that z is relevant to the characteristics of the specific time-varying system and its inputs; in Table 1, when z = 50, 70, or 100 the effect is good.

(3) When z is fixed, with L increasing, the RMSEs of these ELMs tend to decrease at first, but later the changes are not obvious.

(4) FORELM requires nearly the same time as FOS-ELM but more time than ReOS-ELM. This is because both FORELM and FOS-ELM involve p incremental and
6 Mathematical Problems in Engineering
Table 1: RMSE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

z                  50       70       100      150      200      ∞
L = 15
  FOS-ELM (p = 2)  0.3474   0.0967   0.0731   0.0975   0.0817   —
  ReOS-ELM         —        —        —        —        —        0.0961
  DR-ELM           0.0681   0.0683   0.0712   0.0833   0.0671   —
  FORELM (p = 2)   0.0850   0.0708   0.0680   0.0816   0.0718   —
L = 25
  FOS-ELM          0.9070   0.1808   0.1324   0.0821   0.0883   —
  ReOS-ELM         —        —        —        —        —        0.0804
  DR-ELM           0.0543   0.0670   0.0789   0.0635   0.0656   —
  FORELM           0.0578   0.0697   0.0833   0.0653   0.0600   —
L = 50
  FOS-ELM          ×        ×        ×        9.9541   0.4806   —
  ReOS-ELM         —        —        —        —        —        0.0529
  DR-ELM           0.0377   0.0351   0.0345   0.0425   0.0457   —
  FORELM           0.0344   0.0320   0.0324   0.0457   0.0456   —
L = 100
  FOS-ELM          ×        ×        ×        ×        ×        —
  ReOS-ELM         —        —        —        —        —        0.0359
  DR-ELM           0.0298   0.0306   0.0308   0.0391   0.0393   —
  FORELM           0.0312   0.0263   0.0288   0.0405   0.0378   —
L = 150
  FOS-ELM          ×        ×        ×        ×        ×        —
  ReOS-ELM         —        —        —        —        —        0.0351
  DR-ELM           0.0268   0.0270   0.0281   0.0417   0.0365   —
  FORELM           0.0270   0.0276   0.0284   0.0378   0.0330   —
L = 200
  FOS-ELM          ×        ×        ×        ×        ×        —
  ReOS-ELM         —        —        —        —        —        0.0344
  DR-ELM           0.0259   0.0284   0.0272   0.0363   0.0351   —
  FORELM           0.0263   0.0289   0.0274   0.0355   0.0325   —
L = 400
  FOS-ELM          ×        ×        ×        ×        ×        —
  ReOS-ELM         —        —        —        —        —        0.0296
  DR-ELM           0.0240   0.0255   0.0249   0.0362   0.0327   —
  FORELM           0.0231   0.0248   0.0256   0.0372   0.0310   —
L = 800
  FOS-ELM          ×        ×        ×        ×        ×        —
  ReOS-ELM         —        —        —        —        —        0.0292
  DR-ELM           0.0233   0.0253   0.0270   0.0374   0.0320   —
  FORELM           0.0223   0.0256   0.0252   0.0407   0.0318   —
L = 1000
  FOS-ELM          ×        ×        ×        ×        ×        —
  ReOS-ELM         —        —        —        —        —        0.0298
  DR-ELM           0.0234   0.0263   0.0278   0.0418   0.0323   —
  FORELM           0.0232   0.0238   0.0259   0.0372   0.0310   —
FOKELM             0.0236   0.0251   0.0267   0.0386   0.0331   —

"—" represents nondefinition or inexistence in the case; "×" represents nullification owing to too large RMSE or MAPE.
decremental learning procedures, whereas ReOS-ELM performs only one incremental learning procedure.

(5) Both FORELM and DR-ELM use the regularization trick, so theoretically they should obtain the same or similar prediction effect, and Tables 1 and 2 also show that they have statistically similar simulation results. However, their time efficiencies are different. From Table 3 it can be seen that, for the case where L is small or L ≤ z, FORELM takes less time than DR-ELM. Thus, when L is small and modeling speed is preferred to accuracy, one may try to employ FORELM.

(6) When z is fixed, if L is large enough, there are no significant differences between the RMSE of FORELM and DR-ELM and the RMSE of FOKELM with appropriate parameter values. In Table 3 it is obvious that with L increasing, DR-ELM costs more and more time, but the time cost by FOKELM is irrelevant to L. According to the procedures of DR-ELM and FOKELM, if L is large enough to make calculating h(x_i)h(x_j)^T more
Table 2: MAPE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

z                  50       70       100      150       200      ∞
L = 15
  FOS-ELM (p = 2)  5.8327   0.6123   0.5415   1.0993    0.7288   —
  ReOS-ELM         —        —        —        —         —        0.3236
  DR-ELM           0.3412   0.4318   0.3440   0.7437    0.4007   —
  FORELM (p = 2)   0.2689   0.3639   0.3135   0.8583    0.4661   —
L = 25
  FOS-ELM          5.0473   1.7010   2.4941   0.9537    0.9272   —
  ReOS-ELM         —        —        —        —         —        0.4773
  DR-ELM           0.3418   0.3158   0.4577   0.5680    0.3127   —
  FORELM           0.3391   0.2568   0.3834   0.6228    0.2908   —
L = 50
  FOS-ELM          ×        ×        ×        209.8850  3.9992   —
  ReOS-ELM         —        —        —        —         —        0.2551
  DR-ELM           0.3876   0.3109   0.3808   0.5486    0.3029   —
  FORELM           0.3478   0.2960   0.3145   0.5504    0.3514   —
L = 100
  FOS-ELM          ×        ×        ×        ×         ×        —
  ReOS-ELM         —        —        —        —         —        0.2622
  DR-ELM           0.2473   0.2748   0.2163   0.5471    0.3381   —
  FORELM           0.3229   0.2506   0.2074   0.5501    0.2936   —
L = 150
  FOS-ELM          ×        ×        ×        ×         ×        —
  ReOS-ELM         —        —        —        —         —        0.3360
  DR-ELM           0.2725   0.2365   0.2133   0.5439    0.3071   —
  FORELM           0.2713   0.2418   0.2050   0.5432    0.3133   —
L = 200
  FOS-ELM          ×        ×        ×        ×         ×        —
  ReOS-ELM         —        —        —        —         —        0.2687
  DR-ELM           0.2539   0.2454   0.2069   0.5443    0.3161   —
  FORELM           0.2643   0.2370   0.2029   0.5443    0.3148   —
L = 400
  FOS-ELM          ×        ×        ×        ×         ×        —
  ReOS-ELM         —        —        —        —         —        0.2657
  DR-ELM           0.2520   0.2456   0.2053   0.5420    0.3260   —
  FORELM           0.2277   0.2382   0.2285   0.5430    0.3208   —
L = 800
  FOS-ELM          ×        ×        ×        ×         ×        —
  ReOS-ELM         —        —        —        —         —        0.3784
  DR-ELM           0.2388   0.2401   0.3112   0.5415    0.3225   —
  FORELM           0.2128   0.2325   0.3006   0.5416    0.3247   —
L = 1000
  FOS-ELM          ×        ×        ×        ×         ×        —
  ReOS-ELM         —        —        —        —         —        0.4179
  DR-ELM           0.2230   0.2550   0.3412   0.5431    0.3283   —
  FORELM           0.2228   0.2525   0.3347   0.5412    0.3293   —
FOKELM             0.2397   0.2345   0.3125   0.5421    0.3178   —
complex than calculating K(x_i, x_j), FOKELM will take less time than DR-ELM.

To intuitively observe and compare the accuracy and stability of these ELMs, with the same a, b values and the same initial u(k) (k ≤ k_0) signal, the absolute prediction error (APE) curves of one trial of every approach (L = 25, z = 70) are shown in Figure 1. Clearly, Figure 1(a) shows that at a few instances the prediction errors of FOS-ELM are much greater, although the prediction errors are very small at other instances; thus FOS-ELM is unstable. Comparing Figures 1(b), 1(c), 1(d), and 1(e), we can see that at most instances the prediction errors of DR-ELM, FORELM, and FOKELM are smaller than those of ReOS-ELM, and the prediction effect of FORELM is similar to that of DR-ELM.

On the whole, FORELM and FOKELM have higher accuracy than FOS-ELM and ReOS-ELM.
Simulation 2. In this subsection the proposed methods are tested on modeling a second-order bioreactor process described by the following differential equations [26]:

$$\dot{c}_1(t) = -c_1(t)\,u(t) + c_1(t)\bigl(1-c_2(t)\bigr)e^{c_2(t)/\gamma}$$
$$\dot{c}_2(t) = -c_2(t)\,u(t) + c_1(t)\bigl(1-c_2(t)\bigr)e^{c_2(t)/\gamma}\,\frac{1+\beta}{1+\beta-c_2(t)}\tag{25}$$

where $c_1(t)$ is the cell concentration that is considered as the output of the process ($y(t) = c_1(t)$), $c_2(t)$ is the amount of nutrients per unit volume, and $u(t)$ represents the flow rate as the control input (the excitation signal for modeling); $c_1(t)$ and $c_2(t)$ can take values between zero and one, and $u(t)$ is allowed a magnitude in the interval [0, 2].
Table 3: Running time (seconds) comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

z                  50        70        100       150       200       ∞
L = 15
  FOS-ELM (p = 2)  0.0620    0.0620    0.0621    0.0621    0.0622    —
  ReOS-ELM         —         —         —         —         —         0.0160
  DR-ELM           0.2650    0.3590    0.5781    1.0790    1.6102    —
  FORELM (p = 2)   0.0620    0.0620    0.0620    0.0621    0.0622    —
L = 25
  FOS-ELM          0.0621    0.0621    0.0621    0.0623    0.0623    —
  ReOS-ELM         —         —         —         —         —         0.0167
  DR-ELM           0.2500    0.3750    0.6094    1.0630    1.6250    —
  FORELM           0.0621    0.0621    0.0621    0.0622    0.0622    —
L = 50
  FOS-ELM          ×         ×         ×         0.1250    0.1250    —
  ReOS-ELM         —         —         —         —         —         0.0470
  DR-ELM           0.2650    0.3906    0.6250    1.1100    1.6560    —
  FORELM           0.1400    0.1562    0.1560    0.1250    0.1250    —
L = 100
  FOS-ELM          ×         ×         ×         ×         ×         —
  ReOS-ELM         —         —         —         —         —         0.1250
  DR-ELM           0.2813    0.4219    0.6719    1.2031    1.7380    —
  FORELM           0.3750    0.4063    0.3440    0.3440    0.3280    —
L = 150
  FOS-ELM          ×         ×         ×         ×         ×         —
  ReOS-ELM         —         —         —         —         —         0.2344
  DR-ELM           0.3125    0.4540    0.7500    1.2970    1.9220    —
  FORELM           0.7810    0.8003    0.6560    0.6720    0.6570    —
L = 200
  FOS-ELM          ×         ×         ×         ×         ×         —
  ReOS-ELM         —         —         —         —         —         0.3901
  DR-ELM           0.3290    0.5160    0.7810    1.3440    1.9690    —
  FORELM           1.2702    1.2750    1.1250    1.0320    0.9375    —
L = 400
  FOS-ELM          ×         ×         ×         ×         ×         —
  ReOS-ELM         —         —         —         —         —         1.7350
  DR-ELM           0.3906    0.5781    0.8750    1.4680    2.2820    —
  FORELM           6.6410    6.4530    6.2030    5.6090    5.1719    —
L = 800
  FOS-ELM          ×         ×         ×         ×         ×         —
  ReOS-ELM         —         —         —         —         —         8.7190
  DR-ELM           0.5470    0.7970    1.2660    2.0938    2.9840    —
  FORELM           32.7820   31.6720   30.1400   27.5780   27.7500   —
L = 1000
  FOS-ELM          ×         ×         ×         ×         ×         —
  ReOS-ELM         —         —         —         —         —         13.6094
  DR-ELM           0.6250    1.0000    1.4530    2.4380    3.4060    —
  FORELM           51.4530   49.3440   46.8750   43.1880   38.9060   —
FOKELM             0.2811    0.4364    0.6951    1.1720    1.7813    —
In the simulation, with the growth rate parameter β = 0.02, the nutrient inhibition parameter γ is considered as the time-varying parameter, that is,

$$\gamma=\begin{cases}0.3, & 0\,\mathrm{s}<t\le 20\,\mathrm{s}\\ 0.48, & 20\,\mathrm{s}<t\le 60\,\mathrm{s}\\ 0.6, & 60\,\mathrm{s}<t\end{cases}\tag{26}$$

Let $T_s$ indicate the sampling interval. Denote $t_0 = (z + \max(n_y, n_u) - n_d)T_s$ and $t_1 = t_0 + 350T_s$. The input is set below:

$$u(t)=\begin{cases}0.5\,\operatorname{rand}(\cdot)+0.75, & t\le t_0\\ 0.25\sin\bigl(2\pi(t-t_0)/24\bigr)+1, & t_0<t\le t_1\\ 0.25\sin\bigl(2\pi(t-t_1)/10\bigr)+1, & t_1<t\end{cases}\tag{27}$$
where at every sampling instance rand(·) generates random numbers uniformly distributed in the interval (0, 1).
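The plant (25)–(27) can be simulated with a simple forward-Euler step; a sketch under the stated parameter values (β = 0.02, the γ schedule (26)), where the step size and function names are assumptions:

```python
import numpy as np

def gamma_of_t(t):
    # Time-varying nutrient inhibition parameter, eq. (26).
    if t <= 20.0:
        return 0.3
    if t <= 60.0:
        return 0.48
    return 0.6

def bioreactor_step(c1, c2, u, t, dt, beta=0.02):
    # One forward-Euler step of (25); dt must be small relative to Ts = 0.2 s.
    g = np.exp(c2 / gamma_of_t(t))
    dc1 = -c1 * u + c1 * (1.0 - c2) * g
    dc2 = -c2 * u + c1 * (1.0 - c2) * g * (1.0 + beta) / (1.0 + beta - c2)
    return c1 + dt * dc1, c2 + dt * dc2
```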
Set $T_s$ = 0.2 s. With the same a, b and initial u(t), the APE curves of every approach (L = 20, z = 100) for one trial are drawn in Figure 2. Clearly, on the whole, the APE values of FORELM and FOKELM are smaller than those of FOS-ELM and ReOS-ELM, and FORELM has nearly the same prediction effect as DR-ELM. Further, the RMSE of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM is 0.096241, 0.012203, 0.007439, 0.007619, and 0.007102, respectively.
Through many comparative trials we may attain the same results as the ones in Simulation 1.

5. Conclusions

ReOS-ELM (i.e., SRELM or LS-IELM) can yield good generalization models and will not suffer from matrix singularity
[Figure 1: APE curves of ELMs on process (20) with input (23). Panels: (a) FOS-ELM, (b) ReOS-ELM, (c) DR-ELM, (d) FORELM, (e) FOKELM. Horizontal axis: sample sequence; vertical axis: APE.]
or ill-posed problems, but it is unsuitable in time-varying applications. On the other hand, FOS-ELM, thanks to its forgetting mechanism, can reflect the timeliness of data and train SLFN in nonstationary environments, but it may encounter the matrix singularity problem and run unstably.

In this paper the forgetting mechanism is incorporated into ReOS-ELM, and we obtain FORELM, which blends the advantages of ReOS-ELM and FOS-ELM. In addition, the forgetting mechanism also is added to KB-IELM; consequently FOKELM is obtained, which can overcome the matrix expansion problem of KB-IELM.

Performance comparison between the proposed ELMs and other ELMs was carried out on identification of time-varying systems in the aspects of accuracy, stability, and computational complexity. The experimental results show that, statistically, FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments. When the number L of hidden nodes is small or L ≤ the length z of the sliding time window, FORELM has a time efficiency advantage over DR-ELM. On the other hand, if L is large enough, FOKELM will be faster than DR-ELM.
[Figure 2: APE curves of ELMs on process (25) with time-varying γ in (26). Panels: (a) FOS-ELM, (b) ReOS-ELM, (c) DR-ELM, (d) FORELM, (e) FOKELM. Horizontal axis: time (T_s); vertical axis: APE.]
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The work is supported by the Hunan Provincial Science and Technology Foundation of China (2011FJ6033).
References

[1] J. Park and I. W. Sandberg, "Universal approximation using radial basis function networks," Neural Computation, vol. 3, no. 2, pp. 246–257, 1991.
[2] G. B. Huang, Y. Q. Chen, and H. A. Babri, "Classification ability of single hidden layer feedforward neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 799–801, 2000.
[3] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[4] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985–990, July 2004.
[5] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[6] N. Y. Liang, G. B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[7] J. S. Lim, "Partitioned online sequential extreme learning machine for large ordered system modeling," Neurocomputing, vol. 102, pp. 59–64, 2013.
[8] J. S. Lim, S. Lee, and H. S. Pang, "Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations," Neural Computing and Applications, vol. 22, no. 3-4, pp. 569–576, 2013.
[9] Y. Gu, J. F. Liu, Y. Q. Chen, X. L. Jiang, and H. C. Yu, "TOSELM: timeliness online sequential extreme learning machine," Neurocomputing, vol. 128, pp. 119–127, 2014.
[10] J. W. Zhao, Z. H. Wang, and D. S. Park, "Online sequential extreme learning machine with forgetting mechanism," Neurocomputing, vol. 87, pp. 79–89, 2012.
[11] X. Zhang and H. L. Wang, "Fixed-memory extreme learning machine and its applications," Control and Decision, vol. 27, no. 8, pp. 1206–1210, 2012.
[12] W. Y. Deng, Q. H. Zheng, and L. Chen, "Regularized extreme learning machine," in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 389–395, April 2009.
[13] H. T. Huynh and Y. Won, "Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks," Pattern Recognition Letters, vol. 32, no. 14, pp. 1930–1935, 2011.
[14] X. Zhang and H. L. Wang, "Time series prediction based on sequential regularized extreme learning machine and its application," Acta Aeronautica et Astronautica Sinica, vol. 32, no. 7, pp. 1302–1308, 2011.
[15] G. B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[16] G. B. Huang, H. M. Zhou, X. J. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[17] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
[18] L. Guo, J. H. Hao, and M. Liu, "An incremental extreme learning machine for online sequential learning problems," Neurocomputing, vol. 128, pp. 50–58, 2014.
[19] G. B. Huang, L. Chen, and C. K. Siew, "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
[20] G. B. Huang and L. Chen, "Convex incremental extreme learning machine," Neurocomputing, vol. 70, no. 16–18, pp. 3056–3062, 2007.
[21] G. B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine," Neurocomputing, vol. 71, no. 16–18, pp. 3460–3468, 2008.
[22] X. Zhang and H. L. Wang, "Incremental regularized extreme learning machine based on Cholesky factorization and its application to time series prediction," Acta Physica Sinica, vol. 60, no. 11, Article ID 110201, 2011.
[23] G. H. Golub and C. F. van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[24] X. Zhang and H. L. Wang, "Dynamic regression extreme learning machine and its application to small-sample time series prediction," Information and Control, vol. 40, no. 5, pp. 704–709, 2011.
[25] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[26] M. O. Efe, E. Abadoglu, and O. Kaynak, "A novel analysis and design of a neural network assisted nonlinear controller for a bioreactor," International Journal of Robust and Nonlinear Control, vol. 9, no. 11, pp. 799–815, 1999.
Suppose that FORELM consists of p ReOS-ELMs with forgetting mechanism, by which the SLFNs trained have the same output of hidden node G(a, b, x) and the same number of hidden nodes L. In the following FORELM algorithm, the variables and parameters with superscript r are relevant to the rth SLFN, trained by the rth ReOS-ELM with forgetting mechanism. Synthesizing ReOS-ELM and the decremental RELM, we get FORELM as follows.
Step 1 (Initialization).

(1) Choose the hidden output function G(a, b, x) of the SLFN with a certain activation function and the number of hidden nodes L. Set the value of p.

(2) Randomly assign the hidden parameters $(\mathbf{a}_i^r, b_i^r)$, $i = 1, 2, \dots, L$, $r = 1, 2, \dots, p$.

(3) Determine c; set $\mathbf{P}_{k-1}^r = c\mathbf{I}_{L\times L}$.

Step 2 (Incrementally learn the initial z − 1 samples). Repeat the following procedure z − 1 times.

(1) Get the current sample $(\mathbf{x}_k, y_k)$.

(2) Calculate $\mathbf{h}_k^r = [G(\mathbf{a}_1^r, b_1^r, \mathbf{x}_k), \dots, G(\mathbf{a}_L^r, b_L^r, \mathbf{x}_k)]$; calculate $\mathbf{P}_k^r$ by (8a).

(3) Calculate $\boldsymbol{\beta}_k^r$: if the sample $(\mathbf{x}_k, y_k)$ is the first one, then $\boldsymbol{\beta}_k^r = \mathbf{P}_k^r (\mathbf{h}_k^r)^T y_k$; else calculate $\boldsymbol{\beta}_k^r$ by (8b).

Step 3 (Online modeling and prediction). Repeat the following procedure at every step.

(1) Acquire the current $y_k$, form the new sample $(\mathbf{x}_k, y_k)$, and calculate $\mathbf{h}_k^r$, $\mathbf{P}_k^r$, and $\boldsymbol{\beta}_k^r$ by (8a) and (8b).

(2) Prediction: form $\mathbf{x}_{k+1}$; the output of the rth SLFN, that is, the prediction $\hat{y}_{k+1}^r$ of $y_{k+1}$, can be calculated by (1): $\hat{y}_{k+1}^r = \mathbf{h}^r(\mathbf{x}_{k+1})\boldsymbol{\beta}_k^r$; the final prediction is $\hat{y}_{k+1} = \frac{1}{p}\sum_{r=1}^{p}\hat{y}_{k+1}^r$.

(3) Delete the oldest sample $(\mathbf{x}_s, y_s)$; calculate $\mathbf{P}^r$ and $\boldsymbol{\beta}^r$ by (13) and (14), respectively.
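A batch-equivalent sketch of one such learner may clarify the scheme (Python used here for illustration; the recursive updates (8a), (8b), (13), and (14) are replaced by a direct regularized solve over the current sliding window, and the sigmoid form with a minus sign is an assumption; all function names are illustrative):

```python
import numpy as np

def sigmoid_features(X, A, b):
    # Hidden-layer outputs h(x) of an additive sigmoid SLFN; A holds the
    # input weights (one row per hidden node), b the biases.
    return 1.0 / (1.0 + np.exp(-(X @ A.T + b)))

def window_relm_beta(X_win, y_win, A, b, c):
    # Ridge solution beta = (I/c + H^T H)^{-1} H^T y over the current window;
    # this is the batch form of one ReOS-ELM-with-forgetting learner.
    H = sigmoid_features(X_win, A, b)
    L = H.shape[1]
    return np.linalg.solve(np.eye(L) / c + H.T @ H, H.T @ y_win)

def forelm_predict(x_new, models):
    # FORELM output: average of the p individual SLFN predictions.
    outs = [float(sigmoid_features(x_new[None, :], A, b) @ beta)
            for (A, b, beta) in models]
    return sum(outs) / len(outs)
```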
3.2. Decremental KELM and FOKELM. After KELM has studied the given number z of samples and the SLFN has been applied for prediction, KELM discards the oldest sample $(\mathbf{x}_s, y_s)$ from the samples set.

Let $\bar{\Omega}_{ij} = K(\mathbf{x}_{s+i}, \mathbf{x}_{s+j})$, $i, j = 1, 2, \dots, k-s$, $\bar{\mathbf{A}} = \bar{\Omega} + c^{-1}\mathbf{I}$, $\mathbf{v} = [K(\mathbf{x}_s, \mathbf{x}_{s+1}), \dots, K(\mathbf{x}_s, \mathbf{x}_k)]$, and $\nu = K(\mathbf{x}_s, \mathbf{x}_s) + 1/c$; then $\mathbf{A}_k$ can be written in the following partitioned matrix form:

$$\mathbf{A}_k = \Omega + c^{-1}\mathbf{I} = \begin{bmatrix} \nu & \mathbf{v} \\ \mathbf{v}^T & \bar{\mathbf{A}} \end{bmatrix}\tag{15}$$
Moreover, using the block matrix inverse formula, the following equation can be obtained:

$$\mathbf{A}_k^{-1} = \begin{bmatrix} \nu & \mathbf{v} \\ \mathbf{v}^T & \bar{\mathbf{A}} \end{bmatrix}^{-1} = \begin{bmatrix} \rho^{-1} & -\rho^{-1}\mathbf{v}\bar{\mathbf{A}}^{-1} \\ -\bar{\mathbf{A}}^{-1}\mathbf{v}^T\rho^{-1} & \bar{\mathbf{A}}^{-1} + \bar{\mathbf{A}}^{-1}\mathbf{v}^T\rho^{-1}\mathbf{v}\bar{\mathbf{A}}^{-1} \end{bmatrix}\tag{16}$$
where $\rho = \nu - \mathbf{v}\bar{\mathbf{A}}^{-1}\mathbf{v}^T$. Rewrite $\mathbf{A}_k^{-1}$ in the partitioned matrix form as

$$\mathbf{A}_k^{-1} = \begin{bmatrix} \mathbf{A}_k^{-1}(1,1) & \mathbf{A}_k^{-1}(1,2{:}\mathrm{end}) \\ \mathbf{A}_k^{-1}(2{:}\mathrm{end},1) & \mathbf{A}_k^{-1}(2{:}\mathrm{end},2{:}\mathrm{end}) \end{bmatrix}\tag{17}$$
Comparing (16) and (17), $\bar{\mathbf{A}}^{-1}$ can be calculated as

$$\begin{aligned}\bar{\mathbf{A}}^{-1} &= \mathbf{A}_k^{-1}(2{:}\mathrm{end},2{:}\mathrm{end}) - \bar{\mathbf{A}}^{-1}\mathbf{v}^T\rho^{-1}\mathbf{v}\bar{\mathbf{A}}^{-1} \\ &= \mathbf{A}_k^{-1}(2{:}\mathrm{end},2{:}\mathrm{end}) - \frac{\left(-\bar{\mathbf{A}}^{-1}\mathbf{v}^T\rho^{-1}\right)\left(-\rho^{-1}\mathbf{v}\bar{\mathbf{A}}^{-1}\right)}{\rho^{-1}} \\ &= \mathbf{A}_k^{-1}(2{:}\mathrm{end},2{:}\mathrm{end}) - \frac{\mathbf{A}_k^{-1}(2{:}\mathrm{end},1)\,\mathbf{A}_k^{-1}(1,2{:}\mathrm{end})}{\mathbf{A}_k^{-1}(1,1)}\end{aligned}\tag{18}$$

Next time, compute $\mathbf{A}_{k+1}^{-1}$ from $\bar{\mathbf{A}}^{-1}$ (viewed as $\mathbf{A}_k^{-1}$) according to (11).

Integrating KB-IELM with the decremental KELM further,
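The downdate (18) can be checked numerically; a small sketch assuming a Gaussian kernel with σ = 1 and a window of six random samples:

```python
import numpy as np

rng = np.random.default_rng(1)
# Kernel Gram matrix of a 6-sample window plus ridge term, A_k = Omega + I/c.
X = rng.normal(size=(6, 2))
c = 100.0
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # Gaussian kernel, sigma = 1
A_k = K + np.eye(6) / c
Ainv = np.linalg.inv(A_k)

# Downdate (18): delete the oldest sample (first row/column) using only
# blocks of the already-known inverse A_k^{-1}.
Abar_inv = Ainv[1:, 1:] - np.outer(Ainv[1:, 0], Ainv[0, 1:]) / Ainv[0, 0]

# It must equal the direct inverse of A_k with the first row/column removed.
assert np.allclose(Abar_inv, np.linalg.inv(A_k[1:, 1:]))
```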
we can obtain FOKELM as follows.

Step 1 (Initialization). Choose the kernel $K(\mathbf{x}_i, \mathbf{x}_j)$ with corresponding parameter values, and determine c.

Step 2 (Incrementally learn the initial z − 1 samples). Calculate $\mathbf{A}_k^{-1}$: if there exists only one sample, then $\mathbf{A}_k^{-1} = 1/(K(\mathbf{x}_k, \mathbf{x}_k) + c^{-1})$; else calculate $\mathbf{A}_k^{-1}$ by (11).

Step 3 (Online modeling and prediction).

(1) Acquire the new sample $(\mathbf{x}_k, y_k)$ and calculate $\mathbf{A}_k^{-1}$ by (11).

(2) Prediction: form $\mathbf{x}_{k+1}$ and calculate the prediction $\hat{y}_{k+1}$ of $y_{k+1}$ by (6), namely,

$$\hat{y}_{k+1} = \left[K(\mathbf{x}_{k+1}, \mathbf{x}_{k-z+1}), \dots, K(\mathbf{x}_{k+1}, \mathbf{x}_k)\right]\mathbf{A}_k^{-1}\left[y_{k-z+1}, \dots, y_k\right]^T\tag{19}$$

(3) Delete the oldest sample $(\mathbf{x}_s, y_s)$; calculate $\bar{\mathbf{A}}^{-1}$ by (18).
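As a cross-check of the procedure above, a minimal batch-form sketch of FOKELM in Python: the window update and prediction (19) match the algorithm, but for brevity $\mathbf{A}_k^{-1}$ is obtained by a direct solve instead of the recursive updates (11) and (18); the class and function names are assumptions:

```python
import numpy as np

def gaussian_kernel(Xa, Xb, sigma=1.0):
    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / sigma)
    d2 = ((Xa[:, None, :] - Xb[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma)

class SlidingKELM:
    """Sliding-window kernelized ELM (batch form of FOKELM).

    The paper updates A_k^{-1} recursively by (11) when a sample arrives
    and by (18) when one is deleted; here it is recomputed directly.
    """
    def __init__(self, z, c, sigma=1.0):
        self.z, self.c, self.sigma = z, c, sigma
        self.X, self.y = [], []

    def learn(self, x, y):
        self.X.append(np.asarray(x, float))
        self.y.append(float(y))
        if len(self.X) > self.z:        # forgetting: drop the oldest sample
            self.X.pop(0)
            self.y.pop(0)

    def predict(self, x_new):
        X = np.stack(self.X)
        y = np.asarray(self.y)
        A = gaussian_kernel(X, X, self.sigma) + np.eye(len(y)) / self.c
        k = gaussian_kernel(np.asarray(x_new, float)[None, :], X, self.sigma)
        return float(k @ np.linalg.solve(A, y))   # prediction (19)
```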
4. Performance Evaluation

In this section the performance of the presented FORELM and FOKELM is verified via time-varying nonlinear process identification simulations. Those simulations are designed from the aspects of accuracy, stability, and computational complexity of the proposed FORELM and FOKELM
by comparison with FOS-ELM, ReOS-ELM (i.e., SRELM or LS-IELM), and the dynamic regression extreme learning machine (DR-ELM) [24]. DR-ELM also is a kind of online sequential RELM and was designed by Zhang and Wang using solution (5b) of RELM and the block matrix inverse formula.

All the performance evaluations were executed in the MATLAB 7.0.1 environment running on Windows XP with an Intel Core i3-3220 3.3 GHz CPU and 4 GB RAM.
Simulation 1. The unknown identified system is a modified version of the one addressed in [25], obtained by changing the constant and the coefficients of the variables to form a time-varying system, as done in [11]:
$$y(k+1)=\begin{cases}\dfrac{y(k)}{1+y(k)^2}+u(k)^3, & k\le 100\\[2mm] \dfrac{2y(k)}{2+y(k)^2}+u(k)^3, & 100<k\le 300\\[2mm] \dfrac{y(k)}{1+2y(k)^2}+2u(k)^3, & 300<k\end{cases}\tag{20}$$
The system (20) can be expressed as follows:

$$y(k) = f(\mathbf{x}(k))\tag{21}$$

where f(·) is a nonlinear function and x(k) is the regression input data vector

$$\mathbf{x}(k) = \left[y(k-1), \dots, y(k-n_y), u(k-n_d), \dots, u(k-n_u)\right]\tag{22}$$

with $n_d$, $n_u$, and $n_y$ being model structure parameters. Apply the SLFN to approximate (20); accordingly, $(\mathbf{x}(k), y(k))$ is the learning sample $(\mathbf{x}_k, y_k)$ of the SLFN.
Denote 1198960
= 119911 + max(119899119910 119899119906) minus 119899119889 1198961
= 1198960+ 350 The
input is set as follows
119906 (119896) =
rand (sdot) 119896 le 1198960
sin(2120587 (119896 minus 119896
0)
120) 1198960lt 119896 le 119896
1
sin(2120587 (119896 minus 119896
1)
50) 1198961lt 119896
(23)
where rand(sdot) generates random numbers which are uni-formly distributed in the interval (0 1)
In all experiments the output of hidden nodewith respectto the input x of a SLFN in (1) is set to the sigmoidal additivefunction that is 119866(a 119887 x) = 1(1 + exp(a sdot x + 119887)) thecomponents of a that is the input weights and bias 119887 arerandomly chosen from the range [minus2 2] In FOKELM theGaussian kernel function is applied namely 119870(x
119894 x119895) =
exp(minus119909119894minus 1199091198952120590)
The root-mean-square error (RMSE) of prediction andthe maximal absolute prediction error (MAPE) are regarded
as measure indices of model accuracy and stability respec-tively Consider
RMSE = radic1
1199052minus 1199051+ 1
1199052
sum
119894=1199051
(119910119894minus 119910119894)2
MAPE = max119894=1199051 1199052
1003816100381610038161003816119910119894 minus 119910119894
1003816100381610038161003816
(24)
where 1199051= 1198960+ 1 1199052= 650 Simulation is carried on for 650
instances In model (21) 119899119889
= 1 119899119906
= 2 and 119899119910
= 3 Due torandomness of parameters a 119887 and 119906(119905) during initial stage(119896 le 119896
0) the results of simulation must possess variation For
each approach the results are averaged over 5 trialsReOS-ELM does not discard any old sample thus it has
119911 rarr infinIn offline modeling the training samples set is fixed thus
one may search relative optimal values for model parameterof ELMs Nevertheless in online modeling for time-varyingsystem the training samples set is changing it is difficult tochoose optimal values for parameters in practice Thereforewe set the same parameter of these ELMs with the same valuemanually for example their parameters 119862 all are set to 250then we compare their performances
RMSE andMAPE of the proposed ELMs and other afore-mentioned ELMs are listed in Tables 1 and 2 respectivelyand the corresponding running time (ie training time pluspredicting time) of these various ELMs is given in Table 3
From Tables 1ndash3 one can see the following results
(1) RMSE andMAPE of FORELM are smaller than thoseof FOS-ELMwith the same 119911 and 119871 valuesThe reasonfor this is that (HTH) in FOS-ELM may be (nearly)singular at some instances thus (HTH)
minus1 calculatedrecursively is nonsense and unbelievable and when119871 ge 119911 FOS-ELM cannot work owing to its toolarge RMSE or MAPE Accordingly ldquotimesrdquo representsnullification inTables 1ndash3 whereas FORELMdoes notsuffer from such a problem In addition RMSE andMAPE of FOKELM also are smaller than those ofFOS-ELM with the same 119911 values
(2) RMSE of FORELM and FOKELM is smaller than thatof ReOS-ELMwith the same 119871 and119862when parameter119911 namely the length of sliding time windows isset properly The reason for this is that ReOS-ELMneglects timeliness of samples of the time-varyingprocess and does not get rid of effects of old samplescontrarily FORELM and FOKELM stress actions ofthe newest ones It should be noticed that 119911 is relevantto characteristics of the specific time-varying systemand its inputs In Table 1 when 119911 = 50 70 or 100 theeffect is good
(3) When 119911 is fixed with 119871 increasing RMSEs of theseELMs tend to decrease firstly But changes are notobvious later
(4) FORELM requires nearly the same time as FOS-ELMbut more time than ReOS-ELM It is because bothFORELM and FOS-ELM involve 119901 incremental and
6 Mathematical Problems in Engineering
Table 1: RMSE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

| L | Method | z = 50 | z = 70 | z = 100 | z = 150 | z = 200 | z = ∞ |
|---|--------|--------|--------|---------|---------|---------|-------|
| 15 | FOS-ELM (p = 2) | 0.3474 | 0.0967 | 0.0731 | 0.0975 | 0.0817 | — |
| 15 | ReOS-ELM | — | — | — | — | — | 0.0961 |
| 15 | DR-ELM | 0.0681 | 0.0683 | 0.0712 | 0.0833 | 0.0671 | — |
| 15 | FORELM (p = 2) | 0.0850 | 0.0708 | 0.0680 | 0.0816 | 0.0718 | — |
| 25 | FOS-ELM | 0.9070 | 0.1808 | 0.1324 | 0.0821 | 0.0883 | — |
| 25 | ReOS-ELM | — | — | — | — | — | 0.0804 |
| 25 | DR-ELM | 0.0543 | 0.0670 | 0.0789 | 0.0635 | 0.0656 | — |
| 25 | FORELM | 0.0578 | 0.0697 | 0.0833 | 0.0653 | 0.0600 | — |
| 50 | FOS-ELM | × | × | × | 9.9541 | 0.4806 | — |
| 50 | ReOS-ELM | — | — | — | — | — | 0.0529 |
| 50 | DR-ELM | 0.0377 | 0.0351 | 0.0345 | 0.0425 | 0.0457 | — |
| 50 | FORELM | 0.0344 | 0.0320 | 0.0324 | 0.0457 | 0.0456 | — |
| 100 | FOS-ELM | × | × | × | × | × | — |
| 100 | ReOS-ELM | — | — | — | — | — | 0.0359 |
| 100 | DR-ELM | 0.0298 | 0.0306 | 0.0308 | 0.0391 | 0.0393 | — |
| 100 | FORELM | 0.0312 | 0.0263 | 0.0288 | 0.0405 | 0.0378 | — |
| 150 | FOS-ELM | × | × | × | × | × | — |
| 150 | ReOS-ELM | — | — | — | — | — | 0.0351 |
| 150 | DR-ELM | 0.0268 | 0.0270 | 0.0281 | 0.0417 | 0.0365 | — |
| 150 | FORELM | 0.0270 | 0.0276 | 0.0284 | 0.0378 | 0.0330 | — |
| 200 | FOS-ELM | × | × | × | × | × | — |
| 200 | ReOS-ELM | — | — | — | — | — | 0.0344 |
| 200 | DR-ELM | 0.0259 | 0.0284 | 0.0272 | 0.0363 | 0.0351 | — |
| 200 | FORELM | 0.0263 | 0.0289 | 0.0274 | 0.0355 | 0.0325 | — |
| 400 | FOS-ELM | × | × | × | × | × | — |
| 400 | ReOS-ELM | — | — | — | — | — | 0.0296 |
| 400 | DR-ELM | 0.0240 | 0.0255 | 0.0249 | 0.0362 | 0.0327 | — |
| 400 | FORELM | 0.0231 | 0.0248 | 0.0256 | 0.0372 | 0.0310 | — |
| 800 | FOS-ELM | × | × | × | × | × | — |
| 800 | ReOS-ELM | — | — | — | — | — | 0.0292 |
| 800 | DR-ELM | 0.0233 | 0.0253 | 0.0270 | 0.0374 | 0.0320 | — |
| 800 | FORELM | 0.0223 | 0.0256 | 0.0252 | 0.0407 | 0.0318 | — |
| 1000 | FOS-ELM | × | × | × | × | × | — |
| 1000 | ReOS-ELM | — | — | — | — | — | 0.0298 |
| 1000 | DR-ELM | 0.0234 | 0.0263 | 0.0278 | 0.0418 | 0.0323 | — |
| 1000 | FORELM | 0.0232 | 0.0238 | 0.0259 | 0.0372 | 0.0310 | — |
| — | FOKELM | 0.0236 | 0.0251 | 0.0267 | 0.0386 | 0.0331 | — |

"—" represents nondefinition or inexistence in the case; "×" represents nullification owing to the too large RMSE or MAPE.
decremental learning procedures, whereas ReOS-ELM performs only one incremental learning procedure.
(5) Both FORELM and DR-ELM use the regularization trick, so theoretically they should obtain the same or similar prediction effect, and Tables 1 and 2 also show that their simulation results are statistically similar. However, their time efficiencies differ. From Table 3 it can be seen that when L is small, or L ≤ z, FORELM takes less time than DR-ELM. Thus, when L is small and modeling speed is preferred to accuracy, one may employ FORELM.
(6) When z is fixed, if L is large enough, there are no significant differences among the RMSE of FORELM and DR-ELM and the RMSE of FOKELM with appropriate parameter values. Table 3 makes it obvious that, as L increases, DR-ELM costs more and more time, whereas the time cost of FOKELM is independent of L. According to the procedures of DR-ELM and FOKELM, if L is large enough to make calculating h(x_i)h(x_j)^T more
Table 2: MAPE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

| L | Method | z = 50 | z = 70 | z = 100 | z = 150 | z = 200 | z = ∞ |
|---|--------|--------|--------|---------|---------|---------|-------|
| 15 | FOS-ELM (p = 2) | 5.8327 | 0.6123 | 0.5415 | 1.0993 | 0.7288 | — |
| 15 | ReOS-ELM | — | — | — | — | — | 0.3236 |
| 15 | DR-ELM | 0.3412 | 0.4318 | 0.3440 | 0.7437 | 0.4007 | — |
| 15 | FORELM (p = 2) | 0.2689 | 0.3639 | 0.3135 | 0.8583 | 0.4661 | — |
| 25 | FOS-ELM | 5.0473 | 1.7010 | 2.4941 | 0.9537 | 0.9272 | — |
| 25 | ReOS-ELM | — | — | — | — | — | 0.4773 |
| 25 | DR-ELM | 0.3418 | 0.3158 | 0.4577 | 0.5680 | 0.3127 | — |
| 25 | FORELM | 0.3391 | 0.2568 | 0.3834 | 0.6228 | 0.2908 | — |
| 50 | FOS-ELM | × | × | × | 209.8850 | 3.9992 | — |
| 50 | ReOS-ELM | — | — | — | — | — | 0.2551 |
| 50 | DR-ELM | 0.3876 | 0.3109 | 0.3808 | 0.5486 | 0.3029 | — |
| 50 | FORELM | 0.3478 | 0.2960 | 0.3145 | 0.5504 | 0.3514 | — |
| 100 | FOS-ELM | × | × | × | × | × | — |
| 100 | ReOS-ELM | — | — | — | — | — | 0.2622 |
| 100 | DR-ELM | 0.2473 | 0.2748 | 0.2163 | 0.5471 | 0.3381 | — |
| 100 | FORELM | 0.3229 | 0.2506 | 0.2074 | 0.5501 | 0.2936 | — |
| 150 | FOS-ELM | × | × | × | × | × | — |
| 150 | ReOS-ELM | — | — | — | — | — | 0.3360 |
| 150 | DR-ELM | 0.2725 | 0.2365 | 0.2133 | 0.5439 | 0.3071 | — |
| 150 | FORELM | 0.2713 | 0.2418 | 0.2050 | 0.5432 | 0.3133 | — |
| 200 | FOS-ELM | × | × | × | × | × | — |
| 200 | ReOS-ELM | — | — | — | — | — | 0.2687 |
| 200 | DR-ELM | 0.2539 | 0.2454 | 0.2069 | 0.5443 | 0.3161 | — |
| 200 | FORELM | 0.2643 | 0.2370 | 0.2029 | 0.5443 | 0.3148 | — |
| 400 | FOS-ELM | × | × | × | × | × | — |
| 400 | ReOS-ELM | — | — | — | — | — | 0.2657 |
| 400 | DR-ELM | 0.2520 | 0.2456 | 0.2053 | 0.5420 | 0.3260 | — |
| 400 | FORELM | 0.2277 | 0.2382 | 0.2285 | 0.5430 | 0.3208 | — |
| 800 | FOS-ELM | × | × | × | × | × | — |
| 800 | ReOS-ELM | — | — | — | — | — | 0.3784 |
| 800 | DR-ELM | 0.2388 | 0.2401 | 0.3112 | 0.5415 | 0.3225 | — |
| 800 | FORELM | 0.2128 | 0.2325 | 0.3006 | 0.5416 | 0.3247 | — |
| 1000 | FOS-ELM | × | × | × | × | × | — |
| 1000 | ReOS-ELM | — | — | — | — | — | 0.4179 |
| 1000 | DR-ELM | 0.2230 | 0.2550 | 0.3412 | 0.5431 | 0.3283 | — |
| 1000 | FORELM | 0.2228 | 0.2525 | 0.3347 | 0.5412 | 0.3293 | — |
| — | FOKELM | 0.2397 | 0.2345 | 0.3125 | 0.5421 | 0.3178 | — |
complex than calculating K(x_i, x_j), FOKELM will take less time than DR-ELM.
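The cost argument in point (6) can be made concrete. With random hidden-node parameters a, b and the Gaussian kernel used in this section, one DR-ELM-style entry h(x_i)h(x_j)^T costs a dot product of two L-dimensional vectors, while one FOKELM kernel entry K(x_i, x_j) costs only O(d) in the input dimension d, independent of L. The following sketch illustrates this; the dimensions and σ are assumed example values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
d, L, sigma = 6, 1000, 1.0          # input dim, hidden nodes, kernel width

A = rng.uniform(-2, 2, (L, d))      # random input weights a, in [-2, 2]
b = rng.uniform(-2, 2, L)           # random biases

def h(x):
    """Hidden-layer output row, G(a, b, x) = 1 / (1 + exp(a.x + b))."""
    return 1.0 / (1.0 + np.exp(A @ x + b))

def K(xi, xj):
    """Gaussian kernel used by FOKELM: exp(-||xi - xj||^2 / sigma)."""
    return np.exp(-np.sum((xi - xj) ** 2) / sigma)

xi, xj = rng.uniform(0, 1, d), rng.uniform(0, 1, d)
feature_entry = h(xi) @ h(xj)   # O(L) work per matrix entry (DR-ELM style)
kernel_entry = K(xi, xj)        # O(d) work per matrix entry (FOKELM style)
print(feature_entry, kernel_entry)
```

When L grows into the hundreds or thousands while d stays small, the per-entry cost gap is exactly why FOKELM overtakes DR-ELM in Table 3.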
To intuitively observe and compare the accuracy and stability of these ELMs with the same a and b values and the same initial u(k) (k ≤ k_0) signal, the absolute prediction error (APE) curves of one trial of every approach (L = 25, z = 70) are shown in Figure 1. Figure 1(a) shows that at a few instances the prediction errors of FOS-ELM are much greater, although they are very small at other instances; thus FOS-ELM is unstable. Comparing Figures 1(b), 1(c), 1(d), and 1(e), we can see that at most instances the prediction errors of DR-ELM, FORELM, and FOKELM are smaller than those of ReOS-ELM, and the prediction effect of FORELM is similar to that of DR-ELM.
On the whole, FORELM and FOKELM have higher accuracy than FOS-ELM and ReOS-ELM.
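The forgetting mechanism underlying these comparisons amounts to a fixed-length sliding window over the newest z samples: each arriving sample is added and, once the window is full, the oldest one is discarded. The deque-based bookkeeping below is an illustrative sketch of that one-in/one-out behavior only; the actual algorithms update the model recursively rather than retraining on stored samples.

```python
from collections import deque

def make_window(z):
    """Sliding time window holding only the newest z samples.

    deque(maxlen=z) drops the oldest element automatically once the
    window is full, mirroring the incremental/decremental pair of
    steps that FORELM and FOKELM perform per new sample.
    """
    return deque(maxlen=z)

window = make_window(3)
for sample in [(0, 0.1), (1, 0.4), (2, 0.2), (3, 0.9)]:
    window.append(sample)

# After 4 insertions with z = 3, the oldest sample (0, 0.1) is gone.
print(list(window))  # [(1, 0.4), (2, 0.2), (3, 0.9)]
```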
Simulation 2. In this subsection the proposed methods are tested on modeling a second-order bioreactor process described by the following differential equations [26]:

$$\dot{c}_{1}(t) = -c_{1}(t)\,u(t) + c_{1}(t)\left(1 - c_{2}(t)\right)e^{c_{2}(t)/\gamma},$$
$$\dot{c}_{2}(t) = -c_{2}(t)\,u(t) + c_{1}(t)\left(1 - c_{2}(t)\right)e^{c_{2}(t)/\gamma}\,\frac{1+\beta}{1+\beta-c_{2}(t)}, \tag{25}$$

where c_1(t) is the cell concentration, considered as the output of the process (y(t) = c_1(t)); c_2(t) is the amount of nutrients per unit volume; and u(t) represents the flow rate, the control input (the excitation signal for modeling). Both c_1(t) and c_2(t) take values between zero and one, and u(t) is allowed a magnitude in the interval [0, 2].
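To generate training data from (25), the two coupled ODEs must be integrated numerically. The paper does not state its solver, so the explicit-Euler scheme and the step size below are assumptions; any standard ODE integrator would serve equally well.

```python
import numpy as np

def bioreactor_step(c1, c2, u, gamma, beta=0.02, dt=0.01):
    """One explicit-Euler step of the bioreactor model (25).

    growth is the shared term c1 (1 - c2) exp(c2 / gamma); the second
    state additionally carries the factor (1 + beta) / (1 + beta - c2).
    """
    growth = c1 * (1.0 - c2) * np.exp(c2 / gamma)
    dc1 = -c1 * u + growth
    dc2 = -c2 * u + growth * (1.0 + beta) / (1.0 + beta - c2)
    return c1 + dt * dc1, c2 + dt * dc2

# Integrate for 1 s with a constant inflow u = 1 and gamma = 0.3.
c1, c2 = 0.1, 0.2
for _ in range(100):
    c1, c2 = bioreactor_step(c1, c2, u=1.0, gamma=0.3)
print(c1, c2)  # process output is y(t) = c1(t)
```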
Table 3: Running time (seconds) comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

| L | Method | z = 50 | z = 70 | z = 100 | z = 150 | z = 200 | z = ∞ |
|---|--------|--------|--------|---------|---------|---------|-------|
| 15 | FOS-ELM (p = 2) | 0.0620 | 0.0620 | 0.0621 | 0.0621 | 0.0622 | — |
| 15 | ReOS-ELM | — | — | — | — | — | 0.0160 |
| 15 | DR-ELM | 0.2650 | 0.3590 | 0.5781 | 1.0790 | 1.6102 | — |
| 15 | FORELM (p = 2) | 0.0620 | 0.0620 | 0.0620 | 0.0621 | 0.0622 | — |
| 25 | FOS-ELM | 0.0621 | 0.0621 | 0.0621 | 0.0623 | 0.0623 | — |
| 25 | ReOS-ELM | — | — | — | — | — | 0.0167 |
| 25 | DR-ELM | 0.2500 | 0.3750 | 0.6094 | 1.0630 | 1.6250 | — |
| 25 | FORELM | 0.0621 | 0.0621 | 0.0621 | 0.0622 | 0.0622 | — |
| 50 | FOS-ELM | × | × | × | 0.1250 | 0.1250 | — |
| 50 | ReOS-ELM | — | — | — | — | — | 0.0470 |
| 50 | DR-ELM | 0.2650 | 0.3906 | 0.6250 | 1.1100 | 1.6560 | — |
| 50 | FORELM | 0.1400 | 0.1562 | 0.1560 | 0.1250 | 0.1250 | — |
| 100 | FOS-ELM | × | × | × | × | × | — |
| 100 | ReOS-ELM | — | — | — | — | — | 0.1250 |
| 100 | DR-ELM | 0.2813 | 0.4219 | 0.6719 | 1.2031 | 1.7380 | — |
| 100 | FORELM | 0.3750 | 0.4063 | 0.3440 | 0.3440 | 0.3280 | — |
| 150 | FOS-ELM | × | × | × | × | × | — |
| 150 | ReOS-ELM | — | — | — | — | — | 0.2344 |
| 150 | DR-ELM | 0.3125 | 0.4540 | 0.7500 | 1.2970 | 1.9220 | — |
| 150 | FORELM | 0.7810 | 0.8003 | 0.6560 | 0.6720 | 0.6570 | — |
| 200 | FOS-ELM | × | × | × | × | × | — |
| 200 | ReOS-ELM | — | — | — | — | — | 0.3901 |
| 200 | DR-ELM | 0.3290 | 0.5160 | 0.7810 | 1.3440 | 1.9690 | — |
| 200 | FORELM | 1.2702 | 1.2750 | 1.1250 | 1.0320 | 0.9375 | — |
| 400 | FOS-ELM | × | × | × | × | × | — |
| 400 | ReOS-ELM | — | — | — | — | — | 1.7350 |
| 400 | DR-ELM | 0.3906 | 0.5781 | 0.8750 | 1.4680 | 2.2820 | — |
| 400 | FORELM | 6.6410 | 6.4530 | 6.2030 | 5.6090 | 5.1719 | — |
| 800 | FOS-ELM | × | × | × | × | × | — |
| 800 | ReOS-ELM | — | — | — | — | — | 8.7190 |
| 800 | DR-ELM | 0.5470 | 0.7970 | 1.2660 | 2.0938 | 2.9840 | — |
| 800 | FORELM | 32.7820 | 31.6720 | 30.1400 | 27.5780 | 27.7500 | — |
| 1000 | FOS-ELM | × | × | × | × | × | — |
| 1000 | ReOS-ELM | — | — | — | — | — | 13.6094 |
| 1000 | DR-ELM | 0.6250 | 1.0000 | 1.4530 | 2.4380 | 3.4060 | — |
| 1000 | FORELM | 51.4530 | 49.3440 | 46.8750 | 43.1880 | 38.9060 | — |
| — | FOKELM | 0.2811 | 0.4364 | 0.6951 | 1.1720 | 1.7813 | — |
In the simulation, with the growth rate parameter β = 0.02, the nutrient inhibition parameter γ is considered the time-varying parameter, that is,

$$\gamma=\begin{cases}0.3, & 0\,\text{s} < t \le 20\,\text{s},\\ 0.48, & 20\,\text{s} < t \le 60\,\text{s},\\ 0.6, & 60\,\text{s} < t.\end{cases} \tag{26}$$
Let T_s denote the sampling interval, and denote t_0 = (z + max(n_y, n_u) − n_d)T_s and t_1 = t_0 + 350T_s. The input is set as

$$u(t)=\begin{cases}0.5\,\operatorname{rand}(\cdot)+0.75, & t \le t_{0},\\[1mm] 0.25\sin\!\left(\dfrac{2\pi(t-t_{0})}{24}\right)+1, & t_{0} < t \le t_{1},\\[1mm] 0.25\sin\!\left(\dfrac{2\pi(t-t_{1})}{10}\right)+1, & t_{1} < t,\end{cases} \tag{27}$$

where at every sampling instance rand(·) generates random numbers uniformly distributed in the interval (0, 1).
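The schedule (26) and the excitation (27) are simple piecewise functions; a minimal sketch follows. The breakpoints t_0 and t_1 are passed in as arguments, and the concrete values in the checks are illustrative only.

```python
import numpy as np

def gamma_of_t(t):
    """Time-varying nutrient inhibition parameter, equation (26)."""
    if t <= 20.0:
        return 0.30
    if t <= 60.0:
        return 0.48
    return 0.60

def u_of_t(t, t0, t1, rng=np.random.default_rng(0)):
    """Excitation input of equation (27): random during start-up,
    then two sinusoids of different periods centered on u = 1."""
    if t <= t0:
        return 0.5 * rng.uniform() + 0.75
    if t <= t1:
        return 0.25 * np.sin(2.0 * np.pi * (t - t0) / 24.0) + 1.0
    return 0.25 * np.sin(2.0 * np.pi * (t - t1) / 10.0) + 1.0
```

Note that every branch of u_of_t stays inside the admissible magnitude interval [0, 2] stated for the bioreactor input.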
Set T_s = 0.2 s. With the same a, b, and initial u(t), the APE curves of every approach (L = 20, z = 100) for one trial are drawn in Figure 2. Clearly, on the whole, the APE curves of FORELM and FOKELM lie below those of FOS-ELM and ReOS-ELM, and FORELM has nearly the same prediction effect as DR-ELM. Further, the RMSEs of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM are 0.096241, 0.012203, 0.007439, 0.007619, and 0.007102, respectively.
Through many comparative trials we may attain the same results as those in Simulation 1.
5 Conclusions
ReOS-ELM (i.e., SRELM or LS-IELM) can yield models with good generalization and will not suffer from matrix singularity
[Figure 1: APE curves of (a) FOS-ELM, (b) ReOS-ELM, (c) DR-ELM, (d) FORELM, and (e) FOKELM on process (20) with input (23).]
or ill-posed problems, but it is unsuitable for time-varying applications. On the other hand, FOS-ELM, thanks to its forgetting mechanism, can reflect the timeliness of data and train SLFNs in nonstationary environments, but it may encounter the matrix singularity problem and run unstably.
In this paper the forgetting mechanism is incorporated into ReOS-ELM to obtain FORELM, which blends the advantages of ReOS-ELM and FOS-ELM. In addition, the forgetting mechanism is also added to KB-IELM; consequently FOKELM is obtained, which overcomes the matrix expansion problem of KB-IELM.
Performance comparisons between the proposed ELMs and other ELMs were carried out on the identification of time-varying systems in terms of accuracy, stability, and computational complexity. The experimental results show that, statistically, FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments. When the number L of hidden nodes is small, or L ≤ z (the length of the sliding time window), FORELM has a time-efficiency advantage over DR-ELM; on the other hand, if L is large enough, FOKELM is faster than DR-ELM.
[Figure 2: APE curves of (a) FOS-ELM, (b) ReOS-ELM, (c) DR-ELM, (d) FORELM, and (e) FOKELM on process (25) with time-varying γ in (26).]
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The work is supported by the Hunan Provincial Science and Technology Foundation of China (2011FJ6033).
References
[1] J. Park and I. W. Sandberg, "Universal approximation using radial basis function networks," Neural Computation, vol. 3, no. 2, pp. 246–257, 1991.
[2] G. B. Huang, Y. Q. Chen, and H. A. Babri, "Classification ability of single hidden layer feedforward neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 799–801, 2000.
[3] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[4] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985–990, July 2004.
[5] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[6] N. Y. Liang, G. B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[7] J. S. Lim, "Partitioned online sequential extreme learning machine for large ordered system modeling," Neurocomputing, vol. 102, pp. 59–64, 2013.
[8] J. S. Lim, S. Lee, and H. S. Pang, "Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations," Neural Computing and Applications, vol. 22, no. 3-4, pp. 569–576, 2013.
[9] Y. Gu, J. F. Liu, Y. Q. Chen, X. L. Jiang, and H. C. Yu, "TOSELM: timeliness online sequential extreme learning machine," Neurocomputing, vol. 128, no. 27, pp. 119–127, 2014.
[10] J. W. Zhao, Z. H. Wang, and D. S. Park, "Online sequential extreme learning machine with forgetting mechanism," Neurocomputing, vol. 87, no. 15, pp. 79–89, 2012.
[11] X. Zhang and H. L. Wang, "Fixed-memory extreme learning machine and its applications," Control and Decision, vol. 27, no. 8, pp. 1206–1210, 2012.
[12] W. Y. Deng, Q. H. Zheng, and L. Chen, "Regularized extreme learning machine," in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 389–395, April 2009.
[13] H. T. Huynh and Y. Won, "Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks," Pattern Recognition Letters, vol. 32, no. 14, pp. 1930–1935, 2011.
[14] X. Zhang and H. L. Wang, "Time series prediction based on sequential regularized extreme learning machine and its application," Acta Aeronautica et Astronautica Sinica, vol. 32, no. 7, pp. 1302–1308, 2011.
[15] G. B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[16] G. B. Huang, H. M. Zhou, X. J. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[17] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
[18] L. Guo, J. H. Hao, and M. Liu, "An incremental extreme learning machine for online sequential learning problems," Neurocomputing, vol. 128, no. 27, pp. 50–58, 2014.
[19] G. B. Huang, L. Chen, and C. K. Siew, "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
[20] G. B. Huang and L. Chen, "Convex incremental extreme learning machine," Neurocomputing, vol. 70, no. 16–18, pp. 3056–3062, 2007.
[21] G. B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine," Neurocomputing, vol. 71, no. 16–18, pp. 3460–3468, 2008.
[22] X. Zhang and H. L. Wang, "Incremental regularized extreme learning machine based on Cholesky factorization and its application to time series prediction," Acta Physica Sinica, vol. 60, no. 11, Article ID 110201, 2011.
[23] G. H. Golub and C. F. van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[24] X. Zhang and H. L. Wang, "Dynamic regression extreme learning machine and its application to small-sample time series prediction," Information and Control, vol. 40, no. 5, pp. 704–709, 2011.
[25] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[26] M. O. Efe, E. Abadoglu, and O. Kaynak, "A novel analysis and design of a neural network assisted nonlinear controller for a bioreactor," International Journal of Robust and Nonlinear Control, vol. 9, no. 11, pp. 799–815, 1999.
by comparison with FOS-ELM, ReOS-ELM (i.e., SRELM or LS-IELM), and the dynamic regression extreme learning machine (DR-ELM) [24]. DR-ELM is also a kind of online sequential RELM; it was designed by Zhang and Wang using solution (5b) of RELM and the block matrix inverse formula.
All the performance evaluations were executed in the MATLAB 7.0.1 environment running on Windows XP with an Intel Core i3-3220 3.3 GHz CPU and 4 GB RAM.
Simulation 1. The unknown identified system is a modified version of the one addressed in [25], obtained by changing the constant and the coefficients of the variables to form a time-varying system, as done in [11]:

$$y(k+1)=\begin{cases}\dfrac{y(k)}{1+y(k)^{2}}+u(k)^{3}, & k \le 100,\\[2mm] \dfrac{2y(k)}{2+y(k)^{2}}+u(k)^{3}, & 100 < k \le 300,\\[2mm] \dfrac{y(k)}{1+2y(k)^{2}}+2u(k)^{3}, & 300 < k.\end{cases} \tag{20}$$
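The three regimes of (20) translate directly into a one-step plant function; the sketch below is an illustrative transcription of the equation, not code from the paper.

```python
def plant(y, u, k):
    """Time-varying system (20): returns y(k+1) from y(k) and u(k).

    The coefficient switches at k = 100 and k = 300 are exactly the
    regime changes of equation (20).
    """
    if k <= 100:
        return y / (1.0 + y * y) + u ** 3
    if k <= 300:
        return 2.0 * y / (2.0 + y * y) + u ** 3
    return y / (1.0 + 2.0 * y * y) + 2.0 * u ** 3

# Same state and input in all three regimes: the mapping itself drifts,
# which is what makes a fixed (non-forgetting) model fall behind.
print(plant(1.0, 1.0, 50), plant(1.0, 1.0, 200), plant(1.0, 1.0, 400))
```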
The system (20) can be expressed as

$$y(k) = f\left(\mathbf{x}(k)\right), \tag{21}$$

where f(·) is a nonlinear function and x(k) is the regression input data vector

$$\mathbf{x}(k) = \left[y(k-1), \ldots, y(k-n_{y}),\; u(k-n_{d}), \ldots, u(k-n_{u})\right], \tag{22}$$

with n_d, n_u, and n_y being model structure parameters. An SLFN is applied to approximate (20); accordingly, (x(k), y(k)) is the learning sample (x_k, y_k) of the SLFN.
Denote k_0 = z + max(n_y, n_u) − n_d and k_1 = k_0 + 350. The input is set as follows:

$$u(k)=\begin{cases}\operatorname{rand}(\cdot), & k \le k_{0},\\[1mm] \sin\!\left(\dfrac{2\pi(k-k_{0})}{120}\right), & k_{0} < k \le k_{1},\\[1mm] \sin\!\left(\dfrac{2\pi(k-k_{1})}{50}\right), & k_{1} < k,\end{cases} \tag{23}$$

where rand(·) generates random numbers which are uniformly distributed in the interval (0, 1).
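Input (23) can be sketched as follows. The usage line assumes z = 50 with n_y = 3, n_u = 2, n_d = 1 (giving k_0 = 52 and k_1 = 402); these concrete numbers are illustrative examples, one of several settings the tables cover.

```python
import numpy as np

def input_signal(k, k0, k1, rng):
    """Excitation input (23): uniform start-up noise, then two
    sinusoids with periods of 120 and 50 samples."""
    if k <= k0:
        return rng.uniform()          # rand(.) in (0, 1)
    if k <= k1:
        return np.sin(2.0 * np.pi * (k - k0) / 120.0)
    return np.sin(2.0 * np.pi * (k - k1) / 50.0)

# Generate the 650 input values used over one simulation run.
rng = np.random.default_rng(0)
u = [input_signal(k, 52, 402, rng) for k in range(1, 651)]
```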
In all experiments, the output of a hidden node with respect to the input x of an SLFN in (1) is set to the sigmoidal additive function, that is, $G(\mathbf{a}, b, \mathbf{x}) = 1/(1+\exp(\mathbf{a}\cdot\mathbf{x}+b))$; the components of a (that is, the input weights) and the bias b are randomly chosen from the range [−2, 2]. In FOKELM the Gaussian kernel function is applied, namely, $K(\mathbf{x}_{i}, \mathbf{x}_{j}) = \exp\left(-\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}/\sigma\right)$.
The root-mean-square error (RMSE) of prediction and the maximal absolute prediction error (MAPE) are regarded as measure indices of model accuracy and stability, respectively:

$$\text{RMSE}=\sqrt{\frac{1}{t_{2}-t_{1}+1}\sum_{i=t_{1}}^{t_{2}}\left(y_{i}-\hat{y}_{i}\right)^{2}}, \qquad \text{MAPE}=\max_{i=t_{1},\ldots,t_{2}}\left|y_{i}-\hat{y}_{i}\right|, \tag{24}$$
where t_1 = k_0 + 1 and t_2 = 650. The simulation is carried on for 650 instances. In model (21), n_d = 1, n_u = 2, and n_y = 3. Owing to the randomness of the parameters a and b and of u(k) during the initial stage (k ≤ k_0), the simulation results vary; for each approach, therefore, the results are averaged over 5 trials. ReOS-ELM does not discard any old sample; thus for it z → ∞.
In offline modeling the training sample set is fixed, so one may search for relatively optimal values of the model parameters of the ELMs. In online modeling of a time-varying system, however, the training sample set keeps changing, and it is difficult to choose optimal parameter values in practice. Therefore each shared parameter of these ELMs is manually set to the same value (for example, the parameter C of every method is set to 250) before their performances are compared.
The RMSE and MAPE of the proposed ELMs and the other aforementioned ELMs are listed in Tables 1 and 2, respectively, and the corresponding running time (i.e., training time plus predicting time) of these ELMs is given in Table 3. From Tables 1–3, one can see the following results.
(1) The RMSE and MAPE of FORELM are smaller than those of FOS-ELM with the same z and L values. The reason is that H^T H in FOS-ELM may be (nearly) singular at some instances, so the recursively computed (H^T H)^{-1} becomes meaningless and unreliable; moreover, when L ≥ z, FOS-ELM cannot work at all owing to its too large RMSE or MAPE (accordingly, "×" represents nullification in Tables 1–3). FORELM does not suffer from this problem. In addition, the RMSE and MAPE of FOKELM are also smaller than those of FOS-ELM with the same z values.
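The reason regularization avoids this failure can be sketched compactly: FORELM maintains the regularized inverse P = (H^T H + I/C)^{-1} recursively, and the I/C term keeps the matrix invertible even when H^T H alone is singular. The code below illustrates only the incremental Sherman-Morrison update under assumed dimensions; the paper's full algorithm also includes the decremental step that removes the oldest sample.

```python
import numpy as np

def sm_add_row(P, h):
    """Sherman-Morrison update of P = (H^T H + I/C)^(-1) after a new
    hidden-layer row h is appended to H (P is symmetric)."""
    Ph = P @ h
    return P - np.outer(Ph, Ph) / (1.0 + h @ Ph)

rng = np.random.default_rng(0)
L, C = 5, 250.0
H = rng.standard_normal((8, L))

# Seed the recursion with the regularization term alone: (I/C)^(-1) = C I.
P = C * np.eye(L)
for h in H:
    P = sm_add_row(P, h)

# The recursion reproduces the direct regularized inverse; without the
# I/C term, H^T H could be singular and the recursion meaningless.
P_direct = np.linalg.inv(H.T @ H + np.eye(L) / C)
print(np.allclose(P, P_direct))  # True
```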
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(b) APE curves of ReOS-ELM
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(c) APE curves of DR-ELM
100 200 300 400 500 6000
002
004
006
008
01A
PE
Time (Ts)
(d) APE curves of FORELM
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(e) APE curves of FOKELM
Figure 2 APE curves of ELMs on process (25) with time-varying 120574 in (26)
Mathematical Problems in Engineering 11
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
The work is supported by the Hunan Provincial Science andTechnology Foundation of China (2011FJ6033)
References
[1] J Park and I W Sandberg ldquoUniversal approximation usingradial basis function networksrdquoNeural Computation vol 3 no2 pp 246ndash257 1991
[2] G B Huang Y Q Chen and H A Babri ldquoClassificationability of single hidden layer feedforward neural networksrdquoIEEE Transactions on Neural Networks vol 11 no 3 pp 799ndash801 2000
[3] S Ferrari and R F Stengel ldquoSmooth function approximationusing neural networksrdquo IEEE Transactions on Neural Networksvol 16 no 1 pp 24ndash38 2005
[4] G B Huang Q Y Zhu and C K Siew ldquoExtreme learningmachine a new learning scheme of feedforward neural net-worksrdquo in Proceedings of the IEEE International Joint Conferenceon Neural Networks vol 2 pp 985ndash990 July 2004
[5] G B Huang Q Y Zhu and C K Siew ldquoExtreme learningmachineTheory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006
[6] N Y Liang G B Huang P Saratchandran and N Sundarara-jan ldquoA fast and accurate online sequential learning algorithmfor feedforward networksrdquo IEEE Transactions on Neural Net-works vol 17 no 6 pp 1411ndash1423 2006
[7] J S Lim ldquoPartitioned online sequential extreme learningmachine for large ordered system modelingrdquo Neurocomputingvol 102 pp 59ndash64 2013
[8] J S Lim S Lee and H S Pang ldquoLow complexity adaptive for-getting factor for online sequential extreme learning machine(OS-ELM) for application to nonstationary system estimationsrdquoNeural Computing and Applications vol 22 no 3-4 pp 569ndash576 2013
[9] Y Gu J F Liu Y Q Chen X L Jiang and H C Yu ldquoTOSELM timeliness online sequential extreme learning machinerdquo Neu-rocomputing vol 128 no 27 pp 119ndash127 2014
[10] J W Zhao Z H Wang and D S Park ldquoOnline sequentialextreme learning machine with forgetting mechanismrdquo Neuro-computing vol 87 no 15 pp 79ndash89 2012
[11] X Zhang and H L Wang ldquoFixed-memory extreme learningmachine and its applicationsrdquo Control and Decision vol 27 no8 pp 1206ndash1210 2012
[12] W Y Deng Q H Zheng and L Chen ldquoRegularized extremelearning machinerdquo in Proceedings of the IEEE Symposium onComputational Intelligence and Data Mining (CIDM rsquo09) pp389ndash395 April 2009
[13] H T Huynh and Y Won ldquoRegularized online sequentiallearning algorithm for single-hidden layer feedforward neuralnetworksrdquo Pattern Recognition Letters vol 32 no 14 pp 1930ndash1935 2011
[14] X Zhang and H L Wang ldquoTime series prediction basedon sequential regularized extreme learning machine and its
applicationrdquoActaAeronautica et Astronautica Sinica vol 32 no7 pp 1302ndash1308 2011
[15] G B Huang D H Wang and Y Lan ldquoExtreme learningmachines a surveyrdquo International Journal of Machine Learningand Cybernetics vol 2 no 2 pp 107ndash122 2011
[16] G B Huang H M Zhou X J Ding and R Zhang ldquoExtremelearning rdquo IEEE Transactions on Systems Man and CyberneticsB Cybernetics vol 42 no 2 pp 513ndash529 2012
[17] V N Vapnik Statistical Learning Theory John Wiley amp SonsNew York NY USA 1998
[18] L Guo J H Hao and M Liu ldquoAn incremental extremelearning machine for online sequential learning problemsrdquoNeurocomputing vol 128 no 27 pp 50ndash58 2014
[19] G B Huang L Chen andC K Siew ldquoUniversal approximationusing incremental constructive feedforward networks withrandom hidden nodesrdquo IEEE Transactions on Neural Networksvol 17 no 4 pp 879ndash892 2006
[20] G B Huang and L Chen ldquoConvex incremental extremelearning machinerdquo Neurocomputing vol 70 no 16ndash18 pp3056ndash3062 2007
[21] G B Huang and L Chen ldquoEnhanced random search basedincremental extreme learning machinerdquo Neurocomputing vol71 no 16ndash18 pp 3460ndash3468 2008
[22] X Zhang and H L Wang ldquoIncremental regularized extremelearning machine based on Cholesky factorization and itsapplication to time series predictionrdquo Acta Physica Sinica vol60 no 11 Article ID 110201 2011
[23] G H Golub and C F van Loan Matrix Computations TheJohns Hopkins University Press Baltimore Md USA 3rdedition 1996
[24] X Zhang and H L Wang ldquoDynamic regression extremelearningmachine and its application to small-sample time seriespredictionrdquo Information andControl vol 40 no 5 pp 704ndash7092011
[25] K S Narendra andK Parthasarathy ldquoIdentification and controlof dynamical systems using neural networksrdquo IEEE Transac-tions on Neural Networks vol 1 no 1 pp 4ndash27 1990
[26] M O Efe E Abadoglu and O Kaynak ldquoA novel analysis anddesign of a neural network assisted nonlinear controller fora bioreactorrdquo International Journal of Robust and NonlinearControl vol 9 no 11 pp 799ndash815 1999
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Mathematical Problems in Engineering
Table 1: RMSE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

z:                 50      70      100     150     200     ∞

L = 15
FOS-ELM (p = 2)    0.3474  0.0967  0.0731  0.0975  0.0817  —
ReOS-ELM           —       —       —       —       —       0.0961
DR-ELM             0.0681  0.0683  0.0712  0.0833  0.0671  —
FORELM (p = 2)     0.0850  0.0708  0.0680  0.0816  0.0718  —

L = 25
FOS-ELM            0.9070  0.1808  0.1324  0.0821  0.0883  —
ReOS-ELM           —       —       —       —       —       0.0804
DR-ELM             0.0543  0.0670  0.0789  0.0635  0.0656  —
FORELM             0.0578  0.0697  0.0833  0.0653  0.0600  —

L = 50
FOS-ELM            ×       ×       ×       9.9541  0.4806  —
ReOS-ELM           —       —       —       —       —       0.0529
DR-ELM             0.0377  0.0351  0.0345  0.0425  0.0457  —
FORELM             0.0344  0.0320  0.0324  0.0457  0.0456  —

L = 100
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.0359
DR-ELM             0.0298  0.0306  0.0308  0.0391  0.0393  —
FORELM             0.0312  0.0263  0.0288  0.0405  0.0378  —

L = 150
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.0351
DR-ELM             0.0268  0.0270  0.0281  0.0417  0.0365  —
FORELM             0.0270  0.0276  0.0284  0.0378  0.0330  —

L = 200
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.0344
DR-ELM             0.0259  0.0284  0.0272  0.0363  0.0351  —
FORELM             0.0263  0.0289  0.0274  0.0355  0.0325  —

L = 400
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.0296
DR-ELM             0.0240  0.0255  0.0249  0.0362  0.0327  —
FORELM             0.0231  0.0248  0.0256  0.0372  0.0310  —

L = 800
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.0292
DR-ELM             0.0233  0.0253  0.0270  0.0374  0.0320  —
FORELM             0.0223  0.0256  0.0252  0.0407  0.0318  —

L = 1000
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.0298
DR-ELM             0.0234  0.0263  0.0278  0.0418  0.0323  —
FORELM             0.0232  0.0238  0.0259  0.0372  0.0310  —

FOKELM             0.0236  0.0251  0.0267  0.0386  0.0331  —

"—" denotes nondefinition or inexistence in the case; "×" denotes nullification owing to a too large RMSE or MAPE.
decremental learning procedures, but ReOS-ELM does one incremental learning procedure only.

(5) Both FORELM and DR-ELM use the regularization trick, so theoretically they should obtain the same or similar prediction effect, and Tables 1 and 2 also show that they have statistically similar simulation results. However, their time efficiencies differ. From Table 3 it can be seen that, in the case where L is small or L ≤ z, FORELM takes less time than DR-ELM. Thus, when L is small and modeling speed is preferred to accuracy, one may try to employ FORELM.

(6) When z is fixed, if L is large enough, there are no significant differences among the RMSE of FORELM, of DR-ELM, and of FOKELM with appropriate parameter values. Table 3 makes it obvious that, as L increases, DR-ELM costs more and more time, whereas the time cost by FOKELM is independent of L. According to the procedures of DR-ELM and FOKELM, if L is large enough to make calculating h(x_i)h(x_j)^T more complex than calculating K(x_i, x_j), FOKELM will take less time than DR-ELM.

Table 2: MAPE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

z:                 50      70      100     150     200     ∞

L = 15
FOS-ELM (p = 2)    5.8327  0.6123  0.5415  1.0993  0.7288  —
ReOS-ELM           —       —       —       —       —       0.3236
DR-ELM             0.3412  0.4318  0.3440  0.7437  0.4007  —
FORELM (p = 2)     0.2689  0.3639  0.3135  0.8583  0.4661  —

L = 25
FOS-ELM            5.0473  1.7010  2.4941  0.9537  0.9272  —
ReOS-ELM           —       —       —       —       —       0.4773
DR-ELM             0.3418  0.3158  0.4577  0.5680  0.3127  —
FORELM             0.3391  0.2568  0.3834  0.6228  0.2908  —

L = 50
FOS-ELM            ×       ×       ×       209.8850  3.9992  —
ReOS-ELM           —       —       —       —       —       0.2551
DR-ELM             0.3876  0.3109  0.3808  0.5486  0.3029  —
FORELM             0.3478  0.2960  0.3145  0.5504  0.3514  —

L = 100
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.2622
DR-ELM             0.2473  0.2748  0.2163  0.5471  0.3381  —
FORELM             0.3229  0.2506  0.2074  0.5501  0.2936  —

L = 150
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.3360
DR-ELM             0.2725  0.2365  0.2133  0.5439  0.3071  —
FORELM             0.2713  0.2418  0.2050  0.5432  0.3133  —

L = 200
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.2687
DR-ELM             0.2539  0.2454  0.2069  0.5443  0.3161  —
FORELM             0.2643  0.2370  0.2029  0.5443  0.3148  —

L = 400
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.2657
DR-ELM             0.2520  0.2456  0.2053  0.5420  0.3260  —
FORELM             0.2277  0.2382  0.2285  0.5430  0.3208  —

L = 800
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.3784
DR-ELM             0.2388  0.2401  0.3112  0.5415  0.3225  —
FORELM             0.2128  0.2325  0.3006  0.5416  0.3247  —

L = 1000
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.4179
DR-ELM             0.2230  0.2550  0.3412  0.5431  0.3283  —
FORELM             0.2228  0.2525  0.3347  0.5412  0.3293  —

FOKELM             0.2397  0.2345  0.3125  0.5421  0.3178  —
To intuitively observe and compare the accuracy and stability of these ELMs, with the same a, b values and the same initial u(k) (k ≤ k_0) signal, the absolute prediction error (APE) curves of one trial of every approach (L = 25, z = 70) are shown in Figure 1. Clearly, Figure 1(a) shows that at a few instances the prediction errors of FOS-ELM are much greater, although they are very small at the other instances; thus FOS-ELM is unstable. Comparing Figures 1(b), 1(c), 1(d), and 1(e), we can see that at most instances the prediction errors of DR-ELM, FORELM, and FOKELM are smaller than those of ReOS-ELM, and the prediction effect of FORELM is similar to that of DR-ELM.

On the whole, FORELM and FOKELM have higher accuracy than FOS-ELM and ReOS-ELM.
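As a side note, the APE, RMSE, and MAPE measures used throughout these comparisons are straightforward to compute; a minimal sketch with made-up prediction arrays (not the paper's data):

```python
import numpy as np

y_true = np.array([1.00, 0.98, 1.05, 1.10])   # hypothetical targets
y_pred = np.array([1.02, 0.97, 1.00, 1.12])   # hypothetical one-step predictions

ape = np.abs(y_pred - y_true)                           # absolute prediction error per instance
rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))         # root mean square error
mape = np.mean(np.abs((y_pred - y_true) / y_true))      # mean absolute percentage error
print(ape, rmse, mape)
```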
Simulation 2. In this subsection the proposed methods are tested on modeling a second-order bioreactor process described by the following differential equations [26]:

\dot{c}_1(t) = -c_1(t)\,u(t) + c_1(t)\,(1 - c_2(t))\,e^{c_2(t)/\gamma},
\dot{c}_2(t) = -c_2(t)\,u(t) + c_1(t)\,(1 - c_2(t))\,e^{c_2(t)/\gamma}\,\frac{1 + \beta}{1 + \beta - c_2(t)},   (25)

where c_1(t) is the cell concentration, which is considered as the output of the process (y(t) = c_1(t)); c_2(t) is the amount of nutrients per unit volume; and u(t) represents the flow rate, the control input (the excitation signal for modeling). Both c_1(t) and c_2(t) take values between zero and one, and u(t) is allowed a magnitude in the interval [0, 2].
Table 3: Running time (seconds) comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23).

z:                 50      70      100     150     200     ∞

L = 15
FOS-ELM (p = 2)    0.0620  0.0620  0.0621  0.0621  0.0622  —
ReOS-ELM           —       —       —       —       —       0.0160
DR-ELM             0.2650  0.3590  0.5781  1.0790  1.6102  —
FORELM (p = 2)     0.0620  0.0620  0.0620  0.0621  0.0622  —

L = 25
FOS-ELM            0.0621  0.0621  0.0621  0.0623  0.0623  —
ReOS-ELM           —       —       —       —       —       0.0167
DR-ELM             0.2500  0.3750  0.6094  1.0630  1.6250  —
FORELM             0.0621  0.0621  0.0621  0.0622  0.0622  —

L = 50
FOS-ELM            ×       ×       ×       0.1250  0.1250  —
ReOS-ELM           —       —       —       —       —       0.0470
DR-ELM             0.2650  0.3906  0.6250  1.1100  1.6560  —
FORELM             0.1400  0.1562  0.1560  0.1250  0.1250  —

L = 100
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.1250
DR-ELM             0.2813  0.4219  0.6719  1.2031  1.7380  —
FORELM             0.3750  0.4063  0.3440  0.3440  0.3280  —

L = 150
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.2344
DR-ELM             0.3125  0.4540  0.7500  1.2970  1.9220  —
FORELM             0.7810  0.8003  0.6560  0.6720  0.6570  —

L = 200
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       0.3901
DR-ELM             0.3290  0.5160  0.7810  1.3440  1.9690  —
FORELM             1.2702  1.2750  1.1250  1.0320  0.9375  —

L = 400
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       1.7350
DR-ELM             0.3906  0.5781  0.8750  1.4680  2.2820  —
FORELM             6.6410  6.4530  6.2030  5.6090  5.1719  —

L = 800
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       8.7190
DR-ELM             0.5470  0.7970  1.2660  2.0938  2.9840  —
FORELM             32.7820 31.6720 30.1400 27.5780 27.7500 —

L = 1000
FOS-ELM            ×       ×       ×       ×       ×       —
ReOS-ELM           —       —       —       —       —       13.6094
DR-ELM             0.6250  1.0000  1.4530  2.4380  3.4060  —
FORELM             51.4530 49.3440 46.8750 43.1880 38.9060 —

FOKELM             0.2811  0.4364  0.6951  1.1720  1.7813  —
In the simulation, with the growth rate parameter \beta = 0.02, the nutrient inhibition parameter \gamma is considered as the time-varying parameter, that is,

\gamma = \begin{cases} 0.3, & 0\,\mathrm{s} < t \le 20\,\mathrm{s}, \\ 0.48, & 20\,\mathrm{s} < t \le 60\,\mathrm{s}, \\ 0.6, & 60\,\mathrm{s} < t, \end{cases}   (26)
Let T_s indicate the sampling interval. Denote t_0 = (z + \max(n_y, n_u) - n_d)\,T_s and t_1 = t_0 + 350\,T_s. The input is set as

u(t) = \begin{cases} 0.5\,\mathrm{rand}(\cdot) + 0.75, & t \le t_0, \\ 0.25\sin\left(2\pi (t - t_0)/24\right) + 1, & t_0 < t \le t_1, \\ 0.25\sin\left(2\pi (t - t_1)/10\right) + 1, & t_1 < t, \end{cases}   (27)

where at every sampling instance rand(\cdot) generates random numbers uniformly distributed in the interval (0, 1).
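Equations (25)–(27) can be combined into a short simulation sketch. Everything below that is not stated in the text is an assumption for illustration: the Euler integrator and its substep count, the model orders n_y = n_u = 2 and n_d = 1, and the initial concentrations; additionally, both concentrations are clamped to the physical range [0, 1] mentioned above to keep the crude integrator stable.

```python
import math
import random

random.seed(0)

beta = 0.02                       # growth rate parameter
Ts = 0.2                          # sampling interval (s)
z, n_y, n_u, n_d = 100, 2, 2, 1   # window length per the text; orders are assumed
t0 = (z + max(n_y, n_u) - n_d) * Ts
t1 = t0 + 350 * Ts

def gamma_of(t):                  # time-varying nutrient inhibition parameter, eq. (26)
    if t <= 20.0:
        return 0.30
    if t <= 60.0:
        return 0.48
    return 0.60

def u_of(t):                      # excitation input, eq. (27)
    if t <= t0:
        return 0.5 * random.random() + 0.75
    if t <= t1:
        return 0.25 * math.sin(2 * math.pi * (t - t0) / 24) + 1.0
    return 0.25 * math.sin(2 * math.pi * (t - t1) / 10) + 1.0

def clamp(v):                     # keep concentrations in their physical range [0, 1]
    return min(max(v, 0.0), 1.0)

def step(c1, c2, u, g, dt):       # one Euler substep of eq. (25)
    growth = c1 * (1.0 - c2) * math.exp(c2 / g)
    dc1 = -c1 * u + growth
    dc2 = -c2 * u + growth * (1.0 + beta) / (1.0 + beta - c2)
    return clamp(c1 + dt * dc1), clamp(c2 + dt * dc2)

c1, c2, t = 0.5, 0.1, 0.0         # assumed initial concentrations
ys = []
for _ in range(600):              # 600 samples of the output y(t) = c1(t)
    u = u_of(t)
    for _ in range(20):           # 20 Euler substeps per sampling interval
        c1, c2 = step(c1, c2, u, gamma_of(t), Ts / 20)
    t += Ts
    ys.append(c1)
print(len(ys))
```

The sampled sequence ys would then be arranged into regressor/target pairs for the online ELMs, exactly as in Simulation 1.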
Set T_s = 0.2 s. With the same a, b, and initial u(t), the APE curves of every approach (L = 20, z = 100) for one trial are drawn in Figure 2. Clearly, on the whole, the APE curves of FORELM and FOKELM lie below those of FOS-ELM and ReOS-ELM, and FORELM has nearly the same prediction effect as DR-ELM. Further, the RMSEs of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM are 0.096241, 0.012203, 0.007439, 0.007619, and 0.007102, respectively.

Through many comparative trials we may attain the same results as the ones in Simulation 1.
Figure 1: APE curves of ELMs on process (20) with input (23). Panels (a)–(e) show the APE curves of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM, respectively (horizontal axis: sample sequence; vertical axis: APE).

Figure 2: APE curves of ELMs on process (25) with time-varying γ in (26). Panels (a)–(e) show the APE curves of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM, respectively (horizontal axis: time (T_s); vertical axis: APE).

5 Conclusions

ReOS-ELM (i.e., SRELM or LS-IELM) can yield good generalization models and will not suffer from matrix singularity or ill-posed problems, but it is unsuitable in time-varying applications. On the other hand, FOS-ELM, thanks to its forgetting mechanism, can reflect the timeliness of data and train SLFN in nonstationary environments, but it may encounter the matrix singularity problem and run unstably.

In this paper the forgetting mechanism is incorporated into ReOS-ELM, and we obtain FORELM, which blends the advantages of ReOS-ELM and FOS-ELM. In addition, the forgetting mechanism is also added to KB-IELM; consequently, FOKELM is obtained, which overcomes the matrix expansion problem of KB-IELM.

Performance comparison between the proposed ELMs and other ELMs was carried out on identification of time-varying systems in the aspects of accuracy, stability, and computational complexity. The experimental results show that, statistically, FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments. When the number L of hidden nodes is small, or L ≤ the length z of the sliding time window, FORELM has a time-efficiency advantage over DR-ELM. On the other hand, if L is large enough, FOKELM will be faster than DR-ELM.
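The sliding-window update underlying the forgetting mechanism can be illustrated with a toy sketch: maintain P = (λI + HᵀH)⁻¹ via two Sherman–Morrison rank-one steps, first adding the newest hidden-layer output and then discarding the oldest one, while the vector B = Hᵀy is updated accordingly. This is a rough illustration under assumed sizes and a toy data stream, not the authors' exact FORELM procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
L, lam, win = 10, 0.1, 30            # hidden nodes, regularization, window length (assumed)

W = rng.standard_normal((L, 3))      # random input weights (3 inputs assumed)
b = rng.standard_normal(L)           # random hidden biases

def hidden(x):                       # sigmoid hidden-layer output h(x)
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def sm_add(P, h):                    # Sherman-Morrison update for A <- A + h h^T
    Ph = P @ h
    return P - np.outer(Ph, Ph) / (1.0 + h @ Ph)

def sm_remove(P, h):                 # Sherman-Morrison downdate for A <- A - h h^T
    Ph = P @ h
    return P + np.outer(Ph, Ph) / (1.0 - h @ Ph)

P = np.eye(L) / lam                  # P = (lam*I + H^T H)^{-1} for an empty window
B = np.zeros(L)                      # B = H^T y
window = []                          # (h, y) pairs currently in the sliding window

for _ in range(100):                 # stream of toy samples
    x = rng.random(3)
    y = x.sum()                      # toy target
    h = hidden(x)
    P = sm_add(P, h)                 # learn the newest sample
    B = B + h * y
    window.append((h, y))
    if len(window) > win:            # forget the oldest sample
        h_old, y_old = window.pop(0)
        P = sm_remove(P, h_old)
        B = B - h_old * y_old

beta = P @ B                         # output weights of the windowed regularized ELM

# sanity check against the direct batch solution on the current window
H = np.array([h for h, _ in window])
Y = np.array([y for _, y in window])
beta_batch = np.linalg.solve(lam * np.eye(L) + H.T @ H, H.T @ Y)
print(np.allclose(beta, beta_batch, atol=1e-4))
```

The recursive P stays equal (up to rounding) to the batch inverse on the current window, which is the property the add/remove rank-one steps are meant to preserve.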
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The work is supported by the Hunan Provincial Science and Technology Foundation of China (2011FJ6033).
References
[1] J. Park and I. W. Sandberg, "Universal approximation using radial basis function networks," Neural Computation, vol. 3, no. 2, pp. 246–257, 1991.
[2] G. B. Huang, Y. Q. Chen, and H. A. Babri, "Classification ability of single hidden layer feedforward neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 799–801, 2000.
[3] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[4] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985–990, July 2004.
[5] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[6] N. Y. Liang, G. B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[7] J. S. Lim, "Partitioned online sequential extreme learning machine for large ordered system modeling," Neurocomputing, vol. 102, pp. 59–64, 2013.
[8] J. S. Lim, S. Lee, and H. S. Pang, "Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations," Neural Computing and Applications, vol. 22, no. 3-4, pp. 569–576, 2013.
[9] Y. Gu, J. F. Liu, Y. Q. Chen, X. L. Jiang, and H. C. Yu, "TOSELM: timeliness online sequential extreme learning machine," Neurocomputing, vol. 128, pp. 119–127, 2014.
[10] J. W. Zhao, Z. H. Wang, and D. S. Park, "Online sequential extreme learning machine with forgetting mechanism," Neurocomputing, vol. 87, pp. 79–89, 2012.
[11] X. Zhang and H. L. Wang, "Fixed-memory extreme learning machine and its applications," Control and Decision, vol. 27, no. 8, pp. 1206–1210, 2012.
[12] W. Y. Deng, Q. H. Zheng, and L. Chen, "Regularized extreme learning machine," in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 389–395, April 2009.
[13] H. T. Huynh and Y. Won, "Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks," Pattern Recognition Letters, vol. 32, no. 14, pp. 1930–1935, 2011.
[14] X. Zhang and H. L. Wang, "Time series prediction based on sequential regularized extreme learning machine and its application," Acta Aeronautica et Astronautica Sinica, vol. 32, no. 7, pp. 1302–1308, 2011.
[15] G. B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[16] G. B. Huang, H. M. Zhou, X. J. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[17] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
[18] L. Guo, J. H. Hao, and M. Liu, "An incremental extreme learning machine for online sequential learning problems," Neurocomputing, vol. 128, pp. 50–58, 2014.
[19] G. B. Huang, L. Chen, and C. K. Siew, "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
[20] G. B. Huang and L. Chen, "Convex incremental extreme learning machine," Neurocomputing, vol. 70, no. 16–18, pp. 3056–3062, 2007.
[21] G. B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine," Neurocomputing, vol. 71, no. 16–18, pp. 3460–3468, 2008.
[22] X. Zhang and H. L. Wang, "Incremental regularized extreme learning machine based on Cholesky factorization and its application to time series prediction," Acta Physica Sinica, vol. 60, no. 11, Article ID 110201, 2011.
[23] G. H. Golub and C. F. van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[24] X. Zhang and H. L. Wang, "Dynamic regression extreme learning machine and its application to small-sample time series prediction," Information and Control, vol. 40, no. 5, pp. 704–709, 2011.
[25] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[26] M. O. Efe, E. Abadoglu, and O. Kaynak, "A novel analysis and design of a neural network assisted nonlinear controller for a bioreactor," International Journal of Robust and Nonlinear Control, vol. 9, no. 11, pp. 799–815, 1999.
Mathematical Problems in Engineering 7
Table 2 MAPE comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23)
119911 50 70 100 150 200 infin
119871 = 15
FOS-ELM (119901 = 2) 58327 06123 05415 10993 07288 mdashReOS-ELM mdash mdash mdash mdash mdash 03236DR-ELM 03412 04318 03440 07437 04007 mdash
FORELM (119901 = 2) 02689 03639 03135 08583 04661 mdash
119871 = 25
FOS-ELM 50473 17010 24941 09537 09272 mdashReOS-ELM mdash mdash mdash mdash mdash 04773DR-ELM 03418 03158 04577 05680 03127 mdashFORELM 03391 02568 03834 06228 02908 mdash
119871 = 50
FOS-ELM times times times 2098850 39992 mdashReOS-ELM mdash mdash mdash mdash mdash 02551DR-ELM 03876 03109 03808 05486 03029 mdashFORELM 03478 02960 03145 05504 03514 mdash
119871 = 100
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 02622DR-ELM 02473 02748 02163 05471 03381 mdashFORELM 03229 02506 02074 05501 02936 mdash
119871 = 150
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 03360DR-ELM 02725 02365 02133 05439 03071 mdashFORELM 02713 02418 02050 05432 03133 mdash
119871 = 200
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 02687DR-ELM 02539 02454 02069 05443 03161 mdashFORELM 02643 02370 02029 05443 03148 mdash
119871 = 400
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 02657DR-ELM 02520 02456 02053 05420 03260 mdashFORELM 02277 02382 02285 05430 03208 mdash
119871 = 800
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 03784DR-ELM 02388 02401 03112 05415 03225 mdashFORELM 02128 02325 03006 05416 03247 mdash
119871 = 1000
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 04179DR-ELM 02230 02550 03412 05431 03283 mdashFORELM 02228 02525 03347 05412 03293 mdash
FOKELM 02397 02345 03125 05421 03178 mdash
complex than calculating 119870(x119894 x119895) FOKELM will
take less time than DR-ELM
To intuitively observe and compare the accuracy andstability of these ELMs with the same a 119887 values and initial119906(119896) (119896 le 119896
0) signal absolute prediction error (APE) curves
of one trial of every approach (119871 = 25 119911 = 70) are shown inFigure 1 Clearly Figure 1(a) shows that at a few instancesprediction errors of FOS-ELM are much greater althoughprediction errors are very small at other instances thus FOS-ELM is unstable Comparing Figures 1(b) 1(c) 1(d) and 1(e)we can see that at most instances prediction errors of DR-ELM FORELM and FOKELM are smaller than those ofReOS-ELM and prediction effect of FORELM is similar tothat of DR-ELM
On the whole FORELM and FOKELM have higheraccuracy than FOS-ELM and ReOS-ELM
Simulation 2 In this subsection the proposed methods aretested on modeling for a second-order bioreactor processdescribed by the following differential equations [26]
1198881 (119905) = minus119888
1 (119905) 119906 (119905) + 1198881 (119905) (1 minus 119888
2 (119905)) 1198901198882(119905)120574
1198882 (119905) = minus119888
2 (119905) 119906 (119905) + 1198881 (119905) (1 minus 119888
2 (119905)) 1198901198882(119905)120574
1 + 120573
1 + 120573 minus 1198882 (119905)
(25)
where 1198881(119905) is the cell concentration that is considered as the
output of the process (119910(119905) = 1198881(119905)) 119888
2(119905) is the amount of
nutrients per unit volume and 119906(119905) represents the flow rateas the control input (the excitation signal for modeling) the1198881(119905) and 119888
2(119905) can take values between zero and one and 119906(119905)
is allowed a magnitude in interval [0 2]
8 Mathematical Problems in Engineering
Table 3: Running time (seconds) comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23) ("—": not applicable; "×": no valid result obtained in that setting).

z                  50       70       100      150      200      ∞

L = 15
FOS-ELM (p = 2)    0.0620   0.0620   0.0621   0.0621   0.0622   —
ReOS-ELM           —        —        —        —        —        0.0160
DR-ELM             0.2650   0.3590   0.5781   1.0790   1.6102   —
FORELM (p = 2)     0.0620   0.0620   0.0620   0.0621   0.0622   —

L = 25
FOS-ELM            0.0621   0.0621   0.0621   0.0623   0.0623   —
ReOS-ELM           —        —        —        —        —        0.0167
DR-ELM             0.2500   0.3750   0.6094   1.0630   1.6250   —
FORELM             0.0621   0.0621   0.0621   0.0622   0.0622   —

L = 50
FOS-ELM            ×        ×        ×        0.1250   0.1250   —
ReOS-ELM           —        —        —        —        —        0.0470
DR-ELM             0.2650   0.3906   0.6250   1.1100   1.6560   —
FORELM             0.1400   0.1562   0.1560   0.1250   0.1250   —

L = 100
FOS-ELM            ×        ×        ×        ×        ×        —
ReOS-ELM           —        —        —        —        —        0.1250
DR-ELM             0.2813   0.4219   0.6719   1.2031   1.7380   —
FORELM             0.3750   0.4063   0.3440   0.3440   0.3280   —

L = 150
FOS-ELM            ×        ×        ×        ×        ×        —
ReOS-ELM           —        —        —        —        —        0.2344
DR-ELM             0.3125   0.4540   0.7500   1.2970   1.9220   —
FORELM             0.7810   0.8003   0.6560   0.6720   0.6570   —

L = 200
FOS-ELM            ×        ×        ×        ×        ×        —
ReOS-ELM           —        —        —        —        —        0.3901
DR-ELM             0.3290   0.5160   0.7810   1.3440   1.9690   —
FORELM             1.2702   1.2750   1.1250   1.0320   0.9375   —

L = 400
FOS-ELM            ×        ×        ×        ×        ×        —
ReOS-ELM           —        —        —        —        —        1.7350
DR-ELM             0.3906   0.5781   0.8750   1.4680   2.2820   —
FORELM             6.6410   6.4530   6.2030   5.6090   5.1719   —

L = 800
FOS-ELM            ×        ×        ×        ×        ×        —
ReOS-ELM           —        —        —        —        —        8.7190
DR-ELM             0.5470   0.7970   1.2660   2.0938   2.9840   —
FORELM             32.7820  31.6720  30.1400  27.5780  27.7500  —

L = 1000
FOS-ELM            ×        ×        ×        ×        ×        —
ReOS-ELM           —        —        —        —        —        13.6094
DR-ELM             0.6250   1.0000   1.4530   2.4380   3.4060   —
FORELM             51.4530  49.3440  46.8750  43.1880  38.9060  —

FOKELM             0.2811   0.4364   0.6951   1.1720   1.7813   —
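The timing trend in Table 3 — FORELM's cost growing with L while DR-ELM's grows mainly with z — is consistent with FORELM maintaining an L×L inverse through rank-one Sherman-Morrison updates, which cost O(L²) each instead of the O(L³) of a fresh inversion. A generic sketch of such an update (illustrative; not the paper's exact recursion):

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Given A^{-1}, return (A + u v^T)^{-1} via the Sherman-Morrison formula.

    Costs O(L^2) per rank-one update, versus O(L^3) for re-inverting.
    """
    u = u.reshape(-1, 1)
    v = v.reshape(-1, 1)
    Au = A_inv @ u                     # L x 1
    vA = v.T @ A_inv                   # 1 x L
    denom = 1.0 + (v.T @ Au).item()    # scalar 1 + v^T A^{-1} u
    return A_inv - (Au @ vA) / denom

# check against a direct inverse on a small random example
rng = np.random.default_rng(0)
L = 5
A = np.eye(L) + 0.1 * rng.standard_normal((L, L))
u = rng.standard_normal(L)
v = rng.standard_normal(L)
fast = sherman_morrison_update(np.linalg.inv(A), u, v)
direct = np.linalg.inv(A + np.outer(u, v))
print(np.allclose(fast, direct))
```

Adding a new sample and deleting the oldest one are two such rank-one corrections, which is why FORELM's per-step time in Table 3 is nearly independent of z but rises steeply with L.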
In the simulation, with the growth rate parameter β = 0.02, the nutrient inhibition parameter γ is considered as the time-varying parameter; that is,

γ = 0.3,   0 s < t ≤ 20 s;
    0.48,  20 s < t ≤ 60 s;
    0.6,   60 s < t.
(26)
Let Ts indicate the sampling interval. Denote t0 = (z + max(ny, nu) − nd)Ts and t1 = t0 + 350Ts. The input is set as

u(t) = 0.5 rand(·) + 0.75,                 t ≤ t0;
       0.25 sin(2π(t − t0)/24) + 1,        t0 < t ≤ t1;
       0.25 sin(2π(t − t1)/10) + 1,        t1 < t,
(27)
where, at every sampling instant, rand(·) generates random numbers uniformly distributed in the interval (0, 1).
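Under the stated settings plus a few assumptions of ours (explicit Euler integration, illustrative initial concentrations, and t0 = 20 s, t1 = 90 s, which follow the definitions above for Ts = 0.2 s and z ≈ 100), the data-generating process (25)-(27) can be sketched as:

```python
import numpy as np

def gamma_of(t):
    # Time-varying nutrient inhibition parameter, (26)
    if t <= 20.0:
        return 0.3
    elif t <= 60.0:
        return 0.48
    return 0.6

def input_u(t, t0, t1, rng):
    # Excitation signal, (27)
    if t <= t0:
        return 0.5 * rng.random() + 0.75
    elif t <= t1:
        return 0.25 * np.sin(2 * np.pi * (t - t0) / 24) + 1.0
    return 0.25 * np.sin(2 * np.pi * (t - t1) / 10) + 1.0

def simulate(Ts=0.2, steps=600, beta=0.02, t0=20.0, t1=90.0, seed=0):
    # Euler discretization of the bioreactor ODEs (25); t1 = t0 + 350*Ts
    rng = np.random.default_rng(seed)
    c1, c2 = 0.1, 0.1          # illustrative initial concentrations
    us, ys = [], []
    for k in range(steps):
        t = k * Ts
        u = input_u(t, t0, t1, rng)
        g = np.exp(c2 / gamma_of(t))
        dc1 = -c1 * u + c1 * (1 - c2) * g
        dc2 = -c2 * u + c1 * (1 - c2) * g * (1 + beta) / (1 + beta - c2)
        # c1 and c2 are confined to [0, 1], as stated in the process description
        c1 = min(max(c1 + Ts * dc1, 0.0), 1.0)
        c2 = min(max(c2 + Ts * dc2, 0.0), 1.0)
        us.append(u)
        ys.append(c1)          # the process output is y(t) = c1(t)
    return np.array(us), np.array(ys)

u, y = simulate()
print(u.shape, y.shape)
```

The (u, y) sequences produced this way are the kind of input-output data on which the ELM variants are trained and compared in this simulation.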
Set Ts = 0.2 s. With the same a, b, and initial u(t), APE curves of every approach (L = 20, z = 100) for one trial are drawn in Figure 2. Clearly, on the whole, the APE curves of FORELM and FOKELM are lower than those of FOS-ELM and ReOS-ELM, and FORELM has nearly the same prediction effect as DR-ELM. Further, the RMSEs of FOS-ELM, ReOS-ELM, DR-ELM, FORELM, and FOKELM are 0.096241, 0.012203, 0.007439, 0.007619, and 0.007102, respectively.
Through many comparative trials, we may attain the same results as those in Simulation 1.
[Figure 1: APE curves of the ELMs on process (20) with input (23); horizontal axis: sample sequence. (a) FOS-ELM; (b) ReOS-ELM; (c) DR-ELM; (d) FORELM; (e) FOKELM.]

5 Conclusions

ReOS-ELM (i.e., SRELM or LS-IELM) can yield models with good generalization and does not suffer from matrix singularity or ill-posed problems, but it is unsuitable for time-varying applications. On the other hand, FOS-ELM, thanks to its forgetting mechanism, can reflect the timeliness of data and train SLFN in nonstationary environments, but it may encounter the matrix singularity problem and run unstably.

In this paper, the forgetting mechanism is incorporated into ReOS-ELM to obtain FORELM, which blends the advantages of ReOS-ELM and FOS-ELM. In addition, the forgetting mechanism is also added to KB-IELM; consequently, FOKELM is obtained, which overcomes the matrix expansion problem of KB-IELM.

Performance comparison between the proposed ELMs and other ELMs was carried out on identification of time-varying systems, in the aspects of accuracy, stability, and computational complexity. The experimental results show that, statistically, FORELM and FOKELM have better stability than FOS-ELM and higher accuracy than ReOS-ELM in nonstationary environments. When the number L of hidden nodes is small, or L ≤ z (the length of the sliding time window), FORELM has a time efficiency advantage over DR-ELM; on the other hand, if L is large enough, FOKELM is faster than DR-ELM.
[Figure 2: APE curves of the ELMs on process (25) with time-varying γ in (26); horizontal axis: time (in units of Ts). (a) FOS-ELM; (b) ReOS-ELM; (c) DR-ELM; (d) FORELM; (e) FOKELM.]
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work is supported by the Hunan Provincial Science and Technology Foundation of China (2011FJ6033).
References
[1] J. Park and I. W. Sandberg, "Universal approximation using radial basis function networks," Neural Computation, vol. 3, no. 2, pp. 246-257, 1991.
[2] G. B. Huang, Y. Q. Chen, and H. A. Babri, "Classification ability of single hidden layer feedforward neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 799-801, 2000.
[3] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24-38, 2005.
[4] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985-990, July 2004.
[5] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1-3, pp. 489-501, 2006.
[6] N. Y. Liang, G. B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411-1423, 2006.
[7] J. S. Lim, "Partitioned online sequential extreme learning machine for large ordered system modeling," Neurocomputing, vol. 102, pp. 59-64, 2013.
[8] J. S. Lim, S. Lee, and H. S. Pang, "Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations," Neural Computing and Applications, vol. 22, no. 3-4, pp. 569-576, 2013.
[9] Y. Gu, J. F. Liu, Y. Q. Chen, X. L. Jiang, and H. C. Yu, "TOSELM: timeliness online sequential extreme learning machine," Neurocomputing, vol. 128, pp. 119-127, 2014.
[10] J. W. Zhao, Z. H. Wang, and D. S. Park, "Online sequential extreme learning machine with forgetting mechanism," Neurocomputing, vol. 87, pp. 79-89, 2012.
[11] X. Zhang and H. L. Wang, "Fixed-memory extreme learning machine and its applications," Control and Decision, vol. 27, no. 8, pp. 1206-1210, 2012.
[12] W. Y. Deng, Q. H. Zheng, and L. Chen, "Regularized extreme learning machine," in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 389-395, April 2009.
[13] H. T. Huynh and Y. Won, "Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks," Pattern Recognition Letters, vol. 32, no. 14, pp. 1930-1935, 2011.
[14] X. Zhang and H. L. Wang, "Time series prediction based on sequential regularized extreme learning machine and its application," Acta Aeronautica et Astronautica Sinica, vol. 32, no. 7, pp. 1302-1308, 2011.
[15] G. B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107-122, 2011.
[16] G. B. Huang, H. M. Zhou, X. J. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513-529, 2012.
[17] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
[18] L. Guo, J. H. Hao, and M. Liu, "An incremental extreme learning machine for online sequential learning problems," Neurocomputing, vol. 128, pp. 50-58, 2014.
[19] G. B. Huang, L. Chen, and C. K. Siew, "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006.
[20] G. B. Huang and L. Chen, "Convex incremental extreme learning machine," Neurocomputing, vol. 70, no. 16-18, pp. 3056-3062, 2007.
[21] G. B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine," Neurocomputing, vol. 71, no. 16-18, pp. 3460-3468, 2008.
[22] X. Zhang and H. L. Wang, "Incremental regularized extreme learning machine based on Cholesky factorization and its application to time series prediction," Acta Physica Sinica, vol. 60, no. 11, Article ID 110201, 2011.
[23] G. H. Golub and C. F. van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[24] X. Zhang and H. L. Wang, "Dynamic regression extreme learning machine and its application to small-sample time series prediction," Information and Control, vol. 40, no. 5, pp. 704-709, 2011.
[25] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4-27, 1990.
[26] M. O. Efe, E. Abadoglu, and O. Kaynak, "A novel analysis and design of a neural network assisted nonlinear controller for a bioreactor," International Journal of Robust and Nonlinear Control, vol. 9, no. 11, pp. 799-815, 1999.
8 Mathematical Problems in Engineering
Table 3 Running time (second) comparison between the proposed ELMs and other ELMs on identification of process (20) with input (23)
119911 50 70 100 150 200 infin
119871 = 15
FOS-ELM (119901 = 2) 00620 00620 00621 00621 00622 mdashReOS-ELM mdash mdash mdash mdash mdash 00160DR-ELM 02650 03590 05781 10790 16102 mdash
FORELM (119901 = 2) 00620 00620 00620 00621 00622 mdash
119871 = 25
FOS-ELM 00621 00621 00621 00623 00623 mdashReOS-ELM mdash mdash mdash mdash mdash 00167DR-ELM 02500 03750 06094 10630 16250 mdashFORELM 00621 00621 00621 00622 00622 mdash
119871 = 50
FOS-ELM times times times 01250 01250 mdashReOS-ELM mdash mdash mdash mdash mdash 00470DR-ELM 02650 03906 06250 11100 16560 mdashFORELM 01400 01562 01560 01250 01250 mdash
119871 = 100
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 01250DR-ELM 02813 04219 06719 12031 17380 mdashFORELM 03750 04063 03440 03440 03280 mdash
119871 = 150
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 02344DR-ELM 03125 04540 07500 12970 19220 mdashFORELM 07810 08003 06560 06720 06570 mdash
119871 = 200
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 03901DR-ELM 03290 05160 07810 13440 19690 mdashFORELM 12702 12750 11250 10320 09375 mdash
119871 = 400
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 17350DR-ELM 03906 05781 08750 14680 22820 mdashFORELM 66410 64530 62030 56090 51719 mdash
119871 = 800
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 87190DR-ELM 05470 07970 12660 20938 29840 mdashFORELM 327820 316720 301400 275780 277500 mdash
119871 = 1000
FOS-ELM times times times times times mdashReOS-ELM mdash mdash mdash mdash mdash 136094DR-ELM 06250 10000 14530 24380 34060 mdashFORELM 514530 493440 468750 431880 389060 mdash
FOKELM 02811 04364 06951 11720 17813 mdash
In the simulation with the growth rate parameter 120573 =
002 the nutrient inhibition parameter 120574 is considered as thetime-varying parameter that is
120574 =
03 0 s lt 119905 le 20 s048 20 s lt 119905 le 60 s06 60 s lt 119905
(26)
Let 119879119904indicate sampling interval Denote 119905
0= (119911 +
max(119899119910 119899119906) minus 119899119889)119879119904 1199051= 1199050+ 350119879
119904 The input is set below
119906 (119905) =
05 rand (sdot) + 075 119905 le 1199050
025 sin(2120587 (119905 minus 119905
0)
24) + 1 119905
0lt 119905 le 119905
1
025 sin(2120587 (119905 minus 119905
1)
10) + 1 119905
1lt 119905
(27)
where at every sampling instance rand(sdot) generates randomnumbers which are uniformly distributed in the interval(0 1)
Set 119879119904
= 02 s With the same 119886 119887 and initial 119906(119905)APE curves of every approach (119871 = 20 119911 = 100) forone trial are drawn in Figure 2 Clearly on the whole APEcurves of FORELM and FOKELM are smaller than thoseof FOS-ELM and ReOS-ELM and FORELM has nearlythe same prediction effect as DR-ELM Further RMSE ofFOS-ELM ReOS-ELM DR-ELM FORELM and FOKELMare 0096241 0012203 0007439 0007619 and 0007102respectively
Throughmany comparative trials wemay attain the sameresults as the ones in Simulation 1
5 Conclusions
ReOS-ELM (ie SRELM or LS-IELM) can yield good gener-alization models and will not suffer from matrix singularity
Mathematical Problems in Engineering 9
100 200 300 400 500 6000
05
1
15
Sample sequence
APE
(a) APE curves of FOS-ELM
100 200 300 400 500 6000
01
02
03
04
05
Sample sequence
APE
(b) APE curves of ReOS-ELM
100 200 300 400 500 6000
01
02
03
04
05
Sample sequence
APE
(c) APE curves of DR-ELM
100 200 300 400 500 6000
01
02
03
04
05
Sample sequence
APE
(d) APE curves of FORELM
100 200 300 400 500 6000
01
02
03
04
05
Sample sequence
APE
(e) APE curves of FOKELM
Figure 1 APE curves of ELMs on process (20) with input (23)
or ill-posed problems but it is unsuitable in time-varyingapplications On the other hand FOS-ELM thanks to itsforgetting mechanism can reflect the timeliness of dataand train SLFN in nonstationary environments but it mayencounter the matrix singularity problem and run unstably
In the paper the forgetting mechanism is incorporated toReOS-ELM and we obtain FORELM which blends advan-tages of ReOS-ELM and FOS-ELM In addition the forget-ting mechanism also is added to KB-IELM consequentlyFOKELM is obtained which can overcomematrix expansionproblem of KB-IELM
Performance comparison between the proposed ELMsand other ELMs was carried out on identification of time-varying systems in the aspects of accuracy stability and com-putational complexity The experimental results show thatFORELM and FOKELM have better stability than FOS-ELMand have higher accuracy than ReOS-ELM in nonstationaryenvironments statistically When the number 119871 of hiddennodes is small or 119871 le the length 119911 of sliding time windowsFORELM has time efficiency superiority over DR-ELM Onthe other hand if 119871 is large enough FOKELM will be fasterthan DR-ELM
10 Mathematical Problems in Engineering
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(a) APE curves of FOS-ELM
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(b) APE curves of ReOS-ELM
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(c) APE curves of DR-ELM
100 200 300 400 500 6000
002
004
006
008
01A
PE
Time (Ts)
(d) APE curves of FORELM
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(e) APE curves of FOKELM
Figure 2 APE curves of ELMs on process (25) with time-varying 120574 in (26)
Mathematical Problems in Engineering 11
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
The work is supported by the Hunan Provincial Science andTechnology Foundation of China (2011FJ6033)
References
[1] J Park and I W Sandberg ldquoUniversal approximation usingradial basis function networksrdquoNeural Computation vol 3 no2 pp 246ndash257 1991
[2] G B Huang Y Q Chen and H A Babri ldquoClassificationability of single hidden layer feedforward neural networksrdquoIEEE Transactions on Neural Networks vol 11 no 3 pp 799ndash801 2000
[3] S Ferrari and R F Stengel ldquoSmooth function approximationusing neural networksrdquo IEEE Transactions on Neural Networksvol 16 no 1 pp 24ndash38 2005
[4] G B Huang Q Y Zhu and C K Siew ldquoExtreme learningmachine a new learning scheme of feedforward neural net-worksrdquo in Proceedings of the IEEE International Joint Conferenceon Neural Networks vol 2 pp 985ndash990 July 2004
[5] G B Huang Q Y Zhu and C K Siew ldquoExtreme learningmachineTheory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006
[6] N Y Liang G B Huang P Saratchandran and N Sundarara-jan ldquoA fast and accurate online sequential learning algorithmfor feedforward networksrdquo IEEE Transactions on Neural Net-works vol 17 no 6 pp 1411ndash1423 2006
[7] J S Lim ldquoPartitioned online sequential extreme learningmachine for large ordered system modelingrdquo Neurocomputingvol 102 pp 59ndash64 2013
[8] J S Lim S Lee and H S Pang ldquoLow complexity adaptive for-getting factor for online sequential extreme learning machine(OS-ELM) for application to nonstationary system estimationsrdquoNeural Computing and Applications vol 22 no 3-4 pp 569ndash576 2013
[9] Y Gu J F Liu Y Q Chen X L Jiang and H C Yu ldquoTOSELM timeliness online sequential extreme learning machinerdquo Neu-rocomputing vol 128 no 27 pp 119ndash127 2014
[10] J W Zhao Z H Wang and D S Park ldquoOnline sequentialextreme learning machine with forgetting mechanismrdquo Neuro-computing vol 87 no 15 pp 79ndash89 2012
[11] X Zhang and H L Wang ldquoFixed-memory extreme learningmachine and its applicationsrdquo Control and Decision vol 27 no8 pp 1206ndash1210 2012
[12] W Y Deng Q H Zheng and L Chen ldquoRegularized extremelearning machinerdquo in Proceedings of the IEEE Symposium onComputational Intelligence and Data Mining (CIDM rsquo09) pp389ndash395 April 2009
[13] H T Huynh and Y Won ldquoRegularized online sequentiallearning algorithm for single-hidden layer feedforward neuralnetworksrdquo Pattern Recognition Letters vol 32 no 14 pp 1930ndash1935 2011
[14] X Zhang and H L Wang ldquoTime series prediction basedon sequential regularized extreme learning machine and its
applicationrdquoActaAeronautica et Astronautica Sinica vol 32 no7 pp 1302ndash1308 2011
[15] G B Huang D H Wang and Y Lan ldquoExtreme learningmachines a surveyrdquo International Journal of Machine Learningand Cybernetics vol 2 no 2 pp 107ndash122 2011
[16] G B Huang H M Zhou X J Ding and R Zhang ldquoExtremelearning rdquo IEEE Transactions on Systems Man and CyberneticsB Cybernetics vol 42 no 2 pp 513ndash529 2012
[17] V N Vapnik Statistical Learning Theory John Wiley amp SonsNew York NY USA 1998
[18] L Guo J H Hao and M Liu ldquoAn incremental extremelearning machine for online sequential learning problemsrdquoNeurocomputing vol 128 no 27 pp 50ndash58 2014
[19] G B Huang L Chen andC K Siew ldquoUniversal approximationusing incremental constructive feedforward networks withrandom hidden nodesrdquo IEEE Transactions on Neural Networksvol 17 no 4 pp 879ndash892 2006
[20] G B Huang and L Chen ldquoConvex incremental extremelearning machinerdquo Neurocomputing vol 70 no 16ndash18 pp3056ndash3062 2007
[21] G B Huang and L Chen ldquoEnhanced random search basedincremental extreme learning machinerdquo Neurocomputing vol71 no 16ndash18 pp 3460ndash3468 2008
[22] X Zhang and H L Wang ldquoIncremental regularized extremelearning machine based on Cholesky factorization and itsapplication to time series predictionrdquo Acta Physica Sinica vol60 no 11 Article ID 110201 2011
[23] G H Golub and C F van Loan Matrix Computations TheJohns Hopkins University Press Baltimore Md USA 3rdedition 1996
[24] X Zhang and H L Wang ldquoDynamic regression extremelearningmachine and its application to small-sample time seriespredictionrdquo Information andControl vol 40 no 5 pp 704ndash7092011
[25] K S Narendra andK Parthasarathy ldquoIdentification and controlof dynamical systems using neural networksrdquo IEEE Transac-tions on Neural Networks vol 1 no 1 pp 4ndash27 1990
[26] M O Efe E Abadoglu and O Kaynak ldquoA novel analysis anddesign of a neural network assisted nonlinear controller fora bioreactorrdquo International Journal of Robust and NonlinearControl vol 9 no 11 pp 799ndash815 1999
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Mathematical Problems in Engineering 9
100 200 300 400 500 6000
05
1
15
Sample sequence
APE
(a) APE curves of FOS-ELM
100 200 300 400 500 6000
01
02
03
04
05
Sample sequence
APE
(b) APE curves of ReOS-ELM
100 200 300 400 500 6000
01
02
03
04
05
Sample sequence
APE
(c) APE curves of DR-ELM
100 200 300 400 500 6000
01
02
03
04
05
Sample sequence
APE
(d) APE curves of FORELM
100 200 300 400 500 6000
01
02
03
04
05
Sample sequence
APE
(e) APE curves of FOKELM
Figure 1 APE curves of ELMs on process (20) with input (23)
or ill-posed problems but it is unsuitable in time-varyingapplications On the other hand FOS-ELM thanks to itsforgetting mechanism can reflect the timeliness of dataand train SLFN in nonstationary environments but it mayencounter the matrix singularity problem and run unstably
In the paper the forgetting mechanism is incorporated toReOS-ELM and we obtain FORELM which blends advan-tages of ReOS-ELM and FOS-ELM In addition the forget-ting mechanism also is added to KB-IELM consequentlyFOKELM is obtained which can overcomematrix expansionproblem of KB-IELM
Performance comparison between the proposed ELMsand other ELMs was carried out on identification of time-varying systems in the aspects of accuracy stability and com-putational complexity The experimental results show thatFORELM and FOKELM have better stability than FOS-ELMand have higher accuracy than ReOS-ELM in nonstationaryenvironments statistically When the number 119871 of hiddennodes is small or 119871 le the length 119911 of sliding time windowsFORELM has time efficiency superiority over DR-ELM Onthe other hand if 119871 is large enough FOKELM will be fasterthan DR-ELM
10 Mathematical Problems in Engineering
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(a) APE curves of FOS-ELM
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(b) APE curves of ReOS-ELM
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(c) APE curves of DR-ELM
100 200 300 400 500 6000
002
004
006
008
01A
PE
Time (Ts)
(d) APE curves of FORELM
100 200 300 400 500 6000
002
004
006
008
01
APE
Time (Ts)
(e) APE curves of FOKELM
Figure 2 APE curves of ELMs on process (25) with time-varying 120574 in (26)
Mathematical Problems in Engineering 11
Conflict of Interests
The authors declare that there is no conflict of interestsregarding the publication of this paper
Acknowledgment
The work is supported by the Hunan Provincial Science andTechnology Foundation of China (2011FJ6033)
References
[1] J Park and I W Sandberg ldquoUniversal approximation usingradial basis function networksrdquoNeural Computation vol 3 no2 pp 246ndash257 1991
[2] G B Huang Y Q Chen and H A Babri ldquoClassificationability of single hidden layer feedforward neural networksrdquoIEEE Transactions on Neural Networks vol 11 no 3 pp 799ndash801 2000
[3] S Ferrari and R F Stengel ldquoSmooth function approximationusing neural networksrdquo IEEE Transactions on Neural Networksvol 16 no 1 pp 24ndash38 2005
[4] G B Huang Q Y Zhu and C K Siew ldquoExtreme learningmachine a new learning scheme of feedforward neural net-worksrdquo in Proceedings of the IEEE International Joint Conferenceon Neural Networks vol 2 pp 985ndash990 July 2004
[5] G B Huang Q Y Zhu and C K Siew ldquoExtreme learningmachineTheory and applicationsrdquoNeurocomputing vol 70 no1ndash3 pp 489ndash501 2006
[6] N Y Liang G B Huang P Saratchandran and N Sundarara-jan ldquoA fast and accurate online sequential learning algorithmfor feedforward networksrdquo IEEE Transactions on Neural Net-works vol 17 no 6 pp 1411ndash1423 2006
[7] J S Lim ldquoPartitioned online sequential extreme learningmachine for large ordered system modelingrdquo Neurocomputingvol 102 pp 59ndash64 2013
[8] J S Lim S Lee and H S Pang ldquoLow complexity adaptive for-getting factor for online sequential extreme learning machine(OS-ELM) for application to nonstationary system estimationsrdquoNeural Computing and Applications vol 22 no 3-4 pp 569ndash576 2013
[9] Y Gu J F Liu Y Q Chen X L Jiang and H C Yu ldquoTOSELM timeliness online sequential extreme learning machinerdquo Neu-rocomputing vol 128 no 27 pp 119ndash127 2014
[10] J W Zhao Z H Wang and D S Park ldquoOnline sequentialextreme learning machine with forgetting mechanismrdquo Neuro-computing vol 87 no 15 pp 79ndash89 2012
[11] X Zhang and H L Wang ldquoFixed-memory extreme learningmachine and its applicationsrdquo Control and Decision vol 27 no8 pp 1206ndash1210 2012
[12] W Y Deng Q H Zheng and L Chen ldquoRegularized extremelearning machinerdquo in Proceedings of the IEEE Symposium onComputational Intelligence and Data Mining (CIDM rsquo09) pp389ndash395 April 2009
[13] H T Huynh and Y Won ldquoRegularized online sequentiallearning algorithm for single-hidden layer feedforward neuralnetworksrdquo Pattern Recognition Letters vol 32 no 14 pp 1930ndash1935 2011
[14] X Zhang and H L Wang ldquoTime series prediction basedon sequential regularized extreme learning machine and its
applicationrdquoActaAeronautica et Astronautica Sinica vol 32 no7 pp 1302ndash1308 2011
[15] G B Huang D H Wang and Y Lan ldquoExtreme learningmachines a surveyrdquo International Journal of Machine Learningand Cybernetics vol 2 no 2 pp 107ndash122 2011
[16] G B Huang H M Zhou X J Ding and R Zhang ldquoExtremelearning rdquo IEEE Transactions on Systems Man and CyberneticsB Cybernetics vol 42 no 2 pp 513ndash529 2012
[17] V N Vapnik Statistical Learning Theory John Wiley amp SonsNew York NY USA 1998
[18] L Guo J H Hao and M Liu ldquoAn incremental extremelearning machine for online sequential learning problemsrdquoNeurocomputing vol 128 no 27 pp 50ndash58 2014
[19] G B Huang L Chen andC K Siew ldquoUniversal approximationusing incremental constructive feedforward networks withrandom hidden nodesrdquo IEEE Transactions on Neural Networksvol 17 no 4 pp 879ndash892 2006
[20] G B Huang and L Chen ldquoConvex incremental extremelearning machinerdquo Neurocomputing vol 70 no 16ndash18 pp3056ndash3062 2007
[21] G B Huang and L Chen ldquoEnhanced random search basedincremental extreme learning machinerdquo Neurocomputing vol71 no 16ndash18 pp 3460ndash3468 2008
[22] X Zhang and H L Wang ldquoIncremental regularized extremelearning machine based on Cholesky factorization and itsapplication to time series predictionrdquo Acta Physica Sinica vol60 no 11 Article ID 110201 2011
[23] G H Golub and C F van Loan Matrix Computations TheJohns Hopkins University Press Baltimore Md USA 3rdedition 1996
[24] X Zhang and H L Wang ldquoDynamic regression extremelearningmachine and its application to small-sample time seriespredictionrdquo Information andControl vol 40 no 5 pp 704ndash7092011
[25] K S Narendra andK Parthasarathy ldquoIdentification and controlof dynamical systems using neural networksrdquo IEEE Transac-tions on Neural Networks vol 1 no 1 pp 4ndash27 1990
[26] M O Efe E Abadoglu and O Kaynak ldquoA novel analysis anddesign of a neural network assisted nonlinear controller fora bioreactorrdquo International Journal of Robust and NonlinearControl vol 9 no 11 pp 799ndash815 1999
Mathematical Problems in Engineering

[Figure 2: APE curves of ELMs on process (25) with time-varying γ in (26). (a) FOS-ELM; (b) ReOS-ELM; (c) DR-ELM; (d) FORELM; (e) FOKELM. Each panel plots APE (0 to 0.1) against time (0 to 600 Ts).]
Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The work is supported by the Hunan Provincial Science and Technology Foundation of China (2011FJ6033).
References

[1] J. Park and I. W. Sandberg, "Universal approximation using radial basis function networks," Neural Computation, vol. 3, no. 2, pp. 246–257, 1991.
[2] G. B. Huang, Y. Q. Chen, and H. A. Babri, "Classification ability of single hidden layer feedforward neural networks," IEEE Transactions on Neural Networks, vol. 11, no. 3, pp. 799–801, 2000.
[3] S. Ferrari and R. F. Stengel, "Smooth function approximation using neural networks," IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 24–38, 2005.
[4] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: a new learning scheme of feedforward neural networks," in Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 2, pp. 985–990, July 2004.
[5] G. B. Huang, Q. Y. Zhu, and C. K. Siew, "Extreme learning machine: theory and applications," Neurocomputing, vol. 70, no. 1–3, pp. 489–501, 2006.
[6] N. Y. Liang, G. B. Huang, P. Saratchandran, and N. Sundararajan, "A fast and accurate online sequential learning algorithm for feedforward networks," IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1411–1423, 2006.
[7] J. S. Lim, "Partitioned online sequential extreme learning machine for large ordered system modeling," Neurocomputing, vol. 102, pp. 59–64, 2013.
[8] J. S. Lim, S. Lee, and H. S. Pang, "Low complexity adaptive forgetting factor for online sequential extreme learning machine (OS-ELM) for application to nonstationary system estimations," Neural Computing and Applications, vol. 22, no. 3-4, pp. 569–576, 2013.
[9] Y. Gu, J. F. Liu, Y. Q. Chen, X. L. Jiang, and H. C. Yu, "TOSELM: timeliness online sequential extreme learning machine," Neurocomputing, vol. 128, no. 27, pp. 119–127, 2014.
[10] J. W. Zhao, Z. H. Wang, and D. S. Park, "Online sequential extreme learning machine with forgetting mechanism," Neurocomputing, vol. 87, no. 15, pp. 79–89, 2012.
[11] X. Zhang and H. L. Wang, "Fixed-memory extreme learning machine and its applications," Control and Decision, vol. 27, no. 8, pp. 1206–1210, 2012.
[12] W. Y. Deng, Q. H. Zheng, and L. Chen, "Regularized extreme learning machine," in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 389–395, April 2009.
[13] H. T. Huynh and Y. Won, "Regularized online sequential learning algorithm for single-hidden layer feedforward neural networks," Pattern Recognition Letters, vol. 32, no. 14, pp. 1930–1935, 2011.
[14] X. Zhang and H. L. Wang, "Time series prediction based on sequential regularized extreme learning machine and its application," Acta Aeronautica et Astronautica Sinica, vol. 32, no. 7, pp. 1302–1308, 2011.
[15] G. B. Huang, D. H. Wang, and Y. Lan, "Extreme learning machines: a survey," International Journal of Machine Learning and Cybernetics, vol. 2, no. 2, pp. 107–122, 2011.
[16] G. B. Huang, H. M. Zhou, X. J. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513–529, 2012.
[17] V. N. Vapnik, Statistical Learning Theory, John Wiley & Sons, New York, NY, USA, 1998.
[18] L. Guo, J. H. Hao, and M. Liu, "An incremental extreme learning machine for online sequential learning problems," Neurocomputing, vol. 128, no. 27, pp. 50–58, 2014.
[19] G. B. Huang, L. Chen, and C. K. Siew, "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879–892, 2006.
[20] G. B. Huang and L. Chen, "Convex incremental extreme learning machine," Neurocomputing, vol. 70, no. 16–18, pp. 3056–3062, 2007.
[21] G. B. Huang and L. Chen, "Enhanced random search based incremental extreme learning machine," Neurocomputing, vol. 71, no. 16–18, pp. 3460–3468, 2008.
[22] X. Zhang and H. L. Wang, "Incremental regularized extreme learning machine based on Cholesky factorization and its application to time series prediction," Acta Physica Sinica, vol. 60, no. 11, Article ID 110201, 2011.
[23] G. H. Golub and C. F. van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
[24] X. Zhang and H. L. Wang, "Dynamic regression extreme learning machine and its application to small-sample time series prediction," Information and Control, vol. 40, no. 5, pp. 704–709, 2011.
[25] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
[26] M. O. Efe, E. Abadoglu, and O. Kaynak, "A novel analysis and design of a neural network assisted nonlinear controller for a bioreactor," International Journal of Robust and Nonlinear Control, vol. 9, no. 11, pp. 799–815, 1999.