Fuel xxx (2014) xxx–xxx
JFUE 8331 No. of Pages 10, Model 5G
5 August 2014
A computational intelligence scheme for prediction of equilibrium water dew point of natural gas in TEG dehydration systems
http://dx.doi.org/10.1016/j.fuel.2014.07.072
0016-2361/© 2014 Published by Elsevier Ltd.
* Corresponding author. Tel.: +61 2 6626 9412. E-mail addresses: [email protected] (M.A. Ahmadi), [email protected] (A. Bahadori).
1 Tel.: +98 9126364936.
Please cite this article in press as: Ahmadi MA et al. A computational intelligence scheme for prediction of equilibrium water dew point of natural gas in TEG dehydration systems. Fuel (2014), http://dx.doi.org/10.1016/j.fuel.2014.07.072
Mohammad Ali Ahmadi a,1, Reza Soleimani b, Alireza Bahadori c,*
a Department of Petroleum Engineering, Ahwaz Faculty of Petroleum Engineering, Petroleum University of Technology (PUT), Iran
b Department of Gas Engineering, Ahwaz Faculty of Petroleum Engineering, Petroleum University of Technology (PUT), Ahwaz, Iran
c Southern Cross University, School of Environment, Science and Engineering, Lismore, NSW, Australia
Highlights

• Particle swarm optimization (PSO) is used to estimate the water dew point of natural gas in equilibrium with TEG.
• The model has been developed and tested using 70 series of the data.
• The back-propagation (BP) algorithm is used to estimate the water dew point of natural gas in equilibrium with TEG.
• PSO-ANN accomplishes more reliable outputs compared with BP-ANN in terms of statistical criteria.
Article info

Article history:
Received 4 November 2013
Received in revised form 24 July 2014
Accepted 24 July 2014
Available online xxxx

Keywords:
Gas dehydration
Triethylene glycol
Equilibrium water dew point
Prediction
Particle swarm optimization
Artificial neural network
Abstract

Raw natural gases are frequently saturated with water during production operations. It is crucial to remove water from natural gas through a dehydration process, both to eliminate safety concerns and for economic reasons. Triethylene glycol (TEG) dehydration units are the most common type of natural gas dehydration. Assessing a TEG system first requires ascertaining the minimum TEG concentration needed to meet the water content and dew point specifications of the pipeline system. A flexible and reliable method for modeling such a process is therefore essential from a gas engineering viewpoint, and the current contribution is an attempt in this respect. Artificial neural networks (ANNs) trained with particle swarm optimization (PSO) and the back-propagation algorithm (BP) were employed to estimate the equilibrium water dew point of a natural gas stream in contact with a TEG solution at different TEG concentrations and temperatures. PSO and BP were used to optimize the weights and biases of the networks. The models were built on a literature database covering VLE data for the TEG–water system at contactor temperatures between 10 °C and 80 °C and TEG concentrations ranging from 90.00 to 99.999 wt%. Results showed that PSO-ANN produces more reliable outputs than BP-ANN in terms of statistical criteria.

© 2014 Published by Elsevier Ltd.
1. Introduction
All natural gas streams contain significant amounts of water vapor as they exit oil and gas reservoirs. Water vapor in natural gas can cause several operational problems during the processing and transmission of natural gas, such as line plugging due to the formation of gas hydrates, reduction of line capacity due to the formation of free water (liquid), corrosion, and a decrease in the natural gas heating value.
Various techniques can be used to dehydrate natural gas. Among these, glycol absorption processes, in which glycol serves as the liquid desiccant (absorption liquid), are the most common dehydration processes used in the gas industry, since they best satisfy the commercial application criteria.

In a typical TEG system, shown in Fig. 1, water-free TEG (lean or dry TEG) enters at the top of the TEG contactor, where it flows countercurrent to the wet natural gas stream flowing up the tower. Elimination of water from natural gas via TEG is based on physical absorption.

In a TEG system, specifying the minimum concentration of TEG needed to meet the water dew point of the exit gas has always been operationally important. Indeed, the one single change that can
Nomenclature

Acronyms
ANN artificial neural network
TEG triethylene glycol
VLE vapor–liquid equilibrium
BP back-propagation
MEG monoethylene glycol
FFNN feed-forward neural network
GA genetic algorithm
ICA imperialist competitive algorithm
MSE mean square error
PA pruning algorithm
DEG diethylene glycol
TREG tetraethylene glycol
PSO particle swarm optimization
HGAPSO hybrid genetic algorithm and particle swarm optimization
R2 correlation coefficient
MLP multilayer perceptron
TST Twu–Sim–Tassone
SPSO stochastic particle swarm optimization
UPSO unified particle swarm optimization

Symbols used
b^H bias associated with hidden neurons
b^O bias associated with output neurons
c1, c2 trust parameters
wt% weight percent
°C degrees Celsius
kPa kilopascals
psia pounds per square inch absolute
K number of input training data
a input signal (vector)
W vector of weights and biases
grad gradient of the performance function
r1, r2 random numbers
S^H hidden neuron's net input signal
Td equilibrium water dew point temperature
T contactor temperature
vi velocity of the ith particle
w^H weight between input and hidden layer
xi position of the ith particle
xg gbest value
xi,p pbest value of particle i
Ypre predicted output
Yexp actual output
O^H output of the hidden neuron

Greek symbols
φ activation function
ω inertia weight
α learning rate

Subscripts
i particle i
j input j
k in Eq. (7), kth iteration
m number of neurons in the input layer
z zth experimental data

Superscripts
n iteration number
max maximum
min minimum
pre predicted
exp experimental
be made in a TEG system that will produce the largest effect on dew point depression is the degree of TEG concentration (purity). To that end, a liquid–vapor equilibrium relation/model for the water–TEG system is needed.

Several equilibrium correlations [1–7] for estimating the equilibrium water dew point of natural gas in a TEG dehydration system can be found in the literature. Generally, the correlations presented by Worley [4], Rosman [5] and Parrish et al. [1] work satisfactorily and are suitable for most TEG system designs. However, according to the literature [8], previously published correlations are unable to precisely estimate the equilibrium water concentration above TEG solutions in the vapor phase.

Parrish et al. [1] and Won [7] generated correlations in which the equilibrium concentrations of water in the vapor phase were determined at 100% TEG (infinite dilution). Moreover, the other approaches employ extrapolations of data at lower concentrations to predict equilibrium in the infinite-dilution region [8]. The effect of pressure on TEG–water equilibrium is small up to about 13,800 kPa (2000 psia) [1].

Recently, Bahadori and Vuthaluru [9] proposed a simple correlation for the prompt prediction of the equilibrium water dew point of a natural gas stream in contact with a TEG solution in terms of TEG concentrations and contactor temperatures. In addition, Twu et al. [10] employed the Twu–Sim–Tassone (TST) equation of state (EOS) [11] to describe the phase behavior of the water–TEG system. Furthermore, they presented an approach for employing the TST EOS to determine the water content and water dew point throughout
natural gas systems. Although these methods (i.e., the TST equation of state and the simple correlation) have good predictive capability, their applications are typically limited to the systems for which they were adapted. In fact, the aforementioned schemes need tunable parameters that must be adjusted based on experimental data points; without experimental data and adjusted parameters, these models are not reliable. In such circumstances, it is preferable to develop and employ general models capable of predicting the phase behavior of such systems. Among the various predictive methods, the artificial neural network (ANN) is a competent method that enjoys great flexibility and is capable of capturing multiple mechanisms of action [12]. ANNs are computational schemes, implemented in either hardware or software, that imitate the computational abilities of the human brain by using numbers of interconnected artificial neurons. The uniqueness of the ANN lies in its ability to acquire and create interrelationships between dependent and independent variables without any prior knowledge or assumptions about the form of the relationship [13].

In the last two decades, ANNs have become one of the most successful and widely applied techniques in many fields, including chemistry, biology, materials science, and engineering. In particular, ANNs have a successful track record in modeling vapor–liquid equilibrium (VLE) [14–24].

Applications of artificial-intelligence-based approaches to various complicated engineering problems have received noticeable attention
Fig. 1. Basic TEG dehydration unit.
throughout recent years: back-propagation (BP) feed-forward neural networks [25], the coupling of genetic algorithms (GA) and fuzzy logic [26], particle swarm optimization (PSO) [27–29], the hybrid of PSO and GA (HGAPSO) [30,31], unified particle swarm optimization (UPSO) [32], fuzzy decision trees (FDT) [33,34], the imperialist competitive algorithm (ICA) [35–37], least squares support vector machines (LS-SVM) [38–40], and the pruning algorithm (PA) [41] have all been applied to determine network structures and their parameters.

In this study, PSO is employed to determine the optimum values of the interconnection weights of a feed-forward neural network in order to predict the equilibrium water dew point temperature of a natural gas stream in contact with a TEG solution at different TEG concentrations and contactor temperatures. The modeling results confirm the integrity of the suggested hybrid model and show its ability to estimate the water dew point with adequate precision in comparison with the measured data published in the previous literature (see Appendix A) [1,6].
2. Artificial neural network
Artificial neural networks (ANNs), usually referred to simply as neural networks (NNs), are an attempt at mimicking the information-processing capabilities of biological nervous systems. The leading-edge picture of neural networks first came into being in the 1940s with McCulloch and Pitts [42], who illustrated that networks of artificial neurons could, in principle, compute any arithmetic or logical function. The fundamental processing element of an NN is a neuron (node), in which simple computations are carried out on a vector of input values. A neuron executes a nonlinear transformation of the weighted sum of its incoming inputs to yield the neuron's output (see Fig. 2).

One of the most conventional ANN approaches is the multilayer perceptron (MLP), which belongs to a common category of configurations named "feed-forward NNs", a simple class of NNs able to approximate general types of functions, including integrable and continuous functions [43]. In a feed-forward NN, the direction of signal flow is from the input layer, via the hidden layers, to the output layer. In the MLP configuration, the neurons are assembled into layers. The last and first layers are named the output and input layers, respectively, because they represent the outputs
and inputs of the overall network. The remaining layers are named hidden layers. In an NN, each neuron (except those located in the input layer) receives and processes inputs from other neurons. The processed information is available at the output end of the neuron. Fig. 2 demonstrates how a hidden-layer neuron in an MLP processes this information.
Herein, each input to the 3rd hidden neuron in a 3-layer feed-forward neural network is denoted by $a_1, a_2, a_3, \ldots, a_m$; collectively they are referred to as the input vector. Every input is multiplied by a relevant weight $w^H_{3,1}, w^H_{3,2}, \ldots, w^H_{3,m}$, which plays the role of the synaptic links in natural neural nets and acts to decrease or increase the input signals to the neuron. In fact, the weight factors are adjustable constants inside the network that specify the strength of the input signals. The weighted inputs are applied to the summation block, labeled $\Sigma$. The neuron also has a bias, $b^H_3$, that is combined with the weighted inputs to create the net input. A bias is a weight that does not join the input and output of two neurons but is multiplied by a unit signal and fed to the neuron; it sets a level of the neuron's output signal that is independent of the input signals. The algebraic formulation for the net input can be expressed as follows:

$$S^H_3 = \mathrm{NET} = \sum_{j=1}^{m} w^H_{3,j} \cdot a_j + b^H_3 \quad (1)$$
The neuron then applies a mapping or activation function to NET to generate an outcome $O^H_3$, which can be written as:

$$O^H_3 = \varphi(\mathrm{NET}) = \varphi\!\left(\sum_{j=1}^{m} w^H_{3,j} \cdot a_j + b^H_3\right) \quad (2)$$
where $\varphi$ stands for the neuron transfer function or neuron activation function. Three of the most commonly used activation functions are shown below.

Log-sigmoid function (logsig):

$$\varphi(s) = \frac{1}{1 + e^{-s}} \quad (3)$$

Hyperbolic tangent function (tansig):

$$\varphi(s) = \frac{e^{s} - e^{-s}}{e^{s} + e^{-s}} \quad (4)$$

Linear function (purelin):
Fig. 2. Schematic of an artificial neuron within the hidden layer of a 3-layer feed-forward neural network.
Fig. 3. The flowchart of ANN trained with back-propagation algorithm [57].
$$\varphi(s) = s \quad (5)$$
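The neuron computation of Eqs. (1)-(5) can be sketched in a few lines of Python (an illustrative sketch, not the authors' code; the input, weight, and bias values below are arbitrary):

```python
import numpy as np

# Activation functions, named as in the text (MATLAB-style names).
def logsig(s):
    """Log-sigmoid, Eq. (3): 1 / (1 + e^-s)."""
    return 1.0 / (1.0 + np.exp(-s))

def tansig(s):
    """Hyperbolic tangent, Eq. (4): (e^s - e^-s) / (e^s + e^-s)."""
    return np.tanh(s)

def purelin(s):
    """Linear, Eq. (5): identity."""
    return s

def neuron_output(a, w, b, phi=logsig):
    """Eqs. (1)-(2): net input S = sum_j w_j * a_j + b, output O = phi(S)."""
    net = np.dot(w, a) + b   # Eq. (1)
    return phi(net)          # Eq. (2)

# Example: a hidden neuron with 3 inputs (arbitrary illustrative values).
a = np.array([0.5, -1.0, 2.0])   # input vector
w = np.array([0.2, 0.4, -0.1])   # synaptic weights
b = 0.1                          # bias
out = neuron_output(a, w, b, phi=logsig)
```

Swapping `phi=tansig` or `phi=purelin` reproduces the other two transfer functions used later in the paper.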
It is worth mentioning that $w$ and $b$ are both adjustable variables of the neuron. The principal concept of an NN is that such variables can be modified so that the network exhibits some interesting or desired behavior. The weights and thresholds are updated during the training process; therefore, to perform a specific task we can train the network by regulating the bias and weight factors. There are numerous categories of approaches for training NNs. The back-propagation (BP) approach is one of the most conventional training methods for MLP-FFNNs. ANN training via BP, which is one of the gradient descent algorithms, is an iterative optimization approach in which the chosen objective function is minimized by updating the interconnection weights appropriately. The mean squared error (MSE) is a frequently used objective function, formulated as follows:
$$\mathrm{MSE} = \frac{1}{K} \sum_{l=1}^{K} \left(Y^{\mathrm{exp}}_{l} - Y^{\mathrm{pre}}_{l}\right)^{2} \quad (6)$$

where $K$ denotes the number of training samples, and $Y^{\mathrm{exp}}_{l}$ and $Y^{\mathrm{pre}}_{l}$ are the recorded values and estimated data, respectively.
The straightforward application of BP learning iteratively adjusts the network biases and interconnection weights along the direction in which the objective function declines most quickly (as shown in the following equation, the gradient has a negative sign). An iteration of this strategy can be written as:

$$W_{k+1} = W_{k} - \alpha_{k}\,\mathrm{grad}_{k} \quad (7)$$

in which $W_{k}$ stands for the vector of current biases and weights, $\mathrm{grad}_{k}$ represents the current gradient of the performance function, and the parameter $\alpha_{k}$ is the learning rate. It is worth mentioning that this training algorithm requires the differentiability of the activation functions $\varphi$, since the weight adjustment is based on the gradient of the performance function, which is described in terms of the activation functions and weights. Interested readers are referred to the literature [44–48] for more detailed technical descriptions of the BP training
approach. Fig. 3 presents the flowchart of training an MLP feed-forward neural network by application of the BP algorithm. In this study, the ANN paradigm trained with BP used the Levenberg–Marquardt algorithm.
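As an illustration of the update rule in Eq. (7), the sketch below applies plain gradient descent to a small 2-7-1 network with a logsig hidden layer and a purelin output (the topology adopted later in Section 5). Note that the study itself used the Levenberg–Marquardt variant; this sketch only demonstrates the generic gradient-descent update, and the training data are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-7-1 MLP (logsig hidden layer, purelin output).
W1 = rng.uniform(-1, 1, (7, 2)); b1 = rng.uniform(-1, 1, 7)
W2 = rng.uniform(-1, 1, (1, 7)); b2 = rng.uniform(-1, 1, 1)

def forward(X):
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))   # hidden-layer logsig outputs
    return H, H @ W2.T + b2                      # purelin output layer

def mse(Y, Yhat):                                # objective function, Eq. (6)
    return float(np.mean((Y - Yhat) ** 2))

X = rng.uniform(-1, 1, (20, 2))                  # toy training inputs
Y = (X[:, :1] + X[:, 1:]) / 2.0                  # toy target (illustrative)

alpha = 0.05                                     # learning rate in Eq. (7)
loss0 = mse(Y, forward(X)[1])
for _ in range(50):
    H, Yhat = forward(X)
    err = (Yhat - Y) * 2.0 / len(X)              # d(MSE)/d(Yhat)
    gW2 = err.T @ H;  gb2 = err.sum(0)           # output-layer gradients
    dS = (err @ W2) * H * (1.0 - H)              # back-prop through logsig
    gW1 = dS.T @ X;   gb1 = dS.sum(0)            # hidden-layer gradients
    # Eq. (7): W_{k+1} = W_k - alpha_k * grad_k
    W1 -= alpha * gW1; b1 -= alpha * gb1
    W2 -= alpha * gW2; b2 -= alpha * gb2
loss1 = mse(Y, forward(X)[1])
```

Each pass computes the gradient by the chain rule and steps all weights and biases against it, so the MSE decreases over the iterations.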
3. Particle swarm optimization (PSO)
PSO is a stochastic population-based search approach invented by Kennedy and Eberhart in 1995 [49]. It is modeled on the social behavior of certain kinds of animals (such as bird flocks, fish schools, and insect swarms), in which simple agents collectively produce more complicated behaviors that can be utilized to solve difficult problems, mostly optimization problems [50]. This optimization algorithm can be readily implemented and is inexpensive from a computational point of view, since its CPU and memory requirements are low [51].

PSO searches for the optima using a population (swarm) of particles. Every particle in the swarm characterizes a candidate solution to the optimization problem. In a PSO scheme, every particle is "flown" over a hyper-dimensional search space, iteratively modifying its position according to its own flight experience as well as the flying experience of the other particles, since the particles of a swarm communicate good positions to each other. A particle thus uses the best position experienced by itself and the best position found by the other particles to guide itself toward an optimal solution. The effectiveness of every particle (i.e., the "nearness" of a particle to the global optimum) is evaluated through an objective function associated with the problem being solved [50].
After finding the two aforementioned best positions, every particle in the swarm is adjusted during an iteration-based process using the following formulas:

$$v^{n+1}_{i} = \omega v^{n}_{i} + c_{1} r^{n}_{1}\left(x^{n}_{i,p} - x^{n}_{i}\right) + c_{2} r^{n}_{2}\left(x^{n}_{g} - x^{n}_{i}\right) \quad (8)$$

$$x^{n+1}_{i} = x^{n}_{i} + v^{n+1}_{i} \quad (9)$$
where $n$ stands for the iteration number and $i$ indexes the particle. $v^{n}_{i}$ is the velocity of particle $i$ at the $n$th iteration, and $v^{n+1}_{i}$ is its velocity at the $(n+1)$th iteration. The individual best position $x^{n}_{i,p}$ associated with particle $i$ is the best position the particle has visited since the first time step (pbest), and $x^{n}_{g}$ is the best value obtained so far (i.e., at the $n$th iteration) by any particle in the swarm (gbest). $c_{1}$ and $c_{2}$ are the acceleration factors related to pbest and gbest, respectively, and typically both are set to 2. $r^{n}_{1}$ and $r^{n}_{2}$ are random values with uniform distribution in the range [0, 1] [52]. $x^{n+1}_{i}$ and $x^{n}_{i}$ are the positions of particle $i$ at the $(n+1)$th and $n$th iterations, respectively. $\omega$ is the inertia weight, introduced by Shi and Eberhart [53], which controls the exploration and exploitation of the search space [54]. Generally, the inertia weight is calculated by means of a linearly declining methodology, in which an initially large inertia weight is linearly reduced to a smaller value [50]:
$$\omega^{n} = \omega_{\max} - \left(\frac{\omega_{\max} - \omega_{\min}}{n_{\max}}\right) n \quad (10)$$
where $\omega_{\max}$, $\omega_{\min}$, $n$ and $n_{\max}$ are the initial inertia weight, the final inertia weight, the current iteration number and the total iteration number (maximum number of iterations used in PSO), respectively. Usually $\omega_{\max}$ and $\omega_{\min}$ are set to 0.9 and 0.4, respectively [27,50,55].
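Eqs. (8)-(10) can be sketched as follows (an illustrative implementation, not the authors' code: the swarm size, iteration count, and c1 = c2 = 2 follow Table 1, and w_max = 0.9, w_min = 0.4 follow the values quoted above; the sphere objective is only a stand-in for the real fitness function):

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, dim, n_particles=22, n_max=200,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Minimal PSO sketch implementing Eqs. (8)-(10)."""
    x = rng.uniform(-1, 1, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))             # velocities
    p_best = x.copy()                            # pbest positions
    p_val = np.array([f(p) for p in x])          # pbest objective values
    g_best = p_best[p_val.argmin()].copy()       # gbest position
    for n in range(n_max):
        w = w_max - (w_max - w_min) * n / n_max  # inertia weight, Eq. (10)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update, Eq. (8).
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v                                # position update, Eq. (9)
        vals = np.array([f(p) for p in x])
        improved = vals < p_val                  # refresh pbest and gbest
        p_best[improved] = x[improved]
        p_val[improved] = vals[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, float(p_val.min())

# Example: minimize a simple sphere function in 3 dimensions.
best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=3)
```

Because pbest and gbest are only ever replaced by better positions, the best objective value is non-increasing across iterations.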
PSO shares various common points with evolutionary approaches such as genetic algorithms (GAs). However, PSO enjoys noticeable advantages. The two main advantages of PSO over GAs are [31]:

• Memory: in PSO the knowledge of good solutions is retained by all particles, whereas in a GA, previous knowledge of the problem is destroyed as soon as the new population is generated.
• GAs use filtering operations such as selection, whereas PSO does not; all the particles of the swarm are retained throughout the search process and continue to impart their knowledge.
PSO is mainly used for the optimization of functions with continuous-valued variables, and optimizing the weights and biases of an NN was one of its first applications. The first studies on training MLP feed-forward neural networks using PSO [55,56] illustrated that PSO is a competent alternative for neural network training. Subsequent investigations have further surveyed the ability of PSO as a training approach for a number of different neural network configurations, and have demonstrated for particular applications that neural networks trained with PSO give more precise outputs.
4. Implementation of ANN training using PSO algorithm
To employ PSO for training a neural network, an appropriate representation and fitness function are necessary. Since the main objective is to minimize the error, the fitness function is simply the training error (e.g., the MSE). Every particle represents a candidate solution to the optimization problem; because the interconnection weights of a neural network at the training step constitute such a solution, a single particle represents a single complete network. Every component of a particle's position vector represents one neural network bias or weight. With this representation, the PSO approach can be employed to find the best weights for a neural network by minimizing the fitness function [50].

In fact, the fitness of each particle is obtained by setting the interconnection weights of the ANN as specified by the particle's parameters and evaluating the fitness function obtained in training the ANN. In the same way, the fitness values of all the particles in the swarm are established. The gbest particle is defined as the particle with the lowest fitness value, and its fitness is compared with a pre-defined precision. If the pre-defined precision is fulfilled, the training process is stopped. Otherwise, the positions and velocities of the particles are updated again according to Eqs. (8) and (9), and the same procedure is repeated
Fig. 4. The flowchart of ANN optimized with PSO algorithm [57].
Fig. 5. Variation of (a) R2 and (b) MSE with the number of hidden neurons.
Table 1
Details of the ANN trained with PSO for the estimation of the water dew point of a natural gas stream in equilibrium with a TEG solution.

Type | Value/comment
Input layer neurons | 2
Hidden layer neurons | 7
Output layer neurons | 1
Hidden layer activation function | Logsig
Output layer activation function | Purelin
Number of data used for training | 130
Number of data used for testing | 44
Maximum number of iterations | 200
c1 and c2 in Eq. (8) | 2
Number of particles | 22
Fig. 6. Actual versus predicted equilibrium water dew point using the BP-ANN model: (a) training and (b) testing.
Fig. 7. Actual versus predicted equilibrium water dew point using the PSO-ANN model: (a) training and (b) testing.
until the pre-defined precision is achieved [57]. The flowchart of PSO-ANN is shown in Fig. 4. It should be mentioned that each weight of the constructed NN is initially set in the range [-1, 1], and each initial particle is a collection of weights generated randomly in the range [-1, 1].
5. Results and discussion
As mentioned, ANNs were applied to construct reliable paradigms to predict the equilibrium water dew point temperature ($T_d$). They were supplied with the contactor temperatures ($T$) and TEG concentrations (wt%) as input variables.

The whole database was split into two divisions by a random number generator. The first, used in the training process, includes 75% of the entire database and amounts to 130 data lines. The remaining points were saved for validating and testing the trained networks; this data set consists of 44 samples. The first set is the training data bank, which is employed for optimizing the network biases and weights, whereas the testing set affords a wholly independent assessment of network integrity.
The number of hidden neurons has a critical impact on the estimation integrity and precision. Many sources (for example Ref. [58]) claim that a feed-forward network with one hidden layer and enough neurons in that layer can fit any finite input–output mapping problem. In this respect, networks with one hidden layer and various numbers of hidden neurons were examined here. The number of neurons in the hidden layer reflects the
Fig. 8. Regression plots of the BP-ANN model for: (a) the training data set (fit line y = 0.5066x + 7.7335, R² = 0.9679) and (b) the testing data set (fit line y = 0.4954x + 8.8406, R² = 0.9751).
Fig. 9. Regression plots of the PSO-ANN model for: (a) the training data set (fit line y = 0.9944x + 0.1149, R² = 0.998) and (b) the testing data set (fit line y = 0.9987x + 0.1402, R² = 0.9996).
complexity of the network; although more complex networks are effective at estimation within the limits of the data bank used for their training, they suffer from a lack of adequate generalization. The number of neurons in the hidden layer was specified by a trial-and-error approach. Fig. 5a shows the change of R² versus the number of hidden neurons. As demonstrated in Fig. 5a, raising the number of hidden neurons from 1 to 7 improved the coefficient of determination, whereas no improvement followed a further increase from 7 to 10. Fig. 5b shows the influence of the number of neurons on the MSE. According to Fig. 5a and b, the highest R² and the minimum MSE are obtained when 7 neurons are employed in the hidden layer. Therefore, a three-layer network with a 2 (input units) : 7 (hidden neurons) : 1 (output neuron) architecture is the most appropriate. The details of the PSO-optimized network used in this study to predict the equilibrium water dew point temperature are given in Table 1.

With the purpose of gauging the effectiveness of the PSO-ANN approach, a BP-ANN scheme was run with the same data banks used in the PSO-ANN approach. The PSO-optimized network was trained for 50 generations and compared with a BP training algorithm. For the BP training algorithm, the values of the momentum correction factor and learning coefficient were set to 0.001 and 0.7, respectively.
Figs. 6 and 7 compare the predicted and actual equilibrium water dew points during the training and testing steps for both the hybrid PSO-ANN and the conventional BP-ANN approaches. As shown in Fig. 7, there are no major differences between the outputs of the PSO-optimized network and the reference values of the equilibrium water dew point. Clearly, the PSO-ANN approach shows higher integrity in estimating the equilibrium water dew point temperature than BP-ANN, with lower MSE values for the training and test sets (43.935 and 13.472, in contrast to 551.13 and 527.098 for BP-ANN, respectively).

The performance of the networks trained with PSO and conventional BP can also be evaluated by a regression analysis between the model outcomes and the corresponding targets. Cross plots of the actual equilibrium water dew point versus the predicted values for the training and testing data sets using the PSO-ANN and BP-ANN approaches are depicted in Figs. 8 and 9. The fit obtained by PSO-ANN is excellent, since the regression line (the best linear fit) overlaps with the diagonal (perfect fit), as a result of a slope close to 1 and a small y-intercept (see Fig. 9) [59]. The training and testing correlation coefficients (R²) of PSO-ANN were found to be greater than 0.99, while those of the BP-ANN model are less favorable. This means that the proposed hybrid PSO-ANN
Fig. 10. Performance plot (mean square error versus epoch) of: (a) PSO-ANN model and (b) ANN model.
Fig. 11. Percent deviation between the actual and predicted isotherms against actual data during the training and testing process: (a) BP-ANN model and (b) PSO-ANN model.
model has been well trained and tested and is superior to the BP-ANN model. The correlation coefficient is computed as follows:
R^2 = 1 - \frac{\sum_{l=1}^{K} \bigl( Y_l^{\exp} - Y_l^{\mathrm{pre}} \bigr)^2}{\sum_{l=1}^{K} \bigl( Y_l^{\exp} - \bar{Y}^{\exp} \bigr)^2} \qquad (11)
where K denotes the number of training or testing samples, and Y_l^exp, Y_l^pre, and \bar{Y}^exp are the experimental response, the predicted response, and the mean of the experimental responses, respectively.
Fig. 10 shows the performance plots for the training, validation, and test data subsets of the models introduced for predicting the equilibrium water dew point. The performance plot shows the value of the performance function (MSE) against the number of epochs. As can be seen, the validation and test data sets follow similar trends; therefore, PSO-ANN can predict an unseen data set as well as the data set used for its validation [60].
Fig. 11 plots the actual data and the percent error between the actual and estimated equilibrium water dew point temperatures during the training and testing steps for both the PSO-ANN and BP-ANN approaches. As shown in Fig. 11a, the BP-ANN model gives poor results, whereas the agreement between the actual equilibrium water dew point values and those predicted by PSO-ANN is acceptable. Considering the performance of PSO-ANN globally, the effectiveness of the model is evident, since the vast majority of the training and testing data fall within a relative deviation of less than 20%. In fact, the deviation between the experimental and estimated equilibrium water dew point temperature was found to be ≥10% for only six data points over the training and testing stages. According to Fig. 11b, for the training data the relative deviations lie in the range −18.96% to 16.33%, the minimum absolute relative deviation is 0.0334%, and the average absolute deviation is 2.909%; for the testing data the relative deviations lie in the range −14.92% to 7.545%, the minimum absolute relative deviation is 0.0099%, and the average absolute deviation is 1.676%.
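The deviation statistics quoted above can be reproduced with a short helper. The percent-deviation definition, 100 × (predicted − actual)/actual, and the function name are assumptions consistent with Fig. 11:

```python
import numpy as np

# Hedged sketch: percent relative deviation as plotted in Fig. 11. The
# definition 100*(predicted - actual)/actual and the function name are
# assumptions consistent with the statistics quoted in the text.
def deviation_stats(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    dev = 100.0 * (predicted - actual) / actual    # percent deviation
    return {
        "min dev %": float(dev.min()),
        "max dev %": float(dev.max()),
        "min |dev| %": float(np.abs(dev).min()),   # minimum absolute deviation
        "AAD %": float(np.abs(dev).mean()),        # average absolute deviation
    }
```

Applied separately to the training and testing predictions, this yields the deviation spans, minimum absolute deviations, and average absolute deviations reported above.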
6. Conclusions
1. Based on the literature database, the feasibility of using an ANN scheme trained with a recent evolutionary algorithm, namely PSO, to predict the equilibrium water dew point versus contactor temperature at different TEG concentrations was assessed. The proposed PSO-ANN approach proved highly reliable, with an MSE and R2 of 13.472 and 0.998, respectively.
2. The use of PSO improved the global search capability for choosing appropriate initial weights of the ANN.
3. To specify the optimal structure of the PSO-ANN approach, various three-layer feed-forward networks with different numbers of neurons in the hidden layer were tested. The tuning parameters of the proposed hybrid model (including the acceleration constants c1 and c2, the maximum number of iterations, the number of particles, and the time interval) were carefully adjusted.
4. According to the graphical representations together with the statistical error analysis, the optimum PSO-ANN scheme is considerably more accurate than the common back-propagation NN approach for equilibrium water dew point prediction, since, unlike PSO, back-propagation algorithms risk becoming trapped in or oscillating around a local minimum.
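The PSO weight search summarized in conclusions 2 and 3 can be sketched as below. This is not the authors' implementation: the swarm size, inertia weight, acceleration constants, and the quadratic stand-in fitness are illustrative assumptions; in the hybrid scheme the fitness would be the network's training MSE over the candidate weight vector.

```python
import numpy as np

# Hedged PSO sketch: particles encode candidate ANN weight vectors and the
# fitness would be the network's training MSE. Swarm size, inertia weight,
# acceleration constants, and the quadratic stand-in fitness are assumptions.
rng = np.random.default_rng(1)

def fitness(w):                         # stand-in for the ANN training MSE
    return float(np.sum(w ** 2))

dim, n_particles, n_iters = 10, 20, 100
c1 = c2 = 1.5                           # acceleration constants (assumed)
w_inertia = 0.7                         # inertia weight (assumed)

pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                      # personal bests
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()  # global best

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
```

In the hybrid scheme, the final gbest vector would seed the network weights before any subsequent BP refinement, which is the mechanism that avoids the local-minimum trapping noted in conclusion 4.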
Appendix A
This section provides some of the data used in this study. Table A1 reports the contactor temperature, the TEG concentration, and the corresponding equilibrium water dew point temperature.
Table A1
Data used in this study [1,6]. Entries are the equilibrium water dew point temperature (°C) as a function of contactor temperature (rows, °C) and TEG purity (columns, %).

Contactor T (°C) |   90 |   95 |    97 |    98 |    99 |  99.5 |  99.8 |  99.9 | 99.95 | 99.97 | 99.98 | 99.99 | 99.995 | 99.997
10               |   -6 |  -12 |   -18 |   -22 |   -30 | -37.5 | -46.5 | -52.5 |   -59 |   -63 | -66.5 |   -72 |    -77 |      -
15               |   -1 |   -8 | -13.5 |   -18 | -26.5 |   -34 | -43.5 | -49.8 |   -56 |   -60 | -63.5 |   -69 |    -74 |    -78
20               |    3 |   -4 |   -10 | -14.5 | -22.5 |   -30 |   -40 |   -47 |   -54 | -57.5 |   -61 | -66.5 |    -72 |    -76
25               |  8.5 |    1 |    -6 |   -11 |   -19 |   -27 | -36.5 | -43.5 |   -50 | -54.5 |   -58 | -63.5 |    -69 |    -73
30               |   13 |    5 |    -2 |    -7 |   -15 |   -23 | -33.5 | -40.5 | -47.5 |   -52 |   -55 | -61.5 |    -67 |  -71.5
35               |   18 |  9.5 |     2 |  -2.5 |   -11 | -19.5 |   -30 | -37.5 |   -44 |   -49 | -52.5 |   -59 |  -64.9 |    -68
37               |   20 |    - |     - |     - |     - |     - |     - |     - |     - |     - |     - |     - |      - |      -
40               |    - |   14 |     6 |   1.5 |    -8 | -16.5 | -26.5 |   -34 |   -42 |   -47 |   -50 | -56.5 |  -62.5 |    -67
45               |    - |   19 |  11.5 |     6 |    -4 | -12.5 |   -24 | -31.5 | -38.5 | -44.5 | -47.5 |   -54 |    -60 |    -64
50               |    - |    - |    15 |   9.5 | -0.25 |    -9 | -20.5 |   -28 |   -36 |   -41 |   -45 |   -52 |  -57.5 |    -62
55               |    - |    - |  19.5 |  13.5 |   3.5 |    -6 |   -17 |   -25 | -33.5 | -38.5 | -42.5 |   -49 |    -55 |    -60
60               |    - |    - |     - |  17.5 |   7.5 |  -2.5 |   -14 | -22.5 |   -30 |   -36 |   -40 |   -47 |    -53 |  -57.5
65               |    - |    - |     - |     - |  11.5 |     1 |   -11 | -19.5 | -27.5 |   -33 | -37.5 |   -44 |    -51 |    -55
70               |    - |    - |     - |     - |  14.5 |   4.5 |  -8.5 |   -17 |   -25 |   -31 |   -35 |   -42 |    -48 |    -53
75               |    - |    - |     - |     - |     - |     - |  -5.5 |   -14 | -22.5 |   -28 | -32.5 | -39.5 |    -47 |  -51.5
References
[1] Parrish WR, Won KW, Baltatu ME. Phase behavior of the triethylene glycol–water system and dehydration/regeneration design for extremely low dew point requirements. 65th GPA annual convention, San Antonio, TX; 1986.
[2] Townsend FM. Vapor–liquid equilibrium data for DEG and TEG–water–natural gas system. In: Gas conditioning conference. University of Oklahoma, Norman, OK; 1953.
[3] Scauzillo FR. Equilibrium ratios of water in the water–triethylene glycol–natural gas system. J Petrol Technol 1961;13:697–702.
[4] Worley S. Super dehydration with glycols. In: Gas conditioning conference. University of Oklahoma, Norman, OK; 1967.
[5] Rosman A. Water equilibrium in the dehydration of natural gas with triethylene glycol. SPE J 1973;13:297–306.
[6] Herskowitz M, Gottlieb M. Vapor–liquid equilibrium in aqueous solutions of various glycols and polyethylene glycols. 1. Triethylene glycol. J Chem Eng Data 1984;29:173–5.
[7] Won KW. Thermodynamic basis of the glycol dew-point chart and its application to dehydration. 73rd GPA annual convention, New Orleans, LA; 1994. p. 108–33.
[8] Gas Processors Suppliers Association. Engineering data book: FPS version, Sections 16–26; 1998.
[9] Bahadori A, Vuthaluru HB. Rapid estimation of equilibrium water dew point of natural gas in TEG dehydration systems. J Nat Gas Sci Eng 2009;1:68–71.
[10] Twu CH, Tassone V, Sim WD, Watanasiri S. Advanced equation of state method for modeling TEG–water for glycol gas dehydration. Fluid Phase Equilib 2005;228–229:213–21.
[11] Twu CH, Sim WD, Tassone V. A versatile liquid activity model for SRK, PR and a new cubic equation-of-state TST. Fluid Phase Equilib 2002;194–197:385–99.
[12] Carrera G, Aires-de-Sousa J. Estimation of melting points of pyridinium bromide ionic liquids with decision trees and neural networks. Green Chem 2005;7:20–7.
[13] Chen H, Kim AS. Prediction of permeate flux decline in crossflow membrane filtration of colloidal suspension: a radial basis function neural network approach. Desalination 2006;192:415–28.
[14] Urata S, Takada A, Murata J, Hiaki T, Sekiya A. Prediction of vapor–liquid equilibrium for binary systems containing HFEs by using artificial neural network. Fluid Phase Equilib 2002;199:63–78.
[15] Mohanty S. Estimation of vapour liquid equilibria of binary systems, carbon dioxide–ethyl caproate, ethyl caprylate and ethyl caprate using artificial neural networks. Fluid Phase Equilib 2005;235:92–8.
[16] Bahadori A. Estimation of hydrate inhibitor loss in hydrocarbon liquid phase. Pet Sci Technol 2009;27(9):943–51.
[17] Mohanty S. Estimation of vapour liquid equilibria for the system carbon dioxide–difluoromethane using artificial neural networks. Int J Refrig 2006;29:243–9.
[18] Ghanadzadeh H, Ahmadifar H. Estimation of (vapour + liquid) equilibrium of binary systems (tert-butanol + 2-ethyl-1-hexanol) and (n-butanol + 2-ethyl-1-hexanol) using an artificial neural network. J Chem Thermodyn 2008;40:1152–6.
[19] Bahadori A, Mokhatab S, Towler BF. Rapidly estimating natural gas compressibility factor. J Nat Gas Chem 2007;16(4):349–53.
[20] Ketabchi S, Ghanadzadeh H, Ghanadzadeh A, Fallahi S, Ganji M. Estimation of VLE of binary systems (tert-butanol + 2-ethyl-1-hexanol) and (n-butanol + 2-ethyl-1-hexanol) using GMDH-type neural network. J Chem Thermodyn 2010;42:1352–5.
[21] Guimaraes PRB, McGreavy C. Flow of information through an artificial neural network. Comput Chem Eng 19(Suppl. 1):741–6.
[22] Bahadori A. New model predicts solubility in glycols. Oil Gas J 105(8):50–5.
[23] Sharma R, Singhal D, Ghosh R, Dwivedi A. Potential applications of artificial neural networks to thermodynamics: vapor–liquid equilibrium predictions. Comput Chem Eng 1999;23:385–90.
[24] Lashkarbolooki M, Vaferi B, Shariati A, Zeinolabedini Hezave A. Investigating vapor–liquid equilibria of binary mixtures containing supercritical or near-critical carbon dioxide and a cyclic compound using cascade neural network. Fluid Phase Equilib 2013;343:24–9.
[25] Bahadori A, Vuthaluru HB. A novel correlation for estimation of hydrate forming condition of natural gases. J Nat Gas Chem 2009;18(4):453–7.
[26] Potukuchi S, Wexler AS. Predicting vapor pressures using neural networks. Atmos Environ 1997;31:741–53.
[27] Soleimani R, Shoushtari NA, Mirza B, Salahi A. Experimental investigation, modeling and optimization of membrane separation using artificial neural network and multi-objective optimization using genetic algorithm. Chem Eng Res Des 2013;91:883–903.
[28] Ebadi M, Ahmadi MA, Hikoei KF, Salari Z. Evolving genetic algorithm, fuzzy logic and Kalman filter for prediction of asphaltene precipitation due to natural depletion. Int J Comput Appl 2011;35(1):12–6.
[29] Zendehboudi S, Ahmadi MA, James L, Chatzis I. Prediction of condensate-to-gas ratio for retrograde gas condensate reservoirs using artificial neural network with particle swarm optimization. Energy Fuels 2012;26:3432–47.
[30] Ahmadi MA, Shadizadeh SR. New approach for prediction of asphaltene precipitation due to natural depletion by using evolutionary algorithm concept. Fuel 2012;102:716–23.
[31] Zendehboudi S, Ahmadi MA, Bahadori A, Shafiei A, Babadagli T. A developed smart technique to predict minimum miscible pressure: EOR implications. Can J Chem Eng 2013;91:1325–37.
[32] Ali Ahmadi M, Zendehboudi S, Lohi A, Elkamel A, Chatzis I. Reservoir permeability prediction by neural networks combined with hybrid genetic algorithm and particle swarm optimization. Geophys Prospect 2013;61:582–98.
[33] Ali Ahmadi M, Golshadi M. Neural network based swarm concept for prediction asphaltene precipitation due to natural depletion. J Pet Sci Eng 2012;98–99:40–9.
[34] Ahmadi MA. Neural network based unified particle swarm optimization for prediction of asphaltene precipitation. Fluid Phase Equilib 2012;314:46–51.
[35] Ebadi M, Ahmadi MA, Gerami S, Askarinezhad R. Application fuzzy decision tree analysis for prediction condensate gas ratio: case study. Int J Comput Appl 2012;39(8):23–8.
[36] Ebadi M, Ahmadi MA, Hikoei KF. Application of fuzzy decision tree analysis for prediction asphaltene precipitation due natural depletion; case study. Aust J Basic Appl Sci 2012;6(1):190–7.
[37] Ahmadi MA, Ebadi M, Shokrollahi A, Majidi SMJ. Evolving artificial neural network and imperialist competitive algorithm for prediction oil flow rate of the reservoir. Appl Soft Comput 2013;13:1085–98.
[38] Zendehboudi S, Ahmadi MA, Mohammadzadeh O, Bahadori A, Chatzis I. Thermodynamic investigation of asphaltene precipitation during primary oil production: laboratory and smart technique. Ind Eng Chem Res 2013;52:6009–31.
[39] Ahmadi M. Prediction of asphaltene precipitation using artificial neural network optimized by imperialist competitive algorithm. J Petrol Explor Prod Technol 2011;1:99–106.
[40] Fazeli H, Soleimani R, Ahmadi MA, Badrnezhad R, Mohammadi AH. Experimental study and modeling of ultrafiltration of refinery effluents. Energy Fuels 2013;27:3523–37.
[41] Ahmadi MA, Ebadi M, Hosseini SM. Prediction breakthrough time of water coning in the fractured reservoirs by implementing low parameter support vector machine approach. Fuel 2014;117:579–89.
[42] Ahmadi MA, Ebadi M. Evolving smart approach for determination dew point pressure of condensate gas reservoirs. Fuel 2014;117(Part B):1074–84.
[43] Reed R. Pruning algorithms – a survey. IEEE Trans Neural Netw 1993;4:740–7.
[44] McCulloch W, Pitts W. A logical calculus of the ideas immanent in nervous activity. Bull Math Biophys 1943;5:115–33.
[45] Scarselli F, Chung Tsoi A. Universal approximation using feedforward neural networks: a survey of some existing methods, and some new results. Neural Networks 1998;11:15–37.
[46] Hagan MT, Demuth HB, Beale M. Neural network design. PWS Publishing Co.; 1996.
[47] Baughman DR, Liu YA. Neural networks in bioprocessing and chemical engineering. Academic Press; 1995.
[48] Freeman JA, Skapura DM. Neural networks: algorithms, applications, and programming techniques. Addison-Wesley; 1991.
[49] Haykin SS. Neural networks: a comprehensive foundation. Prentice Hall; 1999.
[50] Mehra P, Wah BW. Artificial neural networks: concepts and theory. IEEE Computer Soc. Press; 1992.
[51] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, vol. 4; 1995. p. 1942–8.
[52] Engelbrecht AP. Computational intelligence: an introduction. Wiley; 2007.
[53] Eberhart RC, Simpson PK, Dobbins R, Dobbins RW. Computational intelligence PC tools. AP Professional; 1996.
[54] Bahadori A. Determination of well placement and breakthrough time in horizontal wells for homogeneous and anisotropic reservoirs. J Petrol Sci Eng 2010;75(1–2):196–202.
[55] Yuhui S, Eberhart R. A modified particle swarm optimizer. In: The 1998 IEEE international conference on evolutionary computation proceedings, IEEE world congress on computational intelligence; 1998. p. 69–73.
[56] Sivanandam SN, Deepa SN. Introduction to genetic algorithms. Springer; 2007.
[57] Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, MHS '95; 1995. p. 39–43.
[58] Kennedy J. The particle swarm: social adaptation of knowledge. In: IEEE international conference on evolutionary computation; 1997. p. 303–8.
[59] Geethanjali M, Raja Slochanal SM, Bhavani R. PSO trained ANN-based differential protection scheme for power transformers. Neurocomputing 2008;71:904–18.
[60] Bahadori A, Vuthaluru HB. Predicting emissivities of combustion gases. Chem Eng Prog 2009;105(6):38–41.
[61] Bahadori A, Vuthaluru HB. Prediction of silica carry-over and solubility in steam of boilers using simple correlation. Appl Therm Eng 2010;30(2–3):250–3.
[62] Ghandehari S, Montazer-Rahmati MM, Asghari M. A comparison between semi-theoretical and empirical modeling of cross-flow microfiltration using ANN. Desalination 2011;277:348–55.