
Research Article
Modeling Slump of Ready Mix Concrete Using Genetically Evolved Artificial Neural Networks

Vinay Chandwani, Vinay Agrawal, and Ravindra Nagar

Department of Civil Engineering, Malaviya National Institute of Technology Jaipur, JLN Marg, Jaipur, Rajasthan 302017, India

Correspondence should be addressed to Vinay Chandwani; chandwani2@yahoo.com

Received 29 August 2014; Accepted 20 October 2014; Published 11 November 2014

Academic Editor: Ping Feng Pai

Advances in Artificial Neural Systems, Volume 2014, Article ID 629137, 9 pages. http://dx.doi.org/10.1155/2014/629137

Copyright © 2014 Vinay Chandwani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Artificial neural networks (ANNs) have been the preferred choice for modeling the complex and nonlinear material behavior where conventional mathematical approaches do not yield the desired accuracy and predictability. Despite their popularity as universal function approximators and their wide range of applications, no specific rules for deciding the architecture of neural networks catering to a specific modeling task have been formulated. The research paper presents a methodology for automated design of neural network architecture, replacing the conventional trial and error technique of finding the optimal neural network. The genetic algorithm (GA) stochastic search has been harnessed for evolving the optimum number of hidden layer neurons, transfer function, learning rate, and momentum coefficient for the backpropagation ANN. The methodology has been applied for modeling slump of ready mix concrete based on its design mix constituents, namely, cement, fly ash, sand, coarse aggregates, admixture, and water-binder ratio. Six different statistical performance measures have been used for evaluating the performance of the trained neural networks. The study showed that, in comparison to the conventional trial and error technique of deciding the neural network architecture and training parameters, the neural network architecture evolved through GA was of reduced complexity and provided better prediction performance.

1. Introduction

Cement concrete is one of the most widely used construction materials in the world today. The material modeling of concrete is a difficult task owing to its composite nature. Although various empirical relationships in the form of regression equations have been derived from experimental results and are widely in use, these do not provide accuracy and predictability where the interactions among the variables are unknown, complex, or nonlinear in nature. Artificial neural networks (ANNs), touted as the next generation of computing, have been the preferred choice over the last few decades for modeling unstructured problems pertaining to material behavior. Notable applications of ANN in modeling properties of concrete include predicting and modeling the compressive strength of high performance concrete [1], self-compacting concrete [2], recycled aggregate concrete [3], rubberized concrete [4], fibre reinforced polymer (FRP) confined concrete [5], durability of high performance concrete [6], predicting drying shrinkage of concrete [7], concrete mix design [8], and prediction of the elastic modulus of normal and high strength concrete [9].

Known for its design mix precision leading to better quality concrete and for ease of transportation and laying at the construction site, ready mix concrete (RMC) has emerged as a preferred concrete construction product catering to the requirements of the end user. One of the important properties of concrete that plays an important role in the success of RMC is its workability. Workability of concrete is determined by the effort required to lay and compact the freshly prepared concrete at the construction site with minimum loss of homogeneity. The workability of concrete, quantitatively measured in terms of the slump value, is an important quality parameter in the RMC industry. The slump of concrete depends on the concrete's design mix proportion. As noticed for other properties of concrete, the slump value also exhibits a highly nonlinear and complex functional relationship with the concrete's constituents. Recent studies, like prediction of slump and strength of ready mix concrete containing retarders and high strength concrete containing silica fume and plasticizers [10], predicting slump of fly ash and slag concrete [11], modeling slump of high strength concrete [12], modeling slump of high performance concrete [13], and modeling and analysis of concrete slump using laboratory test results [14], have proved the effectiveness of ANN in modeling slump of concrete.

Besides the ANN applications in modeling the behavior of concrete discussed above, there are many multidisciplinary applications of ANN which are beyond the scope of this paper. The reason for this rapid growth in the field of neural networks is attributed to their "black box" nature, which allows them to be applied to almost every available problem without seeking knowledge of the underlying relationships among the input and output variables. In spite of their popularity as universal function approximators, neural networks are still being designed using a trial and error approach. Generally, iterative techniques using different combinations of the number of hidden layers and hidden layer neurons are employed in conjunction with different learning rates, momentum coefficients, and transfer functions to arrive at an optimal neural network design. This technique of designing a neural network is therefore time consuming and relies heavily on the experience of the designer. In order to reduce the effort and time in designing the optimal neural network architecture and its training parameters, various studies for automatic design of neural networks have been successfully performed in the past by harnessing the stochastic search ability of genetic algorithms [15-19]. The autodesign of neural networks using genetic algorithms (GA) has so far not been employed for modeling the slump of concrete. The research paper presents a methodology for evolving an optimal architecture of a neural network and its training parameters using GA for modeling the slump of concrete based on concrete's design mix proportions.

The study has been organized into sections. Section 2 deals with data collection and its description. Section 3 deals with the methodology, in which trial and error neural network modeling of concrete slump, evolving neural network architecture and training parameters using genetic algorithms, and statistical performance measures are discussed. Results, discussions, and conclusions are dealt with in Sections 4, 5, and 6, respectively.

2. Collection of Data and Its Description

The exemplar data for the ANN was collected from a single RMC plant to mitigate any change in the slump data caused by changes in the physical and chemical properties of the concrete design mix constituents. The collected data comprised 560 concrete design mix proportions and their corresponding slump test values. The design mix proportions included the weight per m³ of cement, pulverized fly ash (PFA), sand (as fine aggregate), coarse aggregate 20 mm, coarse aggregate 10 mm, admixture, and the water-binder ratio. The range (maximum and minimum values) of the RMC data used in the study is shown in Table 1.

Table 1: Range of RMC data used for neural network modeling.

RMC data                          Maximum   Minimum
Cement (kg/m³)                    425       100
Fly ash (kg/m³)                   220       0
Sand (kg/m³)                      900       550
Coarse aggregate 20 mm (kg/m³)    788       58
Coarse aggregate 10 mm (kg/m³)    771       343
Admixture (kg/m³)                 5.5       1.0
Water-binder ratio                0.76      0.36
Concrete slump (mm)               175       110

3. Methodology

For conducting the study, the Neural Network Toolbox and Global Optimization Toolbox included in the commercially available software MATLAB R2011b (version 7.13.0.564) were used to implement the BPNN and GA, respectively.

3.1. Dividing Data into Training, Validation, and Test Data Sets. An ANN derives its learning capability through training using input-output data pairs and its subsequent generalization ability when subjected to unseen data. The training and generalization of neural networks are accomplished using a training data set and a validation data set, respectively. The robustness of the trained and validated neural network is tested using a test data set. This procedure is accomplished by dividing the entire data into three disjoint sets, namely, the training data set, the validation data set, and the test data set. The available data was randomized; 70% of the data was designated as the training data set, and the remaining 30% was equally divided to create the validation and test data sets.
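A minimal sketch of this 70/15/15 split (assuming the 560 records sit in NumPy arrays X of mix proportions and y of slump values; the function name and seed are illustrative):

```python
import numpy as np

def split_data(X, y, seed=42):
    """Randomize the data, then split it 70/15/15 into training,
    validation, and test sets, as described above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(0.70 * len(X))   # 392 of 560 records
    n_val = int(0.15 * len(X))     # 84 records; the remaining 84 form the test set
    i_train, i_val, i_test = np.split(idx, [n_train, n_train + n_val])
    return (X[i_train], y[i_train]), (X[i_val], y[i_val]), (X[i_test], y[i_test])
```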

3.2. Normalization of Data. The data used for training, validation, and testing of neural networks comprises input and corresponding output features of different identities which normally have few similarities. Moreover, the range (minimum-maximum values) of the data used for each input and output component is also quite different. In order to scale all the inputs and outputs into a particular bounded range, preferably -1 to +1 or 0 to +1, data normalization is performed. This type of normalization has the advantage of preserving exactly all relationships in the data, and it does not introduce any bias [20]. This improves the learning speed, as the scaled values fall in the region of the sigmoid transfer function where the output is most sensitive to variations of the input values [21]. Linear scaling in the range -1 to +1 has been used in the present study, given by

$$x_{\mathrm{norm}} = \frac{2(x - x_{\min})}{x_{\max} - x_{\min}} - 1 \qquad (1)$$

where $x_{\mathrm{norm}}$ is the normalized value of the variable $x$, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum values of the variable $x$, respectively.
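A sketch of this scaling and its inverse (the bounds shown are the slump limits from Table 1; the helper names are illustrative):

```python
import numpy as np

def normalize(x, x_min, x_max):
    """Linear scaling of a variable into [-1, +1], per equation (1)."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def denormalize(x_norm, x_min, x_max):
    """Inverse of equation (1), used to map network output back to mm."""
    return (x_norm + 1.0) * (x_max - x_min) / 2.0 + x_min

# Example: a 150 mm slump scaled with the Table 1 bounds (110-175 mm)
print(normalize(150.0, 110.0, 175.0))   # ~0.2308
```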

Figure 1: Mathematical model of an artificial neuron. Inputs $x_1, \ldots, x_n$ with synaptic weights $w_1, \ldots, w_n$ and bias $b$ are summed as $\sum w_i x_i + b$ and passed through the transfer function to give the output $f[\sum w_i x_i + b]$.

3.3. Neural Network Architecture and Training Parameters. An artificial neural network is an information processing paradigm which presents a computational analogy inspired by the human brain. An ANN consists of processing elements called artificial neurons, which are arranged in layers. The computational structure of an artificial neuron comprises several inputs $x_1, x_2, \ldots, x_n$ and an output $y$. Every input is assigned a weight factor ($w_1, w_2, \ldots, w_n$) signifying the importance of that input. The synaptic weights carry both positive and negative values: a positive weight value leads to the forward propagation of information, whereas a negative value inhibits the signal. The neuron has an additional input called the bias $b$, which is interpreted as an additional weight. The bias has a constant value, and its incorporation within the neuron helps the learning of the neural network. The inputs multiplied by their corresponding weights are summed up and form the argument of the neuron's activation function. Figure 1 represents the mathematical model of an artificial neuron.
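Numerically, the neuron of Figure 1 is just a weighted sum passed through a transfer function; a minimal sketch (the weights, bias, and inputs below are illustrative):

```python
import numpy as np

def neuron(x, w, b, f=np.tanh):
    """Artificial neuron of Figure 1: output = f(sum_i(w_i * x_i) + b)."""
    return f(np.dot(w, x) + b)

x = np.array([0.5, -0.2, 0.8])    # inputs x1..x3
w = np.array([0.4, 0.1, -0.6])    # synaptic weights w1..w3
b = 0.05                          # bias
print(neuron(x, w, b))
```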

The architecture of a neural network consists of three basic layer types, denoted the "input layer," the "output layer," and a number of intermediate "hidden layers." In multilayer feedforward neural networks (MFNNs), the neurons in each layer are connected in the forward direction only, and no intralayer connections between the neurons are permitted. The input features form the neurons of the input layer, and the output features are represented by the neurons of the output layer. The number of hidden layers and hidden layer neurons depends on the number of training cases, the amount of noise, and the degree of complexity of the function or classification desired to be learnt [22]. Hornik et al. [23] concluded that a three-layer feedforward neural network with a backpropagation algorithm can map any nonlinear relationship with a desired degree of accuracy. Some "rules of thumb" acting as initial guidelines for choosing a neural network architecture have been suggested [24-27]. Nevertheless, the selection of hidden layers and hidden layer neurons is a trial and error process, generally started by choosing a network with a minimum number of hidden layers and hidden neurons.

MFNNs trained using backpropagation (BP) algorithms are commonly used for tasks associated with function approximation and pattern recognition. The backpropagation algorithm, in essence, is a means of updating neural network synaptic weights by backpropagating a gradient vector in which each element is defined as the derivative of an error measure with respect to a parameter [28]. The BP algorithm is a supervised learning algorithm wherein exemplar patterns of associated input and output data values are presented to the neural network during training. The information from the input layer to the output layer proceeds in the forward direction only; with the help of the BP algorithm, the computed error is passed backwards, and the weights and biases are iteratively adjusted to enable learning of the neural network by reducing the network error to a threshold value.

A suitable learning rate and momentum coefficient are employed for efficient learning of the network. A higher learning rate leads to faster training, but in doing so it produces large oscillations in the weight change, which may force the ANN model to overshoot the optimal weight values. On the other hand, a lower learning rate makes convergence slower and increases the probability of the ANN model getting trapped in local minima. The momentum term effectively filters out the high frequency variations of the error surface in the weight space, since it adds the effect of past weight changes to the current direction of movement in the weight space [29]. A combined use of these parameters helps the BP algorithm to overcome the effect of local minima. The utility of transfer functions in neural networks is to introduce nonlinearity into the network. A consequence of the nonlinearity of this transfer function in the operation of the network is that the network is thereby enabled to deal robustly with complex, undefined relations between the inputs and the output [30].

Figure 2: Single hidden layer neural network with five hidden layer neurons. The input layer carries cement (kg/m³), PFA (kg/m³), sand (kg/m³), CA 20 mm (kg/m³), CA 10 mm (kg/m³), admixture (kg/m³), and water-binder ratio; the output layer gives slump (mm).

3.4. Evolving Neural Network Architecture and Training Parameters Using Trial and Error. In the present study, the input layer consists of seven neurons, namely, cement, fly ash, sand, coarse aggregate (20 mm), coarse aggregate (10 mm), admixture content, and water-binder ratio. The output layer comprises a single neuron representing the slump value corresponding to the seven input neurons defined above. In this study, eleven single hidden layer neural network architectures of different complexities, with hidden layer neurons varying in the range 5 to 20, have been used for evolving the optimal neural network architecture. The neural network architecture with five hidden layer neurons for the present study is shown in Figure 2. The learning rate and momentum coefficient have been varied in the range 0 to +1. Hyperbolic tangent and log-sigmoid transfer functions have been used in the hidden layer, along with a linear transfer function in the output layer. Different combinations of learning rate and momentum coefficient with hidden layer transfer function have been tried for effective training of the neural networks.
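A schematic of this trial and error search might look as follows; train_and_validate is a hypothetical helper that builds and trains a 7-N-1 network with the given settings and returns its validation RMSE, and the candidate learning rates and momentum coefficients shown are illustrative, not the values the authors actually tried:

```python
import itertools

def grid_search(train_and_validate):
    """Trial and error over hidden neurons (5-20), transfer function,
    learning rate, and momentum coefficient; keep the network with the
    lowest validation RMSE."""
    best_err, best_cfg = float("inf"), None
    for n, tf, lr, mc in itertools.product(
            range(5, 21),               # hidden layer neurons
            ("tansig", "logsig"),       # hidden layer transfer functions
            (0.15, 0.45, 0.75),         # candidate learning rates
            (0.50, 0.70, 0.85)):        # candidate momentum coefficients
        err = train_and_validate(n, tf, lr, mc)
        if err < best_err:
            best_err, best_cfg = err, (n, tf, lr, mc)
    return best_cfg, best_err
```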

The neural networks were trained using the training data set. The information to the neural network was presented through the input layer neurons, propagated in the forward direction through the hidden layer, and was processed by the hidden layer neurons. The network's response at the output layer was evaluated and compared with the actual output. The error between the actual and the predicted response of the neural network was computed and propagated in the backward direction to adjust the weights and biases of the neural network. Using the BP algorithm, the weights and biases were adjusted in a manner to render the error a minimum value. In the present study, the Levenberg-Marquardt backpropagation algorithm, a fast converging algorithm preferred for supervised learning, has been used as the training algorithm. During the training process, the neural network has a tendency to overfit the training data; this leads to poor generalization when the trained neural network is presented with unseen data. The validation data set is used to test the generalization ability of the trained neural network at each iteration cycle. Early stopping of neural network training is generally undertaken to avoid overfitting or overtraining of the neural network. In this technique, the validation error is monitored at each iteration cycle along with the training error, and the training is stopped once the validation error begins to increase. The neural network architecture having the least validation error is selected as the optimal one.
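The early stopping rule described above can be sketched as follows (train_epoch and validation_rmse are hypothetical hooks into the training loop; the paper stops as soon as the validation error rises):

```python
def train_with_early_stopping(train_epoch, validation_rmse, max_epochs=1000):
    """Run training one epoch at a time and stop once the validation
    RMSE increases, keeping the epoch with the least validation error."""
    best_err, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_epoch()                    # one pass of LM backpropagation
        err = validation_rmse()
        if err < best_err:
            best_err, best_epoch = err, epoch
            # a full implementation would snapshot the weights here
        else:
            break                        # validation error began to rise
    return best_epoch, best_err
```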

3.5. Evolving Neural Network Architecture and Training Parameters Using Genetic Algorithms. The genetic algorithm (GA), inspired by Darwin's theory of "survival of the fittest," is a global search and optimization algorithm which involves the use of genetic and evolution operators. GA is a population based search technique that works simultaneously on a number of probable solutions to a problem and uses probabilistic operators to narrow down the search to the region where there is maximum possibility of finding an optimal solution. GA presents a blend of exploration and exploitation of the solution search space by harnessing computational models of evolutionary processes like selection, crossover, and mutation. GAs outperform conventional optimization techniques in searching nonlinear and noncontinuous spaces which are characterized by abstract or poorly understood expert knowledge [31].

Figure 3: Evolving neural network architecture and training parameters using GA. Flow chart: GA initializes a population of chromosomes encoding the hidden layer neurons (N), transfer function, learning rate, and momentum coefficient; each 7-N-1 network is trained with the BP algorithm on the training data set, updating weights and biases until the threshold training error is reached or the validation error (RMSE) begins to rise; the training RMSE serves as the fitness of the chromosome; selection, crossover, and mutation then produce the next generation of the population, repeating until the fitness function saturates or the maximum number of generations is reached, after which the neural network architecture and training parameters are saved.

Automatic design of the neural network architecture and its training parameters is accomplished by amalgamating GA with the ANN during its training process. The methodology uses GA to evolve the neural network's hidden layer neurons and transfer function along with its learning rate and momentum coefficient. The BP algorithm then uses these ANN design variables to compute the training error. The training process is monitored at each iteration by computing the validation error, and the training of the neural network is stopped once the validation error starts to increase. This process is repeated over successive generations till the optimum neural network architecture and its training parameters are evolved. The steps of this methodology are presented as a flow chart in Figure 3 and are summarized below.


(1) Initialization of Genetic Algorithm with a Population of Chromosomes. In GA, the chromosomes contain the information regarding the solution to a problem. The chromosomes contain a number of genes which represent potential solutions to a problem. Being a population based heuristic, the GA starts with a number of chromosomes representing initial guesses of the possible solutions. In the present study, the chromosomes represent the information regarding the number of hidden layer neurons, the transfer function, the learning rate, and the momentum coefficient. The number of hidden layer neurons is initialized in the range 5 to 20 neurons and is represented as a discrete integer. The GA is allowed to select the transfer function between the tangent hyperbolic and log-sigmoid transfer functions. The learning rate and momentum coefficient are bounded in the range 0 to +1 and are represented by real numbers.

The size of the population is chosen in such a way as to promote the evolving of an optimal set of solutions to a particular problem. A large initial population of chromosomes tends to increase the computational time, whereas a small population size leads to poor quality solutions. Therefore, the population size must be chosen to strike a balance between the computational effort and the quality of the solution. In the present study, an initial population size of 50 chromosomes is used.
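A minimal sketch of this encoding and initialization (representing a chromosome as a Python dictionary is an assumption for illustration; the study uses MATLAB's GA internally):

```python
import random

def random_chromosome():
    """One chromosome encodes the four ANN design variables evolved by GA."""
    return {
        "hidden_neurons": random.randint(5, 20),             # discrete integer
        "transfer_fn": random.choice(["tansig", "logsig"]),  # hidden layer
        "learning_rate": random.uniform(0.0, 1.0),           # real number
        "momentum": random.uniform(0.0, 1.0),                # real number
    }

population = [random_chromosome() for _ in range(50)]  # population size 50
```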

(2) Creating the Neural Network. The neural network is created using the randomly generated number of hidden layer neurons. As discussed above, the number of input layer neurons is 7 and the number of output layer neurons is 1; hence the neural network architecture is initialized as 7-N-1, where N is the number of hidden layer neurons determined using GA. The transfer function for the hidden layer is also initialized using GA.

(3) Training of the Neural Network and Evaluating the Fitness of Chromosomes. The neural network is trained using the Levenberg-Marquardt backpropagation training algorithm. The learning parameters, namely, the learning rate and momentum coefficient initialized using GA, are used during the training process for systematic updating of the neural network weights and biases; the BP algorithm updates the weights and biases through backpropagation of errors. The early stopping technique is applied to improve the generalization of the neural network. The fitness function acts as a measure for distinguishing the optimal solution from numerous suboptimal solutions by evaluating the ability of the possible solutions to survive or, biologically speaking, it tests the reproductive efficiency of the chromosomes. The fitness of each chromosome is computed by evaluating the root mean square error (RMSE) using

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(T_i - P_i)^2} \qquad (2)$$

where $T_i$ and $P_i$ denote the target or observed values and the ANN predicted concrete slump values, respectively.
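Equation (2) translates directly into code; a sketch:

```python
import numpy as np

def fitness_rmse(targets, predictions):
    """Chromosome fitness per equation (2): RMSE between observed slump
    T_i and ANN predicted slump P_i."""
    t = np.asarray(targets, dtype=float)
    p = np.asarray(predictions, dtype=float)
    return np.sqrt(np.mean((t - p) ** 2))
```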

(4) Selecting the Fitter Chromosomes. The evolution operator of GA called the selection operator helps in selecting the fitter chromosomes based on the value of the fitness function. A selection operator performs a function synonymous to that of a filtering membrane, allowing the fitter chromosomes to survive and create new offspring. As one moves from one generation of the population to the next, the selection operator gradually increases the proportion of the fitter chromosomes in the population. The present study uses the roulette wheel selection strategy, which makes the probability of selection proportional to the fitness of the chromosome. The basic advantage of roulette wheel selection is that it discards none of the individuals in the population and gives all of them a chance to be selected [32].
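A sketch of roulette wheel selection follows. Because the fitness here is an RMSE to be minimized, the errors are inverted into scores before computing the selection weights; this inversion is one common convention and an assumption, since the paper does not spell out the detail:

```python
import random

def roulette_select(population, errors, k):
    """Roulette wheel selection: every individual keeps a nonzero chance
    of selection, with probability proportional to its (inverted) fitness."""
    scores = [1.0 / (e + 1e-12) for e in errors]   # lower RMSE -> higher score
    return random.choices(population, weights=scores, k=k)
```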

(5) Creating the Next Generation of the Population. The new generation of the population is created through two genetic operators of GA called crossover and mutation. The crossover operator is a recombination operator and is applied to a pair of parent chromosomes in the hope of creating a better offspring. This is done by randomly choosing a crossover point, copying the information before this point from the first parent, and then copying the information from the second parent beyond the crossover point. The present study utilized scattered crossover with probability 0.9 for recombining the two parent chromosomes to produce a fitter child.

The mutation operator modifies the existing building blocks of the chromosomes, maintaining genetic diversity in the population; it therefore prevents GA from getting trapped at a local minimum. In contrast to crossover, which exploits the current solution, mutation aids the exploration of the search space. Too high a mutation rate enlarges the search space to a level at which convergence, or finding the global optimum, becomes difficult, whereas too low a mutation rate drastically reduces the search space and eventually leads the genetic algorithm to get stuck in a local optimum. The present study uses uniform mutation with mutation rate 0.02. The procedure for creating a new population of chromosomes is continued till the maximum generation limit is reached or the fitness function reaches a saturation level. The maximum number of generations used for the present study is 150.
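A sketch of the two operators on the chromosome encoding above (scattered crossover is taken here in the MATLAB sense of a per-gene random binary mask deciding which parent contributes each gene; the dictionary representation remains an illustrative assumption):

```python
import random

GENES = ["hidden_neurons", "transfer_fn", "learning_rate", "momentum"]

def scattered_crossover(parent_a, parent_b, p_crossover=0.9):
    """With probability 0.9, build the child gene by gene from a randomly
    chosen parent; otherwise pass the first parent through unchanged."""
    if random.random() > p_crossover:
        return dict(parent_a)
    return {g: (parent_a if random.random() < 0.5 else parent_b)[g]
            for g in GENES}

def mutate(chrom, rate=0.02):
    """Uniform mutation: each gene is independently re-drawn from its
    allowed range with probability equal to the mutation rate (0.02)."""
    child = dict(chrom)
    if random.random() < rate:
        child["hidden_neurons"] = random.randint(5, 20)
    if random.random() < rate:
        child["transfer_fn"] = random.choice(["tansig", "logsig"])
    if random.random() < rate:
        child["learning_rate"] = random.uniform(0.0, 1.0)
    if random.random() < rate:
        child["momentum"] = random.uniform(0.0, 1.0)
    return child
```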

3.6. Evaluating the Performance of the Trained Models. The study uses six different statistical performance metrics for evaluating the performance of the trained models. The statistical parameters are mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), coefficient of correlation (R), Nash-Sutcliffe efficiency (E), and the root mean square error to standard deviation ratio (RSR). These performance statistics were evaluated using

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|T_i - P_i\right|,$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(T_i - P_i)^2},$$

$$\mathrm{MAPE}\,(\%) = \frac{1}{N}\sum_{i=1}^{N}\left|\frac{T_i - P_i}{T_i}\right| \times 100,$$


$$R = \frac{\sum_{i=1}^{N}(T_i - \bar{T})(P_i - \bar{P})}{\sqrt{\sum_{i=1}^{N}(T_i - \bar{T})^2 \sum_{i=1}^{N}(P_i - \bar{P})^2}},$$

$$E = 1 - \frac{\sum_{i=1}^{N}(T_i - P_i)^2}{\sum_{i=1}^{N}(T_i - \bar{T})^2},$$

$$\mathrm{RSR} = \frac{\mathrm{RMSE}}{\sqrt{(1/N)\sum_{i=1}^{N}(T_i - \bar{T})^2}} \qquad (3)$$

where $T_i$ and $P_i$ denote the target or observed values and the ANN predicted values, $\bar{T}$ and $\bar{P}$ represent the mean observed and mean ANN predicted values, respectively, and $N$ represents the total number of data points. Lower values of MAE, RMSE, MAPE, and RSR indicate good performance of the model. Values of the $E$ and $R$ statistics above 0.90 indicate good prediction by the model.
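All six measures of equation (3) in one sketch (NumPy's population standard deviation matches the (1/N) form in the RSR denominator):

```python
import numpy as np

def performance_metrics(T, P):
    """MAE, RMSE, MAPE, R, Nash-Sutcliffe E, and RSR per equation (3)."""
    T = np.asarray(T, dtype=float)
    P = np.asarray(P, dtype=float)
    mae = np.mean(np.abs(T - P))
    rmse = np.sqrt(np.mean((T - P) ** 2))
    mape = 100.0 * np.mean(np.abs((T - P) / T))
    r = (np.sum((T - T.mean()) * (P - P.mean()))
         / np.sqrt(np.sum((T - T.mean()) ** 2) * np.sum((P - P.mean()) ** 2)))
    e = 1.0 - np.sum((T - P) ** 2) / np.sum((T - T.mean()) ** 2)
    rsr = rmse / np.std(T)
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R": r, "E": e, "RSR": rsr}
```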

4. Results

The neural network architecture was evolved through the trial and error process by analyzing 30 different combinations of hidden layer neurons, transfer function, learning rate, and momentum coefficient. The optimal neural network architecture (BPNN) evolved was 7-11-1, having eleven hidden layer neurons, with learning rate 0.45, momentum coefficient 0.85, and tangent hyperbolic hidden layer transfer function. The same operation was performed by incorporating GA during the training of the ANN. The GA was able to evolve the optimal neural network architecture and training parameters in 92 generations (Figure 4). The time taken by GA to reach the saturation level of the fitness function, 2.2357 mm, was evaluated as 3053.412 seconds. The GA evolved neural network architecture (ANN-GA) comprised 9 hidden layer neurons and the tangent hyperbolic transfer function. The optimal learning rate and momentum coefficient for the backpropagation neural network were computed as 0.3975 and 0.9385, respectively. Both the ANN-GA and BPNN models, subsequent to training, were validated and tested. The results in terms of the performance statistics are presented in Table 2.

The entire RMC data was also used for evaluating the prediction ability of the trained models, namely, BPNN and ANN-GA. The regression plots showing the predictions of the trained BPNN and ANN-GA models are exhibited in Figures 5(a) and 5(b), respectively. The statistical performance for the entire data set is tabulated in Table 3.

5. Discussions

The results of the study show that the amalgamation of GA with ANN during its training phase leads to the evolution of an optimal neural network architecture and training parameters. In comparison to the trial and error BPNN neural network having architecture 7-11-1, the hybrid ANN-GA automatically evolved a less complex architecture, 7-9-1.

Figure 4: Fitness function (RMSE, mm) versus generation.

Table 2: Statistical performance of ANN models for the training, validation, and testing data sets.

Model      MAE (mm)   RMSE (mm)   MAPE (%)   R        E        RSR
Training
ANN        1.7378     2.4027      1.1862     0.9804   0.9610   0.1974
ANN-GA     1.506      2.2357      1.0479     0.9830   0.9663   0.1837
Validation
ANN        1.9829     2.7489      1.3474     0.9746   0.9482   0.2276
ANN-GA     1.6299     2.4687      1.0991     0.9794   0.9582   0.2044
Testing
ANN        2.0651     2.9582      1.3916     0.9735   0.9474   0.2294
ANN-GA     1.7769     2.6295      1.2382     0.9803   0.9584   0.2039

Table 3: Statistical performance of the trained ANN models for the entire data set.

Model      MAE (mm)   RMSE (mm)   MAPE (%)   R        E        RSR
Overall
ANN        1.8236     2.5470      1.2412     0.9783   0.9569   0.2075
ANN-GA     1.5754     2.3345      1.0841     0.9819   0.9638   0.1902

Moreover, the optimal training parameters evolved using GA were able to enhance the learning and generalization of the neural network. In comparison to the BPNN model, the ANN-GA model provided lower error statistics: MAE, RMSE, MAPE, and RSR values of 1.506 mm, 2.2357 mm, 1.0479%, and 0.1837 during training; 1.6299 mm, 2.4687 mm, 1.0991%, and 0.2044 during validation; and 1.7769 mm, 2.6295 mm, 1.2382%, and 0.2039 during testing, respectively. The trained ANN-GA model gave higher prediction accuracy, with higher values of the statistics R and E during training, validation, and testing of the trained models. The performance statistics computed for the entire data set using the trained ANN-GA model show lower MAE, RMSE, MAPE, and RSR values of 1.5754 mm, 2.3345 mm, 1.0841%, and 0.1902, respectively, and higher R and E values of 0.9819 and 0.9638, respectively, in comparison to the trained BPNN model.

Figure 5: Regression plots of predicted slump (mm) versus observed slump (mm) for (a) BPNN, R² = 0.9571, and (b) ANN-GA, R² = 0.9640.

Overall, the performance metrics show that the ANN-GA model consistently outperformed the BPNN model.

6. Conclusions

The study presented a methodology for designing neural networks using genetic algorithms. The genetic algorithm's population based stochastic search was harnessed during the training phase of the neural networks to evolve the number of hidden layer neurons, the type of transfer function, and the values of the learning parameters, namely, the learning rate and momentum coefficient, for the backpropagation based ANN.

The performance metrics show that the ANN-GA model outperformed the BPNN model in prediction accuracy. Moreover, the GA was able to automatically determine the number of hidden layer neurons, which was found to be less than that evolved using the trial and error methodology. The hybrid ANN-GA provided a good alternative to the time consuming conventional trial and error technique for evolving the optimal neural network architecture and its training parameters.

The proposed model, based on past experimental data, can be very handy for quickly predicting the complex material behavior of concrete. It can be used as a decision support tool, aiding the technical staff to easily predict the slump value for a particular concrete design mix. This technique will considerably reduce the effort and time needed to design a concrete mix for a customized slump without undertaking multiple trials. Despite the effectiveness and advantages of this methodology, it is also subject to some limitations. Since the mathematical modeling of concrete slump depends on the physical and chemical properties of the design mix constituents, the same trained model may or may not be applicable for accurate modeling of slump on the basis of design mix data obtained from other RMC plants deriving their raw material from a different source.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] I.-C. Yeh, "Modeling of strength of high-performance concrete using artificial neural networks," Cement and Concrete Research, vol. 28, no. 12, pp. 1797-1808, 1998.
[2] M. Uysal and H. Tanyildizi, "Estimation of compressive strength of self compacting concrete containing polypropylene fiber and mineral additives exposed to high temperature using artificial neural network," Construction and Building Materials, vol. 27, no. 1, pp. 404-414, 2012.
[3] Z. H. Duan, S. C. Kou, and C. S. Poon, "Prediction of compressive strength of recycled aggregate concrete using artificial neural networks," Construction and Building Materials, vol. 40, pp. 1200-1206, 2012.
[4] A. Abdollahzadeh, R. Masoudnia, and S. Aghababaei, "Predict strength of rubberized concrete using artificial neural network," WSEAS Transactions on Computers, vol. 10, no. 2, pp. 31-40, 2011.
[5] H. Naderpour, A. Kheyroddin, and G. G. Amiri, "Prediction of FRP-confined compressive strength of concrete using artificial neural networks," Composite Structures, vol. 92, no. 12, pp. 2817-2829, 2010.
[6] R. Parichatprecha and P. Nimityongskul, "Analysis of durability of high performance concrete using artificial neural networks," Construction and Building Materials, vol. 23, no. 2, pp. 910-917, 2009.
[7] L. Bal and F. Buyle-Bodin, "Artificial neural network for predicting drying shrinkage of concrete," Construction and Building Materials, vol. 38, pp. 248-254, 2013.
[8] T. Ji, T. Lin, and X. Lin, "A concrete mix proportion design algorithm based on artificial neural networks," Cement and Concrete Research, vol. 36, no. 7, pp. 1399-1408, 2006.
[9] F. Demir, "Prediction of elastic modulus of normal and high strength concrete by artificial neural networks," Construction and Building Materials, vol. 22, no. 7, pp. 1428-1435, 2008.
[10] W. P. S. Dias and S. P. Pooliyadda, "Neural networks for predicting properties of concretes with admixtures," Construction and Building Materials, vol. 15, no. 7, pp. 371-379, 2001.
[11] I.-C. Yeh, "Exploring concrete slump model using artificial neural networks," Journal of Computing in Civil Engineering, vol. 20, no. 3, pp. 217-221, 2006.
[12] A. Oztas, M. Pala, E. Ozbay, E. Kanca, N. Caglar, and M. A. Bhatti, "Predicting the compressive strength and slump of high strength concrete using neural network," Construction and Building Materials, vol. 20, no. 9, pp. 769-775, 2006.
[13] I.-C. Yeh, "Modeling slump flow of concrete using second-order regressions and artificial neural networks," Cement and Concrete Composites, vol. 29, no. 6, pp. 474-480, 2007.
[14] A. Jain, S. Kumar Jha, and S. Misra, "Modeling and analysis of concrete slump using artificial neural networks," Journal of Materials in Civil Engineering, vol. 20, no. 9, pp. 628-633, 2008.
[15] R. B. Boozarjomehry and W. Y. Svrcek, "Automatic design of neural network structures," Computers & Chemical Engineering, vol. 25, no. 7-8, pp. 1075-1088, 2001.
[16] J. S. Son, D. M. Lee, I. S. Kim, and S. K. Choi, "A study on genetic algorithm to select architecture of a optimal neural network in the hot rolling process," Journal of Materials Processing Technology, vol. 153-154, no. 1-3, pp. 643-648, 2004.
[17] M. Saemi, M. Ahmadi, and A. Y. Varjani, "Design of neural networks using genetic algorithm for the permeability estimation of the reservoir," Journal of Petroleum Science and Engineering, vol. 59, no. 1-2, pp. 97-105, 2007.
[18] P. G. Benardos and G.-C. Vosniakos, "Optimizing feedforward artificial neural network architecture," Engineering Applications of Artificial Intelligence, vol. 20, no. 3, pp. 365-382, 2007.
[19] S. Wang, X. Dong, and R. Sun, "Predicting saturates of sour vacuum gas oil using artificial neural networks and genetic algorithms," Expert Systems with Applications, vol. 37, no. 7, pp. 4768-4771, 2010.
[20] K. L. Priddy and P. E. Keller, Artificial Neural Networks: An Introduction, SPIE, The International Society for Optical Engineering, Bellingham, Wash, USA, 2005.
[21] M. M. Alshihri, A. M. Azmy, and M. S. El-Bisy, "Neural networks for predicting compressive strength of structural light weight concrete," Construction and Building Materials, vol. 23, no. 6, pp. 2214-2219, 2009.
[22] D. Sovil, V. Kvanicka, and J. Pospichal, "Introduction to multilayer feed forward neural networks," Chemometrics and Intelligent Laboratory Systems, vol. 39, no. 1, pp. 43-62, 1997.
[23] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
[24] A. Blum, Neural Networks in C++: An Object-Oriented Framework for Building Connectionist Systems, John Wiley & Sons, New York, NY, USA, 1992.
[25] M. J. A. Berry and G. Linoff, Data Mining Techniques, John Wiley & Sons, New York, NY, USA, 1997.
[26] Z. Boger and H. Guterman, "Knowledge extraction from artificial neural network models," in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, vol. 4, pp. 3030-3035, Orlando, Fla, USA, October 1997.
[27] K. Swingler, Applying Neural Networks: A Practical Guide, Morgan Kaufman, San Francisco, Calif, USA, 2001.
[28] M. H. Sazli, "A brief review of feed-forward neural networks," Communications Faculty of Science, University of Ankara, vol. 50, no. 1, pp. 11-17, 2006.
[29] S. Rajasekaran and G. A. V. Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms: Synthesis & Applications, Prentice-Hall of India Private Limited, New Delhi, India, 2003.
[30] A. Y. Shamseldin, A. E. Nasr, and K. M. O'Connor, "Comparison of different forms of the multi-layer feed-forward neural network method used for river flow forecasting," Hydrology and Earth System Sciences, vol. 6, no. 4, pp. 671-684, 2002.
[31] A. Mellit, S. A. Kalogirou, and M. Drif, "Application of neural networks and genetic algorithms for sizing of photovoltaic systems," Renewable Energy, vol. 35, no. 12, pp. 2881-2893, 2010.
[32] N. M. Razali and J. Geraghty, "Genetic algorithm performance with different selection strategies in solving TSP," in Proceedings of the World Congress on Engineering, vol. 2, pp. 1134-1139, 2011.


Page 2: Research Article Modeling Slump of Ready Mix Concrete Using … · 2019. 7. 31. · Research Article Modeling Slump of Ready Mix Concrete Using Genetically Evolved Artificial Neural

2 Advances in Artificial Neural Systems

containing silica fume and plasticizers [10] predicting slumpof fly ash and slag concrete [11] modeling slump of highstrength concrete [12] modeling slump of high performanceconcrete [13] and modeling and analysis of concrete slumpusing laboratory test results [14] have proved the effective-ness of ANN in modeling slump of concrete

Besides ANN applications in modeling behavior of con-crete discussed above there aremanymultidisciplinary appli-cations of ANNwhich are beyond the scope of this paperThereason for this rapid growth in the field of neural networksis attributed to its ldquoblack boxrdquo nature which allows it to beapplied to almost every available problem without seekingthe knowledge of underlying relationships among the inputand output variables In spite of its popularity as a universalfunction approximator the neural networks are still beingdesigned using trial and error approach Generally iterativetechniques using different combinations of number of hiddenlayers and hidden layer neurons are employed in conjunctionwith different learning rate momentum coefficient andtransfer function to arrive at optimal neural network designThis technique of designing a neural network is thereforetime consuming and relies heavily on the experience ofthe designer In order to reduce the effort and time indesigning the optimal neural network architecture and itstraining parameters various studies for automatic designof neural network have been successfully performed in thepast by harnessing the stochastic search ability of geneticalgorithms [15ndash19] The autodesign of neural network usinggenetic algorithms (GA) has not been so far employedfor modeling the slump of concrete The research paperpresents a methodology for evolving an optimal architectureof neural network and its training parameters using GA formodeling the slump of concrete based on concretersquos designmix proportions

The study has been organized into sections Section 2deals with data collection and its description Section 3deals with the methodology in which trial and error neu-ral network modeling of concrete slump evolving neuralnetwork architecture and training parameters using geneticalgorithms and statistical performance measures have beendiscussed Results discussions and conclusions have beendealt with in Sections 4 5 and 6 respectively

2 Collection of Data and Its Description

The exemplar data for ANN was collected from the sameRMC plant to mitigate any chance of change caused in theslump data due to change in physical and chemical propertiesof the concrete design mix constituents The collected datacomprised 560 concrete design mix proportions and theircorresponding slump test valuesThe designmix proportionsincluded weight per m3 of cement pulverized fly ash (PFA)sand (as fine aggregate) coarse aggregate 20mm coarseaggregate 10mm admixture and water-binder ratio Therange (maximum and minimum values) of the RMC dataused in the study is shown in Table 1

Table 1 Range of RMC data used for neural network modeling

RMC data Maximum MinimumCement (kgm3) 425 100Fly ash (kgm3) 220 0Sand (kgm3) 900 550Coarse aggregate 20mm (kgm3) 788 58Coarse aggregate 10mm (kgm3) 771 343Admixture (kgm3) 55 10Water-binder ratio 076 036Concrete slump (mm) 175 110

3 Methodology

For conducting the study the Neural Network Toolbox andGlobal Optimization Toolbox included in the commerciallyavailable software MATLAB R2011b (version 7130564) wereused to implement the BPNN and GA respectively

31 Dividing Data into Training Validation and Test DataSets ANN derives learning capabilities through trainingusing input-output data pairs and subsequent generalizationability when subjected to unseen data The training andgeneralization of neural networks are accomplished using atraining data set and a validation data set respectively Therobustness of the trained and validated neural network istested using a test data set This procedure is accomplishedby dividing the entire data into three disjoint sets namelytraining data set validation data set and test data set Theavailable data was randomized and 70 of the data wasdesignated as training data set and the remaining 30 datawas equally divided to create the validation and test data setsrespectively

32 Normalization of Data The data used for trainingvalidation and testing of neural networks comprise inputsand corresponding output features of different identitieswhich normally have minimum similarities Moreover therange (minimumndashmaximumvalues) of the data used for eachinput and output component is also quite different In orderto scale down all the inputs and outputs in a particular boundrange preferably minus1 to +1 or 0 to +1 data normalization isperformed This type of normalization has the advantage ofpreserving exactly all relationships in the data and it does notintroduce any bias [20] This facilitates the learning speed asthese values fall in the region of sigmoid transfer functionwhere the output is most sensitive to the variations of theinput values [21] Linear scaling in the range minus1 to +1 has beenused in the present study having function

119909norm =2 lowast (119909 minus 119909min)

(119909max minus 119909min)minus 1 (1)

where119909norm is the normalized value of the variable119909 and119909maxand 119909min are the minimum and maximum values of variable119909 respectively

Advances in Artificial Neural Systems 3

Processing done by a neuron

Transfer function

w1

w2

w3

x1

x2

x3

Inpu

ts

Out

put

sum int

sum (wixi) + b

xn

wn

f[sum (wixi) + b]

b

Figure 1 Mathematical model of an artificial neuron

33 Neural Network Architecture and Training ParametersAn artificial neural network is an information processingparadigm which presents a computational analogy inspiredby human brain ANN consists of processing elements calledthe artificial neurons which are arranged in layers Thecomputational structure of an artificial neuron comprisesseveral inputs 119909

1 119909

2 119909

119899and an output 119910 Every input

is assigned a weight factor (1199081 119908

2 119908

119899) signifying the

importance of the input The synaptic weights carry bothpositive and negative values A positive weight value leads tothe forward propagation of information whereas a negativevalue inhibits the signal The neuron has an additional inputcalled bias 119887 and is interpreted as an additional weight Biashas a constant value and its incorporation within the neuronhelps in learning of the neural networkThe inputsmultipliedby corresponding weights are summed up and form the argu-ment of the neuronrsquos activation function Figure 1 representsthe mathematical model of an artificial neuron

The architecture of a neural network consists of threebasic layers denoted by ldquoinput layerrdquo ldquooutput layerrdquo and anumber of intermediate ldquohidden layersrdquo In case ofmultilayerfeedforward neural networks (MFNN) the neurons in eachlayer are connected in the forward direction only and nointralayer connections between the neurons are permittedThe input features form the neurons of the input layer andoutput features are represented by the neurons of the outputlayer The number of hidden layers and hidden layer neuronsdepends on the number of training cases the amount ofnoise and the degree of complexity of the function or theclassification desired to be learnt [22] Hornik et al [23]concluded that a three-layer feedforward neural networkwithbackpropagation algorithm can map any nonlinear relation-ship with desired degree of accuracy Some ldquorules of thumbrdquoacting as initial guidelines for choosing neural networkarchitecture have been suggested by [24ndash27] Neverthelessthe selection of hidden layers and hidden layer neurons is

a trial and error process and generally started by choosinga network with a minimum number of hidden layers andhidden neurons

MFNNs trained using backpropagation (BP) algorithmsare commonly used for tasks associated with functionapproximation and pattern recognition Backpropagationalgorithm in essence is a means of updating neural networksynaptic weights by backpropagating a gradient vector inwhich each element is defined as the derivative of an errormeasure with respect to a parameter [28] BP algorithm is asupervised learning algorithm wherein exemplar patterns ofassociated input and output data values are presented to theneural network during training The information from inputlayer to output layer proceeds in forward direction only andwith the help of BP algorithm the error computed is passedbackwards and the weights and biases are iteratively adjustedto enable learning of neural network by reducing networkerror to a threshold value

A suitable learning rate and momentum coefficient areemployed for efficient learning of the network A higherlearning rate leads to faster training but by doing so itproduces large oscillations in the weight change which mayforce the ANNmodel to overshoot the optimal weight valuesOn the other hand a lower learning rate makes convergenceslower and increases the probability of ANN model to gettrapped in local minima The momentum term effectivelyfilters out the high frequency variations of the error surfacein the weight space since it adds the effect of the pastweight changes on the current direction of movement in theweight space [29] A combined use of these parameters helpsthe BP algorithm to overcome the effect of local minimaThe utility of transfer functions in neural networks is tointroduce nonlinearity into the network A consequence ofthe nonlinearity of this transfer function in the operation

4 Advances in Artificial Neural Systems

Slump (mm)

Cement (kgm3)

Water-binder ratio

Input layer Hidden layer Output layer

PFA (kgm3)

Sand (kgm3)

Admixture (kgm3)

CA 20mm (kgm3)

CA 10mm (kgm3)

Figure 2 Single hidden layer neural network with five hidden layer neurons

of the network when so introduced is that the network isthereby enabled to deal robustly with complex undefinedrelations between the inputs and the output [30]

34 Evolving Neural Network Architecture and TrainingParameters Using Trial and Error In the present study theinput layer consists of seven neurons namely cement fly ashsand coarse aggregate (20mm) coarse aggregate (10mm)admixture content and water-binder ratio The output layercomprises a single neuron representing the slump value cor-responding to the seven input neurons defined above In thisstudy eleven single hidden layer neural network architecturesof different complexities with hidden layer neurons varyingin the range 5 to 20 have been used for evolving the optimalneural network architectureThe neural network architecturewith five hidden layer neurons for the present study is shownin Figure 2 The learning rate and momentum coefficienthave been varied in the range 0 to +1 Hyperbolic tangentand log-sigmoid transfer functions have been used in thehidden layers alongwith linear transfer function in the outputlayer Different combinations of learning rate andmomentumcoefficientwith hidden layer transfer function have been triedfor effective training of neural networks

The neural networks were trained using the training data set. The information to the neural network was presented through the input layer neurons. The information propagated in the forward direction through the hidden layers and was processed by the hidden layer neurons. The network's response at the output layer was evaluated and compared with the actual output. The error between the actual and the predicted response of the neural network was computed and propagated in the backward direction to adjust the weights and biases of the neural network. Using the BP algorithm, the weights and biases were adjusted in a manner to render the error to a minimum value. In the present study, the Levenberg-Marquardt backpropagation algorithm, which is the fastest converging algorithm preferred for supervised learning, has been used as the training algorithm. During the training process, the neural network has a tendency to overfit the training data. This leads to poor generalization when the trained neural network is presented with unseen data. The validation data set is used to test the generalization ability of the trained neural network at each iteration cycle. Early stopping of neural network training is generally undertaken to avoid overfitting or overtraining of the neural network. In this technique, the validation error is also monitored at each iteration cycle along with the training error, and the training is stopped once the validation error begins to increase. The neural network architecture having the least validation error is selected as the optimal one; the sketch below illustrates this stopping rule.
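A minimal sketch of the early stopping rule on an invented validation error history (the numbers are illustrative, not from the study):

```python
# Early stopping on a synthetic validation RMSE history: training stops at
# the first epoch where validation error rises, and the epoch with the
# least validation error supplies the final weights.
val_rmse = [31.2, 28.4, 26.1, 24.9, 24.7, 25.3, 26.0]  # mm, per epoch (invented)

stop_epoch = next(
    (i for i in range(1, len(val_rmse)) if val_rmse[i] > val_rmse[i - 1]),
    len(val_rmse) - 1,
)
best_epoch = min(range(stop_epoch + 1), key=val_rmse.__getitem__)
print(f"stop at epoch {stop_epoch}; keep weights from epoch {best_epoch}")
```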

3.5. Evolving Neural Network Architecture and Training Parameters Using Genetic Algorithms. Genetic algorithm (GA), inspired by Darwin's theory of "survival of the fittest," is a global search and optimization algorithm which involves the use of genetic and evolution operators. GA is a population based search technique that simultaneously works on a number of probable solutions to a problem at a time and uses probabilistic operators to narrow down the search to the region where there is maximum possibility of finding an optimal solution. GA presents a blend of exploration and exploitation of the solution search space by harnessing computational models of evolutionary processes like selection, crossover, and mutation. GAs outperform the efficiency of conventional optimization techniques in searching nonlinear and noncontinuous spaces which are characterized by abstract or poorly understood expert knowledge [31].

Automatic design of the neural network architecture and its training parameters is accomplished by amalgamating GA with ANN during its training process. The methodology uses GA to evolve the neural network's hidden layer neurons and transfer function along with its learning rate and momentum coefficient. The BP algorithm then uses these ANN design variables to compute the training error. The training process is monitored at each iteration by computing the validation error. The training of the neural network is stopped once the validation error starts to increase. This process is repeated a number of times until the optimum neural network architecture and its training parameters are evolved. The steps of this methodology are presented as a flow chart in Figure 3 and are summarized below.

Figure 3: Evolving neural network architecture and training parameters using GA (flow chart: GA initializes chromosomes encoding hidden layer neurons N, transfer function, learning rate, and momentum coefficient; each 7-N-1 network is trained by BP with early stopping; the training RMSE serves as the chromosome's fitness; and selection, crossover, and mutation produce the next generation until the fitness function saturates or the maximum number of generations is reached).


(1) Initialization of Genetic Algorithm with Population of Chromosomes. In GA, the chromosomes contain the information regarding the solution to a problem. The chromosomes contain a number of genes which represent potential solutions to a problem. Being a population based heuristic, the GA starts with a number of chromosomes representing initial guesses at the possible solutions to a problem. In the present study, the chromosomes represent the information regarding the number of hidden layer neurons, transfer function, learning rate, and momentum coefficient. The number of hidden layer neurons is initialized in the range 5 to 20 and is represented as a discrete integer. The GA is allowed to select the transfer function between the tangent hyperbolic and log-sigmoid transfer functions. The learning rate and momentum coefficient are bounded in the range 0 to +1 and are represented by real numbers.

The size of the population is chosen in such a way as to promote the evolving of an optimal set of solutions to a particular problem. A large initial population of chromosomes tends to increase the computational time, whereas a small population size leads to poor quality solutions. Therefore, the population size must be chosen to strike a balance between the computational effort and the quality of the solution. In the present study, an initial population size of 50 chromosomes is used; a sketch of this encoding follows.
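A minimal sketch of this encoding and initialization, assuming each chromosome is held as a (N, transfer function, learning rate, momentum) tuple (the tuple layout is an illustrative choice, not prescribed by the paper):

```python
# Hypothetical chromosome encoding: (hidden neurons N, transfer function,
# learning rate, momentum coefficient), initialized at random within the
# ranges stated in the text, for a population of 50.
import random

def random_chromosome():
    return (
        random.randint(5, 20),                # N: discrete integer, 5 to 20
        random.choice(["tansig", "logsig"]),  # tangent hyperbolic or log-sigmoid
        random.uniform(0.0, 1.0),             # learning rate in (0, 1)
        random.uniform(0.0, 1.0),             # momentum coefficient in (0, 1)
    )

population = [random_chromosome() for _ in range(50)]
```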

(2) Creating the Neural Network. The neural network is created using the randomly generated hidden layer neurons. As discussed above, the number of input layer neurons is 7 and the number of output layer neurons is 1; hence, the neural network architecture is initialized as 7-N-1, where N is the number of hidden layer neurons determined using GA. The transfer function for the hidden layers is also initialized using GA.

(3) Training of Neural Network and Evaluating Fitness of Chromosomes. The neural network is trained using the Levenberg-Marquardt backpropagation training algorithm. The learning parameters, namely, the learning rate and momentum coefficient initialized using GA, are used during the training process for systematic updating of the neural network weights and biases. The BP algorithm updates the weights and biases through backpropagation of errors. The early stopping technique is applied to improve the generalization of the neural network. The fitness function acts as a measure for distinguishing the optimal solution from numerous suboptimal solutions by evaluating the ability of the possible solutions to survive or, biologically speaking, it tests the reproductive efficiency of chromosomes. The fitness of each chromosome is computed by evaluating the root mean square error (RMSE) using

\[
\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(T_i - P_i\right)^2} \qquad (2)
\]

where $T_i$ and $P_i$ denote the target or observed values and the ANN predicted concrete slump values, respectively.

(4) Selecting the Fitter Chromosomes. The evolution operator of GA, called the selection operator, helps in selecting the fitter chromosomes based on the value of the fitness function. A selection operator performs a function synonymous to a filtering membrane, allowing the fitter chromosomes to survive to create new offspring. As one moves from one generation of the population to the next, the selection operator gradually increases the proportion of the fitter chromosomes in the population. The present study uses the roulette wheel selection strategy, which makes the probability of selection proportional to the fitness of the chromosome. The basic advantage of roulette wheel selection is that it discards none of the individuals in the population and gives a chance to all of them to be selected [32]; a sketch follows.
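A minimal roulette wheel sketch; since the fitness here is an error (RMSE) to be minimized, inverse RMSE is used as the selection weight, which is one common convention that the paper does not spell out:

```python
# Roulette wheel selection for a minimization problem: lower RMSE gets a
# proportionally larger slice of the wheel via the inverse-RMSE weight
# (the inverse-fitness scaling is an assumption, not stated in the paper).
import random

def roulette_select(population, rmse_values):
    weights = [1.0 / r for r in rmse_values]
    pick = random.uniform(0.0, sum(weights))
    acc = 0.0
    for chromosome, w in zip(population, weights):
        acc += w
        if acc >= pick:
            return chromosome
    return population[-1]  # guard against floating point round-off
```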

(5) Creating the Next Generation of the Population. The new generation of the population is created through two genetic operators of GA, called crossover and mutation. The crossover operator is a recombination operator and is applied to a pair of parent chromosomes with the hope of creating a better offspring. This is done by randomly choosing a crossover point, copying the information before this point from the first parent, and then copying the information from the second parent beyond the crossover point. The present study utilized scattered crossover with probability 0.9 for recombining the two parent chromosomes to produce a fitter child; a sketch appears below.
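Scattered crossover (as implemented, for example, by MATLAB's @crossoverscattered) uses a random binary mask rather than a single cut point; a minimal sketch over the four-gene chromosome assumed earlier:

```python
# Scattered crossover: a random binary mask chooses, gene by gene, which
# parent contributes to the child (applied with probability 0.9 per pair).
import random

def scattered_crossover(parent_a, parent_b, p_crossover=0.9):
    if random.random() > p_crossover:
        return parent_a  # no recombination for this pair
    mask = [random.random() < 0.5 for _ in parent_a]
    return tuple(a if m else b for m, a, b in zip(mask, parent_a, parent_b))
```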

The mutation operator modifies the existing building blocks of the chromosomes, maintaining genetic diversity in the population. It therefore prevents the GA from getting trapped at a local minimum. In contrast to crossover, which exploits the current solution, mutation aids the exploration of the search space. A too high mutation rate widens the search to a level at which convergence to the global optimum becomes difficult, whereas a too low mutation rate drastically reduces the search space and eventually leads the genetic algorithm to get stuck in a local optimum. The present study uses uniform mutation with a mutation rate of 0.02 (see the sketch below). The procedure for creating a new population of chromosomes is continued until the maximum generation limit is reached or the fitness function reaches a saturation level. The maximum number of generations used for the present study is 150.
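A sketch of uniform mutation over the same assumed four-gene encoding; each gene is independently redrawn from its own range with probability 0.02:

```python
# Uniform mutation at rate 0.02: every gene is independently replaced by a
# fresh random value from its own range (encoding as assumed in step (1)).
import random

def uniform_mutation(chromosome, rate=0.02):
    n, transfer, lr, mc = chromosome
    if random.random() < rate:
        n = random.randint(5, 20)
    if random.random() < rate:
        transfer = random.choice(["tansig", "logsig"])
    if random.random() < rate:
        lr = random.uniform(0.0, 1.0)
    if random.random() < rate:
        mc = random.uniform(0.0, 1.0)
    return (n, transfer, lr, mc)
```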

3.6. Evaluating Performance of the Trained Models. The study uses six different statistical performance metrics for evaluating the performance of the trained models. The statistical parameters are mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), coefficient of correlation (R), Nash-Sutcliffe efficiency (E), and root mean square to standard deviation ratio (RSR). The above performance statistics were evaluated using

\[
\begin{aligned}
\mathrm{MAE} &= \frac{1}{N} \sum_{i=1}^{N} \left| T_i - P_i \right| \\
\mathrm{RMSE} &= \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left(T_i - P_i\right)^2} \\
\mathrm{MAPE}\,(\%) &= \frac{1}{N} \sum_{i=1}^{N} \frac{\left| T_i - P_i \right|}{T_i} \times 100 \\
R &= \frac{\sum_{i=1}^{N} \left(T_i - \bar{T}\right)\left(P_i - \bar{P}\right)}{\sqrt{\sum_{i=1}^{N} \left(T_i - \bar{T}\right)^2 \sum_{i=1}^{N} \left(P_i - \bar{P}\right)^2}} \\
E &= 1 - \frac{\sum_{i=1}^{N} \left(T_i - P_i\right)^2}{\sum_{i=1}^{N} \left(T_i - \bar{T}\right)^2} \\
\mathrm{RSR} &= \frac{\mathrm{RMSE}}{\sqrt{(1/N) \sum_{i=1}^{N} \left(T_i - \bar{T}\right)^2}}
\end{aligned} \qquad (3)
\]

where $T_i$ and $P_i$ denote the target or observed values and the ANN predicted values, and $\bar{T}$ and $\bar{P}$ represent the mean observed and mean ANN predicted values, respectively; $N$ represents the total number of data points. Lower values of MAE, RMSE, MAPE, and RSR indicate good performance of the model, while values of the $E$ and $R$ statistics above 0.90 indicate good prediction by the model. A NumPy sketch of these statistics follows.
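A compact NumPy sketch of the six statistics in (3); the function and variable names, and the sample values, are illustrative only:

```python
# Six performance statistics of (3); t holds observed slump values and
# p the ANN predictions (both as 1-D NumPy arrays).
import numpy as np

def performance_stats(t: np.ndarray, p: np.ndarray) -> dict:
    err = t - p
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mape = float(100.0 * np.mean(np.abs(err) / t))
    st, sp = t - t.mean(), p - p.mean()
    r = float(np.sum(st * sp) / np.sqrt(np.sum(st ** 2) * np.sum(sp ** 2)))
    e = float(1.0 - np.sum(err ** 2) / np.sum(st ** 2))
    rsr = rmse / float(np.sqrt(np.mean(st ** 2)))   # RMSE over std. deviation
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape, "R": r, "E": e, "RSR": rsr}

t = np.array([120.0, 135.0, 150.0, 160.0])   # invented observed slumps, mm
p = np.array([118.0, 138.0, 148.0, 163.0])   # invented predictions, mm
print(performance_stats(t, p))
```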

4. Results

The neural network architecture was evolved through the trial and error process by analyzing 30 different combinations of hidden layer neurons, transfer function, learning rate, and momentum coefficient. The optimal neural network architecture (BPNN) was evolved as 7-11-1, having eleven hidden layer neurons, with learning rate 0.45, momentum coefficient 0.85, and the tangent hyperbolic hidden layer transfer function. The same operation was performed by incorporating GA during the training of the ANN. The GA was able to evolve the optimal neural network architecture and training parameters in 92 generations (Figure 4). The time taken by the GA to reach the saturation level of the fitness function, 22.357 mm, was evaluated as 3053.412 seconds. The GA evolved neural network architecture (ANN-GA) comprised 9 hidden layer neurons and the tangent hyperbolic transfer function. The optimal learning rate and momentum coefficient for the backpropagation neural network were computed as 0.3975 and 0.9385, respectively. Both the ANN-GA and BPNN models, subsequent to training, were validated and tested. The results in terms of the performance statistics are presented in Table 2.

Figure 4: Fitness function (RMSE) versus generation.

Table 2: Statistical performance of ANN models for training, validation, and testing data sets.

             Model    MAE (mm)  RMSE (mm)  MAPE (%)  R       E       RSR
Training     ANN      17.378    24.027     11.862    0.9804  0.9610  0.1974
             ANN-GA   15.060    22.357     10.479    0.9830  0.9663  0.1837
Validation   ANN      19.829    27.489     13.474    0.9746  0.9482  0.2276
             ANN-GA   16.299    24.687     10.991    0.9794  0.9582  0.2044
Testing      ANN      20.651    29.582     13.916    0.9735  0.9474  0.2294
             ANN-GA   17.769    26.295     12.382    0.9803  0.9584  0.2039

The entire RMC data set was also used for evaluating the prediction ability of the trained models, namely, BPNN and ANN-GA. The regression plots showing the predictions of the trained BPNN and ANN-GA models are exhibited in Figures 5(a) and 5(b), respectively. The statistical performance for the entire data set is tabulated in Table 3.

Figure 5: Regression plots of (a) BPNN (R² = 0.9571) and (b) ANN-GA (R² = 0.9640) predicted slump versus observed slump.

Table 3: Statistical performance of the trained ANN models for the entire data set.

          Model    MAE (mm)  RMSE (mm)  MAPE (%)  R       E       RSR
Overall   ANN      18.236    25.470     12.412    0.9783  0.9569  0.2075
          ANN-GA   15.754    23.345     10.841    0.9819  0.9638  0.1902

5. Discussion

The results of the study show that the amalgamation of GA with ANN during its training phase leads to the evolving of the optimal neural network architecture and training parameters. In comparison to the trial and error BPNN neural network having architecture 7-11-1, the hybrid ANN-GA automatically evolved a less complex architecture, 7-9-1. Moreover, the optimal training parameters evolved using GA were able to enhance the learning and generalization of the neural network. In comparison to the BPNN model, the ANN-GA model provided lower error statistics, with MAE, RMSE, MAPE, and RSR values of 15.06 mm, 22.357 mm, 10.479%, and 0.1837 during training; 16.299 mm, 24.687 mm, 10.991%, and 0.2044 during validation; and 17.769 mm, 26.295 mm, 12.382%, and 0.2039 during testing, respectively. The trained ANN-GA model gave higher prediction accuracy, with higher values of the statistics R and E during training, validation, and testing of the trained models. The performance statistics computed for the entire data set using the trained ANN-GA model show lower MAE, RMSE, MAPE, and RSR values of 15.754 mm, 23.345 mm, 10.841%, and 0.1902, respectively, and higher R and E values of 0.9819 and 0.9638, respectively, in comparison to the trained BPNN model. Overall, the performance metrics show that the ANN-GA model has consistently outperformed the BPNN model.

6. Conclusions

The study presented a methodology for designing neural networks using genetic algorithms. The genetic algorithm's population based stochastic search was harnessed during the training phase of the neural networks to evolve the number of hidden layer neurons, the type of transfer function, and the values of the learning parameters, namely, the learning rate and momentum coefficient, for the backpropagation based ANN.

The performance metrics show that the ANN-GA model outperformed the prediction accuracy of the BPNN model. Moreover, the GA was able to automatically determine the number of hidden layer neurons, which was found to be less than that evolved using the trial and error methodology. The hybrid ANN-GA provided a good alternative to the time consuming conventional trial and error technique for evolving the optimal neural network architecture and its training parameters.

The proposed model, based on past experimental data, can be very handy for predicting the complex material behavior of concrete in quick time. It can be used as a decision support tool, aiding the technical staff to easily predict the slump value for a particular concrete design mix. This technique will considerably reduce the effort and time needed to design a concrete mix for a customized slump without undertaking multiple trials. Despite the effectiveness and advantages of this methodology, it is also subject to some limitations. Since the mathematical modeling of concrete slump depends on the physical and chemical properties of the design mix constituents, the same trained model may or may not be applicable for accurate modeling of slump on the basis of design mix data obtained from other RMC plants deriving their raw material from a different source.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] I.-C. Yeh, "Modeling of strength of high-performance concrete using artificial neural networks," Cement and Concrete Research, vol. 28, no. 12, pp. 1797–1808, 1998.

[2] M. Uysal and H. Tanyildizi, "Estimation of compressive strength of self compacting concrete containing polypropylene fiber and mineral additives exposed to high temperature using artificial neural network," Construction and Building Materials, vol. 27, no. 1, pp. 404–414, 2012.

[3] Z. H. Duan, S. C. Kou, and C. S. Poon, "Prediction of compressive strength of recycled aggregate concrete using artificial neural networks," Construction and Building Materials, vol. 40, pp. 1200–1206, 2012.

[4] A. Abdollahzadeh, R. Masoudnia, and S. Aghababaei, "Predict strength of rubberized concrete using artificial neural network," WSEAS Transactions on Computers, vol. 10, no. 2, pp. 31–40, 2011.

[5] H. Naderpour, A. Kheyroddin, and G. G. Amiri, "Prediction of FRP-confined compressive strength of concrete using artificial neural networks," Composite Structures, vol. 92, no. 12, pp. 2817–2829, 2010.

[6] R. Parichatprecha and P. Nimityongskul, "Analysis of durability of high performance concrete using artificial neural networks," Construction and Building Materials, vol. 23, no. 2, pp. 910–917, 2009.

[7] L. Bal and F. Buyle-Bodin, "Artificial neural network for predicting drying shrinkage of concrete," Construction and Building Materials, vol. 38, pp. 248–254, 2013.

[8] T. Ji, T. Lin, and X. Lin, "A concrete mix proportion design algorithm based on artificial neural networks," Cement and Concrete Research, vol. 36, no. 7, pp. 1399–1408, 2006.

[9] F. Demir, "Prediction of elastic modulus of normal and high strength concrete by artificial neural networks," Construction and Building Materials, vol. 22, no. 7, pp. 1428–1435, 2008.

[10] W. P. S. Dias and S. P. Pooliyadda, "Neural networks for predicting properties of concretes with admixtures," Construction and Building Materials, vol. 15, no. 7, pp. 371–379, 2001.

[11] I.-C. Yeh, "Exploring concrete slump model using artificial neural networks," Journal of Computing in Civil Engineering, vol. 20, no. 3, pp. 217–221, 2006.

[12] A. Oztas, M. Pala, E. Ozbay, E. Kanca, N. Caglar, and M. A. Bhatti, "Predicting the compressive strength and slump of high strength concrete using neural network," Construction and Building Materials, vol. 20, no. 9, pp. 769–775, 2006.

[13] I.-C. Yeh, "Modeling slump flow of concrete using second-order regressions and artificial neural networks," Cement and Concrete Composites, vol. 29, no. 6, pp. 474–480, 2007.

[14] A. Jain, S. Kumar Jha, and S. Misra, "Modeling and analysis of concrete slump using artificial neural networks," Journal of Materials in Civil Engineering, vol. 20, no. 9, pp. 628–633, 2008.

[15] R. B. Boozarjomehry and W. Y. Svrcek, "Automatic design of neural network structures," Computers & Chemical Engineering, vol. 25, no. 7-8, pp. 1075–1088, 2001.

[16] J. S. Son, D. M. Lee, I. S. Kim, and S. K. Choi, "A study on genetic algorithm to select architecture of a optimal neural network in the hot rolling process," Journal of Materials Processing Technology, vol. 153-154, no. 1–3, pp. 643–648, 2004.

[17] M. Saemi, M. Ahmadi, and A. Y. Varjani, "Design of neural networks using genetic algorithm for the permeability estimation of the reservoir," Journal of Petroleum Science and Engineering, vol. 59, no. 1-2, pp. 97–105, 2007.

[18] P. G. Benardos and G.-C. Vosniakos, "Optimizing feedforward artificial neural network architecture," Engineering Applications of Artificial Intelligence, vol. 20, no. 3, pp. 365–382, 2007.

[19] S. Wang, X. Dong, and R. Sun, "Predicting saturates of sour vacuum gas oil using artificial neural networks and genetic algorithms," Expert Systems with Applications, vol. 37, no. 7, pp. 4768–4771, 2010.

[20] K. L. Priddy and P. E. Keller, Artificial Neural Networks: An Introduction, SPIE—The International Society for Optical Engineering, Bellingham, Wash, USA, 2005.

[21] M. M. Alshihri, A. M. Azmy, and M. S. El-Bisy, "Neural networks for predicting compressive strength of structural light weight concrete," Construction and Building Materials, vol. 23, no. 6, pp. 2214–2219, 2009.

[22] D. Sovil, V. Kvanicka, and J. Pospichal, "Introduction to multilayer feed forward neural networks," Chemometrics and Intelligent Laboratory Systems, vol. 39, no. 1, pp. 43–62, 1997.

[23] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.

[24] A. Blum, Neural Networks in C++: An Object-Oriented Framework for Building Connectionist Systems, John Wiley & Sons, New York, NY, USA, 1992.

[25] M. J. A. Berry and G. Linoff, Data Mining Techniques, John Wiley & Sons, New York, NY, USA, 1997.

[26] Z. Boger and H. Guterman, "Knowledge extraction from artificial neural network models," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 4, pp. 3030–3035, Orlando, Fla, USA, October 1997.

[27] K. Swingler, Applying Neural Networks: A Practical Guide, Morgan Kaufmann, San Francisco, Calif, USA, 2001.

[28] M. H. Sazli, "A brief review of feed-forward neural networks," Communications Faculty of Science, University of Ankara, vol. 50, no. 1, pp. 11–17, 2006.

[29] S. Rajasekaran and G. A. V. Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms: Synthesis & Applications, Prentice-Hall of India Private Limited, New Delhi, India, 2003.

[30] A. Y. Shamseldin, A. E. Nasr, and K. M. O'Connor, "Comparison of different forms of the multi-layer feed-forward neural network method used for river flow forecasting," Hydrology and Earth System Sciences, vol. 6, no. 4, pp. 671–684, 2002.

[31] A. Mellit, S. A. Kalogirou, and M. Drif, "Application of neural networks and genetic algorithms for sizing of photovoltaic systems," Renewable Energy, vol. 35, no. 12, pp. 2881–2893, 2010.

[32] N. M. Razali and J. Geraghty, "Genetic algorithm performance with different selection strategies in solving TSP," in Proceedings of the World Congress on Engineering, vol. 2, pp. 1134–1139, 2011.


Page 3: Research Article Modeling Slump of Ready Mix Concrete Using … · 2019. 7. 31. · Research Article Modeling Slump of Ready Mix Concrete Using Genetically Evolved Artificial Neural

Advances in Artificial Neural Systems 3

Processing done by a neuron

Transfer function

w1

w2

w3

x1

x2

x3

Inpu

ts

Out

put

sum int

sum (wixi) + b

xn

wn

f[sum (wixi) + b]

b

Figure 1 Mathematical model of an artificial neuron

33 Neural Network Architecture and Training ParametersAn artificial neural network is an information processingparadigm which presents a computational analogy inspiredby human brain ANN consists of processing elements calledthe artificial neurons which are arranged in layers Thecomputational structure of an artificial neuron comprisesseveral inputs 119909

1 119909

2 119909

119899and an output 119910 Every input

is assigned a weight factor (1199081 119908

2 119908

119899) signifying the

importance of the input The synaptic weights carry bothpositive and negative values A positive weight value leads tothe forward propagation of information whereas a negativevalue inhibits the signal The neuron has an additional inputcalled bias 119887 and is interpreted as an additional weight Biashas a constant value and its incorporation within the neuronhelps in learning of the neural networkThe inputsmultipliedby corresponding weights are summed up and form the argu-ment of the neuronrsquos activation function Figure 1 representsthe mathematical model of an artificial neuron

The architecture of a neural network consists of threebasic layers denoted by ldquoinput layerrdquo ldquooutput layerrdquo and anumber of intermediate ldquohidden layersrdquo In case ofmultilayerfeedforward neural networks (MFNN) the neurons in eachlayer are connected in the forward direction only and nointralayer connections between the neurons are permittedThe input features form the neurons of the input layer andoutput features are represented by the neurons of the outputlayer The number of hidden layers and hidden layer neuronsdepends on the number of training cases the amount ofnoise and the degree of complexity of the function or theclassification desired to be learnt [22] Hornik et al [23]concluded that a three-layer feedforward neural networkwithbackpropagation algorithm can map any nonlinear relation-ship with desired degree of accuracy Some ldquorules of thumbrdquoacting as initial guidelines for choosing neural networkarchitecture have been suggested by [24ndash27] Neverthelessthe selection of hidden layers and hidden layer neurons is

a trial and error process and generally started by choosinga network with a minimum number of hidden layers andhidden neurons

MFNNs trained using backpropagation (BP) algorithmsare commonly used for tasks associated with functionapproximation and pattern recognition Backpropagationalgorithm in essence is a means of updating neural networksynaptic weights by backpropagating a gradient vector inwhich each element is defined as the derivative of an errormeasure with respect to a parameter [28] BP algorithm is asupervised learning algorithm wherein exemplar patterns ofassociated input and output data values are presented to theneural network during training The information from inputlayer to output layer proceeds in forward direction only andwith the help of BP algorithm the error computed is passedbackwards and the weights and biases are iteratively adjustedto enable learning of neural network by reducing networkerror to a threshold value

A suitable learning rate and momentum coefficient areemployed for efficient learning of the network A higherlearning rate leads to faster training but by doing so itproduces large oscillations in the weight change which mayforce the ANNmodel to overshoot the optimal weight valuesOn the other hand a lower learning rate makes convergenceslower and increases the probability of ANN model to gettrapped in local minima The momentum term effectivelyfilters out the high frequency variations of the error surfacein the weight space since it adds the effect of the pastweight changes on the current direction of movement in theweight space [29] A combined use of these parameters helpsthe BP algorithm to overcome the effect of local minimaThe utility of transfer functions in neural networks is tointroduce nonlinearity into the network A consequence ofthe nonlinearity of this transfer function in the operation

4 Advances in Artificial Neural Systems

Slump (mm)

Cement (kgm3)

Water-binder ratio

Input layer Hidden layer Output layer

PFA (kgm3)

Sand (kgm3)

Admixture (kgm3)

CA 20mm (kgm3)

CA 10mm (kgm3)

Figure 2 Single hidden layer neural network with five hidden layer neurons

of the network when so introduced is that the network isthereby enabled to deal robustly with complex undefinedrelations between the inputs and the output [30]

34 Evolving Neural Network Architecture and TrainingParameters Using Trial and Error In the present study theinput layer consists of seven neurons namely cement fly ashsand coarse aggregate (20mm) coarse aggregate (10mm)admixture content and water-binder ratio The output layercomprises a single neuron representing the slump value cor-responding to the seven input neurons defined above In thisstudy eleven single hidden layer neural network architecturesof different complexities with hidden layer neurons varyingin the range 5 to 20 have been used for evolving the optimalneural network architectureThe neural network architecturewith five hidden layer neurons for the present study is shownin Figure 2 The learning rate and momentum coefficienthave been varied in the range 0 to +1 Hyperbolic tangentand log-sigmoid transfer functions have been used in thehidden layers alongwith linear transfer function in the outputlayer Different combinations of learning rate andmomentumcoefficientwith hidden layer transfer function have been triedfor effective training of neural networks

The neural networks were trained using the trainingdata set The information to the neural network was pre-sented through input layer neurons The information prop-agated in the forward direction through hidden layers andwas processed by the hidden layer neurons The networkrsquosresponse at the output layer was evaluated and comparedwith actual output The error between the actual and thepredicted response of the neural network was computed andpropagated in the backward direction to adjust the weights

and biases of the neural network Using the BP algorithmthe weights and biases were adjusted in a manner to rendererror to a minimum value In the present study Levenberg-Marquardt backpropagation algorithm which is the fastestconverging algorithm preferred for supervised learning hasbeen used as training algorithm During training process theneural network has a tendency to overfit the training dataThis leads to poor generalization when the trained neuralnetwork is presented with unseen data The validation dataset is used to test the generalization ability of trained neural ateach iteration cycle Early stopping of neural network trainingis generally undertaken to avoid overfitting or overtrainingof the neural network In this technique the validationerror is also monitored at each iteration cycle along withtraining error and the training is stopped once the validationerror begins to increase The neural network architecturehaving the least validation error is selected as the optimalone

35 Evolving Neural Network Architecture and TrainingParameters Using Genetic Algorithms Genetic algorithm(GA) inspired by Darwinrsquos theory ldquosurvival of the fittestrdquo isa global search and optimization algorithm which involvesthe use of genetic and evolution operators GA is a pop-ulation based search technique that simultaneously workson a number of probable solutions to a problem at atime and uses probabilistic operators to narrow down thesearch to the region where there is maximum possibility offinding an optimal solution GA presents a perfect blend ofexploration and exploitation of the solution search space byharnessing computational models of evolutionary processeslike selection crossover and mutation GAs outperform

Advances in Artificial Neural Systems 5

Yes

Initialize population of chromosomes representing hidden layer neurons (N)

transfer function learning rate and momentum coefficient

Create neural network

Train the neural network using BP

algorithm and training parametersOutputs

Compute fitness of chromosomes as

training RMSE by comparing actual and

predicted outputs

Compute validation error (RMSE) by

comparing actual and predicted outputs

Inputs

Outputs

Increase invalidation

error

Update weights and biases

Threshold training error

reached

Selection

Saturation of fitness function or

maximum generations

reached

Crossover

Mutation

Stop training

Stop training

Next generation of population

Save neural network architecture and

training parameters

Yes

No

NoYes

No

Start

Inputs

Training data set

Validation data set

Gen

etic

algo

rithm

s ope

rato

rs

Back

prop

agat

ion

of er

rors

Stop

7-N-1

Figure 3 Evolving neural network architecture and training parameters using GA

the efficiency of conventional optimization techniques insearching nonlinear and noncontinuous spaces which arecharacterized by abstract or poorly understood expert knowl-edge [31]

Automatic design of neural network architecture and itstraining parameters is accomplished by amalgamating GAwith ANN during its training processThemethodology usesGA to evolve neural networkrsquos hidden layer neurons andtransfer function along with its learning rate andmomentum

coefficient The BP algorithm then uses these ANN designvariables to compute the training error The training processis monitored at each iteration by computing the valida-tion error The training of neural network is stopped oncevalidation error starts to increase This process is repeatednumber of times till optimum neural network architectureand its training parameters are evolved The steps of thismethodology are presented as flow chart in Figure 3 and aresummarized as below

6 Advances in Artificial Neural Systems

(1) Initialization of Genetic Algorithm with Population ofChromosomes In GA the chromosomes contain the informa-tion regarding the solution to a problem The chromosomescontain number of genes which represent potential solutionsto a problem Being a population based heuristic the GAstarts with a number of chromosomes representing initialguesses to the possible solutions to a problem In the presentstudy the chromosomes represent the information regardingnumber of hidden layer neurons transfer function learningrate andmomentumcoefficientThenumbers of hidden layerneurons are initialized in the range 5 to 20 neurons and arerepresented as discrete integer number The GA is allowed toselect the transfer function between tangent hyperbolic andlog-sigmoid transfer functionsThe range of learning rate andmomentum coefficient is bounded in the range 0 to +1 and isrepresented by real numbers

The size of the population is chosen in such a way topromote evolving of optimal set of solutions to a particularproblem A large initial population of chromosomes tends toincrease the computational time whereas a small populationsize leads to poor quality solution Therefore population sizemust be chosen to derive a balance between the computa-tional effort and the quality of solution In the present studyan initial population size of 50 chromosomes is used

(2) Creating the Neural Network The neural network iscreated using randomly generated hidden layer neurons Asdiscussed above the number of input layer neurons is 7and output layer neurons is 1 hence the neural networkarchitecture is initialized as 7-N-1 where N is the numberof hidden layer neurons determined using GA The transferfunction for hidden layers is also initialized using GA

(3) Training of Neural Network and Evaluating Fitness of Chro-mosomes The neural network is trained using Levenberg-Marquardt backpropagation training algorithmThe learningparameters namely learning rate andmomentumcoefficientinitialized using GA are used during the training process forsystematic updating of neural network weights and biasesBP algorithm updates the weight and biases through back-propagation of errors Early stopping technique is appliedto improve the generalization of the neural network Thefitness function acts as measure of distinguishing optimalsolution from numerous suboptimal solutions by evaluatingthe ability of the possible solutions to survive or biologicallyspeaking it tests the reproductive efficiency of chromosomesThe fitness of each chromosome is computed by evaluatingthe root mean square error (RMSE) using

RMSE = radic 1119873

119873

sum

119894=1

(119879

119894minus 119875

119894)

2

(2)

where119879119894and119875119894denote the target or observed values andANN

predicted concrete slump values respectively

(4) Selecting the Fitter Chromosomes The evolution operatorof GA called as selection operator helps in selecting the fitterchromosomes based on the value of the fitness function

A selection operator performs the function synonymous toa filtering membrane allowing the fitter chromosomes tosurvive to create new offspring As one moves from onegeneration of population to the next the selection operatorgradually increases the proportion of the fitter chromo-somes in the population The present study uses roulettewheel selection strategy which allows probability of selectionproportional to the fitness of the chromosome The basicadvantage of roulette wheel selection is that it discards noneof the individuals in the population and gives a chance to allof them to be selected [32]

(5) Creating Next Generation of Population The new gener-ation of population is created through two genetic operatorsof GA called crossover and mutationThe crossover operatoris a recombination operator and is applied to a pair of parentchromosomes with the hope to create a better offspring Thisis done by randomly choosing a crossover point and copyingthe information before this point from the first parent andthen copying the information from the second parent beyondthe crossover point The present study utilized the scatteredcrossover with probability 09 for recombining the two parentchromosomes for producing a fitter child

The mutation operator modifies the existing buildingblocks of the chromosomes maintaining genetic diversity inthe population It therefore prevents GA from getting trappedat a local minimum In contrast to crossover which exploitsthe current solution the mutation aids the exploration of thesearch space Too high mutation rate increases the searchspace to a level that convergence or finding global optimabecomes a difficult issue whereas a lower mutation ratedrastically reduces the search space and eventually leadsgenetic algorithm to get stuck in a local optima The presentstudy uses uniform mutation with mutation rate 002 Theprocedure for creating new population of chromosomes iscontinued till maximum generation limit is achieved or thefitness function reaches a saturation level Maximumnumberof generations used for present study is 150

36 Evaluating Performance of the TrainedModels The studyuses six different statistical performance metrics for evalu-ating the performance of the trained models The statisticalparameters aremean absolute error (MAE) rootmean squareerror (RMSE) mean absolute percentage error (MAPE)coefficient of correlation (119877) Nash-Sutcliffe efficiency (119864)and root mean square to standard deviation ratio (RSR) Theabove performance statistics were evaluated using

MAE = 1119873

119873

sum

119894=1

1003816

1003816

1003816

1003816

119879

119894minus 119875

119894

1003816

1003816

1003816

1003816

RMSE = radic 1119873

119873

sum

119894=1

(119879

119894minus 119875

119894)

2

MAPE () = 1119873

119873

sum

119894=1

1003816

1003816

1003816

1003816

119879

119894minus 119875

119894

1003816

1003816

1003816

1003816

119879

119894

times 100

Advances in Artificial Neural Systems 7

119877 = (

sum

119873

119894=1((119879

119894minus 119879) (119875

119894minus 119875))

radic

sum

119873

119894=1(119879

119894minus 119879)

2

sum

119873

119894=1(119875

119894minus 119875)

2

)

119864 = 1 minus

sum

119873

119894=1(119879

119894minus 119875

119894)

2

sum

119873

119894=1(119879

119894minus 119879)

2

RSR = RMSE

radic

(1119873)sum

119873

119894=1(119879

119894minus 119879)

2

(3)

where119879119894and119875119894denote the target or observed values andANN

predicted values and 119879 and 119875 represent the mean observedand mean ANN predicted values respectively 119873 representsthe total number of data A lower value of MAE RMSEMAPE and RSR indicates good performance of the modelA higher value of 119864 and 119877 statistics above 090 indicates goodprediction of the model

4 Results

The neural network architecture was evolved through trialand error process by analyzing 30 different combinations ofhidden layer neurons transfer function learning rate andmomentum coefficientThe optimal neural network architec-ture (BPNN) was evolved as 7-11-1 having eleven hidden layerneurons with learning rate 045 momentum coefficient 085and tangent hyperbolic hidden layer transfer function Thesame operation was performed by incorporating GA duringthe training of ANN The GA was able to evolve the optimalneural network architecture and training parameters in 92generations (Figure 4) The time taken by GA to reach thesaturation level of fitness function 22357mm was evaluatedas 3053412 seconds The GA evolved neural network archi-tecture (ANN-GA) comprised 9 hidden layer neurons andtangent hyperbolic transfer function The optimal learningrate and momentum coefficient for backpropagation neuralnetwork were computed as 03975 and 09385 respectivelyBoth ANN-GA and BPNN models subsequent to trainingwere validated and tested The results in terms of the perfor-mance statistics are presented in Table 2

The entire RMC data was also used for evaluating theprediction ability of the trained models namely BPNN andANN-GA The regression plots showing the prediction oftrained BPNN and ANN-GAmodels are exhibited in Figures5(a) and 5(b) respectivelyThe statistical performance for theentire data set is tabulated at Table 3

5 Discussions

The results of the study show that amalgamation of GAwith ANN during its training phase leads to evolving ofoptimal neural network architecture and training parametersIn comparison to trial and error BPNN neural networkhaving architecture 7-11-1 the hybrid ANN-GA automaticallyevolved a less complex architecture 7-9-1 Moreover the

0

5

10

15

20

25

30

0 20 40 60 80 100

Generations

Fitn

ess f

unct

ion

(RM

SE) (

mm

)

Figure 4 Fitness function (RMSE) versus generation

Table 2 Statistical performance of ANN models for trainingvalidation and testing data sets

Model MAE(mm)

RMSE(mm)

MAPE() 119877 119864 RSR

TrainingANN 17378 24027 11862 09804 09610 01974ANN-GA 1506 22357 10479 09830 09663 01837

ValidationANN 19829 27489 13474 09746 09482 02276ANN-GA 16299 24687 10991 09794 09582 02044

TestingANN 20651 29582 13916 09735 09474 02294ANN-GA 17769 26295 12382 09803 09584 02039

Table 3 Statistical performance of the trained ANNmodels for theentire data set

Model MAE(mm)

RMSE(mm)

MAPE() 119877 119864 RSR

OverallANN 18236 25470 12412 09783 09569 02075ANN-GA 15754 23345 10841 09819 09638 01902

optimal training parameters evolved using GA were ableto enhance the learning and generalization of the neuralnetwork In comparison to BPNN model the ANN-GAmodel provided a lower error statistics MAE RMSE MAPEand RSR value of 1506mm 22357mm 10479 and 01837during training 16299mm 24687mm 10991 and 02044during validation and 17769mm 26295mm 12382 and02039 during testing respectively The trained ANN-GAmodel gave higher prediction accuracy with higher valuesof statistics 119877 and 119864 during training validation and testingof trained models The performance statistics computed forthe entire data set using the trained ANN-GA model showsa lower MAE RMSE MAPE and RSR value of 15754mm23345mm 10841 and 01902 respectively and higher 119864and119877 values of 09819 and 09638 respectively in comparison

8 Advances in Artificial Neural Systems

100

110

120

130

140

150

160

170

180

100 120 140 160 180

Pred

icte

d slu

mp

(mm

)

Observed slump (mm)

R2= 09571

(a)

Pred

icte

d slu

mp

(mm

)

Observed slump (mm)

100

110

120

130

140

150

160

170

180

100 120 140 160 180

R2= 09640

(b)

Figure 5 Regression plot of BPNN and ANN-GA predicted slump versus observed slump

to trained BPNN model Overall the performance metricsshow that theANN-GAmodel has consistently outperformedthe BPNN model

6 Conclusions

The study presented a methodology of designing the neu-ral networks using genetic algorithms Genetic algorithmspopulation based stochastic search was harnessed during thetraining phase of the neural networks to evolve the numberof hidden layer neurons type of transfer function and thevalues of learning parameters namely learning rate andmomentum coefficient for backpropagation based ANN

The performance metrics show that ANN-GA modeloutperformed the prediction accuracy of BPNN modelMoreover the GA was able to automatically determine thenumber of hidden layer neurons which were found to beless than those evolved using trial and error methodologyThe hybrid ANN-GA provided a good alternative overtime consuming conventional trial and error technique forevolving optimal neural network architecture and its trainingparameters

The proposed model based on past experimental datacan be very handy for predicting the complex materialbehavior of concrete in quick time It can be used as adecision support tool aiding the technical staff to easilypredict the slump value for a particular concrete design mixThis technique will considerably reduce the effort and timeto design a concrete mix for a customized slump withoutundertaking multiple trials Despite the effectiveness andadvantages of this methodology it is also subjected to somelimitations Since the mathematical modeling of concreteslump is dependent on the physical and chemical propertiesof the designmix constituents hence the same trainedmodelmay or may not be applicable for accurate modeling of slumpon the basis of design mix data obtained from other RMCplants deriving its raw material from a different source

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] I-C Yeh ldquoModeling of strength of high-performance concreteusing artificial neural networksrdquoCement andConcrete Researchvol 28 no 12 pp 1797ndash1808 1998

[2] MUysal andHTanyildizi ldquoEstimation of compressive strengthof self compacting concrete containing polypropylene fiber andmineral additives exposed to high temperature using artificialneural networkrdquo Construction and Building Materials vol 27no 1 pp 404ndash414 2012

[3] Z H Duan S C Kou and C S Poon ldquoPrediction of com-pressive strength of recycled aggregate concrete using artificialneural networksrdquo Construction and Building Materials vol 40pp 1200ndash1206 2012

[4] A Abdollahzadeh R Masoudnia and S Aghababaei ldquoPredictstrength of rubberized concrete using atrificial neural networkrdquoWSEAS Transactions on Computers vol 10 no 2 pp 31ndash402011

[5] H Naderpour A Kheyroddin and G G Amiri ldquoPrediction ofFRP-confined compressive strength of concrete using artificialneural networksrdquoComposite Structures vol 92 no 12 pp 2817ndash2829 2010

[6] R Parichatprecha and P Nimityongskul ldquoAnalysis of durabilityof high performance concrete using artificial neural networksrdquoConstruction and Building Materials vol 23 no 2 pp 910ndash9172009

[7] L Bal and F Buyle-Bodin ldquoArtificial neural network for predict-ing drying shrinkage of concreterdquo Construction and BuildingMaterials vol 38 pp 248ndash254 2013

[8] T Ji T Lin and X Lin ldquoA concrete mix proportion designalgorithm based on artificial neural networksrdquo Cement andConcrete Research vol 36 no 7 pp 1399ndash1408 2006

[9] F Demir ldquoPrediction of elastic modulus of normal and highstrength concrete by artificial neural networksrdquo Constructionand Building Materials vol 22 no 7 pp 1428ndash1435 2008

Advances in Artificial Neural Systems 9

[10] W P S Dias and S P Pooliyadda ldquoNeural networks for predict-ing properties of concretes with admixturesrdquo Construction andBuilding Materials vol 15 no 7 pp 371ndash379 2001

[11] I-C Yeh ldquoExploring concrete slump model using artificialneural networksrdquo Journal of Computing in Civil Engineering vol20 no 3 pp 217ndash221 2006

[12] A Oztas M Pala E Ozbay E Kanca N Caglar and MA Bhatti ldquoPredicting the compressive strength and slump ofhigh strength concrete using neural networkrdquoConstruction andBuilding Materials vol 20 no 9 pp 769ndash775 2006

[13] I-C Yeh ldquoModeling slump flow of concrete using second-order regressions and artificial neural networksrdquo Cement andConcrete Composites vol 29 no 6 pp 474ndash480 2007

[14] A Jain S Kumar Jha and S Misra ldquoModeling and analysisof concrete slump using artificial neural networksrdquo Journal ofMaterials in Civil Engineering vol 20 no 9 pp 628ndash633 2008

[15] R B Boozarjomehry and W Y Svrcek ldquoAutomatic design ofneural network structuresrdquoComputers ampChemical Engineeringvol 25 no 7-8 pp 1075ndash1088 2001

[16] J S Son DM Lee I S Kim and S K Choi ldquoA study on geneticalgorithm to select architecture of a optimal neural networkin the hot rolling processrdquo Journal of Materials ProcessingTechnology vol 153-154 no 1ndash3 pp 643ndash648 2004

[17] M Saemi M Ahmadi and A Y Varjani ldquoDesign of neural net-works using genetic algorithm for the permeability estimationof the reservoirrdquo Journal of Petroleum Science and Engineeringvol 59 no 1-2 pp 97ndash105 2007

[18] P G Benardos and G-C Vosniakos ldquoOptimizing feedforwardartificial neural network architecturerdquo Engineering Applicationsof Artificial Intelligence vol 20 no 3 pp 365ndash382 2007

[19] S Wang X Dong and R Sun ldquoPredicting saturates of sourvacuum gas oil using artificial neural networks and geneticalgorithmsrdquo Expert Systems with Applications vol 37 no 7 pp4768ndash4771 2010

[20] K L Priddy and P E Keller Artificial Neural NetworksAn Introduction SPIEmdashThe International Society for OpticalEngineering Bellingham Wash USA 2005

[21] M M Alshihri A M Azmy and M S El-Bisy ldquoNeuralnetworks for predicting compressive strength of structural lightweight concreterdquo Construction and Building Materials vol 23no 6 pp 2214ndash2219 2009

[22] D Sovil V Kvanicka and J Pospichal ldquoIntroduction tomultilayer feed forward neural networksrdquo Chemometrics andIntelligent Laboratory Systems vol 39 no 1 pp 43ndash62 1997

[23] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[24] A Blum Neural Networks in C++ An Object-Oriented Frame-work for Building Connectionist Systems John Wiley amp SonsNew York NY USA 1992

[25] M J A Berry and G Linoff Data Mining Techniques JohnWiley amp Sons New York NY USA 1997

[26] Z Boger and H Guterman ldquoKnowledge extraction fromartificial neural network modelsrdquo in Proceedings of the IEEEInternational Conference on Systems Man and Cybernetics vol4 pp 3030ndash3035 Orlando Fla USA October 1997

[27] K Swingler Applying Neural Networks A Practical GuideMorgan Kaufman San Francisco Calif USA 2001

[28] M H Sazli ldquoA brief review of feed-forward neural networksrdquoCommunications Faculty of Science University of Ankara vol50 no 1 pp 11ndash17 2006

[29] S Rajasekaran and G A V Pai Neural Networks Fuzzy Logicand Genetic Algorithms Synthesis amp Applications Prentice-Hallof India Private Limited New Delhi India 2003

[30] A Y Shamseldin A E Nasr and K M OrsquoConnor ldquoCompar-sion of different forms of the multi-layer feed-forward neuralnetworkmethod used for river flow forecastingrdquoHydrology andEarth System Sciences vol 6 no 4 pp 671ndash684 2002

[31] A Mellit S A Kalogirou and M Drif ldquoApplication of neuralnetworks and genetic algorithms for sizing of photovoltaicsystemsrdquo Renewable Energy vol 35 no 12 pp 2881ndash2893 2010

[32] N M Razali and J Geraghty ldquoGenetic algorithm performancewith different selection strategies in solving TSPrdquo in Proceedingsof the World Congress on Engineering vol 2 pp 1134ndash1139 2011

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 4: Research Article Modeling Slump of Ready Mix Concrete Using … · 2019. 7. 31. · Research Article Modeling Slump of Ready Mix Concrete Using Genetically Evolved Artificial Neural

4 Advances in Artificial Neural Systems

Slump (mm)

Cement (kgm3)

Water-binder ratio

Input layer Hidden layer Output layer

PFA (kgm3)

Sand (kgm3)

Admixture (kgm3)

CA 20mm (kgm3)

CA 10mm (kgm3)

Figure 2 Single hidden layer neural network with five hidden layer neurons

of the network when so introduced is that the network isthereby enabled to deal robustly with complex undefinedrelations between the inputs and the output [30]

34 Evolving Neural Network Architecture and TrainingParameters Using Trial and Error In the present study theinput layer consists of seven neurons namely cement fly ashsand coarse aggregate (20mm) coarse aggregate (10mm)admixture content and water-binder ratio The output layercomprises a single neuron representing the slump value cor-responding to the seven input neurons defined above In thisstudy eleven single hidden layer neural network architecturesof different complexities with hidden layer neurons varyingin the range 5 to 20 have been used for evolving the optimalneural network architectureThe neural network architecturewith five hidden layer neurons for the present study is shownin Figure 2 The learning rate and momentum coefficienthave been varied in the range 0 to +1 Hyperbolic tangentand log-sigmoid transfer functions have been used in thehidden layers alongwith linear transfer function in the outputlayer Different combinations of learning rate andmomentumcoefficientwith hidden layer transfer function have been triedfor effective training of neural networks

The neural networks were trained using the training data set. The information was presented to the neural network through the input layer neurons, propagated in the forward direction, and was processed by the hidden layer neurons. The network's response at the output layer was evaluated and compared with the actual output. The error between the actual and the predicted response was computed and propagated in the backward direction to adjust the weights and biases of the neural network.

Using the BP algorithm, the weights and biases were adjusted so as to render the error a minimum. In the present study the Levenberg-Marquardt backpropagation algorithm, one of the fastest converging algorithms for supervised learning, has been used as the training algorithm. During the training process the neural network has a tendency to overfit the training data, which leads to poor generalization when the trained network is presented with unseen data. The validation data set is used to test the generalization ability of the trained neural network at each iteration cycle. Early stopping of training is generally undertaken to avoid overfitting or overtraining: the validation error is monitored at each iteration cycle along with the training error, and training is stopped once the validation error begins to increase. The neural network having the least validation error is selected as the optimal one; the rule is sketched below.
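A minimal sketch of the early stopping rule, under an assumed network interface (train_epoch, rmse, get_weights, and set_weights are hypothetical methods, not from the paper); the default patience of 1 matches stopping at the first rise in validation error, and raising it is a common softening of that rule.

```python
def train_with_early_stopping(net, train_data, val_data,
                              max_epochs=1000, patience=1):
    """Train while monitoring validation RMSE; keep the weights of the
    epoch with the least validation error (assumed `net` interface)."""
    best_val, best_weights, worse = float("inf"), None, 0
    for _ in range(max_epochs):
        net.train_epoch(train_data)      # one BP weight-update pass
        val_err = net.rmse(val_data)     # validation error this cycle
        if val_err < best_val:
            best_val, best_weights, worse = val_err, net.get_weights(), 0
        else:
            worse += 1                   # validation error has risen
            if worse >= patience:
                break                    # stop to avoid overtraining
    net.set_weights(best_weights)        # restore the best network
    return best_val
```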

3.5. Evolving Neural Network Architecture and Training Parameters Using Genetic Algorithms

The genetic algorithm (GA), inspired by Darwin's theory of "survival of the fittest," is a global search and optimization algorithm that employs genetic and evolution operators. GA is a population based search technique that works simultaneously on a number of probable solutions to a problem and uses probabilistic operators to narrow the search down to the region with the maximum possibility of finding an optimal solution. GA blends exploration and exploitation of the solution search space by harnessing computational models of evolutionary processes such as selection, crossover, and mutation.

Figure 3: Evolving neural network architecture and training parameters using GA. The flow chart wraps the GA operators (selection, crossover, mutation) around backpropagation training of a 7-N-1 network: chromosomes encoding the hidden layer neurons (N), transfer function, learning rate, and momentum coefficient are initialized; each network is trained on the training data set, its weights and biases being updated until the threshold training error is reached or the validation error (RMSE) increases; the training RMSE serves as the chromosome fitness; new generations are produced until the fitness function saturates or the maximum number of generations is reached, after which the neural network architecture and training parameters are saved.

GAs outperform conventional optimization techniques in searching nonlinear and noncontinuous spaces, which are characterized by abstract or poorly understood expert knowledge [31].

Automatic design of the neural network architecture and its training parameters is accomplished by amalgamating GA with the ANN during its training process. The methodology uses GA to evolve the neural network's hidden layer neurons and transfer function, along with its learning rate and momentum coefficient. The BP algorithm then uses these ANN design variables to compute the training error. The training process is monitored at each iteration by computing the validation error, and training is stopped once the validation error starts to increase. This process is repeated a number of times till the optimum neural network architecture and its training parameters are evolved. The steps of the methodology are presented as a flow chart in Figure 3 and are summarized below.


(1) Initialization of Genetic Algorithm with Population of Chromosomes. In GA the chromosomes carry the information regarding the solution to a problem; each chromosome contains a number of genes which together represent a potential solution. Being a population based heuristic, the GA starts with a number of chromosomes representing initial guesses at possible solutions. In the present study the chromosomes encode the number of hidden layer neurons, the transfer function, the learning rate, and the momentum coefficient. The number of hidden layer neurons is initialized in the range 5 to 20 and is represented as a discrete integer. The GA is allowed to select the transfer function between the tangent hyperbolic and log-sigmoid transfer functions. The learning rate and momentum coefficient are bounded in the range 0 to +1 and are represented by real numbers.

The size of the population is chosen so as to promote the evolution of an optimal set of solutions to the problem at hand. A large initial population of chromosomes tends to increase the computational time, whereas a small population size leads to poor quality solutions; the population size must therefore strike a balance between computational effort and solution quality. In the present study an initial population size of 50 chromosomes is used; a sketch of such an initialization follows.
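A minimal sketch of this initialization, assuming a simple dictionary encoding of the four design variables (the field names are illustrative, not the paper's data structure):

```python
import random

def random_chromosome():
    """One candidate solution: the four genes described above."""
    return {
        "n_hidden": random.randint(5, 20),          # discrete integer gene
        "transfer": random.choice(["tansig", "logsig"]),
        "learning_rate": random.uniform(0.0, 1.0),  # real-valued gene
        "momentum": random.uniform(0.0, 1.0),       # real-valued gene
    }

population = [random_chromosome() for _ in range(50)]  # population size 50
```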

(2) Creating the Neural Network. The neural network is created using the randomly generated number of hidden layer neurons. As discussed above, the number of input layer neurons is 7 and the number of output layer neurons is 1; hence the neural network architecture is initialized as 7-N-1, where N is the number of hidden layer neurons determined using GA. The transfer function for the hidden layer is likewise initialized using GA.

(3) Training of Neural Network and Evaluating Fitness of Chromosomes. The neural network is trained using the Levenberg-Marquardt backpropagation training algorithm. The learning parameters, namely, the learning rate and momentum coefficient initialized using GA, are used during the training process for systematic updating of the neural network weights and biases; the BP algorithm updates the weights and biases through backpropagation of errors. The early stopping technique is applied to improve the generalization of the neural network. The fitness function distinguishes the optimal solution from numerous suboptimal solutions by evaluating the ability of the possible solutions to survive or, biologically speaking, it tests the reproductive efficiency of the chromosomes. The fitness of each chromosome is computed by evaluating the root mean square error (RMSE) using

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(T_{i}-P_{i}\right)^{2}}, \qquad (2)$$

where $T_i$ and $P_i$ denote the target or observed values and the ANN predicted concrete slump values, respectively. A direct implementation of this fitness measure is sketched below.
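A plain-Python sketch of (2), usable as the fitness of a chromosome once its network has been trained; lower RMSE means a fitter chromosome.

```python
import math

def fitness_rmse(targets, predictions):
    """RMSE between observed (T) and ANN-predicted (P) slump values."""
    n = len(targets)
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(targets, predictions)) / n)
```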

(4) Selecting the Fitter Chromosomes. The evolution operator of GA, called the selection operator, selects the fitter chromosomes based on the value of the fitness function. The selection operator acts like a filtering membrane, allowing the fitter chromosomes to survive and create new offspring; as one moves from one generation of the population to the next, it gradually increases the proportion of fitter chromosomes in the population. The present study uses the roulette wheel selection strategy, in which the probability of selection is proportional to the fitness of the chromosome. The basic advantage of roulette wheel selection is that it discards none of the individuals in the population and gives all of them a chance to be selected [32]; a sketch follows.
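A minimal roulette wheel spin. Since the raw fitness here is an error (lower is better), the sketch assumes the RMSE has first been mapped to a larger-is-better score, for example f = 1/(1 + RMSE); that transformation is an assumption, not stated in the paper.

```python
import random

def roulette_wheel_select(population, scores):
    """Select one chromosome with probability proportional to its score.
    `scores` must be positive and larger-is-better, e.g. 1/(1 + RMSE)."""
    pick = random.uniform(0.0, sum(scores))  # spin the wheel
    running = 0.0
    for chrom, score in zip(population, scores):
        running += score
        if running >= pick:
            return chrom
    return population[-1]                    # guard against float rounding
```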

(5) Creating the Next Generation of Population. The new generation of the population is created through two genetic operators of GA, called crossover and mutation. The crossover operator is a recombination operator applied to a pair of parent chromosomes with the hope of creating better offspring, classically by randomly choosing a crossover point, copying the information before this point from the first parent, and copying the information beyond it from the second parent. The present study utilized scattered crossover with probability 0.9 for recombining the two parent chromosomes to produce a fitter child.

The mutation operator modifies the existing building blocks of the chromosomes, maintaining genetic diversity in the population, and therefore prevents the GA from getting trapped at a local minimum. In contrast to crossover, which exploits the current solution, mutation aids exploration of the search space. Too high a mutation rate enlarges the search to the point where convergence to the global optimum becomes difficult, whereas too low a mutation rate drastically reduces the search space and eventually leads the genetic algorithm to get stuck in a local optimum. The present study uses uniform mutation with mutation rate 0.02. The procedure for creating a new population of chromosomes is continued till the maximum generation limit is reached or the fitness function reaches a saturation level; the maximum number of generations used in the present study is 150. Both operators are sketched below.
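A sketch of both operators over the dictionary encoding used earlier. Scattered crossover draws a random gene-wise mask so each gene of the child comes from either parent; uniform mutation re-samples a gene from its allowed range. The 0.9 and 0.02 rates follow the paper; the rest is illustrative.

```python
import random

GENES = ["n_hidden", "transfer", "learning_rate", "momentum"]

def scattered_crossover(parent_a, parent_b, rate=0.9):
    """Gene-wise recombination: each gene is copied from a randomly
    chosen parent (applied with crossover probability 0.9)."""
    if random.random() > rate:
        return dict(parent_a)  # no recombination this time
    return {g: (parent_a if random.random() < 0.5 else parent_b)[g]
            for g in GENES}

def uniform_mutation(chrom, rate=0.02):
    """Re-sample each gene from its allowed range with probability 0.02,
    maintaining genetic diversity in the population."""
    out = dict(chrom)
    if random.random() < rate:
        out["n_hidden"] = random.randint(5, 20)
    if random.random() < rate:
        out["transfer"] = random.choice(["tansig", "logsig"])
    if random.random() < rate:
        out["learning_rate"] = random.uniform(0.0, 1.0)
    if random.random() < rate:
        out["momentum"] = random.uniform(0.0, 1.0)
    return out
```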

3.6. Evaluating Performance of the Trained Models

The study uses six statistical performance metrics for evaluating the trained models: mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), coefficient of correlation ($R$), Nash-Sutcliffe efficiency ($E$), and the root mean square error to standard deviation ratio (RSR). These performance statistics were evaluated using

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|T_{i}-P_{i}\right|,$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(T_{i}-P_{i}\right)^{2}},$$

$$\mathrm{MAPE}\,(\%) = \frac{1}{N}\sum_{i=1}^{N}\frac{\left|T_{i}-P_{i}\right|}{T_{i}}\times 100,$$


$$R = \frac{\sum_{i=1}^{N}\left(T_{i}-\bar{T}\right)\left(P_{i}-\bar{P}\right)}{\sqrt{\sum_{i=1}^{N}\left(T_{i}-\bar{T}\right)^{2}\sum_{i=1}^{N}\left(P_{i}-\bar{P}\right)^{2}}},$$

$$E = 1-\frac{\sum_{i=1}^{N}\left(T_{i}-P_{i}\right)^{2}}{\sum_{i=1}^{N}\left(T_{i}-\bar{T}\right)^{2}},$$

$$\mathrm{RSR} = \frac{\mathrm{RMSE}}{\sqrt{\left(1/N\right)\sum_{i=1}^{N}\left(T_{i}-\bar{T}\right)^{2}}}, \qquad (3)$$

where $T_i$ and $P_i$ denote the target or observed values and the ANN predicted values, $\bar{T}$ and $\bar{P}$ represent the mean observed and mean ANN predicted values, respectively, and $N$ is the total number of data points. Lower values of MAE, RMSE, MAPE, and RSR indicate good performance of the model, while values of the $E$ and $R$ statistics above 0.90 indicate good prediction by the model. All six statistics can be computed in one pass, as sketched below.
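A plain-Python illustration of the formulas in (3); T holds the observed slump values and P the ANN-predicted values.

```python
import math

def performance_stats(T, P):
    """Six metrics for observed values T and ANN-predicted values P."""
    n = len(T)
    t_bar, p_bar = sum(T) / n, sum(P) / n
    mae = sum(abs(t - p) for t, p in zip(T, P)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(T, P)) / n)
    mape = 100.0 * sum(abs(t - p) / t for t, p in zip(T, P)) / n
    st = sum((t - t_bar) ** 2 for t in T)  # spread of the observations
    sp = sum((p - p_bar) ** 2 for p in P)
    r = sum((t - t_bar) * (p - p_bar)
            for t, p in zip(T, P)) / math.sqrt(st * sp)
    e = 1.0 - sum((t - p) ** 2 for t, p in zip(T, P)) / st  # Nash-Sutcliffe
    rsr = rmse / math.sqrt(st / n)         # RMSE / standard deviation
    return {"MAE": mae, "RMSE": rmse, "MAPE": mape,
            "R": r, "E": e, "RSR": rsr}
```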

4. Results

The neural network architecture was evolved through a trial and error process by analyzing 30 different combinations of hidden layer neurons, transfer function, learning rate, and momentum coefficient. The optimal neural network architecture (BPNN) evolved as 7-11-1, with eleven hidden layer neurons, learning rate 0.45, momentum coefficient 0.85, and the tangent hyperbolic hidden layer transfer function. The same operation was then performed by incorporating GA during the training of the ANN. The GA evolved the optimal neural network architecture and training parameters in 92 generations (Figure 4); the time taken by the GA to reach the saturation level of the fitness function, 22.357 mm, was 3053.412 seconds. The GA evolved neural network architecture (ANN-GA) comprised 9 hidden layer neurons and the tangent hyperbolic transfer function, and the optimal learning rate and momentum coefficient for the backpropagation neural network were computed as 0.3975 and 0.9385, respectively. Both the ANN-GA and BPNN models, subsequent to training, were validated and tested. The results in terms of the performance statistics are presented in Table 2.

The entire RMC data set was also used for evaluating the prediction ability of the trained BPNN and ANN-GA models. The regression plots showing the predictions of the trained BPNN and ANN-GA models are exhibited in Figures 5(a) and 5(b), respectively, and the statistical performance for the entire data set is tabulated in Table 3.

5. Discussions

The results of the study show that the amalgamation of GA with ANN during its training phase leads to the evolution of an optimal neural network architecture and training parameters. In comparison to the trial and error BPNN network having architecture 7-11-1, the hybrid ANN-GA automatically evolved a less complex architecture, 7-9-1.

Figure 4: Fitness function (RMSE, mm) versus generations; the RMSE axis spans 0 to 30 mm and the generation axis 0 to 100.

Table 2: Statistical performance of ANN models for training, validation, and testing data sets.

Data set     Model    MAE (mm)   RMSE (mm)   MAPE (%)   R        E        RSR
Training     ANN      17.378     24.027      11.862     0.9804   0.9610   0.1974
             ANN-GA   15.06      22.357      10.479     0.9830   0.9663   0.1837
Validation   ANN      19.829     27.489      13.474     0.9746   0.9482   0.2276
             ANN-GA   16.299     24.687      10.991     0.9794   0.9582   0.2044
Testing      ANN      20.651     29.582      13.916     0.9735   0.9474   0.2294
             ANN-GA   17.769     26.295      12.382     0.9803   0.9584   0.2039

Table 3: Statistical performance of the trained ANN models for the entire data set.

Data set     Model    MAE (mm)   RMSE (mm)   MAPE (%)   R        E        RSR
Overall      ANN      18.236     25.470      12.412     0.9783   0.9569   0.2075
             ANN-GA   15.754     23.345      10.841     0.9819   0.9638   0.1902

Moreover, the optimal training parameters evolved using GA enhanced the learning and generalization of the neural network. In comparison to the BPNN model, the ANN-GA model yielded lower error statistics, with MAE, RMSE, MAPE, and RSR values of 15.06 mm, 22.357 mm, 10.479%, and 0.1837 during training; 16.299 mm, 24.687 mm, 10.991%, and 0.2044 during validation; and 17.769 mm, 26.295 mm, 12.382%, and 0.2039 during testing, respectively. The trained ANN-GA model also gave higher values of the R and E statistics during training, validation, and testing. The performance statistics computed for the entire data set using the trained ANN-GA model likewise show lower MAE, RMSE, MAPE, and RSR values of 15.754 mm, 23.345 mm, 10.841%, and 0.1902, respectively, and higher R and E values of 0.9819 and 0.9638, respectively, in comparison to the trained BPNN model.


Figure 5: Regression plots of (a) BPNN and (b) ANN-GA predicted slump versus observed slump, both over 100 to 180 mm; R² = 0.9571 for BPNN and R² = 0.9640 for ANN-GA.

Overall, the performance metrics show that the ANN-GA model has consistently outperformed the BPNN model.

6. Conclusions

The study presented a methodology for designing neural networks using genetic algorithms. The genetic algorithm's population based stochastic search was harnessed during the training phase of the neural networks to evolve the number of hidden layer neurons, the type of transfer function, and the values of the learning parameters, namely, the learning rate and momentum coefficient, for the backpropagation based ANN.

The performance metrics show that the ANN-GA model outperformed the BPNN model in prediction accuracy. Moreover, the GA automatically determined the number of hidden layer neurons, which was found to be smaller than that evolved using the trial and error methodology. The hybrid ANN-GA thus provides a good alternative to the time consuming conventional trial and error technique for evolving the optimal neural network architecture and its training parameters.

The proposed model, based on past experimental data, can be very handy for predicting the complex material behavior of concrete in quick time. It can be used as a decision support tool, aiding technical staff in easily predicting the slump value for a particular concrete design mix, and it will considerably reduce the effort and time needed to design a concrete mix for a customized slump without undertaking multiple trials. Despite its effectiveness and advantages, the methodology is subject to some limitations. Since the mathematical modeling of concrete slump depends on the physical and chemical properties of the design mix constituents, the same trained model may or may not be applicable for accurate modeling of slump on the basis of design mix data obtained from other RMC plants deriving their raw material from a different source.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] I.-C. Yeh, "Modeling of strength of high-performance concrete using artificial neural networks," Cement and Concrete Research, vol. 28, no. 12, pp. 1797–1808, 1998.

[2] M. Uysal and H. Tanyildizi, "Estimation of compressive strength of self compacting concrete containing polypropylene fiber and mineral additives exposed to high temperature using artificial neural network," Construction and Building Materials, vol. 27, no. 1, pp. 404–414, 2012.

[3] Z. H. Duan, S. C. Kou, and C. S. Poon, "Prediction of compressive strength of recycled aggregate concrete using artificial neural networks," Construction and Building Materials, vol. 40, pp. 1200–1206, 2012.

[4] A. Abdollahzadeh, R. Masoudnia, and S. Aghababaei, "Predict strength of rubberized concrete using artificial neural network," WSEAS Transactions on Computers, vol. 10, no. 2, pp. 31–40, 2011.

[5] H. Naderpour, A. Kheyroddin, and G. G. Amiri, "Prediction of FRP-confined compressive strength of concrete using artificial neural networks," Composite Structures, vol. 92, no. 12, pp. 2817–2829, 2010.

[6] R. Parichatprecha and P. Nimityongskul, "Analysis of durability of high performance concrete using artificial neural networks," Construction and Building Materials, vol. 23, no. 2, pp. 910–917, 2009.

[7] L. Bal and F. Buyle-Bodin, "Artificial neural network for predicting drying shrinkage of concrete," Construction and Building Materials, vol. 38, pp. 248–254, 2013.

[8] T. Ji, T. Lin, and X. Lin, "A concrete mix proportion design algorithm based on artificial neural networks," Cement and Concrete Research, vol. 36, no. 7, pp. 1399–1408, 2006.

[9] F. Demir, "Prediction of elastic modulus of normal and high strength concrete by artificial neural networks," Construction and Building Materials, vol. 22, no. 7, pp. 1428–1435, 2008.

[10] W. P. S. Dias and S. P. Pooliyadda, "Neural networks for predicting properties of concretes with admixtures," Construction and Building Materials, vol. 15, no. 7, pp. 371–379, 2001.

[11] I.-C. Yeh, "Exploring concrete slump model using artificial neural networks," Journal of Computing in Civil Engineering, vol. 20, no. 3, pp. 217–221, 2006.

[12] A. Oztas, M. Pala, E. Ozbay, E. Kanca, N. Caglar, and M. A. Bhatti, "Predicting the compressive strength and slump of high strength concrete using neural network," Construction and Building Materials, vol. 20, no. 9, pp. 769–775, 2006.

[13] I.-C. Yeh, "Modeling slump flow of concrete using second-order regressions and artificial neural networks," Cement and Concrete Composites, vol. 29, no. 6, pp. 474–480, 2007.

[14] A. Jain, S. Kumar Jha, and S. Misra, "Modeling and analysis of concrete slump using artificial neural networks," Journal of Materials in Civil Engineering, vol. 20, no. 9, pp. 628–633, 2008.

[15] R. B. Boozarjomehry and W. Y. Svrcek, "Automatic design of neural network structures," Computers & Chemical Engineering, vol. 25, no. 7-8, pp. 1075–1088, 2001.

[16] J. S. Son, D. M. Lee, I. S. Kim, and S. K. Choi, "A study on genetic algorithm to select architecture of a optimal neural network in the hot rolling process," Journal of Materials Processing Technology, vol. 153-154, no. 1–3, pp. 643–648, 2004.

[17] M. Saemi, M. Ahmadi, and A. Y. Varjani, "Design of neural networks using genetic algorithm for the permeability estimation of the reservoir," Journal of Petroleum Science and Engineering, vol. 59, no. 1-2, pp. 97–105, 2007.

[18] P. G. Benardos and G.-C. Vosniakos, "Optimizing feedforward artificial neural network architecture," Engineering Applications of Artificial Intelligence, vol. 20, no. 3, pp. 365–382, 2007.

[19] S. Wang, X. Dong, and R. Sun, "Predicting saturates of sour vacuum gas oil using artificial neural networks and genetic algorithms," Expert Systems with Applications, vol. 37, no. 7, pp. 4768–4771, 2010.

[20] K. L. Priddy and P. E. Keller, Artificial Neural Networks: An Introduction, SPIE—The International Society for Optical Engineering, Bellingham, Wash, USA, 2005.

[21] M. M. Alshihri, A. M. Azmy, and M. S. El-Bisy, "Neural networks for predicting compressive strength of structural light weight concrete," Construction and Building Materials, vol. 23, no. 6, pp. 2214–2219, 2009.

[22] D. Sovil, V. Kvanicka, and J. Pospichal, "Introduction to multilayer feed forward neural networks," Chemometrics and Intelligent Laboratory Systems, vol. 39, no. 1, pp. 43–62, 1997.

[23] K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359–366, 1989.

[24] A. Blum, Neural Networks in C++: An Object-Oriented Framework for Building Connectionist Systems, John Wiley & Sons, New York, NY, USA, 1992.

[25] M. J. A. Berry and G. Linoff, Data Mining Techniques, John Wiley & Sons, New York, NY, USA, 1997.

[26] Z. Boger and H. Guterman, "Knowledge extraction from artificial neural network models," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 4, pp. 3030–3035, Orlando, Fla, USA, October 1997.

[27] K. Swingler, Applying Neural Networks: A Practical Guide, Morgan Kaufmann, San Francisco, Calif, USA, 2001.

[28] M. H. Sazli, "A brief review of feed-forward neural networks," Communications Faculty of Science, University of Ankara, vol. 50, no. 1, pp. 11–17, 2006.

[29] S. Rajasekaran and G. A. V. Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms: Synthesis & Applications, Prentice-Hall of India Private Limited, New Delhi, India, 2003.

[30] A. Y. Shamseldin, A. E. Nasr, and K. M. O'Connor, "Comparison of different forms of the multi-layer feed-forward neural network method used for river flow forecasting," Hydrology and Earth System Sciences, vol. 6, no. 4, pp. 671–684, 2002.

[31] A. Mellit, S. A. Kalogirou, and M. Drif, "Application of neural networks and genetic algorithms for sizing of photovoltaic systems," Renewable Energy, vol. 35, no. 12, pp. 2881–2893, 2010.

[32] N. M. Razali and J. Geraghty, "Genetic algorithm performance with different selection strategies in solving TSP," in Proceedings of the World Congress on Engineering, vol. 2, pp. 1134–1139, 2011.


Page 5: Research Article Modeling Slump of Ready Mix Concrete Using … · 2019. 7. 31. · Research Article Modeling Slump of Ready Mix Concrete Using Genetically Evolved Artificial Neural

Advances in Artificial Neural Systems 5

Yes

Initialize population of chromosomes representing hidden layer neurons (N)

transfer function learning rate and momentum coefficient

Create neural network

Train the neural network using BP

algorithm and training parametersOutputs

Compute fitness of chromosomes as

training RMSE by comparing actual and

predicted outputs

Compute validation error (RMSE) by

comparing actual and predicted outputs

Inputs

Outputs

Increase invalidation

error

Update weights and biases

Threshold training error

reached

Selection

Saturation of fitness function or

maximum generations

reached

Crossover

Mutation

Stop training

Stop training

Next generation of population

Save neural network architecture and

training parameters

Yes

No

NoYes

No

Start

Inputs

Training data set

Validation data set

Gen

etic

algo

rithm

s ope

rato

rs

Back

prop

agat

ion

of er

rors

Stop

7-N-1

Figure 3 Evolving neural network architecture and training parameters using GA

the efficiency of conventional optimization techniques insearching nonlinear and noncontinuous spaces which arecharacterized by abstract or poorly understood expert knowl-edge [31]

Automatic design of neural network architecture and itstraining parameters is accomplished by amalgamating GAwith ANN during its training processThemethodology usesGA to evolve neural networkrsquos hidden layer neurons andtransfer function along with its learning rate andmomentum

coefficient The BP algorithm then uses these ANN designvariables to compute the training error The training processis monitored at each iteration by computing the valida-tion error The training of neural network is stopped oncevalidation error starts to increase This process is repeatednumber of times till optimum neural network architectureand its training parameters are evolved The steps of thismethodology are presented as flow chart in Figure 3 and aresummarized as below

6 Advances in Artificial Neural Systems

(1) Initialization of Genetic Algorithm with Population ofChromosomes In GA the chromosomes contain the informa-tion regarding the solution to a problem The chromosomescontain number of genes which represent potential solutionsto a problem Being a population based heuristic the GAstarts with a number of chromosomes representing initialguesses to the possible solutions to a problem In the presentstudy the chromosomes represent the information regardingnumber of hidden layer neurons transfer function learningrate andmomentumcoefficientThenumbers of hidden layerneurons are initialized in the range 5 to 20 neurons and arerepresented as discrete integer number The GA is allowed toselect the transfer function between tangent hyperbolic andlog-sigmoid transfer functionsThe range of learning rate andmomentum coefficient is bounded in the range 0 to +1 and isrepresented by real numbers

The size of the population is chosen in such a way topromote evolving of optimal set of solutions to a particularproblem A large initial population of chromosomes tends toincrease the computational time whereas a small populationsize leads to poor quality solution Therefore population sizemust be chosen to derive a balance between the computa-tional effort and the quality of solution In the present studyan initial population size of 50 chromosomes is used

(2) Creating the Neural Network The neural network iscreated using randomly generated hidden layer neurons Asdiscussed above the number of input layer neurons is 7and output layer neurons is 1 hence the neural networkarchitecture is initialized as 7-N-1 where N is the numberof hidden layer neurons determined using GA The transferfunction for hidden layers is also initialized using GA

(3) Training of Neural Network and Evaluating Fitness of Chro-mosomes The neural network is trained using Levenberg-Marquardt backpropagation training algorithmThe learningparameters namely learning rate andmomentumcoefficientinitialized using GA are used during the training process forsystematic updating of neural network weights and biasesBP algorithm updates the weight and biases through back-propagation of errors Early stopping technique is appliedto improve the generalization of the neural network Thefitness function acts as measure of distinguishing optimalsolution from numerous suboptimal solutions by evaluatingthe ability of the possible solutions to survive or biologicallyspeaking it tests the reproductive efficiency of chromosomesThe fitness of each chromosome is computed by evaluatingthe root mean square error (RMSE) using

RMSE = radic 1119873

119873

sum

119894=1

(119879

119894minus 119875

119894)

2

(2)

where119879119894and119875119894denote the target or observed values andANN

predicted concrete slump values respectively

(4) Selecting the Fitter Chromosomes The evolution operatorof GA called as selection operator helps in selecting the fitterchromosomes based on the value of the fitness function

A selection operator performs the function synonymous toa filtering membrane allowing the fitter chromosomes tosurvive to create new offspring As one moves from onegeneration of population to the next the selection operatorgradually increases the proportion of the fitter chromo-somes in the population The present study uses roulettewheel selection strategy which allows probability of selectionproportional to the fitness of the chromosome The basicadvantage of roulette wheel selection is that it discards noneof the individuals in the population and gives a chance to allof them to be selected [32]

(5) Creating Next Generation of Population The new gener-ation of population is created through two genetic operatorsof GA called crossover and mutationThe crossover operatoris a recombination operator and is applied to a pair of parentchromosomes with the hope to create a better offspring Thisis done by randomly choosing a crossover point and copyingthe information before this point from the first parent andthen copying the information from the second parent beyondthe crossover point The present study utilized the scatteredcrossover with probability 09 for recombining the two parentchromosomes for producing a fitter child

The mutation operator modifies the existing buildingblocks of the chromosomes maintaining genetic diversity inthe population It therefore prevents GA from getting trappedat a local minimum In contrast to crossover which exploitsthe current solution the mutation aids the exploration of thesearch space Too high mutation rate increases the searchspace to a level that convergence or finding global optimabecomes a difficult issue whereas a lower mutation ratedrastically reduces the search space and eventually leadsgenetic algorithm to get stuck in a local optima The presentstudy uses uniform mutation with mutation rate 002 Theprocedure for creating new population of chromosomes iscontinued till maximum generation limit is achieved or thefitness function reaches a saturation level Maximumnumberof generations used for present study is 150

36 Evaluating Performance of the TrainedModels The studyuses six different statistical performance metrics for evalu-ating the performance of the trained models The statisticalparameters aremean absolute error (MAE) rootmean squareerror (RMSE) mean absolute percentage error (MAPE)coefficient of correlation (119877) Nash-Sutcliffe efficiency (119864)and root mean square to standard deviation ratio (RSR) Theabove performance statistics were evaluated using

MAE = 1119873

119873

sum

119894=1

1003816

1003816

1003816

1003816

119879

119894minus 119875

119894

1003816

1003816

1003816

1003816

RMSE = radic 1119873

119873

sum

119894=1

(119879

119894minus 119875

119894)

2

MAPE () = 1119873

119873

sum

119894=1

1003816

1003816

1003816

1003816

119879

119894minus 119875

119894

1003816

1003816

1003816

1003816

119879

119894

times 100

Advances in Artificial Neural Systems 7

119877 = (

sum

119873

119894=1((119879

119894minus 119879) (119875

119894minus 119875))

radic

sum

119873

119894=1(119879

119894minus 119879)

2

sum

119873

119894=1(119875

119894minus 119875)

2

)

119864 = 1 minus

sum

119873

119894=1(119879

119894minus 119875

119894)

2

sum

119873

119894=1(119879

119894minus 119879)

2

RSR = RMSE

radic

(1119873)sum

119873

119894=1(119879

119894minus 119879)

2

(3)

where119879119894and119875119894denote the target or observed values andANN

predicted values and 119879 and 119875 represent the mean observedand mean ANN predicted values respectively 119873 representsthe total number of data A lower value of MAE RMSEMAPE and RSR indicates good performance of the modelA higher value of 119864 and 119877 statistics above 090 indicates goodprediction of the model

4 Results

The neural network architecture was evolved through trialand error process by analyzing 30 different combinations ofhidden layer neurons transfer function learning rate andmomentum coefficientThe optimal neural network architec-ture (BPNN) was evolved as 7-11-1 having eleven hidden layerneurons with learning rate 045 momentum coefficient 085and tangent hyperbolic hidden layer transfer function Thesame operation was performed by incorporating GA duringthe training of ANN The GA was able to evolve the optimalneural network architecture and training parameters in 92generations (Figure 4) The time taken by GA to reach thesaturation level of fitness function 22357mm was evaluatedas 3053412 seconds The GA evolved neural network archi-tecture (ANN-GA) comprised 9 hidden layer neurons andtangent hyperbolic transfer function The optimal learningrate and momentum coefficient for backpropagation neuralnetwork were computed as 03975 and 09385 respectivelyBoth ANN-GA and BPNN models subsequent to trainingwere validated and tested The results in terms of the perfor-mance statistics are presented in Table 2

The entire RMC data was also used for evaluating theprediction ability of the trained models namely BPNN andANN-GA The regression plots showing the prediction oftrained BPNN and ANN-GAmodels are exhibited in Figures5(a) and 5(b) respectivelyThe statistical performance for theentire data set is tabulated at Table 3

5 Discussions

The results of the study show that amalgamation of GAwith ANN during its training phase leads to evolving ofoptimal neural network architecture and training parametersIn comparison to trial and error BPNN neural networkhaving architecture 7-11-1 the hybrid ANN-GA automaticallyevolved a less complex architecture 7-9-1 Moreover the

0

5

10

15

20

25

30

0 20 40 60 80 100

Generations

Fitn

ess f

unct

ion

(RM

SE) (

mm

)

Figure 4 Fitness function (RMSE) versus generation

Table 2 Statistical performance of ANN models for trainingvalidation and testing data sets

Model MAE(mm)

RMSE(mm)

MAPE() 119877 119864 RSR

TrainingANN 17378 24027 11862 09804 09610 01974ANN-GA 1506 22357 10479 09830 09663 01837

ValidationANN 19829 27489 13474 09746 09482 02276ANN-GA 16299 24687 10991 09794 09582 02044

TestingANN 20651 29582 13916 09735 09474 02294ANN-GA 17769 26295 12382 09803 09584 02039

Table 3 Statistical performance of the trained ANNmodels for theentire data set

Model MAE(mm)

RMSE(mm)

MAPE() 119877 119864 RSR

OverallANN 18236 25470 12412 09783 09569 02075ANN-GA 15754 23345 10841 09819 09638 01902

optimal training parameters evolved using GA were ableto enhance the learning and generalization of the neuralnetwork In comparison to BPNN model the ANN-GAmodel provided a lower error statistics MAE RMSE MAPEand RSR value of 1506mm 22357mm 10479 and 01837during training 16299mm 24687mm 10991 and 02044during validation and 17769mm 26295mm 12382 and02039 during testing respectively The trained ANN-GAmodel gave higher prediction accuracy with higher valuesof statistics 119877 and 119864 during training validation and testingof trained models The performance statistics computed forthe entire data set using the trained ANN-GA model showsa lower MAE RMSE MAPE and RSR value of 15754mm23345mm 10841 and 01902 respectively and higher 119864and119877 values of 09819 and 09638 respectively in comparison

8 Advances in Artificial Neural Systems

100

110

120

130

140

150

160

170

180

100 120 140 160 180

Pred

icte

d slu

mp

(mm

)

Observed slump (mm)

R2= 09571

(a)

Pred

icte

d slu

mp

(mm

)

Observed slump (mm)

100

110

120

130

140

150

160

170

180

100 120 140 160 180

R2= 09640

(b)

Figure 5 Regression plot of BPNN and ANN-GA predicted slump versus observed slump

to trained BPNN model Overall the performance metricsshow that theANN-GAmodel has consistently outperformedthe BPNN model

6 Conclusions

The study presented a methodology of designing the neu-ral networks using genetic algorithms Genetic algorithmspopulation based stochastic search was harnessed during thetraining phase of the neural networks to evolve the numberof hidden layer neurons type of transfer function and thevalues of learning parameters namely learning rate andmomentum coefficient for backpropagation based ANN

The performance metrics show that ANN-GA modeloutperformed the prediction accuracy of BPNN modelMoreover the GA was able to automatically determine thenumber of hidden layer neurons which were found to beless than those evolved using trial and error methodologyThe hybrid ANN-GA provided a good alternative overtime consuming conventional trial and error technique forevolving optimal neural network architecture and its trainingparameters

The proposed model based on past experimental datacan be very handy for predicting the complex materialbehavior of concrete in quick time It can be used as adecision support tool aiding the technical staff to easilypredict the slump value for a particular concrete design mixThis technique will considerably reduce the effort and timeto design a concrete mix for a customized slump withoutundertaking multiple trials Despite the effectiveness andadvantages of this methodology it is also subjected to somelimitations Since the mathematical modeling of concreteslump is dependent on the physical and chemical propertiesof the designmix constituents hence the same trainedmodelmay or may not be applicable for accurate modeling of slumpon the basis of design mix data obtained from other RMCplants deriving its raw material from a different source

Conflict of Interests

The authors declare that there is no conflict of interestsregarding the publication of this paper

References

[1] I-C Yeh ldquoModeling of strength of high-performance concreteusing artificial neural networksrdquoCement andConcrete Researchvol 28 no 12 pp 1797ndash1808 1998

[2] MUysal andHTanyildizi ldquoEstimation of compressive strengthof self compacting concrete containing polypropylene fiber andmineral additives exposed to high temperature using artificialneural networkrdquo Construction and Building Materials vol 27no 1 pp 404ndash414 2012

[3] Z H Duan S C Kou and C S Poon ldquoPrediction of com-pressive strength of recycled aggregate concrete using artificialneural networksrdquo Construction and Building Materials vol 40pp 1200ndash1206 2012

[4] A Abdollahzadeh R Masoudnia and S Aghababaei ldquoPredictstrength of rubberized concrete using atrificial neural networkrdquoWSEAS Transactions on Computers vol 10 no 2 pp 31ndash402011

[5] H Naderpour A Kheyroddin and G G Amiri ldquoPrediction ofFRP-confined compressive strength of concrete using artificialneural networksrdquoComposite Structures vol 92 no 12 pp 2817ndash2829 2010

[6] R Parichatprecha and P Nimityongskul ldquoAnalysis of durabilityof high performance concrete using artificial neural networksrdquoConstruction and Building Materials vol 23 no 2 pp 910ndash9172009

[7] L Bal and F Buyle-Bodin ldquoArtificial neural network for predict-ing drying shrinkage of concreterdquo Construction and BuildingMaterials vol 38 pp 248ndash254 2013

[8] T Ji T Lin and X Lin ldquoA concrete mix proportion designalgorithm based on artificial neural networksrdquo Cement andConcrete Research vol 36 no 7 pp 1399ndash1408 2006

[9] F Demir ldquoPrediction of elastic modulus of normal and highstrength concrete by artificial neural networksrdquo Constructionand Building Materials vol 22 no 7 pp 1428ndash1435 2008

Advances in Artificial Neural Systems 9

[10] W P S Dias and S P Pooliyadda ldquoNeural networks for predict-ing properties of concretes with admixturesrdquo Construction andBuilding Materials vol 15 no 7 pp 371ndash379 2001

[11] I-C Yeh ldquoExploring concrete slump model using artificialneural networksrdquo Journal of Computing in Civil Engineering vol20 no 3 pp 217ndash221 2006

[12] A Oztas M Pala E Ozbay E Kanca N Caglar and MA Bhatti ldquoPredicting the compressive strength and slump ofhigh strength concrete using neural networkrdquoConstruction andBuilding Materials vol 20 no 9 pp 769ndash775 2006

[13] I-C Yeh ldquoModeling slump flow of concrete using second-order regressions and artificial neural networksrdquo Cement andConcrete Composites vol 29 no 6 pp 474ndash480 2007

[14] A Jain S Kumar Jha and S Misra ldquoModeling and analysisof concrete slump using artificial neural networksrdquo Journal ofMaterials in Civil Engineering vol 20 no 9 pp 628ndash633 2008

[15] R B Boozarjomehry and W Y Svrcek ldquoAutomatic design ofneural network structuresrdquoComputers ampChemical Engineeringvol 25 no 7-8 pp 1075ndash1088 2001

[16] J S Son DM Lee I S Kim and S K Choi ldquoA study on geneticalgorithm to select architecture of a optimal neural networkin the hot rolling processrdquo Journal of Materials ProcessingTechnology vol 153-154 no 1ndash3 pp 643ndash648 2004

[17] M Saemi M Ahmadi and A Y Varjani ldquoDesign of neural net-works using genetic algorithm for the permeability estimationof the reservoirrdquo Journal of Petroleum Science and Engineeringvol 59 no 1-2 pp 97ndash105 2007

[18] P G Benardos and G-C Vosniakos ldquoOptimizing feedforwardartificial neural network architecturerdquo Engineering Applicationsof Artificial Intelligence vol 20 no 3 pp 365ndash382 2007

[19] S Wang X Dong and R Sun ldquoPredicting saturates of sourvacuum gas oil using artificial neural networks and geneticalgorithmsrdquo Expert Systems with Applications vol 37 no 7 pp4768ndash4771 2010

[20] K L Priddy and P E Keller Artificial Neural NetworksAn Introduction SPIEmdashThe International Society for OpticalEngineering Bellingham Wash USA 2005

[21] M M Alshihri A M Azmy and M S El-Bisy ldquoNeuralnetworks for predicting compressive strength of structural lightweight concreterdquo Construction and Building Materials vol 23no 6 pp 2214ndash2219 2009

[22] D Sovil V Kvanicka and J Pospichal ldquoIntroduction tomultilayer feed forward neural networksrdquo Chemometrics andIntelligent Laboratory Systems vol 39 no 1 pp 43ndash62 1997

[23] K Hornik M Stinchcombe and HWhite ldquoMultilayer feedfor-ward networks are universal approximatorsrdquo Neural Networksvol 2 no 5 pp 359ndash366 1989

[24] A Blum Neural Networks in C++ An Object-Oriented Frame-work for Building Connectionist Systems John Wiley amp SonsNew York NY USA 1992

[25] M J A Berry and G Linoff Data Mining Techniques JohnWiley amp Sons New York NY USA 1997

[26] Z Boger and H Guterman ldquoKnowledge extraction fromartificial neural network modelsrdquo in Proceedings of the IEEEInternational Conference on Systems Man and Cybernetics vol4 pp 3030ndash3035 Orlando Fla USA October 1997

[27] K Swingler Applying Neural Networks A Practical GuideMorgan Kaufman San Francisco Calif USA 2001

[28] M H Sazli ldquoA brief review of feed-forward neural networksrdquoCommunications Faculty of Science University of Ankara vol50 no 1 pp 11ndash17 2006

[29] S Rajasekaran and G A V Pai Neural Networks Fuzzy Logicand Genetic Algorithms Synthesis amp Applications Prentice-Hallof India Private Limited New Delhi India 2003

[30] A Y Shamseldin A E Nasr and K M OrsquoConnor ldquoCompar-sion of different forms of the multi-layer feed-forward neuralnetworkmethod used for river flow forecastingrdquoHydrology andEarth System Sciences vol 6 no 4 pp 671ndash684 2002

[31] A Mellit S A Kalogirou and M Drif ldquoApplication of neuralnetworks and genetic algorithms for sizing of photovoltaicsystemsrdquo Renewable Energy vol 35 no 12 pp 2881ndash2893 2010

[32] N M Razali and J Geraghty ldquoGenetic algorithm performancewith different selection strategies in solving TSPrdquo in Proceedingsof the World Congress on Engineering vol 2 pp 1134ndash1139 2011

Submit your manuscripts athttpwwwhindawicom

Computer Games Technology

International Journal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Distributed Sensor Networks

International Journal of

Advances in

FuzzySystems

Hindawi Publishing Corporationhttpwwwhindawicom

Volume 2014

International Journal of

ReconfigurableComputing

Hindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Applied Computational Intelligence and Soft Computing

thinspAdvancesthinspinthinsp

Artificial Intelligence

HindawithinspPublishingthinspCorporationhttpwwwhindawicom Volumethinsp2014

Advances inSoftware EngineeringHindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Electrical and Computer Engineering

Journal of

Journal of

Computer Networks and Communications

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporation

httpwwwhindawicom Volume 2014

Advances in

Multimedia

International Journal of

Biomedical Imaging

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

ArtificialNeural Systems

Advances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

RoboticsJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Computational Intelligence and Neuroscience

Industrial EngineeringJournal of

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014

The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Human-ComputerInteraction

Advances in

Computer EngineeringAdvances in

Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014

Page 6: Research Article Modeling Slump of Ready Mix Concrete Using … · 2019. 7. 31. · Research Article Modeling Slump of Ready Mix Concrete Using Genetically Evolved Artificial Neural

6 Advances in Artificial Neural Systems

(1) Initialization of Genetic Algorithm with Population ofChromosomes In GA the chromosomes contain the informa-tion regarding the solution to a problem The chromosomescontain number of genes which represent potential solutionsto a problem Being a population based heuristic the GAstarts with a number of chromosomes representing initialguesses to the possible solutions to a problem In the presentstudy the chromosomes represent the information regardingnumber of hidden layer neurons transfer function learningrate andmomentumcoefficientThenumbers of hidden layerneurons are initialized in the range 5 to 20 neurons and arerepresented as discrete integer number The GA is allowed toselect the transfer function between tangent hyperbolic andlog-sigmoid transfer functionsThe range of learning rate andmomentum coefficient is bounded in the range 0 to +1 and isrepresented by real numbers

The size of the population is chosen in such a way topromote evolving of optimal set of solutions to a particularproblem A large initial population of chromosomes tends toincrease the computational time whereas a small populationsize leads to poor quality solution Therefore population sizemust be chosen to derive a balance between the computa-tional effort and the quality of solution In the present studyan initial population size of 50 chromosomes is used

(2) Creating the Neural Network The neural network iscreated using randomly generated hidden layer neurons Asdiscussed above the number of input layer neurons is 7and output layer neurons is 1 hence the neural networkarchitecture is initialized as 7-N-1 where N is the numberof hidden layer neurons determined using GA The transferfunction for hidden layers is also initialized using GA

(3) Training of Neural Network and Evaluating Fitness of Chro-mosomes The neural network is trained using Levenberg-Marquardt backpropagation training algorithmThe learningparameters namely learning rate andmomentumcoefficientinitialized using GA are used during the training process forsystematic updating of neural network weights and biasesBP algorithm updates the weight and biases through back-propagation of errors Early stopping technique is appliedto improve the generalization of the neural network Thefitness function acts as measure of distinguishing optimalsolution from numerous suboptimal solutions by evaluatingthe ability of the possible solutions to survive or biologicallyspeaking it tests the reproductive efficiency of chromosomesThe fitness of each chromosome is computed by evaluatingthe root mean square error (RMSE) using

RMSE = radic 1119873

119873

sum

119894=1

(119879

119894minus 119875

119894)

2

(2)

where119879119894and119875119894denote the target or observed values andANN

predicted concrete slump values respectively

(4) Selecting the Fitter Chromosomes The evolution operatorof GA called as selection operator helps in selecting the fitterchromosomes based on the value of the fitness function

A selection operator performs the function synonymous toa filtering membrane allowing the fitter chromosomes tosurvive to create new offspring As one moves from onegeneration of population to the next the selection operatorgradually increases the proportion of the fitter chromo-somes in the population The present study uses roulettewheel selection strategy which allows probability of selectionproportional to the fitness of the chromosome The basicadvantage of roulette wheel selection is that it discards noneof the individuals in the population and gives a chance to allof them to be selected [32]

(5) Creating Next Generation of Population The new gener-ation of population is created through two genetic operatorsof GA called crossover and mutationThe crossover operatoris a recombination operator and is applied to a pair of parentchromosomes with the hope to create a better offspring Thisis done by randomly choosing a crossover point and copyingthe information before this point from the first parent andthen copying the information from the second parent beyondthe crossover point The present study utilized the scatteredcrossover with probability 09 for recombining the two parentchromosomes for producing a fitter child

The mutation operator modifies the existing buildingblocks of the chromosomes maintaining genetic diversity inthe population It therefore prevents GA from getting trappedat a local minimum In contrast to crossover which exploitsthe current solution the mutation aids the exploration of thesearch space Too high mutation rate increases the searchspace to a level that convergence or finding global optimabecomes a difficult issue whereas a lower mutation ratedrastically reduces the search space and eventually leadsgenetic algorithm to get stuck in a local optima The presentstudy uses uniform mutation with mutation rate 002 Theprocedure for creating new population of chromosomes iscontinued till maximum generation limit is achieved or thefitness function reaches a saturation level Maximumnumberof generations used for present study is 150

36 Evaluating Performance of the TrainedModels The studyuses six different statistical performance metrics for evalu-ating the performance of the trained models The statisticalparameters aremean absolute error (MAE) rootmean squareerror (RMSE) mean absolute percentage error (MAPE)coefficient of correlation (119877) Nash-Sutcliffe efficiency (119864)and root mean square to standard deviation ratio (RSR) Theabove performance statistics were evaluated using

MAE = 1119873

119873

sum

119894=1

1003816

1003816

1003816

1003816

119879

119894minus 119875

119894

1003816

1003816

1003816

1003816

RMSE = radic 1119873

119873

sum

119894=1

(119879

119894minus 119875

119894)

2

MAPE () = 1119873

119873

sum

119894=1

1003816

1003816

1003816

1003816

119879

119894minus 119875

119894

1003816

1003816

1003816

1003816

119879

119894

times 100

Advances in Artificial Neural Systems 7

119877 = (

sum

119873

119894=1((119879

119894minus 119879) (119875

119894minus 119875))

radic

sum

119873

119894=1(119879

119894minus 119879)

2

sum

119873

119894=1(119875

119894minus 119875)

2

)

119864 = 1 minus

sum

119873

119894=1(119879

119894minus 119875

119894)

2

sum

119873

119894=1(119879

119894minus 119879)

2

RSR = RMSE

radic

(1119873)sum

119873

119894=1(119879

119894minus 119879)

2

(3)

where119879119894and119875119894denote the target or observed values andANN

predicted values and 119879 and 119875 represent the mean observedand mean ANN predicted values respectively 119873 representsthe total number of data A lower value of MAE RMSEMAPE and RSR indicates good performance of the modelA higher value of 119864 and 119877 statistics above 090 indicates goodprediction of the model

4 Results

The neural network architecture was evolved through trialand error process by analyzing 30 different combinations ofhidden layer neurons transfer function learning rate andmomentum coefficientThe optimal neural network architec-ture (BPNN) was evolved as 7-11-1 having eleven hidden layerneurons with learning rate 045 momentum coefficient 085and tangent hyperbolic hidden layer transfer function Thesame operation was performed by incorporating GA duringthe training of ANN The GA was able to evolve the optimalneural network architecture and training parameters in 92generations (Figure 4) The time taken by GA to reach thesaturation level of fitness function 22357mm was evaluatedas 3053412 seconds The GA evolved neural network archi-tecture (ANN-GA) comprised 9 hidden layer neurons andtangent hyperbolic transfer function The optimal learningrate and momentum coefficient for backpropagation neuralnetwork were computed as 03975 and 09385 respectivelyBoth ANN-GA and BPNN models subsequent to trainingwere validated and tested The results in terms of the perfor-mance statistics are presented in Table 2

The entire RMC data was also used for evaluating theprediction ability of the trained models namely BPNN andANN-GA The regression plots showing the prediction oftrained BPNN and ANN-GAmodels are exhibited in Figures5(a) and 5(b) respectivelyThe statistical performance for theentire data set is tabulated at Table 3

5 Discussions

The results of the study show that amalgamation of GAwith ANN during its training phase leads to evolving ofoptimal neural network architecture and training parametersIn comparison to trial and error BPNN neural networkhaving architecture 7-11-1 the hybrid ANN-GA automaticallyevolved a less complex architecture 7-9-1 Moreover the

0

5

10

15

20

25

30

0 20 40 60 80 100

Generations

Fitn

ess f

unct

ion

(RM

SE) (

mm

)

Figure 4 Fitness function (RMSE) versus generation

Table 2 Statistical performance of ANN models for trainingvalidation and testing data sets

Model MAE(mm)

RMSE(mm)

MAPE() 119877 119864 RSR

TrainingANN 17378 24027 11862 09804 09610 01974ANN-GA 1506 22357 10479 09830 09663 01837

ValidationANN 19829 27489 13474 09746 09482 02276ANN-GA 16299 24687 10991 09794 09582 02044

TestingANN 20651 29582 13916 09735 09474 02294ANN-GA 17769 26295 12382 09803 09584 02039

Table 3 Statistical performance of the trained ANNmodels for theentire data set

Model MAE(mm)

RMSE(mm)

MAPE() 119877 119864 RSR

OverallANN 18236 25470 12412 09783 09569 02075ANN-GA 15754 23345 10841 09819 09638 01902

optimal training parameters evolved using GA were ableto enhance the learning and generalization of the neuralnetwork In comparison to BPNN model the ANN-GAmodel provided a lower error statistics MAE RMSE MAPEand RSR value of 1506mm 22357mm 10479 and 01837during training 16299mm 24687mm 10991 and 02044during validation and 17769mm 26295mm 12382 and02039 during testing respectively The trained ANN-GAmodel gave higher prediction accuracy with higher valuesof statistics 119877 and 119864 during training validation and testingof trained models The performance statistics computed forthe entire data set using the trained ANN-GA model showsa lower MAE RMSE MAPE and RSR value of 15754mm23345mm 10841 and 01902 respectively and higher 119864and119877 values of 09819 and 09638 respectively in comparison



Overall, the performance metrics show that the ANN-GA model consistently outperformed the BPNN model.

6. Conclusions

The study presented a methodology for designing neural networks using genetic algorithms. The population-based stochastic search of GA was harnessed during the training phase of the neural network to evolve the number of hidden layer neurons, the type of transfer function, and the values of the learning parameters, namely, the learning rate and momentum coefficient, for a backpropagation-based ANN.

The performance metrics show that the ANN-GA model outperformed the BPNN model in prediction accuracy. Moreover, the GA automatically determined the number of hidden layer neurons, which was smaller than the number obtained through the trial-and-error methodology. The hybrid ANN-GA thus provides a good alternative to the time-consuming conventional trial-and-error technique for evolving an optimal neural network architecture and its training parameters.

The proposed model, based on past experimental data, can quickly predict the complex material behavior of concrete. It can serve as a decision support tool that helps technical staff predict the slump value for a particular concrete design mix, considerably reducing the effort and time needed to design a mix for a customized slump without undertaking multiple trials. Despite its effectiveness and advantages, the methodology has limitations. Since the modeling of concrete slump depends on the physical and chemical properties of the design mix constituents, the same trained model may not accurately model slump from design mix data obtained from other RMC plants that derive their raw materials from different sources.
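As an illustration of such day-to-day use, a trained model could be queried for a candidate mix as sketched below. The seven inputs, their order, and the numerical values are hypothetical (the split of coarse aggregate into two size fractions is an assumption) and must match the input scheme, and any normalisation, used during training:

```python
# Hypothetical candidate design mix (units as assumed during training)
mix = [[360.0,   # cement (kg/m^3)
        110.0,   # fly ash (kg/m^3)
        760.0,   # sand (kg/m^3)
        640.0,   # coarse aggregate, 20 mm fraction (kg/m^3)
        430.0,   # coarse aggregate, 10 mm fraction (kg/m^3)
        3.8,     # admixture (kg/m^3)
        0.42]]   # water-binder ratio
print(f"Predicted slump: {model.predict(mix)[0]:.0f} mm")
```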

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

