J Intell Manuf
DOI 10.1007/s10845-013-0832-0

Intelligent RFID positioning system through immune-based feed-forward neural network
R. J. Kuo · J. W. Chang
Received: 6 April 2013 / Accepted: 26 August 2013
© Springer Science+Business Media New York 2013
Abstract This study proposes a feed-forward neural network for an RFID positioning system. The proposed network integrates the artificial immune network for optimization (Opt-aiNET) and the artificial immune system (AIS) with clonal selection to train the connecting weights of the feed-forward neural network. It learns the relationship between the received signal strength indication (RSSI) and the picking-cart position. Because the proposed learning algorithm combines the merits of Opt-aiNET and AIS with clonal selection, it avoids falling into local optima while retaining strong learning capability. The computational results for learning two continuous functions show that the proposed algorithm performs better than other immune-based back-propagation neural networks. In addition, the model evaluation results indicate that the proposed algorithm predicts the picking-cart position more accurately than the other methods.
Keywords Radio frequency identification · Back-propagation neural network · Artificial immune systems with clonal selection · Artificial immune network · Positioning system
R. J. Kuo (B)
Department of Industrial Management, National Taiwan University of Science and Technology, No. 43, Section 4, Kee-Lung Road, Taipei 106, Taiwan, ROC
e-mail: [email protected]

J. W. Chang
ChipMOS Technologies, No. 37, Xintai Rd., Zhubei City 302, Hsinchu County, Taiwan, ROC
e-mail: [email protected]
Introduction
Due to the rapid development of globalization, which makes supply chain management more complicated, more and more companies are applying radio frequency identification (RFID) to warehouse management. The obvious advantages of RFID are high-speed scanning, the ability of signals to penetrate materials, and rewritable memory. In addition to allowing tags to be reused, an RFID system can also reduce business costs in identifying the positions of goods and picking carts. In warehouse management, one of the most critical issues is order picking. Although some studies have pointed out that RFID is able to enhance picking performance, most of them focus on knowing the product position rather than the position of the picking staff. However, if a decision support system is to be established to plan the picking route, it is still necessary to know the position of the picking staff.
There are many studies on wireless location methods, but they all suffer accuracy problems when encountering complicated environmental variables. Advanced positioning devices can be used to solve the warehouse positioning problem, but they are usually very costly. Therefore, this study employs RFID, which has a relatively low cost, to develop a positioning method. In addition, in view of the excellent performance of the artificial immune system (AIS) in many fields (Hart and Timmis 2008), this study develops a back-propagation neural network through the integration of different AIS schemes, clonal selection and the immune network, for predicting the picking-cart position. The efficiency of this method was verified by collecting real application data. Received signal strength indication (RSSI) values measured by RFID readers served as the training data; after training with the proposed method, a new group of RSSI readings can be fed into the constructed model through signal-feature calculation. Based on the estimated values, the object position can be determined, and this can help
logistics enterprises to plan picking routes and increase picking efficiency.
The remainder of this paper is organized as follows. The "Literature survey" section presents the background for the current research, while the proposed method is explained in the "Methodology" section. The "Simulation results" and "Case study" sections demonstrate the feasibility of the proposed method using benchmark functions and RFID positioning data, respectively. Finally, concluding remarks are made in the "Conclusions" section.
Literature survey
This section briefly presents related research in the areas of RFID, the back-propagation neural network, and applications of soft computing techniques to the back-propagation neural network.
Radio frequency identification
Radio frequency identification has been widely applied and has become part of our daily life (Landt 2005). It is a technology that employs radio waves to identify an object. A typical RFID system consists of four parts: reader, tag, antenna, and host computer system (Chawla and Ha 2007; Ranky 2006; Shepard 2005). It has been applied in many different settings, such as package tracking and intelligent recognition (Salama and Mahmoud 2009; Cheng and Prabhu 2013; Zhang et al. 2012). Zhou and Shi (2009) provided a very complete survey of the available localization technologies, with a focus on RFID. Positioning systems can be divided into indoor and outdoor ones. Well-known outdoor systems include the global positioning system (GPS) and car navigation systems combined with electronic maps. Common indoor positioning systems are based on infrared, ultrasonic, and wireless local area network (WLAN) technologies; RFID-based positioning has also attracted attention in recent years. Compared with outdoor positioning, indoor positioning may be easily affected by the ambient environment during signal measurement, which can cause instability and greater variation of signals. Moreover, because indoor space is narrow, accurate positioning is both more difficult and more important.
A positioning system provides the actual position of an object in a specific space according to the features of that space. For an RFID system, the specific space is the signal range of the wireless access points arranged by the organization, also called the RFID signal space. The features of the space are collectively referred to as the features of the RFID wireless signals, which can be RSSI values or features calculated from signal strength, such as the signal difference between two wireless points or the decrease in RFID signal strength due to obstruction by an object. Recently employed positioning techniques include triangulation, scene analysis, and proximity (Hightower et al. 2000; Ni et al. 2004).
Back-propagation (BP) neural network
Artificial neural networks are computer models built to emulate the human pattern-recognition function. They are well-known massively parallel computing models that have exhibited excellent behavior in solving problems in many areas, such as artificial intelligence, engineering, and supplier selection. The most commonly applied network is the multi-layer perceptron with the error back-propagation learning algorithm, a gradient steepest-descent method used to minimize the cost function (Rumelhart et al. 1986).
Artificial immune networks
The first AIS was proposed by Jerne in 1974 in "Towards a network theory of the immune system"; since then, many AIS-related studies have been published in a wide range of fields (Chen et al. 2013; Qiu and Lau 2012). For instance, Aydin et al. (2012) presented an adaptive artificial immune classification approach for the diagnosis of induction motor faults, and Ülker et al. (2009) applied an AIS approach to CNC tool path generation. AIS are a class of computationally intelligent systems inspired by the principles and processes of the vertebrate immune system. These algorithms typically exploit the immune system's characteristics of learning and memory to solve a problem.
The optimization version of the artificial immune network (Opt-aiNET), based on the artificial immune network (aiNET), is one of the common techniques inspired by the immunological theories that explain the function and behavior of the mammalian adaptive immune system (Timmis and Edmonds 2004). It is a well-known immune-inspired algorithm for function optimization. The characteristics of Opt-aiNET are that the population size is variable, the number of clones is identical for all antibodies, similar antibodies are suppressed, and after suppression new cells are added to the population in order to avoid getting stuck in local minima (Pasti and Castro 2006).
Applications of soft computing techniques to the back-propagation learning algorithm
Soft computing techniques, such as the genetic algorithm (GA) and the particle swarm optimization (PSO) algorithm, have a better ability to escape from local optima owing to their robust global search, and they offer an efficient tool for optimizing the model structure (Tian et al. 2010; Tuzkaya et al. 2013). The core procedure in soft computing is the repeated generational cycle of selection, reproduction and evaluation. These
characteristics, as well as other supplementary benefits such as ease of implementation, parallelism, and no requirement for a differentiable or continuous objective function, make these techniques an attractive choice for general optimization problems (Whigham et al. 2006).
The soft computing technique most often applied to the BP neural network is the GA. Because of its two main operators, crossover and mutation, the GA is able to avoid getting stuck in local minima. Ceravolo et al. (2009) integrated the GA with a BP neural network to predict and control environmental temperature, and showed that this provides better prediction than a conventional BP neural network. Lin et al. (2008) employed simulated annealing to determine the training rate, momentum, and number of hidden-layer nodes of a BP neural network, giving it better performance. In addition, Lin et al. (2009) used the PSO algorithm to determine BP neural network parameters and applied it to classification problems. Li and Chung (2005) used an ant colony system to find the connecting weights of a BP neural network in order to enhance its forecasting performance. Satapathy and Subhashini (2008) used the tabu search method to train a BP neural network and showed that it converges faster than the gradient steepest-descent method.
Methodology
The main objective of this study is to propose an RFID positioning system that employs an integration of AIS with clonal selection and Opt-aiNET to train the feed-forward neural network. The network learns the relationship between RSSI signals and the picking-cart position. Through the trained network, the position can be estimated whenever new data are captured by the RFID readers. The system can then use this information to plan the route for the picking cart in order to shorten the picking distance. In addition, the information can be stored in the built-in chip of the RFID tag.
The proposed RFID-based positioning system consists of three parts. The first part is the collection of RFID signal data, the second part is RFID data transformation, and the third part uses the collected data to train the proposed aiNBSB network, determining the important inputs and connecting weights. Each part is discussed in detail below.
RFID data collection
Most recent wireless positioning systems use RSSI to estimate position. As a radio-frequency signal propagates through the air, its strength decreases with distance and with the transmission medium. Using this index, we can estimate the signal range instead of determining the signal direction. This study employs RSSI to estimate the distance between the signal source and the receiver. Generally speaking, a single receiver is not enough to determine the correct position; thus, three or more fixed-point receivers are required to estimate the distance.
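The paper learns the RSSI-to-position mapping with a neural network rather than a closed-form propagation model, but as a rough point of reference, the strength-decreases-with-distance relationship described above is often summarized by the log-distance path-loss model. The reference RSSI at 1 m and the path-loss exponent below are illustrative assumptions, not values from this study:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model to estimate distance (m).

    rssi_at_1m and path_loss_exp are assumed, environment-dependent
    constants; in practice they would have to be calibrated on site.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

print(rssi_to_distance(-45.0))  # 1.0 m at the reference strength
print(rssi_to_distance(-65.0))  # 10.0 m: 20 dB weaker => 10x farther
```

With three or more such distance estimates from fixed receivers, a position could in principle be trilaterated; indoor multipath effects are exactly why the paper prefers a learned model.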
Data transformation
In the telecommunications field (IEEE 802.11 systems), RSSI is a received radio signal strength index. It has two units: one is the absolute power in dBm, and the other is a relative partition of the power; the difference between two absolute power values is expressed in dB. The receivers used in this study are from the OMRON V750-series UHF RFID System, whose RSSI unit is dBm with a range from −70 to −40 dBm. The closer the dBm value is to 0, the stronger the RSSI signal.
Before being fed into the proposed network, the data need to be normalized into [0, 1] according to the following equation:

R_i = (X_i − X_min) / (X_max − X_min),   (1)

where X_min is the minimum of the data X_i over all i, X_max is the maximum of the data X_i over all i, and X_i is the ith datum of the data X.
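Eq. (1) is ordinary min-max scaling; a minimal sketch (function and variable names are ours):

```python
def min_max_normalize(xs):
    """Eq. (1): scale every datum into [0, 1]."""
    x_min, x_max = min(xs), max(xs)
    return [(x - x_min) / (x_max - x_min) for x in xs]

# RSSI readings in dBm from the OMRON reader's -70..-40 dBm range
print(min_max_normalize([-70.0, -55.0, -40.0]))  # [0.0, 0.5, 1.0]
```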
Integration of the artificial immune network and the optimization artificial immune network for learning the back-propagation neural network
The artificial immune system is a kind of meta-heuristic that simulates the biological immune system. It inherits several characteristics from the biological immune system, such as diversity, tolerance, immune memory, learning, and self-identification, and it has been applied to recognition, data mining, and optimization. Different heuristics can be applied within AIS, including clonal selection, negative selection, aiNET, and the immune genetic algorithm. In this study, an integration of aiNET with a back-propagation scheme and AIS using clonal selection with a back-propagation scheme (aiNBSB) is proposed for learning the feed-forward neural network, i.e., for determining the weights of the forecasting model. Implementing Opt-aiNET first maintains the diversity of the solutions in order to avoid getting stuck in local minima. Then clonal selection, which mimics the immune reaction against invaders, is employed; in clonal selection, only the cells that are able to identify the antigens can proliferate, making the recognition of invaders more effective. The flowchart of the proposed algorithm is illustrated in Fig. 1. Each step is discussed in detail below.
(1) Antibody encoding
The data need to be encoded before being evolved by the AIS. The most common schemes are binary encoding and floating-point encoding; the former is used here. All solutions are compiled into a series of antibodies to
Fig. 1 aiNBSB algorithm framework
solve the problem. Thus, if the number of features is s, each antibody has s bits, each represented by 0 or 1. If a bit value equals 0, the corresponding feature is not selected; otherwise, it is selected. For every antibody, at least one bit must equal 1. The antibody population consists of n new antibodies and m memory cells, so the total number of initial antibodies is p = n + m.
(2) Stop criterion
The stop criterion determines whether the algorithm has reached the specified number of iterations. If so, stop; otherwise, continue with the following steps.
(3) Affinity evaluation between antigen and antibody, and selection
The basic mission of the immune system is to recognize foreign cells and molecules. A smaller difference between antibody and antigen means a higher affinity between them, and thus a better recognition effect. For a neural network, the objective of updating the connecting weights is to decrease the distance between the target output and the actual output. This distance, the mean error between target and actual outputs, is then transformed into an affinity: a smaller error yields a larger affinity. The transformation functions are as follows:
d_j = (1/N) Σ_{a=1}^{N} √((F_a − T_a)²)   and   (2)

(Ag)_j = 1 / (1 + d_j),   (3)

where N is the number of data, F_a is the actual output for the ath datum, T_a is the target output for the ath datum, and d_j is the error of the jth antibody. Basically, a smaller distance between antibody and antigen means a higher affinity between antibody and
antigen. The S_n antibodies with the highest affinity are then selected for cloning.
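Eqs. (2)-(3) can be sketched as follows, where `actual` and `target` hold the network outputs F_a and targets T_a over the N training data (names are ours):

```python
import math

def error_and_affinity(actual, target):
    """Eqs. (2)-(3): mean deviation d_j between outputs and targets,
    mapped so that a smaller error yields a larger affinity in (0, 1]."""
    n = len(actual)
    d_j = sum(math.sqrt((f - t) ** 2) for f, t in zip(actual, target)) / n
    return d_j, 1.0 / (1.0 + d_j)

print(error_and_affinity([1.0, 2.0], [1.0, 2.0]))  # (0.0, 1.0): perfect fit
```

The 1/(1 + d_j) transform keeps affinity bounded and monotone: zero error gives the maximal affinity of 1.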
(4) Clone
Generate a number N_c of clones for each network cell.
(5) Mutation
Mutation has two parts: half of the antibodies are mutated by the gradient steepest-descent method, while the other half of the clones are mutated in proportion to the fitness of their parent cells, keeping the parent cells. The mutation follows the equations:
α_j = (1/ρ) e^{−(Ag)_j}   and   (4)

ω'_k = ω_k + α_j · N(0, 1),   (5)

where α_j and (Ag)_j are the jth antibody's mutation rate and fitness, respectively, ρ is a parameter that controls the decay of the inverse exponential function, ω_k is the kth weight of an antibody, and N(0, 1) is a Gaussian random variable with zero mean and standard deviation σ = 1.
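Eqs. (4)-(5) can be sketched as follows; the default ρ = 1.2 is one of the Taguchi levels used later in the paper, and the function name is ours:

```python
import math
import random

def mutate(weights, ag_j, rho=1.2, rng=random):
    """Eqs. (4)-(5): perturb each connecting weight with Gaussian noise
    scaled by a mutation rate alpha_j that shrinks as affinity grows."""
    alpha_j = (1.0 / rho) * math.exp(-ag_j)
    return [w + alpha_j * rng.gauss(0.0, 1.0) for w in weights]
```

Because α_j decays exponentially in the affinity, high-affinity antibodies are perturbed only slightly, concentrating exploration on the poorer solutions.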
(6) Recalculate the affinity
After mutation, combine the C and C_r sets of antibodies and calculate their affinities again.
(7) Check whether the antibodies are stable
If the average error of the population is not significantly different from that of the previous iteration, continue; otherwise, return to Step 4. The average error is calculated as:
R_avg = (1/p) Σ_{j=1}^{p} d'_j,   (6)

where p is the total number of antibodies.
(8) Suppression
Determine the affinity of all cells in the network. Suppress
all but the highest-fitness cell among those whose affinities are less than the suppression threshold σ_s, and determine the number of network cells, named memory cells, remaining after suppression.
(9) Update the random cells and memory cells
Introduce a percentage d% of randomly generated cells.
(10) Produce the initial antibodies
After aiNET meets its stop criterion, take the m highest-affinity antibodies as the memory cells for the AIS and generate n random cells. These two parts combined form the initial antibodies.
(11) Check whether the stop criterion is met
Determine whether the algorithm has reached the specified number of iterations. If so, stop; otherwise, continue with the following steps.
(12) Affinity evaluation between antigen and antibody, and selection
The calculation of fitness and affinity is the same as in Step 3. Sort the antibodies and select the S_c antibodies with the highest affinity for cloning; a larger affinity means a larger opportunity to be cloned.
(13) Clone
In biology, cloning means copying creatures by biotechnology; in AIS, it means copying antibody cells to increase the competitiveness of good antibodies. Different numbers of antibodies are cloned according to affinity: when an antibody has greater affinity, its ratio of cloned cells is higher, i.e., the reproductive rate is proportional to the affinity. Suppose cloning generates the set C of antibodies. Then
clone amount of the ith antibody = C × (Ag)_i / Σ_i (Ag)_i,   (7)

Because this may not be an integer, the result is rounded.
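Eq. (7) can be sketched as follows; note that because of the rounding, the counts need not sum exactly to C:

```python
def clone_counts(affinities, c_total):
    """Eq. (7): number of clones per antibody, proportional to affinity
    and rounded to the nearest integer."""
    total_affinity = sum(affinities)
    return [round(c_total * ag / total_affinity) for ag in affinities]

print(clone_counts([0.5, 0.3, 0.2], 10))  # [5, 3, 2]
```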
(14) Mutation
In the biological immune system, mutation means that the protein in the antibody changes; in AIS, it means that the features selected in the antibody change. Mutation may make an antibody better or worse, yet it is also viewed as the 'driving force' of convergence: bad mutations are eliminated by selection, while good mutations are passed down, so an appropriate amount of mutation is necessary. A difference between AIS and the GA is that the GA has a fixed mutation rate while AIS does not; the mutation rate of AIS is inversely related to the affinity. Following Ramaswamy et al. (2007), the transformation function is:
α_j = (1/ρ) e^{−(Ag)_j},   (8)

where α_j is the mutation rate of the jth antibody; by this equation, a greater affinity gives a smaller mutation rate. ρ is a given constant whose purpose is to limit the mutation rate to some range and to avoid problems when the affinity (Ag)_j equals 0. A random number between 0 and 1 is then generated: if it is smaller than α_j, the antibody mutates by adding a Gaussian random variable to a weight; otherwise, it does not mutate. Assume there are C sets of antibodies after cloning and C sets of mutation rates after the transformation. There are many mutation methods in general; a common one is to select a parent to mutate and then replace the chosen gene by a random number, but this does not change the structure, and although feasible, its efficiency may be poor.
(15) Recalculate the affinity and select
After mutation, combine the C and C_r sets of antibodies and calculate their affinities again. The affinities of the new antibodies and memory cells are recalculated in every iteration because there are no good antibodies at the start: the memory-cell antibodies of the first iteration are generated randomly. After cloning and mutation, the affinity is recalculated in order to select the antibodies with better affinity.
Table 1 The best combinations of parameters for different algorithms for the Matyas function
Algorithm BP AIS AISBP aiNET aiNBSB
Number of iterations 1,000 1,000 1,000 1,000 1,000
Number of data 70 70 70 70 70
Number of new antibodies – 15 15 Variable Variable
Number of memory cells – 5 5 Variable Variable
Maturate parameter ρ – 1.2 1.2 1 1
Training rate η 0.9 – 0.9 – 0.9
Momentum α 0.9 – 0.9 – 0.9
Fitness threshold between antibodies θ – – – 12 12
Stability value σ – – – 0.012 0.012
(16) Replace the memory cells
Antibodies with better affinity can replace the memory cells. Antibodies with high affinity are not easily replaced; they can be replicated into more good antibodies in the clone step, increasing the concentration of good antibodies. After this step, go back to Step 10.
Simulation results
Based on the algorithm presented in the "Methodology" section, this section first applies two benchmark functions to assess the proposed algorithm. All algorithms are written in the C language: aiNBSB, the BP neural network (Rumelhart et al. 1986), AISBP (Pasti and Castro 2006), AIS (Hunt and Cooke 1996), and aiNET (Castro and Zuben 2001). Two non-linear functions, the Booth function and the Matyas function, are selected. The detailed discussion is as follows.
Matyas function
The related information about the Matyas function is as follows:

(1) Definition: f(x) = 0.26(x1² + x2²) − 0.48 x1 x2
(2) Search domain: −10 ≤ x_i ≤ 10, i = 1, 2
(3) Number of local minima: several
(4) Global minimum: (x1, x2) = (0, 0), f(x1, x2) = 0
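For reference, the benchmark and its known global minimum:

```python
def matyas(x1, x2):
    """Matyas benchmark: f(x) = 0.26(x1^2 + x2^2) - 0.48 x1 x2."""
    return 0.26 * (x1 ** 2 + x2 ** 2) - 0.48 * x1 * x2

print(matyas(0.0, 0.0))  # 0.0, the global minimum
print(matyas(1.0, 1.0))  # slightly positive away from the minimum
```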
Taguchi experimental design
Since several parameters of the proposed algorithm need to be determined, this study applies Taguchi experimental design to decrease the number of experimental combinations. For BP, there are three levels each for the training rate and the momentum term: 0.1, 0.5 and 0.9. For AIS and AISBP, the three levels for ρ are 0.8, 1 and 1.2; the three levels for η are 0.1, 0.5 and 0.9; and the three levels for α are 0.1, 0.5 and 0.9. For
Fig. 2 The convergence curves of different algorithms for the Matyas function
aiNBSB, the three levels for ρ are 0.8, 1.0 and 1.2; the three levels for η are 0.1, 0.5 and 0.9; the three levels for α are 0.1, 0.5 and 0.9; the three levels for θ are 9, 12 and 15; and the three levels for σ are 0.008, 0.010 and 0.012. After applying the Taguchi method, the best combinations of parameters are listed in Table 1.
Figure 2 shows that, for the Matyas function, aiNBSB converges the fastest while AIS has the worst convergence speed.
K-fold cross validation
This study employs 10-fold cross validation to test the proposed algorithm; results are averaged over the 10 folds. Table 2 indicates that, for the training data, aiNBSB has the smallest average MSE, 0.000042, while AIS has the largest, 0.039478; aiNET's average MSE of 0.001032 is the second best. The test results, presented in Table 3, are similar to the training results.
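The 10-fold protocol can be sketched as follows (a generic illustration of k-fold splitting, not the authors' code):

```python
def k_fold_splits(n_samples, k=10):
    """Partition sample indices into k folds; each fold serves once as
    the test set while the remaining k-1 folds form the training set."""
    folds = [list(range(n_samples))[i::k] for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, test))
    return splits

splits = k_fold_splits(70, 10)
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 10 63 7
```

The reported MSE per algorithm is then the mean of the 10 per-fold MSE values.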
Hypothesis test
Based on the results listed in Table 2, hypothesis testing is carried out to determine whether significant differences exist among the five algorithms. The significance level is set at 5% (α = 0.05). μ_BP, μ_AIS, μ_AISBP, μ_aiNET and μ_aiNBSB represent the average MSE values for BP, AIS, AISBP, aiNET and aiNBSB, respectively. The pairwise hypothesis tests are as follows:
Table 2 The 10-fold cross validation results of training data for the Matyas function
Algorithm Fold
1 2 3 4 5
BP MSE 0.001619 0.002409 0.001541 0.001826 0.001120
AIS MSE 0.041937 0.041325 0.035783 0.045642 0.033685
AISBP MSE 0.000651 0.001481 0.000897 0.001782 0.00179
aiNET MSE 0.000873 0.001284 0.000994 0.000981 0.000826
aiNBSB MSE 0.000040 0.000046 0.000043 0.000034 0.000037
Algorithm Fold
6 7 8 9 10
BP MSE 0.0012 0.001215 0.001009 0.000509 0.001681
AIS MSE 0.037216 0.041864 0.044237 0.036991 0.036097
AISBP MSE 0.001547 0.001679 0.000419 0.000941 0.003059
aiNET MSE 0.00094 0.001109 0.001079 0.001129 0.001108
aiNBSB MSE 0.000038 0.000043 0.000044 0.000045 0.000048
BP average 0.001413
AIS average 0.039478
AISBP average 0.001425
aiNET average 0.001032
aiNBSB average 0.000042
Table 3 The 10-fold cross validation results of test data for the Matyas function
Algorithm Fold
1 2 3 4 5
BP MSE 0.003970 0.002862 0.001569 0.000973 0.001828
AIS MSE 0.092353 0.009756 0.024229 0.019501 0.063587
AISBP MSE 0.001627 0.00093 0.00097 0.00038 0.002952
aiNET MSE 0.00295 0.000865 0.000734 0.00051 0.002493
aiNBSB MSE 0.000343 0.000083 0.000021 0.000086 0.000125
Algorithm Fold
6 7 8 9 10
BP MSE 0.000783 0.001595 0.001297 0.001799 0.001752
AIS MSE 0.017947 0.063574 0.083072 0.034933 0.075607
AISBP MSE 0.001472 0.002248 0.000797 0.001708 0.003748
aiNET MSE 0.002072 0.000818 0.000967 0.001218 0.001698
aiNBSB MSE 0.000066 0.000047 0.000070 0.000237 0.000037
BP average 0.001843
AIS average 0.048456
AISBP average 0.001683
aiNET average 0.001433
aiNBSB average 0.000112
Test I:   H0: μ_aiNBSB ≥ μ_BP     vs.  H1: μ_aiNBSB < μ_BP
Test II:  H0: μ_aiNBSB ≥ μ_AIS    vs.  H1: μ_aiNBSB < μ_AIS
Test III: H0: μ_aiNBSB ≥ μ_AISBP  vs.  H1: μ_aiNBSB < μ_AISBP
Test IV:  H0: μ_aiNBSB ≥ μ_aiNET  vs.  H1: μ_aiNBSB < μ_aiNET
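The paper reports a Z statistic for each pairwise test. One plausible form, assuming a two-sample Z test on the per-fold MSE values (the exact variance treatment is not stated in the paper), is:

```python
import math

def z_statistic(sample_a, sample_b):
    """Two-sample Z statistic for H0: mean(a) >= mean(b); a strongly
    negative value favors H1: mean(a) < mean(b)."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)
```

At α = 0.05, the one-sided test rejects H0 when Z < −1.645, which all four reported statistics easily satisfy.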
Table 4 The hypothesis tests between the aiNBSB algorithm and the other algorithms for the Matyas function
Hypothesis test aiNBSB versus BP aiNBSB versus AIS aiNBSB versus AISBP aiNBSB versus aiNET
Statistic value Z =−9.985421881 Z =−52.14863379 Z =−7.214171 Z =−22.96908
Test result Reject H0 Reject H0 Reject H0 Reject H0
Table 5 The best combinations of parameters for different algorithms for the Booth function
Algorithm BP AIS AISBP aiNET aiNBSB
Number of iterations 1,000 1,000 1,000 1,000 1,000
Number of data 70 70 70 70 70
Number of new antibodies – 15 15 Variable Variable
Number of memory cells – 5 5 Variable Variable
Maturate parameter ρ – 1 1 1 1
Training rate η 0.9 – 0.9 – 0.9
Momentum α 0.9 – 0.9 – 0.9
Fitness threshold between antibodies θ – – – 12 12
Stability value σ – – – 0.01 0.01
The hypothesis test results shown in Table 4 indicate that aiNBSB is significantly better than the other four algorithms.
Booth function
The related information for the Booth function is as follows:

(1) Definition: f(x) = (x1 + 2x2 − 7)² + (2x1 + x2 − 5)²
(2) Search domain: −10 ≤ x_i ≤ 10, i = 1, 2
(3) Number of local minima: several
(4) Global minimum: (x1, x2) = (1, 3), f(x1, x2) = 0
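Similarly, for reference:

```python
def booth(x1, x2):
    """Booth benchmark: f(x) = (x1 + 2*x2 - 7)^2 + (2*x1 + x2 - 5)^2."""
    return (x1 + 2 * x2 - 7) ** 2 + (2 * x1 + x2 - 5) ** 2

print(booth(1.0, 3.0))  # 0.0, the global minimum
```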
Taguchi experimental design
A procedure similar to that in the "Taguchi experimental design" section was conducted to determine the best parameter combination; Table 5 presents the result. In addition, the convergence curves of the different algorithms are illustrated in Fig. 3. They reveal that the proposed aiNBSB algorithm achieves the smallest MSE value of all the algorithms, while the AIS algorithm again has the worst performance.
K-fold cross validation
The 10-fold cross validation is also applied to test the proposed algorithm; results are averaged over the 10 folds. Table 6 indicates that aiNBSB has the smallest average MSE, 0.000045, while AIS has the largest, 0.023768; BP's average MSE of 0.000230 is the second best. The test results, presented in Table 7, are similar to the training results.
Fig. 3 The convergence curves of different algorithms for the Booth function
Hypothesis test
Based on the results listed in Table 6, hypothesis testing is carried out to determine whether significant differences exist among the five algorithms. The significance level is set at 5% (α = 0.05). The hypothesis test results shown in Table 8 indicate that aiNBSB is significantly better than the other four algorithms.
Case study
This section demonstrates how to apply the proposed aiNBSB neural network to the RFID-based positioning system. In this study, the RFID reader employed was the OMRON V750-series UHF RFID System. The RSSI displayed by this device is in dBm, ranging from −70 to −40 dBm; the closer the value is to 0 dBm, the stronger the RSSI signal. Owing to the varying impacts of different environments, the measured RSSI value is not exactly inversely proportional to distance; thus, this study repeatedly measured
Table 6 The 10-fold cross validation results of training data for the Booth function
Algorithm Fold
1 2 3 4 5
BP MSE 0.000297 0.000294 0.000224 0.000167 0.000319
AIS MSE 0.015623 0.023202 0.026561 0.027178 0.022261
AISBP MSE 0.001158 0.001003 0.00022 0.000178 0.001171
aiNET MSE 0.001373 0.000919 0.002335 0.002487 0.001774
aiNBSB MSE 0.000044 0.000043 0.000038 0.000041 0.000048
Algorithm Fold
6 7 8 9 10
BP MSE 0.000211 0.000275 0.000128 0.000165 0.000216
AIS MSE 0.028696 0.026023 0.021851 0.027296 0.018990
AISBP MSE 0.00097 0.001874 0.000213 0.000151 0.00086
aiNET MSE 0.003160 0.001673 0.001335 0.002113 0.000779
aiNBSB MSE 0.000049 0.000052 0.000031 0.000057 0.000047
BP average 0.000230
AIS average 0.023768
AISBP average 0.000780
aiNET average 0.001795
aiNBSB average 0.000045
Table 7 The 10-fold cross validation results of test data for the Booth function
Algorithm Fold
1 2 3 4 5
BP MSE 0.000271 0.000317 0.000326 0.000426 0.000312
AIS MSE 0.058582 0.011409 0.003551 0.006986 0.007352
AISBP MSE 0.000547 0.000434 0.000275 0.000265 0.000986
aiNET MSE 0.000814 0.001641 0.004692 0.001044 0.00064
aiNBSB MSE 0.000096 0.000114 0.000651 0.000133 0.000083
Algorithm Fold
6 7 8 9 10
BP MSE 0.00037 0.000409 0.000277 8.133E-05 0.000447
AIS MSE 0.031292 0.054775 0.055492 0.004844 0.009485
AISBP MSE 0.00068 0.002018 0.000498 0.000372 0.001157
aiNET MSE 0.001888 0.002056 0.006291 0.000702 0.000762
aiNBSB MSE 0.000117 0.000028 0.000751 0.000256 0.000135
BP average 0.000324
AIS average 0.024377
AISBP average 0.000723
aiNET average 0.002053
aiNBSB average 0.000236
the signal values and obtained their features through network training. For the purpose of comparison, four other algorithms, BP, AIS, AISBP, and aiNET, are also tested.
Experimental scenario
In order to simulate a real environment, some products are put on racks. Under each product position, an RFID tag is
Table 8 The hypothesis tests between the aiNBSB algorithm and the other algorithms for the Booth function
Hypothesis test aiNBSB versus BP aiNBSB versus AIS aiNBSB versus AISBP aiNBSB versus aiNET
Statistic value Z =−9.706863 Z =−21.4416 Z =−5.571893 Z =−8.519856
Test result Reject H0 Reject H0 Reject H0 Reject H0
Fig. 4 Simulated facility layout
attached. For each individual rack, 12 tags are used; thus, there are 12 RSSI values, or input features. The corresponding output variable is the position, represented by two output nodes. The basic idea is that the movement of the picking staff influences the RSSI values. After normalization, the data are employed to train the proposed algorithm in order to estimate the picking-cart position. The experimental scenario is illustrated in Fig. 4.
Taguchi experimental design
Whether for neural networks or meta-heuristics, parameter determination is very tough work. Thus, this study employs the Taguchi method to find the best combination of parameters. The best combination for every algorithm is listed in Table 9.
Fig. 5 The convergence curves of different algorithms for RFID data
Figure 5 shows that applying the aiNBSB algorithm to train the feed-forward neural network gives the fastest convergence, while the AIS algorithm is the slowest and yields the worst solution.
K-fold cross validation
Based on the results obtained from the Taguchi method, this study employs 10-fold cross validation to test the performance of the proposed algorithm. The computational results are depicted in Tables 10 and 11 for training and test data, respectively. For the training data, aiNBSB has the smallest average MSE, 0.000006, while AIS has the worst, 0.033016. For the test data, the result is similar: aiNBSB again has the smallest average MSE and AIS the largest.
Hypothesis test
Based on the results listed in Table 10, hypothesis testing is conducted to determine whether significant differences exist among the five algorithms. The confidence level is set at 95 % (α = 0.05). μBP, μAIS, μAISBP, and
Table 9 The best combinations of parameters for different algorithms for RFID data
Algorithm BP AIS AISBP aiNET aiNBSB
Number of iterations 1,000 1,000 1,000 1,000 1,000
Number of data 70 70 70 70 70
Number of new antibodies – 15 15 Variable Variable
Number of memory cells – 5 5 Variable Variable
Maturate parameter ρ – 1.2 1.2 0.8 0.8
Training rate η 0.5 – 0.9 – 0.9
Momentum α 0.9 – 0.9 – 0.9
Fitness threshold between antibodies θ – – – 60 60
Stability value σ – – – 0.01 0.01
Table 10 The 10-fold cross validation results of training data for RFID data
Algorithm Fold
1 2 3 4 5
BP MSE 0.000125 0.000130 0.000181 0.000166 0.000163
AIS MSE 0.031169 0.037250 0.033792 0.033659 0.030315
AISBP MSE 0.000211 0.000068 0.000054 0.000131 0.000097
aiNET MSE 0.007542 0.005744 0.006980 0.008402 0.008988
Regression MSE 0.001627 0.001282 0.001789 0.001727 0.001627
Stepwise regression MSE 0.002032 0.001717 0.002136 0.001822 0.001919
aiNBSB MSE 0.000011 0.000001 0.000002 0.000014 0.000007
Algorithm Fold
6 7 8 9 10
BP MSE 0.000194 0.000130 0.000115 0.000106 0.000081
AIS MSE 0.034867 0.029973 0.037317 0.027360 0.034455
AISBP MSE 0.000179 0.000112 0.000152 0.000204 0.000151
aiNET MSE 0.008955 0.006225 0.008478 0.008458 0.007643
Regression MSE 0.001759 0.001796 0.001438 0.001538 0.001629
Stepwise regression MSE 0.001818 0.002091 0.001758 0.001844 0.001983
aiNBSB MSE 0.000005 0.000004 0.000005 0.000003 0.000003
BP average 0.000139
AIS average 0.033016
AISBP average 0.000136
aiNET average 0.007741
Regression average 0.001621
Stepwise regression average 0.001912
aiNBSB average 0.000006
Table 11 The 10-fold cross validation results of test data for RFID data
Algorithm Fold
1 2 3 4 5
BP MSE 0.000368 0.003171 0.001128 0.000180 0.000058
AIS MSE 0.039798 0.030287 0.033963 0.030633 0.029603
AISBP MSE 0.000137 0.002426 0.000695 0.000113 0.000082
aiNET MSE 0.009594 0.008565 0.008000 0.011544 0.009967
Regression MSE 0.002334 0.006566 0.0007936 0.001413 0.002592
Stepwise regression MSE 0.002415 0.005406 0.000666 0.000755 0.002700
aiNBSB MSE 0.000011 0.000001 0.000002 0.000014 0.000007
Algorithm Fold
6 7 8 9 10
BP MSE 0.000440 0.000256 0.000392 0.000701 0.000232
AIS MSE 0.042920 0.039396 0.037066 0.016713 0.041675
AISBP MSE 0.000217 0.000160 0.000185 0.000174 0.000099
aiNET MSE 0.009650 0.007080 0.015003 0.007506 0.012025
Regression MSE 0.001011 0.000609 0.005614 0.004071 0.002387
Stepwise regression MSE 0.000849 0.001098 0.004998 0.003824 0.002137
aiNBSB MSE 0.000005 0.000004 0.000005 0.000003 0.000003
Table 11 continued
BP average 0.000693
AIS average 0.034205
AISBP average 0.000429
aiNET average 0.009893
Regression average 0.002739
Stepwise regression average 0.002485
aiNBSB average 0.000048
Table 12 Hypothesis test results of different algorithms for RFID data
Hypothesis test aiNBSB vs. BP aiNBSB vs. AIS aiNBSB vs. AISBP aiNBSB vs. aiNET
Statistic value Z =−19.064957 Z =−26.275710 Z =−7.992374 Z =−22.321520
Test result Reject H0 Reject H0 Reject H0 Reject H0
μaiNET, and μaiNBSB represent the average MSE values of the BP, AIS, AISBP, aiNET, and aiNBSB algorithms, respectively. The hypothesis test results shown in Table 12 indicate that aiNBSB is significantly better than the other four algorithms.
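The paper reports the Z statistics in Table 12 but does not spell out the exact form of the test; one plausible reading is a two-sample Z test on the per-fold MSE values, rejecting H0 (equal mean MSE) when |Z| exceeds the two-sided critical value 1.96 at α = 0.05. The sketch below follows that assumption.

```python
import math

def z_test(a, b, critical=1.96):
    """Two-sample Z test on two MSE samples; H0: equal means.

    Returns (Z, reject): Z < 0 means sample `a` has the smaller mean MSE.
    The critical value 1.96 corresponds to a two-sided test at alpha = 0.05.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return z, abs(z) > critical
```

Under this reading, the large negative Z values in Table 12 correspond to aiNBSB's per-fold MSE sample having a far smaller mean than each competitor's.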
Conclusions
This study has presented a novel learning algorithm that integrates Opt-aiNET and AIS with clonal selection to train a feed-forward neural network. The proposed aiNBSB algorithm is able to learn the relationship between RSSI values and the picking cart position. Its performance is first verified by learning two continuous functions. In addition, the model evaluation results indicate that the proposed algorithm predicts the picking cart position more accurately than the other methods. Once the picking cart position is known, the picking route can be planned so as to provide the shortest picking distance.
However, the experiments revealed that the signals are frequently influenced by the environment and facilities. Thus, a more robust method could be proposed to overcome this stability problem. In addition, future studies can attempt to predict the object types; this information would make route planning more practical. Moreover, other meta-heuristics, such as bee colony optimization, can be integrated into the proposed heuristic to provide better estimation.
Acknowledgments This study was financially supported by the National Science Council, Taiwan, under contract number NSC 99-2221-E-011-057-MY3. This support is appreciated.
References
Aydin, I., Karakose, M., & Akin, E. (2012). An adaptive artificial immune system for fault classification. Journal of Intelligent Manufacturing, 23(5), 1489–1499.
Ceravolo, F., Felice, M. D., & Pizzuti, S. (2009). Combining back-propagation and genetic algorithms to train neural networks for ambient temperature modeling in Italy. Lecture Notes in Computer Science, 5484, 123–131.
Chawla, V., & Ha, D. S. (2007). An overview of passive RFID. IEEE Communications Magazine, 45(9), 11–17.
Chen, M. H., Chang, P. C., & Lin, C. H. (2013). A self-evolving artificial immune system II with T-cell and B-cell for permutation flow-shop problem. Journal of Intelligent Manufacturing. doi:10.1007/s10845-012-0728-4.
Cheng, C. Y., & Prabhu, V. (2013). An approach for research and training in enterprise information system with RFID technology. Journal of Intelligent Manufacturing, 24(3), 527–540.
De Castro, L. N., & Von Zuben, F. J. (2001a). aiNET: An artificial immune network for data analysis. International Journal of Computational Intelligence and Applications, 1(3), 231–259.
De Castro, L. N., & Von Zuben, F. J. (2001b). An immunological approach to initialize feedforward neural network weights. Artificial Neural Nets and Genetic Algorithms, 126–129.
Hart, E., & Timmis, J. (2008). Application areas of AIS: The past, the present and the future. Applied Soft Computing, 8(1), 191–201.
Hightower, J., Borriello, G., & Want, R. (2000). SpotON: An indoor 3D location sensing technology based on RF signal strength. Seattle: University of Washington, Department of Computer Science and Engineering.
Hunt, J. E., & Cooke, D. E. (1996). Learning using an artificial immune system. Journal of Network and Computer Applications, 19(2), 189–212.
Landt, J. (2005). The history of RFID. IEEE Potentials, 24(4), 8–11.
Li, J. B., & Chung, Y. K. (2005). A novel back-propagation neural network training algorithm designed by an ant colony optimization. IEEE/PES Transmission and Distribution Conference and Exhibition: Asia and Pacific, pp. 1–5.
Lin, S. W., Chen, S. C., Wu, W. J., & Chen, C. H. (2009). Parameter determination and feature selection for back-propagation network by particle swarm optimization. Knowledge and Information Systems, 21(2), 249–266.
Lin, S. W., Tseng, T. Y., Chou, S. Y., & Chen, S. C. (2008). A simulated-annealing-based approach for simultaneous parameter optimization and feature selection of back-propagation networks. Expert Systems with Applications, 34(2), 1491–1499.
Ni, L. M., Liu, Y., Lau, Y. C., & Patil, A. P. (2004). LANDMARC: Indoor location sensing using active RFID. Wireless Networks, 10(6), 701–710.
Pasti, R., & De Castro, L. N. (2006). An immune and a gradient-based method to train multi-layer perceptron neural networks. International Joint Conference on Neural Networks, pp. 2075–2082.
Qiu, X., & Lau, H. Y. K. (2012). An AIS-based hybrid algorithm for static job shop scheduling problem. Journal of Intelligent Manufacturing. doi:10.1007/s10845-012-0701-2.
Ramaswamy, S. A. P., Venayagamoorthy, G. K., & Balakrishnan, S. N. (2007). Optimal control of class of non-linear plants using artificial immune systems: Application of the clonal selection algorithm. In IEEE International Symposium on Intelligent Control (pp. 249–254). Singapore.
Ranky, P. G. (2006). An introduction to radio frequency identification (RFID) methods and solutions. Assembly Automation, 26(1), 28–33.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536.
Salama, A. M. A., & Mahmoud, F. I. (2009). Using RFID technology in finding position and tracking based on RSSI. In International Conference on Advances in Computational Tools for Engineering Applications, Zouk Mosbeh, Lebanon, 15–17, 532–536.
Satapathy, J. K., & Subhashini, K. R. (2008). Tabu based back propagation algorithm for performance improvement in communication channels. In TENCON 2008–2008 IEEE Region 10 Conference (pp. 1–6).
Shepard, S. (2005). RFID: Radio frequency identification (pp. 55–63). New York: McGraw-Hill.
Tian, J., Li, M., & Chen, F. (2010). Dual-population based coevolutionary algorithm for designing RBFNN with feature selection. Expert Systems with Applications, 37(10), 6904–6918.
Timmis, J., & Edmonds, C. (2004). A comment on opt-AiNET: An immune network algorithm for optimisation. Genetic and Evolutionary Computation, 3102, 308–317.
Tuzkaya, G., Gülsün, B., Tuzkaya, U. R., Onut, S., & Bildik, E. (2013). A comparative analysis of meta-heuristic approaches for facility layout design problem: A case study for an elevator manufacturer. Journal of Intelligent Manufacturing, 24(2), 357–372.
Ülker, E., Emin Turanalp, M., & Selçuk Halkaci, H. (2009). An artificial immune system approach to CNC tool path generation. Journal of Intelligent Manufacturing, 20(1), 67–77.
Whigham, P. A., Dick, G., & Recknagel, F. (2006). Exploring seasonal patterns using process modelling and evolutionary computation. Ecological Modelling, 195(1–2), 146–152.
Zhang, Y., Jiang, P., Huang, G., Qu, T., Zhou, G., & Hong, J. (2012). RFID-enabled real-time manufacturing information tracking infrastructure for extended enterprises. Journal of Intelligent Manufacturing, 23(6), 2357–2366.
Zhou, J., & Shi, J. (2009). RFID localization algorithms and applications: A review. Journal of Intelligent Manufacturing, 20(6), 695–707.