Feature Selection with Discrete Binary Differential Evolution
2009 International Conference on Artificial Intelligence and Computational Intelligence (AICI 2009), Shanghai, China, 7-8 November 2009

Xingshi He (xingshi_he@163.com), Qingqing Zhang (suiyue2959@163.com), Na Sun (sunn827@sina.com), Yan Dong (dongyan840214@126.com)
Department of Mathematics, Xi'an Polytechnic University, Xi'an 710048, P.R. China

Abstract: Processing data from a database with data mining algorithms requires careful preprocessing. Redundant and irrelevant attributes reduce the performance of data mining, so feature subset selection is an important problem in the data mining domain. This paper presents a new algorithm, discrete binary differential evolution (BDE), for selecting the best feature subsets. The relevance of attributes is evaluated based on mutual information. Experiments using the new feature selection method as a preprocessing step for SVM, C&R Tree and RBF network show that the method is effective in improving the correct classification rate on several datasets and that the BDE algorithm is useful for feature subset selection.

Keywords: differential evolution; data mining; feature selection; mutual information

I. INTRODUCTION

The success of data mining on a given task is affected by many factors, and the quality of the data is one of them. If the information is irrelevant or redundant, or the data is noisy and unreliable, knowledge discovery becomes more difficult. Feature subset selection is the process of identifying and removing as much irrelevant and redundant information as possible. Selecting a small number of highly predictive features is necessary to avoid over-fitting the training data. Whether or not a learner attempts to select features itself, feature selection prior to learning can be beneficial. Reducing the dimension of the data reduces the size of the hypothesis space and allows algorithms to operate faster and more effectively. In some cases the accuracy of future classification can be improved; in others, the result is a more compact, easily interpreted representation of the target concept.

Algorithms that perform feature selection as a preprocessing step prior to learning generally fall into one of two broad categories. One approach, referred to as the wrapper [1], selects useful features for a particular problem and learning algorithm. This approach has proved useful but is very slow to execute, so wrappers do not scale well to large datasets containing many features. The other approach, called the filter [1], operates independently of any learning algorithm: undesirable features are filtered out of the data before induction commences. Filters have proved to be much faster than wrappers and can therefore be applied to large datasets containing many features. Their general nature allows them to be used with any learner, unlike the wrapper, which must be rerun when switching from one learning algorithm to another.

This paper presents a new approach to feature selection, called BDE (discrete binary differential evolution based feature selection). The approach uses a population-based heuristic to evaluate the worth of feature subsets.
The algorithm is simple and fast to execute when suitable correlation measures are applied.

The rest of this paper is organized as follows. Section II describes the BDE algorithm. Section III presents experimental results of using BDE as a pre-processor for learning algorithms. The last section summarizes the work and discusses future directions.

II. BDE: DISCRETE BINARY DIFFERENTIAL EVOLUTION BASED FEATURE SELECTION

A. Feature evaluation

The purpose of feature selection is to decide which of the initial (possibly large) number of features are included in the final subset and which are ignored. If there are n possible features initially, there are 2^n possible subsets, so heuristic methods must be used to search the feature subset space in reasonable time. In this paper we use the BDE algorithm to find the best subset. The key problem is to define a rule for evaluating the worth of a subset of features. This worth takes into account the usefulness of individual features for predicting the class label, along with the level of inter-correlation among them. We therefore use the fitness function of Equation (1) to evaluate feature subsets [2]:

f(S) = \frac{k \, \bar{r}_{cf}}{\sqrt{k + k(k-1) \, \bar{r}_{ff'}}}    (1)

where f(S), the fitness function for BDE, is the worth of a feature subset S containing k features, \bar{r}_{cf} is the average feature-class correlation and \bar{r}_{ff'} is the average feature-feature inter-correlation; \bar{r}_{cf} and \bar{r}_{ff'} indicate, respectively, how predictive a group of features is and how much redundancy there is among them. The heuristic handles irrelevant features because they are poor predictors of the class, and it discriminates against redundant attributes because they are highly correlated with one or more of the other features.

In order to apply Equation (1) to estimate the merit of a feature subset, it is necessary to compute the correlation (dependence) between attributes. For discrete class problems we use information measures to estimate the degree of association between features. If X and Y are discrete random variables, Equations (2) and (3) give the entropy of Y before and after observing X:

H(Y) = -\sum_{y \in Y} p(y) \log_2 p(y)    (2)

H(Y \mid X) = -\sum_{x \in X} p(x) \sum_{y \in Y} p(y \mid x) \log_2 p(y \mid x)    (3)

The amount by which the entropy of Y decreases reflects the additional information about Y provided by X and is called the information gain [3], given by Equation (4):

gain(X, Y) = H(Y) - H(Y \mid X)    (4)

We use information gain to indicate the degree of correlation between X and Y: the larger gain(X, Y) is, the higher the correlation between X and Y. We can then compute \bar{r}_{cf} and \bar{r}_{ff'} in Equation (1) using Equations (5) and (6):

\bar{r}_{cf} = \frac{1}{k} \sum_{i \in S} gain(x_i, y)    (5)

\bar{r}_{ff'} = \frac{1}{k(n-k)} \sum_{i \in S} \sum_{j \in \bar{S}} gain(x_i, x_j)    (6)

where \bar{S} denotes the set of attributes that do not belong to S, n is the total number of attributes and y is the class attribute.
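As a concrete illustration of Equations (1)-(6), the sketch below computes the information gain between two discrete attribute vectors and the resulting merit of a candidate subset in Python. It assumes discrete (categorical or integer-coded) attributes; the function names (entropy, info_gain, subset_merit) and the NumPy-based implementation are illustration choices of ours, since the paper does not prescribe an implementation.

```python
import numpy as np

def entropy(v):
    """H(V) = -sum_v p(v) log2 p(v) for a discrete vector v  (Eq. 2)."""
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(x, y):
    """gain(X, Y) = H(Y) - H(Y|X) for discrete vectors x and y  (Eqs. 3-4)."""
    h_y_given_x = 0.0
    for value in np.unique(x):
        subset = y[x == value]
        h_y_given_x += (len(subset) / len(y)) * entropy(subset)
    return entropy(y) - h_y_given_x

def subset_merit(mask, X, y):
    """Worth f(S) of the subset selected by a 0/1 mask over the columns of X  (Eqs. 1, 5, 6).

    Assumes at least one feature is selected.
    """
    mask = np.asarray(mask)
    S = np.flatnonzero(mask == 1)          # indices of selected attributes
    S_bar = np.flatnonzero(mask == 0)      # attributes outside the subset
    k = len(S)
    r_cf = np.mean([info_gain(X[:, i], y) for i in S])                           # Eq. 5
    if len(S_bar) == 0:
        r_ff = 0.0                         # no outside attributes, so no redundancy term
    else:
        r_ff = np.mean([info_gain(X[:, i], X[:, j]) for i in S for j in S_bar])  # Eq. 6
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)                            # Eq. 1
```

For example, subset_merit([1, 0, 1, 0], X, y) scores the candidate subset consisting of the first and third attributes of a four-attribute dataset X with class labels y.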
B. Searching the feature subset space using BDE

1) The differential evolution (DE) algorithm

DE is a population-based heuristic search procedure that was first introduced by Rainer Storn and Kenneth Price in 1997 [4]. It starts with NP randomly generated individuals as solution vectors, applies mutation, crossover and selection operations under a numerical encoding, and returns the best individual as the answer to the problem [5]. Experimental studies have shown that DE has exceptional performance compared with other evolutionary algorithms on numerical optimization problems; it requires hardly any parameter tuning and works reliably, with excellent overall results over a wide range of benchmark and real-world problems.

First, NP individuals are generated randomly, each of the form

x_{i,G} = [x_{1,i,G}, x_{2,i,G}, \dots, x_{D,i,G}], \quad i = 1, 2, \dots, NP

where G is the generation number and D is the dimension of the problem. In each generation, the mutation operation produces for each individual x_i a donor vector v_i according to Equation (7):

v_{i,G+1} = x_{r_1,G} + F (x_{r_2,G} - x_{r_3,G})    (7)

where the mutation factor F is a constant in [0, 2] and the three vectors x_{r_1,G}, x_{r_2,G} and x_{r_3,G} are chosen randomly such that the indices i, r_1, r_2 and r_3 are mutually distinct values from {1, 2, ..., NP}.

Secondly, the crossover operation produces the trial vector u_i according to Equation (8):

u_{j,i,G+1} = \begin{cases} v_{j,i,G+1} & \text{if } rand_{j,i} \le CR \text{ or } j = I_{rand} \\ x_{j,i,G} & \text{if } rand_{j,i} > CR \text{ and } j \ne I_{rand} \end{cases}, \quad i = 1, \dots, NP, \; j = 1, \dots, D    (8)

where rand_{j,i} \sim U(0, 1), I_{rand} is a random integer from {1, 2, ..., D}, and CR is a user-defined crossover factor in [0, 1].

Lastly, the selection operation determines the target vector x_{i,G+1}: the trial vector u_{i,G+1} is compared with x_{i,G}, and the one with the lower fitness function value is admitted to the next generation, as in Equation (9). Mutation, crossover and selection continue until some stopping criterion is reached.

x_{i,G+1} = \begin{cases} u_{i,G+1} & \text{if } f(u_{i,G+1}) \le f(x_{i,G}) \\ x_{i,G} & \text{otherwise} \end{cases}, \quad i = 1, 2, \dots, NP    (9)

2) Feature selection using the BDE algorithm

The DE algorithm uses a numerical encoding for continuous search spaces. In this paper we transform the numerical encoding into a binary encoding according to the probability rule of Equation (10), and we call the resulting method binary differential evolution (BDE); it allows the approach to handle discrete optimization problems.

d_i = \begin{cases} 1 & \text{if } rand() < \exp(-|x_i|) \\ 0 & \text{otherwise} \end{cases}    (10)

where rand() is a random number between 0 and 1. Each individual is therefore encoded as a bit string of the form

1 0 1 ... 0

where a 1 indicates that the corresponding feature is selected; the selected features together form a candidate feature subset.
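The following short Python sketch shows one possible reading of the three operators above: the mutation of Equation (7), the binomial crossover of Equation (8) and the binary decoding of Equation (10). The helper names (mutate, crossover, binarize), the NumPy random generator and, in particular, the direction of the comparison in Equation (10) are assumptions made for illustration; they are not specified in that form by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)        # assumed random source shared by these sketches

def mutate(pop, i, F=0.3):
    """Donor vector v_i = x_r1 + F * (x_r2 - x_r3)  (Eq. 7)."""
    candidates = [r for r in range(len(pop)) if r != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def crossover(x, v, CR=0.7):
    """Binomial crossover producing the trial vector u_i  (Eq. 8)."""
    D = len(x)
    j_rand = rng.integers(D)          # guarantees at least one component comes from v
    mask = rng.random(D) <= CR
    mask[j_rand] = True
    return np.where(mask, v, x)

def binarize(x):
    """Map a continuous vector to a 0/1 feature mask  (Eq. 10, comparison direction assumed)."""
    return (rng.random(len(x)) < np.exp(-np.abs(x))).astype(int)
```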
The pseudo-code is as follows: Begin </p><p>G=1; Initialize the NP individuals ,i GX randomly; </p><p>For G=1 to Gmax do For i =1 to NP do Mutation step: for each individual ,i GX , get the donor </p><p>vector , 1i GV + according to formula 7; Crossing step: get the trail vector , 1i GU + according to </p><p>formula 8 by vector ,i GX and , 1i GV + ; Discrete step: get temp vector ,i GX and , 1i GU + </p><p>according to formula 10; Get the feature subset and compute ,i XJ , ,i UJ </p><p>according to formula 1 by vector ,i GX , , 1i GU + ; Selection step: get the target vector , 1i GX + according </p><p>to formula 9; End for </p><p>G=G+1; End for </p><p>End </p><p>III EXPERIMENTS In order to evaluate the effectiveness of BDE as a global </p><p>feature selector for common machine learning algorithms, experiments were performed using six standard datasets from the UCI collection [6]. The datasets and their characteristics are listed in Table 1. The parameters of BDE algorithm were set as follows: mutation factor F=0.3, crossing factor CR=0.7and Max iterations Gmax=10*dim (dim is dimension of dataset). Three data mining algorithms representing three diverse approaches to learning were used in the experiments: </p><p>Support Vector Machine (SVM), C&amp;R Tree and RBF network. All the experiments were conducted 10 runs. In each experiment, each data set was randomly divided into two parts: 60% as training sets and the rest as test sets .The correct classification rate averaged over 10 runs for the training sets and the test sets. The results were listed in table 2, 3,4. </p><p>ACKNOWLEDGMENT This work is supported by Mechanism Design Theory </p><p>and Application Research through Shaanxi Province Department of Education Research Project(08JK285)and Xian Polytechnic University Postgraduate Innovation Foundation(chx090721). </p><p>REFERENCES [1] John G. H; Kohavi.R; and Peger. P: Irrelevant features and </p><p>the subset selection problem. In Machine Learning: Proceedings of the Eleventh International Conference. Morgan Kaufmann, 1994. </p><p>[2] Mark A. Hall: Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning. Proceedings of the Seventeenth International Conference on Machine Learning, 2000, Pages: 359 366. </p><p>[3] Quinlan J. R. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993. </p><p>[4] Rainer.Storn; Kenneth.Price: Differential Evolution: A simple and efficient adaptive scheme for global optimization over continuous spaces. Global Optimization, 11, 1997, Pages: 341-359. </p><p>[5] Vesterstrom.J; Thomsen.R: A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems. Congress on Evolutionary Computation, 2004, Pages: 1980-1987. </p><p>[6] Blake.C; Keogh.E; Merz C.J: UCI Repository of Machine Learning Databases (1998). www.ics.uci.edu/mlearn/MLRepository.html. 
III. EXPERIMENTS

In order to evaluate the effectiveness of BDE as a global feature selector for common machine learning algorithms, experiments were performed on six standard datasets from the UCI collection [6]. The datasets and their characteristics are listed in Table 1. The parameters of the BDE algorithm were set as follows: mutation factor F = 0.3, crossover factor CR = 0.7 and maximum number of iterations Gmax = 10 * dim, where dim is the dimension of the dataset. Three data mining algorithms representing three diverse approaches to learning were used in the experiments: Support Vector Machine (SVM), C&R Tree and RBF network. Each experiment was run 10 times. In each run, every dataset was randomly divided into two parts, 60% as the training set and the rest as the test set, and the correct classification rate was averaged over the 10 runs for the training sets and the test sets. The sizes of the best feature subsets found by BDE are listed in Table 2, and the classification rates before (Bef) and after (Aft) feature selection are listed in Tables 3 and 4.

Table 1: The structure of the datasets used in the experiments

Dataset   Number of attributes   Instances   Classes
Vote      16                     435         2
Zoo       16                     101         7
Flare     10                     1066        2
Breast    9                      683         2
Lung      56                     32          2
Exactly   13                     1000        2

Table 2: Size of the best feature subset found by BDE

Dataset   Number of features in the best subset
Vote      10
Zoo       9
Flare     7
Breast    5
Lung      35
Exactly   9

Table 3: Correct classification rate on the training sets, before (Bef) and after (Aft) feature selection

          C&R Tree           SVM                RBF network
Dataset   Bef      Aft       Bef      Aft       Bef      Aft
Vote      0.9732   0.9349    1        0.9923    0.9732   0.9272
Zoo       1        1         1        1         1        1
Flare     0.8406   0.8219    0.8812   0.825     0.8219   0.8215
Breast    0.9756   0.9707    1        1         0.978    0.98
Lung      0.95     0.95      1        1         0.95     0.95
Exactly   0.765    0.74      1        0.8767    0.7017   0.705

Table 4: Correct classification rate on the test sets, before (Bef) and after (Aft) feature selection

          C&R Tree           SVM                RBF network
Dataset   Bef      Aft       Bef      Aft       Bef      Aft
Vote      0.8483   0.8736    0.5172   0.5172    0.8368   0.8909
Zoo       0.85     0.85      0.8      0.85      0.9      0.875
Flare     0.8333   0.8474    0.5609   0.5609    0.8774   0.878
Breast    0.9194   0.9304    0.6024   0.62      0.9487   0.9487
Lung      0.5333   0.6333    0.4      0.42      0.5233   0.4867
Exactly   0.7      0.685     0.4617   0.4617    0.675    0.67

ACKNOWLEDGMENT

This work is supported by the Mechanism Design Theory and Application Research project of the Shaanxi Province Department of Education (research project 08JK285) and by the Xi'an Polytechnic University Postgraduate Innovation Foundation (chx090721).

REFERENCES

[1] G. H. John, R. Kohavi and K. Pfleger, "Irrelevant features and the subset selection problem," in Machine Learning: Proceedings of the Eleventh International Conference, Morgan Kaufmann, 1994.
[2] M. A. Hall, "Correlation-based feature selection for discrete and numeric class machine learning," in Proceedings of the Seventeenth International Conference on Machine Learning, 2000, pp. 359-366.
[3] J. R. Quinlan, C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
[4] R. Storn and K. Price, "Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces," Global Optimization, vol. 11, 1997, pp. 341-359.
[5] J. Vesterstrom and R. Thomsen, "A comparative study of differential evolution, particle swarm optimization, and evolutionary algorithms on numerical benchmark problems," in Congress on Evolutionary Computation, 2004, pp. 1980-1987.
[6] C. Blake, E. Keogh and C. J. Merz, UCI Repository of Machine Learning Databases, 1998. www.ics.uci.edu/mlearn/MLRepository.html