Feature selection using tabu search method

Hongbin Zhang*, Guangyu Sun

Computer Institute of Beijing Polytechnic University, West San Huan North Road 56, Beijing 100044, People's Republic of China

Pattern Recognition 35 (2002) 701-711. Received 9 February 1999; received in revised form 6 July 2000; accepted 5 December 2000.

This work was supported by the National Natural Science Foundation of China (69675007) and the Beijing Municipal Natural Science Foundation (4972008). *Corresponding author. Tel.: +86-10-68907155. E-mail address: zhb@public.bta.net.cn (H. Zhang).

Abstract

Selecting an optimal subset from an original large feature set in the design of a pattern classifier is an important and difficult problem. In this paper, we use tabu search to solve this feature selection problem and compare it with classic algorithms, such as sequential methods and the branch and bound method, as well as with most other suboptimal methods proposed recently, such as the genetic algorithm and the sequential forward (backward) floating search methods. Based on the results of experiments, tabu search is shown to be a promising tool for feature selection with respect to both the quality of the obtained feature subset and computational efficiency. The effects of the parameters in tabu search are also analyzed by experiments. © 2001 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

Keywords: Feature selection; Tabu search; Pattern classifier; Search methods; Curse of dimensionality

1. Introduction

The problem of feature selection is important in designing a pattern classifier; its goal is to acquire an efficient subset so as to reduce the dimension of the feature set. When the number of initial features becomes too large, not only does the cost of collecting features increase, but the performance of the designed classifier cannot be guaranteed in the case of small sample size. From the standpoint of Bayesian decision rules there are no "bad" features, i.e., the performance of a Bayes classifier cannot be improved by eliminating a feature. In practice, however, the assumptions made in designing a Bayes classifier are usually not satisfied. As a consequence, for a given amount of samples, reducing the number of features may improve the performance of a non-ideal classifier.

In applications, there are two forms of feature selection; each form addresses a specific objective and leads to a distinct type of optimization. The first form is to find an optimal subset that has a predefined number of features and yields the lowest error rate of a classifier. This is a non-constrained combinatorial optimization problem. The other form of feature selection is aimed at seeking the smallest subset of features for which the error rate is below a given threshold. This is a constrained combinatorial optimization problem, in which the error rate serves as a constraint and the smallest subset of features is the primary search criterion.

Let F = {f_1, f_2, ..., f_n} be the initial set of features with cardinality n, and let S be a subset of F. |S| denotes the subset size, i.e., the number of features included in the subset. Let m denote the required subset size in the 1st form of the feature selection problem and t the tolerable threshold in the 2nd form. Let error(S) be the relevant error rate, or some other measure of performance such as class separability (e.g., Mahalanobis distance), when S is used to design the classifier. The two forms of the feature selection problem can be represented as follows:

1st form of feature selection:
min J(S) = error(S)
s.t. S ⊂ F, |S| = m, m < n.

2nd form of feature selection:
min J(S) = |S|
s.t. S ⊆ F, error(S) < t.
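As a concrete illustration of these two formulations (our own sketch, not part of the paper), error(S) can be estimated by training any classifier on the selected features only; the code below assumes a k-nearest-neighbour wrapper with cross-validation from scikit-learn and represents S as a boolean mask over the n original features.

```python
# Our own illustration of the two formulations; the paper does not prescribe
# a particular classifier, so a k-nearest-neighbour wrapper is assumed here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def error(S, X, y):
    """error(S): estimated misclassification rate when only the features
    selected by S (a boolean mask of length n) are used for classification."""
    if not S.any():
        return 1.0  # an empty subset carries no discriminative information
    clf = KNeighborsClassifier(n_neighbors=3)            # assumed classifier
    accuracy = cross_val_score(clf, X[:, S], y, cv=5).mean()
    return 1.0 - accuracy

def J_first_form(S, X, y, m):
    """1st form: minimize error(S) subject to |S| = m (infeasible -> +inf)."""
    return error(S, X, y) if S.sum() == m else np.inf

def J_second_form(S, X, y, t):
    """2nd form: minimize |S| subject to error(S) < t (infeasible -> +inf)."""
    return int(S.sum()) if error(S, X, y) < t else np.inf
```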
Theoretically, both forms of feature selection are NP-hard problems, so the optimal solution cannot be guaranteed except by exhaustive search of the solution space [1]. Exhaustive search is feasible only for small n; for larger n, the explosive computational cost makes it impracticable. The branch and bound search scheme (BB) introduced by Narendra and Fukunaga is more efficient because it prunes a large number of "infeasible" solutions [2]. On the assumption that the objective function is monotonic, this algorithm is potentially capable of examining all feasible solutions, so it is an optimal method. Unfortunately, the monotonicity condition is seldom satisfied. Moreover, even if BB saves 99.9% of the computational cost, the computational complexity remains exponential. Thus BB, and its improved version, the relaxed BB (which introduces the concept of approximate monotonicity) [3], are still impractical for large n.

The need to trade off the optimality and efficiency of algorithms for NP-hard problems was recognized early, and the main stream of feature selection research was thus directed toward suboptimal but efficient and robust methods. The preliminary work on feature selection started in the early 1960s. The approaches of that time were based on probabilistic measures of class separability and on entropies. In some methods the independence of features was assumed and the features were selected on the basis of their individual merits. Such methods ignore the interactions among features, and as a result the selected subsets are not satisfactory: as Cover pointed out, the best two independent features do not have to be the two best [4]. The sequential forward selection method (SFS), the sequential backward selection method (SBS), and their generalized versions GSFS and GSBS invented later are in essence greedy algorithms. They begin with a feature subset and sequentially add or remove features until some termination criterion is met, but they suffer from the so-called "nesting effect": features once discarded cannot be re-selected, and features once selected cannot be removed later. Since these algorithms do not examine all possible feature subsets, they are not guaranteed to produce the optimal result. The plus-l take-away-r (PTA) method was proposed to prevent the nesting effect, but there is no theoretical guidance for determining appropriate values of l and r. The Max-min (MM) method proposed by Backer and Shipper is computationally efficient [5]; it only evaluates the individual and pairwise merits of features. Experiments made by many researchers show that this method gives the poorest results [6,7], which confirms that it is not possible to select a feature subset in a high-dimensional space based on only two-dimensional information measures without substantial information loss.

In recent years, there have been some developments in the problem of feature selection. Siedlecki and Sklansky introduced the use of the genetic algorithm (GA) for this problem and obtained good results [8,9].
But the premature-convergence phenomenon in GA seems difficult to avoid, and the parameters in GA need to be carefully selected and tested. Pudil et al. improved the sequential methods by introducing sequential forward floating selection (SFFS) and sequential backward floating selection (SBFS) [10]. These two methods can be understood as plus 1-minus x and minus 1-plus x, where x is dynamically changed according to the backtracking effect. SFFS and SBFS avoid the problem of predefining l and r in PTA, and have a mechanism to control the depth of backtracking. According to the experimental results of Jain and Zongker [7], SFFS and SBFS achieve results comparable to the optimal algorithm (BB) but are faster than BB.

To conclude, although some progress has been made, the available feature selection techniques for large feature sets are not yet completely satisfactory. They are either computationally feasible but far from optimal, or they are optimal or almost optimal but cannot cope with the computational complexity of feature selection problems of realistic size. Recently, there has been a resurgence of interest in applying feature selection methods due to application needs. In the fusion of data from multiple sensors, the integration of multiple models, and data mining, the number of features is usually quite large; in some cases it may exceed 100 [7]. It is necessary to research more powerful methods for feature selection, which should give very good results and should be computationally more efficient. In Ref. [11], Sklansky stated that when the initial set contains 20 or more features and the selected subset contains 10 or fewer features, the problem of selecting a best or near-best subset can be quite difficult, because we must search a large (often astronomically large) space of subsets of candidate features and determine a figure of merit for an optimum or near-optimum classifier operating on each subset in this space. He referred to such problems as large-scale feature selection. Nowadays, with the upgrading of computer performance, feature selection with about 20 features has become easier, and the feature selection problem with 20-49 features has become a so-called "medium-sized" problem. But for a classifier based on a personal computer, feature selection involving a few dozen or over 100 features is still a difficult and challenging problem.

It has been argued that, since feature selection is typically done in an off-line manner, the execution time of a particular algorithm is much less important than its ultimate classification performance. This is generally true for feature sets of moderate size. However, if the number of features reaches a few dozen or more, the execution time becomes extremely important, as it may be impractical to run some algorithms even once on such large data sets.

In a recent paper [12], Kudo and Sklansky carried out a comparative study of algorithms for large-scale feature selection (where the number of features is over 50). Unfortunately, their comparative study does not include the tabu search method. In this paper, we introduce the use of the tabu search method for feature selection and compare the performance of tabu search with some other algorithms. In the following section, a brief description of tabu search is presented. In Section 3, experimental results of tabu search are shown, together with comparisons to other methods.
The parameters in tabu search will be discussed in Section 4, and a short conclusion follows in Section 5.

2. Tabu search

Tabu search, proposed by Glover [13,14], is a metaheuristic method that can be used to solve combinatorial optimization problems. It has received widespread attention recently: its flexible control framework and several spectacular successes in solving NP-hard problems have caused rapid growth in its application. It differs from local search techniques in that tabu search allows moves to a new solution that makes the objective function worse, in the hope of not becoming trapped in local optima. Tabu search uses a short-term memory, called the tabu list, to record and guide the search process. In addition to the tabu list, long-term memories and other prior information about the solutions can be used to improve the intensification and/or diversification of the search.

The tabu search scheme can be outlined as follows. Start with an initial (current) solution x, called a configuration, and evaluate the criterion function for that solution. Then follow a certain set of candidate moves, called the neighborhood N(x) of the current solution x. If the best of these moves is not tabu (i.e., not in the tabu list), or if it is tabu but satisfies the aspiration criterion (explained below), then pick that move and consider it to be the new current solution. Repeat the procedure for a certain number of iterations. On termination, the best solution obtained so far is the solution of the tabu search. Note that the solution picked at a certain iteration is put in the tabu list (TL) so that it is not allowed to be reversed in the next l iterations, i.e., this solution is tabu; l is the size of TL. When the length of the tabu list reaches that size, the first solution on the TL is freed from being tabu and the new solution enters the list, and the process continues. The TL acts as a short-term memory: by recording the history of the search, tabu search can control the direction of subsequent searches. The aspiration criterion could reflect the value of the criterion function, i.e., if a tabu solution results in a value of the criterion function that is better than the best known so far, then the aspiration criterion is satisfied, the tabu restriction is lifted, and the move to this solution is allowed.

Let S_best denote the best solution obtained so far and TL the tabu list. The algorithmic description of the tabu search can be summarized as follows:

(1) Initialize. Generate an initial solution x, and let S_best = x, k = 1, TL = ∅.

(2) Generate candidate set. Randomly pick a certain number of solutions from the neighborhood of x to form the candidate set N(x).

(3) Move. (a) If N(x) = ∅, go back to step 2 to regenerate the candidate set. Otherwise, find the best solution y in N(x). (b) If y ∈ TL, i.e., it is tabu, and y does not satisfy the aspiration criterion, let N(x) = N(x) − {y} and go to 3(a). Otherwise, let x = y, and let S_best = y if y is better than S_best.

(4) Output. If the termination condition is satisfied, stop and output S_best. Otherwise, let TL = TL ∪ {x}. (Add the new solution to the tail of TL; if the length of TL exceeds the predefined size, remove the head item of the list.) Let k = k + 1 and go back to step 2.

The above steps outline the basic tabu search for solving combinatorial optimization problems; a minimal sketch of this loop applied to feature selection is given below.
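The sketch below is our own illustration, not the authors' implementation: solutions are sets of feature indices, the neighborhood consists of random single-feature flips, and the candidate-set size, tabu-list length, and iteration count are placeholder values rather than the paper's settings.

```python
import random
from collections import deque

def tabu_search(n_features, J, n_iter=200, n_candidates=20, tabu_size=7, seed=0):
    """Basic tabu search over feature subsets represented as frozensets of
    feature indices. J maps a subset to the criterion value to be minimized,
    e.g. error(S) for the 1st form or |S| + p(e(S)) for the 2nd form.
    All numeric parameters here are illustrative only."""
    rng = random.Random(seed)
    # Step 1: initial solution x, best-so-far S_best, empty tabu list TL.
    x = frozenset(i for i in range(n_features) if rng.random() < 0.5)
    S_best, best_val = x, J(x)
    TL = deque(maxlen=tabu_size)      # oldest entry is dropped automatically
    for _ in range(n_iter):
        # Step 2: candidate set N(x) = random single-feature flips of x.
        N = {x ^ {rng.randrange(n_features)} for _ in range(n_candidates)}
        # Step 3: take the best non-tabu candidate, unless a tabu candidate
        # beats the best solution found so far (aspiration criterion).
        for val, y in sorted(((J(y), y) for y in N), key=lambda vy: vy[0]):
            if y not in TL or val < best_val:
                x = y
                TL.append(x)          # Step 4: the new solution becomes tabu
                if val < best_val:
                    S_best, best_val = x, val
                break
        # If every candidate was tabu, fall through and regenerate N(x).
    return S_best, best_val
```

For the 1st-form problem one would additionally keep |S| = m, e.g. by using moves that swap one selected feature for one unselected feature instead of single flips; the effect of such parameter choices is the subject of Section 4.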
For more details on tabu search, as well as a list of successful applications, the reader is encouraged to refer to Glover [13-15]. In the next section, we develop a new method for feature selection based on the tabu search technique.

3. Application of tabu search to feature selection

In this section, we present our tabu search-based algorithms for the two forms of the feature selection problem and compare, by experiments, the performance of tabu search with that of other algorithms, such as the classical sequential methods, the branch and bound method, the genetic algorithm, and the sequential floating methods.

3.1. Feature selection based on misclassification error rate

The 2nd-form feature selection problem stated in Section 1 is a constrained optimization problem. Generally, a constrained optimization problem can be converted into an equivalent non-constrained optimization problem by transforming the constraint into a penalty function. The optimization problem of the 2nd-form feature selection can be converted into the following form:

min J(S) = |S| + p(e(S))
s.t. S ⊆ F,

where p(e(S)) is the penalty function. We adopt the penalty function used by Siedlecki and Sklansky [9], i.e.,

p(e) = [exp((e - t)/m) - 1] / [exp(1) - 1],

where e is the error rate, t the tolerant error rate, and m a scale factor. This penalty function is...
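A direct transcription of this penalty and the resulting unconstrained criterion, as a minimal sketch under the notation above (the numeric defaults for t and the scale factor are our placeholders, not values from the paper):

```python
import math

def penalty(e, t, m_scale):
    """p(e) = (exp((e - t)/m) - 1) / (exp(1) - 1): zero at e = t, negative
    below the tolerance, equal to 1 at e = t + m, and growing exponentially
    as the error rate e exceeds the tolerance t."""
    return (math.exp((e - t) / m_scale) - 1.0) / (math.exp(1.0) - 1.0)

def J_penalized(size_S, error_S, t=0.10, m_scale=0.05):
    """Unconstrained 2nd-form criterion |S| + p(e(S)), with size_S = |S| and
    error_S = e(S). The defaults for t and m_scale are placeholders only."""
    return size_S + penalty(error_S, t, m_scale)
```

The criterion rewards small subsets while the penalty grows sharply once the error rate exceeds the tolerance t, which is exactly the trade-off the 2nd-form problem encodes; in the tabu search sketch of Section 2, J would then compute error(S) for each candidate subset and return J_penalized(|S|, error(S)).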