A combined global and local search method to deal with constrained optimization for continuous tabu search


INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING
Int. J. Numer. Meth. Engng 2008; 76:1869-1891
Published online 10 July 2008 in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/nme.2393

A combined global and local search method to deal with constrained optimization for continuous tabu search

Xi Chen, Jing Yang, Zhaohua Li, Daqing Tian and Zhijiang Shao
State Key Laboratory of Industrial Control Technology, Department of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China

SUMMARY

Heuristic methods, such as tabu search, are efficient for global optimization. Most studies, however, have focused on unconstrained problems. Penalty functions are the most common way for global optimization algorithms to handle constraints. This is sometimes inefficient, especially for equality constraints, as it is difficult to keep the global search within the feasible region purely by adding a penalty to the objective function. This paper proposes a combined global and local search method for constrained optimization, demonstrated by combining continuous tabu search (CTS) and sequential quadratic programming (SQP). First, a nested inner- and outer-loop method is presented to lead the search into the feasible region. SQP, a typical local search method, quickly solves a non-linear program built purely from the constraints in the inner loop and provides feasible neighbors to the outer loop. CTS, in the outer loop, seeks the global optimum. Finally, another local search using SQP, initialized with the CTS result, refines the global search result. Efficiency is demonstrated on a number of benchmark problems. Copyright © 2008 John Wiley & Sons, Ltd.

Received 7 August 2007; Revised 30 April 2008; Accepted 30 April 2008

KEY WORDS: constrained optimization; tabu search; SQP

1.
INTRODUCTION

Correspondence to: Xi Chen, State Key Laboratory of Industrial Control Technology, Department of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China. E-mail: xichen@iipc.zju.edu.cn
Contract/grant sponsors: National Key Basic Research and Development Program of China (2002CB312200); National Natural Science Foundation of China (60704029); 863 Program of China (2007AA04Z192)

Tabu search (TS) is a meta-heuristic method originally developed for combinatorial optimization by Glover [1, 2]. It employs adaptive memory and responsive exploration to search the solution space effectively and to prevent the search from being trapped in a local optimum. Good performance has been demonstrated in a number of successful applications to combinatorial optimization problems, such as graph coloring [3], quadratic assignment [4], electronic circuit design [5] and scheduling [6]. In recent years, TS has also been employed to solve continuous optimization problems. Hu [7] may have been the first to introduce TS to continuous optimization, but the main principle of his method is rather far from the original TS. Siarry and Berthiau [8] presented an adaptation of the original simple TS to continuous optimization; however, it could handle only low-dimensional problems, and concepts such as intensification, diversification and aspiration level were not included. Later, Chelouah and Siarry [9] developed an improved continuous tabu search (CTS) algorithm in which diversification and intensification strategies were introduced. Wang et al. [10] studied selection in the central area and proposed a further improved CTS algorithm.
The above-mentioned methods have been successfully demonstrated on a number of continuous optimization problems. However, owing to their heuristic nature, these methods have been limited to unconstrained problems; a similar situation holds for other global search methods. In real-world applications, however, many continuous optimization problems are constrained. They can be defined as follows:

min f(x)
s.t. g_i(x) ≤ 0,  i = 1, ..., p
     h_j(x) = 0,  j = 1, ..., q
     x_lb ≤ x ≤ x_ub                (1)

where x is a vector of n variables (x_1, x_2, ..., x_n) with a lower bound x_lb and an upper bound x_ub; g_i(x) ≤ 0, i = 1, ..., p, are inequality constraints; and h_j(x) = 0, j = 1, ..., q, are equality constraints. To date, several strategies for handling the constraints of non-linear optimization problems within evolutionary algorithms have been reported. These strategies can be classified into the following categories [11, 12]:

(a) Methods based on penalty functions.
(b) Methods based on preserving feasibility of solutions.
(c) Methods that make a clear distinction between feasible and infeasible solutions.
(d) Methods based on decoders.
(e) Other hybrid methods.

Among them, the most commonly used approach is based on penalty functions, which can be static [13, 14], dynamic [15, 16] or adaptive [17]. The essential idea of a penalty function is to transform a constrained problem into a sequence of unconstrained problems. But when this approach is combined with CTS to solve problems with equality constraints, it often fails to find even a feasible solution after lengthy calculation, let alone the optimal one. This is because neighbor solutions are randomly generated by perturbing the current one, so they have a very low probability of falling into the feasible region when equality constraints are present.
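As a concrete illustration of the penalty transform just described, here is a minimal Python sketch; the example objective, constraint and weight `r` are invented for illustration and are not taken from the paper:

```python
# Quadratic-penalty transform of the constrained problem (1): minimize
# f(x) + r * (sum_j h_j(x)^2 + sum_i max(0, g_i(x))^2) instead of the
# constrained f. The functions and the weight r below are illustrative.

def penalized_objective(f, gs, hs, r):
    """Build an unconstrained objective that embeds the constraints."""
    def F(x):
        eq = sum(h(x) ** 2 for h in hs)              # equality violations
        ineq = sum(max(0.0, g(x)) ** 2 for g in gs)  # inequality violations
        return f(x) + r * (eq + ineq)
    return F

# Example: min x0^2 + x1^2  s.t.  x0 + x1 - 1 = 0; the feasible point
# (0.5, 0.5) incurs no penalty, the infeasible origin a large one.
F = penalized_objective(lambda x: x[0] ** 2 + x[1] ** 2,
                        gs=[], hs=[lambda x: x[0] + x[1] - 1.0], r=1e4)
```

Because the equality-constraint term vanishes only on the constraint surface itself, randomly generated neighbors almost never avoid the penalty, which is exactly the difficulty described above.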
Lin and Miller [18] used a direct substitution method to remove equality constraints for CTS. It can deal with problems with simple constraints; for relatively complex problems, however, its application is difficult. To exploit the advantages of TS on constrained optimization problems, a proper constraint handling method is required. In this paper, a combined global and local search method to deal with constrained optimization for CTS is proposed. It introduces sequential quadratic programming (SQP), a typical local search method, into CTS to deal with constraints. It can lead and keep the global search in the feasible region and refine the global search result.

2. PROPOSED CONSTRAINT HANDLING METHOD FOR CTS

2.1. Overview of the proposed constraint handling method

Considering these drawbacks of the penalty function method, we propose a new constraint handling method for CTS by first transforming the non-linear constrained problem into a nested optimization problem. As shown in Figure 1, the first step is a nested inner- and outer-loop optimization structure. It uses the SQP method in the inner loop to quickly lead the search back into the feasible region as far as possible, and CTS in the outer loop to deal with local optimality within the feasible region. The constraints in Equation (1) are handled by constructing a sub-optimization problem in the inner loop as follows:

min G(x, w_1, w_2) = Σ_{j=1}^{q} w_{1j} h_j(x)^2 + Σ_{i=1}^{p} w_{2i} [max{0, g_i(x)}]^2
s.t. x_lb ≤ x ≤ x_ub                (2)

[Figure 1 shows the combined structure: an inner loop (SQP, min G(x)) nested in an outer loop (CTS, min f(x)+G(x)) together address min f(x) s.t. g(x) ≤ 0, h(x) = 0, followed by a final SQP run on the original constrained problem.]

Figure 1.
A combined global and local search structure for constrained optimizations.

where w_1 and w_2 are two weight vectors. In our problems, the weights are simplified to unity vectors, and Equation (2) thus reduces to

min G(x) = Σ_{j=1}^{q} h_j(x)^2 + Σ_{i=1}^{p} [max{0, g_i(x)}]^2
s.t. x_lb ≤ x ≤ x_ub                (3)

The max and squared terms ensure that the objective function is non-negative. In addition, it can be proved that G has continuous gradients if h and g have continuous gradients. If all constraints of the original problem are satisfied, the objective value of this optimization problem is zero; if any constraint is violated, the objective value is larger than zero. Therefore, the optimal solution of this sub-optimization problem is also a feasible solution of the original non-linear constrained problem. Minimizing the inner-loop sub-optimization problem thus guides the search into the feasible region. We use the SQP method to solve the inner-loop sub-optimization, which involves the constraints only. Ideally, the outer loop then only needs to perform a global search on the original objective function over the feasible solutions supplied by the inner loop. In some cases, however, the inner-loop problem constructed as Equation (3) may fail to reach its global minimum, or may take too long to reach it, owing to the complexity of the constraints. To control the time cost, we fix the maximal number of iterations in the inner loop, which means the outer-loop search is not guaranteed to remain within the feasible region. Therefore, we formulate a penalty function in the outer-loop optimization as

min F(x) = f(x) + G(x)
s.t.
x_lb ≤ x ≤ x_ub                (4)

where f is the original objective function in Equation (1) and G is the function constructed in Equation (3). If the inner-loop optimization succeeds, the outer-loop objective is identical to the original objective function; whenever the inner loop does not converge to zero, the outer-loop objective is penalized by the addition of G.

SQP, a gradient-based local search method, is used for the inner-loop problem on the assumption that it is sufficient for solving the problem constructed by Equation (3). This assumption rests on two observations. First, many of the local minima and other difficulties in the benchmark examples arise from the combination of the objective function and the constraints; for the constraints alone, a local search method should have little difficulty in obtaining a feasible solution provided the constraint functions are continuous. Second, even if solving Equation (3) does end at a local minimum, the penalty term in Equation (4) will help; in that case the method is no worse than traditional penalty-function-based methods. The results presented in Section 3 support this statement.

After CTS finishes its global search in the outer loop, another SQP module is activated, as also shown in Figure 1. Taking the final CTS result as an initial point, a gradient-based local search refines the result obtained by the global search. This module also helps the search reach the bottom of the valley, which the global search has little chance of reaching exactly because it lacks gradient guidance. Moreover, as the global search result is already close to the global optimum, this final gradient-based local search costs little time.
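A minimal sketch of this nested structure, using SciPy's SLSQP routine as a stand-in for the SQP solver (an assumption; the paper does not name a specific implementation), might look as follows; the toy constraint and bounds are invented:

```python
# Inner loop of the proposed method: minimize the constraint-violation
# measure G(x) of Equation (3) to pull a candidate back toward the
# feasible region. SciPy's SLSQP stands in for the SQP solver here; the
# toy constraint h(x) = x0 + x1 - 1 and the bounds are illustrative.
import numpy as np
from scipy.optimize import minimize

def G(x, hs, gs):
    """Constraint-violation measure of Equation (3)."""
    return (sum(h(x) ** 2 for h in hs)
            + sum(max(0.0, g(x)) ** 2 for g in gs))

def repair(x0, hs, gs, bounds, maxiter=50):
    """Inner loop: a few SQP-style iterations on G only, as in Figure 1."""
    res = minimize(G, x0, args=(hs, gs), bounds=bounds,
                   method='SLSQP', options={'maxiter': maxiter})
    return res.x

hs = [lambda x: x[0] + x[1] - 1.0]           # equality constraint
x_feas = repair(np.array([2.0, 2.0]), hs, [], bounds=[(0, 2), (0, 2)])
# The outer loop (CTS) would then rank such repaired neighbors by
# F(x) = f(x) + G(x) of Equation (4).
```

Capping `maxiter` mirrors the fixed inner-loop iteration budget described above: the repair may stop short of feasibility, in which case the residual G simply penalizes the outer-loop objective.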
2.2. SQP algorithm

SQP methods [19, 20] are a class of optimization methods that solve a quadratic programming (QP) sub-problem at each iteration. Each QP sub-problem minimizes a quadratic model of a modified Lagrangian function subject to linearized constraints. A merit function is reduced along each search direction to ensure convergence from any starting point. Given the general NLP problem in Equation (1), the QP sub-problem is defined by linearizing both the inequality and equality constraints:

min_{d_k ∈ R^n}  (1/2) d_k^T H_k d_k + ∇f(x_k)^T d_k
s.t. ∇h_j(x_k)^T d_k + h_j(x_k) = 0,  j = 1, 2, ..., q
     ∇g_i(x_k)^T d_k + g_i(x_k) ≤ 0,  i = 1, 2, ..., p
     x_lb ≤ x_k ≤ x_ub                (5)

where the matrix H_k is a positive-definite approximation of the Hessian of the Lagrangian function, which can be updated by any quasi-Newton method. The QP sub-problem can be solved by any QP algorithm, and its solution d_k is used to form the new iterate

x_{k+1} = x_k + α_k d_k                (6)

where the step length α_k is determined by an appropriate line search procedure so that a sufficient decrease in a merit function is obtained. SQP has shown good performance on constrained optimization problems.

2.3. CTS algorithm

TS is a meta-heuristic method originally developed for combinatorial optimization by Glover. A number of later adaptations extended its use to continuous optimization. CTS can be summarized as follows. For the current point x, its neighborhood H(x) is first defined following certain rules, and a set of neighbor candidates is randomly generated within it. To cover the feasible region well, a diversification strategy may be required for neighbor generation. The next move from point x is selected among the neighbor candidates, normally by comparing objective function values. A tabu list created during the iteration stores recently visited points so as to avoid being trapped in a local minimum.
Points in the tabu list are prohibited from selection as the next move. An aspiration criterion is also used to release some points from the tabu list during the iteration. The procedure is repeated until a termination criterion is satisfied. Some details of the developed algorithm are presented below.

2.3.1. Neighbor generation. Following the diversification strategy, the neighbor space of the current solution x, as shown in Figure 2, is partitioned by a set of concentric hyper-rectangles H_i(x, h_{i-1}, h_i) with radii h_0, h_1, ..., h_k such that

H_i(x, h_{i-1}, h_i) = {x' : h_{i-1,j} ≤ |x'_j - x_j| < h_{i,j}, j = 1, ..., n}                (7)

[Figure 2 depicts the concentric hyper-rectangles H_0(x, h_0), H_1(x, h_1, h_0), H_2(x, h_2, h_1) and H_3(x, h_3, h_2) around the current point x.]

Figure 2. Neighbor space partition of CTS.

where the radius h_0 is an independent parameter. Any new point x' satisfying Equations (7)-(9) is considered a neighbor candidate. A number of initial neighbors are first generated within the hyper-rectangles without considering the constraints. The constraint handling method proposed in the previous section then carries each initial neighbor to a new candidate solution by quickly solving the sub-optimization problem in Equation (3) with the SQP method. The collection of new candidate solutions is sorted by the objective function value of Equation (4) to form the new neighborhood for the outer-loop TS optimization.

2.3.2. Tabu criteria. To prevent the search from being trapped in a local minimum, TS uses a tabu list to restrict revisits to previous solutions. Unlike combinatorial optimization, however, continuous TS restricts an area rather than a single point. In this paper the tabu criteria of [10] are adopted. The tabu condition is organized as a twofold tabu list, which contains all the solutions visited during the last L iterations together with their objective function values.
To check whether a solution is tabooed, the algorithm first determines whether its objective function value, F(x), is within a tolerance ε_f of any function value in the tabu list. It should be noted that to restric...
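Returning to the neighbor generation of Section 2.3.1, the concentric-shell sampling of Equation (7) can be sketched as follows (a simplified illustration: one candidate per shell, per-coordinate radii, with made-up radius values):

```python
# Sketch of CTS neighbor generation (Section 2.3.1): each candidate is
# drawn from one concentric hyper-rectangular shell around the current
# point x, i.e. every per-coordinate offset magnitude lies between the
# shell's inner and outer radius, as in Equation (7). The radius values
# used in the example are illustrative, not from the paper.
import random

def sample_shell(x, lo, hi):
    """Draw x' whose offset |x'_j - x_j| lies in [lo, hi] for every j."""
    return [xj + random.choice((-1.0, 1.0)) * random.uniform(lo, hi)
            for xj in x]

def neighbors(x, radii):
    """One candidate per shell; radii = [h0, h1, ..., hk], increasing."""
    lows = [0.0] + list(radii[:-1])
    return [sample_shell(x, lo, hi) for lo, hi in zip(lows, radii)]

cands = neighbors([1.0, 2.0], [0.1, 0.5, 1.0])   # three shells, three candidates
```

In the full algorithm, each raw candidate would then be repaired by the inner-loop SQP step of Section 2.1 before the repaired set is ranked by the outer-loop objective.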

