
PROCESS DESIGN AND CONTROL

Ants Foraging Mechanism in the Design of Multiproduct Batch Chemical Process

Wang Chunfeng* and Zhao Xin

Institute of Systems Engineering, Tianjin University, Tianjin, P.R. China 300072

In this paper, a novel evolutionary approach, the ants foraging mechanism (AFM), that effectively overcomes local optima is presented for the solution of the optimal design of multiproduct batch chemical processes. To demonstrate the effectiveness of AFM in solving the proposed problem, four examples adopted from the literature are presented, together with the computation results. Satisfactory results are obtained in comparison with the results of mathematical programming (MP), tabu search (TS), genetic algorithm (GA), and the simulated annealing (SA) algorithm.

Introduction

Batch processes are widely used in the chemical process industry and are of increasing industrial importance because of the great emphasis on low-volume, high-value-added chemicals and the need for flexibility in a market-driven environment. If two or more products require similar processing steps and are to be produced in low volume, it is economical to use the same set of equipment to manufacture them all. Thus, a multiproduct batch chemical process is proposed, requiring that all of the products follow essentially the same path through the process and use the same equipment, that only one product be manufactured at a time, and that production take place in a series of production runs or campaigns for each product in turn.1 In the optimal design of a multiproduct batch chemical process, the production requirements of each product and the total production time available for all products are specified. The number and size of parallel equipment units in each stage as well as the location and size of intermediate storage are determined to minimize the investment costs.

The common approach used by previous research in solving the batch process design problem is to formulate it as a mixed integer nonlinear programming (MINLP) problem and then employ optimization techniques to solve it. Mathematical programming (MP) and heuristics1-4 are commonly used. A design problem without scheduling considerations using a minimal capital cost design criterion was formulated by Robinson and Loonkar in 1970.5 In 1979 and 1982, Grossmann and Sargent,6 Knopf et al.,7 and Takamatsu et al.8 improved the MP methods and applied them to the design problem. Moreover, in 1989, Espuña and Puigjaner used an efficient optimization strategy based on gradient calculation, with the advantage of reducing the computing time, to solve the problem.9 In 1994, Barbosa-Póvoa and Macchietto constructed the model, including very general constraints and objective functions and permitting both capital costs of equipment units and pipework and operating costs and revenues to be taken into account. As a result, a branch-and-bound method was adopted to solve the model successfully.10 Because of the NP-hard nature of the design problem of batch chemical processes, an impractically long computational time will be induced by the use of MP when the design problem is somewhat complicated. Severe initial values for the optimization variables are also necessary. Moreover, as the size of the design problem increases, MP will become futile. Heuristics requires less computational time and does not necessitate severe initial values for optimization variables. However, it can end up with a local optimum because of its greedy nature. Also, heuristics is not a general method because it requires special rules for particular problems. Patel et al.11 and Tricoire and Malone12 used simulated annealing (SA) to solve the design problem of multiproduct batch chemical processes. SA performs effectively and gives a solution within 0.5% of the global optimum. However, SA has the disadvantage of long searching times, and hence, it requires more CPU time than heuristics. To accelerate the convergence of SA, Wang et al.13 combined SA with heuristics to solve the design problem of multiproduct batch chemical processes and obtained satisfactory results. Wang et al.14,15 also successfully applied genetic algorithm (GA) and tabu search (TS) approaches to the problem. Lin and Floudas put forward a model that accounts for the tradeoff between capital costs, revenues, and operational flexibility by considering design, synthesis, and scheduling simultaneously. They also used a branch-and-bound method to solve the resulting mixed integer linear program (MILP) and MINLP models and obtained satisfactory results.16

To solve the proposed problem more effectively, the ants foraging mechanism (AFM), a novel evolutionary approach, is presented in this paper, and satisfactory results are obtained.

The rest of the paper is organized as follows: Section 2 presents a mathematical model for the problem of designing multiproduct batch chemical processes. The basic ideas of AFM are introduced in section 3. The adaptation of AFM to the proposed optimization problem is described in section 4. To demonstrate the effectiveness of AFM in solving the proposed problem, four problems adopted from the literature, together with the computation results obtained using AFM, are presented in section 5. Comparisons with TS, MP, GA, and SA are given in section 6. Finally, section 7 provides the summary and conclusions.

* To whom correspondence should be addressed. E-mail: [email protected]. Fax: 86-22-27401658.

Ind. Eng. Chem. Res. 2002, 41, 6678-6686. 10.1021/ie010932r CCC: $22.00 © 2002 American Chemical Society. Published on Web 11/16/2002.

Mathematical Model of MBCP

The optimal design of multiproduct batch processes can be formulated according to a MINLP model. This paper employs Modi's model modified by Xu et al.4 It includes the following assumptions: (1) The processes operate in the way of overlay. (2) The devices in a given production line cannot be reused by the same product. (3) The long campaign and the single-product campaign are considered. (4) The type and size of parallel items in- or out-of-phase are the same in one batch stage. (5) All intermediate tanks are finite. (6) The operation between stages can satisfy zero-wait or no-intermediate-tank conditions when there is no storage. (7) There is no limitation on the utility. (8) The cleaning time of the batch item can be neglected or included in the processing time. (9) The size of a device can change continuously in its own range.

Assuming that (1) there are J batch stages, K semicontinuous stages, and I products to be manufactured; (2) there are $m_{oj}$ out-of-phase groups of parallel units in each batch stage, in which the sizes are all $V_j$; (3) there are $n_k$ parallel units in-phase in each semicontinuous stage, the operating rates of which are all $R_k$; and (4) there are S - 1 intermediate tanks that divide the whole process into S subsystems. Also, let

$$J_s = \{\, j \mid \text{batch stage } j \text{ belongs to subprocess } s \,\}, \quad s = 1, ..., S$$

$$T_s = \{\, t \mid \text{semicontinuous substrain } t \text{ belongs to subprocess } s \,\}, \quad s = 1, ..., S$$

$$U_t = \{\, k \mid \text{semicontinuous stage } k \text{ belongs to semicontinuous substrain } t \,\}, \quad t = 1, ..., T$$

Then, using the equipment investment as a criterion of optimization, which can be expressed as a power function of the characteristic dimension of the equipment, the following mathematical model can be obtained

$$\min f(V,R) = \sum_{j=1}^{J} m_{oj} m_{pj} a_j V_j^{\alpha_j} + \sum_{k=1}^{K} n_k b_k R_k^{\beta_k} + \sum_{s=1}^{S-1} c_s (V_s^*)^{\gamma_s} \qquad (1)$$

subject to the following constraints:

(1) Dimensional Constraints. Each piece of equipment can be altered in its allowable range

$$V_j^{\min} \le V_j \le V_j^{\max}, \quad j = 1, ..., J \qquad (2)$$

$$R_k^{\min} \le R_k \le R_k^{\max}, \quad k = 1, ..., K \qquad (3)$$

(2) Time Constraint. The sum of available production time for all products is not more than the total time for production

$$H \ge \sum_{i=1}^{I} H_i = \sum_{i=1}^{I} \frac{Q_i}{P_i} \qquad (4)$$

where the corresponding variables for product i are defined as follows:

(a) productivity for product i

$$P_i = \frac{B_{is}}{T_{is}^{L}}, \quad i = 1, ..., I; \ s = 1, ..., S \qquad (5)$$

(b) limiting cycle time for product i in subprocess s

$$T_{is}^{L} = \max_{j \in J_s,\, t \in T_s} \left[ T_{ij},\ \theta_{it} \right], \quad i = 1, ..., I; \ s = 1, ..., S \qquad (6)$$

(c) cycle time for product i in batch stage j

$$T_{ij} = \frac{\theta_{iu} + P_{ij} + \theta_{i(u+1)}}{m_{oj}}, \quad i = 1, ..., I; \ j = 1, ..., J \qquad (7)$$

(d) processing time for product i in batch stage j

$$P_{ij} = P_{ij}^{0} + g_{ij} \left( \frac{B_{is}}{m_{pj}} \right)^{d_{ij}}, \quad i = 1, ..., I; \ j = 1, ..., J; \ j \in J_s \qquad (8)$$

(e) operating time for product i in substrain t

$$\theta_{it} = \max_{k \in U_t} \left[ \frac{B_{is} D_{ik}}{R_k n_k} \right], \quad i = 1, ..., I; \ t = 1, ..., T; \ t \in T_s \qquad (9)$$

(f) batch size for product i in subprocess s

$$B_{is} = \min_{j \in J_s} \left( \frac{m_{pj} V_j}{s_{ij}} \right), \quad i = 1, ..., I; \ s = 1, ..., S \qquad (10)$$

(3) Product Quantity Constraints. A given product exhibits the same productivity in all subprocesses.

(4) Storage Size Constraints. The size of intermediate storage is the maximum of what is needed by all products

$$V_s^{*} = \max_i \left[ P_i S_{is}^{*} \left( T_{is}^{L} - \theta_{iu} + T_{i(s+1)}^{L} - \theta_{i(u+1)} \right) \right], \quad i = 1, ..., I; \ s = 1, ..., S - 1 \qquad (11)$$

Using the mathematical model to optimize a design for a given product demand, the size and number of each type of equipment must be calculated to minimize the equipment investment.

Ants Foraging Mechanism

In this section, the ants foraging mechanism, a new evolutionary method, is proposed according to the mechanism of army ant foraging and ACO (ant colony optimization)17 to solve optimization problems in continuous space.

ACO. ACO, which simulates the foraging behavior of ants, was first proposed by Dorigo and colleagues as a multiagent approach for solving difficult combinatorial optimization problems such as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP).18 Ants are social insects that live in colonies whose behavior is directed more toward the survival of the colony as a whole than that of a single individual of

the colony. An important and interesting behavior of ant colonies is their foraging behavior. While walking from food sources to the nest and vice versa, ants deposit a substance called pheromone on the ground and form a pheromone trail. Ants can smell pheromone and, when choosing their way, tend to choose, with high probability, paths marked by strong pheromone concentrations (shorter paths). Also, other ants can use pheromone to find the locations of food sources found by their nestmates. In fact, ACO simulates the optimization of ant foraging behavior. Although it can be applied to discrete combinatorial optimization problems, ACO is not easily used to solve continuous optimization problems well because the shortest path it finds is the arc that only links the discrete sites.

Basic Idea of AFM. To solve continuous problems, we put forward the ants foraging mechanism (AFM), which combines army ants' foraging behavior and ACO. A neighborhood-seeking mechanism, which is different from the tabu mechanism in ACO, is introduced to deal with continuous optimization in AFM. In addition, army ant foraging behavior, which is stronger than that of common ants in ACO, is simulated in our AFM. As a special race of ants, thousands of army ants will leave their nest to form large columns and swarms in a matter of hours to find food for the colony, and their raids are able to sweep out an area of 1000 m2 in a single day.

In fact, AFM, which simulates the strong foraging ability of army ants, is basically a kind of multiagent neighborhood-search method. From the initial solutions, it can find the best solution in the neighborhood of the given solutions. Then, taking the new solutions as initial solutions, AFM repeats the above step as long as necessary. The core of AFM consists of an aspiration criterion, movement probability, transition probability, and diversification.
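The multiagent loop just described can be sketched in a few lines of Python. This is a minimal illustration of the neighborhood-search structure only: the function names are ours, and a simple greedy acceptance stands in for the movement and transition probabilities defined in the following subsections.

```python
import random

def afm_search(f, bounds, n_agents=10, n_neighbors=3, max_gen=200, step=0.1):
    """Multiagent neighborhood search in the spirit of AFM (simplified sketch)."""
    rand_point = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    agents = [rand_point() for _ in range(n_agents)]   # initial solution group
    best = min(agents, key=f)
    for _ in range(max_gen):
        for a, x in enumerate(agents):
            # generate neighbor solutions around the current solution
            neighbors = [[min(max(xi + random.uniform(-step, step), lo), hi)
                          for xi, (lo, hi) in zip(x, bounds)]
                         for _ in range(n_neighbors)]
            cand = min(neighbors, key=f)               # best neighbor
            if f(cand) < f(x):                         # move only if it improves
                agents[a] = cand
        best = min(agents + [best], key=f)             # store best of the cycle
    return best

# toy run: minimize the sphere function over [-1, 1]^3
best = afm_search(lambda x: sum(v * v for v in x), [(-1.0, 1.0)] * 3)
```

With the probabilistic rules of the next subsections substituted for the greedy acceptance, this skeleton becomes the full AFM iteration.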

Aspiration Criterion. It has been found that the pheromone trail ants leave, which can be observed by other ants, motivates the colony to follow the path; i.e., a randomly moving ant will follow the pheromone trail with high probability. This is how the trail is reinforced and more and more ants follow that trail, and why a colony of ants can find large food resources whereas a single ant would probably fail, as ants are almost blind. To mimic this behavior, the aspiration criterion, which relates the quantity of trail to the optimum direction, is introduced into AFM to reduce the random seeking behavior and improve the optimization efficiency of AFM. According to this aspiration criterion, optimizing agents will select the better direction with high probability, and the quantity of pheromone in this direction will then increase. As a result, the searching process is a self-reinforced process that makes it possible to find the best solution of the optimization problem.

Movement Probability. From the kth ant's current location $S_1$, we can generate three neighbor sites, $S_{11}$, $S_{12}$, and $S_{13}$ (the use of three neighbor sites is just for a convenient description), that represent new feasible solutions. Movement probability is used to determine whether the ants should move or not. Because the quantity of pheromone on a neighbor site is related to the value of its objective function (i.e., the greater the value of the objective function, the more pheromone is left on the site), Richard et al. defined the movement probability of the kth ant as19

$$P_m = \frac{1}{2}\left[1 + \tanh\left(\frac{\phi(S_{11}) + \phi(S_{12}) + \phi(S_{13})}{\phi^*} - 1\right)\right] \qquad (12)$$

where $\phi(S_{11}), \phi(S_{12}), \phi(S_{13}) \ge 0$ are the trail concentrations at lattice sites $S_{11}$, $S_{12}$, and $S_{13}$, respectively. The parameter $\phi^*$ represents the concentration of the pheromone trail for which the probability of moving is 0.5 per step.

Transition Probability. If movement is permitted, the transition probability must be calculated to determine which site should be selected as the next lattice. The transition probability is used to select the move direction (optimum direction) to ensure the diversity of the solutions and to accelerate the convergence of the algorithm. Because a path with higher pheromone content should be given a higher probability, the transition probability $P_{kj}$ can be formulated as

$$P_{kj} = \frac{[\tau_{kj}]^{\alpha} [\eta_{kj}]^{\beta}}{\sum_{l=1}^{3} [\tau_{kl}]^{\alpha} [\eta_{kl}]^{\beta}}, \quad j = 1, 2, 3 \qquad (13)$$

where $\eta_{kj} = 1/d_{kj}$, j = 1, 2, 3, is called the visibility; $\tau_{kl} = Q_k/d_{kl}$, l = 1, 2, 3, represents the density of the trail on the three arcs; $Q_k$ is the total quantity of pheromone that the kth ant holds; $d_{kj} = 1/\Delta f_{kj}$ denotes the distance between the site $S_k$ and its neighbor $S_{kj}$; $\Delta f_{kj} = f_k - f_{kj}$ is the difference of the objective function values between the new solution and the current solution; and $\alpha$ and $\beta$ are parameters that control the relative importance of the trail versus the visibility.20-22

Diversification. An intelligent search technique should not only thoroughly explore a region that contains good solutions, but also have a general view of the solution space and try to make sure that no distant region has been entirely neglected.23 To realize such diversification, AFM repeats the entire search procedure with a collection of randomly generated initial solutions. By controlling the scale of the initial solutions, probabilistic arguments can be applied to establish convergence properties to a global optimum.

Outline of AFM. We take a kind of nonlinear programming (NLP) problem as an example to illustrate the details of the implementation of AFM. The problem is to minimize f(x)

$$\min f(V,R) = \sum_{j=1}^{J} m_{oj} m_{pj} a_j V_j^{\alpha_j} + \sum_{k=1}^{K} n_k b_k R_k^{\beta_k} \qquad (14)$$

subject to the nonlinear constraints

$$x_1^2 + x_2^2 + x_3^2 \ge 1, \qquad x_1^2 + x_2^2 + x_3^2 \le 4 \qquad (15)$$

and bounds

$$x_1, x_2, x_3 > 0$$

The process of optimization for the example using AFM can be divided into three steps: initialization, iteration, and termination.

(1) Initialization. In this step, the initial solution group is generated according to the work of Wang et al.13,14 The size of the solution group is determined by the user in terms of the complexity of the problem. Here, the size of the solution group is 10; i.e., 10 initialized solutions are generated in this step. To illustrate the next steps, a solution is given as $(x_1, x_2, x_3) = (0.81, 0.60, 1.29)$, and its objective function value is 0.1508.

(2) Iteration. After the initialization process, the iteration process (i.e., the optimization process) begins. First of all, three neighboring feasible solutions are generated randomly for each initial solution in the group, and the movement probabilities of these new solutions are computed to determine whether these neighbor solutions can be candidate solutions. If the original solution cannot be accepted, it will be replaced by a new one. For $(x_1, x_2, x_3) = (0.81, 0.60, 1.29)$, the three neighbor solutions are (0.78, 0.66, 1.31), (0.82, 0.59, 1.47), and (0.84, 0.53, 1.33), and the movement probability of these neighbor solutions is 0.77 according to eq 12. Then, a random number, uniformly distributed on [0, 1], is generated. If this random number is smaller than the movement probability, these neighbor solutions can be candidate solutions. For this solution, the random number is 0.6724. Next, if the feasible neighbor solutions are accepted, their transition probabilities are calculated. For each current solution, a new solution is selected from its three candidate solutions according to the transition probabilities. If this new solution is better than the current one, the current one is replaced. Otherwise, the current solution is retained. For $(x_1, x_2, x_3) = (0.81, 0.60, 1.29)$, the transition probabilities of (0.78, 0.66, 1.31), (0.82, 0.59, 1.47), and (0.84, 0.53, 1.33) are 0.72, 0.68, and 0.88, respectively. Obviously, the last candidate solution is selected. The value of the objective function of this candidate solution is 0.1535, which is better than that of the current solution. Thus, the current solution is replaced by this candidate solution. When all current solutions are updated, the best solution of the current cycle is stored, and another cycle begins.
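Equations 12 and 13 translate directly into code for this iteration step. The sketch below uses function names and demonstration values of our own choosing; note that when the summed trail concentration equals φ*, the movement probability is exactly 0.5, as the definition of φ* requires.

```python
import math
import random

def movement_probability(phi, phi_star):
    """Eq 12: P_m = (1/2)[1 + tanh(sum(phi)/phi* - 1)]."""
    return 0.5 * (1.0 + math.tanh(sum(phi) / phi_star - 1.0))

def transition_probabilities(tau, eta, alpha=1.0, beta=1.0):
    """Eq 13: P_kj proportional to tau_kj^alpha * eta_kj^beta,
    normalized over the three neighbor sites."""
    w = [(t ** alpha) * (e ** beta) for t, e in zip(tau, eta)]
    total = sum(w)
    return [wi / total for wi in w]

# hypothetical trail concentrations at the three neighbor sites
p_move = movement_probability([0.9, 1.1, 1.3], phi_star=3.0)
if random.random() < p_move:                  # movement is permitted
    probs = transition_probabilities([0.4, 0.3, 0.6], [1.2, 0.9, 1.5])
    # pick the neighbor with the highest transition probability
    chosen = max(range(3), key=lambda j: probs[j])
```

The normalization in eq 13 guarantees that the three transition probabilities sum to 1, unlike the illustrative values quoted in the text, which are rounded independently.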

(3) Termination. The iteration process is repeated until the termination condition is reached. When the algorithm converges on one solution, the algorithm is terminated, and this solution is considered a satisfactory solution of the problem. (Alternatively, if the maximum iterating generation is reached, the satisfactory solution is selected from the stored solutions.) For example, the satisfactory solution is $(x_1, x_2, x_3) = (0.8450, 0.5120, 1.3201)$, and its objective function value is 0.1537.

Implementation

From the previous discussion, we know that the AFM method is, in fact, an intelligent search procedure that, in some sense, imitates army ant foraging behavior and applies some rules based on artificial intelligence principles. In this section, we apply AFM to the optimal design of multiproduct batch chemical processes. Figure 1 shows the flowchart of the algorithm.

Neighborhood Structure. Neighborhood structure plays an important role in AFM implementation, influencing the solution quality and the computing speed. A large neighborhood generally provides high-quality solutions but can result in longer CPU times. A small neighborhood can accelerate the convergence of the searching process, but it might result in a reduction of the quality of the optimization results; i.e., the algorithm might become trapped in a local minimum. A tradeoff between the computing speed and the solution quality must be made. For this reason, the concept of natural neighborhood size is introduced in this paper. We vary the neighborhood size over two different ranges (small and large) while the search process proceeds. At the beginning, a smaller neighborhood size is preferred for the rapid self-reinforcement of pheromone; at the end of the small-neighborhood search, a larger neighborhood size is preferred for a thorough search just on the optimum directions identified according to the distribution of pheromone trails. Because of the given seeking ability of the ants, this mechanism is appropriate in AFM, and the computed examples also support this view.

Aspiration Criterion. If a search direction is accepted in the current step, it will be accepted with higher probability again in the next step. In this way, the self-reinforcement of pheromone occurs to improve the searching efficiency. The best search directions and solutions are related to a high density of pheromone; thereby, the better solution becomes easier to accept. Obviously, the good solution spaces, which are more attractive for the optimizing agents, are given a more thorough search than other solution spaces.

Figure 1. AFM implementation.

Step Size of Continuous Variables. For an MINLP problem, there exists a problem of the step size of continuous variables when a neighborhood-search method is adopted. The step size can be neither too large nor too small. A large step results in a local optimum, whereas a small step size requires a longer searching time.

Two simple but effective dynamic methods are designed here to vary the step size of continuous variables. (1) We simply let x = x[1 + (±R%)^(i-1)] (where x represents an optimization variable and i represents the number of times neighborhood solutions are generated in a certain iteration step). (2) Another method is to let x = x + (±R%)^(i-1) x′ (where both x and x′ represent optimization variables and i represents the number of times neighborhood solutions are generated in a certain iteration step). The larger R is, the larger the initial step size, and vice versa. We can see that the step size changes with both the value of the optimization variables and the increase in i. Hence, the step size of a continuous variable varies adaptively during the search procedure. (1) At the beginning, when the optimization variables are larger, a larger step size is adopted to establish the search procedure, and at the end, when the optimization variables are smaller, a smaller step size is adopted to improve the precision of the search procedure. (2) As the value of i increases, the step size decreases. This leads to a more thorough search with each additional iteration step. (3) We initially use the first method to generate the neighborhood solutions because it is helpful to thoroughly seek the neighborhood solutions of the current solution. Sometimes, good neighborhood solutions cannot be found. In such a case, the second method, which can increase the solution diversification in finding good neighborhood solutions, is used. Because the unit size of a batch stage exerts a much greater influence on the objective function than the semicontinuous variables, we adopted two different R values for these two kinds of variables, a smaller one for batch stages and a larger one for semicontinuous stages.

Termination Criterion. Two termination criteria can be used by AFM to control the schedule of the algorithm. One is using the single-solution ability of the ant colony to control the algorithm: if all optimizing agents obtain the same solution, the algorithm should be stopped. The other is the classical maximum-generation criterion: when the cycle number of the algorithm reaches the previously established maximum generation number, the algorithm is terminated. The first termination criterion can find global solutions, but it is only fit for simple problems because its computation time for complex problems is too long. Although the second criterion perhaps decreases the quality of the optimization solution, it cuts the computation time greatly. Thus, for complex problems, the second criterion is adopted.
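Both criteria are easy to state in code. In this sketch the function name and the convergence tolerance are ours, and the agents are summarized by their objective function values.

```python
def should_terminate(objectives, generation, max_gen, tol=1e-6):
    """Stop when all agents agree on one solution (criterion 1)
    or when the maximum generation count is reached (criterion 2)."""
    converged = max(objectives) - min(objectives) < tol
    return converged or generation >= max_gen

# e.g. three agents still spread out at generation 50 of 500: keep going
keep_going = not should_terminate([0.151, 0.162, 0.149], 50, 500)
```

In practice the tolerance plays the role of "the same solution": exact equality of floating-point objectives is too strict a test for criterion 1.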

Examples and Analysis

Examples were computed to demonstrate the effectiveness of the algorithm described in the above section. Four examples, which are adopted from Wang et al.,15 are presented here. The examples selected are convenient for comparison with existing methods and demonstration of the efficiency of AFM. Because the four examples chosen present different levels of complexity, they are helpful in demonstrating the robustness of AFM with respect to the complexity of the problem. The data for examples 1-4 are presented in Tables 1-4, respectively. The results are presented in Tables 5-8, respectively. The data for GA, which used a multiparameter crossed binary coding mechanism, come from Wang et al.14 The data for MP and TS are from Wang et al.,15 where TS represents the results of the standard tabu search.

Results and Comparison Analysis. From the results presented in Tables 5-8, we can see that, in example 1, AFM obtained better results than MP, TS, GA, and SA. In example 2, AFM found nearly the same result as MP but exhibited much faster convergence. Moreover, the result of example 2 using AFM is better than those obtained using TS, GA, and SA. We can also see that AFM yielded nearly the same result as TS and GA but a better result than MP and SA in the somewhat complicated example 3. For example 4, Patel et al.11 pointed out that this problem cannot be solved using any existing method other than SA because of the presence of intermediate storage, nonidentical units, and mixed modes of operation. However, AFM handles it successfully and does so in a computing time that is less than those of SA, GA, and TS.

From the results of these examples, we found that AFM consistently obtained better results than GA and SA. This means that AFM adopted a satisfactory criterion for optimization in this optimal design problem.

As is evident from the computation results of examples 1-4, AFM has advantages over MP, GA, and SA in terms of solution quality and computational time. As an AI (artificial intelligence) method, AFM can efficiently escape from local optima to find more satisfactory solutions. The main drawback of MP, in fact, is that it very easily becomes trapped in local optima, and from the examples, we can see that AFM obtains better solutions than MP. AFM exploits certain forms of flexible memory (history information) to control the search process; i.e., AFM emphasizes scouting successive neighborhoods to identify moves of high quality through the pheromone aspiration criterion and the natural neighborhood size

Table 1. Data for Example 1a,b

                SC1    B1     SC2    SC3    B2     SC4    T      SC5    SC6    B3
a, b, or c      370    592    250    210    582    250    334    250    200    1200
α, β, or γ      0.22   0.65   0.40   0.62   0.39   0.40   0.59   0.40   0.83   0.52
I = 1, S or D   1.2    1.2    1.2    1.2    1.5    1.2    1.1    1.4    1.4    1.1
       p0              35                   1                                  4
       g               0.0                  0.0                                0.0
I = 2, S or D   1.5    1.4    1.5    1.5    1.2    1.5    1.1    1.5    1.5    1.2
       p0              40                   1                                  8
       g               0.0                  0.0                                0.0
I = 3, S or D   1.1    1.0    1.1    1.1    1.0    1.1    1.1    1.2    1.2    1.0
       p0              30                   2                                  4
       g               0.0                  0.0                                0.0

a H = 8000 h, J = 3, I = 3, Q = [100 000, 100 000, 50 000], 800 ≤ Vj ≤ 2400, 300 ≤ Rk ≤ 1800. b SC indicates semicontinuous stage, B indicates batch stage, and T indicates intermediate storage.

Table 2. Data for Example 2a,b

                SC1    B1     SC2    SC3    T      SC4    B2     SC5    SC6    B3
a, b, or c      370    592    250    210    278    250    582    250    200    1200
α, β, or γ      0.22   0.65   0.40   0.62   0.49   0.40   0.39   0.40   0.83   0.52
I = 1, S or D   1.2    1.2    1.2    1.2    1.1    1.2    1.5    1.4    1.4    1.1
       p0              35                                 1                    4
       g               0.0                                0.0                  0.0
I = 2, S or D   1.5    1.4    1.5    1.5    1.1    1.5    1.2    1.5    1.5    1.2
       p0              40                                 1                    8
       g               0.0                                0.0                  0.0
I = 3, S or D   1.1    1.0    1.1    1.1    1.1    1.1    1.0    1.2    1.2    1.0
       p0              30                                 2                    4
       g               0.0                                0.0                  0.0

a H = 8000 h, J = 3, I = 3, Q = [100 000, 100 000, 50 000], 800 ≤ Vj ≤ 2400, 300 ≤ Rk ≤ 1800. b SC indicates semicontinuous stage, B indicates batch stage, and T indicates intermediate storage.

6682 Ind. Eng. Chem. Res., Vol. 41, No. 26, 2002

described in this paper. Also, AFM seems to be more robust with respect to variations in the initial solution than SA. When SA is employed, a "good" choice of the control parameters (temperature, annealing schedule, etc.) greatly influences the solution quality, as pointed out by Patel et al.11 and Wang et al.13 It was also found that AFM performs better than GA in terms of simplicity, because GA is sensitive to coding design, which is both the key and the major difficulty in GA implementation, whereas AFM simply uses the objective function value to construct the solution. Moreover, we can see that the acceptance criterion of AFM is superior to those of TS, SA, and GA. AFM first uses movement probabilities to determine whether acceptable neighbor solutions of the current solution exist. If acceptable

neighbor solutions exist, then the optimization direction is determined by transition probabilities. This type of acceptance criterion not only allows AFM to reduce the computation time significantly but also greatly improves the quality of the solutions. In addition, AFM employs intelligent strategies, i.e., the natural neighborhood size and the ant foraging behavior character, to accelerate its convergence.
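This two-step acceptance criterion can be sketched in Python. The sketch below is our own illustration, not the authors' implementation: the improving-neighbor gate and the pheromone-weighted roulette selection are assumed forms of the movement and transition probabilities.

```python
import random

def accept_and_move(current_cost, neighbor_costs, pheromone, rng=random):
    """Two-step AFM-style acceptance (hypothetical sketch).

    Step 1 (movement probability): check whether any acceptable
    neighbor of the current solution exists at all; if not, return
    immediately instead of wasting time searching that neighborhood.
    Step 2 (transition probability): among the acceptable neighbors,
    pick the optimization direction, weighting each candidate by its
    pheromone level and inverse cost.
    """
    # Step 1: here "acceptable" simply means cost-improving (an assumption).
    acceptable = [i for i, c in enumerate(neighbor_costs) if c < current_cost]
    if not acceptable:
        return None  # no acceptable neighbor: skip this neighborhood
    # Step 2: roulette-wheel selection over the acceptable moves.
    weights = [pheromone[i] / neighbor_costs[i] for i in acceptable]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in zip(acceptable, weights):
        acc += w
        if r <= acc:
            return i
    return acceptable[-1]
```

The early `None` return is what saves the search time discussed in point (d) below: the costly direction-selection step runs only when the gate finds something worth moving to.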

Computational Experience. In this section, some important aspects of our implementation of AFM and some problems arising in practice are discussed.

(a) Initial Solution. In these four examples, we start the search procedure from the largest possible value of all optimization variables; that is, we adopted the

Table 3. Data for Example 3a,b

                 SC1    B1     SC2    T      SC3    B2     SC4    B3     SC5    B4     SC6
a, b, or c       370    250    370    278    370    250    370    250    370    250    370
α, β, or γ       0.22   0.60   0.22   0.49   0.22   0.60   0.22   0.60   0.22   0.60   0.22
I = 1, S or D    1.0    8.28   1.0    1.0    1.0    9.7    1.0    2.95   1.0    6.57   1.0
        p0              1.15                        9.86          5.28          1.20
        g               0.20                        0.24          0.40          0.50
        d               0.40                        0.33          0.30          0.20
I = 2, S or D    1.0    5.58   1.0    1.0    1.0    8.09   1.0    3.27   1.0    6.17   1.0
        p0              5.95                        7.01          7.00          1.08
        g               0.15                        0.35          0.70          0.42
        d               0.40                        0.33          0.30          0.20
I = 3, S or D    1.0    2.34   1.0    1.0    1.0    10.3   1.0    5.70   1.0    5.98   1.0
        p0              3.96                        6.01          5.13          0.66
        g               0.34                        0.50          0.85          0.30
        d               0.40                        0.33          0.30          0.20

a H = 6000 h, J = 4, I = 3, Q = [437 000, 324 000, 258 000], 250 ≤ Vj ≤ 10 000, 300 ≤ Rk ≤ 10 000. b SC indicates semicontinuous stage, B indicates batch stage, and T indicates intermediate storage.

Table 4. Data for Example 4a,b

                 SC1   B1    SC2   SC3   B2    SC4   T     SC5   SC6   B3    SC7
a, b, or c       370   592   250   210   582   250   200   250   200   1200  600
α, β, or γ       0.22  0.65  0.40  0.62  0.39  0.40  0.39  0.40  0.85  0.52  0.40
g                -     0     -     -     0     -     -     -     -     0     -
I = 1, S or D    1.2   1.2   1.2   1.2   1.4   1.4   1.0   1.4   1.4   1.0   1.0
        p0       -     3.0   -     -     1.0   -     -     -     -     4.0   -
I = 2, S or D    1.5   1.5   1.5   1.5   0.0   0.0   1.0   1.5   1.5   1.0   1.0
        p0       -     6.0   -     -     0.0   -     -     -     -     8.0   -
I = 3, S or D    1.1   1.1   1.1   1.1   1.2   1.2   1.0   1.2   1.2   1.0   1.0
        p0       -     2.0   -     -     2.0   -     -     -     -     4.0   -
I = 4, S or D    1.5   1.5   1.5   1.5   1.8   1.8   1.0   1.8   1.8   1.0   1.0
        p0       -     2.0   -     -     1.5   -     -     -     -     3.0   -
I = 5, S or D    1.3   1.3   1.3   1.3   3.0   3.0   1.0   3.0   3.0   1.0   1.0
        p0       -     1.0   -     -     2.0   -     -     -     -     2.5   -
I = 6, S or D    1.4   1.4   1.4   1.4   2.1   2.1   1.0   2.1   2.1   1.0   1.0
        p0       -     2.0   -     -     2.5   -     -     -     -     5.0   -
I = 7, S or D    1.2   1.2   1.2   1.2   5.2   5.2   1.0   5.2   5.2   1.0   1.0
        p0       -     1.0   -     -     0.5   -     -     -     -     7.0   -
I = 8, S or D    1.1   1.1   1.1   1.1   2.1   2.1   1.0   2.1   2.1   1.0   1.0
        p0       -     4.0   -     -     3.5   -     -     -     -     3.0   -
I = 9, S or D    1.3   1.3   1.3   1.3   1.1   1.1   1.0   1.1   1.1   1.0   1.0
        p0       -     2.0   -     -     3.0   -     -     -     -     2.0   -
I = 10, S or D   1.4   1.4   1.4   1.4   1.5   1.5   1.0   1.5   1.5   1.0   1.0
        p0       -     2.5   -     -     2.5   -     -     -     -     4.0   -
I = 11, S or D   1.5   1.5   1.5   1.5   1.7   1.7   1.0   1.7   1.7   1.0   1.0
        p0       -     3.0   -     -     2.0   -     -     -     -     4.0   -
I = 12, S or D   1.2   1.2   1.2   1.2   1.9   1.9   1.0   1.9   1.9   1.0   1.0
        p0       -     3.5   -     -     4.5   -     -     -     -     6.5   -
I = 13, S or D   1.5   1.5   1.5   1.5   3.7   3.7   1.0   3.7   3.7   1.0   1.0
        p0       -     5.0   -     -     7.0   -     -     -     -     9.0   -
I = 14, S or D   1.8   1.8   1.8   1.8   2.2   2.2   1.0   2.2   2.2   1.0   1.0
        p0       -     4.5   -     -     3.0   -     -     -     -     4.0   -
I = 15, S or D   1.5   1.5   1.5   1.5   2.7   2.7   1.0   2.7   2.8   1.0   1.0
        p0       -     3.0   -     -     2.0   -     -     -     -     6.0   -

a Q (×1000) = [40, 30, 10, 35, 33, 27, 25, 22, 20, 19, 15, 12, 9, 7, 5], H = 8000 h, I = 15, 300 ≤ Vj ≤ 2400, 300 ≤ Rk ≤ 2400. b SC indicates semicontinuous stage, B indicates batch stage, and T indicates intermediate storage.


largest possible value of each optimization variable as the initial solution.

By investigating the influence of the initial solution on the search procedure, we found that different initial solutions have no obvious influence on the optimization results, but a better initial solution speeds convergence greatly. For these examples, a "good" initial solution results in at least a 20% reduction in computing time. This means that, although AFM has no special need for a particular initial solution, a better initial solution, as opposed to a random start, is beneficial to the eventual success of an AFM implementation (particularly for large problems, such as example 4 in this paper). A typical way of obtaining an appropriate initial solution is through the use of a heuristic procedure, such as that designed by Wang et al.13,14
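As a minimal sketch of the initialization described above (our own illustration; `initial_solution` is an assumed name, and the bounds are those quoted for example 1), every batch-stage size Vj and semicontinuous rate Rk simply starts at its upper bound:

```python
def initial_solution(v_bounds, r_bounds):
    """Start every optimization variable at its largest possible
    value, as done for the four examples in this paper."""
    v0 = [hi for (_lo, hi) in v_bounds]   # batch stage sizes V_j
    r0 = [hi for (_lo, hi) in r_bounds]   # semicontinuous rates R_k
    return v0, r0

# Example 1 bounds: 800 <= V_j <= 2400 (3 batch stages),
# 300 <= R_k <= 1800 (6 semicontinuous stages).
v0, r0 = initial_solution([(800, 2400)] * 3, [(300, 1800)] * 6)
```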

(b) Natural Neighborhood. We also found that, of all of the strategies explained in section 4, the strategy of the natural neighborhood provides the greatest contribu-

Table 5. Results of Example 1

                     AFM                TS (standard TS)15   GA14               MP15
objective function   188 257.1          191 267.3            189 294.8          189 015.7

                     moj  mpj  Vj       moj  mpj  Vj         moj  mpj  Vj       moj  mpj  Vj
j = 1                1    1    1557.8   1    1    1620.1     1    1    1575.5   1    1    1631.1
j = 2                1    1    2028.2   1    1    2010.3     1    1    2061.5   1    1    2039.2
j = 3                1    1    800.0    1    1    800.0      1    1    800.0    1    1    800.0

                     nk   Rk            nk   Rk              nk   Rk            nk   Rk
k = 1                1    1800.0        1    1800.0          85   1800.0        1    1800.0
k = 2                1    435.7         1    460.5           85   455.4         1    435.4
k = 3                1    435.7         1    460.5           85   455.4         1    435.4
k = 4                1    300.0         1    300.0           85   300.0         1    300.0
k = 5                1    300.0         1    300.0           85   300.0         1    300.0
k = 6                1    300.0         1    300.0           85   300.0         1    300.0

V's                  1629.5             1720.1               1644.5             1751.1
CPU time (s)a        20.8               35.5                 75.5               166.4

a On an Intel PS 400 586 computer.

Table 6. Results of Example 2

                     AFM                TS (standard TS)15   GA14               SA15
objective functiona  170 373.6          170 604.3            170 553.1          170 539.8

                     moj  mpj  Vj       moj  mpj  Vj         moj  mpj  Vj       moj  mpj  Vj
j = 1                1    1    1662.8   1    1    1665.1     1    1    1661.5   1    1    1677.5
j = 2                1    1    800.0    1    1    800.0      1    1    800.0    1    1    800.0
j = 3                1    1    800.0    1    1    800.0      1    1    800.0    1    1    800.0

                     nk   Rk            nk   Rk              nk   Rk            nk   Rk
k = 1                1    1800.0        1    800.0           1    1800.0        1    1800.0
k = 2                1    306.1         1    325.1           1    316.4         1    300.1
k = 3                1    306.1         1    325.1           1    316.4         1    300.1
k = 4                1    300.0         1    300.0           1    300.1         1    300.1
k = 5                1    300.0         1    300.0           1    300.1         1    300.1
k = 6                1    300.0         1    300.0           1    300.1         1    300.1

V's                  1722.7             1733                 1728.3             1742.2
CPU time (s)b        72                 98                   209                158

a MP objective function = 170 357.0. b On an Intel PS 400 586 computer.

Table 7. Results of Example 3a,b

                     AFM                TS (standard TS)15   GA14               SA11
objective function   36 885.8           368 131.4            362 130            368 88.3

                     moj  mpj  Vj       moj  mpj  Vj         moj  mpj  Vj       moj  mpj  Vj
j = 1                2    1    4291.2   1    1    7301.3     1    1    6907     2    1    4290.0
j = 2                2    1    9923.2   2    1    9926.3     2    1    9918     2    1    9930.0
j = 3                2    1    5533.7   2    1    5800.1     2    1    5724     2    1    5534.0
j = 4                1    1    7633.0   1    1    9905.2     1    1    9466     1    1    7627.0

                     nk   Rk            nk   Rk              nk   Rk            nk   Rk
k = 1                1    9250.0        1    9006.2          2    7717          1    9252.0
k = 2                1    10 000        1    4843.2          1    2189          1    10 000.0
k = 3                1    9671.1        1    7101.4          1    6637          1    9675.0
k = 4                1    10 000        1    9104.6          2    9466          1    10 000.0
k = 5                1    8994.0        1    9860.6          1    9926          1    9000.0
k = 6                1    380.1         1    3805.1          1    5212          1    390.0

V's                  1996.2             3103.6               2946               1997.0

a MP objective function = 369 728.11. b CPU time for AFM on an Intel PS 400 586 computer is 86 s.


tion to the acceleration of the search procedure. The simulation of army ant behavior shows its superiority over the method of simply adopting a smaller neighborhood size. The computational examples also revealed that this natural neighborhood reduces the searching time by at least 10%.
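The exact adaptation rule belongs to section 4 rather than to this discussion; the sketch below is only one plausible reading, under our own assumptions, of a "natural" neighborhood whose size contracts while the raid keeps finding improving moves and widens again on stagnation:

```python
def natural_neighborhood_size(nominal, improving_steps, stagnant_steps, n_min=1):
    """Hypothetical adaptive neighborhood size inspired by army-ant
    raids: contract while the search keeps improving (the raid is
    concentrating), expand again after stagnation so the search is
    not starved of candidate moves. All parameter names and the
    linear rule itself are our assumptions, not the paper's."""
    size = max(n_min, nominal - improving_steps)  # contract on progress
    return size + stagnant_steps                  # expand on stagnation
```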

(c) Reinforcement of Character and Diversification. The reinforcement of character has a greater influence than diversification in improving the best solution. However, it was found that a global optimum could not be obtained without the adoption of diversification. This phenomenon can be explained as follows: the special self-reinforcing character of AFM attracts the optimum agents to better solution spaces and explores these spaces more thoroughly, whereas diversification restarts the searching procedure from a less attractive solution space. Without diversification, the algorithm will become trapped in a local optimum because it ignores some solution spaces in which the global optimum might be located.
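A minimal sketch of how these two forces could interact (our own assumption of the update forms; `rho` and `deposit` are hypothetical parameters, not values from the paper):

```python
import random

def reinforce(pheromone, best_move, rho=0.1, deposit=1.0):
    """Self-reinforcement (sketch): evaporate every trail a little,
    then deposit extra pheromone on the best move so that later
    agents are attracted to the better solution space."""
    return [(1.0 - rho) * p + (deposit if i == best_move else 0.0)
            for i, p in enumerate(pheromone)]

def diversify(bounds, rng=random):
    """Diversification (sketch): restart the search from a random
    feasible point, so that solution spaces carrying little pheromone
    are still visited and the global optimum is not ruled out."""
    return [rng.uniform(lo, hi) for (lo, hi) in bounds]
```

Reinforcement alone concentrates the agents; the occasional `diversify` restart is what prevents the trap described above.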

(d) Movement Probability. In other AI methods such as GA and SA, neighbor solutions are evaluated for acceptance as soon as they are randomly generated. When no acceptable neighbor solutions exist, this mechanism leads to a waste of searching time. The use of movement probabilities, however, helps AFM identify whether acceptable neighbor solutions of the current solution exist and thereby avoid spending much time searching for acceptable neighbor solutions when none exist. In such cases, AFM's procedure for seeking the optimization direction is more effective than the approaches used by other AI methods. For the four examples discussed, through the use of both movement probabilities and transition probabilities, the search time can be reduced by at least 5% compared with the time required by the GA and SA algorithms.

(e) Step Size of Continuous Variables. For this problem, Patel et al. designed a method to vary the step size of continuous variables dynamically.7 However, our computational experience showed that Patel's method induced unnecessary computation in the search procedure.11

(f) Termination. We used the single-solution termination criterion in computing examples 1 and 2 so that very satisfactory solutions would be obtained. For examples 3 and 4, we adopted the maximum-generation criterion and suggest 150-250 as a good compromise for the maximum value.

Conclusion

In this paper, the ants foraging mechanism (AFM), a novel evolutionary approach, is presented for the solution of the optimal design of multiproduct batch chemical processes, and satisfactory results are obtained. AFM proved to be well suited to the proposed optimization problem and has the following advantages in application:

(1) AFM is more robust than GA and SA. It requires neither the temperature-control parameters of simulated annealing nor the problem-specific coding design of genetic algorithms.

(2) AFM can find highly satisfactory solutions. In particular, AFM can find the globally optimal solutions of simple problems through its single-solution termination criterion.

(3) AFM makes no special demand on the initial values of the optimization variables. Rather, any feasible value of each optimization variable can be taken as the initial solution.

(4) AFM makes no special demand on the form of the objective function.

(5) As is evident from the computational results, AFM yields a highly satisfactory global optimum.

(6) AFM is simple in structure and convenient to implement.

Nomenclature

aj = cost coefficient for batch stage j
bk = cost coefficient for semicontinuous stage k
Bis = batch size for product i in subprocess s, kg
cs = cost coefficient for intermediate storage s
Dik = duty factor for product i in semicontinuous stage k
dij = power coefficient for processing time for product i in stage j
gij = coefficient of processing time for product i in stage j
H = horizon, h
Hi = production time of product i, h
i = index for products
I = total number of products
j = index for batch stages
J = total number of batch stages
k = index for semicontinuous stages

Table 8. Results of Example 4

                     AFM                TS (standard TS)15   GA14               SA11
objective function   414 563.8          433 329.7            425 676            450 983.0

                     moj  mpj  Vj       moj  mpj  Vj         moj  mpj  Vj       moj  mpj  Vj
j = 1                2    1    1082.8   2    1    1090.3     2    1    1148     2    1    1590, 1780
j = 2                2    1    2397.2   2    1    2398.1     2    2    1585     2    2    2400, 896, 1934, 756
j = 3                2    1    1580.5   2    1    1605.2     2    1    1644     2    1    1897, 1871

                     nk   Rk            nk   Rk              nk   Rk            nk   Rk
k = 1                2    2000.4        2    2010.2          2    1805          2    2050, 1645
k = 2                1    2102.3        1    2303            1    2061          1    1512
k = 3                1    1864.8        1    2100.2          2    2268          1    1512
k = 4                2    2287.5        2    2313.1          2    2362          2    1564, 559
k = 5                1    1894.2        1    2080.5          1    2266          2    918, 300
k = 6                1    1432.2        1    1635.1          1    1248          1    1185
k = 7                1    1356.9        1    1930.6          1    1735          1    2046

V's                  3022.4             4321.2               2172               5131.0
CPU time (min)       3                  5.6                  4                  102

a MP objective function = 369 728.11. b On a Sun Sparc workstation.


K = total number of semicontinuous stages
moj = number of out-of-phase groups in batch stage j
mpj = number of in-phase parallel units in each of the out-of-phase groups in batch stage j
nk = number of parallel units in semicontinuous stage k
Pi = rate of production of product i, kg/h
pij = processing time for product i in stage j, h
pij0 = constant in processing time equation for product i in stage j
Qi, Q (×1000) = demand for product i, kg
Rk = processing rate for semicontinuous unit k, kg/h
Rkmax = maximum feasible size of semicontinuous stage k
Rkmin = minimum feasible size of semicontinuous stage k
S = total number of subprocesses
Sij = size factor for product i in batch stage j
Sis* = size factor for product i in storage s
t = index for subtrains
T = total number of subtrains
tij = recycling time for product i in batch stage j
tisL = limiting cycle time for product i in subprocess s
Vj = size of batch stage j, L
Vjmax = maximum feasible size of batch stage j, L
Vjmin = minimum feasible size of batch stage j, L
Vs* = size of intermediate storage s, L
αj = cost coefficient for batch stage j
βk = cost coefficient for semicontinuous stage k
γs = cost coefficient for storage s
θit = operating time for product i in subtrain t

Literature Cited

(1) Modi, A. K.; Karimi, I. A. Design of multiproduct batch processes with finite intermediate storage. Comput. Chem. Eng. 1989, 13, 127.

(2) Yeh, N. C.; Reklaitis, G. V. Synthesis and sizing of batch/semicontinuous process. Presented at the AIChE Annual Meeting, Chicago, IL, 1985; Paper No. 35a.

(3) Yeh, N. C.; Reklaitis, G. V. Synthesis and sizing of batch/semicontinuous processes: Single product plants. Comput. Chem. Eng. 1987, 11, 639.

(4) Xu, X.; Zheng, G.; Cheng, S. Optimized design of multiproduct batch chemical process - A heuristic approach. J. Chem. Eng. 1993, 44, 442 (in Chinese).

(5) Loonkar, Y. R.; Robinson, J. D. Minimization of capital investment for batch processes. Ind. Eng. Chem. Process Des. Dev. 1970, 9, 625.

(6) Grossmann, I. E.; Sargent, R. W. H. Optimal design of multipurpose chemical plants. Ind. Eng. Chem. Process Des. Dev. 1979, 18, 343.

(7) Knopf, F. C.; Okos, M. R.; Reklaitis, G. V. Optimum design of batch/semicontinuous processes. Ind. Eng. Chem. Process Des. Dev. 1982, 21, 79.

(8) Takamatsu, T.; Hashimoto, I.; Hasebe, S. Optimal design and operation of a batch process with intermediate storage tanks. Ind. Eng. Chem. Process Des. Dev. 1982, 21, 431.

(9) Espuna, A.; Lazaro, M.; Martinez, J. M.; Puigjaner, L. An efficient and simplified solution to the predesign problem of multiproduct plants. Comput. Chem. Eng. 1989, 13 (1/2), 163.

(10) Barbosa-Povoa, A. P.; Macchietto, S. Detailed design of multipurpose batch plants. Comput. Chem. Eng. 1994, 18 (11/12), 1013.

(11) Patel, A. N.; Mah, R. S. H.; Karimi, I. A. Preliminary design of multiproduct noncontinuous plants using simulated annealing. Comput. Chem. Eng. 1991, 15, 451.

(12) Tricoire, B.; Malone, M. A new approach for the design of multiproduct batch processes. Presented at the AIChE Annual Meeting, Los Angeles, CA, 1991.

(13) Wang, C.; Quan, H.; Xu, X. Optimal design of multiproduct batch chemical process - Mixed simulated annealing. J. Chem. Eng. 1996, 47, 1844 (in Chinese).

(14) Wang, C.; Quan, H.; Xu, X. Optimal Design of Multiproduct Batch Chemical Process Using Genetic Algorithms. Ind. Eng. Chem. Res. 1996, 35, 3560.

(15) Wang, C.; Quan, H.; Xu, X. Optimal Design of Multiproduct Batch Chemical Process Using Tabu Search. Comput. Chem. Eng. 1999, 23, 427.

(16) Lin, X.; Floudas, C. A. Design, synthesis and scheduling of multipurpose batch plants via an effective continuous-time formulation. Comput. Chem. Eng. 2001, 25, 665.

(17) Dorigo, M.; Maniezzo, V.; Colorni, A. Positive Feedback as a Search Strategy; Technical Report 91-016; Dipartimento di Elettronica, Politecnico di Milano: Milan, Italy, 1991.

(18) Dorigo, M.; Di Caro, G. Ant Algorithms for Discrete Optimization. Artif. Life 1999, 5 (3), 137.

(19) Sole, R. V.; Bonabeau, E.; Delgado, J.; Fernandez, P.; Marin, J. Pattern Formation and Optimization in Army Ant Raids. Proc. R. Soc. London B; http://www.santafe.edu/sfi/publications/wpabstract/199910074, 2000.

(20) Dorigo, M.; Maniezzo, V.; Colorni, A. The ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst., Man, Cybern. B 1996, 26 (1), 29.

(21) Dorigo, M.; Gambardella, L. M. Ant colonies for the traveling salesman problem. BioSystems 1997, 43, 73.

(22) Dorigo, M.; Gambardella, L. M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1 (1), 53.

(23) de Werra, D.; Hertz, A. Tabu search techniques - A tutorial and an application to neural networks. OR Spektrum 1989, 11, 131.

Received for review November 19, 2001Revised manuscript received September 23, 2002

Accepted October 10, 2002

IE010932R
