
Applied Soft Computing 11 (2011) 5782–5792

Contents lists available at ScienceDirect

Applied Soft Computing

journal homepage: www.elsevier.com/locate/asoc

Solving job shop scheduling problem using a hybrid parallel micro genetic algorithm

Rubiyah Yusof ∗, Marzuki Khalid, Gan Teck Hui, Syafawati Md Yusof, Mohd Fauzi Othman
Centre for Artificial Intelligence and Robotics (CAIRO), Universiti Teknologi Malaysia International Campus, Jalan Semarak, 54100 Kuala Lumpur, Malaysia

∗ Corresponding author. Tel.: +60 3 2691 3710/2615 4816; fax: +60 3 2697 0815. E-mail addresses: [email protected], [email protected] (R. Yusof), [email protected] (M. Khalid), [email protected] (S. Md Yusof).

1568-4946/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.asoc.2011.01.046

a r t i c l e   i n f o

Article history:
Received 17 June 2010
Received in revised form 17 December 2010
Accepted 25 January 2011
Available online 3 March 2011

Keywords:
Job shop scheduling problem
Parallel genetic algorithm
Micro GA
Asynchronous colony GA
Autonomous immigration GA

a b s t r a c t

The effort of searching for an optimal solution to scheduling problems is important for real-world industrial applications, especially for mission-time critical systems. In this paper, a new hybrid parallel GA (PGA) based on a combination of asynchronous colony GA (ACGA) and autonomous immigration GA (AIGA) is employed to solve benchmark job shop scheduling problems. An autonomous function of sharing the best solution across the system is enabled through the implementation of a migration operator and a "global mailbox". The solution is able to minimize the makespan of the scheduling problem, as well as reduce the computation time. To further improve the computation time, micro GA, which works on a small population, is used in this approach. The results show that the algorithm is able to decrease the makespan considerably as compared to the conventional GA.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

Genetic algorithm (GA) is a powerful search technique based on natural biological evolution which is used for finding an optimal or near-optimal solution. The idea of GA was first proposed by Holland [1] in the early 1970s and has since been widely used in solving optimization problems. In contrast to other optimization methods, GA functions by generating a large set of possible solutions to a given problem instead of working on a single solution. The technique includes the processes of selection, crossover, mutation and evaluation.

GA has been implemented successfully in many scheduling problems, in particular job shop scheduling. The job shop scheduling problem (JSSP) is a difficult NP-hard combinatorial optimization problem. Earlier work on solving JSSP centered around exact algorithms such as the branch-and-bound approach [2,3]. However, that work focused on small instances which can be solved in a reasonable computation time. As the problems became more complex, research turned to various other techniques such as simulated annealing [4–6], genetic algorithms [7–11] and ant colony optimization (ACO) [12,13]. To explore the characteristics of real-world problems, Oduguwa et al. [14] surveyed the applications of evolutionary computing, including GA, in the manufacturing industry. Generally, GA is employed to minimize the makespan of the scheduled jobs with quick convergence to the optimal solution, hence reducing the computational cost.

The complexity of some of the JSSP instances provides the impetus for the search for a faster-converging algorithm, and one which can avoid local minima. Parallel GA (PGA) was explored by researchers in the 1980s and 1990s [15–18] to make the technique faster by executing GAs on parallel computers. PGA can be classified into four types: global parallelization (master-slave model), coarse-grained, fine-grained and hybrid algorithms. Many researchers employed these types of PGA in solving JSSP with varying degrees of success. For example, Kirley [19] divided JSSP into sub-problems of lower complexity and used parallel evolution of partial solutions. Later, Zhang and Chen [20] investigated coarse-grained PGA based grid job scheduling, and were able to minimize the execution time of jobs and the makespan of resources compared to the serial process, hence improving the utilization of resources; their algorithm was shown to produce minimal makespan and near-optimal solutions. Another paper on JSSP that employed PGA is by Park et al. [21], which proposed an island-model PGA to prevent premature convergence, minimize the makespan and generate better solutions than serial GA.

A coarse-grain PGA had also been employed by Defersha and Chen [22] to solve a lot streaming problem in a flexible job-shop environment. The proposed algorithm was implemented based on an island-model parallelization technique with different connection topologies such as ring, mesh and fully connected. Zhao et al. [23] proposed solving the optimization problem using a combination of coarse grain size PGA (CGS-PGA) and master-slave model PGA (MSM-PGA) with Multi-Agent theory. The algorithm is made up of many M-Agents (master) and many A-Agents (slave). Through the developed algorithm, the speed and precision of calculation improved significantly, besides suppressing the emergence of the early-maturing phenomenon. The advantages of both CGS-PGA and MSM-PGA were utilized while overcoming their deficiencies.

In another related area, Skinner et al. [24] investigated the hybridization of two different optimization methods, PGA and Sequential Quadratic Programming (SQP). The simulation experiments show that this method combined the robust search property of PGA with the high convergence velocity of SQP. A set of multimodal, non-linear, smooth objective functions was used to compare PGA with the proposed hybrid algorithm. The experiments show that the overall computation time was reduced, the solution quality and accuracy increased, and the robustness maintained.

Tang and Lau [25] worked on PGA for floorplan area optimization in very large-scale integrated-circuit (VLSI) design automation. This was presented by adopting an island model with an asynchronous migration mechanism, and using different chromosome representations and genetic operators. With asynchronous migration, the efficiency was improved in contrast to synchronized migration. When compared to the corresponding sequential GA, the PGA produced better floorplanning results. The performance of the PGA was also empirically investigated for different numbers of islands and migration intervals; the PGA is sensitive to both parameters, with some correlations between them.

Another recent research on PGA is by Yussof et al. [26], who proposed a coarse-grained PGA for solving the shortest path routing problem in computer networks. Although shortest path routing algorithms are already well established, such as Dijkstra's algorithm and the Bellman–Ford algorithm, Yussof et al. attempted to implement PGA, developed on an MPI cluster, as an alternative method to solve this problem. The performance of the algorithm in terms of computation time was studied, as well as the effect of migration on the proposed algorithm. The results show that the computation time was greatly reduced compared to the serial version of the algorithm.

Liu et al. [27] introduced a new PGA based on K1 triangulation of fixed point theory and an injection island approach. With the proposed algorithm, the processors are guaranteed only to generate solutions within their assigned region. The algorithm, implemented on a cluster system, greatly reduces the computing time and subsequently enhances the computing efficiency. Several other parallel and distributed algorithms have been introduced in the literature; an example is the distributed evolutionary simulated annealing (dESA) [28]. In this work a multiple agent (island) system is utilized, where the algorithm evolves in each island and each individual gets a better chance of being selected across the whole search. The results obtained indicate that dESA can solve problems far faster than the non-distributed simulated annealing method.

In this paper, a new hybrid PGA and parallel distributed computing techniques with a coarse-grained or island model are used for solving benchmark JSSP. The proposed PGA is a combination of asynchronous colony GA (ACGA) and autonomous immigration GA (AIGA), which was originally applied by Inoue et al. [29]. In ACGA, as proposed by Inoue et al., the supervisor task (parent GA task) starts a predetermined number of sub-GA tasks, which run on a single computer. The method was used on a symmetrical multiprocessor (SMP) machine (with up to sixteen CPUs) to execute the sub-GAs. Each sub-GA consists of small populations (not more than 100 individuals per population) which communicate among themselves, sharing information through asynchronous message passing through a host. The number of sub-GA tasks "spawned" is heavily dependent on the system speed and the amount of RAM (random access memory) available. The method proved to be very fast and able to give optimal results within a short time.

However, implementing the method with a large number of sub-GAs on a normal personal computer (PC) is not possible due to its limited speed and RAM compared to the SMP. Therefore, we propose the use of the ACGA in combination with AIGA. In the proposed hybrid PGA model, each PC has three sub-GA tasks, with one of them set up to be the local host that handles migration of individuals. Each PC contains an ACGA model and the connected PCs are modeled according to AIGA. The migration operator was introduced to send some individuals from one "island" to another. This migration scheme replaces the worst chromosome or individual in a population with the best chromosome from a neighbouring population, hence producing a better final result.

The method is applied to some benchmark datasets. We have also proposed the use of micro GA [30] for each of the sub-GA operations. As micro GA evolves a very small population, the computation time is very much reduced. One of the main problems of micro GA is the loss of genetic diversity because of the small population. However, the proposed hybrid PGA model ensures the retention of genetic diversity due to the migration of populations from one island to another and the passing of individuals from one sub-GA to another.

Comparisons of the method are made with the conventional GA. The proposed PGA has been shown to solve JSSP better than sequential GA: it minimizes the makespan with sub-linear speedup and decreases the occurrence of premature convergence. The rest of this paper is organized as follows: Section 2 discusses the problem formulation for JSSP. The hybrid parallel GA is discussed in Section 3. Section 4 describes the implementation of the GA in this research. In Section 5, the results and discussion are presented. The conclusions are given in Section 6.

2. Problem formulation

In general, a scheduling problem consists of a set of concurrent, conflicting goals that are to be satisfied using a limited number of resources. The JSSP is one of the hardest combinatorial optimization problems; it is a process of allocating limited resources, such as machines, to competing jobs or tasks.

This problem can be described [28] as follows: given is a set of n jobs J = {J1, J2, . . ., Jn}, each composed of more than one operation (O), that must be processed on a set of m machines M = {M1, M2, . . ., Mm}. Each operation occupies one of the machines for a fixed duration. The constraints for the scheduling problem are:

• Each machine can only process one operation at a time.
• Once an operation initiates processing on a given machine, it must complete the processing on that machine without interruption.
• All operations must be in technological sequence.

The problem's main aim is to schedule the activities or operations of the jobs on the machines such that the total time to finish all jobs, that is the makespan Cmax, is minimized. The term makespan refers to the cumulative time to complete all the operations of all jobs on all machines. It is a measure of the time period from the starting time of the first operation to the ending time of the last operation. Sometimes there may be multiple solutions that have the minimum makespan, but the goal is to find any one of them: it is not necessary to find all possible optimum solutions.

Let each job j consist of p operations Oj = {o1j, o2j, . . ., opj}, which are interrelated by two types of constraints. First is the precedence constraint, in which each operation of a particular job has to be processed after all its predecessor operations Pi are completed and before its successor Si commences. The second constraint is that operation i can only be scheduled if the machine it requires is idle. The completion time of an operation, cji, considers the starting time or release time of the operation rji, the duration of the operation or processing time needed by job j of operation i on each machine, denoted by dji, as well as the waiting time between operations (if required) wji. Hence, the completion time of operation oji is cji = rji + dji + wji.

The completion time of the whole schedule, or the makespan, is also the maximum of the machines' completion times or the jobs' completion times. Therefore, the makespan is Cmax = max(C1, C2, . . ., Cm), where C1, C2, . . ., Cm are the completion times of machines 1, 2, . . ., m respectively.

One of the most important characteristics differentiating job shop scheduling from other types of scheduling is that the order in which a job is processed is taken into account. In JSSP, an order must be specified, as well as the amount of time that each job must spend at each machine. This significantly lowers the size of the search space, but makes the representation scheme used in a program potentially harder than for a normal scheduling problem. The problem itself is trying to have every job finished in as little time as possible; this is known as minimizing the makespan.

3. Hybrid parallel GA (PGA)

Recent years have seen more active research on parallel genetic algorithms (PGAs) being applied in solving difficult problems. Hard problems normally require a bigger population, and this implies the requirement of higher computational power. Hence, the initial motivation behind many early studies of PGAs was to reduce the processing time needed to reach an acceptable solution, and to develop the methods used to execute GAs on a parallel computer.

The effectiveness of GA is determined largely by the population size. As the population size increases, the probability of finding the global solution also increases. At the same time, the computation cost also increases, since it takes longer to finish an iteration (which is directly proportional to the population size). Contrary to sequential GA, PGAs have the ability to keep the quality of the results high and find them fast by using parallel machines, where a larger population can be processed in less time. This keeps the confidence factor high and the response time low, enabling genetic algorithms to be applied in time-constrained applications.

One of the most popular PGAs is the coarse-grained PGA, where the population is subdivided into a few subpopulations, keeping them relatively isolated from each other. This model of parallelization introduces an extra operator compared to normal GAs, the migration operator, used to send individuals from one subpopulation to another (i.e. roughly the same idea as the migration of humans from one country to another). In this research, we use the island model, where the population is partitioned into small subpopulations by geographical isolation and migration can happen between any two islands (subpopulations). The island model usually has several isolated subpopulations of individuals evolving in parallel, where each island does its own genetic operations and periodically shares its best individuals through migration.

For parallel implementation we consider M independent GAs running with independent memories, independent genetic operations and independent function evaluations. The M processes work normally, with the exception that the best individuals discovered in a generation are broadcast to the other subpopulations over a communication network. In this research, the island models proposed by Inoue et al. [29] were used, with some changes to suit the system and application. Two of the models are described in the next sections.

Fig. 1. Illustration of ACGA.

3.1. Asynchronous colony GA (ACGA)

In ACGA, there is a supervisor task (parent GA task) which starts a predetermined number of sub-GA tasks on a single PC. Instead of using a symmetrical multiprocessor (SMP) machine as implemented by Inoue et al., this research used a normal PC with only a single CPU to execute the sub-GAs. Each sub-GA, also termed a colony, consists of small populations which communicate among themselves, sharing information through asynchronous message passing.

The supervisor GA task has two responsibilities: firstly to handle its own version of GA, and secondly to store a duplicate of the best individual from its own "colonies" to a "global mailbox" (Figs. 1 and 2).

Fig. 2. AIGA structure.



Table 1
The operation of the global mailbox and the localized global mailbox in the proposed hybrid PGA.

1  Read the data set from the input file.
2  Set migration parameters.
4  For N generations, do the following:
   i.   Send ready to migrate/receive flag to island PC
   ii.  Send K chromosomes to an island PC
   iii. Delete all chromosomes in the mailbox
   iv.  Send ready to retrieve chromosomes flag to PC island
   v.   Retrieve new chromosomes of other population from the island PC
5  Go to step 4

Fig. 3. Proposed hybrid PGA.

3.2. Autonomous immigration GA (AIGA)

Taken from the conventional island model, this PGA model is based on the biologically reported observation that isolated environments, such as islands, often produce species that are more specifically adapted to the peculiarities of their environments than corresponding areas of wider surfaces. In this scheme, the population is subdivided into a few subpopulations on islands, keeping them relatively isolated from each other but able to communicate through a migration center, or in this case the global mailbox. This model of parallelization introduces an extra operator compared to normal GAs, the migration operator, used to send individuals from one subpopulation to another (i.e. roughly the same idea as the migration of humans from one country to another).

3.3. Proposed hybrid PGA model

A hybrid of ACGA and AIGA is proposed to form a new model of PGA, as illustrated in Fig. 3. In this model, each PC has three sub-GA tasks running, or colonies, with one of them set up to be the local host that handles immigration of individuals. Each colony can perform a different type of GA (i.e. different combinations of conventional GA and micro GA). It acquires the best solution of a neighbour colony by its own judgment and inserts it into its own population. Each colony only has one GA task running. In this way, migration of individuals occurs between the colonies as in the island model algorithm, but the evaluation of the individuals is handled in parallel. This approach does not introduce new analytical problems and can be useful when working with complex applications with objective functions that need a considerable amount of computation time.

Each PC contains an ACGA model and the connected PCs are modeled according to the AIGA model. Each PC is considered as an "island" while each "island" consists of three colonies. One of the three processes (colonies) in a PC (island) acts as a master that is responsible for communication among its own colonies and among other PCs. The master or supervisor task starts a number of predetermined sub-GA tasks, apart from being responsible for exchanging data between sub-GA tasks. It is also responsible for passing information to the global mailbox in the AIGA model. The global mailbox, on the other hand, is created in the AIGA model to handle migration among PCs.

There are four important functions used in the global mailbox: retrieving messages, inserting messages, deleting messages and getting information out of the mailbox. Both the localized mailbox and the global mailbox operate in a similar manner in terms of communication, as described in Table 1.
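A minimal sketch of this four-function interface, assuming an in-memory stand-in for the PVM-based mailbox (class and method names are illustrative assumptions, not the authors' code):

```python
# An in-memory stand-in for the global/localized mailbox interface: insert,
# retrieve, delete and get information. The real mailbox sits on top of
# asynchronous message passing; this sketch only shows the four operations.
class Mailbox:
    def __init__(self):
        self._messages = []          # each message: a batch of chromosomes

    def insert(self, chromosomes):   # an island deposits its migrants
        self._messages.append(list(chromosomes))

    def retrieve(self):              # an island collects a waiting batch (FIFO)
        return self._messages.pop(0) if self._messages else None

    def delete_all(self):            # clear the mailbox (step iii in Table 1)
        self._messages.clear()

    def info(self):                  # how many batches are currently waiting
        return len(self._messages)
```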

3.3.1. Migration

The use of the island model PGA requires some consideration of the migration operator. Migration of chromosomes or individuals is the key characteristic of the hybrid PGA. The process maintains genetic diversity by inserting offspring which have migrated from other subpopulations. Basically there are four main operators which require careful selection for effective implementation of the island model PGA: migration frequency, migration percentage, the migration selection scheme and the migration replacement scheme.

Migration frequency is the rate at which migration takes place. Several research results [15,16] have shown that there is a critical migration frequency below which the performance of the algorithm is obstructed by the isolation of the demes, and above which the partitioned population behaves as a panmictic one. Asynchronous migration schemes are well suited for the hybrid PGA, where migration is allowed at any moment independently of the evolution state of the subpopulations. This asynchronous behavior reflects the kind of migration that happens in nature, where diverse populations have distinct evolution paces.

The migration percentage is the fraction of individuals of each island's subpopulation that is randomly selected and sent to the global mailbox, where they are gathered. The global mailbox then redistributes the individuals randomly onto the different islands. The size of the global mailbox equals the migration size (the number of individuals that migrate from each island) times the number of islands. Some guidance on the choice of migration sizes and frequencies, based on experimental observations for multi-island PGA applications, can be found in [30].

In general, there are mainly two selection schemes: roulette wheel and tournament selection; both have similar effects on convergence. In the roulette wheel method, the fitness values of individuals represent the widths of the slots of the wheel, and selection is based on the slot widths of individuals. Individuals with larger slot widths have a higher probability of being selected. The roulette-wheel selection operator is employed in this work to allow chromosomes with a higher fitness value to have a higher chance of being selected.

Finally, the design of the PGA must consider the replacement scheme policy. The migration replacement scheme determines which individuals are removed from the population to make room for the individuals that are migrating into the population. In this case, the migrated individuals are not accepted unconditionally. The fitness of the migrated individuals is compared with the fitness of the individuals in the subpopulation and needs to meet the acceptance criteria. In this work, an island will only replace its individuals with the migrants if and only if the migrants have better fitness. The worst individuals will then be replaced by the migrants. It can be argued that this scheme may result in premature convergence of a GA. However, in this case, since each island is considered as a subpopulation, the scheme resembles elitism, which promotes faster convergence. The pseudo code for the migration process is given in Table 2.

Table 2
Pseudo code for the migration process.

Step 1   Initialize population
Step 2   Evaluate fitness of each individual in the population M
Step 3   While no termination
Step 4   Select individuals for crossover and mutation using roulette wheel
Step 5   Crossover and mutation
Step 6   Evaluate fitness of offspring
Step 7   Check flag for migration status. If yes go to step 8, otherwise go to step 14
Step 8   Select N best individuals for duplication
Step 9   Send signal to global mailbox of readiness to migrate
Step 10  Migration of N individuals
Step 11  Accept migrants from neighbouring PC through global mailbox
Step 12  Evaluate fitness of migrants
Step 13  Compare migrants with locals, replace worst locals
Step 14  Perform elitism
Step 15  Go to step 3
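A sketch of the migration and acceptance logic of steps 8–13 follows, under the assumption that fitness is expressed so that larger is better (for makespan minimization one would pass, e.g., the reciprocal of the makespan); the helper names are assumptions, not the authors' code:

```python
# Steps 8-13 of Table 2: duplicate the N best individuals for migration, and
# accept incoming migrants only when they beat the worst locals.
def select_emigrants(population, fitness, n):
    # Step 8: duplicate the N best individuals (larger fitness = better)
    best = sorted(population, key=fitness, reverse=True)[:n]
    return [list(ind) for ind in best]      # copies, so the colony keeps them

def accept_migrants(population, migrants, fitness):
    # Steps 11-13: migrants are not accepted unconditionally; each replaces
    # a worse local individual, or is discarded
    population.sort(key=fitness)            # worst locals come first
    for i, migrant in enumerate(migrants):
        if i < len(population) and fitness(migrant) > fitness(population[i]):
            population[i] = migrant
```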

4. Implementation of hybrid PGA for JSSP

4.1. Micro genetic algorithm (GA) for JSSP

To further reduce the computation time of the hybrid PGA, we propose the use of micro GA. Micro GA refers to a genetic algorithm that works on a small population and with re-initialization. The idea of micro GA was first suggested by Goldberg [31], who found that a population with a size of three individuals is enough to converge regardless of the length of the chromosome. The basic idea of micro GA is to work on a small population until nominal convergence, which means until all the individuals in the population have very similar chromosomes. The best individual is kept and transferred to a new population which is generated randomly. Micro GA was first implemented by Krishnakumar [32] with a population size of five. It has been shown that micro GA improves the relatively poor exploitation characteristics of conventional GA without affecting its strong exploration capabilities. The process flow of the micro GA is illustrated in Fig. 4.

Basically, as shown in Fig. 4, micro GA differs from conventional GA by the addition of an outer loop on top of the conventional GA, plus it works on a small population size. One of the main concerns with micro GA is premature convergence, which results in inconsistencies of the optimal values in different GA runs. However, with the set up of the parallel GA in the ACGA model, the search space of the micro GA is further expanded with the insertion of the best individuals from other sub-GAs, thereby reducing the probability of premature convergence.
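A sketch of this outer loop, with the evolution step, convergence test and random initializer left as assumed callables (this is a reading of the description above, not the authors' implementation):

```python
# Micro GA outer loop: evolve five individuals until nominal convergence,
# then restart from random chromosomes while keeping the best one found.
def micro_ga(random_individual, evolve, fitness, converged, generations=10000):
    population = [random_individual() for _ in range(5)]
    best = min(population, key=fitness)          # makespan: lower is better
    for _ in range(generations):
        population = evolve(population)          # selection, crossover, mutation
        best = min(population + [best], key=fitness)
        if converged(population):                # all chromosomes very similar:
            # re-initialization: keep the elite, regenerate the rest randomly
            population = [best] + [random_individual() for _ in range(4)]
    return best
```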

Fig. 4. Micro genetic algorithm.

4.2. Genetic representation

The representation chosen here is operation-based representation. The job shop schedule is encoded as a sequence of operations, with each operation represented by a gene. The genotype (chromosome) for a j × m problem is a linked list of j × m elements (records) of 'gene'. Each gene is designed such that it is large enough to hold the largest job number (j). Each gene contains all the information of a particular operation, such as job number, operation number, duration of the operation and the specified machine to perform the operation. These parameters are required in the building of a feasible schedule. The basic structure of a gene is summarized in Table 3. In Table 4, an example of a 3 × 2 chromosome is given; in this case, there are 3 jobs and 2 machines. In this representation, various pieces of information are stored, such as the start time, end time and also the accumulated time of the previous gene. The information in Table 4 can be decoded as in Fig. 5.


Table 3
Basic structure of a gene representation.

Job ID               a
Task ID (operation)  b
Machine ID           c
Duration             d

Table 4
Example of a 3 × 2 chromosome based on 3 jobs and 2 machines.

Job ID               1   3   2   3   1   2
Task ID (operation)  1   1   1   2   2   2
Machine ID           1   2   2   1   2   1
Duration (DU)       10  15  13   8   6   9
Start time (SD)      0   0  15  25  28  25
End time (ET)       10  15  28  33  34  34
Accumulated (ACCU)  10  15  28  33  34  34
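A sketch of this gene record (field names are assumptions chosen to mirror Tables 3 and 4; the timing fields are filled in only when a schedule is evaluated):

```python
from dataclasses import dataclass

# One gene of the operation-based representation; a chromosome is simply a
# list of genes in operation order.
@dataclass
class Gene:
    job_id: int          # a
    task_id: int         # b, the operation number within the job
    machine_id: int      # c
    duration: int        # d (DU)
    start_time: int = 0  # SD, set during schedule evaluation
    end_time: int = 0    # ET
    accumulated: int = 0 # ACCU, running time carried over from earlier genes

# The first two genes of the Table 4 chromosome: job 1 op 1 on machine 1,
# then job 3 op 1 on machine 2.
chromosome = [Gene(1, 1, 1, 10), Gene(3, 1, 2, 15)]
```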


Fig. 5. The makespan of the JSSP described in Table 4.

Since the size of the structure representation is very flexible (depending on the data), overflow is not an issue as long as the physical memory of the system is large enough. Information such as operation start and end times is added later into the gene when required in the evaluation of a schedule. This direct representation has many advantages over indirect representations such as the binary representation and float or real number representation. When using an indirect representation such as binary representation, we have to take into consideration the number of bits used to represent each task. Each time a crossover occurs, the bits after the crossover point change their meaning. This needs extra computational time to modify some of the bit string so that it stays within the range of the representation. Using the direct representation described in this paper, the range problem and invalid representation of code do not arise. Therefore the process is straightforward and modifications of the algorithm can be made more easily.

4.3. Fitness

The fitness function basically determines which possible solutions get passed on to multiply and mutate into the next generation of solutions. In the context of JSSP, there are various ways of defining the fitness function for the problem. In this paper, the fitness function chosen is the maximum completion time, Cmax, which is also known as the total production time or makespan. This is calculated by comparing the final end time (completion time) of each machine and taking the largest value to be the makespan. The fitness value of each chromosome is calculated according to the start time, end time and also the accumulated time of the previous gene, as shown in Table 4.

4.4. Selection method

A good selection method impacts the performance of a GA by increasing the speed of reaching an optimal or near-optimal solution. The most common selection method used is roulette wheel selection. This method depends on the individual fitness fi and the total fitness of the population. The probability of each chromosome being selected is given as:

pselected = fi / Σ fi

where the sum is taken over all individuals in the population.
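A sketch of this selection rule follows; mapping makespan to fitness by taking the reciprocal is one common choice and an assumption here, since the paper only requires that fitter chromosomes get wider slots:

```python
import random

# Roulette wheel selection with p_i = f_i / sum(f): the wheel slot of each
# individual is proportional to its fitness value.
def roulette_select(population, fitness_values):
    total = sum(fitness_values)
    r = random.uniform(0, total)         # a point on the wheel
    cumulative = 0.0
    for individual, f in zip(population, fitness_values):
        cumulative += f                  # slot width proportional to f_i
        if r <= cumulative:
            return individual
    return population[-1]                # guard against floating-point rounding

chromosomes = ["c1", "c2", "c3"]            # stand-ins for three schedules
makespans = [940, 930, 944]                 # illustrative values
fitness = [1.0 / m for m in makespans]      # smaller makespan -> wider slot
parent = roulette_select(chromosomes, fitness)
```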

4.5. Crossover operator

Crossover can be considered the most important element of GA. The crossover operator used in this work is the Giffler and Thompson algorithm-based crossover (GT) [33]. GT is a crossover technique modified from the Giffler and Thompson algorithm. This algorithm uses two parents to produce an offspring and is based on a tree-structure approach. Two parents are chosen from the population through roulette wheel selection and the crossover process is carried out according to the steps shown in Table 5.

Table 5
Crossover operation.

Step 1  Let S include all operations with no predecessors initially.
Step 2  Determine Ø* = min {Øji | oji ∈ S} and the machine r* on which Ø* could be realized, where Ø* is the earliest time at which an operation oji from the set S can be completed, and oji is operation i of job j.
Step 3  Let Gr include all operations oji ∈ S that require machine r*.
Step 4  Choose one of the operations from Gr as follows:
        • Generate a random number ε ∈ [0, 1] and compare it with the mutation rate Pm; if ε < Pm, choose an arbitrary operation from Gr as oji* according to the randomized heuristic rules.
        • Otherwise select one parent with equal probability, say parent Ps; find the operation oji* which was scheduled earliest in Ps among all the operations in Gr.
        Schedule oji* in the offspring according to Øji.
Step 5  Update S as follows:
        • Remove operation oji* from S.
        • Add the direct successor of operation oji* to S.
Step 6  Return to step 2 until a complete schedule is generated.
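The sketch below is one assumption-level reading of these steps, not the authors' implementation; in particular, the conflict set Gr is simplified to all schedulable operations requiring machine r*:

```python
import random

# GT crossover sketch. `jobs` maps a job id to its ordered list of
# (machine, duration); parents are operation sequences of (job, op_index).
def gt_crossover(jobs, parent_a, parent_b, pm):
    next_op = {j: 0 for j in jobs}        # S: first unscheduled op of each job
    job_ready = {j: 0 for j in jobs}
    machine_ready = {}
    offspring = []
    total_ops = sum(len(ops) for ops in jobs.values())

    def phi(j):                           # earliest completion of job j's next op
        machine, dur = jobs[j][next_op[j]]
        return max(job_ready[j], machine_ready.get(machine, 0)) + dur

    while len(offspring) < total_ops:
        schedulable = [j for j in jobs if next_op[j] < len(jobs[j])]
        j_star = min(schedulable, key=phi)               # Step 2: minimum phi
        r_star = jobs[j_star][next_op[j_star]][0]        # machine realizing it
        conflict = [j for j in schedulable
                    if jobs[j][next_op[j]][0] == r_star]  # Step 3: Gr
        if random.random() < pm:                          # Step 4, first bullet
            chosen = random.choice(conflict)
        else:                                             # Step 4, second bullet
            parent = random.choice((parent_a, parent_b))
            pos = {op: i for i, op in enumerate(parent)}
            chosen = min(conflict, key=lambda j: pos[(j, next_op[j])])
        machine, dur = jobs[chosen][next_op[chosen]]      # Step 5: schedule it
        start = max(job_ready[chosen], machine_ready.get(machine, 0))
        job_ready[chosen] = machine_ready[machine] = start + dur
        offspring.append((chosen, next_op[chosen]))
        next_op[chosen] += 1
    return offspring
```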

4.6. Building a schedule

Generally, an operation is chosen from a list of schedulable operations generated earlier. A schedulable operation here means an operation whose preceding operations have already been scheduled. Normally the problem is to select an operation from the list of schedulable operations. This process is performed in the initialization and re-initialization processes.

In this schedule builder, we only consider three types of feasible schedule, namely semi-active, active and non-delay schedules. Using this schedule builder, the possibility of an infeasible schedule occurring is zero; therefore, no schedule repair function is needed. A semi-active schedule is a schedule with no excess idle time. Using the forward shifting technique, we can improve a semi-active schedule into an active schedule (a schedule that allows no such shift process). The optimal schedule is in the set of active schedules. The set of active schedules is a superset of the non-delay schedules (schedules where a machine is never kept idle if some operation is able to be processed). However, the optimal schedule is not necessarily a non-delay schedule, even though a non-delay schedule is normally a near-optimal one.

The process of the random schedule builder is presented in Table 6. Although the algorithm is straightforward and simple, it is guaranteed to generate a feasible schedule, be it semi-active, active or non-delay.


Table 6
Steps of the random schedule builder.



5. Results and discussion

In this paper, the algorithm is applied using 10 instances from the OR library, which can be obtained from [34] or downloaded from the OR library at http://mscmga.ms.ic.ac.uk/info.html. The instances chosen are by no means an exhaustive list of all available problems, but they represent different difficulty levels and have been used as benchmark problems by many researchers, in particular [28,35–37] among others. The instances cover various series, such as the FT series contributed by Fisher and Thompson [38], the LA series contributed by Lawrence [39] and the Orb series contributed by Applegate and Cook [40].

The proposed hybrid PGA is implemented on a four-computer system configuration through a 100 Mbps Ethernet line using a hub. Each PC has an ACGA configuration and micro GA is used for each sub-GA. Parallel virtual machine (PVM) is used as the middleware for communication, spawning new programs and creating the global mailbox. Various experiments have been performed to determine the parameters for the GA and micro GA, and the parameters which give the best performance have been selected for both, taking into consideration the computation time and the consistency of the results. Full details of the experimental results for the selection of the parameters can be found in [30]. For the micro GA, the Giffler and Thompson crossover (GT) is used with a rate of 0.9, while the mutation rate is set to 0.4. Micro GA is run on a population of five. For experiments involving conventional GA, all other parameters are the same as for the micro GA, except for the mutation rate, which is set lower at 0.05, and the population, which is set at 100.

In order to determine the performance of the hybrid parallel GA, we use the quality index as proposed by [37]. The quality of a solution is measured with respect to the average relative percentage of error (ARPE) index over the number of repetitions (runs), which is calculated as follows:

ARPE = (1/n) Σ ((xb − xo) / xo) × 100

where xb is the best makespan found in a run, xo is either the optimum or the lowest boundary known for unknown optimum values, and n is the number of runs. Another index used is the hitting ratio (HR), calculated as the ratio between the number of times the optimum is hit and the total number of repetitions. Several experiments are performed to study the performance of the proposed hybrid parallel GA.
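Under this reading of the indices, a short sketch (with illustrative run data, not results from the paper):

```python
# The two quality indices, assuming the averaging reconstructed above: ARPE
# averages the relative error of each run's best makespan x_b against the
# optimum (or best known lower bound) x_o; HR counts optimum hits.
def arpe(best_makespans, optimum):
    n = len(best_makespans)
    return sum((xb - optimum) / optimum * 100 for xb in best_makespans) / n

def hitting_ratio(best_makespans, optimum):
    hits = sum(1 for xb in best_makespans if xb == optimum)
    return hits / len(best_makespans)

runs = [930, 930, 940, 930, 945]   # best makespan found in each of 5 runs
print(arpe(runs, 930), hitting_ratio(runs, 930))
```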

.1. Experiment 1 – effect of migration rates

This experiment is carried out on the four networked PCs with different migration rates, so as to analyze the effect of the migration rate on the population fitness. In this paper, we only consider three migration rates: high (migration occurs after every generation), medium (after every 200 generations) and low (after every 500 generations). We have chosen instances which are similar and of medium complexity for this experiment in order to have more precise results. The instances are ft10, la02, la03, la04 and la05. In view of the fact that GA is a stochastic search algorithm and inconsistencies of results occur in different runs, we ran the experiments 10 times; the average, maximum and minimum makespans are tabulated in Table 7.

Table 7
Results on multiple migration rates over 10 experimental runs (makespan over 10 runs).

Dataset                 High   Medium  Low
ft10(10 × 10)   Min.    940    930     944
                Ave.    944    932     950
                Max.    952    938     956
ft20(5 × 20)    Min.    1165   1165    1180
                Ave.    1181   1167    1187
                Max.    1195   1170    1195
la02(10 × 5)    Min.    675    660     666
                Ave.    678    663     672
                Max.    682    670     678
la16(15 × 5)    Min.    952    945     950
                Ave.    958    952     956
                Max.    965    956     962

From Table 7, it is observed that for the same parameter setting, the algorithm performs best with the medium migration rate (migration of individuals every 200 generations). Having a high migration rate contributes to behavior similar to global parallelization, which means that all four subpopulations act as one big population with individuals migrating to other subpopulations after every generation. Therefore, when there is an outstanding individual in one of the subpopulations, it tends to spread and dominate the whole population within a few generations and at the same time eliminates the less fit individuals; in other words, the whole population becomes interspersed with the genetic material of only a few individuals. Generally, from the results, a high migration rate degrades the performance of the algorithm.

With a low migration rate, where exchange of individuals occurs every 500 generations, the results are generally better than with a high migration rate. However, a low migration rate prevents the local population from improving further after it has reached a local optimum for a long time. Lowering the migration rate creates an isolated population which is totally different from the other subpopulations. This behavior contributes to the diversity in the other subpopulations after migration, yet at the same time it resembles the running of a conventional GA with the occasional appearance of randomly generated individuals. The improvement shown with this migration rate setting is very slow and time wasting.

5.2. Experiment 2 – comparing the performance of PGA and conventional GA

In this experiment, we compare the performance of three types of GA implementation on the job shop scheduling instances. The three types of GA implementation are (1) the conventional GA, where only one PC is used to compute the job shop schedule, (2) the micro GA, where only one PC is used but the GA is run on a small population as described in Section 4.1, and (3) the hybrid PGA using micro GA as the sub-GA, where four PCs are used and the number of micro GAs in each PC is kept at two. The migration rate is set to medium. Extensive experiments have been conducted to determine the parameters for the GAs, of which the best combination of parameters is chosen and shown in Table 8.

Table 8
Parameters of the hybrid parallel GA/GA/micro GA.

                                      GA                Micro GA          Hybrid PGA
Migration interval (generations)      –                 –                 200
Number of migration individuals       –                 –                 Top 20% best
Selection of migration individuals    Elitist strategy  Elitist strategy  Elitist strategy
Mutation probability                  0.05              0.05              0.4
Crossover probability                 0.9               0.9               0.9

Results are collected for the instances ft10(10 × 10), ft20(5 × 20), la01(10 × 5), la02(10 × 5), orb04(10 × 10), la16(15 × 5), la21(15 × 10), orb01(10 × 10), la29(20 × 10), la31(30 × 10) and la36(15 × 15). We collected the result every 50 generations for a maximum of 10,000 generations in order to observe the behavior of the three types of GA. Comparisons are made among the GA configurations based on 10 runs. The quality indices ARPE and HR are used as the basis of comparison, as shown in Table 9.

Table 9
Quality index based on ARPE and HR of the makespan of the instances (x = minimum makespan obtained).

                 Best known   GA                    Micro GA              Hybrid PGA
Instances        makespan     ARPE   HR    x        ARPE   HR    x        ARPE   HR    x
ft10(10 × 10)    930          1.61   0.42  945      1.08   0.11  940      0.0    0.80  930
ft20(5 × 20)     1165         2.92   0.54  1199     1.29   0.26  1180     0.0    0.74  1165
la02(10 × 5)     660          1.82   0.39  672      0.76   0.35  665      0.3    0.60  662
orb04(10 × 10)   1005         2.35   0.37  611      1.01   0.36  603      0.5    0.65  600
la16(15 × 5)     945          1.9    0.49  963      0.74   0.11  952      0.0    0.72  945
la21(15 × 10)    1046         2.2    0.65  1070     1.34   0.51  1060     0.19   0.55  1048
orb01(10 × 10)   1059         2.93   0.66  1090     1.51   0.17  1075     0.28   0.55  1055
la29(20 × 10)    1153         8.41   0.38  1250     8.0    0.2   1246     0.0    0.68  1153
la31(30 × 10)    1888         10.17  0.37  2080     8.36   0.1   2046     0.11   0.58  1890
la36(15 × 15)    1268         6.47   0.35  1350     5.89   0.1   1330     0.32   0.56  1272

From Table 9, it can be seen that the ARPE of the hybrid PGA is the best compared to the conventional GA and micro GA. The index shows that the hybrid parallel GA is able to reach the optimal solution within the number of generations run. The conventional GA and micro GA fare quite well for less complex instances such as ft10, ft20 and la16, but did badly on the more complex instances. For example, for the instances la29 and la31, both the conventional GA and micro GA have an ARPE index of more than 8, although the micro GA is slightly better than the GA. This may be due to the slow convergence of these algorithms compared to the hybrid parallel GA, as the GAs are run for only 10,000 generations. The use of micro GA in the hybrid parallel GA does not result in local optima, due to the effect of the migration. In fact, the migration of individuals among the micro GAs helps to maintain the exploration capability of the micro GA, while the selection and evaluation process of accepting or rejecting immigrant individuals preserves the exploitation capability. As the evolution progresses, more and more good candidates exist in the next generation, narrowing the search space so that fast convergence can be achieved. The probability that the best individuals found are kept in the next generation becomes larger, as is evidenced by the HR index. The index shows that the hybrid parallel GA has much better quality compared to the GA and micro GA, and that the micro GA on its own, using just one PC, is very inconsistent in giving the minimum makespan over the 10 runs.

For the purpose of showing the behavior of the convergence point, several examples of the convergence curves of the algorithms over the first 2000 generations of a run are given in Fig. 6. Micro GA has a tendency to stop at a certain convergence point (a local optimum) for an average of 500 generations before it starts to improve again. Conventional GA showed a slow but steady improvement of the population and is more capable of avoiding the local optimum point.

Fig. 6. Makespan versus time for conventional GA, micro GA and hybrid parallel GA for (a) Orb04, (b) la16 datasets.

Another set of experiments was conducted to show the difference in computation time for the three methods. Table 10 shows the tabulation of results over 5 runs of the speed up factor for the GA and the hybrid PGA. The time taken for the calculation of the speed up is the time when the first minimum makespan appears before the start of convergence. The speed up factor is a measure of relative performance between a multiprocessor system and a single processor system, and is given by:

S(n) = (execution time using one processor (single processor system)) / (execution time using a multiprocessor with n processors) = ts / tp

where in this case the one-processor system refers to the conventional GA or micro GA and the multiprocessor system refers to the hybrid PGA. In this experiment set, we chose only 5 of the 10 instances, those which give a reasonable convergence time for the GA and micro GA. From the results, for almost all instances the average speed up of the hybrid PGA over the GA or micro GA is more than a factor of 7. For more complicated instances such as ft20 or la16, the speed up is high at about 9 for GA and slightly less for micro GA. The speed up relates to the convergence time of the method, as the CPU time taken to calculate the speed up factor is the time at which the curve starts to converge. In this case, the hybrid parallel GA with 3 islands and two colonies on each island can speed up the CPU time more than 7 times for most runs. Moreover, the minimum makespan obtained from the hybrid PGA is better compared to the other two methods, as shown in Table 10. The addition of more processors in the hybrid PGA reduces execution time without sacrificing solution quality.

Table 10
Comparison of speed up factor.

                    GA/hybrid PGA                               Micro GA/hybrid PGA
Instances           Run 1  Run 2  Run 3  Run 4  Run 5  Ave.     Run 1  Run 2  Run 3  Run 4  Run 5  Ave.
ft10(10 × 10)       7.7    6.8    8.1    7.5    7.4    7.5      6.9    6.8    7.4    7.6    7.4    7.22
ft20(5 × 20)        8.6    8.6    9.2    9.1    9.3    8.8      8.4    8.6    9.1    9.1    7.8    8.6
la02(10 × 5)        7.8    7.1    7.3    6.8    6.9    7.184    7.3    7.1    7.3    6.8    7      7.1
orb04(10 × 10)      8.6    9.2    9.1    9.5    8.8    9.04     8.4    9      9.1    9.2    8.8    8.9
la16(15 × 5)        9.1    9.2    9.1    9.3    8.9    9.12     9.2    9.1    9      9.3    8.9    9.1

5.3. Experiment 3 – number of running sub-GA

In this experiment, we study the effect of the number of sub-GA programs in all PCs on the performance of the GA. We perform the test on 10 instances over 10 runs each. We compare the performance of the hybrid parallel GA with two GAs, three GAs and four GAs in each PC. Table 11 shows the complete results for the 10 instances used when the number of sub-GAs is varied. The results are the quality indices ARPE and HR taken over 10 runs.

The ARPE index shows there is not much difference in the results for the different numbers of sub-GAs on each island. However, the HR index as well as the speed up factor indicate that increasing the number of sub-GAs on the islands improves both of these measures. For example, the HR is highest at 0.763 for 4 sub-GAs as compared to 0.643 for 2 sub-GAs. In the case of the speed up factor, the use of 4 sub-GAs increases the speed up factor by 35% as compared to when only 2 sub-GAs are used in each island. The result is expected: the more sub-GAs available, the more individuals are migrated among the sub-GAs, expanding the exploration of the GAs and increasing the probability of fitter individuals being found, thereby increasing the convergence rate. However, if time is not of the essence for the problem, then the use of 2 sub-GAs is adequate, as the minimum makespans obtained are almost the same.



Table 11
Quality index based on ARPE and HR of the makespan of the instances for the hybrid parallel GA.

                 Best known   2 sub-GA                 3 sub-GA                 4 sub-GA
Instances        makespan     ARPE   HR    Speed up    ARPE   HR    Speed up    ARPE   HR    Speed up
ft10(10 × 10)    930          0.01   0.80  7.5         0.01   0.63  10.1        0.0    0.8   11.2
ft20(5 × 20)     1165         0.0    0.74  8.8         0.0    0.7   9.95        0.0    0.74  10.54
la02(10 × 5)     660          0.32   0.60  7.184       0.3    0.82  8.9         0.3    0.85  8.87
orb04(10 × 10)   1005         0.5    0.65  9.04        0.49   0.86  12.1        0.5    0.81  11.98
la16(15 × 5)     945          0.02   0.72  9.12        0.02   0.65  13.2        0.0    0.72  13.3
la21(15 × 10)    1046         0.16   0.55  7.5         0.2    0.71  9.98        0.19   0.72  10.2
orb01(10 × 10)   1059         0.25   0.55  –           0.25   0.72  –           0.28   0.75  –
la29(20 × 10)    1153         0.02   0.68  –           0.01   0.67  –           0.0    0.68  –
la31(30 × 10)    1888         0.10   0.58  –           0.10   0.69  –           0.11   0.78  –
la36(15 × 15)    1268         0.33   0.56  –           0.32   0.7   –           0.32   0.76  –
Average                       0.171  0.643 8.2         0.17   0.715 10.7        0.171  0.761 11.1


6. Conclusion

In this paper, a hybrid PGA has been proposed which is based on asynchronous colony GA (ACGA) and autonomous immigration GA (AIGA). Results have shown that the proposed hybrid PGA is able to obtain the optimum or a useful near-optimum result. The quality indices adopted, ARPE and HR, demonstrate the capability of the method in terms of the consistency in obtaining the optimal solutions as well as the quality of the makespan. The use of micro GA as the sub-GA of the hybrid parallel GA contributes further to the faster convergence to optimal solutions. The hybrid PGA focuses on achieving a higher level of efficiency via parallelism to get a higher quality of solution within a far shorter time. Moreover, the use of micro GA in this manner managed to avoid the problem of local optima and ensures the retention of genetic diversity, due to the migration of populations from one island to another and the passing of individuals from one sub-GA to another.

In order to improve the work, more research has to be done in three main areas: scalability, robustness and adaptability. Investigations can be done on how scalability, which is the change of speed up with respect to the number of islands, helps to solve larger JSS problems. In the case of robustness, more tests can be done to investigate how well the system recovers from errors (e.g. processors crashing mid-computation). Adaptability and transferability may be two of the most important aspects of the continuation of this research. Investigations can be done to determine how well the system can integrate heterogeneous resources, that is, processors of different speed and memory characteristics or network connection qualities, and how the system runs in a totally different environment.

References

[1] J.H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, 1975.
[2] D. Applegate, W. Cook, A computational study of job-shop scheduling, ORSA J. Comput. 3 (1991) 149–156.


[3] J. Carlier, E. Pison, An algorithm for solving the job-shop problem, Manage. Sci. 35 (1989) 164–176.
[4] M.E. Aydin, T.C. Fogarty, A distributed evolutionary simulated annealing algorithm for combinatorial optimisation problems, J. Heuristics 10 (2004) 269–292.
[5] M. Kolonko, Some new results on simulated annealing applied to the job shop scheduling problem, Eur. J. Oper. Res. 113 (1999) 123–136.
[6] T. Satake, K. Morikawa, K. Takahashi, N. Nakamura, Simulated annealing approach for minimizing the makespan of the general job-shop, Int. J. Prod. Econ. 60 (1999) 515–522.
[7] J.F. Goncalves, J.J.M. Mendes, M.G.C. Resende, A hybrid genetic algorithm for the job shop scheduling problem, Eur. J. Oper. Res. 167 (2005) 77–95.
[8] M.F. Hussain, S.B. Joshi, A genetic algorithm for job shop scheduling problems with alternate routing, in: Proceedings of the 1998 IEEE International Conference on Systems, Man, and Cybernetics, vol. 3, 1998, pp. 2225–2230.
[9] L. Liu, Y. Xi, A hybrid genetic algorithm for job shop scheduling problem to minimize makespan, in: Proceedings of the Sixth World Congress on Intelligent Control and Automation, 2006, pp. 3709–3713.
[10] J.C.H. Pan, H.C. Huang, A hybrid genetic algorithm for no-wait job shop scheduling problems, Expert Syst. Appl. 36 (2009) 5800–5806.
[11] L.J. Park, C.H. Park, Genetic algorithm for job shop scheduling problems based on two representational schemes, Electron. Lett. (1995) 2051–2053.
[12] C. Blum, M. Sampels, An ant colony optimization algorithm for shop scheduling problems, J. Math. Model. Algorithms 3 (2004) 285–308.
[13] A. Colorni, M. Dorigo, V. Maniezzo, M. Trubian, Ant system for job-shop scheduling, Belg. J. Oper. Res. Stat. Comput. Sci. (JORBEL) 34 (1994) 39–53.
[14] V. Oduguwa, A. Tiwari, R. Roy, Evolutionary computing in manufacturing industry: an overview of recent applications, Appl. Soft Comput. 5 (2005) 281–299.
[15] P.B. Grosso, Computer simulations of genetic adaptation: parallel subcomponent interaction in a multilocus model, Unpublished Doctoral Dissertation, The University of Michigan, 1985.
[16] R. Tanese, Parallel genetic algorithm for a hypercube, in: Proceedings of the Second International Conference on Genetic Algorithms, 1987, pp. 177–183.
[17] R. Bianchini, C.M. Brown, Parallel genetic algorithms on distributed-memory architectures, in: Transputer Research and Applications 6, IOS Press, 1993, pp. 67–82.
[18] J.R. Koza, D. Andre, Parallel genetic programming on a network of transputers, Tech. Rep. No. STAN-CS-TR-95-1542, Stanford University, Stanford, CA, 1995.
[19] M. Kirley, A coevolutionary genetic algorithm for job scheduling problems, in: Proceedings of the 1999 Third International Conference on Knowledge-based Intelligent Information Engineering Systems, 1999, pp. 84–87.
[20] H. Zhang, R. Chen, Research on coarse-grained parallel genetic algorithm based grid job scheduling, in: Proceedings of the Fourth International Conference on Semantics, Knowledge and Grid, 2008, pp. 505–506.
[21] B.J. Park, H.R. Choi, H.S. Kim, A hybrid genetic algorithm for the job shop scheduling problems, Comput. Ind. Eng. 45 (2003) 597–613.



[22] F.M. Defersha, M. Chen, A coarse-grain parallel genetic algorithm for flexible job-shop scheduling with lot streaming, in: Proceedings of the 2009 International Conference on Computational Science and Engineering, 2009, pp. 201–208.
[23] T. Zhao, Z. Man, Z. Wan, G. Bi, A CGS-MSM parallel genetic algorithm based on multi-agent, in: Proceedings of the 2nd International Conference on Genetic and Evolutionary Computing, 2008, pp. 10–13.
[24] B.T. Skinner, H.T. Nguyen, D.K. Liu, Hybrid optimisation using PGA and SQP algorithm, in: Proceedings of the 2007 IEEE Symposium on Foundations of Computational Intelligence (FOCI 2007), 2007, pp. 73–80.
[25] M. Tang, R.Y.K. Lau, A parallel genetic algorithm for floorplan area optimization, in: Proceedings of the 7th International Conference on Intelligent Systems Design and Applications, 2007, pp. 801–806.
[26] S. Yussof, R.A. Razali, O.H. See, A. Abdul Ghapar, M. Md Din, A coarse-grained parallel genetic algorithm with migration for shortest path routing problem, in: Proceedings of the 11th IEEE International Conference on High Performance Computing and Communications, 2009, pp. 615–621.
[27] G. Liu, J. Zhang, R. Gao, Y. Sun, An improved parallel genetic algorithm based on injection island approach and K1 triangulation for the optimal design of the flexible multi-body model vehicle suspensions, in: Proceedings of the 2009 ISECS International Colloquium on Computing, Communication, Control, and Management, 2009, pp. 30–33.
[28] M.E. Aydin, T.C. Fogarty, A distributed evolutionary simulated annealing algorithm for combinatorial optimisation problems, J. Heuristics 10 (2004) 269–292.
[29] H. Inoue, Y. Funyu, K. Kishino, T. Jinguji, M. Shiozawa, S. Yoshikawa, T. Nakao, Development of artificial life based optimization system, in: Proceedings of the Eighth International Conference on Parallel and Distributed Systems, 2001, pp. 429–436.
[30] Z. Skolicki, K. De Jong, The influence of migration sizes and intervals on island models, in: GECCO'05, 2005, pp. 1295–1302.
[31] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Menlo Park, CA, 1988.
[32] K. Krishnakumar, Micro-genetic algorithms for stationary and non-stationary function optimization, in: SPIE Proceedings, Intelligent Control and Adaptive Systems, 1989, pp. 289–296.
[33] B. Giffler, G.L. Thompson, Algorithms for solving production-scheduling problems, Oper. Res. 8 (1960) 487–503.
[34] J.E. Beasley, Obtaining test problems via internet, J. Glob. Optim. 8 (1996) 429–433, http://mscmga.ms.ic.ac.uk/info.html.
[35] D.-W. Huang, J. Lin, Scaling populations of a genetic algorithm for job shop scheduling problems using MapReduce, in: Proceedings of the 1st International Workshop on Theory and Practice of MapReduce (MAPRED'2010), Indiana, United States, November 2010.
[36] M. Sevkli, M.E. Aydin, Variable neighbourhood search for job shop scheduling problems, J. Softw. 1 (August (2)) (2006).
[37] M. Sevkli, M.E. Aydin, Parallel variable neighbourhood search for job shop scheduling, IMA J. Manage. Math. 18 (2007) 117–133.
[38] H. Fisher, G.L. Thompson, Probabilistic learning combinations of local job-shop scheduling rules, in: J.F. Muth, G.L. Thompson (Eds.), Industrial Scheduling, Prentice Hall, Englewood Cliffs, NJ, 1963, pp. 225–251.
[39] S. Lawrence, Resource Constrained Project Scheduling: An Experimental Investigation of Heuristic Scheduling Techniques (Supplement), Carnegie-Mellon University, Pittsburgh, PA, 1984.
[40] D. Applegate, W. Cook, A computational study of the job-shop scheduling instance, ORSA J. Comput. 3 (1991) 149–156.

Rubiyah Yusof [PhD, Professor] is the Director of the Centre for Artificial Intelligence and Robotics (CAIRO) of Universiti Teknologi Malaysia in Kuala Lumpur, Malaysia. She obtained her first degree, B.Sc. (Electrical and Electronics) Eng (Hons.), from the University of Loughborough, United Kingdom in 1983 and pursued her Masters degree in Control Systems at Cranfield Inst. of Tech., United Kingdom in 1986. She obtained her PhD in Control Systems from the University of Tokushima, Japan in 1994. Her expertise in adaptive control, artificial intelligence and its applications, and biometrics applications has earned Prof. Rubiyah various awards from national and international organizations for her research work and projects.

Marzuki Khalid [PhD, Professor] is currently the Deputy Vice Chancellor for Research and Innovation at Universiti Teknologi Malaysia in Johor Bahru, Malaysia. He obtained his first Diploma in Electrical (Power) Engineering with First Class Honours from Universiti Teknologi Malaysia in 1980, his Bachelor degree in Electrical Engineering from the University of Southampton, UK in 1983, his Masters degree in Control Systems from Cranfield Inst. of Tech., UK in 1986, and his PhD in Control Systems from the University of Tokushima, Japan in 1994. His expertise in various electrical engineering fields, especially in intelligent control systems, has gained him various awards from national and international bodies, including academic and industrial organizations.

Syafawati Md Yusof has worked as a tutor in the Centre for Artificial Intelligence and Robotics (CAIRO) of Universiti Teknologi Malaysia. She obtained her Bachelor and Masters degree of MEng & ACGI in Electrical and Electronic Engineering from Imperial College London, United Kingdom in June 2009 with First Class Honours.