A comparison of machine-learning algorithms for dynamic scheduling of flexible manufacturing systems


Engineering Applications of Artificial Intelligence 19 (2006) 247-255
www.elsevier.com/locate/engappai
doi:10.1016/j.engappai.2005.09.009

Paolo Priore, David de la Fuente, Javier Puente, Jose Parreno
Escuela Tecnica Superior de Ingenieros Industriales, Universidad de Oviedo, Campus de Viesques, 33203 Gijon, Spain

Received 3 November 2004; received in revised form 16 September 2005; accepted 26 September 2005. Available online November 2005.

Corresponding author: P. Priore. Tel.: +0034985182107; fax: +0034985182010. E-mail address: priore@epsig.uniovi.es.

Abstract

Dispatching rules are frequently used to schedule jobs in flexible manufacturing systems (FMSs) dynamically. A drawback, however, to using dispatching rules is that their performance depends on the state of the system, and no single rule exists that is superior to all the others for all the possible states the system might be in. This drawback would be eliminated if the best rule for each particular situation could be used. To do this, this paper presents a scheduling approach that employs machine learning. Using this technique, and by analysing the earlier performance of the system, scheduling knowledge is obtained whereby the right dispatching rule at each particular moment can be determined. Three different types of machine-learning algorithms are used and compared in the paper to obtain scheduling knowledge: inductive learning, backpropagation neural networks, and case-based reasoning (CBR). A module that generates new control attributes, allowing better identification of the manufacturing system's state at any particular moment in time, is also designed in order to improve the scheduling knowledge that is obtained. Simulation results indicate that the proposed approach produces significant performance improvements over existing dispatching rules.

(c) 2005 Elsevier Ltd. All rights reserved.

    Keywords: Knowledge-based systems; Scheduling; FMS

    1. Introduction

The different approaches available to solve the problem of flexible manufacturing system (FMS) scheduling can be divided into the following categories: the analytical, the heuristic, the simulation-based and the artificial intelligence-based approaches. The analytical approach considers an FMS scheduling problem as an optimisation model with an objective function and explicit constraints, and an appropriate algorithm resolves the model (see for example, Stecke, 1983; Kimemia and Gershwin, 1985; Shanker and Tzen, 1985; Lashkari et al., 1987; Han et al., 1989; Hutchison et al., 1989; Shanker and Rajamarthandan, 1989; Wilson, 1989). In general, these problems are of an NP-complete type (Garey and Johnson, 1979), so heuristic and off-line algorithms are usually used (Cho and Wysk, 1993; Chen and Yih, 1996). The problem is that these analytical models include simplifications that are not always valid in practice. Moreover, they are not efficient for reasonably large-scale problems.

These difficulties led to research into many heuristic approaches, which are usually dispatching rules. These heuristics employ different priority schemes to order the diverse jobs competing for the use of a machine. A priority index is assigned to each job and the one with the lowest index is selected first. Many researchers (see for example, Panwalkar and Iskander, 1977; Blackstone et al., 1982; Baker, 1984; Russel et al., 1987; Vepsalainen and Morton, 1987; Ramasesh, 1990; Kim, 1990) have assessed the performance of these dispatching rules on manufacturing systems using simulation, concluding that performance depends on several factors, such as the due date tightness (Baker, 1984), the system's configuration, the workload, and so on (Cho and Wysk, 1993). FMSs led to many studies evaluating the performance of dispatching rules in these systems (see for example, Stecke and Solberg, 1981; Egbelu and Tanchoco, 1984; Denzler and Boe, 1987; Choi and Malstrom, 1988; Henneke and Choi, 1990; Montazeri and Wassenhove, 1990; Tang et al., 1993).

Given the variable performance of dispatching rules, it would be interesting to change these rules dynamically and at the right moment according to the system's conditions. Basically, there are two approaches in the literature to modifying dispatching rules dynamically. In the first, the rule is chosen at the right moment by simulating a set of pre-established dispatching rules and selecting the one that gives the best performance (see for example, Wu and Wysk, 1989; Ishii and Talavage, 1991; Kim and Kim, 1994; Jeong and Kim, 1998). The second approach, from the field of artificial intelligence, employs a set of earlier system simulations (training examples) to determine what the best rule is for each possible system state. These training cases are used to train a machine-learning algorithm (Michalski et al., 1983) to acquire knowledge about the manufacturing system. Intelligent decisions are then made in real time, based on this knowledge (see for example, Nakasuka and Yoshida, 1992; Shaw et al., 1992; Kim et al., 1998; Min et al., 1998). A set of control attributes needs to be established to identify the manufacturing system's state at each particular moment in time in order to generate training examples. Aytug et al. (1994) and Priore et al. (2001) provide reviews in which machine learning is applied to solving scheduling problems. Finally, it has to be noted that many works benefit from a combination of two or more approaches (see for example, Glover et al., 1999; Flanagan and Walsh, 2003).

Machine learning solves problems by using knowledge acquired while solving earlier problems that are similar in nature to the problem at hand. The main algorithm types in the field of machine learning are case-based reasoning (CBR), neural networks and inductive learning. The right training examples and machine-learning algorithm must be selected if this scheduling approach based on machine learning is to work properly. However, there are hardly any studies in the literature dealing with this problem. This paper thus aims to compare three of the above-mentioned machine-learning algorithms. In an attempt to improve the manufacturing system's performance, a new approach is also proposed whereby new control attributes that are arithmetical combinations of the original attributes can be determined.

The rest of this paper is organised as follows. The machine-learning algorithms used in this paper are first described. An approach to scheduling jobs that employs machine learning is then presented. The experimental study, describing a new approach to determining new control attributes from the original ones, follows, along with a comparison of the machine-learning algorithms. The proposed scheduling approach is then compared with the alternative of using a fixed combination of dispatching rules constantly. A summary of the results obtained rounds the paper off.

2. Machine learning

According to Quinlan (1993), machine-learning algorithms can be classified into the following categories: CBR, neural networks and inductive learning. A brief description of the machine-learning algorithms applied in this work is provided next. The most commonly used CBR algorithm is the nearest neighbour algorithm (Aha, 1992). The formulation of this algorithm, called NN, or k-NN in the more sophisticated version, is very simple. The starting point is a metric, d, amongst examples, and a set of training examples, E. When a new case x occurs, its solution or class is determined as follows:

$$\mathrm{Class}(x) = \mathrm{Class}(nn),$$

where nn (the nearest neighbour) satisfies

$$d(x, nn) = \min\{\, d(x, e) : e \in E \,\}.$$

This means that the case's class coincides with that of the nearest example or neighbour. This initial scheme can be fine-tuned by including an integer value, k (k >= 1), which determines the k nearest neighbours in E for x. The class of x is then a function of the class of the majority of the k neighbours.
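As an illustration of the nearest neighbour scheme just described, the following is a minimal Python sketch of a weighted k-NN classifier over numeric control attributes; the per-attribute weights anticipate the weighted distance used later in Section 4.3, and all data values and class labels shown are hypothetical.

```python
import numpy as np
from collections import Counter

def knn_class(x, examples, classes, weights, k=1):
    """Return the majority class among the k nearest training examples.

    x        : 1-D array of control-attribute values for the new case
    examples : 2-D array, one training example per row
    classes  : list of class labels (e.g. dispatching-rule combinations)
    weights  : per-attribute weights w_i used in the distance metric
    """
    # Weighted Euclidean distance d(x, e) = sqrt(sum_i w_i * (a_ix - a_ie)^2)
    d = np.sqrt(((examples - x) ** 2 * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]                  # indices of the k nearest neighbours
    votes = Counter(classes[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical usage: three training examples with two attributes each
E = np.array([[0.6, 1.2], [0.9, 3.0], [0.7, 1.1]])
labels = ["SPT+NINQ", "MDD+WINQ", "SPT+NINQ"]
w = np.array([1.0, 1.0])
print(knn_class(np.array([0.65, 1.15]), E, labels, w, k=3))  # -> "SPT+NINQ"
```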

As regards neural networks, those used in this work are backpropagation neural networks (BPNs), or multilayer perceptrons (Rumelhart et al., 1986). These are among the most well-known and widely used pattern classifiers and function approximators (Lippman, 1987; Freeman and Skapura, 1991). Fig. 1 gives an overview of a neural network of this type; in this particular case there is a single hidden layer and there are no connections between neurons in the same layer.

The backpropagation training algorithm is the one used with this type of neural network. There are several versions of this algorithm; the standard one (Rumelhart et al., 1986; Freeman and Skapura, 1991) is described next. We assume a network with an input layer of n_1 neurons, a hidden layer of n_2 neurons, and an output layer of n_3 neurons. The outputs of the input, hidden and output layers are called x_i, y_j and z_k, respectively.

Fig. 1. An overview of a backpropagation neural network (input layer x_i, hidden layer y_j, output layer z_k, target output o_k; weights w_ij and w'_jk, thresholds u_j and u'_k).

The weights of the connections that connect the first two layers and the thresholds of the second-layer neurons are called w_ij and u_j, respectively. Similarly, w'_jk and u'_k are the weights of the connections between the two latter layers and the thresholds of the third-layer neurons. The training algorithm is iterative, and employs the gradient algorithm to minimise a function that measures the difference between the network output (z_mk) and the desired one (o_mk). The algorithm has two phases, one forward and the other backward. The former calculates the difference between the network output and the desired one:

$$E(w_{ij}, u_j, w'_{jk}, u'_k) = \frac{1}{2} \sum_{m=1}^{p} \sum_{k=1}^{n_3} \left[ o_{mk} - f\!\left( \sum_{j=1}^{n_2} w'_{jk}\, y_{mj} - u'_k \right) \right]^2,$$

where

$$y_{mj} = f\!\left( \sum_{i=1}^{n_1} w_{ij}\, x_{mi} - u_j \right)$$

and p is the number of training examples. This function, called the cost, target, error or energy function, measures how appropriate the weights of the connections are, and approaches zero when the network output approximates the desired output. Once this function has been calculated, the backward phase follows. In this phase, by applying the gradient algorithm, the weights and thresholds are modified so that the error is reduced. This process is iterated until the error is reduced to the desired amount.
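As a concrete reading of the cost function above, the following is a minimal NumPy sketch of the forward phase and the squared-error computation for a single hidden layer; the sigmoid activation, the random weights and the data are illustrative assumptions, and the backward (gradient) phase is omitted.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward_error(X, O, W, u, Wp, up):
    """Forward phase of a single-hidden-layer network and the cost E.

    X  : (p, n1) inputs,  O : (p, n3) desired outputs
    W  : (n1, n2) input->hidden weights,  u  : (n2,) hidden thresholds
    Wp : (n2, n3) hidden->output weights, up : (n3,) output thresholds
    """
    Y = sigmoid(X @ W - u)           # hidden-layer outputs y_mj
    Z = sigmoid(Y @ Wp - up)         # network outputs z_mk
    E = 0.5 * np.sum((O - Z) ** 2)   # E = 1/2 * sum_m sum_k (o_mk - z_mk)^2
    return Z, E

# Hypothetical dimensions: 10 control attributes, 16 hidden neurons, 12 classes
rng = np.random.default_rng(0)
p, n1, n2, n3 = 4, 10, 16, 12
X, O = rng.random((p, n1)), np.eye(n3)[rng.integers(0, n3, p)]
W, u = rng.normal(size=(n1, n2)), np.zeros(n2)
Wp, up = rng.normal(size=(n2, n3)), np.zeros(n3)
print(forward_error(X, O, W, u, Wp, up)[1])  # scalar error E
```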

Finally, inductive learning algorithms generate a decision tree from a set of training examples. The original idea sprang from the works of Hoveland and Hunt at the end of the 1950s, culminating in the following decade in concept learning systems (Hunt et al., 1966). The idea is to recursively divide the initial set of training examples into subsets composed of single-class collections of examples. Other researchers discovered the same or similar methods independently, and Friedman (1977) established the fundamentals of the CART system, which shared some elements with ID3 (Quinlan, 1979, 1983, 1986), PLS (Rendell, 1983) and ASSISTANT 86 (Cestnik et al., 1987). The C4.5 algorithm (Quinlan, 1993) is the most widely used of the inductive learning algorithms. Besides the decision tree that is generated, this algorithm also produces a set of decision rules out of the decision tree, whereby the class of new cases can be determined.
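For readers who want to experiment with the inductive-learning step, the sketch below trains a decision tree on labelled system states; scikit-learn implements CART rather than C4.5, so this is only a stand-in for the algorithm used in the paper, and the attribute matrix and labels are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training set: rows are control-attribute vectors describing the
# FMS state, labels are the best dispatching-rule combination found by
# simulation for that state.
X = np.array([[2.0, 0.60], [8.0, 0.90], [3.0, 0.65], [9.0, 0.92]])
y = ["SPT+NINQ", "MDD+WINQ", "SPT+NINQ", "MDD+WINQ"]

tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)

# A readable set of decision rules, loosely analogous to the rule set that
# C4.5 derives from its tree.
print(export_text(tree, feature_names=["F", "MU"]))
print(tree.predict([[7.5, 0.88]]))  # -> ['MDD+WINQ']
```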

    3. Scheduling using machine learning

An overview of a scheduling system that employs machine learning is shown in Fig. 2. A simulation model is used by the example generator to create different manufacturing system states and to choose the best dispatching rule for each particular state. The machine-learning algorithm acquires the knowledge that is necessary to make future scheduling decisions from the training examples. The real-time control system, using the scheduling knowledge together with the manufacturing system's state and performance, determines the best dispatching rule for job scheduling. The knowledge may need to be refined, depending on the manufacturing system's performance, by generating further training examples. A brief code sketch of this control loop is given below, after the list of caveats.

Fig. 2. General overview of a knowledge-based scheduling system (simulation model, training and test example generator, control attribute combination, machine-learning algorithm, scheduling knowledge, knowledge refinement, real-time control system, manufacturing system, job scheduling, system state and performance).

Finally, it must be remembered that there are at least four reasons why a machine learning-based approach might perform worse than the best rules used individually:

1. The training set is a sub-set of all possible cases. However, situations in which the scheduling system does not work properly can always be added as training examples.
2. The system's performance depends on the number and range of control attributes used in the design of the training examples.
3. A rule may perform well in a simulation over a long time period for a set of given attributes, but may perform poorly when applied dynamically.
4. The system can be prone to inadequate generalisations in extremely imprecise situations.

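The following Python sketch illustrates, under purely hypothetical names, how the real-time control system described above could periodically read the control attributes and ask the learned model for a dispatching-rule combination; the monitoring period and the `read_attributes`/`classify`/`apply_rules` callables are assumptions for illustration, not part of the paper.

```python
from typing import Callable, Sequence

def control_loop(read_attributes: Callable[[], Sequence[float]],
                 classify: Callable[[Sequence[float]], str],
                 apply_rules: Callable[[str], None],
                 horizon: float, period: float = 2.5) -> None:
    """Re-select the dispatching-rule combination every `period` time units.

    read_attributes : returns the current control-attribute vector (F, MU, U_i, ...)
    classify        : scheduling knowledge (e.g. a k-NN, BPN or decision-tree classifier)
    apply_rules     : installs the chosen rule combination in the FMS controller
    """
    t, current = 0.0, None
    while t < horizon:
        state = read_attributes()
        best = classify(state)       # e.g. "MDD+NINQ"
        if best != current:          # only switch rules when the decision changes
            apply_rules(best)
            current = best
        t += period                  # in a real system this would be simulation time

# Hypothetical usage with stub callables:
control_loop(lambda: [5.0, 0.8, 0.9, 0.7, 0.6, 0.75],
             lambda s: "MDD+NINQ" if s[1] > 0.7 else "SPT+NINQ",
             lambda rule: print("switch to", rule),
             horizon=10.0)
```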

4. Experimental study

4.1. The proposed FMS

The selected FMS consists of a loading station, an unloading station, four machines and a material-handling system. It is assumed that the material-handling system has a global maximal capacity of 30 parts and that the transportation times between two machines are negligible. Two types of events that trigger a decision of the FMS control system are studied for the FMS proposed (see Fig. 3):

1. Each time a machine ends an operation, a scheduling decision must be taken about the choice of the next part to process in the queue of parts that have been assigned to this machine.
2. Each time a machine ends an operation on a part/job, a dispatching decision must be taken about the choice of the machine (and of the queue where to wait for it) that will realise the next operation for this part/job.

For the first type of event, the following dispatching rules are used: shortest processing time (SPT), earliest due date (EDD), modified job due date (MDD), and shortest remaining processing time (SRPT). These rules were selected because of their fine performance in earlier studies (see for example, Shaw et al., 1992; Kim et al., 1998; Min et al., 1998). The dispatching rules employed for the second type of event in this FMS are (O'Keefe and Kasirajan, 1992; Kim et al., 1998; Min et al., 1998): SPT, shortest queue (NINQ), work in queue (WINQ), and lowest utilised station (LUS).

Fig. 3. Flexible manufacturing system configuration (loading station, machines 1-4, material-handling system, unloading station).

4.2. Generating training and test examples

The control attributes used to describe the manufacturing system's state must first be defined in order to generate training and test examples. In this particular FMS these are:

1. F: flow allowance factor, which measures due date tightness (Baker, 1984).
2. NAMO: mean number of alternative machines for an operation.
3. MU: mean utilisation of the manufacturing system.
4. U_i: utilisation of machine i.
5. WIP: mean number of parts in the system.
6. RBM: ratio of the utilisation of the bottleneck machine to the mean utilisation of the manufacturing system.
7. RSDU: ratio of the standard deviation of individual machine utilisations to the mean utilisation.

The following expressions are employed to calculate the latter two attributes:

$$\mathrm{RBM} = \frac{\max(U_1, U_2, U_3, U_4)}{\mathrm{MU}},$$

$$\mathrm{RSDU} = \frac{\sqrt{\left[(U_1 - \mathrm{MU})^2 + (U_2 - \mathrm{MU})^2 + (U_3 - \mathrm{MU})^2 + (U_4 - \mathrm{MU})^2\right]/4}}{\mathrm{MU}}.$$
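A small sketch of these two attribute formulas in Python, assuming (as the RSDU expression suggests) that MU is the mean of the four machine utilisations; the utilisation values are invented.

```python
import math

def rbm_rsdu(utils):
    """Compute RBM and RSDU from the individual machine utilisations U_1..U_4."""
    mu = sum(utils) / len(utils)                                   # mean utilisation MU
    rbm = max(utils) / mu                                          # bottleneck ratio
    rsdu = math.sqrt(sum((u - mu) ** 2 for u in utils) / len(utils)) / mu
    return rbm, rsdu

print(rbm_rsdu([0.92, 0.88, 0.61, 0.55]))  # machines 1 and 2 more heavily loaded
```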

The training and test examples needed for the learning stage are obtained by simulation using the WITNESS programme (Witness, 1996). The following suppositions were made to do this:

1. Jobs arrive at the system following a Poisson distribution.
2. Processing times for each operation are sampled from an exponential distribution with a mean of one.
3. The actual number of operations of a job is a random variable, equally distributed among the integers from one to four.
4. The probability of assigning an operation to a machine depends on the parameter "percentage of operations assigned to machine i" (PO_i). These percentages (randomly generated) fluctuate between 10% and 40%. To study the behaviour of the FMS in an unbalanced situation, it is also assumed that the first two machines have a greater workload.
5. The number of alternative machines for an operation varies between one and four.
6. The precedence constraints between some pairs of operations are randomly generated for each job.
7. The job arrival rate varies so that the overall use of the system ranges between 55% and 95%.
8. The value of factor F fluctuates between one and ten.

As mean tardiness and mean flow time are the most widely used criteria to measure performance in manufacturing systems, they are also employed in this study. In all, 1100 different control attribute combinations were randomly generated; 100 of these were used as test examples, while the other 1000 were used as training examples. For each combination of attributes, the mean tardiness and mean flow time values resulting from the use of each of the dispatching rules in isolation were calculated. Sixteen simulations were actually needed to generate a training or test example, as there are 16 rule combinations: each type of decision (choice of the next machine for an operation, choice of the next operation for a machine) has four rules. The number of times that each of the combinations of the dispatching rules is selected for the criteria of mean tardiness and mean flow time is shown in Table 1 as a percentage. Most of the combinations are selected, to a greater or lesser extent, in each of the scenarios that are proposed, pointing to the need to modify the dispatching rules in real time as a function of the state the manufacturing system is in at each particular moment.
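To make the example-generation step concrete, here is a hedged sketch of how one training example could be labelled from the 16 simulated rule combinations; the `simulate` callable stands in for the WITNESS runs and is purely hypothetical.

```python
from itertools import product

SEQUENCING_RULES = ["SPT", "EDD", "MDD", "SRPT"]   # choice of the next part on a machine
ROUTEING_RULES = ["SPT", "NINQ", "WINQ", "LUS"]    # choice of the next machine for a part

def label_example(attributes, simulate, criterion="mean_tardiness"):
    """Return (attributes, best_rule_combination) for one training example.

    simulate(attributes, seq_rule, route_rule) -> dict of performance measures;
    in the paper this role is played by 16 WITNESS simulation runs.
    """
    best_combo, best_value = None, float("inf")
    for seq, route in product(SEQUENCING_RULES, ROUTEING_RULES):   # 16 combinations
        result = simulate(attributes, seq, route)
        if result[criterion] < best_value:
            best_combo, best_value = f"{seq}+{route}", result[criterion]
    return attributes, best_combo

# Hypothetical stand-in for the simulator:
fake = lambda attrs, seq, route: {"mean_tardiness": hash((seq, route)) % 100}
print(label_example({"F": 5.0, "MU": 0.8}, fake)[1])
```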


4.3. Comparison of the machine-learning algorithms

To determine the class of a new case with the nearest neighbour algorithm, the distances between the new case and the training examples have to be calculated and the shortest distance found. The new case's class will coincide with that of the nearest training example. The formulation used to calculate the distance d(x, e) between a new case x and a training example e is

$$d(x, e) = \sqrt{\sum_{i=1}^{n} w_i \,(a_{ix} - a_{ie})^2},$$

where n is the number of attributes considered; a_ix is the value of attribute i in case x; a_ie is the value of attribute i in example e; and w_i is the weight assigned to attribute i as a function of its importance. The previous section shows that n equals ten in this particular case.

Table 1. Comparison of combinations of the dispatching rules in the examples generated

Combination of the dispatching rules employed | Mean tardiness (%) | Mean flow time (%)
SPT+SPT   | 23.27 | 50
SPT+NINQ  | 4.64  | 22.45
SPT+WINQ  | 6.09  | 25
SPT+LUS   | 0.36  | 0
EDD+SPT   | 1.09  | 0
EDD+NINQ  | 8.73  | 0
EDD+WINQ  | 4.73  | 0
EDD+LUS   | 9.09  | 0
MDD+SPT   | 25.64 | 0
MDD+NINQ  | 11.27 | 0
MDD+WINQ  | 3.82  | 0.64
MDD+LUS   | 1.27  | 0
SRPT+SPT  | 0     | 0
SRPT+NINQ | 0     | 0
SRPT+WINQ | 0     | 1.91
SRPT+LUS  | 0     | 0

Another two more sophisticated versions of the nearest neighbour algorithm have also been applied in this study. In the first of the two, an integer value of k of three is employed, so that the three nearest neighbours are calculated for each x. In the second version, a value of k of five is used. Odd values of k are chosen to eliminate potential ties. One of the main difficulties of the nearest neighbour algorithm is that the classification of new cases depends on the weights w_i selected. However, the value of each w_i is not known a priori.

A genetic algorithm (Holland, 1975; Goldberg, 1989; Michalewicz, 1996) resolves this problem by determining the optimum values of w_i. Fig. 4 shows an overview of the method for calculating the optimum w_i weights. It can be seen that the genetic algorithm employs the nearest neighbour method to calculate the classification error for given w_i values. This error is the fitness for a given set of w_i weights. The genetic algorithm will calculate the optimum values of w_i after a certain number of generations. The steps of the proposed genetic algorithm are as follows:

1. Generation of a population of m individuals.
2. Selection of the best individual.
3. Calculation of a distribution of accumulated probability as a function of the fitness of each individual in the population.
4. Selection of m individuals by generating m random numbers between zero and one, employing the recently calculated probability distribution.
5. Simple crossover of the individuals selected in step four, with a predetermined crossover probability (CP). The parents are eliminated.
6. Mutation with a predetermined mutation probability (MP).
7. Selection of the best individual. If this has a fitness inferior to that of the best individual from the previous generation, the latter is retained and the worst individual of the present generation is eliminated.
8. Return to step three if the number of generations is less than the maximum number of generations (NGmax).

Each of the m individuals of the population is a vector of the form $(w_1, w_2, w_3, \ldots, w_9, w_{10})$, where w_i is the weight of attribute i. The most appropriate values of the population size (m), CP, MP and NGmax were also studied. The optimal values obtained are m = 50, CP = 0.7, MP = 0.025 and NGmax = 100.

Fig. 4. Method for calculating optimum w_i weights (the training and test example generator feeds the nearest neighbour algorithm, whose test error is returned as fitness to the genetic algorithm that proposes the weights w_i).
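The following is a compact Python sketch of this weight-optimisation loop under stated simplifying assumptions: the fitness is the 1-NN test error, selection is fitness-proportional, and the crossover and mutation operators are simplified stand-ins for the ones in the paper; the data set is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def test_error(w, Xtr, ytr, Xte, yte):
    """Fitness: 1-NN classification error on the test set for weights w."""
    errors = 0
    for x, y in zip(Xte, yte):
        d = np.sqrt(((Xtr - x) ** 2 * w).sum(axis=1))
        errors += ytr[int(np.argmin(d))] != y
    return errors / len(yte)

def optimise_weights(Xtr, ytr, Xte, yte, n_attr=10, m=50, cp=0.7, mp=0.025, ng=100):
    pop = rng.random((m, n_attr))                       # step 1: initial population
    best_w, best_f = None, np.inf
    for _ in range(ng):                                 # step 8: generation loop
        fit = np.array([test_error(w, Xtr, ytr, Xte, yte) for w in pop])
        if fit.min() < best_f:                          # track the best individual (cf. steps 2 and 7)
            best_f, best_w = fit.min(), pop[fit.argmin()].copy()
        prob = (1 - fit) + 1e-9                         # lower error -> higher selection chance
        prob /= prob.sum()                              # step 3: selection distribution
        idx = rng.choice(m, size=m, p=prob)             # step 4: roulette-wheel selection
        pop = pop[idx]
        for i in range(0, m - 1, 2):                    # step 5: simple one-point crossover
            if rng.random() < cp:
                cut = rng.integers(1, n_attr)
                pop[i, cut:], pop[i + 1, cut:] = pop[i + 1, cut:].copy(), pop[i, cut:].copy()
        mask = rng.random(pop.shape) < mp               # step 6: mutation
        pop[mask] = rng.random(mask.sum())
    return best_w, best_f

# Tiny hypothetical data set: 10 attributes, binary class labels
Xtr, Xte = rng.random((30, 10)), rng.random((10, 10))
ytr, yte = rng.integers(0, 2, 30), rng.integers(0, 2, 10)
print(optimise_weights(Xtr, ytr, Xte, yte, ng=10)[1])
```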

Fig. 5 shows the test example error (on examples that have not previously been dealt with) obtained with each of the machine-learning algorithms for the criterion of mean tardiness. The nearest neighbour algorithm (k-NN with k = 5) can be seen to provide the lowest test error (9%). Moreover, its error fluctuation from 600 training examples onwards is minimal. The BPN is in second place, with a 12% test error, and the C4.5 inductive learning algorithm generates the greatest test error (16%). Furthermore, it can be observed that both the neural network and C4.5 have greater test error fluctuation than the nearest neighbour algorithm from 600 training examples onwards.

Fig. 5. Comparison of machine-learning algorithms for mean tardiness (test error percentage against number of training examples, for C4.5, BPN and k-NN with k = 5).

Similarly, Fig. 6 provides a summary of the results obtained with each of the machine-learning algorithms for the criterion of mean flow time. It shows that the nearest neighbour algorithm obtains zero test error. The C4.5 inductive learning algorithm is in second place, with a 1% test error, with the BPN generating the greatest test error (4%). Errors for this latter criterion are lower because only five dispatching rule combinations are really used, whereas 12 combinations are used for the criterion of mean tardiness (see Table 1).

Fig. 6. Comparison of machine-learning algorithms for mean flow time (test error percentage against number of training examples, for C4.5, BPN and k-NN with k = 1).

The ideal configuration of the neural network used for the criterion of mean tardiness has 10 input nodes (one for each control attribute), 16 nodes in the hidden layer, and 12 nodes in the output layer (one for each dispatching rule combination). Similarly, the ideal configuration of the neural network employed for the criterion of mean flow time has 10 input nodes, 16 nodes in the hidden layer, and five nodes in the output layer.

4.4. Generating new control attributes

To allow a better identification of the manufacturing system's state, a module was designed which automatically determines useful arithmetic combinations of the original attributes, using the simulation data that originally served to obtain the test and training examples. Together with the C4.5 algorithm, this module generates the best control attribute combinations for the manufacturing system that is being used. The basic arithmetic operators considered are addition, subtraction, multiplication and division. Although others can be used, these four operators were chosen for two reasons:

1. The more operators used, the more the search time increases.
2. As the process is recursive, fairly complex attributes can be generated from these basic operators.

Fig. 7 provides an overview of how the module works. A pseudo-code for the generator of the new control attributes is listed next (a hedged Python sketch of this procedure is given below):

1. Determination of the combinations of the present attributes.
2. Generation of new training and test examples according to the earlier combinations.
3. Identification of the useful combinations, i.e. those that appear in the decision tree and in the set of decision rules generated by C4.5.
4. If the new decision tree and/or the set of decision rules have fewer classification errors, go back to step one. If not, stop the algorithm.

Fig. 7. The generator module of new control attributes (the generator of test and training examples produces the initial examples; the generator of new control attributes produces modified test and training examples, which the C4.5 inductive learning algorithm turns into a modified decision tree and set of decision rules).

In fact, the decision can be made to continue the algorithm at step four, as fewer classification errors may be obtained in the following iteration or iterations, even though the error has not improved in the present iteration. By applying the proposed module, the following useful control attribute combinations were obtained for the criterion of mean tardiness:

1. U1 + U2
2. U1 + U4
3. U2 + U3
4. U1 - U2
5. U2 - U4
6. U3 - U4
7. U3/U4

The proposed module is also applied for the criterion of mean flow time, and the following combinations of attributes were determined to be useful:

1. U1 - U2
2. U3 - U4
3. U1/U2
4. U2/U3

It should be noted that only combinations of the attributes U_i turned out to be relevant for both criteria. Besides, U1 and U2 are present in most combinations due to the fact that machines 1 and 2 have a greater workload.

Fig. 8 shows the test error obtained with each of the machine-learning algorithms for the criterion of mean tardiness when the generator module of new control attributes was applied. It shows that the nearest neighbour algorithm gives the lowest test error (8%). Similarly, the BPN has a test error of 12%, and the C4.5 inductive learning algorithm generates the largest test error (13%). The C4.5 algorithm is also shown to lead to a larger error fluctuation than the nearest neighbour algorithm and the BPN. A comparison of this figure with Fig. 5 shows how the test error has decreased. Only training sets of 600 examples or more were used, as lower errors are obtained upwards of this training set size.

Fig. 8. Comparison of machine-learning algorithms for the criterion of mean tardiness with the generator module of new control attributes (test error percentage against number of training examples, for C4.5, BPN and k-NN with k = 5).

Furthermore, Fig. 9 provides a summary of the results obtained for each of the machine-learning algorithms for the criterion of mean flow time. It can be seen that the nearest neighbour algorithm and the C4.5 inductive learning algorithm have zero test error. In contrast, the BPN generates the largest test error (2%). Errors are once again considerably smaller than those obtained for the criterion of mean tardiness, as the number of classes is lower. Test errors in this figure are also lower than in Fig. 6, because of the new control attributes applied.

Fig. 9. Comparison of machine-learning algorithms for the criterion of mean flow time with the generator module of new control attributes (test error percentage against number of training examples, for C4.5, BPN and k-NN with k = 1).

A final point to be noted is that the ideal neural network for the criterion of mean tardiness has 17, 15 and 12 neurons in the input, hidden and output layers, respectively. Similarly, for the criterion of mean flow time the ideal neural network has 14, 10 and five neurons.

    4.5. Learning-based scheduling

To select the best combination of dispatching rules according to the FMS's state in real time, we must implement the scheduling knowledge in the FMS simulation model. A further key issue is the choice of the monitoring period, as the frequency with which the control attributes are tested to decide whether the dispatching rules change or not determines the performance of the manufacturing system. Multiples of the average total processing time for a job are taken (see for example, Wu and Wysk, 1989; Kim and Kim, 1994; Jeong and Kim, 1998) to select this monitoring period. In this study, 2.5, 5, 10 and 20 time units are used as monitoring periods.

Two different scenarios are also proposed for the manufacturing system. In the first of them, changes are generated in the FMS at given time periods, as a function of a random variable equally distributed among the integers from 50 to 500 time units. In the second scenario, this random variable ranges between 2.5 and 250 time units; in this latter scenario the FMS will therefore undergo a greater number of changes. Nine hundred training examples are used for both performance criteria in the light of the results of the previous section. Finally, it should be pointed out that five independent replicas are made over 100 000 time units.

A summary of the results obtained is provided in Table 2. To increase readability, the values in the table are relative to the lowest mean tardiness and mean flow time obtained (these are assigned a value of one), with a monitoring period of 2.5 time units, as this was the best. Mean tardiness and mean flow time values in the table correspond to the average of the five replicas. Table 2 demonstrates not only that the best alternative is to employ a scheduling system based on machine learning, but also that the nearest neighbour algorithm (k-NN) provides the best results. Mean tardiness values for the C4.5 algorithm and the BPN are similar, and worse than the value obtained with the nearest neighbour algorithm. The combinations MDD+NINQ and MDD+WINQ are the best strategies of those that use a fixed combination of dispatching rules. However, their mean tardiness values are higher than those of the alternative that uses CBR by between 13.58% and 15.21%.

Table 2. Mean tardiness (MT) and mean flow time (MFT) for the proposed strategies

Strategy used | MT, first scenario | MT, second scenario | MFT, first scenario | MFT, second scenario
SPT+SPT   | 4.1228 | 5.4703 | 2.1223 | 2.4183
SPT+NINQ  | 1.2134 | 1.2173 | 1.0455 | 1.0500
SPT+WINQ  | 1.2068 | 1.1964 | 1.0481 | 1.0476
SPT+LUS   | 2.5154 | 2.5819 | 1.5208 | 1.5277
EDD+SPT   | 3.5316 | 4.7024 | 2.2199 | 2.6260
EDD+NINQ  | 1.5325 | 1.6737 | 1.3361 | 1.3991
EDD+WINQ  | 1.5271 | 1.6819 | 1.3358 | 1.4030
EDD+LUS   | 2.8872 | 3.2830 | 1.8767 | 2.0614
MDD+SPT   | 3.5436 | 4.7435 | 2.3141 | 2.6908
MDD+NINQ  | 1.1358 | 1.1406 | 1.2356 | 1.2569
MDD+WINQ  | 1.1433 | 1.1521 | 1.2394 | 1.2620
MDD+LUS   | 2.3953 | 2.5228 | 1.7848 | 1.8646
SRPT+SPT  | 4.5211 | 6.1213 | 2.2947 | 2.6706
SRPT+NINQ | 1.3824 | 1.4119 | 1.1392 | 1.1477
SRPT+WINQ | 1.3860 | 1.4034 | 1.1448 | 1.1486
SRPT+LUS  | 2.8449 | 3.0129 | 1.6815 | 1.7195
k-NN      | 1.0000 | 1.0000 | 1.0000 | 1.0000
BPN       | 1.0221 | 1.0258 | 1.0129 | 1.0114
C4.5      | 1.0267 | 1.0294 | 1.0052 | 1.0023

Similarly, for the criterion of mean flow time, the scheduling systems based on the nearest neighbour algorithm and on C4.5 provide the best results. With the BPN, mean flow time values are slightly higher than those for the other two algorithms. Furthermore, it can be seen that the combinations SPT+NINQ and SPT+WINQ generate the lowest mean flow time amongst the strategies that apply a fixed combination of rules. However, their mean flow time values are greater than those of the nearest neighbour alternative by between 4.55% and 5%.

To round off this study, the scheduling system based on the nearest neighbour algorithm is compared with the other alternatives by analysis of variance (ANOVA). This scheduling system is shown to be better than the others at a significance level of less than 0.05. The only exception occurs between the nearest neighbour algorithm and C4.5 for the mean flow time criterion.
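As an illustration of this kind of comparison, the snippet below runs a one-way ANOVA on replica results with SciPy; the three samples are invented numbers, not the paper's data.

```python
from scipy.stats import f_oneway

# Hypothetical mean-tardiness values of the five replicas for three strategies
knn = [1.00, 1.01, 0.99, 1.02, 0.98]
bpn = [1.02, 1.03, 1.01, 1.04, 1.02]
mdd_ninq = [1.13, 1.15, 1.12, 1.16, 1.14]

stat, p = f_oneway(knn, bpn, mdd_ninq)
print(f"F = {stat:.2f}, p = {p:.4f}")  # p < 0.05 would indicate a significant difference
```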

    5. Conclusions

This paper compares three machine-learning algorithms used for dynamically scheduling jobs in an FMS. A generator module of new control attributes is also incorporated, by which the test error obtained with the machine-learning algorithms can be reduced, thereby improving the manufacturing system's performance. The performance of the FMS employing the scheduling system based on the nearest neighbour algorithm is compared with the other strategies, and the system proposed is shown to obtain lower mean tardiness and mean flow time values. The only exception occurs between the scheduling systems based on the nearest neighbour algorithm and on C4.5 for the criterion of mean flow time, as test errors in both these algorithms are the same. Although the selected FMS is quite generic both in its configuration and in the jobs' characteristics, and it has been widely used by other authors (see for example, Shaw et al., 1992; Kim et al., 1998; Min et al., 1998), the obtained results cannot be generalised to any type of FMS.

Future research might include consideration of more decision types in the proposed FMS. In the same way, other FMS configurations could be studied. A simulator might also be incorporated, as sometimes the scheduling knowledge leads to there being two or more theoretically right dispatching rules for certain given control attribute values. In such cases, where the decision based on scheduling knowledge is not clear, a simulator would be extremely useful to determine which rule to apply. Finally, a knowledge base refinement module could be added, which would automatically modify the knowledge base when major changes in the manufacturing system occur.

    References

Aha, D.W., 1992. Tolerating noisy, irrelevant and novel attributes in instance-based learning algorithms. International Journal of Man-Machine Studies 36, 267-287.
Aytug, H., Bhattacharyya, S., Koehler, G.J., Snowdon, J.L., 1994. A review of machine learning in scheduling. IEEE Transactions on Engineering Management 41, 165-171.
Baker, K.R., 1984. Sequencing rules and due-date assignments in a job shop. Management Science 30, 1093-1103.
Blackstone, J.H., Phillips, D.T., Hogg, G.L., 1982. A state-of-the-art survey of dispatching rules for manufacturing job shop operations. International Journal of Production Research 20, 27-45.
Cestnik, B., Kononenko, I., Bratko, I., 1987. ASSISTANT 86: a knowledge-elicitation tool for sophisticated users. In: Bratko, I., Lavrac, N. (Eds.), Progress in Machine Learning. Sigma Press, Wilmslow.
Chen, C.C., Yih, Y., 1996. Identifying attributes for knowledge-based development in dynamic scheduling environments. International Journal of Production Research 34, 1739-1755.
Cho, H., Wysk, R.A., 1993. A robust adaptive scheduler for an intelligent workstation controller. International Journal of Production Research 31, 771-789.
Choi, R.H., Malstrom, E.M., 1988. Evaluation of traditional work scheduling rules in a flexible manufacturing system with a physical simulator. Journal of Manufacturing Systems 7, 33-45.
Denzler, D.R., Boe, W.J., 1987. Experimental investigation of flexible manufacturing system scheduling rules. International Journal of Production Research 25, 979-994.
Egbelu, P.J., Tanchoco, J.M.A., 1984. Characterization of automated guided vehicle dispatching rules. International Journal of Production Research 22, 359-374.
Flanagan, B., Walsh, D., 2003. The role of simulation in the development of intelligence for flexible manufacturing systems. International Manufacturing Conference 20, Cork Institute of Technology, Ireland.
Freeman, J.A., Skapura, D.M., 1991. Neural Networks: Algorithms, Applications, and Programming Techniques. Addison Wesley, Reading, MA.
Friedman, J.H., 1977. A recursive partitioning decision rule for non-parametric classification. IEEE Transactions on Computers, 404-408.
Garey, M., Johnson, D., 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York.
Glover, F., Kelly, J.P., Laguna, M., 1999. New advances for wedding optimization and simulation. In: Farrington, P.A., Nembhard, H.B., Sturock, D.T., Evans, G.W. (Eds.), Proceedings of the 1999 Winter Simulation Conference.
Goldberg, D.E., 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison Wesley, Reading, MA.
Han, M., Na, Y.K., Hogg, G.L., 1989. Real-time tool control and job dispatching in flexible manufacturing systems. International Journal of Production Research 27, 1257-1267.
Henneke, M.J., Choi, R.H., 1990. Evaluation of FMS parameters on overall system performance. Computer Industrial Engineering 18, 105-110.
Holland, J., 1975. Adaptation in Natural and Artificial Systems. University of Michigan Press, Detroit, MI.
Hunt, E.B., Marin, J., Stone, P.J., 1966. Experiments in Induction. Academic Press, New York.
Hutchison, J., Leong, K., Snyder, D., Ward, F., 1989. Scheduling for random job shop flexible manufacturing systems. In: Proceedings of the Third ORSA/TIMS Conference on Flexible Manufacturing Systems, pp. 161-166.
Ishii, N., Talavage, J., 1991. A transient-based real-time scheduling algorithm in FMS. International Journal of Production Research 29, 2501-2520.
Jeong, K.-C., Kim, Y.-D., 1998. A real-time scheduling mechanism for a flexible manufacturing system: using simulation and dispatching rules. International Journal of Production Research 36, 2609-2626.
Kim, Y.-D., 1990. A comparison of dispatching rules for job shops with multiple identical jobs and alternative routings. International Journal of Production Research 28, 953-962.
Kim, M.H., Kim, Y.-D., 1994. Simulation-based real-time scheduling in a flexible manufacturing system. Journal of Manufacturing Systems 13, 85-93.
Kim, C.-O., Min, H.-S., Yih, Y., 1998. Integration of inductive learning and neural networks for multi-objective FMS scheduling. International Journal of Production Research 36, 2497-2509.
Kimemia, J., Gershwin, S.B., 1985. Flow optimization in flexible manufacturing systems. International Journal of Production Research 23, 81-96.
Lashkari, R.S., Dutta, S.P., Padhye, A.M., 1987. A new formulation of operation allocation problem in flexible manufacturing systems: mathematical modelling and computational experience. International Journal of Production Research 25, 1267-1283.
Lippman, R.P., 1987. An introduction to computing with neural networks. IEEE ASSP Magazine 3, 4-22.
Michalewicz, Z., 1996. Genetic Algorithms + Data Structures = Evolution Programs. Springer, Berlin.
Michalski, R.S., Carbonell, J.G., Mitchell, T.M., 1983. Machine Learning: An Artificial Intelligence Approach. Tioga Press, Palo Alto, CA.
Min, H.-S., Yih, Y., Kim, C.-O., 1998. A competitive neural network approach to multi-objective FMS scheduling. International Journal of Production Research 36, 1749-1765.
Montazeri, M., Wassenhove, L.N.V., 1990. Analysis of scheduling rules for an FMS. International Journal of Production Research 28, 785-802.
Nakasuka, S., Yoshida, T., 1992. Dynamic scheduling system utilizing machine learning as a knowledge acquisition tool. International Journal of Production Research 30, 411-431.
O'Keefe, R.M., Kasirajan, T., 1992. Interaction between dispatching and next station selection rules in a dedicated flexible manufacturing system. International Journal of Production Research 30, 1753-1772.
Panwalkar, S.S., Iskander, W., 1977. A survey of scheduling rules. Operations Research 23, 961-973.
Priore, P., De La Fuente, D., Gomez, A., Puente, J., 2001. A review of machine learning in dynamic scheduling of flexible manufacturing systems. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, 251-263.
Quinlan, J.R., 1979. Discovering rules by induction from large collections of examples. In: Michie, D. (Ed.), Expert Systems in the Micro Electronic Age. Edinburgh University Press, Edinburgh, UK.
Quinlan, J.R., 1983. Learning efficient classification procedures. In: Michalski, R.S., Carbonell, J.G., Mitchell, T.M. (Eds.), Machine Learning: An Artificial Intelligence Approach. Tioga Press, Palo Alto, CA.
Quinlan, J.R., 1986. Induction of decision trees. Machine Learning 1, 81-106.
Quinlan, J.R., 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA.
Ramasesh, R., 1990. Dynamic job shop scheduling: a survey of simulation studies. OMEGA: The International Journal of Management Science 18, 43-57.
Rendell, L.A., 1983. A new basis for state-space learning systems and a successful implementation. Artificial Intelligence 20, 369-392.
Rumelhart, D.E., Hinton, G.E., Williams, R.J., 1986. Learning representations by back-propagating errors. Nature 323, 533-536.
Russel, R.S., Dar-El, E.M., Taylor, B.W., 1987. A comparative analysis of the COVERT job sequencing rule using various shop performance measures. International Journal of Production Research 25, 1523-1540.
Shanker, K., Rajamarthandan, S., 1989. Loading problem in FMS: part movement minimization. In: Proceedings of the Third ORSA/TIMS Conference on Flexible Manufacturing Systems, pp. 99-104.
Shanker, K., Tzen, Y.J., 1985. A loading and dispatching problem in a random flexible manufacturing system. International Journal of Production Research 23, 579-595.
Shaw, M.J., Park, S., Raman, N., 1992. Intelligent scheduling with machine learning capabilities: the induction of scheduling knowledge. IIE Transactions 24, 156-168.
Stecke, K.E., 1983. Formulation and solution of nonlinear integer production planning problems for flexible manufacturing systems. Management Science 29, 273-288.
Stecke, K.E., Solberg, J., 1981. Loading and control policies for a flexible manufacturing system. International Journal of Production Research 19, 481-490.
Tang, L.-L., Yih, Y., Liu, C.-Y., 1993. A study on decision rules of a scheduling model in an FMS. Computers in Industry 22, 1-13.
Vepsalainen, A.P.J., Morton, T.E., 1987. Priority rules for job shops with weighted tardiness costs. Management Science 33, 1035-1047.
Wilson, J.M., 1989. An alternative formulation of the operation-allocation problem in flexible manufacturing systems. International Journal of Production Research 27, 1405-1412.
Witness, 1996. User Manual. Release 8.0. AT&T ISTEL Limited.
Wu, S.-Y.D., Wysk, R.A., 1989. An application of discrete-event simulation to on-line control and scheduling in flexible manufacturing. International Journal of Production Research 27, 1603-1623.
