
6 AUTOMATED EXAMINATION TIMETABLING SOLUTION CONSTRUCTION USING FUZZY APPROACH

Hishammuddin Asmuni

1 INTRODUCTION

Examination timetabling is of much interest and concern to academic institutions. The basic problem is to allocate a timeslot for every exam within a limited number of permitted timeslots in order to find a conflict-free timetable. This assignment process is subject to ‘hard’ constraints which must be satisfied in order to get a feasible timetable, such as the requirement that no student sits two exams at the same time. In addition, it is also important to build a good quality examination timetable that considers not only the administrative requirements, but also takes into account lecturers’ and students’ preferences. These requirements are generally considered as ‘soft’ constraints, which are desirable (but not essential) to satisfy. As reported by Burke et al. (1996), these requirements vary from one academic institution to another.

80 AI in Planning, Scheduling and Timetabling 2

As this task is time consuming and tedious to do manually, many attempts have been made over the last few decades to generate timetables automatically. With a large number of exams needing to be assigned to timeslots and a list of constraints needing to be satisfied, the search space for this problem is very large. While some work focuses only on finding good initial solutions by using ‘constructive’ algorithms, many others use an iterative search procedure to progressively improve the initial solutions. Various search techniques have been developed to find good solutions to exam timetabling problems. These include Local Search (Burke and Newall, 2003; Caramia et al., 2001; Merlot et al., 2003), Tabu Search (Di Gaspero and Schaerf, 2001; Kendall and Hussin, 2004), Simulated Annealing (Thompson and Dowsland, 1998), Genetic Algorithms (Burke et al., 1995), Memetic Algorithms (Burke, Newall and Weare, 1996; Burke et al., 1998), and the Great Deluge algorithm (Burke et al., 2004). The recent state of the art of exam timetabling is surveyed in a variety of papers (see Burke and Petrovic (2002), Carter and Laporte (1996), Petrovic and Burke (2004) and Schaerf (1999)).

One of the earliest techniques implemented in finding good initial solutions is known as the sequential constructive algorithm. The main principle is that exams are ordered by certain heuristics before each exam is sequentially chosen to be assigned to a timeslot. This ordering represents how difficult it is to schedule the exams. The idea is that by assigning the most difficult exams first it is likely that we can avoid generating infeasible solutions. Here, a feasible solution means that all exams are assigned to timeslots without violating any of the specified hard constraints. Many studies have investigated the best way to calculate the ‘difficulty to schedule an exam’. Carter et al. (1996) have shown that a single ordering heuristic can guide the search algorithm to a good solution compared with random selection. The work of Black (2003) examined the usefulness of incorporating constraint weights into measures used to generate both static and dynamic exam orderings for the same benchmark problems used by Carter et al. (1996). Burke and Newall (2004) introduced an adaptive heuristic technique in which they start ordering by a certain heuristic and then alter that heuristic ordering to take into account the penalty that exams are imposing upon the timetable. The aim of this paper is to investigate and analyze the potential of using fuzzy methodologies to perform simultaneous multiple ordering. A sequential constructive algorithm was implemented with a single ordering heuristic and with multiple orderings, both by fuzzy reasoning and by linear combination. The performance of the various ordering heuristics was compared on a set of standard benchmark problems. Ordered Weighted Averaging (OWA) operators (Yager, 1988) and fuzzy linear programming (Zimmermann, 1978) are closely related to this problem, but any comparison between those methods and this work is beyond the scope of this paper. The rest of the paper is organized as follows: Section 2 briefly describes the constructive algorithm and the fuzzy model used. Section 3 presents the experimental results and finally, Section 4 contains some concluding remarks.

2 METHODS

2.1 The Sequential Construction Heuristic

Sequential constructive techniques are amongst the earliest methods that have been used to tackle the examination timetabling problem in an automated way (Broder, 1964; Cole, 1964; Foxley and Lockyer, 1968). In this approach, the concept of ‘fail first’ is implemented. The basic idea is to first schedule the exams that might cause problems if they were to be scheduled later on in the process. By doing so, it appears more likely that we can avoid assignments of exams to time slots which might later lead to an infeasible solution.

There is a well known analogy between a basic version of the timetabling problem (no soft constraints) and the graph coloring problem (Burke et al., 2003). Indeed, some of the best known timetabling heuristics are based upon graph coloring heuristics. In particular, the following graph coloring based heuristics can be employed as heuristic orderings:

(a) Largest Degree (LD) – The degree of an event is simply a count of the number of other events which conflict with it, in the sense that students are enrolled in both events. This heuristic orders events in terms of those with the highest degree first.

(b) Saturation Degree (SD) – The number of time slots available is used to order the events. The basic motivation is that events with fewer time slots available are more likely to be difficult to schedule. The fewer time slots that are available, the higher up the ordering the event is placed.

(c) Largest Enrollment (LE) – The number of students enrolled for each event is used to order the events (the highest number of students first).
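
The three orderings can be sketched in Python. This is a sketch only: the data layout (a map from each student to the set of exams they take) and the helper names are our own, not from the paper.

```python
from collections import defaultdict
from itertools import combinations

def conflict_counts(enrolments):
    """For each pair of exams, count the students taking both.
    enrolments maps student -> set of exams (assumed layout)."""
    counts = defaultdict(int)
    for exams in enrolments.values():
        for a, b in combinations(sorted(exams), 2):
            counts[(a, b)] += 1
    return counts

def students_in_both(counts, a, b):
    return counts.get((min(a, b), max(a, b)), 0)

def largest_degree(exam, all_exams, counts):
    """LD: how many other exams share at least one student with this one."""
    return sum(1 for e in all_exams
               if e != exam and students_in_both(counts, exam, e) > 0)

def largest_enrollment(exam, enrolments):
    """LE: the number of students enrolled for the exam."""
    return sum(1 for taken in enrolments.values() if exam in taken)

def saturation_degree(exam, timetable, counts):
    """SD: the number of time slots the exam can still enter without
    clashing with an already placed exam (fewer = harder to schedule)."""
    return sum(1 for placed in timetable.values()
               if all(students_in_both(counts, exam, e) == 0 for e in placed))
```

Note that LD and LE are static measures, while SD depends on the partial timetable and must be recomputed as exams are placed.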

Figure 1 depicts a general framework for the sequential constructive algorithm, which requires the following steps to assign all exams to time slots:

Process 1 Choose heuristic ordering

In order to determine the sequence in which exams are scheduled to valid time slots, it must be decided which heuristic ordering is to be employed. Usually, any of the heuristic orderings described earlier can be employed on its own to measure the exams’ difficulty to be scheduled. In this research, an alternative approach is introduced in which multiple heuristic orderings are considered simultaneously to measure the exams’ difficulty.

Process 2 Calculate the difficulty of the exam to be scheduled

Having chosen a heuristic ordering to be implemented, the calculation of the assessment of difficulty is performed and exams are ordered in a specified sequence.


Figure 1 A general framework for producing timetabling solutions


Figure 2 Pseudo code for the ‘rescheduling procedure’

Process 3 – Process 7: Sequentially assign exams to time slots

For each exam in turn (starting with the most difficult to schedule) the following sequence of events is carried out. The free time slots are examined in turn to find valid ones and, for each, the penalty that would result from placing the exam in that slot is calculated. After examining each of the time slots, the exam is scheduled into the available slot incurring the least penalty (if two or more slots share the lowest penalty cost, the exam is scheduled into the last such time slot). If no valid time slot is available, the “rescheduling procedure” (i.e. Process 4) is executed in order to find a feasible solution. If a dynamic heuristic is being used, the remaining exams’ difficulties are updated and the remaining unscheduled exams are reordered accordingly.

1. E* = current unscheduled event that needs to be scheduled;
2. Find time slots where event E* can be inserted with the minimum number of scheduled events needing to be removed from the time slot;
3. If more than one time slot is found with the same number of scheduled events to be removed
   3.1. Select a time slot t randomly from the candidate list of time slots;
4. End if
5. While there exist events that conflict with event E* in time slot t
   5.1. Et = first event in time slot t;
   5.2. If another time slot is found with minimum penalty cost to move event Et
        5.2.1. Move event Et to that time slot;
   5.3. Else
        5.3.1. Bump event Et back to the unscheduled events list;
   5.4. End if
6. End while
7. Insert event E* into time slot t;
8. Remove event E* from the unscheduled events list;
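
The rescheduling procedure of Figure 2 might be realised as in the following Python sketch. The data structures, helper names and the `penalty_of` callback are our own assumptions, not the paper's implementation.

```python
import random

def in_conflict(a, b, shared):
    """True if exams a and b have a student in common.
    shared maps an ordered exam pair to a student count (assumed layout)."""
    return shared.get((min(a, b), max(a, b)), 0) > 0

def reschedule(e_star, timetable, shared, unscheduled, penalty_of):
    """Force event e_star into the slot needing the fewest removals,
    then move or bump back the displaced events (steps 1-8 of Figure 2)."""
    # Step 2: for each slot, which placed events would have to leave?
    to_remove = {t: [e for e in events if in_conflict(e_star, e, shared)]
                 for t, events in timetable.items()}
    fewest = min(len(v) for v in to_remove.values())
    # Step 3: break ties between equally cheap slots at random.
    t = random.choice([s for s, v in to_remove.items() if len(v) == fewest])
    # Steps 5-6: clear the conflicting events out of slot t.
    for e in list(timetable[t]):
        if not in_conflict(e_star, e, shared):
            continue
        fits = [s for s in timetable if s != t
                and all(not in_conflict(e, x, shared) for x in timetable[s])]
        timetable[t].remove(e)
        if fits:  # 5.2.1: move to the valid slot with the lowest penalty
            timetable[min(fits, key=lambda s: penalty_of(e, s))].append(e)
        else:     # 5.3.1: bump the event back to the unscheduled list
            unscheduled.append(e)
    timetable[t].append(e_star)   # step 7
    unscheduled.remove(e_star)    # step 8
    return t
```

Events bumped back onto the unscheduled list are simply picked up again by the outer constructive loop, which is why the procedure terminates only when a feasible assignment is found.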

The steps outlined above continue until all the exams are scheduled, i.e. until a feasible solution is constructed. There are many potential ways in which exams could be ordered using various simultaneous combinations of the three heuristic orderings, with the consequence that different solutions will be produced. Essentially, the exam ordering will have an impact on how the search algorithm navigates through the search space.

2.2 Description of Experiments

A series of experiments were carried out in which progressively more sophisticated fuzzy mechanisms were created to order the exams. Ultimately, the objective of these experiments was to compare the solution quality when different kinds of heuristic ordering were employed to measure the difficulty of scheduling exams to time slots. In each experiment, this ordering is simply inserted into the sequential constructive heuristic algorithm as shown in Figure 1. The heuristic orderings considered in the experiments are described below.

2.2.1 Linear Multiple Ordering Heuristic

One way to simultaneously consider several ordering heuristics in measuring an exam's difficulty is to multiply the value of each criterion for that exam by a weighting factor and sum the results. In this approach, the exams were ordered based on a linear multiple heuristic ordering. All the exams were then selected to be scheduled based on this ordering. When this method is used, the weighted function becomes, for example:

W(e_j) = w_d LD_j + w_e LE_j + w_s SD_j      (1)

where N is the number of exams, j = 1, 2, . . ., N; w_d, w_e, w_s ∈ {0.0, 0.1, …, 1.0} if N ≤ 300, or w_d, w_e, w_s ∈ {0.0, 0.25, 0.5, 0.75, 1.0} if N > 300; and w_d, w_e, w_s are the weighting factors for LD, LE and SD respectively.


In the implementation, if one of the weighting factors is equal to zero and the other two are assigned non-zero values, this represents the implementation of two heuristic orderings simultaneously. On the other hand, if two of the weighting factors are equal to zero and the remaining one is equal to 1.0, this represents a single heuristic ordering. These non-fuzzy multiple heuristic orderings were developed for the purposes of comparison with the fuzzy multiple heuristic orderings detailed below.
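
Equation (1) and the resulting ordering can be sketched as follows. The sketch assumes each measure has already been normalised to [0, 1] so that a larger value means "harder to schedule" (the paper does not spell out how SD, where a small value means hard, is scaled before being combined).

```python
def linear_weight(ld, le, sd, wd, we, ws):
    """Equation (1): W(e_j) = wd*LD_j + we*LE_j + ws*SD_j."""
    return wd * ld + we * le + ws * sd

def order_exams(measures, wd, we, ws):
    """Return the exams most-difficult first under the combined weight.
    measures maps exam -> (LD, LE, SD), each assumed normalised so that
    larger means harder."""
    return sorted(measures,
                  key=lambda e: linear_weight(*measures[e], wd, we, ws),
                  reverse=True)
```

Setting one weight to zero recovers a two-heuristic ordering, and setting two weights to zero recovers a single-heuristic ordering, exactly as described above.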

2.2.2 Fuzzy Multiple Heuristic Ordering

In practice, the choice of ‘appropriate’ exam ordering always involves uncertainty. For example, it may be assumed that an exam is more difficult to schedule if it has a ‘large’ number of exams in conflict with it and a ‘small’ number of available slots. This involves imprecise and vague information, where the exact meanings of ‘large’ and ‘small’ are not known with certainty. Hence, it would appear that this problem is one where fuzzy techniques may fit well.

A fuzzy expert system was designed in which either two of the three heuristics above, or all three, are selected as input variables. For any combination of input variables, an output variable called examweight is generated. This output variable, examweight, represents the overall difficulty of scheduling an exam to a time slot. Each of the input and output variables is associated with three linguistic terms: triangular fuzzy sets corresponding to the meanings small, medium and high.

A restricted form of exhaustive search was implemented to find the most appropriate shapes for the linguistic terms. The membership functions were tuned by altering the parameter cp, which represents the right edge for the term small, the centre point for the linguistic label medium and the left edge for the term high, as illustrated in Figure 3. A search was then carried out to find the best set of cp parameters (there was one for each linguistic variable, i.e. a cp parameter for each of the input variables and the output variable).

During the search for the ‘optimal’ fuzzy model, the centre point for any of the fuzzy variables might take a value between 0.0 and 1.0 (inclusive). Increments of value 0.1 were used for datasets that have 300 and fewer exams and increments of value 0.25 were used for datasets that have more than 300 exams.

Figure 3 Membership functions for fuzzy variables
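
The cp-parameterised membership functions of Figure 3 might look like the following sketch. The placement of the fixed edges at 0 and 1 is our assumption; the text only states which edge of each term cp controls.

```python
def mu_small(x, cp):
    """'small': full membership at 0, falling to 0 at the right edge cp."""
    return max(0.0, 1.0 - x / cp) if cp > 0 else (1.0 if x == 0.0 else 0.0)

def mu_medium(x, cp):
    """'medium': a triangle over [0, 1] peaking at the centre point cp."""
    if x <= cp:
        return x / cp if cp > 0 else 0.0
    return max(0.0, (1.0 - x) / (1.0 - cp)) if cp < 1 else 0.0

def mu_high(x, cp):
    """'high': 0 at the left edge cp, rising to full membership at 1."""
    return max(0.0, (x - cp) / (1.0 - cp)) if cp < 1 else (1.0 if x == 1.0 else 0.0)
```

With this shape, moving cp towards 0 or 1 shifts all three terms together, which is what makes a one-parameter-per-variable exhaustive search over {0.0, 0.1, …, 1.0} (or the coarser grid) tractable.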

Different fixed rule sets have been designed for the different combinations of input variables. For simplicity, the fuzzy rules are expressed as a linguistic matrix (see Lim et al. (1996)). In such a linguistic matrix, the left-most column and the first row denote the variables involved in the antecedent part of the rules. The second column contains the linguistic terms applicable to the input variable shown in the first column; those in the second row correspond to the input variable shown in the first row. Each entry in the main body of the matrix denotes the linguistic value of the consequent part of a rule. For instance, the bottom-right entry in Table 1 is read as “IF LD is high AND LE is high THEN examweight is very high”.

Tables 1–3 show the rules used for input variables LD + LE, LE + SD and LD + SD respectively. In the case where only two input variables are involved, 9 fuzzy rules were implemented. Table 4 illustrates the rule set, consisting of 27 fuzzy rules, used for combining three ordering heuristics. Standard Mamdani style fuzzy inference was implemented with standard Zadeh (min-max) operators. Centroid defuzzification was utilized to obtain a single crisp (real) value for examweight.

Table 1 Fuzzy rule set for variables LD + LE

             LE
        S    M    H
LD  S   VS   S    M
    M   S    M    H
    H   M    H    VH

Table 2 Fuzzy rule set for variables LE + SD

             SD
        S    M    H
LE  S   M    S    VS
    M   H    M    S
    H   VH   H    M

Table 3 Fuzzy rule set for variables LD + SD

             SD
        S    M    H
LD  S   M    S    VS
    M   H    M    S
    H   VH   H    M


Table 4 Fuzzy rule set for variables LD + SD + LE

              LD=S            LD=M            LD=H
        SD:   S    M    H     S    M    H     S    M    H
LE  S         S    VS   VS    S    S    VS    M    S    S
    M         S    S    VS    H    M    M     H    M    M
    H         H    S    S     H    M    M     VH   H    M

3 EXPERIMENTAL RESULTS

In this section, the results obtained in each experiment are presented. The experiments were carried out with 12 benchmark data sets made publicly available by Carter et al. (1996). Table 5 reproduces the problem characteristics.

The widely used proximity cost function is implemented to measure the timetable quality. The maximum capacity for each timeslot is not taken into account. Only feasible timetables are accepted and the penalty function is utilized to try to spread out each student’s schedule. If two exams scheduled for a particular student are t timeslots apart, the weight is set to w_t = 2^(5−t) where t ∈ {1, 2, 3, 4, 5}. The weight is multiplied by the number of students that sit both of the scheduled exams. The average penalty per student is calculated by dividing the total penalty by the total number of students. The following formulation was used (adopted from Burke et al. (2004)), in which the goal is to minimize

\frac{1}{T} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} s_{ij} \, w_{|p_i - p_j|}      (2)

where N is the number of exams, s_ij is the number of students enrolled in both exam i and exam j, p_i is the timeslot where exam i is scheduled, p_j is the timeslot where exam j is scheduled, and T is the total number of students; the weight applies subject to 1 ≤ |p_j − p_i| ≤ 5.
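
The proximity cost just described is straightforward to compute; a sketch (with our own data layout: a slot per exam and a shared-student count per exam pair):

```python
def proximity_cost(slot_of, shared, T):
    """Average proximity penalty per student, as in equation (2).
    slot_of: exam -> timeslot; shared: (i, j) -> students taking both,
    keyed with i < j; T: total number of students.
    The weight is w_t = 2**(5 - t) for slot gaps 1 <= t <= 5."""
    total = 0
    for (i, j), s_ij in shared.items():
        t = abs(slot_of[i] - slot_of[j])
        if 1 <= t <= 5:
            total += s_ij * 2 ** (5 - t)
    return total / T
```

For example, two exams one slot apart with 3 common students contribute 3 × 2^4 = 48 to the total before dividing by T, while exams more than five slots apart contribute nothing.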

Table 5 Characteristics of the problems

Data set    No. of Exams   No. of Students   No. of Slots
CAR-F-92    543            18419             32
CAR-S-91    682            16925             35
EAR-F-83    190            1125              24
HEC-S-92    81             2823              18
KFU-S-93    461            5349              20
LSE-F-91    381            2726              18
RYE-F-92    486            11483             23
STA-F-83    139            611               13
TRE-S-92    261            4360              23
UTA-S-92    622            21266             35
UTE-S-92    184            2750              10
YOR-F-83    181            941               21

The system was implemented in Java, utilizing the fuzzy inference engine developed by Sazonov et al. (2002). The experiments were run on a PC with a 1.8 GHz Pentium 4 processor and 256 MB of RAM.

The experiments were undertaken in two stages. The first stage focused on finding the appropriate weighting factor values for the linear multiple heuristic orderings and the cp values for the fuzzy multiple heuristic orderings. The results of the tuning process for the linear multiple heuristic orderings are presented in Table 6, while for the fuzzy multiple heuristic orderings the results are presented in Tables 7 and 8. These values were then used in the second stage of the experiments, in which repeated runs were performed to generate 30 solutions with each heuristic ordering, for each of the twelve data sets. Table 9 shows a comparison of the best cost penalties obtained over 30 runs on each data set when implementing non-fuzzy heuristic orderings. The best results among the different non-fuzzy heuristic orderings used are highlighted in bold font. It can be seen that, in eleven out of twelve data sets, the best results are produced when multiple heuristic orderings are implemented. In the case of the STA-F-83 data set, the single heuristic ordering LD produced the best result, with the same solution quality as the solutions produced by the Linear LD+LE and the Linear LD+SD+LE orderings. Compared with the best result amongst the single heuristic orderings, the Linear SD+LD combination produced worse results on all 12 data sets.

Table 6 Values for weighting factors identified in the tuning process

            Linear LD+LE   Linear SD+LE   Linear SD+LD   Linear LD+SD+LE
Data set    wd     we      ws     we      ws     wd      wd     ws     we
CAR-F-92    0.50   0.75    0.00   1.00    0.75   0.00    0.25   0.75   0.75
CAR-S-91    0.75   1.00    0.25   0.25    1.00   0.25    0.75   0.75   1.00
EAR-F-83    0.90   0.70    0.00   0.00    0.50   0.40    1.00   0.10   0.80
HEC-S-92    0.10   0.70    1.00   0.40    0.90   0.00    0.10   0.00   0.70
KFU-S-93    0.00   0.25    0.25   1.00    0.75   0.50    0.25   0.75   0.50
LSE-F-91    0.75   0.75    0.00   1.00    0.50   0.50    0.75   0.75   0.50
RYE-F-92    0.00   1.00    0.00   1.00    0.00   0.00    0.00   0.00   1.00
STA-F-83    0.10   0.00    0.00   0.00    0.10   0.00    0.10   0.00   0.00
TRE-S-92    0.00   0.50    0.00   0.50    0.60   0.40    0.70   0.60   0.40
UTA-S-92    0.25   1.00    0.25   1.00    0.75   0.00    0.25   0.50   0.75
UTE-S-92    0.70   0.40    0.10   0.30    0.10   0.70    0.30   0.90   0.80
YOR-F-83    0.00   0.40    0.60   0.20    0.70   0.40    0.10   0.10   1.00

Table 7 Values for cp parameters

            Fuzzy LD+LE Model       Fuzzy SD+LE Model
Data set    LD     LE     W*        SD     LE     W*
CAR-F-92    0.75   1.00   0.00      1.00   0.25   0.25
CAR-S-91    1.00   0.75   0.00      0.50   0.25   0.75
EAR-F-83    0.40   0.00   0.80      0.20   0.80   0.80
HEC-S-92    0.30   0.90   0.70      0.40   1.00   1.00
KFU-S-93    0.75   0.00   0.00      0.00   0.25   0.00
LSE-F-91    1.00   0.25   1.00      0.25   0.00   0.00
RYE-F-92    1.00   0.50   0.50      0.50   1.00   1.00
STA-F-83    0.60   0.70   0.90      0.90   0.90   0.00
TRE-S-92    0.80   0.20   0.00      0.80   0.90   0.10
UTA-S-92    0.75   0.25   0.50      0.00   0.00   0.75
UTE-S-92    0.30   0.60   0.00      0.60   0.70   0.30
YOR-F-83    0.80   0.80   0.70      0.50   0.70   0.50

W* = examweight

Table 8 Values for cp parameters (continued)

            Fuzzy SD+LD Model       Fuzzy LD+SD+LE Model
Data set    SD     LD     W*        LD     SD     LE     W*
CAR-F-92    0.25   0.50   1.00      0.00   1.00   0.50   0.50
CAR-S-91    0.75   0.00   0.25      0.00   0.75   0.25   0.50
EAR-F-83    1.00   0.20   0.80      0.50   0.90   0.70   0.10
HEC-S-92    0.90   0.00   0.20      0.30   0.20   0.70   0.70
KFU-S-93    0.75   1.00   0.50      0.00   0.50   0.50   0.00
LSE-F-91    0.50   1.00   0.25      0.00   0.50   0.50   0.25
RYE-F-92    0.25   0.25   0.00      0.00   0.25   0.25   0.50
STA-F-83    0.60   0.00   1.00      0.60   0.90   0.80   0.00
TRE-S-92    0.70   0.90   0.80      0.00   0.30   0.70   0.50
UTA-S-92    0.25   0.25   0.75      0.25   0.75   0.25   0.00
UTE-S-92    0.00   0.90   0.40      0.00   0.50   0.20   0.30
YOR-F-83    0.30   0.80   0.70      0.10   0.30   0.50   0.90

W* = examweight

Table 9 The penalty costs obtained by different non-fuzzy heuristic orderings on each of the twelve benchmark data sets

Data set    LD       LE       SD       LD+LE    SD+LE    SD+LD    SD+LD+LE
CAR-F-92    4.89     4.74     5.12     4.66     4.72     4.9      4.67
CAR-S-91    5.86     5.64     5.97     5.47     5.78     5.83     5.38
EAR-F-83    39.9     45.57    45.42    38.68    50.95    40.99    38.17
HEC-S-92    14.56    13.36    13.7     12.76    13.05    14.56    12.68
KFU-S-93    17.64    16.23    18.33    16.45    16.2     17.77    16.02
LSE-F-91    13.98    13.25    12.76    13.03    13.1     14.24    12.47
RYE-F-92    12.34    10.8     11.51    12.42    10.73    12.79    10.96
STA-F-83    167.05   172.01   177.93   167.05   172.76   171.51   167.05
TRE-S-92    10.45    9.25     10.5     9.26     9.56     9.97     9.21
UTA-S-92    3.97     3.71     4.11     3.65     3.82     3.83     3.61
UTE-S-92    35.19    28.93    33.72    28.68    28.93    33.27    28.41
YOR-F-83    45.72    42.65    46.74    42.03    44.47    44.02    41.52

Table 10 shows a comparison of the best cost penalties obtained over 30 runs on each data set when the fuzzy multiple heuristic orderings are implemented. The best results among the different fuzzy multiple heuristic orderings used are highlighted in bold font. The Fuzzy LD+SD+LE Model is the best amongst the fuzzy multiple heuristic orderings, obtaining nine best results, followed by the Fuzzy SD+LE Model with two best results (CAR-F-92 and EAR-F-83). Both the Fuzzy LD+SD+LE Model and the Fuzzy SD+LE Model produced best solutions of the same quality for the UTA-S-92 data set. Comparing Tables 9 and 10, it is evident that the fuzzy multiple heuristic orderings have outperformed all of the non-fuzzy heuristic orderings in terms of cost penalty.


Table 10 The penalty costs obtained by different fuzzy multiple heuristic orderings on each of the twelve benchmark data sets

Data set    Fuzzy LD+LE   Fuzzy SD+LE   Fuzzy SD+LD   Fuzzy LD+SD+LE
            Model         Model         Model         Model
CAR-F-92    4.57          4.47          4.62          4.53
CAR-S-91    5.45          5.31          5.45          5.21
EAR-F-83    38.80         36.99         39.34         37.11
HEC-S-92    12.09         12.03         12.69         11.70
KFU-S-93    15.73         15.90         16.09         15.41
LSE-F-91    11.97         12.16         14.22         11.43
RYE-F-92    13.02         10.25         13.40         10.21
STA-F-83    159.82        159.59        165.25        159.34
TRE-S-92    8.99          8.92          9.26          8.64
UTA-S-92    3.77          3.55          3.73          3.55
UTE-S-92    28.59         27.99         30.37         27.64
YOR-F-83    41.10         40.71         43.00         40.46

4 DISCUSSION AND FUTURE RESEARCH

In Asmuni et al. (2005) it was demonstrated that multiple ordering heuristics, utilizing fuzzy techniques to consider two ordering heuristics simultaneously, could outperform any single ordering heuristic on the benchmark datasets used. In this paper, these experiments have been extended by improving the construction algorithm and by utilizing up to three ordering heuristics in the fuzzy expert system. For 10 out of the 12 datasets used, better results were obtained compared with two (fuzzy) ordering heuristics. This indicates that the selection of combinations of ordering heuristics is important in order to get good quality solutions. It is not the case, however, that three ordering heuristics always performed better than two. For two datasets (CAR-F-92 and EAR-F-83), two ordering heuristics (SD+LE) produced the best overall result. This is probably due to the fact that a fixed fuzzy rule set was implemented in each case – no tuning of fuzzy rules was carried out. If the rule set were tuned, it should be possible to find a model based on three ordering heuristics that outperforms one based on two (assuming that it is possible to search a reasonable proportion of the overall model search space). In addition, this study also confirms that, as might be expected, fuzzy reasoning does result in better solutions compared with linear combinations. Although the fuzzy techniques required longer processing time, this is acceptable because, once the best fuzzy model is known for the problem instances, the constructive algorithm can produce the solution in a reasonable time.

Table 11 shows the results when implementing multiple ordering heuristics compared to Carter’s sequential constructive algorithm. The last column in Table 11 shows the best results for the benchmark datasets compiled from Abdullah et al. (2006), Burke et al. (2004), Burke and Newall (2003), Caramia et al. (2001) and Merlot et al. (2003). Although the best results produced in these experiments did not beat any of the best benchmark results, the fuzzy based ordering produced better results for CAR-F-92, CAR-S-91, STA-F-83, TRE-S-92 and YOR-F-83 than Carter et al.’s constructive approach.

The main objective of this research was to investigate the effect of simultaneously considering multiple ordering heuristics when finding solutions for examination timetabling. The ordering represents the difficulty of the exam to be scheduled. How the exams are ordered and chosen sequentially will influence the behavior of the search algorithm in finding feasible solutions. Rather than employing the single ordering heuristic (which is usually used), this paper proposes a new approach to calculate exam difficulty by taking into account several ordering heuristics at the same time. Two approaches have been used to calculate the weight: linear combination and fuzzy reasoning.


Table 11 A comparison of results obtained herein with results published by other researchers

Data set    Experiments'    Carter et al.    Best results
            best results    (1996)           from literature
CAR-F-92    4.47            6.20             4.10
CAR-S-91    5.21            7.10             4.65
EAR-F-83    36.99           36.40            29.30
HEC-S-92    11.70           10.80            9.20
KFU-S-93    15.41           14.00            13.46
LSE-F-91    11.43           10.50            9.60
RYE-F-92    10.21           7.30             6.80
STA-F-83    159.34          161.50           150.28
TRE-S-92    8.64            9.60             8.13
UTA-S-92    3.55            3.50             3.20
UTE-S-92    27.64           25.80            24.21
YOR-F-83    40.46           41.70            36.11

As future work, the authors will be experimenting with constraint weight ordering heuristics as described in Black (2003). In the next stage, the authors aim to investigate iterative improvement utilizing the Great Deluge Algorithm when it is started with the good initial solutions produced in this paper. Finally, the authors expect to explore the use of more sophisticated search algorithms in tuning the membership functions and the fuzzy rules.

REFERENCES

Abdullah, S., Ahmadi, S., Burke, E. K. and Dror, M. 2006. Investigating Ahuja-Orlin’s Large Neighbourhood Search Approach for Examination Timetabling. OR Spectrum, 29, 351--372.

Asmuni, H., Burke, E. K. and Garibaldi, J. M. 2005. Fuzzy Multiple Ordering Criteria for Examination Timetabling. In: E. K. Burke & M. Trick (Eds.): Practice and Theory of Automated Timetabling V (PATAT 2004, Pittsburgh, USA, August 2004, Selected Revised Papers). Lecture Notes in Computer Science, Vol. 3616. Springer, Berlin, 334--353.

Black, D. P. 2003. Search in Weighted Constraint Satisfaction Problems. PhD Thesis, University of Leeds, United Kingdom.

Broder, S. 1964. Final examination scheduling. Communications of the ACM, 7, 494--498.

Burke, E. K., Bykov, Y., Newall, J. and Petrovic, S. 2004. A time-predefined local search approach to exam timetabling problems. IIE Transactions on Operations Engineering, 36, 509--528.

Burke, E. K., De Werra, D. and Kingston, J. 2003. Applications in timetabling. In: Yellen, J., Gross, J. L. (Eds.): Handbook of Graph Theory. Chapman Hall/CRC Press, 445--474.

Burke, E. K., Elliman, D. G., Ford, P. H. and Weare, R. F. 1996. Examination timetabling in British Universities – a survey. In: Burke, E., Ross, P. (Eds.): Practice and Theory of Automated Timetabling I (PATAT 1995, Edinburgh, Aug/Sept, selected papers). Lecture Notes in Computer Science, Vol. 1153. Springer-Verlag, Berlin Heidelberg New York, 76--90.

Burke, E. K., Elliman, D. G. and Weare, R. F. 1995. A hybrid genetic algorithm for highly constrained timetabling problems. Proceedings of the 6th International Conference on Genetic Algorithms (ICGA'95, Pittsburgh, USA, 15th-19th July 1995). 605--610, Morgan Kaufmann, San Francisco, CA, USA.

Burke, E. K. and Newall, J. P. 2003. Enhancing Timetable Solutions with Local Search Methods. In: Burke, E., Causmaecker, P. D. (Eds.): Practice and Theory of Automated Timetabling IV (PATAT 2002, Gent, Belgium, August, selected papers). Lecture Notes in Computer Science, Vol. 2740. Springer-Verlag, Berlin Heidelberg New York, 195--206.

Burke, E. K. and Newall, J. P. 2004. Solving examination timetabling problems through adaptation of heuristic orderings. Annals of Operations Research, 129, 107--134.

Burke, E. K., Newall, J. P. and Weare, R. F. 1996. A Memetic Algorithm for University Exam Timetabling. In: Burke, E. K., Ross, P. (Eds.): Practice and Theory of Automated Timetabling. Lecture Notes in Computer Science, Vol. 1153. Springer, 241--250.

Burke, E. K., Newall, J. P. and Weare, R. F. 1998. Initialisation Strategies and Diversity in Evolutionary Timetabling, Evolutionary Computation Journal (special issue on Scheduling), 6, 81--103.

Burke, E. K. and Petrovic, S. 2002. Recent research directions in automated timetabling. European Journal of Operational Research. 140, 266--280.

Caramia, M., Dell’Olmo, P. and Italiano, G. F. 2001. New algorithms for examination timetabling. In: Naher, S., Wagner, D. (Eds.): Algorithm Engineering 4th Int. Workshop, Proc. WAE 2000 (Saarbrucken, Germany, September) Lecture Notes in Computer Science, Vol. 1982. Springer-Verlag, Berlin Heidelberg New York, 230--241.

Carter, M. W. and Laporte, G. 1996. Recent developments in practical examination timetabling. In: Burke, E., Ross, P. (Eds.): Practice and Theory of Automated Timetabling I (PATAT 1995, Edinburgh, Aug/Sept, selected papers). Lecture Notes in Computer Science, Vol. 1153. Springer-Verlag, Berlin Heidelberg New York, 3--21.

Carter, M. W., Laporte, G. and Lee, S. Y. 1996. Examination timetabling: Algorithmic strategies and applications. Journal of the Operational Research Society. 47, 373--383.

Cole, A. J. 1964. The preparation of examination time-tables using a small-store computer. The Computer Journal, 7(2), 117--121.

Di Gaspero, L. and Schaerf, A. 2001. Tabu search techniques for examination timetabling. In: Burke, E., Erben, W. (Eds.): Practice and Theory of Automated Timetabling III (PATAT 2000, Konstanz Germany, August, selected papers). Lecture Notes in Computer Science, Vol. 2079. Springer-Verlag, Berlin Heidelberg New York, 104--117.

Kendall, G. and Hussin, N. M. 2004. Tabu Search Hyper-Heuristic Approach to the Examination Timetabling Problem at University Technology MARA. In: E. K. Burke & M. Trick (Eds.): Proceedings of the 5th International Conference on Practice and Theory of Automated Timetabling (PATAT 2004), Pittsburgh, USA, 199--217.

Lim, M. H., Rahardja, S. and Gwee, B. H. 1996. A GA paradigm for learning fuzzy rules. Fuzzy Sets and Systems, 82, 177--186.

Merlot, L. T. G., Boland, N., Hughes, B. D. and Stuckey, P. J. 2003. A hybrid algorithm for the examination timetabling problem. In: Burke, E., Causmaecker, P. D. (Eds.): Practice and Theory of Automated Timetabling IV (PATAT 2002, Gent, Belgium, August, selected papers). Lecture Notes in Computer Science, Vol. 2740. Springer-Verlag, Berlin Heidelberg New York, 207--231.

Petrovic, S. and Burke, E. K. 2004. University Timetabling. Ch. 45 in: Leung, J. (Ed.): Handbook of Scheduling: Algorithms, Models, and Performance Analysis. Chapman and Hall/CRC Press.

Sazonov, E. S., Klinkhachorn, P., Gangarao, H. V. S. and Halabe, U. B. 2002. Fuzzy logic expert system for automated damage detection from changes in strain energy mode shapes. Nondestructive Testing and Evaluation. 18(1), 1--20.

Schaerf, A. 1999. A survey of automated timetabling. Artificial Intelligence Review. 13, 87--127.

Thompson, J. M. and Dowsland, K. A. 1998. A robust simulated annealing based examination timetabling system. Computers and Operations Research. 25, 637--648.

Yager, R. R. 1988. On ordered weighted averaging aggregation operators in multicriteria decision making. IEEE Transactions on Systems, Man, and Cybernetics. 18, 183--190.

Zimmermann, H. J. 1978. Fuzzy programming and linear programming with several objective functions. Fuzzy Sets and Systems. 1(1), 45--55.