

Scheduling Master-Slave Multiprocessor Systems *

Sartaj Sahni

Computer & Information Sciences Department University of Florida

Gainesville, FL 32611, USA

Abstract. We define the master-slave multiprocessor scheduling model and provide several applications for the model. O(n log n) algorithms are developed for some of the problems formulated, and some others are shown to be NP-hard.

1 Introduction

The problem of scheduling a multiprocessor computer system has received considerable attention [1, 2, 3, 4, 5, 6, 7, 8]. In this paper, we develop a model to schedule a parallel computer system in which the parallel computer operates under the control of a host processor. The host processor is referred to as the master processor, and the processors in the parallel computer are referred to as the slave processors. The nCube hypercube is an example of such a parallel computer system. When programming such a system, one typically writes a program that runs on the master computer. This is a sequential program that spawns parallel tasks to be run on the slave processors. When these tasks complete, the sequential thread on the master continues and possibly (later) spawns a new set of parallel tasks on the slaves, and so on. The number of parallel tasks spawned is always less than or equal to the number of slave processors.

If we examine the execution profile of such a computer system, we see that, in general, there are time intervals in which only the master is active, only the slaves are active, or both the master and the slaves are active. With each task to be run on a slave processor, we may associate three activities:

1. Preprocessing. This is the work the master has to do to collect the data needed by the slave, and includes the overhead involved in initiating the transfer of this data as well as the code to be run by the slave.

2. Slave work. This includes the work the slave must do to complete the assigned computation task, receive the data and code from the master, and transfer the results back to the master. Into this work, we also include the transmission delays experienced in receiving the data and code, from the time the master initiates transmission to the time the slave receives it, as well as from the time the slave initiates transmission of the results to the time the master receives the results.

* This work was supported in part by the National Science Foundation under grant MIP-9103379.

3. Postprocessing. This is the work the master must do to receive the results and store them in the desired format. It also includes any checking or data-combining work the master may do on the results.

In addition to applications to parallel computer scheduling, the master-slave model can be used to model scheduling problems that arise in industrial settings. The master-slave scheduling model has the following attributes:

1. there is a single master processor
2. there are as many slave processors as parallel jobs
3. associated with each job, there are three tasks: pre-processing (performed by the master), slave work (performed by the slaves), and post-processing (performed by the master)
4. for each job, the tasks are to be performed in the order: pre-processing, slave work, post-processing

Let a_i > 0, b_i > 0, and c_i > 0, respectively, denote the time needed to perform the three tasks associated with job i, and let n be the number of jobs as well as the number of slaves. In this paper, we shall use the notations a_i, b_i, c_i to represent both the tasks of job i as well as the time needed to complete these tasks.

When we are scheduling a parallel computer system using the above model, we are interested in schedules that minimize the finish time. However, when scheduling industrial systems using the above model, we may be interested in minimizing either the schedule finish time or the mean finish time of the jobs.

In this paper, we do the following.

1. In Section 2, we show that obtaining minimum finish time no-wait-in-process (MFTNW) schedules (i.e., schedules in which once the processing of a job begins, it continues without interruption to completion) is NP-hard for each of the following scheduling disciplines:
   (a) Each job's pre-processing must be done before its post-processing. No other constraint is put on the master.
   (b) The pre-processing and post-processing orders are the same.
   In this section, we also develop an O(n log n) algorithm to obtain MFTNW schedules when the pre-processing order is required to be the reverse of the post-processing order.

2. In Section 3, we develop O(n log n) algorithms to minimize finish time for each of the following scheduling constraints:
   (a) The pre-processing and post-processing orders are the same.
   (b) The pre-processing order is the reverse of the post-processing order.

2 No Wait in Process

Our NP-hardness proofs use the subset sum problem, which is known to be NP-hard [GARE79]. This problem is defined below:


Input: A collection of positive integers x_i, 1 ≤ i ≤ n, and a positive integer M.
Output: "Yes" iff there is a subset with sum exactly equal to M.

From any instance of the subset sum problem, we may construct an equivalent instance of MFTNW as below:

a_i = c_i = x_i/2,  b_i = ε,  1 ≤ i ≤ n

a_{n+1} = c_{n+1} = S - M + 1,  b_{n+1} = M + nε

a_{n+2} = c_{n+2} = M + 1,  b_{n+2} = S - M + nε

where S is the sum of the x_i's and 0 < ε < 1/n.
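For concreteness, the reduction can be generated mechanically. The sketch below is illustrative (the function name and the particular choice ε = 1/(2n) are ours; any 0 < ε < 1/n works):

```python
def mftnw_instance(xs, M):
    """Build the MFTNW job list of (a_i, b_i, c_i) triples from a
    subset sum instance given by positive integers xs and target M."""
    n = len(xs)
    S = sum(xs)
    eps = 1.0 / (2 * n)                               # any 0 < eps < 1/n
    jobs = [(x / 2, eps, x / 2) for x in xs]          # a_i = c_i = x_i/2, b_i = eps
    jobs.append((S - M + 1, M + n * eps, S - M + 1))  # job n+1
    jobs.append((M + 1, S - M + n * eps, M + 1))      # job n+2
    return jobs
```

The resulting instance admits a no-wait schedule of length 3S + 4 + 2nε iff some subset of the x_i's sums to M.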

In the no-wait case, the master processor cannot preempt any job, as such a preemption would violate the no-wait constraint. In the preceding section, we remarked that there is no advantage to preemptions on slave processors. So, we may assume non-preemptive schedules. Since a_{n+1} = c_{n+1} > b_{n+2}, the pre-processing and/or post-processing tasks of job n+1 cannot be done while a slave is working on job n+2. Similarly, the pre-processing and/or post-processing tasks of job n+2 cannot be overlapped with the slave task of job n+1. Hence, every no-wait schedule has a finish time f that is at least the sum of the task times of these two jobs. That is,

f ≥ a_{n+1} + b_{n+1} + c_{n+1} + a_{n+2} + b_{n+2} + c_{n+2} = 3S + 4 + 2nε

There are exactly two templates for schedules with this length. One has job n + 1 processed before job n + 2 and the other has n + 2 preceding n + 1 (see Figure 1).

[Figure unrecoverable from the scan: the two Gantt-chart templates on master M and slaves S1, S2, spanning time 0 to 3S + 4 + 2nε.]

Fig. 1. Templates for NP-hard proof


To complete the schedule using either of the templates and not exceed the finish time of 3S + 4 + 2nε, some of the remaining jobs must fully overlap with b_{n+1} and the remainder with b_{n+2}. For this, the sum of the first group's task times cannot exceed b_{n+1} = M + nε, and the sum of the second group's task times cannot exceed b_{n+2} = S - M + nε. Since the sum of the task times for the remaining jobs is S + nε and ε < 1/n, the only way to accomplish this is when there is a subset of the x_i's that sums to M. Hence, MFTNW is NP-hard.

We can modify the above proof to show that order preserving MFTNW (OP-MFTNW) is also NP-hard. The task times for the n+2 jobs are:

a_i = c_i = x_i,  b_i = S - x_i + 1,  1 ≤ i ≤ n

a_{n+1} = c_{n+1} = S - M + 1,  b_{n+1} = M

a_{n+2} = c_{n+2} = M + 1,  b_{n+2} = S - M

The finish time is at least the sum of the master processor task times. So,

f ≥ Σ_{i=1}^{n+2} a_i + Σ_{i=1}^{n+2} c_i = 4S + 4

It is easy to see that there is an order preserving no-wait schedule with length 4S + 4 whenever there is a subset of the x_i's that sums to M. We shall show that whenever there is a schedule with this length, there is a subset that sums to M.

As in the previous proof, the tasks of jobs n+1 and n+2 cannot overlap. So, jobs n+1 and n+2 are done in sequence. Suppose that job n+1 is done before n+2 (the case n+2 before n+1 is similar). Since the sum of the task times for these two jobs is 3S + 4, the only way to finish processing by time 4S + 4 is for the master processor to be busy throughout the time the slaves are working on tasks b_{n+1} and b_{n+2}, and for task a_{n+1} to begin by time S. The first requirement means that there are only S other time units when the master can work on the remaining S units of pre- and post-processing needed by jobs 1, ..., n. There are three cases to consider:

Case 1: There is at least one job whose pre-processing is done before a_{n+1} and whose post-processing is done after a_{n+1}. Let u, 1 ≤ u ≤ n, be the first such job. The post-processing of this job must be done while a slave is working on b_{n+1}, as a_{n+1} + b_{n+1} = S + 1 > b_u = S - x_u + 1. Hence, we have the situation shown in Figure 2 (a).

The tasks (if any) scheduled between a_{n+1} and c_u must be pre-processing tasks. To see this, note that to schedule a post-processing task here, the corresponding pre-processing task must have been scheduled either before a_u (in which case u is not the first job with pre-processing before a_{n+1} and post-processing after a_{n+1}), or in between a_u and a_{n+1} (in which case the order requirement is violated, as the post-processing of this job precedes that of u), or between a_{n+1} and c_u (which is not possible, as the sum of the task lengths for each of jobs 1, ..., n exceeds S + 1, which in turn is larger than b_{n+1}).

The tasks scheduled between a_u and a_{n+1} are either post-processing tasks of jobs started before a_u or pre-processing tasks of jobs that will finish after c_u (because of the order requirement). Hence, the tasks beginning with a_u and ending just before c_u that are processed by the master correspond to different jobs. The total amount of time from the beginning of a_u to the start of c_u is


[Figure unrecoverable from the scan: templates (a) and (b) on master M and slaves S1, S2, spanning time 0 to 4S + 4.]

Fig. 2. Templates for order preserving NP-hard proof

a_u + b_u = S + 1. Subtracting a_{n+1} from this leaves us with M units of time, all of which must be utilized by the master in order for the schedule to complete by 4S + 4. This can happen iff there is a subset of the x_i's that sums to M.

Case 2: There is at least one job whose pre- and post-processing are done before a_{n+1}. Let u be one such job. Since the sum of the task lengths of u is S + x_u + 1, task a_{n+1} cannot begin until S + x_u + 1, and so the schedule cannot complete by 4S + 4. Therefore, this case is not possible.

Case 3: Task a_{n+1} is the first task scheduled. Figure 2 (b) shows the scheduling template for this case. For the schedule length to be 4S + 4, the total time represented by the regions A, B, C, and D must be 2S. The master processor cannot be idle in any of these regions, as the amount of pre- and post-processing not scheduled in Figure 2 (b) is exactly 2S. Because of the order constraint, in region A we can schedule only the pre-processing of some subset of the jobs 1, ..., n. Hence, there needs to be a subset of the x_i's that sums to b_{n+1} = M.

Hence, OP-MFTNW is NP-hard.

The MFTNW problem is quite easy to solve when the post-processing is to be done in the reverse order of the pre-processing. In this case, there is at most one feasible solution. Hence, if such a solution exists, it has minimum finish time and also minimum mean finish time. Note that when there is no ordering constraint between pre- and post-processing, and also when these two orders are required to be the same, there is always at least one feasible solution (i.e., process the jobs in sequence using any permutation). When the post-processing order is required to be the reverse of the pre-processing order and no wait is permitted in process, the processing of the jobs must be fully nested. That is, the processing of the j'th scheduled job must begin and end while a slave is


working on the (j-1)'th job. As a result, if jobs are pre-processed in the order 1, 2, ..., n, then the following must be true:

b_i > a_{i+1} + b_{i+1} + c_{i+1},  1 ≤ i < n    (1)

Since the a_j's and c_j's are positive, it follows that:

b_1 > b_2 > ... > b_n    (2)

The preceding inequality implies a unique ordering of the jobs. The algorithm to determine feasibility, as well as a feasible schedule that minimizes both the finish and mean finish times, is:

1. (Verify Equation 2) Sort the jobs into decreasing order of b_j. If such an ordering does not exist, there is no feasible schedule. In this case, terminate.

2. (Verify Equation 1) For i = 1, ..., n-1, verify that b_i > a_{i+1} + b_{i+1} + c_{i+1}. If there is an i for which this is not true, then there is no feasible schedule. In this case, terminate.

3. The minimum finish time and mean finish time schedule is obtained by pre-processing the jobs in the order determined in step 1.

The complexity of the above algorithm is readily seen to be O(n log n).
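The two verification steps can be rendered directly in code. The sketch below is an illustrative Python version (not from the paper; jobs are (a, b, c) triples and the function name is ours):

```python
def reverse_order_nowait_schedule(jobs):
    """Return the unique feasible pre-processing order (a list of job
    indices) for the reverse-order no-wait problem, or None if the
    instance is infeasible."""
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j][1])
    b = [jobs[j][1] for j in order]
    # Step 1 (Equation 2): b must be strictly decreasing; ties are infeasible.
    if any(b[i] <= b[i + 1] for i in range(len(b) - 1)):
        return None
    # Step 2 (Equation 1): each job must nest strictly inside its predecessor.
    for i in range(len(order) - 1):
        a2, b2, c2 = jobs[order[i + 1]]
        if b[i] <= a2 + b2 + c2:
            return None
    return order

print(reverse_order_nowait_schedule([(1, 2, 1), (1, 10, 1)]))  # [1, 0]
```

The sort dominates the running time, giving the O(n log n) bound noted above.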

3 Same Pre- and Post-Processing Orders

In this section, we develop an O(n log n) algorithm to construct an order preserving minimum finish time (OPMFT) schedule. Without loss of generality, we place the following restrictions on the schedules we consider in this section:

R1: The schedules are non-preemptive.
R2: Slave tasks begin as soon as their corresponding pre-processing tasks are complete.
R3: Each post-processing task begins as soon after the completion of its slave task as is consistent with the order preserving constraint.

First, we establish some properties of order preserving schedules that satisfy these assumptions.

Definition 1. A canonical order preserving schedule (COPS) is an order preserving schedule in which (a) the master processor completes the pre-processing tasks of all jobs before beginning any of the post-processing tasks, and (b) the pre-processing tasks begin at time zero and complete at time Σ_{i=1}^{n} a_i.

Because of restrictions R1-R3, every COPS is uniquely described by providing the order in which the pre-processing is done.

Lemma 2. There is a canonical OPMFT schedule.


Proof: Consider any non-canonical OPMFT schedule. Let c_j be the first post-processing task that the master works on. Since the schedule is non-canonical, there is a pre-processing task that is executed at a later time. Let a_i be the first of these. Slide a_i to the left so that it begins just after the pre-processing task (if any) that immediately precedes c_j (if there is no such task preceding c_j, then slide a_i left so as to start at time 0). Slide the post-processing tasks beginning with c_j and ending at the post-processing task that immediately preceded a_i (before it was moved) rightwards by a_i units. Slide the slave and post-processing tasks left so as to satisfy restrictions R2 and R3. The result is another OPMFT schedule that is closer to canonical form. By repeating this transformation at most n - 1 times, we can obtain a canonical OPMFT schedule. □

Lemma 3. If a_i = c_i, 1 ≤ i ≤ n, then every COPS is an OPMFT schedule.

Proof: Because of the preceding lemma, it is sufficient to show that all COPS have the same length. Each COPS is uniquely identified by the order in which the pre-processing tasks are executed. We shall show that exchanging two adjacent jobs in this ordering does not increase the schedule length. Since we can go from one permutation to any other via a finite sequence of adjacent exchanges, it follows that no matter what the pre-processing order, canonical schedules have the same finish time when jobs have equal pre- and post-processing times. Hence, all COPS are OPMFT schedules.

Consider two jobs j and j+1 that are adjacent in the pre-processing order (Figure 3). Let t_j and t_{j+1}, respectively, be the times at which the master begins tasks c_j and c_{j+1}. Slide job j+1 left by a_j so that all its tasks begin a_j units earlier than before, slide tasks a_j and b_j right by a_{j+1} units so that they begin a_{j+1} units later than before, and move task c_j so that it begins just after c_{j+1} finishes. As a result, task c_{j+1} now begins at t_{j+1} - a_j = t_{j+1} - c_j ≥ t_j. Hence, the rescheduling of job j+1 does not result in the master working on two or more jobs simultaneously. In addition, the post-processing of job j+1 does not begin until after its slave task is complete. The post-processing task c_j now begins at t_{j+1} - a_j + c_{j+1} = t_{j+1} - c_j + a_{j+1} ≥ t_j + a_{j+1}, which is greater than or equal to the time at which the slave finishes b_j. Task c_j finishes at t_{j+1} + a_{j+1}. Hence, the schedule for the remaining jobs is unchanged. □

Lemma 4. Consider the COPS defined by some permutation σ. Assume that job j is pre-processed immediately before job j+1 (i.e., j immediately precedes j+1 in σ). If c_j ≤ a_j and c_{j+1} ≥ a_{j+1}, then the schedule length (i.e., its finish time) is no less than that of the COPS obtained by interchanging j and j+1 in σ.

Proof: A diagram of the schedule with job j immediately preceding job j+1 is shown in Figure 4 (a). In this figure, t is the time at which the pre-processing of job j starts, Δ is the elapsed time between the completion of task a_{j+1} and the start of the post-processing of job j (note that Δ ≥ Σ_{k follows j+1} a_k + Σ_{k precedes j} c_k), Δ' ≥ 0 is the time between the start of c_j and c_{j+1}, and τ is the time at which c_{j+1} completes.


[Figure unrecoverable from the scan: the schedule fragment around times t_j and t_{j+1}.]

Fig. 3. Figure for Lemma 3

[Figure unrecoverable from the scan: (a) the initial schedule and (b) the schedule after exchanging jobs j and j+1, with Δ and τ marked.]

Fig. 4. Figure for Lemma 4

Let σ' be the permutation obtained by interchanging jobs j and j+1 in σ. The schedule corresponding to σ' is shown in Figure 4 (b). Let t' and τ', respectively, be the times at which c_{j+1} and c_j finish in this schedule. If Δ ≥ a_j, then t' ≤ τ - a_j. Also, from Figure 4 (a), we observe that b_j ≤ a_{j+1} + Δ ≤ c_{j+1} + Δ. So, b_j finishes by t' in Figure 4 (b). Hence, τ' = t' + c_j ≤ τ - a_j + c_j ≤ τ. As a result, the post-processing tasks of the remaining jobs can be done so as to complete at or before their completion times in σ, and the interchange of j and j+1 does not increase the schedule length.

If Δ < a_j, then c_{j+1} starts at time t + a_{j+1} + a_j + Δ in σ'. So, t' = t + a_{j+1} + a_j + Δ + c_{j+1}. The time at which b_j finishes in σ' is t + a_{j+1} + a_j + b_j ≤ t + 2a_{j+1} + a_j + Δ ≤ t + a_{j+1} + a_j + Δ + c_{j+1} = t'. So, c_j finishes at t' + c_j = t + a_{j+1} + a_j + Δ + c_{j+1} + c_j ≤ τ. Consequently, the COPS defined by σ' has a finish time that is ≤ that of the COPS defined by σ. □


Theorem 5. There is an OPMFT schedule which is a COPS in which the pre-processing order satisfies the following:

1. jobs with c_j > a_j come first
2. those with c_j = a_j come next
3. those with c_j < a_j come last

Proof: Immediate consequence of Lemma 4. □

Lemma 6. Let σ define an OPMFT COPS that satisfies Theorem 5. Its length is unaffected by the relative order of jobs with a_j = c_j.

Proof: Follows from Lemma 4. □

Lemma 7. There is an OPMFT COPS in which all jobs with c_j > a_j are at the left end in non-decreasing order of a_j + b_j.

Proof: From Theorem 5, we know that there is an OPMFT COPS in which all jobs with c_j > a_j are at the left end. Let σ = (1, 2, ..., n) define such an OPMFT COPS. Let j be the least integer such that:

1. a_j + b_j > a_{j+1} + b_{j+1}
2. c_j > a_j
3. c_{j+1} > a_{j+1}

If there is no such j, then the lemma is established. So, assume that such a j exists. Figure 4 (a) shows the relevant part of the schedule. Δ denotes the time span between the finish of task a_{j+1} and the finish of the task that immediately precedes c_j (in the figure, this happens to coincide with the start of c_j). Figure 4 (b) shows the relevant part of the schedule, σ', that results from interchanging the jobs j and j+1. We shall show that τ' ≤ τ. As a result, the finish time of σ' is no more than that of σ. So, σ' is also an OPMFT schedule. By repeated application of this exchange process, σ is transformed into an OPMFT COPS that satisfies the lemma.

case (a) b_j ≤ a_{j+1} + Δ and b_{j+1} ≤ Δ + a_j. Now, b_j ≤ c_{j+1} + Δ and b_{j+1} ≤ Δ + c_j. So, τ = t + a_j + a_{j+1} + Δ + c_j + c_{j+1} = τ'.

case (b) b_j ≤ a_{j+1} + Δ and b_{j+1} > Δ + a_j. The conditions for this case imply that Δ + a_j + b_j < Δ + a_{j+1} + b_{j+1}, or a_j + b_j < a_{j+1} + b_{j+1}, which contradicts the assumption on j. Hence, this case cannot arise.

case (c) b_j > a_{j+1} + Δ and b_{j+1} ≤ Δ + a_j. Since c_j > a_j, b_{j+1} ≤ Δ + c_j, τ = t + a_j + b_j + c_j + c_{j+1}, and τ' = t + a_{j+1} + a_j + max{Δ + c_{j+1}, b_j} + c_j. For τ' to be ≤ τ, we need:

b_j + c_{j+1} ≥ a_{j+1} + max{Δ + c_{j+1}, b_j}

So, if b_j ≥ Δ + c_{j+1}, we need b_j + c_{j+1} ≥ a_{j+1} + b_j, or c_{j+1} ≥ a_{j+1}. This is true by choice of j. If b_j < Δ + c_{j+1}, we need b_j + c_{j+1} ≥ a_{j+1} + Δ + c_{j+1}, or b_j ≥ a_{j+1} + Δ. This is part of the assumption for this case.


case (d) b_j > a_{j+1} + Δ and b_{j+1} > Δ + a_j. This time, τ = t + a_j + max{b_j + c_j, a_{j+1} + b_{j+1}} + c_{j+1} = t + max{a_j + b_j + c_j + c_{j+1}, a_j + a_{j+1} + b_{j+1} + c_{j+1}}, and τ' = t + a_{j+1} + max{b_{j+1} + c_{j+1}, a_j + b_j} + c_j = t + max{a_{j+1} + b_{j+1} + c_{j+1} + c_j, a_j + b_j + c_j + a_{j+1}}. Since a_j + b_j > a_{j+1} + b_{j+1}, a_j + b_j + c_j + c_{j+1} > a_{j+1} + b_{j+1} + c_{j+1} + c_j. Also, since c_{j+1} > a_{j+1}, a_j + b_j + c_j + c_{j+1} > a_j + b_j + c_j + a_{j+1}. Hence, τ > τ'. □

Lemma 8. There is an OPMFT COPS in which all jobs with c_j < a_j are at the right end in non-increasing order of b_j + c_j.

Proof: Similar to that of Lemma 7. □

Theorem 9. There is an OPMFT COPS in which the pre-processing order satisfies the following:

1. jobs with c_j > a_j come first, in non-decreasing order of a_j + b_j
2. those with c_j = a_j come next, in any order
3. those with c_j < a_j come last, in non-increasing order of b_j + c_j

Proof: This follows from Theorem 5 and the fact that the proofs of Lemmas 6, 7, and 8 are local to the portion of the schedule they are applied to. □

Theorem 9 results in the simple O(n log n) algorithm given below to find a pre-processing order that defines a COPS which is an OPMFT schedule.

Step 1: Partition the jobs into three sets L, M, and R such that L = {j | c_j > a_j}, M = {j | c_j = a_j}, and R = {j | c_j < a_j}.

Step 2: Sort the jobs in L into non-decreasing order of a_j + b_j.

Step 3: Sort the jobs in R into non-increasing order of b_j + c_j.

Step 4: The pre-processing order for the COPS is: the sorted L, followed by the jobs in M in any order, followed by the sorted R.
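A minimal sketch of these four steps (illustrative; the function name is ours, and jobs are (a, b, c) triples):

```python
def opmft_preprocessing_order(jobs):
    """Return the pre-processing order of Theorem 9 as a list of
    job indices; jobs[i] = (a_i, b_i, c_i)."""
    idx = range(len(jobs))
    L = [j for j in idx if jobs[j][2] > jobs[j][0]]   # c > a: first
    M = [j for j in idx if jobs[j][2] == jobs[j][0]]  # c == a: any order
    R = [j for j in idx if jobs[j][2] < jobs[j][0]]   # c < a: last
    L.sort(key=lambda j: jobs[j][0] + jobs[j][1])     # a+b non-decreasing
    R.sort(key=lambda j: -(jobs[j][1] + jobs[j][2]))  # b+c non-increasing
    return L + M + R

print(opmft_preprocessing_order([(3, 4, 1), (1, 2, 5), (2, 2, 2)]))  # [1, 2, 0]
```

The two sorts dominate, so the running time is O(n log n) as claimed.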

4 Reverse Order Post-processing

While there are no-wait-in-process master-slave instances that are infeasible when the post-processing order is required to be the reverse of the pre-processing order, this is not the case when the no-wait constraint is removed. For any given pre-processing permutation σ, we can construct a reverse-order schedule as below:

1. the master pre-processes the n jobs in the order σ
2. slave i begins the slave processing of job i as soon as the master completes its pre-processing
3. the master begins the post-processing of the last job (say k) in σ as soon as its slave task is complete


4. the master begins the post-processing of job j ≠ k at the later of the two times: (a) when it has finished the post-processing of the successor of j in σ, and (b) when slave j has finished b_j

Schedules constructed in the above manner will be referred to as canonical reverse order schedules (CROS). Given a pre-processing permutation σ, the corresponding CROS is unique. It is easy to establish that every minimum finish-time reverse order (ROMFT) schedule is a CROS. So, we can limit ourselves to finding a minimum finish-time CROS.
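The finish time of the CROS determined by a permutation follows directly from the four rules above. A sketch (illustrative; not code from the paper):

```python
def cros_finish_time(jobs, sigma):
    """Finish time of the canonical reverse order schedule for
    pre-processing order sigma; jobs[i] = (a_i, b_i, c_i)."""
    # Pre-processing runs back to back; each slave starts immediately.
    t = 0
    slave_done = {}
    for j in sigma:
        t += jobs[j][0]
        slave_done[j] = t + jobs[j][1]
    # Post-processing runs in the reverse of sigma; each task starts at
    # the later of its slave finish and the previous post-task's finish.
    master = 0
    for j in reversed(sigma):
        master = max(master, slave_done[j]) + jobs[j][2]
    return master

print(cros_finish_time([(1, 5, 1), (1, 2, 1)], [0, 1]))  # 7
```

Here the order [0, 1] (non-increasing b) finishes at 7, while the reversed order [1, 0] finishes at 9, consistent with Lemma 10 below.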

Lemma 10. Let σ = (1, 2, ..., n) be a pre-processing permutation. Let j < n be such that b_j < b_{j+1}. Let σ' be obtained from σ by interchanging jobs j and j+1. Let τ and τ', respectively, be the finish times of the CROSs S and S' corresponding to σ and σ'. Then τ' ≤ τ.

Proof: If j < n - 1, let t be the time at which the master finishes the post-processing of job j + 2 (this time is the same in S and S'). If j = n - 1, let t = 0. Let s_j (s'_j) be the time at which task b_j finishes in S (S'). s_{j+1} and s'_{j+1} are similarly defined. From the definition of a CROS, we get:

s_j = Σ_{k=1}^{j} a_k + b_j,    s_{j+1} = Σ_{k=1}^{j+1} a_k + b_{j+1}    (3)

s'_j = Σ_{k=1}^{j+1} a_k + b_j,    s'_{j+1} = Σ_{k=1}^{j+1} a_k - a_j + b_{j+1}    (4)

Let q (q') be the time at which c_j (c_{j+1}) finishes in S (S'). It is sufficient to show that q' ≤ q. We see that:

q = max{max{t, s_{j+1}} + c_{j+1}, s_j} + c_j
  = max{t + c_j + c_{j+1}, s_{j+1} + c_j + c_{j+1}, s_j + c_j}    (5)

and

q' = max{max{t, s'_j} + c_j, s'_{j+1}} + c_{j+1}
   = max{t + c_j + c_{j+1}, s'_j + c_j + c_{j+1}, s'_{j+1} + c_{j+1}}    (6)

From Equations 3, 4, 5, and the inequality b_j < b_{j+1}, we obtain:

s'_j + c_j + c_{j+1} = s_{j+1} + b_j - b_{j+1} + c_j + c_{j+1} ≤ s_{j+1} + c_j + c_{j+1} ≤ q    (7)

and

s'_{j+1} + c_{j+1} = s_{j+1} - a_j + c_{j+1} ≤ s_{j+1} + c_{j+1} ≤ s_{j+1} + c_{j+1} + c_j ≤ q    (8)

From Equations 5, 6, 7, and 8, it follows that q' ≤ q. □


Theorem 11. The CROS defined by the ordering b_1 ≥ b_2 ≥ ... ≥ b_n is an ROMFT schedule.

Proof: Follows from Lemma 10. □

Using Theorem 11, one readily obtains an O(n log n) algorithm to construct an ROMFT schedule.
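Per Theorem 11, the ROMFT pre-processing order is obtained by a single sort on b. A sketch (illustrative; the function name is ours):

```python
def romft_order(jobs):
    """ROMFT pre-processing order per Theorem 11: job indices sorted
    into non-increasing order of b; jobs[i] = (a_i, b_i, c_i)."""
    return sorted(range(len(jobs)), key=lambda j: -jobs[j][1])

print(romft_order([(1, 2, 1), (1, 5, 1), (1, 3, 1)]))  # [1, 2, 0]
```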

5 Conclusion

In this paper, we have introduced and motivated the master-slave scheduling model. We have shown that obtaining minimum finish-time schedules under the no-wait-in-process constraint is NP-hard when the schedule is required to be order preserving, as well as when no constraint is imposed between the pre- and post-processing orders. The no-wait-in-process minimum finish time problem is solvable in O(n log n) time when the post-processing order is required to be the reverse of the pre-processing order.

When the no-wait constraint is eliminated, OPMFT as well as ROMFT schedules can be found in O(n log n) time.

References

1. G. Chen and T. Lai, Preemptive scheduling of independent jobs on a hypercube, Information Processing Letters, 28, 201-206, 1988.

2. G. Chen and T. Lai, Scheduling independent jobs on partitionable hypercubes, J. of Parallel & Distributed Computing, 12, 74-78, 1991.

3. P. Krueger, T. Lai, and V. Dixit-Radiya, Job scheduling is more important than processor allocation for hypercube computers, IEEE Trans. on Parallel & Distributed Systems, 5, 5, 488-497, 1994.

4. S. Leutenegger and M. Vernon, The performance of multiprogrammed multiprocessor scheduling policies, Proc. 1990 ACM SIGMETRICS Conference on Measurement & Modeling of Computer Systems, 226-236, 1990.

5. S. Majumdar, D. Eager, and R. Bunt, Scheduling in multiprogrammed parallel systems, Proc. 1988 ACM SIGMETRICS, 104-113, 1988.

6. C. McCreary, A. Khan, J. Thompson, and M. McArdle, A comparison of heuristics for scheduling DAGs on multiprocessors, 8th International Parallel Processing Symposium, 446-451, 1994.

7. S. Sahni, Scheduling multipipeline and multiprocessor computers, IEEE Trans. on Computers, C-33, 7, 637-645, 1984.

8. Y. Zhu and M. Ahuja, Preemptive job scheduling on a hypercube, Proc. 1990 International Conference on Parallel Processing, 301-304, 1990.