
NEW TIGHT NP-HARDNESS OF PREEMPTIVE MULTIPROCESSOR AND OPEN SHOP SCHEDULING

Evgeny Shchepin
Steklov Math. Inst., 117966 Gubkina 8, Moscow, Russia
[email protected]

Nodari Vakhania
Science Faculty, State University of Morelos, Av. Universidad 1001, Cuernavaca 62210, Morelos, Mexico
Inst. Computational Math., Akuri 8, Tbilisi 93, Georgia
[email protected]

Abstract. We show that preemptive versions of some NP-hard scheduling problems remain NP-hard, but only for a restricted number of preemptions; if a "sufficient" number of preemptions is allowed, these problems become polynomially solvable. We determine what we call the critical number of preemptions for our problems, i.e., the minimal number of preemptions for which they become polynomially solvable. First we establish that the critical number of preemptions for scheduling m identical processors is m − 1 (we show that scheduling identical processors with at most m − 2 preemptions is NP-hard). Then we consider the so-called acyclic open-shop scheduling problem with m machines, a strongly restricted version of classical open-shop scheduling, and show that m − 2 is the critical number of preemptions for this problem (the related well-known earlier result is that non-preemptive open-shop scheduling is NP-hard). Finally, we consider a slightly restricted version of scheduling on m unrelated processors, in which the processing time of any job on any processor is no more than the optimal schedule makespan. We call such processors non-lazy unrelated processors. We show that 2m − 3 is the critical number of preemptions for scheduling non-lazy unrelated processors.

Keywords: open-shop scheduling, multiprocessor scheduling, preemption, algorithm, NP-hard


Introduction

Two types of scheduling problems are distinguished in the literature: preemptive and non-preemptive ones. It is quite typical that a non-preemptive scheduling problem P is NP-hard whereas its preemptive version Ppmtn is polynomially solvable. Let us call a scheduling problem π-preemptive if at most π preemptions are allowed in it, π being any non-negative integer. A question which naturally arises is whether k-preemptive and l-preemptive problems, for some non-negative integers k and l, have the same complexity. If a 0-preemptive (non-preemptive) problem is NP-hard, does the corresponding k-preemptive problem, for k = 1, 2, ..., remain NP-hard? For many scheduling problems this is not the case, i.e., there is some positive integer π such that the corresponding k-preemptive versions, for k ≥ π, are polynomially solvable. Then what is the minimal π such that the π-preemptive version is polynomially solvable (we call such π the critical number of preemptions for the problem)? Traditionally, a preemptive scheduling problem allows an arbitrary number of preemptions: a k-preemptive problem for a fixed integer k > 0 would not be treated as a preemptive scheduling problem (unless k is "sufficiently" large, a magnitude not known in advance and changing from problem to problem). So although a k-preemptive problem, k > 0, is not a non-preemptive problem, it might not be a "preemptive problem" either. We can refine the rough classification of scheduling problems into preemptive and non-preemptive ones by specifying the maximal number of preemptions allowed for a particular problem, i.e., by considering π-preemptive scheduling problems.

A polynomial-time algorithm for a preemptive problem Ppmtn imposes a certain number of preemptions, whose maximum can usually be estimated. In most real-life problems the number of preemptions is a crucial factor which has to be kept as small as possible, since each preemption implies additional communication cost and may also force a job migration. How small can the overall number of preemptions be made, i.e., what is the critical number of preemptions for Ppmtn? This kind of question has also been posed by Shachnai et al. [7], and the critical number of preemptions 2m − 1 for scheduling uniform machines Q/pmtn/Cmax was established (an O(n + m log m) algorithm by Gonzalez and Sahni [3] yields at most 2m − 2 preemptions). In this paper, we first establish that m − 1 is the critical number of preemptions for scheduling identical machines P/pmtn/Cmax. We show that (m − 2)-preemptive scheduling on identical processors P/pmtn(m − 2)/Cmax is NP-hard (Section 2), whereas a venerable linear-time algorithm by McNaughton [6] yields m − 1 preemptions.
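For concreteness, here is a minimal sketch of McNaughton's wrap-around rule (our own illustration, not the formulation of [6]): jobs are packed greedily up to the bound T = max{pmax, (∑ pj)/m} and a job is cut whenever a machine becomes full, so at most one job is split per machine boundary and at most m − 1 preemptions arise.

```python
from fractions import Fraction

def mcnaughton(p, m):
    """McNaughton's wrap-around rule (a sketch): p is the list of job lengths,
    m the number of identical machines; returns, for each machine, a list of
    (job, start, end) pieces.  At most m - 1 jobs are split, once each."""
    p = [Fraction(x) for x in p]              # exact arithmetic for clarity
    T = max(max(p), sum(p) / m)               # optimal preemptive makespan
    schedule = [[] for _ in range(m)]
    machine, t = 0, Fraction(0)
    for j, pj in enumerate(p):
        remaining = pj
        while remaining > 0:
            piece = min(remaining, T - t)
            schedule[machine].append((j, t, t + piece))
            t += piece
            remaining -= piece
            if t == T and machine < m - 1:    # machine full: wrap to the next one
                machine, t = machine + 1, Fraction(0)
    return schedule
```

A job split at a machine boundary occupies [t, T) on one machine and [0, pj − (T − t)) on the next; these pieces do not overlap in time because pj ≤ T.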

Multiprocessor and open-shop scheduling problems are intimately related to the so-called acyclic distributions (see Shchepin & Vakhania [8]–[11]). An optimal schedule can be obtained in two stages. At the first stage an optimal distribution is constructed (a distribution assigns jobs or their parts to machines without specifying the start times). Such a distribution can be constructed in polynomial time by linear programming. The preemptions in these distributions can be represented by so-called preemption graphs: in these graphs nodes represent machines, and if two nodes are joined by an edge, then the corresponding machines share the same job, which labels this edge. An optimal distribution obtained by linear programming has at most m − 1 preemptions or, equivalently, its preemption graph is acyclic. A distribution or an open shop (which can be seen as a distribution) is acyclic if its preemption graph is acyclic. For example, in an acyclic open shop no two jobs may both have non-dummy operations on the same two machines. Scheduling acyclic distributions turned out to be an efficient tool for the exact solution of R/pmtn/Cmax (see Lawler and Labetoulle [4]) and the approximate solution of its non-preemptive version R//Cmax (see Lenstra et al. [5] and Shchepin & Vakhania [10]). Acyclic shop scheduling problems are also interesting from another point of view: they can represent "maximal" polynomially solvable cases of the corresponding non-acyclic versions (Shchepin & Vakhania [11]).

The second problem we deal with is acyclic (m − 3)-preemptive open-shop scheduling O/acyclic, pmtn(m − 3)/Cmax. The general preemptive open-shop problem O/pmtn/Cmax is well known to be solvable in polynomial (no worse than O(n^4)) time, and its non-preemptive version O//Cmax is NP-hard, due to the early classical results of Gonzalez & Sahni [2]. We show that (m − 3)-preemptive open-shop scheduling, even its acyclic version O/acyclic, pmtn(m − 3)/Cmax, remains NP-hard (Section 3). Moreover, m − 2 is the critical number of preemptions, due to the recent linear-time algorithm of Shchepin & Vakhania [11] for O/acyclic, pmtn(m − 2)/Cmax.

Scheduling m unrelated processors is much more difficult than scheduling m identical processors: for the non-preemptive version R//Cmax, no polynomial algorithm with a performance ratio less than 1.5 can exist unless P = NP, see Lenstra et al. [5] (the currently best known ratio is 2 − 1/m, see Shchepin & Vakhania [10]). However, it turns out that a slightly restricted version of R/pmtn/Cmax can be efficiently solved. In particular, we consider what we call non-lazy unrelated processors, for which the processing time of any job on any machine is no more than the optimal schedule makespan C∗max; i.e., no machine is too slow for any job (we abbreviate this problem by R/pij ≤ C∗max/Cmax, adopting the standard notation for scheduling problems). Not only is R/pij ≤ C∗max, pmtn/Cmax polynomially solvable, but its (2m − 3)-preemptive version is polynomially solvable (Section 5), whereas we show that the (2m − 4)-preemptive version is NP-hard (Section 4). Thus 2m − 3 is the critical number of preemptions for R/pij ≤ C∗max, pmtn/Cmax. For general unrelated processors we can guarantee only a near-optimal (2m − 3)-preemptive schedule: the makespan of such a schedule is no more than either the corresponding optimal non-preemptive schedule makespan or max{C∗max, pmax}, where pmax is the maximal job processing time (Section 5). These results follow relatively easily from the earlier results of Shchepin & Vakhania [9].

1. Summary of notions and notations

This section contains a glossary of the notions and notation used in the rest of the paper. The reader may have a brief look at it now, or skip it and return to it later when necessary. Not all of the introduced notation is widely used in the literature (we do, however, find it convenient).

Jobs and machines. We will be dealing with n jobs J = {J1, . . . , Jn} and m machines (processors) M = {M1, . . . , Mm}. The processing time (length) of job J on machine M will be denoted by M(J). The maximal job processing time will be denoted by pmax.

Distributions and assignments. A distribution δ of the n jobs of J on the m machines of M is a mapping δ : J × M → R+ such that ∑_{M∈M} δ(J,M) = 1 for all J ∈ J. The processing time (or length) of job J on machine M in distribution δ is |J|δ_M = δ(J,M)·M(J). The (total) processing time, or length, of a job J in a distribution δ is |J|δ = ∑_{M∈M} |J|δ_M. The maximal job length in δ is denoted by |δ|^max.

∑_{J∈J} |J|δ_M is called the load of machine M in distribution δ and is denoted by |M|δ. The maximal machine load in δ, or the makespan of δ, is denoted by |δ|max. A distribution δ for J, M with the minimal |δ|max is called an optimal distribution. A uniform distribution is a distribution in which all machine loads are equal. The total processing time in a distribution is the sum of all machine loads in this distribution.

The sequential makespan of δ is ‖δ‖ = max{|δ|max, |δ|^max}. It is clear that ‖δ‖ is a lower bound on the makespan of any feasible schedule for the corresponding open-shop problem; Gonzalez & Sahni [2] (see also Lawler & Labetoulle [4]) have shown that ‖δ‖ is achievable in a feasible schedule associated with distribution δ (a formal definition of a feasible schedule associated with δ is given later in this section).
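As a small illustration of these quantities, the following sketch (our own; delta[j][i] stands for δ(Jj, Mi) and p[j][i] for Mi(Jj)) computes the machine loads, the job lengths and the sequential makespan.

```python
def distribution_stats(delta, p):
    """Loads, job lengths and sequential makespan of a distribution (a sketch).

    delta[j][i] is the fraction of job j assigned to machine i; p[j][i] is its
    full processing time Mi(Jj)."""
    n, m = len(p), len(p[0])
    piece = [[delta[j][i] * p[j][i] for i in range(m)] for j in range(n)]
    job_len = [sum(piece[j]) for j in range(n)]                    # |Jj|^delta
    load = [sum(piece[j][i] for j in range(n)) for i in range(m)]  # |Mi|^delta
    return load, job_len, max(max(load), max(job_len))             # ||delta||
```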

An assignment of the jobs of J to the machines of M is a binary relation on J × M. Any distribution δ of J on M generates an assignment {(J,M) | δ(J,M) > 0}. We will not use a special letter for an assignment; instead, we write δ(J) for {M ∈ M | δ(J,M) > 0}.

A distribution δ is non-preemptive if δ(J,M) takes the value 0 or 1 for every J, M. The number of preemptions of job J in δ is prJ(δ) = |δ(J)| − 1, and pr(δ) = ∑_{J∈J} prJ(δ) is the (total) number of preemptions in δ.


Multiprocessors. A multiprocessor is a triple constituted by a set of jobs J, a set of machines M, and a processing time function f, a mapping from J × M to R+ whose value for a pair J, M is the processing requirement of J on M. As mentioned above, we write M(J) for the value of this function on the pair J, M. For notational simplicity, a multiprocessor will be denoted by J, M (we omit f).

A multiprocessor without any restriction on its processing time function is called a system of unrelated processors. In a system of identical processors, P(J) = Q(J) for each P, Q ∈ M and each J ∈ J.

Schedules. A schedule indicates which job is in process on each machine at each time moment. Formally, a schedule σ is a subset of J × M × [0, T), for some T ≥ 0. (J, M, t) ∈ σ signifies that job J is processed by machine M at moment t in σ. T is called the makespan of σ and is denoted by ‖σ‖. For a pair J, M ∈ J × M, σ(J,M) = {t ∈ R+ | (J,M,t) ∈ σ}; σ(J,M) may be a single interval, or it can be a multi-interval, i.e., a union of two or more intervals. Each interval is assumed to be a left half-interval. The total length of σ(J,M), denoted |σ(J,M)|, is the processing time of J on M in σ.

A feasible schedule σ must satisfy the following conditions:
(1) for every M ∈ M and every t, the set of jobs σ(M,t) = {J ∈ J | (J,M,t) ∈ σ} contains at most one element;
(2) for every J ∈ J and every t, the set of machines σ(J,t) = {M ∈ M | (J,M,t) ∈ σ} contains at most one element;
(3) for every J ∈ J, ∑_{M∈M} |σ(J,M)|/M(J) = 1.

Condition (1) provides that each machine processes at most one job at any time; (2) provides that each job is processed by at most one machine at any time; (3) provides that each job is completely processed.

A feasible schedule with the minimal makespan is called optimal. With each distribution is associated an (infinite) set of feasible schedules with this distribution. Conversely, with each feasible schedule σ is associated a distribution δσ defined by

δσ(J,M) = |σ(J,M)| / M(J).

A machine is idle at some moment (interval) if it handles no job at that moment (interval). We call a feasible schedule σ tight if no machine is idle from time 0 to its completion time (that is, the completion time of the latest job scheduled on that machine) in σ. A tight non-preemptive schedule associated with a distribution δ can be uniquely given by indicating a sequence of jobs for each machine. The makespan of this (not necessarily feasible) schedule is |δ|max.

Switching points, components, splittings and preemptions. Let us define a component of a schedule as a maximal (continuous) time interval during which a machine processes a single job. A schedule is completely given by all its components. Formally, a component of a schedule σ is a triple (J, M, [p, q)), where J is a job, M is a machine, and [p, q) is a time interval which is a connectivity component of σ(J,M). Further on we will refer to a component (J, M, [p, q)) as a J-component of σ on M, or a (J,M)-component of σ.

A boundary point of the set σ(J,M) is called a switching point of J on M, or a J-switching point on M. At such a point M interrupts the processing of J or (re)starts it.

We say that a job J is split on machine M if σ(J,M) is a multi-interval, i.e., it consists of two or more J-components. The number of splittings of job J on machine M in σ, sp(σ(J,M)), is the number of components in σ(J,M) minus 1; sp(σ(M)) = ∑_{J∈J} sp(σ(J,M)) is the number of splittings in σ on M; and sp(σ) = ∑_{M∈M} sp(σ(M)) is the total number of splittings in σ.

The number of preemptions in σ is pr(σ) = sp(σ) + pr(δσ): a preemption in σ may come either from δσ, or it may be a splitting.

Preemption and assignment graphs. The (full) preemption graph of a distribution δ is a labelled graph whose nodes correspond to the machines of M and whose edges correspond to the jobs of J. In particular, there is an edge (Mi, Mj) between nodes Mi and Mj labelled by job J iff both δ(J, Mi) and δ(J, Mj) are positive. The reduced preemption graph of δ, or the preemption graph for short, is the subgraph G(δ) of the full preemption graph in which all redundant edges are eliminated: an edge (Mi, Mj) labelled by J is redundant if there exists k, i < k < j, such that δ(J, Mk) > 0 (according to our machine indexing in M). Any subgraph of the full preemption graph of δ constituted by all nodes representing machines sharing some job J, with all edges labelled by J, is a complete graph; the corresponding subgraph of the reduced preemption graph G(δ) is a simple path in G(δ). Note that different enumerations of M give rise to different preemption graphs. However, G(δ) is acyclic if and only if the preemption graph corresponding to any enumeration of M is acyclic (Shchepin & Vakhania [8]). Other important properties are given in the next propositions, which follow immediately from the definitions.

Proposition 1 The number of preemptions in a distribution δ is equal to the number of edges in its preemption graph G(δ).

Proposition 2 The preemption graph of a distribution δ is not connected iff the set of jobs J can be partitioned into two subsets J1 and J2, J1 ∪ J2 = J, such that δ(J′) ∩ δ(J″) = ∅ for each J′ ∈ J1 and J″ ∈ J2.

A distribution δ is called acyclic if G(δ) is acyclic. For a given distribution δ and the corresponding assignment, the assignment graph is defined as the bipartite graph whose two sets of nodes are the set of jobs J and the set of machines M, respectively, and in which there is an edge (J, M) iff δ(J,M) > 0 (Lenstra et al. [5]). An acyclic assignment is an assignment with an acyclic assignment graph. There is an obvious similarity between preemption and assignment graphs. The following proposition immediately follows from the definitions.

Proposition 3 Two distributions with the same assignment have the same preemption graph. Hence, the assignment graph of a distribution is acyclic iff the preemption graph of this distribution is acyclic.
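To make Propositions 1 and 3 concrete, the following sketch (our own illustration; delta[j] is assumed to be the set of indices of the machines that receive a positive part of job j) builds the reduced preemption graph and tests acyclicity with a union–find; by Proposition 1 the number of edges equals pr(δ).

```python
def reduced_preemption_graph(delta):
    """Edges of the reduced preemption graph (a sketch): delta[j] is the set of
    machine indices with delta(Jj, Mi) > 0; consecutive machines (in index
    order) sharing a job are joined, so each job contributes a simple path."""
    edges = []
    for j, machines in enumerate(delta):
        ordered = sorted(machines)
        edges += [(a, b, j) for a, b in zip(ordered, ordered[1:])]
    return edges                         # len(edges) = pr(delta), Proposition 1

def is_acyclic(edges, m):
    """Union-find test that the graph on m machine nodes contains no cycle."""
    parent = list(range(m))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b, _ in edges:
        ra, rb = find(a), find(b)
        if ra == rb:                     # joining machines already connected
            return False
        parent[ra] = rb
    return True
```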

2. NP-hardness of P/pmtn(m − 2)/Cmax

In this section we show that P/pmtn(m − 2)/Cmax is NP-hard. We use a reduction from the NP-complete PARTITION problem (see Garey & Johnson [1]) to the decision version of P/pmtn(m − 2)/Cmax. In the PARTITION problem we are given a finite set of integers C = {c1, c2, . . . , cn} with S = ∑_{i=1}^{n} ci.¹ This decision problem has a "yes" answer iff there exists a subset of C which sums up to S/2. Given an arbitrary instance of PARTITION, we define our scheduling instance, with n + 2m + 2 jobs of total length 2m + 1/2^m, as follows.

¹ In this and the following section, n stands exclusively for the number of elements of PARTITION; in the rest of the paper n denotes the number of jobs.

There are m pairs of so-called big jobs, denoted B±_i, i = 1, . . . , m, with |B+_i| = 1 + 1/2^i and |B−_i| = 1 − 1/2^i. So the total length of all big jobs is 2m.

There are two median jobs, denoted D and D′, with |D| = 1/2^m − 5/(m·2^{m+2}) and |D′| = 3/(m·2^{m+2}). So the total length of the two median jobs is 1/2^m − 1/(m·2^{m+1}).

There are n small jobs Ci with |Ci| = ci/(m·S·2^{m+1}). So the total length of the small jobs is 1/(m·2^{m+1}).

This transformation is polynomial, as the number of jobs is bounded by a polynomial in n and m, and all magnitudes can be represented in binary encoding with O(m + log(mS)) bits.
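The construction is straightforward to set up and check with exact rational arithmetic. The sketch below is our own illustration (the toy PARTITION instance is arbitrary); it builds the job lengths defined above and verifies the stated totals.

```python
from fractions import Fraction as F

def scheduling_instance(c, m):
    """Job lengths of the reduction (a sketch following the construction above):
    c is the PARTITION multiset, m the number of identical machines."""
    S = sum(c)
    big = [F(1) + F(1, 2**i) for i in range(1, m + 1)]        # B+_i
    big += [F(1) - F(1, 2**i) for i in range(1, m + 1)]       # B-_i
    D  = F(1, 2**m) - F(5, m * 2**(m + 2))                    # median job D
    Dp = F(3, m * 2**(m + 2))                                 # median job D'
    small = [F(ci, m * S * 2**(m + 1)) for ci in c]
    return big, [D, Dp], small

# sanity check of the stated totals on a toy PARTITION instance
c, m = [3, 1, 1, 2, 3, 2], 4
big, medians, small = scheduling_instance(c, m)
total = sum(big) + sum(medians) + sum(small)
assert sum(big) == 2 * m                      # big jobs sum to 2m
assert total == 2 * m + F(1, 2**m)            # total length 2m + 1/2^m
assert total / m == 2 + F(1, m * 2**m)        # uniform load = target makespan
```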

We now prove that there exists a feasible schedule with fewer than m − 1 preemptions and with the optimal makespan 2 + 1/(m·2^m) iff there exists a solution to our PARTITION instance. In one direction, suppose ∑_{i=1}^{k} ci = S/2 for some k < n, i.e., we have a solution to PARTITION. Then we define a tight schedule σ with makespan 2 + 1/(m·2^m) as follows. The job sequence on M1 is B−_1, B+_1, D′, C1, . . . , Ck; so the completion time of M1 is 2 + 3/(m·2^{m+2}) + 1/(m·2^{m+2}) = 2 + 1/(m·2^m) (the load of M1). The job sequence on M2 is B−_2, D, C_{k+1}, . . . , Cn, B+_2, where only a part of D of length 3/(m·2^{m+2}) = (3/4)·1/(m·2^m) is scheduled (so that the load of M2 is 2 + 1/(m·2^m)). The rest of D is divided into equal parts of length 1/(m·2^m) and distributed over the machines Mi, i > 2. The schedule on each Mi, i > 2, is tight and is generated by the sequence B−_i, D, B+_i.

To see that σ is feasible, we only need to check the sequentiality of σ on D, which is the only preempted job in σ. The completion time of D on Mi is no more than 1 − 1/2^i + 1/(m·2^m) ≤ 1 − 1/2^i + 1/2^{i+1} = 1 − 1/2^{i+1} (since 1/(m·2^m) ≤ 1/2^m ≤ 1/2^{i+1} whenever i + 1 ≤ m), and the starting time of D on M_{i+1} is 1 − 1/2^{i+1}. Hence σ is sequential and has makespan 2 + 1/(m·2^m). This completes the proof in one direction. We need the following lemma for the other direction:

Lemma 1 If there exists a uniform distribution δ of the jobs of J on m identical processors of M with fewer than m − 1 preemptions, then there is a subset J′ of J with total length k|δ|max, for some natural k < m.

Proof. Since pr(δ) < m − 1, G(δ) has fewer than m − 1 edges, and hence every connected component of it contains fewer than m machines (there may be two or more such components). Let J′ be the set of all jobs distributed on the machines of one of these connected components, say one with k < m machines. Since δ is uniform, the total length of these jobs is k|δ|max.

Assume now that σ is a tight (m − 2)-preemptive schedule for our scheduling instance. The distribution generated by σ also has at most m − 2 preemptions, and the load of every machine in this distribution is 2 + 1/(m·2^m). We will prove that this distribution already gives a solution to our instance of PARTITION.

Let J′ be a subset of J with total length |J′| = 2k + k/(m·2^m) (see Lemma 1). First we note that J′ contains exactly 2k big jobs. Indeed, J′ cannot contain more than 2k big jobs, because the total length of the smallest 2k + 1 big jobs is at least 2k + 1 − ∑_{i=1}^{k+1} 1/2^i = 2k + 1/2^{k+1} ≥ 2k + 1/2^m > 2k + k/(m·2^m) = |J′|. At the same time, the total length of the longest 2k − 1 big jobs together with all median and small jobs is less than 2k, so J′ cannot contain fewer than 2k big jobs either. Further, the total length B′ of all big jobs of J′ is 2k. Indeed, if B′ − 2k is not 0, it must have absolute value at least 1/2^m. If B′ < 2k, then B′ plus the total length of all non-big jobs (which is no more than 1/2^m) is no more than 2k, which contradicts our assumption that |J′| = 2k + k/(m·2^m). Similarly, if B′ > 2k, then the above magnitude is at least 2k + 1/2^m > 2k + k/(m·2^m) = |J′|, again a contradiction. Thus B′ = 2k.

It follows that the total length of the big jobs of J^c is 2(m − k), where J^c is the complement of J′ in J, and hence, without loss of generality, we can assume that k ≤ m/2. In this case D cannot belong to J′, because |D| > k/(m·2^m). On the other hand, D′ must belong to J′. Indeed, denote by C′ the total length of all small jobs of J′. If J′ does not contain a median job, then |J′| = B′ + C′ ≤ 2k + 1/(m·2^{m+1}), which contradicts |J′| = 2k + k/(m·2^m). Therefore D′ ∈ J′. In this case |J′| = B′ + C′ + |D′|, which implies that k/(m·2^m) = C′ + 3/(m·2^{m+2}). But since C′ ≤ 1/(m·2^{m+1}), the only possible value of k is 1. Then C′ = 1/(m·2^m) − 3/(m·2^{m+2}) = 1/(m·2^{m+2}), i.e., the small jobs of J′ correspond to a subset of C which sums up to S/2, and we have obtained a solution to PARTITION. We have proved this section's main result:

Theorem 2 P/pmtn(m− 2)/Cmax is NP-hard.

3. The NP-hardness of O/acyclic, pmtn(m − 3)/Cmax

An open shop J, M consists of a set of n jobs J and a set of m machines M. Each job J^j consists of m operations J^j_i, i = 1, . . . , m. Each J^j_i has to be processed by machine Mi during |J^j_i| ≥ 0 time units; the latter quantity is called the length of J^j_i. Note that we allow |J^j_i| = 0; in this case we say that J^j_i is a dummy operation.

|J^j| = ∑_{i=1}^{m} |J^j_i| is called the length of job J^j, and |Mi| = ∑_{j=1}^{n} |J^j_i| is the load of machine Mi.

It is known that the optimal preemptive schedule makespan for O/pmtn/Cmax is the maximum of the maximal machine load and the maximal job length (see Gonzalez & Sahni [2] and Lawler and Labetoulle [4]).

A distribution δ on J, M defined by δ(J^j, Mi) = |J^j_i|/|J^j| can obviously be associated with each open shop. An acyclic open shop is one with an acyclic distribution. A uniform open shop is defined similarly, as one whose associated distribution is uniform.
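In code the associated distribution is immediate (a sketch; op_len[j][i] stands for |J^j_i| and every job is assumed to have positive total length).

```python
def open_shop_distribution(op_len):
    """Distribution associated with an open shop (a sketch): op_len[j][i] is
    |J^j_i|; the open shop is acyclic iff this distribution is acyclic."""
    return [[lij / sum(row) for lij in row] for row in op_len]
```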

In this section we prove the following theorem.

Theorem 3 O/acyclic, pmtn(m− 3)/Cmax is NP-hard.

The reduction from PARTITION is described in the next subsection.

3.1 The reduction from PARTITION

We transform the PARTITION problem of Section 2 to O/pmtn/Cmax. Let C = {c1, c2, . . . , cn} and S/2, with S = ∑_{i=1}^{n} ci, again form an arbitrary instance of PARTITION. We define our open-shop instance O(C, m) as follows (assuming without loss of generality that m ≥ 3). O(C, m) deals with 1 + n(m − 2) + 2m − 2 = (n + 2)m − 2n − 1 jobs and m machines {Mi}_{i=1}^{m}. We divide the set of jobs into the following three categories:


1) There is one common job I, to be distributed on all machines. The processing requirement of I on each Mi, i = 1, 2, . . . , m, is 1. Hence the total processing time of I is m.

2) The jobs of the second category are called partition jobs. We introduce n partition jobs for each of the machines except the first one, M1, and the last one, Mm (we call these two machines extremal, and the remaining machines intermediate). The j-th partition job, j ≤ n, to be distributed on machine Mi (1 < i < m) is denoted by P_{i,j}. The processing time of P_{i,j} is c_j/(S·2^i). Note that the total processing time of all partition jobs on each Mi (1 < i < m) is 1/(S·2^{i−1}).

3) The jobs of the third category are called fixers and are denoted by F1, F+_2, F−_2, F+_3, F−_3, . . . , F+_{m−1}, F−_{m−1}, Fm. As is evident from this notation, there is exactly one fixer on each of the machines M1 and Mm, and there are two fixers on every intermediate machine. The processing time of F1 and of Fm is m − 1. The processing time of the first fixer of machine Mi, F+_i, 1 < i < m, is (i − 1) − 1/(2S·2^{i−1}), and the processing time of the second fixer F−_i is (m − i) − 1/(2S·2^{i−1}).

As in Section 2, our transformation is polynomial. Observe that all machine loads equal m and that all operations which we have not explicitly defined are dummy ones, so that O(C, m) is acyclic. The following fact is also obvious:

Lemma 4 Any tight schedule for O(C,m) with the makespan m is optimal.

Given a solution ∑_{i≤k} ci = S/2 to our PARTITION instance (renumbering the elements of PARTITION accordingly), a tight schedule for O(C, m) without any splitting and with the optimal makespan m (Lemma 4) can easily be built. On M1, the fixer F1 is scheduled first and then the common job I. On the last machine, I is scheduled first and then Fm. On each intermediate machine Mi, the jobs are scheduled in the following order: first the fixer F+_i, then all P_{i,j} with j ≤ k (in any order), then job I, followed by the remaining partition jobs P_{i,j} with j > k, and finally the second fixer F−_i.

It will take essentially longer to show that a feasible schedule for our open shop yields a solution to PARTITION. We need a number of auxiliary notions and lemmas. Let us say that σ is a partitioning schedule if there is at least one intermediate machine Mi in σ and a time moment t such that no partition job is executed at time t and the total processing time of the partition jobs completed by time t equals the total processing time of the partition jobs started at or after time t. We shall refer to t as a partitioning time in σ.

Lemma 5 Every feasible schedule for O(C, m) having makespan m and at most m − 3 splittings is a partitioning schedule.


Assume for now that the above lemma is true and that σ is a feasible schedule for O(C, m) such that ‖σ‖ = m and sp(σ) ≤ m − 3. By Lemma 5, σ contains an intermediate machine with a switching point which is a partitioning time. Such a switching point can be found in linear time by checking all switching points corresponding to the partition jobs on each intermediate machine. Clearly, any partitioning time gives a solution to the PARTITION problem. Thus Theorem 3 is proved under the assumption that Lemma 5 is true. The rest of this section is devoted to the proof of this lemma.

3.2 Schedule editing

In this subsection we introduce the schedule editing operations which we use later on. It is useful to note that if σ is an optimal schedule for O(C, m), then the common job I has to be processed at every time moment in σ (because the processing time of I equals the makespan of σ). Next, note that the number of switching points is closely related to the number of splittings. The following lemma gives an explicit formula, whose proof is straightforward:

Lemma 6 The number of splittings in a tight schedule on a machine M is equal to s − n − 1, where s is the number of all switching points on M and n is the number of jobs scheduled on M.

The schedule editing operations are cutting, inserting and moving; they are defined as follows.

Cutting. Let p and q, p < q, be time moments. A schedule σ′ is obtained from a schedule σ by the (p, q)-cutting on a machine Mi if the following conditions hold:

1 σ′(J,M) = σ(J,M) for all M ≠ Mi and all J;

2 σ′(Mi, t) = σ(Mi, t) for t < p;

3 σ′(Mi, t) = σ(Mi, t + q − p) for t ≥ p.

Note that if σ is tight, then σ′, obtained by the (p, q)-cutting on Mi, is also tight, and the completion time of Mi in σ′ is q − p less than in σ. Further, if p and q are switching points in σ, then the number of splittings in σ′ cannot increase. Moreover, this number decreases by 1 if Mi performed the same job immediately before time p and immediately after time q in σ.

Inserting. A schedule σ′ is obtained from a schedule σ by the (p, q)-inserting of a job J on machine Mi if the following conditions hold:

1 σ′(J,M) = σ(J,M) for all M ≠ Mi and all J;

2 σ′(Mi, t) = σ(Mi, t) for t ≤ p;

3 σ′(Mi, t) = J for t ∈ [p, q);

4 σ′(Mi, t) = σ(Mi, t − (q − p)) for t ≥ q.

Again, σ′ is tight if σ is tight, and the completion time of Mi in σ′ is q − p greater than in σ. Further, if p is a switching point in σ on Mi, then the inserting cannot create a splitting of any job different from the inserted job J; it yields a new splitting of job J if J was already scheduled on Mi in σ and p is not a boundary point of σ(J,Mi).

Moving. We write (p, i, q) for the schedule component of a job J on machine Mi with time interval [p, q). The (p, q) → (p′, q′)-moving cuts a whole J-component (p, i, q) of σ from the interval [p, q) and inserts it into the interval [p′, q′) (of the same length) on the same machine Mi. Thus the moving is performed in two steps: the first step is the (p, q)-cutting on Mi; in the second step, the (p′, q′)-inserting of J on Mi is carried out in the schedule obtained after the cutting.

The moving affects neither the tightness of σ nor the completion time of machine Mi if q′ ≤ |σ|Mi. Note that if p > p′ and p′ is a switching point in σ, then the (p′, q′)-inserting does not create a new splitting of any job scheduled on Mi; consequently, neither does the (p, q) → (p′, q′)-moving of a J-component. Moreover, if Mi processed the same job immediately before time p and immediately after time q in σ, then the number of splittings in the resulting schedule σ′ is one less than in σ. In general, if p′ is not a switching point, then the moving of a J-component can increase the number of splittings by at most 1. If Mi is an extremal machine, then the moving (p, q) → (p′, q′) of an I-component does not increase the number of splittings if 0 < p < q < |σ|Mi. Indeed, since only two jobs are scheduled on Mi (the common job and the fixer), the cutting of the I-component reduces the number of splittings of the fixer by 1.
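A minimal sketch of the (p, q)-cutting on a single machine may help fix the idea (our own representation: the machine's schedule is a time-ordered list of (job, start, end) components); inserting and moving are implemented analogously.

```python
def cut(components, p, q):
    """(p, q)-cutting on one machine (a sketch): pieces inside [p, q) are
    removed and everything scheduled at or after q is shifted left by q - p."""
    d = q - p
    out = []
    for job, s, e in components:
        if e <= p:                       # entirely before the cut: unchanged
            out.append((job, s, e))
        elif s >= q:                     # entirely after the cut: shifted left
            out.append((job, s - d, e - d))
        else:                            # overlaps [p, q): keep the outside parts
            if s < p:
                out.append((job, s, p))
            if e > q:
                out.append((job, p, e - d))
    return out
```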

The following lemma summarizes the above observations.

Lemma 7 Let σ′ be a schedule produced from σ by the (p, q) → (p′, q′)-moving of a J-component on machine Mi, such that q′ ≤ |σ|Mi . Then

1 σ′ is tight if σ is tight;

2 sp(σ′) ≤ sp(σ) + 1;

3 sp(σ′) ≤ sp(σ) if p′ < p and p′ is a switching point of σ on Mi;

4 sp(σ′) < sp(σ) and spJ(σ′) < spJ(σ) if p′ < p and p′ is a J-switching point of σ on Mi;

5 sp(σ′) ≤ sp(σ) if i = 1 or i = m.


Lemma 8 A feasible tight schedule for O(C, m) with no more than m − 2 splittings can be constructed in linear time.

Proof. First construct a tight schedule for all jobs except the common job I. Then, for each Mi, insert I into this schedule by the (i − 1, i)-insertion. This insertion may create at most one splitting on an intermediate machine.

3.3 Schedule preprocessing

In this subsection we introduce procedures, based on cutting, inserting and moving, which simplify the structure of a schedule. A sequence of J-components (p1, i1, q1), (p2, i2, q2), . . . , (pk, ik, qk) is called a J-loop if qj = p_{j+1} for all j < k and i1 = ik; we say that this J-loop passes through the machines M_{i2}, M_{i3}, . . . , M_{i_{k−1}}.

A J-component (p, i, q) is called unit if q − p = 1, and non-unit otherwise. A J-loop is called integral if every component of it, except the first and the last, is unit, and all machines through which the loop passes (the intermediate ones) have at least one splitting. We say that a J-component (p, i, q) splits a job J′ if both p and q are switching points of J′, i.e., if Mi processes J′ immediately before time p and immediately after time q.

Lemma 9 If a J-component splits another job, then any moving of it will not increase the total number of splittings.

Proof. The cutting of such a J-component decreases the number of splittings, and the insertion can again increase the number of splittings by at most 1. Hence the total number of splittings does not increase.

Lemma 10 Given any tight schedule σ, it is possible to construct in linear time another tight schedule σ′ with sp(σ′) ≤ sp(σ) such that any machine Mi without an I-splitting in σ and with spσ(Mi) > 0 has at most one splitting in σ′ (and this splitting is caused by the unique I-component).

Proof. Let (p, i, p + 1) be the unique I-component on Mi and let spσ(Mi) > 0. A tight schedule σ′ with at most one splitting on Mi can be obtained as follows. First construct a tight schedule on Mi including all jobs scheduled on Mi in σ except job I; this schedule has no splittings. Then carry out the (p, p + 1)-insertion of I on Mi. The resulting schedule satisfies our claim for Mi. We carry out the analogous procedure on every other machine without an I-splitting and obtain a schedule σ′ with sp(σ′) ≤ sp(σ).

Lemma 11 Given any tight schedule σ, it is possible to construct in linear time another tight schedule σ′ without any integral loop and with sp(σ′) ≤ sp(σ).


Proof. By the previous Lemma 10, we may assume without loss of generality that in σ every machine with a unit I-component has at most one splitting, generated by that unique I-component. If σ contains an integral loop (p, i, q), (q, i1, q+1), (q+1, i2, q+2), . . . , (q+k−1, ik, q+k), (q+k, i, r), then each of the machines M_{i1}, . . . , M_{ik} has just one splitting, generated by the corresponding I-component. By Lemma 9, moving these components does not increase the number of splittings. Hence the schedule σ′ obtained from σ by the following sequence of movings has fewer splittings than σ: on Mi perform the moving (q + k, r) → (q, r − k); on M_{i1} perform the moving (q, q + 1) → (r − k, r − k + 1); on M_{i2} perform the moving (q + 1, q + 2) → (r − k + 1, r − k + 2); and so on, the last moving (q + k − 1, q + k) → (r − 1, r) being performed on M_{ik}. The first of these movings decreases the number of splittings on Mi, and every successive moving does not increase the number of splittings. If σ′ still contains an integral loop, we apply the same procedure again. The process terminates within sp(σ) steps.

Let us say that an intermediate machine Mi in a schedule σ is incorrect if σ has no splitting on Mi and the common job I is scheduled first or last on Mi.

Lemma 12 Given a tight schedule σ associated with O(C, m), it is possible to construct in linear time another tight schedule σ′ with sp(σ′) ≤ sp(σ) and without any incorrect machine.

Proof. Note that σ can have at most two incorrect machines: on one of them I is scheduled first and on the other I is scheduled last. Let Mi be the incorrect machine on which I is scheduled first (the other case is quite similar). The schedule σ′ which we construct from σ coincides with σ on all machines except M1 and Mi. We define σ′ on M1 as follows: σ′(I, M1) = [0, 1) and σ′(F1, M1) = [1, m).

Now let (p1, 1, q1), (p2, 1, q2), . . . , (pk, 1, qk) be the increasing sequence of all I-components on M1 in σ. As p1 > 0, the number of splittings in σ on M1 is 2k − 2 if qk = m and 2k − 1 otherwise. σ′ on Mi is defined as follows. First we perform the [0, 1)-cutting on Mi, which leaves 0 splittings on Mi. Then, step by step, we perform the (pj, qj)-insertions of I on Mi. Each such insertion except the last one increases the number of splittings by at most 2; the last insertion increases the number of splittings by one if qk = m. Hence the number of splittings in σ′ on Mi is no more than the number of splittings in σ on M1, and spσ(Mi) = spσ′(M1) = 0. Therefore sp(σ′) ≤ sp(σ).

3.4 Proper times

For a real x, we denote by ⟨x⟩ its fractionation, defined as the distance from x to the nearest integer (for example, ⟨3.2⟩ = ⟨2.8⟩ = 0.2, and 0 ≤ ⟨x⟩ ≤ 1/2).


A time moment t is called a proper time for a machine Mi in a schedule σ if it is an integer plus the sum of the processing times of a proper non-empty subset of the jobs scheduled on Mi in σ and different from I. For example, for the machines M1 and Mm any integer is a proper time. For an intermediate machine Mi the following inequalities hold.

Lemma 13 The fractionation of a proper time t of an intermediate machine Mi satisfies the inequalities

1/(S·2^i) ≤ ⟨t⟩ ≤ 1/(S·2^{i−1}).

Proof. t has the form f + p, where f is the sum of the processing times of 0, 1 or 2 fixers and p is the sum of the processing times of a set of partition jobs. (Note that, by the definition, f + p > 0, and since not all fixers and partition jobs can be present in this sum, it is not integral.) Since at least one job processing time is included in f + p, it follows immediately from the definition of the processing times of partition jobs and fixers that ⟨f + p⟩ ≥ 1/(S·2^i). For the other inequality, note that 0 ≤ p ≤ 1/(S·2^{i−1}) and that the possible values of f are 0, (i − 1) − 1/(2S·2^{i−1}), (m − i) − 1/(2S·2^{i−1}) and (m − 1) − 1/(S·2^{i−1}). It is straightforward to check that this implies ⟨f + p⟩ ≤ 1/(S·2^{i−1}).

For a machine Mi, a time moment which is not proper is called improper. A time which is proper for some machine is said to be strongly improper for every other machine in σ.

Lemma 14 A strongly improper time for any intermediate machine is improper for that machine.

Proof. The fractionation of a proper time for an intermediate machine Mi lies in the interval [1/(S·2^i), 1/(S·2^{i−1})] (Lemma 13). If t is a proper time for another intermediate machine Mj, j ≠ i, then ⟨t⟩ lies in the interval [1/(S·2^j), 1/(S·2^{j−1})]. But the intersection of these two intervals is empty. The same holds if Mj is an extremal machine, since a proper time of such a machine is integral.

Lemma 15 If t1, t2 and t3 are proper times for different machines, then t1 ≠ t2 ± t3.

Proof. Let t1, t2 and t3 be proper times for machines Mi, Mj and Mk, respectively. Without loss of generality assume i < j < k. If i = 1, then t1 is an integer and t3 ± t1 are proper times for Mk and hence strongly improper for Mj; then by Lemma 14, t2 ≠ t3 ± t1. The proof is similar for k = m.

Assume now that 1 < i < j < k < m and, for a contradiction, suppose t1 = t2 + t3. Since S·2^j·t1 and S·2^j·t2 are integral, S·2^j·t3 would also have to be integral. But the inequalities 1/(S·2^k) ≤ ⟨t3⟩ ≤ 1/(S·2^{k−1}) imply 1/2^{k−j} ≤ ⟨S·2^j·t3⟩ ≤ 1/2^{k−j−1}, so S·2^j·t3 cannot be integral, and we have come to a contradiction. The case t1 = t2 − t3 is similar.

The last lemma of this subsection follows easily from the definition of proper times.

Lemma 16 Suppose σ is a tight non-partitioning schedule and Mi is any correct machine in σ without a splitting. Then every switching point on Mi is proper.

3.5 The proof of Lemma 5

From now on we deal exclusively with a tight non-partitioning schedule associated with O(C, m) that has no integral loop and no incorrect machine; we denote such a schedule by σ. By virtue of Lemmas 11 and 12 it suffices to prove Lemma 5 for σ. We wish to prove that σ has at least m − 2 splittings. If σ has a splitting on every intermediate machine, then the number of splittings is at least m − 2 and our assertion is true. Hence it remains to consider the case when the set of machines without a splitting contains an intermediate machine of σ.

We denote by si and fi, respectively, the starting and finishing time of the common job I on machine Mi. Note that I is not split on Mi iff fi − si = 1. Let Mi and Mj be a pair of machines without any splitting in σ such that fi ≤ sj. We call such a pair a σ-successive pair if there is no other machine Mk without splittings such that fi ≤ sk ≤ fk ≤ sj. Notice that, since σ is a non-partitioning schedule and there is no splitting on the machines Mi and Mj, sj is in fact strictly greater than fi. This implies that there must exist at least one machine with a splitting on which an I-component is scheduled within the interval [fi, sj] (since job I has to be in process at every moment of σ). Observe also that either Mi or Mj must be intermediate.

We say that an I-component (p, k, q) precedes another I-component (p′, k′, q′) if q ≤ p′; an I-component (p, k, q) is (i, j)-intermediate if fi ≤ p ≤ q ≤ sj. An I-component (p, k, q) is improper for an extremal machine Mk if q − p < 1 and 0 < p < q < m. It is improper for an intermediate machine Mk if, besides the above two conditions for extremal machines, both p and q are improper times for Mk. An improper I-component which has one (respectively, two) strongly improper end(s) is called semi-strongly improper (respectively, strongly improper).

Lemma 17 Let Mi, Mj be a σ-successive pair. Then
(1) the set of (i, j)-intermediate improper I-components is not empty;
(2) if the earliest scheduled (i, j)-intermediate improper I-component belongs to an intermediate machine, then it has a strongly improper starting time;
(3) if the latest scheduled (i, j)-intermediate improper I-component belongs to an intermediate machine, then it has a strongly improper finishing time.


Proof. The difference sj − fi is not integral, since otherwise both sj and fi would be proper for both Mi and Mj, which cannot be true because at least one of these machines is intermediate (see the definition of a proper time and Lemma 14). Since the length of I coincides with the makespan of our schedule, this job is processed during the whole interval [fi, sj), i.e., there is an I-component inside this interval. Let (p1, k1, q1) be the earliest non-unit I-component after fi. If Mk1 is an extremal machine, then this I-component is improper (since it is non-unit) and (1) is proved. If Mk1 is an intermediate machine, then p1 may differ from fi only by an integer (the sum of the processing times of unit I-components), hence it is proper for Mi and strongly improper for Mk1. If q1 is improper for Mk1, then this component is improper and statements (1) and (2) are proved. Assume q1 is proper for Mk1. Then we continue to search for an improper I-component. The difference sj − q1 is again non-integral, as it is a difference of proper times of different machines. Hence there is at least one non-unit I-component scheduled between q1 and sj. Let (p2, k2, q2) be the earliest such component. If k2 = 1 or k2 = m, this component is improper and the lemma is proved. Assume 1 < k2 < m. Because σ has no integral loops, k2 ≠ k1. Since p2 is the sum of q1 and an integer, it is proper for Mk1 and hence strongly improper for Mk2. If q2 is improper, then (p2, k2, q2) is improper and statements (1) and (2) are proved. If not, we continue the procedure and look for the next I-component (p3, k3, q3), and so on. This process cannot take more steps than the number of splittings of σ, and it yields an improper I-component with a strongly improper starting time. Statements (1) and (2) are proved.

To prove (3), we merely replace "earliest" by "latest" in the above argument.

We use the above lemma to mark intermediate improper I-components for every σ-successive pair (Mi, Mj). In particular, if there is an (i, j)-intermediate improper I-component on an extremal machine, then we mark the earliest such component. Otherwise, if there is only one (i, j)-intermediate improper I-component, we mark this component (it is strongly improper by Lemma 17). If there is more than one (i, j)-intermediate improper I-component, we mark the earliest and the latest of these components; we call such a pair of components a twin couple.

Let us denote by mr(M) the number of marked I-components scheduled on machine M; if mr(M) > 0, then we call M marked. Let M be a machine with at least one splitting. Then the number of spare splittings on M is sps(M) = sp(M) − 1.

Lemma 18 If M is an extremal marked machine, then sps(M) ≥ mr(M).

Proof. Note that the number of switching points on M is at least 2mr(M) + 2. Using Lemma 6 with n = 2 and s = 2mr(M) + 2, we obtain sp(M) ≥ 2mr(M) + 2 − 2 − 1 = 2mr(M) − 1. If mr(M) > 1, then 2mr(M) − 1 ≥ mr(M) + 1. If mr(M) = 1, then job I has at least one splitting, and the fixer also has at least one splitting because the marked component is scheduled in an intermediate position on M; so we have found at least one spare splitting. This completes the proof.

Lemma 19 If an intermediate machine Mk is such that mr(Mk) > 2, then sps(Mk) ≥ mr(Mk).

Proof. We first consider the case when there is a non-marked I-component on Mk, or equivalently spI(Mk) ≥ mr(Mk). In this case, to prove the lemma it suffices to find some split job on Mk different from job I. Let p1 be the starting time of the earliest marked I-component on Mk. Only components of jobs different from I are scheduled in the interval (0, p1), and at least one of these components has to be fractional, since the length of this interval is improper.

Thus we can assume that spI(Mk) < mr(Mk), i.e., all I-components on Mk are marked. Consider next the case when there are three marked I-components (p1, k, q1), (p2, k, q2) and (p3, k, q3), p1 < p2 < p3, which do not contain a twin couple. Then all these components are intermediate for three different σ-successive pairs. In this case each of the intervals (0, p1), (q1, p2), (q2, p3) and (q3, m) has length more than 1, so a fixer has to be scheduled in each of them. If the same fixer is scheduled in two different intervals, then it has a splitting; if a fixer is scheduled in three different intervals, it has at least two splittings. As is easily seen, in all cases the total number of splittings of the fixers is at least 2. On the other hand, the number of splittings of job I is at least mr(Mk) − 1. Hence the total number of splittings on Mk is at least mr(Mk) − 1 + 2 = mr(Mk) + 1.

Now consider the case when there are three marked components on Mk, but the first two of them form a twin couple. Since p1 and q3 are improper, the intervals (0, p1) and (q3, m) contain a fractional component of some job, say J1 for (0, p1) and J2 for (q3, m). If J1 ≠ J2, then sp(Mk) ≥ spI(Mk) + 2 ≥ mr(Mk) − 1 + 2 and our lemma holds. Suppose J1 = J2. Observe that, by Lemma 17, both p1 and q2 are strongly improper. Besides, either p3 or q3 is also strongly improper. Assume first that p3 is strongly improper. Then the interval (q2, p3) has improper length by Lemma 15 and hence contains a fractional component of some job, say J3. If J3 ≠ J1, a reasoning similar to that used for J2 above proves our lemma. If J3 = J1, then J1 has at least 2 splittings, and so sp(Mk) ≥ spI(Mk) + spJ1(Mk) ≥ mr(Mk) − 1 + 2, and the lemma is again proved.

Now assume that q3 is strongly improper. Since all I-components on Mk are marked, the sum of the lengths of the I-intervals (pi, qi) is 1, so the sum of the lengths of the intervals (q1, p2) and (q2, p3) is q3 − p1 − 1, and this quantity is improper by Lemma 15. Hence the length of one of these two intervals is improper and it must contain a fractional job component. The proof is now completed as in the previous case.

The case when a single marked I-component is followed by a twin couple is analogous. Finally, the only remaining case is when there are four marked I-components on Mk, forming two different twin couples. Then each of the three intervals (0, p1), (q2, p3) and (q4, m) has improper length and hence contains a fractional component of a job different from I, and we again find the required spare splittings.

Lemma 20 If mr(Mk) = 2 and at least one of the marked components on Mk is strongly improper, then sps(Mk) ≥ 2.

Proof. Suppose (p, k, q) is a strongly improper marked I-component. The second marked I-component (p′, k, q′) is then semi-strongly improper. Assume q < p′ and that p′ is strongly improper (and hence q′ is improper). The length of the interval (0, p) is improper for Mk (Lemma 17), hence it contains a fractional component of some job, say J1. The same is true for the interval (q′, m). If this interval contains a fractional component of some other job J2, then we have already found three split jobs I, J1 and J2, and the lemma holds. Suppose J2 = J1. If I has more than 2 components on Mk, then I has 2 splittings and J1 gives another splitting, so we again have 3 splittings on Mk. Suppose I has just one splitting on Mk. In this case the length of the interval (q, p′) is exactly 1 less than the length of (p, q′), whereas the latter interval is improper, being the difference of two strongly improper times (Lemma 15). Hence the length of (q, p′) is improper and this interval contains a fractional job component. If this component belongs to J1, then J1 has at least two splittings, and together with the splitting of job I we have 3 splittings on Mk. If the component belongs to some other job, we again have 3 split jobs.

The case when q′ is strongly improper is simpler, because then we immediately have that the length of the interval (p, q′) is improper as the difference of two strongly improper times. The case q′ < p is analogous.

Lemma 21 Every marked machine Mk has at least 2 splittings.

Proof. If I has more than one splitting on Mk, then we have enough splittings. Assume that I has just one splitting and let (p, k, q) be a marked I-component. Either the interval (0, p) or the interval (q, m) does not contain a component of job I. Since that interval, in addition, has improper length, it must contain a fractional component of some job different from I. So we have found two split jobs on Mk and the lemma is proved.

The proof of Lemma 5. Now we are ready to prove the main Lemma 5. Let k and m − k be the numbers of machines without and with splittings, respectively. The total number of splittings is m − k plus the number of spare splittings. Hence it suffices to prove that sps(σ) ≥ m − 2 − (m − k) = k − 2.


Denote by p the number of σ-successive pairs for which a twin couple is marked. Then the total number of marked components is k − 1 + p. Indeed, the number of σ-successive pairs is k − 1; each such pair generates one or two marked components, and the number of pairs which generate two components is p.

Let us call a marked machine distinctive if it contains exactly two marked components such that each of them belongs to some twin couple (the two components may belong to different twin couples). We denote by q the number of distinctive machines. Since the total number of twins is 2p and the number of twins scheduled on the distinctive machines is 2q, we have 2q ≤ 2p, or equivalently q ≤ p.

By Lemma 21, each distinctive machine has at least one spare splitting. Hence the distinctive machines give in total at least q spare splittings. The number of marked components scheduled on non-distinctive machines is (k − 1 + p) − 2q ≥ k − 1 − q. By virtue of Lemmas 20, 19 and 18, each marked component on a non-distinctive machine generates at least one spare splitting. Hence the total number of spare splittings is at least q + (k − 1 − q) = k − 1 (this estimate is even better than the claimed one; it holds only if there exists an intermediate machine without splittings).

4. NP-hardness of R/pij ≤ C∗max, pmtn(2m − 4)/Cmax

In this section we apply our previous NP-hardness result for O/acyclic, pmtn(m − 3)/Cmax and show that R/pij ≤ C∗max, pmtn(2m − 4)/Cmax is also NP-hard. First we prove the following auxiliary lemma:

Lemma 22 Two distributions δ and δ′ are equal if they have the same acyclic assignment and the load of every machine is the same in δ and δ′.

Proof. Assume δ and δ′ are two arbitrary distributions satisfying the hypothesis of the lemma. The proof is by induction on the number of machines; suppose the lemma is proved for distributions on fewer than m machines.

Let δ and δ′ be two acyclic distributions on the m machines of M. Since δ and δ′ have the same acyclic preemption graph, this graph has a node of degree one; let M be the corresponding machine and let I be the job corresponding to the only edge incident to this node. Note that the remaining jobs distributed on M in δ and δ′ have no preemption, i.e., they are assigned completely to M, so their lengths on M are the same in δ and δ′. Since the load of M is the same in δ and δ′, the length of the portion of I on M must also be the same in both; hence δ(J, M) = δ′(J, M) for all J. To apply the induction hypothesis, we define distributions δ0 and δ′0 on the remaining machines M \ {M} as follows: δ0(J, M′) = δ(J, M′) if J ≠ I and δ(J, M′) > 0, and δ0(I, M′) = δ(I, M′)·1/(1 − δ(I, M)); δ′0 is defined from δ′ in the same way. The distributions δ0 and δ′0 coincide by the induction hypothesis, and this implies δ = δ′.

Given a multiprocessor J, M, let us say that a distribution δ on J, M generates an open shop O on J, M if |J^j_i| = δ(J^j, Mi)·Mi(J^j).

Theorem 23 For any uniform acyclic open shop O on J, M, there is a processing time function f and a distribution δ for the multiprocessor J, M with this processing function, such that δ generates O and δ is the unique optimal distribution for J, M.

Proof. First we define f as follows: Mi(J^j) = |J^j| + ε if J^j_i is dummy, and Mi(J^j) = |J^j| otherwise, where ε is a positive real number. Now we define the distribution δ by δ(J^j, Mi) = |J^j_i|/|J^j|. Note that if |J^j_i| > 0, then δ(J^j, Mi)·Mi(J^j) = (|J^j_i|/|J^j|)·|J^j| = |J^j_i|. Similarly, |J^j_i| = 0 implies δ(J^j, Mi) = 0. Hence |J^j_i| = δ(J^j, Mi)·Mi(J^j) holds for all i, j, and δ generates O. Since O is uniform, δ is also uniform, and the total processing time of δ is m|δ|max. Besides, every job in δ is distributed only on its fastest machines (i.e., the machines on which this job can be processed in the minimal time). This implies that δ has the minimal possible total processing time, and since δ is uniform, it is optimal.

It remains to show that δ is unique. Assume δ′ is another optimal distribution such that δ′ generates O. The total processing time in δ′ is no more than m|δ′|max, and the latter, since δ′ is optimal, is no more than m|δ|max; since m|δ|max is the minimal possible total processing time, the total processing time in δ′ has to be equal to m|δ|max. Hence m|δ′|max = m|δ|max and δ′ is also uniform, i.e., the loads of all machines in both δ and δ′ are equal. At the same time, δ′ has to distribute each job only on its fastest machines. But the processing time function is defined in such a way that a machine M is fastest for a job J iff δ(J, M) > 0. Hence δ′ and δ have the same acyclic assignment, and our claim follows from Lemma 22.
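The construction used in this proof is easy to make explicit. In the sketch below (our own illustration; op_len[j][i] stands for |J^j_i| and eps plays the role of ε), the returned p is the processing time function and delta the distribution that generates the open shop.

```python
def multiprocessor_from_open_shop(op_len, eps=0.5):
    """Processing-time function and distribution of Theorem 23 (a sketch).

    op_len[j][i] is |J^j_i|; every job is assumed to have positive total
    length.  Mi(J^j) = |J^j| + eps where the operation is dummy and |J^j|
    otherwise; delta(J^j, Mi) = |J^j_i| / |J^j| then generates the open shop."""
    p, delta = [], []
    for row in op_len:
        length = sum(row)                      # |J^j|
        p.append([length + eps if lij == 0 else length for lij in row])
        delta.append([lij / length for lij in row])
    return p, delta
```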

Theorem 24 R/pij ≤ C∗max, pmtn(2m− 4)/Cmax is NP-hard.

Proof. We prove that the corresponding decision problem is NP-complete. Recall that the open-shop problem O(C, m) of Section 3 is acyclic and uniform. We apply Theorem 23 with ε ≤ 1 to O(C, m) to construct a processing time function f for the multiprocessor J, M and a distribution δ such that δ generates O(C, m). Note that |δ|max = m and pmax = m, hence the multiprocessor J, M defined by the processing time function f is non-lazy (i.e., pmax ≤ C∗max).

It is not difficult to see that our decision problem, "does there exist a feasible schedule σ for the multiprocessor J, M with ‖σ‖ ≤ m and pr(σ) ≤ 2m − 4?", has a "yes" answer if and only if there exists a tight schedule for O(C, m) with makespan not exceeding m and with fewer than m − 2 splittings. Indeed, if σ is such a schedule for O(C, m), then σ is a feasible schedule for the multiprocessor J, M with fewer than (m − 1) + (m − 2) = 2m − 3 preemptions.

In the other direction, suppose σ is a feasible schedule for J, M with ‖σ‖ ≤ m and pr(σ) < 2m − 3. Then the makespan of the distribution associated with σ is m, and this distribution is optimal. Since there is only one such distribution (Theorem 23), it is precisely the distribution δ, and δ has exactly m − 1 preemptions. It follows that σ is a feasible schedule for O(C, m) with pr(σ) − (m − 1) < m − 2 splittings.

5. Polynomial approximation for little-preemptive multiprocessor scheduling

As mentioned in the introduction, the NP-hardness results for P/pmtn(m − 2)/Cmax and O/acyclic, pmtn(m − 3)/Cmax are tight, due to the polynomial (m − 1)-preemptive and (m − 2)-preemptive algorithms for P/pmtn/Cmax and O/acyclic, pmtn(m − 2)/Cmax of McNaughton [6] and Shchepin & Vakhania [11], respectively. The tightness of the NP-hardness result for R/pij ≤ C∗max, pmtn(2m − 4)/Cmax follows from the following theorem. The proofs of this and the next theorem are based on earlier results from Shchepin & Vakhania [9].

Theorem 25 R/pij ≤ C∗max, pmtn(2m − 3)/Cmax is polynomially solvable.

Proof. An optimal acyclic distribution δ with pr(δ) ≤ m − 1 and |δ|max ≤ C∗max can be obtained by linear programming. For each J ∈ J we have |J|δ = ∑_{M∈M} δ(J,M)·M(J) ≤ ∑_{M∈M} δ(J,M)·pmax = pmax ≤ C∗max. Hence ‖δ‖ ≤ C∗max and we can apply Lemma 4 of [9] to construct an optimal schedule for δ with at most m − 2 splittings. Since δ has at most m − 1 preemptions, the total number of preemptions of the constructed schedule is no more than (m − 1) + (m − 2) = 2m − 3.

For a general system of unrelated processors we can guarantee a near-optimal approximation, as the following theorem shows:

Theorem 26 There is a polynomial-time algorithm which, for any instance of R/pmtn/Cmax, constructs a feasible schedule σ with pr(σ) ≤ 2m − 3 and with

(a) ‖σ‖ ≤ max{C∗max, pmax};

(b) ‖σ‖ ≤ C⁰max, where C⁰max is the optimal non-preemptive schedule makespan.


Proof. Part (a) immediately follows from Theorem 25. We refer the reader to Theorem 2 of [9] for the proof of part (b).

6. Further research

We have shown that O/acyclic, pmtn(m − 3)/Cmax, P/pmtn(m − 2)/Cmax and R/pij ≤ C∗max, pmtn(2m − 4)/Cmax are NP-hard in the ordinary sense. Do pseudo-polynomial algorithms exist for (some of) these problems, or are they (or some of them) strongly NP-hard? The polynomial algorithm for R/pmtn/Cmax by Lawler and Labetoulle [4] gives up to 4m² − 5m + 2 preemptions, whereas our result for R/pij ≤ C∗max, pmtn/Cmax ensures that the critical number of preemptions for R/pmtn/Cmax is no less than 2m − 4. Can this gap be reduced?


References

[1] Garey M.R. and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-completeness (Freeman, San Francisco, 1979)

[2] Gonzalez T. and S. Sahni, "Open shop scheduling to minimize finish time", Journal of the ACM 23, 665-679 (1976)

[3] Gonzalez T. and S. Sahni, "Preemptive scheduling of uniform processor systems", Journal of the ACM 25, 92-101 (1978)

[4] Lawler E.L. and J. Labetoulle, "On preemptive scheduling of unrelated parallel processors by linear programming", Journal of the ACM 25, 612-619 (1978)

[5] Lenstra J.K., D.B. Shmoys and E. Tardos, "Approximation algorithms for scheduling unrelated parallel machines", Mathematical Programming 46, 259-271 (1990)

[6] McNaughton R., "Scheduling with deadlines and loss functions", Management Science 6, 1-12 (1959)

[7] Shachnai H., T. Tamir and G. Woeginger, "Minimizing makespan and preemption costs on a system of uniform machines", Proc. 10th European Symposium on Algorithms (2002)

[8] Shchepin E. and N. Vakhania, "Task distributions on multiprocessor systems", Lecture Notes in Computer Science (IFIP TCS) 1872, 112-125 (2000)

[9] Shchepin E. and N. Vakhania, "Little-preemptive scheduling on unrelated processors", Journal of Mathematical Modelling and Algorithms 1, 43-56 (2002)

[10] Shchepin E. and N. Vakhania, "An optimal rounding gives a better approximation for scheduling unrelated machines", Operations Research Letters 33, 127-133 (2005)

[11] Shchepin E. and N. Vakhania, "On machine dependency in shop scheduling", submitted to Discrete Applied Mathematics
