
# Chapter 20. Complexity of Machine Scheduling Problems


J.K. Lenstra, A.H.G. Rinnooy Kan, P. Brucker

combinatorial optimization, we do not really require mathematically rigorous definitions of these concepts; for our purposes we may safely identify P with the class of problems for which a polynomial-bounded, good or efficient algorithm exists, whereas all problems in NP can be solved by polynomial-depth backtrack search.

In this context, all problems are stated as recognition problems, which require a yes/no answer. In order to deal with the complexity of a combinatorial minimization problem, we transform it into the problem of determining the existence of a solution with value at most equal to y, for some threshold y.

It is clear that P ⊆ NP, and the question arises whether this inclusion is a proper one or whether, on the contrary, P = NP. Although this is still an open problem, the equality of P and NP is considered to be very unlikely, and most bets (e.g., in [ ]) have been going in the other direction. To examine the consequences of an affirmative answer to the P = NP question, we introduce the following concepts.

Problem P′ is reducible to problem P (notation: P′ ∝ P) if for any instance of P′ an instance of P can be constructed in polynomial-bounded time such that solving the instance of P will solve the instance of P′ as well.

P′ and P are equivalent if P′ ∝ P and P ∝ P′.

P is NP-complete [ ] if P ∈ NP and P′ ∝ P for every P′ ∈ NP. Informally, the reducibility of P′ to P implies that P′ can be considered as a special case of P; the NP-completeness of P indicates that P is, in a sense, the most difficult problem in NP.

In a remarkable paper [ ], NP-completeness was established with respect to the so-called Satisfiability problem. This problem can be formulated as follows.

Given clauses C_1, ..., C_u, each being a disjunction of literals from the set X = {x_1, ..., x_t, x̄_1, ..., x̄_t}, is the conjunction of the clauses satisfiable, i.e., does there exist a subset S ⊆ X such that S does not contain a complementary pair of literals (x_i, x̄_i), and S ∩ C_j ≠ ∅ for j = 1, ..., u?
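The subset formulation above can be checked directly by brute force. A sketch follows, with literals encoded as signed integers (a hypothetical encoding, not fixed by the text); since a subset S must avoid complementary pairs and a larger S can only meet more clauses, it suffices to try the maximal subsets that pick one sign per variable.

```python
from itertools import product

def satisfiable(clauses, t):
    # Clauses are lists of signed integers: +i stands for x_i and -i
    # for its complement x̄_i (hypothetical encoding).  A maximal subset
    # S without a complementary pair picks one sign per variable; the
    # conjunction is satisfiable iff some such S meets every clause.
    for signs in product((1, -1), repeat=t):
        S = {sign * i for i, sign in enumerate(signs, start=1)}
        if all(S & set(C) for C in clauses):
            return True
    return False
```

The search is exponential in t, which is consistent with Satisfiability being in NP but, presumably, not in P.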

Cook proved this result by specifying a polynomial-bounded "master reduction" which, given P ∈ NP, constructs for any instance of P an equivalent boolean expression in conjunctive normal form. By means of this reduction, a polynomial-bounded algorithm for the Satisfiability problem could be used to construct a polynomial-bounded algorithm for any problem in NP. It follows that

P = NP if and only if Satisfiability ∈ P.

The same argument applies if we replace Satisfiability by any NP-complete problem. A large number of such problems has been identified by Karp [25; 26] and others (e.g., [ ]); Theorem 1 mentions some of them. Since they are all notorious combinatorial problems for which typically no good algorithms have been found so far, these results afford strong circumstantial evidence that P is a proper subset of NP.


Theorem 1. The following problems are NP-complete:

(a) Clique. Given an undirected graph G = (V, E) and an integer k, does G have a clique (i.e., a complete subgraph) on k vertices?

(b) Linear arrangement. Given an undirected graph G = (V, E) and an integer k, does there exist a one-to-one function π : V → {1, ..., |V|} such that Σ_{(i,j)∈E} |π(i) − π(j)| ≤ k?

(c) Directed hamiltonian circuit. Given a directed graph G = (V, A), does G have a hamiltonian circuit (i.e., a directed cycle passing through each vertex exactly once)?

(d) Directed hamiltonian path. Given a directed graph G′ = (V′, A′), does G′ have a hamiltonian path (i.e., a directed path passing through each vertex exactly once)?

(e) Partition. Given positive integers a_1, ..., a_t, does there exist a subset S ⊆ T = {1, ..., t} such that Σ_{i∈S} a_i = Σ_{i∈T−S} a_i?

(f) Knapsack. Given positive integers a_1, ..., a_t, b, does there exist a subset S ⊆ T = {1, ..., t} such that Σ_{i∈S} a_i = b?

(g) 3-Partition. Given positive integers a_1, ..., a_{3t}, b, does there exist a partition (T_1, ..., T_t) of T = {1, ..., 3t} such that |T_j| = 3 and Σ_{i∈T_j} a_i = b for j = 1, ..., t?

Proof. (a) See [7; 25].
(b) See [ ].
(c, e, f) See [ ].
(d) NP-completeness of this problem is implied by two observations:
(A) Directed hamiltonian path ∈ NP;
(B) P ∝ Directed hamiltonian path for some NP-complete problem P.
(A) is trivially true, and (B) is proved by the following reduction:

Directed hamiltonian circuit ∝ Directed hamiltonian path.

Given G = (V, A), we choose v′ ∈ V and construct G′ = (V′, A′) with

V′ = V ∪ {v″},
A′ = {(v, w) | (v, w) ∈ A, w ≠ v′} ∪ {(v, v″) | (v, v′) ∈ A}.

G has a hamiltonian circuit if and only if G′ has a hamiltonian path.
(g) See [ ]. □
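The construction in part (d) is mechanical. A sketch, with vertices labelled by integers and max(V) + 1 standing in for the new vertex v″ (a hypothetical labelling choice):

```python
def circuit_to_path(V, A, v_prime):
    # Build G' = (V', A') as in the proof of Theorem 1(d): add a fresh
    # vertex v'', keep every arc that does not enter v', and redirect
    # every arc (v, v') into (v, v'').
    v_new = max(V) + 1                     # plays the role of v''
    V_prime = V | {v_new}
    A_prime = {(v, w) for (v, w) in A if w != v_prime}
    A_prime |= {(v, v_new) for (v, w) in A if w == v_prime}
    return V_prime, A_prime
```

For the 3-cycle 1→2→3→1 with v′ = 1, the arc (3, 1) becomes (3, 4), and the hamiltonian circuit turns into the hamiltonian path 1→2→3→4.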

Karp's work has led to a large amount of research on the location of the borderline separating the "easy" problems (in P) from the "hard" (NP-complete) ones. It turns out that a minor change in a problem parameter (notably, for some as yet mystical reason, an increase from two to three) often transforms an easy problem into a hard one. Not only does knowledge of the borderline lead to fresh insights as to what characteristics of a problem determine its complexity, but there are also important consequences with respect to the solution of these problems. Establishing NP-completeness of a problem can be interpreted as a formal justification to use enumerative methods such as branch-and-bound, since no substantially better method is likely to exist. Embarrassing incidents such as the presentation in a standard text-book of an enumerative approach to the undirected Chinese postman problem, for which a good algorithm had already been developed in [ ], will then occur less readily.

The class of machine scheduling problems seems an especially attractive object for this type of research, since their structure is relatively simple and there exist standard problem parameters that have demonstrated their usefulness in previous research.

Before describing this class of problems, let us emphasize that membership of P versus NP-completeness only yields a very coarse measure of complexity. On one hand, the question has been raised whether polynomial-bounded algorithms are really good [ ]. On the other hand, there are significant differences in complexity within the class of NP-complete problems.

One possible refinement of the complexity measure may be introduced at this stage. It is based on the way in which the problem data are encoded. Taking the Knapsack and 3-Partition problems as examples and defining a* = max_{i∈T} {a_i}, we observe that the length of the input is O(t log a*) in the standard binary encoding, and O(t a*) if a unary encoding is allowed. 3-Partition has been proved NP-complete even with respect to a unary encoding [ ]. Knapsack is NP-complete with respect to a binary encoding [ ], but solution by dynamic programming requires O(tb) steps and thus yields a polynomial-bounded algorithm with respect to a unary encoding; similar situations exist for several machine scheduling problems. Such "pseudopolynomial" algorithms [ ] need not necessarily be "good" in the practical sense of the word, but it may pay none the less to distinguish between complexity results with respect to unary and binary encodings (cf. [ ]). Unary NP-completeness or binary membership of P would then be the strongest possible result, and it is quite feasible for a problem to be binary NP-complete and to allow a unary polynomial-bounded solution. The results in this paper hold with respect to the standard binary encoding; some consequences of using a unary encoding will be pointed out as well.
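The O(tb) dynamic program for Knapsack mentioned above runs over the achievable subset sums; a minimal sketch:

```python
def knapsack(a, b):
    # reachable[s] becomes True iff some subset of a_1..a_t sums to s.
    # The nested loops take O(t*b) steps: polynomial in the unary input
    # length O(t*a*), but exponential in the binary length O(t log a*).
    reachable = [True] + [False] * b
    for ai in a:
        for s in range(b, ai - 1, -1):   # downwards, so each a_i is used at most once
            if reachable[s - ai]:
                reachable[s] = True
    return reachable[b]
```

This is exactly the pseudopolynomial behaviour discussed above: the running time is polynomial in the numeric value b, not in its number of digits.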

3. Classification

Machine scheduling problems can be verbally formulated as follows [6; 45]:

A job J_i (i = 1, ..., n) consists of a sequence of operations, each of which corresponds to the uninterrupted processing of J_i on some machine M_k (k = 1, ..., m) during a given period of time. Each machine can handle at most one job at a time. What is, according to some overall criterion, the optimal processing order on each machine?

The following data can be specified for each J_i:
a number of operations n_i;
a machine order ν_i, i.e. an ordered n_i-tuple of machines;
a processing time p_{ik} of its k-th operation, k = 1, ..., n_i (if n_i = 1 for all J_i, we shall usually write p_i instead of p_{i1});
a weight w_i;
a release date or ready time r_i, i.e. its earliest possible starting time (unless stated otherwise, we assume that r_i = 0 for all J_i);
a due date or deadline d_i;
a cost function f_i : N → R, indicating the costs incurred as a nondecreasing function of the completion time of J_i.

We assume that all data (except ν_i and f_i) are nonnegative integers. Given a processing order on each M_k, we can compute for each J_i:
the starting time S_i;
the completion time C_i;
the lateness L_i = C_i − d_i;
the tardiness T_i = max{0, C_i − d_i};
U_i = if C_i ≤ d_i then 0 else 1.
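The derived quantities are direct to compute from a job's completion time and due date; a small sketch (function name hypothetical):

```python
def job_measures(C, d):
    # Lateness, tardiness, and unit penalty of a job with completion
    # time C and due date d, as defined above.
    L = C - d             # lateness may be negative
    T = max(0, C - d)     # tardiness is the positive part of the lateness
    U = 0 if C <= d else 1
    return L, T, U
```

A job finishing two units early, for instance, has lateness −2 but tardiness 0.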

Machine scheduling problems are traditionally classified by means of four parameters n, m, l, κ. The first two parameters are integer variables, denoting the numbers of jobs and machines respectively; the cases in which m is constant and equal to 1, 2, or 3 will be studied separately. If m > 1, the third parameter takes on one of the following values:
l = F in a flow-shop, where n_i = m and ν_i = (M_1, ..., M_m) for each J_i;
l = P in a permutation flow-shop, i.e. a flow-shop where passing is not permitted so that each machine has to process the jobs in the same order;
l = G in a (general) job-shop, where n_i and ν_i may vary per job;
l = I in a parallel-shop, where each job has to be processed on just one of m identical machines, i.e. n_i = 1 for all J_i and the ν_i are not defined.
Extensions to the more general situation where several groups of parallel (possibly non-identical) machines are available will not be considered.

The fourth parameter indicates the optimality criterion. We will only deal with regular criteria, i.e., monotone functions κ of the completion times C_1, ..., C_n such that

C_i ≤ C′_i for all i ⇒ κ(C_1, ..., C_n) ≤ κ(C′_1, ..., C′_n).

These functions are usually of one of the following types:
κ = f_max = max_i {f_i(C_i)};
κ = Σf_i = Σ_{i=1}^n f_i(C_i).

The following specific criteria have frequently been chosen to be minimized:
κ = C_max = max_i {C_i};
κ = Σw_iC_i = Σ_{i=1}^n w_iC_i;
κ = L_max = max_i {L_i};
κ = Σw_iT_i = Σ_{i=1}^n w_iT_i;
κ = Σw_iU_i = Σ_{i=1}^n w_iU_i.


We refer to [ ] for relations between these and other objective functions.

Some relevant problem variations are characterized by the presence of one or more elements from a parameter set λ, such as:
prec (precedence constraints between the jobs, where "J_i precedes J_j" (notation: J_i < J_j) implies C_i ≤ S_j);
tree (precedence constraints between the jobs such that the associated precedence graph can be given as a branching, i.e. a set of directed trees with either indegree or outdegree at most one for all vertices);
r_i ≥ 0 (possibly non-equal release dates for the jobs);
C_i ≤ d_i (all jobs have to meet their deadlines; in this case we assume that κ ∈ {C_max, Σw_iC_i});
no wait (no waiting time for the jobs between their starting and completion times; hence, C_i = S_i + Σ_k p_{ik} for each J_i);
n_i ≤ n* (a constant upper bound on the number of operations per job);
p_{ik} ≤ p* (a constant upper bound on the processing times);
p_{ik} = 1 (unit processing times);
w_i = 1 (equality of the weights; we indicate this case also by writing ΣC_i, ΣT_i, ΣU_i).

In view of the above discussion, we can use the notation n | m | l, λ | κ to indicate specific machine scheduling problems.

4. Complexity of machine scheduling problems

All machine scheduling problems of the type defined in Section 3 can be solved by polynomial-depth backtrack search and thus are members of NP. The results on their complexity are summarized in Table 1.

The problems which are marked by an asterisk (*) are solvable in polynomial-bounded time. In Table 2 we provide for most of these problems references where the algorithm in question can be found; we also give the order of the number of steps in the currently best implementations. The problems marked by an exclamation mark (!) are NP-complete. The reductions to these problems are listed in Table 3. Question marks (?) indicate open problems. We will return to them in Section 5 to motivate our typographical suggestion that these problems are likely to be NP-complete.

Table 1 contains the "hardest" problems that are known to be in P and the "easiest" ones that have been proved to be NP-complete. In this respect, Table 1 indicates to the best of our knowledge the location of the borderline between easy and hard machine scheduling problems.

Before proving the theorems mentioned in Table 3, we will give a simple example of the interaction between tables and theorems by examining the status of the general job-shop problem, indicated by n | m | G | C_max.


Table 1. Complexity of machine scheduling problems

[The tabular layout of Table 1 did not survive extraction. The table classifies problems by criterion (C_max, Σw_iC_i, L_max, Σw_iT_i, Σw_iU_i) and by machine environment (1 machine, 2 machines, m machines); each entry is a problem variant such as "F, no wait", "G, n_i ≤ 2", or "I, prec, p_i = 1", marked with one of the symbols below.]

*: problem in P; see Table 2.
?: open problem; see Section 5.
!: NP-complete problem; see Table 3.


Table 2. References to polynomial-bounded algorithms

[The problem, reference, and order columns of Table 2 did not survive extraction. Its surviving footnotes read:]
a. An O(n log n) algorithm for the more general case of series-parallel precedence constraints is given in [ ].
b. An O(n log n) algorithm for the more general case of agreeable weights (i.e. p_i < p_j ⇒ w_i ≥ w_j) is given in [ ].
O(n^3) and O(n^2) algorithms for the n | 2 | I, prec, p_i = 1 | C_max problem are given in [ ].
Polynomial-bounded algorithms for the more general case of parallel non-identical machines are given in [21; 4].

In Table 1, we see that the n | 2 | G, n_i ≤ 2 | C_max problem is a member of P and that two minor extensions of this problem, n | 2 | G, n_i ≤ 3 | C_max and n | 3 | G, n_i ≤ 2 | C_max, are NP-complete. By Theorem 2(c, h), these problems are special cases of the general job-shop problem, which is thus shown to be NP-complete by Theorem 2(b). Table 2 refers to an O(n log n) algorithm [ ] for the n | 2 | G, n_i ≤ 2 | C_max problem. Table 3 tells us that reductions of Knapsack to both NP-complete problems are presented in Theorem 4(a, b); the NP-completeness of Knapsack has been mentioned in Theorem 1(f).

Theorem 2 gives some elementary results on reducibility among machine scheduling problems. It can be used to establish either membership of P or NP-completeness for problems that are, roughly speaking, either not harder than the polynomially solvable ones or not easier than the NP-complete ones in Table 1.

Theorem 2. (a) If n′ | m′ | l′, λ′ | κ′ ∝ n | m | l, λ | κ and n | m | l, λ | κ ∈ P, then n′ | m′ | l′, λ′ | κ′ ∈ P.

[Parts (b)-(l) of the theorem statement did not survive extraction.]


Table 3. Reductions to NP-complete machine scheduling problems

[The problem column of Table 3 did not survive extraction; only the reference column remains, with entries such as [36; 38; 40], [13; 38; 40], [16], and "h.l., Theorem 2(j)", "h.l., Theorem 4(a)", where "h.l." refers to theorems proved in the present paper.]


Proof. Let P′ and P denote the problems on the left-hand side and right-hand side respectively.
(a, b) Clear from the definition of reducibility.
(c) Trivial.
(d, e) P′ has an optimal solution with the same processing order on each machine [6; 45].
(f, g, h) In each case P′ obviously is a special case of P.
(i) Given any instance of P′ and a threshold value y′, we construct a corresponding instance of P by defining d_i = y′ (i = 1, ..., n). P′ has a solution with value ≤ y′ if and only if P has a solution with value ≤ 0.
(j) Given any instance of P′ with due dates d′_i (i = 1, ..., n) and a threshold value y′, we construct a corresponding instance of P by defining d_i = d′_i + y′ (i = 1, ..., n). P′ has a solution with value ≤ y′ if and only if P has a solution with value ≤ 0.
(k) Take d_i = 0 (i = 1, ..., n) in P.
(l) Given any instance of P′ and a y′, 0 ≤ y′ ≤ n′p*, we construct a corresponding instance of P by defining

n″ = (n′ − 1)y′,
n = n′ + n″,
y = ny′ + ½n″(n″ + 1),

and adding n″ jobs J_{n′+j} (j = 1, ..., n″) to P′ with

p_{n′+j,1} = 1,
J_i < J_{n′+j} (i = 1, ..., n′ + j − 1).

Now P′ has a solution with value ≤ y′ if and only if P has a solution with value ≤ y:

C_max ≤ y′ ⇒ ΣC_i ≤ n′y′ + Σ_{j=1}^{n″} (y′ + j) = y;
C_max > y′ ⇒ ΣC_i > y′ + Σ_{j=1}^{n″} (y′ + 1 + j) = y. □
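The two sums in part (l) collapse to the same threshold y by the choice n″ = (n′ − 1)y′; a numeric sketch of the padded parameters (function name hypothetical):

```python
def pad_parameters(n_orig, y_orig):
    # Theorem 2(l): append n'' = (n'-1)y' unit-time jobs, each preceded
    # by all earlier jobs, and set the new threshold to
    # y = n*y' + n''(n''+1)/2 with n = n' + n''.
    n_extra = (n_orig - 1) * y_orig
    n = n_orig + n_extra
    y = n * y_orig + n_extra * (n_extra + 1) // 2
    return n_extra, n, y
```

For n′ = 3, y′ = 2 this gives n″ = 4, n = 7, y = 24, and indeed n′y′ + Σ_{j=1}^{4}(y′ + j) = 6 + 18 = 24 while y′ + Σ_{j=1}^{4}(y′ + 1 + j) = 2 + 22 = 24, matching both implications in the proof.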

Remark. The proof of Theorem 2(c) involves processing times equal to 0, implying that the operations in question require an infinitesimally small amount of time. Whenever these reductions are applied, the processing times can be transformed into strictly positive integers by sufficiently (but polynomially) inflating the problem data. Examples of such constructions can be found in the proofs of Theorem 4(c, d, e, f).

In Theorems 3 to 6 we present a large number of reductions of the form P ∝ n | m | l, λ | κ by specifying n | m | l, λ | κ and some y such that P has a solution if and only if n | m | l, λ | κ has a solution with value κ ≤ y. This equivalence is proved for some principal reductions; in other cases, it is trivial or clear from the analogy to a reduction given previously. The NP-completeness of n | m | l, λ | κ then follows from the NP-completeness of P as established in Theorem 1.


First, we briefly deal with the problems on identical machines. Theorem 3 presents two reductions which are simplified versions of the reductions given in [ ].

Theorem 3. Partition is reducible to the following problems:
(a) n | 2 | I | C_max;
(b) n | 2 | I | Σw_iC_i.

Proof. Define A = Σ_{i∈T} a_i.
(a) Partition ∝ n | 2 | I | C_max:

n = t;
p_i = a_i (i ∈ T);
y = ½A.

(b) Partition ∝ n | 2 | I | Σw_iC_i:

n = t;
p_i = w_i = a_i (i ∈ T);
y = Σ_{1≤i≤j≤t} a_i a_j − ¼A².

Suppose that {J_i | i ∈ S} is assigned to M_1 and {J_i | i ∈ T − S} to M_2; let c = Σ_{i∈S} a_i − ½A. Since p_i = w_i for all i, the value of Σw_iC_i is not influenced by the ordering of the jobs on the machines and only depends on the choice of S:

Σw_iC_i = K(S).

It is easily seen (cf. Fig. 1) that

K(S) = Σ_{1≤i≤j≤t} a_i a_j − (½A + c)(½A − c) = y + c²,

and it follows that Partition has a solution if and only if this n | 2 | I | Σw_iC_i problem has a solution with value ≤ y. □

Fig. 1 [two-machine Gantt charts of the assignments S and T − S, illustrating the values K(S) and K(T); the graphic itself did not survive extraction]
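The map in Theorem 3(a) is simply p_i = a_i with threshold y = ½A. Its correctness can be checked by brute force (exponential, for illustration only; function names hypothetical):

```python
from itertools import combinations

def partition_instance_to_schedule(a):
    # Theorem 3(a): the Partition instance a_1..a_t becomes a
    # two-identical-machine instance with p_i = a_i and threshold y = A/2.
    return list(a), sum(a) / 2

def within_threshold(p, y):
    # Does some assignment of jobs to M1 give makespan
    # max(load of M1, load of M2) <= y?
    A = sum(p)
    return any(max(sum(S), A - sum(S)) <= y
               for r in range(len(p) + 1)
               for S in combinations(p, r))
```

A schedule meets the threshold ½A exactly when both machine loads equal ½A, i.e. exactly when the Partition instance has a solution.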

Most of our results on different machines involve the Knapsack problem, as demonstrated by Theorem 4.


[The statement of Theorem 4 did not survive extraction.]

Proof. Define A = Σ_{i∈T} a_i. We may assume that 0 < b < A.
(a) Knapsack ∝ n | 2 | G, n_i ≤ 3 | C_max:

n = t + 1;
ν_i = (M_1), p_{i1} = a_i (i ∈ T);
ν_n = (M_2, M_1, M_2), p_{n1} = b, p_{n2} = 1, p_{n3} = A − b;
y = A + 1.

If Knapsack has a solution, then there exists a schedule with value C_max = y, as illustrated in Fig. 2. If Knapsack has no solution, then Σ_{i∈S} a_i − b = c ≠ 0 for each S ⊆ T, and we have for a processing order ({J_i | i ∈ S}, J_n, {J_i | i ∈ T − S}) on M_1 that

c > 0 ⇒ C_max ≥ Σ_{i∈S} p_{i1} + p_{n2} + p_{n3} = A + c + 1 > y;
c < 0 ⇒ C_max ≥ p_{n1} + p_{n2} + Σ_{i∈T−S} p_{i1} = A − c + 1 > y.

It follows that Knapsack has a solution if and only if this n | 2 | G, n_i ≤ 3 | C_max problem has a solution with value ≤ y.
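The makespan bounds in part (a) can be traced by evaluating the schedule for a given S, under the assumption that M_1 runs the jobs of S first, then J_n's middle operation, then T − S (indices and names hypothetical):

```python
def cmax_for_order(a, b, S):
    # C_max of the schedule in which M1 processes {J_i : i in S}, then
    # the unit middle operation of J_n, then the remaining jobs, while
    # M2 runs J_n's first (length b) and third (length A - b) operations.
    A = sum(a)
    load_S = sum(a[i] for i in S)
    start_mid = max(load_S, b)             # middle op waits for M1 and for op 1 on M2
    end_M2 = start_mid + 1 + (A - b)       # op 3 follows the middle op
    end_M1 = start_mid + 1 + (A - load_S)  # jobs of T - S follow on M1
    return max(end_M1, end_M2)
```

With a = (2, 3, 4) and b = 5, the subset {0, 1} sums to b and attains C_max = A + 1 = 10, while the subset {2} misses b by c = −1 and is forced to C_max = A − c + 1 = 11, matching the proof's inequalities.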

Fig. 2 [Gantt chart of a schedule with C_max = A + 1: M_1 processes {J_i | i ∈ S} during [0, b], the second operation of J_n during [b, b + 1], and {J_i | i ∈ T − S} during [b + 1, A + 1]; M_2 processes the first and third operations of J_n; the graphic itself did not survive extraction]