
JOURNAL OF ALGORITHMS 2, 245-260 (1981)

Dynamization of Order Decomposable Set Problems*

MARK H. OVERMARS

Department of Computer Science, University of Utrecht, P.O. Box 80.002, 3508 TA Utrecht, The Netherlands

Received December 4, 1980; revised April 9, 1981

Order decomposable set problems are introduced as a general class of problems concerning sets of multidimensional objects. We give a method of structuring sets such that the answer to an order decomposable set problem can be maintained with low worst-case time bounds, while objects are inserted and deleted in the set. Examples include the maintenance of the two-dimensional Voronoi diagram of a set of points, and of the convex hull of a three-dimensional point set in linear time. We show that there is a strong connection between the general dynamization method given and the well-known technique of divide-and-conquer used for solving many problems concerning static sets of objects. Although the upper bounds obtained are low in order of magnitude, the results do not necessarily imply the existence of fast feasible update routines. Hence the results merely assess theoretical bounds for the various set problems considered.

1. INTRODUCTION

In the past few years, much effort has been made to develop general techniques for dynamizing problems (i.e., for turning static solutions into dynamic ones). Bentley [1] first noted that such a general approach was especially relevant to the class of so-called decomposable searching problems. He gave a dynamization method for these problems that supported insertions in low average time bounds without sacrificing much of the efficiency of query answering. Soon after, a number of other dynamization methods for decomposable searching problems were developed [10, 13, 18], some also able to handle deletions for special subclasses of decomposable searching problems with good average bounds [9, 11, 21, 22]. More recently it was shown that good worst-case bounds can be obtained as well [15]. It

*This work was supported by the Netherlands Organization for the Advancement of Pure Research (ZWO).



was noted [18] that, although the number of decomposable searching problems was considerable, there were still many problems decomposable in nature that were not included.

Overmars and van Leeuwen [12, 14] showed that some of these problems could be dynamized using a different technique. The three main problems they considered were the maintenance of the convex hull of a two-dimensional point set, the maintenance of the intersection of a set of halfspaces in the plane, and the maintenance of the contour of maximal elements of a two-dimensional point set. Their maintenance algorithms resulted in O(log^2 n) worst-case insertion and deletion times. A number of interesting applications were offered as well, including, e.g., the maintenance of the feasible region of a two-dimensional linear programming problem.

For all three problems considered in [12, 14] a similar technique was used to obtain the maintenance algorithms. First, for each of the problems an appropriate notion of decomposability was defined. The decomposability consisted of a method to derive the answer of the problem over the total set from the answers over two, in some way separated, subsets. After this method was devised, the technique used to dynamize the three problems was the same. In this paper we exploit this technique further. We define a general class of problems called "order decomposable set problems" for which some notion of decomposability exists. The class contains the three problems considered by Overmars and van Leeuwen [12, 14], but it contains many more problems also (see Section 4). We show that the technique described in [12, 14], when suitably modified and extended, can be used to obtain efficient dynamic solutions to all these problems. It appears that there is a strong connection between the existence of a divide-and-conquer algorithm for a set problem and the notion of order decomposability for the problem as defined here. More precisely, the notion of order decomposability of a set problem always leads to an easy divide-and-conquer algorithm for the static version of the problem. Also, most set problems for which a solution, based on a divide-and-conquer strategy, exists are order decomposable and can therefore be dynamized by the method described in this paper. It follows that the method is particularly suited for the problems mentioned in Bentley [2], from which many examples have been taken (see Section 4).

In Section 2 we define the class of order decomposable set problems and show the connection with the divide-and-conquer technique. In Section 3 we adapt the method of [12, 14] to obtain a general dynamization technique for all order decomposable set problems. In Section 4 we give a number of examples of order decomposable set problems and show how the dynamization technique of Section 3 can be used. This yields maintenance algorithms for the convex hull of a three-dimensional point set, the closest pair of points in a set, and the Voronoi diagram of a two-dimensional point set


with linear insertion and deletion times. We also show how to maintain the union (and intersection) of a set of line segments in O(log^2 n) steps per transaction. Although the bounds are low in order of magnitude, these results do not necessarily imply the existence of fast feasible update routines, and, hence, must be viewed as merely theoretical bounds on the complexity of the set problems considered.

2. ORDER DECOMPOSABLE SET PROBLEMS

For many set problems it is possible to derive the answer over the total set by combining the answers over two, in some way separated, "halves" of the set. (Note that answers to set problems often consist of entire configurations or structures.) Set problems of this kind will be called "order decomposable."

DEFINITION. A set problem P is called C(n)-order decomposable if and only if there exist an ordering ORD and a function □ such that for each set {a_1, ..., a_n}, ordered according to ORD, and every 1 ≤ i < n,

P(a_1, ..., a_n) = □(P(a_1, ..., a_i), P(a_{i+1}, ..., a_n)),

where □ takes at most C(n) time to compute when the set contains n points.

We assume that C(n) is nondecreasing and smooth (i.e., C(O(n)) = O(C(n))). A set problem will be called order decomposable if it is C(n)-order decomposable for some C(n). For a number of set problems we can meaningfully combine the answers over two separated halves of the set only when additional information about the points in the two halves is maintained. (See, e.g., Example e in Section 4 where we require that the points be ordered with respect to a second coordinate as well.) If this extra kind of "preconditioning" is C(n)-order decomposable with respect to the same ordering ORD as that used for the problem itself, then we can transform the problem to a C(n)-order decomposable set problem by including the preconditioning as part of the "answer." In such cases, □ will first combine the answers using the preconditioning and afterward combine the preconditionings.
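As a simple illustration, consider the one-dimensional convex hull problem of Example a in Section 4: the "answer" for a set is the pair consisting of its smallest and largest element. With ORD the natural ordering of the points, □ only has to combine two such pairs, which takes O(1) time, so the problem is O(1)-order decomposable. A minimal sketch in Python (the identifiers are illustrative only and not part of the paper):

    # Illustrative sketch: the 1-dimensional convex hull problem as an
    # O(1)-order decomposable set problem.
    #
    # ORD     : the natural ordering of the points.
    # P(S)    : the pair (min(S), max(S)).
    # box     : plays the role of the merge operator □; it combines the
    #           answers over a "left" and a "right" part of the ordered set.

    def answer(single_point):
        # P for a one-element set.
        return (single_point, single_point)

    def box(left_answer, right_answer):
        # Because the left part precedes the right part in ORD, the minimum
        # lies in the left answer and the maximum in the right answer.
        lo, _ = left_answer
        _, hi = right_answer
        return (lo, hi)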

A well-known method for solving a number of static set problems is the divide-and-conquer technique [2, 3, 5]. It asks for some kind of decomposability very similar to the one described in the definition of order decomposable set problems. It follows that most of the problems using divide-and-conquer are order decomposable. On the other hand, for each order decomposable set problem one can develop a divide-and-conquer algorithm quite easily (assuming we have an algorithm for □).


THEOREM 2.1. Let P be a C(n)-order decomposable set problem. There exists a divide-and-conquer solution to P that takes O(n + ORD(n)) steps when C(n) = O(n^ε) for some 0 < ε < 1, O(C(n) + ORD(n)) steps when C(n) = Ω(n^(1+ε)) for some ε > 0, and O(log n · C(n) + ORD(n)) steps otherwise, where n is the number of points in the set and ORD(n) is the time required to order n points according to ORD.

Proof. Let the point set for which we want to solve P be S = {p_1, ..., p_n}. We first order S according to ORD, thus obtaining an ordered set S' = {a_1, ..., a_n} (a_i = p_j for some j, a_i ≠ a_j for i ≠ j). To solve P we split S' in two halves, solve P for the two halves, and concatenate the answers using □. The algorithm is best described by the following recursive procedure.

procedure ALG(ordered set S', answer P);
  # S' contains an ordered part of the point set. Let S' = {a_l, ..., a_u}.
    ALG will return the answer to the set problem over S' in P. #
begin
  integer l = lowerbound(S'), u = upperbound(S');
  if l = u
    then # S' contains only one point # compute P directly
    else integer m = ⌊(l + u)/2⌋;
         # split S' in two halves and obtain the answers over the two halves
           by recursive calls of ALG ... #
         answer P_1, P_2;
         ALG({a_l, ..., a_m}, P_1);
         ALG({a_{m+1}, ..., a_u}, P_2);
         # ... and build the answer over S' out of the answers over the halves. #
         P := □(P_1, P_2)
  fi
end of ALG;

We call the procedure as ALG({a_1, ..., a_n}, P). The answer will be delivered in P.
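For illustration, the same procedure in executable form as a Python sketch (the names alg, solve, single and box are illustrative only; single computes P for a one-element set and box plays the role of □, both supplied per problem):

    # Illustrative rendering of the divide-and-conquer procedure ALG of
    # Theorem 2.1.  Assumes a nonempty point set, as ALG does.

    def alg(ordered_points, single, box):
        lo, hi = 0, len(ordered_points) - 1
        if lo == hi:
            # the set contains only one point: compute P directly
            return single(ordered_points[lo])
        mid = (lo + hi) // 2
        # answers over the two halves, obtained by recursive calls ...
        p1 = alg(ordered_points[:mid + 1], single, box)
        p2 = alg(ordered_points[mid + 1:], single, box)
        # ... are combined into the answer over the whole set
        return box(p1, p2)

    def solve(points, ord_key, single, box):
        # order S according to ORD (cost ORD(n)), then run ALG
        return alg(sorted(points, key=ord_key), single, box)

    # Example: the 1-dimensional convex hull problem of the previous sketch.
    # solve([5, 1, 4, 2], ord_key=lambda x: x,
    #       single=lambda p: (p, p),
    #       box=lambda a, b: (a[0], b[1]))        # -> (1, 5)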

Ordering S takes ORD(n) by definition. To obtain a bound for the cost of ALG note that, apart from the recursive calls, ALG takes O(C(n)) steps (n the number of points it is working on). Hence the runtime of ALG is bounded by a function T(n) which satisfies the following recurrence:

T(n) = 2T(n/2) + C(n).

One easily verifies that T(n) = O(n) when C(n) = O(n^ε) for some 0 < ε < 1, that T(n) = O(C(n)) when C(n) = Ω(n^(1+ε)) for some ε > 0, and that T(n) = O(log n · C(n)) otherwise. The theorem follows. □
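In more detail (a sketch of the verification, assuming the smoothness of C and written in LaTeX notation): unfolding the recurrence over the at most \lceil\log_2 n\rceil levels of the recursion, and charging O(1) to each of the n leaves, gives

    T(n) = \sum_{i=0}^{\lceil \log_2 n \rceil - 1} 2^{i}\, C\!\left( n/2^{i} \right) + O(n).

For C(n) = O(n^ε) with ε < 1 the i-th term is O(n^ε 2^{(1-ε)i}), so the sum is geometric and dominated by its last term, which is O(n). For C(n) = Ω(n^(1+ε)) the terms decrease geometrically from the first one, giving O(C(n)). In the remaining case each term is at most O(C(n)) as soon as C grows at least linearly, which yields the O(log n · C(n)) bound.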

Often ORD(n) = O(n log n) but there are order decomposable set problems for which □ needs no ordering at all. (An example is MAX, which asks for the largest element of a set.) In such cases one can take ORD(n) = O(1).

3. DYNAMIZATION OF ORDER DECOMPOSABLE SET PROBLEMS

As Theorem 2.1 shows, there exist efficient algorithms to solve order decomposable set problems statically. But often one would like to maintain the answer to an order decomposable set problem while inserting and/or deleting points in the set. In this section we give a method of maintaining these answers efficiently. To achieve this we keep track of how the divide-and-conquer algorithm splits the set and how it "merges" answers for subsets in a single data structure. The structure we use is quite similar to the one described by Overmars and van Leeuwen [12]. The main part of the structure consists of a balanced search tree T in which we keep the points at the leaves, ordered by ORD. (When ORD consists of no ordering at all we choose lexicographic ordering, just to be able to find points.) T indicates the splittings in the divide-and-conquer algorithm. At each internal node α of the tree we would like to have the complete P structure of the points below it. Let α be a node of T with sons γ and δ. It follows from the definition of C(n)-order decomposability that, once we have the P structures at γ and δ, we can obtain the P structure at α in O(C(n)) time by using □ (see Fig. 1). There is only one problem. There is no guarantee in the definition of order decomposable set problems that □ leaves the P structures it works on unchanged. In most of the examples offered in Section 4 these structures are

FIGURE 1


indeed largely destroyed. Copying the P structures before using them may take much more than O(C(n)) time. Therefore we have to develop another strategy.

Let P_α denote the P structure at α. When we want to build P_α out of the structures at the sons γ and δ of α, we keep track (at α) of all actions that were executed to obtain P_α, and at γ and δ we save the pieces of P_γ and P_δ that are left over. This can be done in O(C(n)) time. It follows that at some later time we can rebuild P_γ and P_δ out of P_α, using the stored information. In most applications there will be more efficient and less storage demanding methods for rebuilding P_γ and P_δ out of P_α. Only at the root of T do we keep the complete P structure of the entire set (and therefore the answer to the problem). It follows that at each internal node α of T we have the following associated information.

(i) f(α) = a pointer to the father of α (if any).
(ii) lson(α) = a pointer to the leftson of α.
(iii) rson(α) = a pointer to the rightson of α.
(iv) max(α) = the largest value (according to ORD) in the subtree of lson(α).
(v) P*(α) = the leftover piece of P_α.
(vi) I(α) = the information needed to obtain P_lson(α) and P_rson(α) from P_α (i.e., a record of actions).

Properties (i) to (iv) are needed to let T function as a search tree and properties (v) and (vi) supply the associated information. When α is the root, P*(α) = P_α. We will first describe the procedure DOWN to reconstruct the full P_β structure at an arbitrary node β of T. DOWN also reconstructs the full P_γ structure at each node γ bordering the search path toward β. These are needed to be able to return from β to the root of the tree and to reassemble structures when we do so. Later, β will be the father of some leaf that needs to be updated and the search will be guided by the usual decision criterion for search trees. We will not spell out this detail in the description of DOWN. DOWN works its way down the tree to β by disassembling the structure at the node visited and reassembling the structures at its sons.

procedure DOWN(node α, β);
  # α is the node just reached on the way towards β. P*(α) contains the
    full P_α structure. #
begin
  if α = β
    then goal reached
    else build P_lson(α) and P_rson(α) using P*(α), P*(lson(α)),
           P*(rson(α)) and I(α);
         P*(lson(α)) := P_lson(α);
         P*(rson(α)) := P_rson(α);
         # continue the search #
         if β below lson(α)
           then DOWN(lson(α), β)
           else DOWN(rson(α), β)
         fi
  fi
end of DOWN;

The procedure is called as DOWN(root, β). Let the number of points in the set be n.

LEMMA 3.1. DOWN reaches its goal after O(C(n)) steps when C(n) = Ω(n^ε) for some ε > 0 and O(log n · C(n)) steps otherwise.

Proof. The expensive part of DOWN is the construction of P_lson(α) and P_rson(α) out of P_α and I(α). At the i-th recursive call of DOWN the number of points below α is at most O(n/2^i) because T is balanced. So the total cost (except for the recursive call) is bounded by O(C(n/2^i)). Because β can never lie deeper than O(log n), the total cost for DOWN is bounded by

Σ_{i=1}^{O(log n)} O(C(n/2^i)).

For C(n) = Ω(n^ε) (ε > 0) this adds up to O(C(n)). Otherwise it is bounded by O(log n · C(n)). □

We will use the procedure DOWN to reach the father of a leaf we want to update. After we have performed the action we want to climb back to the root, meanwhile assembling renewed P structures along the search path. For this task we use the procedure UP. The assembling of structures can be done quite easily using □, but there is one problem. Because we have updated the tree (by means of an insertion or a deletion) T may have gotten out of balance. To be able to rebalance T efficiently we will restrict our possible choices of T to search trees that can be balanced by means of local rotations (like AVL-trees or BB[α] trees). In the description of UP we delegate this task to a procedure BALANCE.

procedure UP(node α);
  # α is the node most recently reached on the path from β to the root of
    the tree. The P* structures at the sons of α contain the complete
    structures. #
begin
  P*(α) := □(P*(lson(α)), P*(rson(α))), meanwhile filling in the
    information in I(α) and leaving the leftover parts of the P
    structures at the sons;
  if out of balance then BALANCE(α) fi;
  if α = root
    then goal reached
    else UP(f(α))
  fi
end of UP;

The procedure is called as UP(f(β)) (when β is not the root).

LEMMA 3.2. UP always reaches its goal after O(C(n) + R) steps when C(n) = Ω(n^ε) for some ε > 0 and O(log n · C(n) + R) steps otherwise, where R is the total time needed for rebalancing.

Proof. The cost for rebalancing is R by definition. The costs for building P_α and collecting the information in I(α) add up to O(C(n)) when C(n) = Ω(n^ε) and O(log n · C(n)) otherwise, by the same arguments as those in the proof of Lemma 3.1. □

To achieve a bound for R we have to look more carefully at the actions that need to be performed for rebalancing. In AVL-trees and BB[α] trees this rebalancing consists of local rotations only. We consider only the case in which a single rotation is needed, as double rotations can be handled in a similar way. When we have to perform a single rotation at node α (see Fig. 2) we first do one iteration of DOWN to reconstruct the P structures at lson(α) and rson(α). Had we noticed the need for rebalancing at the beginning of UP this step would not have been needed. (This modification is left to the reader.) Another iteration of DOWN is done at lson(α) to obtain the full structures at γ and δ. Next we perform the rotation and start UP again at α.

LEMMA 3.3. R = O(C(n)) when C(n) = Ω(n^ε) and R = O(log n · C(n)) otherwise.

FIGURE 2


Proof. A single rotation consists of at most two iterations of DOWN and one iteration of UP. Similar results can be obtained for double rotations. Because rebalancing is needed at most once at each level, the bound follows from the bounds for DOWN and UP. □

We are now ready to prove our main result, namely, the dynamization of order decomposable set problems.

THEOREM 3.4. Given a C(n)-order decomposable set problem P, we can dynamize it to yield an insertion time I(n) = O(C(n)) and a deletion time D(n) = O(C(n)) when C(n) = Ω(n^ε) for some ε > 0, and I(n) = D(n) = O(log n · C(n)) otherwise, where n is the current number of points in the set.

Proof. We keep the points of the set, ordered with respect to ORD, in a tree structure T as described. In T we have the required answer to the problem at the root. When we want to insert (or delete) a point p we use DOWN to reach the father of the leaf where p needs to be inserted (or deleted) according to ORD. After performing the action we climb back to the root using the procedure UP, meanwhile reassembling the updated answers. UP also takes care of the rebalancing operations needed. When we have reached the root of T we have the updated answer over the total set. The bounds follow from the bounds of DOWN, UP, and BALANCE. □

Hence, once there is a "cheap" merging algorithm □ for a set problem, one can always dynamize it efficiently.
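To illustrate the overall scheme, the following simplified Python sketch maintains such a tree under insertions for the special case in which □ does not destroy its arguments. In that case the leftover pieces P*(α), the action records I(α) (and hence the procedure DOWN), and the rebalancing can all be omitted, so this is only the skeleton of the method, not the full construction of this section; all identifiers are invented for illustration. Every internal node caches the answer over the points below it, and an update recomputes the O(log n) cached answers on the search path back to the root, as UP does. Deletions can be handled analogously.

    # Simplified sketch: a binary tree over the points, ordered by ORD, with
    # the answer P cached at every internal node.  Assumes the merge `box`
    # (playing the role of □) is nondestructive; balance is not maintained.

    class Node:
        def __init__(self, lson=None, rson=None, point=None):
            self.lson, self.rson = lson, rson   # internal node: two sons
            self.point = point                  # leaf: the stored point
            self.submax = None                  # largest ORD-value in this subtree
            self.answer = None                  # cached P structure of this subtree

    def _recompute(node, single, box):
        # rebuild the cached answer of `node` from its sons (cf. procedure UP)
        if node.point is not None:                      # leaf
            node.answer, node.submax = single(node.point), node.point
        else:                                           # internal node: apply the merge
            node.answer = box(node.lson.answer, node.rson.answer)
            node.submax = node.rson.submax

    def insert(node, p, single, box):
        # insert point p into the subtree rooted at `node`; returns the new root
        if node is None:                                # empty tree: create a leaf
            node = Node(point=p)
        elif node.point is not None:                    # leaf reached: split it in two
            a, b = sorted([node.point, p])
            node = Node(lson=Node(point=a), rson=Node(point=b))
            _recompute(node.lson, single, box)
            _recompute(node.rson, single, box)
        elif p <= node.lson.submax:                     # descend (cf. procedure DOWN)
            node.lson = insert(node.lson, p, single, box)
        else:
            node.rson = insert(node.rson, p, single, box)
        _recompute(node, single, box)                   # reassemble on the way back up
        return node

    # Usage with the one-dimensional convex hull problem of Example a:
    #   root = None
    #   for q in [5, 1, 4, 2]:
    #       root = insert(root, q, single=lambda x: (x, x),
    #                     box=lambda left, right: (left[0], right[1]))
    #   root.answer is now (1, 5)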

4. EXAMPLES

In this section we consider a number of typical examples of order decomposable set problems.

(a) Convex Hull in One, Two, and Three Dimensions

A first example of an order decomposable set problem is the determination of the convex hull of a set of points in one-, two-, and three-dimensional space. In one dimension the convex hull of a set of points consists of the smallest and largest points in the set. One easily verifies that the problem is O(1)-order decomposable with the usual ordering. Hence the dynamization method yields I(n) = D(n) = O(log n). In the two-dimensional case the problem is more difficult. We need a "merging" routine for two in some way separated convex hulls (see Fig. 3). This merging consists of finding the two common tangents of the convex polygons. Overmars and van Leeuwen [12] have given a method for doing this in O(log n) steps when we maintain the convex hulls in some appropriate data structure. As ordering they use the ordering of points by y-coordinate. Hence the problem is O(log n)-order


FIGURE 3

decomposable and we can use the scheme of Section 3 to obtain a structure for maintaining the two-dimensional convex hull of a set of points with a transaction time of O(log^2 n). In three-dimensional space the merging of convex hulls is much harder. Preparata and Hong [16] offer a method for merging separated convex polygons in O(n) time. Thus the three-dimensional convex hull problem is again order decomposable but with the higher bound of C(n) = O(n). By Theorem 3.4 it can be dynamized such that the insertion and deletion times are bounded by O(n). (Bentley and Shamos [5] state that it is not necessary for the method of Preparata and Hong [16] that the convex polygons be separated. Hence we can choose any ordering we like for ORD.) In d-dimensional space (d > 3) no efficient merging algorithms for convex polygons are known. Hence we have no efficient dynamization for the d-dimensional convex hull problem.

(b) Intersection of Half-Spaces in One, Two, and Three Dimensions

A somewhat related problem asks for the intersection of a set of half-spaces. In one-dimensional space it can be shown easily that the problem is O(1)-order decomposable and hence we can achieve O(log n) insertion and deletion times. In the two-dimensional case the common intersection of a set of half-spaces consists of a (possibly open) convex polygonal region. Instead of maintaining the whole set we split the set in the set of half-spaces "facing" leftward and the set of half-spaces facing rightward (the l-half-spaces and r-half-spaces, respectively). Overmars and van Leeuwen [12] have shown that the problem of finding the intersection of l-half-spaces (and likewise r-half-spaces) is O(log n)-order decomposable. As ordering they use the angle of the bordering line. In this case, finding the total intersection from intersections of two halves consists of finding the one point of intersection of the two convex arcs (see Fig. 4). It follows that we can maintain the intersection of a set of l-half-spaces in O(log^2 n) transaction time (likewise for r-half-spaces). Overmars and van Leeuwen [12] show


FIGURE 4

that one can find the intersection of the total set out of the intersections of the l-half-spaces and r-half-spaces in only O(log n) additional steps. Hence we can maintain the total intersection at a cost of O(log^2 n) per transaction. In three dimensions the problem is again more complex. Brown [6, 7] and Preparata and Muller [17] have shown that there is a strong connection between the intersection of a set of half-spaces and the convex hull of a set of points (by means of dualization). Using these results one can develop an O(n) maintenance algorithm for the intersection of half-spaces in three-dimensional space.

(c) Maximal Elements in One, Two, and Higher Dimensions

Given a set of points we can ask for the maximal elements in the set. An element x is called maximal if there is no y in the set such that y > x. In one dimension the problem is trivial (yielding O(log n) bounds). In the two-dimensional case Overmars and van Leeuwen [12] have given a method for merging the contours of maximal elements of horizontally separated subsets in O(log n) time (see Fig. 5). Hence the problem is O(log n)-order


FIGURE 6

decomposable and can be dynamized yielding O(log^2 n) bounds. In the higher-dimensional case one can use a trivial dynamization of the ECDF searching problem (see Bentley and Shamos [4]) to achieve O(n) bounds for the problem.
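For the two-dimensional case the content of the merge is easy to state (the O(log n) bound requires storing each contour in a concatenable search tree, as in [12]): every maximal point of the right half stays maximal, and a maximal point of the left half survives exactly when its y-coordinate exceeds the largest y-coordinate occurring in the right half. A sketch in Python, using plain sorted lists instead of the search trees of [12], so the list surgery below costs O(n) rather than O(log n) (ties in y are glossed over):

    from bisect import bisect_left

    def merge_maxima_contours(left, right):
        # left  : maximal elements of the left half (all x smaller than in the right half)
        # right : maximal elements of the right half
        # Both are lists of (x, y) pairs sorted by increasing x, hence decreasing y.
        if not right:
            return list(left)
        top_y = right[0][1]               # the largest y-coordinate in the right half
        # A point of `left` stays maximal iff its y exceeds top_y; since y decreases
        # along `left`, these points form a prefix, found by binary search.
        keys = [-y for (_, y) in left]    # increasing sequence of -y, usable by bisect
        k = bisect_left(keys, -top_y)     # number of left points with y > top_y
        return left[:k] + list(right)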

(d) Union and Intersection of Line Segments

Consider the following problem: Given a set of segments on a line, determine the union. Clearly the union again consists of a set of segments (see Fig. 6). The set problem is decomposable in the following sense:

THEOREM 4.1. Let S = {s_1, ..., s_n} be a set of segments ordered by starting point (= left endpoint). If the union of {s_1, ..., s_i} and of {s_{i+1}, ..., s_n} is known, we can determine the union of the entire set in O(log n) steps.

Proof. Let p be a point between the starting points of s_i and s_{i+1}. The union of {s_1, ..., s_i} consists of a number of segments, but at most one of these segments can stretch to the right of p (see Fig. 7). When there is no such segment, the union of the total set consists of the unions of the two parts. Otherwise, let x be the rightmost endpoint of that segment. Let the union of {s_1, ..., s_i} be A and the union of {s_{i+1}, ..., s_n} be C. We search for x in C. The following two cases can occur.

FIGURE 7


Case a. x lies between two segments in C (or before or behind C).


In this case we just throw away the part of C to the left of x. The total union consists of A plus the rest of C.

Case b. x lies within a segment in C.


In this case we extend the last segment of A up to y and throw away the part of C up to y.

It follows that, when we use a proper data structure for A and C, we can find the union in O(log n) steps. □

It follows that the problem is O(log n)-order decomposable. Hence we can dynamize it by the methods described yielding I(n) = D(n) = O(log^2 n). Extending this result to higher-dimensional space gives rise to tremendous problems.
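The case analysis of the proof translates directly into code. A sketch in Python, with A and C represented as plain lists of disjoint (left, right) intervals sorted by left endpoint; the list operations below cost O(n) in the worst case, whereas the concatenable search trees needed for the O(log n) bound are not shown:

    def merge_unions(A, C):
        # A : union of the left part, C : union of the right part, as in Theorem 4.1.
        # At most the last interval of A can reach into the intervals of C.
        if not A:
            return list(C)
        x = A[-1][1]                          # rightmost point covered by A
        merged = list(A)
        i = 0
        while i < len(C) and C[i][1] <= x:    # intervals of C completely covered by A
            i += 1
        if i < len(C) and C[i][0] <= x:       # case b: x lies within C[i] = (l, y)
            merged[-1] = (merged[-1][0], C[i][1])   # extend the last interval of A to y
            i += 1
        merged.extend(C[i:])                  # case a: the rest of C lies right of x
        return merged

For example, merge_unions([(0, 3), (5, 12)], [(6, 8), (10, 15), (20, 22)]) yields [(0, 3), (5, 15), (20, 22)].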

To determine (and maintain) the intersection of a set of line segments is easier. Such an intersection consists of at most one segment.

THEOREM 4.2. Given a set S = {s_1, ..., s_n} of segments and the intersection of {s_1, ..., s_i} and of {s_{i+1}, ..., s_n}, the intersection of S can be determined in O(1) steps.

Proof. The intersection of {s_1, ..., s_i} consists of (at most) one segment, as does the intersection of {s_{i+1}, ..., s_n}. To find the total intersection we just have to intersect these two segments, which takes O(1). □

Hence, the problem is O(1)-order decomposable and we can dynamize it in I(n) = D(n) = O(log n). To extend the result to d-dimensional space, just replace "segment" by "rectilinearly oriented hyperrectangle" and Theorem 4.2 carries over. Hence the problem remains O(1)-order decomposable. More precisely, C(n) = O(2d) and I(n) = D(n) = O(2d log n).
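In the d-dimensional case □ just intersects two (possibly empty) rectilinearly oriented hyperrectangles coordinate by coordinate, which is where the roughly 2d comparisons come from. A minimal Python sketch (the representation by lower and upper corner is illustrative):

    def intersect_boxes(a, b):
        # a and b are hyperrectangles given as (lower_corner, upper_corner),
        # each corner a tuple of d coordinates, or None for an empty intersection.
        if a is None or b is None:
            return None
        lo = tuple(max(x, y) for x, y in zip(a[0], b[0]))   # d comparisons
        hi = tuple(min(x, y) for x, y in zip(a[1], b[1]))   # d comparisons
        return (lo, hi) if all(l <= h for l, h in zip(lo, hi)) else None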

(e) Closest-Points Problem

The closest-points problem we consider asks for two points in a set that have smallest distance. For this problem there exists an algorithm based on a divide-and-conquer strategy (Bentley and Shamos [3] and Bentley [2]). To find the closest pair of points we split the set by a vertical line into two halves and determine the closest pair in each half. Let δ be the minimum of the two distances. The only points that can lie closer than δ must lie in the


FIGURE 8

slab of size 2δ around the dividing line (see Fig. 8). When we have the points in this slab ordered by y-coordinate we can determine the closest pair among them by one walk along them. It can be shown (see [2, 3]) that with each point in the slab we have to do only a constant number of comparisons with other points. Hence, the conquer step of the algorithm takes at most O(n), provided we have the points ordered by y-coordinate. So, as a preconditioning we need the points ordered by y-coordinate. As suggested in Section 2 we include this preconditioning as part of the answer. Clearly this preconditioning can be combined in O(n). Sifting the points in the slab from the preconditioning can also be done in O(n). It follows that C(n) = O(n). Hence we can dynamize the problem obtaining I(n) = D(n) = O(n).
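For concreteness, a Python sketch of this conquer step, following the standard divide-and-conquer combine of [2, 3]; the function and parameter names are illustrative, and the constant 7 below is the usual bound on the number of slab successors that have to be inspected:

    from heapq import merge
    from math import dist

    def combine(delta_left, delta_right, ysorted_left, ysorted_right, xm):
        # delta_left/right   : smallest distances found in the two halves
        # ysorted_left/right : the points (x, y) of the halves, sorted by y
        # xm                 : x-coordinate of the vertical dividing line
        delta = min(delta_left, delta_right)
        # Merge the two y-ordered lists in O(n); the merged list is the
        # "preconditioning" kept as part of the answer (Section 2).
        ysorted = list(merge(ysorted_left, ysorted_right, key=lambda p: p[1]))
        # Only points in the slab of size 2*delta around the line can be closer.
        slab = [p for p in ysorted if abs(p[0] - xm) < delta]
        # One walk along the slab; each point is compared with O(1) successors.
        for i, p in enumerate(slab):
            for q in slab[i + 1 : i + 8]:
                if q[1] - p[1] >= delta:
                    break
                delta = min(delta, dist(p, q))
        return delta, ysorted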

(f) The Voronoi Diagram

In a Voronoi diagram we keep with each point of the set the region of points that lie nearest to the particular point (see Fig. 9). Each region is a convex polygon. Shamos [19] and Shamos and Hoey [20] first showed how the Voronoi diagram could be used to answer a number of closest-point problems. To build the Voronoi diagram of a set of points, Shamos [19] divides the set by a vertical line into two equal-sized subsets, builds the Voronoi diagrams of both halves, and then "concatenates" them. The merging of the two diagrams takes O(n). (Kirkpatrick [8] has shown that Voronoi diagrams can be merged in linear time even when they are not "separated.") It follows that the problem of finding the Voronoi diagram of a set of points is O(n)-order decomposable. Hence we can dynamize it yielding I(n) = D(n) = O(n). There are many problems that can be solved in linear time using the Voronoi diagram (see Shamos and Hoey [20]) which can thus be dynamized in linear time, including the all-closest-points


FIGURE 9

problem, the maintenance of a minimum-weight triangulation, the largest-empty-circle problem, and the maintenance of a minimum spanning tree.

ACKNOWLEDGMENTS

I would like to thank Jan van Leeuwen and the referee for helpful comments.

REFERENCES

1. J. L. BENTLEY, Decomposable searching problems, Inform. Proc. Lett. 8 (1979), 244-251.
2. J. L. BENTLEY, Multidimensional divide-and-conquer, Comm. ACM 23 (1980), 214-229.
3. J. L. BENTLEY AND M. I. SHAMOS, Divide-and-conquer in multidimensional space, in "Proceedings, 8th Annual ACM Symposium on Theory of Computing, 1976," pp. 220-230.
4. J. L. BENTLEY AND M. I. SHAMOS, A problem in multivariate statistics: Algorithm, data structure and applications, in "Proceedings, 15th Annual Allerton Conference on Communication, Control and Computing, 1977," pp. 193-201.
5. J. L. BENTLEY AND M. I. SHAMOS, Divide and conquer for linear expected time, Inform. Proc. Lett. 7 (1978), 87-91.
6. K. Q. BROWN, "Fast Intersection of Half Spaces," Tech. Rep. CMU-CS-78-129, Department of Computer Science, Carnegie-Mellon University, 1978.
7. K. Q. BROWN, "Geometric Transforms for Fast Geometric Algorithms," Tech. Rep. CMU-CS-80-101, Department of Computer Science, Carnegie-Mellon University, 1980.
8. D. G. KIRKPATRICK, Efficient computation of continuous skeletons, in "Proceedings, 20th Annual IEEE Symposium on Foundations of Computer Science, 1979," pp. 18-27.
9. H. A. MAURER AND TH. OTTMANN, Dynamic solutions of decomposable searching problems, in "Discrete Structures and Algorithms" (U. Pape, Ed.), pp. 17-24, Hanser, 1979.
10. K. MEHLHORN AND M. H. OVERMARS, Optimal dynamization of decomposable searching problems, Inform. Proc. Lett. 12 (1981), 93-98.
11. M. H. OVERMARS AND J. VAN LEEUWEN, Two general methods for dynamizing decomposable searching problems, Computing 26 (1981), 155-166.
12. M. H. OVERMARS AND J. VAN LEEUWEN, "Maintenance of Configurations in the Plane," Tech. Rep. RUU-CS-79-9, Department of Computer Science, University of Utrecht, 1979/1980; revised version: Tech. Rep. RUU-CS-81-3. (To appear in J. Comput. System Sci.)
13. M. H. OVERMARS AND J. VAN LEEUWEN, Some principles for dynamizing decomposable searching problems, Inform. Proc. Lett. 12 (1981), 49-53.
14. M. H. OVERMARS AND J. VAN LEEUWEN, Dynamically maintaining configurations in the plane, in "Proceedings, 12th Annual ACM Symposium on Theory of Computing, 1980," pp. 135-145.
15. M. H. OVERMARS AND J. VAN LEEUWEN, Dynamizations of decomposable searching problems yielding good worst-case bounds, in "Proceedings, 5th GI Fachtagung Theoretische Informatik," pp. 224-233, Springer Lecture Notes in Computer Science No. 104, Springer-Verlag, Berlin/New York, 1981.
16. F. P. PREPARATA AND S. J. HONG, Convex hulls of finite sets of points in two and three dimensions, Comm. ACM 20 (1977), 87-93.
17. F. P. PREPARATA AND D. E. MULLER, Finding the intersection of n half-spaces in time O(n log n), Theoret. Comput. Sci. 8 (1979), 45-55.
18. J. B. SAXE AND J. L. BENTLEY, Transforming static data structures into dynamic structures, in "Proceedings, 20th IEEE Symposium on Foundations of Computer Science, 1979," pp. 148-168.
19. M. I. SHAMOS, "Computational Geometry," Springer-Verlag, Berlin/New York, in press.
20. M. I. SHAMOS AND D. HOEY, Closest-point problems, in "Proceedings, 16th Annual IEEE Symposium on Foundations of Computer Science, 1975," pp. 151-162.
21. J. VAN LEEUWEN AND H. A. MAURER, "Dynamic Systems of Static Data-Structures," Report 42, Inst. für Informationsverarbeitung, Technical University Graz, 1980.
22. J. VAN LEEUWEN AND D. WOOD, Dynamization of decomposable searching problems, Inform. Proc. Lett. 10 (1980), 51-56.