
Parameterized Self-Adjusting Heaps

Amr Elmasry

Computer Science Department, Alexandria University, Alexandria, Egypt

Abstract

We give a parameterized form for the standard implementations of the pairing heaps, skew heaps and skew-pairing heaps. When the node with the minimum value is to be deleted from the heap (deletemin operation), the procedure used to combine the resulting sub-trees into one tree depends on the value of a parameter k. When the value of k is equal to 2, the implementations are equivalent to the standard implementations. Using more involved arguments, we show that for some predefined ranges of k this general form achieves the same bounds as the standard implementations. Finally, we report experimental results showing that, for the pairing heaps, tuning the value of k reduces the cost of the deletemin operation.

Key words: Data structures, parameterization, self-adjusting structures, pairing heaps, skew heaps, amortized analysis.

1 Introduction

Several self-adjusting heap structures were introduced in the literature (pairing heaps [5], skew heaps [11] and skew-pairing heaps [2]). The operation of deleting the node that has the minimum value from a heap is referred to as the deletemin operation. In this paper we generalize the deletemin operation for these self-adjusting heaps, by introducing a new parameter k (in the same line of thinking as the generalization of binary heaps to d-heaps). When k is equal to 2, our implementations are the same as the standard implementations. A similar generalization of the self-adjusting binary search trees of Sleator and Tarjan [10] to k-ary trees was given by Sherk [9].

We prove that, for some ranges of k, the same bounds as the standard implementations can be achieved for different heap operations. More specifically,

Email address: [email protected] (Amr Elmasry).

Preprint submitted to Journal of Algorithms 10 December 2014

we prove that the amortized cost of the deletemin operation for an n-node parameterized pairing heap, skew heap, or skew-pairing heap is O(log n), which is the same bound achieved by the standard implementations. The bounds on the other operations are also the same as those proved for the standard implementations. The original proofs given for the case when k equals 2 do not work for general values of k. We give new proofs for the general case using new potential functions and new averaging techniques that are interesting in their own right.

One of the main purposes of introducing other known parameterized structures (d-heaps, B-trees [1] and self-adjusting k-ary trees [9]) is the efficient usage of external memory. In contrast, the main purpose of introducing our parameterized heaps is to allow more flexibility leading to a speed-up of the heap operations performed in internal memory. We support our hypothesis by experimental results. For the pairing heaps, we show that by tuning the value of k relative to the underlying application better practical results can be achieved (one of our experiments showed about 30% speed-up when we used k = 10 instead of 2).

We start by reviewing the structure and some basic operations for each of the pairing heaps, skew heaps and skew-pairing heaps.

Pairing heaps. The pairing heap [5] is a heap-ordered general tree. There is no restriction on the number of children a node may have, and the children of a node are maintained in a list of siblings. The basic operation on a pairing heap is the pairing operation, in which two queues are combined into one queue by making the root with the larger key value the leftmost child of the other root. The following operations are defined for the standard implementation of the pairing heaps:

• insert(x, h): Make x into a one-node tree and pair it with tree h.

• decreasekey(δ, x, h): Subtract δ from the key of item x. If x is not the root of tree h, cut the edge joining x to its parent and pair the two trees formed by the cut.

• deletemin(h): Remove the node at the root of h and return its value. The resulting queues are then combined to form a single tree using the pairing operation. Several variants have been suggested, each with a different way of combining the resulting queues. In the standard two-pass variant, the deletemin operation proceeds as follows. In the first pass, the children of the deleted root are paired from left to right. In other words, if the number of sub-trees is odd, the rightmost sub-tree is not paired. (Pairing these sub-trees from right to left achieves the same amortized bounds.) In the second pass, the remaining sub-trees are paired, in order from right to left. Each sub-tree is paired with the result of the linkings of the sub-trees to its right. This pass is called the right-to-left incremental pairing pass. (A sketch of this variant is given after this list.)
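
To make the two-pass combining concrete, here is a minimal Python sketch (the node layout and function names are our own illustration, not code from the paper; a real implementation would keep the children in a linked sibling list rather than a Python list):

class Node:
    def __init__(self, key):
        self.key = key
        self.children = []          # sub-trees, in left-to-right order

def pair(a, b):
    # Pairing operation: the root with the larger key becomes the
    # leftmost child of the other root.  Either argument may be None.
    if a is None: return b
    if b is None: return a
    if b.key < a.key:
        a, b = b, a
    a.children.insert(0, b)
    return a

def insert(key, h):
    return pair(Node(key), h)

def deletemin_two_pass(h):
    # First pass: pair the children of the deleted root from left to right.
    kids = h.children
    paired = [pair(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    # Second pass: right-to-left incremental pairing of the survivors.
    result = None
    for t in reversed(paired):
        result = pair(t, result)
    return h.key, result

(decreasekey additionally needs parent or sibling pointers to cut a node out of its parent's child list, so it is omitted from this sketch.)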


It has been proven [5] that, for an n-node pairing heap, the amortized cost of any of these operations is O(log n). Iacono [6] showed that the bound on the insert is O(1). Fredman [3] has shown that a constant amortized cost for the decreasekey operation is precluded, by establishing a lower bound of Ω(log log n) for a large family of data structures that generalize the pairing heaps (this lower bound applies for the parameterized form of the pairing heaps).

Skew heaps. The skew heap [11] is a binary heap-ordered tree. The deletemin operation starts by removing the root and then merging its two sub-trees. In the top-down variant, the merge operation proceeds by merging the right spines of the two respective trees. Suppose that the right spine of one tree consists of x_1, x_2, ..., x_r, that the right spine of the other tree consists of y_1, y_2, ..., y_s (in order, starting with the respective tree roots), and that v_1, v_2, ..., v_t (t = r + s) gives the result of the merging of the two spines. Suppose that the sequence of x_i's is exhausted prior to the sequence of y_i's in this merged result. Let v_h = x_r, h < t, be the point at which the x_i's are exhausted. Then in the resulting merged tree the v_i's comprise a path starting at the root; the links joining v_1, v_2, ..., v_{h+1} are all left links, and the remaining links (those joining v_{h+1}, ..., v_t) all consist of right links. In other words, as merging takes place in top-down order, sub-trees are swapped, but not below the level of the first node beyond the exhausted sublist. An alternative recursive formulation of the merge operation is given as follows. Let u and v be the two trees being merged and let merge(u, v) denote the result of the merging. If v is empty, then merge(u, v) is given by u. Otherwise, assume that the root α of u wins its comparison with the root of v and let u_L and u_R be the left and right sub-trees of u, respectively. Then merge(u, v) has α as its root, u_L as its right sub-tree, and merge(u_R, v) as its left sub-tree. Sleator and Tarjan [11] have shown that the amortized cost of the deletemin operation performed on an n-node skew heap is O(log n).
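
A direct transcription of this recursive formulation into Python (again with a hypothetical node layout, here with explicit left and right pointers) might look as follows:

class SkewNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def skew_merge(u, v):
    # Top-down skew merge: the loser is merged into the winner's right
    # sub-tree, and the winner's two sub-trees are then swapped.
    if u is None: return v
    if v is None: return u
    if v.key < u.key:            # make u the tree whose root wins the comparison
        u, v = v, u
    u.left, u.right = skew_merge(u.right, v), u.left
    return u

def skew_deletemin(h):
    # Remove the root and merge its two sub-trees.
    return h.key, skew_merge(h.left, h.right)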

Skew-pairing heaps. Similar to the pairing heap, the skew-pairing heap is a heap-ordered general tree. Let y_1, y_2, y_3, ... be the children of the tree root, in left-to-right order. After removing the root, the deletemin operation proceeds to combine these nodes as follows. First, a right-to-left incremental pairing pass is performed among the nodes in odd-numbered positions (the nodes y_1, y_3, y_5, ...). Let S_odd denote the resulting tree. Second, a right-to-left incremental pairing pass is performed among the nodes in even-numbered positions (the nodes y_2, y_4, y_6, ...). Let S_even denote the resulting tree. Finally, S_odd and S_even are paired together. (If S_even is empty, then the final result is given by S_odd.)
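
Reusing the Node class and the pair function from the pairing-heap sketch above (our own illustrative code), a minimal rendering of this deletemin is:

def incremental_pass(trees):
    # Right-to-left incremental pairing: each tree is paired with the
    # combined result of all trees to its right.
    result = None
    for t in reversed(trees):
        result = pair(t, result)
    return result

def skew_pairing_deletemin(h):
    kids = h.children                      # y_1, y_2, y_3, ... left to right
    s_odd = incremental_pass(kids[0::2])   # odd-numbered positions
    s_even = incremental_pass(kids[1::2])  # even-numbered positions
    return h.key, pair(s_odd, s_even)      # pair returns s_odd if s_even is empty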

Fredman [2] has introduced a transformation, referred to as depletion, which when applied to the skew heap induces the skew-pairing heap. In other words, he showed that both heaps perform the same comparisons, but in a different order. For the parameterized forms, we point out that the correspondence between the skew heap and the skew-pairing heap still holds for any k.

2 Parameterized pairing heaps

The only operation that is different from the implementation of the standard pairing heaps is the deletemin operation. In the parameterized form, the deletemin operation is implemented as follows. First, the root of the heap is removed and its value is reported. Let c be the number of the sub-trees of this deleted root. These c sub-trees are grouped into g = ⌈c/k⌉ groups each having k consecutive sub-trees, except possibly for the leftmost (or rightmost) group. In the first pass the sub-trees in each group are combined from right to left. In other words, starting from the rightmost sub-tree within the group, each sub-tree is paired with the result of the linkings of the sub-trees to its right in an incremental fashion. In the second pass, the resulting sub-trees are paired, in order from right to left, in a similar fashion to the first pass. Note that by choosing k equal to 2, the implementation is the same as that of the standard pairing heaps. The value of k may change from one deletemin to the other. See Figure 1 for an example of the deletemin operation for the parameterized pairing heaps for k = 2 and k = 3.

Fig. 1. The deletemin operation for the parameterized pairing heap, for k = 2 and 3.
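
Using the pair and incremental_pass helpers from the sketches in Section 1 (again our own illustration, not code from the paper), the parameterized deletemin can be sketched as:

def parameterized_deletemin(h, k):
    # Group the c children into ceil(c/k) groups of k consecutive sub-trees;
    # in this sketch the rightmost group is the one that may be smaller.
    kids = h.children
    groups = [kids[i:i + k] for i in range(0, len(kids), k)]
    # First pass: combine each group right to left.
    survivors = [incremental_pass(g) for g in groups]
    # Second pass: combine the group results right to left.
    return h.key, incremental_pass(survivors)

With k = 2 this reduces to the standard two-pass variant sketched earlier.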

The following theorem shows that the logarithmic bound on the deletemin operation holds for some ranges of k that depend on the number of children of the deleted root. It also shows that, to achieve the logarithmic cost, any fixed constant value of k suffices.

Theorem 1 For an n-node parameterized pairing heap, using a value of the parameter k = O(log n) such that k < max(c/(2 log n + 1), ε) (where c is the number of children of the root, and ε is a constant), the amortized cost of the deletemin operation is O(log n).

Proof. We use the potential technique [13] to prove the theorem. Instead of associating a potential with the nodes of the structure, we associate the potential with the links connecting nodes. Define the size of a sub-tree to be the number of nodes in this sub-tree. When a sub-tree, whose size is s_1, is linked as the leftmost child to another sub-tree whose size is s_2, a potential of log(s_1 + s_2) is associated with this link. (All logarithms are to the base 2.)

Let s_l be the size of the l-th sub-tree, from the right, of the deleted root. Since all the links that involved the deleted root took place by making a sub-tree the leftmost child of this root, the potential on the link joining the l-th sub-tree to the root is \log(1 + \sum_{d=1}^{l} s_d). When the root is cut by the deletemin operation, all the links joining that root to its children are also cut, and the potential decreases by \sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d), which accounts for the potential on these links. New links are created during the two pairing passes. Assume without loss of generality that k divides c. (The proof is similar if k does not divide c.) The increase in potential as the result of the new links produced by the first pass is \sum_{i=0}^{g-1} \sum_{j=2}^{k} \log(\sum_{d=1}^{j} s_{ik+d}). One may think about i as a group index and j as a position in a group. The increase in potential as the result of the new links produced by the second pass is \sum_{i=2}^{g} \log(\sum_{d=1}^{ik} s_d). Let the change in potential as a result of the deletemin operation be ∆P, then

∆P = \sum_{i=0}^{g-1} \sum_{j=2}^{k} \log(\sum_{d=1}^{j} s_{ik+d}) + \sum_{i=2}^{g} \log(\sum_{d=1}^{ik} s_d) - \sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d)     (1)


What we are looking for is to bound the value of ∆P in (1) from above by -c + O(log n). Since the actual number of comparisons involved in the deletemin is c - 1, adding the claimed change in potential to this actual cost implies the required logarithmic amortized cost.

To simplify (1), we write \sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d) in terms of i and j.

\sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d) > \sum_{l=1}^{c} \log(\sum_{d=1}^{l} s_d) = \sum_{i=0}^{g-1} \sum_{j=1}^{k} \log(\sum_{d=1}^{ik+j} s_d).

Breaking up this sum into two parts, for j = 1 and j = 2 .. k,

\sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d) > \sum_{i=0}^{g-1} \sum_{j=2}^{k} \log(\sum_{d=1}^{ik+j} s_d) + \sum_{i=0}^{g-1} \log(\sum_{d=1}^{ik+1} s_d)     (2)

Substituting for the third summation in (1), using (2),

∆P < \sum_{j=2}^{k} \sum_{i=1}^{g-1} [\log(\sum_{d=1}^{j} s_{ik+d}) - \log(\sum_{d=1}^{ik+j} s_d)] + \sum_{i=2}^{g} \log(\sum_{d=1}^{ik} s_d) - \sum_{i=0}^{g-1} \log(\sum_{d=1}^{ik+1} s_d).

Since \sum_{i=2}^{g} \log(\sum_{d=1}^{ik} s_d) - \sum_{i=0}^{g-1} \log(\sum_{d=1}^{ik+1} s_d) < \log n, the bound on ∆P is

∆P < \sum_{j=2}^{k} \sum_{i=1}^{g-1} [\log(\sum_{d=1}^{j} s_{ik+d}) - \log(\sum_{d=1}^{ik+j} s_d)] + \log n.

For every value of j from 2 to k, define

T_j \overset{def}{=} \sum_{i=1}^{g-1} [\log(\sum_{d=1}^{j} s_{ik+d}) - \log(\sum_{d=1}^{ik+j} s_d)]     (3)

Substituting with these new variables for the bound on ∆P, then

∆P < \sum_{j=2}^{k} T_j + \log n     (4)

If k ≥ c/(2 log n + 1), then k < ε, which implies c < ε(2 log n + 1). Hence, the actual number of comparisons is O(log n). Since the values of all the T_j's are negative, it follows from (4) that ∆P = O(log n). This implies the O(log n) amortized cost for the deletemin, and the theorem follows. Hence, we may assume for the rest of the proof that k < c/(2 log n + 1).


Next, we show that each of the T_j's is bounded from above by a negative value proportional to the number of groups g. Since there are k - 1 such variables, the claimed bound of -c + O(log n) on ∆P follows.

Consider any value of j from 2 to k. For all i from 0 to g - 1, define

p_i \overset{def}{=} \sum_{l=1}^{j} s_{ik+l}.

And for all i from 1 to g - 1, define

t_i \overset{def}{=} \log p_i - \log(\sum_{d=0}^{i} p_d),     R_i \overset{def}{=} \sum_{d=0}^{i} p_d / \sum_{d=0}^{i-1} p_d.

Substituting with the defined formula for t_i in (3), then

T_j < \sum_{i=1}^{g-1} t_i     (5)

The following relations also follow from the defined formulas for t_i and R_i:

\sum_{i=1}^{g-1} \log R_i < \log n,     t_i = \log((R_i - 1)/R_i).

Using Markov's inequality, there are at least (g-1)/2 values of i satisfying

R_i ≤ 2^{2 \log n / (g-1)},     t_i ≤ \log((2^{2 \log n / (g-1)} - 1) / 2^{2 \log n / (g-1)}).

Since k < c/(2 log n + 1), then g - 1 > 2 log n, which implies (2^{2 \log n / (g-1)} - 1) / 2^{2 \log n / (g-1)} < 1/2. Hence, for at least (g-1)/2 values of i we have t_i < -1. From the definition of t_i it follows that t_i < 0 for all values of i. From (5), the following relation holds for all j from 2 to k:

T_j < -(g-1)/2     (6)

Substituting in (4), then

∆P < -(k-1)(g-1)/2 + \log n.

Had we scaled the potential function to be (2k/(k-1)) \log(s_1 + s_2), we would get

∆P < -k(g-1) + (2k/(k-1)) \log n
   < -c + k + (2k/(k-1)) \log n.

Since k = O(log n), then ∆P < -c + O(log n), and the theorem follows.

The case when k does not divide c is handled similarly. We only consider the g - 1 groups with k sub-trees each, and exclude the last group (rightmost or leftmost, according to the implementation) from the derivations. Without loss of generality, assuming that this incomplete group is the rightmost, Equation (1) would be

∆P ≤ \sum_{i=0}^{g-2} \sum_{j=2}^{k} \log(\sum_{d=1}^{j} s_{ik+d}) + \sum_{i=2}^{g-1} \log(\sum_{d=1}^{ik} s_d) + \log n - \sum_{l=1}^{⌊c/k⌋ k} \log(1 + \sum_{d=1}^{l} s_d).

It follows that T_j < -(g-2)/2 instead of (6). The bound of -c + O(log n) on ∆P follows similarly. □

The same potential function that we use in this paper was also used by Fredman [4] to prove the following result, stated in [3]. Beginning with an initially empty pairing heap and performing a sequence of m ≥ n heap operations, which includes at most n deletemin operations, such that the heap size does not exceed n at any point, the total execution cost of the sequence does not exceed O(m \log_{2m/n} n). This bound implies a constant cost for the decreasekey operation when m = Ω(n^{1+ε}), for any constant ε > 0.

3 Parameterized skew heaps and skew-pairing heaps

We define the parameterized skew heap as a heap-ordered tree, each node of which may have up to k children, for some specified constant k. The deletemin operation is performed as follows. First the root of the heap is deleted. Then, the at most k sub-trees are merged into one tree. Let u_1, u_2, ..., u_k be the sub-trees of the deleted root that will be merged, from left to right. Let merge(u_1, u_2, ..., u_k) denote the result of the merging. If k - 1 trees are empty, the merging simply returns the only non-empty tree. Otherwise, assume that the root of u_i is the smallest root. Let v_1, v_2, ..., v_k be the children of the root of u_i, in that order. Then, merge(u_1, u_2, ..., u_k) has the root of u_i as its root, and merge(v_k, u_1, ..., u_{i-1}, u_{i+1}, ..., u_k), v_1, v_2, ..., v_{k-1} as this root's sub-trees from left to right. See Figure 2 for an example of the deletemin operation for the 3-skew heap.

Fig. 2. The deletemin operation for the 3-skew heap.
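
A Python sketch of this merge, reusing the Node class (a key plus a left-to-right children list) from the earlier pairing-heap sketch and representing empty trees by None, could read as follows (our illustration, not code from the paper):

def k_skew_merge(trees, k):
    # trees: exactly k sub-trees, in left-to-right order; empty trees are None.
    nonempty = [t for t in trees if t is not None]
    if len(nonempty) <= 1:
        return nonempty[0] if nonempty else None
    # u_i: the tree whose root is smallest.
    i = min((x for x in range(k) if trees[x] is not None),
            key=lambda x: trees[x].key)
    winner, rest = trees[i], trees[:i] + trees[i + 1:]
    v = winner.children + [None] * (k - len(winner.children))   # pad to k children
    # New sub-trees, left to right: merge(v_k, u_1, ..., u_{i-1}, u_{i+1}, ..., u_k),
    # followed by v_1, ..., v_{k-1}; empty entries are dropped.
    merged = k_skew_merge([v[-1]] + rest, k)
    winner.children = [t for t in [merged] + v[:-1] if t is not None]
    return winner

def k_skew_deletemin(h, k):
    kids = h.children + [None] * (k - len(h.children))
    return h.key, k_skew_merge(kids, k)

With k = 2 and the children held as explicit left/right pointers, this reduces to the binary skew merge sketched in Section 1.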

The following theorem can be proved in a similar way to the original proof of Sleator and Tarjan [11] for the standard binary case.

Theorem 2 For an n-node parameterized skew heap, using any constant value of the parameter k, the amortized cost of the deletemin operation is O(log n).

Next, we describe how the deletemin operation is implemented for the parameterized skew-pairing heap. Let y_0, y_1, y_2, ... be the children of the tree root being deleted, in left-to-right order. For all i from 0 to k - 1, a total of k right-to-left incremental pairing passes are performed among the nodes in all the positions whose numbers are i modulo k. Finally, the values of the roots of the resulting k trees are sorted in ascending order forming the sequence r_1, r_2, ..., r_k, and the node of r_i is linked to the node of r_{i-1} as its leftmost child, for all i from 2 to k. Note that the way these k sub-trees are combined does not affect the logarithmic bound, as long as k is a constant. See Figure 3 for an example of the deletemin operation for the 3-skew-pairing heap.

Fig. 3. The deletemin operation for the 3-skew-pairing heap.
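
Again reusing pair and incremental_pass from the earlier sketches, a hypothetical Python rendering of this deletemin is:

def k_skew_pairing_deletemin(h, k):
    kids = h.children                      # y_0, y_1, y_2, ... left to right
    # One right-to-left incremental pairing pass per residue class modulo k.
    roots = [incremental_pass(kids[i::k]) for i in range(k)]
    roots = [t for t in roots if t is not None]
    # Sort the resulting roots; r_i becomes the leftmost child of r_{i-1}.
    roots.sort(key=lambda t: t.key)
    for prev, cur in zip(roots, roots[1:]):
        prev.children.insert(0, cur)
    return h.key, roots[0] if roots else None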

It is quite interesting to observe that Fredman's proof [2], which shows the correspondence between the standard implementation of the skew heap and the skew-pairing heap under the depletion transform, also applies for the parameterized form of the two structures. We skip the details of this proof in the current context. However, we prove next that the parameterized skew-pairing heap is as efficient as the standard skew-pairing heap, when k is a constant.

Theorem 3 For an n-node parameterized skew-pairing heap, using any constant value of the parameter k, the amortized cost of the deletemin operation is O(log n).

Proof. We use the same potential function used in Theorem 1. Let c be the number of children of the deleted root. Let s_l be the size of the l-th sub-tree from the right, of the deleted root. When the root is cut by the deletemin operation, all the links joining that root to its children are also cut, and the potential decreases by the amount on these links. Since all the links took place by making a sub-tree the leftmost child of the deleted root, the decrease in potential is \sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d). New links are created during the two linking passes. Assume without loss of generality that k divides c, and let g = c/k. (The proof is similar if k does not divide c.) The increase in potential as the result of the new links produced by the first pass is \sum_{j=1}^{k} \sum_{i=1}^{g-1} \log(\sum_{d=0}^{i} s_{dk+j}). One may think about i as a group index and j as a position in a group. The increase in potential as the result of linking the resulting k sub-trees is less than k \log n. Let the change in potential as a result of the deletemin operation be ∆P, then

∆P < \sum_{j=1}^{k} \sum_{i=1}^{g-1} \log(\sum_{d=0}^{i} s_{dk+j}) - \sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d) + k \log n.     (7)

What we are looking for is to bound ∆P in (7) from above by -c + O(log n). Since the actual number of comparisons involved in the deletemin is c - 1, adding the claimed change in potential to this actual cost implies the required logarithmic amortized cost.

To simplify (7), we write \sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d) in terms of i and j.

\sum_{l=1}^{c} \log(1 + \sum_{d=1}^{l} s_d) > \sum_{i=1}^{g-1} \sum_{j=1}^{k} \log(\sum_{d=1}^{ik+j} s_d)     (8)

Breaking up the summation \sum_{j=1}^{k} \sum_{i=1}^{g-1} \log(\sum_{d=0}^{i} s_{dk+j}) for i = 1 .. g - 2 and i = g - 1, and using Jensen's inequality,

\sum_{j=1}^{k} \sum_{i=1}^{g-1} \log(\sum_{d=0}^{i} s_{dk+j}) < \sum_{i=0}^{g-2} \sum_{j=1}^{k} \log(\sum_{d=0}^{i} s_{dk+j}) + k \log(n/k)     (9)

Substituting in the value of ∆P in (7), using (8) and (9),

∆P < \sum_{i=0}^{g-2} \sum_{j=1}^{k} \log(\sum_{d=0}^{i} s_{dk+j}) - \sum_{i=1}^{g-1} \sum_{j=1}^{k} \log(\sum_{d=1}^{ik+j} s_d) + 2k \log n - k \log k.


For all i from 1 to g - 1 and j from 1 to k, let X_{i,j} \overset{def}{=} \sum_{d=0}^{i-1} s_{dk+j}, which implies \sum_{d=1}^{ik} s_d = \sum_{j=1}^{k} X_{i,j}. Then

∆P < \sum_{i=1}^{g-1} [\sum_{j=1}^{k} \log X_{i,j} - k \log \sum_{j=1}^{k} X_{i,j}] + 2k \log n - k \log k.

Using Jensen's inequality, then for any value of i,

k \log \sum_{j=1}^{k} X_{i,j} - k \log k > \sum_{j=1}^{k} \log X_{i,j}.

It follows that

∆P < (g-1)(-k \log k) + 2k \log n - k \log k
   < -c \log k + 2k \log n.

Had we scaled the potential function to be (1/\log k) \log(s_1 + s_2), then

∆P < -c + (2k/\log k) \log n.

Since k is a constant, the claimed bound on ∆P follows.

The case when k does not divide c is treated similarly. We only consider the first g - 1 groups and exclude the last group from the negative part of ∆P. The bound of -c + O(log n) on ∆P follows similarly. □

4 Experimental findings

Several authors have performed experiments on pairing heaps, either comparing their performance with other forms of priority queues [7,8], or comparing different variants of this data structure [2,12].

The purpose of our experiments is to study the effect of changing the value of the parameter k on the deletemin operation, performed on the pairing heaps. Informally, we may think about the first pass of the deletemin operation as a way to group the queues, which helps successive deletemin operations to achieve the O(log n) amortized bound. After deleting the node with the minimum value, when there is a larger number of queues in the heap it is better to increase the number of queues per group, which in turn leads more quickly to a better structure. The intuition behind this is the following. Consider performing a deletemin operation when the root of the heap has c children. If the new root of the heap had r children before this deletemin operation, this root will have fewer than r + k + c/k children after the operation. For a larger value of c, a bigger value of k is preferred, to decrease the work needed for the next operations. Among the possible operations on the pairing heaps, the decreasekey and insert would cause the structure of the heap to degrade, leading to a larger value for c.

We follow a similar strategy to [2,12]. After building an initial pairing heap of a specified size (using repeated insertion), the remaining operations are subdivided into multiple rounds, where each round consists of a specified number of decreasekey operations, a single insertion, and a single deletemin operation. A round of operations leaves the priority queue size unchanged. As mentioned above, by increasing the number of decreasekey operations, the structure of the heap degrades and the work done per deletemin is expected to increase. Theoretically, the cost per decreasekey operation may be anywhere in O(log n). Empirically, when the decreasekey operations are performed on random nodes, the cost per decreasekey operation is estimated to be a constant [7,12]. In our experiments, the decreasekey operations are performed on nodes selected at random using a uniform distribution. Different values of k are used in the implementation of the deletemin operation. We use a fixed constant value of k for every individual sample run.
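
The round structure just described can be summarized by a small driver such as the one below (a hypothetical Python sketch: heap, its random_node helper, and the comparisons counter are assumed names, not part of any implementation referenced in the paper):

import random

def run_rounds(heap, rounds, decreasekeys_per_round):
    # One round: a fixed number of decreasekey operations on uniformly random
    # nodes, a single insertion, and a single deletemin, so the heap size is
    # unchanged.  Returns the average number of comparisons per deletemin.
    total = 0
    for _ in range(rounds):
        for _ in range(decreasekeys_per_round):
            node = heap.random_node()                   # assumed helper: uniform choice
            heap.decreasekey(node, random.random())     # decrement amount is illustrative
        heap.insert(random.random())
        before = heap.comparisons                       # assumed running comparison counter
        heap.deletemin()
        total += heap.comparisons - before
    return total / rounds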

We perform two types of experiments. In the first type of experiments, no key values are assigned to the nodes. Instead, we adopt the following adversary. When a comparison takes place between two nodes, the node that has a smaller number of children loses the comparison, and is linked to the other node. For example, in an insert operation the new node is linked to the current root of the heap. This outcome yields the least amount of information for a given comparison, in effect maximizing the number of remaining order permutations consistent with the data structure. The fact that, during the deletemin operation, we only compare the values of the children of the deleted root implies that this adversary corresponds to a consistent set of numerical data. In the second type of experiments, the values of the nodes are randomly set using a uniform distribution. When a comparison takes place between two nodes, these assigned values are used to judge the result of the comparison. An exception is that we impose that the value of a newly inserted item is larger than the root of the heap at the time of the insertion. Hence, the newly inserted node is linked to this root. When a decreasekey is performed on a node, the value of this node decreases using an exponentially decaying function.
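
For the first type of experiments, the adversary amounts to replacing the key comparison in the pairing operation by a comparison of child counts, for example (our illustrative variant of the earlier pair function):

def adversarial_pair(a, b):
    # No key values are stored: the root with fewer children loses the
    # comparison and is linked as the leftmost child of the other root
    # (ties broken arbitrarily).
    if a is None: return b
    if b is None: return a
    if len(b.children) > len(a.children):
        a, b = b, a
    a.children.insert(0, b)
    return a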

For some specified values of the number of decreasekey operations per round, we repeat our experiments for different values of the heap size and different values of k. The numbers of decreasekey operations per round used are 0, 15, 50 and 200. We tried different heap sizes between 2^10 and 2^20. For each run of the experiment, we fixed the value of k to be a constant such as 2, 3, 5, 10 or 20. After building an initial heap structure, around 10^6 rounds of operations are performed. For the same set of values, the experiments are repeated 100 times for each set of numbers. Our plots show the cost per deletemin versus the logarithm of the heap size n. In Figures 4 and 5 the results of the first type of experiments, using the adversary, are reported. In Figures 6 and 7 the results of the second type of experiments, using random data, are reported. We implemented the programs using Borland C++ Builder V6. A PC with an Intel Pentium III 1.2 GHz processor and 256 MB memory is used to run the experiments.

Table 1. The best k for different counts of decreasekeys per round, using the empirical results.

decreasekeys/round    0    15    50    200
k*                    2     3     5     10

As mentioned by Jones [7], the initial structure of the priority queue would influence its subsequent performance. Thus, our results are measured when the heap reaches a steady-state structure, after several rounds of operations are performed. All of our results are reported in terms of the average number of comparisons and the running time per deletemin operation. Generally speaking, the work done per comparison is the dominating factor among the operations that may be performed on a pairing heap. Since most of the work done is related to comparisons and the work done per comparison is independent of the value of k, it follows that the main factor that determines how the data structure performs is the number of comparisons.

As expected, the average number of comparisons and the running time per deletemin increase as the number of decreasekey operations per round increases. When the adversarial data is used, the cost of the deletemin is larger than when random data is used. Also, increasing the value of k has a larger effect when more decreasekey operations are performed per round. The results show an improvement of about 30% for 200 decreasekey operations per round when we fix k = 10 instead of using k = 2. On the other hand, using k = 2 is the best when no decreasekey operations are performed per round. We summarize in Table 1 our conclusions for the best values of k, when different numbers of decreasekey operations are performed per round.

For future work, one may try to use the parameterized pairing heaps in the implementation of real practical algorithms. A good candidate would be Prim's algorithm for finding the minimum spanning tree (MST) of a graph [8].


References

[1] R. Bayer and E. McCreight. Organization and maintenance of large ordered indices. Acta Informatica 1(3) (1972), 173-189.

[2] M. Fredman. A priority queue transform. 3rd Workshop on Algorithm Engineering 1999. In LNCS 1668 (1999), 243-257.

[3] M. Fredman. On the efficiency of pairing heaps and related data structures. Journal of the ACM 46(4) (1999), 473-501.

[4] M. Fredman. Personal communication.

[5] M. Fredman, R. Sedgewick, D. Sleator, and R. Tarjan. The pairing heap: a new form of self-adjusting heap. Algorithmica 1(1) (1986), 111-129.

[6] J. Iacono. Improved upper bounds for pairing heaps. Scandinavian Workshop on Algorithm Theory 2000. In LNCS 1851 (2000), 32-45.

[7] D. Jones. An empirical comparison of priority-queue and event-set implementations. Communications of the ACM 29(4) (1986), 300-311.

[8] B. Moret and H. Shapiro. An empirical assessment of algorithms for constructing a minimum spanning tree. DIMACS Monographs in Discrete Mathematics and Theoretical Computer Science 15 (1994), 99-117.

[9] M. Sherk. Self-adjusting k-ary search trees. Journal of Algorithms 19 (1995), 25-44.

[10] D. Sleator and R. Tarjan. Self-adjusting binary search trees. Journal of the ACM 32(3) (1985), 652-686.

[11] D. Sleator and R. Tarjan. Self-adjusting heaps. SIAM Journal on Computing 15(1) (1986), 52-69.

[12] J. Stasko and J. Vitter. Pairing heaps: experiments and analysis. Communications of the ACM 30(3) (1987), 234-249.

[13] R. Tarjan. Amortized computational complexity. SIAM Journal on Algebraic and Discrete Methods 6 (1985), 306-318.

Fig. 4. Adversarial data: average number of comparisons per deletemin versus log n, for (a) 0, (b) 15, (c) 50, and (d) 200 decreasekey operations per round.

Fig. 5. Adversarial data: time (in microseconds) per deletemin versus log n, for (a) 0, (b) 15, (c) 50, and (d) 200 decreasekey operations per round.

Fig. 6. Random data: average number of comparisons per deletemin versus log n, for (a) 0, (b) 15, (c) 50, and (d) 200 decreasekey operations per round.

Fig. 7. Random data: time (in microseconds) per deletemin versus log n, for (a) 0, (b) 15, (c) 50, and (d) 200 decreasekey operations per round.