8/2/2019 Optimal Spanning Trees
1/36
Optimal Spanning Trees
Nahla Mohamed Ahmed Ibrahim ([email protected])
African Institute for Mathematical Sciences (AIMS)
Supervised by: Prof Stephan Wagner
Stellenbosch University, South Africa
19 May 2011
Submitted in partial fulfillment of a postgraduate diploma at AIMS

Abstract
In a given weighted graph (network), every edge has a certain weight (cost), and one wants to select a certain set of edges whose total cost is a minimum such that these edges keep all vertices connected: an optimal spanning tree. This essay shows that this problem can essentially be solved by a greedy approach. It explains Prim's algorithm and Kruskal's algorithm, which are well-known greedy algorithms for finding a Minimum Spanning Tree (M.S.T). It also contains implementations of these algorithms, which require O(m log n) time for a graph with n vertices and m edges. If the vertices are points in the plane and the weights are Euclidean distances, then one obtains the Euclidean Minimum Spanning Tree problem (E.M.S.T). The property that an E.M.S.T is a subgraph of the Delaunay triangulation (D.T) can be used to increase the efficiency of finding an E.M.S.T to O(n log n), by applying either of the greedy algorithms to the D.T, which has O(n) edges.
Declaration
I, the undersigned, hereby declare that the work contained in this essay is my original work, and that any work done by others or by myself previously has been acknowledged and referenced accordingly.
Nahla Mohamed Ahmed Ibrahim, 19 May 2011

Contents
Abstract i
1 Introduction 1
2 Introducing Graphs and Spanning Trees 2
2.1 Introducing Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2.2 Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3 Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.4 Depth First Search and Breadth First Search . . . . . . . . . . . . . . . . . . . . . . . . 5
3 Optimum Spanning Trees 7
3.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Kruskal's Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.3 Prim's Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.4 The Comparison of Kruskal's and Prim's Algorithms . . . . . . . . . . . . . . . . . . . . 13
4 Matroids 14
4.1 Definition of a Matroid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2 Optimization Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5 Euclidean Minimum Spanning Trees 18
5.1 Delaunay Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.2 The Delaunay Triangulation and E.M.S.T . . . . . . . . . . . . . . . . . . . . . . . . . 20
6 Conclusion 23
A The Implementation Codes for Kruskal's and Prim's Algorithms 24
A.1 Kruskal's Algorithm Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
A.2 Prim's Algorithm Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
References 33

1. Introduction
The Minimum Spanning Tree (M.S.T) problem is the problem of finding a spanning tree of a connected weighted graph with minimum cost, and it is one of the most well-known problems of combinatorial optimization. The methods of its solution have generated important ideas in the design of computer algorithms. The M.S.T problem has obvious applications in communication networks, such as connecting a set of computers in a network, or linking a set of cities with minimum cost. It sometimes occurs as a subproblem in finding a solution to a bigger problem such as the Travelling Salesman Problem or the Minimum Weight Matching Problem.
The first efficient solutions of the M.S.T problem were given by Kruskal [Kru56] and Prim [Pri57]. Loberman and Weinberger [LW57] also discovered the same algorithm that was proposed by Kruskal, but their work was published a little after Kruskal's paper. Kruskal's construction can be viewed as a special case of a more general construction that was given in [LW57]. Prim and Dijkstra [Dij60] gave a similar approach in their constructions, which was concerned with storage requirements [GH85]. Suitable data structures for such algorithms were developed by J. E. Hopcroft and J. D. Ullman [HU73], and their application to the M.S.T problem was implemented by M. D. McIlroy. The best implementations of Kruskal's and Prim's algorithms for finding an M.S.T of a given graph with n vertices and m edges require O(m log n) time. Nonetheless, time complexity O(m α(m, n)), where α is the inverse of Ackermann's function [Tar75], can be achieved if the edges of the graph are given in sorted order. Another way of achieving this time complexity is by using the radix sort algorithm on the edge weights when the weights are relatively small integers [GH85].
This essay is grouped into four main chapters. In Chapter 2, we introduce some of the basic concepts in graph theory that we need to represent our problem. In Chapter 3, we discuss Kruskal's and Prim's algorithms, and we refer to their implementation in Appendix A. Moving further on the theoretical side, Chapter 4 introduces matroids, which allow us to generalise the greedy method that is used to provide a solution to the M.S.T problem. Finally, in Chapter 5, we look at Euclidean Minimum Spanning Trees (E.M.S.T), a special case of the M.S.T problem in which the vertices are points in the plane and the weights are distances.

2. Introducing Graphs and Spanning Trees
In this chapter, we introduce some of the basic concepts of graph theory, which we will need in the following chapters. In Section 2.1 we define some basic expressions of graph theory, followed by the data structures which we need for computational purposes in Section 2.2. In Section 2.3, we introduce an important class of graphs, called trees. Section 2.4 describes the Depth First Search and Breadth First Search algorithms for generating what we call spanning trees of a connected graph.
2.1 Introducing Graphs
A graph G is a pair (V, E), where V is a set of elements called vertices, and E is a set of pairs (u, v) of vertices called edges; we write e = (u, v). Figure 2.1 shows an example of a graph G with 5 vertices and 7 edges.
Figure 2.1: Graph G = ({v1, v2, . . . , v5}, {e1, e2, . . . , e7})
The degree of a vertex v, denoted by d(v), is the number of edges having v as an endpoint. In Figure 2.1, we have d(v2) = d(v5) = 2, d(v1) = d(v4) = 3 and d(v3) = 4. A vertex v with d(v) = 0 is called an isolated vertex. A regular graph is a graph where all vertices have the same degree; a k-regular graph is a graph where the degree of every vertex is k.
If the number of edges in a graph G is close to the maximal number of edges, G is called a dense graph. In the case where G has only few edges, it is called a sparse graph. When the two endpoints of an edge are the same vertex, the edge is called a loop. Multiple edges occur when more than one edge shares the same two endpoints. For our purposes we will work with simple graphs, which are graphs having no loops or multiple edges.
A complete graph is a graph which has an edge between every pair of distinct vertices. Notice that a complete graph with n vertices is an (n - 1)-regular graph. Figure 2.2 is an example of an incomplete graph, whereas the graph in Figure 2.3 is complete, and is 4-regular [Gib85].
A graph is said to be finite if the number of vertices |V| and the number of edges |E| are finite. Throughout our discussions, we shall normally work with finite graphs. H is a subgraph of G if it is obtained by removing a nonzero number of edges and/or vertices of G. In such a case, G is called a supergraph of H and we write H ⊆ G. Removing a vertex necessarily implies that we remove all the associated edges, whereas removing an edge does not imply removing any vertices.

Figure 2.2: Incomplete graph

Figure 2.3: Complete graph
A path in a graph G is a sequence of vertices such that from each of these vertices (except the last) there is an edge to the next vertex in the sequence. A finite path always has a first vertex, called a start vertex, and a last vertex, called an end vertex. A cycle is a path such that the start vertex and end vertex are the same. Note that the choice of the start vertex in a cycle is arbitrary. A path with no repeated vertices is called a simple path, and a cycle with no repeated vertices except the start and end vertex is called a simple cycle. The length of a path or a cycle is the number of edges it contains. Two paths are edge-disjoint if they do not have a common edge [Gib85].
Two vertices are connected if there is a path between them. If any two vertices in a graph G are connected, then the graph is called a connected graph. The maximal connected subgraphs of G are called its components.
A planar graph is a graph that can be drawn in the plane without edge crossings, i.e., edges intersect only at their common vertices. A plane graph is a planar graph that has been drawn in the plane. A dual graph G* of a plane graph G is a plane graph whose vertices correspond to the faces of G. The edges of G* correspond to the edges of G as follows: if e is an edge of G with face X on one side and face Y on the other side, then the endpoints of the dual edge e* are the vertices x*, y* of G* that represent the faces X, Y of G. This concept is only valid for plane graphs [Jai06, Wes01].
In some applications, it is natural to assign a number to each edge of a graph. The resulting graph is called a weighted graph. The weight associated with an edge is usually a real number, and it is denoted by w(e). The sum of the weights of all edges of a graph is called its weight [Gib85].
2.2 Data Structures
We now introduce some of the common data structures that are used to represent graphs for computational purposes.

2.2.1 Adjacency Matrices. An adjacency matrix is a way of representing the edges in a graph as a matrix. For a given graph G = (V, E), its adjacency matrix is a |V| × |V| matrix A such that

A[i, j] = 1 if (vi, vj) ∈ E, and A[i, j] = 0 otherwise,

where V = {v1, v2, . . . , v|V|}. In the case of a weighted graph G = (V, E), its adjacency matrix A is given by

A[i, j] = w((vi, vj)) if (vi, vj) ∈ E, and A[i, j] = 0 otherwise.
Figure 2.4 shows a weighted graph G and its corresponding adjacency matrix A. Note that A is symmetric [Wil96].

        v1  v2  v3  v4  v5
   v1    0   2   0   2   4
   v2    2   0   1   0   3
A: v3    0   1   0   6   0
   v4    2   0   6   0   5
   v5    4   3   0   5   0

   v1 : (v2, 2), (v4, 2), (v5, 4)
   v2 : (v1, 2), (v3, 1), (v5, 3)
L: v3 : (v2, 1), (v4, 6)
   v4 : (v1, 2), (v3, 6), (v5, 5)
   v5 : (v1, 4), (v2, 3), (v4, 5)

Figure 2.4: The adjacency matrix A and the adjacency list L of graph G.
Clearly, the specification of A requires O(n²) steps; thus the use of an adjacency matrix for representing graphs rules out all algorithms of complexity O(|E|).
2.2.2 Adjacency List. An adjacency list is a representation of all edges or arcs in a graph as a list. Notice that with an undirected graph, an adjacency list representation duplicates the edge information. Adjacency lists can also be used to represent weighted graphs, where each node in the adjacency list represents an edge with its weight, as shown in Figure 2.4. An adjacency list is an efficient representation of a graph, because the list of each vertex only contains the vertices that are its neighbours. Thus the specification of the adjacency list requires O(|E|) steps.
2.3 Trees
In this section, we introduce the notion of trees, which form an important class of graphs. A tree T is a connected graph with no cycles. The edges of a tree are called branches. A forest is a disjoint union of trees.
Let T be a tree with n vertices. Then the following statements are equivalent [Wil96]:

- T contains no cycles, and has n - 1 edges.
- T is connected, and has n - 1 edges.
- There is a unique path between every pair of vertices in T.
- T contains no cycles, and adding a new edge to T creates a unique cycle.
- T is connected, and removing any edge from T makes it disconnected.
A spanning tree of a connected graph G is a subgraph of G that includes all the vertices of G and has the properties of a tree.
2.3.1 Theorem. A graph is connected if, and only if, it has a spanning tree.
Proof. Let G be a connected graph. While G has a cycle, remove one edge from the cycle, until we get a connected acyclic subgraph H. Then H is a spanning tree of G. On the other hand, if G has a spanning tree, then there is a path between each pair of vertices in G; thus G is connected.
2.4 Depth First Search and Breadth First Search
In this section, we introduce some commonly used algorithms for generating spanning trees of a connected graph.
2.4.1 Depth First Search Algorithm. To generate a spanning tree of a given connected graph G, we need to visit all the vertices of G exactly once. Suppose we are currently at vertex v in the Depth First Search (D.F.S) algorithm; the general requirement is that the next vertex to be visited is adjacent to v and has not yet been visited. If no such vertex exists, then the search returns to the vertex visited just before v. This process is repeated until every vertex has been visited. The D.F.S procedure is shown in Algorithm 1.
Algorithm 1 D.F.S algorithm
Require: Connected graph G = (V, E).
Ensure: Spanning tree T.
T ← ∅
for all v ∈ V do
    v.visit ← False
end for
for all v ∈ V do
    if v.visit = False then
        DFS(v) {Defined in Algorithm 2}
    end if
end for

Algorithm 2 DFS(u)
Require: vertex u.
Ensure: DFS(u).
u.visit ← True
for all u' adjacent to u do
    if u'.visit = False then
        T ← T ∪ {(u, u')}
        DFS(u')
    end if
end for
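Algorithms 1 and 2 can be sketched together in Python as follows (illustrative names; this code is not from the essay's appendix):

```python
# Recursive depth-first search that collects the tree edges into T.
# The graph is given as an adjacency-list dictionary.
def dfs_spanning_tree(graph):
    visited = set()
    T = []

    def dfs(u):
        visited.add(u)
        for v in graph[u]:
            if v not in visited:
                T.append((u, v))  # tree edge discovered by the search
                dfs(v)

    for v in graph:          # the outer loop of Algorithm 1
        if v not in visited:
            dfs(v)
    return T

G = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
tree = dfs_spanning_tree(G)
print(tree)  # [('a', 'b'), ('b', 'c'), ('c', 'd')]
```

For a connected graph with n vertices, the returned list has exactly n - 1 edges, as expected of a spanning tree.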
2.4.2 Breadth First Search Algorithm. Suppose we are currently at a vertex v; the general requirement in Breadth First Search (B.F.S) is that all vertices adjacent to v which have not yet been visited are visited, in some order, right afterwards. This process is repeated for each of those adjacent vertices, until every vertex has been visited. The B.F.S procedure is shown in Algorithm 3.
The running time of D.F.S and B.F.S is O(|E|); therefore a spanning tree can be found in linear time using either of these algorithms [Gib85].
Algorithm 3 B.F.S algorithm
Require: Connected graph G = (V, E).
Ensure: Spanning tree T.
i ← 1
L ← []
T ← ∅
for all v ∈ V do
    v.order ← 0
end for
choose u from V
u.order ← i
i ← i + 1
L.append(u)
while L is nonempty do
    remove the first element u from L
    for all u' adjacent to u do
        if u'.order = 0 then
            u'.order ← i
            i ← i + 1
            L.append(u')
            T ← T ∪ {(u, u')}
        end if
    end for
end while
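A Python sketch of Algorithm 3 (illustrative names, not the appendix code), with a double-ended queue playing the role of the list L:

```python
from collections import deque

def bfs_spanning_tree(graph, start):
    order = {start: 1}   # v.order in the pseudocode; absent means unvisited
    i = 1
    T = []
    L = deque([start])
    while L:
        u = L.popleft()          # remove the first element of L
        for v in graph[u]:
            if v not in order:   # v.order = 0 in the pseudocode
                i += 1
                order[v] = i
                L.append(v)
                T.append((u, v))
    return T, order

G = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
tree, order = bfs_spanning_tree(G, "a")
print(tree)  # [('a', 'b'), ('a', 'c'), ('c', 'd')]
```

Note that B.F.S visits all neighbours of a vertex before moving deeper, so the tree edges come out level by level, unlike the D.F.S tree.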

3. Optimum Spanning Trees
In this chapter, we introduce the problem of finding Minimum Spanning Trees (M.S.T) of a connected weighted graph. We also discuss Prim's and Kruskal's algorithms, which are the best-known classic greedy algorithms for solving the M.S.T problem of a given graph G. In order to simplify the description of the algorithms, we assume G is simple and connected.
3.1 Problem Statement
Consider the problem of constructing a railway system linking a set of towns, or connecting a set of routers in a computer network, given the cost of all possible direct connections. How can we make all the connections at a minimum cost? We can model this kind of problem using graph theory, by generating a weighted graph G whose vertices represent the objects we need to connect (towns or routers), and whose edges with their weights represent the direct links with their costs.

Greedy algorithms are algorithms that use the strategy of constructing a solution piece by piece, always choosing the next piece that makes the solution locally optimal. Although this approach can fail to give the optimal solution for some computational tasks, our problem is one of the applications where greedy algorithms succeed in computing the optimum solution [DPV06].
Problem statement: Given a weighted graph G, we are interested in finding a spanning tree T with the smallest total weight.
3.2 Kruskal's Algorithm
Kruskal's algorithm is an iterative method for solving the M.S.T problem of a connected weighted graph G with n vertices. The algorithm starts by initializing a graph T with all the vertices of G and no edges. After that, it repeats the operation of adding an edge of minimum weight to T if no cycle is created, and ignoring the edge otherwise. This repetition terminates when T has (n - 1) edges.
Algorithm 4 Kruskal's Algorithm for Solving the M.S.T Problem
Require: A simple connected weighted graph G with n vertices.
Ensure: A minimum spanning tree T of G.
Sort the edges with respect to their weights as e1, e2, . . . , em
T ← ∅
i ← 1
while T has fewer than (n - 1) edges do
    if (T ∪ {ei}) does not contain a cycle then
        add ei to T
    end if
    i ← i + 1
end while
Figure 3.1 illustrates the execution of this algorithm.

(Figure panels: the connected weighted graph G, followed by Steps 0 to 4 of the construction; Step 4 shows the resulting minimum spanning tree of G.)

Clusters   Step 0   Step 1      Step 2          Step 3          Step 4
C(v1)      {v1}     {v1, v4}    {v1, v2, v4}    {v1, v2, v4}    {v1, v2, v3, v4, v5}
C(v2)      {v2}     {v2}        {v1, v2, v4}    {v1, v2, v4}    {v1, v2, v3, v4, v5}
C(v3)      {v3}     {v3}        {v3}            {v3, v5}        {v1, v2, v3, v4, v5}
C(v4)      {v4}     {v1, v4}    {v1, v2, v4}    {v1, v2, v4}    {v1, v2, v3, v4, v5}
C(v5)      {v5}     {v5}        {v5}            {v3, v5}        {v1, v2, v3, v4, v5}

Figure 3.1: The steps of constructing an M.S.T of a connected weighted graph G by using Kruskal's algorithm; the table shows the changing of the clusters of vertices at each step.

8/2/2019 Optimal Spanning Trees
12/36
Section 3.2. Kruskals Algorithm Page 9
3.2.1 Theorem (Correctness). Given a connected weighted graph G with n vertices, Kruskal's algorithm generates an M.S.T of G.

Proof. Clearly, Kruskal's algorithm generates a spanning tree T of G; this is due to the process of ignoring the edges which create cycles. Since G is a connected graph, the iteration terminates when T gets exactly (n - 1) edges; hence, T is a spanning tree.
We want to show that T has minimum total weight. By contradiction, assume T is not an M.S.T of G, and assume that the (n - 1) edges of T are added in the order e1, e2, . . . , e(n-1). Among the M.S.T's of G, let Tk ≠ T be one chosen with the largest index k such that e1, e2, . . . , e(k-1) ∈ Tk and ek is not in Tk. Adding ek to Tk creates a unique cycle, and since T has no cycles, this implies that there exists an edge e'k in the cycle which is different from e1, e2, . . . , ek. Therefore, T' = Tk - e'k + ek is a spanning tree and e1, e2, . . . , ek ∈ T'. From the definition of Kruskal's algorithm, we must have w(ek) ≤ w(e'k); thus w(T') ≤ w(Tk). Since Tk is an M.S.T of G, also w(Tk) ≤ w(T'); hence, T' is an M.S.T of G. But this contradicts the choice of Tk as the M.S.T sharing the maximum number of first edges with T, since T' shares the first k edges. Therefore, T is an M.S.T of G.
3.2.2 The Implementation of Kruskal's Algorithm. The progress of Kruskal's algorithm is based on two main operations. It starts by sorting the edges by their weights. This operation can be implemented by using a priority queue Q, which contains all edges of G in the order of their weights.

The second operation tests whether the addition of an edge ei to T, in step i, creates a cycle. To simplify this test, we can define a cluster C(v) for each vertex v in G, such that at each step, C(v) is the list of all vertices connected with v (as shown in Figure 3.1). Initially, the cluster C(v) contains only v. To test if the addition of ei = (u, v) to T generates a cycle, we can simply check whether u and v belong to the same cluster. The Python implementation code of this algorithm is shown in Appendix A, Section A.1.
Algorithm 5 The Implementation Algorithm
Require: A simple connected weighted graph G with n vertices.
Ensure: A minimum spanning tree T of G.
for each vertex v in G do
    initialize the cluster of v: v.cluster ← {v}
end for
Define a priority queue Q that contains all the edges in G ordered with respect to their weights.
T ← ∅
while T has fewer than (n - 1) edges do
    remove the first edge (u, v) from Q
    if u.cluster ≠ v.cluster then
        add (u, v) to T
        u.cluster ← merge(u.cluster, v.cluster)
        v.cluster ← u.cluster
    end if
end while
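Algorithm 5 can be sketched in Python as follows (the names are assumptions, and this is not the appendix code of Section A.1). Every vertex points at a shared cluster list, and the smaller cluster is always appended to the larger one, matching the merge rule analysed in Section 3.2.3:

```python
def kruskal(vertices, edges):
    cluster = {v: [v] for v in vertices}   # every vertex starts alone
    T = []
    for w, u, v in sorted(edges):          # edges given as (weight, u, v)
        if cluster[u] is not cluster[v]:   # different clusters: no cycle
            T.append((u, v, w))
            small, big = sorted((cluster[u], cluster[v]), key=len)
            big.extend(small)              # merge the smaller into the larger
            for x in small:
                cluster[x] = big           # repoint the moved vertices
    return T

# A small example graph with illustrative weights.
V = ["v1", "v2", "v3", "v4", "v5"]
E = [(2, "v1", "v2"), (3, "v1", "v4"), (1, "v2", "v4"), (5, "v2", "v3"),
     (3, "v4", "v5"), (4, "v3", "v4"), (4, "v3", "v5")]
mst = kruskal(V, E)
print(sum(w for _, _, w in mst))  # 10
```

The identity test `cluster[u] is not cluster[v]` performs the one-comparison cycle check, since vertices in the same cluster share the same list object.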

3.2.3 The Complexity of the Algorithm. The initialization of a priority queue Q, which contains the sorted edges, needs an application of a sorting algorithm with complexity O(m log m) (Mergesort, etc.). We can define C(v) as a pointer to the position of the cluster list of v; hence we only need one comparison to test whether C(u) = C(v).

The merging of two clusters can be executed by appending the cluster with the smaller number of elements to the larger one, and making both pointers C(u) and C(v) point to the same merged cluster. Thus, the time complexity of merging the two clusters C(u) and C(v) is O(min(|C(u)|, |C(v)|)).

For a vertex v, the cluster C(v) of v starts with the single element v. The element v is moved during a merge only when its cluster is the smaller one, and |C(v)| ≤ n throughout. However, each time v is moved in the merging process, the size of its cluster at least doubles; hence, v moves at most log n times. Since this holds for every vertex v, the time complexity of moving all vertices during the merge operations is O(n log n).
3.3 Prim's Algorithm
Prim's algorithm is another greedy algorithm used to solve the M.S.T problem of a connected weighted graph G = (V, E) with n vertices. The algorithm starts by initializing a tree T = (V', E') with one vertex, V' = {v}. Define a set ET as the set of edges that have one endpoint in V' and the other endpoint in V\V'. The algorithm repeats the operation of adding the edge in ET with minimum weight to E', and its endpoint in V\V' to V'. This repetition terminates when T has n vertices, or equivalently, (n - 1) edges.
Algorithm 6 Prim's Algorithm for Solving the M.S.T Problem
Require: A simple connected weighted graph G with n vertices.
Ensure: A minimum spanning tree T = (V', E') of G.
V' ← {v}
E' ← ∅
while T has fewer than (n - 1) edges do
    Choose e = (v, v') with the minimum weight such that v ∈ V', v' ∈ V\V'.
    V' ← V' ∪ {v'}
    E' ← E' ∪ {e}
end while
We illustrate the execution of this algorithm in the example shown in Figure 3.2.
3.3.1 Theorem (Correctness). Given a connected weighted graph G with n vertices, Prim's algorithm constructs T, an M.S.T of G.

Proof. Prim's algorithm constructs a spanning tree T of G: at any step, the current set of edges only connects the vertices of V'. We only insert edges between V' and V\V', so no cycles are created. Further, the iteration stops when T has (n - 1) edges, which implies T is a spanning tree.
Now, we want to show that T has minimum total weight. By contradiction, assume T is not an M.S.T of G, and assume that the (n - 1) edges of T are added in the order e1, e2, . . . , e(n-1). Among the M.S.T's of G, let Tk ≠ T be one chosen with the largest index k such that e1, e2, . . . , e(k-1) ∈ Tk and ek is not in Tk. Adding ek to
(Figure panels: the connected weighted graph G, followed by Steps 0 to 4 of the construction; Step 4 shows the resulting minimum spanning tree of G.)

Set of vertices   Step 0              Step 1          Step 2          Step 3              Step 4
V'                {v1}                {v1, v4}        {v1, v2, v4}    {v1, v2, v3, v4}    {v1, v2, v3, v4, v5}
V\V'              {v2, v3, v4, v5}    {v2, v3, v5}    {v3, v5}        {v5}                ∅

Figure 3.2: The steps of constructing an M.S.T of a connected weighted graph G by using Prim's algorithm.
Tk creates a unique cycle C; since T has no cycles, this implies that there exists an edge e'k in C that is different from e1, e2, . . . , ek, and has one endpoint in V'k and the other in V\V'k, where V'k is the set of vertices in T at step k.

Hence, T' = Tk - e'k + ek is a spanning tree and e1, e2, . . . , ek ∈ T'. From the definition of the algorithm, we must have w(ek) ≤ w(e'k); thus w(T') ≤ w(Tk). Since Tk is an M.S.T of G, also w(Tk) ≤ w(T'); hence, T' is an M.S.T of G. This contradicts the choice of Tk as the M.S.T sharing the maximum number (k - 1) of first edges with T. Therefore, T is an M.S.T of G.
3.3.2 The Implementation of Prim's Algorithm. First, the algorithm starts by choosing an arbitrary vertex u and initializing V' = {u}. To keep track of the set ET of all edges that have one endpoint in V' and the other in V\V', we can define a weight and a parent for all vertices. For each vertex v that connects with u, we set the parent of v to be u, and its weight to be the weight of the edge (u, v). For all other vertices, we set their parent as null, and their weight as infinity. At each step of adding a vertex z to V', if the weight of any vertex v that connects with z is greater than the weight of the edge (z, v), we update the parent of v to z, and its weight to the weight of the edge (z, v).

Now, the operation of selecting the edge in ET with the minimum weight can be done by constructing a priority queue Q that contains the vertices not yet in V', using their weights as keys. Hence, the first vertex v in Q, together with its parent u, gives the two endpoints of the edge e that has the minimum weight among all edges in ET. The Python implementation code of this algorithm is shown in Appendix A, Section A.2.
Algorithm 7 The Implementation Algorithm
Require: A simple connected weighted graph G with n vertices and m edges.
Ensure: A minimum spanning tree T of G.
for each vertex v in G do
    initialize the weight of v: v.weight ← +∞
    initialize the parent of v: v.parent ← null
end for
Pick any vertex u in G
Set u.weight ← 0
for each vertex v adjacent to u do
    v.weight ← w((u, v))
    v.parent ← u
end for
Initialize a priority queue Q that contains (v, v.weight) for each vertex v, where v.weight is the key
T ← ∅
while Q is not empty do
    remove the first element (z, z.weight) from Q
    if z.parent ≠ null then
        add (z.parent, z) to T
    end if
    for each vertex v adjacent to z with v in Q do
        if w((z, v)) < v.weight then
            v.weight ← w((z, v))
            v.parent ← z
            update the key v.weight in Q
        end if
    end for
end while
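A compact Python sketch of Algorithm 7 (illustrative names, not the appendix code of Section A.2). Instead of updating keys in place, this variant pushes fresh heap entries and skips stale ones when they are popped, which also achieves the O(m log n) bound:

```python
import heapq

def prim(graph, start):
    in_tree = {start}
    T = []
    heap = [(w, start, v) for v, w in graph[start]]  # (weight, parent, vertex)
    heapq.heapify(heap)
    while heap and len(in_tree) < len(graph):
        w, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue  # stale entry: v was already reached by a cheaper edge
        in_tree.add(v)
        T.append((u, v, w))
        for x, wx in graph[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return T

# A small example graph as a weighted adjacency list (illustrative weights).
G = {"v1": [("v2", 2), ("v4", 3)],
     "v2": [("v1", 2), ("v4", 1), ("v3", 5)],
     "v3": [("v2", 5), ("v4", 4), ("v5", 4)],
     "v4": [("v1", 3), ("v2", 1), ("v3", 4), ("v5", 3)],
     "v5": [("v3", 4), ("v4", 3)]}
mst = prim(G, "v1")
print(sum(w for _, _, w in mst))  # 10
```

Each edge is pushed at most twice, so the heap holds O(m) entries and every push or pop costs O(log n) up to a constant factor.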

3.3.3 The Complexity of Prim's Algorithm. Let G be a graph with n vertices and m edges. The time complexity of the implementation of Prim's algorithm is dominated by the process of updating the weights of the vertices in Q; that is, updating the values of the keys of Q and the corresponding change of the order of the vertices in Q.

For any edge e = (v, v') in G, the weight of v' is updated at most once along e. Hence, the operation of updating the weights is performed O(m) times in total. Since the weight of v' is its key value in Q, each update needs a change of the position of v', which requires O(log n) time.

Note that all other operations in the implementation of Prim's algorithm are done in linear time. Therefore, the total running time of Prim's algorithm is O(m log n).
3.4 The Comparison of Kruskal's and Prim's Algorithms

Although Kruskal's and Prim's algorithms can be implemented using different data structures and strategies, they have the same worst-case running time for constructing the minimum spanning tree. In our previous implementation of Kruskal's algorithm, we used a priority queue data structure to store the edges of the graph, and a list data structure to store the clusters. In Prim's algorithm, we only used a priority queue. Hence, Prim's algorithm is better from the point of view of data structure construction. In addition, the edges in the priority queue vary at each step of building the M.S.T when using Prim's algorithm, whereas the edges in the priority queue in Kruskal's algorithm are the same during the whole process of constructing the M.S.T. In Kruskal's algorithm, there is an additional operation of merging clusters, which is not needed in Prim's algorithm.

Considering the case of a dense graph, the way the priority queue is constructed in Kruskal's algorithm requires sorting all edges in the graph. In Prim's algorithm, this situation is relatively better, since we do not need to make all possible comparisons to build our M.S.T. Thus, Prim's algorithm performs better in the case of dense graphs, whereas Kruskal's algorithm performs better on sparse graphs.

4. Matroids
A matroid is a structure consisting of subsets of a fixed set satisfying certain properties. A graphic matroid is a type of matroid defined on graphs: its independent subsets are the subsets of edges of a given graph which contain no cycles. The optimum spanning tree problem is equivalent to the problem of finding a maximal independent subset with optimum total weight in a graphic matroid. One of the remarkable properties of matroids is that the greedy algorithm always works on matroids; this property guarantees that the greedy algorithm can always solve our optimization problem.

In this chapter, we will define matroids, give some examples of matroids, introduce some properties of matroids, and discuss the optimization problem.
4.1 Definition of a Matroid
Let E be a finite set and I a collection of subsets of E. A pair M = (E, I) is called a matroid if it satisfies the following properties:

(I1) Inclusion property: If X ⊆ Y and Y ∈ I, then X ∈ I.

(I2) Exchange property: If X and Y are in I and |X| < |Y|, then there is an element e ∈ Y\X such that X ∪ {e} ∈ I.

If the pair M = (E, I) satisfies the inclusion property, then it is called a subset system. The subsets of E which are in I are called independent sets, while the subsets outside I are called dependent sets. A maximal independent set X in a matroid M = (E, I) is a subset of E in which, for every e ∈ E\X, X ∪ {e} is a dependent set. A maximal independent set is called a base of M [Goe09].
4.1.1 Examples of Matroids.

- Uniform matroid: For a given k, a matroid M = (E, I) is a uniform matroid if I = {X ⊆ E : |X| ≤ k}. M is denoted by Uk,n, where n = |E|. Note that a base of Uk,n is any subset X such that |X| = k.

- Free matroid: This is a special case of a uniform matroid Uk,n in which any subset of E is independent, i.e., k = n.

- Linear matroid: Let E be the set of columns of a matrix A, and let I be the set of all subsets of columns of A that are linearly independent. Then the pair (E, I) is a matroid.
Proof. (I1) is obviously satisfied, since any subset X of a linearly independent set Y is also linearly independent. For (I2), let X, Y ∈ I with |X| < |Y|. Define S = span(X ∪ Y) to be the vector space spanned by the columns of X ∪ Y. Since Y is linearly independent, the dimension of S, dim(S), is at least |Y|. To prove the exchange property by contradiction, assume that for all e ∈ Y\X, X ∪ {e} is a linearly dependent set. Then S ⊆ span(X), thus |Y| ≤ dim(S) ≤ |X|, which contradicts |X| < |Y|.

Suppose, by contradiction, that the greedy algorithm returns an independent set S = {s1, s2, . . .} with w(s1) ≥ w(s2) ≥ · · ·, while some independent set F satisfies w(F) > w(S). Suppose that F = {f1, f2, . . . , fk}, such that the elements in F are ordered as w(f1) ≥ w(f2) ≥ · · · ≥ w(fk). Since w(F) > w(S), we can choose the first index p such that w(fp) > w(sp). Consider the two sets A = {f1, f2, . . . , fp} and B = {s1, s2, . . . , s(p-1)}. Since A ⊆ F and B ⊆ S, from the inclusion property, they are independent sets. Since |A| > |B|, from the exchange property, there exists fi ∈ A\B such that B ∪ {fi} ∈ I. But w(fi) ≥ w(fp) > w(sp), which contradicts the progress of the greedy algorithm in adding sp to S instead of fi.
4.2.2 Theorem. Consider a subset system M = (E, I, w). The greedy algorithm solves the optimization problem for M if, and only if, M is a matroid.
Proof. Let M be a matroid. Theorem 4.2.1 shows that the greedy algorithm finds a maximal independent set S which has maximal weight, which is a solution of the optimization problem.
Now, assume M is not a matroid. We want to show that the greedy algorithm fails to find the set of maximal weight for some weight functions. Suppose that there are two sets X, Y ∈ I with |X| < |Y| such that for all e ∈ Y\X, the set X ∪ {e} is dependent. Define a weight function w(e) as

w(e) = |X| + 2 if e ∈ X,
w(e) = |X| + 1 if e ∈ Y\X,
w(e) = 0 otherwise.
The greedy algorithm adds the highest weights first: it takes all the elements of X, then skips all the elements of Y\X, since each of them would generate a dependent set together with X; after this, it checks the remaining elements, which all have weight 0. The total weight becomes |X|(|X| + 2). But the set Y alone has total weight at least |Y|(|X| + 1), which is at least (|X| + 1)(|X| + 1) > |X|(|X| + 2). Therefore, the greedy algorithm fails to find the solution of the optimization problem.
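The failure argument above can be replayed on a tiny concrete instance. The subset system and weights below are hypothetical, chosen so that X = {a} and Y = {b, c} violate the exchange property exactly as in the proof:

```python
from itertools import combinations

# Not a matroid: {a} and {b, c} are independent, but neither
# {a, b} nor {a, c} is, so the exchange property fails for
# X = {a}, Y = {b, c}.
INDEPENDENT = [set(), {'a'}, {'b'}, {'c'}, {'b', 'c'}]

def is_independent(s):
    return set(s) in INDEPENDENT

# Weights as in the proof, with |X| = 1: weight 3 on X, 2 on Y\X.
w = {'a': 3, 'b': 2, 'c': 2}

def greedy(elements):
    """Add elements in order of decreasing weight while independent."""
    s = set()
    for e in sorted(elements, key=lambda e: -w[e]):
        if is_independent(s | {e}):
            s |= {e}
    return s

def optimum(elements):
    """Exhaustive search for the maximum-weight independent set."""
    best = set()
    for r in range(len(elements) + 1):
        for c in combinations(elements, r):
            if is_independent(c) and sum(w[e] for e in c) > sum(w[e] for e in best):
                best = set(c)
    return best

print(sum(w[e] for e in greedy('abc')), sum(w[e] for e in optimum('abc')))  # 3 4
```

The greedy algorithm takes a (weight 3) and can add nothing else, while the optimum {b, c} has weight 4, matching the bound |X|(|X| + 2) < (|X| + 1)(|X| + 1) from the proof.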
This is a powerful property: it determines the behaviour of a greedy algorithm on a whole class of problems without having to analyse the algorithm itself in each case.

5. Euclidean Minimum Spanning Trees
A Euclidean Minimum Spanning Tree (E.M.S.T) of a set P of n points in the plane is a minimum spanning tree of P, where the weight of an edge between a pair of points is the Euclidean distance between those points. The simplest way to find an E.M.S.T of a set of n points is to construct a complete weighted graph G = (V, E, w) of n vertices, which has n(n − 1)/2 edges, such that the weight function w assigns to each edge of the graph the Euclidean distance between its endpoints.
After this construction, we can apply any of the Minimum Spanning Tree (M.S.T) algorithms (such as Kruskal's algorithm, which we have discussed in Chapter 2) to find an E.M.S.T. Since the graph G has O(n²) edges, M.S.T algorithms require O(n² log n) time to get an E.M.S.T. This running time can be decreased to O(n log n) by using what is called the Delaunay triangulation, which we will discuss in the following sections.
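As a point of comparison, the brute-force approach just described is easy to write down: generate all n(n − 1)/2 edges and run Kruskal's algorithm with a union-find structure. This is a minimal sketch, not the essay's appendix code, and the four sample points are a made-up example:

```python
import math
from itertools import combinations

def emst_bruteforce(points):
    """E.M.S.T via the complete graph: O(n^2) edges, then Kruskal."""
    edges = sorted((math.dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    parent = list(range(len(points)))
    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree, total = [], 0.0
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # the edge joins two components
            parent[ri] = rj
            tree.append((i, j))
            total += w
    return tree, total

tree, total = emst_bruteforce([(0, 0), (1, 0), (2, 0), (0, 1)])
print(total)  # 3.0: three unit-length edges
```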
5.1 Delaunay Triangulation
5.1.1 Triangulation. A triangulation T of Rⁿ is a subdivision of Rⁿ into a set of n-dimensional simplices¹, such that:
Any simplex face is shared by either one adjacent simplex or none at all.
Any bounded set in Rn intersects only finitely many simplices in T.
In a sense, a triangulation generates a mesh of simplices from a given set of points in Rⁿ.
5.1.2 Delaunay Triangulation. A Delaunay Triangulation (D.T) of a set P of points in the plane is defined as a triangulation in which the circumcircle² of every triangle does not contain any other point of P. Figure 5.1 shows an example of a Delaunay and a non-Delaunay triangle. An edge e that connects two points of P is called a Delaunay edge if there exists a circle passing through the two endpoints of e that does not contain any point of P in its interior (see Figure 5.2).
Figure 5.1: A: Triangle t is non-Delaunay since the point p lies within its circumcircle; B: t is a Delaunay triangle.
¹ n-dimensional simplices are a generalization of the notion of triangles or tetrahedra to n dimensions.
² the circle that passes through the three vertices of a triangle

Figure 5.2: A: e is a non-Delaunay edge; B: e is a Delaunay edge since it has an empty circle through its endpoints.
5.1.3 Properties of The Delaunay Triangulation.
Empty-circle property: The circle that passes through the vertices of any D.T triangle is empty.
Convex hull: The outer boundary of the D.T is a convex polygon, namely the convex hull of the point set.
The Delaunay triangulation is equivalent to Delaunay edges: A triangulation T is a Delaunay triangulation if, and only if, all the edges in T are Delaunay edges.
This property is used in some algorithms for constructing a Delaunay triangulation: the condition that needs to be tested in these algorithms is whether all edges in the graph are Delaunay edges or not. The proof of this property is discussed in Theorem 5.1.4.
The flipped diagonal is always Delaunay: If two triangles share a non-Delaunay edge e, then by removing e and adding the other diagonal of the resulting quadrilateral, the two new triangles are Delaunay triangles. Thus, any non-Delaunay edge can always be flipped to form a Delaunay edge (see Figure 5.3).
Each flip affects only a single part of the mesh. Some algorithms for constructing the Delaunay triangulation are based on this property: they start from an arbitrary triangulation and keep flipping non-Delaunay edges until none are left. This type of algorithm works in two dimensions and does not extend to higher dimensions; it requires O(n²) edge flips in the worst case.
Max-min-angle property: The Delaunay triangulation maximizes the minimum angle among all triangulations of a given set of points. This property is one of the advantages of the Delaunay triangulation, since it improves the mesh quality [Zim05, Ple].
Figure 5.3: A is not a Delaunay triangulation. Flipping the non-Delaunay edge e in A forms a Delaunay triangulation in B.
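Flip-based algorithms hinge on deciding whether a point lies inside the circumcircle of a triangle. A standard way to do this, used here as an illustrative sketch (the sample coordinates are made up), is the sign of a 3×3 determinant:

```python
def in_circumcircle(a, b, c, d):
    """Return True iff d lies strictly inside the circumcircle of the
    counterclockwise-oriented triangle (a, b, c).  The sign of this
    3x3 determinant is the classical in-circle predicate."""
    m = [[p[0] - d[0], p[1] - d[1],
          (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2] for p in (a, b, c)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

# (0,0), (4,0), (0,4) is counterclockwise; its circumcircle has centre (2,2).
print(in_circumcircle((0, 0), (4, 0), (0, 4), (1, 1)))    # True: inside
print(in_circumcircle((0, 0), (4, 0), (0, 4), (10, 10)))  # False: outside
```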
5.1.4 Theorem. For a set of points P, two points p1 and p2 are connected by an edge in the Delaunay triangulation if, and only if, there is an empty circle passing through p1 and p2.

Proof. Let p1 and p2 be two vertices of a Delaunay triangle with third vertex p3. Then there exists a circle C that passes through p1, p2, p3 and does not contain any point of P in its interior. Thus, C is an empty circle passing through p1 and p2.

Now we prove the other direction. Let p1, p2 and p3 be the vertices of a triangle t. By contradiction, assume that there are empty circles passing through the endpoints of each edge of t, while t is a non-Delaunay triangle. Then the circumcircle of t contains an interior point pi. Without loss of generality, the edge that separates pi from the inside of t is (p1, p2) (see Figure 5.4). Hence, any circle that passes through p1 and p2 must contain p3 or pi as an interior point, which contradicts our assumption.
Figure 5.4: pi lies inside the circumcircle of triangle t; the edge (p1, p2) is not a Delaunay edge.
5.1.5 The Relation Between D.T and Voronoi Diagram. A Voronoi diagram of a set of points in the plane is a division of the plane into one region per point, where the region of a point p contains the part of the plane which is closer to p than to any other point. Each region is known as a Voronoi cell and denoted by vo(p). The boundaries between the cells are the perpendicular bisectors of the line segments joining the points. Voronoi vertices are the points created at the intersections of these boundaries, and Voronoi edges are the boundaries between two Voronoi cells.

The Delaunay triangulation is the dual graph of the Voronoi diagram, which is a planar graph. The Delaunay triangulation has a vertex for each Voronoi cell and an edge for each Voronoi edge. Figure 5.5 shows an example of a Delaunay triangulation and a Voronoi diagram of a set of points and the dual relation between them.
Generally, the construction of the Voronoi diagram requires O(n log n) running time with certain algorithms such as Fortune's algorithm [Zim05]. The Delaunay triangulation can then be generated from the Voronoi diagram by using the dual relation. The Delaunay triangulation can also be constructed directly in O(n log n) by using more technical algorithms, such as divide-and-conquer algorithms. These are based on recursively drawing a line to divide the set of points into two sets, computing the Delaunay triangulation for each set and merging the two triangulations. The merge operation can be done in time O(n), so the total running time is O(n log n) [Zim05].
5.2 The Delaunay Triangulation and E.M.S.T
The Delaunay triangulation has an interesting property: the E.M.S.T of a set of n points is a subgraph of the Delaunay triangulation. This property can increase the efficiency of finding an E.M.S.T, by applying minimum spanning tree algorithms on the Delaunay triangulation, which has O(n) edges, instead of using the complete graph, which has O(n²) edges. The proof of this property is given in the following theorem.

Figure 5.5: A shows a Delaunay triangulation of a set of 6 vertices, and B shows the corresponding Voronoi diagram.
5.2.1 Theorem. A Euclidean minimum spanning tree of a set of points P is a subgraph of any Delaunay triangulation of P.
Proof. Let T be an E.M.S.T of P with total weight w(T). Assume that p1 and p2 are two points in P and that there is an edge e in T that connects these two points. By contradiction, assume that e is not a Delaunay edge; thus every circle passing through p1 and p2 contains an interior point. Choose the circle C with diameter e, and let pi be an interior point of C (as shown in Figure 5.6). Removing the edge e from T divides T into two connected components, one containing p1 and the other containing p2. Assume that pi lies in the same connected component as p1. Then adding an edge e′ that connects p2 with pi to T generates a spanning tree T′ with total weight w(T′) = w(T) − w(e) + w(e′). Since pi lies inside the circle with diameter e, the length of e′ is smaller than the length of e, i.e. w(e′) < w(e). This implies that T′ has smaller total weight than T, which contradicts the assumption that T is an E.M.S.T.
Figure 5.6: The Delaunay triangulation and E.M.S.T
Therefore, using this property, the generation of an E.M.S.T as a spanning tree of the Delaunay triangulation needs O(n log n) running time. Since the construction of the Delaunay triangulation also runs in O(n log n) time, the total running time needed to find an E.M.S.T by using the Delaunay triangulation is O(n log n). Figure 5.7 is an example showing that the E.M.S.T of a set of 5 points is a subgraph of the Delaunay triangulation.

Figure 5.7: A: the Delaunay triangulation; B: the E.M.S.T, which is a subgraph of the Delaunay triangulation.

The E.M.S.T problem can be generalized to m-dimensional space Rᵐ. In higher dimensions, the property that the E.M.S.T is a subgraph of the Delaunay triangulation also holds, but the number of edges in the triangulation increases with the dimension [AESW91].
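Theorem 5.2.1 can be checked numerically. The sketch below is only an illustration, not an O(n log n) method: it finds the Delaunay edges naively in O(n⁴) time by testing every triangle's circumcircle for emptiness (points in general position assumed), then runs Kruskal's algorithm once on the Delaunay edges and once on all edges; the two minimum spanning trees have the same weight. The point set is a made-up example:

```python
import math
from itertools import combinations

def sq(p):
    return p[0] ** 2 + p[1] ** 2

def delaunay_edges(pts):
    """Naive Delaunay edges: a triangle belongs to the D.T iff its
    circumcircle contains no other point.  O(n^4); illustration only."""
    edges = set()
    for (i, a), (j, b), (k, c) in combinations(enumerate(pts), 3):
        d = 2 * (a[0]*(b[1]-c[1]) + b[0]*(c[1]-a[1]) + c[0]*(a[1]-b[1]))
        if d == 0:
            continue                       # collinear: no circumcircle
        ux = (sq(a)*(b[1]-c[1]) + sq(b)*(c[1]-a[1]) + sq(c)*(a[1]-b[1])) / d
        uy = (sq(a)*(c[0]-b[0]) + sq(b)*(a[0]-c[0]) + sq(c)*(b[0]-a[0])) / d
        r2 = (a[0]-ux)**2 + (a[1]-uy)**2   # squared circumradius
        if all((p[0]-ux)**2 + (p[1]-uy)**2 > r2 - 1e-9
               for m, p in enumerate(pts) if m not in (i, j, k)):
            edges |= {(i, j), (i, k), (j, k)}
    return edges

def mst_weight(pts, edges):
    """Kruskal's algorithm restricted to the given edge set."""
    parent = list(range(len(pts)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0.0
    for w, i, j in sorted((math.dist(pts[i], pts[j]), i, j) for i, j in edges):
        if find(i) != find(j):
            parent[find(i)] = find(j)
            total += w
    return total

pts = [(0, 0), (3, 1), (1, 3), (4, 4), (2, 2)]
all_edges = {(i, j) for i, j in combinations(range(len(pts)), 2)}
# The E.M.S.T computed on the Delaunay edges equals the one on all edges.
print(mst_weight(pts, delaunay_edges(pts)) == mst_weight(pts, all_edges))  # True
```

A practical implementation would instead obtain the Delaunay edges from an O(n log n) construction such as Fortune's algorithm.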

6. Conclusion
Going through the discussions in this essay, it is clear that the well-known problem of finding a minimum spanning tree (M.S.T) can be analysed and solved using greedy algorithms such as Kruskal's and Prim's algorithms. In this essay, we gave an overview of some graph-theoretic definitions and concepts, which enabled us to discuss Kruskal's and Prim's algorithms. We also discussed the theory of matroids, which provides a general framework for the greedy method and is an important concept that can be applied to various other problems. Finally, we discussed Euclidean minimum spanning trees, which have the property of being subgraphs of the Delaunay triangulation; this property was used to decrease the running time of computing an M.S.T.
In this essay, we achieved time complexity O(m log n) for finding the M.S.T of a graph with n vertices and m edges. This also provided us with a method of finding a Euclidean minimum spanning tree in time O(n log n). We conclude this essay with implementations and illustrative examples.

Appendix A. The Implementation Codes for Kruskal's and Prim's Algorithms
We use Python to implement both Kruskal's algorithm and Prim's algorithm. Note that the implementation codes of Kruskal's and Prim's algorithms are available at the link: http://users.aims.ac.za/~nahla/essay.html
A.1 Kruskal's Algorithm Code
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
#===============================================================
# Each vertex has a name and a cluster that contains the vertices
# currently connected with it.
class vertex:
def __init__ (self, v):
self.label = v
self.cluster = [v]
def merge (self, cluster1):
for v in cluster1:
self.cluster.append (v)
#===============================================================
def readf ():
edges = []
G = [ ]
File = open("datakruskal.dat", "r")
for line in File:
L = line.split ()
G.append((L[1],L[2],float(L[0])))
L[0] = float (L[0])
L[1] = vertex (L[1])
L[2] = vertex (L[2])
edges.append (L)
File.close ()
return edges,G
#===============================================================
def plotG (edges):
G = nx.Graph()
for e in edges:
G.add_edge (e[0], e[1], weight = e[2])
pos = nx.spring_layout(G)
nx.draw (G, pos)
edge_labels = dict ([((u,v), d['weight'])
for u, v, d in G.edges (data = True)])
nx.draw_networkx_edge_labels (G, pos, edge_labels = edge_labels)
#===============================================================
if __name__ == "__main__":
n = input("Enter the number of vertices : ")
edges,G = readf ()
print "G = ", G
plt.figure (1)
plotG (G)
Q = sorted (edges)
T = [ ]
i = 0
T_weight = 0
while len (T) < n - 1:
w, u, v = Q[i]
if u.cluster != v.cluster:
T.append ((u.label,v.label,w))
u.merge (v.cluster)
v.cluster = u.cluster
T_weight += w
i = i + 1
plt.figure (2)
print "T = ", T
plotG (T)
plt.show ()
print "The total weight of T = ", T_weight
A.1.1 datakruskal.dat File Format.
7 A B
5 A K
1 A D
2 B C
1 C E
2 E F

9 F M
2 M L
1 L N
2 K N
1 K O
2 J O
1 J I
2 H I
1 H G
5 D G
1 C I
4 E L
8 F J
3 D J
4 D E
6 G L
A.2 Prim's Algorithm Code
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
#===============================================================
# Each vertex has a label, weight, parent, a list of neighbour
# vertices and a list of these edges' weights.
class vertex:
def __init__ (self, v):
self.label = v
self.weight = np.inf
self.vlist = []
self.wlist = []
self.parent = 0
def add (self, v, w):
self.vlist.append (v)
self.wlist.append (w)
#===============================================================
def readf ():
edges = []
File = open ("dataprim.dat","r")
for line in File:

L = line.split ()
vertexList = [L[0]]
for i in range (1, len(L)  1, 2):
vertexList.append ( (L[i], float(L[i+1])) )
edges.append (vertexList)
File.close ()
return edges
#===============================================================
def plotG (T):
G=nx.Graph ()
for e in T:
G.add_edge (e[0], e[1], weight = e[2])
pos = nx.spring_layout (G)
nx.draw (G, pos)
edge_labels = dict([((u, v), d['weight'])
for u, v, d in G.edges(data = True)])
nx.draw_networkx_edge_labels (G, pos, edge_labels = edge_labels)
#===============================================================
if __name__ == "__main__":
L = readf ()
V = dict()
G = [ ]
for l in L:
v = vertex (l[0])
V.update({l[0] : v})
for i in range (1, len(l)):
v.add (l[i][0], l[i][1])
G.append ((l[0], l[i][0], l[i][1]))
plt.figure (1)
plotG (G)
u = v # start Prim's algorithm from the last vertex read
u.weight = 0
Q = [ ]
i = 0
for label in u.vlist:
v = V[label]
v.parent = u.label
v.weight = u.wlist[i]
i = i + 1
for v in V.values():

Q.append ([v.weight,v])
Q = sorted(Q)
print " "
T = [ ]
i = 1
T_weight = 0
n = len (Q)
Q.remove (Q[0])
while len(Q) > 0:
w, u = Q[0]
Q.remove(Q[0])
T_weight += w
T.append ((u.parent,u.label,w))
j = 0
for label in u.vlist:
v = V[label]
if u.wlist[j] < v.weight:
v.weight = u.wlist[j]
v.parent = u.label
j = j + 1
for q in Q:
q[0] = q[1].weight
Q = sorted(Q)
plt.figure (2)
print "T = ", T
plotG (T)
plt.show ()
print "The total weight of T = ", T_weight
A.2.1 dataprim.dat File Format.
A B 7 D 1 K 5
B A 7 C 2
C B 2 E 1 I 1
D A 1 G 5 J 3 E 4
F E 2 M 9 J 8
E C 1 F 2 D 4 L 4
G D 5 H 1 L 6
H G 1 I 2
I H 2 J 1 C 1
J I 1 D 3 F 8 O 2
K A 5 O 1 N 2
L M 2 N 1 G 6 E 4
M F 9 L 2

N L 1 K 2
O J 2 K 1
The Output
The output of the previous Kruskal's code, in the case of a graph with 15 vertices and 22 edges, is shown below:
G = [(A, B, 7.0), (A, K, 5.0), (A, D, 1.0), (B, C, 2.0),
(C, E, 1.0), (E, F, 2.0), (F, M, 9.0), (M, L, 2.0),
(L, N, 1.0), (K, N, 2.0), (K, O, 1.0), (J, O, 2.0),
(J, I, 1.0), (H, I, 2.0), (H, G, 1.0), (D, G, 5.0),
(C, I, 1.0), (E, L, 4.0), (F, J, 8.0), (D, J, 3.0),
(D, E, 4.0), (G, L, 6.0)]
T = [(A, D, 1.0), (C, E, 1.0), (L, N, 1.0), (K, O, 1.0),
(J, I, 1.0), (H, G, 1.0), (C, I, 1.0), (B, C, 2.0),
(E, F, 2.0), (M, L, 2.0), (K, N, 2.0), (J, O, 2.0),
(H, I, 2.0), (D, J, 3.0)]
The total weight of T = 22.0
The output in the case of running Prim's code for the same graph G:
T = [(O, K, 1.0), (O, J, 2.0), (J, I, 1.0), (I, C, 1.0),
(C, E, 1.0), (C, B, 2.0), (E, F, 2.0), (I, H, 2.0),
(H, G, 1.0), (K, N, 2.0), (N, L, 1.0), (L, M, 2.0),
(J, D, 3.0), (D, A, 1.0)]
The total weight of T = 22.0

Figure A.1: Graph G

Figure A.2: The minimum spanning tree of G

Acknowledgements
All appreciation begins and ends with Allah. I thank Allah for helping me to do this work. I would like to thank my supervisor Prof. Wagner for all his explanations and comments on my work. I would like to mention that I am glad for being with this lovely AIMS family, and I thank them also, starting with our fathers Prof. B. Green and Igsaan, including all my AIMS brothers and sisters; I wish them the best. In particular, I would like to thank my brothers Ahmed and Obeng for their guidance throughout the AIMS period and on this work. Last but not least, I would like to thank my mother Zainab Yassin and also dedicate this piece of work to her. I ask Allah to take care of her.

References
[AESW91] P. K. Agarwal, H. Edelsbrunner, O. Schwarzkopf, and E. Welzl, Euclidean minimum spanning trees and bichromatic closest pairs, Discrete and Computational Geometry 6 (1991), 407–422.
[CT76] David R. Cheriton and Robert Endre Tarjan, Finding minimum spanning trees, SIAM Journal on Computing 5 (1976), 724–742.
[Dij60] E. W. Dijkstra, Some theorems on spanning subtrees of a graph, Indag. Math. 28 (1960), 196–199.
[DPV06] S. Dasgupta, C. H. Papadimitriou, and U. V. Vazirani, Algorithms, http://highered.mcgraw-hill.com/sites/0073523402/, 2006.
[GH85] R. L. Graham and Pavol Hell, On the history of the minimum spanning tree problem, IEEE Annals of the History of Computing 7 (1985), 43–57.
[Gib85] Alan Gibbons, Algorithmic graph theory, Cambridge University Press, 1985.
[Goe09] Michel X. Goemans, Combinatorial optimization, lecture notes, 2009.
[GT02] M. Goodrich and R. Tamassia, Algorithm design: Foundations, analysis and Internet examples, John Wiley and Sons, Inc., 2002.
[HU73] J. E. Hopcroft and J. D. Ullman, Set merging algorithms, SIAM J. Comput. 2 (1973), 294–303.
[Jai06] A. Jain, Planar graphs, dual graphs, connectivity and separability, Tech. Report 01CS3016, 2006.
[Kru56] J. B. Kruskal, On the shortest spanning subtree of a graph and the traveling salesman problem, Proceedings of the American Mathematical Society 7 (1956), 48–50.
[LW57] H. Loberman and A. Weinberger, Formal procedures for connecting terminals with a minimum total wire length, Journal of the ACM 4 (1957), 428–437.
[Oxl92] James G. Oxley, Matroid theory, Oxford University Press, 1992.
[Ple] Robert Pless, Voronoi diagrams and Delaunay triangulations, http://research.engineering.wustl.edu/~pless/506/l17.html.
[Pri57] R. C. Prim, Shortest connection networks and some generalizations, Bell System Technical Journal 36 (1957), 1389–1401.

[Tar75] Robert Endre Tarjan, Efficiency of a good but not linear set union algorithm, Journal of the ACM 22 (1975), 215–225.
[Wes01] Douglas B. West, Introduction to graph theory, second edition, Pearson Education, Inc., 2001.
[Wil96] Robin J. Wilson, Introduction to graph theory, fourth edition, Longman, 1996.
[Zim05] Henrik Zimmer, Voronoi and Delaunay techniques, July 2005, http://www.henrikzimmer.com/VoronoiDelaunay.pdf.