    14 Spectral Analysis

    Andreas Baltz and Lasse Kliemann

A graph can be associated with several matrices, whose eigenvalues reflect structural properties of the graph. The adjacency matrix, the Laplacian, and the normalized Laplacian are in the main focus of spectral studies. How can the spectrum be used to analyze a graph? In particular, the following questions will be of interest:

- What can the spectrum tell us about subgraphs? Can we prove the existence or nonexistence of certain subgraphs by looking at the spectrum?
- Can certain eigenvalues be related to other global statistics (also called graph parameters), such as diameter, isoperimetric number, or chromatic number?
- How can spectra help in classifying graphs?

This chapter is organized as follows: In the first section, we review facts from linear algebra and point out some fundamental properties of graph spectra. In the next section, we summarize what methods we know to compute the spectrum. In the sections that follow, we give some answers and ideas regarding the above questions.

    14.1 Fundamental Properties

We define the different spectra and point out some fundamental properties. We show why it makes sense to consider more than the adjacency spectrum and list the three spectra for some basic graph classes (Table 14.1).

    14.1.1 Basics from Linear Algebra

Let $M = (m_{i,j}) \in \mathbb{C}^{n \times n}$ be an $n \times n$ matrix with complex numbers as entries. A non-zero vector $x \in \mathbb{C}^n$ is an eigenvector of $M$ with corresponding eigenvalue $\lambda \in \mathbb{C}$ if $x$ and $\lambda$ satisfy the equation

$$M x = \lambda x. \qquad (14.1)$$

The vector $0_n$ is excluded from the set of possible eigenvectors since every $\lambda$ is a solution to $M 0_n = \lambda 0_n$. Equation (14.1) has a non-zero solution if and only if $\mathrm{rank}(M - \lambda I_n) < n$, which is equivalent to $\det(M - \lambda I_n) = 0$. Hence we can characterize the eigenvalues of $M$ as roots of the polynomial $p_M(\lambda) := \det(M - \lambda I_n)$.



This characteristic polynomial $p_M$ does not change if $M$ is replaced by $Q^{-1} M Q$ for some arbitrary non-singular matrix $Q$. So $M$ and $Q^{-1} M Q$ have the same eigenvalues. The spectrum of $M$ is defined as the multiset of all eigenvalues of $M$, where the multiplicity of an eigenvalue is its (algebraic) multiplicity as a root of $p_M$.

From now on we shall assume that $M$ is a symmetric matrix, i.e. $M = M^\top$, with real-valued entries. One can show that there exists a non-singular matrix $Q$ such that $M' := Q^{-1} M Q$ has diagonal form and $Q^{-1} = Q^\top$. Clearly, each vector $e_i$ from the standard basis of $\mathbb{R}^n$ is an eigenvector of $M'$ with $\lambda_i := m'_{i,i}$, the $i$th entry on the diagonal of $M'$, as its corresponding eigenvalue. Since the characteristic polynomial is unchanged,

$$\det(M - \lambda I_n) = \det(M' - \lambda I_n) = \prod_{i=1}^{n} (\lambda_i - \lambda). \qquad (14.2)$$

So the spectrum of $M$ consists of $n$ (not necessarily distinct) eigenvalues $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$, and we have $\det(M) = \prod_{i=1}^{n} \lambda_i$. From the definition of the determinant as

$$\det M = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, m_{1,\sigma(1)} \cdots m_{n,\sigma(n)}$$

we see that $\det(M - \lambda I_n) = \prod_{i=1}^{n} (m_{i,i} - \lambda) + R(\lambda)$, where $R(\lambda)$ is the sum corresponding to permutations from $S_n \setminus \{\mathrm{id}\}$ and thus has degree $\leq n-2$. Hence the coefficient of $\lambda^{n-1}$ in the characteristic polynomial is $(-1)^{n-1} \sum_{i=1}^{n} m_{i,i} = (-1)^{n-1}\, \mathrm{trace}(M)$, and by comparison with (14.2) we infer that $\mathrm{trace}(M) = \sum_{i=1}^{n} \lambda_i$.

Defining $v_i := Q e_i$ for all $i \in \{1, \ldots, n\}$, we have

$$M v_i = M Q e_i = Q Q^{-1} M Q e_i = Q \lambda_i e_i = \lambda_i v_i,$$
$$v_i^\top v_j = (Q e_i)^\top Q e_j = e_i^\top Q^\top Q e_j = e_i^\top e_j,$$

i.e., $v_1, \ldots, v_n$ are orthonormal eigenvectors of $M$. Consequently, $\{v_1, \ldots, v_n\}$ is a basis of $\mathbb{R}^n$, and the multiplicity of $\lambda_i$ as an eigenvalue equals the maximum cardinality of a corresponding set of linearly independent eigenvectors. In sum we have observed the following facts.

Theorem 14.1.1 ([135, 247, 576]). Let $M \in \mathbb{R}^{n \times n}$ be a matrix with $M = M^\top$. Then

1. $M$ has real eigenvalues $\lambda_1 \leq \ldots \leq \lambda_n$ and $n$ orthonormal eigenvectors forming a basis of $\mathbb{R}^n$ [Spectral Theorem],
2. multiplicity of $\lambda_i$ as an eigenvalue = multiplicity of $\lambda_i$ as a root of the characteristic polynomial $\det(M - \lambda I_n)$ = cardinality of a maximum linearly independent set of eigenvectors corresponding to $\lambda_i$,
3. there is a matrix $Q$ with $Q^\top = Q^{-1}$ such that $Q^\top M Q = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$,
4. $\det(M) = \prod_{i=1}^{n} \lambda_i$, $\mathrm{trace}(M) = \sum_{i=1}^{n} \lambda_i$.
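As an aside, all four statements are easy to verify numerically. The following is a minimal sketch in Python with numpy (the tool and the symmetric example matrix are our choices for illustration, not part of the text):

import numpy as np

# A small symmetric example matrix (our choice, not from the text).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, Q = np.linalg.eigh(M)  # eigenvalues ascending, orthonormal eigenvectors as columns

print(np.allclose(Q.T @ Q, np.eye(3)))             # Q^T = Q^{-1}               (statement 3)
print(np.allclose(Q.T @ M @ Q, np.diag(lam)))      # Q^T M Q is diagonal        (statement 3)
print(np.isclose(np.linalg.det(M), np.prod(lam)))  # det(M) = prod of lambda_i  (statement 4)
print(np.isclose(np.trace(M), np.sum(lam)))        # trace(M) = sum of lambda_i (statement 4)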


    14.1.2 The Spectrum of a Graph

Given a multi-digraph $G = (V, E)$ with labeled vertices $v_1, \ldots, v_n$, we define the adjacency matrix $A = (a_{i,j})$ by

$$a_{i,j} := \text{multiplicity of edge } (v_i, v_j) \text{ in } E,$$

i.e., $a_{i,j}$ is equal to the number of arcs starting at vertex $v_i$ and terminating at vertex $v_j$ (for loops $(v_i, v_i) \in E$ it is sometimes useful to define $a_{i,i}$ as 2 instead of 1). The spectrum of $G$ is the spectrum of the adjacency matrix of $G$ (see Figure 14.1 for an example).

Fig. 14.1. Example of an adjacency matrix: a multi-digraph on the vertices $v_1, v_2, v_3$ with $A = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 2 & 1 & 0 \end{pmatrix}$

Note that though $A$ depends on the order of labeling, the spectrum does not, since exchanging the labels of vertices $v_i$ and $v_j$ corresponds to exchanging row $i$ with row $j$ and column $i$ with column $j$ in $A$, which affects neither $\det(A)$ nor $\det(A - \lambda I_n)$. For the rest of this chapter we will identify $V$ with the set of labels, i.e., we will assume that $V = \{1, \ldots, n\}$. Moreover we shall mainly focus on simple undirected graphs without loops. So, any graph $G = (V, E)$ will be assumed to be simple, loopless, and undirected, unless explicitly stated otherwise. Hence the adjacency matrix $A$ will (almost always) be a symmetric 0/1 matrix with a real spectrum of $n$ eigenvalues $\lambda_i$, where we assume for convenience that $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_n$.

Notation. We will use both $\mathrm{spectrum}(A)$ and $\mathrm{spectrum}(G)$ to denote the eigenvalues of an adjacency matrix $A$ corresponding to a graph $G$. Moreover, when speaking of the spectrum of a graph, we will always have the adjacency spectrum in mind (unless stated otherwise).

Let $w \in \mathbb{R}^n$ be an arbitrary vector and let $\omega: V \to \mathbb{R}$ map each $i \in V$ to $w_i$. Since $A$ represents a graph, the $i$th component of $Aw$, $\sum_{j=1}^{n} a_{i,j} w_j$, can be written as $\sum_{j \in N(i)} \omega(j)$. Now the equation $Ax = \lambda x$ has the following useful interpretation.

Remark 14.1.2. 1. $A$ has eigenvalue $\lambda$ if and only if there exists a non-zero weight function $\omega: V \to \mathbb{C}$ such that for all $i \in V$, $\lambda \omega(i) = \sum_{j \in N(i)} \omega(j)$.


2. The Spectral Theorem (Theorem 14.1.1.1) ensures that we can restrict ourselves to considering real-valued weight functions. Moreover, we can assume the maximum weight to be non-negative (if $\max\{\omega(i);\, i \in V\} < 0$ then $\omega(i) < 0$ for all $i \in V$ and we can consider $-\omega$ instead of $\omega$).

Consider an assignment of weights to the vertices of a triangle as depicted in Figure 14.2.

Fig. 14.2. A triangle with eigenvector components as weights (vertex 1: weight 1, vertex 2: weight 0, vertex 3: weight $-1$)

From

$$-\omega(1) = -1 = (0 + (-1)) = (\omega(2) + \omega(3)),$$
$$-\omega(2) = 0 = (1 + (-1)) = (\omega(1) + \omega(3)),$$
$$-\omega(3) = 1 = (1 + 0) = (\omega(1) + \omega(2)),$$

we can conclude that $-1$ is an eigenvalue. Similarly, by assigning a weight of 1 to all vertices we can check that 2 is in the spectrum of the triangle. For another example, have a look at Figure 14.3, which proves that 2 is an eigenvalue of a star on 5 vertices.
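These weight checks translate directly into code. A small numpy sketch (the adjacency matrices below encode the triangle of Figure 14.2 and the 5-vertex star of Figure 14.3; the vertex numbering is our choice):

import numpy as np

# Triangle K_3: the weights (1, 0, -1) of Figure 14.2 satisfy
# lambda * omega(i) = sum of omega over N(i) with lambda = -1.
A_triangle = np.array([[0, 1, 1],
                       [1, 0, 1],
                       [1, 1, 0]])
w = np.array([1.0, 0.0, -1.0])
print(np.allclose(A_triangle @ w, -1.0 * w))   # True: -1 is an eigenvalue

# Star on 5 vertices (vertex 0 is the center), weights (2, 1, 1, 1, 1)
# as in Figure 14.3: eigenvalue 2.
A_star = np.zeros((5, 5))
A_star[0, 1:] = A_star[1:, 0] = 1
w = np.array([2.0, 1.0, 1.0, 1.0, 1.0])
print(np.allclose(A_star @ w, 2.0 * w))        # True: 2 is an eigenvalue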

    Remark 14.1.2 enables us to prove the following claims.

Lemma 14.1.3 ([576]). Let $G = (V, E)$ be a graph on $n$ vertices with adjacency matrix $A$ and eigenvalues $\lambda_1 \leq \ldots \leq \lambda_n$. Let $\Delta$ be the maximum vertex degree of $G$.

1. $\lambda_n \leq \Delta$.
2. If $G = G_1 \cup G_2$ is the union of two disjoint graphs $G_1$ and $G_2$ then $\mathrm{spectrum}(G) = \mathrm{spectrum}(G_1) \cup \mathrm{spectrum}(G_2)$.
3. If $G$ is bipartite then $\mathrm{spectrum}(G) = -\mathrm{spectrum}(G)$.
4. If $G$ is a simple cycle then $\mathrm{spectrum}(G) = \left\{2\cos\frac{2\pi k}{n};\; k \in \{1, \ldots, n\}\right\}$.


Fig. 14.3. A star with eigenvector components as weights (center: weight 2, each of the four leaves: weight 1)

Proof. 1. Let $\omega$ be a non-zero weight function on the vertices such that $\lambda_n \omega(i) = \sum_{j \in N(i)} \omega(j)$ for all $i \in V$, and let $i_0$ be a vertex of maximum (by Remark 14.1.2 non-negative) weight. Then $\lambda_n \omega(i_0) = \sum_{j \in N(i_0)} \omega(j) \leq \Delta \omega(i_0)$ implies $\lambda_n \leq \Delta$.

2. Let $\omega$ be a non-zero weight function for $\lambda_i \in \mathrm{spectrum}(G)$. Since $\omega$ is not identically zero, either $\omega|_{V(G_1)}$ or $\omega|_{V(G_2)}$ must not be identically zero, and hence $\omega$ is a weight function for $\lambda_i$ on $G_1$ or $G_2$. On the other hand, if $\omega$ is a weight function for $\lambda \in \mathrm{spectrum}(G_j)$ for $j = 1$ (or $j = 2$), then extending $\omega$ by defining $\omega(i) := 0$ for all $i \in V \setminus V(G_j)$ yields a non-zero weight function for $\lambda$ on $G$.

3. Let $\omega$ be a weight function for $\lambda$ on $G$. Let $V_1, V_2$ denote the partition classes of $V$. Define $\bar{\omega}: V \to \mathbb{R}$ by

$$\bar{\omega}(i) := \begin{cases} \omega(i), & \text{if } i \in V_1 \\ -\omega(i), & \text{if } i \in V_2. \end{cases}$$

Then for all $i \in V_1$,

$$-\lambda \bar{\omega}(i) = -\lambda \omega(i) = -\sum_{j \in N(i)} \omega(j) = \sum_{j \in N(i)} \bar{\omega}(j),$$

and for all $i \in V_2$,

$$-\lambda \bar{\omega}(i) = \lambda \omega(i) = \sum_{j \in N(i)} \omega(j) = \sum_{j \in N(i)} \bar{\omega}(j).$$

4. For the proof of this claim we will use complex weights. Assume that the edges of the cycle are $\{1, n\}$ and $\{i, i+1\}$ for all $i \in \{1, \ldots, n-1\}$. For each $k \in \{1, \ldots, n\}$ let $\zeta_k := \exp(2\pi \mathrm{i} k / n)$ be an $n$th root of unity and put $\omega(j) := \zeta_k^{\,j-1}$. (Here, $\mathrm{i}$ denotes the complex number with $\mathrm{i}^2 = -1$ and not a vertex.) Then for all $j \in V$,

$$\sum_{l \in N(j)} \omega(l) = (\zeta_k^{-1} + \zeta_k)\, \zeta_k^{\,j-1},$$

and thus $\zeta_k^{-1} + \zeta_k = \exp(-2\pi \mathrm{i} k / n) + \exp(2\pi \mathrm{i} k / n) = 2\cos\frac{2\pi k}{n}$ is an eigenvalue of $G$.


We mention two further results on the adjacency spectrum (see [135] for a proof): it can be shown that $\lambda_n = \Delta$ if and only if $G$ has a $\Delta$-regular component. For the smallest eigenvalue of $G$ one can prove the lower bound $\lambda_1 \geq -\lambda_n$, where equality holds if and only if $G$ has a bipartite component whose largest eigenvalue is equal to $\lambda_n$.

Let us determine the spectra of the complete bipartite graph $K_{n_1,n_2}$ and the complete graph $K_n$.

Lemma 14.1.4 ([576]). Let $n_1$, $n_2$, and $n$ be positive integers.

1. For $G = K_{n_1,n_2}$: $\lambda_1 = -\sqrt{n_1 n_2}$, $\lambda_2 = \ldots = \lambda_{n-1} = 0$, and $\lambda_n = \sqrt{n_1 n_2}$.
2. For $G = K_n$: $\lambda_1 = \ldots = \lambda_{n-1} = -1$, $\lambda_n = n - 1$.

Proof. 1. Since $A$ is diagonalizable with eigenvalues as diagonal entries, the rank of $A$ is equal to the number of non-zero eigenvalues. For $K_{n_1,n_2}$, the rank is 2, so $A$ has two non-zero eigenvalues $\lambda_i$ and $\lambda_j$. Note that the trace of $A$ is both the sum of the eigenvalues and the number of loops in $G$. Hence, $\lambda_i + \lambda_j = 0$, and we conclude that the spectrum of $G$ is $\lambda_1 = -c$, $\lambda_2 = \ldots = \lambda_{n-1} = 0$, $\lambda_n = c$ for some $c > 0$. Let us look at the characteristic polynomial, $\det(A - \lambda I_n) = (-\lambda)^{n-2}(c - \lambda)(-c - \lambda) = (-1)^n (\lambda^n - c^2 \lambda^{n-2})$. Since $\lambda$ appears only on the diagonal of $A - \lambda I_n$, terms in the permutation expansion that contribute to $\lambda^{n-2}$ arise from those permutations that select $n-2$ diagonal elements and 2 non-diagonal elements, $a_{i,j} = a_{j,i} = 1$. Choosing $i$ and $j$ completely determines the permutation, so there are exactly $n_1 n_2$ permutations contributing to $\lambda^{n-2}$, each with negative sign. Consequently, $c^2 = n_1 n_2$ and thus $\lambda_1 = -\sqrt{n_1 n_2}$, $\lambda_n = \sqrt{n_1 n_2}$.

2. For $K_n$, the adjacency matrix is $J - I_n$, where $J$ is the matrix of all ones. Subtracting $c$ from the diagonal of a matrix $M$ shifts its eigenvalues by $-c$, since $Mx = \lambda x$ is equivalent to $(M - c I_n) x = (\lambda - c) x$. By induction on $n$ it can be shown that the spectrum of $J$ consists of a single eigenvalue $n$ and $n-1$ eigenvalues equal to zero. Thus the spectrum of $K_n$ is $\lambda_1 = \ldots = \lambda_{n-1} = -1$, $\lambda_n = n - 1$.

The adjacency matrix is also useful for counting paths of length $k$ in a graph.

Lemma 14.1.5 ([576]). Let $G$ be a multi-digraph, possibly with loops. The $(i,j)$th entry of $A^k$ counts the $i$-$j$-paths of length $k$. The eigenvalues of $A^k$ are $\lambda_i^k$.

Proof. The first claim can be shown by induction on $k$. For the second claim, note that for every eigenvector $x$ with corresponding eigenvalue $\lambda$, $A^k x = A^{k-1}(A x) = \lambda A^{k-1} x = \ldots = \lambda^k x$.

Corollary 14.1.6. 1. $\sum_{i=1}^{n} \lambda_i$ = number of loops in $G$.
2. $\sum_{i=1}^{n} \lambda_i^2 = 2\,|E|$.
3. $\sum_{i=1}^{n} \lambda_i^3 = 6 \cdot$ number of triangles in $G$.
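Corollary 14.1.6 turns the spectrum into a counting device. A short numpy check on $K_4$, which has 6 edges, 4 triangles, and no loops (the example graph is ours):

import numpy as np

A = np.ones((4, 4)) - np.eye(4)        # adjacency matrix of K_4
lam = np.linalg.eigvalsh(A)

print(np.isclose(lam.sum(), 0))            # sum lambda_i   = number of loops = 0
print(np.isclose((lam**2).sum(), 2 * 6))   # sum lambda_i^2 = 2|E| = 12
print(np.isclose((lam**3).sum(), 6 * 4))   # sum lambda_i^3 = 6 * (number of triangles) = 24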

Now we can prove that the converse of the third claim in Lemma 14.1.3 is also true.


Lemma 14.1.7 ([576]). $G$ is bipartite if and only if the eigenvalues of $G$ occur in pairs $\lambda, \lambda'$ such that $\lambda = -\lambda'$.

Proof. One direction of the claim has been shown in the proof of Lemma 14.1.3. For the other direction, note that $\lambda_i = -\lambda_j$ implies $\lambda_i^k = -\lambda_j^k$ for every odd $k$. Since $\mathrm{trace}(A^k) = \sum_{i=1}^{n} \lambda_i^k = 0$ counts the number of cycles of length $k$ in $G$, we infer that in particular there are no odd simple cycles, which means that $G$ is bipartite.

We have seen that it is possible to retrieve certain structural properties of a graph (e.g. number of edges, number of triangles, whether or not it is bipartite) from its spectrum. However, the spectrum does not reflect all of the graph's structure. Take for example the graphs in Figure 14.4.

    Fig. 14.4. Two non-isomorphic graphs with identical adjacency spectrum

Both have eigenvalues $\lambda_1 = -2$, $\lambda_2 = \lambda_3 = \lambda_4 = 0$, and $\lambda_5 = 2$, but they are not isomorphic. Such graphs are called cospectral. Obviously, we cannot even determine from the spectrum if a graph is connected. Nevertheless, this can be achieved by looking at eigenvalues of another graph matrix, the Laplacian.
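The standard pair of 5-vertex graphs with this spectrum is the star $K_{1,4}$ and the disjoint union of $C_4$ with an isolated vertex (presumably the pair shown in Figure 14.4, which the transcript does not reproduce). A numpy sketch confirming cospectrality:

import numpy as np

A1 = np.zeros((5, 5))                  # star K_{1,4}, vertex 0 is the center
A1[0, 1:] = A1[1:, 0] = 1
A2 = np.zeros((5, 5))                  # C_4 on vertices 0..3, vertex 4 isolated
for i in range(4):
    j = (i + 1) % 4
    A2[i, j] = A2[j, i] = 1

print(np.round(np.linalg.eigvalsh(A1), 6))  # [-2. 0. 0. 0. 2.]
print(np.round(np.linalg.eigvalsh(A2), 6))  # [-2. 0. 0. 0. 2.]  -- cospectral, not isomorphic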

    14.1.3 The Laplacian Spectrum

Let $G = (V, E)$ be an undirected multigraph (possibly with loops) with adjacency matrix $A$. Let $D = \mathrm{diag}(d(1), \ldots, d(n))$ be the diagonal matrix of vertex degrees. The Laplacian matrix $L = (l_{i,j})$ is defined as $L := D - A$, so if $G$ is a simple undirected graph, then

$$l_{i,j} = \begin{cases} -1, & \text{if } \{i,j\} \in E \\ d(i), & \text{if } i = j \\ 0, & \text{otherwise.} \end{cases}$$

Another way of defining the Laplacian of an undirected simple graph is the following. Consider an arbitrary orientation $\sigma$ of $G$, i.e. a mapping assigning each edge $e = \{i,j\}$ a direction by indicating whether $i$ or $j$ is to be viewed as the


head of $e$. The incidence matrix $B = (b_{i,e})$ of the oriented graph $(G, \sigma)$ is a $\{0, 1, -1\}$ matrix with rows and columns indexed by the vertices and edges of $G$, respectively, such that

$$b_{i,e} := \begin{cases} 1, & \text{if } i \text{ is the head of } e \\ -1, & \text{if } i \text{ is the tail of } e \\ 0, & \text{otherwise.} \end{cases}$$

It can be shown that, independently of the choice of $\sigma$, $L = B B^\top$. Consequently, we obtain the following result.

Lemma 14.1.8. For each $x \in \mathbb{R}^n$, $x^\top L x = x^\top B B^\top x = \sum_{\{i,j\} \in E} (x_i - x_j)^2$.

Since $A$ is real and symmetric, $L = D - A$ is also symmetric, and hence the Laplacian spectrum consists of $n$ real eigenvalues $\lambda_1(L) \leq \ldots \leq \lambda_n(L)$. Again, we may interpret an eigenvector $x \in \mathbb{R}^n$ as an assignment of weights $\omega: V \to \mathbb{R}$, $i \mapsto x_i$. From this point of view, $\lambda$ is an eigenvalue of $L$ if there exists a non-zero (and not completely negative) weight function $\omega: V \to \mathbb{R}$ such that

$$\lambda \omega(i) = \sum_{j \in N(i)} (\omega(i) - \omega(j))$$

for all $i \in V$. Considering this equation for $i \in V$ with maximum weight, we see that $\lambda \omega(i) = \sum_{j \in N(i)} (\omega(i) - \omega(j)) \geq 0$, so all eigenvalues are non-negative. For $\omega \equiv 1$ we have $\lambda = \lambda \omega(i) = \sum_{j \in N(i)} (\omega(i) - \omega(j)) = 0$, hence the vector $1_n$ is an eigenvector of $L$ with eigenvalue 0.

Lemma 14.1.9 ([247]). A graph $G$ consists of $k$ connected components if and only if $\lambda_1(L) = \ldots = \lambda_k(L) = 0$ and $\lambda_{k+1}(L) > 0$.

Proof. Let $B$ be the incidence matrix of an arbitrary orientation of $G$. For each component $C$ of $G$ define $z(C) \in \mathbb{R}^n$ by

$$z(C)_i := \begin{cases} 1, & \text{if } i \in V(C) \\ 0, & \text{otherwise.} \end{cases}$$

Then $Z := \{z(C);\; C \text{ component of } G\}$ is linearly independent and $L z(C) = B B^\top z(C) = 0_n$. Hence the connected components can be injectively mapped into a linearly independent set of eigenvectors with eigenvalue 0. On the other hand, if $z \in \mathbb{R}^n$ is a vector such that $L z = B B^\top z = 0_n$, then $z^\top B B^\top z = 0$ implies $B^\top z = 0_n$, meaning that $z$ must be constant on each connected component. Thus, $z$ is a linear combination of elements from $Z$, and consequently we have exactly as many components as there are linearly independent eigenvectors corresponding to eigenvalue 0.
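Lemma 14.1.9 yields a simple numerical test for connectedness: count the (numerically) zero Laplacian eigenvalues. A minimal numpy sketch (the tolerance and the example graph, two disjoint triangles, are our choices):

import numpy as np

def num_components(A, tol=1e-9):
    # Multiplicity of the Laplacian eigenvalue 0 (Lemma 14.1.9);
    # tol guards against rounding and is adequate for small graphs.
    L = np.diag(A.sum(axis=1)) - A
    return int(np.sum(np.linalg.eigvalsh(L) < tol))

A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1
print(num_components(A))   # 2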

    The Laplacian is useful for counting the number of spanning trees of a graph.

Theorem 14.1.10 (Matrix-Tree Theorem [135, 247, 427]). The Laplacian matrix $L$ of a graph $G$ is related to spanning trees in the following way.


1. For every $i \in \{1, \ldots, n\}$ the number of spanning trees in $G$ is equal to $|\det(L_i)|$, where $L_i$ is obtained from the Laplacian $L$ by deleting row $i$ and column $i$.
2. Moreover, the number of spanning trees is equal to $\frac{1}{n} \prod_{i \geq 2} \lambda_i(L)$.

While the Laplacian spectrum has the advantage over the adjacency spectrum of indicating the number of connected components of a graph, it fails to identify bipartite structures, as can be seen from the graphs in Figure 14.5, which are cospectral with respect to $L$.

Fig. 14.5. Two cospectral graphs with respect to the Laplacian
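The Matrix-Tree Theorem 14.1.10 can likewise be checked numerically. A small numpy sketch for $K_4$, where Cayley's formula gives $4^{4-2} = 16$ spanning trees (the example graph is ours):

import numpy as np

A = np.ones((4, 4)) - np.eye(4)
L = np.diag(A.sum(axis=1)) - A

L0 = np.delete(np.delete(L, 0, axis=0), 0, axis=1)  # delete row 0 and column 0
print(round(abs(np.linalg.det(L0))))                # 16, by statement 1

lam = np.linalg.eigvalsh(L)                         # lam[0] = 0
print(round(np.prod(lam[1:]) / 4))                  # 16, by statement 2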

    14.1.4 The Normalized Laplacian

A matrix whose spectrum enables us to recognize both bipartite structure and connected components can be obtained by multiplying $L$ from left and right with the diagonal matrix $D^{-1/2}$, where the $i$th entry in the diagonal is $d(i)^{-1/2}$ if $d(i) > 0$ and 0 otherwise. This matrix is called the normalized Laplacian $\mathcal{L} = D^{-1/2} L D^{-1/2}$. For simple graphs, $\mathcal{L} = (\ell_{i,j})$ satisfies

$$\ell_{i,j} = \begin{cases} 1, & \text{if } i = j \text{ and } d(i) > 0 \\ -\frac{1}{\sqrt{d(i)\, d(j)}}, & \text{if } \{i,j\} \in E \\ 0, & \text{otherwise.} \end{cases}$$

$\lambda$ is an eigenvalue of $\mathcal{L}$ if there is a non-zero weight function $\omega: V \to \mathbb{R}$ such that

$$\lambda \omega(i) = \frac{1}{\sqrt{d(i)}} \sum_{j \in N(i)} \left( \frac{\omega(i)}{\sqrt{d(i)}} - \frac{\omega(j)}{\sqrt{d(j)}} \right).$$

Again, $\mathcal{L}$ is symmetric with real-valued entries, and we can order its $n$ eigenvalues in the sequence $\lambda_1(\mathcal{L}) \leq \ldots \leq \lambda_n(\mathcal{L})$.

    The following claims are proved in [125].

Lemma 14.1.11 ([125]). Let $G$ be a graph with normalized Laplacian matrix $\mathcal{L}$.


$$x = \sum_{i=i_0(x)}^{n} \alpha_i v_i.$$

Consequently,

$$\begin{aligned}
x^\top M x &= x^\top M \sum_{i=i_0(x)}^{n} \alpha_i v_i = x^\top \sum_{i=i_0(x)}^{n} \alpha_i M v_i = x^\top \sum_{i=i_0(x)}^{n} \alpha_i \lambda_i v_i = \sum_{i=i_0(x)}^{n} \lambda_i \alpha_i\, x^\top v_i \\
&= \sum_{i=i_0(x)}^{n} \lambda_i \alpha_i^2 \;\geq\; \lambda_{i_0(x)} \sum_{i=i_0(x)}^{n} \alpha_i^2 = \lambda_{i_0(x)}\, x^\top x.
\end{aligned} \qquad (14.3)$$

Obviously, equality holds for $x = v_k$, i.e., for all $k \in \{1, \ldots, n\}$ we have

$$v_k^\top M v_k = \lambda_k\, v_k^\top v_k. \qquad (14.4)$$

We now make the following claim:

For every $k$-dimensional subspace $U \subseteq \mathbb{R}^n$ there exists $x \in U$, $x \neq 0_n$, such that $i_0(x) \geq k$. (14.5)

First, we explain how to prove the theorem from this claim. From (14.3) it follows that for every $k$-dimensional subspace $U$,

$$\max_{x \in U,\, x \neq 0_n} \frac{x^\top M x}{x^\top x} \geq \lambda_k.$$

On the other hand, because $\langle v_1, \ldots, v_k \rangle$ is a $k$-dimensional subspace, it follows from (14.4) that

$$\min_{U \subseteq \mathbb{R}^n,\, \dim(U) = k}\ \max_{x \in U,\, x \neq 0_n} \frac{x^\top M x}{x^\top x} = \lambda_k.$$

We now have to prove (14.5). For a contradiction, assume that for all $x \in U$ we have $i_0(x) < k$. By the definition of $i_0$ this means that for every $x \in U$ there exists an $i < k$ such that $x \not\perp v_i$, or equivalently,¹

$$\langle v_1, \ldots, v_{k-1} \rangle^\perp \cap U = \{0_n\}.$$

On the other hand,

$$\dim \langle v_1, \ldots, v_{k-1} \rangle^\perp + \dim U = n - (k-1) + k = n + 1,$$

a contradiction.

¹ Here, $\langle W \rangle$ denotes the linear hull of a subset $W \subseteq \mathbb{R}^n$, and $W^\perp$ denotes the set of all vectors that are orthogonal to each vector from $W$.


    For later use, we state a simple corollary from this theorem.

Corollary 14.1.14. (a) The largest eigenvalue $\lambda_n$ of a real symmetric matrix $M \in \mathbb{R}^{n \times n}$ satisfies

$$\lambda_n = \max_{x \in \mathbb{R}^n,\, x \neq 0_n} \frac{x^\top M x}{x^\top x}.$$

(b) The second smallest eigenvalue of the Laplacian matrix satisfies

$$\lambda_2(L) = \min_{x \perp 1_n,\, x \neq 0_n} \frac{x^\top L x}{x^\top x}.$$

Later, we will also use another characterization of Laplacian eigenvalues, which we cite without a proof. Let us call a vector $x \in \mathbb{R}^n$ constant if all its entries are the same, i.e., if it is a multiple of $1_n$.

Theorem 14.1.15 (Fiedler [211]).

$$\lambda_2(L) = n \cdot \min \left\{ \frac{\sum_{\{i,j\} \in E} (x_i - x_j)^2}{\sum_{\{i,j\} \in \binom{V}{2}} (x_i - x_j)^2};\; x \in \mathbb{R}^n \text{ non-constant} \right\} \qquad (14.6)$$

$$\lambda_n(L) = n \cdot \max \left\{ \frac{\sum_{\{i,j\} \in E} (x_i - x_j)^2}{\sum_{\{i,j\} \in \binom{V}{2}} (x_i - x_j)^2};\; x \in \mathbb{R}^n \text{ non-constant} \right\} \qquad (14.7)$$

    Now, we turn to the proof of Lemma 14.1.12.

Proof (of Lemma 14.1.12). Since $\lambda_{n+1-k}(L)$ is the $k$th largest eigenvalue of $L$, $\delta - \lambda_{n+1-k}(L)$ is the $k$th smallest eigenvalue of the matrix $\delta I_n - L = A - (D - \delta I_n)$ (with $\delta$ the minimum degree of $G$), which differs from $A$ only on the diagonal, where the non-negative values $d(i) - \delta$ are subtracted. We have

$$\frac{x^\top (A - (D - \delta I_n))\, x}{x^\top x} = \frac{x^\top A x}{x^\top x} - r(x) \quad \text{for } r(x) := \frac{x^\top (D - \delta I_n)\, x}{x^\top x} \geq 0,$$

and Theorem 14.1.13 gives

$$\begin{aligned}
\delta - \lambda_{n+1-k}(L) &= \lambda_k(\delta I_n - L) = \lambda_k(A - (D - \delta I_n)) \\
&= \min_{U \subseteq \mathbb{R}^n,\, \dim(U)=k}\ \max_{x \in U,\, x \neq 0_n} \frac{x^\top (A - (D - \delta I_n))\, x}{x^\top x} \\
&= \min_{U \subseteq \mathbb{R}^n,\, \dim(U)=k}\ \max_{x \in U,\, x \neq 0_n} \left( \frac{x^\top A x}{x^\top x} - r(x) \right) \\
&\leq \min_{U \subseteq \mathbb{R}^n,\, \dim(U)=k}\ \max_{x \in U,\, x \neq 0_n} \frac{x^\top A x}{x^\top x} = \lambda_k(A).
\end{aligned}$$

The other inequality is obtained in a similar way.

Figure 14.6 shows two non-isomorphic graphs that are cospectral with respect to all three matrices.


Fig. 14.6. Two graphs that are cospectral with respect to $A$, $L$, and $\mathcal{L}$

    14.1.6 Examples

Table 14.1 lists the spectrum of the adjacency matrix $A$, the Laplacian $L$, and the normalized Laplacian $\mathcal{L}$ for some elementary graph classes. All graphs are assumed to have $n$ vertices.

    Table 14.1. Spectra of some elementary graph classes

graph class | spectrum($A$) | spectrum($L$) | spectrum($\mathcal{L}$)

simple path $G = P_n$ | $2\cos\frac{\pi k}{n+1}$, $k \in \{1,\ldots,n\}$ | $2 - 2\cos\frac{\pi (k-1)}{n}$, $k \in \{1,\ldots,n\}$ | $1 - \cos\frac{\pi (k-1)}{n-1}$, $k \in \{1,\ldots,n\}$

simple cycle $G = C_n$ | $2\cos\frac{2\pi k}{n}$, $k \in \{1,\ldots,n\}$ | $2 - 2\cos\frac{2\pi k}{n}$, $k \in \{1,\ldots,n\}$ | $1 - \cos\frac{2\pi k}{n}$, $k \in \{1,\ldots,n\}$

star $G = K_{1,n-1}$ | $\sqrt{n-1}$, $-\sqrt{n-1}$, $0$ ($n-2$ times) | $0$, $n$, $1$ ($n-2$ times) | $0$, $2$, $1$ ($n-2$ times)

$G = K_{n_1,n_2}$ | $\sqrt{n_1 n_2}$, $-\sqrt{n_1 n_2}$, $0$ ($n-2$ times) | $0$, $n$, $n_1$ ($n_2-1$ times), $n_2$ ($n_1-1$ times) | $0$, $2$, $1$ ($n-2$ times)

$G = K_n$ | $-1$ ($n-1$ times), $n-1$ | $0$, $n$ ($n-1$ times) | $0$, $\frac{n}{n-1}$ ($n-1$ times)

    14.2 Numerical Methods

To use the spectrum of a graph for analysis, we have to compute it (or parts of it) first. What methods are available? What are their running times and other characteristics?


14.2.1 Methods for Computing the Whole Spectrum of Small Dense Matrices

It is not at all obvious how to compute the spectrum of a matrix $M$ efficiently, since already a straightforward evaluation of the characteristic polynomial as $\det(M - \lambda I_n)$ takes $O(n!)$ steps. A better strategy is to utilize the fact that as a graph matrix, $M$ is real and (in case of undirected graphs) symmetric and thus can be transformed into a diagonal matrix by means of a similarity transformation $M \mapsto P^{-1} M P$. If we are interested only in eigenvalues, not eigenvectors, it is enough to transform the matrix $M$ to be triangular, with all elements below (or above) the diagonal equal to zero. In this case the diagonal elements are already the eigenvalues.

There is a two-stage technique for implementing the diagonalization strategy. In the first stage we iteratively approximate $P$ (and $P^{-1}$) as a product of certain atomic transformations $P_i$ designed for zeroing a particular off-diagonal element (Jacobi transformation [482]) or a whole particular row or column (Householder transformation [482], elimination method [482]). We stop after $O(n^3)$ steps with a matrix $P$ such that $M_1 = (m_{i,j}) := P^{-1} M P$ has tridiagonal form (i.e., $m_{i,j} = 0$ whenever $|i - j| > 1$). Now the second stage starts, where we perform a QL- (or QR-) decomposition: the basic idea is that any real matrix $M$ can be decomposed in the form $M = QL$ (or $QR$), such that $Q$ is orthogonal (i.e., $Q^\top = Q^{-1}$) and $L$ ($R$) is lower (upper) triangular. Writing these factors in opposite order we get $M' := LQ = Q^\top Q L Q = Q^\top M Q$, where properties such as symmetry and tridiagonal form are preserved. The QL-algorithm consists of a sequence of transformations

$$M_i =: Q_i L_i, \qquad M_{i+1} := L_i Q_i = Q_i^\top M_i Q_i,$$

and relies on the following theorem [535].

Theorem 14.2.1 ([535]). 1. If $M$ has eigenvalues of different absolute value $|\lambda_i|$ then $M_s$ converges to lower triangular form as $s \to \infty$.
2. If $M$ has an eigenvalue $|\lambda_i|$ of multiplicity $p$ then $M_s$ converges to lower triangular form as $s \to \infty$, except for a diagonal block matrix of order $p$, whose eigenvalues converge to $\lambda_i$.

Note that if $M$ has an eigenvalue with multiplicity greater than one, then the second part of Theorem 14.2.1 allows us to split the matrix into submatrices that can be diagonalized separately. For tridiagonal matrices, one iteration of the QL-algorithm can be performed in $O(n)$ steps. With the technique of implicit shifts [482] a reasonably good convergence is achieved in $O(n)$ iterations, resulting in a total number of $O(n^2)$ steps for the second stage. Thus the overall complexity of computing all eigenvalues with good precision is $O(n^3)$.
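In practice one rarely implements this two-stage scheme by hand: LAPACK's symmetric eigensolvers, which numpy wraps, also begin with a Householder reduction to tridiagonal form (the second stage being a shifted QL/QR iteration or a divide-and-conquer variant). A minimal usage sketch, with a random graph as our illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = (A + A.T).astype(float)      # adjacency matrix of a random simple graph

lam = np.linalg.eigvalsh(A)      # all n eigenvalues, O(n^3) overall
print(lam[0], lam[-1])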


14.2.2 Methods for Computing Part of the Spectrum of Large Sparse Matrices

When our given matrix $M$ is very large, the computational and storage costs required for the diagonalization strategy of the previous section become prohibitive. However, in many situations $M$ is sparse and it suffices to determine a small subset of (extremal) eigenvalues. Suppose we are interested in the largest eigenvalue of $M$. The eigenvalue equation $Mx = \lambda x$ is clearly equivalent to $\lambda = \frac{x^\top M x}{x^\top x}$. In fact, we have observed in Corollary 14.1.14 that

$$\lambda_n = \max \left\{ \frac{x^\top M x}{x^\top x};\; x \in \mathbb{R}^n \setminus \{0_n\} \right\}.$$

This suggests a simple algorithm: choose an arbitrary starting vector $x_1$ with $x_1^\top x_1 = 1$, and then follow the direction of steepest ascent to successively obtain an improved approximation to $\lambda_n$. The Lanczos algorithm is a run-time and storage cost efficient algorithm that uses this idea to approximate extremal eigenvalues of $M$. It proceeds as follows: a given initial vector $x_1$ is normalized to length one. Then at each step $i$ an orthonormal basis $(x_1, x_2, \ldots, x_i)$ for the space spanned by $(x_1, M x_1, \ldots, M^{i-1} x_1)$ is constructed (this space is called the Krylov space). Let $X_i$ denote the $n \times i$ matrix with $x_1, x_2, \ldots, x_i$ as column vectors. The matrix $T = X_i^\top M X_i$ has tridiagonal form. Its eigenvalues, which are easy to calculate (see [2]), provide approximations to $i$ eigenvalues of $M$. The method favors convergence to eigenvalues in the outermost part of the spectrum of $M$. The process is restarted every $k$ steps for some fixed $k \ll n$ until sufficiently good convergence is achieved. We state the Lanczos method in its simplest form as Algorithm 29.

Algorithm 29: The Lanczos algorithm

1. Initialization: Choose the number of steps $k$, the desired number of eigenvalues $r$, and an initial vector $x_1$; let $\beta_0 := \sqrt{x_1^\top x_1}$, $x_1 := x_1 / \beta_0$ (with $x_0 := 0_n$).
2. Lanczos steps:
   for $i = 1$ to $k$ do
     (i) $y := M x_i$
     (ii) $\alpha_i := x_i^\top y$
     (iii) $x_{i+1} := y - \alpha_i x_i - \beta_{i-1} x_{i-1}$
     (iv) $\beta_i := \sqrt{x_{i+1}^\top x_{i+1}}$
     (v) $x_{i+1} := x_{i+1} / \beta_i$
   Set $X_i := \mathrm{Mat}(x_1, \ldots, x_i)$.
3. Eigenvalue computation: Compute the eigenvalues of $T := X_i^\top M X_i$.
4. Convergence test and restart: If the first $r$ columns of $T$ satisfy the convergence criteria, then accept the corresponding eigenvalues and stop. Otherwise restart with a suitable new $x_1$.
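A direct transcription of steps 1-3 of Algorithm 29 might look as follows. This is a bare sketch: no restarts (step 4) and no reorthogonalization, so rounding errors limit the usable number of steps $k$; the random test matrix is our choice.

import numpy as np

def lanczos(M, k, seed=1):
    # Bare Lanczos iteration; returns the eigenvalues of the tridiagonal
    # matrix T, which approximate the outermost eigenvalues of M.
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    x_prev = np.zeros(n)
    alpha, beta = [], [0.0]
    for _ in range(k):
        y = M @ x
        a = x @ y
        y = y - a * x - beta[-1] * x_prev
        b = np.linalg.norm(y)
        alpha.append(a)
        beta.append(b)
        x_prev, x = x, y / b
    T = np.diag(alpha) + np.diag(beta[1:-1], 1) + np.diag(beta[1:-1], -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(300, 300))
A = np.triu(A, 1)
A = (A + A.T).astype(float)
print(lanczos(A, 40)[-1], np.linalg.eigvalsh(A)[-1])   # both close to lambda_n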


By replacing $M$ with $(M - \mu I_n)^{-1}$ in step 2(i) of the algorithm, this scheme is capable of approximating eigenvalues in the vicinity of any given value $\mu$. In general, the minimum number of Lanczos iterations necessary to compute a desired set of eigenvalues is unknown. In practice, a fast convergence can be achieved via a restarting scheme (see [591]). Farkas et al. [198] were able to compute the spectrum of graphs with up to 40,000 vertices and 200,000 edges with the thick-restart version of the Lanczos algorithm. The Arnoldi method differs from the Lanczos algorithm in replacing steps (ii)-(iv) with

(ii) $h_{j,i} := x_j^\top y$ for all $j \in \{1, \ldots, i\}$
(iii) $x_{i+1} := y - \sum_{j=1}^{i} x_j h_{j,i}$
(iv) $h_{i+1,i} := \sqrt{x_{i+1}^\top x_{i+1}}$, $\beta_i := h_{i+1,i}$.

For symmetric matrices, Arnoldi's and Lanczos's methods are mathematically equivalent, but the Lanczos algorithm uses fewer arithmetic operations per step by explicitly taking advantage of the symmetry of $M$. The advantage of Arnoldi's method is that it can be applied to asymmetric matrices. In that case, $H = X_i^\top M X_i$ is an upper Hessenberg matrix that can be reduced to block triangular form, allowing for an efficient computation of eigenvalues (see [509]).

    14.3 Subgraphs and Operations on Graphs

What can the spectrum tell us about subgraphs in a graph? Do eigenvalues of subgraphs show up in the spectrum? Can we conclude that a certain graph is not a subgraph (or at least not an induced subgraph) by looking at the spectrum? What happens to the spectra of two graphs that are joined by taking the Cartesian product or the sum?

In this section, we will show some directions for answering these questions. We only consider the adjacency spectrum here, although many results also hold for the Laplacian spectra.

    14.3.1 Interlacing Theorem

How do induced subgraphs manifest themselves in the spectrum? Obviously, not every induced subgraph inserts its eigenvalues, or some of its eigenvalues, into the spectrum of the whole graph. For example, the complete graph $K_2$ on two vertices has eigenvalues $-1$ and $1$. But many graphs containing $K_2$ as an induced subgraph, i.e., an edge, do not have $-1$ or $1$ in their spectrum. See Table 14.1 in Section 14.1.6 for examples.

However, there is an interlacing of the eigenvalues of induced subgraphs. Let $G = (V, E)$ be a graph on $n$ vertices and let $H$ be an induced subgraph of $G$ with $n-1$ vertices. (In other words: $H$ is obtained from $G$ by removing one vertex.) If, in our usual notation, $\lambda_1 \leq \ldots \leq \lambda_n$ are the eigenvalues of the adjacency matrix of $G$ and $\mu_1 \leq \ldots \leq \mu_{n-1}$ are those of $H$, we have

$$\lambda_i \leq \mu_i \leq \lambda_{i+1} \quad \forall i \in \{1, \ldots, n-1\},$$


i.e., between two consecutive eigenvalues of $G$ there lies exactly one eigenvalue of $H$. Consequently, if $G$ has an eigenvalue with multiplicity $k$, then $H$ has this eigenvalue with multiplicity at least $k-1$.

For an induced subgraph on $m$ vertices, by induction (over the number of vertices removed to obtain $H$ from $G$) the interlacing generalizes to

$$\lambda_i \leq \mu_i \leq \lambda_{i+(n-m)} \quad \forall i \in \{1, \ldots, m\}. \qquad (14.8)$$

We will now prove an even more general result than this, from which we will get (14.8) directly.

Theorem 14.3.1. Let $n, m \in \mathbb{N}$ and $S \in \mathbb{R}^{n \times m}$ such that $S^\top S = \mathrm{Id}_m$. Let $A \in \mathbb{R}^{n \times n}$ be symmetric and $B := S^\top A S$. Then, for the eigenvalues $\lambda_1 \leq \ldots \leq \lambda_n$ of $A$ and the eigenvalues $\mu_1 \leq \ldots \leq \mu_m$ of $B$ we have the interlacing property (14.8), i.e.,

$$\lambda_i \leq \mu_i \leq \lambda_{i+(n-m)} \quad \forall i \in \{1, \ldots, m\}. \qquad (14.9)$$

The interlacing property of the adjacency spectrum follows from this. To see this, let $A$ be the adjacency matrix of $G$. The adjacency matrix of an induced subgraph $H$ is a principal submatrix of $A$, i.e., a matrix that is obtained by deleting the $i$th row and $i$th column of $A$ for every vertex $i \in V \setminus V(H)$. Such a principal submatrix can be obtained from $A$ in the same way as $B$ is obtained from $A$ in the above theorem. If $i_1, \ldots, i_k$ are the vertices removed from $G$ to get $H$, we have to choose $S$ to be the identity matrix with the $i_j$th column deleted, $j = 1, \ldots, k$.

Proof (of Theorem 14.3.1). The matrix $S$ defines an injective mapping from $\mathbb{R}^m$ to $\mathbb{R}^n$. For a subset $U \subseteq \mathbb{R}^m$ we as usual denote by $S(U)$ the image of $U$ under that mapping, i.e.,

$$S(U) := \{S u;\; u \in U\}.$$

If $U \subseteq \mathbb{R}^m$ is an $i$-dimensional subspace ($i \leq m$), then $S(U)$ is an $i$-dimensional subspace of $\mathbb{R}^n$, because $S$ is injective.

Remember the characterization of eigenvalues given in Theorem 14.1.13. For every $i \in \{1, \ldots, m\}$ we have

$$\begin{aligned}
\lambda_i &= \min_{U \subseteq \mathbb{R}^n,\, \dim(U)=i}\ \max_{x \in U,\, x \neq 0} \frac{x^\top A x}{x^\top x} \leq \min_{U \subseteq S(\mathbb{R}^m),\, \dim(U)=i}\ \max_{x \in U,\, x \neq 0} \frac{x^\top A x}{x^\top x} \\
&= \min_{U \subseteq \mathbb{R}^m,\, \dim(U)=i}\ \max_{x \in S(U),\, x \neq 0} \frac{x^\top A x}{x^\top x} = \min_{U \subseteq \mathbb{R}^m,\, \dim(U)=i}\ \max_{x \in U,\, x \neq 0} \frac{(Sx)^\top A\, (Sx)}{(Sx)^\top (Sx)} \\
&= \min_{U \subseteq \mathbb{R}^m,\, \dim(U)=i}\ \max_{x \in U,\, x \neq 0} \frac{x^\top (S^\top A S)\, x}{x^\top x} = \min_{U \subseteq \mathbb{R}^m,\, \dim(U)=i}\ \max_{x \in U,\, x \neq 0} \frac{x^\top B x}{x^\top x} = \mu_i,
\end{aligned}$$

where we used $(Sx)^\top (Sx) = x^\top S^\top S x = x^\top x$. This is the first inequality of (14.9).


Applying the same argument to $-A$ instead of $A$, we get for every $k \in \{0, \ldots, m-1\}$ that $-\lambda_{n-k} \leq -\mu_{m-k}$, which means

$$\mu_{m-k} \leq \lambda_{n-k}.$$

Setting $k := m - i$ yields the second inequality of (14.9).

From the Interlacing Theorem we immediately get the following corollary.

Corollary 14.3.2. Let $G$ and $H$ be two graphs with eigenvalues $\lambda_1 \leq \ldots \leq \lambda_n$ and $\mu_1 \leq \ldots \leq \mu_m$, respectively. If $\mu_1 < \lambda_1$ or $\lambda_n < \mu_m$, then $H$ does not occur as an induced subgraph of $G$.

For example, let $G$ be a graph with all eigenvalues smaller than 2. The graphs $C_j$, $j \in \mathbb{N}$, all have 2 as their largest eigenvalue (see Table 14.1 in Section 14.1.6). Hence, $G$ does not contain a cycle as an induced subgraph. Because in every cycle we can find a simple cycle, in which in turn we can find an induced simple cycle, $G$ even is cycle-free.

We also know that $G$ has no vertices of degree larger than 3. Otherwise, $G$ would have a $K_{1,j}$, $j \geq 4$, as an induced subgraph, which has $\sqrt{j} \geq 2$ (as largest eigenvalue) in its spectrum.

    14.3.2 Grafting

Let us now approach the question about the role of subgraphs from a different direction. Suppose you have a graph $G$ and another graph $H$ and would like to modify $G$ in a way that gives $G$ some of the eigenvalues of $H$.

For a disconnected graph the spectrum is the union of the spectra of its components. This is easy to see. It also gives us a straightforward method for accomplishing our goal in the given setup: simply add $H$ to $G$ as a separate component. However, this is not a suitable approach in case one has to preserve the connectivity of the graph.

Let $\lambda$ be an eigenvalue of $H$ which we would like to add to the spectrum of $G$. Let us first consider the special case that the eigenvector $x$ corresponding to $\lambda$ has an entry equal to zero, say, $x_{i_0} = 0$. Think of the graph $G'$ as the union of $G$ and $H$ where the vertex $i_0$ of $H$ has been identified with some arbitrary vertex of $G$. Formally, we define $G'$ from $G = (V(G), E(G))$ and $H = (V(H), E(H))$ by picking a vertex $j_0 \in V(G)$ and setting

$$V(G') := V(G) \cup (V(H) \setminus \{i_0\}) \quad \text{and}$$
$$E(G') := E(G) \cup E(H - i_0) \cup \{\{j_0, i\};\; i \in V(H),\; \{i_0, i\} \in E(H)\}.$$

We say that $H$ is grafted onto $G$ along $i_0$ and $j_0$.

To get an eigenvector for $\lambda$, assign 0 to all vertices of $G$ in $G'$. To the vertices of $H - i_0$ in $G'$ assign the values given by the eigenvector $x$. Using the combinatorial interpretation of eigenvectors (Remark 14.1.2), one easily verifies that this yields an eigenvector of $G'$ for eigenvalue $\lambda$. See Figure 14.7 for an example.


Fig. 14.7. A triangle grafted onto a star of five vertices (weights 0 on the star, 0 on the identified vertex, and $1, -1$ on the two remaining triangle vertices). The new graph has an eigenvalue $-1$, which the star alone did not have

But what if we do not have an eigenvector of $H$ for $\lambda$ with an entry equal to zero? We can still do a similar trick. We pick a vertex $i_0 \in V(H)$ and make two copies $H^+$ and $H^-$ of $H$. The copies of a vertex $i \in V(H)$ are called $i^+$ and $i^-$, respectively. We then take a new vertex $i_1$ and connect $H^+$ with $H^-$ via two new edges $\{i_0^+, i_1\}$ and $\{i_1, i_0^-\}$. Call the resulting graph $H'$.

Let $\lambda$ be an eigenvalue of $H$ and $x$ a corresponding eigenvector. Then the following vector $x'$ is an eigenvector of $H'$ with the same eigenvalue $\lambda$:

$$x'_{i^+} := x_i \quad \text{and} \quad x'_{i^-} := -x_i \quad \forall i \in V(H), \qquad \text{and} \qquad x'_{i_1} := 0.$$

Now, $H'$ can be grafted onto $G$ along $i_1$ and an arbitrary vertex $j_0 \in V(G)$. We call such a construction a symmetric graft. Note that if $H$ is a tree, $H'$ will be a tree as well. Symmetric grafts of trees play an important role in the analysis of spectra of random graphs [49]; see also the notes about sparse random graphs in 14.5.2.

As an example, consider the path on three vertices, $P_3$. This graph has an eigenvalue $\sqrt{2}$ with eigenvector $(1/\sqrt{2},\, 1,\, 1/\sqrt{2})^\top$. Take the middle vertex as $i_0$. Figure 14.8 shows the symmetric construction ready to be grafted along $i_1$.

Finally, we remark that grafting along one vertex can be extended in a natural way to grafting along several vertices.
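The $P_3$ example is easy to verify numerically. The following numpy sketch builds the symmetric construction $H'$ of Figure 14.8 and then grafts it onto a triangle (the triangle and all vertex numberings are our illustration choices):

import numpy as np

def adj(n, edges):
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return A

# H': two copies of P3 (0-1-2 and 3-4-5) joined to the new vertex i1 = 6
# via the two middle vertices.
H_prime = adj(7, [(0, 1), (1, 2), (3, 4), (4, 5), (1, 6), (4, 6)])
x = np.array([1/np.sqrt(2), 1.0, 1/np.sqrt(2),
              -1/np.sqrt(2), -1.0, -1/np.sqrt(2), 0.0])
print(np.allclose(H_prime @ x, np.sqrt(2) * x))   # True: sqrt(2) is an eigenvalue of H'

# Graft H' along i1 onto a triangle with j0 = 0: vertex i1 disappears and
# triangle vertex 0 inherits its edges to the two middle vertices.
G = adj(9, [(0, 1), (1, 2), (0, 2),      # the triangle, j0 = 0
            (3, 4), (4, 5),              # copy H+
            (6, 7), (7, 8),              # copy H-
            (0, 4), (0, 7)])             # edges contributed by i1
y = np.array([0, 0, 0,
              1/np.sqrt(2), 1.0, 1/np.sqrt(2),
              -1/np.sqrt(2), -1.0, -1/np.sqrt(2)])
print(np.allclose(G @ y, np.sqrt(2) * y))         # True: sqrt(2) survives the graft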

    14.3.3 Operations on Graphs

Fig. 14.8. A symmetric construction from two copies of $P_3$, ready to be grafted (weights $1/\sqrt{2}, 1, 1/\sqrt{2}$ on $H^+$; $-1/\sqrt{2}, -1, -1/\sqrt{2}$ on $H^-$; 0 on $i_1$)

Consider two graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$. The sum $G_1 + G_2$ is a graph on $V_1 \times V_2$ where two vertices $(i_1, i_2), (j_1, j_2) \in V_1 \times V_2$ are connected if either (but not both!) $\{i_1, j_1\} \in E_1$ (and $i_2 = j_2$) or $\{i_2, j_2\} \in E_2$ (and $i_1 = j_1$). On the other hand, letting $(i_1, i_2)$ and $(j_1, j_2)$ share an edge if and only if $\{i_1, j_1\} \in E_1$ and $\{i_2, j_2\} \in E_2$ defines the Cartesian product graph $G_1 \times G_2$. Figure 14.9 depicts the sum and the product of a simple path on four vertices with itself.

    Fig. 14.9. The sum and the Cartesian product of a 4-vertex simple path with itself

It is easy to compute the eigenvalues of $G_1 + G_2$ and $G_1 \times G_2$ from the spectra of $G_1$ and $G_2$.

Lemma 14.3.3. (a) $\mathrm{spectrum}(G + H) = \mathrm{spectrum}(G) + \mathrm{spectrum}(H)$ (the multiset of all pairwise sums).
(b) $\mathrm{spectrum}(G \times H) = \mathrm{spectrum}(G) \cdot \mathrm{spectrum}(H)$ (the multiset of all pairwise products).

More generally, the spectrum of any non-complete extended p-sum of graphs is determined by the individual spectra.

Definition 14.3.4 ([135]). Let $p \geq 2$ and let $B \subseteq \{0,1\}^p \setminus \{0_p\}$. The non-complete extended p-sum of graphs $G_1 = (V_1, E_1), \ldots, G_p = (V_p, E_p)$ with basis $B$ is the graph $\mathrm{NEpS}(G_k)_{k=1}^p$ with vertex set $V_1 \times \cdots \times V_p$, in which two vertices $(i_1, \ldots, i_p), (j_1, \ldots, j_p)$ are adjacent if and only if there exists $\beta \in B$ such that $i_k = j_k$ whenever $\beta_k = 0$ and $\{i_k, j_k\} \in E_k$ whenever $\beta_k = 1$.


Theorem 14.3.5 ([135]). Let $p \geq 2$. For each $k \in \{1, \ldots, p\}$ let $G_k$ be a graph on $n_k$ vertices with spectrum $\lambda^k_1, \ldots, \lambda^k_{n_k}$. Then

$$\mathrm{spectrum}(\mathrm{NEpS}(G_k)_{k=1}^p) = \left\{ \sum_{\beta \in B} (\lambda^1_{i_1})^{\beta_1} \cdots (\lambda^p_{i_p})^{\beta_p};\; (i_1, \ldots, i_p) \in I \right\},$$

where $I := \{1, \ldots, n_1\} \times \cdots \times \{1, \ldots, n_p\}$.

Note that the sum and the Cartesian product of graphs are NEpS with $p = 2$ and basis $B = \{(1,0), (0,1)\}$ resp. $B = \{(1,1)\}$.
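In matrix language the sum corresponds to the Kronecker sum $A_1 \otimes I + I \otimes A_2$ and the Cartesian product (in the chapter's terminology) to the Kronecker product $A_1 \otimes A_2$, which makes Lemma 14.3.3 easy to check numerically. A sketch with $K_3$ and $K_2$ as arbitrary test graphs:

import numpy as np

A1 = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])   # K_3
A2 = np.array([[0., 1.], [1., 0.]])                         # K_2

A_sum = np.kron(A1, np.eye(2)) + np.kron(np.eye(3), A2)     # the sum G_1 + G_2
A_prod = np.kron(A1, A2)                                    # the Cartesian product

l1, l2 = np.linalg.eigvalsh(A1), np.linalg.eigvalsh(A2)
print(np.allclose(np.linalg.eigvalsh(A_sum),
                  np.sort(np.add.outer(l1, l2).ravel())))        # pairwise sums
print(np.allclose(np.linalg.eigvalsh(A_prod),
                  np.sort(np.multiply.outer(l1, l2).ravel())))   # pairwise products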

    14.4 Bounds on Global Statistics

Certain eigenvalues, especially the extreme eigenvalues, give bounds for global statistics. Recall from Section 11.7 in Chapter 11 that a global statistic assigns to each graph (from some class of graphs) a single value. These statistics are sometimes also called graph parameters.

We will study a selection of eigenvalue bounds on graph parameters.

    Average Degree

Recall that the degree $d(i)$ of a vertex $i$ is the number of edges incident with $i$. Denote the average degree by $\bar{d}$, i.e.,

$$\bar{d} := \frac{1}{n} \sum_{i \in V} d(i). \qquad (14.10)$$

The average degree is related to the largest eigenvalue of the adjacency matrix, as seen in the following observation.

Lemma 14.4.1. Let $G$ be a graph and $\bar{d}$ its average degree. Then $\bar{d} \leq \lambda_n$.

Proof. Let $x := 1_n$ be the $n$-dimensional vector with all entries equal to 1. Then by Corollary 14.1.14 we have

$$\lambda_n \geq \frac{x^\top A x}{x^\top x} = \frac{\sum_{i \in V} d(i)}{n} = \bar{d}.$$


    Diameter and Mean Distance

We only cite some results in this section. Recall that the diameter is the largest distance between two vertices. Eigenvalues of the Laplacian provide bounds on the diameter of a connected graph.

Theorem 14.4.2 ([126, 426]). Let $n > 1$. Then

$$\mathrm{diam}(G) \leq \left\lceil \frac{\cosh^{-1}(n-1)}{\cosh^{-1}\left(\frac{\lambda_n(L) + \lambda_2(L)}{\lambda_n(L) - \lambda_2(L)}\right)} \right\rceil + 1$$

and

$$\mathrm{diam}(G) \geq \frac{4}{n\, \lambda_2(L)}.$$

Recall that the mean distance $\bar{\rho}(G)$ is the average over all distances between distinct vertices.

Theorem 14.4.3 ([426]). The mean distance is bounded by Laplacian eigenvalues in the following way:

$$\frac{1}{n-1} \left( \frac{2}{\lambda_2(L)} + \frac{n-2}{2} \right) \;\leq\; \bar{\rho}(G) \;\leq\; \frac{n}{n-1} \left\lceil \frac{\Delta + \lambda_2(L)}{4\, \lambda_2(L)} \ln(n-1) \right\rceil.$$

    Connectivity

We already know that $\lambda_2(L)$ is non-zero if and only if the graph is connected. Fiedler [210] noticed more relations between $\lambda_2(L)$ and connectivity properties. $\lambda_2(L)$ is therefore also called the algebraic connectivity. We only cite his results.

Theorem 14.4.4 ([210]). Let $G$ be a graph and $\gamma = \pi/n$. Let $\kappa(G)$ and $\eta(G)$ denote the minimum number of nodes and edges, respectively, that have to be removed in order to make $G$ disconnected. Then,

1. $\lambda_2(L) \leq \kappa(G) \leq \eta(G)$,
2. $\lambda_2(L) \geq 2\, \eta(G)\, (1 - \cos \gamma)$, and
3. $\lambda_2(L) \geq 2\, (\cos \gamma - \cos 2\gamma)\, \eta(G) - 2 \cos \gamma\, (1 - \cos \gamma)\, \Delta$.

    Isoperimetric Number

The isoperimetric number is defined as follows:

$$i(G) := \min \left\{ \frac{|E(X, Y)|}{\min\{|X|, |Y|\}};\; \emptyset \neq X \subsetneq V,\; Y = V \setminus X \right\},$$

where $E(X, Y)$ denotes the set of edges connecting $X$ with $Y$. The definition characterizes $i(G)$ as the size of a smallest possible edge-cut separating as large a subset $X$ as possible from the remaining larger part $Y$ (we assume $|X| \leq |Y|$


w.l.o.g.). Hence, $i(G)$ is a measure of how many edges we have to remove from a network to isolate a large portion of nodes. This relates $i(G)$ to both network connectivity and the min-bisection problem, which has important applications in VLSI design.

Obviously, by choosing $X = \{v\}$ with $d(v) = \delta(G)$, we see that

$$i(G) \leq \delta(G). \qquad (14.11)$$

If $G$ is disconnected, we have $i(G) = 0$.

A well-known relationship between the isoperimetric number and $\lambda_2(L)$ states that $i(G) \geq \frac{\lambda_2(L)}{2}$. For the case of $\lambda_2(L) \leq 2$, this bound is outperformed by the following result.

Theorem 14.4.5 ([176]). The isoperimetric number is bounded from below by Laplacian eigenvalues in the following way:

$$i(G) \geq \min \left\{ 1,\; \frac{\lambda_2(L)\, \lambda_n(L)}{2\, (\lambda_n(L) + \lambda_2(L) - 2)} \right\}.$$

We will now look at an upper bound following [425].

Proposition 14.4.6. Let $G$ be a graph with maximum degree $\Delta$ that is not a complete graph. Then,

$$\lambda_2(L) \leq \Delta.$$

Proof. If $G$ is disconnected, then $\lambda_2(L) = 0 \leq \Delta$. So let $G$ be connected. Then $G$ contains a path on three vertices $P_3$ as an induced subgraph. (Otherwise, $G$ would be complete: take two vertices $i$ and $j$ and by induction on the length of the shortest $i$-$j$ path show that there exists an edge $\{i,j\}$.) By the examples in Table 14.1 in Section 14.1.6 we know that $\lambda_2(A(P_3)) = 0$. Hence, by the Interlacing Theorem, see (14.8), we get

$$0 = \lambda_2(A(P_3)) \leq \lambda_{n-1}(A).$$

We also know from Lemma 14.1.12 that

$$\lambda_2(L) \leq \Delta - \lambda_{n-1}(A) \leq \Delta.$$

With the help of the preceding proposition, we can prove:

Theorem 14.4.7. Let $G = (V, E)$ be a graph not equal to $K_1$, $K_2$, or $K_3$ with $n$ vertices and $m$ edges. Then

$$i(G) \leq \sqrt{\lambda_2(L)\, (2\Delta - \lambda_2(L))}. \qquad (14.12)$$

The proof takes up several pages and includes some more technical details. Nevertheless, it is a nice proof and an interesting example of how we can obtain non-trivial bounds from Laplacian eigenvalues. It is therefore presented in an extra section 14.4.1 at the end of this section.


    Expansion

Many times a good vertex expansion is a desired property of a network. One common definition capturing this property, that all small sets of nodes have large neighborhoods, is the following:

$$c_V := \min \left\{ \frac{|N(S) \setminus S|}{|S|};\; S \subseteq V,\; |S| \leq \frac{n}{2} \right\}.$$

A large vertex expansion $c_V$ is crucial, e.g., for the construction of parallel sorting networks, superconcentrators, fault-tolerant networks, and networks for simulating random generators, the latter being an important means for derandomization via small sample spaces. Using probabilistic methods, one can generate a random network that will have good expansion properties almost surely, but it is hard to measure the expansion properties from the definition. Therefore the following bounds are very useful.

Theorem 14.4.8 ([19]). $\frac{\lambda_2(L)}{2 + \lambda_2(L)} \leq c_V = O(\sqrt{\lambda_2(L)})$.

A graph with expansion at least $\alpha$ is called an $\alpha$-magnifier. Let us prove a weak version of Theorem 14.4.8 for regular graphs.

Theorem 14.4.9 ([576]). A $d$-regular graph is a $\frac{\lambda_2(L)}{2d}$-magnifier.

Proof. Let $G = (V, E)$ be a $d$-regular graph on $V = \{1, \ldots, n\}$. Take a subset $S$ of $V$ of cardinality $s \leq n/2$. Define a vector $x \in \{s-n, s\}^n$ by

$$x_i := \begin{cases} s - n, & i \in S \\ s, & i \in V \setminus S. \end{cases}$$

By definition of $L$,

$$x^\top L x = x^\top (D - A)\, x = d \sum_{i=1}^{n} x_i^2 - \sum_{i=1}^{n} x_i \sum_{\{i,j\} \in E} x_j = \sum_{\{i,j\} \in E} (x_i - x_j)^2 = n^2\, |E(S, V \setminus S)|,$$

where $E(S, V \setminus S)$ is the set of edges with exactly one endpoint in $S$. Since $\sum_i x_i = s(s-n) + (n-s)s = 0$, we see that $x$ is perpendicular to the eigenvector $1_n$ of $L$, which corresponds to the eigenvalue 0. From Corollary 14.1.14 we conclude that

$$n^2\, |E(S, V \setminus S)| = x^\top L x \geq \lambda_2(L)\, x^\top x = \lambda_2(L)\, s\, n\, (n-s),$$

and consequently

$$|N(S) \setminus S| \geq \frac{|E(S, V \setminus S)|}{d} \geq \frac{\lambda_2(L)\, s\, (n-s)}{d\, n} \geq \frac{\lambda_2(L)\, |S|}{2d}$$

as claimed.

    For results on the closely related Cheeger constants see [125].

    Routing Number

Consider a set of (distinguishable) pebbles. Initially, each pebble is located on a distinct vertex of the connected graph $G = (V, E)$, $|V| = n$. We are given a permutation $\pi$ on $V$. The goal is to move each pebble which at the start is on vertex $i \in V$ to vertex $\pi(i)$. This has to be done in a number of steps of the following form. At each step, choose a set of disjoint edges $E_0 \subseteq E$ and interchange the pebbles at the endpoints of each edge in $E_0$.


The minimum number of steps required to accomplish the goal is denoted by $\mathrm{rt}(G, \pi)$. The routing number $\mathrm{rt}(G)$ of $G$ is

$$\mathrm{rt}(G) := \max_{\pi \in S_n} \mathrm{rt}(G, \pi).$$

Now let $G$ be connected and $d$-regular. We can upper bound the routing number in terms of $\lambda_{n-1}$. We only cite the following result.

Theorem 14.4.10 ([21]). Let $G$ be a $d$-regular connected graph. Then we have $\lambda_{n-1} < d$ and

$$\mathrm{rt}(G) = O\left( \frac{d^2}{(d - \lambda_{n-1})^2} \log^2 n \right).$$

    Chromatic Number

A coloring of a graph is an assignment of colors, e.g., natural numbers, to the vertices such that adjacent vertices have different colors. We speak of a $k$-coloring if the graph is assigned $k$ different colors. The chromatic number of a graph is the minimum number of colors required for a coloring. We denote the chromatic number by $\chi(G)$.

It is well-known that computing the chromatic number is NP-hard. However, eigenvalues provide us with lower and upper bounds. We consider only eigenvalues of the adjacency matrix.

Theorem 14.4.11 ([428]). Let $G$ be a graph. Then $\chi(G) \leq 1 + \lambda_n$.

Proof. Let $H$ be a subgraph of $G$ without isolated vertices and such that

$$\chi(H) = \chi(G) \text{ and for every edge } e \text{ in } H \text{ we have } \chi(H - e) < \chi(H). \qquad (14.13)$$

It is easy to show that such a subgraph always exists and that in fact $\chi(H - e) = \chi(H) - 1$. We have $\chi(H) \leq \delta(H) + 1$. To see this, assume $\chi(H) > \delta(H) + 1$ and let $i \in V(H)$ be a vertex with $d_H(i) = \delta(H)$. Let $j \in N(i)$ and $e = \{i, j\}$. Then $k := \chi(H - e) = \chi(H) - 1 > \delta(H)$. Because $d_H(i) = \delta(H) < k$, we can construct a $k$-coloring of $H$ from a $k$-coloring of $H - e$. This is a contradiction to (14.13).

From Lemma 14.4.1 and the Interlacing Theorem (14.8), we derive

$$\chi(G) - 1 = \chi(H) - 1 \leq \delta(H) \leq \lambda_n(H) \leq \lambda_n(G).$$

It is possible to actually find a $(1 + \lambda_n)$-coloring in polynomial time; see [428] for details.

Theorem 14.4.12 ([246, 268]). Let $G$ be a graph. Then

$$1 - \frac{\lambda_n}{\lambda_1} \leq \chi(G).$$

Theorem 14.4.13 ([135]). Let $G$ be a graph. Then

$$\frac{n}{n - \lambda_n} \leq \chi(G).$$


    Independence Number

An independent set is a set of vertices such that none of them is adjacent to another vertex from that set. Independent sets are also called stable sets. For a graph $G$, the independence number $\alpha(G)$ is the cardinality of an independent set of maximal cardinality among all independent sets of $G$.

We will follow [427] in extending the following result due to Hoffman and Lovász [397].

Theorem 14.4.14. Let $G$ be a $d$-regular graph. Then

$$\alpha(G) \leq n \left( 1 - \frac{d}{\lambda_n(L)} \right).$$

Now, let $G = (V, E)$ be a graph with $n$ vertices and degrees

$$d_1 \leq d_2 \leq \ldots \leq d_n.$$

Set

$$\bar{d}_s := \frac{1}{s} \sum_{i \in \{1, \ldots, s\}} d_i \quad \text{for all } s \in \{1, \ldots, n\}.$$

Then the sequence $\bar{d}_1, \bar{d}_2, \ldots, \bar{d}_n$ is non-decreasing, and for a $d$-regular graph $\bar{d}_s = d$ for all $s \in \{1, \ldots, n\}$.

Theorem 14.4.15. Let $s_0$ be the smallest integer such that

$$\bar{d}_{s_0} > \frac{\lambda_n(L)\, (n - s_0)}{n}.$$

Then

$$\alpha(G) \leq s_0 - 1,$$

and consequently

$$\alpha(G) \leq n \left( 1 - \frac{\bar{d}_{s_0 - 1}}{\lambda_n(L)} \right).$$

Proof. We will show that

$$\frac{\lambda_n(L)\, (n-s)}{n} \geq \bar{d}_s \qquad (14.14)$$

whenever there is a stable set of size $s > 1$ in $G$. To this end, let $S \subseteq V$ be a stable set of size $s := |S| > 1$. Equation (14.14) is true for $s = n$, since then $d_i = 0$ for all $i \in \{1, \ldots, n\}$. So let $s < n$. Define $x \in \mathbb{R}^n$ by

$$x_i := \begin{cases} 0, & \text{if } i \in S \\ 1, & \text{otherwise.} \end{cases}$$

Then $x$ is non-constant, i.e., not a multiple of $1_n$. By Fiedler's characterization, Theorem 14.1.15, equation (14.7), we have


$$\lambda_n(L) \geq n\, \frac{\sum_{\{i,j\} \in E} (x_i - x_j)^2}{\sum_{\{i,j\} \in \binom{V}{2}} (x_i - x_j)^2} = n\, \frac{|E(S, V \setminus S)|}{s\, (n-s)}.$$

Because $S$ contains no edges, we have $|E(S, V \setminus S)| \geq s\, \bar{d}_s$, and so

$$\lambda_n(L) \geq n\, \frac{s\, \bar{d}_s}{s\, (n-s)} = n\, \frac{\bar{d}_s}{n-s}.$$

This yields (14.14).

Now let $s_0$ be the first integer that violates (14.14). Then, there is no stable set of cardinality $s_0$. Because $(\bar{d}_s)_{s=1,\ldots,n}$ is non-decreasing, all $s \geq s_0$ violate (14.14) as well, and so there cannot be a stable set larger than $s_0 - 1$.

    Bisection Width

Given a graph with an even number of vertices, the Minimum Bisection problem aims at partitioning the vertices into two classes of equal size that are connected by as few edges as possible. The minimum number of edges between the two classes is called the bisection width $\mathrm{bw}(G)$ of the graph. The decision version of the Minimum Bisection problem is NP-complete [241], and the currently best polynomial approximation algorithm is guaranteed only to stay within a multiplicative error of $O(\log^2 n)$ [203]. The following bound on the bisection width is a special case of a result of Alon and Milman [23].

Lemma 14.4.16. Let $G = (V, E)$ be a graph on $\{1, \ldots, n\}$, where $n$ is an even positive integer. Then

$$\mathrm{bw}(G) \geq \frac{n}{4}\, \lambda_2(L).$$

Proof. Let $S$ be an arbitrary subset of $V$ of cardinality $\frac{n}{2}$ and define

$$x_i := \begin{cases} 1, & i \in S \\ -1, & i \notin S \end{cases}$$

for all $i \in V$. Then $\sum_{i \in V} x_i = 0$. Hence $x \perp 1_n$, and by Corollary 14.1.14 and Lemma 14.1.8 we have

$$n\, \lambda_2(L) = x^\top x\, \lambda_2(L) \leq x^\top L x = \sum_{\{i,j\} \in E} (x_i - x_j)^2 = \sum_{\{i,j\} \in E(S, V \setminus S)} (x_i - x_j)^2 = 4\, |E(S, V \setminus S)|.$$

Choosing $S$ as one class of a minimum bisection yields the claim.

Bezrukov et al. [60] have shown that this bound is tight if and only if all vertices are incident with exactly $\frac{\lambda_2(L)}{2}$ cut edges, which is true, e.g., for complete graphs, complete bipartite graphs, hypercubes, and the Petersen graph. However, for the $\sqrt{n} \times \sqrt{n}$ grid graph, the bisection width is $\sqrt{n}$ while $\lambda_2(L) = 2 - 2\cos(\pi/\sqrt{n}) \approx \pi^2/n$ [60], and hence $\frac{n}{4}\, \lambda_2(L) \approx \frac{\pi^2}{4}$. So the gap between the optimal bisection width and the bound of Lemma 14.4.16 can be large.

As mentioned earlier, the bisection width is closely related to the isoperimetric number: directly from the definitions of $i(G)$ and $\mathrm{bw}(G)$ we obtain $i(G) \leq \frac{2\, \mathrm{bw}(G)}{n}$. Hence, lower bounds on $i(G)$ yield lower bounds on $\mathrm{bw}(G)$.
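For the hypercube, one of the classes for which Lemma 14.4.16 is tight, the bound can be checked exhaustively on a small instance. A numpy sketch for the 3-dimensional cube (the brute-force search over bisections is only feasible for tiny $n$):

import numpy as np
from itertools import combinations

# 3-dimensional hypercube: n = 8, lambda_2(L) = 2, so bw(G) >= (8/4)*2 = 4.
n = 8
A = np.zeros((n, n))
for u in range(n):
    for bit in range(3):
        A[u, u ^ (1 << bit)] = 1

L = np.diag(A.sum(axis=1)) - A
print(n / 4 * np.linalg.eigvalsh(L)[1])      # 4.0, the spectral lower bound

# Brute force over all bisections confirms the bound is attained here.
best = min(sum(A[i, j] for i in S for j in range(n) if j not in S)
           for S in map(set, combinations(range(n), n // 2)))
print(int(best))                             # 4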


    14.4.1 Proof of Theorem 14.4.7

Recall that $G = (V, E)$ is a graph not equal to $K_1$, $K_2$, or $K_3$ with $n$ vertices and $m$ edges. We have to prove

$$i(G) \leq \sqrt{\lambda_2(L)\, (2\Delta - \lambda_2(L))}. \qquad (14.15)$$

For the proof, let us write $\lambda := \lambda_2(L)$, $\Delta := \Delta(G)$, and $\delta := \delta(G)$.

If $\lambda = 0$, then $G$ is disconnected and so $i(G) = 0$, and we are done. The case that $G$ is a complete graph on $n \geq 4$ vertices can be dealt with easily by using $\lambda = n$, see 14.1.6.

Hence we may assume that $G$ is not a complete graph. Then by Proposition 14.4.6, we have $\lambda \leq \Delta$. If $\delta < \lambda$, we have

$$\lambda\, (2\Delta - \lambda) > \delta\, (2\Delta - \lambda) \geq \delta\, (2\Delta - \Delta) = \delta \Delta \geq \delta^2 \stackrel{(14.11)}{\geq} i(G)^2.$$

This is (14.15). We now may assume that

$$\lambda \leq \delta. \qquad (14.16)$$

Let $y \in \mathbb{R}^n$ be an eigenvector for $\lambda$ and set $W := \{i \in V;\; y_i > 0\}$. Then, perhaps after switching from $y$ to $-y$, we have $|W| = \min\{|W|, |V \setminus W|\}$. Define

$$g_i := \begin{cases} y_i, & \text{if } i \in W \\ 0, & \text{otherwise.} \end{cases}$$

Let $E(W) \subseteq E$ be the edges between vertices from $W$. We have

$$\begin{aligned}
\sum_{i \in W} \Big( d(i)\, y_i \mp \sum_{j:\, \{i,j\} \in E} y_j \Big)\, y_i
&= \sum_{i \in W} \sum_{j:\, \{i,j\} \in E} (y_i \mp y_j)\, y_i \\
&= \sum_{\{i,j\} \in E(W)} \big( (y_i \mp y_j)\, y_i + (y_j \mp y_i)\, y_j \big) + \sum_{\{i,j\} \in E(W, V \setminus W)} (y_i \mp y_j)\, y_i \\
&= \sum_{\{i,j\} \in E(W)} (y_i \mp y_j)^2 + \sum_{\{i,j\} \in E(W, V \setminus W)} (y_i \mp y_j)\, y_i \\
&= \sum_{\{i,j\} \in E} (g_i \mp g_j)^2 - \sum_{\{i,j\} \in E(W, V \setminus W)} g_i^2 + \sum_{\{i,j\} \in E(W, V \setminus W)} (y_i^2 \mp y_j y_i) \\
&= \sum_{\{i,j\} \in E} (g_i \mp g_j)^2 \mp \sum_{\{i,j\} \in E(W, V \setminus W)} y_j y_i.
\end{aligned} \qquad (14.17)$$


Keep in mind that when summing over all edges $\{i,j\} \in E$, the terms must not depend on which end of the edge actually is $i$ and which is $j$. Observe that this is always the case here, e.g., in the preceding calculation because $(g_i - g_j)^2 = (g_j - g_i)^2$ for all $i, j \in V$.

Using the eigenvalue property of $y$ we get

$$\lambda y_i = d(i)\, y_i - \sum_{j:\, \{i,j\} \in E} y_j \qquad \forall i \in V. \qquad (14.18)$$

This, together with the $-$ version of (14.17), yields

$$\lambda \sum_{i \in W} y_i^2 = \sum_{i \in W} \Big( d(i)\, y_i - \sum_{j:\, \{i,j\} \in E} y_j \Big)\, y_i \stackrel{(14.17)}{=} \sum_{\{i,j\} \in E} (g_i - g_j)^2 - \sum_{\{i,j\} \in E(W, V \setminus W)} y_j y_i, \qquad (14.19)$$

and using the $+$ version of (14.17),

$$\begin{aligned}
(2\Delta - \lambda) \sum_{i \in W} y_i^2
&= 2\Delta \sum_{i \in W} y_i^2 - \sum_{i \in W} d(i)\, y_i^2 + \sum_{i \in W} \sum_{j:\, \{i,j\} \in E} y_j y_i \\
&\geq \sum_{i \in W} d(i)\, y_i^2 + \sum_{i \in W} \sum_{j:\, \{i,j\} \in E} y_j y_i \\
&\stackrel{(14.17)}{=} \sum_{\{i,j\} \in E} (g_i + g_j)^2 + \sum_{\{i,j\} \in E(W, V \setminus W)} y_j y_i.
\end{aligned} \qquad (14.20)$$

Set $\Theta := \sum_{\{i,j\} \in E(W, V \setminus W)} y_i y_j$. Because the left and right hand sides of (14.19) are not negative, using (14.20) we derive

$$\begin{aligned}
\lambda\, (2\Delta - \lambda) \Big( \sum_{i \in W} y_i^2 \Big)^2
&\geq \Big( \sum_{\{i,j\} \in E} (g_i + g_j)^2 + \Theta \Big) \Big( \sum_{\{i,j\} \in E} (g_i - g_j)^2 - \Theta \Big) \\
&= \sum_{\{i,j\} \in E} (g_i + g_j)^2 \sum_{\{i,j\} \in E} (g_i - g_j)^2 - \Theta \Big( 4 \sum_{\{i,j\} \in E(W)} y_i y_j + \Theta \Big).
\end{aligned} \qquad (14.21)$$

We would like to drop the $\Theta$ term completely in this equation. To this end, observe that by the definition of $W$ we have $\Theta \leq 0$. Furthermore, using again the eigenvalue property of $y$ (see also (14.18)), we have


$$\begin{aligned}
4 \sum_{\{i,j\} \in E(W)} y_i y_j + \Theta
&= 2 \sum_{\{i,j\} \in E(W)} y_i y_j + \Big( 2 \sum_{\{i,j\} \in E(W)} y_i y_j + \sum_{\{i,j\} \in E(W, V \setminus W)} y_i y_j \Big) \\
&= 2 \sum_{\{i,j\} \in E(W)} y_i y_j + \sum_{i \in W} y_i \sum_{j:\, \{i,j\} \in E} y_j \\
&= 2 \sum_{\{i,j\} \in E(W)} \underbrace{y_i y_j}_{\geq 0} + \sum_{i \in W} \underbrace{(d(i) - \lambda)}_{\geq 0}\, y_i^2 \geq 0,
\end{aligned}$$

because of the definition of $W$ and (14.16). We thus have by (14.21)

$$\lambda\, (2\Delta - \lambda) \Big( \sum_{i \in W} y_i^2 \Big)^2 \geq \sum_{\{i,j\} \in E} (g_i + g_j)^2 \sum_{\{i,j\} \in E} (g_i - g_j)^2. \qquad (14.22)$$

Define $v, w \in \mathbb{R}^m$ by $v_{\{i,j\}} := g_i + g_j$ and $w_{\{i,j\}} := |g_i - g_j|$. We apply the Cauchy-Schwarz inequality to $v$ and $w$, getting

$$\begin{aligned}
\Big( \sum_{\{i,j\} \in E} |g_i^2 - g_j^2| \Big)^2
&= \Big( \sum_{\{i,j\} \in E} (g_i + g_j)\, |g_i - g_j| \Big)^2 = \langle v, w \rangle^2 \leq \|v\|^2\, \|w\|^2 \\
&= \sum_{\{i,j\} \in E} (g_i + g_j)^2 \sum_{\{i,j\} \in E} (g_i - g_j)^2 \stackrel{(14.22)}{\leq} \lambda\, (2\Delta - \lambda) \Big( \sum_{i \in W} y_i^2 \Big)^2.
\end{aligned} \qquad (14.23)$$

We will now bound this from below. Let $0 = t_0 < t_1 < \ldots < t_N$ be all the different values of the components of $g$. Define $V_k := \{i \in V;\; g_i \geq t_k\}$ for $k \in \{0, \ldots, N\}$ and for convenience $V_{N+1} := \emptyset$. Then, for $k \in \{1, \ldots, N+1\}$ we have $V_k \subseteq W$ and therefore $|V_k| \leq |W|$, hence $|V_k| = \min\{|V_k|, |V \setminus V_k|\}$. It also holds that $V_N \subseteq V_{N-1} \subseteq \ldots \subseteq V_1 = W \subseteq V_0 = V$ and that $|V_k| - |V_{k+1}|$ is the number of entries in $g$ equal to $t_k$ for all $k \in \{0, \ldots, N\}$.

We will later show that we can express the sum $\sum_{\{i,j\} \in E} |g_i^2 - g_j^2|$ in a convenient way:

$$\sum_{\{i,j\} \in E} |g_i^2 - g_j^2| = \sum_{k=1}^{N} \sum_{\substack{\{i,j\} \in E \\ g_i < g_j = t_k}} (g_j^2 - g_i^2).$$

$$\begin{aligned}
\sum_{k=1}^{N-1} \sum_{\substack{\{i,j\} \in E(V_k, V \setminus V_k) \\ j \in V_N}} (t_k^2 - t_{k-1}^2)
&= \sum_{\substack{\{i,j\} \in E(V_N, V \setminus V_N) \\ j \in V_N}}\; \sum_{\substack{k \in \{1, \ldots, N-1\} \\ i \in V \setminus V_k}} (t_k^2 - t_{k-1}^2) \\
&= \sum_{\substack{\{i,j\} \in E(V_N, V \setminus V_N) \\ j \in V_N}}\; \sum_{k=k(i)}^{N-1} (t_k^2 - t_{k-1}^2) \\
&\stackrel{\text{telescope}}{=} \sum_{\substack{\{i,j\} \in E(V_N, V \setminus V_N) \\ j \in V_N}} (t_{N-1}^2 - t_{k(i)-1}^2) \\
&= \sum_{\substack{\{i,j\} \in E(V_N, V \setminus V_N) \\ j \in V_N}} (t_{N-1}^2 - g_i^2)
\end{aligned} \qquad (14.27)$$

(here $k(i)$ denotes the smallest index $k$ with $i \notin V_k$, so that $g_i = t_{k(i)-1}$).

We can now continue our work on (14.25). Note that

$$\{(i,j);\; \{i,j\} \in E(V_N, V \setminus V_N),\; j \in V_N\} = \{(i,j);\; \{i,j\} \in E,\; g_i < g_j = t_N\}. \qquad (14.28)$$

This will later allow us to combine the last sum from (14.25) and that from (14.27).

Putting everything together, we see:

$$\sum_{k=1}^{N} \sum_{\substack{\{i,j\} \in E \\ g_i < g_j = t_k}} (g_j^2 - g_i^2) = \ldots = \sum_{k=1}^{N-1} \sum_{\{i,j\} \in E(V_k, V \setminus V_k)} (t_k^2 - t_{k-1}^2) - \sum_{\substack{\{i,j\} \in E(V_N, V \setminus V_N) \\ j \in V_N}} (t_{N-1}^2 - g_i^2) + \sum_{\substack{\{i,j\} \in E \\ g_i < g_j = t_N}} (g_j^2 - g_i^2)$$


Inverse Participation Ratio. Recall that we can interpret an eigenvector as an assignment of weights to the vertices; see Remark 14.1.2. We will ask how these weights are distributed. Are there a few vertices that have relatively large weights compared to the rest (we also speak of a high localization in this case), or are the weights very similar for all vertices?

Let $G$ be a graph and let $w_1, \ldots, w_n$ be the normalized eigenvectors of its adjacency matrix. We define the inverse participation ratio [198] of the $j$th eigenvector as

$$I_j(G) := \sum_{k=1}^{n} \big( (w_j)_k \big)^4. \qquad (14.29)$$

We also write $I_j$ when dealing with only one graph at a time. Because the eigenvectors are normalized, we have $I_j(G) \in [\frac{1}{n}, 1]$ for all $j \in \{1, \ldots, n\}$. The inverse participation ratio measures an eigenvector's extent of localization. This becomes clear by considering extreme cases. First let $w_j$ define perfectly evenly distributed weights on the graph, i.e., $(w_j)_k := \frac{1}{\sqrt{n}}$ for all $k \in \{1, \ldots, n\}$. Then $I_j = \frac{1}{n}$, i.e., the inverse participation ratio attains its minimal value.

On the other hand, if for an arbitrary vertex $k_0 \in V$ we have
\[
(w_j)_k = \begin{cases} 1 & \text{if } k = k_0,\\ 0 & \text{otherwise,} \end{cases}
\]
then $I_j = 1$, i.e., the inverse participation ratio attains its maximum value.
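For concreteness, the definition translates directly into code. The following C routine is a minimal sketch (a hypothetical helper, not taken from the program mentioned below); it assumes the eigenvector is already normalized:

    #include <stddef.h>

    /* Inverse participation ratio (14.29) of a normalized eigenvector
       w[0..n-1]: the sum of the fourth powers of its entries. */
    double inverse_participation_ratio(const double *w, size_t n)
    {
        double ipr = 0.0;
        for (size_t k = 0; k < n; k++) {
            double sq = w[k] * w[k];
            ipr += sq * sq;
        }
        return ipr;
    }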

Offset of Largest Eigenvalue. In certain classes of graphs, the largest eigenvalue tends to escape from the rest of the spectrum. For example, this is the case in certain models of random graphs. It therefore is interesting to study the quantity
\[
R(G) := \frac{\lambda_n - \lambda_{n-1}}{\lambda_{n-1} - \lambda_1}. \tag{14.30}
\]
$R(G)$ captures the offset of $\lambda_n$ from the second largest eigenvalue $\lambda_{n-1}$, normalized with respect to the range of the rest of the spectrum.
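In code this quantity is a one-liner; the following hypothetical C helper assumes the eigenvalues are sorted in ascending order in lambda[0..n-1], with n at least 3:

    #include <stddef.h>

    /* Offset of the largest eigenvalue as in (14.30); requires n >= 3
       and lambda[] sorted in ascending order. */
    double largest_eigenvalue_offset(const double *lambda, size_t n)
    {
        return (lambda[n - 1] - lambda[n - 2]) / (lambda[n - 2] - lambda[0]);
    }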

There is a correlation between $R(G)$ and the chromatic number $\chi(G)$, as was noticed in [198]. Define $\varepsilon$ by the equation
\[
\lambda_1 = -\lambda_{n-1} + \varepsilon.
\]
For $|\varepsilon|$ small, the smallest interval containing all eigenvalues except $\lambda_n$ is almost centered around 0. (For certain random graphs, $|\varepsilon|$ is in fact small.) Recall from Section 14.4 that there are bounds on the chromatic number in terms of extremal eigenvalues. In particular, we have by Theorem 14.4.12,
\[
\chi(G) \;\ge\; 1 - \frac{\lambda_n}{\lambda_1} \;=\; \frac{\lambda_n - \lambda_1}{-\lambda_1} \;=\; \frac{\lambda_n + \lambda_{n-1} - \varepsilon}{\lambda_{n-1} - \varepsilon}.
\]

So, for graphs with $\varepsilon = 0$ the chromatic number is bounded from below in terms of $R(G)$ as
\[
\chi(G) \ge 2R(G) + 2.
\]
For small $|\varepsilon|$, this bound obviously still holds in an approximate sense.

14.5.2 Spectral Properties of Random Graphs

Can random graphs from different models be distinguished by looking at spectral properties? One way of approaching this problem is to study the outcome of a large number of random experiments. We will present some results of such experiments and discuss possible interpretations. The three random models under consideration are the model by Erdős and Rényi, G(n, p), the scale-free graphs by Barabási and Albert (see Section 13.1.4), and the small-world graphs by Watts and Strogatz (see Section 13.1.2).

First, we investigate how these models differ in their spectra by looking at histogram plots of the eigenvalues of a number of randomly generated graphs. Then, we compare the inverse participation ratios, defined in (14.29). Finally, we also look at the offsets of the largest eigenvalues, defined in (14.30). Our focus will be on G(n, p) and Barabási-Albert-like graphs. For details on small-world graphs, see [198].

For our numerical experiments, we wrote a small C program (about 600 lines of code). To compute the eigenvalues and later (for the inverse participation ratios) also the eigenvectors, we use the function ssyev from the Sun Performance Library [537]. The actual plots were made using gnuplot [588].
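The same computation can be reproduced with the freely available LAPACKE interface to LAPACK, which exposes ssyev as LAPACKE_ssyev; the following sketch makes this assumption (the original program is not reproduced here and used the Sun Performance Library instead):

    #include <lapacke.h>

    /* Compute the spectrum of a symmetric matrix a (row-major, n x n),
       e.g. an adjacency matrix. On return, w[0..n-1] holds the
       eigenvalues in ascending order and a is overwritten with the
       orthonormal eigenvectors. Returns 0 on success. */
    lapack_int symmetric_spectrum(float *a, float *w, lapack_int n)
    {
        /* 'V': also compute eigenvectors; 'U': read the upper triangle. */
        return LAPACKE_ssyev(LAPACK_ROW_MAJOR, 'V', 'U', n, a, n, w);
    }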

Spectra. Let us start with G(n, p). We generated 100 graphs randomly according to G(2000, 1/2) and computed their spectra. A first observation is that the largest eigenvalue is significantly set off from the rest of the spectrum in all of these experiments. Because this offset will be explicitly regarded later, we for now exclude the largest eigenvalue from our considerations. So, when talking of the spectrum for the rest of this section, we mean the spectrum without the largest eigenvalue.

To get a visualization of all the spectra, we computed a histogram of the eigenvalues of all 100 random graphs. This technique will be used for the other random graph models as well. The quantity approximated by those histograms is also known as the spectral density. Although we do not introduce this notion formally [198], we will in the following occasionally use that term when referring to the way eigenvalues are distributed.

All histograms, after being normalized to comprise an area of 1, were scaled in the following way. Let $\lambda_1, \ldots, \lambda_N$ be the eigenvalues. (In our case $N = 100 \cdot (2000 - 1)$, for we have 100 graphs with 2000 eigenvalues each, and we exclude the largest of them for each graph.) Define $\mu$ as the mean of all these values,
\[
\mu = \frac{1}{N} \sum_{i=1}^{N} \lambda_i.
\]
Then, we compute the standard deviation
\[
\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (\lambda_i - \mu)^2}.
\]
The x-axis of our plots is scaled by $\frac{1}{\sigma}$ and the y-axis is scaled by $\sigma$. This makes it easy to compare spectra of graphs of different sizes.
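A hypothetical C helper computing these two scaling parameters could look as follows:

    #include <math.h>
    #include <stddef.h>

    /* Mean and standard deviation of the N pooled eigenvalues; the
       histogram axes are then scaled by 1/sigma (x) and sigma (y). */
    void scaling_parameters(const double *lambda, size_t N,
                            double *mu, double *sigma)
    {
        double sum = 0.0, sq = 0.0;
        for (size_t i = 0; i < N; i++)
            sum += lambda[i];
        *mu = sum / (double)N;
        for (size_t i = 0; i < N; i++)
            sq += (lambda[i] - *mu) * (lambda[i] - *mu);
        *sigma = sqrt(sq / (double)N);
    }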

Figure 14.10 shows the scaled histogram. The semicircle form actually is no surprise. It follows from a classical result from random matrix theory known as the semicircle law. It is originally due to Wigner [584, 585] and has later been refined by a number of researchers [34, 233, 337].

Fig. 14.10. A histogram of the union of the spectra of 100 random graphs from G(2000, 1/2)

Now we look at graphs that originate from an evolutionary process with preferential attachment, like the Barabási-Albert graphs. We implemented a very simple algorithm for creating graphs with preferential attachment. There is only one parameter m (apart from the number of vertices n). The process starts with m unconnected vertices. Whenever a new vertex arrives, it is connected with m edges to older vertices. To this end, the algorithm m times chooses a vertex out of the set of old vertices at random. The probability of an old vertex being chosen is proportional to its degree; at the very first step, when all vertices have degree 0, each of them is equally likely to be chosen.
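A minimal C sketch of this process might look as follows (assumed details not fixed by the description above: edges are returned as endpoint arrays, rand() is used without correcting for modulo bias, and degrees are updated immediately, so edges created earlier in the same step already influence later choices):

    #include <stdlib.h>

    /* Simple preferential attachment with parameter m, multiple edges
       allowed: vertices m..n-1 arrive one by one, each connecting with
       m edges to already present ("old") vertices. The m*(n-m) edges
       are written to eu[]/ev[]; deg[] receives the final degrees. */
    void preferential_attachment(int n, int m, int *eu, int *ev, int *deg)
    {
        int e = 0;
        for (int i = 0; i < n; i++)
            deg[i] = 0;
        for (int t = m; t < n; t++) {       /* vertex t arrives */
            for (int k = 0; k < m; k++) {
                long oldsum = 0;            /* total degree of old vertices */
                for (int i = 0; i < t; i++)
                    oldsum += deg[i];
                int target;
                if (oldsum == 0) {          /* all degrees 0: choose uniformly */
                    target = rand() % t;
                } else {                    /* choose proportionally to degree */
                    long r = rand() % oldsum;
                    target = 0;
                    while (r >= deg[target]) {
                        r -= deg[target];
                        target++;
                    }
                }
                eu[e] = t;
                ev[e] = target;
                deg[t]++;
                deg[target]++;
                e++;
            }
        }
    }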

This way of generating preferential attachment graphs is slightly different from what usually is considered in that multiple edges are allowed. However, the resulting histograms of the spectra look very similar to those of [198], where multiple edges do not occur. See Figure 14.11 for our results. For comparison, Figure 14.12 shows plots for our preferential attachment graphs and for G(n, p) graphs in one figure.

Fig. 14.11. A smoothed histogram of the union of the spectra of 100 random graphs with preferential attachment on 2000 vertices and with m = 10

The preferential attachment graph obviously has significantly more small eigenvalues than the G(n, p) graph. Since it essentially is connected², these small eigenvalues cannot originate from small connected components. In [198] it is suggested that these eigenvalues belong to eigenvectors that are highly localized. This is supported by high inverse participation ratios, which we will observe later.

An interesting case for our studies of G(n, p) are sparse random graphs. So far we have looked at large values of pn, for which, according to the semicircle law, our histograms should more and more (with increasing n) resemble a semicircle.

2 There might be fewer than m = 10 isolated vertices.

Fig. 14.12. Histograms of spectra of 100 random graphs with preferential attachment (n = 2000, m = 10) and from G(2000, 1/2) each. The solid line marks an ideal semicircle

In the case displayed in Figure 14.10 we have an expected degree³ of approximately pn = (1/2) · 2000 = 1000. This quantity pn is also referred to as the connectivity of the graph, although we will avoid this term, because it has already been extensively used in another context (see Chapter 7).

What if we look at graphs with fewer edges than in Figure 14.10, say, graphs in G(n, p) where pn = 5? Figure 14.13 shows that the spectral densities of such graphs rise above the semicircle in the vicinity of 0. Reasons for this have been discussed in [198].

One can also notice small peaks at larger eigenvalues. They become more obvious in plots of graphs with even lower expected degree. To study them in more detail, it is helpful to look at the cumulative distribution of eigenvalues, i.e., at the function
\[
x \mapsto |\{i;\ \lambda_i \le x\}|.
\]
Figure 14.14 shows an exemplary plot.
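Given the sorted list of eigenvalues, this function can be evaluated by a binary search; a hypothetical C helper:

    #include <stddef.h>

    /* Cumulative distribution of eigenvalues: the number of eigenvalues
       <= x, for lambda[0..n-1] sorted in ascending order. */
    size_t eigenvalue_cdf(const double *lambda, size_t n, double x)
    {
        size_t lo = 0, hi = n;
        while (lo < hi) {                 /* binary search for upper bound */
            size_t mid = lo + (hi - lo) / 2;
            if (lambda[mid] <= x)
                lo = mid + 1;
            else
                hi = mid;
        }
        return lo;                        /* = |{ i : lambda[i] <= x }| */
    }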

3 The expected degree is in fact $p(n-1)$. To see this, fix a vertex $i \in V$. This vertex has $n-1$ potential neighbors. Let $X_1, \ldots, X_{n-1}$ be 0/1 random variables, where $X_j$ indicates whether or not an edge is drawn from $i$ to $j$. If we put $X := \sum_{j=1}^{n-1} X_j$, then $\mathbb{E}[X]$ is the expected degree of $i$. The claim follows from linearity of expectation.


Fig. 14.13. Histograms of spectra of sparse random graphs. For n = 1000, 2000 and 4000 each we created 10 random graphs from G(n, p) with p set to satisfy pn = 5. The solid line marks an ideal semicircle

In [49], the reasons for the peaks are investigated. It is argued that they originate from small connected components and, for pn > 1, also from small trees grafted on the giant component.

Inverse Participation Ratios. Looking at Figure 14.12 has already led us to conjecture that our graphs with preferential attachment have highly localized eigenvectors. To investigate this further, we created random graphs in all three models and plotted the inverse participation ratios of their eigenvectors. Figure 14.15 shows all three plots. Each circle represents an eigenvector. The x position of the circle corresponds to the eigenvalue and the y position to the inverse participation ratio. Note that in the second plot (which shows the results for the Barabási-Albert-like graph), there are even eigenvectors with inverse participation ratio near to 1.

It is also interesting that we obtain very distinguishing plots even for small numbers of vertices; all considered graphs only had 100 vertices each. We randomly created more such graphs and could always observe the same characteristics. Obviously, one can recognize the G(n, p) graph by the fact that all eigenvectors have their inverse participation ratios rather evenly distributed in a small band at the bottom of the diagram. The Barabási-Albert-like graph exhibits as a salient


Fig. 14.14. The cumulative distribution of eigenvalues for sparse random graphs. For n = 2000 and pn = 1.2, 2.00 and 5.00 each we created 10 random graphs from G(n, p)

feature eigenvectors with high inverse participation ratios. The small-world graph shows a very asymmetric structure.

We were not able to produce similarly distinctive plots of the spectral density for such small graphs.

Offset of Largest Eigenvalue. We already mentioned that in the G(n, p) model as well as in the graphs with preferential attachment, the largest eigenvalue is significantly set off from the rest of the spectrum. The offset R, see (14.30), in these two models differs from that in the small-world graphs by several orders of magnitude. But there is also a difference between the R values of sparse G(n, p) graphs and graphs with preferential attachment. For the former, with increasing number of vertices and constant average degree, the R values seem to stay constant, while for the latter they decrease. See [198] for numerical results.

14.5.3 Random Power Law Graphs

In [127, 128], first the adjacency spectrum of a very general class of random graphs is examined. Given a sequence $w = (w_1, w_2, \ldots, w_n)$ of non-negative reals satisfying


Fig. 14.15. Inverse participation ratios of three random graphs. The models are: G(n, p) (Gnp_100_0.1_0001_.ipr), preferential attachment (BA_100_10_0001_.ipr), small-world (sw_100_0.01_10_0001_.ipr). All graphs have an expected degree of 10

\[
\max_{i\in\{1,\ldots,n\}} w_i^2 \;<\; \sum_{k=1}^{n} w_k,
\]
an edge between two vertices $i$ and $j$ is drawn independently with probability $w_iw_j / \sum_{k=1}^{n} w_k$, so that $w_i$ is the expected degree of vertex $i$. Random power law graphs are obtained from expected degree sequences that follow a power law with exponent $\beta$; in the following, $m$ denotes the maximum expected degree and $d$ the second order average degree.

Theorem 14.5.1 ([127, 128]). Let G be a random power law graph with exponent $\beta$ and adjacency spectrum $\lambda_1 \le \ldots \le \lambda_n$.

1. For $\beta \ge 3$ and
\[
m > d^2 \log^3 n, \tag{14.31}
\]
we have almost surely⁴
\[
\lambda_n = (1 + o(1))\sqrt{m}.
\]
2. For $2.5 < \beta < 3$ and
\[
m > d^{\frac{2}{\beta - 2.5}} \log^{\frac{3}{\beta - 2.5}} n, \tag{14.32}
\]
we have almost surely
\[
\lambda_n = (1 + o(1))\sqrt{m}.
\]
3. For $2 < \beta < 2.5$ and
\[
m > \log^{\frac{3}{2.5 - \beta}} n,
\]
we have almost surely
\[
\lambda_n = (1 + o(1))\,d.
\]
4. For $2.5 < \beta$ and $k < n \bigl(\frac{d}{m \log n}\bigr)^{\beta - 1}$, almost surely the $k$ largest eigenvalues of G have a power law distribution with exponent $2\beta - 1$, provided $m$ is large enough (satisfying (14.31) and (14.32)).

4 I.e., with probability tending to 1 as n tends to ∞.

We remark that the second order average degree $d$ can actually be computed in these cases. For details see [127, 128].

    14.6 Chapter Notes

    Fundamental Properties

The adjacency spectrum is discussed in a vast number of papers and textbooks. The results presented in this chapter are taken from [135, 247, 576]. More on the Laplacian can be found in [135] (under the term admittance matrix) and in [427]. The normalized Laplacian is extensively studied in [125]. While the eigenvalues alone do not generally determine the structure of a graph (as shown by cospectral graphs), eigenvalues plus eigenvectors do: if $u_1, u_2, \ldots, u_n$ are linearly independent eigenvectors of $A$ corresponding to $\lambda_1, \lambda_2, \ldots, \lambda_n$ respectively, then $A = UDU^{-1}$, where $U := \mathrm{Mat}(u_1, \ldots, u_n)$ is the matrix with the $u_i$ as column vectors and $D := \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is the diagonal matrix with entries $\lambda_i$. Already the knowledge of some eigenvectors can be very useful to recognize important properties of a graph. This is elaborated on in [136]. The cited references also contain many results on the spectrum of regular graphs, which we treated rather stepmotherly, since in network analysis we are typically dealing with highly non-regular graphs.

    Numerical Methods

The diagonalization strategy for small dense matrices is comprehensively treated in [482, 535]. For a discussion of QR-like algorithms, including parallelizable versions, see [571]. More on the Lanczos method can be found in [467, 591]. An Arnoldi code for real asymmetric matrices is discussed in [509].


    Subgraphs and Operations on Graphs

The Interlacing Theorem plays an important role in many publications on spectral graph theory. In addition to the already mentioned literature, we point the reader to [561]. More on graph operations and resulting spectra can be found in [135, Chapter 2] and [427]. As already mentioned, symmetric grafting also plays a role in [49].

    Bounds on Global Statistics

For more results on the connection between eigenvalues and graph parameters, see [427, 428] and the references therein, such as [426, 425]. More on the role of the eigenvalues of the normalized Laplacian for graph parameters can be found in [125]. More on the connection between $\lambda_2(L)$ and expansion properties can be found in [19]. Better spectral lower bounds on the bisection width have been given by Donath and Hoffman [159], Boppana [78], and Rendl and Wolkowicz [491]. Boppana's technique also yields an efficient algorithm with good average-case behavior. For a discussion of spectral methods for the multiway partition problem of finding a k-partition with prescribed part sizes, see [27, 389].

    Heuristics for Graph Identification

For further reading, see [198, 49] as well as the already mentioned [127, 128]. The behavior of certain extremal eigenvalues is also considered in [337].