
Chapter 3

    GAUSSIAN RANDOM VECTORS AND PROCESSES

    3.1 Introduction

Poisson processes and Gaussian processes are similar in terms of their simplicity and beauty. When we first look at a new problem involving stochastic processes, we often start with insights from Poisson and/or Gaussian processes. Problems where queueing is a major factor tend to rely heavily on an understanding of Poisson processes, and those where noise is a major factor tend to rely heavily on Gaussian processes.

Poisson and Gaussian processes share the characteristic that the results arising from them are so simple, well known, and powerful that people often forget how much the results depend on assumptions that are rarely satisfied perfectly in practice. At the same time, these assumptions are often approximately satisfied, so the results, if used with insight and care, are often useful.

This chapter is aimed primarily at Gaussian processes, but starts with a study of Gaussian (normal¹) random variables and vectors. These initial topics are both important in their own right and also essential to an understanding of Gaussian processes. The material here is essentially independent of that on Poisson processes in Chapter 2.

    3.2 Gaussian random variables

    A random variable (rv) W is defined to be a normalized Gaussian rv if it has the density

$$f_W(w) \;=\; \frac{1}{\sqrt{2\pi}}\,\exp\!\left(\frac{-w^2}{2}\right); \qquad \text{for all } w \in \mathbb{R}. \tag{3.1}$$

¹Gaussian rvs are often called normal rvs. I prefer Gaussian, first because the corresponding processes are usually called Gaussian, second because Gaussian rvs (which have arbitrary means and variances) are often normalized to zero mean and unit variance, and third, because calling them normal gives the false impression that other rvs are abnormal.



Exercise 3.1 shows that $f_W(w)$ integrates to 1 (i.e., it is a probability density), and that W has mean 0 and variance 1.
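As a quick numerical illustration of these claims (a sketch in Python with numpy rather than the exact integrals of Exercise 3.1; the grid limits and spacing are arbitrary choices), one can check the normalization, mean, and variance of (3.1) directly:

```python
# Numerical check of (3.1): the normalized Gaussian density integrates to 1 and
# has mean 0 and variance 1.  Tails beyond |w| = 10 contribute negligibly.
import numpy as np

w = np.linspace(-10.0, 10.0, 200001)
dw = w[1] - w[0]
f = np.exp(-w**2 / 2) / np.sqrt(2 * np.pi)   # f_W(w) from (3.1)

print(np.sum(f) * dw)          # ~1.0  (total probability)
print(np.sum(w * f) * dw)      # ~0.0  (mean)
print(np.sum(w**2 * f) * dw)   # ~1.0  (second moment = variance here)
```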

If we scale a normalized Gaussian rv W by a positive constant $\sigma$, i.e., if we consider the rv $Z = \sigma W$, then the distribution functions of Z and W are related by $F_Z(\sigma w) = F_W(w)$. This means that the probability densities are related by $\sigma f_Z(\sigma w) = f_W(w)$. Thus the PDF of Z is given by

$$f_Z(z) \;=\; \frac{1}{\sigma}\, f_W\!\left(\frac{z}{\sigma}\right) \;=\; \frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\!\left(\frac{-z^2}{2\sigma^2}\right). \tag{3.2}$$

Thus the PDF for Z is scaled horizontally by the factor $\sigma$, and then scaled vertically by $1/\sigma$ (see Figure 3.1). This scaling leaves the integral of the density unchanged with value 1 and scales the variance by $\sigma^2$. If we let $\sigma$ approach 0, this density approaches an impulse, i.e., Z becomes the atomic rv for which $\Pr\{Z=0\} = 1$. For convenience in what follows, we use (3.2) as the density for Z for all $\sigma \ge 0$, with the above understanding about the $\sigma = 0$ case. A rv with the density in (3.2), for any $\sigma \ge 0$, is defined to be a zero-mean Gaussian rv. The values $\Pr\{|Z| > \sigma\} = .318$, $\Pr\{|Z| > 3\sigma\} = .0027$, and $\Pr\{|Z| > 5\sigma\} \approx 5.7 \times 10^{-7}$ give us a sense of how small the tails of the Gaussian distribution are.
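These tail values follow from the complementary error function: for any zero-mean Gaussian Z, $\Pr\{|Z| > k\sigma\} = \operatorname{erfc}(k/\sqrt{2})$. The short check below (a sketch using only the Python standard library) reproduces them:

```python
# Tail probabilities of a zero-mean Gaussian: Pr{|Z| > k*sigma} = erfc(k / sqrt(2)),
# independent of sigma.  The multiples k = 1, 3, 5 match the values quoted above.
from math import erfc, sqrt

for k in (1, 3, 5):
    print(k, erfc(k / sqrt(2)))
# 1 -> 0.3173...,  3 -> 0.0027...,  5 -> 5.7e-07
```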

Figure 3.1: Graph of the PDF of a normalized Gaussian rv W (the taller curve) and of a zero-mean Gaussian rv Z with standard deviation 2 (the flatter curve).

If we shift Z by an arbitrary $\mu \in \mathbb{R}$ to $U = Z + \mu$, then the density shifts so as to be centered at $E[U] = \mu$, and the density satisfies $f_U(u) = f_Z(u - \mu)$. Thus

$$f_U(u) \;=\; \frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\!\left(\frac{-(u-\mu)^2}{2\sigma^2}\right). \tag{3.3}$$

A random variable U with this density, for arbitrary $\mu$ and $\sigma \ge 0$, is defined to be a Gaussian random variable and is denoted $U \sim \mathcal{N}(\mu, \sigma^2)$.

The added generality of a mean often obscures formulas; we usually assume zero-mean rvs and random vectors (rvs) and add means later if necessary. Recall that any rv U with a mean $\mu$ can be regarded as a constant $\mu$ plus the fluctuation, $U - \mu$, of U.
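The sketch below illustrates this convention by sampling: a Gaussian rv with mean $\mu$ and variance $\sigma^2$ is just $\sigma$ times a normalized Gaussian plus the constant $\mu$, and subtracting $\mu$ recovers the zero-mean fluctuation (the particular values of $\mu$, $\sigma$, the seed, and the sample size are arbitrary choices for illustration).

```python
# Construct U ~ N(mu, sigma^2) as U = sigma*W + mu for a normalized Gaussian W,
# then separate U into its mean and its fluctuation U - mu.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0
W = rng.standard_normal(1_000_000)   # samples of a normalized Gaussian, N(0, 1)
U = sigma * W + mu                   # samples of U ~ N(mu, sigma^2)

print(U.mean(), U.var())             # ~2.0 and ~9.0
fluctuation = U - mu                 # zero-mean Gaussian with the same variance
print(fluctuation.mean(), fluctuation.var())
```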

The moment generating function, $g_Z(r)$, of a Gaussian rv $Z \sim \mathcal{N}(0, \sigma^2)$ can be calculated as follows:

$$\begin{aligned}
g_Z(r) \;=\; E[\exp(rZ)] \;&=\; \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} \exp(rz)\,\exp\!\left(\frac{-z^2}{2\sigma^2}\right) dz \\
&=\; \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} \exp\!\left(\frac{-z^2 + 2\sigma^2 rz - r^2\sigma^4}{2\sigma^2} + \frac{r^2\sigma^2}{2}\right) dz \qquad (3.4)\\
&=\; \exp\!\left(\frac{r^2\sigma^2}{2}\right) \left\{ \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} \exp\!\left(\frac{-(z - r\sigma^2)^2}{2\sigma^2}\right) dz \right\} \qquad (3.5)\\
&=\; \exp\!\left(\frac{r^2\sigma^2}{2}\right). \qquad (3.6)
\end{aligned}$$

We completed the square in the exponent in (3.4). We then recognized that the term in braces in (3.5) is the integral of a probability density and thus equal to 1.
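A direct numerical evaluation of $E[\exp(rZ)]$ agrees with the closed form (3.6); the sketch below uses a simple quadrature on a truncated grid, with arbitrary choices of $\sigma$ and r.

```python
# Check g_Z(r) = exp(r^2 sigma^2 / 2) from (3.6) against a direct numerical
# evaluation of E[exp(rZ)] for Z ~ N(0, sigma^2).
import numpy as np

sigma, r = 1.5, 0.7
z = np.linspace(-12 * sigma, 12 * sigma, 400001)
dz = z[1] - z[0]
density = np.exp(-z**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

numeric = np.sum(np.exp(r * z) * density) * dz   # E[exp(rZ)] by quadrature
print(numeric, np.exp(r**2 * sigma**2 / 2))      # the two should agree closely
```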

Note that $g_Z(r)$ exists for all real r, although it increases rapidly with |r|. As shown in Exercise 3.2, the moments for $Z \sim \mathcal{N}(0, \sigma^2)$ can be calculated from the MGF to be

$$E\!\left[Z^{2k}\right] \;=\; \frac{(2k)!\,\sigma^{2k}}{k!\,2^{k}} \;=\; (2k-1)(2k-3)(2k-5)\cdots(3)(1)\,\sigma^{2k}. \tag{3.7}$$

Thus, $E[Z^4] = 3\sigma^4$, $E[Z^6] = 15\sigma^6$, etc. The odd moments of Z are all zero since $z^{2k+1}$ is an odd function of z and the Gaussian density is even.
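A Monte Carlo check of (3.7) and of these even-moment examples (a sketch only; the value of $\sigma$, the seed, and the sample size are arbitrary, and the higher moments converge slowly):

```python
# Compare sample moments of Z ~ N(0, sigma^2) with the closed form (3.7),
# E[Z^(2k)] = (2k)! sigma^(2k) / (k! 2^k); odd moments should be near zero.
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
sigma = 2.0
Z = sigma * rng.standard_normal(2_000_000)

for k in (1, 2, 3):
    closed_form = factorial(2 * k) * sigma**(2 * k) / (factorial(k) * 2**k)
    print(2 * k, np.mean(Z**(2 * k)), closed_form)   # closed forms: 4, 48, 960
print(np.mean(Z**3))                                  # an odd moment, ~0
```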

For an arbitrary Gaussian rv $U \sim \mathcal{N}(\mu, \sigma^2)$, let $Z = U - \mu$. Then $Z \sim \mathcal{N}(0, \sigma^2)$ and $g_U(r)$ is given by

$$g_U(r) \;=\; E[\exp(r(\mu + Z))] \;=\; e^{r\mu}\, E\!\left[e^{rZ}\right] \;=\; \exp(r\mu + r^2\sigma^2/2). \tag{3.8}$$

The characteristic function, $g_Z(i\theta) = E\!\left[e^{i\theta Z}\right]$ for $Z \sim \mathcal{N}(0, \sigma^2)$ and $i\theta$ imaginary, can be shown to be (e.g., see Chap. 2.12 in [27])

$$g_Z(i\theta) \;=\; \exp\!\left(\frac{-\theta^2\sigma^2}{2}\right). \tag{3.9}$$

The argument in (3.4) to (3.6) does not show this since the term in braces in (3.5) is not a probability density for r imaginary. As explained in Section 1.5.5, the characteristic function is useful first because it exists for all rvs and second because an inversion formula (essentially the Fourier transform) exists to uniquely find the distribution of a rv from its characteristic function.
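The sketch below evaluates $E[e^{i\theta Z}]$ numerically (a complex-valued quadrature on a truncated grid, with arbitrary choices of $\sigma$ and $\theta$) and compares it with (3.9); the imaginary part is essentially zero because the odd part of the integrand cancels.

```python
# Numerical check of the characteristic function (3.9) for Z ~ N(0, sigma^2):
# E[exp(i*theta*Z)] should equal exp(-theta^2 sigma^2 / 2), with zero imaginary part.
import numpy as np

sigma, theta = 1.5, 2.0
z = np.linspace(-12 * sigma, 12 * sigma, 400001)
dz = z[1] - z[0]
density = np.exp(-z**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

numeric = np.sum(np.exp(1j * theta * z) * density) * dz
print(numeric)                                   # ~ (0.0111 + 0j)
print(np.exp(-theta**2 * sigma**2 / 2))          # 0.0111...
```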

    3.3 Gaussian random vectors

An $n \times \ell$ matrix [A] is an array of $n\ell$ elements arranged in n rows and $\ell$ columns; $A_{jk}$ denotes the kth element in the jth row. Unless specified to the contrary, the elements are real numbers. The transpose $[A]^{\mathsf{T}}$ of an $n \times \ell$ matrix [A] is an $\ell \times n$ matrix [B] with $B_{kj} = A_{jk}$ for all j, k. A matrix is square if $n = \ell$, and a square matrix [A] is symmetric if $[A] = [A]^{\mathsf{T}}$. If [A] and [B] are each $n \times \ell$ matrices, $[A] + [B]$ is an $n \times \ell$ matrix [C] with $C_{jk} = A_{jk} + B_{jk}$ for all j, k. If [A] is $n \times \ell$ and [B] is $\ell \times r$, the matrix [A][B] is an $n \times r$ matrix [C] with elements $C_{jk} = \sum_i A_{ji}B_{ik}$. A vector (or column vector) of dimension n is an n by 1 matrix and a row vector of dimension n is a 1 by n matrix. Since the transpose of a vector is a row vector, we denote a vector a as $(a_1, \ldots, a_n)^{\mathsf{T}}$. Note that if a is a (column) vector of dimension n, then $a a^{\mathsf{T}}$ is an $n \times n$ matrix whereas $a^{\mathsf{T}} a$ is a number. The reader is expected to be familiar with these vector and matrix manipulations.
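The short numpy sketch below illustrates these conventions (the particular vectors and matrices are arbitrary): the outer product $a a^{\mathsf{T}}$ of a column vector is an $n \times n$ matrix, $a^{\mathsf{T}} a$ is a number, and multiplying an $n \times \ell$ matrix by an $\ell \times r$ matrix gives an $n \times r$ matrix.

```python
# Matrix and vector conventions from the text, illustrated with numpy.
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])     # a column vector of dimension 3 (a 3 x 1 matrix)
outer = a @ a.T                          # a a^T is a 3 x 3 matrix
inner = a.T @ a                          # a^T a is a 1 x 1 matrix, i.e., a number
print(outer.shape, float(inner[0, 0]))   # (3, 3) and 14.0

A = np.arange(6.0).reshape(2, 3)         # a 2 x 3 matrix
B = np.arange(12.0).reshape(3, 4)        # a 3 x 4 matrix
print(A.T.shape, (A @ B).shape)          # transpose is 3 x 2; [A][B] is 2 x 4
```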

The covariance matrix, [K] (if it exists), of an arbitrary zero-mean n-rv $Z = (Z_1, \ldots, Z_n)^{\mathsf{T}}$ is the matrix whose components are $K_{jk} = E[Z_j Z_k]$. For a non-zero-mean n-rv U, let $U = m + Z$ where $m = E[U]$ and $Z = U - m$ is the fluctuation of U. The covariance matrix [K] of U is defined to be the same as the covariance matrix of the fluctuation Z, i.e., $K_{jk} = E[Z_j Z_k] = E[(U_j - m_j)(U_k - m_k)]$. It can be seen that if an $n \times n$ covariance matrix [K] exists, it must be symmetric, i.e., it must satisfy $K_{jk} = K_{kj}$ for $1 \le j, k \le n$.
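The sketch below estimates such a covariance matrix from samples and confirms the symmetry; the mean vector, the mixing matrix used to generate correlated components, the seed, and the sample size are arbitrary choices for illustration.

```python
# Estimate the covariance matrix of a non-zero-mean 3-rv U from samples:
# K_jk = E[(U_j - m_j)(U_k - m_k)].  A covariance matrix is symmetric.
import numpy as np

rng = np.random.default_rng(2)
m = np.array([1.0, -2.0, 0.5])                      # mean vector (arbitrary)
A = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])                      # mixing matrix (arbitrary)
U = rng.standard_normal((100_000, 3)) @ A.T + m     # rows are samples of U

Z = U - U.mean(axis=0)                               # estimated fluctuations U - m
K = (Z.T @ Z) / len(Z)                               # estimated covariance matrix
print(np.allclose(K, K.T))                           # True: K is symmetric
print(np.round(K, 2))                                # close to A @ A.T
```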

    3.3.1 Generating functions of Gaussian random vectors

The moment generating function (MGF) of an n-rv Z is defined as $g_Z(r) = E[\exp(r^{\mathsf{T}} Z)]$, where $r = (r_1, \ldots, r_n)^{\mathsf{T}}$ is an n-dimensional real vector. The n-dimensional MGF might not exist for all r (just as the one-dimensional MGF discussed in Section 1.5.5 need not exist everywhere). As we will soon see, however, the MGF exists everywhere for Gaussian n-rvs.

The characteristic function, $g_Z(i\theta) = E\!\left[e^{i\theta^{\mathsf{T}} Z}\right]$, of an n-rv Z, where $\theta = (\theta_1, \ldots, \theta_n)^{\mathsf{T}}$ is a real n-vector, is equally important. As in the one-dimensional case, the characteristic function always exists for all real $\theta$ and all n-rv Z. In addition, there is a uniqueness theorem² stating that the characteristic function of an n-rv Z uniquely specifies the joint distribution of Z.

If the components of an n-rv are independent and identically distributed (IID), we call the vector an IID n-rv.

    3.3.2 IID normalized Gaussian random vectors

An example that will become familiar is that of an IID n-rv W where each component $W_j$, $1 \le j \le n$, is normalized Gaussian, $W_j \sim \mathcal{N}(0, 1)$. By taking the product of n densities as given in (3.1), the joint density of $W = (W_1, W_2, \ldots, W_n)^{\mathsf{T}}$ is

$$f_{W}(w) \;=\; \frac{1}{(2\pi)^{n/2}}\,\exp\!\left(\frac{-w_1^2 - w_2^2 - \cdots - w_n^2}{2}\right) \;=\; \frac{1}{(2\pi)^{n/2}}\,\exp\!\left(\frac{-w^{\mathsf{T}} w}{2}\right). \tag{3.10}$$

²See Shiryaev, [27], for a proof in the one-dimensional case and an exercise providing the extension to the n-dimensional case. It appears that the exercise is a relatively straightforward extension of the proof for one dimension, but the one-dimensional proof is measure theoretic and by no means trivial. The reader can get an engineering understanding of this uniqueness theorem by viewing the characteristic function and joint probability density essentially as n-dimensional Fourier transforms of each other.


The joint density of W at a sample value w depends only on the squared distance $w^{\mathsf{T}} w$ of the sample value w from the origin. That is, $f_{W}(w)$ is spherically symmetric around the origin, and points of equal probability density lie on concentric spheres around the origin (see Figure 3.2).
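As a concrete illustration of this spherical symmetry (a sketch; the two sample points are arbitrary but chosen to have the same length), the density (3.10) returns the same value for any two points at equal distance from the origin:

```python
# The joint density (3.10) depends on w only through w^T w, so points at the
# same distance from the origin have the same density.
import numpy as np

def f_W(w):
    w = np.asarray(w, dtype=float)
    n = w.size
    return np.exp(-(w @ w) / 2) / (2 * np.pi) ** (n / 2)

w1 = np.array([3.0, 0.0, 0.0])
w2 = np.array([1.0, 2.0, 2.0])     # also has squared length 9
print(f_W(w1), f_W(w2))            # equal values
```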

Figure 3.2: Equi-probability contours for an IID Gaussian 2-rv.

    The moment generating function of W is easily calculated as follows:

$$\begin{aligned}
g_{W}(r) \;=\; E[\exp(r^{\mathsf{T}} W)] \;&=\; E[\exp(r_1 W_1 + \cdots + r_n W_n)] \;=\; E\!\left[\prod_j \exp(r_j W_j)\right] \\
&=\; \prod_j E[\exp(r_j W_j)] \;=\; \prod_j \exp\!\left(\frac{r_j^2}{2}\right) \;=\; \exp\!\left(\frac{r^{\mathsf{T}} r}{2}\right). \qquad (3.11)
\end{aligned}$$

The interchange of the expectation with the product above is justified because, first, the rvs $W_j$ (and thus the rvs $\exp(r_j W_j)$) are independent, and, second, the expectation of a product of independent rvs is equal to the product of the expectations.
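A Monte Carlo sketch of (3.11) (the dimension, the vector r, the seed, and the sample size are arbitrary): the sample average of $\exp(r^{\mathsf{T}} W)$ over IID normalized Gaussian vectors should be close to $\exp(r^{\mathsf{T}} r / 2)$.

```python
# Monte Carlo check of (3.11) for an IID normalized Gaussian 3-rv W:
# E[exp(r^T W)] = exp(r^T r / 2).
import numpy as np

rng = np.random.default_rng(3)
r = np.array([0.3, -0.5, 0.2])
W = rng.standard_normal((2_000_000, 3))   # each row is a sample of W

estimate = np.mean(np.exp(W @ r))         # sample average of exp(r^T W)
print(estimate, np.exp(r @ r / 2))        # both ~1.209
```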
