Random Matrices
Wikipedia

Posted on 17 December 2015

Contents

1 Circular ensemble
  1.1 Probability distributions
  1.2 Generalizations
  1.3 Calculations
  1.4 References
  1.5 External links

2 Circular law
  2.1 Precise statement
  2.2 History
  2.3 See also
  2.4 References

3 Euclidean random matrix
  3.1 History
  3.2 Properties
    3.2.1 Hermitian Euclidean random matrices
    3.2.2 Non-Hermitian Euclidean random matrices
  3.3 References

4 Inverse matrix gamma distribution
  4.1 See also
  4.2 References

5 Marchenko–Pastur distribution
  5.1 See also
  5.2 References

6 Matrix gamma distribution
  6.1 See also
  6.2 Notes
  6.3 References

7 Matrix normal distribution
  7.1 Definition
    7.1.1 Proof
  7.2 Properties
    7.2.1 Expected values
    7.2.2 Transformation
  7.3 Example
  7.4 Maximum likelihood parameter estimation
  7.5 Drawing values from the distribution
  7.6 Relation to other distributions
  7.7 See also
  7.8 References

8 Matrix t-distribution
  8.1 Definition
  8.2 Generalized matrix t-distribution
    8.2.1 Properties
  8.3 See also
  8.4 Notes
  8.5 External links

9 Random matrix
  9.1 Applications
    9.1.1 Physics
    9.1.2 Mathematical statistics and numerical analysis
    9.1.3 Number theory
    9.1.4 Theoretical neuroscience
    9.1.5 Optimal control
  9.2 Gaussian ensembles
    9.2.1 Distribution of level spacings
  9.3 Generalisations
  9.4 Spectral theory of random matrices
    9.4.1 Global regime
    9.4.2 Local regime
  9.5 Other classes of random matrices
    9.5.1 Wishart matrices
    9.5.2 Random unitary matrices
    9.5.3 Non-Hermitian random matrices
  9.6 Guide to references
  9.7 References
  9.8 External links

10 Tracy–Widom distribution
  10.1 Definition
  10.2 Equivalent formulations
  10.3 Other Tracy–Widom distributions
  10.4 Numerical approximations
  10.5 Footnotes
  10.6 References
  10.7 Additional reading
  10.8 External links

11 Weingarten function
  11.1 Unitary groups
    11.1.1 Examples
    11.1.2 Asymptotic behavior
  11.2 Orthogonal and symplectic groups
  11.3 External links
  11.4 References

12 Wishart distribution
  12.1 Definition
  12.2 Occurrence
  12.3 Probability density function
  12.4 Use in Bayesian statistics
    12.4.1 Choice of parameters
  12.5 Properties
    12.5.1 Log-expectation
    12.5.2 Entropy
    12.5.3 Characteristic function
  12.6 Theorem
    12.6.1 Corollary 1
    12.6.2 Corollary 2
  12.7 Estimator of the multivariate normal distribution
  12.8 Bartlett decomposition
  12.9 Marginal distribution of matrix elements
  12.10 The possible range of the shape parameter
  12.11 Relationships to other distributions
  12.12 See also
  12.13 References
  12.14 External links
  12.15 Text and image sources, contributors, and licenses
    12.15.1 Text
    12.15.2 Images
    12.15.3 Content license

Chapter 1

Circular ensemble

In the theory of random matrices, the circular ensembles are measures on spaces of unitary matrices introduced by Freeman Dyson as modifications of the Gaussian matrix ensembles.[1] The three main examples are the circular orthogonal ensemble (COE) on symmetric unitary matrices, the circular unitary ensemble (CUE) on unitary matrices, and the circular symplectic ensemble (CSE) on self-dual unitary quaternionic matrices.

1.1 Probability distributions

The distribution of the circular unitary ensemble CUE(n) is the Haar measure on the unitary group U(n). If $U$ is a random element of CUE(n), then $U^{\mathsf T}U$ is a random element of COE(n); if $U$ is a random element of CUE(2n), then $U^R U$ is a random element of CSE(n), where

$$U^R = Z\,U^{\mathsf T}\,Z^{-1}, \qquad Z = \begin{pmatrix} 0 & 1 & & & & \\ -1 & 0 & & & & \\ & & \ddots & & & \\ & & & & 0 & 1 \\ & & & & -1 & 0 \end{pmatrix}.$$
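The constructions above can be illustrated numerically. The sketch below samples a Haar-distributed unitary (a CUE draw) via QR decomposition of a complex Ginibre matrix with a phase correction, then forms a COE sample as $U^{\mathsf T}U$; the helper name `sample_cue` is illustrative, and this is a minimal sketch rather than part of the original article.

```python
import numpy as np

def sample_cue(n, rng=None):
    """Draw a Haar-distributed unitary matrix, i.e. a sample from CUE(n)."""
    rng = np.random.default_rng(rng)
    # Complex Ginibre matrix: i.i.d. standard complex Gaussian entries.
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(g)
    # Multiply each column of Q by the phase of the corresponding diagonal
    # entry of R; plain QR alone does not yield the Haar measure.
    return q * (np.diag(r) / np.abs(np.diag(r)))

u = sample_cue(4, rng=0)
coe = u.T @ u   # a COE(4) sample: symmetric and unitary
```

The phase correction step is the standard fix for the fact that numerical QR is only unique up to a diagonal phase matrix.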

Each element of a circular ensemble is a unitary matrix, so its eigenvalues lie on the unit circle and can be written as $e^{i\theta_k}$ with $0 \le \theta_k < 2\pi$.

The covariance parameters of the matrix normal distribution are identified only up to a positive scalar: for any $s > 0$,

$$\mathcal{MN}_{n\times p}(X \mid M, U, V) = \mathcal{MN}_{n\times p}\!\left(X \mid M, sU, \tfrac{1}{s}V\right).$$

7.5 Drawing values from the distribution

Sampling from the matrix normal distribution is a special case of the sampling procedure for the multivariate normal distribution. Let $X$ be an $n \times p$ matrix of $np$ independent samples from the standard normal distribution, so that

$$X \sim \mathcal{MN}_{n\times p}(0, I, I).$$

Then let

$$Y = M + AXB,$$

so that

$$Y \sim \mathcal{MN}_{n\times p}(M, AA^{\mathsf T}, B^{\mathsf T}B),$$

where $A$ and $B$ can be chosen by Cholesky decomposition or a similar matrix square-root operation.
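The procedure above can be sketched in NumPy. The helper name `sample_matrix_normal` and the parameter values are illustrative; the sketch assumes $U$ and $V$ are positive definite so that Cholesky factors exist.

```python
import numpy as np

def sample_matrix_normal(M, U, V, rng=None):
    """One draw Y = M + A X B with A A^T = U and B^T B = V,
    so that Y ~ MN(M, U, V)."""
    rng = np.random.default_rng(rng)
    A = np.linalg.cholesky(U)         # lower triangular, A A^T = U
    B = np.linalg.cholesky(V).T       # upper triangular, B^T B = V
    X = rng.standard_normal(M.shape)  # X ~ MN(0, I, I)
    return M + A @ X @ B

M = np.zeros((3, 2))
U = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
V = np.eye(2)
Y = sample_matrix_normal(M, U, V, rng=0)
```

Any other matrix square roots (e.g. symmetric square roots from an eigendecomposition) would serve equally well for $A$ and $B$.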


7.6 Relation to other distributions

Dawid (1981) provides a discussion of the relation of the matrix-valued normal distribution to other distributions, including the Wishart distribution, the inverse-Wishart distribution, and the matrix t-distribution, but uses different notation from that employed here.

7.7 See also

  • Multivariate normal distribution

7.8 References

[1] Gupta, A. K.; Nagar, D. K. (22 October 1999). "Chapter 2: Matrix Variate Normal Distribution". Matrix Variate Distributions. CRC Press. ISBN 978-1-58488-046-2. Retrieved 23 May 2014.

[2] Ding, Shanshan; Cook, R. Dennis (2014). "Dimension folding PCA and PFC for matrix-valued predictors". Statistica Sinica 24 (1): 463–492.

[3] Glanz, Hunter; Carvalho, Luis. "An Expectation–Maximization Algorithm for the Matrix Normal Distribution". Retrieved 18 March 2015.

  • Dawid, A. P. (1981). "Some matrix-variate distribution theory: Notational considerations and a Bayesian application". Biometrika 68 (1): 265–274. doi:10.1093/biomet/68.1.265. JSTOR 2335827. MR 614963.

  • Dutilleul, P. (1999). "The MLE algorithm for the matrix normal distribution". Journal of Statistical Computation and Simulation 64 (2): 105–123. doi:10.1080/00949659908811970.

  • Arnold, S. F. (1981). The Theory of Linear Models and Multivariate Analysis. New York: John Wiley & Sons. ISBN 0471050652.

Chapter 8

Matrix t-distribution

In statistics, the matrix t-distribution (or matrix variate t-distribution) is the generalization of the multivariate t-distribution from vectors to matrices.[1] The matrix t-distribution shares the same relationship with the multivariate t-distribution that the matrix normal distribution shares with the multivariate normal distribution. For example, the matrix t-distribution is the compound distribution that results from sampling from a matrix normal distribution having sampled the covariance matrix of the matrix normal from an inverse Wishart distribution. In a Bayesian analysis of a multivariate linear regression model based on the matrix normal distribution, the matrix t-distribution is the posterior predictive distribution.
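The compound construction described above can be sketched directly: draw a column covariance from an inverse-Wishart distribution, then draw a matrix normal sample with that covariance. The parameter choices here are illustrative, and `scipy.stats.invwishart` is used for the inverse-Wishart draw.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, df = 4, 3, 7
M = np.zeros((n, p))
U = np.eye(n)       # row covariance, held fixed
scale = np.eye(p)   # inverse-Wishart scale matrix

# Step 1: sample the column covariance V ~ InvWishart(df, scale).
V = stats.invwishart.rvs(df=df, scale=scale, random_state=0)

# Step 2: sample X ~ MN(M, U, V) given that covariance.
A = np.linalg.cholesky(U)
B = np.linalg.cholesky(V).T
X = M + A @ rng.standard_normal((n, p)) @ B   # one matrix-t draw
```

Repeating both steps yields i.i.d. draws from the resulting matrix t-distribution; fixing $V$ instead would give ordinary matrix normal draws.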

8.1 Definition

For a matrix t-distribution, the probability density function at the point $X$ of an $n \times p$ space is

$$f(X; \nu, M, \Sigma, \Omega) = K \left| I_n + \Sigma^{-1}(X - M)\,\Omega^{-1}(X - M)^{\mathsf T} \right|^{-\frac{\nu + n + p - 1}{2}},$$

where the constant of integration $K$ is given by

$$K = \frac{\Gamma_p\!\left(\frac{\nu + n + p - 1}{2}\right)}{\pi^{\frac{np}{2}}\,\Gamma_p\!\left(\frac{\nu + p - 1}{2}\right)}\, |\Omega|^{-\frac{n}{2}}\, |\Sigma|^{-\frac{p}{2}}.$$

Here $\Gamma_p$ is the multivariate gamma function.

The characteristic function and various other properties can be derived from the generalized matrix t-distribution (see below).

8.2 Generalized matrix t-distribution

The generalized matrix t-distribution is a generalization of the matrix t-distribution with two parameters $\alpha$ and $\beta$ in place of $\nu$.[2] It reduces to the standard matrix t-distribution with $\beta = 2$ and $\alpha = \frac{\nu + p - 1}{2}$.

The generalized matrix t-distribution is the compound distribution that results from an infinite mixture of a matrix normal distribution with an inverse multivariate gamma distribution placed over either of its covariance matrices.

8.2.1 Properties

If $X \sim T_{n,p}(\alpha, \beta, M, \Sigma, \Omega)$, then

$$X^{\mathsf T} \sim T_{p,n}(\alpha, \beta, M^{\mathsf T}, \Omega, \Sigma).$$

This makes use of the following determinant identity:

$$\det\!\left(I_n + \tfrac{\beta}{2}\,\Sigma^{-1}(X - M)\,\Omega^{-1}(X - M)^{\mathsf T}\right) = \det\!\left(I_p + \tfrac{\beta}{2}\,\Omega^{-1}(X^{\mathsf T} - M^{\mathsf T})\,\Sigma^{-1}(X^{\mathsf T} - M^{\mathsf T})^{\mathsf T}\right).$$

If $X \sim T_{n,p}(\alpha, \beta, M, \Sigma, \Omega)$ and $A$ ($n \times n$) and $B$ ($p \times p$) are nonsingular matrices, then

$$AXB \sim T_{n,p}(\alpha, \beta, AMB, A\Sigma A^{\mathsf T}, B^{\mathsf T}\Omega B).$$

The characteristic function is[2]

$$\phi_T(Z) = \frac{\exp\!\left(\operatorname{tr}(iZ'M)\right) |\Omega|^{\alpha}}{\Gamma_p(\alpha)\,(2\beta)^{p\alpha}}\, \left|Z'\Sigma Z\right|^{\alpha}\, B_\alpha\!\left(\frac{1}{2\beta}\, Z'\Sigma Z\,\Omega\right),$$

where

$$B_\delta(WZ) = |W|^{-\delta} \int_{S > 0} \exp\!\left(\operatorname{tr}(-SW - S^{-1}Z)\right) |S|^{\delta - \frac{1}{2}(p+1)}\, dS,$$

and where $B_\delta$ is the type-two Bessel function of Herz of a matrix argument.

8.3 See also

  • Multivariate t-distribution
  • Matrix normal distribution

8.4 Notes

[1] Zhu, Shenghuo; Yu, Kai; Gong, Yihong (2007). "Predictive Matrix-Variate t Models". In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, NIPS '07: Advances in Neural Information Processing Systems 20, pages 1721–1728. MIT Press, Cambridge, MA, 2008. The notation is changed a bit in this article for consistency with the matrix normal distribution article.

[2] Iranmanesh, Anis; Arashi, M.; Tabatabaey, S. M. M. (2010). "On Conditional Applications of Matrix Variate Normal Distribution". Iranian Journal of Mathematical Sciences and Informatics 5 (2): 33–43.

8.5 External links

  • A C++ library for generating random matrices

Chapter 9

Random matrix

In probability theory and mathematical physics, a random matrix (sometimes stochastic matrix) is a matrix-valued random variable; that is, a matrix some or all of whose elements are random variables. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.

    9.1 Applications

    9.1.1 Physics

In nuclear physics, random matrices were introduced by Eugene Wigner[1] to model the spectra of heavy atoms. He postulated that the spacings between the lines in the spectrum of a heavy atom should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution.[2] In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation.

In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture[3] asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory.

Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics,[4] quantum gravity in two dimensions,[5] mesoscopic physics,[6] spin-transfer torque,[7] the fractional quantum Hall effect,[8] Anderson localization,[9] quantum dots,[10] and superconductors.[11]

    9.1.2 Mathematical statistics and numerical analysis

In multivariate statistics, random matrices were introduced by John Wishart for statistical analysis of large samples;[12] see estimation of covariance matrices.

Significant results have been shown that extend the classical scalar Chernoff, Bernstein, and Hoeffding inequalities to the largest eigenvalues of finite sums of random Hermitian matrices.[13] Corollary results are derived for the maximum singular values of rectangular matrices.

In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine[14] to describe computation errors in operations such as matrix multiplication. See also[15] for more recent results.

    9.1.3 Number theory

In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modelled by the distribution of eigenvalues of certain random matrices.[16] The connection was first discovered by Hugh Montgomery and Freeman J. Dyson. It is connected to the Hilbert–Pólya conjecture.


9.1.4 Theoretical neuroscience

In the field of theoretical neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with a random connectivity matrix were shown to exhibit a phase transition to chaos[17] when the variance of the synaptic weights crosses a critical value, in the limit of infinite system size. Relating the statistical properties of the spectrum of biologically inspired random matrix models to the dynamical behavior of randomly connected neural networks is an intensive research topic.[18][19][20][21]

9.1.5 Optimal control

In optimal control theory, the evolution of n state variables through time depends at any time on their own values and on the values of k control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of stochastic control.[22]:ch. 13[23][24] A key result in the case of linear-quadratic control with stochastic matrices is that the certainty equivalence principle does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, this no longer holds in the presence of random coefficients in the state equation.

9.2 Gaussian ensembles

The most studied random matrix ensembles are the Gaussian ensembles.

The Gaussian unitary ensemble GUE(n) is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GUE}(n)}}\, e^{-\frac{n}{2} \operatorname{tr} H^2}$$

on the space of $n \times n$ Hermitian matrices $H = (H_{ij})_{i,j=1}^n$. Here $Z_{\mathrm{GUE}(n)} = 2^{n/2} (\pi/n)^{n^2/2}$ is a normalization constant, chosen so that the integral of the density is equal to one. The term "unitary" refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.

The Gaussian orthogonal ensemble GOE(n) is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GOE}(n)}}\, e^{-\frac{n}{4} \operatorname{tr} H^2}$$

on the space of $n \times n$ real symmetric matrices $H = (H_{ij})_{i,j=1}^n$. Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry.

The Gaussian symplectic ensemble GSE(n) is described by the Gaussian measure with density

$$\frac{1}{Z_{\mathrm{GSE}(n)}}\, e^{-n \operatorname{tr} H^2}$$

on the space of $n \times n$ quaternionic Hermitian matrices $H = (H_{ij})_{i,j=1}^n$. Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.

The joint probability density for the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ of GUE/GOE/GSE is given by

$$\frac{1}{Z_{\beta,n}} \prod_{k=1}^{n} e^{-\frac{n\beta}{4} \lambda_k^2} \prod_{i < j} \left| \lambda_j - \lambda_i \right|^{\beta}, \qquad (1)$$

where the Dyson index, $\beta = 1$ for GOE, $\beta = 2$ for GUE, and $\beta = 4$ for GSE, counts the number of real components per matrix element; $Z_{\beta,n}$ is a normalisation constant which can be explicitly computed, see Selberg integral. In the case of GUE ($\beta = 2$), the formula (1) describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of $\beta$th order) for coinciding eigenvalues $\lambda_j = \lambda_i$.

For the distribution of the largest eigenvalue for GOE, GUE and Wishart matrices of finite dimensions, see.[25]
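A GUE(n) matrix with the normalization used above (density proportional to $e^{-\frac{n}{2}\operatorname{tr} H^2}$) can be generated by symmetrizing a complex Ginibre matrix; the helper name `sample_gue` and the matrix size are illustrative, and this is a minimal sketch under those scaling assumptions.

```python
import numpy as np

def sample_gue(n, rng=None):
    """GUE(n) scaled so the density is proportional to exp(-(n/2) tr H^2):
    diagonal entries ~ N(0, 1/n); off-diagonal entries satisfy E|H_ij|^2 = 1/n."""
    rng = np.random.default_rng(rng)
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (g + g.conj().T) / np.sqrt(2 * n)

h = sample_gue(200, rng=1)
ev = np.linalg.eigvalsh(h)   # eigenvalues of a Hermitian matrix are real
```

With this scaling the spectrum concentrates on approximately $[-2, 2]$ as $n$ grows, in line with the semicircle law discussed later in this chapter.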

9.2.1 Distribution of level spacings

From the ordered sequence of eigenvalues $\lambda_1 < \cdots < \lambda_n < \lambda_{n+1} < \cdots$, one defines the normalized spacings $s = (\lambda_{n+1} - \lambda_n)/\langle s \rangle$, where $\langle s \rangle = \langle \lambda_{n+1} - \lambda_n \rangle$ is the mean spacing. The probability distribution of spacings is given by

$$p_1(s) = \frac{\pi}{2}\, s\, e^{-\frac{\pi}{4} s^2}$$

for the orthogonal ensemble GOE ($\beta = 1$),

$$p_2(s) = \frac{32}{\pi^2}\, s^2\, e^{-\frac{4}{\pi} s^2}$$

for the unitary ensemble GUE ($\beta = 2$), and

$$p_4(s) = \frac{2^{18}}{3^6 \pi^3}\, s^4\, e^{-\frac{64}{9\pi} s^2}$$

for the symplectic ensemble GSE ($\beta = 4$).

The numerical constants are such that $p_\beta(s)$ is normalized:

$$\int_0^\infty ds\, p_\beta(s) = 1,$$

and the mean spacing is

$$\int_0^\infty ds\, s\, p_\beta(s) = 1,$$

for $\beta = 1, 2, 4$.
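These spacing laws can be checked empirically. The sketch below gathers bulk spacings (from the middle half of the spectrum, where the local mean spacing is roughly constant) of sampled GUE-type matrices, normalizes them to unit mean, and defines $p_2$ for comparison; sizes and trial counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 200, 50
spacings = []
for _ in range(trials):
    g = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (g + g.conj().T) / 2                 # a GUE-type Hermitian matrix
    ev = np.sort(np.linalg.eigvalsh(h))
    bulk = np.diff(ev[n // 4 : 3 * n // 4])  # spacings away from the edges
    spacings.extend(bulk / bulk.mean())      # normalize to unit mean spacing
s = np.asarray(spacings)

def p2(x):
    """Wigner surmise for the unitary ensemble (beta = 2)."""
    return (32 / np.pi**2) * x**2 * np.exp(-4 * x**2 / np.pi)
```

A histogram of `s` should track `p2` closely; by construction the empirical mean spacing equals 1, matching the second normalization integral above.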

9.3 Generalisations

Wigner matrices are random Hermitian matrices $H_n = (H_n(i,j))_{i,j=1}^n$ such that the entries

$$\{ H_n(i,j) : 1 \le i \le j \le n \}$$

above the main diagonal are independent random variables with zero mean, and

$$\{ H_n(i,j) : 1 \le i < j \le n \}$$

have identical second moments.

Invariant matrix ensembles are random Hermitian matrices with density on the space of real symmetric / Hermitian / quaternionic Hermitian matrices of the form

$$\frac{1}{Z_n}\, e^{-n \operatorname{tr} V(H)},$$

where the function $V$ is called the potential.

The Gaussian ensembles are the only common special cases of these two classes of random matrices.


9.4 Spectral theory of random matrices

The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.

9.4.1 Global regime

In the global regime, one is interested in the distribution of linear statistics of the form $N_{f,H} = n^{-1} \operatorname{tr} f(H)$.

Empirical spectral measure

The empirical spectral measure $\mu_H$ of $H$ is defined by

$$\mu_H(A) = \frac{1}{n}\, \#\{\text{eigenvalues of } H \text{ in } A\} = N_{1_A, H}, \qquad A \subset \mathbb{R}.$$

Usually, the limit of $\mu_H$ is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted $N(\lambda)$. If the integrated density of states is differentiable, its derivative is called the density of states and is denoted $\rho(\lambda)$.

The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner semicircle distribution. As far as sample covariance matrices are concerned, a theory was developed by Marčenko and Pastur.[26][27]

The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.[28]
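The convergence to the semicircle distribution can be observed directly. The sketch below builds a real symmetric Wigner matrix with entry variance $1/n$, so the limiting density is $\rho(x) = \sqrt{4 - x^2}/(2\pi)$ on $[-2, 2]$, and compares one statistic of the empirical spectral measure with its limiting value; the matrix size is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
a = rng.standard_normal((n, n))
h = (a + a.T) / np.sqrt(2 * n)   # symmetric Wigner matrix, Var(H_ij) = 1/n
ev = np.linalg.eigvalsh(h)

# The semicircle law predicts that the fraction of eigenvalues in [-1, 1]
# tends to the integral of sqrt(4 - x^2)/(2*pi) over [-1, 1], about 0.609.
frac = np.mean(np.abs(ev) <= 1.0)
```

For this size the empirical fraction already lies within a few hundredths of the limiting value, illustrating the self-averaging mentioned above.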

Fluctuations

For the linear statistics $N_{f,H} = n^{-1} \sum_j f(\lambda_j)$, one is also interested in the fluctuations about $\int f(\lambda)\, dN(\lambda)$. For many classes of random matrices, a central limit theorem of the form

$$\frac{N_{f,H} - \int f(\lambda)\, dN(\lambda)}{\sigma_{f,n}} \xrightarrow{D} N(0, 1)$$

is known; see,[29][30] etc.

9.4.2 Local regime

In the local regime, one is interested in the spacings between eigenvalues and, more generally, in the joint distribution of eigenvalues in an interval of length of order $1/n$. One distinguishes between bulk statistics, pertaining to intervals inside the support of the limiting spectral measure, and edge statistics, pertaining to intervals near the boundary of the support.

Bulk statistics

Formally, fix $\lambda_0$ in the interior of the support of $N(\lambda)$. Then consider the point process

$$\Lambda(\lambda_0) = \sum_j \delta\Big( n \rho(\lambda_0) (\lambda_j - \lambda_0) \Big),$$

where $\lambda_j$ are the eigenvalues of the random matrix.

The point process $\Lambda(\lambda_0)$ captures the statistical properties of eigenvalues in the vicinity of $\lambda_0$. For the Gaussian ensembles, the limit of $\Lambda(\lambda_0)$ is known;[2] thus, for GUE it is a determinantal point process with the kernel

$$K(x, y) = \frac{\sin \pi(x - y)}{\pi(x - y)}$$

(the sine kernel).

The universality principle postulates that the limit of $\Lambda(\lambda_0)$ as $n \to \infty$ should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on $\lambda_0$). This was rigorously proved for several models of random matrices: for invariant matrix ensembles,[31][32] for Wigner matrices,[33][34] et cetera.

Edge statistics

See Tracy–Widom distribution.

9.5 Other classes of random matrices

9.5.1 Wishart matrices

Main article: Wishart distribution

Wishart matrices are $n \times n$ random matrices of the form $H = X X^*$, where $X$ is an $n \times m$ random matrix ($m \ge n$) with independent entries, and $X^*$ is its conjugate transpose. In the important special case considered by Wishart, the entries of $X$ are identically distributed Gaussian random variables (either real or complex).

The limit of the empirical spectral measure of Wishart matrices was found[26] by Vladimir Marchenko and Leonid Pastur; see Marchenko–Pastur distribution.
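The Marchenko–Pastur prediction for the bulk of a Wishart-type spectrum can be checked numerically. For the normalized matrix $H = XX^{\mathsf T}/m$ with aspect ratio $c = n/m \le 1$, the limiting spectrum is supported on $[(1-\sqrt{c})^2, (1+\sqrt{c})^2]$; the sizes and the small tolerance for edge fluctuations below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 2000                  # aspect ratio c = n/m = 0.25
x = rng.standard_normal((n, m))
h = x @ x.T / m                   # normalized Wishart (sample covariance) matrix
ev = np.linalg.eigvalsh(h)

c = n / m
lo, hi = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2   # predicted bulk edges
# Almost all eigenvalues should fall inside the predicted support,
# up to small finite-size fluctuations at the edges.
inside = np.mean((ev >= lo - 0.05) & (ev <= hi + 0.05))
```

The edge fluctuations themselves are of order $m^{-2/3}$ and are governed by the Tracy–Widom distribution discussed in the next chapter.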

9.5.2 Random unitary matrices

See circular ensembles.

9.5.3 Non-Hermitian random matrices

See circular law.

9.6 Guide to references

  • Books on random matrix theory:[2][35][36]
  • Survey articles on random matrix theory:[15][27][37][38]
  • Historic works:[1][12][14]

9.7 References

[1] Wigner, E. (1955). "Characteristic vectors of bordered matrices with infinite dimensions". Annals of Mathematics 62 (3): 548–564. doi:10.2307/1970079.

[2] Mehta, M. L. (2004). Random Matrices. Amsterdam: Elsevier/Academic Press. ISBN 0-12-088409-7.

[3] Bohigas, O.; Giannoni, M. J.; Schmit, C. (1984). "Characterization of Chaotic Quantum Spectra and Universality of Level Fluctuation Laws". Phys. Rev. Lett. 52: 1–4. Bibcode:1984PhRvL..52....1B. doi:10.1103/PhysRevLett.52.1.

[4] Verbaarschot, J. J. M.; Wettig, T. (2000). "Random Matrix Theory and Chiral Symmetry in QCD". Ann. Rev. Nucl. Part. Sci. 50: 343. arXiv:hep-ph/0003017. Bibcode:2000ARNPS..50..343V. doi:10.1146/annurev.nucl.50.1.343.

[5] Franchini, F.; Kravtsov, V. E. (October 2009). "Horizon in random matrix theory, the Hawking radiation, and flow of cold atoms". Phys. Rev. Lett. 103 (16): 166401. arXiv:0905.3533. Bibcode:2009PhRvL.103p6401F. doi:10.1103/PhysRevLett.103.166401. PMID 19905710.

[6] Sánchez, D.; Büttiker, M. (September 2004). "Magnetic-field asymmetry of nonlinear mesoscopic transport". Phys. Rev. Lett. 93 (10): 106802. arXiv:cond-mat/0404387. Bibcode:2004PhRvL..93j6802S. doi:10.1103/PhysRevLett.93.106802. PMID 15447435.

[7] Rychkov, V. S.; Borlenghi, S.; Jaffrès, H.; Fert, A.; Waintal, X. (August 2009). "Spin torque and waviness in magnetic multilayers: a bridge between Valet-Fert theory and quantum approaches". Phys. Rev. Lett. 103 (6): 066602. arXiv:0902.4360. Bibcode:2009PhRvL.103f6602R. doi:10.1103/PhysRevLett.103.066602. PMID 19792592.

[8] Callaway, D. J. E. (April 1991). "Random matrices, fractional statistics, and the quantum Hall effect". Phys. Rev. B Condens. Matter 43 (10): 8641–8643. Bibcode:1991PhRvB..43.8641C. doi:10.1103/PhysRevB.43.8641. PMID 9996505.

[9] Janssen, M.; Pracz, K. (June 2000). "Correlated random band matrices: localization-delocalization transitions". Phys. Rev. E 61 (6 Pt A): 6278–6286. arXiv:cond-mat/9911467. Bibcode:2000PhRvE..61.6278J. doi:10.1103/PhysRevE.61.6278. PMID 11088301.

[10] Zumbühl, D. M.; Miller, J. B.; Marcus, C. M.; Campman, K.; Gossard, A. C. (December 2002). "Spin-orbit coupling, antilocalization, and parallel magnetic fields in quantum dots". Phys. Rev. Lett. 89 (27): 276803. arXiv:cond-mat/0208436. Bibcode:2002PhRvL..89A6803Z. doi:10.1103/PhysRevLett.89.276803. PMID 12513231.

[11] Bahcall, S. R. (December 1996). "Random Matrix Model for Superconductors in a Magnetic Field". Phys. Rev. Lett. 77 (26): 5276–5279. arXiv:cond-mat/9611136. Bibcode:1996PhRvL..77.5276B. doi:10.1103/PhysRevLett.77.5276. PMID 10062760.

[12] Wishart, J. (1928). "Generalized product moment distribution in samples". Biometrika 20A (1–2): 32–52. doi:10.1093/biomet/20a.1-2.32.

[13] Tropp, J. (2011). "User-Friendly Tail Bounds for Sums of Random Matrices". Foundations of Computational Mathematics. doi:10.1007/s10208-011-9099-z.

[14] von Neumann, J.; Goldstine, H. H. (1947). "Numerical inverting of matrices of high order". Bull. Amer. Math. Soc. 53 (11): 1021–1099. doi:10.1090/S0002-9904-1947-08909-6.

[15] Edelman, A.; Rao, N. R. (2005). "Random matrix theory". Acta Numer. 14: 233–297. Bibcode:2005AcNum..14..233E. doi:10.1017/S0962492904000236.

[16] Keating, Jon (1993). "The Riemann zeta-function and quantum chaology". Proc. Internat. School of Phys. Enrico Fermi CXIX: 145–185. doi:10.1016/b978-0-444-81588-0.50008-0.

[17] Sompolinsky, H.; Crisanti, A.; Sommers, H. (July 1988). "Chaos in Random Neural Networks". Physical Review Letters 61 (3): 259–262. Bibcode:1988PhRvL..61..259S. doi:10.1103/PhysRevLett.61.259.

[18] García del Molino, Luis Carlos; Pakdaman, Khashayar; Touboul, Jonathan; Wainrib, Gilles (October 2013). "Synchronization in random balanced networks". Physical Review E 88 (4). Bibcode:2013PhRvE..88d2824G. doi:10.1103/PhysRevE.88.042824.

[19] Rajan, Kanaka; Abbott, L. (November 2006). "Eigenvalue Spectra of Random Matrices for Neural Networks". Physical Review Letters 97 (18). Bibcode:2006PhRvL..97r8104R. doi:10.1103/PhysRevLett.97.188104.

[20] Wainrib, Gilles; Touboul, Jonathan (March 2013). "Topological and Dynamical Complexity of Random Neural Networks". Physical Review Letters 110 (11). arXiv:1210.5082. Bibcode:2013PhRvL.110k8101W. doi:10.1103/PhysRevLett.110.118101.

[21] Timme, Marc; Wolf, Fred; Geisel, Theo (February 2004). "Topological Speed Limits to Network Synchronization". Physical Review Letters 92 (7). arXiv:cond-mat/0306512. Bibcode:2004PhRvL..92g4101T. doi:10.1103/PhysRevLett.92.074101.

[22] Chow, Gregory P. (1976). Analysis and Control of Dynamic Economic Systems. New York: Wiley. ISBN 0-471-15616-7.

[23] Turnovsky, Stephen (1976). "Optimal stabilization policies for stochastic linear systems: The case of correlated multiplicative and additive disturbances". Review of Economic Studies 43 (1): 191–194. JSTOR 2296741.

[24] Turnovsky, Stephen (1974). "The stability properties of optimal economic policies". American Economic Review 64 (1): 136–148. JSTOR 1814888.

[25] Chiani, M. (2014). "Distribution of the largest eigenvalue for real Wishart and Gaussian random matrices and a simple approximation for the Tracy-Widom distribution". Journal of Multivariate Analysis 129: 69–81. arXiv:1209.3394. doi:10.1016/j.jmva.2014.04.002.

[26] Marčenko, V. A.; Pastur, L. A. (1967). "Distribution of eigenvalues for some sets of random matrices". Mathematics of the USSR-Sbornik 1 (4): 457–483. Bibcode:1967SbMat...1..457M. doi:10.1070/SM1967v001n04ABEH001994.

[27] Pastur, L. A. (1973). "Spectra of random self-adjoint operators". Russ. Math. Surv. 28 (1): 1–67. Bibcode:1973RuMaS..28....1P. doi:10.1070/RM1973v028n01ABEH001396.

[28] Pastur, L.; Shcherbina, M. (1995). "On the Statistical Mechanics Approach in the Random Matrix Theory: Integrated Density of States". J. Stat. Phys. 79 (3–4): 585–611. Bibcode:1995JSP....79..585D. doi:10.1007/BF02184872.

[29] Johansson, K. (1998). "On fluctuations of eigenvalues of random Hermitian matrices". Duke Math. J. 91 (1): 151–204. doi:10.1215/S0012-7094-98-09108-6.

[30] Pastur, L. A. (2005). "A simple approach to the global regime of Gaussian ensembles of random matrices". Ukrainian Math. J. 57 (6): 936–966. doi:10.1007/s11253-005-0241-4.

[31] Pastur, L.; Shcherbina, M. (1997). "Universality of the local eigenvalue statistics for a class of unitary invariant random matrix ensembles". Journal of Statistical Physics 86 (1–2): 109–147. Bibcode:1997JSP....86..109P. doi:10.1007/BF02180200.

    [32] Deift, P.; Kriecherbauer, T.; McLaughlin, K.T.-R.; Venakides, S.; Zhou, X. (1997). Asymptotics for polynomialsorthogonal with respect to varying exponential weights. International Mathematics Research Notices (16): 759782.doi:10.1155/S1073792897000500.

    [33] Erds, L.; Pch, S.; Ramrez, J.A.; Schlein, B.; Yau, H.T. (2010). Bulk universality for Wigner matrices. Communica-tions on Pure and Applied Mathematics 63 (7): 895925.

    [34] Tao, Terence; Vu, Van H. (2010). Random matrices: universality of local eigenvalue statistics up to the edge. Communi-cations inMathematical Physics 298 (2): 549572. arXiv:0908.1982. Bibcode:2010CMaPh.298..549T. doi:10.1007/s00220-010-1044-5.

    [35] Anderson, G.W.; Guionnet, A.; Zeitouni, O. (2010). An introduction to random matrices. Cambridge: Cambridge Univer-sity Press. ISBN 978-0-521-19452-5.

    [36] Akemann, G.; Baik, J.; Di Francesco, P. (2011). The Oxford Handbook of Random Matrix Theory. Oxford: OxfordUniversity Press. ISBN 978-0-19-957400-1.

    [37] Diaconis, Persi (2003). Patterns in eigenvalues: the 70th Josiah Willard Gibbs lecture. American Mathematical Society.Bulletin. New Series 40 (2): 155178. doi:10.1090/S0273-0979-03-00975-3. MR 1962294.

    [38] Diaconis, Persi (2005). What is ... a randommatrix?". Notices of the American Mathematical Society 52 (11): 13481349.ISSN 0002-9920. MR 2183871.

9.8 External links

    Fyodorov, Y. (2011). "Random matrix theory". Scholarpedia 6 (3): 9886. Bibcode:2011SchpJ...6.9886F. doi:10.4249/scholarpedia.9886.

    Weisstein, E. W. "Random Matrix". MathWorld, A Wolfram Web Resource.

Chapter 10

Tracy–Widom distribution

[Figure: the Tracy–Widom distribution functions F_\beta(s) for \beta = 1, 2, 4.]

The Tracy–Widom distribution, introduced by Craig Tracy and Harold Widom (1993, 1994), is the probability distribution of the normalized largest eigenvalue of a random Hermitian matrix.[1]

In practical terms, Tracy–Widom is the crossover function between the two phases of weakly versus strongly coupled components in a system.[2] It also appears in the distribution of the length of the longest increasing subsequence of random permutations (Baik, Deift & Johansson 1999), in current fluctuations of the asymmetric simple exclusion process (ASEP) with step initial condition (Johansson 2000, Tracy & Widom 2009), and in simplified mathematical models of the behavior of the longest common subsequence problem on random inputs (Majumdar & Nechaev 2005). See (Takeuchi & Sano 2010, Takeuchi et al. 2011) for experimental testing (and verifying) that the interface fluctuations of a growing droplet (or substrate) are described by the TW distribution F_2 (or F_1) as predicted by (Prähofer & Spohn 2000).

The distribution F_1 is of particular interest in multivariate statistics (Johnstone 2007, 2008, 2009). For a discussion of the universality of F_\beta, \beta = 1, 2, and 4, see Deift (2007). For an application of F_1 to inferring population structure from genetic data see Patterson, Price & Reich (2006).

10.1 Definition

The Tracy–Widom distribution is defined as the limit:[3]


F_2(s) = \lim_{n\to\infty} \operatorname{Prob}\left(\left(\lambda_{\max} - \sqrt{2n}\right)\sqrt{2}\,n^{1/6} \le s\right).

The shift by \sqrt{2n} is used to keep the distributions centered at 0. The multiplication by \sqrt{2}\,n^{1/6} is used because the standard deviation of the distributions scales as n^{-1/6}.
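The definition can be checked empirically by sampling GUE matrices. In the sketch below (Python; the function name and the normalization, GUE density proportional to exp(−Tr H²) so that the spectral edge sits at √(2n), are this example's choices, not from the text), the rescaled largest eigenvalues should roughly reproduce the Tracy–Widom F_2 mean of about −1.77 and standard deviation of about 0.90:

```python
import numpy as np

def gue_scaled_max(n, rng):
    """One GUE draw with density proportional to exp(-Tr H^2), so the
    spectral edge sits at sqrt(2n); returns the rescaled largest eigenvalue
    (lambda_max - sqrt(2n)) * sqrt(2) * n**(1/6), which converges to F2."""
    m = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    h = (m + m.conj().T) / 2                     # Hermitian GUE matrix
    lam_max = np.linalg.eigvalsh(h)[-1]
    return (lam_max - np.sqrt(2 * n)) * np.sqrt(2) * n ** (1 / 6)

rng = np.random.default_rng(0)
samples = np.array([gue_scaled_max(100, rng) for _ in range(300)])
# sample mean should be near -1.77, sample sd near 0.90 (up to finite-n bias)
```

Even at modest matrix sizes the agreement is reasonable, since the convergence to the Tracy–Widom law is known to be fast in n.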

10.2 Equivalent formulations

The cumulative distribution function of the Tracy–Widom distribution can be given as the Fredholm determinant

    F_2(s) = \det(I - A_s)

of the operator A_s on square-integrable functions on the half line (s, \infty), with kernel given in terms of Airy functions Ai by

    A(x, y) = \frac{\operatorname{Ai}(x)\operatorname{Ai}'(y) - \operatorname{Ai}'(x)\operatorname{Ai}(y)}{x - y}.

It can also be given as an integral

    F_2(s) = \exp\left(-\int_s^{\infty} (x - s)\, q(x)^2 \, dx\right)

in terms of a solution of a Painlevé equation of type II,

    q''(s) = s\,q(s) + 2\,q(s)^3,

where q, called the Hastings–McLeod solution, satisfies the boundary condition

    q(s) \sim \operatorname{Ai}(s), \quad s \to \infty.

10.3 Other Tracy–Widom distributions

The distribution F_2 is associated to unitary ensembles in random matrix theory. There are analogous Tracy–Widom distributions F_1 and F_4 for orthogonal (\beta = 1) and symplectic (\beta = 4) ensembles that are also expressible in terms of the same Painlevé transcendent q (Tracy & Widom 1996):

    F_1(s) = \exp\left(-\frac{1}{2}\int_s^{\infty} q(x)\,dx\right)\,(F_2(s))^{1/2}

and

    F_4(s/\sqrt{2}) = \cosh\left(\frac{1}{2}\int_s^{\infty} q(x)\,dx\right)\,(F_2(s))^{1/2}.

For an extension of the definition of the Tracy–Widom distributions F_\beta to all \beta > 0, see Ramírez, Rider & Virág (2006).


10.4 Numerical approximations

Numerical techniques for obtaining numerical solutions to the Painlevé equations of types II and V, and for numerically evaluating eigenvalue distributions of random matrices in the beta ensembles, were first presented by Edelman & Persson (2005) using MATLAB. These approximation techniques were further analytically justified in Bejan (2005) and used to provide numerical evaluation of Painlevé II and Tracy–Widom distributions (for \beta = 1, 2, and 4) in S-PLUS. These distributions were tabulated in Bejan (2005) to four significant digits for values of the argument in increments of 0.01; a statistical table for p-values was also given in this work. Bornemann (2009) gave accurate and fast algorithms for the numerical evaluation of F_\beta and the density functions f_\beta(s) = dF_\beta/ds for \beta = 1, 2, and 4. These algorithms can be used to compute numerically the mean, variance, skewness and kurtosis of the distributions F_\beta.

Functions for working with the Tracy–Widom laws are also presented in the R package 'RMTstat' by Johnstone et al. (2009) and the MATLAB package 'RMLab' by Dieng (2006).

For a simple approximation based on a shifted gamma distribution, see Chiani (2012).
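The Fredholm-determinant route is short enough to sketch directly. The code below is a simplified version of Bornemann's approach in Python (the quadrature size `m` and truncation length `cutoff` are assumptions of this sketch; Bornemann instead uses a change of variables to reach near machine precision): it discretizes the Airy-kernel operator of Section 10.2 with Gauss–Legendre quadrature and evaluates an ordinary matrix determinant.

```python
import numpy as np
from scipy.special import airy

def tracy_widom_F2(s, m=60, cutoff=12.0):
    """Approximate F2(s) = det(I - A_s) by a Nystrom discretization of the
    Airy-kernel operator on the truncated interval (s, s + cutoff), using
    m-point Gauss-Legendre quadrature with symmetric sqrt-weighting."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    x = s + 0.5 * cutoff * (nodes + 1.0)         # map [-1, 1] -> (s, s + cutoff)
    w = 0.5 * cutoff * weights
    ai, aip, _, _ = airy(x)                      # Ai(x) and Ai'(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = (ai[:, None] * aip[None, :] - aip[:, None] * ai[None, :]) \
            / (x[:, None] - x[None, :])
    np.fill_diagonal(K, aip ** 2 - x * ai ** 2)  # diagonal limit via L'Hopital
    sw = np.sqrt(w)
    return float(np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :]))
```

For arguments around the bulk of the distribution, this plain truncation should already agree with tabulated values to several digits; the variable transformations in Bornemann's paper push the accuracy much further.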

10.5 Footnotes

[1] Dominici, D. (2008). Special Functions and Orthogonal Polynomials. American Math. Soc.

[2] "Mysterious Statistical Law May Finally Have an Explanation", wired.com, 2014-10-27.

[3] Tracy, C. A.; Widom, H. (1996), "On orthogonal and symplectic matrix ensembles", Communications in Mathematical Physics 177 (3): 727–754, Bibcode:1996CMaPh.177..727T, doi:10.1007/BF02099545, MR 1385083.

10.6 References

    Baik, J.; Deift, P.; Johansson, K. (1999), "On the distribution of the length of the longest increasing subsequence of random permutations", Journal of the American Mathematical Society 12 (4): 1119–1178, doi:10.1090/S0894-0347-99-00307-0, JSTOR 2646100, MR 1682248.

    Deift, P. (2007), "Universality for mathematical and physical systems", International Congress of Mathematicians (Madrid, 2006), European Mathematical Society, pp. 125–152, MR 2334189.

    Johansson, K. (2000), "Shape fluctuations and random matrices", Communications in Mathematical Physics 209 (2): 437–476, arXiv:math/9903134, Bibcode:2000CMaPh.209..437J, doi:10.1007/s002200050027.

    Johansson, K. (2002), "Toeplitz determinants, random growth and determinantal processes", Proc. International Congress of Mathematicians (Beijing, 2002) 3, Beijing: Higher Ed. Press, pp. 53–62, MR 1957518.

    Johnstone, I. M. (2007), "High dimensional statistical inference and random matrices", International Congress of Mathematicians (Madrid, 2006), European Mathematical Society, pp. 307–333, MR 2334195.

    Johnstone, I. M. (2008), "Multivariate analysis and Jacobi ensembles: largest eigenvalue, Tracy–Widom limits and rates of convergence", Annals of Statistics 36 (6): 2638–2716, arXiv:0803.3408, doi:10.1214/08-AOS605, PMC 2821031, PMID 20157626.

    Johnstone, I. M. (2009), "Approximate null distribution of the largest root in multivariate analysis", Annals of Applied Statistics 3 (4): 1616–1633, arXiv:1009.5854, doi:10.1214/08-AOAS220, PMC 2880335, PMID 20526465.

    Majumdar, Satya N.; Nechaev, Sergei (2005), "Exact asymptotic results for the Bernoulli matching model of sequence alignment", Physical Review E 72 (2): 020901, 4, doi:10.1103/PhysRevE.72.020901, MR 2177365.

    Patterson, N.; Price, A. L.; Reich, D. (2006), "Population structure and eigenanalysis", PLoS Genetics 2 (12): e190, doi:10.1371/journal.pgen.0020190, PMC 1713260, PMID 17194218.

    Prähofer, M.; Spohn, H. (2000), "Universal distributions for growing processes in 1+1 dimensions and random matrices", Physical Review Letters 84 (21): 4882–4885, arXiv:cond-mat/9912264, Bibcode:2000PhRvL..84.4882P, doi:10.1103/PhysRevLett.84.4882, PMID 10990822.

    Takeuchi, K. A.; Sano, M. (2010), "Universal fluctuations of growing interfaces: Evidence in turbulent liquid crystals", Physical Review Letters 104 (23): 230601, arXiv:1001.5121, Bibcode:2010PhRvL.104w0601T, doi:10.1103/PhysRevLett.104.230601, PMID 20867221.

    Takeuchi, K. A.; Sano, M.; Sasamoto, T.; Spohn, H. (2011), "Growing interfaces uncover universal fluctuations behind scale invariance", Scientific Reports 1: 34, arXiv:1108.2118, Bibcode:2011NatSR...1E..34T, doi:10.1038/srep00034.

    Tracy, C. A.; Widom, H. (1993), "Level-spacing distributions and the Airy kernel", Physics Letters B 305 (1–2): 115–118, arXiv:hep-th/9210074, Bibcode:1993PhLB..305..115T, doi:10.1016/0370-2693(93)91114-3.

    Tracy, C. A.; Widom, H. (1994), "Level-spacing distributions and the Airy kernel", Communications in Mathematical Physics 159 (1): 151–174, arXiv:hep-th/9211141, Bibcode:1994CMaPh.159..151T, doi:10.1007/BF02100489, MR 1257246.

    Tracy, C. A.; Widom, H. (2002), "Distribution functions for largest eigenvalues and their applications", Proc. International Congress of Mathematicians (Beijing, 2002) 1, Beijing: Higher Ed. Press, pp. 587–596, MR 1989209.

    Tracy, C. A.; Widom, H. (2009), "Asymptotics in ASEP with step initial condition", Communications in Mathematical Physics 290 (1): 129–154, arXiv:0807.1713, Bibcode:2009CMaPh.290..129T, doi:10.1007/s00220-009-0761-0.

10.7 Additional reading

    Bejan, Andrei Iu. (2005), Largest eigenvalues and sample covariance matrices. Tracy–Widom and Painlevé II: Computational aspects and realization in S-Plus with applications, M.Sc. dissertation, Department of Statistics, The University of Warwick.

    Bornemann, F. (2010), "On the numerical evaluation of distributions in random matrix theory: A review with an invitation to experimental mathematics", Markov Processes and Related Fields 16 (4): 803–866, arXiv:0904.1581, Bibcode:2009arXiv0904.1581B.

    Chiani, M. (2012), "Distribution of the largest eigenvalue for real Wishart and Gaussian random matrices and a simple approximation for the Tracy–Widom distribution", arXiv:1209.3394.

    Edelman, A.; Persson, P.-O. (2005), "Numerical Methods for Eigenvalue Distributions of Random Matrices", arXiv:math-ph/0501068, Bibcode:2005math.ph...1068E.

    Ramírez, J. A.; Rider, B.; Virág, B. (2006), "Beta ensembles, stochastic Airy spectrum, and a diffusion", arXiv:math/0607331, Bibcode:2006math......7331R.

10.8 External links

    Kuijlaars, Universality of distribution functions in random matrix theory.

    Tracy, C. A.; Widom, H., The distributions of random matrix theory and their applications.

    Johnstone, Iain; Ma, Zongming; Perry, Patrick; Shahram, Morteza (2009), Package 'RMTstat'.

    Quanta Magazine: At the Far Ends of a New Universal Law.

Chapter 11

Weingarten function

In mathematics, Weingarten functions are rational functions indexed by partitions of integers that can be used to calculate integrals of products of matrix coefficients over classical groups. They were first studied by Weingarten (1978), who found their asymptotic behavior, and named by Collins (2003), who evaluated them explicitly for the unitary group.

11.1 Unitary groups

Weingarten functions are used for evaluating integrals over the unitary group U_d of products of matrix coefficients of the form

    \int_{U_d} U_{i_1 j_1} \cdots U_{i_q j_q}\, U^*_{j'_1 i'_1} \cdots U^*_{j'_q i'_q}\, dU.

(Here U^* denotes the conjugate transpose of U, alternatively denoted as U^\dagger.)

This integral is equal to

    \sum_{\sigma,\tau \in S_q} \delta_{i_1 i'_{\sigma(1)}} \cdots \delta_{i_q i'_{\sigma(q)}}\, \delta_{j_1 j'_{\tau(1)}} \cdots \delta_{j_q j'_{\tau(q)}}\, \operatorname{Wg}(d, \sigma\tau^{-1})

where Wg is the Weingarten function, given by

    \operatorname{Wg}(d, \sigma) = \frac{1}{q!^2} \sum_{\lambda} \frac{\chi^{\lambda}(1)^2\, \chi^{\lambda}(\sigma)}{s_{\lambda,d}(1)}

where the sum is over all partitions \lambda of q (Collins 2003). Here \chi^\lambda is the character of S_q corresponding to the partition \lambda and s_\lambda is the Schur polynomial of \lambda, so that s_{\lambda,d}(1) is the dimension of the representation of U_d corresponding to \lambda.

The Weingarten functions are rational functions in d. They can have poles for small values of d, which cancel out in the formula above. There is an alternative inequivalent definition of Weingarten functions, where one only sums over partitions with at most d parts. This is no longer a rational function of d, but it is finite for all positive integers d. The two sorts of Weingarten functions coincide for d larger than q, and either can be used in the formula for the integral.

11.1.1 Examples

The first few Weingarten functions Wg(\sigma, d) are

    \operatorname{Wg}(\cdot, d) = 1 \quad \text{(the trivial case where } q = 0\text{)}

    \operatorname{Wg}(1, d) = \frac{1}{d}

    \operatorname{Wg}(2, d) = \frac{-1}{d(d^2-1)}

    \operatorname{Wg}(1^2, d) = \frac{1}{d^2-1}

    \operatorname{Wg}(3, d) = \frac{2}{d(d^2-1)(d^2-4)}

    \operatorname{Wg}(2\,1, d) = \frac{-1}{(d^2-1)(d^2-4)}

    \operatorname{Wg}(1^3, d) = \frac{d^2-2}{d(d^2-1)(d^2-4)}

where permutations \sigma are denoted by their cycle shapes.

There exists a computer algebra program to produce these expressions.[1]

11.1.2 Asymptotic behavior

For large d, the Weingarten function Wg has the asymptotic behavior

    \operatorname{Wg}(\sigma, d) = d^{-n-|\sigma|} \prod_i (-1)^{|C_i|-1} c_{|C_i|-1} + O(d^{-n-|\sigma|-2})

where the permutation \sigma is a product of cycles C_i of lengths |C_i|, c_n = (2n)!/(n!\,(n+1)!) is a Catalan number, and |\sigma| is the smallest number of transpositions of which \sigma is a product. There exists a diagrammatic method[2] for systematically calculating the integrals over the unitary group as a power series in 1/d.
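These exact values are easy to test by Monte Carlo integration over the Haar measure. The sketch below (Python; the sampler construction and the particular moments are this example's choices, not from the text) checks E|U_{11}|^2 = Wg(1, d) = 1/d, along with two q = 2 consequences of the sum formula: E|U_{11}|^4 = 2(Wg(1^2, d) + Wg(2, d)) = 2/(d(d+1)) and E[|U_{11}|^2 |U_{22}|^2] = Wg(1^2, d) = 1/(d^2 - 1).

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-distributed d x d unitary: QR of a complex Ginibre matrix,
    with columns rephased so the diagonal of R is positive (otherwise
    the raw QR output is not Haar-distributed)."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    ph = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * ph

d, trials = 4, 20000
rng = np.random.default_rng(1)
acc = np.zeros(3)
for _ in range(trials):
    u = haar_unitary(d, rng)
    a, b = abs(u[0, 0]) ** 2, abs(u[1, 1]) ** 2
    acc += (a, a * a, a * b)
m2, m4, m22 = acc / trials
# Weingarten predictions: 1/d, 2/(d*(d+1)), 1/(d**2 - 1)
```

With 20000 samples the Monte Carlo averages should match the three predictions to roughly two decimal places.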

11.2 Orthogonal and symplectic groups

For orthogonal and symplectic groups the Weingarten functions were evaluated by Collins & Śniady (2006). Their theory is similar to the case of the unitary group. They are parameterized by partitions such that all parts have even size.

11.3 External links

    Collins, Benoît (2003), "Moments and cumulants of polynomial random variables on unitary groups, the Itzykson-Zuber integral, and free probability", International Mathematics Research Notices 2003 (17): 953–982, arXiv:math-ph/0205010, doi:10.1155/S107379280320917X, MR 1959915.

    Collins, Benoît; Śniady, Piotr (2006), "Integration with respect to the Haar measure on unitary, orthogonal and symplectic group", Communications in Mathematical Physics 264 (3): 773–795, doi:10.1007/s00220-006-1554-3, MR 2217291.

    Weingarten, Don (1978), "Asymptotic behavior of group integrals in the limit of infinite rank", Journal of Mathematical Physics 19 (5): 999–1001, doi:10.1063/1.523807, MR 0471696.

11.4 References

[1] Z. Puchała and J. A. Miszczak, "Symbolic integration with respect to the Haar measure on the unitary group in Mathematica", arXiv:1109.4244 (2011).

[2] P. W. Brouwer and C. W. J. Beenakker, "Diagrammatic method of integration over the unitary group, with applications to quantum transport in mesoscopic systems", J. Math. Phys. 37, 4904 (1996), arXiv:cond-mat/9604059.

Chapter 12

Wishart distribution

In statistics, the Wishart distribution is a generalization to multiple dimensions of the chi-squared distribution or, in the case of non-integer degrees of freedom, of the gamma distribution. It is named in honor of John Wishart, who first formulated the distribution in 1928.[1]

It is a family of probability distributions defined over symmetric, nonnegative-definite matrix-valued random variables (random matrices). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance matrix of a multivariate normal random vector.

12.1 Definition

Suppose X is an n × p matrix, each row of which is independently drawn from a p-variate normal distribution with zero mean:

    X_{(i)} = (x_i^1, \dots, x_i^p)^T \sim N_p(0, V).

Then the Wishart distribution is the probability distribution of the p × p random matrix S = X^T X, known as the scatter matrix. One indicates that S has that probability distribution by writing

    S \sim W_p(V, n).

The positive integer n is the number of degrees of freedom. Sometimes this is written W(V, p, n). For n ≥ p the matrix S is invertible with probability 1 if V is invertible.

If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
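Concretely, the scatter-matrix construction above is already a sampler: stack n independent N_p(0, V) rows into X and form S = X^T X. A minimal NumPy sketch (the function name is this example's):

```python
import numpy as np

def wishart_sample(V, n, rng):
    """Draw S ~ W_p(V, n) as the scatter matrix S = X^T X, where the n rows
    of X are i.i.d. N_p(0, V)."""
    p = V.shape[0]
    X = rng.multivariate_normal(np.zeros(p), V, size=n)
    return X.T @ X

rng = np.random.default_rng(0)
V = np.array([[2.0, 0.5], [0.5, 1.0]])
n = 10
S = wishart_sample(V, n, rng)
# E[S] = n * V, so the average of many draws should approach n * V
```

Each draw is symmetric and (for n ≥ p and invertible V) positive definite with probability 1, consistent with the definition.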

12.2 Occurrence

The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices and in multidimensional Bayesian analysis.[2] It is also encountered in wireless communications, while analyzing the performance of Rayleigh fading MIMO wireless channels.[3]

12.3 Probability density function

The Wishart distribution can be characterized by its probability density function as follows:

Let X be a p × p symmetric matrix of random variables that is positive definite. Let V be a (fixed) positive definite matrix of size p × p.

Then, if n ≥ p, X has a Wishart distribution with n degrees of freedom if it has a probability density function given by

    \frac{1}{2^{np/2}\, |\mathbf{V}|^{n/2}\, \Gamma_p\!\left(\frac{n}{2}\right)}\, |\mathbf{X}|^{(n-p-1)/2}\, e^{-\frac{1}{2}\operatorname{tr}(\mathbf{V}^{-1}\mathbf{X})}

where |X| denotes determinant and \Gamma_p(\cdot) is the multivariate gamma function, defined as

    \Gamma_p\!\left(\frac{n}{2}\right) = \pi^{p(p-1)/4} \prod_{j=1}^{p} \Gamma\!\left(\frac{n}{2} + \frac{1-j}{2}\right).

In fact the above definition can be extended to any real n > p − 1. If n ≤ p − 1, then the Wishart no longer has a density; instead it represents a singular distribution that takes values in a lower-dimensional subspace of the space of p × p matrices.[4]
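Since this density is easy to mis-transcribe, a direct transcription can be checked against a library implementation. The sketch below (Python; assuming SciPy's scipy.stats.wishart, whose df and scale parameters correspond to n and V here) codes the log-density term by term as written above:

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import multigammaln   # log of the multivariate gamma function

def wishart_logpdf(X, V, n):
    """Log of the Wishart density, transcribed directly from the formula."""
    p = V.shape[0]
    _, logdet_x = np.linalg.slogdet(X)
    _, logdet_v = np.linalg.slogdet(V)
    return (-(n * p / 2) * np.log(2) - (n / 2) * logdet_v - multigammaln(n / 2, p)
            + ((n - p - 1) / 2) * logdet_x
            - 0.5 * np.trace(np.linalg.solve(V, X)))

V = np.array([[2.0, 0.3], [0.3, 1.0]])
X = np.array([[5.0, 1.0], [1.0, 4.0]])
n = 6
hand = wishart_logpdf(X, V, n)
ref = wishart.logpdf(X, df=n, scale=V)   # should agree with the transcription
```

The two values should coincide up to floating-point roundoff, confirming the normalization constant 2^{np/2} |V|^{n/2} \Gamma_p(n/2).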

12.4 Use in Bayesian statistics

In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix \Omega = \Sigma^{-1}, where \Sigma is the covariance matrix.

12.4.1 Choice of parameters

The least informative, proper Wishart prior is obtained by setting n = p.

The prior mean of W_p(V, n) is nV, suggesting that a reasonable choice for V^{-1} would be n\Sigma_0, where \Sigma_0 is some prior guess for the covariance matrix.

    12.5 Properties

    12.5.1 Log-expectation

Note the following formula:[5]

    \operatorname{E}[\ln|\mathbf{X}|] = \sum_{i=1}^{p} \psi\!\left(\frac{n+1-i}{2}\right) + p \ln 2 + \ln|\mathbf{V}|

where \psi is the digamma function (the derivative of the log of the gamma function).

This plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution.
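The identity can be spot-checked by Monte Carlo; the parameters below are illustrative choices, not from the text:

```python
import numpy as np
from scipy.special import digamma
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, n = 3, 7
V = np.diag([1.0, 2.0, 0.5])

# closed form: E[ln|X|] = sum_i psi((n+1-i)/2) + p*ln(2) + ln|V|
closed = (sum(digamma((n + 1 - i) / 2) for i in range(1, p + 1))
          + p * np.log(2) + np.linalg.slogdet(V)[1])

# Monte Carlo average of ln|X| over Wishart draws
samples = wishart.rvs(df=n, scale=V, size=20000, random_state=rng)
mc = np.linalg.slogdet(samples)[1].mean()
```

With 20000 draws, `mc` should agree with `closed` to within a few hundredths.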

    12.5.2 Entropy

The information entropy of the distribution has the following formula:[5]

    H[\mathbf{X}] = -\ln B(\mathbf{V}, n) - \frac{n-p-1}{2}\,\operatorname{E}[\ln|\mathbf{X}|] + \frac{np}{2}

where B(V, n) is the normalizing constant of the distribution:

    B(\mathbf{V}, n) = \frac{1}{|\mathbf{V}|^{n/2}\, 2^{np/2}\, \Gamma_p\!\left(\frac{n}{2}\right)}.

This can be expanded as follows:

    \begin{aligned}
    H[\mathbf{X}] &= \frac{n}{2}\ln|\mathbf{V}| + \frac{np}{2}\ln 2 + \ln\Gamma_p\!\left(\frac{n}{2}\right) - \frac{n-p-1}{2}\,\operatorname{E}[\ln|\mathbf{X}|] + \frac{np}{2} \\
    &= \frac{n}{2}\ln|\mathbf{V}| + \frac{np}{2}\ln 2 + \frac{p(p-1)}{4}\ln\pi + \sum_{i=1}^{p}\ln\Gamma\!\left(\frac{n+1-i}{2}\right) \\
    &\qquad - \frac{n-p-1}{2}\left(\sum_{i=1}^{p}\psi\!\left(\frac{n+1-i}{2}\right) + p\ln 2 + \ln|\mathbf{V}|\right) + \frac{np}{2} \\
    &= \frac{p+1}{2}\ln|\mathbf{V}| + \frac{p(p+1)}{2}\ln 2 + \frac{p(p-1)}{4}\ln\pi + \sum_{i=1}^{p}\ln\Gamma\!\left(\frac{n+1-i}{2}\right) \\
    &\qquad - \frac{n-p-1}{2}\sum_{i=1}^{p}\psi\!\left(\frac{n+1-i}{2}\right) + \frac{np}{2}.
    \end{aligned}

    12.5.3 Characteristic function

The characteristic function of the Wishart distribution is

    \Theta \mapsto \left|\mathbf{I} - 2i\,\boldsymbol{\Theta}\mathbf{V}\right|^{-n/2}.

In other words,

    \Theta \mapsto \operatorname{E}\left[\exp\left(i\operatorname{tr}(\mathbf{X}\boldsymbol{\Theta})\right)\right] = \left|\mathbf{I} - 2i\,\boldsymbol{\Theta}\mathbf{V}\right|^{-n/2}

where E[·] denotes expectation. (Here \Theta and I are matrices the same size as V (I is the identity matrix), and i is the square root of −1.)[6]

12.6 Theorem

If a p × p random matrix X has a Wishart distribution with m degrees of freedom and variance matrix V (write X ~ W_p(V, m)) and C is a q × p matrix of rank q, then[7]

    \mathbf{C}\mathbf{X}\mathbf{C}^T \sim W_q\!\left(\mathbf{C}\mathbf{V}\mathbf{C}^T, m\right).

    12.6.1 Corollary 1

If z is a nonzero p × 1 constant vector, then:[7]

    \mathbf{z}^T\mathbf{X}\mathbf{z} \sim \sigma_z^2\,\chi_m^2.

In this case, \chi_m^2 is the chi-squared distribution and \sigma_z^2 = \mathbf{z}^T\mathbf{V}\mathbf{z} (note that \sigma_z^2 is a constant; it is positive because V is positive definite).
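As a numerical illustration (the vector z and matrix V below are arbitrary choices), dividing z^T X z by \sigma_z^2 = z^T V z should produce \chi_m^2 samples, i.e. samples with mean m and variance 2m:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, m = 3, 8
V = np.array([[2.0, 0.4, 0.0],
              [0.4, 1.0, 0.2],
              [0.0, 0.2, 1.5]])
z = np.array([1.0, -2.0, 0.5])

sigma2_z = z @ V @ z                                  # the scale constant z^T V z
draws = wishart.rvs(df=m, scale=V, size=20000, random_state=rng)
q = np.einsum('i,nij,j->n', z, draws, z) / sigma2_z   # should be chi^2 with m dof
```

The sample mean of `q` should sit near m = 8 and the sample variance near 2m = 16.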

12.6.2 Corollary 2

Consider the case where z^T = (0, ..., 0, 1, 0, ..., 0) (that is, the j-th element is one and all others zero). Then corollary 1 above shows that

    w_{jj} \sim \sigma_{jj}^2\,\chi_m^2

gives the marginal distribution of each of the elements on the matrix's diagonal.

Noted statistician George Seber points out that the Wishart distribution is not called the "multivariate chi-squared distribution" because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term multivariate for the case when all univariate marginals belong to the same family.[8]

12.7 Estimator of the multivariate normal distribution

The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution.[9] A derivation of the MLE uses the spectral theorem.

12.8 Bartlett decomposition

The Bartlett decomposition of a matrix X from a p-variate Wishart distribution with scale matrix V and n degrees of freedom is the factorization

    \mathbf{X} = \mathbf{L}\mathbf{A}\mathbf{A}^T\mathbf{L}^T,

where L is the Cholesky factor of V, and

    \mathbf{A} = \begin{pmatrix}
    c_1 & 0 & 0 & \cdots & 0 \\
    n_{21} & c_2 & 0 & \cdots & 0 \\
    n_{31} & n_{32} & c_3 & \cdots & 0 \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    n_{p1} & n_{p2} & n_{p3} & \cdots & c_p
    \end{pmatrix}

where c_i^2 \sim \chi^2_{n-i+1} and n_{ij} \sim N(0, 1) independently.[10] This provides a useful method for obtaining random samples from a Wishart distribution.[11]

12.9 Marginal distribution of matrix elements

Let V be a 2 × 2 variance matrix characterized by correlation coefficient −1 < \rho < 1, and let L be its lower Cholesky factor:

    \mathbf{V} = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}, \qquad
    \mathbf{L} = \begin{pmatrix} \sigma_1 & 0 \\ \rho\sigma_2 & \sqrt{1-\rho^2}\,\sigma_2 \end{pmatrix}.

Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 × 2 Wishart distribution is

    \mathbf{X} = \begin{pmatrix}
    \sigma_1^2 c_1^2 & \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\,c_1 n_{21}\right) \\
    \sigma_1\sigma_2\left(\rho c_1^2 + \sqrt{1-\rho^2}\,c_1 n_{21}\right) & \sigma_2^2\left((1-\rho^2)c_2^2 + \left(\sqrt{1-\rho^2}\,n_{21} + \rho c_1\right)^2\right)
    \end{pmatrix}.

The diagonal elements, most evidently in the first element, follow the \chi^2 distribution with n degrees of freedom (scaled by \sigma^2) as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a \chi^2 distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution

    f(x_{12}) = \frac{|x_{12}|^{(n-1)/2}}{\Gamma\!\left(\frac{n}{2}\right)\sqrt{2^{n-1}\pi(1-\rho^2)\,(\sigma_1\sigma_2)^{n+1}}}\; K_{(n-1)/2}\!\left(\frac{|x_{12}|}{\sigma_1\sigma_2(1-\rho^2)}\right) \exp\!\left(\frac{\rho x_{12}}{\sigma_1\sigma_2(1-\rho^2)}\right)

where K_\nu(z) is the modified Bessel function of the second kind.[12] Similar results may be found for higher dimensions, but the interdependence of the off-diagonal correlations becomes increasingly complicated. It is also possible to write down the moment-generating function even in the noncentral case (essentially the nth power of Craig (1936)[13], equation 10), although the probability density becomes an infinite sum of Bessel functions.

12.10 The possible range of the shape parameter

It can be shown[14] that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set

    \Lambda_p := \{0, \dots, p-1\} \cup (p-1, \infty).

This set is named after Gindikin, who introduced it[15] in the seventies in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely

    \Lambda_p^* := \{0, \dots, p-1\},

the corresponding Wishart distribution has no Lebesgue density.

12.11 Relationships to other distributions

    The Wishart distribution is related to the inverse-Wishart distribution, denoted by W_p^{-1}, as follows: if X ~ W_p(V, n) and if we do the change of variables C = X^{-1}, then C ~ W_p^{-1}(V^{-1}, n). This relationship may be derived by noting that the absolute value of the Jacobian determinant of this change of variables is |C|^{p+1}; see for example equation (15.15) in.[16]

    In Bayesian statistics, the Wishart distribution is a conjugate prior for the precision parameter of the multivariate normal distribution when the mean parameter is known.[17]

    A generalization is the multivariate gamma distribution.

    A different type of generalization is the normal-Wishart distribution, essentially the product of a multivariate normal distribution with a Wishart distribution.

12.12 See also

    Chi-squared distribution
    F-distribution
    Gamma distribution
    Hotelling's T-squared distribution
    Inverse-Wishart distribution
    Multivariate gamma distribution
    Student's t-distribution
    Wilks' lambda distribution

12.13 References

[1] Wishart, J. (1928). "The generalised product moment distribution in samples from a normal multivariate population". Biometrika 20A (1–2): 32–52. doi:10.1093/biomet/20A.1-2.32. JFM 54.0565.02. JSTOR 2331939.

[2] Gelman, A. (2013). Bayesian Data Analysis. Chapman & Hall. p. 582. ISBN 158488388X. Retrieved 6/3/15.

[3] Zanella, A.; Chiani, M.; Win, M. Z. (April 2009). "On the marginal distribution of the eigenvalues of Wishart matrices". IEEE Transactions on Communications 57 (4): 1050–1060. doi:10.1109/TCOMM.2009.04.070143.

[4] Uhlig, H. (1994). "On Singular Wishart and Singular Multivariate Beta Distributions". The Annals of Statistics 22: 395. doi:10.1214/aos/1176325375.

[5] C. M. Bishop, Pattern Recognition and Machine Learning, Springer 2006, p. 693.

[6] Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N.J.: Wiley Interscience. p. 259. ISBN 0-471-36091-0.

[7] Rao, C. R. (1965). Linear Statistical Inference and its Applications. Wiley. p. 535.

[8] Seber, George A. F. (2004). Multivariate Observations. Wiley. ISBN 978-0471691211.

[9] Chatfield, C.; Collins, A. J. (1980). Introduction to Multivariate Analysis. London: Chapman and Hall. pp. 103–108. ISBN 0-412-16030-7.

[10] Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N.J.: Wiley Interscience. p. 257. ISBN 0-471-36091-0.

[11] Smith, W. B.; Hocking, R. R. (1972). "Algorithm AS 53: Wishart Variate Generator". Journal of the Royal Statistical Society, Series C 21 (3): 341–345. JSTOR 2346290.

[12] Pearson, Karl; Jeffery, G. B.; Elderton, Ethel M. (December 1929). "On the Distribution of the First Product Moment-Coefficient, in Samples Drawn from an Indefinitely Large Normal Population". Biometrika (Biometrika Trust) 21: 164–201. doi:10.2307/2332556. JSTOR 2332556.

[13] Craig, Cecil C. (1936). "On the Frequency Function of xy". Ann. Math. Statist. 7: 1–15. doi:10.1214/aoms/1177732541.

[14] Peddada, Shyamal Das; Richards, Donald St. P. (1991). "Proof of a Conjecture of M. L. Eaton on the Characteristic Function of the Wishart Distribution". Annals of Probability 19 (2): 868–874. doi:10.1214/aop/1176990455.

[15] Gindikin, S. G. (1975). "Invariant generalized functions in homogeneous domains". Funct. Anal. Appl. 9 (1): 50–52. doi:10.1007/BF01078179.

[16] Dwyer, Paul S. (1967). "Some Applications of Matrix Derivatives in Multivariate Analysis". J. Amer. Statist. Assoc. 62 (318): 607–625. JSTOR 2283988.

[17] Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.

12.14 External links

    A C++ library for random matrix generation.


12.15 Text and image sources, contributors, and licenses

12.15.1 Text

    Circular ensemble Source: http://en.wikipedia.org/wiki/Circular_ensemble?oldid=610966950 Contributors: Michael Hardy, Giftlite,Rjwilmsi, R.e.b., Sodin, Acipsen, Brienanni, Headbomb, Yobot and Techibun

    Circular law Source: http://en.wikipedia.org/wiki/Circular_law?oldid=658917528 Contributors: Michael Hardy, Bearcat, Jason Quinn,ESkog, Rjwilmsi, Sodin, Gutworth, David Eppstein, XLinkBot, Yobot, Azylber, RjwilmsiBot, Syxiao, Ugncreative Usergname, Eigenbra,Anrnusna and Anonymous: 4

    Euclidean random matrix Source: http://en.wikipedia.org/wiki/Euclidean_random_matrix?oldid=530734590 Contributors: Andrew-man327, Melcombe, Aamuizz, Mabdul, Nolelover, AnomieBOT, EarwigBot, Sergey69 and Anonymous: 1

    Inversematrix gamma distribution Source: http://en.wikipedia.org/wiki/Inverse_matrix_gamma_distribution?oldid=640370885 Con-tributors: Benwing, Gadget850, Shorespirit, Melcombe, Yobot, AnomieBOT, BeyondNormality and Anonymous: 2

    MarchenkoPastur distribution Source: http://en.wikipedia.org/wiki/Marchenko%E2%80%93Pastur_distribution?oldid=624343048Contributors: Michael Hardy, Sodin, Lavaka, Melcombe, 1ForTheMoney, Burningview, Yobot, Water-vole, Angry bee, FrescoBot, Lit-tleWink, Shlyakht, RichardMills65, Johnmoses, BeyondNormality and Anonymous: 5

    Matrix gamma distribution Source: http://en.wikipedia.org/wiki/Matrix_gamma_distribution?oldid=600007467 Contributors: Ben-wing, Melcombe, AnomieBOT, FrescoBot, BG19bot, BeyondNormality and Anonymous: 2

    Matrix normal distribution Source: http://en.wikipedia.org/wiki/Matrix_normal_distribution?oldid=653501539 Contributors: BryanDerksen, Fnielsen, Michael Hardy, Jurgen~enwiki, Charles Matthews, Robbot, Benwing, Robinh, Giftlite, 3mta3, Teumo, PAR, Btyner,Ype, Entropeneur, Maksim-e~enwiki, CBM, Wikid77, Headbomb, David Eppstein, Josuechan, Melcombe, Alksentrs, Qwfp, Addbot,Erik9bot, Citation bot 1, Duoduoduo, EmausBot, Joe ez, MerlIwBot, Timutre, KLBot2, BG19bot, BeyondNormality, Monkbot andAnonymous: 12

    Matrix t-distribution Source: http://en.wikipedia.org/wiki/Matrix_t-distribution?oldid=625183913 Contributors: Benwing, Wzyuan,Melcombe, Sun Creator, Yobot, AnomieBOT, Honeychip, BeyondNormality and Anonymous: 3

    Random matrix Source: http://en.wikipedia.org/wiki/Random_matrix?oldid=662452770 Contributors: The Anome, Michael Hardy, Kku, Ronz, Charles Matthews, Kevinatilusa, Giftlite, Dmmaus, Barrettam, Thorwald, Nabla, Linas, Waldir, Rjwilmsi, R.e.b., Brighterorange, Lionelbrits, JYOuyang, Sodin, Sbyrnes321, SolarMcPanel, SmackBot, Charlesfahringer, Brienanni, CBM, Safalra, Ntsimp, Ttiotsw, Headbomb, Dougher, David Eppstein, R'n'B, Leyo, Punkstar89, Camrn86, Ssccdd, Danshiber, Rossweisse, Jdaloner, Melcombe, DFRussia, Roxy the dog, Addbot, Yobot, KamikazeBot, AnomieBOT, Citation bot, ArthurBot, Xqbot, Citation bot 1, Fredkinfollower, Meier99, Jonkerz, Duoduoduo, Lady Lovelace, RjwilmsiBot, Dick Chu, ZroBot, ClueBot NG, Helpful Pixie Bot, Syxiao, Bibcode Bot, Nikos Papadakis, Uday.shl14, BG19bot, BerkeleyStudent, Muennix, Benoit Balzac, Ece8950, ChrisGualtieri, Akokustatal, Fazhbr, Anrnusna, Mchiani, Monkbot, Bensuperpippo, Loraof, Alberto.d.verga and Anonymous: 23

    Tracy–Widom distribution Source: http://en.wikipedia.org/wiki/Tracy%E2%80%93Widom_distribution?oldid=642356626 Contributors: Michael Hardy, Giftlite, Rjwilmsi, R.e.b., Derek farn, JHunterJ, Kvng, Headbomb, JustAGal, David Eppstein, Melcombe, Qwfp, AndyMcB, Yobot, AnomieBOT, Citation bot, Gilo1969, Catracy7, Citation bot 1, RjwilmsiBot, Scientific29, Bibcode Bot, BG19bot, Ignatus, Eatmajor7th, Saung Tadashi, Brirush, BeyondNormality, Mchiani, Maklaan and Anonymous: 12

    Weingarten function Source: http://en.wikipedia.org/wiki/Weingarten_function?oldid=453687326 Contributors: Michael Hardy, Jarekadam, Giftlite, R.e.b., Brienanni, Headbomb, Citation bot 1 and Anonymous: 3

    Wishart distribution Source: http://en.wikipedia.org/wiki/Wishart_distribution?oldid=665272621 Contributors: Bryan Derksen, Michael Hardy, Ixfd64, Tomi, Dmytro, Robbyjo~enwiki, Benwing, Robinh, Aetheling, Giftlite, Lockeownzj00, Ryker, Bender235, Srbauer, 3mta3, Deacon of Pndapetzim, Oleg Alexandrov, Joriki, Btyner, Shae, AtroX Worf, Schmock, Entropeneur, Teply, Zvika, SmackBot, Dean P Foster, Bilgrau, Aleenf1, WhiteHatLurker, Yoderj, Kurtitski, TNeloms, Shorespirit, 137 0, Wzyuan, Headbomb, MDSchneider, Jrennie, Baccyak4H, David Eppstein, R'n'B, PhysPhD, Melcombe, Perturbationist, Alexbot, Qwfp, Addbot, P.wirapati, Erki der Loony, Yobot, Wjastle, AnomieBOT, JackieBot, Citation bot, Xqbot, FrescoBot, BenzolBot, Citation bot 1, Parametrist, Gammalgubbe, Kiefer.Wolfowitz, Tom.Reding, Amonet, Crusoe8181, Trappist the monk, Kastchei, P omega sigma, Mishnadar, Panosmarko, Illia Connell, Honeychip, Sumitppai, SJ Defender, BeyondNormality, Monkbot and Anonymous: 50

    12.15.2 Images

    File:Edit-clear.svg Source: http://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango! Desktop Project. Original artist: The people from the Tango! project. And according to the meta-data in the file, specifically: Andreas Nilsson, and Jakub Steiner (although minimally).

    File:Fig_expc.png Source: http://upload.wikimedia.org/wikipedia/commons/8/86/Fig_expc.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Sergey69

    File:Fig_sinc.png Source: http://upload.wikimedia.org/wikipedia/commons/e/ed/Fig_sinc.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Sergey69

    File:Tracy-Widom_distr.svg Source: http://upload.wikimedia.org/wikipedia/commons/c/c2/Tracy-Widom_distr.svg License: CC0 Contributors: Own work Original artist: Ignatus

    12.15.3 Content license

    Creative Commons Attribution-Share Alike 3.0
