all_1314_chap8.pdf

Upload: gil-sousa

Post on 02-Jun-2018


  • 8/10/2019 all_1314_chap8.pdf

    1/16

    Chapter 8

    Microcanonical ensemble

8.1 Definition

We consider an isolated system with N particles and energy E in a volume V. By definition, such a system exchanges neither particles nor energy with the surroundings. In thermal equilibrium, the distribution function ρ(q, p) of the system is a function of its energy:

    ρ(q, p) = ρ(H(q, p)).

According to the ergodic hypothesis and the postulate of a priori equal probabilities, ρ is constant over an energy surface and defines the microcanonical ensemble:

    ρ(q, p) = 1/Γ(E, V, N)   if E < H(q, p) < E + Δ,
    ρ(q, p) = 0              otherwise,

where

    Γ(E, V, N) = ∫ d^{3N}q ∫ d^{3N}p   (integration over E < H(q, p) < E + Δ)
               = Φ(E + Δ) − Φ(E) ≈ (∂Φ(E)/∂E) Δ,

with Φ(E) the phase-space volume enclosed by the energy surface H(q, p) = E,


and

    ω(E) = ∂Φ(E)/∂E

is the density of states, so that Γ(E) ≈ ω(E) Δ for Δ ≪ E.

8.2 Entropy

The factor N! arises from the fact that particles are indistinguishable. Even though one may be in a temperature and density range where the motion of the molecules can be treated to a very good approximation by classical mechanics, one cannot go so far as to disregard the essential indistinguishability of the molecules; one cannot observe and label individual atomic particles as though they were macroscopic billiard balls. Without the factor N! we are faced with the Gibbs paradox (see next sections).

Using expression (8.3), we now define the microcanonical ensemble as

    ρ̂(q, p) = h^{3N} N! ρ(q, p) = h^{3N} N! / Γ(E, V, N)   for E < H(q, p) < E + Δ,
    ρ̂(q, p) = 0                                            otherwise,

and define formally the entropy as the ensemble average of the logarithm of the phase-space density:

    S = −k_B ⟨ln ρ̂(q, p)⟩ = −k_B ∫ d^{3N}q d^{3N}p ρ(q, p) ln ρ̂(q, p)
      = k_B ln [ Γ(E, V, N) / (h^{3N} N!) ].


Figure 8.2: The wall between the two systems allows for heat exchange but not for particle exchange.

However, when two systems are brought together in thermal equilibrium, the entropy is only approximately additive. It can be shown that

    S(E, V, N) = S1(E1, V1, N1) + S2(E2, V2, N2) + O(ln N),   (8.4)

with E1 + E2 = E, where Ei is the value at which the function

    f(E1) = ln Γ1(E1) + ln Γ2(E − E1)

has a maximum (for the derivation, see Nolting, Grundkurs Theoretische Physik 6, page 41).

2) Consistency with the second law of thermodynamics

We would like to confirm here that the statistical entropy, as defined above, fulfills the second law of thermodynamics, which reads:

    if an isolated system undergoes a process between two states at equilibrium, theentropy of the final state cannot be smaller than that of the initial state.

    As an example, let us look at the free expansion of an ideal gas:

During the free expansion (an irreversible process) there is no heat transfer and the temperature does not change,

    Q = 0,   ΔT = 0;

therefore, evaluating ΔS along a reversible isothermal path between the same two states,

    ΔS = nR ln(V2/V1) > 0,

in accordance with the second law.

The second law is equivalent to saying that, when dynamical constraints are eliminated, the entropy rises.

In the case of the free expansion of the ideal gas, eliminating dynamical constraints meant increasing the volume. The phase-space volume Γ is a monotonically increasing function of the configuration volume V and, since S is by definition proportional to the logarithm of the phase-space volume (eq. (8.2)), the statistical S increases when dynamical constraints are eliminated.
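The free-expansion result can be made concrete numerically. A minimal sketch (the choice of one mole and a volume doubling is an arbitrary illustration, not from the text):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def delta_S_free_expansion(n_mol, V1, V2):
    """Entropy change Delta S = n R ln(V2/V1) for the free expansion of an ideal gas."""
    return n_mol * R * math.log(V2 / V1)

# one mole doubling its volume: Delta S = R ln 2 > 0, as the second law demands
dS = delta_S_free_expansion(1.0, 1.0, 2.0)
```

Any expansion (V2 > V1) gives ΔS > 0, while ΔS would be negative only for a compression, which cannot happen spontaneously in an isolated system.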


    3) Adiabatic invariance of the entropy

The entropy does not change if the energy of the system is changed only by performing work:

    if E → E + dE with dE = δW, then dS = 0.

Indeed,

    δQ = dE − δW = 0,   dS = δQ/T = 0   ⇒   S constant.   (8.5)

The statistical definition of S,

    S = k_B ln(Γ/Γ0),

fulfills this requirement if the phase-space volume Γ does not change in the process. The phase-space volume Γ does not change: it is an adiabatic invariant. The proof of statements 2) and 3) will be given in class.

    8.3 Calculations with the microcanonical ensemble

In order to perform calculations in statistical physics, one proceeds through the following steps.

    1) Formulation of the Hamilton function

    H(q, p) = H(q1, . . . , q_{3N}, p1, . . . , p_{3N}, z),

where z is some external parameter, e.g., the volume V. H(q, p) specifies the microscopic interactions.

2) Determination of the phase-space volume Γ(E, V, N) and calculation of the density of states ω(E, V, N):

    ω(E, V, N) = ∫ d^{3N}q d^{3N}p δ(E − H(q, p)).

3) Calculation of the entropy from the density of states:

    S(E, V, N) = k_B ln [ Γ(E, V, N) / Γ0 ],   Γ0 = h^{3N} N!,   Γ(E, V, N) ≈ ω(E, V, N) Δ.

4) Calculation of P, T, μ:

    1/T = (∂S/∂E)_{V,N},   μ/T = −(∂S/∂N)_{E,V},   P/T = (∂S/∂V)_{E,N}.


5) Calculation of the internal energy:

    U = ⟨H⟩ = E(S, V, N).

6) Calculation of other thermodynamic potentials and their derivatives by application of the Legendre transformation:

    F(T, V, N) = U − TS,
    H(S, P, N) = U + PV,
    G(T, P, N) = U + PV − TS.

7) One can calculate quantities other than the thermodynamic potentials, for instance,

probability distribution functions of certain properties of the system, e.g., momentum/velocity distribution functions. If the phase-space density of a system of N particles is given by

    ρ(q, p) = ρ(q1, . . . , qN, p1, . . . , pN),

then the probability of finding particle i with momentum p is

    ρ_i(p) = ⟨δ(p − p_i)⟩ = ∫ d³q1 . . . d³qN ∫ d³p1 . . . d³pN ρ(q1, . . . , qN, p1, . . . , pN) δ(p − p_i).

    8.4 The classical ideal gas

We now follow the steps given in section 8.3 in order to analyze an ideal gas of N particles with mass m in a volume V. The Hamiltonian function of such a system is

    H(q_i, p_i) = Σ_{i=1}^{N} p_i²/(2m),

and the total energy lies in E < H < E + Δ.

Let us first calculate the phase-space volume Γ(E, V, N), which is the volume enclosed between the energy hypersurfaces E and E + Δ and the container walls (section 8.1). It is given as

    Γ(E, V, N) = Φ(E + Δ) − Φ(E) ≈ (∂Φ(E)/∂E) Δ,

where

    Φ(E) = ∫_{Σ_{i=1}^{3N} p_i²/(2m) ≤ E} d^{3N}q d^{3N}p
         = V^N ∫_{Σ_{i=1}^{3N} p_i²/(2m) ≤ E} d^{3N}p.   (8.6)

The last integral in eq. (8.6) gives the volume of a 3N-dimensional sphere with radius √(2mE). In order to evaluate it, we consider first the volume V_N(R) of an N-dimensional sphere with radius R, which we can write as

    V_N(R) = ∫_{Σ_{i=1}^{N} x_i² ≤ R²} dx1 . . . dxN = C_N R^N,


with C_N being the volume of the N-dimensional sphere with radius 1. In order to calculate C_N, we use the following well-known integrals:

    ∫_{−∞}^{+∞} dx e^{−x²} = √π   (8.7)

and (as an extension of (8.7) to N dimensions)

    ∫_{−∞}^{+∞} dx1 . . . ∫_{−∞}^{+∞} dxN e^{−(x1² + . . . + xN²)} = π^{N/2}.   (8.8)

Since the integrand in (8.8) depends only on R = √(x1² + . . . + xN²), it proves advantageous to switch to spherical coordinates, where we have

    dx1 . . . dxN = dV_N(R) = N R^{N−1} C_N dR.

Then, eq. (8.8) can be written as

    N C_N ∫_0^∞ dR R^{N−1} e^{−R²} = π^{N/2}.

We now use this result, rewriting it as

    N C_N (1/2) ∫_0^∞ dx x^{N/2 − 1} e^{−x} = π^{N/2}   (substituting R² = x),

and, recognizing the Γ-function,

    Γ(z) = ∫_0^∞ dx x^{z−1} e^{−x},

we get the expression for C_N:

    C_N = π^{N/2} / [ (N/2) Γ(N/2) ].
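The closed form for C_N can be checked against the familiar low-dimensional volumes (C_1 = 2 for the interval, C_2 = π for the unit disk, C_3 = 4π/3 for the unit ball). A small sketch:

```python
import math

def C(N):
    """Volume of the N-dimensional unit sphere: pi^(N/2) / ((N/2) Gamma(N/2))."""
    return math.pi ** (N / 2) / ((N / 2) * math.gamma(N / 2))

c1, c2, c3 = C(1), C(2), C(3)  # expect 2, pi, 4*pi/3
```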

Going back to Φ(E) (eq. (8.6)), we now have

    Φ(E) = V^N C_{3N} (√(2mE))^{3N} = V^N C_{3N} (2mE)^{3N/2},

from which we obtain the following expression for the density of states ω(E):

    ω(E) = ∂Φ/∂E = V^N C_{3N} (2mE)^{3N/2 − 1} (3N/2) 2m.


    Entropy

We calculate the entropy. For large N we can replace the volume of the shell between E and E + Δ by the volume enclosed by the surface of energy E (see the exercises):

    S(E, V, N) = k_B ln [ Φ(E) / (h^{3N} N!) ]
               = k_B ln [ V^N (√(2mE))^{3N} C_{3N} / (h^{3N} N!) ].   (8.9)

It is easy to check that the argument of the logarithm in (8.9) is dimensionless, as it should be.

For N ≫ 1 one may use the Stirling formula,

    ln Γ(n) ≈ (n − 1) ln(n − 1) − (n − 1) ≈ n ln n − n,

in order to simplify the expression for S. We perform the following algebraic transformations on C_{3N}:

    C_{3N} = π^{3N/2} / [ (3N/2) Γ(3N/2) ],

    ln C_{3N} = ln π^{3N/2} − ln [ (3N/2) Γ(3N/2) ]
              ≈ (3N/2) ln π − (3N/2) ln(3N/2) + 3N/2
              = (3N/2) ln( 2π/(3N) ) + 3N/2
              = N [ ln( 2π/(3N) )^{3/2} + 3/2 + O(ln N / N) ],

where we used Γ(n + 1) = n Γ(n) together with the Stirling formula. We insert this expression into eq. (8.9) and obtain:

    S = k_B N [ ln( V (2mE)^{3/2} / h³ ) + ln( 2π/(3N) )^{3/2} + 3/2 − (ln N − 1) ],

where we replaced (ln N!)/N ≈ ln N − 1,

and, finally,

    S = k_B N [ ln( (4πmE/(3h²N))^{3/2} V/N ) + 5/2 ]   (Sackur-Tetrode equation).
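One can verify numerically that the Sackur-Tetrode expression reproduces the exact k_B ln[Φ(E)/(h^{3N}N!)] up to O(ln N) corrections. A sketch in reduced units (k_B = h = m = 1; the particular values of N, E, V are arbitrary choices for the test):

```python
import math

def S_exact(N, E, V):
    """k_B ln[Phi/(h^{3N} N!)], Phi = V^N C_{3N} (2mE)^{3N/2}, in units k_B = h = m = 1."""
    ln_C3N = (3 * N / 2) * math.log(math.pi) - math.lgamma(3 * N / 2 + 1)
    return N * math.log(V) + (3 * N / 2) * math.log(2 * E) + ln_C3N - math.lgamma(N + 1)

def S_sackur_tetrode(N, E, V):
    """S = N [ ln( (4 pi E/(3 N))^{3/2} V/N ) + 5/2 ] in the same units."""
    return N * (math.log((4 * math.pi * E / (3 * N)) ** 1.5 * V / N) + 2.5)

N, E, V = 10**6, 1.5 * 10**6, 10**6   # energy and volume per particle of order one
rel_err = abs(S_exact(N, E, V) - S_sackur_tetrode(N, E, V)) / abs(S_exact(N, E, V))
```

The relative deviation is of order (ln N)/N, i.e. negligible for macroscopic N.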

    Now we can differentiate the above expression and thus obtain


- the caloric equation of state for the ideal gas:

    1/T = (∂S/∂E)_{V,N} = N k_B (3/2)(1/E)   ⇒   E = (3/2) N k_B T = U;

- the thermal equation of state for the ideal gas:

    P/T = (∂S/∂V)_{E,N} = k_B N / V   ⇒   PV = N k_B T.
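Both equations of state can be recovered by numerically differentiating the Sackur-Tetrode entropy. A sketch (units k_B = h = m = 1; the values of N, E, V are arbitrary):

```python
import math

def S(E, V, N):
    """Sackur-Tetrode entropy, units k_B = h = m = 1."""
    return N * (math.log((4 * math.pi * E / (3 * N)) ** 1.5 * V / N) + 2.5)

N, E, V = 1000.0, 1500.0, 1000.0
d = 1e-5  # step for central finite differences

inv_T    = (S(E + d, V, N) - S(E - d, V, N)) / (2 * d)  # 1/T = dS/dE, expect (3/2) N / E
P_over_T = (S(E, V + d, N) - S(E, V - d, N)) / (2 * d)  # P/T = dS/dV, expect N / V
```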

    Gibbs Paradox

If we had not included the factor N! when working out the entropy, we would have obtained

    S_classical = S + k_B ln N! = k_B N [ ln( (4πmE/(3h²N))^{3/2} V ) + 3/2 ].

With this definition, the entropy is non-additive, i.e.,

    S(E, V, N) = N s(E/N, V/N)

is not fulfilled. This was realized by Gibbs, who introduced the factor N! and attributed it to the fact that the particles are indistinguishable.
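The (non-)extensivity is easy to exhibit numerically: doubling (E, V, N) doubles S but not S_classical. A sketch, again in units k_B = h = m = 1 (the numbers are arbitrary):

```python
import math

def S(E, V, N):
    """Sackur-Tetrode entropy (with the N! factor), units k_B = h = m = 1."""
    return N * (math.log((4 * math.pi * E / (3 * N)) ** 1.5 * V / N) + 2.5)

def S_classical(E, V, N):
    """Entropy without the N! factor: S + ln N! ~ N [ ln((4 pi E/(3 N))^{3/2} V) + 3/2 ]."""
    return N * (math.log((4 * math.pi * E / (3 * N)) ** 1.5 * V) + 1.5)

E, V, N = 100.0, 100.0, 100.0
extensive_gap = S(2 * E, 2 * V, 2 * N) - 2 * S(E, V, N)                     # = 0: additive
classical_gap = S_classical(2 * E, 2 * V, 2 * N) - 2 * S_classical(E, V, N)  # = 2 N ln 2 > 0
```

The spurious excess 2N k_B ln 2 is exactly the Gibbs-paradox "mixing entropy" of identical gases.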

    8.5 Harmonic oscillators

Consider a system of N classical distinguishable one-dimensional harmonic oscillators with frequency ω under microcanonical-ensemble conditions. The condition that the oscillators be distinguishable can be fulfilled if one assumes that the oscillators are localized in real space. The Hamiltonian function of this system is given by

    H(q_i, p_i) = Σ_{i=1}^{N} [ p_i²/(2m) + (1/2) m ω² q_i² ].

The normalized phase-space volume Φ̄(E, V, N) is

    Φ̄(E, V, N) = (1/h^N) ∫_{H(q_i, p_i) ≤ E} d^N q d^N p;   (8.10)

note that in eq. (8.10) we did not include the factor N!, since our harmonic oscillators are distinguishable. We express eq. (8.10) in terms of x_i = mω q_i:

    Φ̄(E, V, N) = (1/h^N) (1/(mω))^N ∫_{Σ_{i=1}^{N} (p_i² + x_i²) ≤ 2mE} d^N x d^N p.


The resulting integral corresponds to the volume of a 2N-dimensional sphere of radius √(2mE). Therefore,

    Φ̄(E, V, N) = (1/h^N) (1/(mω))^N [ π^N / (N Γ(N)) ] (2mE)^N
                = (1/(N Γ(N))) ( E/(ħω) )^N.

The entropy is then

    S(E, V, N) = N k_B [ 1 + ln( E/(N ħω) ) ],   (8.11)

where we used, as in the previous example, the Stirling formula.
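Equation (8.11) can again be checked against the exact expression k_B ln Φ̄ = k_B [ N ln(E/ħω) − ln Γ(N+1) ]. A sketch in units k_B = ħω = 1 (the values of N and E are arbitrary):

```python
import math

def S_exact(N, E):
    """k_B ln[ (E/(hbar omega))^N / Gamma(N+1) ] in units k_B = hbar*omega = 1."""
    return N * math.log(E) - math.lgamma(N + 1)

def S_stirling(N, E):
    """Eq. (8.11): S = N k_B [ 1 + ln(E/(N hbar omega)) ]."""
    return N * (1 + math.log(E / N))

N, E = 10**6, 3.0 * 10**6   # energy per oscillator of a few hbar*omega
rel_err = abs(S_exact(N, E) - S_stirling(N, E)) / abs(S_exact(N, E))
```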

    Discussion:

1 - Even though we are considering classical harmonic oscillators, the factor E/(N ħω) establishes the relation between the energy per particle and the characteristic energy ħω of the harmonic oscillator (where h comes from the elementary phase-space cell of volume h^N).

2 - It is typical for many systems that their thermodynamic properties depend on the ratios between the total energy and the characteristic energies of the system.

From eq. (8.11), we obtain the caloric and thermal equations of state for the system of oscillators:

    1/T = (∂S/∂E)_{V,N} = N k_B / E   ⇒   E = U = N k_B T,

    P/T = (∂S/∂V)_{E,N} = 0   ⇒   P = 0.

The absence of pressure (P = 0) is a consequence of the fact that the oscillators are localized: they cannot move freely and, as a result, do not create any pressure. S does not depend on the volume of the container. From

    μ/T = −(∂S/∂N)_{E,V} = −k_B ln( E/(N ħω) )

and

    C_P = C_V = (∂E/∂T)_V = N k_B,

we see that C_P = C_V, since the system cannot produce work.

    8.6 Fluctuations and correlation functions

Our starting point is a classical observable B(q, p),

    B(q, p) = B(q1, . . . , q_{3N}, p1, . . . , p_{3N}),

which is measured M times, so that for each measurement l = 1, . . . , M we have the information about the state,

    (q^(l), p^(l)),

and the value of B: B(q^(l), p^(l)).

From these measurements we can calculate the distribution function ρ_M(q, p) in phase space as

    ρ_M(q, p) = (1/M) Σ_{l=1}^{M} δ^{(3N)}(q − q^(l)) δ^{(3N)}(p − p^(l)).

Following the statistical postulate of equal a priori probabilities, for an isolated system in thermodynamic equilibrium with E < H < E + Δ we will have

    lim_{M→∞} ρ_M(q, p) = ρ(q, p) = 1/Γ(E)   if E < H(q, p) < E + Δ,
                                  = 0        otherwise.

By knowing ρ(q, p), we can obtain the following statistical quantities.

Average value of B(q, p):

    ⟨B(q, p)⟩ = (1/M) Σ_{l=1}^{M} B(q^(l), p^(l)) = ∫ d^{3N}q d^{3N}p ρ_M(q, p) B(q, p).

Variance:

    (ΔB)² = ⟨(B − ⟨B⟩)²⟩ = ⟨B²⟩ − ⟨B⟩² = σ_B².

The variance provides information about the fluctuations around the average value.

Moments of higher order,

    μ_n = ⟨B^n⟩,

are calculated by making use of the characteristic function

    Φ_B(λ) = ⟨e^{iλB}⟩.

The characteristic function of any random variable (here B) completely defines its probability distribution:

    Φ_B(λ) = ⟨e^{iλB}⟩ = ∫ d^{3N}q d^{3N}p ρ_M(q, p) e^{iλB},

so that

    μ_n = (1/i^n) (d^n/dλ^n) Φ_B(λ)|_{λ=0},   Φ_B(λ) = Σ_{n=0}^{∞} (iλ)^n μ_n / n!.   (8.12)


Cumulants. If, instead of expanding Φ_B(λ) in powers of λ, we expand the logarithm of Φ_B(λ) in powers of λ, we obtain

    ln Φ_B(λ) = Σ_{n=1}^{∞} C_n (iλ)^n / n! = C_1 iλ − C_2 λ²/2 + . . . ;

the coefficients

    C_1 = μ_1 = ⟨B⟩,
    C_2 = μ_2 − μ_1² = ⟨B²⟩ − ⟨B⟩²   (8.13)

are called cumulants.
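For a concrete discrete random variable, the relations μ_n = ⟨B^n⟩ and C_2 = μ_2 − μ_1² are easy to verify by direct summation. A minimal sketch with a hypothetical three-valued observable (the values and probabilities are made up for illustration):

```python
# hypothetical discrete observable: values b with probabilities p(b)
values = [-1.0, 0.0, 2.0]
probs  = [0.25, 0.50, 0.25]

def moment(n):
    """mu_n = <B^n> = sum_b p(b) b^n."""
    return sum(p * b ** n for p, b in zip(probs, values))

mu1, mu2 = moment(1), moment(2)
C2 = mu2 - mu1 ** 2  # second cumulant

# the second cumulant equals the variance <(B - <B>)^2>
variance = sum(p * (b - mu1) ** 2 for p, b in zip(probs, values))
```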

Probability distribution of the observable B

Consider the probability that a measurement of the observable B gives a value between b and b + Δb, which is given as ρ(b) Δb with

    ρ(b) = ⟨δ(b − B)⟩.

For two observables A and B one defines the correlation function

    C_AB = ⟨(A − ⟨A⟩)(B − ⟨B⟩)⟩ = ∫ da ∫ db ρ(a, b) (a − ⟨A⟩)(b − ⟨B⟩),

where the distribution function ρ(a, b) describes the mutual probability density of the observables A and B and is given by

    ρ(a, b) = ⟨δ(a − A) δ(b − B)⟩ = ∫ d^{3N}q d^{3N}p ρ(q, p) δ(a − A(q, p)) δ(b − B(q, p)).

If A and B are statistically independent, then

    ρ(a, b) = ρ_A(a) ρ_B(b)

and therefore the correlation function in this case is zero:

    C_AB = ⟨(A − ⟨A⟩)(B − ⟨B⟩)⟩ = ⟨A − ⟨A⟩⟩ ⟨B − ⟨B⟩⟩ = 0.
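For statistically independent discrete observables, the factorization ρ(a, b) = ρ_A(a) ρ_B(b) makes C_AB vanish identically, which can be checked by direct summation. A minimal sketch with made-up two-valued distributions:

```python
# hypothetical independent observables A and B with discrete distributions
a_vals, pa = [0.0, 1.0], [0.3, 0.7]
b_vals, pb = [-1.0, 1.0], [0.5, 0.5]

mean_A = sum(p * a for p, a in zip(pa, a_vals))
mean_B = sum(p * b for p, b in zip(pb, b_vals))

# joint distribution rho(a, b) = rho_A(a) rho_B(b); correlation function C_AB
C_AB = sum(pa[i] * pb[j] * (a_vals[i] - mean_A) * (b_vals[j] - mean_B)
           for i in range(2) for j in range(2))
```

The double sum factorizes into ⟨A − ⟨A⟩⟩⟨B − ⟨B⟩⟩, and each factor is zero.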


    8.7 The central limit theorem

Consider M statistically independent observables B_m(q, p). The central limit theorem states that the probability density of the observable

    B(q, p) = Σ_{m=1}^{M} B_m(q, p)

will show a normal distribution centered at ⟨B⟩, with a relative width proportional to 1/√M.

    8.7.1 Normal distribution

An observable Y is said to show a normal distribution if the probability distribution function ρ(y) = ⟨δ(y − Y)⟩ has the form of a Gauss curve:

    ρ(y) = ⟨δ(y − Y)⟩ = (1/√(2πσ²)) e^{−(y − y0)²/(2σ²)}.

Its moments are

    ⟨Y⟩ = ∫_{−∞}^{+∞} dy ρ(y) y = y0,

    ⟨(Y − ⟨Y⟩)²⟩ = ∫_{−∞}^{+∞} dy ρ(y) (y − y0)² = σ²,

    ⟨(Y − y0)^{2n}⟩ = [ (2n)! / (2^n n!) ] σ^{2n},   ⟨(Y − y0)^{2n+1}⟩ = 0,   n = 1, 2, . . .

    f() =

    eiY

    =

    +

    dy eiy(y) =eiy0e1222. (8.14)
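The even-moment formula ⟨(Y − y0)^{2n}⟩ = (2n)!/(2^n n!) σ^{2n} (for instance 3σ⁴ at 2n = 4) can be confirmed by direct numerical integration of the Gauss curve. A sketch (the values of y0 and σ are arbitrary):

```python
import math

y0, sigma = 0.5, 1.3

def rho(y):
    """Gaussian probability density with mean y0 and width sigma."""
    return math.exp(-(y - y0) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def central_moment(n, steps=100000):
    """<(Y - y0)^n> by a midpoint rule over t = y - y0 in [-12 sigma, 12 sigma]."""
    a = 12 * sigma
    dt = 2 * a / steps
    return sum(rho(y0 + (-a + (k + 0.5) * dt)) * (-a + (k + 0.5) * dt) ** n
               for k in range(steps)) * dt

m2, m3, m4 = central_moment(2), central_moment(3), central_moment(4)
```

m2 reproduces σ², m4 reproduces 3σ⁴, and the odd moment m3 vanishes, in agreement with the formulas above.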

We consider now the central limit. We define the variable Y as

    Y ≡ (1/√M)(X1 + X2 + . . . + X_M),


where X1, X2, . . . , X_M are M mutually independent variables such that ⟨X_i⟩ = 0. The characteristic function of Y is then

    ⟨e^{iλY}⟩ = ⟨e^{iλ(X1 + X2 + . . . + X_M)/√M}⟩ = Π_{j=1}^{M} ⟨e^{iλX_j/√M}⟩
              = exp[ Σ_{j=1}^{M} A_j(λ/√M) ],   (8.15)

with

    A_j(λ/√M) ≡ ln ⟨e^{iλX_j/√M}⟩.   (8.16)

If M is a large number, A_j(λ/√M) can be expanded as

    A_j(λ/√M) ≈ A_j(0) + (λ²/(2M)) A_j''(0) + O(λ³ M^{−3/2}).   (8.17)

Here the first-order term in λ vanishes because it is proportional to ⟨X_j⟩ = 0. The quadratic term is obtained by differentiating eq. (8.16) twice:

    A_j''(0) = −⟨X_j²⟩.

Now, substituting eq. (8.17) into eq. (8.15), one has

    ⟨e^{iλY}⟩ = exp[ −λ²σ²/2 + O(1/√M) ],   (8.18)

where

    σ² ≡ (1/M) Σ_{j=1}^{M} ⟨X_j²⟩.

The quadratic term in the exponent (eq. (8.18)) is the central-limit term. We have thus shown that Y is normally distributed if we neglect terms of O(1/√M) (compare eqs. (8.18) and (8.14)).

If ⟨X_j⟩ ≠ 0, then

    ⟨Y⟩ = Σ_{j=1}^{M} ⟨X_j⟩ / √M,

and Y − ⟨Y⟩ is normally distributed.
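The statement can be illustrated with an exactly computable example: the scaled sum of M fair ±1 variables (⟨X⟩ = 0, ⟨X²⟩ = 1), whose exact (binomial) distribution approaches the Gauss curve (8.14) as M grows. A sketch (the values M = 16 and M = 1024 are arbitrary test sizes):

```python
import math

def max_gauss_error(M):
    """Max deviation between the exact pmf of Y = (X1+...+XM)/sqrt(M), X_i = +/-1 fair,
    and the Gaussian density with sigma = 1, rescaled to a pmf on the Y-lattice."""
    dy = 2 / math.sqrt(M)  # lattice spacing of Y
    err = 0.0
    for k in range(M + 1):
        y = (2 * k - M) / math.sqrt(M)
        # binomial pmf via log-Gamma to avoid huge intermediate numbers
        pmf = math.exp(math.lgamma(M + 1) - math.lgamma(k + 1)
                       - math.lgamma(M - k + 1) - M * math.log(2))
        gauss = math.exp(-y * y / 2) / math.sqrt(2 * math.pi) * dy
        err = max(err, abs(pmf - gauss))
    return err

e_small, e_large = max_gauss_error(16), max_gauss_error(1024)
```

The deviation shrinks as M grows, consistent with the neglected O(1/√M) terms in (8.18).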

    Remarks:


1) As Y is normally distributed with width σ, Y/√M is also normally distributed, with width σ/√M, and the variable

    Y/√M = (X1 + X2 + . . . + X_M)/M

can be regarded as the mean value of M different X_i's. If M is large, σ/√M is small, which means that the mean value differs very little from the average value, i.e., that fluctuations are small. If we increase the number of variables M, the fluctuations of the sum will increase only as √M and not as M.

2) For our M statistically independent observables B_m(q, p),

    B(q, p) = Σ_{m=1}^{M} B_m(q, p),   with average value   ⟨B⟩ = Σ_{m=1}^{M} ⟨B_m⟩,

the distribution function is

    ρ(b) = ⟨δ(b − Σ_{m=1}^{M} B_m)⟩ ≈ (1/√(2πσ²M)) e^{−Y²/(2σ²)},

where we defined

    Y = (B − ⟨B⟩)/√M.

Figure 8.3: The width is proportional to 1/√M; y0 = ⟨B⟩.

Note that the normal distribution of Y is limited to the calculation of average values of lower powers of Y.

Example of a normal distribution: the binomial distribution. Let

    n = Σ_{i=1}^{N} n_i,   with n_i = 0 or 1.

  • 8/10/2019 all_1314_chap8.pdf

    16/16

    102 CHAPTER 8. MICROCANONICAL ENSEMBLE

The probability that n_i = 1 is p, and the n_i's are mutually independent. The resulting distribution of n is the binomial distribution:

    ρ(n) = [ N! / (n! (N − n)!) ] p^n q^{N−n},   with q = 1 − p.   (8.19)

Using the central limit theorem, we get

    ρ(n) ≈ (1/√(2πNσ²)) e^{−(n − ⟨n⟩)²/(2Nσ²)},   (8.20)

with

    σ² = (1/N) Σ_{i=1}^{N} ( ⟨n_i²⟩ − ⟨n_i⟩² ) = p(1 − p) = pq.

Let us compare eqs. (8.19) and (8.20) for N = 2500 and p = 0.02:

    n     eq. (8.19)   eq. (8.20)
    25    0.0000       0.0001
    40    0.0212       0.0205
    50    0.0569       0.0570
    70    0.0013       0.0010

with ⟨n⟩ = 50 and (Δn)² = Nσ² = 49.
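The comparison table can be reproduced directly from the two formulas. A sketch:

```python
import math

N, p = 2500, 0.02
q = 1 - p
mean, var = N * p, N * p * q  # <n> = 50, (Delta n)^2 = 49

def binom(n):
    """Exact binomial pmf, eq. (8.19)."""
    return math.comb(N, n) * p ** n * q ** (N - n)

def gauss(n):
    """Gaussian approximation, eq. (8.20)."""
    return math.exp(-(n - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

rows = [(n, binom(n), gauss(n)) for n in (25, 40, 50, 70)]
```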

This theorem is very important in statistical physics, since it allows one to describe the probability distributions of observables.