Chem201

3. The Second Law
C. Rose-Petruck, Brown University, 1999
© C. Rose-Petruck, Brown University, 1998


3.1 Entropy

The first law of thermodynamics treats all forms of work and heat on an equal basis: they all add up to the total energy, which then must be conserved. As far as the first law is concerned, there is no difference in quality between the different forms of energy. However, observations show that heat and work are not equal: one can always transform work into heat, but the reverse is much more complicated and often not possible. For instance, we learned in the second chapter that in order to do mechanical work a gas system has to change its volume against an external pressure. If this external pressure is zero and an ideal gas expands (certainly irreversibly), no work is done and no heat is absorbed. That's puzzling: the internal energy doesn't change, so there is no energetic "motivation" for the gas to expand. Nevertheless, the gas will evenly fill the larger volume that became available. What's the "driving force" for that change? Clearly, there must be some property, called entropy, that governs the transformation of systems. We shall see that this quantity is of statistical nature.

While the first law determines what transformations of a system are energetically possible, the second law determines the probability for these changes to happen.

In searching for a new property that describes the observation we clearly need to look at the amount of heat that is exchanged in the process. We define the entropy, S, as follows:

$dS = \frac{\delta Q}{T}$ (3.1)


Even though the entropy was defined for a path (via $\delta Q$, for some unspecified reversible process), it turns out that the function S obtained by integrating the above equation is a state function: it depends only on the state of the system, not on the method by which the system was prepared. We shall now prove that S is a state function even though $\delta Q$ is not.

If a system does only pV work, the 1st law (2.5) can be written

$dU = \delta Q_{rev} - p\,dV$. (3.2)

Let's consider a perfect gas. The internal energy of a perfect gas is independent of the volume. We can then combine (2.19) with (3.2) and obtain

$\delta Q_{rev} = C_V\,dT + p\,dV$. (3.3)

Inserting the ideal gas equation of state, pV = RT,

$\delta Q_{rev} = C_V\,dT + RT\,\frac{dV}{V}$. (3.4)

$dS = \frac{\delta Q_{rev}}{T} = C_V\,\frac{dT}{T} + R\,\frac{dV}{V}$. (3.5)

$\Delta S = S_B - S_A = C_V \ln\frac{T_B}{T_A} + R \ln\frac{V_B}{V_A}$. (3.6)

This implies that $\frac{\delta Q_{rev}}{T}$ is an exact differential and that S is a state function for a perfect gas. More general arguments of this type, as derived by Carathéodory (1909), enable us to show that S is a state function for all substances.
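The path-independence of S can also be checked numerically. The sketch below (Python; the two paths and all numbers are illustrative choices, not from the text) integrates $dS = C_V\,dT/T + R\,dV/V$ of (3.5) along two different reversible paths between the same end states and compares both results with (3.6):

```python
import math

R = 8.314           # gas constant, J/(mol K)
Cv = 1.5 * R        # heat capacity of a monatomic ideal gas, J/(mol K)

def delta_S(path, n=20000):
    """Numerically integrate dS = Cv dT/T + R dV/V along a parametrized path.

    path(t) returns (T, V) for t in [0, 1]; midpoint rule on each segment."""
    s = 0.0
    T0, V0 = path(0.0)
    for i in range(1, n + 1):
        T1, V1 = path(i / n)
        Tm, Vm = 0.5 * (T0 + T1), 0.5 * (V0 + V1)
        s += Cv * (T1 - T0) / Tm + R * (V1 - V0) / Vm
        T0, V0 = T1, V1
    return s

TA, VA = 300.0, 0.010    # initial state: T in K, V in m^3
TB, VB = 450.0, 0.030    # final state

# path 1: straight line in the (T, V) plane
line = lambda t: (TA + t * (TB - TA), VA + t * (VB - VA))

# path 2: isothermal expansion first, then heating at constant volume
def two_step(t):
    if t < 0.5:
        return (TA, VA + 2 * t * (VB - VA))
    return (TA + (2 * t - 1) * (TB - TA), VB)

exact = Cv * math.log(TB / TA) + R * math.log(VB / VA)   # eq. (3.6)
print(delta_S(line), delta_S(two_step), exact)           # all three agree
```

Both numerical integrals reproduce (3.6) to within the discretization error, which is what "S is a state function" means in practice.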

Certainly, the heat that the system exchanges with its surroundings can be expressed as:

$T\,dS = \delta Q_{rev}$. (3.7)

The temperature T is the absolute temperature, which we will discuss in more detail later. Our definition of the entropy is only valid for reversible processes.

We now discuss how the definition can be extended to irreversible processes. Consider a system that moves from a state A to a state B in either a reversible or an irreversible way. Such a situation is always possible: states A and B are independent of their history, and therefore it is always possible to construct reversible and irreversible paths between them. Now recall from our discussion of the first law that the amount of work that a system can deliver is always maximized in a reversible process. We therefore write:

$-W_{irrev} < -W_{rev}$ (3.8)

Note that the system is supposed to perform work; therefore, both Wirrev and Wrev are negative quantities,


so that the minus signs make both sides positive; the left-hand side then represents a smaller amount of work than the right-hand side. We rewrite this as:

$W_{irrev} > W_{rev}$ (3.9)

Now, we also know that the system in states A and B must be described by an internal energy U that is, as a state function, independent of the paths between A and B. The energy difference between the states is therefore:

$dU = \delta W_{irrev} + \delta Q_{irrev} = \delta W_{rev} + \delta Q_{rev}$ (3.10)

Combining the last two equations yields:

$\delta W_{irrev} + \delta Q_{irrev} = \delta W_{rev} + \delta Q_{rev} < \delta W_{irrev} + \delta Q_{rev}$ (3.11)

$\delta Q_{irrev} < \delta Q_{rev}$ (3.12)

Next we divide both sides of the inequality by the temperature T:

$\frac{\delta Q_{irrev}}{T} < \frac{\delta Q_{rev}}{T}$ (3.13)

The expression on the right-hand side we recognize as the entropy change dS of the system in a reversible process. Therefore the entropy change in an irreversible process satisfies:

$dS > \frac{\delta Q_{irrev}}{T}$ (3.14)

Combining the entropy changes for a reversible and an irreversible process we have:

$dS \ge \frac{\delta Q}{T}$ (3.15)

with $\delta Q$ standing for the heat that is exchanged in either a reversible or an irreversible process. The '=' sign of the equation represents the reversible path.

We are now in a position to develop several statements, all of which are expressions of the second law of thermodynamics.

In a reversible process, the sum of the entropy of a system and the entropy of its surroundings is unchanged:

$\Delta S_{Univ}^{rev} = \Delta S_{Sys} + \Delta S_{Surr} = 0$ (3.16)

This statement comes about as follows: the entire universe is divided into two parts, the one that I call the system, and then the rest, the surroundings. Now, in a reversible process, the entropy change of the system is given by the last equation, with an = sign. The entropy change of the surroundings has the same magnitude but the opposite sign, because whatever heat the system takes on, the surroundings lose. Thus, the sum of the entropy changes of the system and the surroundings is simply zero in a reversible process.
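As a minimal numeric illustration of (3.16), with arbitrary example values: a system that reversibly absorbs heat Q at temperature T gains Q/T of entropy, and the surroundings lose exactly as much.

```python
Q, T = 100.0, 300.0        # heat in J, temperature in K; arbitrary example values
dS_sys = Q / T             # entropy gained by the system
dS_surr = -Q / T           # entropy lost by the surroundings
print(dS_sys + dS_surr)    # 0.0: total entropy unchanged in a reversible exchange
```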


Another statement of the second law is as follows:

In an irreversible process, the total entropy of a system plus its surroundings must increase:

$\Delta S_{Univ}^{irrev} = \Delta S_{Sys} + \Delta S_{Surr} > 0$ (3.17)

To understand this statement of the second law, consider that the whole universe is the sum of system and surroundings. Also, the whole universe is an isolated system, and any processes we invent inside the universe don't cause a flow of heat to the outside of the universe, if there is such a thing. Therefore the heat flow is zero, Q = 0, and we are left with the statement that $\Delta S > 0$ for an irreversible process.

Finally:

A process for which $\Delta S_{Univ} < 0$ is impossible.

This statement follows automatically from the previous ones: if a reversible process has a total entropy change of zero, and the irreversible process has only increases of the total entropy, then logically there is never a process in which the total entropy decreases.

In just the same way, $\Delta S_{isolated} < 0$ is never true for isolated systems, as expressed in equation (3.15) for zero heat exchange.

All the statements of the second law are equivalent. Given one of them it is possible to derive the others. Notice that what we have discussed is no proof of the second law. Just like the first law of thermodynamics, or the basic principles of quantum mechanics, there is no proof for the second law. The second law is simply the summary of many observations that can be explained by it. The evidence for the second law comes therefore from many experiments; no experiment has yet provided any evidence contradicting the second law. As we will see later in the semester, it is in principle possible to observe violations of the second law. However, they are so rare that the age of the universe isn't sufficiently long for such violations to occur.

We also see the close relation between entropy change and reversibility: a reversible process is reversible because the entropy of system plus surroundings doesn't change. So we can just "step back in time" and reverse the process. In an irreversible process the entropy of the universe increases. But we cannot reverse this process, because the entropy of the universe can never decrease.

Let's shed light on the statistical nature of the entropy by considering an ideal gas system in more detail.

3.2 Microscopic Interpretation

Consider N gas molecules initially contained in one half of a volume, as shown in the picture below.


The probability that this state A could occur by chance is $(1/2)^N$, the same as the chance that N objects, randomly distributed between two boxes, are all in one of them. We can write the probability of state A with respect to state B as

$\frac{P_A}{P_B} = \left(\frac{1}{2}\right)^N$. (3.18)

If instead of choosing $\frac{V_A}{V_B} = \frac{1}{2}$ we had selected arbitrary volumes, we can show that

$\frac{P_A}{P_B} = \left(\frac{V_A}{V_B}\right)^N$. (3.19)

$\ln\frac{P_B}{P_A} = N \cdot \ln\frac{V_B}{V_A}$ (3.20)


While the system evolves from state A to state B we have gone from a state of low probability to one of high probability.

Using (3.6) for one mole of gas with $T_A = T_B$, together with (3.20) for $N = N_A$, we derive

$S_B - S_A = k \cdot \ln\frac{P_B}{P_A} = k \cdot \left[\ln(P_B) - \ln(P_A)\right]$. (3.21)

Thus, the entropy S of a system in any particular state is proportional to the logarithm of the probability P for that state of the system, i.e.,

$S = k \cdot \ln(P)$. (3.22)

P is the probability of the entire system to be in a certain state. Don't confuse P with the probabilities pi of the system's particles to be in certain quantum states, such as quantum states of molecular vibrations or rotations.
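The probability in (3.18) can be checked by direct simulation for small N. A sketch (Python; the value of N, the trial count, and the seed are illustrative choices):

```python
import random

def prob_all_in_left(N, trials=200_000, seed=0):
    """Estimate the probability that all N randomly placed molecules
    end up in the left half of the volume."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # each molecule independently lands left (< 0.5) or right
        if all(rng.random() < 0.5 for _ in range(N)):
            hits += 1
    return hits / trials

N = 8
print(prob_all_in_left(N), 0.5 ** N)   # Monte Carlo estimate vs (1/2)^N = 1/256
```

Already for N = 8 the spontaneous "all on one side" state occurs in fewer than half a percent of trials; for macroscopic N it never occurs in practice.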

Comparison of the entropy for systems with few and with many particles

In simple mechanical systems the entropy is usually negligible. Assume that the system can be in two states, with one being twice as probable as the other. From (3.22) follows

$\Delta S = S_B - S_A = k \cdot \ln 2 \approx 1.38 \times 10^{-23} \times 2.3 \times 0.301\ \mathrm{J/K} \approx 10^{-23}\ \mathrm{J/K}$. (3.23)

This is a very small entropy difference, which may be neglected without introducing any significant inaccuracy when performing calculations on such a system. This is why only energy needs to be considered when performing calculations on simple mechanical systems. In contrast, for a system with the same two possible states but with one mole of particles we obtain

$\Delta S = S_B - S_A = N_A\,k \cdot \ln 2 \approx 8.3 \times 2.3 \times 0.301\ \mathrm{J/K} \approx 5.8\ \mathrm{J/K}$, (3.24)

which can be a substantial contribution. Deviations from the most probable state of a system are very unlikely. We found in chapter 2 that the individual states are distributed according to the Boltzmann distribution (1.6). We discussed that the Boltzmann distribution is the most probable of all conceivable distributions for a system in equilibrium. However, we did not address the question of how much less probable other conceivable distributions are. The probability is overwhelming that a system assumes the even distribution of particles in equilibrium, from which it deviates only to a very small extent. We insert the result from (3.24) into (3.21) for a hypothetical process from B to A.

$\Delta S = S_A - S_B = k \cdot \ln\frac{P_A}{P_B} = -5.8\ \mathrm{J/K}$. (3.25)

$\frac{P_A}{P_B} = \exp\left(-\frac{5.8}{1.4 \times 10^{-23}}\right) \approx \exp\left(-4 \times 10^{23}\right)$ (3.26)
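The two estimates (3.23) and (3.24), and the exponent in (3.26), can be reproduced directly (Python; the constants are the CODATA values):

```python
import math

k  = 1.380649e-23      # Boltzmann constant, J/K
NA = 6.02214076e23     # Avogadro's number, 1/mol

dS_single = k * math.log(2)         # eq. (3.23): two-state system, one particle
dS_mole   = NA * k * math.log(2)    # eq. (3.24): the same, for a mole of particles
print(dS_single)                    # ~ 1e-23 J/K, negligible
print(dS_mole)                      # ~ 5.76 J/K, substantial
print(-dS_mole / k)                 # exponent in (3.26): ~ -4e23
```

The exponent is so enormously negative that the probability ratio cannot even be represented as a floating-point number, which is the computational face of "such violations never occur".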


Are the postulates 2 and 3 correct? Postulate 2: We see from (3.22) that the entropy is proportional to the logarithm of the probability for the state of the system. Naturally, the state with the highest probability and, consequently, the largest entropy is assumed once relaxation processes are over. This means that for composite systems the extensive parameters will vary until the entropy is maximized. For instance, a wall separating two volumes will move or heat will flow until the state of highest probability is assumed.

Postulate 3: The probability to find a composite system in a certain state is the product of the probabilities to find each subsystem in a certain state. Therefore,

$S = k \ln\left(\prod_i P_i\right) = \sum_i k \ln P_i = \sum_i S_i$, (3.27)

the entropy is additive over the constituent subsystems. Furthermore, the entropy is a monotonically increasing function of the internal energy. Let's consider a system at constant volume, i.e., dV = 0. We then derive from (3.7):

$dS = \frac{\delta Q}{T} = \frac{dU}{T}$, (3.28)

$S = \int \frac{dU}{T}$. (3.29)

q.e.d.

Since T > 0, S indeed increases monotonically with U; and by (3.27) it is additive over the number of subsystems.

The Fundamental Equations

In fact, the entropy is a homogeneous first-order function of the extensive parameters, i.e.

$S = S(U, V, N_1, N_2, \ldots, N_i)$ (3.30)

with

$S(\lambda U, \lambda V, \lambda N_1, \ldots, \lambda N_i) = \lambda\,S(U, V, N_1, \ldots, N_i)$. (3.31)

The monotonicity implies that the partial derivative

$\left(\frac{\partial S}{\partial U}\right)_{V, N_1, \ldots, N_i} > 0$. (3.32)

We shall see later that the inverse of (3.32) is taken as the definition of the temperature. Therefore, the temperature cannot assume negative values.

The continuity, differentiability, and monotonicity of the entropy imply that the entropy function can be inverted with respect to the energy, and that the energy is a continuous and differentiable function of the entropy.


$U = U(S, V, N_1, N_2, \ldots, N_i)$, (3.33)

The equations (3.30) and (3.33) are alternative forms of the fundamental relation, and each contains all thermodynamic information about the system.

Both the entropy and the internal energy are extensive parameters. Consequently, according to (3.31) we can scale properties of a system of N moles of some substance to a system of 1 mole of the same substance.

$S(U, V, N) = N\,S\!\left(\frac{U}{N}, \frac{V}{N}, 1\right)$. (3.34)

But U/N and V/N are the energy and the volume per mole, respectively.

With

$u \equiv \frac{U}{N}$, (3.35)

and

$v \equiv \frac{V}{N}$ (3.36)

we obtain the entropy of a single mole:

$s(u, v) \equiv S(u, v, 1)$. (3.37)

Or

$S(U, V, N) = N\,s(u, v)$. (3.38)
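The scaling properties (3.31) and (3.38) are easy to verify numerically for any concrete homogeneous first-order function. The fundamental relation below is a hypothetical toy example, not a real substance; it is chosen only because it is homogeneous first order in (U, V, N):

```python
# hypothetical toy fundamental relation: S(U, V, N) = A * (U*V*N)**(1/3)
A = 2.0
def S(U, V, N):
    return A * (U * V * N) ** (1.0 / 3.0)

def s(u, v):                 # molar entropy, eq. (3.37): s(u, v) = S(u, v, 1)
    return S(u, v, 1.0)

U, V, N, lam = 6.0, 3.0, 2.0, 5.0
print(S(lam*U, lam*V, lam*N), lam * S(U, V, N))   # eq. (3.31): both sides equal
print(S(U, V, N), N * s(U / N, V / N))            # eq. (3.38): both sides equal
```

Scaling all extensive arguments by λ scales S by λ, and the total entropy is N times the molar entropy, exactly as the equations state.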

Let's briefly discuss how these equations can be useful when solving a thermodynamic problem. The entropy of a system consisting of various subsystems is the sum of the entropies of the subsystems. We therefore obtain the entropy as a function of the extensive parameters of the subsystems. In a constrained equilibrium, e.g. when the internal energy is constant, the entropy does not change: the entropy reaches an extremum. This is mathematically equivalent to the vanishing of the first derivatives of the entropy with respect to the extensive parameters. In the general case, depending on the second derivatives, we can classify the extrema as stable or unstable. Stable extrema are entropy maxima; all others are unstable extrema.

The fundamental equations (3.30) and (3.33) are equivalent. Any of them can be used to characterize the system, e.g., for finding equilibrium states. It turns out that, in fact, the energetic fundamental equation is often more convenient to use. Since the derivative of the entropy with respect to the energy is positive, a maximum for the entropy implies a minimum for the energy and vice versa.

The correspondence between entropy maximum and energy minimum represents a natural correspondence principle between thermodynamics and mechanics. In mechanics, thermal effects do not influence the stability of a system and the entropy is not used. A stable equilibrium, however, is a state of minimum energy.


3.3 Engines

In order to convert heat into work we require some suitable thermodynamic engine that consumes heat and produces work. The machine itself does not suffer any permanent changes during a process. Any series of processes that returns the system to its original state is called a cycle. The system with which the engine operates is called the working substance. The engine itself does not include the heat source and the heat sink but comprises just the thermodynamic cycle.

In any cycle, such as the one below, the amount of work done by the system is equal to the area enclosed in the figure and equal to the net heat absorbed during the cycle. From (3.7) it therefore follows that

$Q_{rev} = \oint T\,dS$. (3.39)

Since the entropy is a state function

$\oint dS = \oint \frac{\delta Q_{rev}}{T} = 0$ (3.40)

for a reversible cycle.

The thermal efficiency is defined as

$\eta = \frac{-W}{Q_h}$, (3.41)

with $Q_h$ being the heat absorbed from a hot heat reservoir. Writing the heat discharged into a cold reservoir as $Q_c$, we can transform (3.41) using the first law, $-W = Q_h - Q_c$, into


$\eta = 1 - \frac{Q_c}{Q_h}$. (3.42)

Certainly, since the internal energy is also a state function, the change of internal energy after one cycle is zero.

We consider the engine to be the thermodynamic system and the heat baths part of the surroundings. Our resulting "energy accounting" deviates from that of some authors. However, it is more consistent, because we calculate the efficiency of the engine (= system) and have to consider the energy flow in and out of this system, not in and out of the surroundings of the system. W and $Q_c$ leave the engine; the minus signs in the formulas indicate this direction of energy flow. Furthermore, it is essentially a matter of personal taste whether we put the sign indicating the direction of energy flow into the formulas or into the numbers that we plug into the formulas when we perform a calculation. We put the signs into the formulas.

The Carnot cycle and the Carnot theorem

A thermodynamic cycle of particular importance is the Carnot cycle. This cycle, as we shall see, is the cycle with the largest possible thermal efficiency η.

The Carnot cycle consists of four processes:


1. Reversible, isothermal expansion from A to B at the temperature Th of the hot heat source.

$\Delta S_{AB} = \frac{Q_h}{T_h}$

2. Reversible, adiabatic expansion from B to C. In the course of this expansion the temperature of the system falls from Th to Tc.

$\Delta S_{BC} = 0$

3. Reversible, isothermal compression from C to D at the temperature Tc of the cold heat sink.

$\Delta S_{CD} = -\frac{Q_c}{T_c}$

4. Reversible, adiabatic compression from D to A. In the course of this compression the temperature of the system rises from Tc to Th.

$\Delta S_{DA} = 0$

The Carnot cycle is the only cycle in which a single working substance exchanges heat at two temperatures only. ("Two temperatures only" means two isothermal processes and two processes that do not exchange heat and that are, therefore, adiabatic.)
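The entropy bookkeeping of the four strokes can be verified numerically for a concrete working substance. The sketch below (Python) uses one mole of a monatomic ideal gas with illustrative temperatures and volumes; on the adiabats $T V^{\gamma-1} = const$ holds:

```python
import math

R = 8.314                     # gas constant, J/(mol K)
gamma = 5.0 / 3.0             # monatomic ideal gas
Th, Tc = 500.0, 300.0         # hot and cold bath temperatures, K
VA, VB = 0.010, 0.020         # volumes at states A and B, m^3

r = (Th / Tc) ** (1.0 / (gamma - 1.0))   # volume ratio across each adiabat
VC, VD = VB * r, VA * r                  # volumes at states C and D

Qh = R * Th * math.log(VB / VA)   # heat absorbed on the hot isotherm A -> B
Qc = R * Tc * math.log(VC / VD)   # heat rejected (magnitude) on C -> D
print(Qh / Th, Qc / Tc)           # equal: entropy absorbed = entropy rejected
print(1 - Qc / Qh, 1 - Tc / Th)   # efficiency computed both ways: 0.4
```

Because both adiabats stretch the volume by the same factor r, the entropy absorbed on the hot isotherm exactly matches the entropy rejected on the cold one, and the efficiency reduces to the temperature ratio alone.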

While the Carnot cycle is often depicted in the p-V diagram, the T-S diagram, see below, shows the simplicity of the process much more clearly. Furthermore, in contrast to the p-V diagram, the T-S diagram allows one to perform all calculations without reference to the material of the system. The conclusions drawn are, therefore, of general validity for all systems.


The total change in entropy is

$0 = \Delta S_{cycle} = \Delta S_{AB} + \Delta S_{CD} = \frac{Q_h}{T_h} - \frac{Q_c}{T_c}$. (3.43)

$\frac{Q_h}{T_h} = \frac{Q_c}{T_c}$. (3.44)

$Q_h = T_h\,\Delta S_{AB} = T_h\,(S_B - S_A)$. (3.45)

Since the extracted work is equal to the net absorbed heat, which is equal to the area enclosed by the cycle, we get

$\eta_{rev} = \frac{-W}{Q_h} = \frac{T_h(S_B - S_A) - T_c(S_B - S_A)}{T_h(S_B - S_A)} = \frac{T_h - T_c}{T_h}$. (3.46)

Thus we see that the efficiency of a thermal engine operating according to a reversible Carnot cycle is independent of the working substance and depends only on the two operating temperatures. This result is known as Carnot's theorem. Equation (3.46) also means that a reversible Carnot engine can be used as a thermometer, by having the engine work with one heat bath at some reference temperature while measuring the thermodynamic efficiency.

This thermodynamic efficiency is by no means large. Let's consider the temperatures that are available in a typical steam engine: Th = 390 K (117 ºC) and Tc = 350 K (77 ºC). The thermodynamic efficiency of the best possible engine, the Carnot engine, would be just 10%; the efficiency of a real steam engine would certainly be even lower than that.
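A quick check of the quoted number, using the temperatures from the text:

```python
# Carnot efficiency for the steam-engine temperatures quoted above, eq. (3.47)
Th, Tc = 390.0, 350.0          # hot and cold reservoir temperatures in K
eta = 1 - Tc / Th
print(round(eta, 3))           # about 0.103, i.e. roughly 10%
```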

Sometimes Carnot's theorem is written as follows: "No engine operating between two given reservoirs can be more efficient than a Carnot engine operating between the same two reservoirs". Or it is expressed as: "All reversible engines operating between the same reservoirs are equally efficient", CJA, p.56. We have implicitly proven these forms of the Carnot theorem by deriving (3.46) without reference to a particular working substance. Moreover, $\Delta S_{cycle} = 0$ is a true statement for any reversible process and, consequently, (3.46) is true for any reversible engine. We summarize by stating

$\eta_{rev} = \frac{T_h - T_c}{T_h}$ (3.47)

for any reversible engine operating between the same heat reservoirs. Since

$-W_{irrev} < -W_{rev}$ (3.8)

it follows from the definition (3.41) that

$\eta_{irrev} < \eta_{rev}$. (3.48)

Finally, let's have a look at two forms of the second law that are frequently cited in the literature.


Kelvin Statement

"No process is possible whose sole result is the complete conversion of heat into work", CJA, p.53.

Proof: Equations (3.45) and (3.47) express this explicitly.

$\eta \le \frac{T_h - T_c}{T_h} < 1 \quad \text{for all } T_c > 0$. (3.49)

Since the thermodynamic efficiency is smaller than 1, a complete conversion of heat into work is not possible. q.e.d.

Clausius Statement

"No process is possible whose sole result is the transfer of heat from a colder to a hotter body", CJA, p.53.

Proof: If we bring two bodies (= subsystems) into thermal contact, the amount of heat that leaves one of the bodies is equal to the heat that enters the other, i.e., $\delta Q_c = -\delta Q_h$.

The overall change in entropy has to be positive, that is

$dS = \frac{\delta Q}{T_c} - \frac{\delta Q}{T_h} = \delta Q \left(\frac{1}{T_c} - \frac{1}{T_h}\right) > 0$. (3.50)

Since Th > Tc, equation (3.50) is indeed fulfilled and, therefore, this process is spontaneous. Heat transfer in the other direction is, consequently, impossible. q.e.d.

Endoreversible Engines

When discussing the Carnot cycle, our primary attention rested on the thermodynamic efficiency of an engine. However, maximum thermodynamic efficiency is not necessarily the primary concern in the design of real engines. More important might be power output, simplicity of construction, or cost. We shall now discuss engines that are not completely reversible but whose power output is maximized. These engines are called endoreversible engines and provide a good approximation to real engines. (See HBC, p. 126.)

Since the Carnot engine is reversible, all processes are quasi-equilibrium processes. Theoretically, this is equivalent to the processes progressing infinitesimally slowly, which makes the temperature differences between the working substance and the heat baths infinitesimally small. Consequently, the power delivered by the engine is infinitesimally small.

In practice, one runs an engine at some finite speed. Because "slow" has to be compared with the speed of the relaxation processes, the adiabatic processes can progress much faster than the isothermal processes: the relaxation times within the working substance are much shorter than the time scale of heat transfer between the working substance and the heat baths. Therefore, an endoreversible engine has two reversible, adiabatic processes and two irreversible heat-transfer processes.


We assume that the heat source is at a temperature Th and the heat sink at a temperature Tc. Heat is transferred between the working substance and the heat source and sink with thermal conductances Kh and Kc, respectively. During the isothermal expansion the working substance is at the temperature Tw; during the isothermal compression it is at the temperature Tt.

$T_h > T_w > T_t > T_c$ (3.51)

If the time $t_h$ is required to transfer an amount of heat $Q_h$, then

$\frac{Q_h}{t_h} = K_h (T_h - T_w)$. (3.52)

Since an equivalent equation holds for the heat transfer to the cold reservoir, the time required for the two isothermal strokes of the engine is

$t = t_h + t_c = \frac{Q_h}{K_h (T_h - T_w)} + \frac{Q_c}{K_c (T_t - T_c)}$. (3.53)

As mentioned earlier, the adiabatic strokes can be very fast. Their contribution to the amount of time for one complete cycle is, therefore, neglected.

The heats $Q_h$ and $Q_c$ are related by the Carnot cycle operating between the temperatures $T_w$ and $T_t$. From (3.46) it then follows that


$t = \left[\frac{T_w}{K_h (T_h - T_w)(T_w - T_t)} + \frac{T_t}{K_c (T_t - T_c)(T_w - T_t)}\right](-W)$. (3.54)

The power output of the engine (-W) / t has to be maximized with respect to the yet undetermined temperatures Tw and Tt. It can be found that the power is maximized for

$T_w = K\sqrt{T_h} \quad \text{and} \quad T_t = K\sqrt{T_c}$, (3.55)

with

$K = \frac{\sqrt{K_h T_h} + \sqrt{K_c T_c}}{\sqrt{K_h} + \sqrt{K_c}}$. (3.56)

The maximum power delivered by the engine is

$\left(\frac{-W}{t}\right)_{max} = K_h K_c \left(\frac{\sqrt{T_h} - \sqrt{T_c}}{\sqrt{K_h} + \sqrt{K_c}}\right)^2$. (3.57)

The efficiency for this endoreversible engine at maximum power is

$\eta_{endorev} = 1 - \sqrt{\frac{T_c}{T_h}}$. (3.58)

It is important to note that this efficiency does not depend on the thermal conductances. The model employed here is, in fact, quite accurate, as a comparison with a number of power plants shows.

Power Plant                                   Tc [ºC]  Th [ºC]  η_rev  η_endorev  η_observed  η_observed/η_endorev
West Thurrock (U.K.) coal-fired steam plant   ~25      565      0.64   0.40       0.36        90%
CANDU (Canada) PHW nuclear reactor            ~25      300      0.48   0.28       0.30        93%
Lardello (Italy) geothermal steam plant       80       250      0.32   0.175      0.16        91%

Ref. for endoreversible engines and table: F. L. Curzon and B. Ahlborn, Amer. J. Phys. 43, 22 (1975)
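The maximum-power condition can be checked by brute force. The sketch below (Python; a plain grid search with unit conductances and the West Thurrock temperatures, all illustrative choices) maximizes the power implied by (3.53) over Tw and Tt and compares the resulting efficiency with (3.58):

```python
import math

def power(Tw, Tt, Th, Tc, Kh=1.0, Kc=1.0):
    """Power -W/t of the endoreversible engine.

    The internal cycle is a reversible Carnot cycle between Tw and Tt,
    so Qc/Qh = Tt/Tw and -W = Qh - Qc; everything is taken per unit Qh,
    with the stroke times from eq. (3.53)."""
    if not (Th > Tw > Tt > Tc):
        return 0.0
    work = 1.0 - Tt / Tw
    time = 1.0 / (Kh * (Th - Tw)) + (Tt / Tw) / (Kc * (Tt - Tc))
    return work / time

Th, Tc = 838.0, 298.0        # West Thurrock temperatures in kelvin
best_p, best_eta = 0.0, 0.0
steps = 400
for i in range(1, steps):
    Tw = Tc + (Th - Tc) * i / steps
    for j in range(1, steps):
        Tt = Tc + (Tw - Tc) * j / steps
        p = power(Tw, Tt, Th, Tc)
        if p > best_p:
            best_p, best_eta = p, 1.0 - Tt / Tw
print(best_eta)                     # efficiency at the grid's maximum power
print(1.0 - math.sqrt(Tc / Th))    # eq. (3.58): about 0.40, matching the table
```

The numerically found efficiency at maximum power agrees with the Curzon-Ahlborn formula, and, as (3.58) states, the conductances drop out of the efficiency even though they set the absolute power.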

3.4 Equilibrium Conditions and Stability

Writing the fundamental equation

$U = U(S, V, N_1, N_2, \ldots, N_i)$, (3.59)

in differential form we obtain


$dU = \left(\frac{\partial U}{\partial S}\right)_{V, N_i} dS + \left(\frac{\partial U}{\partial V}\right)_{S, N_i} dV + \sum_{j=1}^{r} \left(\frac{\partial U}{\partial N_j}\right)_{S, V, N_{i \ne j}} dN_j$. (3.60)

The various partial derivatives are called energetic intensive parameters and have the following physical meaning:

$\left(\frac{\partial U}{\partial S}\right)_{V, N_i} \equiv T$, the temperature (3.61)

$-\left(\frac{\partial U}{\partial V}\right)_{S, N_i} \equiv p$, the pressure (3.62)

$\left(\frac{\partial U}{\partial N_j}\right)_{S, V, N_{i \ne j}} \equiv \mu_j$, the electrochemical potential of the jth component. (3.63)

Equation (3.60) can then be written as

$dU = T\,dS - p\,dV + \sum_{j=1}^{r} \mu_j\,dN_j$. (3.64)

We are already familiar with the first and second terms. The third term, containing the electrochemical potential, describes the energy exchange between the system and its surroundings due to a flux of matter.

Just to give it some name, we call this term the quasi-static chemical work dWc. Strange name, but it is an energy term that has to be considered in chemical systems.

$dU = T\,dS - p\,dV + \delta W_c$. (3.65)

Equations of State

Since the temperature, pressure, and chemical potential are derivatives of functions of S, V, and Ni, they are functions of these extensive parameters themselves:

$T = T(S, V, N_i)$, (3.66)

$p = p(S, V, N_i)$, (3.67)

$\mu_j = \mu_j(S, V, N_{i \ne j})$. (3.68)

Such relationships are called equations of state. Again, this is not new to us. We already used the equation of state for the ideal gas p = RT / V. For convenience we abbreviate N1, N2,..., Ni, with Ni.

Finally we encounter a good reason why the fundamental equations have to be homogeneous first-order functions: the derivatives of such functions are homogeneous zeroth order, which is nice because now we get:


$T(\lambda S, \lambda V, \lambda N_i) = T(S, V, N_i)$, (3.69)

$p(\lambda S, \lambda V, \lambda N_i) = p(S, V, N_i)$, (3.70)

$\mu_j(\lambda S, \lambda V, \lambda N_i) = \mu_j(S, V, N_i)$. (3.71)

This means that, e.g., the temperature of part of the system is equal to the temperature of the whole system, and that's how it should be.

As demonstrated with equation (3.37) the energetic fundamental equation can, certainly, be written in molar terms:

$du = T\,ds - p\,dv$. (3.72)

We based these considerations on the energetic fundamental equation, i.e., we chose to work in the so-called energy representation. This means that we chose the energy as the dependent variable, with the entropy among the independent variables. Alternatively, we could have started with the entropic fundamental equation and would have arrived at the corresponding entropic intensive parameters. In such a case we would work in the entropy representation. However, we shall not consider this alternative here.

Thermal Equilibrium

Let's consider a system that consists of two subsystems in thermal contact. In equilibrium dS = 0. The entropy of the system is the sum of the entropies of the subsystems:

$S = S^A(U^A, V^A, N_i^A) + S^B(U^B, V^B, N_i^B)$. (3.73)

The differential form is then:

$dS = \left(\frac{\partial S^A}{\partial U^A}\right)_{V^A, N_i^A} dU^A + \left(\frac{\partial S^B}{\partial U^B}\right)_{V^B, N_i^B} dU^B = \frac{1}{T^A}\,dU^A + \frac{1}{T^B}\,dU^B$. (3.74)

Because of conservation of energy we have dUB = - dUA.

Equation (3.74) is then:

$dS = \left(\frac{1}{T^A} - \frac{1}{T^B}\right) dU^A$. (3.75)

Since in equilibrium dS has to vanish

$\frac{1}{T^A} = \frac{1}{T^B} \;\Rightarrow\; T^A = T^B$. (3.76)

The system is in thermal equilibrium. (By the way, we began this derivation with the entropic fundamental equation and, therefore, worked in the entropy representation. But don't worry about that.)

We can see now that our definition of the temperature T is in agreement with our intuitive concept of temperature.

If $T^A > T^B$ the system is not in equilibrium and $\Delta S > 0$. In analogy to equation (3.75) we obtain

$\Delta S = \left(\frac{1}{T^A} - \frac{1}{T^B}\right) \Delta U^A > 0$. (3.77)

$\Delta U^A < 0$ (3.78)

This means that in a spontaneous process the energy flows from the part with higher temperature to the part with lower temperature. Furthermore, the temperature is an intensive parameter, i.e., it has the same value everywhere in an equilibrium system.

Temperature Units

The temperature is defined by (3.61):

$T \equiv \left(\frac{\partial U}{\partial S}\right)_{V, N_i}$. (3.61)

While the dimension of the energy is [mass length² / time²], the dimension of the entropy can be chosen arbitrarily, because any entropy multiplied by some constant still satisfies the extremum principles and is, consequently, an entropy. Nevertheless, considering

$S = k \cdot \ln(P)$ (3.22)

it is clear that the entropy has the dimension of the constant in front of the logarithm.

The units of energy are Joule, erg, calories, etc. The thermodynamic temperature has a uniquely defined zero point: according to equation (3.46), this is the temperature at which the thermodynamic efficiency of a reversible cycle equals 1. The Kelvin scale of temperature is defined by assigning the number 273.16 to the triple point of water. This corresponds to about 0 ºC. The only temperature scale that can be used in thermodynamic calculations is the Kelvin scale. The corresponding unit of temperature is called Kelvin, designated by the notation K. Kelvin and Joule have the same dimension; their ratio is 1.3806 × 10⁻²³ Joule/Kelvin. This ratio is called Boltzmann's constant, designated kB or often simply k. Thus kBT is an energy.

For more information on energy scales read, e.g., HBC, pp. 47 or CJA, pp. 18, 58.

Mechanical Equilibrium

Let's consider a system that consists of two subsystems separated by a diathermal, movable wall. In equilibrium dS = 0.

The differential form is then:


$dS = \left(\frac{\partial S^A}{\partial U^A}\right)_{V^A, N_i^A} dU^A + \left(\frac{\partial S^A}{\partial V^A}\right)_{U^A, N_i^A} dV^A + \left(\frac{\partial S^B}{\partial U^B}\right)_{V^B, N_i^B} dU^B + \left(\frac{\partial S^B}{\partial V^B}\right)_{U^B, N_i^B} dV^B$. (3.79)

$dS = \frac{1}{T^A}\,dU^A + \frac{p^A}{T^A}\,dV^A + \frac{1}{T^B}\,dU^B + \frac{p^B}{T^B}\,dV^B$ (3.80)

Because of conservation of energy we have dUB = - dUA and dVB = - dVA.

Equation (3.80) is then:

$dS = \left(\frac{1}{T^A} - \frac{1}{T^B}\right) dU^A + \left(\frac{p^A}{T^A} - \frac{p^B}{T^B}\right) dV^A = 0$. (3.81)

Since the changes $dU^A$ and $dV^A$ are independent, (3.81) can only be satisfied if

$\frac{1}{T^A} = \frac{1}{T^B} \quad \text{and} \quad \frac{p^A}{T^A} = \frac{p^B}{T^B}$. (3.82)

$T^A = T^B$ (3.83)

$p^A = p^B$ (3.84)

The system is in thermal and pressure equilibrium. The equality of pressures is exactly what we expect intuitively. It is clear that the pressure defined in equation (3.62) is exactly the mechanical pressure.

Equilibrium with Respect to Matter Flow

We now consider a system whose two subsystems are connected by a diathermal wall that is permeable to the ith substance but not to any other. We are now searching for equilibrium conditions with respect to the temperature and the chemical potential. The mathematical formalism is exactly the same as in the previous examples. Beginning with

dS = (1/TA) dUA - (µA/TA) dNA,i + (1/TB) dUB - (µB/TB) dNB,i (3.85)

we obtain the results

TA = TB , (3.86)

µA = µB . (3.87)

Just as the temperature can be looked upon as a "potential" for heat flow, and the pressure can be looked upon as a "potential" for volume changes, the chemical potential can be looked upon as a "potential" for matter flow. We shall see later that the chemical potential also provides a generalized force for the change of phases and for chemical reactions. Thus, the chemical potential is of great importance for theoretical chemistry. The units for the chemical potential are energy units per mole.
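The "potential" picture can be checked numerically. For a rigid wall (dVA = 0), equation (3.81) reduces to dS = (1/TA - 1/TB) dUA, which is positive whenever energy flows into the colder subsystem. A minimal sketch, with arbitrarily chosen temperatures:

```python
# Entropy change for a small heat flow between two subsystems across a
# rigid, diathermal wall, following dS = (1/TA - 1/TB) * dUA from (3.81).
T_A = 300.0   # K, colder subsystem (illustrative value)
T_B = 400.0   # K, hotter subsystem (illustrative value)
dU_A = 1.0    # J, small amount of energy flowing from B into A

dS = (1.0 / T_A - 1.0 / T_B) * dU_A
print(f"dS = {dS:.2e} J/K")  # positive: heat spontaneously flows hot -> cold
```

The sign of dS tells the direction of spontaneous flow; dS vanishes only when TA = TB, the equilibrium condition (3.83).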

3.5 The Second Law: Examples

Example 1. Engines: Refrigerators and Heat Pumps

Let's recall the ideal engine. We have discussed the reversible cycle that transports heat from a hot to a cold heat bath while performing work, the Carnot cycle. Such an engine is shown below.

Refrigerator

In contrast, a refrigerator absorbs work from an external source and removes heat from an isolated volume to the environment, as shown in the next figure.


The thermodynamic efficiency for this engine would be defined similarly to equation (3.45). However, now our focus rests on the amount of heat Qc removed from the isolated volume per work W supplied to the engine.

ηrefrigerator = Qc/W ≤ Tc/(Th - Tc) . (3.88)

Heat Pump

The thermodynamic efficiency for the heat pump is given by the heat released, e.g., into the house at Th, divided by the work consumed by the engine.


ηheat_pump = Qh/W ≤ Th/(Th - Tc) = 1 + ηrefrigerator . (3.89)

These efficiencies are displayed together in the next figure. Tin refers to the temperature of the heat source, Tout refers to the temperature of the heat sink.


The thermodynamic efficiency of a reversible engine decreases linearly as a function of Tc/Th. In contrast, the efficiencies of the refrigerator and the heat pump diverge as Tc/Th approaches 1. This means that no work has to be supplied if there is no temperature difference to "work against".
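The three efficiencies can be tabulated side by side. A minimal sketch, evaluating equations (3.88) and (3.89) together with the Carnot efficiency 1 - Tc/Th for a few illustrative temperature ratios:

```python
# Carnot, refrigerator, and heat-pump efficiencies as functions of Tc and Th,
# following the Carnot result and equations (3.88) and (3.89).
def eta_carnot(Tc, Th):
    return 1.0 - Tc / Th

def eta_refrigerator(Tc, Th):
    return Tc / (Th - Tc)

def eta_heat_pump(Tc, Th):
    return Th / (Th - Tc)  # equals 1 + eta_refrigerator, cf. (3.89)

Th = 300.0  # K, fixed hot-bath temperature (illustrative)
for Tc in (100.0, 200.0, 290.0):
    print(f"Tc/Th = {Tc / Th:.2f}: "
          f"Carnot {eta_carnot(Tc, Th):.2f}, "
          f"fridge {eta_refrigerator(Tc, Th):.2f}, "
          f"heat pump {eta_heat_pump(Tc, Th):.2f}")
```

As Tc/Th approaches 1 the refrigerator and heat-pump values grow without bound, reproducing the divergence described above.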

Example 2. Entropy Changes When Heating a Substance

Suppose we heat a substance reversibly from an initial temperature TA to a final temperature TB. What is the change of entropy of the substance?

Since this is supposed to be a reversible process, the inequality dS ≥ dQ/T holds with the equality sign:

dS = dQ/T (3.90)

For processes at constant pressure, such as those occurring at atmospheric pressure, we know that

dQ = CP dT (3.91)

We plug in to get:

dS = (CP/T) dT (3.92)

We get the value of the entropy at the final temperature as:

S(TB) = S(TA) + ∫_{TA}^{TB} (CP/T) dT (3.93)

The important thing to note is this: the heat capacity as a function of the temperature is an experimentally accessible function. Thus, one can obtain the value of the entropy at some temperature from the one at another temperature by integrating the above equation.
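When CP(T) is known only from measurements, the integral in (3.93) is evaluated numerically. A minimal sketch using trapezoidal integration; the heat-capacity function below is purely illustrative, not measured data:

```python
# Numerical evaluation of equation (3.93): S(TB) = S(TA) + integral of CP(T)/T dT.
def C_P(T):
    """Hypothetical, weakly temperature-dependent heat capacity in J/(K mol)."""
    return 75.0 + 0.01 * (T - 298.0)

def delta_S(T_A, T_B, n=10000):
    """Trapezoidal integration of C_P(T)/T from T_A to T_B."""
    h = (T_B - T_A) / n
    total = 0.5 * (C_P(T_A) / T_A + C_P(T_B) / T_B)
    for i in range(1, n):
        T = T_A + i * h
        total += C_P(T) / T
    return total * h

print(f"Delta S = {delta_S(298.0, 398.0):.2f} J/(K mol)")
```

In practice, tabulated CP(T) values would replace the hypothetical function; the integration scheme is unchanged.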

For small changes in temperature, that is whenever TA ≈ TB, we can assume the heat capacity to be independent of the temperature. This allows us to take it out of the integral:

S(TB) = S(TA) + CP ∫_{TA}^{TB} dT/T = S(TA) + CP · ln(TB/TA) (3.94)

As a numerical example, we can look at the change in entropy upon heating of one mole of water from 298 K to 299 K:

∆S = S(TB) - S(TA) = CP · ln(TB/TA) = 75 J/(K mol) · ln(299 K / 298 K) = 0.25 J/(K mol) (3.95)

Thus, if the entropy at one temperature is known, then it is an easy matter to calculate the entropy at a slightly higher temperature, provided the heat capacities are known.
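The closed-form result (3.94) is easy to evaluate directly. A minimal sketch reproducing the numbers in (3.95), with the molar heat capacity of liquid water taken as the approximate value used in the text:

```python
import math

# Entropy change on heating at constant pressure with constant C_P,
# equation (3.94): Delta S = C_P * ln(T_B / T_A).
C_P = 75.0    # J/(K mol), approximate molar heat capacity of liquid water
T_A = 298.0   # K, initial temperature
T_B = 299.0   # K, final temperature

dS = C_P * math.log(T_B / T_A)
print(f"Delta S = {dS:.2f} J/(K mol)")  # about 0.25 J/(K mol), matching (3.95)
```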

Example 3. Entropy of Phase Transitions

Now consider the change of the entropy of a substance during a phase transition. We again assume that the phase transition is done in a reversible way. This implies, for example, that one melts ice to water at 0 °C (that is: all phase transitions are studied at their transition temperature). We know that phase transitions are associated with an enthalpy change. For a reversible process we have:

∆Htrans = Qrev (3.96)

We can immediately plug in to obtain:

∆Strans = ∆Htrans / Ttrans (3.97)

If the phase transition is exothermic (∆Htrans < 0), the change in the entropy is negative. For example, in the exothermic freezing of water, the entropy change is negative. We will later see how we can interpret this observation on a microscopic level.
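Equation (3.97) can be evaluated for the freezing of water. A minimal sketch; the enthalpy of fusion used below is an approximate textbook value, not taken from this chapter:

```python
# Entropy change of freezing water via equation (3.97): Delta S = Delta H / T.
# The enthalpy of fusion (~6.01 kJ/mol) is an approximate handbook value.
dH_freeze = -6010.0   # J/mol, negative because freezing is exothermic
T_trans = 273.15      # K, normal freezing point of water

dS = dH_freeze / T_trans
print(f"Delta S = {dS:.1f} J/(K mol)")  # negative, as the text predicts
```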

Let us take a look at some typical molar entropies of vaporization of liquids:


We notice that all the entropies of vaporization are fairly similar; this similarity is known as Trouton's rule, which states that all substances have entropies of vaporization of about 85 J/(K mol). Water is somewhat of an exception, as it has a fairly high entropy of vaporization. The reason for this empirically found rule is that a comparable amount of disorder is generated when a liquid evaporates. However, water molecules in the liquid phase strongly interact with each other, which introduces some amount of order. This order is lost upon evaporation and, consequently, the associated entropy change is larger than for other substances.
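Trouton's rule can be checked by applying equation (3.97) at the normal boiling point. A minimal sketch; the enthalpies of vaporization and boiling points below are approximate handbook values, not data from this chapter:

```python
# Trouton's rule check: Delta S_vap = Delta H_vap / T_b should be ~85 J/(K mol).
# Enthalpies (J/mol) and normal boiling points (K) are approximate handbook values.
liquids = {
    "benzene": (30800.0, 353.2),
    "hexane":  (28900.0, 342.0),
    "water":   (40700.0, 373.15),  # outlier: hydrogen bonding orders the liquid
}

for name, (dH_vap, T_b) in liquids.items():
    dS_vap = dH_vap / T_b
    print(f"{name:8s}: Delta S_vap = {dS_vap:5.1f} J/(K mol)")
```

The nonpolar liquids cluster near 85 J/(K mol), while water comes out noticeably higher, consistent with the discussion above.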