
Quantum Mechanics

2nd term 2002

Martin Plenio, Imperial College

Version January 28, 2002

Office hours: Tuesdays 11am-12noon and 5pm-6pm!

Office: Blackett 622
Available at:


Contents

I Quantum Mechanics

1 Mathematical Foundations
  1.1 The quantum mechanical state space
  1.2 The quantum mechanical state space
      1.2.1 From Polarized Light to Quantum Theory
      1.2.2 Complex vector spaces
      1.2.3 Basis and Dimension
      1.2.4 Scalar products and Norms on Vector Spaces
      1.2.5 Completeness and Hilbert spaces
      1.2.6 Dirac notation
  1.3 Linear Operators
      1.3.1 Definition in Dirac notation
      1.3.2 Adjoint and Hermitean Operators
      1.3.3 Eigenvectors, Eigenvalues and the Spectral Theorem
      1.3.4 Functions of Operators
  1.4 Operators with continuous spectrum
      1.4.1 The position operator
      1.4.2 The momentum operator
      1.4.3 The position representation of the momentum operator and the commutator between position and momentum

2 Quantum Measurements
  2.1 The projection postulate
  2.2 Expectation value and variance
  2.3 Uncertainty Relations
      2.3.1 The trace of an operator
  2.4 The density operator
  2.5 Mixed states, Entanglement and the speed of light
      2.5.1 Quantum mechanics for many particles
      2.5.2 How to describe a subsystem of some large system?
      2.5.3 The speed of light and mixed states
  2.6 Generalized measurements

3 Dynamics and Symmetries
  3.1 The Schrödinger Equation
      3.1.1 The Heisenberg picture
  3.2 Symmetries and Conservation Laws
      3.2.1 The concept of symmetry
      3.2.2 Translation Symmetry and momentum conservation
      3.2.3 Rotation Symmetry and angular momentum conservation
  3.3 General properties of angular momenta
      3.3.1 Rotations
      3.3.2 Group representations and angular momentum commutation relations
      3.3.3 Angular momentum eigenstates
  3.4 Addition of Angular Momenta
      3.4.1 Two angular momenta
  3.5 Local Gauge symmetries and Electrodynamics

4 Approximation Methods
  4.1 Time-independent Perturbation Theory
      4.1.1 Non-degenerate perturbation theory
      4.1.2 Degenerate perturbation theory
      4.1.3 The van der Waals force
      4.1.4 The Helium atom
  4.2 Adiabatic Transformations and Geometric phases
  4.3 Variational Principle
      4.3.1 The Rayleigh-Ritz Method
  4.4 Time-dependent Perturbation Theory
      4.4.1 Interaction picture
      4.4.2 Dyson Series
      4.4.3 Transition probabilities

II Quantum Information Processing

5 Quantum Information Theory
  5.1 What is information? Bits and all that.
  5.2 From classical information to quantum information.
  5.3 Distinguishing quantum states and the no-cloning theorem.
  5.4 Quantum entanglement: From qubits to ebits.
  5.5 Quantum state teleportation.
  5.6 Quantum dense coding.
  5.7 Local manipulation of quantum states.
  5.8 Quantum cryptography
  5.9 Quantum computation
  5.10 Entanglement and Bell inequalities
  5.11 Quantum State Teleportation
  5.12 A basic description of teleportation


Part I

Quantum Mechanics


Introduction

This lecture will introduce quantum mechanics from a more abstract point of view than the first quantum mechanics course that you took in your second year.

What I would like to achieve with this course is for you to gain a deeper understanding of the structure of quantum mechanics and of some of its key points. As the structure is inevitably mathematical, I will need to talk about mathematics. I will not do this just for the sake of mathematics, but always with the aim of understanding physics. At the end of the course I would like you not only to be able to understand the basic structure of quantum mechanics, but also to be able to solve (calculate) quantum mechanical problems. In fact, I believe that the ability to calculate (finding the quantitative solution to a problem, or the correct proof of a theorem) is absolutely essential for reaching a real understanding of physics (although physical intuition is equally important). I would like to go so far as to state

If you can’t write it down, then you do not understand it!

With 'writing it down' I mean expressing your statement mathematically or being able to calculate the solution of a scheme that you proposed. This does not sound like a very profound truth, but you would be surprised to see how many people actually believe that it is completely sufficient just to be able to talk about physics and that calculations are a menial task that only mediocre physicists undertake. Well, I can assure you that even the greatest physicists don't just sit down and await inspiration. Ideas only come after many wrong tries, and whether a try is right or wrong can only be found out by checking it, i.e. by doing some sort of calculation or a proof. The ability to do calculations is not something that one has or hasn't, but (except for some exceptional cases) has to be acquired by practice. This is one of the reasons why these lectures will be accompanied by problem sheets (Rapid Feedback System) and I really recommend that you try to solve them. It is quite clear that solving the problem sheets is one of the best ways to prepare for the exam. Sometimes I will add some extra problems to the problem sheets which are more tricky than usual. They are usually intended to illuminate an advanced or tricky point for which I had no time in the lectures.

The first part of these lectures will not be too unusual. The first chapter will be devoted to the mathematical description of the quantum mechanical state space, the Hilbert space, and of the description of physical observables. The measurement process will be investigated in the next chapter, and some of its implications will be discussed. In this chapter you will also learn the essential tools for studying entanglement, the stuff such weird things as quantum teleportation, quantum cryptography and quantum computation are made of. The third chapter will present the dynamics of quantum mechanical systems and highlight the importance of the concept of symmetry in physics, and particularly in quantum mechanics. It will be shown how the momentum and angular momentum operators can be obtained as generators of the symmetry groups of translation and rotation. I will also introduce a different kind of symmetry, the so-called gauge symmetries. They allow us to 'derive' the existence of classical electrodynamics from a simple invariance principle. This idea was pushed much further in the 1960's when people applied it to the theories of elementary particles, and were quite successful with it. In fact, 't Hooft and Veltman got a Nobel prize in 1999 for their work in this area. Time dependent problems, even more than time-independent problems, are difficult to solve exactly and therefore perturbation theoretical methods are of great importance. They will be explained in chapter 5 and examples will be given.

Most of the ideas that you are going to learn in the first five chapters of these lectures have been known since about 1930, which is quite some time ago. The second part of these lectures, however, I will devote to topics which are currently the object of intense research (they are also my main area of research). In this last chapter I will discuss topics such as entanglement, Bell inequalities, quantum state teleportation, quantum computation and quantum cryptography. How much of these I can cover depends on the amount of time that is left, but I will certainly talk about some of them. While most physicists (hopefully) know the basics of quantum mechanics (the first five chapters of these lectures), many of them will not be familiar with the content of the other chapters. So, after these lectures you can be sure to know about something that quite a few professors do not know themselves! I hope that this motivates you to stay with me until the end of the lectures.

Before I begin, I would like to thank Vincenzo Vitelli, John Papadimitrou and William Irvine, who took this course previously and spotted errors and suggested improvements in the lecture notes and the course. These errors are fixed now, but I expect that there are more. If you find errors, please let me know (ideally via email so that the corrections do not get lost again) so I can get rid of them.

Last but not least, I would like to encourage you both to ask questions during the lectures and to make use of my office hours. Questions are essential in the learning process, so they are good for you, but they also show me what you have not understood so well and help me to improve my lectures. Finally, it is more fun to lecture when there is some feedback from the audience.


Chapter 1

Mathematical Foundations

Before I begin to introduce some basics of complex vector spaces and discuss the mathematical foundations of quantum mechanics, I would like to present a simple (seemingly classical) experiment from which we can derive quite a few quantum rules.

1.1 The quantum mechanical state space

When we talk about physics, we attempt to find a mathematical description of the world. Of course, such a description cannot be justified from mathematical consistency alone, but has to agree with experimental evidence. The mathematical concepts that are introduced are usually motivated from our experience of nature. Concepts such as position and momentum or the state of a system are usually taken for granted in classical physics. However, many of these have to be subjected to a careful re-examination when we try to carry them over to quantum physics. One of the basic notions for the description of a physical system is that of its 'state'. The 'state' of a physical system can then be defined, roughly, as the description of all the known (in fact one should say knowable) properties of that system, and it therefore represents your knowledge about this system. The set of all states forms what we usually call the state space. In classical mechanics, for example, this is the phase space (the variables are then position and momentum), which is a real vector space. For a classical point-particle moving in one dimension, this space is two dimensional: one dimension for position, one dimension for momentum. We expect, and in fact you probably know this from your second year lecture, that the quantum mechanical state space differs from that of classical mechanics. One reason for this can be found in the ability of quantum systems to exist in coherent superpositions of states with complex amplitudes; other differences relate to the description of multi-particle systems. This suggests that a good choice for the quantum mechanical state space may be a complex vector space.

Before I begin to investigate the mathematical foundations of quantum mechanics, I would like to present a simple example (including some live experiments) which motivates the choice of complex vector spaces as state spaces a bit more. Together with the hypothesis of the existence of photons it will allow us also to 'derive', or better, to make an educated guess for the projection postulate and the rules for the computation of measurement outcomes. It will also remind you of some of the features of quantum mechanics which you have already encountered in your second year course.

1.2 The quantum mechanical state space

In the next subsection I will briefly motivate that the quantum mechanical state space should be a complex vector space, and also motivate some of the other postulates of quantum mechanics.

1.2.1 From Polarized Light to Quantum Theory

Let us consider plane waves of light propagating along the z-axis. This light is described by the electric field vector \vec{E}, orthogonal to the direction of propagation. The electric field vector determines the state of light, because in the cgs-system (which I use for convenience in this example so that I have as few ε0 and µ0 as possible) the magnetic field is given by \vec{B} = \vec{e}_z × \vec{E}. Given the electric and magnetic field, Maxwell's equations determine the further time evolution of these fields. In the absence of charges, we know that \vec{E}(\vec{r}, t) cannot have a z-component, so that we can write

\vec{E}(\vec{r}, t) = E_x(\vec{r}, t)\,\vec{e}_x + E_y(\vec{r}, t)\,\vec{e}_y = \begin{pmatrix} E_x(\vec{r}, t) \\ E_y(\vec{r}, t) \end{pmatrix} .   (1.1)

The electric field is a real valued quantity and the general solution of the free wave equation is given by

E_x(\vec{r}, t) = E^0_x \cos(kz − ωt + α_x)
E_y(\vec{r}, t) = E^0_y \cos(kz − ωt + α_y) .

Here k = 2π/λ is the wave-number, ω = 2πν the frequency, α_x and α_y are the real phases, and E^0_x and E^0_y the real valued amplitudes of the field components. The energy density of the field is given by

ε(\vec{r}, t) = \frac{1}{8π} \left( \vec{E}^2(\vec{r}, t) + \vec{B}^2(\vec{r}, t) \right)
             = \frac{1}{4π} \left[ (E^0_x)^2 \cos^2(kz − ωt + α_x) + (E^0_y)^2 \cos^2(kz − ωt + α_y) \right] .

For a fixed position \vec{r} we are generally only really interested in the time-averaged energy density which, when multiplied with the speed of light, determines the rate at which energy flows in the z-direction. Averaging over one period of the light we obtain the averaged energy density ε(\vec{r}) with

ε(\vec{r}) = \frac{1}{8π} \left[ (E^0_x)^2 + (E^0_y)^2 \right] .   (1.2)

For practical purposes it is useful to introduce the complex field components

E_x(\vec{r}, t) = Re(E_x e^{i(kz−ωt)}) , \quad E_y(\vec{r}, t) = Re(E_y e^{i(kz−ωt)}) ,   (1.3)

with E_x = E^0_x e^{iα_x} and E_y = E^0_y e^{iα_y}. Comparing with Eq. (1.2) we find

that the averaged energy density is given by

ε(\vec{r}) = \frac{1}{8π} \left[ |E_x|^2 + |E_y|^2 \right] .   (1.4)

Usually one works with the complex field

\vec{E}(\vec{r}, t) = (E_x \vec{e}_x + E_y \vec{e}_y) e^{i(kz − ωt)} = \begin{pmatrix} E_x \\ E_y \end{pmatrix} e^{i(kz − ωt)} .   (1.5)


This means that we are now characterizing the state of light by a vector with complex components.

The polarization of the light wave is described by E_x and E_y. In the general case of complex E_x and E_y we will have elliptically polarized light. There are a number of important special cases (see Figure 1.1 for illustration).

1. Ey = 0: linear polarization along the x-axis.

2. Ex = 0: linear polarization along the y-axis.

3. Ex = Ey: linear polarization along the 45° axis.

4. Ey = iEx: Right circularly polarized light.

5. Ey = −iEx: Left circularly polarized light.

Figure 1.1: Left figure: Some possible linear polarizations of light: horizontally, vertically and at 45 degrees. Right figure: Left- and right-circularly polarized light. The light is assumed to propagate away from you.

In the following I would like to consider some simple experiments for which I will compute the outcomes using classical electrodynamics. Then I will go further and use the hypothesis of the existence of photons to derive a number of quantum mechanical rules from these experiments.

Experiment I: Let us first consider a plane light wave propagating in the z-direction that is falling onto an x-polarizer, which allows x-polarized light to pass through (but not y-polarized light). This is shown in Figure 1.2.

Figure 1.2: Light of arbitrary polarization is hitting an x-polarizer.

After passing the polarizer the light is x-polarized, and from the expression for the energy density, Eq. (1.4), we find that the ratio between incoming intensity I_in (energy density times speed of light) and outgoing intensity I_out is given by

\frac{I_{out}}{I_{in}} = \frac{|E_x|^2}{|E_x|^2 + |E_y|^2} .   (1.6)

So far this looks like an experiment in classical electrodynamics or optics.

Quantum Interpretation: Let us change the way of looking at this problem and thereby turn it into a quantum mechanical experiment. You have heard at various points in your physics course that light comes in little quanta known as photons. The first time this assumption was made was by Planck in 1900, 'as an act of desperation', to be able to derive the blackbody radiation spectrum. Indeed, you can also observe in direct experiments that the photon hypothesis makes sense. When you reduce the intensity of light that falls onto a photodetector, you will observe that the detector responds with individual clicks, each triggered by the impact of a single photon (if the detector is sensitive enough). The photo-electric effect and various other experiments also confirm the existence of photons. So, in the low-intensity limit we have to consider light as consisting of indivisible units called photons. It is a fundamental property of photons that they cannot be split – there is no such thing as half a photon going through a polarizer, for example. In this photon picture we have to conclude that sometimes a photon will be absorbed in the polarizer and sometimes it passes through. If the photon passes the polarizer, we have gained one piece of information, namely that the photon was able to pass the polarizer and that therefore it has to be polarized in the x-direction. The probability p for the photon to pass through the polarizer is obviously the ratio between transmitted and incoming intensities, which is given by

p = \frac{|E_x|^2}{|E_x|^2 + |E_y|^2} .   (1.7)

If we write the state of the light with normalized intensity

\vec{E}_N = \frac{E_x}{\sqrt{|E_x|^2 + |E_y|^2}}\,\vec{e}_x + \frac{E_y}{\sqrt{|E_x|^2 + |E_y|^2}}\,\vec{e}_y ,   (1.8)

then in fact we find that the probability for the photon to pass the x-polarizer is just the modulus squared of the amplitude in front of the basis vector \vec{e}_x! This is just one of the quantum mechanical rules that you have learned in your second year course.

Furthermore we see that the state of the photon after it has passed the x-polarizer is given by

\vec{E}_N = \vec{e}_x ,   (1.9)

i.e. the state has changed from \begin{pmatrix} E_x \\ E_y \end{pmatrix} to \begin{pmatrix} E_x \\ 0 \end{pmatrix}. This transformation

of the state can be described by a matrix acting on vectors, i.e.

\begin{pmatrix} E_x \\ 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} E_x \\ E_y \end{pmatrix}   (1.10)

The matrix that I have written here has eigenvalues 0 and 1 and is therefore a projection operator, which you have heard about in the second year course; in fact this is strongly reminiscent of the projection postulate in quantum mechanics.
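The reasoning of Eqs. (1.7)-(1.10) is easy to check numerically. The following short NumPy sketch is not part of the original notes; the field amplitudes are arbitrary illustrative values. It applies the projection matrix of Eq. (1.10) to a polarization vector and computes the transmission probability of Eq. (1.7).

```python
import numpy as np

# Arbitrary complex field amplitudes (E_x, E_y) -- illustrative values only
E = np.array([1.0 + 0.5j, 0.3 - 0.2j])

# Probability for a photon to pass the x-polarizer, Eq. (1.7)
p_pass = abs(E[0])**2 / np.sum(np.abs(E)**2)

# Projection matrix of Eq. (1.10): it keeps the x-component and removes the y-component
P_x = np.array([[1, 0],
                [0, 0]])
E_after = P_x @ E

print(p_pass)     # transmission probability
print(E_after)    # the state after the polarizer: (E_x, 0)
```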

Experiment II: Now let us make a somewhat more complicated experiment by placing a second polarizer behind the first x-polarizer. The second polarizer allows photons polarized in the x' direction to pass through. If I slowly rotate the polarizer from the x direction to the y direction, we observe that the intensity of the light that passes through the polarizer decreases and vanishes when the directions of the two polarizers are orthogonal. I would like to describe this experiment mathematically. How do we compute the intensity after the polarizer now? To this end we need to see how we can express vectors in the basis chosen by the direction x' in terms of the old basis vectors \vec{e}_x, \vec{e}_y.

The new rotated basis \vec{e}'_x, \vec{e}'_y (see Fig. 1.3) can be expressed in terms of the old basis by

\vec{e}'_x = \cos φ\,\vec{e}_x + \sin φ\,\vec{e}_y , \quad \vec{e}'_y = −\sin φ\,\vec{e}_x + \cos φ\,\vec{e}_y   (1.11)

and vice versa

\vec{e}_x = \cos φ\,\vec{e}'_x − \sin φ\,\vec{e}'_y , \quad \vec{e}_y = \sin φ\,\vec{e}'_x + \cos φ\,\vec{e}'_y .   (1.12)

Note that \cos φ = \vec{e}'_x · \vec{e}_x and \sin φ = \vec{e}'_x · \vec{e}_y, where I have used the real scalar product between vectors.

Figure 1.3: The x'-basis is rotated by an angle φ with respect to the original x-basis.

The state of the x-polarized light after the first polarizer can be rewritten in the new basis of the x'-polarizer. We find

\vec{E} = E_x\vec{e}_x = E_x \cos φ\,\vec{e}'_x − E_x \sin φ\,\vec{e}'_y = E_x(\vec{e}'_x · \vec{e}_x)\,\vec{e}'_x + E_x(\vec{e}'_y · \vec{e}_x)\,\vec{e}'_y

Now we can easily compute the ratio between the intensity before and after the x'-polarizer. We find that it is

\frac{I_{after}}{I_{before}} = |\vec{e}'_x · \vec{e}_x|^2 = \cos^2 φ   (1.13)

or, if we describe the light in terms of states with normalized intensity as in Eq. (1.8), then we find that

\frac{I_{after}}{I_{before}} = |\vec{e}'_x · \vec{E}_N|^2 = \frac{|\vec{e}'_x · \vec{E}_N|^2}{|\vec{e}'_x · \vec{E}_N|^2 + |\vec{e}'_y · \vec{E}_N|^2}   (1.14)


where \vec{E}_N is the normalized intensity state of the light after the x-polarizer. This demonstrates that the scalar product between vectors plays an important role in the calculation of the intensities (and therefore the probabilities in the photon picture).

Varying the angle φ between the two bases we can see that the ratio of outgoing to incoming intensity decreases with increasing angle between the two axes until the angle reaches 90 degrees.
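A minimal numerical check of Eq. (1.13) — again, not part of the original notes; the angle is an arbitrary choice — confirms that the transmitted fraction is cos²φ:

```python
import numpy as np

phi = np.deg2rad(30.0)                        # angle between the x- and x'-axes (arbitrary)

e_x  = np.array([1.0, 0.0])                   # old basis vector e_x
e_xp = np.array([np.cos(phi), np.sin(phi)])   # rotated basis vector e'_x, Eq. (1.11)

E_N = e_x                                     # normalized state after the x-polarizer

ratio = abs(np.dot(e_xp, E_N))**2             # |e'_x . E_N|^2, Eq. (1.13)
print(ratio, np.cos(phi)**2)                  # both equal cos^2(phi)
```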

Interpretation: Viewed in the photon picture this is a rather surprising result, as we would have thought that after passing the x-polarizer the photon is 'objectively' in the x-polarized state. However, upon probing it with an x'-polarizer we find that it also has a quality of an x'-polarized state. In the next experiment we will see an even more worrying result. For the moment we note that the state of a photon can be written in different ways, and this freedom corresponds to the fact that in quantum mechanics we can write the quantum state in many different ways as a quantum superposition of basis vectors.

Let us push this idea a bit further by using three polarizers in a row.

Experiment III: If, after passing the x-polarizer, the light falls onto a y-polarizer (see Fig. 1.4), then no light will go through this polarizer because the two directions are perpendicular to each other.

Figure 1.4: Light of arbitrary polarization is hitting an x-polarizer and subsequently a y-polarizer. No light goes through both polarizers.

This simple experimental result changes when we place an additional polarizer between the x- and the y-polarizer. Assume that we place an x'-polarizer between the two polarizers. Then we will observe light after the y-polarizer (see Fig. 1.5), depending on the orientation of x'. The light after the last polarizer is described by E_y\vec{e}_y. The amplitude E_y is calculated analogously to Experiment II. Now let us describe the (x-x'-y) experiment mathematically. The complex electric field (without the time dependence) is given by


Figure 1.5: An x'-polarizer is placed in between an x-polarizer and a y-polarizer. Now we observe light passing through the y-polarizer.

before the x-polarizer:

\vec{E}_1 = E_x\vec{e}_x + E_y\vec{e}_y .

after the x-polarizer:

\vec{E}_2 = (\vec{E}_1 · \vec{e}_x)\,\vec{e}_x = E_x\vec{e}_x = E_x \cos φ\,\vec{e}'_x − E_x \sin φ\,\vec{e}'_y .

after the x'-polarizer:

\vec{E}_3 = (\vec{E}_2 · \vec{e}'_x)\,\vec{e}'_x = E_x \cos φ\,\vec{e}'_x = E_x \cos^2 φ\,\vec{e}_x + E_x \cos φ \sin φ\,\vec{e}_y .

after the y-polarizer:

\vec{E}_4 = (\vec{E}_3 · \vec{e}_y)\,\vec{e}_y = E_x \cos φ \sin φ\,\vec{e}_y = E_y\vec{e}_y .

Therefore the ratio between the intensity before the x'-polarizer and after the y-polarizer is given by

\frac{I_{after}}{I_{before}} = \cos^2 φ \, \sin^2 φ   (1.15)

Interpretation: Again, if we interpret this result in the photon picture, then we arrive at the conclusion that the probability for the photon to pass through both the x'- and the y-polarizer is given by cos²φ sin²φ. This experiment further highlights the fact that light of one polarization may be interpreted as a superposition of light of other polarizations. This superposition is represented by adding vectors with complex coefficients. If we consider this situation in the photon picture, we have to accept that a photon of a particular polarization can also be interpreted as a superposition of different polarization states.
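The chain of projections in Experiment III can also be verified numerically. The sketch below is not part of the original notes; the helper polarizer() and the angle φ are illustrative constructions. It sends x-polarized light through an x'-polarizer and then a y-polarizer and recovers the ratio cos²φ sin²φ of Eq. (1.15).

```python
import numpy as np

def polarizer(theta):
    """Projection matrix onto the linear polarization direction at angle theta."""
    d = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(d, d)

phi = np.deg2rad(30.0)                        # orientation of the middle (x') polarizer

E_in  = np.array([1.0, 0.0])                  # x-polarized light entering the x'-polarizer
E_out = polarizer(np.pi / 2) @ (polarizer(phi) @ E_in)   # x'-polarizer, then y-polarizer

ratio = np.sum(E_out**2) / np.sum(E_in**2)    # intensity ratio
print(ratio, (np.cos(phi) * np.sin(phi))**2)  # both equal cos^2(phi) sin^2(phi), Eq. (1.15)
```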


Conclusion: All these observations suggest that complex vectors, their amplitudes, scalar products and linear transformations between complex vectors are the basic ingredients in the mathematical structure of quantum mechanics, as opposed to the real vector space of classical mechanics. Therefore the rest of this chapter will be devoted to a more detailed introduction to the structure of complex vector spaces and their properties.

Suggestions for further reading:
G. Baym, Lectures on Quantum Mechanics, W.A. Benjamin, 1969.
P.A.M. Dirac, The Principles of Quantum Mechanics, Oxford University Press, 1958.

End of 1st lecture

1.2.2 Complex vector spaces

I will now give you a formal definition of a complex vector space and will then present some of its properties. Before I come to this definition, I introduce a standard notation that I will use in this chapter. Given some set V we define the following.

Notation:

1. ∀|x〉 ∈ V means: For all |x〉 that lie in V .

2. ∃|x〉 ∈ V means: There exists an element |x〉 that lies in V .

Note that I have used a somewhat unusual notation for vectors. I have replaced the vector arrow on top of the letter by a sort of bracket around the letter. I will use this notation when I talk about complex vectors, in particular when I talk about state vectors.

Now I can state the definition of the complex vector space. It will look a bit abstract at the beginning, but you will soon get used to it, especially when you solve some problems with it.

Definition 1 Given a quadruple (V, C, +, ·) where V is a set of objects (usually called vectors), C denotes the set of complex numbers, '+' denotes the group operation of addition and '·' denotes the multiplication of a vector with a complex number. (V, C, +, ·) is called a complex vector space if the following properties are satisfied:

1. (V, +) is an Abelian group, which means that

(a) ∀|a〉, |b〉 ∈ V ⇒ |a〉 + |b〉 ∈ V. (closure)

(b) ∀|a〉, |b〉, |c〉 ∈ V ⇒ |a〉 + (|b〉 + |c〉) = (|a〉 + |b〉) + |c〉. (associative)

(c) ∃|O〉 ∈ V so that ∀|a〉 ∈ V ⇒ |a〉 + |O〉 = |a〉. (zero)

(d) ∀|a〉 ∈ V : ∃(−|a〉) ∈ V so that |a〉 + (−|a〉) = |O〉. (inverse)

(e) ∀|a〉, |b〉 ∈ V ⇒ |a〉 + |b〉 = |b〉 + |a〉. (Abelian)

2. The scalar multiplication satisfies

(a) ∀α ∈ C, |x〉 ∈ V ⇒ α|x〉 ∈ V

(b) ∀|x〉 ∈ V ⇒ 1 · |x〉 = |x〉 (unit)

(c) ∀c, d ∈ C, |x〉 ∈ V ⇒ (c · d) · |x〉 = c · (d · |x〉) (associative)

(d) ∀c, d ∈ C, |x〉, |y〉 ∈ V ⇒ c · (|x〉 + |y〉) = c · |x〉 + c · |y〉 and (c + d) · |x〉 = c · |x〉 + d · |x〉. (distributive)

This definition looks quite abstract, but a few examples will make it clearer.

Example:

1. A simple proof
   I would like to show how to prove the statement 0 · |x〉 = |O〉. This might look trivial, but nevertheless we need to prove it, as it has not been stated as an axiom. From the axioms given in Def. 1 we conclude:

   |O〉 = −|x〉 + |x〉                      (by 1d)
       = −|x〉 + 1 · |x〉                   (by 2b)
       = −|x〉 + (1 + 0) · |x〉
       = −|x〉 + 1 · |x〉 + 0 · |x〉         (by 2d)
       = −|x〉 + |x〉 + 0 · |x〉             (by 2b)
       = |O〉 + 0 · |x〉                    (by 1d)
       = 0 · |x〉 .                        (by 1c)

2. The C²
   This is the set of two-component vectors of the form

   |a〉 = \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} ,   (1.16)

   where the a_i are complex numbers. The addition and scalar multiplication are defined as

   \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} := \begin{pmatrix} a_1 + b_1 \\ a_2 + b_2 \end{pmatrix}   (1.17)

   c · \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} := \begin{pmatrix} c · a_1 \\ c · a_2 \end{pmatrix}   (1.18)

   It is now easy to check that V = C² together with the addition and scalar multiplication defined above satisfies the definition of a complex vector space. (You should check this yourself to get some practise with the axioms of a vector space.) The vector space C² is the one that is used for the description of spin-1/2 particles such as electrons.

3. The set of real functions of one variable f : R → R
   The group operations are defined as

   (f_1 + f_2)(x) := f_1(x) + f_2(x)
   (c · f)(x) := c · f(x)

   Again it is easy to check that all the properties of a complex vector space are satisfied.

4. Complex n × n matrices
   The elements of the vector space are

   M = \begin{pmatrix} m_{11} & \ldots & m_{1n} \\ \vdots & \ddots & \vdots \\ m_{n1} & \ldots & m_{nn} \end{pmatrix} ,   (1.19)


   where the m_{ij} are arbitrary complex numbers. The addition and scalar multiplication are defined as

   \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \ldots & a_{nn} \end{pmatrix} + \begin{pmatrix} b_{11} & \ldots & b_{1n} \\ \vdots & \ddots & \vdots \\ b_{n1} & \ldots & b_{nn} \end{pmatrix} = \begin{pmatrix} a_{11} + b_{11} & \ldots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} + b_{n1} & \ldots & a_{nn} + b_{nn} \end{pmatrix} ,

   c · \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \ldots & a_{nn} \end{pmatrix} = \begin{pmatrix} c · a_{11} & \ldots & c · a_{1n} \\ \vdots & \ddots & \vdots \\ c · a_{n1} & \ldots & c · a_{nn} \end{pmatrix} .

   Again it is easy to confirm that the set of complex n × n matrices with the rules that we have defined here forms a vector space. Note that we are used to considering matrices as objects acting on vectors, but as we can see here we can also consider them as elements (vectors) of a vector space themselves.

Why did I make such an abstract definition of a vector space? Well, it may seem a bit tedious, but it has a real advantage. Once we have introduced the abstract notion of a complex vector space, anything we can prove directly from these abstract laws in Definition 1 will hold true for any vector space, irrespective of how complicated it will look superficially. What we have done is to isolate the basic structure of vector spaces without referring to any particular representation of the elements of the vector space. This is very useful, because we do not need to prove the same property over and over again when we investigate some new objects. All we need to do is to prove that our new objects have an addition and a scalar multiplication that satisfy the conditions stated in Definition 1.

In the following subsections we will continue our exploration of the idea of complex vector spaces and we will learn a few useful properties that will be helpful for the future.

1.2.3 Basis and Dimension

Some of the most basic concepts of vector spaces are those of linear independence, dimension and basis. They will help us to express vectors in terms of other vectors and are useful when we want to define operators on vector spaces which will describe observable quantities.

Quite obviously some vectors can be expressed by linear combinations of others. For example

\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + 2 · \begin{pmatrix} 0 \\ 1 \end{pmatrix} .   (1.20)

It is natural to consider a given set of vectors {|x〉1, . . . , |x〉k} and to ask whether a vector in this set can be expressed as a linear combination of the others. Instead of answering this question directly we will first consider a slightly different question. Given a set of vectors {|x〉1, . . . , |x〉k}, can the null vector |O〉 be expressed as a linear combination of these vectors? This means that we are looking for a linear combination of vectors of the form

λ1|x〉1 + . . . + λk|x〉k = |O〉 .   (1.21)

Clearly Eq. (1.21) can be satisfied when all the λi vanish. But this case is trivial and we would like to exclude it. Now there are two possible cases left:

a) There is no combination of λi's, not all of which are zero, that satisfies Eq. (1.21).

b) There are combinations of λi's, not all of which are zero, that satisfy Eq. (1.21).

These two situations will get different names and are worth the following definition.

Definition 2 A set of vectors {|x〉1, . . . , |x〉k} is called linearly independent if the equation

λ1|x〉1 + . . . + λk|x〉k = |O〉   (1.22)

has only the trivial solution λ1 = . . . = λk = 0.
If there is a nontrivial solution to Eq. (1.22), i.e. at least one of the λi ≠ 0, then we call the vectors {|x〉1, . . . , |x〉k} linearly dependent.

Now we are coming back to our original question as to whether there are vectors in {|x〉1, . . . , |x〉k} that can be expressed by all the other vectors in that set. As a result of this definition we can see the following


Lemma 3 For a set of linearly independent vectors {|x〉1, . . . , |x〉k}, no |x〉i can be expressed as a linear combination of the other vectors, i.e. one cannot find λj that satisfy the equation

λ1|x〉1 + . . . + λi−1|x〉i−1 + λi+1|x〉i+1 + . . . + λk|x〉k = |x〉i .   (1.23)

In a set of linearly dependent vectors {|x〉1, . . . , |x〉k} there is at least one |x〉i that can be expressed as a linear combination of all the other |x〉j.

Proof: Exercise! □

Example: The set {|O〉}, consisting of the null vector only, is linearly dependent.

In a sense that will become clearer when we really talk about quantum mechanics, in a linearly independent set of vectors each vector has some quality that none of the other vectors have.

After we have introduced the notion of linear dependence, we can now proceed to define the dimension of a vector space. I am sure that you have a clear intuitive picture of the notion of dimension. Evidently a plane surface is 2-dimensional and space is 3-dimensional. Why do we say this? Consider a plane, for example. Clearly, every vector in the plane can be expressed as a linear combination of any two linearly independent vectors |e〉1, |e〉2. As a result you will not be able to find a set of three linearly independent vectors in a plane, while two linearly independent vectors can be found. This is the reason to call a plane a two-dimensional space. Let's formalize this observation in

Definition 4 The dimension of a vector space V is the largest number of linearly independent vectors in V that one can find.

Now we introduce the notion of basis of vector spaces.

Definition 5 A set of vectors {|x〉1, . . . , |x〉k} is called a basis of a vector space V if

a) |x〉1, . . . , |x〉k are linearly independent.

b) ∀|x〉 ∈ V : ∃λi ∈ C ⇒ |x〉 = \sum_{i=1}^{k} λi|x〉i.

Condition b) states that it is possible to write every vector as a linear combination of the basis vectors. The first condition makes sure that the set {|x〉1, . . . , |x〉k} is the smallest possible set to allow for condition b) to be satisfied. It turns out that any basis of an N-dimensional vector space V contains exactly N vectors. Let us illustrate the notion of basis.

Example:

1. Consider the space of vectors C² with two components. Then the two vectors

   |x〉1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} ,  |x〉2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}   (1.24)

   form a basis of C². A basis for C^N can easily be constructed in the same way.

2. An example for an infinite dimensional vector space is the space of complex polynomials, i.e. the set

   V = {c0 + c1 z + . . . + ck z^k | arbitrary k and ∀ci ∈ C} .   (1.25)

   Two polynomials are equal when they give the same values for all z ∈ C. Addition and scalar multiplication are defined coefficient-wise. It is easy to see that the set {1, z, z², . . .} is linearly independent and that it contains infinitely many elements. Together with other examples you will prove (in the problem sheets) that Eq. (1.25) indeed describes a vector space.

End of 2nd lecture

1.2.4 Scalar products and Norms on Vector Spaces

In the preceding section we have learnt about the concept of a basis. Any set of N linearly independent vectors of an N dimensional vector space V forms a basis. But not all such choices are equally convenient. To find useful ways to choose a basis and to find a systematic method to find the linear combinations of basis vectors that give any arbitrary vector |x〉 ∈ V, we will now introduce the concept of the scalar product between two vectors. This is not to be confused with scalar multiplication, which deals with a complex number and a vector. The concept of the scalar product then allows us to formulate what we mean by orthogonality. Subsequently we will define the norm of a vector, which is the abstract formulation of what we normally call a length. This will then allow us to introduce orthonormal bases, which are particularly handy.

The scalar product will play an extremely important role in quantum mechanics as it will in a sense quantify how similar two vectors (quantum states) are. In Fig. 1.6 you can easily see qualitatively that the pairs of vectors become more and more different from left to right. The scalar product puts this into a quantitative form. This is the reason why it can then be used in quantum mechanics to quantify how likely it is for two quantum states to exhibit the same behaviour in an experiment.

Figure 1.6: The two vectors on the left are equal, the next pair is 'almost' equal and the final pair is quite different. The notion of equal and different will be quantified by the scalar product.

To introduce the scalar product we begin with an abstract formulation of the properties that we would like any scalar product to have. Then we will have a look at examples which are of interest for quantum mechanics.

Definition 6 A complex scalar product on a vector space assigns to any two vectors |x〉, |y〉 ∈ V a complex number (|x〉, |y〉) ∈ C satisfying the following rules:


1. ∀|x〉, |y〉, |z〉 ∈ V, αi ∈ C :

(|x〉, α1|y〉+ α2|z〉) = α1(|x〉, |y〉) + α2(|x〉, |z〉) (linearity)

2. ∀|x〉, |y〉 ∈ V : (|x〉, |y〉) = (|y〉, |x〉)∗ (symmetry)

3. ∀|x〉 ∈ V : (|x〉, |x〉) ≥ 0 (positivity)

4. ∀|x〉 ∈ V : (|x〉, |x〉) = 0⇔ |x〉 = |O〉

These properties are very much like the ones that you know from the ordinary dot product for real vectors, except for property 2, which we had to introduce in order to deal with the fact that we are now using complex numbers. In fact, you should show as an exercise that the ordinary condition (|x〉, |y〉) = (|y〉, |x〉) would lead to a contradiction with the other axioms if we have complex vector spaces. Note that we only defined linearity in the second argument. This is in fact all we need to do. As an exercise you may prove that any scalar product is anti-linear in the first component, i.e.

∀|x〉, |y〉, |z〉 ∈ V, α, β ∈ C : (α|x〉 + β|y〉, |z〉) = α*(|x〉, |z〉) + β*(|y〉, |z〉) .   (1.26)

Note that vector spaces on which we have defined a scalar product are also called unitary vector spaces. To make you more comfortable with the scalar product, I will now present some examples that play significant roles in quantum mechanics.

Examples:

1. The scalar product in C^n.
   Given two complex vectors |x〉, |y〉 ∈ C^n with components xi and yi we define the scalar product

   (|x〉, |y〉) = \sum_{i=1}^{n} x_i^* y_i   (1.27)

   where * denotes the complex conjugation. It is easy to convince yourself that Eq. (1.27) indeed defines a scalar product. (Do it!)


2. Scalar product on continuous square integrable functions
   A square integrable function ψ ∈ L²(R) is one that satisfies

   \int_{−∞}^{∞} |ψ(x)|² dx < ∞   (1.28)

   Eq. (1.28) already implies how to define the scalar product for these square integrable functions. For any two functions ψ, φ ∈ L²(R) we define

   (ψ, φ) = \int_{−∞}^{∞} ψ(x)* φ(x) dx .   (1.29)

   Again you can check that the definition Eq. (1.29) satisfies all properties of a scalar product. (Do it!) We can even define the scalar product for discontinuous square integrable functions, but then we need to be careful when we are trying to prove property 4 for scalar products. One reason is that there are functions which are nonzero only in isolated points (such functions are discontinuous) and for which the integral in Eq. (1.28) vanishes. An example is the function

   f(x) = 1 for x = 0, and f(x) = 0 anywhere else.

   The solution to this problem lies in a redefinition of the elements of our set. If we identify all functions that differ from each other only in countably many points, then we have to say that they are in fact the same element of the set. If we use this redefinition then we can see that also condition 4 of a scalar product is satisfied.
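Both scalar products can be explored numerically. The sketch below is not part of the original notes; the vectors and functions are arbitrary test cases. It evaluates Eq. (1.27) with NumPy, checks the anti-linearity property of Eq. (1.26), and approximates the integral of Eq. (1.29) by a simple Riemann sum.

```python
import numpy as np

# --- Eq. (1.27): scalar product on C^n -------------------------------------
x = np.array([1 + 2j, 3 - 1j])
y = np.array([0.5j, 2 + 1j])

sp = np.sum(np.conj(x) * y)        # conjugate the components of the first vector
print(sp, np.vdot(x, y))           # np.vdot uses exactly this convention

# Anti-linearity in the first argument, Eq. (1.26)
a = 2 - 3j
print(np.vdot(a * x, y), np.conj(a) * np.vdot(x, y))   # equal

# --- Eq. (1.29): scalar product on square integrable functions -------------
t = np.linspace(-10, 10, 4001)
dt = t[1] - t[0]
psi = np.exp(-t**2) * np.exp(1j * t)        # a complex Gaussian wave packet
phi = np.exp(-(t - 1)**2 / 2)               # a shifted real Gaussian

sp_functions = np.sum(np.conj(psi) * phi) * dt   # Riemann-sum approximation of (psi, phi)
print(sp_functions)
```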

An extremely important property of the scalar product is the Schwarz inequality, which is used in many proofs. In particular I will use it to prove the triangular inequality for the length of a vector and in the proof of the uncertainty principle for arbitrary observables.

Theorem 7 (The Schwarz inequality) For any |x〉, |y〉 ∈ V we have

|(|x〉, |y〉)|2 ≤ (|x〉, |x〉)(|y〉, |y〉) . (1.30)


Proof: For any complex number α we have

0 ≤ (|x〉 + α|y〉, |x〉 + α|y〉)
  = (|x〉, |x〉) + α(|x〉, |y〉) + α*(|y〉, |x〉) + |α|²(|y〉, |y〉)
  = (|x〉, |x〉) + 2v Re(|x〉, |y〉) − 2w Im(|x〉, |y〉) + (v² + w²)(|y〉, |y〉)
  =: f(v, w)   (1.31)

In the definition of f(v, w) in the last row we have assumed α = v + iw. To obtain the sharpest possible bound in Eq. (1.30), we need to minimize the right hand side of Eq. (1.31). To this end we calculate

0 = \frac{∂f}{∂v}(v, w) = 2 Re(|x〉, |y〉) + 2v(|y〉, |y〉)   (1.32)

0 = \frac{∂f}{∂w}(v, w) = −2 Im(|x〉, |y〉) + 2w(|y〉, |y〉) .   (1.33)

Solving these equations, we find

α_min = v_min + i w_min = −\frac{Re(|x〉, |y〉) − i Im(|x〉, |y〉)}{(|y〉, |y〉)} = −\frac{(|y〉, |x〉)}{(|y〉, |y〉)} .   (1.34)

Because the matrix of second derivatives is positive definite, we really have a minimum. If we insert this value into Eq. (1.31) we obtain

0 ≤ (|x〉, |x〉) − \frac{(|y〉, |x〉)(|x〉, |y〉)}{(|y〉, |y〉)}   (1.35)

This then implies Eq. (1.30). Note that we have equality exactly if the two vectors are linearly dependent, i.e. if |x〉 = γ|y〉. □

Quite a few proofs in books are a little bit shorter than this one because they just use Eq. (1.34) and do not justify its origin as it was done here.

Having defined the scalar product, we are now in a position to define what we mean by orthogonal vectors.

Definition 8 Two vectors |x〉, |y〉 ∈ V are called orthogonal if

(|x〉, |y〉) = 0 . (1.36)

We denote with |x〉⊥ a vector that is orthogonal to |x〉.


Now we can define the concept of an orthogonal basis, which will be very useful in finding the linear combination of vectors that gives |x〉.

Definition 9 An orthogonal basis of an N dimensional vector space V is a set of N linearly independent vectors such that each pair of vectors is orthogonal to each other.

Example: In C³ the three vectors

\begin{pmatrix} 0 \\ 0 \\ 2 \end{pmatrix} ,  \begin{pmatrix} 0 \\ 3 \\ 0 \end{pmatrix} ,  \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}   (1.37)

form an orthogonal basis.

Planned end of 3rd lecture

Now let us choose an orthogonal basis {|x〉1, . . . , |x〉N} of an N dimensional vector space. For any arbitrary vector |x〉 ∈ V we would like to find the coefficients λ1, . . . , λN such that

\sum_{i=1}^{N} λi|x〉i = |x〉 .   (1.38)

Of course we can obtain the λi by trial and error, but we would like to find an efficient way to determine the coefficients λi. To see this, let us consider the scalar product between |x〉 and one of the basis vectors |x〉i. Because of the orthogonality of the basis vectors, we find

(|x〉i, |x〉) = λi(|x〉i, |x〉i) . (1.39)

Note that this result holds true only because we have used an orthogonal basis. Using Eq. (1.39) in Eq. (1.38), we find that for an orthogonal basis any vector |x〉 can be represented as

|x〉 = \sum_{i=1}^{N} \frac{(|x〉i, |x〉)}{(|x〉i, |x〉i)} |x〉i .   (1.40)

In Eq. (1.40) we have the denominator (|x〉i, |x〉i), which makes the formula a little bit clumsy. This quantity is the square of what we usually call the length of a vector.


Figure 1.7: A vector |x〉 = \begin{pmatrix} a \\ b \end{pmatrix} in R². From plane geometry we know that its length is \sqrt{a² + b²}, which is just the square root of the scalar product of the vector with itself.

This idea is illustrated in Fig. 1.7, which shows a vector |x〉 = \begin{pmatrix} a \\ b \end{pmatrix} in the two-dimensional real vector space R². Clearly its length is \sqrt{a² + b²}. What other properties does the length have in this intuitively clear picture? If I multiply the vector by a number α then we have the vector α|x〉, which evidently has the length \sqrt{α²a² + α²b²} = |α|\sqrt{a² + b²}. Finally, we know that we have a triangular inequality. This means that, given two vectors |x〉1 = \begin{pmatrix} a_1 \\ b_1 \end{pmatrix} and |x〉2 = \begin{pmatrix} a_2 \\ b_2 \end{pmatrix}, the length of |x〉1 + |x〉2 is smaller than the sum of the lengths of |x〉1 and |x〉2. This is illustrated in Fig. 1.8. In the following I formalize the concept of a length and we will arrive at the definition of the norm of a vector |x〉. The concept of a norm is important if we want to define what we mean by two vectors being close to one another. In particular, norms are necessary for the definition of convergence in vector spaces, a concept that I will introduce in the next subsection. In the following I specify what properties a norm of a vector should satisfy.

Definition 10 A norm on a vector space V associates with every |x〉 ∈ V a real number |||x〉||, with the properties:

1. ∀|x〉 ∈ V : |||x〉|| ≥ 0 and |||x〉|| = 0 ⇔ |x〉 = |O〉. (positivity)

2. ∀|x〉 ∈ V, α ∈ C : ||α|x〉|| = |α| · |||x〉||. (linearity)


Figure 1.8: The triangular inequality illustrated for two vectors |x〉 and |y〉. The length of |x〉 + |y〉 is smaller than the sum of the lengths of |x〉 and |y〉.

3. ∀|x〉, |y〉 ∈ V : |||x〉 + |y〉|| ≤ |||x〉|| + |||y〉||. (triangular inequality)

A vector space with a norm defined on it is also called a normed vector space. The three properties in Definition 10 are those that you would intuitively expect to be satisfied for any decent measure of length. As expected, norms and scalar products are closely related. In fact, there is a way of generating norms very easily when you already have a scalar product.

Lemma 11 Given a scalar product on a complex vector space, we can define the norm of a vector |x〉 by

|||x〉|| = \sqrt{(|x〉, |x〉)} .   (1.41)

Proof:

• Properties 1 and 2 of the norm follow almost trivially from the four basic conditions of the scalar product.


• The proof of the triangular inequality uses the Schwarz inequality.

|||x〉 + |y〉||² = |(|x〉 + |y〉, |x〉 + |y〉)|
             = |(|x〉, |x〉 + |y〉) + (|y〉, |x〉 + |y〉)|
             ≤ |(|x〉, |x〉 + |y〉)| + |(|y〉, |x〉 + |y〉)|
             ≤ |||x〉|| · |||x〉 + |y〉|| + |||y〉|| · |||x〉 + |y〉|| .   (1.42)

Dividing both sides by |||x〉 + |y〉|| yields the inequality. This assumes that the sum |x〉 + |y〉 ≠ |O〉. If we have |x〉 + |y〉 = |O〉, then the triangular inequality is trivially satisfied. □

Lemma 11 shows that any unitary vector space can canonically (this means that there is basically one natural choice) be turned into a normed vector space. The converse is, however, not true. Not every norm automatically gives rise to a scalar product (examples will be given in the exercises).

Using the concept of the norm we can now define an orthonormal basis, for which Eq. (1.40) can then be simplified.

Definition 12 An orthonormal basis of an N dimensional vector space is a set of N pairwise orthogonal linearly independent vectors {|x〉1, . . . , |x〉N} where each vector satisfies |||x〉i||² = (|x〉i, |x〉i) = 1, i.e. they are unit vectors. For an orthonormal basis and any vector |x〉 we have

|x〉 = \sum_{i=1}^{N} (|x〉i, |x〉) |x〉i = \sum_{i=1}^{N} αi|x〉i ,   (1.43)

where the components of |x〉 with respect to the basis {|x〉1, . . . , |x〉N} are the αi = (|x〉i, |x〉).
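The expansion in Eq. (1.43) is easy to verify numerically. In the sketch below (not part of the original notes; the basis and the vector are arbitrary illustrative choices) a vector of C² is expanded in a rotated orthonormal basis and reassembled from its components αi = (|x〉i, |x〉).

```python
import numpy as np

# A rotated orthonormal basis of C^2 (illustrative choice)
theta = 0.3
e1 = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
e2 = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)

x = np.array([1.0 + 2.0j, -0.5j])           # arbitrary vector to expand

# Components alpha_i = (|x>_i, |x>), Eq. (1.43); np.vdot conjugates its first argument
alpha1, alpha2 = np.vdot(e1, x), np.vdot(e2, x)
reconstruction = alpha1 * e1 + alpha2 * e2

print(np.allclose(reconstruction, x))       # True: the expansion reproduces x
```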

Remark: Note that in Definition 12 it was not really necessary to demand the linear independence of the vectors {|x〉1, . . . , |x〉N}, because this follows from the fact that they are normalized and orthogonal. Try to prove this as an exercise.

Now I have defined what an orthonormal basis is, but you still do not know how to construct it. There are quite a few different methods to do so. I will present probably the most well-known procedure, which has the name Gram-Schmidt procedure.

This will not be presented in the lecture. You should study this at home.

There will be an exercise on this topic in the Rapid Feedback class.

The starting point of the Gram-Schmidt orthogonalization procedure is a set of linearly independent vectors S = {|x〉1, . . . , |x〉n}. Now we would like to construct from them an orthonormal set of vectors {|e〉1, . . . , |e〉n}. The procedure goes as follows.

First step We choose |f〉1 = |x〉1 and then construct from it the normalized vector |e〉1 = |x〉1/|||x〉1||.
Comment: We can normalize the vector |x〉1 because the set S is linearly independent and therefore |x〉1 ≠ 0.

Second step We now construct |f〉2 = |x〉2 − (|e〉1, |x〉2)|e〉1 and from this the normalized vector |e〉2 = |f〉2/|||f〉2||.
Comment: 1) |f〉2 ≠ |O〉 because |x〉1 and |x〉2 are linearly independent.
2) By taking the scalar product (|e〉2, |e〉1) we find straight away that the two vectors are orthogonal.

...

k-th step We construct the vector

|f〉k = |x〉k − \sum_{i=1}^{k−1} (|e〉i, |x〉k) |e〉i .

Because of the linear independence of S we have |f〉k ≠ |O〉. The normalized vector is then given by

|e〉k = \frac{|f〉k}{|||f〉k||} .

It is easy to check that the vector |e〉k is orthogonal to all |e〉i with i < k.

n-th step With this step the procedure finishes. We end up with a set of vectors {|e〉1, . . . , |e〉n} that are pairwise orthogonal and normalized.
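The procedure above translates almost line by line into code. The following NumPy sketch (not part of the original notes; it follows the steps literally and ignores questions of numerical stability) orthonormalizes a set of linearly independent vectors.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent complex vectors."""
    ortho = []
    for x in vectors:
        f = x.astype(complex)
        for e in ortho:
            f = f - np.vdot(e, x) * e          # subtract the projection (|e>_i, |x>_k)|e>_i
        ortho.append(f / np.linalg.norm(f))    # f is nonzero by linear independence
    return ortho

# Example: three linearly independent vectors in C^3 (arbitrary choice)
S = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0j])]

E = gram_schmidt(S)
print(np.round([[np.vdot(a, b) for b in E] for a in E], 10))   # the identity matrix
```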


1.2.5 Completeness and Hilbert spaces

In the preceding sections we have encountered a number of basic ideas about vector spaces. We have introduced scalar products, norms, bases and the idea of dimension. In order to be able to define a Hilbert space, the state space of quantum mechanics, we require one other concept, that of completeness, which I will introduce in this section.

What do we mean by complete? To see this, let us consider sequences of elements of a vector space (or in fact any set, but we are only interested in vector spaces). I will write sequences in two different ways

{|x〉i}i=0,...,∞ ≡ (|x〉0, |x〉1, |x〉2, . . .) .   (1.44)

To define what we mean by a convergent sequence, we use norms because we need to be able to specify when two vectors are close to each other.

Definition 13 A sequence {|x〉i}i=0,...,∞ of elements from a normed vector space V converges towards a vector |x〉 ∈ V if for all ε > 0 there is an n0 such that for all n > n0 we have

|||x〉 − |x〉n|| ≤ ε . (1.45)

But sometimes you do not know the limiting element, so you would like to find some other criterion for convergence without referring to the limiting element. This idea led to the following

Definition 14 A sequence {|x〉i}i=0,...,∞ of elements from a normed vector space V is called a Cauchy sequence if for all ε > 0 there is an n0 such that for all m, n > n0 we have

|||x〉m − |x〉n|| ≤ ε . (1.46)

Planned end of 4th lecture

Now you can wonder whether every Cauchy sequence converges. Well, it sort of does. But unfortunately sometimes the limiting element does not lie in the set from which you draw the elements of your sequence. How can that be? To illustrate this I will present a vector space that is not complete! Consider the set

V = {|x〉 : only finitely many components of |x〉 are non-zero} .

An example for an element of V is |x〉 = (1, 2, 3, 4, 5, 0, . . .). It is now quite easy to check that V is a vector space when you define the addition of two vectors via

|x〉+ |y〉 = (x1 + y1, x2 + y2, . . .)

and the multiplication by a scalar via

c|x〉 = (cx1, cx2, . . .) .

Now I define a scalar product from which I will then obtain a norm via the construction of Lemma 11. We define the scalar product as

(|x〉, |y〉) = \sum_{k=1}^{∞} x_k^* y_k .

Now let us consider the sequence of vectors

|x〉1 = (1, 0, 0, 0, . . .)
|x〉2 = (1, 1/2, 0, . . .)
|x〉3 = (1, 1/2, 1/4, 0, . . .)
|x〉4 = (1, 1/2, 1/4, 1/8, 0, . . .)
...
|x〉k = (1, 1/2, . . . , 1/2^{k−1}, 0, . . .)

For any n0 we find that for m > n > n0 we have

|||x〉m − |x〉n|| = ||(0, . . . , 0, 1/2^n, . . . , 1/2^{m−1}, 0, . . .)|| ≤ \frac{1}{2^{n−1}} .

Therefore it is clear that the sequence {|x〉k}k=1,...,∞ is a Cauchy sequence. However, the limiting vector is not a vector from the vector space V, because the limiting vector contains infinitely many nonzero elements.
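A quick numerical illustration (not part of the original notes; the truncation length 64 is an arbitrary stand-in for 'infinitely many components') shows that the distance between members of this sequence indeed obeys the bound 1/2^{n−1}:

```python
import numpy as np

def x_k(k, length=64):
    """The k-th vector of the sequence: components 1, 1/2, ..., 1/2^(k-1), then zeros."""
    v = np.zeros(length)
    v[:k] = 0.5 ** np.arange(k)
    return v

n, m = 5, 12
diff_norm = np.linalg.norm(x_k(m) - x_k(n))
print(diff_norm, 1 / 2**(n - 1))    # the difference norm stays below the bound 1/2^(n-1)
```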

Considering this example, let us define what we mean by a complete vector space.

Definition 15 A vector space V is called complete if every Cauchy sequence of elements from the vector space V converges towards an element of V.

Now we come to the definition of Hilbert spaces.

Definition 16 A vector space H is a Hilbert space if it satisfies the following two conditions:

1. H is a unitary vector space.

2. H is complete.

Following our discussion of vector spaces, we are now in the position to formulate the first postulate of quantum mechanics.

Postulate 1 The state of a quantum system is described by a vector in a Hilbert space H.

Why did we postulate that the quantum mechanical state space is a Hilbert space? Is there a reason for this choice?

Let us argue physically. We know that we need to be able to represent superpositions, i.e. we need to have a vector space. From the superposition principle we can see that there will be states that are not orthogonal to each other. That means that to some extent one quantum state can be 'present' in another non-orthogonal quantum state – they 'overlap'. The extent to which the states overlap can be quantified by the scalar product between two vectors. In the first section we have also seen that the scalar product is useful to compute probabilities of measurement outcomes. You know already from your second year course that we need to normalize quantum states. This requires that we have a norm which can be derived from a scalar product. Because of the obvious usefulness of the scalar product, we require that the state space of quantum mechanics is a vector space equipped with a scalar product. The reason why we demand completeness can be seen from a physical argument which could run as follows. Consider any sequence of physical states that is a Cauchy sequence. Quite obviously we would expect this sequence to converge to a physical state. It would be extremely strange if by means of such a sequence we could arrive at an unphysical state. Imagine for example that we change a state by smaller and smaller amounts and then suddenly we would arrive at an unphysical state. That makes no sense! Therefore it seems reasonable to demand that the physical state space is complete.

What we have basically done is to distill the essential features of quantum mechanics and to find a mathematical object that represents these essential features without any reference to a special physical system.

In the next sections we will continue this programme to formulate more principles of quantum mechanics.

1.2.6 Dirac notation

In the following I will introduce a useful way of writing vectors. This notation, the Dirac notation, applies to any vector space and is very useful; in particular it makes life a lot easier in calculations. As most quantum mechanics books are written in this notation, it is quite important that you really learn how to use this way of writing vectors. If it appears a bit weird to you at first, you should just practise its use until you feel confident with it. A good exercise, for example, is to rewrite in Dirac notation all the results that I have presented so far.

So far we have always written a vector in the form |x〉. The scalar product between two vectors has then been written as (|x〉, |y〉). Let us now make the following identification

|x〉 ↔ |x〉 . (1.47)

We call |x〉 a ket. So far this is all fine and well. It is just a new notation for a vector. Now we would like to see how to rewrite the scalar product of two vectors. To understand this best, we need to talk a bit about linear functions of vectors.

Definition 17 A function f : V → C from a vector space into the complex numbers is called linear if for any |ψ〉, |φ〉 ∈ V and any α, β ∈ C we have

f(α|ψ〉+ β|φ〉) = αf(|ψ〉) + βf(|φ〉) (1.48)

With two linear functions f1, f2, the linear combination µf1 + νf2 is also a linear function. Therefore the linear functions themselves form a vector space and it is even possible to define a scalar product between linear functions. The space of the linear functions on a vector space V is called the dual space V*.

Now I would like to show you an example of a linear function which I define by using the scalar product between vectors. I define the function f|φ〉 : V → C, where |φ〉 ∈ V is a fixed vector, so that for all |ψ〉 ∈ V

f|φ〉(|ψ〉) := (|φ〉, |ψ〉) . (1.49)

Now I would like to introduce a new notation for f|φ〉. From now on I will identify

f|φ〉 ↔ 〈φ| (1.50)

and use this to rewrite the scalar product between two vectors |φ〉, |ψ〉 as

〈φ|ψ〉 := 〈φ|(|ψ〉) ≡ (|φ〉, |ψ〉) . (1.51)

The object 〈φ| is called bra and the Dirac notation is therefore sometimes called braket notation. Note that while the ket is a vector in the vector space V, the bra is an element of the dual space V∗.

At the moment you will not really be able to see the usefulness of this notation. But in the next section, when I introduce linear operators, you will realize that the Dirac notation makes quite a few notations and calculations a lot easier.
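The following short numpy sketch (my own illustration, not part of the notes) shows how kets, bras and the scalar product 〈φ|ψ〉 can be represented numerically: a ket as a complex component vector, the corresponding bra as its complex conjugate, and the braket as the sum over products of components. The dimension and the particular vectors are arbitrary choices.

```python
import numpy as np

# kets |psi>, |phi> as complex component vectors (dimension 3 chosen arbitrarily)
psi = np.array([1.0, 1.0j, 0.0])
phi = np.array([0.5, 0.0, 2.0])

bra_phi = phi.conj()                          # <phi| : the complex conjugate components
braket = bra_phi @ psi                        # <phi|psi> = sum_i phi_i^* psi_i

norm_psi = np.sqrt((psi.conj() @ psi).real)   # ||psi|| derived from the scalar product
print(braket, norm_psi)
```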

1.3 Linear Operators

So far we have only dealt with the elements (vectors) of vector spaces. Now we need to learn how to transform these vectors, that means how to transform one set of vectors into a different set. Again, as quantum mechanics is a linear theory, we will concentrate on the description of linear operators.

1.3.1 Definition in Dirac notation

Definition 18 A linear operator A : H → H associates to every vector |ψ〉 ∈ H a vector A|ψ〉 ∈ H such that

A(λ|ψ〉 + µ|φ〉) = λA|ψ〉 + µA|φ〉 (1.52)

for all |ψ〉, |φ〉 ∈ H and λ, µ ∈ C.

Planned end of 5th lecture

A linear operator A : H → H can be specified completely by describing its action on a basis set of H. To see this let us choose an orthonormal basis {|ei〉|i = 1, . . . , N}. Then we can calculate the action of A on this basis. We find that the basis {|ei〉|i = 1, . . . , N} is mapped into a new set of vectors {|fi〉|i = 1, . . . , N} following

|fi〉 := A|ei〉 . (1.53)

Of course every vector |fi〉 can be represented as a linear combination of the basis vectors {|ei〉|i = 1, . . . , N}, i.e.

|fi〉 = ∑k Aki|ek〉 . (1.54)

Combining Eqs. (1.53) and (1.54) and taking the scalar product with |ej〉 we find

Aji = 〈ej| ∑k Aki|ek〉 (1.55)
    = 〈ej|fi〉 ≡ 〈ej|A|ei〉 . (1.56)

The Aji are called the matrix elements of the linear operator A with respect to the orthonormal basis {|ei〉|i = 1, . . . , N}.


I will now go ahead and express linear operators in the Dirac notation. First I will present a particularly simple operator, namely the unit operator 1, which maps every vector into itself. Surprisingly enough this operator, expressed in the Dirac notation, will prove to be very useful in calculations. To find its representation in the Dirac notation, we consider an arbitrary vector |f〉 and express it in an orthonormal basis {|ei〉|i = 1, . . . , N}. We find

|f〉 = ∑_{j=1}^{N} fj|ej〉 = ∑_{j=1}^{N} |ej〉〈ej|f〉 . (1.57)

To check that this is correct you just form the scalar product between |f〉 and any of the basis vectors |ei〉. Now let us rewrite Eq. (1.57) a little bit, thereby defining the Dirac notation of operators.

|f〉 = ∑_{j=1}^{N} |ej〉〈ej|f〉 =: (∑_{j=1}^{N} |ej〉〈ej|)|f〉 (1.58)

Note that the right hand side is defined in terms of the left hand side. The object in the brackets is quite obviously the identity operator because it maps any vector |f〉 into the same vector |f〉. Therefore it is totally justified to say that

1 ≡ ∑_{j=1}^{N} |ej〉〈ej| . (1.59)

This was quite easy. We just moved some brackets around and we found a way to represent the unit operator using the Dirac notation. Now you can already guess what the general operator will look like, but I will carefully derive it using the identity operator. Clearly we have the following identity

A = 1A1 . (1.60)

Now let us use the Dirac notation of the identity operator in Eq. (1.59) and insert it into Eq. (1.60). We then find

A = (∑_{j=1}^{N} |ej〉〈ej|) A (∑_{k=1}^{N} |ek〉〈ek|)
  = ∑_{j,k=1}^{N} |ej〉(〈ej|A|ek〉)〈ek|
  = ∑_{j,k=1}^{N} (〈ej|A|ek〉)|ej〉〈ek|
  = ∑_{j,k=1}^{N} Ajk|ej〉〈ek| . (1.61)

Therefore you can express any linear operator in the Dirac notation, once you know its matrix elements in an orthonormal basis.

Matrix elements are quite useful when we want to write down linear operators in matrix form. Given an orthonormal basis {|ei〉|i = 1, . . . , N} we can write every vector as a column of numbers

|g〉 = ∑i gi|ei〉 ≐ (g1, . . . , gN)^T . (1.62)

Then we can write our linear operator in the same basis as a matrix

A ≐ ( A11 . . . A1N )
    (  ⋮    ⋱    ⋮ )
    ( AN1 . . . ANN )  . (1.63)

To convince you that the two notations that I have introduced give the same results in calculations, apply the operator A to any of the basis vectors and then repeat the same operation in the matrix formulation.
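As a concrete illustration (my own sketch, not from the notes), the following numpy fragment computes the matrix elements Aji = 〈ej|A|ei〉 of an arbitrary operator with respect to a randomly chosen orthonormal basis and verifies the expansion A = ∑jk Ajk|ej〉〈ek| of Eq. (1.61); both the operator and the basis are invented for the example.

```python
import numpy as np

N = 3
A = np.array([[1, 2, 0],
              [0, 1j, 1],
              [3, 0, 2]], dtype=complex)          # some linear operator

# an orthonormal basis {|e_i>}: the columns of a random unitary matrix
Q, _ = np.linalg.qr(np.random.rand(N, N) + 1j * np.random.rand(N, N))
e = [Q[:, i] for i in range(N)]

# matrix elements A_ji = <e_j| A |e_i>, Eq. (1.56)
A_elem = np.array([[e[j].conj() @ A @ e[i] for i in range(N)] for j in range(N)])

# reconstruct A = sum_{jk} A_jk |e_j><e_k|, Eq. (1.61)
A_rebuilt = sum(A_elem[j, k] * np.outer(e[j], e[k].conj())
                for j in range(N) for k in range(N))
print(np.allclose(A, A_rebuilt))    # True
```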

1.3.2 Adjoint and Hermitean Operators

Operators that appear in quantum mechanics are linear. But not all linear operators correspond to quantities that can be measured. Only a special subclass of operators describes physical observables. In this subsection I will describe these operators. In the following subsection I will then discuss some of their properties which then explain why these operators describe measurable quantities.

In the previous section we have considered the Dirac notation and in particular we have seen how to write the scalar product and matrix elements of an operator in this notation. Let us reconsider the matrix elements of an operator A in an orthonormal basis {|ei〉|i = 1, . . . , N}. We have

〈ei|(A|ej〉) = (〈ei|A)|ej〉 (1.64)

where we have written the scalar product in two ways. While the left hand side is clear, there is now the question of what the bra 〈ei|A on the right hand side means, or better, to which ket it corresponds. To see this we need to make the

Definition 19 The adjoint operator A† corresponding to the linear operator A is the operator such that for all |x〉, |y〉 we have

(A†|x〉, |y〉) := (|x〉, A|y〉) , (1.65)

or using the complex conjugation in Eq. (1.65) we have

〈y|A†|x〉 := 〈x|A|y〉∗ . (1.66)

In matrix notation, we obtain the matrix representing A† by transposition and complex conjugation of the matrix representing A. (Convince yourself of this.)
Example:

A = ( 1  2i )
    ( i   2 )          (1.67)

and

A† = (  1   −i )
     ( −2i   2 )          (1.68)

The following property of adjoint operators is often used.

Lemma 20 For operators A and B we find

(AB)† = B†A† . (1.69)

Proof: Eq. (1.69) is proven by

(|x〉, (AB)†|y〉) = ((AB)|x〉, |y〉)
                = (A(B|x〉), |y〉)        Now use Def. 19.
                = (B|x〉, A†|y〉)         Use Def. 19 again.
                = (|x〉, B†A†|y〉)


As this is true for any two vectors |x〉 and |y〉 the two operators (AB)† and B†A† are equal □.

It is quite obvious (see the above example) that in general an operator A and its adjoint operator A† are different. However, there are exceptions and these exceptions are very important.

Definition 21 An operator A is called Hermitean or self-adjoint if it is equal to its adjoint operator, i.e. if for all states |x〉, |y〉 we have

〈y|A|x〉 = 〈x|A|y〉∗ . (1.70)

In the finite dimensional case a self-adjoint operator is the same as a Hermitean operator. In the infinite-dimensional case Hermitean and self-adjoint are not equivalent.

The difference between self-adjoint and Hermitean is related to the domain of definition of the operators A and A†, which need not be the same in the infinite dimensional case. In the following I will basically always deal with finite-dimensional systems and I will therefore usually use the term Hermitean for an operator that is self-adjoint.
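A quick numerical check (illustration only, not part of the notes): the adjoint of a matrix is its conjugate transpose, so the matrix of Eq. (1.67) should reproduce Eq. (1.68), and Lemma 20, (AB)† = B†A†, can be verified directly. The second matrix B is an arbitrary choice.

```python
import numpy as np

A = np.array([[1, 2j],
              [1j, 2]])
B = np.array([[0, 1],
              [1, 0]])

print(A.conj().T)                                   # [[1, -1j], [-2j, 2]], cf. Eq. (1.68)
print(np.allclose((A @ B).conj().T,
                  B.conj().T @ A.conj().T))         # True: (AB)^dagger = B^dagger A^dagger
```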

1.3.3 Eigenvectors, Eigenvalues and the Spectral Theorem

Hermitean operators have a lot of nice properties and I will explore some of these properties in the following sections. Mostly these properties are concerned with the eigenvalues and eigenvectors of the operators. We start with the definition of eigenvalues and eigenvectors of a linear operator.

Definition 22 A linear operator A on an N-dimensional Hilbert space is said to have an eigenvector |λ〉 with corresponding eigenvalue λ if

A|λ〉 = λ|λ〉 , (1.71)

or equivalently

(A − λ1)|λ〉 = 0 . (1.72)

This definition of eigenvectors and eigenvalues immediately shows us how to determine the eigenvectors of an operator. Because Eq. (1.72) implies that the N columns of the operator A − λ1 are linearly dependent we need to have that

det(A − λ1) = 0 . (1.73)

This immediately gives us a complex polynomial of degree N. As we know from analysis, every polynomial of degree N has exactly N solutions if one includes the multiplicities of the eigenvalues in the counting. In general eigenvalues and eigenvectors do not have many restrictions for an arbitrary A. However, for Hermitean and unitary (will be defined soon) operators there are a number of nice results concerning the eigenvalues and eigenvectors. (If you do not feel familiar with eigenvectors and eigenvalues anymore, then a good book is for example: K.F. Riley, M.P. Robinson, and S.J. Bence, Mathematical Methods for Physics and Engineering.)

We begin with an analysis of Hermitean operators. We find

Lemma 23 For any Hermitean operator A we have

1. All eigenvalues of A are real.

2. Eigenvectors to different eigenvalues are orthogonal.

Proof: 1.) Given an eigenvalue λ and the corresponding eigenvector |λ〉 of a Hermitean operator A. Then we have, using the hermiticity of A,

λ∗ = 〈λ|A|λ〉∗ = 〈λ|A†|λ〉 = 〈λ|A|λ〉 = λ (1.74)

which directly implies that λ is real □.
2.) Given two eigenvectors |λ〉, |µ〉 for different eigenvalues λ and µ. Then we have

λ〈λ|µ〉 = (λ〈µ|λ〉)∗ = (〈µ|A|λ〉)∗ = 〈λ|A|µ〉 = µ〈λ|µ〉 (1.75)

As λ and µ are different this implies 〈λ|µ〉 = 0. This finishes the proof □.
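Both statements of Lemma 23 are easy to confirm numerically. The sketch below (mine, with an arbitrarily generated Hermitean matrix) uses numpy's eigh, which is tailored to Hermitean matrices, and checks that the eigenvalues are real and the eigenvectors orthonormal.

```python
import numpy as np

M = np.random.rand(4, 4) + 1j * np.random.rand(4, 4)
A = M + M.conj().T                                   # A = A^dagger by construction

eigenvalues, V = np.linalg.eigh(A)                   # columns of V are the eigenvectors
print(np.allclose(eigenvalues.imag, 0))              # all eigenvalues are real
print(np.allclose(V.conj().T @ V, np.eye(4)))        # eigenvectors are orthonormal
```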


Lemma 23 allows us to formulate the second postulate of quantum mechanics. To motivate the second postulate a little bit, imagine that you try to determine the position of a particle. You would put down a coordinate system and specify the position of the particle as a set of real numbers. This is just one example, and in fact in any experiment in which we measure a quantum mechanical system, we will always obtain a real number as a measurement result. Corresponding to each outcome we have a state of the system (the particle sitting in a particular position), and therefore we have a set of real numbers specifying all possible measurement outcomes and a corresponding set of states. While the representation of the states may depend on the chosen basis, the physical position doesn't. Therefore we are looking for an object that gives real numbers independent of the chosen basis. Well, the eigenvalues of a matrix are independent of the chosen basis. Therefore if we want to describe a physical observable it is a good guess to use an operator and identify the eigenvalues with the measurement outcomes. Of course we need to demand that the operator has only real eigenvalues. This is guaranteed only by Hermitean operators. Therefore we are led to the following postulate.

Postulate 2 Observable quantum mechanical quantities are described by Hermitean operators A on the Hilbert space H. The eigenvalues ai of the Hermitean operator are the possible measurement results.

Planned end of 6th lecture

Now I would like to formulate the spectral theorem for finite dimensional Hermitean operators. Any Hermitean operator on an N-dimensional Hilbert space is represented by an N × N matrix. The characteristic polynomial Eq. (1.73) of such a matrix is of N-th order and therefore possesses N eigenvalues, denoted by λi and corresponding eigenvectors |λi〉. For the moment, let us assume that the eigenvalues are all different. Then we know from Lemma 23 that the corresponding eigenvectors are orthogonal. Therefore we have a set of N pairwise orthogonal vectors in an N-dimensional Hilbert space. Therefore these vectors form an orthonormal basis. From this we can finally conclude the following important

Completeness theorem: For any Hermitean operator A on a Hilbert space H the set of all eigenvectors forms an orthonormal basis of the Hilbert space H, i.e. given the eigenvalues λi and the eigenvectors |λi〉 we find

A = ∑i λi|λi〉〈λi| (1.76)

and for any vector |x〉 ∈ H we find coefficients xi such that

|x〉 = ∑i xi|λi〉 . (1.77)

Now let us briefly consider the case of degenerate eigenvalues. This is the case when the characteristic polynomial has multiple zeros. In other words, an eigenvalue is degenerate if there is more than one eigenvector corresponding to it. An eigenvalue λ is said to be d-fold degenerate if there is a set of d linearly independent eigenvectors {|λ1〉, . . . , |λd〉} all having the same eigenvalue λ. Quite obviously the space of all linear combinations of the vectors {|λ1〉, . . . , |λd〉} is a d-dimensional vector space. Therefore we can find an orthonormal basis of this vector space. This implies that any Hermitean operator has eigenvalues λ1, . . . , λk with degeneracies d(λ1), . . . , d(λk). To each eigenvalue λi I can find an orthonormal basis of d(λi) vectors. Therefore the above completeness theorem remains true also for Hermitean operators with degenerate eigenvalues.

Now you might wonder whether every linear operator A on an N-dimensional Hilbert space has N linearly independent eigenvectors. It turns out that this is not true. An example is the 2 × 2 matrix

( 0 1 )
( 0 0 )

which has only one eigenvalue λ = 0. Therefore any eigenvector to this eigenvalue has to satisfy

( 0 1 ) ( a )   ( 0 )
( 0 0 ) ( b ) = ( 0 ) ,

which implies that b = 0. But then the only normalized eigenvector is (1, 0)^T and therefore the set of eigenvectors does not form an orthonormal basis.

We have seen that any Hermitean operator A can be expanded in its eigenvectors and eigenvalues. The procedure of finding this particular representation of an operator is called diagonalization. Often we do not want to work in the basis of the eigenvectors but for example in the canonical basis of the vectors

e1 = (1, 0, . . . , 0)^T , . . . , ei = (0, . . . , 0, 1, 0, . . . , 0)^T (with the 1 in the i-th entry) , . . . , eN = (0, . . . , 0, 1)^T . (1.78)

If we want to rewrite the operator A in that basis we need to find a map between the canonical basis and the orthogonal basis of the eigenvectors of A. If we write the eigenvectors |λi〉 = ∑_{j=1}^{N} αji|ej〉 then this map is given by the unitary operator (we will define unitary operators shortly)

U = ∑_{i=1}^{N} |λi〉〈ei| ≐ ( α11 . . . α1N )
                           (  ⋮    ⋱    ⋮ )
                           ( αN1 . . . αNN )          (1.79)

which obviously maps a vector |ei〉 into the eigenvector |λi〉 corresponding to the eigenvalue λi. Using this operator U we find

U†AU = ∑i λi U†|λi〉〈λi|U = ∑i λi |ei〉〈ei| . (1.80)

The operator in Eq. (1.79) maps orthogonal vectors into orthogonal vectors. In fact, it preserves the scalar product between any two vectors. Let us use this as the defining property of a unitary transformation.
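A hedged numpy sketch of this diagonalization (my construction; the Hermitean matrix is generated at random): the columns of the matrix returned by eigh are exactly the eigenvectors, i.e. the matrix U of Eq. (1.79), and U†AU is diagonal as in Eq. (1.80).

```python
import numpy as np

M = np.random.rand(3, 3) + 1j * np.random.rand(3, 3)
A = M + M.conj().T                                       # a Hermitean test operator

lam, U = np.linalg.eigh(A)           # columns of U are the eigenvectors |lambda_i>
print(np.allclose(U.conj().T @ U, np.eye(3)))            # U is unitary
print(np.allclose(U.conj().T @ A @ U, np.diag(lam)))     # Eq. (1.80): diagonal form
```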


Now as promised the accurate definition of a unitary operator. Avoiding the subtleties of infinite dimensional spaces (which are again problems of the domain of definition of an operator) for the moment, we have the following

Definition 24 A linear operator U on a Hilbert space H is called unitary if it is defined for all vectors |x〉, |y〉 in H, maps the whole Hilbert space into the whole Hilbert space and satisfies

〈x|U†U|y〉 = 〈x|y〉 . (1.81)

In fact we can replace the last condition by demanding that the operator satisfies

U†U = 1 and UU† = 1 . (1.82)

Eq. (1.81) implies that a unitary operator preserves the scalar product and therefore in particular the norm of vectors as well as the angle between any two vectors.

Now let us briefly investigate the properties of the eigenvalues and eigenvectors of unitary operators. The eigenvalues of a unitary operator are in general not real, but they are not completely arbitrary. We have

Theorem 25 Any unitary operator U on an N-dimensional Hilbert space H has a complete basis of eigenvectors and all the eigenvalues are of the form e^{iφ} with real φ.

Proof: I will not give a proof that the eigenvectors of U form a basis in H. For this you should have a look at a textbook. What I will prove is that the eigenvalues of U are of the form e^{iφ} with real φ. To see this, we use Eq. (1.82). Let |λ〉 be an eigenvector of U to the eigenvalue λ, then

λU†|λ〉 = U†U|λ〉 = |λ〉 . (1.83)

This implies that λ ≠ 0 because otherwise the right-hand side would be the null-vector, which is never an eigenvector. From Eq. (1.83) we find

1/λ = 〈λ|U†|λ〉 = 〈λ|U|λ〉∗ = λ∗ . (1.84)

This results in

|λ|² = 1 ⇔ λ = e^{iφ} . (1.85)

□

1.3.4 Functions of Operators

In the previous sections I have discussed linear operators and special subclasses of these operators such as Hermitean and unitary operators. When we write down the Hamilton operator of a quantum mechanical system we will often encounter functions of operators, such as the one-dimensional potential V(x) in which a particle is moving. Therefore it is important to know the definition and some properties of functions of operators. There are two ways of defining functions of an operator; one works particularly well for operators with a complete set of eigenvectors (Definition 26), while the other one works best for functions that can be expanded into power series (Definition 27).

Definition 26 Given an operator A with eigenvalues ai and a complete set of eigenvectors |ai〉, and further a function f : C → C that maps complex numbers into complex numbers, then we define

f(A) := ∑_{i=1}^{N} f(ai)|ai〉〈ai| (1.86)

Definition 27 Given a function f : C → C that can be expanded into a power series

f(z) = ∑_{i=0}^{∞} fi z^i (1.87)

then we define

f(A) = ∑_{i=0}^{∞} fi A^i . (1.88)

Definition 28 The derivative of an operator function f(A) is defined via g(z) = df/dz (z) as

df(A)/dA = g(A) . (1.89)


Let us see whether the two definitions Def. 26 and 27 coincide for operators with a complete set of eigenvectors and functions that can be expanded into the power series given in Eq. (1.87).

f(A) = ∑k fk A^k
     = ∑k fk (∑_{j=1}^{N} aj|aj〉〈aj|)^k
     = ∑k fk ∑_{j=1}^{N} aj^k |aj〉〈aj|
     = ∑_{j=1}^{N} (∑k fk aj^k) |aj〉〈aj|
     = ∑_{j=1}^{N} f(aj)|aj〉〈aj| (1.90)

For operators that do not have a complete orthonormal basis of eigenvectors it is not possible to use Definition 26 and we have to go back to Definition 27. In practice this is not really a problem in quantum mechanics because we will always encounter operators that have a complete set of eigenvectors.

As an example consider a Hermitean operator A with eigenvalues ak and eigenvectors |ak〉 and compute U = e^{iA}. We find

U = e^{iA} = ∑_{k=1}^{N} e^{iak} |ak〉〈ak| . (1.91)

This is an operator which has eigenvalues of the form e^{iak} with real ak. Therefore it is a unitary operator, which you can also check directly from the requirement UU† = 1 = U†U. In fact it is possible to show that every unitary operator can be written in the form Eq. (1.91). This is captured in the

Lemma 29 To any unitary operator U there is a Hermitean operator H such that

U = e^{iH} . (1.92)
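The two definitions of an operator function can also be compared numerically. The sketch below (my own; it uses scipy.linalg.expm for the power-series side) evaluates U = e^{iA} for a randomly generated Hermitean A once via the spectral form of Eq. (1.91) and once via the matrix exponential, and checks that the result is unitary.

```python
import numpy as np
from scipy.linalg import expm

M = np.random.rand(3, 3) + 1j * np.random.rand(3, 3)
A = M + M.conj().T                                   # Hermitean operator

a, V = np.linalg.eigh(A)                             # eigenvalues a_k, eigenvectors |a_k>
U_spectral = sum(np.exp(1j * a[k]) * np.outer(V[:, k], V[:, k].conj())
                 for k in range(3))                  # Definition 26 / Eq. (1.91)
U_series = expm(1j * A)                              # power-series definition

print(np.allclose(U_spectral, U_series))                      # both definitions agree
print(np.allclose(U_series @ U_series.conj().T, np.eye(3)))   # e^{iA} is unitary
```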


Exercise:
1) Show that for any unitary operator U we have f(U†AU) = U†f(A)U

Proof: We use the fact that UU† = 1 to find

f(U†AU) = ∑_{k=0}^{∞} fk (U†AU)^k
        = ∑_{k=0}^{∞} fk U†A^k U
        = U† (∑_{k=0}^{∞} fk A^k) U
        = U† f(A) U .

If you have functions, then you will also expect to encounter derivatives of functions. Therefore we have to consider how to take derivatives of matrices. To take a derivative we need to have not only one operator, but a family of operators that is parametrized by a real parameter s. An example is the set of operators of the form

A(s) = ( 1 + s    i·s  )
       ( −i·s    1 − s )  . (1.93)

Another example which is familiar to you is the time evolution operator e^{−iHs}.

Now we can define the derivative with respect to s in complete analogy to the derivative of a scalar function by

dA/ds (s) := lim_{∆s→0} [A(s + ∆s) − A(s)] / ∆s . (1.94)

This means that we have defined the derivative of an operator component-wise.

Now let us explore the properties of the derivative of operators. First let us see what the derivative of the product of two operators is. We find

Property: For any two linear operators A(s) and B(s) we have

d(AB)/ds (s) = dA/ds (s) B(s) + A(s) dB/ds (s) (1.95)

This looks quite a lot like the product rule for ordinary functions, except that now the order of the operators plays a crucial role.

Planned end of 7th lecture

You can also have functions of operators that depend on more than one variable. A very important example is the commutator of two operators.

Definition 30 For two operators A and B the commutator [A, B] is defined as

[A, B] = AB − BA . (1.96)

While the commutator between numbers (1 × 1 matrices) is always zero, this is not the case for general matrices. In fact, you know already a number of examples from your second year lecture in quantum mechanics. For example the operators corresponding to momentum and position do not commute, i.e. their commutator is nonzero. Other examples are the Pauli spin-operators

σ0 = ( 1 0 )    σ1 = ( 0 1 )
     ( 0 1 )         ( 1 0 )

σ2 = ( 0 −i )   σ3 = ( 1  0 )
     ( i  0 )        ( 0 −1 )          (1.97)

For i, j = 1, 2, 3 they have the commutation relations

[σi, σj] = 2iεijkσk , (1.98)

where εijk is the completely antisymmetric tensor (with summation over the repeated index k implied). It is defined by ε123 = 1 and changes sign when two indices are interchanged, for example εijk = −εjik.
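These commutation relations are easy to verify numerically; the following short check (an illustration of mine, not part of the notes) confirms the relation for the three cyclic index pairs using the matrices of Eq. (1.97).

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    return a @ b - b @ a

print(np.allclose(comm(s1, s2), 2j * s3))   # [sigma_1, sigma_2] = 2i sigma_3
print(np.allclose(comm(s2, s3), 2j * s1))   # [sigma_2, sigma_3] = 2i sigma_1
print(np.allclose(comm(s3, s1), 2j * s2))   # [sigma_3, sigma_1] = 2i sigma_2
```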

There are some commutator relations that are quite useful to know.

Lemma 31 For arbitrary linear operators A, B, C on the same Hilbert space we have

[AB, C] = A[B, C] + [A, C]B (1.99)

0 = [A, [B, C]] + [B, [C, A]] + [C, [A, B]] (1.100)


Proof: By direct inspection of the equations □.

Commuting observables have many useful and important properties. Of particular significance for quantum physics is the following Lemma 32 because it guarantees that two commuting observables can be measured simultaneously with no uncertainty.

Lemma 32 Two commuting observables A and B have the same eigenvectors, i.e. they can be diagonalized simultaneously.

Proof: For simplicity we assume that both observables have only non-degenerate eigenvalues. Now choose a basis of eigenvectors {|ai〉} that diagonalizes A. Now try to see whether the |ai〉 are also eigenvectors of B. Using [A, B] = 0 we have

A(B|ai〉) = BA|ai〉 = ai(B|ai〉) . (1.101)

This implies that B|ai〉 is an eigenvector of A with eigenvalue ai. As the eigenvalue is non-degenerate we must have

B|ai〉 = bi|ai〉 (1.102)

for some bi. Therefore |ai〉 is an eigenvector of B □.

There is a very nice relation between the commutator and the derivative of a function. First I define the derivative of an operator function with respect to the operator.

Lemma 33 Given two linear operators A and B which have the commutator [B, A] = 1. Then for the derivative of an operator function f(A) we find

[B, f(A)] = df/dA (A) . (1.103)

Proof: Remember that a function of an operator is defined via its expansion into a power series, see Eq. (1.88). Therefore we find

[B, f(A)] = [B, ∑k fk A^k]
          = ∑k fk [B, A^k]


Now we need to evaluate the expression [B, A^k]. We prove by induction that [B, A^n] = nA^{n−1}. For n = 1 this is true. Assume that the assumption is true for n. Now start with n + 1 and reduce it to the case for n. Using Eq. (1.99) we find

[B, A^{n+1}] ≡ [B, A^n A] = [B, A^n]A + A^n[B, A] . (1.104)

Now using [B, A] = 1 and the induction assumption we find

[B, A^{n+1}] = nA^{n−1}A + A^n = (n + 1)A^n . (1.105)

Now we can conclude

[B, f(A)] = ∑k fk [B, A^k]
          = ∑k fk k A^{k−1}
          = df/dA (A)

This finishes the proof □.

A very useful property is

Lemma 34 (Baker-Campbell-Hausdorff) For general operators A and B we have

e^B A e^{−B} = A + [B, A] + (1/2)[B, [B, A]] + . . . . (1.106)

For operators such that [B, [B, A]] = 0 we have the simpler version

e^B A e^{−B} = A + [B, A] . (1.107)

Proof: Define the function of one real parameter α

f(α) = e^{αB} A e^{−αB} . (1.108)

We can expand this function around α = 0 into a Taylor series f(α) = ∑_{n=0}^{∞} (α^n/n!) d^n f/dα^n (α)|_{α=0} and therefore we need to determine the derivatives


of the function f(α). We find

df/dα (α)|_{α=0} = [B, A]
d²f/dα² (α)|_{α=0} = [B, [B, A]]
⋮

The rest of the proof follows by induction. The proof of Eq. (1.107) follows directly from Eq. (1.106) □.
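As a sanity check (a sketch of my own, with randomly generated matrices), one can compare e^B A e^{−B} with the expansion of Eq. (1.106) truncated after the double commutator; if B is scaled to be small, the neglected terms are of third order in B and the two sides agree to that accuracy.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = 0.01 * rng.standard_normal((3, 3))          # small B so higher orders are negligible

def comm(x, y):
    return x @ y - y @ x

exact = expm(B) @ A @ expm(-B)
truncated = A + comm(B, A) + 0.5 * comm(B, comm(B, A))
print(np.max(np.abs(exact - truncated)))        # small remainder, third order in B
```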

1.4 Operators with continuous spectrum

In the preceding sections I have explained the basic ideas of Hilbert spaces and linear operators. All the objects I have been dealing with so far have been finite dimensional, an assumption that greatly simplified the analysis and made the new ideas more transparent. However, in quantum mechanics many systems are actually described by an infinite dimensional state space and the corresponding linear operators are infinite dimensional too. Most results from the preceding section hold true also for the infinite dimensional case, so that we do not need to learn too many new things. Nevertheless, there are properties that require some discussion.

1.4.1 The position operator

The most natural place where an infinite dimensional state space appears is in the analysis of a particle in free space. Therefore let us briefly reconsider some aspects of wave mechanics as you have learnt them in the second year course. The state of a particle in free space (maybe moving in a potential) is described by the square-integrable wave-function ψ(x). The question now is how we connect the wave-function notation with the Dirac notation which we have used to develop our theory of Hilbert spaces and linear operators.

Let us remember what the Dirac notation for finite dimensional systems means mathematically. Given the ket-vector |φ〉 for the state of a finite dimensional system, we can find the components of this ket-vector with respect to a certain basis {|ei〉}. The i-th component is given by the complex number 〈ei|φ〉. Therefore in a particular basis it makes sense to write the state |φ〉 as a column vector

|φ〉 ↔ (〈e1|φ〉, . . . , 〈en|φ〉)^T . (1.109)

Let us try to transfer this idea to the wave-function of a particle in free space. What we will do is to interpret ψ(x) as the component of a vector with infinitely many components, one for each value of x. Informally written this means

|ψ〉 ↔ ( . . . , ψ(x), . . . )^T , (1.110)

where we have given the column vector a name, |ψ〉. Obviously the set of vectors defined in this way forms a vector space, as you can easily check. Of course we would like to have a scalar product on this vector space. This is introduced in complete analogy to the finite dimensional case. There we had

(|φ〉, |ψ〉) = 〈φ|ψ〉 = ∑i 〈φ|ei〉〈ei|ψ〉 . (1.111)

We just replace the summation over products of components of the two vectors by an integration. We have (see also Eq. (1.29))

(|φ〉, |ψ〉) := ∫_{−∞}^{∞} dx φ∗(x)ψ(x) . (1.112)

Now we have a space of vectors (or square integrable wave functions) H equipped with a scalar product. Indeed it turns out to be a complete space (without proof) so that we have a Hilbert space.

Now that we have introduced ket vectors we also need to define bra-vectors. In the finite dimensional case we obtained the bra vector via the idea of linear functionals on the space of state vectors. Let us repeat this procedure now for the case of infinite dimensions. We define a linear functional 〈φ| by

〈φ|(|ψ〉) ≡ 〈φ|ψ〉 = (|φ〉, |ψ〉) . (1.113)

Now I would like to investigate a particular linear functional that will allow us to define a position state. We define a linear functional 〈x0| by

〈x0|(|ψ〉) ≡ 〈x0|ψ〉 := ψ(x0) . (1.114)

We are already writing this functional very suggestively as a bra-vector. That's perfectly ok, and we just have to check that the so defined functional is indeed linear. Of course we would like to interpret the left hand side of Eq. (1.114) as a scalar product between two kets, i.e.

(|x0〉, |ψ〉) := 〈x0|ψ〉 = ψ(x0) . (1.115)

What does the ket |x0〉 corresponding to the bra 〈x0| mean? Which wave-function δ∗_{x0}(x) does it correspond to? Using the scalar product Eq. (1.112), we have

∫_{−∞}^{∞} dx δ∗_{x0}(x)ψ(x) = (|x0〉, |ψ〉) = 〈x0|ψ〉 = ψ(x0) (1.116)

This means that the function δ∗_{x0}(x) has to act like a delta-function! The wave-function corresponding to the bra 〈x0| is a delta-function. A delta-function, however, is not square-integrable! Therefore it cannot be an element of the Hilbert space of square integrable functions. However, as we have seen, it would be quite convenient to use these wave-functions or states. Therefore we just add them to our Hilbert space, although we will often call them improper states or wave-functions. In fact we can use basically all the rules that we have learned about finite dimensional Hilbert spaces also for these improper states. All we need to demand is the following rule for the scalar product

〈ψ|x0〉 := (〈x0|ψ〉)∗ = ψ∗(x0) . (1.117)

Now I can write for arbitrary kets |φ〉, |ψ〉 ∈ H

〈φ|ψ〉 = ∫ φ∗(x)ψ(x) dx = ∫ 〈φ|x〉〈x|ψ〉 dx = 〈φ| (∫ |x〉〈x| dx) |ψ〉 . (1.118)

Then we can conclude

∫ |x〉〈x| dx = 1 . (1.119)

Inserting this identity operator in 〈x|ψ〉, we obtain the orthogonality relation between position kets

∫ δ(x − x′)ψ(x′) dx′ = ψ(x) = 〈x|ψ〉 = ∫ 〈x|x′〉〈x′|ψ〉 dx′ = ∫ 〈x|x′〉ψ(x′) dx′ .

Therefore we have

〈x|x′〉 = δ(x − x′) . (1.120)

Now we can derive the form of the position operator from our knowledge of the definition of the position expectation value

〈ψ|x|ψ〉 := ∫ x|ψ(x)|² dx
         = ∫ 〈ψ|x〉 x 〈x|ψ〉 dx
         = 〈ψ| (∫ x|x〉〈x| dx) |ψ〉 , (1.121)

where we defined the position operator

x = ∫ x|x〉〈x| dx = x† . (1.122)

Now you see why the improper position kets are so useful. In this basis the position operator is automatically diagonal. The improper position kets |x0〉 are eigenvectors of the position operator

x|x0〉 = x0|x0〉 . (1.123)

This makes sense, as the position kets describe a particle that is perfectly localized at position x0. Therefore a position measurement should always give the result x0. So far we have dealt with one-dimensional systems. All of the above considerations can be generalized to the d-dimensional case by setting

~x = x1~e1 + . . . + xd~ed . (1.124)

The different components of the position operator are assumed to commute.

Planned end of 8th lecture


1.4.2 The momentum operator

Now we are going to introduce the momentum operator and momentum eigenstates using the ideas of linear functionals, in a similar fashion to the way in which we introduced the position operator. Let us introduce the linear functional 〈p| defined by

〈p|ψ〉 := 1/√(2πℏ) ∫ e^{−ipx/ℏ} ψ(x) dx . (1.125)

Now we define the corresponding ket by

〈p|ψ〉∗ = ψ∗(p) =: 〈ψ|p〉 . (1.126)

Combining Eq. (1.125) with the identity operator as represented in Eq. (1.119) we find

1/√(2πℏ) ∫ e^{−ipx/ℏ} ψ(x) dx = 〈p|ψ〉 = ∫ 〈p|x〉〈x|ψ〉 dx (1.127)

Therefore we find that the state vector |p〉 represents a plane wave, because

〈x|p〉 = 1/√(2πℏ) e^{ipx/ℏ} . (1.128)

As Eq. (1.128) represents a plane wave with momentum p it makes sense to call |p〉 a momentum state and expect that it is an eigenvector of the momentum operator p to the eigenvalue p. Before we define the momentum operator, let us find the decomposition of the identity operator using momentum eigenstates. To see this we need to remember from the theory of delta-functions that

1/(2πℏ) ∫ e^{ip(x−y)/ℏ} dp = δ(x − y) . (1.129)

Then we have for arbitrary |x〉 and |y〉

〈x|y〉 = δ(x − y) = 1/(2πℏ) ∫ dp e^{ip(x−y)/ℏ} = 〈x| (∫ |p〉〈p| dp) |y〉 , (1.130)

and therefore

∫ |p〉〈p| dp = 1 . (1.131)


The orthogonality relation between different momentum kets can be found by using Eq. (1.131) in Eq. (1.125):

〈p|ψ〉 = 〈p|1|ψ〉 = ∫ 〈p|p′〉〈p′|ψ〉 dp′ (1.132)

so that

〈p|p′〉 = δ(p − p′) . (1.133)

The momentum operator p is the operator that has as its eigenvectors the momentum eigenstates |p〉 with the corresponding eigenvalue p. This makes sense, because |p〉 describes a plane wave which has a perfectly defined momentum. Therefore we know the spectral decomposition, which is

p = ∫ p|p〉〈p| dp . (1.134)

Clearly we have

p|p0〉 = p0|p0〉 . (1.135)

Analogously to the position operator we can extend the momentum operator to the d-dimensional space by

~p = p1~e1 + . . . + pd~ed (1.136)

The different components of the momentum operator are assumed to commute.

1.4.3 The position representation of the momentum operator and the commutator between position and momentum

We have seen how to express the position operator in the basis of the improper position kets and the momentum operator in the basis of the improper momentum kets. Now I would like to see what the momentum operator looks like in the position basis.

To see this, differentiate Eq. (1.128) with respect to x which gives

(ℏ/i) ∂x 〈x|p〉 = (ℏ/i) ∂x 1/√(2πℏ) e^{ipx/ℏ} = p 〈x|p〉 . (1.137)


Therefore we find

〈x|p|ψ〉 = ∫ 〈x|p〉 p 〈p|ψ〉 dp
        = (ℏ/i) ∂x ∫ 〈x|p〉〈p|ψ〉 dp
        = (ℏ/i) ∂x 〈x| (∫ |p〉〈p| dp) |ψ〉
        = (ℏ/i) ∂x 〈x|ψ〉 . (1.138)

In the position representation the momentum operator acts like the differential operator, i.e.

p ←→ (ℏ/i) ∂x . (1.139)

Knowing this we are now able to derive the commutation relation between the momentum and position operator. We find

〈x|[x, p]|ψ〉 = 〈x|(xp − px)|ψ〉
             = (ℏ/i) [ x ∂/∂x 〈x|ψ〉 − ∂/∂x (x〈x|ψ〉) ]
             = iℏ 〈x|ψ〉
             = 〈x|iℏ1|ψ〉

Therefore we have the Heisenberg commutation relations

[x, p] = iℏ1 . (1.140)
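A discretized illustration (my own construction, with the usual caveats of finite grids): represent x as a diagonal matrix on a grid and p as (ℏ/i) times a central-difference derivative matrix, Eq. (1.139); applying the commutator to a smooth, well-localized wave function then returns approximately iℏψ away from the grid boundaries, in line with Eq. (1.140).

```python
import numpy as np

hbar = 1.0
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

X = np.diag(x)
D = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * dx)
P = (hbar / 1j) * D                        # momentum in the position representation

psi = np.exp(-x**2)                        # a Gaussian test wave function
commutator_psi = (X @ P - P @ X) @ psi

interior = slice(5, N - 5)                 # ignore boundary artefacts
error = np.max(np.abs(commutator_psi[interior] - 1j * hbar * psi[interior]))
print(error)       # small, and it shrinks as O(dx^2) when the grid is refined
```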


Chapter 2

Quantum Measurements

So far we have formulated two postulates of quantum mechanics. The second of these states that the measurement results of an observable are the eigenvalues of the corresponding Hermitean operator. However, we do not yet know how to determine the probability with which this measurement result is obtained, nor have we discussed the state of the system after the measurement. This is the object of this section.

2.1 The projection postulate

We have learned that the possible outcomes in a measurement of a physical observable are the eigenvalues of the corresponding Hermitean operator. Now I would like to discuss what the state of the system after such a measurement result is. Let us be guided by common sense. Certainly we would expect that if we repeat the measurement of the same observable, we should find the same result. This is in contrast to the situation before the first measurement. There we certainly did not expect that a particular measurement outcome would appear with certainty. Therefore we expect that the state of the system after the first measurement has changed as compared to the state before the first measurement. What is the most natural state that corresponds to the eigenvalue ai of a Hermitean operator A? Clearly this is the corresponding eigenvector in which the observable has the definite value ai! Therefore it is quite natural to make the following postulate for observables with non-degenerate eigenvalues.

Postulate 3 (a) The state of a quantum mechanical system after the measurement of observable A with the result being the non-degenerate eigenvalue ai is given by the corresponding eigenvector |ai〉.

For observables with degenerate eigenvalues we have a problem. Which of the eigenvectors should we choose? Does it make sense to choose one particular eigenvector? To see what the right answer is we have to make clear that we are really only measuring observable A. We do not obtain any other information about the measured quantum mechanical system. Therefore it certainly does not make sense to prefer one over another eigenvector. In fact, if I would assume that we have to choose one of the eigenvectors at random, then I would effectively assume that some more information is available. Somehow we must be able to decide which eigenvector has to be chosen. Such information can only come from the measurement of another observable - a measurement we haven't actually performed. This is therefore not an option. On the other hand we could say that we choose one of the eigenvectors at random. Again this is not really an option, as it would amount to saying that someone chooses an eigenvector but does not reveal to us which one he has chosen. It is quite important to realize that a) not having some information and b) having information but then choosing to forget it are two quite different situations.

All these problems imply that we have to have all the eigenvectors still present. If we want to solve this problem, we need to introduce a new type of operator - the projection operator.

Definition 35 An operator P is called a projection operator if it satisfies

1. P = P† ,

2. P = P² .


Some examples for projection operators are

1. P = |ψ〉〈ψ|

2. If P is a projection operator, then also 1 − P is a projection operator.

3. P = 1

Exercise: Prove that the three examples above are projection operators!

Lemma 36 The eigenvalues of a projection operator can only have the values 0 or 1.

Proof: For any eigenvector |λ〉 of P with eigenvalue λ we have

λ|λ〉 = P|λ〉 = P²|λ〉 = λ²|λ〉 . (2.1)

From this we immediately obtain λ = 0 or λ = 1 □.

For a set of orthonormal vectors {|ψi〉}_{i=1,...,N} the projection operator P = ∑_{i=1}^{k} |ψi〉〈ψi| projects a state |ψ〉 onto the subspace spanned by the vectors {|ψi〉}_{i=1,...,k}. In mathematical terms this statement becomes clear if we expand |ψ〉 = ∑_{i=1}^{N} αi|ψi〉; then

P|ψ〉 = ∑_{i=1}^{k} αi|ψi〉 . (2.2)
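A brief numpy sketch (mine, with an arbitrarily chosen orthonormal set) of such a projector: it is Hermitean, idempotent, and it removes exactly the components of |ψ〉 outside the chosen subspace, as in Eq. (2.2).

```python
import numpy as np

N, k = 4, 2
Q, _ = np.linalg.qr(np.random.rand(N, N) + 1j * np.random.rand(N, N))
psi_basis = [Q[:, i] for i in range(N)]              # orthonormal set {|psi_i>}

P = sum(np.outer(psi_basis[i], psi_basis[i].conj()) for i in range(k))
print(np.allclose(P, P.conj().T), np.allclose(P, P @ P))        # P = P^dagger, P = P^2

alpha = np.array([1.0, 2.0, 3.0, 4.0])
psi = sum(alpha[i] * psi_basis[i] for i in range(N))             # |psi> = sum_i alpha_i |psi_i>
print(np.allclose(P @ psi,
                  sum(alpha[i] * psi_basis[i] for i in range(k))))   # Eq. (2.2)
```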

This is exactly what we need for a more general formulation of the third postulate of quantum mechanics. We formulate the third postulate again, but now for observables with degenerate eigenvalues.


Postulate 3 (b) The state of a quantum mechanical system after the measurement of a general observable A with the result being the possibly degenerate eigenvalue ai is given by

Pi|ψ〉 , (2.3)

where Pi is the projection operator on the subspace of H spanned by all the eigenvectors {|ψk〉} of A with eigenvalue ai, i.e.

Pi = ∑k |ψk〉〈ψk| . (2.4)

Experiments have shown that this postulate describes the results of measurements extremely well and we therefore have to accept it at least as a good working assumption. I say this because there are quite a few people around who do not like this postulate very much. They ask the obvious question as to how and when the reduction of the quantum state is exactly taking place. In fact, we have not answered any of that in this discussion and I will not do so, for a simple reason. Nobody really knows the answer to these questions and they have been a subject of debate for a long time. People have come up with new interpretations of quantum mechanics (the most prominent one being the many-worlds theory) or even changes to the Schrödinger equation of quantum mechanics itself. But nobody has solved the problem satisfactorily. This is a bit worrying because this means that we do not understand an important aspect of quantum mechanics. However, there is a really good reason for using the projection postulate: It works superbly well when you want to calculate things.

Therefore let us now continue with our exploration of the postulates of quantum mechanics. We now know what the measurement outcomes are and what the state of our system after the measurement is, but we still do not know what the probability for a specific measurement outcome is. To make any sensible guess about this, we need to consider the properties of probabilities to see which of these we would like to be satisfied in quantum mechanics. Once we know what are sensible requirements, then we can go ahead to make a new postulate.

As you know, probabilities p(Ai) to obtain an outcome Ai (which is usually a set of possible results) are positive quantities that are not larger than unity, i.e. 0 ≤ p(Ai) ≤ 1. In addition it is quite trivial to demand that the probability for an outcome corresponding to an empty set vanishes, i.e. p(∅) = 0, while the probability for the set of all elements is unity, i.e. p(1) = 1.

These are almost trivial requirements. Really important is the behaviour of probabilities for joint sets. What we definitely would like to have is that the probabilities for mutually exclusive events add up, i.e. if we are given disjoint sets A1 and A2 and we form the union A1 ∪ A2 of the two sets, then we would like to have

p(A1 ∪ A2) = p(A1) + p(A2) . (2.5)

In the following postulate I will present a definition that satisfies all of the properties mentioned above.

Postulate 4 The probability of obtaining the eigenvalue ai in a measurement of the observable A is given by

pi = ||Pi|ψ〉||² = 〈ψ|Pi|ψ〉 . (2.6)

For a non-degenerate eigenvalue with eigenvector |ψi〉 this probability reduces to the well known expression

pi = |〈ψi|ψ〉|² (2.7)

It is easy to check that Postulate 4 indeed satisfies all the criteria that we demanded for a probability. The fascinating thing is that it is essentially the only way of doing so. This very important theorem was first proved by Gleason in 1957. This came as quite a surprise. Only using the Hilbert space structure of quantum mechanics together with the reasonable properties that we demanded from the probabilities for quantum mechanical measurement outcomes, we cannot do anything other than use Eq. (2.6)! The proof of this theorem is too complicated to be presented here and I just wanted to justify Postulate 4 a bit more by telling you that we cannot really postulate anything else.

2.2 Expectation value and variance.

Now that we know the measurement outcomes as well as the probabilities for their occurrence, we can use quantum mechanics to calculate the mean value of an observable as well as the variance about this mean value.

What do we do experimentally to determine the mean value of an observable? First, of course, we set up our apparatus such that it measures exactly the observable A in question. Then we build a machine that creates a quantum system in a given quantum state over and over again. Ideally we would like to have infinitely many such identically prepared quantum states (this will usually be called an ensemble). Now we perform our experiment. Our state preparer produces the first particle in a particular state |ψ〉 and the particle enters our measurement apparatus. We obtain a particular measurement outcome, i.e. one of the eigenvalues ai of the observable A. We repeat the experiment and may find a different eigenvalue. Every individual outcome of the experiment is completely random and after N repetitions of the experiment, we will obtain the eigenvalue ai as a measurement outcome in Ni of the experiments, where ∑i Ni = N. After finishing our experiment, we can then determine the average value of the measurement outcomes, for which we find

〈A〉ψ(N) = ∑i (Ni/N) ai . (2.8)

For large numbers of experiments, i.e. large N, we know that the ratio Ni/N approaches the probability pi for the outcome ai. This probability can be calculated from Postulate 4 and we find

〈A〉ψ = lim_{N→∞} ∑i (Ni/N) ai = ∑i pi ai = ∑i ai 〈ψ|Pi|ψ〉 = 〈ψ|A|ψ〉 . (2.9)


You should realize that the left hand side of Eq. (2.9) is conceptually quite a different thing from the right hand side. The left hand side is an experimentally determined quantity, while the right hand side is a quantum mechanical prediction. This is a testable prediction of Postulate 3 and has been verified in countless experiments. Now we are also able to derive the quantum mechanical expression for the variance of the measurement results around the mean value. Given the probabilities pi and the eigenvalues ai of the observable A we find

(∆A)² := ∑i pi(ai − 〈A〉)² = ∑i pi ai² − (∑i pi ai)² = 〈A²〉 − 〈A〉² . (2.10)
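A small worked example (my own, not from the notes): for a spin-1/2 state and the observable σ3 the numpy fragment below evaluates the outcome probabilities of Postulate 4, the expectation value 〈ψ|A|ψ〉 of Eq. (2.9) and the variance of Eq. (2.10).

```python
import numpy as np

A = np.array([[1, 0], [0, -1]], dtype=complex)      # sigma_3, eigenvalues +1 and -1
psi = np.array([1.0, 1.0j]) / np.sqrt(2)            # a normalized state

a, V = np.linalg.eigh(A)
probs = np.abs(V.conj().T @ psi) ** 2                # p_i = |<a_i|psi>|^2, Eq. (2.7)
mean = (psi.conj() @ A @ psi).real                   # <A> = <psi|A|psi>, Eq. (2.9)
var = (psi.conj() @ A @ A @ psi).real - mean ** 2    # <A^2> - <A>^2, Eq. (2.10)

print(probs)        # [0.5, 0.5]
print(mean, var)    # 0.0  1.0
```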

After these definitions let us now analyse a very important property of measurement results in quantum mechanics.

2.3 Uncertainty Relations

In classical mechanics we are able to measure any physical quantity to an arbitrary precision. In fact, this is also true for quantum mechanics! Nothing prevents us from preparing a quantum system in a very well defined position state and subsequently making a measurement that will tell us with exceeding precision where the particle is. The real difference to classical mechanics arises when we try to determine the values of two different observables simultaneously. Again, in classical mechanics we have no law that prevents us from measuring any two quantities with arbitrary precision. Quantum mechanics, however, is different. Only very special pairs of observables can be measured simultaneously to an arbitrary precision. Such observables are called compatible or commuting observables. In general the uncertainties in the measurements of two arbitrary observables will obey a relation which makes sure that their product has a lower bound which in general is unequal to zero. This relation is called the uncertainty relation.

Theorem 37 For any two observables A and B we find for their uncertainties ∆X = √(〈X²〉 − 〈X〉²) the uncertainty relation

∆A ∆B ≥ |〈[A, B]〉| / 2 . (2.11)


Proof: Let us define two kets

|φA〉 = (A − 〈A〉)|ψ〉 (2.12)
|φB〉 = (B − 〈B〉)|ψ〉 . (2.13)

Using these vectors we find that

√(|〈φA|φA〉| · |〈φB|φB〉|) = ∆A∆B . (2.14)

Now we can use the Schwarz inequality to find a lower bound on the product of the two uncertainties. We find that

|〈φA|φB〉| ≤ √(|〈φA|φA〉| · |〈φB|φB〉|) = ∆A∆B . (2.15)

For the real and imaginary parts of 〈φA|φB〉 we find

Re〈φA|φB〉 = (1/2)(〈AB + BA〉 − 2〈A〉〈B〉) (2.16)
Im〈φA|φB〉 = (1/2i)〈[A, B]〉 . (2.17)

We can use Eqs. (2.16-2.17) in Eq. (2.15) to find

∆A∆B ≥ √((Re〈φA|φB〉)² + (Im〈φA|φB〉)²) ≥ (1/2)|〈[A, B]〉| (2.18)

This finishes the proof □.
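The inequality is easy to test numerically. The sketch below (mine; the state is drawn at random) checks Eq. (2.11) for the observables A = σ1 and B = σ2 of a spin-1/2 system.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])

psi = np.random.rand(2) + 1j * np.random.rand(2)
psi /= np.linalg.norm(psi)                           # a random normalized state

def mean(op):
    return (psi.conj() @ op @ psi).real

dA = np.sqrt(mean(s1 @ s1) - mean(s1) ** 2)
dB = np.sqrt(mean(s2 @ s2) - mean(s2) ** 2)
bound = 0.5 * abs(psi.conj() @ (s1 @ s2 - s2 @ s1) @ psi)

print(dA * dB >= bound - 1e-12)                      # True for any state |psi>
```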

From Eq. (2.11) we can see that a necessary condition for two observables to be measurable simultaneously with no uncertainty is that the two observables commute. In fact this condition is also a sufficient one. We have

Theorem 38 Two observables A and B can be measured precisely, i.e. ∆A = ∆B = 0, exactly if they commute.

Proof: From Eq. (2.11) we immediately see that ∆A = ∆B = 0 implies that [A, B] = 0. For the other direction of the statement we need to remember that two commuting observables have the same eigenvectors. If we assume our quantum mechanical system is in one of these eigenstates |ψ〉, then we see from Eqs. (2.12-2.13) that |φA〉 and |φB〉 are proportional to |ψ〉 and therefore proportional to each other. Then we only need to remember that in the case of proportional |φA〉 and |φB〉 we have equality in the Schwarz inequality, which implies ∆A = ∆B = (1/2)|〈[A, B]〉| = 0 □.

Now let us make clear what the uncertainty relation means, as there is quite some confusion about that. Imagine that we have an ensemble of infinitely many identically prepared quantum systems, each of which is in state |ψ〉. We would like to measure two observables A and B. Then we split the ensemble into two halves. On the first half we measure exclusively the observable A, while on the second half we measure the observable B. The measurement of the two observables will lead to average values 〈A〉 and 〈B〉 which have uncertainties ∆A and ∆B. From this consideration it should be clear that the uncertainties in the measurements of the two observables are not caused by a perturbation of the quantum mechanical system by the measurement, as we are measuring the two observables on different systems which are in the same state. These uncertainties are an intrinsic property of any quantum state. Of course perturbations due to the measurement itself may increase the uncertainty, but they are not the reason for the existence of the uncertainty relations.

In that context I would also like to point out that you will sometimes find the uncertainty relation stated in the form

∆A∆B ≥ |〈[A, B]〉| , (2.19)

that is, where the right-hand side is twice as large as in Eq. (2.11). The reason for this extra factor of 2 is that the uncertainty relation Eq. (2.19) describes a different situation than the one stated in Theorem 37. Eq. (2.19) really applies to the simultaneous measurement of two non-commuting observables on one quantum system. This means that we perform a measurement of the observable A on every system of the ensemble and subsequently we perform a measurement of the observable B on every system of the ensemble. The measurement of the observable A will disturb the system and a subsequent measurement of B could therefore be more uncertain. Imagine, for example, that you want to determine the position of an electron and simultaneously the momentum. We could do the position measurement by scattering light from the electron. If we want to determine the position of the electron very precisely, then we need to use light of very short wavelength. However, photons with a short wavelength carry a large momentum. When they collide with the electron, then the momentum of the electron may be changed. Therefore the uncertainty of the momentum of the electron will be larger. This is an example of the problems that arise when you want to measure two non-commuting variables simultaneously.

If you would like to learn more about the uncertainty relation for the joint measurement of non-commuting variables you may have a look at the pedagogical article by M.G. Raymer, Am. J. Phys. 62, 986 (1994), which you can find in the library.

2.3.1 The trace of an operator

In the fourth postulate I have defined the rule for the calculation of the probability that a measurement result is one of the eigenvalues of the corresponding operator. Now I would like to generalize this idea to situations in which we have some form of lack of knowledge about the measurement outcome. An example would be that we are not quite sure which measurement outcome we have (the display of the apparatus may have a defect, for example). To give this law a simple formulation, I need to introduce a new mathematical operation.

Definition 39 The trace of an operator A on an N-dimensional Hilbert space is defined as

tr(A) = ∑_{i=1}^{N} 〈ψi|A|ψi〉 (2.20)

for any orthonormal set of basis vectors {|ψi〉}.

In particular, if the |ψi〉 are chosen as the eigenvectors of A we see that the trace is just the sum of all the eigenvalues of A. The trace will play a very important role in quantum mechanics, as we will see in our first discussion of the measurement process in quantum mechanics.


The trace has quite a few properties that are helpful in calculations. Note that the trace is only well defined if it is independent of the choice of the orthonormal basis {|ψi〉}. If we choose a different basis {|φi〉}, then we know that there is a unitary operator U such that |ψi〉 = U|φi〉 and we want the property tr{A} = tr{U†AU}. That this is indeed true is shown by

Theorem 40 For any two operators A and B on a Hilbert space H and any unitary operator U we have

tr{AB} = tr{BA} (2.21)

tr{A} = tr{UAU†} . (2.22)

Proof: I prove only the first statement, because the second one follows directly from U†U = 1 and Eq. (2.21). The proof runs as follows

tr{AB} = ∑_{n=1}^{N} 〈ψn|AB|ψn〉
       = ∑_{n=1}^{N} 〈ψn|A1B|ψn〉
       = ∑_{n=1}^{N} 〈ψn|A (∑_{k=1}^{N} |ψk〉〈ψk|) B|ψn〉
       = ∑_{k=1}^{N} ∑_{n=1}^{N} 〈ψn|A|ψk〉〈ψk|B|ψn〉
       = ∑_{k=1}^{N} ∑_{n=1}^{N} 〈ψk|B|ψn〉〈ψn|A|ψk〉
       = ∑_{k=1}^{N} 〈ψk|B (∑_{n=1}^{N} |ψn〉〈ψn|) A|ψk〉
       = ∑_{k=1}^{N} 〈ψk|B1A|ψk〉
       = ∑_{k=1}^{N} 〈ψk|BA|ψk〉
       = tr{BA} .


Comment: If you write an operator in matrix form then the trace is just the sum of the diagonal elements of the matrix.
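Both statements of Theorem 40 can be confirmed with a few lines of numpy (an illustration of mine; the operators and the unitary are generated at random).

```python
import numpy as np

A = np.random.rand(4, 4) + 1j * np.random.rand(4, 4)
B = np.random.rand(4, 4) + 1j * np.random.rand(4, 4)
U, _ = np.linalg.qr(np.random.rand(4, 4) + 1j * np.random.rand(4, 4))   # a unitary

print(np.isclose(np.trace(A @ B), np.trace(B @ A)))              # Eq. (2.21)
print(np.isclose(np.trace(A), np.trace(U @ A @ U.conj().T)))     # Eq. (2.22)
```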

I introduced the trace in order to write some of the expressions from the previous chapters in a different form which then allows their generalization to situations which involve classical uncertainty.

In Postulate 4 the probability to find the eigenvalue ai in a measurement of the observable A has been given for a system in the quantum state |ψ〉. Using the trace it can now be written as

pi = 〈ψ|Pi|ψ〉 = tr{Pi|ψ〉〈ψ|} .

Using the trace, the expectation value of an observable A measured on a system in a quantum state |ψ〉 can be written as

〈A〉|ψ〉 = 〈ψ|A|ψ〉 = tr{A|ψ〉〈ψ|} .

2.4 The density operator

When discussing measurement theory it is quite natural to introduce a new concept for the description of the state of a quantum system. So far we have always described a quantum system by a state vector, i.e. by a pure state which may be a coherent superposition of many other state vectors, e.g.

|ψ〉 = α1|ψ1〉+ α2|ψ2〉 . (2.23)

A quantum mechanical system in such a state is ideal for exhibiting quantum mechanical interference. However, it has to be said that such a state is an idealization as compared to the true state of a quantum system that you can prepare in an experiment. In an experiment you will always have some statistical uncertainty due to imperfections in the preparation procedure of the quantum state or the measurement. For example, due to an occasional error, happening at a random time and unknown to you, the wrong quantum state may be prepared. Then we do not only have the quantum mechanical uncertainty that is described by the state vector, but we also have a classical statistical uncertainty. This raises the question as to how to describe such an experimental situation in the most elegant way. Certainly it cannot be dealt with by just adding the state vectors together, i.e. forming a coherent superposition.

Figure 2.1: An oven emits atomic two-level systems. The internal state of the system is randomly distributed. With probability pi the system is in the pure state |ψi〉. A person oblivious to this random distribution measures observable A. What is the mean value that he obtains?

To understand this better and to clarify the definition of the density operator I will present an example of an experimental situation where the description using pure quantum states fails, or better, where it is rather clumsy. Consider the situation presented in Fig. 2.1. An oven is filled with atomic two-level systems. We assume that each of these two-level systems is in a random pure state. To be more precise, let us assume that with probability pi a two-level system is in state |ψi〉. Imagine now that the oven has a little hole in it through which the two-level atoms can escape the oven in the form of an atomic beam. This atomic beam is travelling into a measurement apparatus built by an experimentalist who would like to measure the observable A. The task of a theoretical physicist is to predict what the experimentalist will measure in his experiment.

We realize that each atom in the atomic beam is in a pure state |ψi〉 with probability pi. For each individual atom it is unknown to the experimentalist in which particular state it is. He only knows the probability distribution of the possible states. If the experimentalist makes measurements on N of the atoms in the beam, then he will perform the measurement Ni ≈ Npi times on an atom in state |ψi〉. For each of these pure states |ψi〉 we know how to calculate the expectation value of the observable A that the experimentalist is measuring. It is simply 〈ψi|A|ψi〉 = tr{A|ψi〉〈ψi|}. What average value will the experimentalist see in N measurements? For large N the relative frequency of occurrence of state |ψi〉 is Ni/N = pi. Therefore the mean value observed by the experimentalist is

〈A〉 =∑i

pitr{A|ψi〉〈ψi|} . (2.24)

This equation is perfectly correct and we are able to calculate the ex-pectation value of any observable A for any set of states {|ψi〉} andprobabilities {pi}. However, when the number of possible states |ψi〉 isreally large then we have a lot of calculations to do. For each state |ψi〉we have to calculate the expectation value 〈ψi|A|ψi〉 = tr{A|ψi〉〈ψi|}and then sum up all the expectation values with their probabilities. Ifwe then want to measure a different observable B then we need to dothe same lengthy calculation again. That’s not efficient at all! As atheoretical physicist I am of course searching for a better way to dothis calculation. Therefore let us reconsider Eq. (2.24), transformingit a bit and use it to define the density operator.

〈A〉 = ∑i pi tr{A|ψi〉〈ψi|}
    = tr{A ∑i pi |ψi〉〈ψi|} =: tr{Aρ} . (2.25)

The last equality is really the definition of the density operator ρ. Quite obviously, if I know the density operator ρ for the specific situation - here the oven generating the atomic beam - then I can calculate quite easily the expectation value that the experimentalist will measure. If the experimentalist changes his apparatus and now measures the observable B then this is not such a big deal anymore. We just need to re-evaluate Eq. (2.25) with A replaced by B, rather than recompute the potentially huge number of individual expectation values tr{B|ψi〉〈ψi|}.

Exercise: Check that in general we have 〈f(A)〉 = tr{f(A)ρ}.
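As an illustration (my own numerical sketch, not part of the original notes), here is a small Python/NumPy check for a two-level system with arbitrarily chosen states, probabilities and observable. It confirms that the single trace tr{Aρ} reproduces the weighted sum of the individual expectation values of Eq. (2.24).

import numpy as np

# hypothetical ensemble: states |psi_i> with probabilities p_i (chosen for illustration only)
psis = [np.array([1.0, 0.0]),                    # |0>
        np.array([1.0, 1.0]) / np.sqrt(2)]       # |+>
ps = [0.25, 0.75]

# an arbitrary Hermitean observable A
A = np.array([[0.5, 1.0 - 0.5j],
              [1.0 + 0.5j, -0.5]])

# density operator rho = sum_i p_i |psi_i><psi_i|
rho = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(ps, psis))

# Eq. (2.24): weighted sum of individual expectation values
mean_sum = sum(p * (psi.conj() @ A @ psi) for p, psi in zip(ps, psis))

# Eq. (2.25): one trace with the density operator
mean_trace = np.trace(A @ rho)

print(mean_sum.real, mean_trace.real)   # both give the same mean value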

Before I present some useful properties of a density operator, let me stress again the difference between a coherent superposition of quantum states and a statistical mixture of quantum states. In a coherent superposition state of a quantum mechanical system, each representative of the ensemble is in the same pure state of the form

|ψ〉 = ∑i αi|ψi〉 , (2.26)

which can be written as the density operator

ρ = |ψ〉〈ψ| = ∑i,j αi α*j |ψi〉〈ψj| . (2.27)

This is completely different from a statistical mixture where a representative is in a pure state |ψi〉 with a certain probability |αi|²! The corresponding density operator is

ρ = ∑i |αi|² |ψi〉〈ψi| , (2.28)

which is also called a mixed state. You should convince yourself that the two states in Eqs. (2.27) and (2.28) are indeed different, i.e.

ρ = ∑i |αi|² |ψi〉〈ψi| ≠ |ψ〉〈ψ| . (2.29)
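To make the difference concrete, here is a small NumPy check (my own illustration with an arbitrarily chosen two-level example): the pure superposition (|0〉+|1〉)/√2 and the 50/50 mixture of |0〉 and |1〉 give different density operators, differing in the off-diagonal elements.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# coherent superposition |psi> = (|0> + |1>)/sqrt(2), cf. Eqs. (2.26)-(2.27)
psi = (ket0 + ket1) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# statistical mixture with |alpha_i|^2 = 1/2, cf. Eq. (2.28)
rho_mixed = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

print(rho_pure)    # [[0.5, 0.5], [0.5, 0.5]]  -- coherences present
print(rho_mixed)   # [[0.5, 0. ], [0. , 0.5]]  -- no coherences
print(np.allclose(rho_pure @ rho_pure, rho_pure))     # True: rho^2 = rho, a pure state
print(np.allclose(rho_mixed @ rho_mixed, rho_mixed))  # False: a mixed state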

Now let us consider some properties of the density operator which can in fact be used as an alternative definition of the density operator.

Theorem 41 Any density operator satisfies

1. ρ is a Hermitean operator.

2. ρ is a positive semidefinite operator, i.e. ∀|ψ〉 : 〈ψ|ρ|ψ〉 ≥ 0.

3. tr{ρ} = 1

Proof: As we have defined the density operator already, we need to prove the above theorem. From our definition of the density operator we find

ρ† = (∑i pi|ψi〉〈ψi|)† = ∑i pi|ψi〉〈ψi| = ρ . (2.30)

This follows because each |ψi〉〈ψi| is a projection operator and therefore Hermitean, and the probabilities pi are real. This proves part 1). Part 2) follows from

〈ψ|ρ|ψ〉 = 〈ψ|(∑i pi|ψi〉〈ψi|)|ψ〉 = ∑i pi〈ψ|ψi〉〈ψi|ψ〉 = ∑i pi|〈ψ|ψi〉|² ≥ 0 . (2.31)


The last property follows from the fact that probabilities are normalized, i.e. ∑i pi = 1. Then we can see that

tr{ρ} = tr{∑i pi|ψi〉〈ψi|} = ∑i pi tr{|ψi〉〈ψi|} = ∑i pi = 1 . (2.32)

This finishes the proof.
Some simple but useful properties that can be derived from Theorem 41 are

1. ρ² = ρ ⇐⇒ ρ is a pure state

2. All eigenvalues of ρ lie in the interval [0, 1].

Proof: Exercise!

A natural question that arises when one considers the density operator is that of its decomposition into pure states, i.e. ρ = ∑i pi|ψi〉〈ψi|. Is this decomposition unique, or are there many possible ways to obtain a given density operator as a statistical mixture of pure states? The answer can be found by looking at a particularly simple example. Let us consider the density operator representing the ’completely mixed’ state of a two-level system, i.e.

ρ = 1/2 (|0〉〈0| + |1〉〈1|) . (2.33)

Looking at Eq. (2.33) we readily conclude that this density operator can be generated by an oven sending out atoms in states |0〉 or |1〉 with a probability of 50% each, see part a) of Fig. 2.2. That conclusion is perfectly correct; however, we could also imagine an oven that generates atoms in states |±〉 = (|0〉 ± |1〉)/√2 with a probability of 50% each.

Let us check that this indeed gives rise to the same density operator.

ρ = 1/2 (|+〉〈+| + |−〉〈−|)
  = 1/4 (|0〉〈0| + |1〉〈0| + |0〉〈1| + |1〉〈1|) + 1/4 (|0〉〈0| − |1〉〈0| − |0〉〈1| + |1〉〈1|)
  = 1/2 (|0〉〈0| + |1〉〈1|) . (2.34)


Figure 2.2: a) An oven generates particles in states |0〉 or |1〉 with probability 50%. An experimentalist measures observable A and finds the mean value tr{Aρ}. b) The oven generates particles in states |±〉 = (|0〉 ± |1〉)/√2 with probability 50%. The experimentalist will find the same mean value tr{Aρ}.

Therefore the same density operator can be obtained in different ways. In fact, we can usually find infinitely many ways of generating the same density operator. What does this mean? Does this make the density operator an ill-defined object, as it cannot distinguish between different experimental realisations? The answer to that is a clear NO! The point is that if two realisations give rise to the same density operator, then, as we have seen above, we will also find exactly the same expectation values for any observable that we may choose to measure. If a quantum system in a mixed state gives rise to exactly the same predictions for all possible measurements, then we are forced to say that the two states are indeed the same! Of course this is something that needs to be confirmed experimentally and it turns out that it is true. Even more interestingly, it turns out that if I could distinguish two situations that are described by the same density operator then I would be able to transmit signals faster than light! A clear impossibility, as it contradicts special relativity. This is what I will explain to you in the next section using the concept of entanglement, which you are going to encounter for the first time.
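The calculation in Eq. (2.34) is easy to verify numerically; the following short NumPy snippet (an illustration of mine, not from the notes) builds the density operators for the two ovens of Fig. 2.2 and confirms that they are identical, so every measured mean value tr{Aρ} agrees.

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ket_plus  = (ket0 + ket1) / np.sqrt(2)
ket_minus = (ket0 - ket1) / np.sqrt(2)

# oven a): |0> or |1>, each with probability 1/2
rho_a = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

# oven b): |+> or |->, each with probability 1/2
rho_b = 0.5 * np.outer(ket_plus, ket_plus) + 0.5 * np.outer(ket_minus, ket_minus)

print(np.allclose(rho_a, rho_b))   # True: the same density operator, Eq. (2.34)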


2.5 Mixed states, Entanglement and the speed of light

In the previous section I have introduced the density operator to describe situations in which we have a lack of knowledge due to an imperfect preparation of a quantum state. However, this is not the only situation in which we have to use a density operator. In this section I will show you that the density operator description may also become necessary when the system that we are investigating is only the accessible part of some larger total system. Even if this larger system is in a pure quantum state, we will see that the smaller system behaves in every respect like a mixed state and has to be described by a density operator. This idea will then lead us to the insight that two different preparations that are described by the same density operator cannot be distinguished experimentally, because otherwise we would be able to send messages faster than the speed of light. This would clearly violate the special theory of relativity.

To understand this new way of looking at density operators, I first need to explain how to describe quantum mechanical systems that consist of more than one particle. This is the purpose of the next subsection.

2.5.1 Quantum mechanics for many particles

In this section I will show you what you have to do when you want to describe a system of two particles quantum mechanically. The generalization to arbitrary numbers of particles will then be obvious.

If we have two particles, then the states of each particle are elements of a Hilbert space; for particle A we have the Hilbert space HA which is spanned by a set of basis states {|φi〉A}i=1,...,N , while particle B has the Hilbert space HB spanned by the set of basis states {|ψj〉B}j=1,...,M . The two Hilbert spaces are not necessarily equal and may describe totally different quantities or particles.

Now imagine that system A is in state |φi〉A and system B is in state |ψj〉B. Then we write the total state of both systems in the tensor product form

|Ψtot〉 = |φi〉 ⊗ |ψj〉 . (2.35)

The symbol ⊗ denotes the tensor product, which is not to be confused with ordinary products. Clearly there are N · M such combinations of basis vectors. These vectors can be thought of as spanning a larger Hilbert space HAB = HA ⊗ HB which describes all possible states that the two particles can be in.

Of course we have to be able to add different states in the new Hilbert space HAB. We have to define this addition such that it is compatible with physics. Imagine that system A is in a superposition state |Φ〉A = a1|φ1〉 + a2|φ2〉 and system B is in the basis state |Ψ〉 = |ψ1〉. Then

|Φ〉A ⊗ |Ψ〉 = (a1|φ1〉A + a2|φ2〉A)⊗ |ψ1〉 (2.36)

is the joint state of the two particles. Now surely it makes sense to say that

|Φ〉A ⊗ |Ψ〉 = a1|φ1〉A ⊗ |ψ1〉+ a2|φ2〉A ⊗ |ψ1〉 . (2.37)

This is so because the existence of an additional particle (maybe at the end of the universe) cannot change the linearity of quantum mechanics of system A. As the same argument applies for system B we should have in general

(∑i ai|φi〉) ⊗ (∑j bj|ψj〉) = ∑ij ai bj |φi〉 ⊗ |ψj〉 . (2.38)

The set of states {|φi〉A ⊗ |ψj〉B} forms a basis of the state space of two particles.

Now let us see how the scalar product between two states |ψ1〉A ⊗ |ψ2〉B and |φ1〉A ⊗ |φ2〉B has to be defined. Clearly if |ψ2〉 = |φ2〉 then we should have

(|ψ1〉A ⊗ |ψ2〉B, |φ1〉A ⊗ |φ2〉B) = (|ψ1〉A, |φ1〉A) (2.39)

again because the existence of an unrelated particle somewhere in the world should not change the scalar product of the first state.


Eq. (2.39) can only be true if in general

(|ψ1〉A ⊗ |ψ2〉B, |φ1〉A ⊗ |φ2〉B) = (|ψ1〉A, |φ1〉A) (|ψ2〉B, |φ2〉B) . (2.40)

The tensor product of linear operators is defined by the action of the operator on all possible product states. We define

(A ⊗ B)(|φ〉A ⊗ |ψ〉B) := (A|φ〉A) ⊗ (B|ψ〉B) . (2.41)

In all these definitions I have only used states of the form |φ〉 ⊗ |ψ〉, which are called product states. Are all states of this form? The answer is evidently no, because we can form linear superpositions of different product states. An example of such a linear superposition is the state

|Ψ〉 = (|00〉 + |11〉)/√2 . (2.42)

Note that I now omit the ⊗ and abbreviate |0〉 ⊗ |0〉 by |00〉. If you try to write the state Eq. (2.42) as a product,

(|00〉 + |11〉)/√2 = (α|0〉 + β|1〉) ⊗ (γ|0〉 + δ|1〉) = αγ|00〉 + αδ|01〉 + βγ|10〉 + βδ|11〉 .

This implies that αδ = 0 = βγ. If we choose α = 0 then also αγ = 0 and the two sides cannot be equal. Likewise if we choose δ = 0 then βδ = 0 and the two sides are different. As either α = 0 or δ = 0 we arrive at a contradiction. States which cannot be written in product form are called entangled states. In the last part of these lectures we will learn a bit more about the weird properties of the entangled states. They allow such things as quantum state teleportation, quantum cryptography and quantum computation.
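A quick numerical way to test whether a two-particle state vector is a product state is to reshape its coefficients into an N × M matrix and count the non-zero singular values: a product state has exactly one. The following NumPy sketch (my own illustration, not part of the notes) applies this to the state of Eq. (2.42).

import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# |Psi> = (|00> + |11>)/sqrt(2) as a vector in the product basis
psi = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# a product state (a|0>+b|1>) x (c|0>+d|1>) corresponds to a rank-1 coefficient matrix
coeff = psi.reshape(2, 2)
singular_values = np.linalg.svd(coeff, compute_uv=False)
print(singular_values)                   # [0.7071..., 0.7071...]
print(np.sum(singular_values > 1e-12))   # 2 -> not a product state: entangled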

Let us now see how the tensor product looks when you write states as column vectors. The way you write them is really arbitrary, but of course there are clumsy ways and there are smart ways. In the literature you will find only one way. This particular notation for the tensor product of column vectors and matrices ensures that the relation (A ⊗ B)|φ〉 ⊗ |ψ〉 = A|φ〉 ⊗ B|ψ〉 is true when you do all the manipulations using matrices and column vectors. The tensor product between an m-dimensional vector and an n-dimensional vector is written as

(a1, . . . , am)ᵀ ⊗ (b1, . . . , bn)ᵀ = (a1b1, a1b2, . . . , a1bn, a2b1, . . . , ambn)ᵀ , (2.43)

i.e. each component ai of the first vector is replaced by the block ai(b1, . . . , bn)ᵀ.

The tensor product of a linear operator A on an m-dimensional space and a linear operator B on an n-dimensional space in matrix notation is given by

          ( a11 B  . . .  a1m B )
A ⊗ B  =  (  ...   . . .   ...  )  ,     (2.44)
          ( am1 B  . . .  amm B )

i.e. every entry aij of A is replaced by the n × n block aij B, so that A ⊗ B is an (mn) × (mn) matrix.

The simplest explicit examples that I can give are those for two 2-dimensional Hilbert spaces. There we have

(a1, a2)ᵀ ⊗ (b1, b2)ᵀ = (a1b1, a1b2, a2b1, a2b2)ᵀ (2.45)


and

( a11  a12 )     ( b11  b12 )     ( a11b11  a11b12  a12b11  a12b12 )
(          )  ⊗  (          )  =  ( a11b21  a11b22  a12b21  a12b22 )  .     (2.46)
( a21  a22 )     ( b21  b22 )     ( a21b11  a21b12  a22b11  a22b12 )
                                  ( a21b21  a21b22  a22b21  a22b22 )
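In NumPy the tensor product of Eqs. (2.43)-(2.46) is exactly the Kronecker product np.kron. The sketch below (my own check, with randomly chosen matrices and vectors) verifies that this ordering satisfies (A ⊗ B)(|φ〉 ⊗ |ψ〉) = (A|φ〉) ⊗ (B|ψ〉).

import numpy as np

rng = np.random.default_rng(0)

A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))   # operator on system A
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))   # operator on system B
phi = rng.standard_normal(2) + 1j * rng.standard_normal(2)           # |phi>_A
psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)           # |psi>_B

lhs = np.kron(A, B) @ np.kron(phi, psi)    # (A x B)(|phi> x |psi>)
rhs = np.kron(A @ phi, B @ psi)            # (A|phi>) x (B|psi>)
print(np.allclose(lhs, rhs))               # True, cf. Eq. (2.41)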

2.5.2 How to describe a subsystem of some large system?

Let us now imagine the following situation. The total system that we consider consists of two particles, each of which is described by an N(M)-dimensional Hilbert space HA (HB) with basis vectors {|φi〉A}i=1,...,N ({|ψi〉B}i=1,...,M ). Let us also assume that the two particles together are in an arbitrary, possibly mixed, state ρAB. One person, Alice, is holding particle A, while particle B is held by a different person (let's call him Bob) who, for whatever reason, refuses Alice access to his particle. This situation is schematically represented in Fig. 2.3. Now imagine that Alice makes a measurement on her system alone (she cannot access Bob's system). This means that she is measuring an operator of the form A ⊗ 1. The operator on Bob's system must be the identity operator because no measurement is performed there. I would now like to calculate the expectation value of this measurement, i.e.

〈A〉 = ∑ij A〈φi| B〈ψj| (A ⊗ 1) ρAB |φi〉A|ψj〉B = trAB{(A ⊗ 1) ρAB} ,

where trAB means the trace over both systems, that of Alice and that of Bob. This description is perfectly ok and always yields the correct result. However, if we only ever want to know about outcomes of measurements on Alice's system alone then it should be sufficient to have a quantity that describes the state of her system alone without any reference to Bob's system. I will now derive such a quantity, which is called the reduced density operator.

Our task is the definition of a density operator ρA for Alice's system which satisfies

〈A〉 = trAB{A⊗ 1 ρAB} = trA{AρA} (2.47)


Figure 2.3: Alice and Bob hold a joint system, here composed of two particles. However, Alice does not have access to Bob's particle and vice versa.

for all observables A. How can we construct this operator? Let us rewrite the left hand side of Eq. (2.47)

〈A〉 = ∑_{i=1}^{N} ∑_{j=1}^{M} A〈φi| B〈ψj| (A ⊗ 1) ρAB |φi〉A|ψj〉B
    = ∑_{i=1}^{N} A〈φi| A ( ∑_{j=1}^{M} B〈ψj| ρAB |ψj〉B ) |φi〉A
    = trA{ A ∑_{j=1}^{M} B〈ψj| ρAB |ψj〉B } .

In the last step I have effectively split the trace operation into two parts. This split makes it clear that the state of Alice's system is described by the reduced density operator

ρA := ∑_{j=1}^{M} B〈ψj| ρAB |ψj〉B = trB{ρAB} , (2.48)

where the last identity describes the operation of taking the partial trace. The point is now that the reduced density operator allows Alice to describe the outcome of measurements on her particle without any reference to Bob's system, e.g.

〈A〉 = trA{AρA} .

When I am giving you a density operator ρAB then you will ask how to actually calculate the reduced density operator explicitly. Let me first consider the case where ρAB is a pure product state, i.e. ρAB = |φ1〉A|ψ1〉B A〈φ1|B〈ψ1|. Then following Eq. (2.48) the partial trace is

trB{|φ1ψ1〉〈φ1ψ1|} = ∑_{i=1}^{M} B〈ψi| (|φ1ψ1〉〈φ1ψ1|) |ψi〉B
                  = ∑_{i=1}^{M} |φ1〉A A〈φ1| B〈ψi|ψ1〉B B〈ψ1|ψi〉B
                  = |φ1〉A A〈φ1| . (2.49)

Now it is clear how to proceed for an arbitrary density operator of the two systems. Just write down the density operator in a product basis

ρAB = ∑_{i,k=1}^{N} ∑_{j,l=1}^{M} αij,kl |φi〉A A〈φk| ⊗ |ψj〉B B〈ψl| , (2.50)

and then use Eq. (2.49) to find

trB{ρAB} = ∑_{c=1}^{M} B〈ψc| ( ∑_{i,k=1}^{N} ∑_{j,l=1}^{M} αij,kl |φi〉A A〈φk| ⊗ |ψj〉B B〈ψl| ) |ψc〉B
         = ∑_{c=1}^{M} ∑_{i,k=1}^{N} ∑_{j,l=1}^{M} αij,kl |φi〉A A〈φk| (B〈ψc|ψj〉B)(B〈ψl|ψc〉B)
         = ∑_{c=1}^{M} ∑_{i,k=1}^{N} ∑_{j,l=1}^{M} αij,kl |φi〉A A〈φk| δcj δlc
         = ∑_{i,k=1}^{N} ( ∑_{c=1}^{M} αic,kc ) |φi〉A A〈φk| . (2.51)

Let me illustrate this with a concrete example.
Example: Consider two systems A and B that have two energy levels each. I denote the basis states in both Hilbert spaces by |1〉 and |2〉. The basis states in the tensor product of both spaces are then {|11〉, |12〉, |21〉, |22〉}. Using this basis I write a possible state of the two systems as

ρAB = 1/3 |11〉〈11| + 1/3 |11〉〈12| + 1/3 |11〉〈21|
    + 1/3 |12〉〈11| + 1/3 |12〉〈12| + 1/3 |12〉〈21|
    + 1/3 |21〉〈11| + 1/3 |21〉〈12| + 1/3 |21〉〈21| . (2.52)

Now let us take the partial trace over the second system B. This means that we collect those terms which contain states of the form |i〉B B〈i|. Then we find that

ρA = 2/3 |1〉〈1| + 1/3 |1〉〈2| + 1/3 |2〉〈1| + 1/3 |2〉〈2| . (2.53)

This is not a pure state (check that indeed det ρA ≠ 0). It is not an accident that I have chosen this example, because it shows that the state of Alice's system may be a mixed state (incoherent superposition) although the total system that Alice and Bob are holding is in a pure state. To see this, just check that

ρAB = |Ψ〉AB AB〈Ψ| , (2.54)

where |Ψ〉AB = (|11〉 + |12〉 + |21〉)/√3. This is evidently a pure state! Nevertheless, the reduced density operator Eq. (2.53) of Alice's system alone is described by a mixed state!
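The partial trace of Eq. (2.48) is easy to implement numerically by reshaping the density matrix into a four-index array and summing over the B indices. The following NumPy sketch (my own illustration) builds ρAB = |Ψ〉〈Ψ| from the pure state of Eq. (2.54) and recovers the mixed reduced state of Eq. (2.53).

import numpy as np

def partial_trace_B(rho_AB, dim_A, dim_B):
    """Reduced density operator rho_A = tr_B(rho_AB), cf. Eq. (2.48)."""
    rho = rho_AB.reshape(dim_A, dim_B, dim_A, dim_B)
    return np.einsum('ijkj->ik', rho)

ket1 = np.array([1.0, 0.0])   # basis state |1> of a two-level system
ket2 = np.array([0.0, 1.0])   # basis state |2>

# |Psi>_AB = (|11> + |12> + |21>)/sqrt(3), Eq. (2.54)
psi_AB = (np.kron(ket1, ket1) + np.kron(ket1, ket2) + np.kron(ket2, ket1)) / np.sqrt(3)
rho_AB = np.outer(psi_AB, psi_AB.conj())

rho_A = partial_trace_B(rho_AB, 2, 2)
print(rho_A)                   # [[2/3, 1/3], [1/3, 1/3]], Eq. (2.53)
print(np.linalg.det(rho_A))    # 1/9 != 0, so rho_A is not a pure state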

In fact this is a very general behaviour. Whenever Alice and Bob hold a system that is in a joint pure state but that cannot be written as a product state |φ〉 ⊗ |ψ〉 (check this for Eq. (2.54)), then the reduced density operator describing one of the subsystems represents a mixed state. Such states are called entangled states and they have quite a lot of weird properties, some of which you will encounter in this lecture and the exercises.

Now let us see how these ideas help us to reveal a connection between special relativity and mixed states.

2.5.3 The speed of light and mixed states.

In the previous subsection you have seen that a mixed state of your system can also arise because your system is part of some larger inaccessible system. Usually the subsystem that you are holding is then in a mixed state described by a density operator. When I introduced the density operator, I told you that a particular mixed state can be realized in different ways. An example was

ρ = 1/2 |1〉〈1| + 1/2 |2〉〈2|
  = 1/2 |+〉〈+| + 1/2 |−〉〈−|

with |±〉 = (|1〉 ± |2〉)/√2. However, I pointed out that these two realizations are physically the same because you are unable to distinguish them. I did not prove this statement to you at that point. I will now correct this omission. I will show that if you were able to distinguish two realizations of the same density operator, then you could send signals faster than the speed of light, i.e. this would violate special relativity. How would that work?

Assume two persons, Alice and Bob, hold a pair of particles where each particle is described by a two-dimensional Hilbert space. Let us assume that Alice and Bob are sitting close together, and are able to access each other's particle such that they are able to prepare the state

|ψ〉 = (|11〉 + |22〉)/√2 . (2.55)

Now Alice and Bob walk away from each other until they are separated by, say, one light year. Now Alice discovers the answer to a very tricky question which is either ’yes’ or ’no’ and would like to send this answer to Bob. To do this she performs either of the following measurements on her particle.

’yes’ Alice measures the observable Ayes = |1〉〈1| + 2|2〉〈2|. The probability to find the state |1〉 is p1 = 1/2 and to find |2〉 is p2 = 1/2. This follows from postulate 4 because pi = tr{|i〉〈i| ρA} = 1/2, where ρA = 1/2 (|1〉〈1| + |2〉〈2|) is the reduced density operator describing Alice’s system. After the result |1〉 the state of the total system is |1〉〈1| ⊗ 1 |ψ〉 ∼ |11〉, after the result |2〉 the state of the total system is |2〉〈2| ⊗ 1 |ψ〉 ∼ |22〉.

’no’ Alice measures the observable Ano = |+〉〈+| + 2|−〉〈−| ≠ Ayes. The probability to find the state |+〉 is p+ = 1/2 and to find |−〉 is p− = 1/2. This follows from postulate 4 because pi = tr{|i〉〈i| ρA} = 1/2, where ρA = 1/2 (|+〉〈+| + |−〉〈−|) is the reduced density operator describing Alice’s system. After the result |+〉 the state of the total system is |+〉〈+| ⊗ 1 |ψ〉 ∼ |++〉, after the result |−〉 the state of the total system is |−〉〈−| ⊗ 1 |ψ〉 ∼ |−−〉.

What does this imply for Bob’s system? As the measurement results 1 and 2 occur with probability 50%, Bob holds an equal mixture of the two states corresponding to these outcomes. In fact, when Alice wanted to send ’yes’ then the state of Bob’s particle after Alice’s measurement is given by

ρyes = 1/2 |1〉〈1| + 1/2 |2〉〈2| (2.56)

and if Alice wanted to send ’no’ then the state of Bob’s particle is given by

ρno = 1/2 |+〉〈+| + 1/2 |−〉〈−| . (2.57)

You can check quite easily that ρno = ρyes. If Bob were able to distinguish the two realizations of the density operators then he could find out which of the two measurements Alice has carried out. In that case he would then be able to infer what Alice’s answer is. He can do that immediately after Alice has carried out her measurements, which in principle requires negligible time. This would imply that Alice could send information to Bob over a distance of a light year in virtually no time at all. This clearly violates the theory of special relativity.

As we know that special relativity is an extremely well established and confirmed theory, this shows that Bob is unable to distinguish the two density operators Eqs. (2.56-2.57).
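You can also confirm numerically that Bob's reduced state does not depend on which measurement Alice performs. The following NumPy sketch (my own check, reusing the partial-trace idea from above) averages the post-measurement states for the ’yes’ and ’no’ strategies and finds the same density operator in both cases.

import numpy as np

def partial_trace_A(rho_AB, dim_A, dim_B):
    # trace over Alice's system: rho_B = tr_A(rho_AB)
    rho = rho_AB.reshape(dim_A, dim_B, dim_A, dim_B)
    return np.einsum('ijil->jl', rho)

ket1 = np.array([1.0, 0.0])
ket2 = np.array([0.0, 1.0])
plus  = (ket1 + ket2) / np.sqrt(2)
minus = (ket1 - ket2) / np.sqrt(2)

# shared state |psi> = (|11> + |22>)/sqrt(2), Eq. (2.55)
psi = (np.kron(ket1, ket1) + np.kron(ket2, ket2)) / np.sqrt(2)

def bob_state_after(alice_basis):
    """Bob's average state after Alice measures in the given orthonormal basis."""
    rho_B = np.zeros((2, 2), dtype=complex)
    for a in alice_basis:
        proj = np.kron(np.outer(a, a.conj()), np.eye(2))   # |a><a| (x) 1
        unnormalised = proj @ psi
        prob = np.vdot(unnormalised, unnormalised).real    # outcome probability
        post = np.outer(unnormalised, unnormalised.conj()) / prob
        rho_B += prob * partial_trace_A(post, 2, 2)
    return rho_B

rho_yes = bob_state_after([ket1, ket2])    # Alice encodes 'yes', Eq. (2.56)
rho_no  = bob_state_after([plus, minus])   # Alice encodes 'no',  Eq. (2.57)
print(np.allclose(rho_yes, rho_no))        # True: Bob cannot tell the difference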

2.6 Generalized measurements


Chapter 3

Dynamics and Symmetries

So far we have only been dealing with stationary systems, i.e. we did not have any prescription for the time evolution of quantum mechanical systems. We were only preparing systems in particular quantum mechanical states and then subjecting them to measurements. In reality of course any system evolves in time and we need to see what the quantum mechanical rules for time evolution are. This is the subject of this chapter.

3.1 The Schrodinger Equation

Most of you will know that the prescription that determines the quantum mechanical time evolution is given by the Schrodinger equation. Before I state this as the next postulate, let us consider what any decent quantum mechanical time evolution operator should satisfy, and then we will convince ourselves that the Schrodinger equation satisfies these criteria. In fact, once we have accepted these properties, we will not have too much freedom of choice for the form of the Schrodinger equation.

Definition 42 The time evolution operator U(t2, t1) maps the quantum state |ψ(t1)〉 to the state |ψ(t2)〉 obeying the properties

1. U(t2, t1) is unitary.


2. We have U(t2, t1)U(t1, t0) = U(t2, t0). (semi-group property)

3. U(t, t1) is differentiable in t.

What is the reason for these assumptions? First of all, the time evolution operator has to map physical states onto physical states. That means in particular that a normalized state |ψ(t0)〉 is mapped into a normalized state |ψ(t1)〉, i.e. for any |ψ(t0)〉

〈ψ(t0)|ψ(t0)〉 = 〈ψ(t1)|ψ(t1)〉 = 〈ψ(t0)|U†(t1, t0)U(t1, t0)|ψ(t0)〉 . (3.1)

Therefore it seems a fair guess that U(t1, t0) should be a unitary operator.

The second property of the time evolution operator demands that it does not make a difference if we first evolve the system from t0 to t1 and then from t1 to t2, or if we evolve it directly from time t0 to t2. This is a very reasonable assumption. Note however, that this does not imply that we may measure the system at the intermediate time t1. In fact we must not interact with the system.

The third condition is one of mathematical convenience and of physical experience. Every system evolves continuously in time. This is an observation that is confirmed in experiments. Indeed, dynamics in physics is usually described by differential equations, which already implies that observable quantities and physical states are differentiable in time. You may then wonder what all the fuss about quantum jumps is about. The point is that we are talking about the time evolution of a closed quantum mechanical system. This means that we, e.g. a person from outside the system, do not interact with the system and in particular this implies that we are not measuring the system. We have seen that in a measurement indeed the state of a system can change discontinuously, at least according to our experimentally well tested Postulate 4. In summary, the quantum state of a closed system changes smoothly unless the system is subjected to a measurement, i.e. an interaction with an outside observer.

What further conclusions can we draw from the properties of the time evolution operator? Let us consider the time evolution operator for very short time differences between t and t0 and use that it is differentiable in time (property 3 of Definition 42). Then we find

U(t, t0) = 1 − (i/h) H(t0)(t − t0) + . . . . (3.2)

Here we have used the fact that we have assumed that the time evolution of a closed quantum mechanical system is smooth. The operator H(t0) that appears on the right hand side of Eq. (3.2) is called the Hamilton operator of the system. Let us apply this to an initial state |ψ(t0)〉 and take the time derivative with respect to t on both sides. Then we find

ih∂t|ψ(t)〉 ≡ ih∂t(U(t, t0)|ψ(t0)〉) ≈ H(t0)|ψ(t0)〉 . (3.3)

If we now carry out the limit t→ t0 then we finally find

ih∂t|ψ(t)〉 = H(t)|ψ(t)〉 . (3.4)

This is the Schrodinger equation in Dirac notation in a coordinate independent form. To make the connection to the Schrodinger equation as you know it from last year's quantum mechanics course, let us consider an example.
Example: Use the Hamilton operator for a particle moving in a one-dimensional potential V(x, t) and write the Schrodinger equation in the position representation. This gives

ih∂t〈x|ψ(t)〉 = 〈x|ih∂t|ψ(t)〉 = 〈x|H|ψ(t)〉
            = ( −(h²/2m) d²/dx² + V(x, t) ) 〈x|ψ(t)〉 .

This is the Schrodinger equation in the position representation as you know it from the second year course.

Therefore, from the assumptions on the properties of the time evolution operator that we made above we were led to a differential equation for the time evolution of a state vector. Again, of course, this result has to be tested in experiments, and a (sometimes) difficult task is to find the correct Hamilton operator H that governs the time evolution of the system. Now let us formulate the result of our considerations in the following

Postulate 5 The time evolution of the quantum state of an isolated quantum mechanical system is determined by the Schrodinger equation

ih∂t|ψ(t)〉 = H(t)|ψ(t)〉 , (3.5)

where H is the Hamilton operator of the system.

As you may remember from your second year course, the Hamilton operator is the observable that determines the energy eigenvalues of the quantum mechanical system. In the present course I cannot justify this in more detail, as we want to learn a lot more things about quantum mechanics. The following discussion will however shed some more light on the time evolution of a quantum mechanical system and in particular it will pave our way towards the investigation of quantum mechanical symmetries and conserved quantities.
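For a time-independent Hamilton operator the Schrodinger equation is solved by |ψ(t)〉 = e−iHt/h |ψ(0)〉. The following NumPy sketch (my own illustration; the two-level Hamiltonian and the choice h = 1 are arbitrary) constructs this unitary from the eigendecomposition of H and checks that the norm of the state is preserved.

import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])     # arbitrary Hermitean two-level Hamiltonian

def evolution_operator(H, t):
    """U(t, 0) = exp(-i H t / hbar), built from the eigendecomposition of H."""
    E, V = np.linalg.eigh(H)                    # H = V diag(E) V^dagger
    return V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)      # initial state |psi(0)>

for t in [0.0, 0.5, 1.0, 2.0]:
    psi_t = evolution_operator(H, t) @ psi0
    print(t, np.vdot(psi_t, psi_t).real)        # norm stays 1: U(t, 0) is unitary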

3.1.1 The Heisenberg picture

In the preceding section we have considered the time evolution of the quantum mechanical state vector. In fact, we have put all the time evolution into the state and have left the observables time independent (unless they have some intrinsic time dependence). This way of looking at the time evolution is called the Schrodinger picture. This is not the only way to describe quantum dynamics. We can also go to the other extreme, namely leave the states time invariant and evolve the observables in time. How can we find out what the time evolution of a quantum mechanical observable is? We need some guidance, some property that must be the same in both pictures of quantum mechanics. Such a property must be experimentally observable. What I will be using here is the fact that expectation values of any observable have to be the same in both pictures. Let us start by considering the expectation value of an operator AS in the Schrodinger picture at a time t given an initial state |ψ(t0)〉. We find

〈A〉 = 〈ψ(t)|AS|ψ(t)〉 = 〈ψ(t0)|U †(t, t0)ASU(t, t0)|ψ(t0)〉 . (3.6)

Looking at Eq. (3.6) we see that we can interpret the right hand side also as the expectation value of a time-dependent operator

AH(t) = U †(t, t0)ASU(t, t0) (3.7)

in the initial state |ψ(t0)〉. That is

〈ψ(t)|AS|ψ(t)〉 = 〈A〉 = 〈ψ(t0)|AH(t)|ψ(t0)〉 . (3.8)

Viewing the state of the system as time-independent, and the observables as evolving in time, is called the Heisenberg picture of quantum mechanics. As we have seen in Eq. (3.8), both the Schrodinger picture and the Heisenberg picture give exactly the same predictions for physical observables. But, depending on the problem, it can be advantageous to choose one or the other picture.

Like the states in the Schrodinger picture, the Heisenberg operators obey a differential equation. This can easily be obtained by taking the time derivative of Eq. (3.7). To see this we first need to know the time derivative of the time evolution operator. Using |ψ(t)〉 = U(t, t0)|ψ(t0)〉 in the Schrodinger equation Eq. (3.4) we find for any |ψ(t0)〉

ih∂tU(t, t0)|ψ(t0)〉 = H(t)U(t, t0)|ψ(t0)〉 (3.9)

and therefore

ih∂tU(t, t0) = H(t)U(t, t0) . (3.10)

Assuming that the operator in the Schrodinger picture has no explicit time dependence, we find

d/dt AH(t) = d/dt ( U†(t, t0) AS U(t, t0) )
           = (∂U†(t, t0)/∂t) AS U(t, t0) + U†(t, t0) AS (∂U(t, t0)/∂t)
           = (i/h) U†(t, t0) H(t) AS U(t, t0) − U†(t, t0) AS (i/h) H(t) U(t, t0)
           = (i/h) U†(t, t0) [H(t), AS] U(t, t0)
           = (i/h) [H(t), AS]H (3.11)
           = (i/h) [HH(t), AH] . (3.12)

It is easy to check that for an operator that has an explicit time dependence in the Schrodinger picture, we find the Heisenberg equation

d/dt AH(t) = (i/h) [H(t), AH(t)] + (∂AS/∂t)H (t) . (3.13)

One of the advantages of the Heisenberg equation is that it has a direct analogue in classical mechanics. In fact this analogy can be viewed as a justification to identify the Hamiltonian of the system H with the energy of the system. I will not go into details here and for those of you who would like to know more about that I rather recommend the book: H. Goldstein, Classical Mechanics, Addison-Wesley (1980).
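As a consistency check of Eq. (3.8), the sketch below (again my own, with the same arbitrary two-level Hamiltonian and observable as above) compares the Schrodinger-picture expectation value 〈ψ(t)|AS|ψ(t)〉 with the Heisenberg-picture value 〈ψ(0)|U†(t, t0) AS U(t, t0)|ψ(0)〉.

import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])      # arbitrary Hermitean Hamiltonian
A_S = np.array([[0.0, 1.0],
                [1.0, 0.0]])     # some observable, time independent in the Schrodinger picture
psi0 = np.array([1.0, 0.0], dtype=complex)

E, V = np.linalg.eigh(H)
def U(t):
    return V @ np.diag(np.exp(-1j * E * t / hbar)) @ V.conj().T

t = 1.7
psi_t = U(t) @ psi0                         # Schrodinger picture: the state evolves
A_H = U(t).conj().T @ A_S @ U(t)            # Heisenberg picture: the operator evolves, Eq. (3.7)

print(np.vdot(psi_t, A_S @ psi_t).real)     # <psi(t)|A_S|psi(t)>
print(np.vdot(psi0, A_H @ psi0).real)       # <psi(0)|A_H(t)|psi(0)>  -- identical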

3.2 Symmetries and Conservation Laws

After this introduction to the Heisenberg picture it is now time to discuss the concept of symmetries and their relation to conservation laws.

3.2.1 The concept of symmetry

If we are able to look at a system in different ways and it appears the same to us then we say that a system has a symmetry. This simple statement will be put on solid ground in this section. Why are we interested in symmetries? The answer is simple! It makes our life easier. If we have a symmetry in our problem, then the solution will also have such a symmetry. This symmetry of the solution expresses itself in the existence of a quantity which is conserved for any solution to the problem. Of course we are always interested to know quantities that are conserved and the natural question is how these conserved quantities are related to the symmetry of the problem!

There are two ways to formulate symmetries, in an active way and a passive way. If we assume that we transform the system S into a new system S′ then we speak of an active transformation, i.e. we actively change the system. If we change the coordinate system in which we consider the same system, then we speak of a passive transformation. In these lectures I will adopt the active point of view because it appears to be the more natural way of speaking. An extreme example is the time reversal symmetry. Certainly we can make a system in which all momenta of particles are reversed (the active transformation) while we will not be able to actually reverse time, which would amount to the corresponding passive transformation.

Now let us formalize what we mean by a symmetry. In the active viewpoint, we mean that the systems S and S′ = TS look the same. We have to make sure what we mean by ’S and S′ look the same’. What does this mean in quantum mechanics? The only quantities that are accessible in quantum mechanics are expectation values or, even more basic, transition probabilities. Two systems are the same in quantum mechanics if the transition probabilities between corresponding states are the same. To be more precise let us adopt the

Definition 43 A transformation T is called a symmetry transformation when for all vectors |φ〉 and |ψ〉 in system S and the corresponding vectors |φ′〉 = T|φ〉 and |ψ′〉 = T|ψ〉 in system S′ we find

|〈ψ|φ〉| = |〈ψ′|φ′〉| . (3.14)

In this definition we have seen that quantum states are transformed as |φ′〉 = T|φ〉. This implies that observables, which are of the form A = ∑i ai|ai〉〈ai|, will be transformed according to the prescription A′ = T A T†.

Which transformations T are possible symmetry transformations? As all transition probabilities have to be preserved one would expect that symmetry transformations are automatically unitary transformations. But this conclusion would be premature for two reasons. Firstly, we did not demand that the scalar product is preserved, but only the absolute value of the scalar product, and secondly we did not even demand the linearity of the transformation T. Fortunately it turns out that a symmetry transformation T cannot be very much more general than a unitary transformation. This was proven in a remarkable theorem by Wigner which states

Theorem 44 Any symmetry transformation can either be represented by a unitary transformation or an anti-unitary transformation.

For a proof (which is rather lengthy) of this theorem you should have a look at books such as: K. Gottfried, Quantum Mechanics I, Benjamin (1966).

An anti-unitary transformation has the property

U(α|φ〉+ β|ψ〉) = α∗U |φ〉+ β∗U |ψ〉 (3.15)

and preserves all transition probabilities. An example of an anti-unitary symmetry is time-reversal, but this will be presented later.

Of course a system will usually be symmetric under more than just one particular transformation. In fact it will be useful to consider symmetry groups. Such groups may be discrete (you can number them with whole numbers) or continuous (they will be parametrized by a real number).

Definition 45 A continuous symmetry group is a set of symmetry transformations that can be parametrized by a real parameter such that the symmetry transformations can be differentiated with respect to this parameter.

Example: a) Any Hermitean operator A gives rise to a continuous symmetry group using the definition

U(ε) := eiAε/h . (3.16)

Obviously the operator U(ε) can be differentiated with respect to the parameter ε.

So far we have learned which transformations can be symmetry transformations. A symmetry transformation of a quantum mechanical system is either a unitary or anti-unitary transformation. However, for a given quantum system not every symmetry transformation is a symmetry of that quantum system. The reason for this is that it is not sufficient to demand that the configuration of a system is symmetric at a given instant in time, but also that this symmetry is preserved under the dynamics of the system! Otherwise we would actually be able to distinguish two ’symmetric’ states by just waiting and letting the system evolve. The invariance of the configurational symmetry under a time evolution will lead to observables that are conserved in time. As we are now talking about the time evolution of a quantum system, we realize that the Hamilton operator plays a major role in the theory of symmetries of quantum mechanical systems. In fact, this is not surprising because the initial state (which might possess some symmetry) and the Hamilton operator determine the future of a system completely.

Assume that we have a symmetry that is preserved in time. At the initial time the symmetry takes a state |ψ〉 into |ψ〉′ = T|ψ〉. The time evolution of the original system S is given by the Hamilton operator H while that of the transformed system S′ is given by H′ = T H T†. This leads to

|ψ(t)〉 = e−iHt/h|ψ(0)〉 (3.17)

for the time evolution in S. The time evolution of the transformed system S′ is governed by the Hamilton operator H′ so that we find

|ψ(t)〉′ = e−iH′t/h|ψ(0)〉′ . (3.18)

However, the state |ψ〉′ could also be viewed as a quantum state of the original system S! This means that the time evolution of system S given by the Hamilton operator H could be applied and we would find

|ψ(t)〉′′ = e−iHt/h|ψ(0)〉′ . (3.19)

If the symmetry of the system is preserved for all times then the two states |ψ(t)〉′ and |ψ(t)〉′′ cannot differ by more than a phase factor, i.e.

|ψ(t)〉′′ = eiφ|ψ(t)〉′ . (3.20)

because we need to have

|〈ψ′|ψ′〉| = |〈ψ(t)′|ψ(t)′′〉| . (3.21)


As Eq. (3.21) has to be true for all state vectors, the Hamilton operators H and H′ can differ only by a constant which is physically unobservable (it gives rise to the same global phase eiφ in all quantum states) and therefore the two Hamilton operators are essentially equal. Therefore we can conclude with

Lemma 46 A symmetry of a system S that holds for all times needs to be a unitary or anti-unitary transformation T that leaves the Hamilton operator H invariant, i.e.

H = H ′ = T HT † . (3.22)

Given a symmetry group which can be parametrized by a parameter such that the symmetry transformations are differentiable with respect to this parameter (see the Example in this section above), we can determine the conserved observable G of the system quite easily from the following relation

G = lim_{ε→0} i (U(ε) − 1)/ε . (3.23)

The quantity G is also called the generator of the symmetry group. Why is G conserved? From Eq. (3.23) we can see that for small ε we can write

U(ε) = 1 − iεG+O(ε2) , (3.24)

where G is a Hermitean operator and O(ε²) means that all other terms contain at least a factor of ε² and are therefore very small (see the first chapter for the proof). Now we know that the Hamilton operator is invariant under the symmetry transformation, i.e.

H = H ′ = U(ε)HU †(ε) . (3.25)

For small values of ε we find

H = (1 − iGε)H(1 + iGε)
  = H − i[G, H]ε + O(ε²) . (3.26)

As the equality has to be true for arbitrary but small ε, this implies that

[G, H] = 0 . (3.27)


Now we remember the Heisenberg equation Eq. (3.13) and realize that the rate of change of the observable G in the Heisenberg picture is proportional to the commutator Eq. (3.27),

dGH/dt = (i/h) [H, G]H = 0 , (3.28)

i.e. it vanishes. Therefore the expectation value of the observable G is constant, which amounts to saying that G is a conserved quantity, i.e.

d〈ψ(t)|G|ψ(t)〉/dt = d〈ψ(0)|GH(t)|ψ(0)〉/dt = 0 . (3.29)

Therefore we have the important

Theorem 47 The generator of a continuous symmetry group of a quantum system is a conserved quantity under the time evolution of that quantum system.

After these abstract considerations let us now consider some examples of symmetry groups.

3.2.2 Translation Symmetry and momentum conservation

Let us now explore translations of quantum mechanical systems. Again we adopt the active viewpoint of the transformations. First we need to define what we mean by translating the system by a distance a to the right. Such a transformation Ta has to map the state of the system |ψ〉 into |ψa〉 = Ta|ψ〉 such that

〈x|ψa〉 = 〈x− a|ψ〉 . (3.30)

Figure 3.1: The original wave function ψ(x) (solid line) and the shifted wave function ψa(x) = ψ(x − a). |ψa〉 represents a system that has been shifted to the right by a.

In Fig. 3.1 you can see that this is indeed a shift of the wavefunction (i.e. the system) by a to the right. Which operator represents the translation operator Ta? To see this, we begin with the definition Eq. (3.30). Then we use the representation of the identity operator 1 = ∫dp |p〉〈p| (see Eq. (1.131)) and Eq. (1.128). We find

〈x|Ta|ψ〉 = 〈x − a|ψ〉
         = ∫dp 〈x − a|p〉〈p|ψ〉
         = ∫dp (1/√(2πh)) eip(x−a)/h 〈p|ψ〉
         = ∫dp (1/√(2πh)) e−ipa/h eipx/h 〈p|ψ〉
         = ∫dp e−ipa/h 〈x|p〉〈p|ψ〉
         = 〈x| (∫dp e−ipa/h |p〉〈p|) |ψ〉
         = 〈x| e−ipa/h |ψ〉 . (3.31)

As this is true for any |ψ〉, we conclude that the translation operator has the form

Ta = e−ipa/h . (3.32)

This means that the momentum operator is the generator of translations. From Eq. (3.23) it then follows that in a translation invariant system momentum is conserved, i.e.

d〈ψ(t)|p|ψ(t)〉/dt = 0 .

Consider the Hamilton operator of a particle in a potential, which is

H = p²/2m + V(x) . (3.33)

The momentum operator is obviously translation invariant, but a translation invariant potential has to satisfy V(x) = V(x + a) for all a, which means that it is a constant.
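The statement Ta = e−ipa/h can be checked numerically on a grid: going to the momentum representation (here with a fast Fourier transform), multiplying by e−ipa/h and transforming back shifts a wave packet by a. This is a rough sketch of mine, with an arbitrary Gaussian wave packet and periodic boundary conditions; it is not taken from the notes.

import numpy as np

hbar = 1.0
N, L = 512, 40.0                   # grid points and box length (periodic boundaries)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)   # momentum grid matching the FFT

psi = np.exp(-(x + 5.0) ** 2)                    # Gaussian wave packet centred at x = -5
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

a = 3.0                                          # shift the system by a to the right
psi_shifted = np.fft.ifft(np.exp(-1j * p * a / hbar) * np.fft.fft(psi))

# compare with the directly shifted wave function psi(x - a)
psi_direct = np.exp(-((x - a) + 5.0) ** 2)
psi_direct /= np.sqrt(np.sum(np.abs(psi_direct) ** 2) * dx)
print(np.allclose(psi_shifted, psi_direct, atol=1e-8))   # True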

3.2.3 Rotation Symmetry and angular momentum conservation

Another very important symmetry is rotation symmetry. To consider rotation symmetry it is necessary to go to three-dimensional systems. Let us first consider rotations Rz(α) with an angle α around the z-axis. This is defined as

ψα(r, φ, θ) = ψ(r, φ− α, θ) . (3.34)

Here we have written the wavefunction in polar coordinates

x1 = r cosφ sin θ (3.35)

x2 = r sinφ sin θ (3.36)

x3 = r cos θ . (3.37)

In Fig. 3.2 we can see that the transformation Eq. (3.34) indeed rotates a wavefunction by an angle α counter-clockwise.

As translations are generated by the momentum operator, we expect rotations to be generated by the angular momentum. Let us confirm our suspicion. The angular momentum of a particle is given by

~l = ~x× ~p (3.38)

= (x2p3 − x3p2)~e1 + (x3p1 − x1p3)~e2 + (x1p2 − x2p1)~e3 ,(3.39)


Figure 3.2: The original wave function ψ(r, φ, θ) (solid line) and the rotated wave function ψα(r, φ, θ) = ψ(r, φ − α, θ). |ψα〉 represents a system that has been rotated counter-clockwise around the x3-axis.

where × is the ordinary vector product, ~p = p1~e1 + p2~e2 + p3~e3 and ~x = x1~e1 + x2~e2 + x3~e3 with [xi, pj] = ihδij.

As we are considering rotations around the z-axis, i.e. ~e3, we will primarily be interested in the l3 component of the angular momentum operator. In the position representation we find

l3 = (h/i) ( x1 ∂/∂x2 − x2 ∂/∂x1 ) . (3.40)

This can be converted to polar coordinates by using the chain rule

∂/∂φ = (∂x1/∂φ) ∂/∂x1 + (∂x2/∂φ) ∂/∂x2
     = (∂(r cosφ sinθ)/∂φ) ∂/∂x1 + (∂(r sinφ sinθ)/∂φ) ∂/∂x2
     = −r sinφ sinθ ∂/∂x1 + r cosφ sinθ ∂/∂x2
     = −x2 ∂/∂x1 + x1 ∂/∂x2
     = (i/h) l3 . (3.41)

Now we can proceed to find the operator that generates rotations by deriving a differential equation for it.

〈~x| ∂Rz(α)/∂α |ψ〉 = ∂/∂α 〈~x|Rz(α)|ψ〉
                   = ∂/∂α 〈r, φ − α, θ|ψ〉
                   = − ∂/∂φ 〈r, φ − α, θ|ψ〉
                   = − ∂/∂φ 〈~x|Rz(α)|ψ〉
                   = − (i/h) 〈~x| l3 Rz(α)|ψ〉 . (3.42)

As this is true for all wave functions |ψ〉 we have the differential equation for the rotation operator

∂Rz(α)/∂α = −(i/h) l3 Rz(α) . (3.43)

With the initial condition Rz(0) = 1 the solution to this differential equation is given by

Rz(α) = e−il3α/h = e−i~l~e3α/h . (3.44)

We obtain a rotation around an arbitrary axis ~n by

Rn(α) = e−i~l·~nα/h . (3.45)

Any system that is invariant under arbitrary rotations around an axis ~n preserves the component of the angular momentum in that direction, i.e. it preserves ~l·~n. As the kinetic energy of a particle is invariant under any rotation (Check!) the angular momentum is preserved if the particle is in a potential that is symmetric under rotation around the axis ~n. The electron in a hydrogen atom has the Hamilton operator

H = p²/2m + V(|~x|) . (3.46)


Rotations leave the length of a vector invariant and therefore |~x| and with it V(|~x|) are invariant under rotations around any axis. This means that any component of the angular momentum operator is preserved, i.e. the total angular momentum is preserved.

3.3 General properties of angular momenta

In the preceding section we have investigated some symmetry transformations, the generators of these symmetry transformations and some of their properties. Symmetries are important and perhaps the most important of all is the concept of rotation symmetry and its generator, the angular momentum. In the previous section we have considered the specific example of the orbital angular momentum, which we could develop from the classical angular momentum using the correspondence principle. However, there are manifestations of angular momentum in quantum mechanics that have no classical counterpart, the spin of an electron being the most important example. In the following I would like to develop the theory of the quantum mechanical angular momentum in general, introducing the notion of group representations.

3.3.1 Rotations

Whenever we aim to generalize a classical quantity to the quantum domain, we first need to investigate it carefully to find out those properties that are most characteristic of it. Then we will use these basic properties as a definition which will guide us in our quest to find the correct quantum mechanical operator. Correctness, of course, needs to be tested experimentally, because nice mathematics does not necessarily describe nature although the mathematics itself might be consistent.

In the case of rotations I will not look at rotations about arbitrary angles, but rather at rotations about very small, infinitesimal angles. This will be most convenient in establishing some of the basic properties of rotations.

As we have done in all our discussion of symmetries, we will adopt the viewpoint of active rotations, i.e. the system is rotated while the coordinate system remains unchanged. A rotation around an axis n for a positive angle φ is one that follows the right hand rule (thumb in direction of the axis of rotation).

In three dimensional space rotations are given by real 3 × 3 matrices. A rotation around the x-axis with an angle φ is given by

         ( 1    0       0    )
Rx(φ) =  ( 0  cosφ   −sinφ   )  .     (3.47)
         ( 0  sinφ    cosφ   )

The rotation around the y-axis is given by

         (  cosφ   0   sinφ  )
Ry(φ) =  (   0     1    0    )        (3.48)
         ( −sinφ   0   cosφ  )

and the rotation about the z-axis is given by

         ( cosφ  −sinφ   0 )
Rz(φ) =  ( sinφ   cosφ   0 )  .       (3.49)
         (   0      0    1 )

Using the expansions for sin and cos for small angles

sin ε = ε + O(ε³) ,    cos ε = 1 − ε²/2 , (3.50)

we may now expand the rotation matrices Eqs. (3.47-3.49) up to second order in the small angle ε. We find

         ( 1       0          0      )
Rx(ε) =  ( 0    1 − ε²/2     −ε      )  ,     (3.51)
         ( 0       ε       1 − ε²/2  )

         ( 1 − ε²/2    0       ε     )
Ry(ε) =  (    0        1       0     )  ,     (3.52)
         (   −ε        0    1 − ε²/2 )

         ( 1 − ε²/2     −ε       0 )
Rz(ε) =  (    ε       1 − ε²/2   0 )  .       (3.53)
         (    0          0       1 )


We know already from experience that rotations around the same axis commute, while rotations around different axes generally do not commute. This simple fact will allow us to derive the canonical commutation relations between the angular momentum operators.

To see the effect of the non-commutativity of rotations for different axes, we compute a special combination of rotations. First rotate the system by an infinitesimal angle ε about the y-axis and then by the same angle about the x-axis:

              ( 1 − ε²/2     0         ε     )
Rx(ε)Ry(ε) =  (    ε²     1 − ε²/2    −ε     )  .     (3.54)
              (   −ε         ε      1 − ε²   )

Now we calculate

                           ( 1   −ε²   0 )
Rx(−ε)Ry(−ε)Rx(ε)Ry(ε) =   ( ε²    1   0 )  = Rz(ε²) .     (3.55)
                           ( 0     0   1 )

This will be the relation that we will use to derive the commutation relation between the angular momentum operators.
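Relation (3.55) only holds up to second order in ε, so it is instructive to check it numerically for a small angle. The snippet below (my own check) builds the exact rotation matrices of Eqs. (3.47)-(3.49) and confirms that the combination Rx(−ε)Ry(−ε)Rx(ε)Ry(ε) agrees with Rz(ε²) up to terms of higher order.

import numpy as np

def Rx(phi):
    return np.array([[1, 0, 0],
                     [0, np.cos(phi), -np.sin(phi)],
                     [0, np.sin(phi),  np.cos(phi)]])

def Ry(phi):
    return np.array([[ np.cos(phi), 0, np.sin(phi)],
                     [0, 1, 0],
                     [-np.sin(phi), 0, np.cos(phi)]])

def Rz(phi):
    return np.array([[np.cos(phi), -np.sin(phi), 0],
                     [np.sin(phi),  np.cos(phi), 0],
                     [0, 0, 1]])

eps = 1e-3
combo = Rx(-eps) @ Ry(-eps) @ Rx(eps) @ Ry(eps)
# residual is of order eps**3: Eq. (3.55) holds to second order in eps
print(np.max(np.abs(combo - Rz(eps ** 2))))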

3.3.2 Group representations and angular momentum commutation relations

We already know that there are many different angular momenta in quantum mechanics; orbital angular momentum and spin are just two examples. Certainly they cannot necessarily be represented by 3 × 3 matrices as the rotations in the previous section. Nevertheless, all these angular momenta have some common properties that justify their classification as angular momenta. This will lead us to a discussion of groups and their representations.

We have already seen in the first section on vector spaces what a group is. Let us repeat the properties of a group G and its elements x, y, z.

1. ∀x, y ∈ G : x · y = z ∈ G

2. ∃1 ∈ G : ∀x ∈ G : x · 1 = x

3. ∀x ∈ G : ∃x−1 ∈ G : x−1 · x = 1 and x · x−1 = 1

4. ∀x, y, z ∈ G : x · (y · z) = (x · y) · z

Note that we did not demand that the group elements commute under the group operation. The reason is that we intend to investigate rotations, which are surely not commutative. The fundamental group that we are concerned with is the group of rotations in three-dimensional space, i.e. the group of real orthogonal 3 × 3 matrices with unit determinant. It represents all our intuitive knowledge about rotations. Every other group of operators on wavefunctions that we would like to call a group of rotations will have to share the basic properties of the group of three-dimensional rotations. This idea is captured in the notion of a group representation.

Definition 48 A representation of a group G of elements x is a group S of unitary operators D(x) such that there is a map T : G → S associating with every x ∈ G an operator D(x) ∈ S such that the group operation is preserved, i.e. for all x, y ∈ G with x · y = z we have

D(x)D(y) = D(z) . (3.56)

This means that both groups G and S are essentially equal because their elements share the same relations between each other. If we now have a representation of the rotation group then it is natural to say that the operators that make up this representation are rotation operators.


In quantum mechanics a rotation has to be represented by a unitary operator acting on a state vector. For rotations around a given axis n these unitary operators have to form a set of operators U(φ) that is parametrized by a single parameter, the angle φ. As we know that the operators are unitary we know that they have to be of the form

U(φ) = e−iJnφ/h

with a Hermitean operator Jn, where we have introduced h for convenience. The operator Jn is the angular momentum operator along direction n. In general we can define the angular momentum operator

by ~J = Jx~ex + Jy~ey + Jz~ez, which yields Jn = ~J · ~n. Now let us find out what the commutation relations between the different components of the angular momentum operator are. We use the fact that we have a representation of the rotation group, i.e. Eq. (3.56) is satisfied. Eq. (3.55) then implies that for an infinitesimal rotation we have the operator equation

eiJxε/h eiJyε/h e−iJxε/h e−iJyε/h = e−iJzε²/h .

Expanding both sides to second order in ε, we find

1 + [Jy, Jx]ε²/h² = 1 − iJzε²/h ,

which gives rise to the commutation relation

[Jx, Jy] = ihJz . (3.57)

In the same way one can obtain the commutation relations between any other components of the angular momentum operator, which results in

[Ji, Jj] = ihεijkJk . (3.58)

This fundamental commutation relation is perhaps the most important feature of the quantum mechanical angular momentum and can be used as its definition.


3.3.3 Angular momentum eigenstates

Using the basic commutation relation of angular momentum operators Eq. (3.58), we will now derive the structure of the eigenvectors of the angular momentum operators. Eq. (3.58) already shows that we are not able to find simultaneous eigenvectors to all three components of the angular momentum. However, the square of the magnitude of the angular momentum, ~J² = J²x + J²y + J²z, commutes with each of its components. In the following we will concentrate on the z-component of the angular momentum, for which we find

[ ~J2, Jz] = 0 . (3.59)

As the two operators commute, we can find simultaneous eigenvectors for them, which we denote by |j,m〉. We know that ~J² is a positive operator and for convenience we choose

~J2|j,m〉 = h2j(j + 1)|j,m〉 . (3.60)

By varying j from zero to infinity the expression j(j + 1) can take any positive value from zero to infinity. The slightly peculiar form for the eigenvalues of ~J² has been chosen to make all the following expressions as transparent as possible. Jz is not known to be positive, in fact it isn't, and we have the eigenvalue equation

Jz|j,m〉 = hm|j,m〉 . (3.61)

Given one eigenvector for the angular momentum we would like to have a way to generate another one. This is done by the ladder operators for the angular momentum. They are defined as

J± = Jx ± iJy . (3.62)

To understand why they are called ladder operators, we first derive their commutation relations with Jz. They are

[Jz, J±] = ±hJ± . (3.63)


Using these commutation relations, we find

Jz(J+|j,m〉) = (J+Jz + hJ+)|j,m〉
            = (J+ hm + hJ+)|j,m〉
            = h(m + 1) (J+|j,m〉) . (3.64)

This implies that

(J+|j,m〉) = α+(j,m)|j,m+ 1〉 , (3.65)

i.e. J+ generates a new eigenvector of Jz with an eigenvalue increased by one unit, and α+(j,m) is a complex number. Likewise we find that

(J−|j,m〉) = α−(j,m)|j,m− 1〉 , (3.66)

i.e. J− generates a new eigenvector of Jz with an eigenvalue decreased by one unit, and α−(j,m) is a complex number. Now we have to establish the limits within which m can range. Can m take any value or will there be bounds given by j? To answer this we need to consider the positive operator

~J² − J²z = J²x + J²y . (3.67)

Applying it to an eigenvector, we obtain

〈j,m|( ~J2 − J2z )|j,m〉 = h2(j(j + 1)−m2) ≥ 0 . (3.68)

We conclude that the value of m is bounded by

|m| ≤√j(j + 1) . (3.69)

While from Eqs. (3.65-3.66) we can see that the ladder operators allow us to increase and decrease the value of m by one unit, Eq. (3.69) implies that there has to be a limit to this increase and decrease. In fact, this implies that for the largest value mmax we have

J+|j,mmax〉 = 0|j,mmax + 1〉 = 0 , (3.70)

while for the smallest value mmin we have

J−|j,mmin〉 = 0|j,mmin − 1〉 = 0 . (3.71)


This implies that we have

J−J+|j,mmax〉 = 0 , (3.72)

J+J−|j,mmin〉 = 0 . (3.73)


To determine mmax and mmin we need to express J−J+ and J+J− in terms of the operators ~J² and Jz. This can easily be done by direct calculation which gives

J+J− = (Jx + iJy)(Jx − iJy) = J²x + J²y − i[Jx, Jy] = ~J² − J²z + hJz , (3.75)

and

J−J+ = (Jx − iJy)(Jx + iJy) = J²x + J²y + i[Jx, Jy] = ~J² − J²z − hJz . (3.76)

Using these expressions we find for mmax

0 = J−J+|j,mmax〉 = ( ~J² − J²z − hJz)|j,mmax〉
  = h²(j(j + 1) − mmax(mmax + 1))|j,mmax〉 . (3.77)

This implies that

mmax = j . (3.78)

Likewise we find

0 = J+J−|j,mmin〉 = ( ~J² − J²z + hJz)|j,mmin〉
  = h²(j(j + 1) + mmin(1 − mmin))|j,mmin〉 . (3.79)

This implies that

mmin = −j . (3.80)

We have to be able to go in steps of one unit from the maximal value mmax to mmin. This implies that 2j is a whole number and therefore we find that

j = n/2 with n ∈ N . (3.81)

Every real quantum mechanical particle can be placed in one of two classes. Either it has integral angular momentum, 0, 1, . . . (an example is the orbital angular momentum discussed in the previous section) and is called a boson, or it has half-integral angular momentum 1/2, 3/2, . . . (the electron spin is an example for such a particle) and is called a fermion. Finally we would like to determine the constant α±(j,m) in

J±|j,m〉 = α±(j,m)|j,m± 1〉 . (3.82)

This is easily done by calculating the norm of both sides of Eq. (3.82).

|α+(j,m)|² = ||J+|j,m〉||² = 〈j,m|( ~J² − J²z − hJz)|j,m〉
           = h²(j(j + 1) − m(m + 1)) . (3.83)

Analogously we find

|α−(j,m)|² = ||J−|j,m〉||² = 〈j,m|( ~J² − J²z + hJz)|j,m〉
           = h²(j(j + 1) − m(m − 1)) . (3.84)

We choose the phase of the states |j,m〉 in such a way that α±(j,m) is always positive, so that we obtain

J+|j,m〉 = h √(j(j + 1) − m(m + 1)) |j,m + 1〉 (3.85)
J−|j,m〉 = h √(j(j + 1) − m(m − 1)) |j,m − 1〉 . (3.86)
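Eqs. (3.85)-(3.86) contain everything needed to write down explicit (2j+1)-dimensional matrices for the angular momentum operators. The sketch below (my own, with h set to 1) constructs Jx, Jy, Jz for an arbitrary j from the ladder operators and verifies the commutation relation of Eq. (3.57) and the eigenvalue of ~J² from Eq. (3.60).

import numpy as np

hbar = 1.0

def angular_momentum_matrices(j):
    """Jx, Jy, Jz as (2j+1)-dimensional matrices in the basis |j,m>, m = j, ..., -j."""
    m = np.arange(j, -j - 1, -1)                        # m = j, j-1, ..., -j
    Jz = hbar * np.diag(m)
    # J+|j,m> = hbar sqrt(j(j+1) - m(m+1)) |j,m+1>, Eq. (3.85)
    ladder = hbar * np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jplus = np.diag(ladder, k=1)
    Jminus = Jplus.conj().T
    Jx = (Jplus + Jminus) / 2
    Jy = (Jplus - Jminus) / (2 * 1j)
    return Jx, Jy, Jz

spin = 1.5                                              # any integral or half-integral j
Jx, Jy, Jz = angular_momentum_matrices(spin)

print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * hbar * Jz))   # [Jx, Jy] = i hbar Jz, Eq. (3.57)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
print(np.allclose(J2, hbar**2 * spin * (spin + 1) * np.eye(int(2 * spin + 1))))  # Eq. (3.60)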

3.4 Addition of Angular Momenta

In the previous section I have reviewed the properties of the angular momentum of a single particle. Often, however, you are actually holding a quantum system consisting of more than one particle, e.g. a hydrogen atom, and you may face a situation where more than one of those particles possesses angular momentum. Even a single particle may have more than one angular momentum. An electron in a central potential, for example, has an orbital angular momentum as well as an internal angular momentum, namely the spin. Why do we need to add these angular momenta? To see this, consider the example of an electron in a central potential.

Page 118: Quantum mechanics

3.4. ADDITION OF ANGULAR MOMENTA 117

The Hamilton operator of a particle in a central potential is givenby

H0 =~p

2

2m+ V (|~x|) , (3.87)

where ~p is the linear momentum of the particle, m its mass and V (|~x|)is the operator describing the central potential. From subsection 3.2.3we know that under this Hamilton operator the orbital angular mo-mentum is preserved because the Hamilton operator is invariant underrotations around an arbitrary axis. However, if you do experiments,then you will quickly realize that the Hamilton operator H0 is only thefirst approximation to the correct Hamilton operator of an electron ina central potential (you may have heard about that in the 2nd yearatomic physics course). The reason is, that the electron possesses aninternal angular momentum, the spin, and therefore there have to beadditional terms in the Hamilton operator. These additional terms canbe derived from a relativistic theory of the electron (Dirac equation),but here I will just give a heuristic argument for one of them becausethe full description of the Dirac equation would take far too long.) Intu-itively, an electron in a central potential rotates around the origin andtherefore creates a circular current. Such a circular current gives rise toa magnetic field. The spin of an electron on the other hand gives rise toa magnetic moment of the electron whose orientation depends on theorientation of the spin - which is either up or down. This means thatthe electron, has different energies in a magnetic field ~B depending on

the orientation of its spin. This energy is given by −µ0~S ~B, where −µ0

~S

is the magnetic moment of the electron and ~S = h(σ1~e1 +σ2~e2 +σ3~e3) isthe electron spin operator. The magnetic field created by the rotatingelectron is proportional to the orbital angular momentum (the higherthe angular momentum, the higher the current induced by the rotatingelectron and therefore the higher the magnetic field) so that we findthat the additional part in the Hamilton operator Eq. (3.87) is givenby

H1 = −ξ ~S ⊗ ~L = ξ(S1 ⊗ L1 + S2 ⊗ L2 + S3 ⊗ L3

), (3.88)

where ξ is a constant. Note that the operators ~S and ~L act on two

Page 119: Quantum mechanics

118 CHAPTER 3. DYNAMICS AND SYMMETRIES

different Hilbert spaces, ~S acts on the 2-dimensional Hilbert space of

the electron spin and ~L on the Hilbert space describing the motion ofthe electron.

Under the Hamilton operator Eq. (3.87) the spin was a constant

of motion as the spin operator ~S evidently commutes with the Hamil-tonian Eq. (3.87), i.e. [H0, Si] = 0. For the total Hamilton operator

H = H0 + H1, however, neither the orbital angular momentum ~L nor

the spin angular momentum ~S are constants of motion anymore. Thiscan easily be seen by checking that now the commutators [Li, H] and[Si, H] are non-vanishing! See for example that

[L1, H] = [L1, H1]

= [L1,−ξ ~S · ~L]

= −ξ[L1,3∑i=1

Si · Li]

= −ihξS2L3 + ihξS3L2 (3.89)

and

[S1, H] = [S1, H1]

= −ξ[S1,3∑i=1

Si · Li]

= −ξihS3L2 + ξihS2L3

which both are evidently non-zero. However, the sum of the two spin

operators ~J = ~L+ ~S is a conserved quantity because all its componentscommute with the Hamiltonian, i.e.

[Li + Si, H] = 0 . (3.90)

You may check this easily for the first component of J1 using the twocommutators Eqs. (3.89-3.90).

We know a basis of eigenstates to the orbital angular momentumand also one for the spin angular momentum. However, as the an-gular momenta separately are not conserved anymore such a choice

Page 120: Quantum mechanics

3.4. ADDITION OF ANGULAR MOMENTA 119

of eigenstates is quite inconvenient because eigenstates to the angular

momenta ~L and ~S are not eigenstates of the total Hamilton operatorH = H0 + H1 anymore. Of course it would be much more convenientto find a basis that is composed of simultaneous eigenvectorsof the total angular momentum and the Hamilton operator.This is the aim of the following subsection which describes a generalprocedure how one can construct the eigenvectors of the total angu-lar momentum from the eigenstates of orbital angular momentum andspin. As different components of the total angular momentum do notcommute, we can of course only find joint eigenstates of the total angu-

lar momentum operator ~J2

, its z-component Jz and the total Hamiltonoperator H0 + H1.

3.4.1 Two angular momenta

First I will present the general idea behind the addition of two angular

momenta represented by the operators ~j(1)

for the first particle and ~j(2)

for the second particle. Then I will give the simplest possible explicit

example. Note that I am writing ~j(1)

instead of the more precise ~j(1)

⊗1to shorten the equations a little bit.

Now I would like to consider the total angular momentum

~J := ~j(1)

+ ~j(2)

, (3.91)

Here the upper bracketed index indicates on which particle the operatoris acting.

Like in the previous section ~J2

commutes with each of its compo-

nents. (Check e.g. that [ ~J2

, Jz] = [ ~J2

, j(1)z + j(2)

z ] = 0). Therefore I

would like to find a set of joint eigenvectors of ~J2

and the z-componentof the total angular momentum Jz. These eigenvectors which are writ-ten as |J,M〉 have to satisfy

~J2

|J,M〉 = hJ(J + 1)|J,M〉Jz|J,M〉 = hM |J,M〉 .

Page 121: Quantum mechanics

120 CHAPTER 3. DYNAMICS AND SYMMETRIES

As the eigenstates |j(1),m(1); j(2),m(2)〉 of the separate angular momentaj(1) and j(2) form a basis, we have to be able to write |J,M〉 as linearcombinations of the eigenvectors to the individual angular momenta|j(1),m(1); j(2),m(2)〉 of which there are (2j(1) + 1)(2j(2) + 1) states. Itmight seem difficult to find these linear combinations, but luckily thereis a general recipe which I am going to explain in the following.

In the previous section we constructed all the possible angular mo-mentum eigenstates of a single particle by starting with the state withhighest m-quantum number and then working our way down by apply-ing the angular momentum ladder operator j− = jx−ijy. We will apply

an analogous strategy for the total angular momentum ~J . Firstly weneed to identify the ladder operators for the total angular momentum.They are

J− = j(1)− + j

(2)− (3.92)

J+ = j(1)+ + j

(2)+ . (3.93)

Let us check whether these operators satisfy commutation relationsanalogous to Eq. (3.63).

[Jz, J±] = [j(1)z + j(2)

z , j(1)± + j

(2)± ]

= [j(1)z , j

(1)± ] + [j(2)

z , j(2)± ]

= ±hj(1)± ± hj

(2)±

= ±hJ± . (3.94)

The commutation relation Eq. (3.94) allows us, just as in the case of asingle angular momentum, to obtain the state |J,M−1〉 from the state|J,M〉 via

J−|J,M〉 = h√J(J + 1)−M(M − 1)|J,M − 1〉 . (3.95)

This relation can be derived in exactly the same way as relation Eq.(3.86). The procedure for generating all the states |J,M〉 has threesteps and starts withStep 1: Identify the state with maximal total angular momentum andmaximal value for the M -quantum number. If we have two angular

Page 122: Quantum mechanics

3.4. ADDITION OF ANGULAR MOMENTA 121

momenta j(1) and j(2) then their sum can not exceed Jmax = j(1) + j(2).Which state could have total angular momentum Jmax = j(1) + j(2) andthe M = Jmax? Clearly this must be a combination of two angularmomenta that are both parallel, i.e. we guess they should be in thestate

|J = j(1) + j(2),M = j(1) + j(2)〉 = |j(1), j(1)〉 ⊗ |j(2), j(2)〉 . (3.96)

To check whether this assertion is true we need to verify that

~J2

|J,M〉 = h2J(J + 1)|J,M〉Jz|J,M〉 = hM |J,M〉 ,

with J = M = j(1) + j(2). I will not present this proof here, but will letyou do it in the problem sheets. Now followsStep 2: Apply the ladder operator defined in Eq. (3.92) using Eq.(3.95) to obtain all the states |J,M〉. Repeat this step until you reachedstate |J,M = −J〉.

However, these are only 2Jmax + 1 states which is less than the to-tal of (2j1 + 1)(2j2 + 1) states of the joint state space of both angularmomenta. To obtain the other states we applyStep 3: Identify the state that is of the form |J − 1,M = J − 1〉 andthen go to step 2. The procedure stops when we have arrived at thesmallest possible value of J , which is obtained when the two angularmomenta are oriented in opposite direction so that we need to subtractthem. This implies that Jmin = |j(1)−j(2)|. The absolute value needs tobe taken, as the angular momentum is by definition a positive quantity.

Following this strategy we obtain a new basis of states consisting of∑Jmaxi=Jmin

(2i+ 1) =∑Jmaxi=0 2i+ 1−∑Jmin−1

i=0 2i+ 1 = (Jmax + 1)2− J2min =

(j(1) + j(2) + 1)2 − (j(1) − j(2))2 = (2j(1) + 1)(2j(2) + 1) states.As this is a rather formal description I will now give the simplest

possible example for the addition of two angular momenta.

Example:Now I am going to illustrate this general idea by solving the explicitexample of adding two spin-1

2.

Page 123: Quantum mechanics

122 CHAPTER 3. DYNAMICS AND SYMMETRIES

The angular momentum operators for a spin-12

particle are

jx = hσx jy = hσy jz = hσz (3.97)

with the Pauli spin-operators σi. It is easy to check that this definitionsatisfies the commutation relations for an angular momentum. The an-gular momentum operator of an individual particle is therefore given by

~j = h (σx~ex + σy~ey + σz~ez) and the total angular momentum operatoris given by

~J = ~j(1)

+ ~j(2)

. (3.98)

The angular momentum ladder operators are

J− = j(1)− + j

(2)−

J+ = j(1)+ + j

(2)+ .

The maximal value for the total angular momentum is J = 1. Thestate corresponding to this value is given by

|J = 1,M = 1〉 = |j(1) =1

2,m(1) =

1

2〉 ⊗ |j(2) =

1

2,m(2) =

1

2〉 ,

(3.99)

or using the shorthand

| ↑〉 = |j(1) =1

2,m(1) =

1

2〉 (3.100)

| ↓〉 = |j(1) =1

2,m(1) = −1

2〉 (3.101)

we have|J = 1,M = 1〉 = | ↑〉 ⊗ | ↑〉 .

Now we want to find the representation of the state |J = 1,M = 0〉using the operator J−. We find

|J = 1,M = 0〉 =J−|J = 1,M = 1〉

h√

2

=(j

(1)− + j

(2)− )| ↑〉 ⊗ | ↑〉h√

2

=| ↓〉 ⊗ | ↑〉√

2+| ↑〉 ⊗ | ↓〉√

2(3.102)

Page 124: Quantum mechanics

3.5. LOCAL GAUGE SYMMETRIES AND ELECTRODYNAMICS123

To find the state |J = 1,M = −1〉 we apply J− again and we find that

|J = 1,M = −1〉 =J−|J = 1,M = 0〉

h√

2

=(j

(1)− + j

(2)− )|J = 1,M = 0〉h√

2= | ↓〉 ⊗ | ↓〉 (3.103)

Now we have almost finished our construction of the eigenstates of thetotal angular momentum operator. What is still missing is the statewith total angular momentum J = 0. Because J = 0 this state mustalso have M = 0.The state |J = 0,M = 0〉 must be orthogonal tothe three states |J = 1,M = 1〉, |J = 1,M = 0〉, |J = 1,M = −1〉.Therefore the state must have the form

|J = 0,M = 0〉 =| ↓〉 ⊗ | ↑〉√

2− | ↑〉 ⊗ | ↓〉√

2. (3.104)

To check that this is the correct state, just verify that it is orthogonalto |J = 1,M = 1〉, |J = 1,M = 0〉, |J = 1,M = −1〉. This concludesthe construction of the eigenvectors of the total angular momentum.

3.5 Local Gauge symmetries and Electro-

dynamics

Page 125: Quantum mechanics

124 CHAPTER 3. DYNAMICS AND SYMMETRIES

Page 126: Quantum mechanics

Chapter 4

Approximation Methods

Most problems in quantum mechanics (like in all areas of physics)cannot be solved analytically and we have to resort to approximationmethods. In this chapter I will introduce you to some of these meth-ods. However, there are a large number of approximation methods inphysics and I can only present very few of them in this lecture.

Often a particular problem will be in a form that it is exactly solv-able would it not be for a small additional term (a perturbation) in theHamilton operator. This perturbation may be both, time-independent(e.g. a static electric field) or time-dependent (a laser shining on anatom). It is our task to compute the effect that such a small perturba-tion has. Historically these methods originated from classical mechanicswhen in the 18th century physicists such as Lagrange tried to calculatethe mechanical properties of the solar system. In particular he and hiscolleagues have been very interested in the stability of the earths orbitaround the sun. This is a prime example of a problem where we cansolve part of the dynamics exactly, the path of a single planet aroundthe sun, while we are unable to solve this problem exactly once we takeinto account the small gravitational effects due to the presence of allthe other planets. There are quite a few such problems in quantummechanics. For example we are able to solve the Schrodinger equationfor the hydrogen atom, but if we place two hydrogen atoms maybe100 Angstrom away from each other then the problem has no exactsolution anymore. The atoms will start to perturb each other and aweak perturbation to their dynamics will be the effect. While not be-

125

Page 127: Quantum mechanics

126 CHAPTER 4. APPROXIMATION METHODS

ing strong enough to form a molecule this effect leads to an attractiveforce, the van der Waals force. This force, although being quite weak,is responsible for the sometimes remarkable stability of foams.

4.1 Time-independent Perturbation The-

ory

In the second year quantum mechanics course you have seen a methodfor dealing with time independent perturbation problems. I will red-erive these results in a more general, shorter and elegant form. Then Iwill apply them to derive the van der Waals force between two neutralatoms.

4.1.1 Non-degenerate perturbation theory

Imagine that you have a system with a Hamilton operator H0, eigen-vectors |φi〉 and corresponding energies εi. We will now assume thatwe can solve this problem, i.e. we know the expressions for the eigen-states |φi〉 and energies εi already. Now imagine a small perturbationin the form of the operator λV is added to the Hamilton operator. Thereal parameter λ can be used to count the order to which we performperturbation theory. Now we have the new total Hamilton operatorH = H0 + λV with the new eigenvectors |ψi(λ)〉 and energies Ei(λ).For λ = 0 we recover the old unperturbed system. For the new system,the time-independent Schrodinger equation now reads

(H0 + λV )|ψi(λ)〉 = Ei(λ)|ψi(λ)〉 . (4.1)

For the following it is useful to introduce the slightly unusual ’normal-ization ’

〈φi|ψi(λ)〉 = 1 (4.2)

for the perturbed eigenstates |ψi(λ)〉, i.e. we do not assume that〈ψi(λ)|ψi(λ)〉 = 1!

Now we can multiply the Schrodinger equation (4.1) from the leftwith 〈φi| and find

〈φi|(H0 + λV )|ψi(λ)〉 = 〈φi|Ei(λ)|ψi(λ)〉 . (4.3)

Page 128: Quantum mechanics

4.1. TIME-INDEPENDENT PERTURBATION THEORY 127

Using the ’normalization’ relation Eq. (4.2) we then obtain

Ei(λ) = εi + λ〈φi|V |ψi(λ)〉 . (4.4)

To obtain the energy eigenvalues of the perturbed Hamilton operatorwe therefore need to compute the corresponding eigenvector |ψi(λ)〉. Inthe following I will do this for the state |φ0〉 to obtain the perturbedenergy E0(λ). (You can easily generalize this to an arbitrary eigenvalue,just by replacing in all expressions 0′s by the corresponding value.) Forthe moment I will assume that the eigenvector |φ0〉 of the unperturbedHamilton operator H0 is non-degenerate. You will see soon why Ihad to make this assumption.

In order to do the perturbation theory as systematic as possible Iintroduce the two projection operators

P0 = |φ0〉〈φ0| and Q0 = 1 − P0 . (4.5)

The second projector projects onto the subspace that supports the cor-rections to the unperturbed wave-function, i.e.

Q0|ψ0(λ)〉 = |ψ0(λ)〉 − |φ0〉 (4.6)

Now let us rewrite the Schrodinger equation introducing an energy εwhich will be specified later. We write

(ε− E0(λ) + λV )|ψ0(λ)〉 = (ε− H0)|ψ0(λ)〉 . (4.7)

This can be written as

|ψ0(λ)〉 = (ε− H0)−1(ε− E0(λ) + λV )|ψ0(λ)〉 . (4.8)

Multiplying this with the projector Q0 using Eq. (4.6) we find

|ψ0(λ)〉 = |φ0〉+ Q0(ε− H0)−1(ε− E0(λ) + λV )|ψ0(λ)〉 . (4.9)

Now we can iterate this equation and we find

|ψ0(λ)〉 =∞∑n=0

[Q0(ε− H0)

−1(ε− E0(λ) + λV )]n|φ0〉 . (4.10)

Page 129: Quantum mechanics

128 CHAPTER 4. APPROXIMATION METHODS

Now we can plug this into Eq. (4.4) and we find the expression for theperturbed energies

E0(λ) = ε0 +∞∑n=0

〈φ0|λV[Q0(ε− H0)

−1(ε− E0(λ) + λV )]n|φ0〉 .

(4.11)Eqs. (4.9) and (4.11) give the perturbation expansion of the energiesand corresponding eigenvectors to all orders.

Remark: The choice of the constant ε is not completely arbitrary.If it is equal to the energy of one of the excited states of the unperturbedHamilton operator, then the perturbation expansion will not convergeas it will contain infinite terms. This still leaves a lot of freedom.However, we also would like to be able to say that by taking into accountthe first k terms of the sum Eq. (4.11), we have all the terms to orderλk. This is ensured most clearly for two choices for the value of ε. Onesuch choice is evidently ε = E0(λ). In that case we have

E0(λ) = ε0 +∞∑n=0

〈φ0|λV[Q0(E0(λ)− H0)

−1(λV )]n|φ0〉 . (4.12)

This choice is called the Brillouin-Wigner perturbation theory. Theother choice, which is often easier to handle, is that where ε = ε0. Inthat case we obtain

E0(λ) = ε0 +∞∑n=0

〈φ0|λV[Q0(ε0 − H0)

−1(ε0 − E0(λ) + λV )]n|φ0〉 .

(4.13)This is the Rayleigh perturbation theory. For this choice (ε0−E0(λ) +λV ) is proportional to λ and therefore we obtain higher order correc-tions in λ by choosing more terms in the sum.

Now let us look at the first and second order expression so that youcan compare it with the results that you learned in the second yearcourse. To do this we now specify the value of the arbitrary constantε to be the unperturbed energy ε0. For the energy of the perturbedHamilton operator we then obtain

E0(λ) = ε0 + 〈φ0|λV |φ0〉+ 〈φ0|λV Q0(ε− H0)−1(ε0 − E0(λ) + λV )|φ0〉+ . . .

Page 130: Quantum mechanics

4.1. TIME-INDEPENDENT PERTURBATION THEORY 129

= ε0 + λ〈φ0|V |φ0〉+ 〈φ0|λV∑n6=0

|φn〉〈φn|1

ε0 − εn(ε0 − E0(λ) + λV )|φ0〉+ . . .

= ε0 + λ〈φ0|V |φ0〉+ λ2∑n6=0

|〈φ0|V |φn〉|2

ε0 − εn+ . . . (4.14)

Remark: If |φ0〉 is the ground state of the unperturbed Hamilton oper-ator, then the second order contribution in the perturbation expansionis always negative. This has a good reason. The ground state of thetotal Hamilton operator H = H0 + V is |ψ0〉 and we find for |φ0〉 that

E0 = 〈ψ0|H0 + V |ψ0〉 < 〈φ0|H0 + V |φ0〉 = E0 −∑n6=0

|〈φ0|V |φi〉|2

ε0 − εn

This implies that the second order perturbation theory term is negative!

The new eigenstate to the eigenvector Eq. (4.14) can be calculated too,and the first contributions are given by

|ψ0〉 = |φ0〉+ Q0(ε− H0)−1(ε0 − E0 + V )|φ0〉+ . . .

= |φ0〉+∑n6=0

|φn〉〈φn|1

ε0 − εn(ε0 − E0 + V )|φ0〉+ . . .

= |φ0〉+∑n6=0

|φn〉1

ε0 − εn〈φn|V |φ0〉+ . . . .

Looking at these two expressions it is clear why I had to demand,that the eigenvalue ε0 of the unperturbed Hamilton operator H0 hasto be non-degenerate. In order to obtain a meaningful perturbationexpansion the individual terms should be finite and therefore the factors

1ε0−εn need to be finite for n 6= 0.

4.1.2 Degenerate perturbation theory

This part has not been presented in the lectureWhat do we do in the case of degenerate eigenvalues? We can followa similar strategy but as you can expect things get a little bit morecomplicated.

Page 131: Quantum mechanics

130 CHAPTER 4. APPROXIMATION METHODS

Let us assume that we have a Hamilton operator H0, eigenvectors|φνi 〉 and corresponding energies εi where the upper index ν numeratesan orthogonal set of eigenvectors to the degenerate eigenvalue εi. Againlet us deal with the eigenvalue ε0.

Now we write down the projectors

P0 =∑ν

|φν0〉〈φν0| and Q0 = 1 − P0 . (4.15)

Then we find

Q0|ψµ0 (λ)〉 = |ψµ0 (λ)〉 − P0|ψµ0 (λ)〉 , (4.16)

where the eigenvector |ψµ0 〉 of the perturbed Hamilton operator origi-nates from the unperturbed eigenvector |φµ0〉.

To obtain the analogue of Eq. (4.3) we multiply the Schrodingerequation for the state |ψµ(λ)〉 from the left with 〈ψν(λ)|P0 and weobtain

〈ψν0 (λ)|P0(H0 + λV − Eµ0 (λ))|ψµ(λ)〉 = 0 . (4.17)

Note that I have proceeded slightly different from the non-degeneratecase because there I multiplied from the left by 〈φν |. I could have donethat too, but then I would have been forced to introduce a ’normal-ization’ condition of the form 〈φν |ψµ(λ)〉 = δµν which is quite hard tocompute.

Eq. (4.17) will eventually help us determine the new energy eigen-values. However, this time this will involve the diagonalization of amatrix. The reason of course being that we have a degeneracy in theoriginal unperturbed eigenvalue.

Now we proceed very much along the lines of the unperturbed per-turbation theory. Using Eq. (4.16) we rewrite Eq. (4.9) and obtain

|ψµ0 (λ)〉 = P0|ψµ0 (λ)〉+Q0(ε0−H0)−1(ε0−Eµ

0 (λ)+λV )|ψµ0 (λ)〉 . (4.18)

Iterating this equation yields

|ψµ0 (λ)〉 =∞∑n=0

[Q0(ε0 − H0)

−1(ε− Eµ0 (λ) + λV )

]nP0|ψµ0 (λ)〉 . (4.19)

Page 132: Quantum mechanics

4.1. TIME-INDEPENDENT PERTURBATION THEORY 131

This implies that we can calculate the whole eigenvector |ψµ0 (λ)〉 fromthe eigenvalues Eµ(λ) and the components P0|ψµ0 (λ)〉 of the eigenvectorin the eigenspace of the eigenvalue ε0 of the Hamilton operator H0.

This may now be inserted in Eq. (4.17) so that we find with theshorthand notation |ψν,P0 〉 = P0|ψµ0 〉

〈ψν,P0 |(H0+λV−Eµ0 (λ))

∞∑n=0

[Q0(ε0 − H0)

−1(ε0 − Eµ0 (λ) + λV )

]n|ψµ,P0 〉 = 0 .

(4.20)Now we need to determine the energies Eµ

0 from Eq. (4.20). Let usfirst determine the lowest order approximation, which means that weset n = 0. We find for any µ and ν that

〈ψν,P0 |ε0 + λV − Eµ0 (λ)|ψµ,P0 〉 = 0 . (4.21)

Therefore the new energy eigenvalues are obtained by diagonalizingthe operator V in the eigenspace of the eigenvalue ε0 of the Hamiltonoperator H0 spanned by the |φν0〉. We find

Eµ0 (λ) = ε0 + λVµµ (4.22)

where the Vµµ are the eigenvalues of V .

The second order contribution is then obtained by using the eigen-vectors obtained from Eq. (4.21) and use them in the next order expres-sion, e.g. for n = 1. The second order expression for the perturbationtheory is therefore obtained from taking the expectation values

〈ψν,P0 |(ε0 + λVνν − Eµ0 (λ)) + λV Q0(ε0 − H0)

−1λV |ψν,P0 〉 = 0 .

This gives

Eν0 = ε0 + λVνν + λ2

∑m6=0

|〈ψν,P0 |V |φm〉|2

ε0 − εm. (4.23)

In the following section I will work out an example for the use of per-turbation theory which gives a non-trivial result.

Page 133: Quantum mechanics

132 CHAPTER 4. APPROXIMATION METHODS

4.1.3 The van der Waals force

In this section I am going to derive the force that two neutral atomsare exerting on each other. This force, the van der Waals force, canplay a significant role in the physics (also the chemistry and mechanics)of neutral systems. In the derivation of the van der Waals force I willuse time-independent perturbation theory. The calculations will besimplified by the fact that the system of two atoms exhibits symmetrieswhich have the effect, that the first order contribution to perturbationtheory vanishes.

The basic experimental situation is presented in Fig. 4.1. We dis-cuss the van der Waals force between two hydrogen atoms. The two

Figure 4.1: Two hydrogen atoms are placed near each other, with adistance R much larger than the Bohr radius a0.

atomic nuclei (proton) are positioned at a fixed distance from eachother. We assume that the protons are not moving. The proton ofatom A is placed at the origin, while the proton of atom B is located atthe position ~R. The electron of atom A is displaced from the origin by~rA, while atom B is displaced from the nucleus B by ~rB. To make surethat we can still speak of separate hydrogen atoms we need to demandthat the separation between the two protons is much larger than theradius of each of the atoms, i.e.

|~R| � a0 , (4.24)

where a0 is the Bohr radius.After we have described the physical situation we will now have to

determine the Hamilton operator of the total system. The Hamilton

Page 134: Quantum mechanics

4.1. TIME-INDEPENDENT PERTURBATION THEORY 133

operator has the form

H = H0 + V , (4.25)

where V describes the small perturbation that gives rise to the van derWaals force. The unperturbed Hamilton-operator H0 is that of twohydrogen atoms, i.e.

H0 =~p

2

A

2m+~p

2

B

2m− 1

4πε0|~rA|− 1

4πε0|~rB|, (4.26)

which can be solved exactly. We have energy eigenfunctions of theform |φAn,l,m〉|φBn′,l′,m′〉 with energies En +En′ . The perturbation to thisHamilton operator is due to the electrostatic forces between electronsand protons from different atoms. From Fig. 4.1 we can easily see thatthis gives

V =e2

4πε0R+

e2

4πε0|~R + ~rB − ~rA|− e2

4πε0|~R + ~rB|− e2

4πε0|~R + ~rA|.

(4.27)As we have assumed that the Bohr radius is much smaller than theseparation between the two hydrogen atoms, we can conclude that|~rA|, |~rB| � R. Therefore we are able to expand the perturbationHamilton operator and we obtain after lengthy but not too difficultcalculation

Vdd =e2

4πε0

~rA~rBR3− 3

(~rA ~Ru)(~rB ~Ru)

R3

, (4.28)

where ~Ru is the unit vector in direction of ~R. Now we are in a position todetermine the first and second order contribution of the perturbationtheory. Because we are dealing with the ground state, we have nodegeneracy and we can apply non-degenerate perturbation theory.

First order correction The first order contribution to perturbationtheory is given by

∆E1 = 〈φA0,0,0|〈φB0,0,0|Vdd|φA0,0,0〉|φB0,0,0〉 , (4.29)

Page 135: Quantum mechanics

134 CHAPTER 4. APPROXIMATION METHODS

where |φA/B0,0,0〉 are the unperturbed ground-states of atom A/B. InsertingEq. (4.27) into Eq. (4.29) gives

∆E1 =e2

4πε0

〈φA0,0,0|~rA|φA0,0,0〉〈φB0,0,0|~rB|φB0,0,0〉R3

−3〈φA0,0,0|~rA ~Ru|φA0,0,0〉〈φB0,0,0|~rB ~Ru|φB0,0,0〉

R3

. (4.30)

Now we can use a symmetry argument to show that this whole, ratherlengthy expression must be zero. Why is that so? What appears inEq. (4.30) are expectation values of components of the position op-erator in the unperturbed ground state of the atom. However, theunperturbed Hamilton operator of the hydrogen atom possesses somesymmetries. The relevant symmetry here is that of the parity, i.e. theHamilton operator H0 commutes with the operator P which is definedby P |x〉 = | − x〉. This implies that both operators, H0 and P , canbe diagonalized simultaneously, i.e. they have the same eigenvectors.This is the reason why all eigenvectors of the unperturbed Hamiltonoperator H0 are also eigenvectors of the parity operator. Therefore wehave P |φA/B0,0,0〉 = ±|φA/B0,0,0〉. This implies that

〈φ0,0,0|~x|φ0,0,0〉 = 〈φ0,0,0|P ~xP |φ0,0,0〉= 〈φ0,0,0| − ~x|φ0,0,0〉

This equality implies that 〈φ0,0,0|~x|φ0,0,0〉 = 0 and therefore ∆E1 = 0!This quick argument illustrates the usefulness of symmetry arguments.

Second order correction The second order contribution of the per-turbation theory is given by

∆E2 =,∑

(n′,l′,m′),(n,l,m)

|〈φAn′,l′,m′|〈φBn,l,m|Vdd|φA0,0,0〉|φB0,0,0〉|2

2E0 − En′ − En(4.31)

where the∑′ means that the state |φA0,0,0〉|φB0,0,0〉 is excluded from the

summation. The first conclusion that we can draw from expression Eq.

Page 136: Quantum mechanics

4.1. TIME-INDEPENDENT PERTURBATION THEORY 135

(4.31) is that the second order correction ∆E2 is negative so that wehave

∆E2 = − C

R6(4.32)

with a positive constant C. The force exerted on one of the atoms istherefore given by

F = −dEdR≈ −6C

R7(4.33)

and therefore the force aims to reduce the distance between the atomsand we conclude that the van der Waals force is attractive. This is alsothe classical result that has been known before the advent of quantummechanics.

The physical interpretation of the van der Waals force is the fol-lowing: A hydrogen atom should not be viewed as a stationary chargedistribution but rather as a randomly fluctuating electric dipole dueto the motion of the electron. Assume that in the first atom a dipolemoment appears. This generates an electric field, which scales like 1

R3

with distance. This electric field will now induce an electric dipole mo-ment in the other atom. An electric dipole will always tend to movetowards areas with increasing field strength and therefore the secondatom will be attracted to the first one. The induced dipole momentwill be proportional to the electric field. Therefore the net force willscale as − d

dR1R6 . This semi-classical argument gives you an idea for the

correct R-dependence of the van der Waals force.

However, this semi-classical explanation fails to account for some ofthe more intricate features of the van der Waals force. One exampleis that of two hydrogen atoms, one in the ground state and one in anexcited state. In that case the van der Waals force scales as C

R3 andthe constant C can be either positive or negative, i.e. we can have arepulsive as well as an attractive van der Waals force. The computationof this case is slightly more complicated because excited states in thehydrogen atom are degenerate, which requires the use of degenerateperturbation theory. On the other hand it only involves first orderterms.

Page 137: Quantum mechanics

136 CHAPTER 4. APPROXIMATION METHODS

4.1.4 The Helium atom

Now let us try to apply perturbation theory to the problem of theHelium atom. In fact it is not at all clear that we can do that becausethe two electrons in the Helium atom are very close together and theirmutual exerted force is almost as strong as that between the nucleusand the individual electrons. However, we may just have a look how farwe can push perturbation theory. Again I am interested in the groundstate of the Helium atom. The Hamilton operator is given by

H =~p

2

A

2m+~p

2

B

2m− 2e2

4πε0|~rA|− 2e2

4πε0|~rB|+

e2

4πε0|~rA − ~rB|. (4.34)

The first four terms, H0 describe two non-interacting electrons in acentral potential and for this part we can find an exact solution, givenby the energies and wave-functions of a He+ ion. Now we performperturbation theory up to first order. The zeroth order wavefunctionis given by

|ψHe〉 = |ψHe+〉 ⊗ |ψHe+〉 (4.35)

where |ψHe+〉 is the ground state of an electron in a Helium ion, i.e. asystem consisting of two protons and one electron. In position spacethis wavefunction is given by

ψHe(r) =Z3

πa30

e−Z(rA+rB)

a0 , (4.36)

where a0 = 0.511 10−10m is the Bohr radius and Z = 2 is the numberprotons in the nucleus. The energy of this state for the unperturbedsystem is given by E = −108.8eV . The first order correction to theenergy of the ground state is now given by

∆E =∫d3rA

∫d3rB|ψHe(r)|2

e2

4πε0|~rA − ~rB|= 34eV . (4.37)

Therefore our estimate for the ground state energy of the Helium atomis E1 = −108.8eV + 34eV = −74.8eV . This compares quite well withthe measured value of E1 = −78.9eV . However, in the next sectionwe will see how we can actually improve this value using a differentapproximation technique.

Page 138: Quantum mechanics

4.2. ADIABATIC TRANSFORMATIONS AND GEOMETRIC PHASES137

4.2 Adiabatic Transformations and Geo-

metric phases

4.3 Variational Principle

After this section on perturbation methods I am now moving on to adifferent way of obtaining approximate solutions to quantum mechan-ical problems. Previously we have investigated problems which were’almost’ exactly solvable, i.e. the exactly solvable Hamilton operatorhas a small additional term. Now I am going to deal with problemswhich can not necessarily be cast into such a form. An example wouldbe the Helium atom (which cannot be solved exactly) as compared toa negative Helium ion (which is basically like a hydrogen atom andtherefore exactly solvable). Because the two electrons in the Heliumatom are very close, it is not obvious that a perturbation expansiongives a good result, simply because the electrostatic forces between theelectrons are almost as large as those between electrons and nucleus.Here variational principles can be very useful. The example of the He-lium atom already indicates that variational methods are paramountimportance e.g. in atomic physics, but also in quantum chemistry orsolid state physics.

4.3.1 The Rayleigh-Ritz Method

Here I will explain a simple variational method, which nevertheless ex-hibits the general idea of the method. Variational methods are partic-ularly well suited to determine the ground state energy of a quantummechanical system and this is what I will present first. The wholemethod is based on the following theorem concerning the expectationvalue of the Hamilton operator.

Theorem 49 Given a Hamilton operator H with ground state |φ0〉 andground state energy E0. For any |ψ〉 we have

〈φ0|H|φ0〉 ≤ 〈ψ|H|ψ〉 . (4.38)

Page 139: Quantum mechanics

138 CHAPTER 4. APPROXIMATION METHODS

Proof: For the proof we need to remember that any state vector |ψ〉can be expanded in terms of the eigenvectors |φi〉 of the Hermiteanoperator H. This means that we can find coefficients such that

|ψ〉 =∑i

αi|φi〉 . (4.39)

I will use this in the proof of theorem 49.

〈ψ|H|ψ〉 =∑i,j

α∗iαj〈φi|H|φj〉

=∑i,j

α∗iαjEi〈φi|φj〉

=∑i

|αi|2Ei .

Now we can see that

〈ψ|H|ψ〉 − E0 =∑i

|αi|2Ei − E0

=∑i

|αi|2(Ei − E0)

≥ 0 ,

because Ei ≥ E0. If the lowest energy eigenvalue E0 is not degeneratethen we have equality exactly if |ψ〉 = |φ0〉 2.

How does this help us in finding the energy of the ground state of aquantum mechanical system that is described by the Hamilton operatorH? The general recipe proceeds in two steps.

Step 1: Chose a ’trial’ wave function |ψ(α1, . . . , αN)〉 that is parametrizedby parameters α1, . . . , αN . This choice of the trial wave function willoften be governed by the symmetries of the problem.Step 2: Minimize the expression

E = 〈ψ(α1, . . . , αN)|H|ψ(α1, . . . , αN)〉 , (4.40)

with respect to the αi. The value Emin that you obtain this way is yourestimate of the ground state wave function.

Page 140: Quantum mechanics

4.3. VARIATIONAL PRINCIPLE 139

The Helium atom Now let us come back to the Helium atom to seewhether we can make use of the variational principle to obtain a betterestimate of the ground state energy.

In our perturbation theoretical calculation we have used the wave-function Eq. (4.36) with Z = 2. Now let us introduce the adjustableparameter σ and use the trial wavefunction

ψHe(r) =Z3eff

πa3e−

Zeff (rA+rB)

a0 , (4.41)

where Zeff = Z−σ. This is a physically reasonable assumption becauseeach electron sees effectively a reduced nuclear charge due to the shield-ing effect of the other electron. Now we can use this new trial wave-function to calculate the energy expectation value of the total Hamiltonoperator with this wavefunction. We find after some computations

E(σ) = −2RH

(Z2 − 5

8Z +

5

8σ − σ2

), (4.42)

where RH = m0e4

64π3ε20h3c

is the Rydberg constant. Now we need to com-

pute that value of σ for which Eq. (4.42) assumes its minimal value.The value that we obtain is σ = 5

16, independently of the value of Z.

Inserting this into Eq. (4.42) we find

Emin = −2RH(Z − 5

16)2 . (4.43)

For the Helium atom (Z=2) we therefore obtain

Emin = −77.4eV , (4.44)

for the ground state energy. This is quite close to the true value of−78.9eV and represents a substantial improvement compared to thevalue obtained via perturbation theory. Making a slightly more detailedchoice for the trial wavefunction (basically using one more parameter)would lead to substantially improved values. This shows that the vari-ational method can indeed be quite useful.

However, the next example will illustrate the shortcomings of thevariational method and proves that this method has to be applied withsome care.

Page 141: Quantum mechanics

140 CHAPTER 4. APPROXIMATION METHODS

The harmonic oscillator The Hamilton operator for the harmonicoscillator is given H = p2

2m+ 1

2mω2x2. Now, assume that we do not

know the ground state energy and the ground state wavefunction ofthe harmonic oscillator. This Hamilton operator commutes with theparity operator and therefore the eigenstates should also be chosenas eigenfunctions of the parity operator. Therefore let us chose thenormalized symmetric wave-function

ψa(x) =

√2a3/2

π

1

x2 + a. (4.45)

Of course I have chosen this slightly peculiar function in order to makethe calculations as simple as possible. Omitting the detailed calcu-lations we find that the mean value of the energy of the harmonicoscillator in this state is given by

〈ψa|H|ψa〉 =∫ψ∗a(x)

(− h2

2m

d2

dx2+

1

2mω2x2

)ψa(x)dx (4.46)

=h2

4m

1

a+

1

2mω2a . (4.47)

Now we need to find the value for which this expression becomes min-imal. We obtain

amin =1√2

h

mω, (4.48)

and for the energy

Emin =1√2hω . (4.49)

Therefore our estimate gives an error

Emin − E0

E0

=

√2− 1

2≈ 0.2 . (4.50)

This is not too good, but would have been a lot better for other trialfunctions.

This example shows the limitations of the variational principle. Itvery much depends on a good choice for the trial wave function. Itshould also be noted, that a good approximation to the ground state

Page 142: Quantum mechanics

4.4. TIME-DEPENDENT PERTURBATION THEORY 141

energy does not imply that the chosen trial wave function will give goodresults for other physical observables. This can be seen from the aboveexample of the harmonic oscillator. If we compute the expectationvalue of the operator x2 then we find

〈ψamin|x2|ψamin

〉 =1√2

h

which is quite close to the true value of 12hω. On the other hand, the

expectation value of the operator x4 diverges if we calculate it withthe trial wave function, while we obtain a finite result for the trueground-state wave function.

The variational principle for excited states. The variationalmethod shown here can be extended to the calculation of the ener-gies of excited states. The basic idea that we will be using is thateigenvectors to different eigenvalues of the Hamilton operator are nec-essarily orthogonal. If we know the ground state of a system, or at leasta good approximation to it, then we can use the variational method.The only thing we need to make sure is that our trial wavefunction isalways orthogonal to the ground state wave function or the best ap-proximation we have for it. If we have ensured this with the choice ofour trial function, then we can proceed analogously to the variationalprinciple for the ground state.

4.4 Time-dependent Perturbation Theory

In the previous sections I have explained some of the possible methodsof stationary perturbation theory. Using these methods we are able, inprinciple, to approximate the energy eigenvalues as well as the eigen-vectors of the Hamilton operator of a quantum mechanical system thathas a small perturbation. We are then able to approximate the spectraldecomposition of the Hamilton operator, which is given by

H =∑i

Ei|ψi〉〈ψi| , (4.51)

where the Ei are the energy eigenvalues of the Hamilton operator andthe |ψi〉 the corresponding eigenvectors. Equation (4.51) is sufficient to

Page 143: Quantum mechanics

142 CHAPTER 4. APPROXIMATION METHODS

compute the time evolution of any possible initial state |φ(t0)〉 usingthe solution of the Schrodinger equation

e−iH(t−t0)|φ〉 =∑i

e−iEi(t−t0)/h|ψi〉〈ψi|φ(t0)〉 . (4.52)

Using stationary perturbation theory we are then able to obtain theapproximate time-evolution of the system.

However, there are reasons why this approach is not necessarilythe best. First of all, we might have a situation where the Hamiltonoperator of the system is time-dependent. In that case the solution

of the Schrodinger equation is generally not of the form e−i∫H(t′)dt′/h

anymore, simply because Hamilton operators for different times do notnecessarily commute with each other. For time-dependent Hamiltonoperators we have to proceed in a different way in order to obtainapproximations to the time evolution.

There are two situations in which we may encounter time-dependentHamilton operators. While the full Hamilton operator of a closed sys-tem is always time-independent, this is not the case anymore if wehave a system that is interacting with its environment, i.e. an opensystem. An example is an atom that is being irradiated by a laserbeam. Clearly the electro-magnetic field of the laser is time-dependentand therefore we find a time-dependent Hamilton operator. A time-dependent Hamilton operator may also appear in another situation. Ifwe have given the Hamilton operator of a quantum system which iscomposed of two parts, the solvable part H0 and the perturbation V ,both of which may be time independent, then it can be of advantageto go to an ’interaction’ picture that is ’rotating ’ with frequencies de-termined by the unperturbed Hamilton operator H0. In this case theremaining Hamilton operator will be a time-dependent version of V .

The rest of this section will be devoted to the explanation of amethod that allows us to approximate the time evolution operator ofa time-dependent Hamilton operator. The convergence of this methodis very much enhanced if we go into an interaction picture which elim-inates the dynamics of the system due to the unperturbed Hamiltonoperator. The definition of the ’interaction’ picture in a precise man-ner will be the subject of the first part of this section.

Page 144: Quantum mechanics

4.4. TIME-DEPENDENT PERTURBATION THEORY 143

4.4.1 Interaction picture

Often the total Hamilton operator can be split into two parts H =H0 + V , the exactly solvable Hamilton operator H0 and the pertur-bation V . If we have a Hamilton operator, time-dependent or not,and we want to perform perturbation theory it will always be usefulto make use of the exactly solvable part H0 of the Hamilton operator.In the case of time-independent perturbation theory we have used theeigenvalues and eigenvectors of the unperturbed Hamilton operator andcalculated corrections to them in terms of the unperturbed eigenvec-tors and eigenvalues. Now we are going to do the analogous step forthe time evolution operator. We use the known dynamics due to theunperturbed Hamilton operator H0 and then determine corrections tothis time evolution. In fact, the analogy goes so far that it is possi-ble in principle to rederive time-independent perturbation theory as aconsequence of time-dependent perturbation theory. I leave it to theslightly more ambitious student to work this out in detail after I haveexplained the ideas of time-dependent perturbation theory.

The Schrodinger equation reads

ihd

dt|ψ(t)〉 = (H0 + V )(t)|ψ(t)〉 , (4.53)

with a potentially time dependent total Hamilton operator H(t). Thesolution of the Schrodinger equation can be written formally as

|ψ(t)〉 = U(t, t′)|ψ(t′)〉 , (4.54)

where the time evolution operator U(t, t′) obeys the differential equa-tion

ihd

dtU(t, t′) = H(t)U(t, t′) . (4.55)

This can easily be checked by inserting Eq. (4.54) into the Schrodingerequation. Now let us assume that we can solve the time-evolutionthat is generated by the unperturbed Hamilton operator H0(t). Thecorresponding time evolution operator is given by U0(t, t

′) which givesthe solution |ψ0(t)〉 = U0(t, t

′)|ψ0(t′)〉 of the Schrodinger equation

ihd

dt|ψ0(t)〉 = H0(t)|ψ0(t)〉 . (4.56)

Page 145: Quantum mechanics

144 CHAPTER 4. APPROXIMATION METHODS

What we are really interested in is the time evolution according tothe full Hamilton operator H. As we already know the time evolutionoperator U0(t, t

′), the aim is now the calculation of the deviation fromthis unperturbed time evolution. Let us therefore derive a Schrodingerequation that describes this deviation. We obtain this Schrodingerequation by going over to an interaction picture with respect to thetime t0 which is defined by choosing the state vector

|ψI(t)〉 = U †0(t, t0)|ψ(t)〉 . (4.57)

Clearly the state in the interaction picture has been obtained by ’un-doing’ the part of the time evolution that is due to the unperturbedHamilton operator in the state |ψ(t)〉 and therefore describes that partof the time evolution that is due to the perturbation V (t). Now we haveto derive the Schrodinger equation for the interaction picture wavefunc-tion and in particular we have to derive the Hamilton-operator in theinteraction picture. This is achieved by inserting Eq. (4.57) into theSchrodinger equation Eq. (4.53). We find

ihd

dt

(U0(t, t0)|ψI(t)〉

)= (H0 + V )(t)U0(t, t0)|ψI(t)〉 (4.58)

and then

ih

(d

dtU0(t, t0)

)|ψI(t)〉+ihU0(t, t0)

d

dt|ψI(t)〉 = (H0+V )(t)U0(t, t0)|ψI(t)〉 .

(4.59)Now we use the differential equation

ihd

dtU0(t, t0) = H0(t)U0(t, t0) . (4.60)

Inserting this into Eq. (4.59) gives

H0U0(t, t0)|ψI(t)〉+ ihU0(t, t0)d

dt|ψI(t)〉 = (H0 + V )(t)U0(t, t0)|ψI(t)〉

and finally

ihd

dt|ψI(t)〉 = U †

0(t, t0)V (t)U0(t, t0)|ψI(t)〉 .

Page 146: Quantum mechanics

4.4. TIME-DEPENDENT PERTURBATION THEORY 145

Using the definition for the Hamilton operator in the interaction picture

HI(t) = U †0(t, t0)(H − H0)U0(t, t0) (4.61)

we find the interaction picture Schrodinger equation

ihd

dt|ψI(t)〉 = HI(t)|ψI(t)〉 . (4.62)

The formal solution to the Schrodinger equation in the interaction pic-ture can be written as

|ψI(t)〉 = UI(t, t′)|ψI(t′)〉 . (4.63)

From Eq. (4.63) we can then obtain the solution of the Schrodingerequation Eq. (4.53) as

|ψ(t)〉 = U0(t, t0)|ψI(t)〉= U0(t, t0)UI(t, t

′)|ψI(t′)〉= U0(t, t0)UI(t, t

′)U †0(t

′, t0)|ψ(t′)〉 . (4.64)

This result shows that even in a system with a time independentHamilton operator H we may obtain a time dependent Hamilton opera-tor by going over to an interaction picture. As time dependent Hamiltonoperators are actually quite common place it is important that we findout how the time evolution for a time-dependent Hamilton operatorcan be approximated.

4.4.2 Dyson Series

Given the Schrodinger equation for a time dependent Hamilton oper-ator we need to find a systematic way of obtaining approximations tothe time evolution operator. This is achieved by time-dependent per-turbation theory which is based on the Dyson series. Here I will dealwith the interaction picture Schrodinger equation Eq. (4.62). To findthe Dyson series, we first integrate the Schrodinger equation Eq. (4.62)formally. This gives

|ψI(t)〉 = |ψI(t′)〉 −i

h

∫ t

t′dt1HI(t1)|ψI(t1)〉 . (4.65)

Page 147: Quantum mechanics

146 CHAPTER 4. APPROXIMATION METHODS

This equation can now be iterated. We then find the Dyson series

|ψI(t)〉 = |ψI(t′)〉−i

h

∫ t

t′dt1HI(t1)|ψI(t′)〉−

1

h2

∫ t

t′dt1

∫ t1

t′dt2HI(t1)HI(t2)|ψI(t′)〉+. . . .

(4.66)From this we immediately observe that the time evolution operator inthe interaction picture is given by

UI(t, t′) = 1 − i

h

∫ t

t′dt1VI(t1)−

1

h2

∫ t

t′dt1

∫ t1

t′dt2VI(t1)VI(t2) + . . . .

(4.67)Equations (4.66) and (4.67) are the basis of time-dependent perturba-tion theory.

Remark: It should be pointed out that these expressions have somelimitations. Quite obviously every integral in the series will generallyhave the tendency to grow with increasing time differences |t− t′|. Forsufficiently large |t − t′| the Dyson series will then fail to converge.This problem can be circumvented by first splitting the time evolutionoperator into sufficiently small pieces, i.e.

UI(t, 0) = UI(t, tn)UI(tn, tn−1) . . . UI(t1, 0) , (4.68)

such that each time interval ti − ti−1 is very small. Then each of thetime evolution operators is calculated using perturbation theory andfinally they are multiplied together.

4.4.3 Transition probabilities

For the following let us assume that the unperturbed Hamilton operatoris time independent and that we take an interaction picture with respectto the time t0 = 0. In this case we can write

U0(t, 0) = e−iH0t/h . (4.69)

Under the time evolution due to the unperturbed Hamilton operatorH0 the eigenstates |φn〉 to the energy En of that Hamilton operatoronly obtain phase factors in the course of the time evolution, i.e.

e−iH0t/h|φn〉 = e−iEnt/h|φn〉 = e−iωnt|φn〉 , (4.70)

Page 148: Quantum mechanics

4.4. TIME-DEPENDENT PERTURBATION THEORY 147

where ωn = En/h. Under the time evolution due to the total Hamiltonoperator H this is not the case anymore. Now, the time evolution willgenerally take an eigenstate of the unperturbed Hamilton operator H0

to a superposition of eigenstates of the unperturbed Hamilton operatorH0, i.e.

e−iH(t−t′)/h|φn〉 =∑k

αn→k(t)|φk〉 (4.71)

with nonzero coefficients αn→k(t). If we make a measurement at thelater time t we will have a non-zero probability |αn→k|2 to find eigen-states |φm〉 with m 6= n. We then say that the system has a transitionprobability pn→k(t) for going from state |φn〉 to state |φk〉. Thesetransitions are induced by the perturbation V , which may for examplebe due to a laser field. Let us calculate the transition probability inlowest order in the perturbation V . This is a valid approximation aslong as the total effect of the perturbation is small, i.e. the probabilityfor finding the system in the original state is close to 1. To obtainthe best convergence of the perturbative expansion, we are going overto the interaction picture with respect to the unperturbed Hamiltonoperator H0 and then break off the Dyson series after the term linearin HI(t) = eiH0t1/hV (t1)e

−iH0t1/h. If the initial state at time t′ is |φn〉then the probability amplitude for finding the system in state |φm〉 atthe later time t is given by

αn→m(t) = 〈φm|ψ(t)〉

≈ 〈φm|(|φn〉 −

i

h

∫ t

t′dt1e

iH0t1/hV (t1)e−iH0t1/h|φn〉

)= δmn −

i

h

∫ t

t′dt1e

i(ωm−ωn)t1〈φm|V (t1)|φn〉 . (4.72)

If m 6= n, we find

an→m(t) ≈ − ih

∫ t

t′dt1e

i(ωm−ωn)t1〈φm|V (t1)|φn〉 , (4.73)

and the transition probability is then given by

pn→m(t) = |an→m(t)|2 . (4.74)

Page 149: Quantum mechanics

148 CHAPTER 4. APPROXIMATION METHODS

Periodic perturbation

Let us consider the special case in which we have a periodic perturbation

V (t) = V0 cosωt =1

2V0

(eiωt + e−iωt

). (4.75)

Inserting this into Eq. (4.73) we find using ωmn := ωm − ωn

an→m(t) ≈ − ih

∫ t

0dt1e

iωmnt11

2〈φm|V0|φn〉

(eiωt1 + e−iωt1

)= − i

2h〈φm|V0|φn〉

(e−i(ω−ωmn)t − 1

−i(ω − ωmn)+ei(ω+ωmn)t − 1

i(ω + ωmn)

)(4.76)

Static perturbation For a time independent perturbation we haveω = 0. This leads to

an→m(t) = − ih〈φm|V0|φn〉

(eiωmnt − 1

iωmn

)(4.77)

and then to

pn→m(t) =1

h2 |〈φm|V0|φn〉|2sin2 ωmnt

2

(ωmn

2)2

(4.78)

For sufficiently large times t this is a very sharply peaked function inthe frequency ωmn. In fact in the limit t → ∞ this function tendstowards a delta-function

limt→∞

sin2 ωmnt2

(ωmn

2)2

= 2πtδ(ωmn) . (4.79)

For sufficiently large times (t � ω−1mn) we therefore find that the tran-

sition probability grows linearly in time. We find Fermi’s golden rulefor time independent perturbations

pω=0n→m(t) = t

h2 |〈φm|V0|φn〉|2δ(ωn − ωm) . (4.80)

Obviously this cannot be correct for arbitrarily large times t, becausethe transition probabilities are bounded by unity.

Page 150: Quantum mechanics

4.4. TIME-DEPENDENT PERTURBATION THEORY 149

High frequency perturbation If the frequency of the perturbationis unequal to zero, we find two contributions to the transition ampli-tude, one with a denominator ω − ωmn and the other with the denom-inator ω + ωmn. As the frequency ω is always positive, only the firstdenominator can become zero, in which case we have a resonance andthe first term in Eq. (4.76) dominates. We find

pn→m(t) =1

4h2 |〈φm|V0|φn〉|2sin2 (ω−ωmn)t

2

(ω−ωmn

2)2

+sin2 (ω+ωmn)t

2

(ω+ωmn

2)2

. (4.81)

Again we find that for large times the transition probability grows lin-early in time which is formulated as Fermi’s golden rule for time de-pendent perturbations

pω 6=0n→m(t) = t

4h2 |〈φm|V0|φn〉|2(δ(ωn−ωm−ω)+δ(ωn−ωm+ω)) . (4.82)

The Zeno effect

Fermi’s golden rule is not only limited to times that are not too large,but also finds its limitations for small times. In fact for times t� ω−1

mn

the transition probability grows quadratically in time. This is not onlya feature of our particular calculation, but is a general feature of thequantum mechanical time evolution that is governed by the Schrodingerequation. This can be shown quite easily and is summarized in thefollowing

Theorem 50 Given a Hamilton operator H and an initial state |φ(0)〉,then the transition probability to any orthogonal state |φ⊥〉 grows quadrat-ically for small times, i.e.

limt→0

d

dt|〈φ⊥|φ(t)〉|2 = 0 . (4.83)

Proof: Let us first calculate the derivative of the transition probabilityp(t) = |α(t)|2 with α(t) = 〈φ⊥|φ(t)〉 for arbitrary times t. We find

dp

dt(t) =

dt(t)α∗(t) + α(t)

dα∗

dt(t) . (4.84)

Page 151: Quantum mechanics

150 CHAPTER 4. APPROXIMATION METHODS

Obviously α(t = 0) = 〈φ⊥|φ(0)〉 = 0, so that we find

limt→0

dp

dt(t) = 0 . (4.85)

This finishes the proof 2.The transition probability for short times is therefore

p(t) = p(0) +t2

2p′′(0) + . . . =

Ct2

2+ higher orders in t (4.86)

where C is a positive constant.Theorem 50 has a weird consequence which you have encountered

(in disguise) in the first problem sheet. Assume that we have a twostate system with the orthonormal basis states |0〉 and |1〉. Imaginethat we start at time t = 0 in the state |0〉. The time evolution of thesystem will be governed by a Hamilton operator H and after some timeT the system will be in the state |1〉.

Now let the system evolve for a time T , but, as I am very curioushow things are going I decide that I will look in which state the systemis after times T

n, 2Tn, . . . , T . The time evolution takes the initial state

after a time Tn

into

|φ(T

n)〉 = e−iH

Thn |0〉 . (4.87)

Now I make a measurement to determine in which of the two states |0〉or |1〉 the system is. The probability for finding the system in state |0〉is p1 = 1 − C T 2

n2 with some nonzero constant C. If I find the systemin the state |0〉, then we wait until time 2T

nand perform the same

measurement again. The probability that in all of these measurementsI will find the system in the state |0〉 is given by

p =

(1− CT

2

n2

)n≈ 1− CT

2

n. (4.88)

In the limit of infinitely many measurements, this probability tends to1. This result can be summarized as

A continuously observed system does not evolve in time.

Page 152: Quantum mechanics

4.4. TIME-DEPENDENT PERTURBATION THEORY 151

This phenomenon has the name ’Quantum Zeno effect’ and has indeedbeen observed in experiments about 10 years ago.

With this slightly weird effect I finish this part of the lecture andnow move on to explain some features of quantum information theory,a field that has developed in the last few years only.

Page 153: Quantum mechanics

152 CHAPTER 4. APPROXIMATION METHODS

Page 154: Quantum mechanics

Part II

Quantum InformationProcessing

153

Page 155: Quantum mechanics
Page 156: Quantum mechanics

Chapter 5

Quantum InformationTheory

In 1948 Claude Shannon formalised the notion of information and cre-ated what is now known as classical information theory. Until aboutfive years ago this field was actually known simply as information the-ory but now the additional word ’classical’ has become necessary. Thereason for this is the realization that there is a quantum version of thetheory which differs in quite a few aspects from the classical version.In these last lectures of this course I intend to give you a flavour of thisnew theory that is currently emerging. What I am going to show youin the next few lectures is more or less at the cutting edge of physics,and many of the ideas that I am going to present are not older than 5years. In fact the material is modern enough that many of your physicsprofessors will actually not really know these things very well. So, af-ter these lectures you will know more than some professors of physics,and thats not easy to achieve as a third year student. Now you mightbe worried that we will require plenty of difficult mathematics, as youwould expect this for a lecture that presents the cutting edge of physi-cal science. This however, is not the case. In this lecture I taught youmany of the essential techniques that are necessary for you to under-stand quantum information theory. This is again quite differentfrom other branches of physics that are at the cutting edge of research,e.g. super string theory where you need to study for quite a while untilyou reach a level that allows you to carry out research. The interesting

155

Page 157: Quantum mechanics

156 CHAPTER 5. QUANTUM INFORMATION THEORY

point about quantum information theory is the fact that it unifies twoapparently different fields of science, namely physics and informationtheory.

Your question will now be: ’What is the connection’? and ’Whydid people invent the whole thing’?

The motivation for scientists to begin to think about quantum infor-mation came from the rapid development of microprocessor technology.As many of you know, computers are becoming more and more pow-erful every year, and if you have bought a computer 5 years ago thenit will be hopelessly inferior to the latest model that is on the market.Even more amazing is the fact that computers do get more powerful,but they do not get much more expensive. This qualitative descrip-tion of the development of micro-electronics can actually be quantifiedquite well. Many years ago, in the 1960’s, Gordon Moore, one of thefounders of Intel observed that the number of transistors per unit areaof a micro chip doubles roughly every 18 months. He then predictedthat this development will continue in the future. As it turns out, hewas right with his prediction and we are still observing the same growthrate. This development can be seen in Fig. 5.1. Of course sooner or

Figure 5.1: The number of transistors per unit area of a micro-chip doubles roughly every 18 months.

Of course sooner or later this development has to stop, because if it continued like this for another 20 years the transistors would be so small that they would be fully charged by just one electron and would shrink to the size of an atom. While so far micro-electronics has worked on principles that can be understood to a large extent using classical physics, it is clear that transistors of the size of one atom must see plenty of quantum mechanical effects. As a matter of fact we would expect that quantum mechanical effects will play a significant role even earlier. This implies that we should really start to think about information processing and computation on the quantum mechanical level. These thoughts led to the birth of quantum information theory. Nevertheless, at this point it is not yet clear whether there is any essential difference between information at the classical level and information at the quantum level. As we will see, however, there is.

A very important insight that people had is the observation that information should not be regarded as an isolated, purely mathematical concept! Why is that so?


You may try to define information as an abstract concept, but you should never forget that information needs to be represented and transmitted. Both of these processes are physical. An example of the storage of information is my printed lecture notes, which use ink on paper. Even in your brain, information is stored not in an immaterial form, but rather in the form of synaptic connections between your brain cells. A computer, finally, stores information in transistors that are either charged or uncharged. Likewise, information transmission requires physical objects. Talking to you means that I am using sound waves (described by classical physics) to send information to you. Television signals going through a cable represent information transfer using a physical system. In a computer, finally, the information that is stored in the transistors is transported by small currents.

So, clearly information and physics are not two separate concepts but should be considered simultaneously. Our everyday experience is of course mainly dominated by classical physics, and therefore it is not surprising that our normal perception of information is governed by classical physics. However, the question remains what will happen when we try to unify information theory with quantum physics. What new effects can appear? Can we do useful things with this new theory? These are all important questions, but above all I think that it is good fun to learn something new about nature.


5.1 What is information? Bits and all that.

5.2 From classical information to quantum information.

5.3 Distinguishing quantum states and the no-cloning theorem.

5.4 Quantum entanglement: From qubits to ebits.

5.5 Quantum state teleportation.

5.6 Quantum dense coding.

5.7 Local manipulation of quantum states.

5.8 Quantum cryptography

5.9 Quantum computation

Quantum Bits

Before I really start, I will have to introduce a very basic notion: the generalization of the classical bit. In classical information theory information is usually represented in the form of classical bits, i.e. 0 and 1. These two values may for example be represented as an uncharged transistor ('0') and a fully charged transistor ('1'), see Fig. 5.2.

Note that a charged transistor easily holds 10^8 electrons.


Figure 5.2: A transistor can represent two distinct logical values. An uncharged transistor represents '0' and a fully charged transistor represents '1'. The amount of charge may vary a little bit without causing an error.

Therefore it doesn't make much difference whether a few thousand electrons are missing. A transistor charged with 10^8 electrons and one with 0.999·10^8 electrons both represent the logical value 1.

This situation changes when we consider either very small transistors or, in the extreme case, atoms. Imagine an atom which stores the numbers '0' and '1' in its internal states. For example an electron in the ground state represents the value '0', while an electron in the excited state represents the value '1' (see Fig. 5.3). In the following we will disregard all the other energy levels and idealize the system as a two level system. Such a quantum mechanical two level system will from now on be called a quantum bit or, shortly, a qubit. So far this is just the same situation as in the classical case of a transistor. However, there are two differences. Firstly, the atomic system will be much more sensitive to perturbations, because now it makes a big difference whether there is one electron or no electron. Secondly, and probably more importantly, in quantum mechanics we have the superposition principle. Therefore we do not only have the possibilities '0' and '1' represented by the two quantum states |0〉 and |1〉, but we may also have coherent superpositions between different values (Fig. 5.3), i.e.

|ψ〉 = a|0〉+ b|1〉 (5.1)

Therefore we have the ability to represent simultaneously two values in a single quantum bit. But it gets even better. Imagine that you are holding four qubits.


Figure 5.3: An atom as a quantum bit: an electron in the ground state represents '0', one in the excited state represents '1', and coherent superpositions of the two are possible.

Then they can be in a state that is a coherent superposition of 16 different states, each representing a binary string,

|ψ〉 = (1/4)(|0000〉+ |0001〉+ |0010〉+ |0011〉
            + |0100〉+ |0101〉+ |0110〉+ |0111〉
            + |1000〉+ |1001〉+ |1010〉+ |1011〉
            + |1100〉+ |1101〉+ |1110〉+ |1111〉) .   (5.2)

Evidently a collection of n qubits can be in a state that is a coherent superposition of 2^n different quantum states, each of which represents a number in binary notation. If we apply a unitary transformation to such a state, we therefore manipulate 2^n binary numbers simultaneously! This represents a massive parallelism in our computation, which is responsible for the fact that a quantum mechanical system can solve certain problems exponentially faster than any classical system can. However, it is very difficult to build such a quantum system in a way that we can control it very precisely while it remains insensitive to noise at the same time, and there are no real quantum computers around at present.
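As a quick check of Eq. (5.2) you can build this state explicitly on a computer. The following is a minimal sketch of mine in Python/numpy (not part of the original notes): applying a Hadamard gate to each of four qubits prepared in |0〉 produces a vector of 2^4 = 16 equal amplitudes 1/4, exactly the uniform superposition of Eq. (5.2).

import numpy as np

# Hadamard gate and the single-qubit state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
zero = np.array([1.0, 0.0])

# Four qubits: Kronecker (tensor) products build the 2**4 = 16 dimensional state
state = np.array([1.0])
for _ in range(4):
    state = np.kron(state, H @ zero)

print(state)                         # 16 amplitudes, all equal to 1/4, as in Eq. (5.2)
print(np.sum(np.abs(state) ** 2))    # normalisation check: 1.0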

If I have enough time, I am going to explain the idea of a quantum computer in more detail, but first I would like to present some effects and applications of quantum entanglement which have actually been realized in experiments.

5.10 Entanglement and Bell inequalities

In the previous section we have seen that the superposition principle is one source of the new effects that can arise in quantum information theory. However, the superposition principle alone also exists in classical physics, e.g. in sound waves or electro-magnetic waves.


What is completely absent from classical physics is the notion of entanglement (from the German word Verschränkung), of which you have heard earlier in this lecture. Entanglement was first investigated by Schrodinger in his Gedanken experiments using cats. Later Einstein, Podolsky and Rosen proposed a famous Gedanken experiment which they used to criticise quantum mechanics. The notion of entanglement arises from the superposition principle together with the tensor product structure of the quantum mechanical Hilbert space. States of the form

|ψ〉 = (1/2)(| ↓〉+ | ↑〉) ⊗ (| ↓〉+ | ↑〉)   (5.3)

are called product states. Such states are called disentangled, as the outcome of a measurement on the first system is independent of the outcome of a measurement on the other particle. On the other hand, states of the form

|ψ〉 = (1/√2)(| ↓〉 ⊗ | ↓〉 + | ↑〉 ⊗ | ↑〉)   (5.4)

are very strongly correlated. If a measurement on the first system shows that the system is in state | ↓〉 (| ↑〉), then we immediately know that the second system must be in state | ↓〉 (| ↑〉) too. If this were all that could be said about the correlations in the state Eq. (5.4), then it would not be justified to call these states entangled, simply because we can produce the same correlations also in a purely classical setting. Imagine for example that we have two coins and that they are always prepared in such a way that either both of them show heads or both of them show tails. Then by looking at only one of them, we know whether the other one shows heads or tails. The correlations in the state Eq. (5.4), however, have much more complicated properties. These new properties are due to the fact that in quantum mechanics we can make measurements in bases other than the {| ↓〉, | ↑〉} basis. Any basis of the form {a| ↓〉 + b| ↑〉, b∗| ↓〉 − a∗| ↑〉} can also be used. This makes the structure of the quantum mechanical state Eq. (5.4) much richer than that of the example of the two coins.
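One concrete way to see the difference between Eq. (5.3) and Eq. (5.4) is the following small sketch of mine (Python/numpy, not from the original notes): reshape the four amplitudes of a two-qubit state into a 2×2 matrix and compute its singular values (the Schmidt coefficients). A product state has a single non-zero Schmidt coefficient, while the state of Eq. (5.4) has two equal ones, which is the signature of entanglement.

import numpy as np

def schmidt_coefficients(psi):
    # Singular values of the 2x2 amplitude matrix of a two-qubit state
    return np.linalg.svd(psi.reshape(2, 2), compute_uv=False)

# basis vectors: index 0 = spin down, index 1 = spin up
down = np.array([1.0, 0.0])
up = np.array([0.0, 1.0])

product = 0.5 * np.kron(down + up, down + up)                      # Eq. (5.3)
entangled = (np.kron(down, down) + np.kron(up, up)) / np.sqrt(2)   # Eq. (5.4)

print(schmidt_coefficients(product))     # [1, 0]: a single term -> product state
print(schmidt_coefficients(entangled))   # [0.707, 0.707]: two terms -> entangled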

A famous example in which these new correlations manifest themselves is that of the Bell inequalities.


In the rest of this section I am going to explain to you some of the ideas behind the Bell inequalities and their significance. When Bell started to think about the foundations of quantum mechanics, the work that later led to the Bell inequalities, he was interested in one particular problem. Since the discovery of quantum mechanics, physicists have been worried about the fact that quantum mechanical measurements lead to random measurement outcomes. All that quantum mechanics predicts are the probabilities for the different possible measurement outcomes. This is quite substantially different from everything that we know from classical physics, where the measurement outcomes are not random if we have complete knowledge of all system variables. Only incomplete knowledge of the system can lead to random measurement outcomes. Does that mean that quantum mechanics is an incomplete description of nature? Are there hidden variables that we cannot observe directly, but which determine the outcome of our measurements? In a book on the mathematical foundations of quantum mechanics John von Neumann had actually presented a proof that such theories cannot exist. Unfortunately the proof is wrong, as von Neumann had made a wrong assumption. The question whether there are hidden variable theories that reproduce all quantum mechanical predictions was therefore still open. John Bell re-investigated the experiment that had originally been proposed by Einstein, Podolsky and Rosen, and he finally succeeded in showing that there is a physically observable quantity for which local hidden variable theories and quantum mechanics give different predictions. This was an immense step forward, because now it had become possible to test experimentally whether quantum mechanics with all its randomness is correct or whether there are hidden variable theories that explain the randomness in the measurement results by our incomplete knowledge. In the following I am going to show you a derivation of Bell's inequalities.

To do this I now need to define quantitatively what I mean by correlations. The state Eq. (5.4) describes the state of two spin-1/2 particles. Assume that I measure the orientation of the first spin along direction ~a and that of the second spin along direction ~b. Both measurements can only have one of two outcomes: either the spin is parallel or anti-parallel to the direction along which it has been measured. If it is parallel then I assign the value a = 1 (b = 1) to the measurement outcome; if it is anti-parallel, then I assign the value a = −1 (b = −1) to the measurement outcome.


If we repeat the measurement N times, each time preparing the original state Eq. (5.4) and then performing the measurement, then the correlation between the two measurements is defined as

C(~a,~b) = lim_{N→∞} (1/N) ∑_{n=1}^{N} a_n b_n .   (5.5)
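In practice N is of course finite, so Eq. (5.5) is estimated as a plain average over the recorded pairs of ±1 outcomes. A tiny sketch of mine with made-up data (for parallel measurement directions on the state of Eq. (5.4) the two outcomes always agree):

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical record of N runs: a_n and b_n are the +/-1 outcomes on each side.
# Perfectly correlated data, as for parallel settings on the state of Eq. (5.4).
N = 100_000
a = rng.choice([-1, 1], size=N)
b = a.copy()

C_estimate = np.mean(a * b)   # finite-N version of Eq. (5.5)
print(C_estimate)             # 1.0 here; uncorrelated data would give roughly 0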

Now I want to show that correlations of the form Eq. (5.5) satisfy Bell's inequalities, provided two assumptions are made:

1. We impose locality, i.e. a measurement on the first particle has no effect on the second side when the measurements take place at space-like separation (no signal can travel from the first particle, after its measurement, to the second particle before the measurement on the second particle has been carried out). This leads to probabilities that are just products of the probability for a measurement outcome on the first side and the probability for a measurement outcome on the second side.

2. There are hidden variables that we cannot access directly but which influence the probabilities that are observed. This implies that all probabilities are of the form PA(a, λ) and PB(b, λ), where λ describes the hidden variables.

Under these assumptions I want to prove that for measurements along the four directions ~a, ~a′, ~b and ~b′ we find the Bell inequality

|C(~a,~b) + C(~a,~b′) + C(~a′,~b)− C(~a′,~b′)| ≤ 2 . (5.6)

Proof: To see this we use the fact that for all a_n, a′_n, b_n, b′_n ∈ [−1, 1]

|a_n(b_n + b′_n) + a′_n(b_n − b′_n)| ≤ 2 .   (5.7)

Now we find

C(~a,~b) = ∫ dλ ρ(λ) [PA(+, λ)PB(+, λ) + PA(−, λ)PB(−, λ) − PA(+, λ)PB(−, λ) − PA(−, λ)PB(+, λ)]
        = ∫ dλ ρ(λ) [PA(+, λ) − PA(−, λ)] [PB(+, λ) − PB(−, λ)]
        ≡ ∫ QA(~a, λ) QB(~b, λ) ρ(λ) dλ   (5.8)


and therefore

|C(~a,~b) + C(~a,~b′) + C(~a′,~b) − C(~a′,~b′)|
        ≤ ∫ |QA(~a, λ)QB(~b, λ) + QA(~a, λ)QB(~b′, λ) + QA(~a′, λ)QB(~b, λ) − QA(~a′, λ)QB(~b′, λ)| ρ(λ) dλ
        ≤ 2 .

This finishes the proof.

Therefore, if we are able to explain quantum mechanics by a local hidden variable theory, then Eq. (5.6) has to be satisfied.

Now let us calculate what quantum mechanics is telling us about the left hand side of Eq. (5.6). Quantum mechanically the correlation is given by

C(~a,~b) = 〈ψ|(~a~σ)⊗ (~b~σ)|ψ〉 ,   (5.9)

where ~σ = σx~ex + σy~ey + σz~ez with the Pauli operators σi. Now we can express the correlation in terms of the angle θab between the vectors ~a and ~b. We find (this is an exercise for you)

C(~a,~b) = − cos θab . (5.10)
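If you would like to check this numerically rather than analytically, here is a small sketch of mine (Python/numpy, not from the notes). It uses the spin-0 singlet state (| ↑〉| ↓〉 − | ↓〉| ↑〉)/√2, the state of Eq. (5.12) below, for which the correlation of Eq. (5.9) comes out as −cos θab for any two measurement directions in the x-z plane.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(theta):
    # Spin operator along a unit vector in the x-z plane, at angle theta from z
    return np.sin(theta) * sx + np.cos(theta) * sz

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

for theta_a, theta_b in [(0.0, 0.0), (0.0, np.pi / 3), (np.pi / 4, np.pi / 2)]:
    op = np.kron(spin_along(theta_a), spin_along(theta_b))
    C = np.real(singlet.conj() @ op @ singlet)
    print(C, -np.cos(theta_b - theta_a))   # the two columns agree, as in Eq. (5.10)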

Now we make a particular choice for the four vectors ~a, ~a′, ~b and ~b′. We choose ~a and ~b parallel and ~a′ and ~b′ such that all four vectors lie in one plane. Finally we choose the angles θab′ = θa′b = φ. All this is shown in Fig. 5.4. Inserting this choice into the left hand side of Eq. (5.6), we find

|1 + 2 cosφ− cos 2φ| ≤ 2 . (5.11)

Plotting the function on the left hand side of Eq. (5.11) in Fig. 5.5, we see that the inequality is actually violated for quite a wide range of values of φ. The maximum value of the left hand side of Eq. (5.11) is 2.5 and is attained at φ = π/3.
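This is easy to confirm numerically (a short check of mine, not part of the original notes): evaluating the left hand side of Eq. (5.11) over a grid of angles reproduces both the violation over a wide range of φ and the maximum of 2.5 near φ = π/3.

import numpy as np

phi = np.linspace(0, np.pi, 1001)
lhs = np.abs(1 + 2 * np.cos(phi) - np.cos(2 * phi))   # left hand side of Eq. (5.11)

print(lhs.max())               # 2.5
print(phi[np.argmax(lhs)])     # approximately pi/3 = 1.047...
print(np.mean(lhs > 2))        # fraction of the sampled angles violating Eq. (5.6)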

Of course now the big question is whether Bell's inequalities are violated or not. Experiments testing the validity of the Bell inequalities have been carried out, e.g. in Paris in 1982. The idea was very simple, but of course it was quite difficult to actually do the experiment. A central source produces photons which are in the singlet state

|ψ−〉 = (1/√2)(| ↑〉| ↓〉 − | ↓〉| ↑〉) .   (5.12)


Figure 5.4: The relative orientation of the four directions ~a, ~a′, ~b and ~b′ along which the spins are measured.

One can derive identical Bell inequalities for such a state, and this state was chosen because it can be produced fairly easily. The way it is done is via the decay of an atom in such a way that it always emits two photons and the total change in angular momentum in the atom is zero. Then the two photons are necessarily in a spin-0 state, i.e. a singlet state. The two photons then fly away in opposite directions towards the two measurement apparatuses. These apparatuses measure the polarization state of the two photons along four possible directions that have been chosen to give maximal violation of the Bell inequalities. After each decay a direction was chosen for each apparatus randomly and independently of the other side. Finally the measurement result was noted down. This experiment found a value for the correlations of about 2.7, which is reasonably close to the quantum mechanically predicted value of 2.82. More recently, more precise experiments have been carried out and the violation of Bell's inequalities has been clearly demonstrated.


Figure 5.5: The left hand side of Eq. (5.11) is plotted. You can clearly see that it can exceed the value of 2 and achieves a maximum of 2.5.

5.11 Quantum State Teleportation

The procedure we will analyse is called quantum teleportation and can be understood as follows. The naive idea of teleportation involves a protocol whereby an object positioned at a place A and time t first "dematerializes" and then reappears at a distant place B at some later time t + T. Quantum teleportation implies that we wish to apply this procedure to a quantum object. However, genuine quantum teleportation differs from this idea, because we are not teleporting the whole object but just its state from particle A to particle B. As quantum particles are indistinguishable anyway, this amounts to 'real' teleportation.


One way of performing teleportation (and certainly the way portrayed in various science fiction movies, e.g. The Fly) is first to learn all the properties of that object (thereby possibly destroying it). We then send this information as a classical string of data to B, where another object with the same properties is re-created. One problem with this picture is that, if we have a single quantum system in an unknown state, we cannot determine its state completely because of the uncertainty principle. More precisely, we need an infinite ensemble of identically prepared quantum systems to be able to determine its quantum state completely. So it would seem that the laws of quantum mechanics prohibit teleportation of single quantum systems. However, the very feature of quantum mechanics that leads to the uncertainty principle (the superposition principle) also allows the existence of entangled states. These entangled states will provide a form of quantum channel to conduct a teleportation protocol. It will turn out that there is no need to learn the state of the system in order to teleport it. On the other hand, there is a need to send some classical information from A to B, but part of the information also travels down an entangled channel. This then provides a way of distinguishing quantum and classical correlations, which we said was at the heart of quantifying entanglement. After the teleportation is completed, the original state of the particle at A is destroyed (although the particle itself remains intact) and so is the entanglement in the quantum channel. These two features are direct consequences of fundamental laws in information processing. I cannot explain these here as I do not have enough time, but if you are interested you should have a look at the article M.B. Plenio and V. Vedral, Contemp. Physics 39, 431 (1998), which has been written for final year students and first year PhD students.

5.12 A basic description of teleportation

Let us begin by describing quantum teleportation in the form originally proposed by Bennett, Brassard, Crepeau, Jozsa, Peres, and Wootters in 1993. Suppose that Alice and Bob, who are distant from each other, wish to implement a teleportation procedure. Initially they need to share a maximally entangled pair of quantum mechanical two level systems. Unlike the classical bit, a qubit can be in a superposition of its basis states, like |Ψ〉 = a|0〉 + b|1〉.


This means that if Alice and Bob both have one qubit each, then the joint state may for example be

|ΨAB〉 = (|0A〉|0B〉+ |1A〉|1B〉)/√2 ,   (5.13)

where the first ket (with subscript A) belongs to Alice and the second (with subscript B) to Bob. This state is entangled, meaning that it cannot be written as a product of the individual states (like e.g. |00〉). Note that this state is different from a statistical mixture (|00〉〈00| + |11〉〈11|)/2, which is the most correlated state allowed by classical physics.

Now suppose that Alice receives a qubit in a state which is unknown to her (let us label it |Φ〉 = a|0〉 + b|1〉) and she has to teleport it to Bob. The state has to be unknown to her because otherwise she can just phone Bob up and tell him all the details of the state, and he can then recreate it on a particle that he possesses. If Alice does not know the state, then she cannot measure it to obtain all the necessary information to specify it. Therefore she has to resort to using the state |ΨAB〉 that she shares with Bob. To see what she has to do, we write out the total state of all three qubits

|ΦAB〉 := |Φ〉|ΨAB〉 = (a|0〉+ b|1〉)(|00〉+ |11〉)/√2 .   (5.14)

However, the above state can be written in the following convenient way (here we are only rewriting the above expression in a different basis, and there is no physical process taking place in between):

|ΦAB〉 = (a|000〉+ a|011〉+ b|100〉+ b|111〉)/√2
       = (1/2)[|Φ+〉(a|0〉+ b|1〉) + |Φ−〉(a|0〉 − b|1〉) + |Ψ+〉(a|1〉+ b|0〉) + |Ψ−〉(a|1〉 − b|0〉)] ,   (5.15)

where

|Φ+〉 = (|00〉+ |11〉)/√2   (5.16)
|Φ−〉 = (|00〉 − |11〉)/√2   (5.17)
|Ψ+〉 = (|01〉+ |10〉)/√2   (5.18)
|Ψ−〉 = (|01〉 − |10〉)/√2   (5.19)

form an orthonormal basis of Alice's two qubits (remember that the first two qubits belong to Alice and the last qubit belongs to Bob).


The above basis is frequently called the Bell basis. This is a very useful way of writing the state of Alice's two qubits and Bob's single qubit because it displays a high degree of correlation between Alice's and Bob's parts: to every state of Alice's two qubits (i.e. |Φ+〉, |Φ−〉, |Ψ+〉, |Ψ−〉) there corresponds a state of Bob's qubit. In addition, the state of Bob's qubit in all four cases looks very much like the original qubit that Alice has to teleport to Bob. It is now straightforward to see how to proceed with the teleportation protocol:

1. Upon receiving the unknown qubit in state |Φ〉, Alice performs a projective measurement on her two qubits in the Bell basis. This means that she will obtain one of the four Bell states randomly, and with equal probability.

2. Suppose Alice obtains the state |Ψ+〉. Then the state of all three qubits (Alice + Bob) collapses to the following state

|Ψ+〉(a|1〉+ b|0〉) . (5.20)

(the last qubit belongs to Bob as usual). Alice now has to communicate the result of her measurement to Bob (over the phone, for example). The point of this communication is to inform Bob how the state of his qubit now differs from the state of the qubit Alice was holding previously.

3. Now Bob knows exactly what to do in order to complete the teleportation. He has to apply a unitary transformation on his qubit which simulates a logical NOT operation: |0〉 → |1〉 and |1〉 → |0〉. He thereby transforms the state of his qubit into the state a|0〉 + b|1〉, which is precisely the state that Alice had to teleport to him initially. This completes the protocol. It is easy to see that if Alice had obtained some other Bell state, then Bob would have to apply some other simple operation to complete the teleportation. We leave it to the reader to work out the other two operations (note that if Alice obtained |Φ+〉, Bob would not have to do anything). If |0〉 and |1〉 are written in their vector form then the operations that Bob has to perform can be represented by the Pauli spin matrices, as depicted in Fig. 5.6; a numerical sketch of the complete protocol is given below.
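Here is a compact numerical simulation of the whole protocol (my own sketch in Python/numpy, following the conventions of Eqs. (5.13)-(5.19); it is an illustration, not code from the original notes). It prepares a random unknown qubit, projects Alice's two qubits onto each of the four Bell states in turn, applies the corresponding correction on Bob's side, and checks that Bob ends up holding the original state with certainty.

import numpy as np

rng = np.random.default_rng(42)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)      # sigma_x
Z = np.array([[1, 0], [0, -1]], dtype=complex)     # sigma_z
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# random unknown state |Phi> = a|0> + b|1> that Alice wants to teleport
amps = rng.normal(size=2) + 1j * rng.normal(size=2)
a, b = amps / np.linalg.norm(amps)
phi = a * zero + b * one

# shared pair, Eq. (5.13), and total three-qubit state, Eq. (5.14);
# qubit order: (Alice's unknown qubit, Alice's half of the pair, Bob's half)
pair = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
total = np.kron(phi, pair)

# Bell basis of Alice's two qubits, Eqs. (5.16)-(5.19), with Bob's correction
bell_basis = {
    "Phi+": ((np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2), I2),
    "Phi-": ((np.kron(zero, zero) - np.kron(one, one)) / np.sqrt(2), Z),
    "Psi+": ((np.kron(zero, one) + np.kron(one, zero)) / np.sqrt(2), X),
    "Psi-": ((np.kron(zero, one) - np.kron(one, zero)) / np.sqrt(2), Z @ X),
}

for name, (bell, correction) in bell_basis.items():
    # Alice projects her two qubits onto this Bell state; in the reshaped
    # state the row index labels Alice's two qubits, the column Bob's qubit.
    bob_unnormalised = bell.conj() @ total.reshape(4, 2)
    prob = np.real(bob_unnormalised.conj() @ bob_unnormalised)   # 1/4 for each outcome
    bob = correction @ (bob_unnormalised / np.sqrt(prob))
    fidelity = np.abs(phi.conj() @ bob) ** 2                     # 1: Bob holds |Phi>
    print(name, round(float(prob), 3), round(float(fidelity), 3))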


[Figure 5.6, schematic. (a) Initial state (α|0〉+ β|1〉)(|00〉+ |11〉)/√2. (b) Alice measures her two qubits in the Bell basis {|Ψ±〉, |Φ±〉}, giving
(1/2)|Φ+〉(α|0〉+ β|1〉) + (1/2)|Φ−〉(α|0〉 − β|1〉) + (1/2)|Ψ+〉(α|1〉+ β|0〉) + (1/2)|Ψ−〉(α|1〉 − β|0〉).
(c) Alice finds |Φ+〉 −→ 0: Bob does nothing; |Φ−〉 −→ 1: Bob performs σz; |Ψ+〉 −→ 2: Bob performs σx; |Ψ−〉 −→ 3: Bob performs σzσx.
(d) Bob's particle is left in the state |ψ〉. See the caption below.]

Figure 5.6: The basic steps of quantum state teleportation. Alice and Bob are spatially separated, Alice on the left of the dashed line, Bob on the right. (a) Alice and Bob share a maximally entangled pair of particles in the state (|00〉+ |11〉)/√2. Alice wants to teleport the unknown state |ψ〉 to Bob. (b) The total state of the three particles that Alice and Bob are holding is rewritten in the Bell basis Eqs. (5.16)-(5.19) for the two particles Alice is holding. Alice performs a measurement that projects the state of her two particles onto one of the four Bell states. (c) She transmits the result, encoded in the numbers 0, 1, 2, 3, to Bob, who performs a unitary transformation 1, σz, σx, σzσx that depends only on the measurement result that Alice obtained but not on the state |ψ〉! (d) After Bob has applied the appropriate unitary operation on his particle he can be sure that he is now holding the state that Alice was holding in (a).


An important fact to observe in the above protocol is that all the operations (Alice's measurements and Bob's unitary transformations) are local in nature. This means that there is never any need to perform a (global) transformation or measurement on all three qubits simultaneously, which is what allows us to call the above protocol a genuine teleportation. It is also important that the operations that Bob performs are independent of the state that Alice tries to teleport to Bob. Note also that the classical communication from Alice to Bob in step 2 above is crucial, because otherwise the protocol would be impossible to execute (there is a deeper reason for this: if we could perform teleportation without classical communication then Alice could send messages to Bob faster than the speed of light; remember that I explained this in a previous lecture).

It is also important to observe that the initial state to be teleported is destroyed at the end, i.e. it becomes maximally mixed, of the form (|0〉〈0| + |1〉〈1|)/2. This has to happen, since otherwise we would end up with two qubits in the same state at the end of the teleportation (one with Alice and the other one with Bob). So, effectively, we would clone an unknown quantum state, which is impossible by the laws of quantum mechanics (this is the no-cloning theorem of Wootters and Zurek). I will explain this in more detail later.

For those of you who are interested in quantum information theory, here are some references for further reading:

L. Hardy, Contemp. Physics 39, 415 (1998)
M.B. Plenio and V. Vedral, Contemp. Physics 39, 431 (1998)
J. Preskill, http://www.theory.caltech.edu/people/preskill/ph229/