
Computational Physics 1Module PH3707

10 Credits

by Roger Stewart

Version date: Monday, 7 July 2008


Copyright © 2003 Dennis Dunn.


Contents

INTRODUCTION TO THE MODULE

1 RANDOM PROCESSES
1.1 Objectives
1.2 Introduction
1.3 Random Number Generators
1.4 Intrinsic Subroutine
1.5 Monte Carlo Integration
1.6 Nuclear Decay Chains
1.7 Exercises

2 ANALYSIS OF WAVEFORMS
2.1 Objectives
2.2 Fourier Analysis
2.3 The Numerical Methods
2.4 Diffusion Equation

3 EIGENVALUES AND EIGENVECTORS OF MATRICES
3.1 Objectives
3.2 Eigenvalues and eigenvectors of real symmetric or hermitian matrices
3.3 A Matrix Eigenvalue Package
3.4 Schrödinger Equation

4 MONTE CARLO SIMULATION
4.1 Objectives
4.2 Equilibrium and Fluctuations
4.3 Monte Carlo Simulations – The Principles
4.4 The Metropolis Monte Carlo Algorithm
4.5 The Ising Model
4.6 The Ising Model and the Monte Carlo Algorithm
4.7 Rationale
4.8 Monte Carlo Simulations – In Action
4.9 Order Parameter – Magnetisation
4.10 Temperature Scan (Annealing and Quenching)
4.11 Mathematical Appendix
4.12 Exercises

Index


INTRODUCTION TO THE MODULE

Version date: Monday, 7 July 2008

Introduction

In this module you will be taught techniques employed in computational science and, in particular, computational physics, using the FORTRAN 95 language. The unit consists of four 'computer experiments', each of which must be completed within a specified time. Each 'computer experiment' is described in a separate chapter of this manual and contains a series of exercises for you to complete. You should work alone and should keep a detailed record of the work in a logbook that must be submitted for assessment at the end of each experiment.

For three of the four projects, there will be a supervised laboratory session each week and further unsupervised sessions.

There will be no supervised sessions for the fourth project: it will be an exercise in independent learning.

The Salford Fortran 95 compiler will be used in this course, and this may be started by double-clicking on the Plato icon under the "Programming - Salford Fortran 95" program group. An "FTN95 Help" facility is supplied with this software and can be found within the same program group. This help facility includes details of the standard FORTRAN 95 commands as well as the compiler-specific graphics features. All of the programs needed during this course may be downloaded from the Part 3 - PH3707 Computational Physics page on the department's web-server:

(www.rdg.ac.uk/physicsnet).

Web Site Information

In addition to all the chapters and programs required for this course, there are links to other useful sites, including a description of programming style; a description of computational science in general and FORTRAN programming in particular; a tutorial for FORTRAN 90; and a description of object-oriented programming.

References

Programming in Fortran 90/95
By J S Morgan and J L Schonfelder.
Published by N.A. Software, 2002. 316 pages. $15.
This can be ordered online from www.fortran.com.

Fortran 95 Handbook
By Jeanne Adams, Walt Brainerd, Jeanne Martin, Brian Smith, and Jerry Wagener.
Published by MIT Press, 1997. 710 pages. $55.00.

Fortran 95/2003 Explained
By Michael Metcalf and John Reid.
Published by Oxford University Press. ISBN 0-19-852693-8. $35.00.

Fortran 90 for Scientists and Engineers
By Brian Hahn.
Published by Arnold. £19.99.

Fortran 90/95 for Scientists and Engineers
By Stephen J. Chapman.
Published by McGraw-Hill, 1998. ISBN 0-07-011938-4. $68.00.

Numerical Recipes in Fortran 90
By William Press, Saul Teukolsky, William Vetterling, and Brian Flannery.
Published by Cambridge University Press, 1996. 550 pages. $49.00.

A more complete list of reference texts is held at http://www.fortran.com/fortran/Books/books.html where books can be ordered directly.

Logbooks

You must keep an accurate and complete record of the work in a logbook. The logbook is what is assessed. In the logbook you should answer all the questions asked in the text, include copies of the programs with explanations of how they work, and record details of the program inputs and of the output created by the programs. On completion of each chapter you should write a brief summary of what has been achieved in the project.

I am often asked what should go in the logbook. It is difficult to give a precise answer to this since each computer experiment is different, but as a guide it should have:

• a complete record of your computer experiment;
• sufficient detail for someone else to understand what you were doing and why; and
• sufficient detail for someone else to be able to repeat the computer experiment.

In particular it should also include:

• program listings;
• a description of any changes you made to the programs (if you have made a succession of changes, you should not reproduce the complete program each time but simply specify what changes you have made and why you have made them);
• the data entered and results obtained from your programs (or, if there are a very large number of results, a summary of these results);
• a comment on each set of results (you will lose marks if you simply record masses of computer output without commenting on its significance);
• descriptions of any new techniques you have learned.

It worked!

A statement in your logbook of the form "I tried the program and it worked" will not be looked on favourably. You should record:

• What inputs did you provide to the program?
• What output did you obtain?
• What evidence do you have that this output is correct?

Program Testing

It is always safe to assume that a program you have written is wrong in some way. If you have made some error with the programming language then the compiler will tell you about this, although it may not always tell you precisely what is wrong!

However, it may be the case that what you have told the computer to do is not really what you intended. Computers have no intelligence: you need to be very precise in your instructions.

Every program should be tested. You do this by giving the program input data such that you know, or can easily calculate, the result. If possible your check should be via a different method from the one the computer is using: the method you have told the computer to use may be incorrect.

Only when the program has passed your tests should you start to use it to give you new results.

Module Assessment

The module comprises 4 computational projects. A record of each project must be kept in a logbook and the logbook submitted for assessment by the specified deadline. The final assessment will be based on the best 3 project marks.

Each project will be marked out of 20. A detailed marking scheme is given, in this manual, for each project.

Guidelines on the assessment are given below.

Late Submissions

If a project is submitted up to one calendar week after the original deadline, 2 marks will be deducted.

I am prepared to mark any project which is more than one week late, providing it is submitted by the last day of the Spring Term. However, for such a project 4 marks will be deducted.


Extensions & Extenuating Circumstances

If you have a valid reason for not being able to complete a project by the specified deadline then you should:

• inform the lecturer as soon as possible; and
• complete an Extension of Deadlines Form, which can be obtained from the School Office (Physics 210).

If you believe that there has been some non-academic problem that you encountered during the module (medical or family problems, for example) you should complete an Extenuating Circumstances Form, again obtainable from the School Office, so that the Director of Teaching & Learning and the Examiners can consider this.

Feedback

In addition to comments written in your logbook by the assessor during marking, feedback on the projects will be provided by a class discussion and, when appropriate, by individual discussion with the lecturer.

There will be no feedback, apart from the mark, on late submissions.

Assessment Guidelines

This module is assessed solely by continuous assessment. Each project (which corresponds to one chapter of the manual) is assessed as follows.

The depth of understanding and level of achievement will be assessed taking into account the following three categories:

1. Completion of the project (0 – 17 marks)
• Completeness of the record
• Description and justification of all actions
• Following the documented instructions, answering questions and performing derivations etc.

2. Summary (0 – 3 marks)
• Review of objectives
• Summary of achievements
• Retrospective comments on the effectiveness of the exercises

3. Bonus for extra work (0 – 2 marks)
• Any exceptional computational work beyond the requirements stated
• An exceptional depth of analysis
• An outstanding physical insight

I should point out that bonus marks are only rarely awarded and, in any case, the total mark of a project cannot exceed 20. Unfinished work will be marked pro rata, unless there are extenuating circumstances.

If you are unable to attend the laboratory session you should inform the lecturer, Dr R J Stewart, by telephone (0118 378 8536) or by email ([email protected]).

Plagiarism

In any learning process you should make use of whatever resources are available. In this course, I hope, the lecturer and postgraduate assistant should be valuable resources. Your fellow students may also be useful resources and I encourage you to discuss the projects with them.

However, at the end of these discussions you should then write your own program (or program modification). It is completely unacceptable to copy someone else's program (or results). This is a form of cheating and will be dealt with as such.

I should point out that such copying (even if slightly disguised) is very easy to detect.

Time Management

There is ample time for you to complete each of these projects providing you manage your time sensibly.

You should aim to spend at least six hours per week on each project: that is, a total of about 18 hours per project.

Each project is divided into a number of 'exercises' and each of these exercises is allocated a mark. This mark is approximately proportional to the time that I expect you to spend on the exercise. You should therefore be able to allocate your time accordingly: it is not sensible to spend half of the available time on an exercise which is worth only a quarter of the total marks.

Each of the projects below is given a deadline. You should not take this as a target. You should setyourself a target well before the deadline.

Projects

• Random Processes [Deadline: noon, Wednesday Week 3, Autumn Term]
• Analysis of Waveforms [Deadline: noon, Wednesday Week 6, Autumn Term]
• Eigenvalues & Eigenvectors [Deadline: noon, Wednesday Week 9, Autumn Term]
• Monte Carlo Simulation [Deadline: noon, Wednesday Week 2, Spring Term]

The 'Monte Carlo Simulation' project will be unsupervised and is an exercise in independent learning. Nevertheless, the lecturer and demonstrator will be available for consultation.


Chapter 1

RANDOM PROCESSES

Version Date: Thursday, 30 August, 2007 at 11:21

1.1 Objectives

This chapter provides an introduction to random processes in physics. On completion, you will be familiar with the random number generator in FORTRAN 95, and will have gained experience in using it in two applications. You will also be ready to tackle later chapters that develop computational studies of random systems.

1.2 Introduction

It is convenient to describe models of physical processes as either deterministic or random. An obvious example of the former is planetary motion and its description via Newton's equations of motion: given the positions and momenta of the particles in the system at time t, we can predict the values of all the positions and momenta at a later time t′. Even the solution of Schrödinger's equation is in a sense deterministic: we can predict the time evolution of the wave function in a deterministic way even though the wave function itself carries a much more restricted amount of information than a classical picture provides.

The obvious example in physics of a theory based on randomness at the microscopic level is statistical mechanics. There may well be deterministic processes taking place, but they do not concern us because we can only observe the net effect of a vast number of such processes, and this is much more amenable to description on a statistical basis. But a statistical basis does not only concern statistical mechanical (and thermodynamic) systems. Many physical systems are inherently disordered and defy a simple deterministic analysis: the passage of a liquid through a porous membrane (oil through shale, for example), electrical breakdown in dielectrics, the intertwining of polymer chains, and galaxy formation are some examples of random processes.

Statistical mechanics uses concepts like entropy, partition functions, and Boltzmann, Fermi or Bose statistics to describe the net effect of random processes. In computer simulations, one actually models the microscopic random processes themselves. To model randomness, we need to have something to provide the element of chance, like a coin to toss or a dice to throw. Of course, in computing, we do not use coins or dice but rather random number generators to inject the statistics of chance, and we start by seeing how they work.



1.3 Random Number Generators

1.3.1 The basic algorithm

Random number generators are more precisely known as pseudo-random number generators. The sequence of numbers they produce can be predicted if the algorithm is known, but there should be no correlations between the numbers along the sequence. In practice, the sequence will repeat itself, but the period of the cycle should be longer than the equivalent scale of the process one wants to simulate. Random number generators are based on the algorithm

x_{n+1} = (a x_n + c) mod m

where x_{n+1} and x_n are respectively the (n+1)th and nth numbers in the sequence. The starting integer in the sequence, x_0, is called the seed. All the quantities in the expression, including the numbers themselves and the constants a, c and m, are integers. y = z mod m means that y is the remainder left after dividing z by m. For example, 27 mod 5 equals 2.
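The course's programs are written in Fortran 95, but the recurrence itself is language-neutral. As an illustrative sketch only, here it is in a few lines of Python, run on the small worked case quoted in Exercise 1 (the function name `lcg` is our own, not part of any library):

```python
# Illustrative sketch of the linear congruential recurrence
# x_{n+1} = (a*x_n + c) mod m (in Python; the course uses Fortran 95).
def lcg(a, c, m, seed, n):
    """Return the first n numbers generated from the given seed."""
    xs = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        xs.append(x)
    return xs

# The tiny worked case from Exercise 1: a=5, c=2, m=8, x0=1.
print(lcg(5, 2, 8, 1, 8))  # [7, 5, 3, 1, 7, 5, 3, 1] -- period 4
```

Note how small m makes the period painfully short; this is why practical generators use a huge modulus.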

⇒ Now go to Exercise 1

You will see, from the Exercise, that with a judicious choice of parameters we can produce a pseudo-random sequence of integers i such that 0 ≤ i < m (note that 0 appears in the sequence but m does not). Usually real random numbers r between 0 and 1 are required. This can be done using the algorithm with a final step r = REAL(i)/REAL(m), such that 0 ≤ r < 1.

In practice the numbers m, a and c are chosen to give a large range of integers and a large period (before the sequence starts to repeat). A random number generator that I most often use has

m = 2^48    a = 33952834046453

1.4 Intrinsic Subroutine

All computing systems have a built-in random number generator (usually based on more than one of the basic generators just studied) that has been optimized, and one would normally use that. For FORTRAN 95, the simplest use is as follows:

CALL RANDOM_NUMBER(r)

r is the generated random number (with 0 ≤ r < 1), and r must be declared as REAL (or, better still, REAL (KIND=...)). You can also declare r as a one-dimensional real array; in this case the subroutine returns a (different) random number in each element of the array.

If you repeat a run of a program containing this call, the same set of random numbers is produced. This is not what is usually required, and it can be overcome by seeding the random number generator at the start of the program using the system clock.

The standard random number generator has a seed which is an array of several integers. The 'several' can be different for different compilers. For Salford 'several' is actually 1; in the Lahey Fortran compiler it is 4; and in the free GNU gfortran compiler it is 8.

It is good practice to write the code so that it works on any compiler. The following example shows how to do this.

INTEGER :: j, k, count
REAL (KIND=DP) :: r
INTEGER, ALLOCATABLE :: seed(:)

CALL RANDOM_SEED(SIZE=k)
ALLOCATE(seed(k))
DO j = 1, k
   CALL SYSTEM_CLOCK(count)
   seed(j) = count + j*j
END DO

CALL RANDOM_SEED(PUT = seed)

WRITE(*, *) 'Random number seed = ', seed

CALL RANDOM_NUMBER(r)

The first CALL of RANDOM_SEED is to find out the size of the seed array and puts this size into the integer k. The second call puts the correct seed array into the random number generator. Note that count (the current value of the system clock) is an integer and that DP denotes what kind of real numbers you are using.

It is sensible to write out the seed value just in case you do want to rerun the program with exactly the same set of random numbers.

1.4.1 Different number ranges

Often we want to generate random numbers over a range different from 0 to 1. This is straightforward. If we want a real random number x between −2 and +2, for example, this can be obtained from the random number generator output r using the statement x = -2.0 + 4.0*r. Generally, use the expression x = a + (b - a)*r if the required range is a to b.

We have to be a little more careful with integers. Suppose we were simulating the throw of a dice (and needed to generate a random integer from the set 1, 2, ..., 6). If d is the required random integer, we can use the statement d = INT(6*r) + 1. Calculation of INT(6*r) gives one of the integers 0, 1, ..., 5.
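A quick sketch of both transformations, in Python for illustration only (the seed value and variable names are arbitrary choices, not part of the course code):

```python
import random

rng = random.Random(12345)   # fixed seed so the run is repeatable

# Real number in a general range a to b: x = a + (b - a)*r
r = rng.random()             # 0 <= r < 1
x = -2.0 + 4.0 * r           # now -2 <= x < 2

# Integer 'dice throw': INT(6*r) gives one of 0..5, so d is one of 1..6
d = int(6 * rng.random()) + 1
print(x, d)
```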

1.4.2 Testing your random generator

The random generator is supposed to generate a random number r which is uniformly distributed in the range 0 ≤ r < 1. If this is so then we can easily calculate the following average mathematically:

〈r^p〉 = 1/(p + 1)

where p is any positive integer. We could then check the average computationally by generating N random numbers r_1 ... r_N and then forming the average

〈r^p〉 = (1/N) Σ_{k=1}^{N} r_k^p

When programming this it is not necessary to define an array for the random numbers!

For large N the two averages – theoretical and computational – should be very nearly the same, and the difference between them should reduce as N gets larger.

Note that for a large value of N it is not sensible to use the standard REAL variables, since these give only about 1 part in 10^6 accuracy.
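As an illustrative check of the running-sum idea (sketched in Python rather than the course's Fortran; the function name and seed are our own choices), the estimate does converge on 1/(p+1) without any array of random numbers being stored:

```python
import random

def moment_test(p, n, seed=2024):
    """Monte Carlo estimate of <r^p> from n uniform deviates,
    accumulated as a running sum (no array needed)."""
    rng = random.Random(seed)
    return sum(rng.random() ** p for _ in range(n)) / n

# For p = 2 the exact value is 1/(p+1) = 1/3; the estimate should
# approach it as N grows (the error falls roughly as 1/sqrt(N)).
print(moment_test(2, 100_000), 1.0 / 3.0)
```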

Page 13: ysics Computational Physics 1 ysics€¦ ·  · 2008-07-07By J S Morgan and J L Schonfelder ... By Stephen J. Chapman McGraw-Hill, 1998 ISBN 0-07-011938-4 ... familiar with the random

1.5 Monte Carlo Integration 13

A slightly more complicated test checks whether there is any correlation between neighbouring random numbers generated.

Suppose we have two independent random variables r and s; then mathematically we have

〈r^p s^q〉 = 1/[(p + 1)(q + 1)]

We then calculate this average using our computer random number generator. Generate N pairs of random numbers (r_1, s_1) ... (r_N, s_N) and then form the average

〈r^p s^q〉 = (1/N²) Σ_{j=1}^{N} Σ_{k=1}^{N} r_j^p s_k^q

Again, when programming this it should not be necessary to define an array for the random numbers!

There are more sophisticated tests, but these simple tests should show if something is wrong with the generator.
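A Python sketch of this test (illustrative only; function name and seed are our own). The key observation is that the double sum over j and k factorises into the product of two single averages, which is why neither an array nor an N × N loop is needed:

```python
import random

def correlation_test(p, q, n, seed=99):
    """Estimate <r^p s^q> from n pairs of uniform deviates.
    (1/N^2) * sum_j sum_k r_j^p * s_k^q = (mean of r^p) * (mean of s^q),
    so two running sums suffice."""
    rng = random.Random(seed)
    sum_rp = 0.0
    sum_sq = 0.0
    for _ in range(n):
        sum_rp += rng.random() ** p
        sum_sq += rng.random() ** q
    return (sum_rp / n) * (sum_sq / n)

# Exact value is 1/((p+1)(q+1)); for p=1, q=2 that is 1/6.
print(correlation_test(1, 2, 100_000), 1.0 / 6.0)
```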

⇒ Now go to Exercise 2

1.5 Monte Carlo Integration

There is a wide range of calculations in Computational Physics that rely on the use of a random number generator. Generally they are known as Monte Carlo techniques. We will start by looking at the Monte Carlo method applied to integration. There are two approaches to choose from: the 'hit and miss' method and the 'sampling' method.

1.5.1 Hit and Miss Method

Here is an analogy to help explain the method. An experimental way to measure the area of the treble twenty on a dart board is to throw the darts at the board at random; if N_60 is the number hitting the treble twenty and N is the total number of darts thrown, then the area A_60 of the treble twenty is given by the equation A_60 = A × N_60/N, where A is the total area of the board.
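To make the dartboard idea concrete without giving away Exercise 3, here is a hit-and-miss sketch in Python for a region whose area is known exactly: the triangle below the diagonal of the unit square (the function name and seed are our own choices):

```python
import random

def hit_and_miss(n, seed=7):
    """Throw n random 'darts' at the unit square and count those landing
    in the triangle y < x; then A_region = A_square * N_hits / N."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if y < x:
            hits += 1
    return hits / n   # the unit square has area 1

print(hit_and_miss(100_000))  # exact area of the triangle is 0.5
```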

⇒ Now go to Exercises 3(a) and (b)

1.5.2 Sampling Method

This method can be summarised by the equation

∫_a^b f(x) dx ≈ ((b − a)/N) Σ_{i=1}^{N} f(x_i)

Choose N random numbers x_i in the range a < x_i < b, calculate f(x_i) for each, take the average, and then multiply by the integration range (b − a). It is similar to Simpson's rule, but in that case the values of x_i are evenly distributed. We have seen already how to generate random numbers in the range a to b.
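An illustrative Python sketch of the sampling method applied to an integral with a known answer (the test integral, function name and seed are our own choices, not taken from the course programs):

```python
import math
import random

def mc_integrate(f, a, b, n, seed=11):
    """Sampling-method estimate of the integral of f over [a, b]:
    average f at n random points in the range, then multiply by (b - a)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(a + (b - a) * rng.random())
    return (b - a) * total / n

# Known test integral: exp(x) from 0 to 2 has the exact value e^2 - 1.
print(mc_integrate(math.exp, 0.0, 2.0, 100_000), math.exp(2.0) - 1.0)
```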

The extension to a multidimensional integral is easy. For example, in two dimensions, write


∫_a^b dx ∫_c^d dy f(x, y) ≈ ((b − a)(d − c)/N) Σ_{i=1}^{N} f(x_i, y_i)

In this case choose N pairs of random numbers (x_i, y_i) and go through a similar procedure. The extension to an arbitrary number of dimensions is straightforward.

Generally, from the point of view of accuracy, it is better to use 'conventional' methods like Simpson's rule for integrals in low dimensions and Monte Carlo methods in high dimensions. If you have a body with an awkward shape, however, Monte Carlo methods are useful even at low dimensionality.

Why are Monte Carlo methods better in higher dimensions? If we pick n random numbers for our integration the error is proportional to n^(−1/2), independent of the number of dimensions. For the trapezoidal approximation and for Simpson's rule the errors are proportional to n^(−1/d) and n^(−2/d) respectively. Here n is the number of strips the integration range is divided into, and d is the dimensionality. Increasing n is more effective at reducing errors in Monte Carlo than in the trapezoidal rule for d > 2. In comparison with Simpson's rule, Monte Carlo wins out for d > 4.

⇒ Now go to Exercises 3(c) and (d)

1.6 Nuclear Decay Chains

Now let us consider a nuclear decay sequence with a number of different daughter products:

234_92 U  --(250,000 yr)-->  230_90 Th + α
230_90 Th --(80,000 yr)-->   226_88 Ra + α
226_88 Ra --(1,620 yr)-->    222_86 Rn + α
222_86 Rn --(fast)-->        206_82 Pb

The times shown are half-lives in years. 234_92 U is produced from 238_92 U by decay with a half-life of 4.5×10^9 years, which is so long compared with the half-lives in the above chain that we can ignore this factor in the change of the number of 234_92 U nuclei. The decay of 222_86 Rn to 206_82 Pb (stable) is by a chain of disintegrations and takes place very rapidly on the time-scale being considered here; T_1/2 for 222_86 Rn is about 4 days. Therefore we can in effect consider the decay of Ra to be directly to the stable Pb isotope.

The half-life of a nucleus is defined as the time taken for a half of the nuclei in a large population to decay. For an individual nucleus, however, decay really is a random process, with only the probability of decay defined. We can either use an analytic method or a computer simulation to describe the system.

1.6.1 Analytic approach

In the analytic approach we deal with the statistical averages of the numbers of each type of particle. These average quantities are not, of course, integers.

Let N1(t) be the statistical average of the number of 234_92 U nuclei at time t,
N2(t) be the statistical average of the number of 230_90 Th nuclei at time t,
N3(t) be the statistical average of the number of 226_88 Ra nuclei at time t,
and N4(t) be the statistical average of the number of 206_82 Pb nuclei at time t.


Note that because we are considering the statistical averages these numbers are no longer integers.

We take the initial condition (at t = 0) to be N1 = N; N2 = N3 = N4 = 0.

At a general time t, N1(t) + N2(t) + N3(t) + N4(t) = N, i.e. the total number of nuclei is conserved.

The rate equations for the chain of decays are:

dN1/dt = −λ1 N1
dN2/dt = λ1 N1 − λ2 N2
dN3/dt = λ2 N2 − λ3 N3
dN4/dt = λ3 N3

where the decay constant λ = ln 2 / T_1/2 = 0.693 / T_1/2.

The analytic solution to these equations with the given initial conditions is

N1(t) = N exp(−λ1 t)

N2(t) = [λ1 N / (λ2 − λ1)] [exp(−λ1 t) − exp(−λ2 t)]

N3(t) = λ1 λ2 N [ exp(−λ1 t) / ((λ1 − λ2)(λ1 − λ3))
                + exp(−λ2 t) / ((λ2 − λ3)(λ2 − λ1))
                + exp(−λ3 t) / ((λ3 − λ1)(λ3 − λ2)) ]

N4(t) = N − [N1(t) + N2(t) + N3(t)]
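As an illustrative cross-check of these formulae (in Python; the program supplied with the project does the equivalent calculation in Fortran), one can code them directly and verify that the total number of nuclei is conserved. The half-lives are those given in the decay chain above; everything else here is our own naming:

```python
import math

LN2 = math.log(2.0)
lam1 = LN2 / 250_000.0   # U-234  -> Th-230
lam2 = LN2 / 80_000.0    # Th-230 -> Ra-226
lam3 = LN2 / 1_620.0     # Ra-226 -> Pb-206 (via short-lived Rn-222)

def populations(N, t):
    """Statistical-average populations N1..N4 at time t (in years)."""
    n1 = N * math.exp(-lam1 * t)
    n2 = (lam1 * N / (lam2 - lam1)) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    n3 = lam1 * lam2 * N * (
        math.exp(-lam1 * t) / ((lam1 - lam2) * (lam1 - lam3))
        + math.exp(-lam2 * t) / ((lam2 - lam3) * (lam2 - lam1))
        + math.exp(-lam3 * t) / ((lam3 - lam1) * (lam3 - lam2)))
    n4 = N - (n1 + n2 + n3)
    return n1, n2, n3, n4

n1, n2, n3, n4 = populations(10_000.0, 100_000.0)
print(n1 + n2 + n3 + n4)   # the total is conserved (equals N, up to rounding)
```

A further quick sanity check: after one half-life of U-234 (250,000 years), N1 should be exactly half its starting value.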

1.6.2 Computer simulation

We are now going to take a different approach in which we try to simulate the random physical processes. We can regard a nucleus as existing in one of 4 states, defined as above:

State 1 ≡ U; state 2 ≡ Th; state 3 ≡ Ra; state 4 ≡ Pb

The probability P_i that a nucleus in state i decays within a time interval ∆t to the next state (i+1) is given by P_i = λ_i ∆t. When it reaches state 4 there is no further decay, of course. We suppose that ∆t is small enough for the possibility of double decays, such as

U → Th → Ra

within ∆t, to be negligible.

We can simulate the decay process for one nucleus by choosing a random number in the range 0 to 1 and comparing this with P_i. If the random number is less than P_i then the decay takes place; if not, the nucleus remains in the same state. This trick is very common in computer simulations of random processes.

We start with N nuclei in state 1, and simulate the decay of each one of the nuclei in a succession of time intervals ∆t.

In such a computer simulation the numbers of each type of particle are of course integers, as they are in the real physical case.
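The single-step decay test described above can be sketched in Python (illustrative only: the project's actual code is the supplied Fortran program, and the decay rates below are artificial values chosen to make the statistical check quick, not the physical λ's):

```python
import random

def step(states, lam, dt, rng):
    """One time interval: each nucleus in state i < 4 decays to state
    i+1 with probability P_i = lam[i-1]*dt, decided by comparing a
    random number in [0, 1) with P_i, exactly as described above."""
    for j, s in enumerate(states):
        if s < 4 and rng.random() < lam[s - 1] * dt:
            states[j] = s + 1

rng = random.Random(42)
lam = [0.01, 0.0, 0.0]     # artificial rates: only state 1 can decay here
states = [1] * 100_000
step(states, lam, 1.0, rng)
print(states.count(2) / len(states))  # should be close to P_1 = 0.01
```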

⇒ Now go to Exercise 4


1.7 Exercises

1. [2 Marks]

Check on paper that you understand what is happening. Take a = 5, c = 2, m = 8 and x_0 = 1, and generate the first few random numbers. You should find they start as follows: 1, 7, 5, 3, and then repeat. Now go through the same process with the following sets of numbers: (a = 3, c = 4, m = 8, x_0 = 1) and then (a = 5, c = 1, m = 8, x_0 = 1). Record your sequences of numbers.

2. [4 Marks]

(a) Write a program which tests the built-in random number generator by calculating the averages 〈r^p〉 for a few values of p in the range 1 to 12. Do this for the cases where the number of random numbers generated, N, is 100,000, 1,000,000 and 10,000,000. Record your test results.

(b) Modify the program to calculate, using the built-in random number generator, the averages 〈r^p s^q〉 for values of p and q in the range 1 to 12. Use a double summation to do this:

〈r^p s^q〉 = (1/N²) Σ_{j=1}^{N} Σ_{k=1}^{N} r_j^p s_k^q

where N is 1000 and 10,000. Record your test results.

3. [5 Marks]

(a) We will use the random number generator to calculate π. Consider a circle of radius 1 unit and centre at the origin (0,0); it just fits in a square with corners at the points (−1,−1), (−1,+1), (+1,−1), (+1,+1). Now generate a pair of random numbers (x, y), each between −1 and +1. They are inside the square. What is the condition for them to be inside the circle as well? Repeat this till you have generated a total of N points. If N_c points were also inside the circle, the ratio of the area of the circle to that of the square is N_c/N; but the area of the square is 4, so the area of the circle is given by 4N_c/N. Since we know the answer is πr² and r = 1, we have a way of determining π. Write a program to do this. You should aim for this kind of accuracy: π = 3.14 ± 0.01.

(b) Although we expressed it as an evaluation of π, the last exercise was really a calculation of the area of a circle. We know that the area of a circle of radius r is given by A = πr². The analogous quantity in 3 dimensions is the volume of a sphere, V = (4/3)πr³. What is the equivalent quantity in 4 dimensions? Presumably the 'hypervolume' of a 4D 'hypersphere' of radius r is given by H = Cr⁴. Make (very minor) modifications to your circle program to calculate C. [The exact result is C = π²/2.]

(c). The mean energy (kinetic) of an atom of a Boltzmann gas of non-interacting atoms moving in 1 dimension is given by

E = I₁/I₂ where I₁ = ∫_{−∞}^{∞} (p²/2m) exp(−p²/2mkT) dp and I₂ = ∫_{−∞}^{∞} exp(−p²/2mkT) dp

p is the momentum. With a change of variables, α = p/√(2mkT), this can be rewritten as

E = kT (J₁/J₂) where J₁ = ∫₀^∞ α² exp(−α²) dα and J₂ = ∫₀^∞ exp(−α²) dα

Write a program that employs the sampling method to calculate J₁ and J₂ and thus the coefficient J₁/J₂. You will have to cut off the upper limit of the integrals at some value b. Increase the values of b and N until you have a result that is accurate to 2 decimal places, but estimate a reasonable value of b by hand before you start computing (for what value of α does the integrand become small?). Is your result what you expect?
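A sketch of the sampling method in Python (the cut-off b and sample count n here are illustrative choices; the exact ratio is J₁/J₂ = 1/2, i.e. E = kT/2, the equipartition result):

```python
import math
import random

def sample_ratio(b, n, seed=1):
    """Estimate J1 = ∫[0,b] a^2 exp(-a^2) da and J2 = ∫[0,b] exp(-a^2) da
    by uniform sampling: ∫[0,b] g(a) da ≈ b * <g(a)> for a ~ U(0, b).
    The common factor b/n cancels in the ratio J1/J2."""
    rng = random.Random(seed)
    j1 = j2 = 0.0
    for _ in range(n):
        a = rng.uniform(0.0, b)
        w = math.exp(-a * a)
        j1 += a * a * w
        j2 += w
    return j1 / j2

print(sample_ratio(b=5.0, n=200_000))  # close to 0.5
```

By α = 5 the integrand exp(−α²) is ~10⁻¹¹, so b = 5 is already a generous cut-off.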


(d). Now calculate the mean energy (translational kinetic energy + vibrational potential energy) of an ideal gas of diatomic molecules confined to 1 dimension. It is a 3-variable problem - the momenta p₁ and p₂ of the 2 atoms of the molecule and their displacement x from the equilibrium separation. Besides the kinetic energy terms p₁²/2m and p₂²/2m we have a potential term µω²x²/2, where µ is the reduced mass and ω is the natural frequency. If we again do a change of variables, β = x√(µω²/2kT), we can write an expression for E as in the previous case, but now

J₁ = ∫₀^∞ dα₁ ∫₀^∞ dα₂ ∫₀^∞ dβ (α₁² + α₂² + β²) exp[−(α₁² + α₂² + β²)]

J₂ = ∫₀^∞ dα₁ ∫₀^∞ dα₂ ∫₀^∞ dβ exp[−(α₁² + α₂² + β²)]

What is the coefficient of kT in this case? What result did you expect to obtain?

4. [ 6 Marks]

Compile and execute the program Nuclear Decay.f95. This does the analytic part of the calculation and prints out the results every 50,000 years up to 2,000,000 years. It asks you to input the number of nuclei N, and a time step dt. Use 1.0 for dt for the moment (dt is not relevant until you write the Monte Carlo code). Parts of the program relevant only for the Monte Carlo part are indicated by comments. When you have understood the program you can go on to the Monte Carlo part.

An array nr(1:4) has been set up for you which holds the number of nuclei of each type. nr(2:4) have been initialized to zero and nr(1) to the total number of nuclei. That is, initially each nucleus is of type 1.

Your task is to write a subroutine for the MC calculation and incorporate it in the program. The program is set up so that your subroutine can be called each time step to calculate the updated values of nr. In each time step you have to consider each nucleus and determine whether it decays using the criterion given in the text. Notice that the sum

∑_{j=1}^{4} nr(j)

should remain constant.

First choose a reasonable time step and justify your choice. Now run your program with various numbers of atoms in your sample, say 100, 1000, 10000 and 100000. What sort of value of N (number in the program) do you need to use for the analytic and MC results to be similar? Display and comment on the results that you obtain.
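For a single decay step the usual criterion is: a nucleus decays during dt if a uniform deviate is less than λ·dt. A Python sketch of this idea for one species only (λ, dt and the comparison with N·exp(−λt) are illustrative assumptions, not the course program, which tracks a four-member chain):

```python
import math
import random

def mc_decay(n0, lam, dt, steps, seed=1):
    """Monte Carlo decay of a single species: in each time step each
    surviving nucleus decays with probability lam*dt."""
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        n -= sum(1 for _ in range(n) if rng.random() < lam * dt)
    return n

n0, lam, dt, steps = 10_000, 0.01, 1.0, 100
mc = mc_decay(n0, lam, dt, steps)
analytic = n0 * math.exp(-lam * dt * steps)
print(mc, analytic)  # the two agree to a few per cent for this n0
```

The relative scatter of the MC result about the analytic curve shrinks as 1/√n0, which is exactly the size-effect question the exercise is probing.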


Chapter 2

ANALYSIS OF WAVEFORMS

Version Date: Wednesday, 5 September, 2007 at 13:05

2.1 Objectives

In this chapter the main elements of Fourier Analysis are reviewed and the methods are applied to some basic wave forms. On completion of this chapter students will have utilized simple numerical techniques for performing Fourier Analysis; studied the convergence of Fourier series and how this is affected by discontinuities in the function; and investigated the best choice of Fourier coefficients in finite series. Fourier techniques will be applied to the solution of the diffusion equation.

2.2 Fourier Analysis

Fourier's theorem states that any well-behaved (or physical) periodic wave form f(x) with period L may be expressed as the series

f(x) = ∑_{r=−∞}^{∞} F_r exp(i k_r x)    (2.1)

where the wave-vector k_r is given by

k_r = 2πr/L    (2.2)

The complex Fourier coefficients F_r are given by

F_r = (1/L) ∫ f(x) exp(−i k_r x) dx    (2.3)

Here the integrals are over any complete period (e.g. x = 0 to x = L or x = −L/2 to x = L/2).

The rth component of the sum in (2.1) corresponds to a harmonic wave with spatial frequency r/L and hence a wavelength L/r.

Normally, in Physics, f(x) is a real function. In this case the Fourier coefficients have the following symmetry property:

F_r* = F_{−r}    (2.4)

An important exception to this is the case of quantum mechanics, where the wavefunctions are normally complex. The above symmetry does not apply to this case.


In numerical work we can only deal with series with a finite number of terms. Suppose the finite series, f^[M](x), is used to approximate the function f(x):

f^[M](x) = ∑_{r=−M}^{M} F_r exp(i k_r x)    (2.5)

The mean-square-error involved in making this approximation is by definition

E[M] = (1/L) ∫₀^L [f(x) − f^[M](x)]² dx    (2.6)

and this can be written (after quite a bit of manipulation!) as

E[M] = ∑_{r=−∞}^{+∞} |F_r|² − ∑_{r=−M}^{M} |F_r|²    (2.7)

This latter form shows that the mean square error decreases monotonically as a function of M (i.e. E[M+1] ≤ E[M]): f^[M](x) converges to f(x) as more terms are added to the series. It can also be shown that the mean-square-error E[M] (for any fixed M) is minimized by using the Fourier coefficients as calculated through equation (2.3).

2.2.1 Why bother?

It is not entirely clear from the above equations what has been gained by expressing the function f(x) as a series (2.1) or (2.5).

In Physics we often need to evaluate the derivative (or second derivative etc.) of a function. If the function has been expressed as a Fourier series then this is a trivial operation. For example the Jth derivative of f(x), using (2.1), is

d^J f(x)/dx^J = ∑_{r=−∞}^{∞} (i k_r)^J F_r exp(i k_r x)    (2.8)

We can similarly write expressions for integrals of f(x).

2.2.2 Aperiodic functions

The instances, in Physics, of genuinely periodic functions are exceedingly rare. However there are still many applications of the above theory.

Suppose a function f(x) either exists only in a finite range 0 ≤ x ≤ L or is known only in this finite range. We can construct a periodic function simply by making a periodic repetition of the finite-range function with a repeat period L, and apply the theory to this (artificial) periodic function. It is in this form that Fourier Analysis is normally applied in Physics.

2.3 The Numerical Methods

We now consider how to calculate the coefficients Fr.


Figure 2.1. Trapezoidal Integration

In general we can approximate an integral by means of the trapezoidal rule. The essence of this is shown in figure (2.1).

The integrand is divided into N equal intervals of size L/N and the integrand is approximated by a sequence of straight-line segments. The function f(x) is evaluated at the positions x_s = sL/N. Note that N needs to be quite large to ensure accuracy. In the program you will use, N has been taken to be 1000.

The result of this procedure applied to (2.3) is

F_r = (1/N) [ (f(0) + f(L))/2 + ∑_{s=1}^{N−1} f(sL/N) exp(−i2πrs/N) ]    (2.9)

We can simplify this by defining an array

f_s = f(sL/N),  s = 1, …, N−1;   f₀ = (f(0) + f(L))/2    (2.10)

Using this array gives the result for the coefficients as

F_r = (1/N) ∑_{s=0}^{N−1} f_s exp(−i2πrs/N)    (2.11)

This approximate procedure for the integrals predicts coefficients for |r| < N/2. It fails to correctly predict coefficients for |r| ≥ N/2; that is, it fails to predict the Fourier components with spatial frequency greater than N/2L and wavelengths less than 2L/N.

In fact, if the function f(x) has no spatial frequency greater than N/2L, the Sampling Theorem tells us that the expression (2.11) is exact.
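Equation (2.11) is just a discrete Fourier transform, and the exactness claim is easy to check in Python for a band-limited test function: for f(x) = cos(2πx/L), equation (2.3) gives F₁ = F₋₁ = 1/2 and all other coefficients zero, and (2.11) reproduces this to rounding error (an illustrative check, not the course program):

```python
import cmath
import math

N = 1000
# f(x) = cos(2πx/L) sampled at x_s = sL/N
f = [math.cos(2 * math.pi * s / N) for s in range(N)]

def fourier_coeff(f, r):
    """F_r = (1/N) * sum_s f_s exp(-i 2π r s / N), equation (2.11)."""
    n = len(f)
    return sum(fs * cmath.exp(-2j * cmath.pi * r * s / n)
               for s, fs in enumerate(f)) / n

print(abs(fourier_coeff(f, 1) - 0.5))   # essentially zero
print(abs(fourier_coeff(f, 2)))         # essentially zero
```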

There is a technical problem in evaluating the exponentials in (2.11) or (2.5). In Fortran we can only evaluate exp(iα) if α is not too large: in practice less than about 70. (You might think this is


a deficiency of Fortran, but you need to be aware that no other language has complex exponentials built in.) We can surmount this difficulty by evaluating the exponentials in the form:

exp(−i2πrs/N) = exp(−i2π MOD(rs, N)/N)    (2.12)

That is, we have replaced rs by MOD(rs, N). This is a Fortran function which gives the remainder when rs is divided by N. This procedure works because (rs − MOD(rs, N))/N is an integer and because

exp (−i2πm) = 1 (2.13)

for any integer m.

The argument of the exponential on the right-hand side of (2.12) is then a small quantity (in fact, less in magnitude than 2π).
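The same argument-reduction trick can be demonstrated in Python (where cmath.exp copes with large arguments anyway, so this check is purely illustrative; Python's % operator plays the role of Fortran's MOD):

```python
import cmath

N, r, s = 1000, 499, 987
# direct evaluation: the phase is about 3000 radians here
direct = cmath.exp(-2j * cmath.pi * r * s / N)
# reduced argument, as in equation (2.12): magnitude below 2π
reduced = cmath.exp(-2j * cmath.pi * ((r * s) % N) / N)
print(abs(direct - reduced))  # essentially zero
```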

If we only need to evaluate the original function f at the discrete points x_s = sL/N then the formula (2.5) simplifies to

f_s = ∑_{r=−M}^{M} F_r exp(i2πrs/N)    (2.14)

which is very similar to the expression for the Fourier coefficients (2.11), and the exponential is calculated in the same way.
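Going forward with (2.11) and back with (2.14), taking M = (N−1)/2 for odd N, should reproduce the sampled function exactly up to rounding. A round-trip check in Python (helper names are illustrative):

```python
import cmath

def dft_coeffs(f):
    """Forward transform, equation (2.11): F_r for r = -(N-1)//2 .. (N-1)//2."""
    n = len(f)
    m = (n - 1) // 2
    return {r: sum(fs * cmath.exp(-2j * cmath.pi * r * s / n)
                   for s, fs in enumerate(f)) / n
            for r in range(-m, m + 1)}

def reconstruct(F, n):
    """Inverse sum, equation (2.14): f_s = sum_r F_r exp(+i 2π r s / N)."""
    return [sum(Fr * cmath.exp(2j * cmath.pi * r * s / n)
                for r, Fr in F.items()).real for s in range(n)]

f = [s / 20.0 for s in range(21)]             # a sampled ramp, N = 21 (odd)
g = reconstruct(dft_coeffs(f), len(f))
print(max(abs(a - b) for a, b in zip(f, g)))  # round-trip error: tiny
```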

2.4 Diffusion Equation

I now look at the use of Fourier Analysis in solving differential equations.

In thermal equilibrium (and in the absence of external forces) gases and liquids have uniform densities. If a gas or liquid is prepared with a high density in a localized region then this excess density will quickly spread out until uniformity is restored: this process is called diffusion. If f(x, t) denotes the deviation of the density from equilibrium then the evolution of this quantity with time is determined by the diffusion equation:

∂f(x, t)/∂t = D ∂²f(x, t)/∂x²    (2.15)

D is the diffusion constant.

Now suppose that I use equation (2.5) for f(x, t) but where the coefficients are functions of time:

f(x, t) = ∑_{r=−M}^{M} F_r(t) exp(i k_r x)    (2.16)

Inserting this expression in the diffusion equation gives the following result for the coefficients:

F_r(t) = exp(−D k_r² t) F_r(0)    (2.17)

This can be used to determine the density at any later time.


If I assume this fluid is contained in a region 0 ≤ x ≤ L, then k_r = 2πr/L as in (2.2). The above equation then becomes

F_r(t) = exp(−4π²r²Dt/L²) F_r(0)    (2.18)

The complete prescription for solving the diffusion problem is:

• Fourier analyse the initial (t = 0) density function. That is, calculate the Fourier coefficients F_r(0) using equation (2.11);
• Evaluate the coefficients at time t using equation (2.17);
• Calculate the density at time t by inserting these Fourier coefficients into (2.5) with M = (N − 1)/2.
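The three steps of this prescription can be sketched in Python for a Gaussian initial density (a pure-Python DFT; the grid size N, and setting L = D = 1, are illustrative choices, not the course program):

```python
import cmath
import math

N, L, D = 201, 1.0, 1.0
sigma = L / 80.0
f0 = [math.exp(-(s * L / N - L / 2) ** 2 / (2 * sigma ** 2)) for s in range(N)]

M = (N - 1) // 2
# step 1: Fourier analyse the initial density, equation (2.11)
F0 = {r: sum(fs * cmath.exp(-2j * cmath.pi * r * s / N)
             for s, fs in enumerate(f0)) / N for r in range(-M, M + 1)}
# step 2: evolve the coefficients to time t, equation (2.18)
t = L * L / (2000 * D)
Ft = {r: F0[r] * cmath.exp(-4 * cmath.pi ** 2 * r * r * D * t / L ** 2)
      for r in F0}
# step 3: re-synthesise the density, equation (2.14)
ft = [sum(Fr * cmath.exp(2j * cmath.pi * r * s / N)
          for r, Fr in Ft.items()).real for s in range(N)]

# diffusion conserves the total amount (the r = 0 coefficient is
# unchanged) while lowering and broadening the peak
print(max(f0), max(ft))
```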

Now attempt the exercises.

2.4.1 Exercises

1. [10 Marks]

(a) Obtain a copy of the program Fourier.f95 and run it. The program sets up various waveforms and plots them in the graphics window. Make sure you understand how it works, and record its main features in your log-book. Remember that the graph-plotting routine also writes to the clipboard, so that copies of the graphs can be pasted into other documents if required.

(b) Write a subroutine to calculate the coefficients F_r for |r| ≤ (N − 1)/2 using equation (2.11). Write the results to a data file. Compare your numerical results with the analytical solutions to equation (2.3) for one of the waveforms.

(c) Check that the symmetry property (2.4) is satisfied by the Fourier coefficients. What symmetry do the square and triangular waveforms have, and how is this related to the values of their Fourier coefficients? What about the ramp wave?

(d) Use your calculated values of F_r to reconstruct the approximation f^[M] in equation (2.14), and plot this curve alongside the original waveform. The value of M should be ≤ (N − 1)/2. Note that the subroutine 'plot graph' provided will plot all the curves stored in f(npoints, ngraphs).

(e) Investigate the convergence of the series (that is, gradually increase M and observe what happens); does that of the triangular wave converge faster than those of the square or ramp waves? Also investigate the Gibbs overshoot phenomenon observed in Fourier series for discontinuous wave forms.


(f) Calculate the mean-square-error in equation (2.7). Since N is large, a good approximation to the right-hand side is

∑_{r=−(N−1)/2}^{(N−1)/2} |F_r|² − ∑_{r=−M}^{M} |F_r|²

(g) Show that E[M ] monotonically decreases with M .

Remember to keep an accurate record of your work in your log-book.

2. [7 Marks]

Diffusion: Assume that in a diffusion problem the initial density is

f(x, 0) = exp(−(x − L/2)²/2σ²)

where σ = L/80. Determine the densities at times t = L²/2000D, 2L²/2000D, 4L²/2000D. Notice that at t = 0 the required function is exactly that described as "gaussian" in the program.

Remember that you must finish your work on this chapter by writing a summary in your laboratory notebooks. This should summarize in about 300 words what you have learnt and whether the objectives of this chapter have been met.


Chapter 3

EIGENVALUES AND EIGENVECTORS OF MATRICES

Version Date: Thursday, 30 August, 2007 at 10:20

3.1 Objectives

In this chapter you will investigate eigenvalue equations and eigenvalue packages for solving such equations. You will be provided with a subroutine for finding all the eigenvalues and eigenvectors of a real symmetric matrix and also an eigenvalue package which finds a few eigenvalues and eigenvectors.

In the first set of exercises you will check the results produced by the package against direct calculations (for small matrices).

In the second part of the project you will use the packages to investigate eigenvalues and eigenvectors of the Schrodinger equation.

3.2 Eigenvalues and eigenvectors of real symmetric or hermitian matrices

An eigenvalue equation is

A v^(k) = λ^(k) v^(k)    (3.1)

where A is an n × n matrix; v^(k) is the kth eigenvector and is an n × 1 column vector; and λ^(k) is the kth eigenvalue.

The complete expression of the above equation is

∑_{s=1}^{n} A_rs v_s^(k) = λ^(k) v_r^(k),   r = 1, …, n    (3.2)

I shall only consider matrices that are real and symmetric or complex and hermitian:

In the first case the matrices have the symmetry

A_rs = A_sr

and in the second case

A_rs = A*_sr


These are the types of matrices required for most physical problems.

Such matrices have very special properties:

• all the eigenvalues are real;

• (if the matrices are n × n) there are n eigenvectors that are mutually orthogonal and these form a complete set.

Mutually orthogonal means that the eigenvectors satisfy

∑_{r=1}^{n} v_r^(k)* v_r^(k′) = 0,   k ≠ k′    (3.3)

It is conventional to normalize the eigenvectors, that is, choose them to satisfy

∑_{r=1}^{n} v_r^(k)* v_r^(k) = 1    (3.4)

The completeness of the eigenvectors means that any column vector can be constructed as a sum of the eigenvectors (with appropriate coefficients).

That is, any column vector b can be written as

b_r = ∑_{k=1}^{n} α^(k) v_r^(k)    (3.5)

The required set of coefficients α^(k) can be evaluated by

α^(k) = ∑_{r=1}^{n} v_r^(k)* b_r    (3.6)

In order to emphasize that the properties of hermitian (or real symmetric) matrices are not shared by more general matrices, consider a space of 2 × 2 matrices and 2 × 1 column vectors. A very simple, but not symmetric, 2 × 2 matrix is:

( 0  1 )
( 0  0 )

This matrix has eigenvalue λ = 0 and has only one eigenvector,

( 1 )
( 0 )

Clearly this one eigenvector cannot be used to generate an arbitrary 2 × 1 column vector.

You should verify the above properties of this asymmetric real matrix.
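A quick numerical verification of the claim (illustrative Python, not part of the course code):

```python
# A = [[0, 1], [0, 0]]: the characteristic polynomial is λ² = 0, so the
# only eigenvalue is λ = 0, and (1, 0) is the only eigenvector up to scale.
A = [[0.0, 1.0], [0.0, 0.0]]

def matvec(M, v):
    """Plain matrix-vector product."""
    return [sum(M[r][s] * v[s] for s in range(len(v))) for r in range(len(M))]

v = [1.0, 0.0]
print(matvec(A, v))  # [0.0, 0.0] = 0 * v, so v is an eigenvector
# any candidate (x, y) must satisfy A(x, y) = (y, 0) = λ(x, y);
# λ = 0 then forces y = 0, so no second independent eigenvector exists
```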

The eigenvalue equation (3.1) can be written, entirely in matrix form, as

A V = V D    (3.7)

where V is an n × n matrix which is made up from the n column vectors v^(1), v^(2), …, v^(n), and D is a diagonal matrix with diagonal entries λ^(1), λ^(2), …, λ^(n).


If the eigenvectors are normalized according to (3.4) then the eigenvector matrix V satisfies the equations

V† V = I = V V†    (3.8)

where I denotes the n × n unit matrix and V† denotes the hermitian conjugate of V:

V†_rs = V*_sr    (3.9)

For real matrices this is just the transpose.

This property (3.8) of the eigenvector matrices can be used to represent the original matrix A as

A = V D V†    (3.10)

This is very useful in numerical computations because it provides a very severe test of the numerical method. That is, use the numerical procedure to calculate the eigenvalue and eigenvector matrices D and V; then use (3.10) to reconstruct A. If this reconstruction does not agree with the original matrix (to within some required accuracy) then the procedure is at fault.
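For a small symmetric matrix the reconstruction test can be done by hand or in a few lines. The 2 × 2 matrix with 0 on the diagonal and 1 off it has eigenvalues ±1 with normalized eigenvectors (1, ±1)/√2, so (3.10) is easy to verify (illustrative Python):

```python
import math

# V holds the normalized eigenvectors of [[0,1],[1,0]] as columns;
# D holds the eigenvalues +1 and -1 on the diagonal
s = 1.0 / math.sqrt(2.0)
V = [[s, s], [s, -s]]
D = [[1.0, 0.0], [0.0, -1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):  # hermitian conjugate = transpose for real matrices
    return [list(row) for row in zip(*A)]

A = matmul(matmul(V, D), transpose(V))  # equation (3.10)
print(A)  # reconstructs [[0, 1], [1, 0]] up to rounding
```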

Numerical techniques for finding eigenvalues and eigenvectors of complex hermitian matrices are a straightforward development of those used for real symmetric matrices.

3.3 A Matrix Eigenvalue Package

The program SymEigTest, which is on the Physics intranet, contains the module SymmetricEigensystems. This is specifically for real, symmetric matrices. However the techniques used could easily be converted to deal with complex hermitian matrices.

The SymmetricEigensystems module contains several subroutines that you will use:

• SymEig: This finds all the eigenvalues and (optionally) all the eigenvectors of an n × n real, symmetric matrix.

• TrdQRL: This finds all the eigenvalues and (optionally) all the eigenvectors of a tridiagonal n × n real, symmetric matrix.

• TrdEig: This finds the eigenvalues in a certain, specified, range and (optionally) all corresponding eigenvectors for a tridiagonal n × n real, symmetric matrix.

A tridiagonal symmetric matrix has only the main diagonal and the two adjacent diagonals with non-zero elements:

( d1 n1 0  0  0  0  … )
( n1 d2 n2 0  0  0  … )
( 0  n2 d3 n3 0  0  … )
( 0  0  n3 d4 n4 0  … )
( 0  0  0  n4 d5 n5 … )
( 0  0  0  0  n5 d6 … )
( ⋮  ⋮  ⋮  ⋮  ⋮  ⋮  ⋱ )

In the case of a tridiagonal matrix there is no essential difference in speed in using TrdQRL rather than SymEig. However TrdQRL only requires the two non-zero leading diagonals to be stored. This is a great advantage for large matrices.


For example, if we are finding the eigenvalues of a 10000 × 10000 tridiagonal matrix then SymEig requires 10⁸ matrix elements to be stored (even if most of them are zero) whereas TrdQRL requires only 19999 elements to be stored.

Again, in the case of large matrices, both TrdQRL and SymEig provide too much information: for a 10000 × 10000 matrix there are 100,000,000 elements in the eigenvector matrix.

TrdEig allows the user to investigate a few eigenvectors.

These three subroutines are used as follows:

• CALL SymEig(A, eval, evec)
A is the n × n matrix; eval is an n × 1 array containing the eigenvalues in ascending order; evec is the n × n eigenvector matrix.

• CALL TrdQRL(Ad, An, eval, evec)
Ad is an n × 1 array containing the main diagonal elements of the matrix; An is an (n − 1) × 1 array containing the leading upper diagonal elements of the matrix; eval is an n × 1 array containing the eigenvalues; evec is the n × n eigenvector matrix.
Ad(j) = A(j, j); An(j) = A(j, j + 1) = A(j + 1, j)
Note: Some care is required in using this subroutine because on input evec needs to be set equal to the unit matrix.

• CALL TrdEig(Ad, An, Lower, Upper, NumEig, eval, evec)
Ad is an n × 1 array containing the main diagonal elements of the matrix; An is an (n − 1) × 1 array containing the leading upper diagonal elements of the matrix; Lower and Upper define the range in which the eigenvalues are required; NumEig is the number of eigenvalues (and eigenvectors) found in this range; eval is an n × 1 array containing the eigenvalues in ascending order; evec is the n × NumEig eigenvector matrix.
If there are more than a certain number of eigenvalues in the range, the program will complain. The maximum number is set to 30 in the subroutine. If you really need more than this then change the variable max num eig.

One of the advantages of Fortran lies in the availability of good quality, well-tested 'libraries' of subroutines. Most physicists make use of such subroutines and incorporate these in their programs. I produced the SymmetricEigensystems module by modifying (to my own needs) subroutines from the well-known library package LAPACK (Linear Algebra Package).

3.4 Schrodinger Equation

The time-independent Schrodinger Equation for a one-dimensional system is

−(ℏ²/2m) d²Ψ(x)/dx² + V(x) Ψ(x) = E Ψ(x)    (3.11)

This is an eigenvalue equation, with E as the eigenvalue and Ψ(x) as the eigenfunction. I want to show how this differential eigenvalue equation can be expressed as a matrix eigenvalue equation.

The second derivative in (3.11) can be calculated (approximately) as

d²Ψ(x)/dx² ≈ [Ψ(x + ∆x) + Ψ(x − ∆x) − 2Ψ(x)] / ∆x²    (3.12)

where ∆x is some suitably small distance.

I now define

Ψ_n = Ψ(n∆x);   V_n = V(n∆x)    (3.13)


where n is an integer.

The differential equation can then be written as

−(ℏ²/2m∆x²)(Ψ_{n+1} + Ψ_{n−1} − 2Ψ_n) + V_n Ψ_n = E Ψ_n    (3.14)

I can further simplify this by using dimensionless variables. I choose to measure distances in terms of some basic length a and energies in terms of the basic energy ℏ²/2ma².

In terms of these dimensionless variables, this equation becomes

−(1/∆x²)(Ψ_{n+1} + Ψ_{n−1} − 2Ψ_n) + V_n Ψ_n = E Ψ_n    (3.15)

If the potential V is reasonably well-behaved the wavefunctions go to zero as x → ±∞. Hence there must be a choice for n (remember x = n∆x) beyond which the eigenfunction is (approximately) zero. I call this value N.

Then I can define a column vector Ψ with elements Ψ_{−N}, …, Ψ_N and a tridiagonal matrix H.

The diagonal elements of H are

H_{n,n} = 2/∆x² + V_n    (3.16)

and the non-zero off-diagonal elements are

H_{n,n+1} = H_{n+1,n} = −1/∆x²    (3.17)

In terms of this (2N + 1)× (2N + 1) matrix the eigenvalue equation is

HΨ = EΨ (3.18)

3.4.1 Harmonic Oscillator

The potential for a harmonic oscillator can be written as

V(x) = (1/2) mω²x²

and if I choose the unit of distance to be

a = √(ℏ/mω)

then the unit of energy is ℏω/2 and the potential is

V_n = n²∆x²

Page 29: ysics Computational Physics 1 ysics€¦ ·  · 2008-07-07By J S Morgan and J L Schonfelder ... By Stephen J. Chapman McGraw-Hill, 1998 ISBN 0-07-011938-4 ... familiar with the random

3.4 Schrodinger Equation 29

This is a useful test problem because the exact results are known: the exact eigenvalues are 1, 3, 5, 7, ….

In order to solve numerically to a reasonable accuracy we need to choose

• ∆x ≪ 1
• N∆x ≫ 1

In order to calculate the first few eigenvalues, ∆x = 0.001 and N = 10000 should be sufficient to give fairly accurate results. However you may need to experiment with these values.
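The course uses the Fortran routine TrdEig for this; as an illustration of the same idea, the sketch below builds the tridiagonal H of (3.16)-(3.17) for the harmonic oscillator and finds its lowest eigenvalues by Sturm-sequence bisection, a standard technique for tridiagonal symmetric matrices (the ∆x and N used here are deliberately coarser than suggested above, to keep the pure-Python run fast):

```python
def count_below(x, d, e):
    """Number of eigenvalues of the symmetric tridiagonal matrix
    (diagonal d, off-diagonal e) less than x, via the Sturm-sequence
    sign count from the LDL^T factorization of (T - x I)."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - ((e[i - 1] ** 2 / q) if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(k, d, e, tol=1e-8):
    """k-th smallest eigenvalue (k = 0, 1, ...) by bisection on the count,
    starting from a Gershgorin bound on the spectrum."""
    radius = max(abs(v) for v in d) + 2 * max(abs(v) for v in e)
    lo, hi = -radius, radius
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(mid, d, e) >= k + 1:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# harmonic oscillator in dimensionless units: H_nn = 2/dx^2 + (n dx)^2,
# off-diagonal -1/dx^2, with n = -N .. N
dx, N = 0.05, 120
d = [2.0 / dx ** 2 + (n * dx) ** 2 for n in range(-N, N + 1)]
e = [-1.0 / dx ** 2] * (2 * N)

print([round(kth_eigenvalue(k, d, e), 3) for k in range(4)])
# close to the exact values 1, 3, 5, 7
```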

3.4.2 Spherically Symmetric 3D Systems

In a spherically symmetric 3D system the wavefunction in spherical polar coordinates can be written as

(1/r) Ψ(r) Y_lm(θ, φ)    (3.19)

where the Y_lm are spherical harmonic functions, and l and m are the angular momentum quantum numbers.

The radial function Ψ(r) satisfies the same equation as (3.11) except that a term

ℏ²l(l + 1)/2mr²

needs to be added to the potential; and, of course, r is positive.

For the case of the 3D harmonic oscillator, the eigenvalue equation can still be written as (3.18) except that the matrix indices now run from 1 → N and the potential is

V_n = n²∆x² + l(l + 1)/n²∆x²

In order to calculate the first few eigenvalues, for small values of l (l = 0, 1, 2), ∆x = 0.001 and N = 10000 should be sufficient.

In the case of the hydrogen atom, if we use the Bohr radius as the unit of distance, the corresponding potential is

V_n = −2/n∆x + l(l + 1)/n²∆x²

Exercises

1. [3 marks]

(a) Solve the eigenvalue equations for the 2 × 2 asymmetric matrix

( 0  1 )
( 0  0 )

Show the steps in the process in detail. Show that it does not have a complete set of eigenvectors.

(b) Repeat the calculation for the symmetric matrix

( 0  1 )
( 1  0 )


Discuss the differences.

(c) Determine the eigenvalue and eigenvector matrices D and V for the matrix in (b) and show that equation (3.10) is satisfied.

2. [4 marks]

(a) Download the program SymEigTest.F95; make a working copy of this with a different name.

The program constructs a random n × n symmetric matrix, with n initially set to 10. It then calls SymEig to find all the eigenvalues and eigenvectors. (In fact it calls this 100 times, just to make the computer time long enough to determine accurately!)

Then it uses equation (3.10) to attempt to reconstruct the original matrix. It finally calculates the rms error in the reconstructed matrix (by comparing it to the original).

Run the program with matrix sizes 10, 20, …, 100. Record the results. Deduce how the time to operate the subroutine depends on n.

(b) Modify the program so that the random symmetric matrix is now a random symmetric tridiagonal matrix. Repeat the above calculations.

(c) Next modify the program to use the TrdQRL subroutine, which is specifically intended for tridiagonal symmetric matrices. How does the performance compare with that of SymEig?

3. [4 marks]

Modify the program so as to make use of the subroutine TrdEig. Determine the lowest four eigenvalues and corresponding eigenvectors of the 1D Harmonic oscillator, and plot the eigenvectors.

4. [6 marks]

(a) Modify the program so as to be able to treat radial functions for spherically symmetric 3D systems. Test this by finding the first four eigenvalues for l = 0, l = 1 and for l = 2. Plot the lowest four eigenfunctions for l = 2.

(b) Set up the eigenvalue equations for the hydrogen atom; choose the Bohr radius to be the unit of distance. In this case suitable sizes of the parameters are: ∆x = 0.001 and N = 200000. Yes! You really are going to find eigenvalues of a 200000 × 200000 matrix.

You may need to experiment with the parameters ∆x and N.

Find the first four eigenvalues for l = 0, l = 1 and for l = 2 and then plot the lowest four eigenfunctions for l = 2.

Note: In these dimensionless units the Coulomb potential is −2/(n∆x) and the lowest eigenvalue should be −1.

Remember that you must finish your work on this chapter by writing an abstract in your laboratory notebooks. This abstract should summarise in about 300 words what you have learnt and whether the objectives of this chapter have been met.


Chapter 4

MONTE CARLO SIMULATION

Version Date: Wednesday, 1 October 2003

4.1 OBJECTIVES

The Metropolis Monte Carlo algorithm is one of the most important techniques in Computational Physics for dealing with random systems; it also provides a way of including temperature in the modelling. The main objective of this chapter is to introduce Monte Carlo methods. Inevitably modelling is done on systems with a small number of particles, whereas in real systems we are dealing with ∼10²³ particles. Fluctuations about the mean become a dominant feature in small systems and it is important that we understand such size effects. Developing a feeling for this subject is the first priority of this chapter. A simple model contains the main features.

4.2 EQUILIBRIUM AND FLUCTUATIONS

Suppose we have two boxes, one of which contains a certain number of molecules of a gas and in the other is a vacuum. If these boxes are joined then gas will flow from one to the other until equilibrium is reached (a state of uniform density).

To define fully a state of the system we have to specify the position and momentum of each molecule of the gas. Let us investigate the approach to equilibrium with a drastic simplification. We will concern ourselves only with which box a molecule is in, and ignore details about positions and velocities.

Suppose there are N molecules in total and, at each instant, N_L are in the left hand box and N_R are in the right hand one (of course, N_L + N_R = N). The table below describes the possible situations for N=6. There are N+1 states of the system, distinguished by the number of molecules in each box.

State  N_L  N_R  No. of configurations, Ω  Prob. L to R  Prob. R to L  ln Ω
1      6    0    1                         1             0             0.000
2      5    1    6                         5/6           1/6           1.792
3      4    2    15                        2/3           1/3           2.708
4      3    3    20                        1/2           1/2           2.996
5      2    4    15                        1/3           2/3           2.708
6      1    5    6                         1/6           5/6           1.792
7      0    6    1                         0             1             0.000

The number of configurations (or microstates) associated with a particular state is given by Ω = N!/(N_L! N_R!). In the above example, Ω of state 2 is 6 because any of the 6 molecules could be the one in the right hand box (molecules here are treated as classical – they are distinguishable). The


entropy S of a particular state is given by S/k_B = ln Ω.

As the system evolves, suppose one molecule moves from one box to the other in each time step. It is reasonable to take the probability that it will be a left to right move as N_L/N and for a right to left move as N_R/N. It is clear from the table that the tendency will be a move toward state 4 (the equilibrium state - the one of maximum entropy), but there will certainly be fluctuations about this position.

Even if we are in the equilibrium state, there is a chance that fluctuations could lead us in a few time steps into a state where all the molecules are back in one of the boxes. The probability (see table) that this occurs in just three time steps is (1/2)×(1/3)×(1/6) = 1/36. This is not extremely long odds, but think what the likelihood of a similar situation occurring is for larger values of N.

The program boxes.f95 simulates the above model. You can enter N and the number of time-steps; you can select to start with the particles equally distributed or all in the left hand box; you can also choose graphical output. The random number generator decides on a L to R or R to L move. Look at the program and make sure you understand what it does.
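The course program is in Fortran; a minimal Python sketch of the same dynamics (one molecule moves per step, L to R with probability N_L/N) shows the relative fluctuations shrinking roughly as 1/√N:

```python
import random

def simulate(n, steps, seed=1):
    """Two-box model: each step one molecule changes box; it is an
    L-to-R move with probability NL/N, otherwise R-to-L. Returns the
    ratio sigma / <NL> measured over the run, starting at equilibrium."""
    rng = random.Random(seed)
    nl = n // 2
    s = s2 = 0.0
    for _ in range(steps):
        if rng.random() < nl / n:
            nl -= 1
        else:
            nl += 1
        s += nl
        s2 += nl * nl
    mean = s / steps
    var = s2 / steps - mean * mean
    return var ** 0.5 / mean

print(simulate(100, 100_000), simulate(10_000, 100_000))
# the relative fluctuation is markedly smaller for the larger N
```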

⇒ Go to Exercise 1(a)
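The logic of boxes.f95 can be sketched in a few lines. Below is a Python sketch of the same model, not a transcription of the Fortran; the function name simulate_boxes is our own:

```python
import random

def simulate_boxes(n, tsteps, seed=None):
    """Simulate n molecules hopping between two boxes.

    Each time step one molecule moves; the probability of a
    left-to-right move is NL/n (and NR/n for right-to-left).
    Returns the history of NL, starting from equal filling.
    """
    rng = random.Random(seed)
    nl = n // 2                      # start with boxes equally filled
    history = [nl]
    for _ in range(tsteps):
        if rng.random() < nl / n:    # pick the mover: L -> R with prob NL/n
            nl -= 1
        else:                        # otherwise R -> L
            nl += 1
        history.append(nl)
    return history

hist = simulate_boxes(100, 5000, seed=1)
```

Note that when NL = 0 the left-to-right branch can never be taken, and when NL = n it always is, so the walk stays within the boxes.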

Now let us try to get something more quantitative about the fluctuations from the simulations. The variance σ² is defined as:

σ² = <NL²> − <NL>²

and σ provides us with a measure of the size of the fluctuations. The averages (denoted by angular brackets) are taken over the time period of the simulation. The ratio σ/<NL> is an informative way to express the behaviour.

⇒ Go to Exercise 1(b)

You will have observed in Exercise 1(a) that if N is small, then quite frequently you find all of the particles in one of the boxes; by contrast, if N is large, this dramatic departure from equilibrium is an extremely rare event. We can make this observation more quantitative. Let us assume that the probability that there are NL particles in the left-hand box is given by a normal (Gaussian) distribution with width σ:

P(NL) = (σ√(2π))⁻¹ exp[−(NL − <NL>)² / (2σ²)]

Given the values of <NL> and σ (see Appendix), we could argue that the probability of finding all or none of the particles in one of the boxes is

2√(2/(πN)) exp(−N/2)

Then, the number of times in a run that we will find all the particles in one box is the product of this probability and the number of time-steps. Note, this argument is only a rough one, but it should give an order of magnitude estimate.

⇒ Go to Exercise 1(c)
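The rough formula can be turned into numbers directly. A Python sketch (rare_event_estimate is our own name) that multiplies the Gaussian estimate by the number of time-steps:

```python
import math

def rare_event_estimate(n, tsteps):
    """Rough Gaussian estimate of how often a run of tsteps steps
    finds all n molecules in one box (or none): probability
    2*sqrt(2/(pi*n))*exp(-n/2), times the number of steps."""
    p = 2.0 * math.sqrt(2.0 / (math.pi * n)) * math.exp(-n / 2.0)
    return p, p * tsteps

p20, count20 = rare_event_estimate(20, 10_000)
p50, count50 = rare_event_estimate(50, 10_000)
```

Even for N = 50 the probability per step is already so small that the expected waiting time dwarfs any realistic run.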

4.3 MONTE CARLO SIMULATIONS – THE PRINCIPLES

We have to find a way to introduce temperature into a simulation. In statistical mechanics we can calculate a thermodynamic average of some quantity A by performing a weighted sum over all configurations (microstates) of the system

<A> = Z⁻¹ ∑_s A_s exp(−E_s/kBT)

where A_s is the value taken by A in microstate s, and Z is the partition function

Z = ∑_s exp(−E_s/kBT)

In a computer simulation, what we would like to do is perform a trajectory through phase space in such a way that a microstate s is visited with probability exp(−E_s/kBT)/Z. Averaging A throughout the trajectory will then reproduce the same <A> that we get in the statistical mechanics calculation (canonical ensemble).
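For a system small enough to enumerate every microstate, the weighted sum can be evaluated directly, which is useful for checking a simulation. A minimal Python sketch (all names are our own; the one-spin example assumes E = −SH with H = 1):

```python
import math

def canonical_average(energies, values, kT):
    """<A> = Z^-1 * sum_s A_s exp(-E_s/kT) over an explicit
    list of microstates, with Z = sum_s exp(-E_s/kT)."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)
    return sum(a * w for a, w in zip(values, weights)) / z

# Tiny example: one spin in a field, E = -S*H with H = 1, A = S.
states = [+1, -1]
energies = [-s for s in states]
m = canonical_average(energies, states, kT=1.0)
```

For this two-microstate example the sum can be done by hand and gives <S> = tanh(H/kBT), which the code reproduces.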

There is not a unique way of doing this, but one of the most widely used is the Metropolis Monte Carlo algorithm.

4.4 The Metropolis Monte Carlo Algorithm

The algorithm is best described by way of an example. Suppose we have a system of N spins (elementary magnets), each of which can point up or down. There are 2^N microstates of the system; a microstate is determined by specifying each spin (up or down) of the system. We assume that the spins interact with each other in some way so that the energy associated with any microstate is known. A trajectory through phase space governed by the algorithm is generated as follows.

(i) Choose one of the 2^N microstates in which to start.
(ii) Pick one of the spins at random (using the random number generator).
(iii) Consider the new microstate obtained by reversing the direction of the selected spin; calculate the change in energy ∆E that occurs if the system is allowed to jump to the new microstate.
(iv) If there is a decrease in energy, ∆E ≤ 0, move to the new microstate (ie flip the selected spin);
(v) If there is an increase in energy, ∆E > 0, make the move with probability exp(−∆E/kBT); ie on some occasions, when ∆E > 0, the jump is made, while on others the system remains unchanged.
(vi) Repeat the process from (ii) until enough data is collected.
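The acceptance test in steps (iv) and (v) can be sketched as a few lines of Python (metropolis_accept is our own name, not part of the course programs):

```python
import math
import random

def metropolis_accept(delta_e, kT, rng):
    """Steps (iv)-(v): accept if the energy decreases, otherwise
    accept with probability exp(-delta_e/kT)."""
    if delta_e <= 0:
        return True
    return rng.random() < math.exp(-delta_e / kT)

rng = random.Random(0)
# A decrease is always accepted; an increase only sometimes.
always = all(metropolis_accept(-1.0, 1.0, rng) for _ in range(100))
sometimes = sum(metropolis_accept(2.0, 1.0, rng) for _ in range(10_000))
```

With ∆E = 2 and kBT = 1 the acceptance probability is exp(−2) ≈ 0.135, so roughly 1350 of the 10000 trial moves above are accepted.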

The example used for illustration is a simple discrete one in which each entity (a spin) has only two possible states. We could equally well apply the principle to an assemblage of particles (in a fluid, say). In that case step (ii) would be to pick a particle at random, and step (iii) would involve calculating the change in potential energy if its position were changed randomly by a small amount.

Note that steps (iv) and (v) are what define the Metropolis algorithm. One could use alternative recipes that would still provide a valid simulation of the canonical ensemble (see Mathematical Appendix 2 for the condition that has to be fulfilled). The Metropolis method is the most widely used, however, and we will not consider other choices.

4.5 The Ising Model

The spin system used in the introduction to the Metropolis algorithm is known as the Ising model. The energy of a pair of spins is generally written as −J if they are parallel and +J if they are antiparallel. A positive J provides a simple model for ferromagnetism and a negative one for antiferromagnetism. The table summarises the situation for a single pair of spins, for which there are 2² microstates.

If we represent a spin numerically as S = +1 (up) and S = −1 (down), then the energy of the pair, S1 and S2, can be written: E = −J S1 S2.

Microstate   Spins   Energy
1            ↑↑      −J
2            ↑↓      +J
3            ↓↑      +J
4            ↓↓      −J

Although the Ising model was originally set up to study the transition from ferromagnetism to paramagnetism as temperature is increased, it has much wider application. We could, for example, use it to describe a binary alloy (made up of atomic species A and B). We use spin up to represent type A and spin down to represent type B. Then if J > 0, atoms like to have their own sort as neighbours and, if J < 0, they prefer the other type as neighbours.

4.6 The Ising Model and the Monte Carlo Algorithm

Let us fill in a little more detail about the implementation of the algorithm (see the working program ising.f95). Suppose the spins lie on a lattice (square in 2 dimensions, cubic in 3 dimensions). Each spin has 4 (in 2D) or 6 (in 3D) neighbours (the program is for 2D).

Declare an array, spin(:, :), the elements of which can take the values ±1. The arguments of the array label its position coordinates. Then initialise the array (step 1 of the MC algorithm). The 3 common choices are programmed.

Now choose a spin at random (step 2). Let us call it spin i, defined by its coordinates (x, y). The energy associated with it and its neighbours is

−∑_j J_ij S_i S_j

where j is summed over the neighbours. The change in energy (step 3) on reversing the sign of S_i is therefore

∆E = 2 ∑_j J_ij S_i S_j

If the spin flip is made in step 4, S_i → −S_i.

In the calculations, only the dimensionless ratio J/kBT is important. Usually in programming J is set equal to 1 and ‘temperature’ is a number that we can vary. If we wanted to convert the ‘temperature’ in the program to real units we would multiply it by J/kB.

Periodic boundary conditions are usually employed to reduce finite size effects - otherwise the spins on the edges of the lattice would have fewer than 4 (in 2D) neighbours. This is done by adding an extra row of spins to each edge – each spin is constrained to have the same value as the one just inside the lattice on the opposite side.
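The neighbour sum with periodic boundaries can be sketched in Python. Note this sketch uses index wrap-around (the % operator) rather than the ghost-row method described above; both give the same physics, and delta_e is our own name:

```python
def delta_e(spin, x, y, J=1.0):
    """Energy change on flipping spin[x][y] on an L x L square lattice:
    dE = 2*J*S_i*(sum of the 4 neighbour spins), periodic boundaries."""
    L = len(spin)
    s = spin[x][y]
    nn = (spin[(x + 1) % L][y] + spin[(x - 1) % L][y] +
          spin[x][(y + 1) % L] + spin[x][(y - 1) % L])
    return 2.0 * J * s * nn

# All-up lattice: flipping any spin costs 2*J*1*4 = 8J.
L = 4
spin = [[1] * L for _ in range(L)]
cost = delta_e(spin, 0, 0)
```

On the fully ordered lattice every spin agrees with its 4 neighbours, so any flip costs 8J, the maximum possible ∆E in 2D.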

4.7 Rationale

The reason for doing simulations, of course, is that we are trying to work out the behaviour of macroscopic systems containing of the order of 10^23 spins or particles, and there are only a few models for which exact statistical mechanical solutions are obtainable. Hopefully we can simulate systems with N large enough that we can deduce (or extrapolate to) what happens in the macroscopic case (even though N is many orders of magnitude smaller than 10^23).

You might well ask, if we have to be satisfied with a reasonably modest value of N, why not do an exact calculation for that size of system. Considering a spin system will provide the answer. For an exact calculation one would have to consider 2^N states and perform thermodynamic averages over them. For a Monte Carlo simulation you would consider perhaps 1000 × N Monte Carlo steps. If 1000 × N < 2^N it is more cost effective to do a Monte Carlo calculation. Check at what value of N the cross-over occurs and MC becomes more efficient (you will find it is between 13 and 14 - really very small). Even for a fairly modest N, the value of 2^N rapidly becomes too large for an exact calculation.
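The cross-over can be found with a two-line loop; the sketch below assumes 1000 Monte Carlo steps per spin, as in the text (crossover_n is our own name):

```python
def crossover_n(steps_per_spin=1000):
    """Smallest N for which a Monte Carlo run of steps_per_spin*N
    steps is cheaper than enumerating all 2**N microstates."""
    n = 1
    while steps_per_spin * n >= 2 ** n:
        n += 1
    return n

n_cross = crossover_n()
```

At N = 13 exact enumeration needs 8192 evaluations against 13000 MC steps, but at N = 14 the count of microstates (16384) has already overtaken the MC cost, so the cross-over is at N = 14.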

A Monte Carlo simulation employs the principle of importance sampling. At low temperatures, the low energy states dominate in a thermodynamic average - and the phase space excursion is primarily through these microstates. Indeed in a real macroscopic system the high energy states will rarely get visited at low temperatures - perhaps on a time scale of the order of the age of the Universe - or longer! At very high temperatures, on the contrary, all microstates are more or less equally accessible - but a fairly coarse-grained average will do - as long as representative microstates are visited according to their relative density.

The fact that simulations are done on finite systems has to be borne in mind. There are ways of extrapolating to large systems from a series of simulations on small systems over a range of different sizes. Even if we do not want to go to such extra sophistication, we generally do have a knowledge of what the effects of finite size are; we have seen one example of this already in the calculation of σ/<NL> in the previous section.

4.8 MONTE CARLO SIMULATIONS – IN ACTION

Because MC simulations are such an important technique, we have looked at the principles in some detail. Now let us put them into practice.

⇒ Go to Exercise 2(a)

4.9 Order Parameter - Magnetisation

The magnetisation per spin S at a particular instant in time is defined as

S = N⁻¹ ∑_r S_r

where the sum is over all N spins of the lattice. The average over the period of the simulation, <S>, is our definition of the magnetisation. We expect it to be 1 at very low temperatures and to fall as temperature is increased, going to zero when T reaches TC (the model exhibits a Curie temperature). For this reason it is a convenient measure of the order in the system – and is sometimes called the order parameter. It is also possible to study the fluctuations in the magnetisation: <S²> − <S>². The fluctuations are largest at temperatures near TC. They are also related to the susceptibility (the ease with which the system responds to a magnetic field) – see Appendix.
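In code, <S> and the fluctuation are simple averages over the sampled configurations. A Python sketch (order_parameter_stats is our own name):

```python
def order_parameter_stats(samples):
    """Mean magnetisation <S> and fluctuation <S^2> - <S>^2
    from a time series of per-spin magnetisation values."""
    n = len(samples)
    mean = sum(samples) / n
    mean_sq = sum(s * s for s in samples) / n
    return mean, mean_sq - mean * mean

mean, fluct = order_parameter_stats([1.0, 0.9, 1.0, 0.9])
```

For a perfectly ordered, non-fluctuating run (all samples equal to 1) the fluctuation term vanishes, as it should at very low temperature.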

⇒ Go to Exercise 2(b)

4.10 Temperature Scan (Annealing and Quenching)

If you want to look at several temperatures in a simulation, it is usually more efficient to do them all in a single run. The spin configuration at the end of the simulation at one temperature provides the input for the simulation at the next temperature. The procedure increases efficiency because it reduces the time to settle to equilibrium compared with making a new start at each temperature.

There is another problem, which you might have noticed, and this can be avoided by this technique. At low temperatures, you might have expected that your picture would have been all red or all blue (fully ordered). For larger samples at T = 0.5, say, it is more likely that you will see a big blue and a big red area. Early on, one part of the sample started ordering one way while the other began with the opposite orientation. Neither could win. This is what happens in reality. If you cool something very fast – ‘quenching’ – it does not have time to adjust and different parts get locked into positions that are not necessarily the most favourable energetically for the system as a whole. A slow cooling schedule (annealing) will give them time to adjust.

⇒ Go to Exercise 2(c)

4.11 MATHEMATICAL APPENDIX

4.11.1 Fluctuations

If Ω^N_NL is the number of configurations of the state with NL molecules in the left-hand box when N molecules are present, then

<NL> = Z⁻¹ ∑_{NL=0}^{N} NL Ω^N_NL

<NL²> = Z⁻¹ ∑_{NL=0}^{N} NL² Ω^N_NL

Z = ∑_{NL=0}^{N} Ω^N_NL

where

Ω^N_NL = N! / [NL! (N − NL)!]

Now Ω^N_r is also the coefficient that appears in the binomial expansion of (1 + x)^N:

S_N = (1 + x)^N = ∑_{r=0}^{N} Ω^N_r x^r

Setting x = 1, we obtain

Z = 2^N (there are 2 configurations of each particle and there are N of them)

Then, differentiating:

dS_N/dx = N(1 + x)^(N−1) = ∑_{r=0}^{N} Ω^N_r r x^(r−1)

and again setting x = 1 and comparing with the expression for <NL>:

<NL> = Z⁻¹ N 2^(N−1) = N/2

A further related differentiation yields

d/dx (x dS_N/dx) = N(1 + x)^(N−1) + N(N − 1)x(1 + x)^(N−2) = ∑_{r=0}^{N} Ω^N_r r² x^(r−1)

and following an analogous procedure for <NL²> we obtain

<NL²> = Z⁻¹ [N 2^(N−1) + N(N − 1) 2^(N−2)] = N/2 + N(N − 1)/4

From the definition of the variance, σ² = <NL²> − <NL>², we obtain

σ² = N/4 and σ = √N / 2

and so finally

σ / <NL> = 1/√N
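These closed forms can be checked numerically against the definitions. A small Python check (binomial_moments is an illustrative name):

```python
import math

def binomial_moments(n):
    """Exact <NL>, <NL^2> and sigma for the two-box model, using
    Omega = n!/(nl!(n-nl)!) and Z = 2**n, straight from the sums."""
    z = 2 ** n
    m1 = sum(nl * math.comb(n, nl) for nl in range(n + 1)) / z
    m2 = sum(nl * nl * math.comb(n, nl) for nl in range(n + 1)) / z
    sigma = math.sqrt(m2 - m1 * m1)
    return m1, m2, sigma

m1, m2, sigma = binomial_moments(100)
```

For N = 100 this gives <NL> = 50, <NL²> = 2525 and σ = 5, so σ/<NL> = 0.1 = 1/√100, in agreement with the algebra above.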

4.11.2 Metropolis Monte Carlo Algorithm and Principle of Detailed Balance

An important relation that has to be satisfied in simulations of the sort we are considering is called the Principle of Detailed Balance; it can be written as

P(i → j) exp(−E_i/kBT) = P(j → i) exp(−E_j/kBT)

where P(i → j) is the probability that the system, if it is in microstate i, will make a transition to microstate j; E_i is the energy of the system when in microstate i.

Since the probability of the system being in microstate i is given by Z⁻¹ exp(−E_i/kBT), we can see that the left hand side of the equation gives the rate at which transitions from i to j occur, and the right hand side describes the reverse process. The detailed balance equation is the condition for equilibrium.

For a computer simulation, the condition on any P(i → j) that is used is that it must satisfy the detailed balance equation. We can see that the Metropolis algorithm does. Suppose that E_i > E_j. Then according to the algorithm, P(i → j) = 1, and P(j → i) = exp[−(E_i − E_j)/kBT], which is entirely consistent with the detailed balance requirement.
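The check in the last paragraph can be carried out numerically for any pair of energies; a Python sketch (function names are our own):

```python
import math

def acceptance(e_from, e_to, kT):
    """Metropolis transition probability for a proposed move."""
    de = e_to - e_from
    return 1.0 if de <= 0 else math.exp(-de / kT)

def detailed_balance_gap(e_i, e_j, kT):
    """P(i->j)exp(-Ei/kT) - P(j->i)exp(-Ej/kT); zero if balanced."""
    lhs = acceptance(e_i, e_j, kT) * math.exp(-e_i / kT)
    rhs = acceptance(e_j, e_i, kT) * math.exp(-e_j / kT)
    return lhs - rhs

gap = detailed_balance_gap(2.0, 0.5, kT=1.0)
```

With E_i = 2 and E_j = 0.5 both sides equal exp(−2), so the gap is zero; the same holds for any other pair of energies.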

4.11.3 Susceptibility and Fluctuations

We can write the mean value of the spin at an arbitrary site i as

<S_i> = Z⁻¹ ∑ S_i exp[−β(E − S_i H)]

where

Z = ∑ exp[−β(E − S_i H)]

and β = 1/kBT. The sum is over all configurations (with energy E) of the system; we include a magnetic field H and show the contribution to the energy arising from the effect of H on the particular spin.

The susceptibility χ is defined as

χ = d<S> / dH

Differentiating Z <S_i> with respect to H:

d[Z <S_i>]/dH = <S_i> ∑ β S_i exp[−β(E − S_i H)] + Z d<S_i>/dH = ∑ β S_i² exp[−β(E − S_i H)]

Now dividing both sides by Z leads us to

d<S_i>/dH = β [<S_i²> − <S_i>²]

<S_i> is the same for all sites, so we can write

χ = [<S²> − <S>²] / kBT
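As a sanity check on the fluctuation formula, take a single free spin in a field (E = 0 apart from the −SH term), for which <S> = tanh(H/kBT) exactly. A Python sketch comparing the fluctuation expression with a direct numerical derivative (names are our own):

```python
import math

def chi_from_fluctuations(h, kT):
    """Single spin in a field: chi = (<S^2> - <S>^2)/kT,
    with <S> = tanh(h/kT) and <S^2> = 1 since S = +/-1."""
    m = math.tanh(h / kT)
    return (1.0 - m * m) / kT

def chi_numerical(h, kT, dh=1e-6):
    """Numerical derivative d<S>/dH for comparison."""
    return (math.tanh((h + dh) / kT) - math.tanh((h - dh) / kT)) / (2 * dh)

chi_f = chi_from_fluctuations(0.3, 1.5)
chi_n = chi_numerical(0.3, 1.5)
```

The two routes agree to the accuracy of the finite difference, which is the content of the fluctuation-susceptibility relation for this one-spin case.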

4.12 EXERCISES

1. [8 Marks]

(a). Run the program with the graphics option to get a feel for what happens. Compare N = 10, tstep = 200 with N = 100, tstep = 5000, for example. Choose other values as well. Describe your observations.

(b). Add some code to the program to calculate <NL>, <NL²>, σ, and the ratio σ/<NL>.

You will need to calculate values averaged over the run; do several runs for a particular set of parameters (the clock will ensure different random number sequences); start with the configuration of equally filled boxes. If you are doing the averages for very large samples or over long time periods, be careful about generating very large numbers. If you are working with integers, try to estimate how big an integer your program is likely to produce. The default ‘KIND’ for integers in FORTRAN 95 is 3, which means integers in the range −2^31 to 2^31 − 1 are allowed. If you find you are going above this range, convert to floating point for calculating averages. Compare the results from your simulations with the ‘theoretical’ values from the Mathematical Appendix. In particular, investigate how σ/<NL> depends on N. You can speed up the calculation by not displaying the graphical output.

(c). Add some more lines to the code to count the number of times in a run that all or none of the particles is in the left-hand box. Sometimes this configuration never occurs, so write some code to evaluate the largest and smallest value of NL that occurs in a run. Compare your simulation with the rough theory. You should be able to do the investigations for N up to around 20. Use the rough formula to estimate the probability of the rare event occurring for N = 50, and calculate how long you would have to sit in front of the computer to observe it.

2. [9 Marks]

(a). The program ising.f95 allows you to input lattice size, number of Monte Carlo steps per spin, temperature, and the sign of the coupling J. You can choose from three initial configurations and you can select graphical output (up and down spins are distinguished by red and blue circles). You can also request a metafile for hardcopy output. The colour output is relatively slow: use it to get a feel about what is happening, but switch it off for quantitative calculations.

First study the program and make sure that you understand what it does. Do a number of runs for different parameters and comment qualitatively on what happens. Examine values of temperature 0.5, 2.0, 3.0, and lattices of side 16 and 64. Monte Carlo steps per spin in the range 200-500 should be adequate at this stage. It may help your comments to note that the model is expected to show a Curie temperature TC = 2.27 J/kB (2.27 in the program units).

You can also try the other options.

(b). As a first step to more quantitative results, add some code to the program to monitor the magnetisation. Calculate the total spin Stot = ∑_r S_r immediately after initialisation. Then each time the Metropolis step produces a spin flip, update Stot by +2 or −2.

Now average the magnetisation over the simulation run, and also find how the magnetisation fluctuates. That is, calculate both <S> and <S²> − <S>².

If the simulation starts far from equilibrium, you should let the system settle down until it is fluctuating about its equilibrium behaviour before you start your averaging. For example, you might use 1000 MC steps per spin, but let it run for 200 steps per spin before you start averaging (ie disregard the first 20% of the run).
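The settling-down rule can be expressed as a small helper that discards the first fraction of a time series before averaging (a Python sketch; equilibrated_average is an illustrative name):

```python
def equilibrated_average(samples, discard_fraction=0.2):
    """Average a time series after throwing away the first part
    (e.g. the first 20% of the run) as equilibration."""
    start = int(len(samples) * discard_fraction)
    tail = samples[start:]
    return sum(tail) / len(tail)

# The first 20% of this series is transient; the rest sits at 1.
avg = equilibrated_average([0.0, 0.5] + [1.0] * 8)
```

A plain average of the same series would be dragged down by the transient; discarding the equilibration period removes that bias.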

The total number of MC steps (see program) is mcs = mcsps*n. It is not necessary to average the spin over all of the mcs steps. You could save time by including in your average values taken every n MC steps, for example.

Referring to ising.f95, the suggestion is to average values taken at the point indicated by a comment, and to ignore values for i < mcsps/5.

Make an evaluation of <S> over a range of temperatures (say 1.0, 2.0, 3.0 and a few around where you expect TC to be). You should do several runs at each temperature. How do the magnetisation and the fluctuations vary with temperature?

(c) Rather than inputting temperature for each run, add a temperature loop so that your program performs an annealing schedule. The configuration you have at the end of one temperature loop is the starting configuration for the next. Just restart your averaging process. Do your annealing schedule over 10 to 15 temperatures starting at 4.0 and going down to 0.5. Make your temperature steps closer around TC.
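The annealing loop can be sketched as follows (a Python sketch; the particular schedule and names are illustrative, and mc_run stands in for your Metropolis routine at one temperature):

```python
def annealing_schedule():
    """A cooling schedule from 4.0 down to 0.5 (12 temperatures),
    with closer spacing around the expected TC of about 2.27."""
    return [4.0, 3.5, 3.0, 2.6, 2.4, 2.3, 2.27, 2.2, 2.1, 1.5, 1.0, 0.5]

def anneal(mc_run, config):
    """Visit each temperature in turn; the configuration at the end of
    one temperature is the starting configuration for the next."""
    results = []
    for t in annealing_schedule():
        config, avg_s = mc_run(config, t)   # restart averaging at each T
        results.append((t, avg_s))
    return results

# Dummy 'simulation' standing in for the real MC run at one temperature.
demo = anneal(lambda cfg, t: (cfg, 1.0 / t), config=None)
```

The key point is that config is threaded through the loop, so each temperature inherits the equilibrated configuration of the previous one instead of starting from scratch.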

The first thing you should observe is uniform magnetisation (all red or blue) in the low temperature regime.

Obtain data for <S> and χ over the temperature range and plot it as a graph. Make this calculation as accurate as possible. You should be able to do a 64 × 64 lattice with 2000 MC steps per spin in a reasonable time (if you don’t display graphical output). Do several runs. Perhaps the PC will be fast enough for you to do runs for larger samples or for more MC steps.
