
A Course on Quantum Techniques for Stochastic Mechanics

John C. Baez¹,² and Jacob D. Biamonte²,³

¹ Department of Mathematics, University of California
Riverside, CA 92521, USA

² Centre for Quantum Technologies, National University of Singapore
Singapore 117543

³ ISI Foundation, Via Alassio 11/c
10126 Torino, Italy

email: [email protected], [email protected]

    Abstract

Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, there is a widely used and successful theory of chemical reaction networks, which describes the interactions of molecules in a stochastic rather than quantum way. Computer science and population biology use the same ideas under a different name: stochastic Petri nets. But if we look at these theories from the perspective of quantum theory, they turn out to involve creation and annihilation operators, coherent states and other well-known ideas, but in a context where probabilities replace amplitudes. We explain this connection as part of a detailed analogy between quantum mechanics and stochastic mechanics. We use this analogy to present new proofs of two major results in the theory of chemical reaction networks: the deficiency zero theorem and the Anderson-Craciun-Kurtz theorem. We also study the overlap of quantum mechanics and stochastic mechanics, which involves Hamiltonians that can generate either unitary or stochastic time evolution. These Hamiltonians are called Dirichlet forms, and they arise naturally from electrical circuits made only of resistors.


Foreword

This course is about a curious relation between two ways of describing situations that change randomly with the passage of time. The old way is probability theory and the new way is quantum theory.

Quantum theory is based, not on probabilities, but on amplitudes. We can use amplitudes to compute probabilities. However, the relation between them is nonlinear: we take the absolute value of an amplitude and square it to get a probability. It thus seems odd to treat amplitudes as directly analogous to probabilities. Nonetheless, if we do this, some good things happen. In particular, we can take techniques devised in quantum theory and apply them to probability theory. This gives new insights into old problems.

There is, in fact, a subject eager to be born, which is mathematically very much like quantum mechanics, but which features probabilities in the same equations where quantum mechanics features amplitudes. We call this subject stochastic mechanics.

    Plan of the course

In Section 1 we introduce the basic object of study here: a stochastic Petri net. A stochastic Petri net describes in a very general way how collections of things of different kinds can randomly interact and turn into other things. If we consider large numbers of things, we obtain a simplified deterministic model called the rate equation, discussed in Section 2. More fundamental, however, is the master equation, introduced in Section 3. This describes how the probability of having various numbers of things of various kinds changes with time.

In Section 4 we consider a very simple stochastic Petri net and notice that in this case, we can solve the master equation using techniques taken from quantum mechanics. In Section 5 we sketch how to generalize this: for any stochastic Petri net, we can write down an operator called a Hamiltonian built from creation and annihilation operators, which describes the rate of change of the probability of having various numbers of things. In Section 6 we illustrate this with an example taken from population biology. In this example the rate equation is just the logistic equation, one of the simplest models in population biology. The master equation describes reproduction and competition of organisms in a stochastic way.

In Section 7 we sketch how time evolution as described by the master equation can be written as a sum over Feynman diagrams. We do not develop this in detail, but illustrate it with a predator-prey model from population biology. In the process, we give a slicker way of writing down the Hamiltonian for any stochastic Petri net.

In Section 8 we enter into a main theme of this course: the study of equilibrium solutions of the master and rate equations. We present the Anderson-Craciun-Kurtz theorem, which shows how to get equilibrium solutions of the master equation from equilibrium solutions of the rate equation, at least if a certain technical condition holds. Brendan Fong has translated Anderson, Craciun and Kurtz's original proof into the language of annihilation and creation operators, and we give Fong's proof here. In this language, it turns out that the equilibrium solutions are mathematically just like coherent states in quantum mechanics.

In Section 9 we give an example of the Anderson-Craciun-Kurtz theorem coming from a simple reversible reaction in chemistry. This example leads to a puzzle that is resolved by discovering that the presence of conserved quantities, quantities that do not change with time, lets us construct many equilibrium solutions of the rate equation other than those given by the Anderson-Craciun-Kurtz theorem.

Conserved quantities are very important in quantum mechanics, and they are related to symmetries by a result called Noether's theorem. In Section 10 we describe a version of Noether's theorem for stochastic mechanics, which we proved with the help of Brendan Fong. This applies, not just to systems described by stochastic Petri nets, but to a much more general class of processes called Markov processes. In the analogy to quantum mechanics, Markov processes are analogous to arbitrary quantum systems whose time evolution is given by a Hamiltonian. Stochastic Petri nets are analogous to a special case of these: the case where the Hamiltonian is built from annihilation and creation operators. In Section 11 we state the analogy between quantum mechanics and stochastic mechanics more precisely, and with more attention to mathematical rigor. This allows us to set the quantum and stochastic versions of Noether's theorem side by side and compare them in Section 12.

In Section 13 we take a break from the heavy abstractions and look at a fun example from chemistry, in which a highly symmetrical molecule randomly hops between states. These states can be seen as vertices of a graph, with the transitions as edges. In this particular example we get a famous graph with 20 vertices and 30 edges, called the Desargues graph.

In Section 14 we note that the Hamiltonian in this example is a graph Laplacian, and, following a computation done by Greg Egan, we work out the eigenvectors and eigenvalues of this Hamiltonian explicitly. One reason graph Laplacians are interesting is that we can use them as Hamiltonians to describe time evolution in both stochastic and quantum mechanics. Operators with this special property are called Dirichlet operators, and we discuss them in Section 15. As we explain, they also describe electrical circuits made of resistors. Thus, in a peculiar way, the intersection of quantum mechanics and stochastic mechanics is the study of electrical circuits made of resistors!

In Section 16, we study the eigenvectors and eigenvalues of an arbitrary Dirichlet operator. We introduce a famous result called the Perron-Frobenius theorem for this purpose. However, we also see that the Perron-Frobenius theorem is important for understanding the equilibria of Markov processes. This becomes important later when we prove the deficiency zero theorem.

We introduce the deficiency zero theorem in Section 17. This result, proved by the chemists Feinberg, Horn and Jackson, gives equilibrium solutions for the rate equation for a large class of stochastic Petri nets. Moreover, these equilibria obey the extra condition that lets us apply the Anderson-Craciun-Kurtz theorem and obtain equilibrium solutions of the master equations as well. However, the deficiency zero theorem is best stated, not in terms of stochastic Petri nets, but in terms of another, equivalent, formalism: chemical reaction networks. So, we explain chemical reaction networks here, and use them heavily throughout the rest of the course. However, because they are applicable to such a large range of problems, we call them simply reaction networks. Like stochastic Petri nets, they describe how collections of things of different kinds randomly interact and turn into other things.

In Section 18 we consider a simple example of the deficiency zero theorem taken from chemistry: a diatomic gas. In Section 19 we apply the Anderson-Craciun-Kurtz theorem to the same example.

In Section 20 we begin the final phase of the course: proving the deficiency zero theorem, or at least a portion of it. In this section we discuss the concept of deficiency, which had been introduced before, but not really explained: the definition that makes the deficiency easy to compute is not the one that says what this concept really means. In Section 21 we show how to rewrite the rate equation of a stochastic Petri net, or equivalently of a reaction network, in terms of a Markov process. This is surprising because the rate equation is nonlinear, while the equation describing a Markov process is linear in the probabilities involved. The trick is to use a nonlinear operation called matrix exponentiation. In Section 22 we study equilibria for Markov processes. Then, finally, in Section 23, we use these equilibria to obtain equilibrium solutions of the rate equation, completing our treatment of the deficiency zero theorem.

    Acknowledgements

These course notes are based on a series of articles on the Azimuth blog. The original articles are available via this webpage:

Network Theory, http://math.ucr.edu/home/baez/networks/.

On the blog you can read discussion of these articles, and also make your own comments or ask questions.

We thank the readers of Azimuth for many helpful online discussions, including David Corfield, Manoj Gopalkrishnan, Greg Egan and Blake Stacey, but also many others we apologize for not listing here. We especially thank Brendan Fong for his invaluable help, in particular for giving a quantum proof of the Anderson-Craciun-Kurtz theorem and proving the stochastic version of Noether's theorem. We thank Wikimedia Commons for the use of many pictures. We thank Federica Ferraris for drawing the Baez magician, the Robin Hood rabbit and the rabbit melee, as well as helping format several of the other pictures appearing in this book.

Finally, we thank the Centre for Quantum Technologies for its hospitality and support, not only for ourselves but also for Brendan Fong.


Contents

1 Stochastic Petri nets
  1.1 The rate equation
  1.2 References
  1.3 Answer

2 The rate equation
  2.1 Rate equations: the general recipe
  2.2 The formation of water (1)
  2.3 The formation of water (2)
  2.4 The dissociation of water (1)
  2.5 The dissociation of water (2)
  2.6 The SI model
  2.7 The SIR model
  2.8 The SIRS model
  2.9 The general recipe revisited
  2.10 Answers

3 The master equation
  3.1 The master equation
  3.2 Answer

4 Probabilities vs amplitudes
  4.1 A Poisson process
  4.2 Probability theory vs quantum theory
  4.3 Stochastic vs unitary operators
  4.4 Infinitesimal stochastic versus self-adjoint operators
  4.5 The moral

5 Annihilation and creation operators
  5.1 Rabbits and quantum mechanics
  5.2 Catching rabbits
  5.3 Dying rabbits
  5.4 Breeding rabbits
  5.5 Dueling rabbits
  5.6 Brawling rabbits
  5.7 The general rule
  5.8 Kissing rabbits
  5.9 References
  5.10 Answers

6 Population biology
  6.1 Amoeba fission and competition
  6.2 The rate equation
  6.3 The master equation
  6.4 An equilibrium state
  6.5 Answers

7 Feynman diagrams
  7.1 Stochastic Petri nets revisited
  7.2 The rate equation
  7.3 The master equation
  7.4 Feynman diagrams

8 The Anderson-Craciun-Kurtz theorem
  8.1 The rate equation
  8.2 Complex balance
  8.3 The master equation
  8.4 Coherent states
  8.5 The proof
  8.6 An example
  8.7 Answer

9 A reversible reaction
  9.1 A reversible reaction
  9.2 Equilibria
  9.3 Complex balanced equilibria
  9.4 Conserved quantities

10 Noether's theorem
  10.1 Markov processes
  10.2 Noether's theorem
  10.3 Markov chains
  10.4 Answer

11 Quantum mechanics vs stochastic mechanics
  11.1 States
  11.2 Symmetries
  11.3 Quantum evolution
  11.4 Stochastic evolution
  11.5 The Hille-Yosida theorem
  11.6 Answers

12 Noether's theorem: quantum vs stochastic
  12.1 Two versions of Noether's theorem
  12.2 Proofs
  12.3 Comparison

13 Chemistry and the Desargues graph
  13.1 The ethyl cation
  13.2 The Desargues graph
  13.3 The ethyl cation, revisited
  13.4 Trigonal bipyramidal molecules
  13.5 Drawing the Desargues graph
  13.6 Desargues' theorem
  13.7 Answers

14 Graph Laplacians
  14.1 A random walk on the Desargues graph
  14.2 Graph Laplacians
  14.3 The Laplacian of the Desargues graph

15 Dirichlet operators
  15.1 Dirichlet operators
  15.2 Circuits made of resistors
  15.3 The big picture

16 Perron-Frobenius theory
  16.1 At the intersection of two theories
  16.2 Stochastic mechanics versus quantum mechanics
  16.3 From graphs to matrices
  16.4 Perron's theorem
  16.5 From matrices to graphs
  16.6 The Perron-Frobenius theorem
  16.7 Irreducible Dirichlet operators
  16.8 An example
  16.9 Problems

17 The deficiency zero theorem
  17.1 Reaction networks
  17.2 The deficiency zero theorem
  17.3 References and remarks

18 Example of the deficiency zero theorem
  18.1 Diatomic molecules
  18.2 A reaction network
  18.3 The rate equation
  18.4 Deficiency zero theorem

19 Example of the Anderson-Craciun-Kurtz theorem
  19.1 The master equation
  19.2 Equilibrium solutions
  19.3 Noether's theorem
  19.4 The Anderson-Craciun-Kurtz theorem
  19.5 Answer

20 The deficiency of a reaction network
  20.1 Reaction networks revisited
  20.2 The concept of deficiency
  20.3 How to compute the deficiency
  20.4 Examples
  20.5 Different kinds of graphs

21 Rewriting the rate equation
  21.1 The rate equation and matrix exponentiation
  21.2 References

22 Markov processes
  22.1 The Markov process of a graph with rates
  22.2 Equilibrium solutions of the master equation
  22.3 The Hamiltonian, revisited

23 Proof of the deficiency zero theorem
  23.1 Review
  23.2 The proof


    1 Stochastic Petri nets

Stochastic Petri nets are one of many different diagrammatic languages people have evolved to study complex systems. We'll see how they're used in chemistry, molecular biology, population biology and queuing theory, which is roughly the science of waiting in line. Here's an example of a Petri net taken from chemistry:

It shows some chemicals and some reactions involving these chemicals. To make it into a stochastic Petri net, we'd just label each reaction by a positive real number: the reaction rate constant, or rate constant for short.

Chemists often call different kinds of chemicals species. In general, a Petri net will have a set of species, which we'll draw as yellow circles, and a set of transitions, which we'll draw as blue rectangles. Here's a Petri net from population biology:

Now, instead of different chemicals, the species really are different species of animals! And instead of chemical reactions, the transitions are processes involving these species. This Petri net has two species: rabbit and wolf. It has three transitions:

In birth, one rabbit comes in and two go out. This is a caricature of reality: these bunnies reproduce asexually, splitting in two like amoebas.

In predation, one wolf and one rabbit come in and two wolves go out. This is a caricature of how predators need to eat prey to reproduce. Biologists might use 'biomass' to make this sort of idea more precise: a certain amount of mass will go from being rabbit to being wolf.


In death, one wolf comes in and nothing goes out. Note that we're pretending rabbits don't die unless they're eaten by wolves.

If we labelled each transition with a rate constant, we'd have a stochastic Petri net.

To make this Petri net more realistic, we'd have to make it more complicated. We're trying to explain general ideas here, not realistic models of specific situations. Nonetheless, this Petri net already leads to an interesting model of population dynamics: a special case of the so-called Lotka-Volterra predator-prey model. We'll see the details soon.

More to the point, this Petri net illustrates some possibilities that our previous example neglected. Every transition has some input species and some output species. But a species can show up more than once as the output (or input) of some transition. And as we see in death, we can have a transition with no outputs (or inputs) at all.

But let's stop beating around the bush, and give you the formal definitions. They're simple enough:

Definition 1. A Petri net consists of a set S of species and a set T of transitions, together with a function

$$i : S \times T \to \mathbb{N}$$

saying how many copies of each species shows up as input for each transition, and a function

$$o : S \times T \to \mathbb{N}$$

saying how many times it shows up as output.

Definition 2. A stochastic Petri net is a Petri net together with a function

$$r : T \to (0, \infty)$$

giving a rate constant for each transition.
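To make these definitions concrete, here is a minimal sketch of how one might encode a stochastic Petri net in code. It is our own illustration, not notation from the text: the class and field names are invented, and input/output pairs left out of the dictionaries are understood to be zero.

```python
from dataclasses import dataclass

@dataclass
class StochasticPetriNet:
    species: list       # the set S
    transitions: list   # the set T
    inputs: dict        # i : S x T -> N, as {(species, transition): count}
    outputs: dict       # o : S x T -> N, likewise
    rates: dict         # r : T -> (0, infinity)

# The rabbit-and-wolf net described above, with invented rate constants:
net = StochasticPetriNet(
    species=['rabbit', 'wolf'],
    transitions=['birth', 'predation', 'death'],
    inputs={('rabbit', 'birth'): 1,
            ('rabbit', 'predation'): 1, ('wolf', 'predation'): 1,
            ('wolf', 'death'): 1},
    outputs={('rabbit', 'birth'): 2, ('wolf', 'predation'): 2},
    rates={'birth': 1.0, 'predation': 0.2, 'death': 0.5},
)
```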

    Starting from any stochastic Petri net, we can get two things. First:

The master equation. This says how the probability that we have a given number of things of each species changes with time.

Since 'stochastic' means 'random', the master equation is what gives stochastic Petri nets their name. The master equation is the main thing we'll be talking about in future blog entries. But not right away!

Why not? In chemistry, we typically have a huge number of things of each species. For example, a gram of water contains about 3 × 10²² water molecules, and a smaller but still enormous number of hydroxide ions (OH⁻), hydronium ions (H₃O⁺), and other scarier things. These things blunder around randomly, bump into each other, and sometimes react and turn into other things. There's a stochastic Petri net describing all this, as we'll eventually see. But in this situation, we don't usually want to know the probability that there are, say, exactly 31,849,578,476,264 hydronium ions. That would be too much information! We'd be quite happy knowing the expected value of the number of hydronium ions, so we'd be delighted to have a differential equation that says how this changes with time.

And luckily, such an equation exists, and it's much simpler than the master equation. So, in this section we'll talk about:

The rate equation. This says how the expected number of things of each species changes with time.

But first, we hope you get the overall idea. The master equation is stochastic: at each time the number of things of each species is a random variable taking values in ℕ, the set of natural numbers. The rate equation is deterministic: at each time the expected number of things of each species is a non-random variable taking values in [0, ∞), the set of nonnegative real numbers. If the master equation is the true story, the rate equation is only approximately true; but the approximation becomes good in some limit where the expected value of the number of things of each species is large, and the standard deviation is comparatively small.

If you've studied physics, this should remind you of other things. The master equation should remind you of the quantum harmonic oscillator, where energy levels are discrete, and probabilities are involved. The rate equation should remind you of the classical harmonic oscillator, where energy levels are continuous, and everything is deterministic.

When we get to the original research part of our story, we'll see this analogy is fairly precise! We'll take a bunch of ideas from quantum mechanics and quantum field theory, and tweak them a bit, and show how we can use them to describe the master equation for a stochastic Petri net.

Indeed, the random processes that the master equation describes can be drawn as pictures:


This looks like a Feynman diagram, with animals instead of particles! It's pretty funny, but the resemblance is no joke: the math will back it up.

We're dying to explain all the details. But just as classical field theory is easier than quantum field theory, the rate equation is simpler than the master equation. So we should start there.

    1.1 The rate equation

If you hand over a stochastic Petri net, we can write down its rate equation. Instead of telling you the general rule, which sounds rather complicated at first, let's do an example. Take the Petri net we were just looking at:


We can make it into a stochastic Petri net by choosing a number for each transition:

the birth rate constant β

the predation rate constant γ

the death rate constant δ

Let x(t) be the number of rabbits and let y(t) be the number of wolves at time t. Then the rate equation looks like this:

$$\frac{dx}{dt} = \beta x - \gamma x y$$

$$\frac{dy}{dt} = \gamma x y - \delta y$$

It's really a system of equations, but we'll call the whole thing 'the rate equation' because later we may get smart and write it as a single equation.

    See how it works?

We get a term βx in the equation for rabbits, because rabbits are born at a rate equal to the number of rabbits times the birth rate constant β.

We get a term −δy in the equation for wolves, because wolves die at a rate equal to the number of wolves times the death rate constant δ.

We get a term −γxy in the equation for rabbits, because rabbits die at a rate equal to the number of rabbits times the number of wolves times the predation rate constant γ.

We also get a term γxy in the equation for wolves, because wolves are born at a rate equal to the number of rabbits times the number of wolves times γ.
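As a quick numerical illustration, here is a sketch (our own, with invented rate constants and initial populations) that integrates this rate equation with a simple forward Euler step:

```python
# Forward Euler integration of dx/dt = beta*x - gamma*x*y,
#                               dy/dt = gamma*x*y - delta*y.
beta, gamma, delta = 1.0, 0.2, 0.5   # birth, predation, death (made up)
x, y = 10.0, 5.0                     # initial rabbits and wolves
dt = 0.001
for _ in range(100_000):             # integrate up to t = 100
    dx = beta * x - gamma * x * y
    dy = gamma * x * y - delta * y
    x, y = x + dx * dt, y + dy * dt
print(f"x = {x:.3f}, y = {y:.3f}")
```

With these constants the populations cycle around the equilibrium (x, y) = (δ/γ, β/γ); forward Euler slowly drifts outward on these closed orbits, so this is only a sketch, not a production integrator.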

Of course we're not claiming that this rate equation makes any sense biologically! For example, think about predation. The γxy terms in the above equation would make sense if rabbits and wolves roamed around randomly, and whenever a wolf and a rabbit came within a certain distance, the wolf had a certain probability of eating the rabbit and giving birth to another wolf. At least it would make sense in the limit of large numbers of rabbits and wolves, where we can treat x and y as varying continuously rather than discretely. That's a reasonable approximation to make sometimes. Unfortunately, rabbits and wolves don't roam around randomly, and a wolf doesn't spit out a new wolf each time it eats a rabbit.

    Despite that, the equations

$$\frac{dx}{dt} = \beta x - \gamma x y$$

$$\frac{dy}{dt} = \gamma x y - \delta y$$

are actually studied in population biology. As we said, they're a special case of the Lotka-Volterra predator-prey model, which looks like this:

$$\frac{dx}{dt} = \alpha x - \beta x y$$

$$\frac{dy}{dt} = \gamma x y - \delta y$$

The point is that while these models are hideously oversimplified and thus quantitatively inaccurate, they exhibit interesting qualitative behavior that's fairly robust. Depending on the rate constants, these equations can show either a stable equilibrium or stable periodic behavior. And as we go from one regime to the other, we see a kind of catastrophe called a 'Hopf bifurcation'. You can read about this in week308 and week309 of This Week's Finds. Those consider some other equations, not the Lotka-Volterra equations. But their qualitative behavior is the same!

If you want stochastic Petri nets that give quantitatively accurate models, it's better to retreat to chemistry. Compared to animals, molecules come a lot closer to roaming around randomly and having a chance of reacting when they come within a certain distance. So in chemistry, rate equations can be used to make accurate predictions.

But we're digressing. We should be explaining the general recipe for getting a rate equation from a stochastic Petri net! You might not be able to guess it from just one example. In the next section, we'll do more examples, and maybe even write down a general formula. But if you're feeling ambitious, you can try this now:

Problem 1. Can you write down a stochastic Petri net whose rate equation is the Lotka-Volterra predator-prey model:

$$\frac{dx}{dt} = \alpha x - \beta x y$$

$$\frac{dy}{dt} = \gamma x y - \delta y$$

for arbitrary α, β, γ, δ > 0? If not, for which values of these rate constants can you do it?


    1.2 References

Here is a free online introduction to stochastic Petri nets and their rate equations:

[GP98] Peter J. E. Goss and Jean Peccoud, Quantitative modeling of stochastic systems in molecular biology by using stochastic Petri nets, Proc. Natl. Acad. Sci. USA 95 (1998), 6750-6755.

We should admit that Petri net people say 'place' where we're saying 'species'! The term 'species' is used in the literature on chemical reaction networks, which we discuss starting in Section 17.

    Here are some other introductions to the subject:

[Haa02] Peter J. Haas, Stochastic Petri Nets: Modelling, Stability, Simulation, Springer, Berlin, 2002.

[Koc10] Ina Koch, Petri nets: a mathematical formalism to analyze chemical reaction networks, Molecular Informatics 29 (2010), 838-843.

[Wil06] Darren James Wilkinson, Stochastic Modelling for Systems Biology, Taylor and Francis, New York, 2006.

    1.3 Answer

    Here is the answer to the problem:

Problem 1. Can you write down a stochastic Petri net whose rate equation is the Lotka-Volterra predator-prey model:

$$\frac{dx}{dt} = \alpha x - \beta x y$$

$$\frac{dy}{dt} = \gamma x y - \delta y$$

for arbitrary α, β, γ, δ > 0? If not, for which values of these rate constants can you do it?

Answer. We can find a stochastic Petri net that does the job for any α, β, γ, δ > 0. In fact we can find one that does the job for any possible value of α, β, γ, δ. But to keep things simple, let's just solve the original problem.

We'll consider a stochastic Petri net with two species, rabbit and wolf, and four transitions:

birth (1 rabbit in, 2 rabbits out), with rate constant α

death (1 wolf in, 0 wolves out), with rate constant δ

jousting (1 wolf and 1 rabbit in, R rabbits and W wolves out, where R, W are arbitrary natural numbers), with rate constant β

dueling (1 wolf and 1 rabbit in, R′ rabbits and W′ wolves out, where R′, W′ are arbitrary natural numbers), with rate constant γ.

All these rate constants are positive. This gives the rate equation:

$$\frac{dx}{dt} = \alpha x + (R - 1)\,\beta x y + (R' - 1)\,\gamma x y$$

$$\frac{dy}{dt} = (W - 1)\,\beta x y + (W' - 1)\,\gamma x y - \delta y$$

This is flexible enough to do the job.

For example, let's assume that when they joust, the massive, powerful wolf always kills the rabbit, and then eats the rabbit and has one offspring (R = 0 and W = 2). And let's assume that in a duel, the lithe and clever rabbit always kills the wolf, but does not reproduce afterward (R′ = 1, W′ = 0).

Then we get

$$\frac{dx}{dt} = \alpha x - \beta x y$$

$$\frac{dy}{dt} = (\beta - \gamma)\,x y - \delta y$$

This handles the equations

$$\frac{dx}{dt} = \alpha x - \beta x y$$

$$\frac{dy}{dt} = \gamma x y - \delta y$$

where α, β, γ, δ > 0 and β > γ. In other words, the cases where more rabbits die due to combat than wolves get born!

We'll let you handle the cases where fewer rabbits die than wolves get born.

If we also include a death process for rabbits and a birth process for wolves, we can get the fully general Lotka-Volterra equations:

$$\frac{dx}{dt} = \alpha x - \beta x y$$

$$\frac{dy}{dt} = \gamma x y - \delta y$$

It's worth noting that biologists like to study these equations with different choices of sign for the constants involved: the predator-prey Lotka-Volterra equations and the competitive Lotka-Volterra equations.


    2 The rate equation

As we saw previously in Section 1, a Petri net is a picture that shows different kinds of things and processes that turn bunches of things into other bunches of things, like this:

The kinds of things are called species and the processes are called transitions. We see such transitions in chemistry:

H + OH → H₂O

and population biology:

amoeba → amoeba + amoeba

and the study of infectious diseases:

infected + susceptible → infected + infected

and many other situations.

A stochastic Petri net says the rate at which each transition occurs. We can think of these transitions as occurring randomly at a certain rate, and then we get a stochastic process described by something called the 'master equation'. But for starters, we've been thinking about the limit where there are very many things of each species. Then the randomness washes out, and the expected number of things of each species changes deterministically in a manner described by the 'rate equation'.

It's time to explain the general recipe for getting this rate equation! It looks complicated at first glance, so we'll briefly state it, then illustrate it with tons of examples, and then state it again.

One nice thing about stochastic Petri nets is that they let you dabble in many sciences. Last time we got a tiny taste of how they show up in population biology. This time we'll look at chemistry and models of infectious diseases. We won't dig very deep, but trust us: you can do a lot with stochastic Petri nets in these subjects! We'll give some references in case you want to learn more.

    2.1 Rate equations: the general recipe

Here's the recipe, really quickly:


A stochastic Petri net has a set of species and a set of transitions. Let's concentrate our attention on a particular transition. Then the ith species will appear $m_i$ times as the input to that transition, and $n_i$ times as the output. Our transition also has a reaction rate 0 < r < ∞. The rate equation then gets one term for each transition, built from these data; we'll state the precise formula in Section 2.9, but first let's see how it works in examples.

2.2 The formation of water (1)

Here's a Petri net for the formation of water from atomic hydrogen and oxygen: two H's and one O come in, and one H₂O goes out, with rate constant α. If [H], [O] and [H₂O] stand for the numbers of hydrogen atoms, oxygen atoms and water molecules, the rate equation is:

$$\frac{d[\mathrm{H}]}{dt} = -2\alpha\,[\mathrm{H}]^2\,[\mathrm{O}]$$

$$\frac{d[\mathrm{O}]}{dt} = -\alpha\,[\mathrm{H}]^2\,[\mathrm{O}]$$

$$\frac{d[\mathrm{H_2O}]}{dt} = \alpha\,[\mathrm{H}]^2\,[\mathrm{O}]$$

See how it works? The reaction occurs at a rate proportional to the product of the numbers of things that appear as inputs: two H's and one O. The constant of proportionality is the rate constant α. So, the reaction occurs at a rate equal to α[H]²[O]. Then:

Since two hydrogen atoms get used up in this reaction, we get a factor of −2 in the first equation.

Since one oxygen atom gets used up, we get a factor of −1 in the second equation.

Since one water molecule is formed, we get a factor of +1 in the third equation.

    2.3 The formation of water (2)

Let's do another example. Chemical reactions rarely proceed by having three things collide simultaneously: it's too unlikely. So, for the formation of water from atomic hydrogen and oxygen, there will typically be an intermediate step. Maybe something like this:

Here OH is called a hydroxyl radical. We're not sure this is the most likely pathway, but never mind: it's a good excuse to work out another rate equation. If the first reaction has rate constant α and the second has rate constant β, here's what we get:

$$\frac{d[\mathrm{H}]}{dt} = -\alpha\,[\mathrm{H}][\mathrm{O}] - \beta\,[\mathrm{H}][\mathrm{OH}]$$

$$\frac{d[\mathrm{OH}]}{dt} = \alpha\,[\mathrm{H}][\mathrm{O}] - \beta\,[\mathrm{H}][\mathrm{OH}]$$

$$\frac{d[\mathrm{O}]}{dt} = -\alpha\,[\mathrm{H}][\mathrm{O}]$$

$$\frac{d[\mathrm{H_2O}]}{dt} = \beta\,[\mathrm{H}][\mathrm{OH}]$$

See how it works? Each reaction occurs at a rate proportional to the product of the numbers of things that appear as inputs. We get minus signs when a reaction destroys one thing of a given kind, and plus signs when it creates one. We don't get factors of 2 as we did last time, because now no reaction creates or destroys two of anything.

    2.4 The dissociation of water (1)

In chemistry every reaction comes with a reverse reaction. So, if hydrogen and oxygen atoms can combine to form water, a water molecule can also dissociate into hydrogen and oxygen atoms. The rate constants for the reverse reaction can be different than for the original reaction... and all these rate constants depend on the temperature. At room temperature, the rate constant for hydrogen and oxygen to form water is a lot higher than the rate constant for the reverse reaction. That's why we see a lot of water, and not many lone hydrogen or oxygen atoms. But at sufficiently high temperatures, the rate constants change, and water molecules become more eager to dissociate.

Calculating these rate constants is a big subject. We're just starting to read this book, which looked like the easiest one on the library shelf:

[Log96] S. R. Logan, Chemical Reaction Kinetics, Longman, Essex, 1996.

But let's not delve into these mysteries yet. Let's just take our naive Petri net for the formation of water and turn around all the arrows, to get the reverse reaction:

If the reaction rate is α, here's the rate equation:

$$\frac{d[\mathrm{H}]}{dt} = 2\alpha\,[\mathrm{H_2O}]$$

$$\frac{d[\mathrm{O}]}{dt} = \alpha\,[\mathrm{H_2O}]$$

$$\frac{d[\mathrm{H_2O}]}{dt} = -\alpha\,[\mathrm{H_2O}]$$


See how it works? The reaction occurs at a rate proportional to [H₂O], since it has just a single water molecule as input. That's where the α[H₂O] comes from. Then:

Since two hydrogen atoms get formed in this reaction, we get a factor of +2 in the first equation.

Since one oxygen atom gets formed, we get a factor of +1 in the second equation.

Since one water molecule gets used up, we get a factor of −1 in the third equation.

    2.5 The dissociation of water (2)

Of course, we can also look at the reverse of the more realistic reaction involving a hydroxyl radical as an intermediate. Again, we just turn around the arrows in the Petri net we had:

    Now the rate equation looks like this:

$$\frac{d[\mathrm{H}]}{dt} = \alpha\,[\mathrm{OH}] + \beta\,[\mathrm{H_2O}]$$

$$\frac{d[\mathrm{OH}]}{dt} = -\alpha\,[\mathrm{OH}] + \beta\,[\mathrm{H_2O}]$$

$$\frac{d[\mathrm{O}]}{dt} = \alpha\,[\mathrm{OH}]$$

$$\frac{d[\mathrm{H_2O}]}{dt} = -\beta\,[\mathrm{H_2O}]$$

Do you see why? Test your understanding of the general recipe.

By the way: if you're a category theorist, when we say 'turn around all the arrows' you probably thought 'opposite category'. And you'd be right! A Petri net is just a way of presenting a strict symmetric monoidal category that's freely generated by some objects (the species) and some morphisms (the transitions). When we turn around all the arrows in our Petri net, we're getting a presentation of the opposite symmetric monoidal category. For more details, try:

[Sas] Vladimiro Sassone, On the category of Petri net computations, 6th International Conference on Theory and Practice of Software Development, Lecture Notes in Computer Science 915, Springer, Berlin, 1995, pp. 334-348.

We won't emphasize the category-theoretic aspects in this course, but they're lurking right beneath the surface throughout.

    2.6 The SI model

The SI model is an extremely simple model of an infectious disease. We can describe it using this Petri net:

There are two species: 'susceptible' and 'infected'. And there's a transition called 'infection', where an infected person meets a susceptible person and infects them.

Suppose S is the number of susceptible people and I the number of infected ones. If the rate constant for infection is β, the rate equation is

$$\frac{dS}{dt} = -\beta S I$$

$$\frac{dI}{dt} = \beta S I$$

Do you see why?

By the way, it's easy to solve these equations exactly. The total number of people doesn't change, so S + I is a conserved quantity. Use this to get rid of one of the variables. You'll get a version of the famous logistic equation, so the fraction of people infected must grow sort of like this:
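Filling in that step: write N for the conserved total S + I and substitute S = N − I to get

$$\frac{dI}{dt} = \beta S I = \beta\,(N - I)\,I,$$

which is precisely a logistic equation for I.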


Problem 2. Is there a stochastic Petri net with just one species whose rate equation is the logistic equation:

$$\frac{dP}{dt} = \alpha P - \beta P^2?$$

    2.7 The SIR model

The SI model is just a warmup for the more interesting SIR model, which was invented by Kermack and McKendrick in 1927:

[KM27] W. O. Kermack and A. G. McKendrick, A contribution to the mathematical theory of epidemics, Proc. Roy. Soc. Lond. A 115 (1927), 700-721.

This is the only mathematical model we know to have been knighted: Sir Model.

This model has an extra species, called 'resistant', and an extra transition, called 'recovery', where an infected person gets better and develops resistance to the disease:


If the rate constant for infection is β and the rate constant for recovery is α, the rate equation for this stochastic Petri net is:

$$\frac{dS}{dt} = -\beta S I$$

$$\frac{dI}{dt} = \beta S I - \alpha I$$

$$\frac{dR}{dt} = \alpha I$$

See why?

We don't know a closed-form solution to these equations. But Kermack and McKendrick found an approximate solution in their original paper. They used this to model the death rate from bubonic plague during an outbreak in Bombay, and got pretty good agreement. Nowadays, of course, we can solve these equations numerically on the computer.
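For instance, here is a sketch of such a numerical solution using SciPy's stock ODE integrator; the rate constants and initial populations are invented for illustration:

```python
import numpy as np
from scipy.integrate import odeint

alpha, beta = 0.1, 0.002   # recovery and infection rate constants (made up)

def sir(state, t):
    S, I, R = state
    return [-beta * S * I,             # dS/dt
            beta * S * I - alpha * I,  # dI/dt
            alpha * I]                 # dR/dt

t = np.linspace(0, 100, 1000)
solution = odeint(sir, [999.0, 1.0, 0.0], t)  # 999 susceptible, 1 infected
print(solution[-1])                            # final (S, I, R)
```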

    2.8 The SIRS model

There's an even more interesting model of infectious disease called the SIRS model. This has one more transition, called 'losing resistance', where a resistant person can go back to being susceptible. Here's the Petri net:

Problem 3. If the rate constants for recovery, infection and loss of resistance are α, β and γ, write down the rate equations for this stochastic Petri net.


In the SIRS model we see something new: cyclic behavior! Say you start with a few infected people and a lot of susceptible ones. Then lots of people get infected... then lots get resistant... and then, much later, if you set the rate constants right, they lose their resistance and they're ready to get sick all over again! You can sort of see it from the Petri net, which looks like a cycle.

    You can learn about the SI, SIR and SIRS models here:

[Man06] Marc Mangel, The Theoretical Biologist's Toolbox: Quantitative Methods for Ecology and Evolutionary Biology, Cambridge U. Press, Cambridge, 2006.

    For more models of this type, see:

    Compartmental models in epidemiology, Wikipedia.

A compartmental model is closely related to a stochastic Petri net, but beware: the pictures in this article are not really Petri nets!

    2.9 The general recipe revisited

Now we'll remind you of the general recipe and polish it up a bit. So, suppose we have a stochastic Petri net with k species. Let $x_i$ be the number of things of the ith species. Then the rate equation looks like:

$$\frac{dx_i}{dt} = ???$$

It's really a bunch of equations, one for each 1 ≤ i ≤ k. But what is the right-hand side?

The right-hand side is a sum of terms, one for each transition in our Petri net. So, let's assume our Petri net has just one transition! (If there are more, consider one at a time, and add up the results.)

Suppose the ith species appears as input to this transition $m_i$ times, and as output $n_i$ times. Then the rate equation is

$$\frac{dx_i}{dt} = r\,(n_i - m_i)\,x_1^{m_1} \cdots x_k^{m_k}$$

where r is the rate constant for this transition.

That's really all there is to it! But subscripts make your eyes hurt more and more as you get older (this is the real reason for using index-free notation, despite any sophisticated rationales you may have heard), so let's define a vector

$$x = (x_1, \ldots, x_k)$$

that keeps track of how many things there are in each species. Similarly let's make up an input vector:

$$m = (m_1, \ldots, m_k)$$

and an output vector:

$$n = (n_1, \ldots, n_k)$$

for our transition. And a bit more unconventionally, let's define

$$x^m = x_1^{m_1} \cdots x_k^{m_k}$$

Then we can write the rate equation for a single transition as

$$\frac{dx}{dt} = r\,(n - m)\,x^m$$

This looks a lot nicer!

Indeed, this emboldens us to consider a general stochastic Petri net with lots of transitions, each with their own rate constant. Let's write T for the set of transitions and r(τ) for the rate constant of the transition τ ∈ T. Let m(τ) and n(τ) be the input and output vectors of the transition τ. Then the rate equation for our stochastic Petri net is

$$\frac{dx}{dt} = \sum_{\tau \in T} r(\tau)\,\bigl(n(\tau) - m(\tau)\bigr)\,x^{m(\tau)}$$

That's the fully general recipe in a nutshell. We're not sure yet how helpful this notation will be, but it's here whenever we want it.
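As a sketch of this recipe in code (our own, reusing the rabbit-and-wolf net from Section 1 with invented rate constants), each transition contributes r(τ)(n(τ) − m(τ))x^{m(τ)} to the right-hand side:

```python
import numpy as np

def rate_equation_rhs(x, transitions):
    """Sum over transitions (r, m, n) of r * (n - m) * x**m."""
    rhs = np.zeros_like(x)
    for r, m, n in transitions:
        rhs += r * (n - m) * np.prod(x ** m)
    return rhs

# x = (rabbits, wolves); transitions given as (rate, input m, output n).
transitions = [
    (1.0, np.array([1, 0]), np.array([2, 0])),  # birth
    (0.2, np.array([1, 1]), np.array([0, 2])),  # predation
    (0.5, np.array([0, 1]), np.array([0, 0])),  # death
]
print(rate_equation_rhs(np.array([10.0, 5.0]), transitions))
# prints [0.  7.5]: e.g. dy/dt = 0.2*10*5 - 0.5*5 = 7.5
```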

In Section 3 we'll get to the really interesting part, where ideas from quantum theory enter the game! We'll see how things of different species randomly transform into each other via the transitions in our Petri net. And someday we'll check that the expected number of things in each state evolves according to the rate equation we just wrote down... at least in the limit where there are lots of things in each state.

    2.10 Answers

    Here are the answers to the problems:

Problem 2. Is there a stochastic Petri net with just one species whose rate equation is the logistic equation:

$$\frac{dP}{dt} = \alpha P - \beta P^2?$$

Answer. Yes. Use the Petri net with one species, say 'amoeba', and two transitions:

fission, with one amoeba as input and two as output, with rate constant α.

competition, with two amoebas as input and one as output, with rate constant β.


The idea of competition is that when two amoebas are competing for limited resources, one may die.

Problem 3. If the rate constants for recovery, infection and loss of resistance are α, β and γ, write down the rate equations for this stochastic Petri net:

    Answer. The rate equation is:

$$\frac{dS}{dt} = -\beta S I + \gamma R$$

$$\frac{dI}{dt} = \beta S I - \alpha I$$

$$\frac{dR}{dt} = \alpha I - \gamma R$$


    3 The master equation

In Section 2 we explained the rate equation of a stochastic Petri net. But now let's get serious: let's see what's stochastic, that is, random, about a stochastic Petri net. For this we need to forget the rate equation (temporarily) and learn about the 'master equation'. This is where ideas from quantum field theory start showing up!

A Petri net has a bunch of species and a bunch of transitions. Here's an example we've already seen, from chemistry:

The species are in yellow, the transitions in blue. A labelling of our Petri net is a way of having some number of things of each species. We can draw these things as little black dots:

In this example there are only 0 or 1 things of each species: we've got one atom of carbon, one molecule of oxygen, one molecule of sodium hydroxide, one molecule of hydrochloric acid, and nothing else. But in general, we can have any natural number of things of each species.

In a stochastic Petri net, the transitions occur randomly as time passes. For example, as time passes we could see a sequence of transitions like this:


Each time a transition occurs, the number of things of each species changes in an obvious way.

    3.1 The master equation

Now, we said the transitions occur 'randomly', but that doesn't mean there's no rhyme or reason to them! The miracle of probability theory is that it lets us state precise laws about random events. The law governing the random behavior of a stochastic Petri net is called the 'master equation'.

In a stochastic Petri net, each transition has a rate constant, a positive real number. Roughly speaking, this determines the probability of that transition.

A bit more precisely: suppose we have a Petri net that is labelled in some way at some moment. Then the probability that a given transition occurs in a short time Δt is approximately:

the rate constant for that transition, times

the time Δt, times

the number of ways the transition can occur.

More precisely still: this formula is correct up to terms of order (Δt)². So, taking the limit as Δt → 0, we get a differential equation describing precisely how the probability of the Petri net having a given labelling changes with time! And this is the master equation.

Now, you might be impatient to actually see the master equation, but that would be rash. The true master doesn't need to see the master equation. It sounds like a Zen proverb, but it's true. The raw beginner in mathematics wants to see the solutions of an equation. The more advanced student is content to prove that the solution exists. But the master is content to prove that the equation exists.

A bit more seriously: what matters is understanding the rules that inevitably lead to some equation: actually writing it down is then straightforward. And you see, there's something we haven't explained yet: the number of ways the transition can occur. This involves a bit of counting. Consider, for example, this Petri net:

    Suppose there are 10 rabbits and 5 wolves.

How many ways can the birth transition occur? Since birth takes one rabbit as input, it can occur in 10 ways.

How many ways can predation occur? Since predation takes one rabbit and one wolf as inputs, it can occur in 10 × 5 = 50 ways.

How many ways can death occur? Since death takes one wolf as input, it can occur in 5 ways.

    Or consider this one:


Suppose there are 10 hydrogen atoms and 5 oxygen atoms. How many ways can they form a water molecule? There are 10 ways to pick the first hydrogen, 9 ways to pick the second hydrogen, and 5 ways to pick the oxygen. So, there are

$$10 \times 9 \times 5 = 450$$

ways.

Note that we're treating the hydrogen atoms as distinguishable, so there are 10 · 9 ways to pick them, not $\frac{10 \cdot 9}{2} = \binom{10}{2}$. In general, the number of ways to choose M distinguishable things from a collection of L is the falling power

$$L^{\underline{M}} = L\,(L - 1) \cdots (L - M + 1)$$

where there are M factors in the product, but each is 1 less than the preceding one; hence the term 'falling'.
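Here is the counting rule in code, together with a Gillespie-style simulation that uses it; the code and its parameters are our own illustration, not from the text. The current rate of a transition is its rate constant times a product of falling powers of the current counts:

```python
import random

def falling_power(l, m):
    """l * (l-1) * ... * (l-m+1), with m factors."""
    out = 1
    for i in range(m):
        out *= l - i
    return out

# The water example above: 10*9 ways to pick two hydrogens, 5 for the oxygen.
assert falling_power(10, 2) * falling_power(5, 1) == 450

# Rabbit-wolf net: (rate constant, inputs m, outputs n) per transition.
transitions = [
    (1.0, (1, 0), (2, 0)),   # birth
    (0.2, (1, 1), (0, 2)),   # predation
    (0.5, (0, 1), (0, 0)),   # death
]
state, t = [10, 5], 0.0
while t < 1.0:
    rates = [r * falling_power(state[0], m[0]) * falling_power(state[1], m[1])
             for r, m, n in transitions]
    if sum(rates) == 0:
        break
    t += random.expovariate(sum(rates))          # waiting time to next event
    _, m, n = random.choices(transitions, weights=rates)[0]
    state = [s - mi + ni for s, mi, ni in zip(state, m, n)]
print(t, state)
```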

Okay, now we've given you all the raw ingredients to work out the master equation for any stochastic Petri net. The previous paragraph was a big fat hint. One more nudge and you're on your own:

Problem 4. Suppose we have a stochastic Petri net with k species and one transition with rate constant r. Suppose the ith species appears $m_i$ times as the input of this transition and $n_i$ times as the output. A labelling of this stochastic Petri net is a k-tuple of natural numbers $\ell = (\ell_1, \ldots, \ell_k)$ saying how many things are in each species. Let $\psi_\ell(t)$ be the probability that the labelling is $\ell$ at time t. Then the master equation looks like this:

$$\frac{d}{dt}\psi_\ell(t) = \sum_{\ell'} H_{\ell\ell'}\,\psi_{\ell'}(t)$$

for some matrix of real numbers $H_{\ell\ell'}$. What is this matrix?

You can write down a formula for this matrix using what we've told you. And then, if you have a stochastic Petri net with more transitions, you can just compute the matrix for each transition using this formula, and add them all up.


There's a straightforward way to solve this problem, but we want to get the solution by a strange route: we want to guess the master equation using ideas from quantum field theory!

Why? Well, if we think about a stochastic Petri net whose labelling undergoes random transitions as we've described, you'll see that any possible history for the labelling can be drawn in a way that looks like a Feynman diagram. In quantum field theory, Feynman diagrams show how things interact and turn into other things. But that's what stochastic Petri nets do, too!

    For example, if our Petri net looks like this:

    then a typical history can be drawn like this:


Some rabbits and wolves come in on top. They undergo some transitions as time passes, and go out on the bottom. The vertical coordinate is time, while the horizontal coordinate doesn't really mean anything: it just makes the diagram easier to draw.

If we ignore all the artistry that makes it cute, this Feynman diagram is just a graph with species as edges and transitions as vertices. Each transition occurs at a specific time.

We can use these Feynman diagrams to compute the probability that if we start it off with some labelling at time t₁, our stochastic Petri net will wind up with some other labelling at time t₂. To do this, we just take a sum over Feynman diagrams that start and end with the given labellings. For each Feynman diagram, we integrate over all possible times at which the transitions occur. And what do we integrate? Just the product of the rate constants for those transitions!

That was a bit of a mouthful, and it doesn't really matter if you followed it in detail. What matters is that it sounds a lot like stuff you learn when you study quantum field theory!

That's one clue that something cool is going on here. Another is the master equation itself:

$$\frac{d}{dt}\psi_\ell(t) = \sum_{\ell'} H_{\ell\ell'}\,\psi_{\ell'}(t)$$

This looks a lot like Schrödinger's equation, the basic equation describing how a quantum system changes with the passage of time.

We can make it look even more like Schrödinger's equation if we create a vector space with the labellings $\ell$ as a basis. The numbers $\psi_\ell(t)$ will be the components of some vector $\psi(t)$ in this vector space. The numbers $H_{\ell\ell'}$ will be the matrix entries of some operator H on that vector space. And the master equation becomes:

$$\frac{d}{dt}\psi(t) = H\psi(t)$$

Compare Schrödinger's equation:

$$i\,\frac{d}{dt}\psi(t) = H\psi(t)$$

The only visible difference is that factor of i!

But of course this is linked to another big difference: in the master equation $\psi$ describes probabilities, so it's a vector in a real vector space. In quantum theory $\psi$ describes amplitudes, so it's a vector in a complex Hilbert space.

Apart from this huge difference, everything is a lot like quantum field theory. In particular, our vector space is a lot like the Fock space one sees in quantum field theory. Suppose we have a quantum particle that can be in k different states. Then its Fock space is the Hilbert space we use to describe an arbitrary collection of such particles. It has an orthonormal basis denoted

$$|\ell_1 \cdots \ell_k\rangle$$

where $\ell_1, \ldots, \ell_k$ are natural numbers saying how many particles there are in each state. So, any vector in Fock space looks like this:

$$\psi = \sum_{\ell_1, \ldots, \ell_k} \psi_{\ell_1, \ldots, \ell_k}\, |\ell_1 \cdots \ell_k\rangle$$

But if we write the whole list $\ell_1, \ldots, \ell_k$ simply as $\ell$, this becomes

$$\psi = \sum_\ell \psi_\ell\, |\ell\rangle$$

This is almost like what we've been doing with Petri nets! Except we hadn't gotten around to giving names to the basis vectors.
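For instance (a small sketch of our own, not notation from the text), one can literally store such a ψ as a dictionary from labellings to probabilities:

```python
# psi = sum_l psi_l |l>, stored as {labelling: probability}.
# Labellings are tuples of natural numbers, one entry per species.
psi = {(10, 5): 0.7, (9, 6): 0.3}   # 70% chance of 10 rabbits and 5 wolves
assert abs(sum(psi.values()) - 1.0) < 1e-12   # probabilities sum to 1
```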

In quantum field theory class, you would learn lots of interesting operators on Fock space: annihilation and creation operators, number operators, and so on. So, when considering this master equation

$$\frac{d}{dt}\psi(t) = H\psi(t)$$


    it seemed natural to take the operator H and write it in terms of these. Therewas an obvious first guess, which didnt quite work... but thinking a bit hardereventually led to the right answer. Later, it turned out people had alreadythought about similar things. So, we want to explain this.

When we first started working on this stuff, we focused on the difference between collections of indistinguishable things, like bosons or fermions, and collections of distinguishable things, like rabbits or wolves. But with the benefit of hindsight, it's even more important to think about the difference between quantum theory, which is all about probability amplitudes, and the game we're playing now, which is all about probabilities. So, in the next Section, we'll explain how we need to modify quantum theory so that it's about probabilities. This will make it easier to guess a nice formula for $H$.

    3.2 Answer

    Here is the answer to the problem:

Problem 4. Suppose we have a stochastic Petri net with $k$ species and just one transition, whose rate constant is $r$. Suppose the $i$th species appears $m_i$ times as the input of this transition and $n_i$ times as the output. A labelling of this stochastic Petri net is a $k$-tuple of natural numbers $\ell = (\ell_1, \ldots, \ell_k)$ saying how many things there are of each species. Let $\psi_\ell(t)$ be the probability that the labelling is $\ell$ at time $t$. Then the master equation looks like this:

$$\frac{d}{dt}\psi_{\ell'}(t) = \sum_{\ell} H_{\ell'\ell}\,\psi_\ell(t)$$

for some matrix of real numbers $H_{\ell'\ell}$. What is this matrix?

Answer. To compute $H_{\ell'\ell}$ it's enough to start the Petri net in a definite labelling $\ell$ and see how fast the probability of being in some labelling $\ell'$ changes. In other words, if at some time $t$ we have

$$\psi_\ell(t) = 1$$

then

$$\frac{d}{dt}\psi_{\ell'}(t) = H_{\ell'\ell}$$

at this time.

Now, suppose we have a Petri net that is labelled in some way at some moment. Then the probability that the transition occurs in a short time $\Delta t$ is approximately:

• the rate constant $r$, times

• the time $\Delta t$, times

• the number of ways the transition can occur, which is the product of falling powers $\ell_1^{\underline{m_1}} \cdots \ell_k^{\underline{m_k}}$. Let's call this product $\ell^{\underline{m}}$ for short.


Multiplying these 3 things we get

$$r\,\ell^{\underline{m}}\,\Delta t$$

So, the rate at which the transition occurs is just:

$$r\,\ell^{\underline{m}}$$

And when the transition occurs, it eats up $m_i$ things of the $i$th species, and produces $n_i$ things of that species. So, it carries our system from the original labelling $\ell$ to the new labelling

$$\ell' = \ell + n - m$$

So, in this case we have

$$\frac{d}{dt}\psi_{\ell'}(t) = r\,\ell^{\underline{m}}$$

and thus

$$H_{\ell'\ell} = r\,\ell^{\underline{m}}$$

However, that's not all: there's another case to consider! Since the probability of the Petri net being in this new labelling $\ell'$ is going up, the probability of it staying in the original labelling $\ell$ must be going down by the same amount. So we must also have

$$H_{\ell\ell} = -r\,\ell^{\underline{m}}$$

We can combine both cases into one formula like this:

$$H_{\ell'\ell} = r\,\ell^{\underline{m}} \left( \delta_{\ell',\,\ell+n-m} - \delta_{\ell'\ell} \right)$$

Here the first term tells us how fast the probability of being in the new labelling is going up. The second term tells us how fast the probability of staying in the original labelling is going down.

Note: each column in the matrix $H_{\ell'\ell}$ sums to zero, and all the off-diagonal entries are nonnegative. That's good: in the next section we'll show that this matrix must be infinitesimal stochastic, meaning precisely that it has these properties!
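If you like to compute, here is a minimal numerical sketch (ours, not part of the text) of this recipe for a hypothetical one-species transition: we build the truncated matrix $H_{\ell'\ell}$ and check the two properties just mentioned. We take a transition with at least as many inputs as outputs, so that truncating the population at $\ell_{\max}$ doesn't break the column sums.

```python
# A sketch, not from the text: the matrix
#   H_{l'l} = r * l^(falling m) * (delta_{l', l+n-m} - delta_{l'l})
# for one species, truncated at population l_max.
import numpy as np

def falling_power(l, m):
    """l(l-1)...(l-m+1): the number of ordered m-tuples of inputs among l things."""
    result = 1
    for i in range(m):
        result *= l - i
    return result

def hamiltonian(r, m, n, l_max):
    H = np.zeros((l_max + 1, l_max + 1))
    for l in range(l_max + 1):
        rate = r * falling_power(l, m)
        l_new = l + n - m
        if 0 <= l_new <= l_max:
            H[l_new, l] += rate   # probability of the new labelling goes up...
        H[l, l] -= rate           # ...and of the original labelling goes down
    return H

# Example: a transition with m = 2 inputs and n = 1 output, rate r = 1.
H = hamiltonian(r=1.0, m=2, n=1, l_max=20)
print(np.allclose(H.sum(axis=0), 0.0))        # columns sum to zero: True
print(np.all(H - np.diag(np.diag(H)) >= 0))   # off-diagonal entries >= 0: True
```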


    4 Probabilities vs amplitudes

In Section 3 we saw clues that stochastic Petri nets are a lot like quantum field theory, but with probabilities replacing amplitudes. There's a powerful analogy at work here, which can help us a lot. It's time to make that analogy precise. But first, let us quickly sketch why it could be worthwhile.

    4.1 A Poisson process

Consider this stochastic Petri net with rate constant $r$:

It describes an inexhaustible supply of fish swimming down a river, and getting caught when they run into a fisherman's net. In any short time $\Delta t$ there's a chance of about $r\Delta t$ of a fish getting caught. There's also a chance of two or more fish getting caught, but this becomes negligible by comparison as $\Delta t \to 0$. Moreover, the chance of a fish getting caught during this interval of time is independent of what happens before or afterwards. This sort of process is called a Poisson process.

Problem 5. Suppose we start out knowing for sure there are no fish in the fisherman's net. What's the probability that he has caught $n$ fish at time $t$?

Answer. At any time there will be some probability of having caught $n$ fish; let's call this probability $\psi(n, t)$. We can summarize all these probabilities in a single power series, called a generating function:

$$\Psi(t) = \sum_{n=0}^{\infty} \psi(n, t)\, z^n$$

Here $z$ is a formal variable: don't ask what it means, for now it's just a trick. In quantum theory we use this trick when talking about collections of photons rather than fish, but then the numbers $\psi(n, t)$ are complex amplitudes. Now they are real probabilities, but we can still copy what the physicists do, and use this trick to rewrite the master equation as follows:

$$\frac{d}{dt}\Psi(t) = H\Psi(t)$$

This describes how the probability of having caught any given number of fish changes with time.


What's the operator $H$? Well, in quantum theory we describe the creation of photons using a certain operator on power series called the creation operator:

$$a^\dagger \Psi = z\Psi$$

We can try to apply this to our fish. If at some time we're 100% sure we have $n$ fish, we have

$$\Psi = z^n$$

so applying the creation operator gives

$$a^\dagger \Psi = z^{n+1}$$

One more fish! That's good. So, an obvious wild guess is

$$H = r a^\dagger$$

where $r$ is the rate at which we're catching fish. Let's see how well this guess works.

If you know how to exponentiate operators, you know how to solve this equation:

$$\frac{d}{dt}\Psi(t) = H\Psi(t)$$

It's easy:

$$\Psi(t) = \exp(tH)\Psi(0)$$

Since we start out knowing there are no fish in the net, we have

$$\Psi(0) = 1$$

so with our guess for $H$ we get

$$\Psi(t) = \exp(rta^\dagger)\,1$$

But $a^\dagger$ is the operator of multiplication by $z$, so $\exp(rta^\dagger)$ is multiplication by $e^{rtz}$, and

$$\Psi(t) = e^{rtz} = \sum_{n=0}^{\infty} \frac{(rt)^n}{n!}\, z^n$$

So, if our guess is right, the probability of having caught $n$ fish at time $t$ is

$$\frac{(rt)^n}{n!}$$

Unfortunately, this can't be right, because these probabilities don't sum to 1! Instead their sum is

$$\sum_{n=0}^{\infty} \frac{(rt)^n}{n!} = e^{rt}$$

We can try to wriggle out of the mess we're in by dividing our answer by this fudge factor. It sounds like a desperate measure, but we've got to try something!


This amounts to guessing that the probability of having caught $n$ fish by time $t$ is

$$\frac{(rt)^n}{n!}\, e^{-rt}$$

And this guess is right! This is called the Poisson distribution: it's famous for being precisely the answer to the problem we're facing.

So on the one hand our wild guess about $H$ was wrong, but on the other hand it was not so far off. We can fix it as follows:

$$H = r(a^\dagger - 1)$$

The extra $-1$ gives us the fudge factor we need.
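Here is a quick numerical sanity check of this answer (our own sketch, not part of the text): truncate $H = r(a^\dagger - 1)$ to the states $0, \ldots, n_{\max}$, where $a^\dagger$ becomes the matrix shifting state $n$ to state $n+1$, and compare $\exp(tH)$ applied to the initial state with the Poisson distribution.

```python
# A sketch, not from the text: compare exp(tH) for H = r(a^dagger - 1)
# with the Poisson distribution (rt)^n e^{-rt} / n!.
import numpy as np
from scipy.linalg import expm
from math import exp, factorial

r, t, n_max = 1.0, 2.0, 60
a_dag = np.diag(np.ones(n_max), -1)         # creation: sends state n to n+1
H = r * (a_dag - np.eye(n_max + 1))
psi0 = np.zeros(n_max + 1); psi0[0] = 1.0   # surely no fish caught at t = 0
psi_t = expm(t * H) @ psi0
poisson = np.array([(r*t)**n * exp(-r*t) / factorial(n) for n in range(n_max + 1)])
print(np.allclose(psi_t, poisson))          # True, up to tiny truncation error
```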

So, a wild guess corrected by an ad hoc procedure seems to have worked! But what's really going on?

What's really going on is that $a^\dagger$, or any multiple of this, is not a legitimate Hamiltonian for a master equation: if we define a time evolution operator $\exp(tH)$ using a Hamiltonian like this, probabilities won't sum to 1! But $a^\dagger - 1$ is okay. So, we need to think about which Hamiltonians are okay.

In quantum theory, self-adjoint Hamiltonians are okay. But in probability theory, we need some other kind of Hamiltonian. Let's figure it out.

    4.2 Probability theory vs quantum theory

Suppose we have a system of any kind: physical, chemical, biological, economic, whatever. The system can be in different states. In the simplest sort of model, we say there's some set $X$ of states, and say that at any moment in time the system is definitely in one of these states. But we want to compare two other options:

• In a probabilistic model, we may instead say that the system has a probability $\psi(x)$ of being in any state $x \in X$. These probabilities are nonnegative real numbers with

$$\sum_{x \in X} \psi(x) = 1$$

• In a quantum model, we may instead say that the system has an amplitude $\psi(x)$ of being in any state $x \in X$. These amplitudes are complex numbers with

$$\sum_{x \in X} |\psi(x)|^2 = 1$$

Probabilities and amplitudes are similar yet strangely different. Of course, given an amplitude we can get a probability by taking its absolute value and squaring it. This is a vital bridge from quantum theory to probability theory. In the present section, however, we don't want to focus on the bridges, but rather the parallels between these theories.


We often want to replace the sums above by integrals. For that we need to replace our set $X$ by a measure space, which is a set equipped with enough structure that you can integrate real or complex functions defined on it. Well, at least you can integrate so-called integrable functions, but we'll neglect all issues of analytical rigor here. Then:

• In a probabilistic model, the system has a probability distribution $\psi : X \to \mathbb{R}$, which obeys $\psi \ge 0$ and

$$\int_X \psi(x)\, dx = 1$$

• In a quantum model, the system has a wavefunction $\psi : X \to \mathbb{C}$, which obeys

$$\int_X |\psi(x)|^2\, dx = 1$$

In probability theory, we integrate $\psi$ over a set $S \subseteq X$ to find out the probability that our system's state is in this set. In quantum theory we integrate $|\psi|^2$ over the set to answer the same question.

We don't need to think about sums over sets and integrals over measure spaces separately: there's a way to make any set $X$ into a measure space such that by definition,

$$\int_X \psi(x)\, dx = \sum_{x \in X} \psi(x)$$

In short, integrals are more general than sums! So, we'll mainly talk about integrals, until the very end.

In probability theory, we want our probability distributions to be vectors in some vector space. Ditto for wavefunctions in quantum theory! So, we make up some vector spaces:

• In probability theory, the probability distribution $\psi$ is a vector in the space

$$L^1(X) = \left\{ \psi : X \to \mathbb{C} \; : \; \int_X |\psi(x)|\, dx < \infty \right\}$$

• In quantum theory, the wavefunction $\psi$ is a vector in the space

$$L^2(X) = \left\{ \psi : X \to \mathbb{C} \; : \; \int_X |\psi(x)|^2\, dx < \infty \right\}$$


The main thing we can do with elements of $L^1(X)$, besides what we can do with vectors in any vector space, is integrate one. This gives a linear map:

$$\int : L^1(X) \to \mathbb{C}$$

The main thing we can do with elements of $L^2(X)$, besides the things we can do with vectors in any vector space, is take the inner product of two:

$$\langle \phi, \psi \rangle = \int_X \overline{\phi(x)}\, \psi(x)\, dx$$

This gives a map that's linear in one slot and conjugate-linear in the other:

$$\langle \cdot, \cdot \rangle : L^2(X) \times L^2(X) \to \mathbb{C}$$

First came probability theory with $L^1(X)$; then came quantum theory with

$L^2(X)$. Naive extrapolation would say it's about time for someone to invent an even more bizarre theory of reality based on $L^3(X)$. In this, you'd have to integrate the product of three wavefunctions to get a number! The math of $L^p$ spaces is already well-developed, so give it a try if you want. We'll stick to $L^1$ and $L^2$.

    4.3 Stochastic versus unitary operators

Now let's think about time evolution:

• In probability theory, the passage of time is described by a map sending probability distributions to probability distributions. This is described using a stochastic operator

$$U : L^1(X) \to L^1(X)$$

meaning a linear operator such that

$$\int U\psi = \int \psi$$

and

$$\psi \ge 0 \;\Rightarrow\; U\psi \ge 0$$

• In quantum theory the passage of time is described by a map sending wavefunctions to wavefunctions. This is described using an isometry

$$U : L^2(X) \to L^2(X)$$

meaning a linear operator such that

$$\langle U\phi, U\psi \rangle = \langle \phi, \psi \rangle$$

In quantum theory we usually want time evolution to be reversible, so we focus on isometries that have inverses: these are called unitary operators. In probability theory we often consider stochastic operators that are not invertible.


    4.4 Infinitesimal stochastic versus self-adjoint operators

Sometimes it's nice to think of time coming in discrete steps. But in theories where we treat time as continuous, to describe time evolution we usually need to solve a differential equation. This is true in both probability theory and quantum theory.

In probability theory we often describe time evolution using a differential equation called the master equation:

$$\frac{d}{dt}\psi(t) = H\psi(t)$$

whose solution is

$$\psi(t) = \exp(tH)\psi(0)$$

In quantum theory we often describe time evolution using a differential equation called Schrödinger's equation:

$$i\frac{d}{dt}\psi(t) = H\psi(t)$$

whose solution is

$$\psi(t) = \exp(-itH)\psi(0)$$

In both cases, we call the operator $H$ the Hamiltonian. In fact the appearance of $i$ in the quantum case is purely conventional; we could drop it to make the analogy better, but then we'd have to work with skew-adjoint operators instead of self-adjoint ones in what follows.

Let's guess what properties an operator $H$ should have to make $\exp(-itH)$ unitary for all $t$. We start by assuming it's an isometry:

$$\langle \exp(-itH)\phi, \exp(-itH)\psi \rangle = \langle \phi, \psi \rangle$$

Then we differentiate this with respect to $t$ and set $t = 0$, getting

$$\langle -iH\phi, \psi \rangle + \langle \phi, -iH\psi \rangle = 0$$

or in other words:

$$\langle H\phi, \psi \rangle = \langle \phi, H\psi \rangle$$

Physicists call an operator obeying this condition self-adjoint. Mathematicians know there's more to it, but now is not the time to discuss such subtleties, intriguing though they be. All that matters now is that there is, indeed, a correspondence between self-adjoint operators and well-behaved one-parameter unitary groups $\exp(-itH)$. This is called Stone's Theorem.

But now let's copy this argument to guess what properties an operator $H$ must have to make $\exp(tH)$ stochastic. We start by assuming $\exp(tH)$ is stochastic, so

$$\int \exp(tH)\psi = \int \psi$$


and

$$\psi \ge 0 \;\Rightarrow\; \exp(tH)\psi \ge 0$$

We can differentiate the first equation with respect to $t$ and set $t = 0$, getting

$$\int H\psi = 0$$

for all $\psi$. But what about the second condition,

$$\psi \ge 0 \;\Rightarrow\; \exp(tH)\psi \ge 0\,?$$

It seems easier to deal with this in the special case when integrals over $X$ reduce to sums. So let's suppose that happens... and let's start by seeing what the first condition says in this case.

In this case, $L^1(X)$ has a basis of Kronecker delta functions: the Kronecker delta function $\delta_i$ vanishes everywhere except at one point $i \in X$, where it equals 1. Using this basis, we can write any operator on $L^1(X)$ as a matrix.

As a warmup, let's see what it means for an operator

$$U : L^1(X) \to L^1(X)$$

to be stochastic in this case. We'll take the conditions

$$\int U\psi = \int \psi$$

and

$$\psi \ge 0 \;\Rightarrow\; U\psi \ge 0$$

and rewrite them using matrices. For both, it's enough to consider the case where $\psi$ is a Kronecker delta, say $\delta_j$.

In these terms, the first condition says

$$\sum_{i \in X} U_{ij} = 1$$

for each column $j$. The second says

$$U_{ij} \ge 0$$

for all $i, j$. So in this case, a stochastic operator is just a square matrix where each column sums to 1 and all the entries are nonnegative. (Such matrices are often called left stochastic.)

Next, let's see what we need for an operator $H$ to have the property that $\exp(tH)$ is stochastic for all $t \ge 0$. It's enough to assume $t$ is very small, which lets us use the approximation

$$\exp(tH) = 1 + tH + \cdots$$


and work to first order in $t$. Saying that each column of this matrix sums to 1 then amounts to

$$\sum_{i \in X} \left( \delta_{ij} + tH_{ij} + \cdots \right) = 1$$

which requires

$$\sum_{i \in X} H_{ij} = 0$$

Saying that each entry is nonnegative amounts to

$$\delta_{ij} + tH_{ij} + \cdots \ge 0$$

When $i = j$ this will be automatic when $t$ is small enough, so the meat of this condition is

$$H_{ij} \ge 0 \quad \text{if } i \ne j$$

So, let's say $H$ is an infinitesimal stochastic matrix if its columns sum to zero and its off-diagonal entries are nonnegative. This term doesn't roll off the tongue, but we don't know a better one. The idea is that any infinitesimal stochastic operator should be the infinitesimal generator of a stochastic process.

In other words, when we get the details straightened out, any 1-parameter family of stochastic operators

$$U(t) : L^1(X) \to L^1(X), \qquad t \ge 0$$

obeying

$$U(0) = I$$

$$U(t)U(s) = U(t+s)$$

and continuity:

$$t_i \to t \;\Rightarrow\; U(t_i)\psi \to U(t)\psi$$

should be of the form

$$U(t) = \exp(tH)$$

for a unique infinitesimal stochastic operator $H$.

When $X$ is a finite set, this is true, and an infinitesimal stochastic operator is just a square matrix whose columns sum to zero and whose off-diagonal entries are nonnegative. But do you know a theorem characterizing infinitesimal stochastic operators for general measure spaces $X$? Someone must have worked it out.

Luckily, for our work on stochastic Petri nets, we only need to understand the case where $X$ is a countable set and our integrals are really just sums. This should be almost like the case where $X$ is a finite set, but we'll need to take care that all our sums converge.


    4.5 The moral

Now we can see why a Hamiltonian like $a^\dagger$ is no good, while $a^\dagger - 1$ is good. (We'll ignore the rate constant $r$ since it's irrelevant here.) The first one is not infinitesimal stochastic, while the second one is!

In this example, our set of states is the natural numbers:

$$X = \mathbb{N}$$

The probability distribution

$$\psi : \mathbb{N} \to \mathbb{C}$$

tells us the probability of having caught any specific number of fish.

The creation operator is not infinitesimal stochastic: in fact, it's stochastic!

Why? Well, when we apply the creation operator, what was the probability of having $n$ fish now becomes the probability of having $n + 1$ fish. So, the probabilities remain nonnegative, and their sum over all $n$ is unchanged. Those two conditions are all we need for a stochastic operator.

Using our fancy abstract notation, these conditions say:

$$\int a^\dagger \psi = \int \psi$$

and

$$\psi \ge 0 \;\Rightarrow\; a^\dagger \psi \ge 0$$

So, precisely by virtue of being stochastic, the creation operator fails to be infinitesimal stochastic:

$$\int a^\dagger \psi \ne 0$$

Thus it's a bad Hamiltonian for our stochastic Petri net.

On the other hand, $a^\dagger - 1$ is infinitesimal stochastic. Its off-diagonal entries are the same as those of $a^\dagger$, so they're nonnegative. Moreover:

$$\int (a^\dagger - 1)\psi = 0$$

precisely because

$$\int a^\dagger \psi = \int \psi$$

You may be thinking: all this fancy math just to understand a single stochastic Petri net, the simplest one of all!


But next we'll explain a general recipe which will let you write down the Hamiltonian for any stochastic Petri net. The lessons we've just learned will make this much easier. And pondering the analogy between probability theory and quantum theory will also be good for our bigger project of unifying the applications of network diagrams to dozens of different subjects. Oh, and we should mention: the rabbits are plotting their revenge.


    5 Annihilation and creation operators

Now comes the fun part. Let's see how tricks from quantum theory can be used to describe random processes. We'll try to make this section completely self-contained, except at the very end. So, even if you skipped a bunch of the previous ones, this should make sense.

You'll need to know a bit of math: calculus, a tiny bit of probability theory, and linear operators on vector spaces. You don't need to know quantum theory, though you'll have more fun if you do. What we're doing here is very similar, but also strangely different, for reasons explained in Section 4.

    5.1 Rabbits and quantum mechanics

Suppose we have a population of rabbits in a cage and we'd like to describe its growth in a stochastic way, using probability theory. Let $\psi_n$ be the probability of having $n$ rabbits. We can borrow a trick from quantum theory, and summarize all these probabilities in a formal power series like this:

$$\Psi = \sum_{n=0}^{\infty} \psi_n z^n$$

The variable $z$ doesn't mean anything in particular, and we don't care if the power series converges. See, in math 'formal' means it's only symbols on the page, just follow the rules. It's like if someone says a party is formal, so you need to wear a white tie: you're not supposed to ask what the tie means.

However, there's a good reason for this trick. We can define two operators on formal power series, called the annihilation operator:

$$a\Psi = \frac{d}{dz}\Psi$$

and the creation operator:

$$a^\dagger \Psi = z\Psi$$

They're just differentiation and multiplication by $z$, respectively. So, for example, suppose we start out being 100% sure we have $n$ rabbits for some particular number $n$. Then $\psi_n = 1$, while all the other probabilities are 0, so:

$$\Psi = z^n$$

If we then apply the creation operator, we obtain

$$a^\dagger \Psi = z^{n+1}$$

Voilà! One more rabbit!


The annihilation operator is more subtle. If we start out with $n$ rabbits:

$$\Psi = z^n$$

and then apply the annihilation operator, we obtain

$$a\Psi = nz^{n-1}$$

What does this mean? The $z^{n-1}$ means we have one fewer rabbit than before. But what about the factor of $n$? It means there were $n$ different ways we could pick a rabbit and make it disappear! This should seem a bit mysterious, for various reasons... but we'll see how it works soon enough.

The creation and annihilation operators don't commute:

$$(aa^\dagger - a^\dagger a)\Psi = \frac{d}{dz}(z\Psi) - z\frac{d}{dz}\Psi = \Psi$$

so for short we say:

$$aa^\dagger - a^\dagger a = 1$$

or even shorter:

$$[a, a^\dagger] = 1$$

where the commutator of two operators is $[S, T] = ST - TS$.

The noncommutativity of operators is often claimed to be a special feature of quantum physics, and the creation and annihilation operators are fundamental to understanding the quantum harmonic oscillator. There, instead of rabbits, we're studying quanta of energy, which are peculiarly abstract entities obeying rather counterintuitive laws. So, it's cool that the same math applies to purely classical entities, like rabbits!

In particular, the equation $[a, a^\dagger] = 1$ just says that there's one more way to put a rabbit in a cage of rabbits, and then take one out, than to take one out and then put one in.
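If you like, you can also check the commutation relation symbolically; here is a two-line verification (ours, not from the text) using SymPy, acting on an arbitrary test polynomial:

```python
# A sketch, not from the text: verify [a, a^dagger] = 1 on a test polynomial,
# where a = d/dz and a^dagger = multiplication by z.
from sympy import symbols, diff, expand

z = symbols('z')
psi = 3*z**4 + z + 2                            # any polynomial will do
commutator = diff(z*psi, z) - z*diff(psi, z)    # (a a^dagger - a^dagger a) psi
print(expand(commutator) == expand(psi))        # True
```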


But how do we actually use this setup? We want to describe how the probabilities $\psi_n$ change with time, so we write

$$\Psi(t) = \sum_{n=0}^{\infty} \psi_n(t) z^n$$

Then, we write down an equation describing the rate of change of $\Psi$:

$$\frac{d}{dt}\Psi(t) = H\Psi(t)$$

Here $H$ is an operator called the Hamiltonian, and the equation is called the master equation. The details of the Hamiltonian depend on our problem! But we can often write it down using creation and annihilation operators. Let's do some examples, and then we'll tell you the general rule.

    5.2 Catching rabbits

In Section 4 we told you what happens when we stand in a river and catch fish as they randomly swim past. Let us remind you of how that works. But now let's use rabbits.

So, suppose an inexhaustible supply of rabbits are randomly roaming around a huge field, and each time a rabbit enters a certain area, we catch it and add it to our population of caged rabbits. Suppose that on average we catch one rabbit per unit time. Suppose the chance of catching a rabbit during any interval of time is independent of what happens before or afterwards. What is the Hamiltonian describing the probability distribution of caged rabbits, as a function of time?

There's an obvious dumb guess: the creation operator! However, we saw last time that this doesn't work, and we saw how to fix it. The right answer is

$$H = a^\dagger - 1$$

To see why, suppose for example that at some time $t$ we have $n$ rabbits, so:

$$\Psi(t) = z^n$$

Then the master equation says that at this moment,

$$\frac{d}{dt}\Psi(t) = (a^\dagger - 1)\Psi(t) = z^{n+1} - z^n$$


Since $\Psi = \sum_{n=0}^{\infty} \psi_n(t) z^n$, this implies that the coefficients of our formal power series are changing like this:

$$\frac{d}{dt}\psi_{n+1}(t) = 1, \qquad \frac{d}{dt}\psi_n(t) = -1$$

while all the rest have zero derivative at this moment. And that's exactly right! See, $\psi_{n+1}(t)$ is the probability of having one more rabbit, and this is going up at rate 1. Meanwhile, $\psi_n(t)$ is the probability of having $n$ rabbits, and this is going down at the same rate.

Problem 6. Show that with this Hamiltonian and any initial conditions, the master equation predicts that the expected number of rabbits grows linearly.

    5.3 Dying rabbits

Don't worry: no rabbits are actually injured in the research that we're doing here at the Centre for Quantum Technologies. This is just a thought experiment.

Suppose a mean nasty guy had a population of rabbits in a cage and didn't feed them at all. Suppose that each rabbit has a unit probability of dying per unit time. And as always, suppose the probability of this happening in any interval of time is independent of what happens before or after that time.

What is the Hamiltonian? Again there's a dumb guess: the annihilation operator! And again this guess is wrong, but it's not far off. As before, the right answer includes a correction term:

$$H = a - N$$

This time the correction term is famous in its own right. It's called the number operator:

$$N = a^\dagger a$$

The reason is that if we start with $n$ rabbits, and apply this operator, it amounts to multiplication by $n$:

$$Nz^n = z\frac{d}{dz}z^n = nz^n$$

Let's see why this guess is right. Again, suppose that at some particular time $t$ we have $n$ rabbits, so

$$\Psi(t) = z^n$$


Then the master equation says that at this time

$$\frac{d}{dt}\Psi(t) = (a - N)\Psi(t) = nz^{n-1} - nz^n$$

So, our probabilities are changing like this:

$$\frac{d}{dt}\psi_{n-1}(t) = n, \qquad \frac{d}{dt}\psi_n(t) = -n$$

while the rest have zero derivative. And this is good! We're starting with $n$ rabbits, and each has a unit probability per unit time of dying. So, the chance of having one less should be going up at rate $n$. And the chance of having the same number we started with should be going down at the same rate.

Problem 7. Show that with this Hamiltonian and any initial conditions, the master equation predicts that the expected number of rabbits decays exponentially.

    5.4 Breeding rabbits

Suppose we have a strange breed of rabbits that reproduce asexually. Suppose that each rabbit has a unit probability per unit time of having a baby rabbit, thus effectively duplicating itself.

As you can see from the cryptic picture above, this duplication process takes one rabbit as input and has two rabbits as output. So, if you've been paying attention, you should be ready with a dumb guess for the Hamiltonian: $a^\dagger a^\dagger a$. This operator annihilates one rabbit and then creates two!

But you should also suspect that this dumb guess will need a correction term. And you're right! As always, the correction term makes the probability of things staying the same go down at exactly the rate that the probability of things changing goes up.

You should guess the correction term... but we'll just tell you:

$$H = a^\dagger a^\dagger a - N$$


We can check this in the usual way, by seeing what it does when we have $n$ rabbits:

$$Hz^n = z^2\frac{d}{dz}z^n - nz^n = nz^{n+1} - nz^n$$

That's good: since there are $n$ rabbits, the rate of rabbit duplication is $n$. This is the rate at which the probability of having one more rabbit goes up... and also the rate at which the probability of having $n$ rabbits goes down.

Problem 8. Show that with this Hamiltonian and any initial conditions, the master equation predicts that the expected number of rabbits grows exponentially.
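We won't spoil Problems 6-8, but here is a numerical illustration (ours, not from the text) that evolves $\Psi$ under each of the three Hamiltonians above, truncated to populations $0, \ldots, n_{\max}$, and prints the expected number of rabbits $\langle N \rangle = \sum_n n\,\psi_n(t)$:

```python
# A sketch, not from the text: expected rabbit numbers under the three
# Hamiltonians above, using truncated matrices for a, a^dagger and N.
import numpy as np
from scipy.linalg import expm

n_max = 200
ns = np.arange(n_max + 1)
a = np.diag(ns[1:].astype(float), 1)   # annihilation: a z^n = n z^(n-1)
a_dag = np.diag(np.ones(n_max), -1)    # creation:     a^dagger z^n = z^(n+1)
N = a_dag @ a                          # number operator

psi0 = np.zeros(n_max + 1); psi0[10] = 1.0     # start with exactly 10 rabbits

def expected(H, t):
    return ns @ (expm(t * H) @ psi0)

print(expected(a_dag - np.eye(n_max + 1), 3.0))   # catching: ~ 10 + 3 = 13
print(expected(a - N, 3.0))                       # dying:    ~ 10 e^{-3} ~ 0.50
print(expected(a_dag @ a_dag @ a - N, 0.2))       # breeding: ~ 10 e^{0.2} ~ 12.2
```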

    5.5 Dueling rabbits

Let's do some stranger examples, just so you can see the general pattern.

Here each pair of rabbits has a unit probability per unit time of fighting a duel with only one survivor. You might guess the Hamiltonian $a^\dagger a a$, but in fact:

$$H = a^\dagger a a - N(N-1)$$

Let's see why this is right! Here's what it does when we have $n$ rabbits:

$$Hz^n = z\frac{d^2}{dz^2}z^n - n(n-1)z^n = n(n-1)z^{n-1} - n(n-1)z^n$$

That's good: since there are $n(n-1)$ ordered pairs of rabbits, the rate at which duels take place is $n(n-1)$. This is the rate at which the probability of having one less rabbit goes up... and also the rate at which the probability of having $n$ rabbits goes down.

(If you prefer unordered pairs of rabbits, just divide the Hamiltonian by 2. We should talk about this more, but not now.)


    5.6 Brawling rabbits

Now each triple of rabbits has a unit probability per unit time of getting into a fight with only one survivor! We don't know the technical term for a three-way fight, but perhaps it counts as a small brawl or melee. In fact the Wikipedia article for 'melee' shows three rabbits in suits of armor, fighting it out:

Now the Hamiltonian is:

$$H = a^\dagger a^3 - N(N-1)(N-2)$$

You can check that:

$$Hz^n = n(n-1)(n-2)z^{n-2} - n(n-1)(n-2)z^n$$

and this is good, because $n(n-1)(n-2)$ is the number of ordered triples of rabbits. You can see how this number shows up from the math, too:

$$a^3 z^n = \frac{d^3}{dz^3}z^n = n(n-1)(n-2)z^{n-3}$$

    5.7 The general rule

Suppose we have a process taking $k$ rabbits as input and having $j$ rabbits as output:


By now you can probably guess the Hamiltonian we'll use for this:

$$H = (a^\dagger)^j a^k - N(N-1)\cdots(N-k+1)$$

This works because

$$a^k z^n = \frac{d^k}{dz^k}z^n = n(n-1)\cdots(n-k+1)\,z^{n-k}$$

so that if we apply our Hamiltonian to $n$ rabbits, we get

$$Hz^n = n(n-1)\cdots(n-k+1)\,(z^{n+j-k} - z^n)$$

See? As the probability of having $n+j-k$ rabbits goes up, the probability of having $n$ rabbits goes down, at an equal rate. This sort of balance is necessary for $H$ to be a sensible Hamiltonian in this sort of stochastic theory (an infinitesimal stochastic operator, to be precise). And the rate is exactly the number of ordered $k$-tuples taken from a collection of $n$ rabbits. This is called the $k$th falling power of $n$, and written as follows:

$$n^{\underline{k}} = n(n-1)\cdots(n-k+1)$$

Since we can apply functions to operators as well as numbers, we can write our Hamiltonian as:

$$H = (a^\dagger)^j a^k - N^{\underline{k}}$$

    5.8 Kissing rabbits


Let's do one more example just to test our understanding. This time each pair of rabbits has a unit probability per unit time of bumping into each other, exchanging a friendly kiss and walking off. This shouldn't affect the rabbit population at all! But let's follow the rules and see what they say.

According to our rules, the Hamiltonian should be:

$$H = (a^\dagger)^2 a^2 - N(N-1)$$

However,

$$(a^\dagger)^2 a^2 z^n = z^2 \frac{d^2}{dz^2} z^n = n(n-1)z^n = N(N-1)z^n$$

and since the $z^n$ form a basis for the formal power series, we see that:

$$(a^\dagger)^2 a^2 = N(N-1)$$

so in fact:

$$H = 0$$

That's good: if the Hamiltonian is zero, the master equation will say

$$\frac{d}{dt}\Psi(t) = 0$$

so the population, or more precisely the probability of having any given number of rabbits, will be constant.

There's another nice little lesson here. Copying the calculation we just did, it's easy to see that:

$$(a^\dagger)^k a^k = N^{\underline{k}}$$

This is a cute formula for falling powers of the number operator in terms of annihilation and creation operators. It means that for the general transition we saw before, we can write the Hamiltonian in two equivalent ways:

$$H = (a^\dagger)^j a^k - N^{\underline{k}} = (a^\dagger)^j a^k - (a^\dagger)^k a^k$$
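Here's a quick symbolic spot-check (ours, not from the text) of that falling-power formula on the basis vectors $z^n$:

```python
# A sketch, not from the text: check (a^dagger)^k a^k z^n = n^(falling k) z^n.
from sympy import symbols, diff, ff, simplify

z = symbols('z')
for n in range(6):
    for k in range(4):
        lhs = z**k * diff(z**n, z, k)   # (a^dagger)^k a^k applied to z^n
        rhs = ff(n, k) * z**n           # falling power n(n-1)...(n-k+1)
        assert simplify(lhs - rhs) == 0
print("identity verified on test cases")
```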


Okay, that's it for now! We can, and will, generalize all this stuff to stochastic Petri nets where there are things of many different kinds, not just rabbits. And we'll see that the master equation we get matches the answer to the problem in Section 3. That's pretty easy.

    5.9 References

    For a general introduction to stochastic processes, try this: