

Self-organization as structural refrigeration

Eric Smith
Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501

(Dated: August 28, 2002)

A class of linear, coupled, quantum harmonic oscillator systems is studied, which has a natural interpretation as a self-organizing reversible heat engine. Self-organization, even though it is explicitly dynamical, happens by standard Carnot refrigeration of “structural” elements, and is described within the framework of equilibrium statistical mechanics. Effective potentials for such systems are shown to generalize those of the static case, in a way that typically is required to compute responses to heterogeneous external constraints. The ignorance implied by a “classical” distribution over microstates, and measured by its entropy, becomes a function of both static and time-dependent state variables, and is generally smaller than the entropy of the same description, projected onto its static components alone.

I. SYNOPSIS

This paper has two main messages. One is that there are self-organizing processes requiring only thermodynamically reversible transformations. Organization in these cases happens by refrigeration of some components from unstructured to structured states, with the structure in turn supporting the process of refrigeration, and possibly separate engine processes that power it.

The second message is that even explicitly dynamical thermal systems, if their dynamics is entirely reversible, should be lumped with quasi-static cases in the class of systems tractable by “equilibrium statistical mechanics”. While this conclusion follows more or less immediately from definitions, it implies that constructions like the Helmholtz effective potential in general have extensions to incorporate intrinsically dynamical or time-dependent order parameters. Just as the static, configurational entropy relates steady system states to uniform properties of the external world (such as temperature or chemical potential), the extensions are needed to identify the response of dynamical components to heterogeneous couplings.
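
For orientation, the static construction being extended can be summarized in standard textbook notation (this is background, not notation from this paper): maximizing the Gibbs entropy of a distribution over microstates, subject only to normalization and a fixed mean energy, yields the canonical ensemble, and the Helmholtz potential is the corresponding Legendre transform of that one static constraint.

```latex
% Static case: maximum-ignorance distribution under one constraint.
% Maximize  S[\rho] = -\sum_i \rho_i \ln \rho_i
% subject to \sum_i \rho_i = 1 and \sum_i \rho_i E_i = U.
\rho_i = \frac{e^{-\beta E_i}}{Z},
\qquad Z = \sum_i e^{-\beta E_i},
\qquad F(\beta) \equiv -\frac{1}{\beta}\ln Z = U - T S
% (with k_B = 1, so T = 1/\beta).
```

The extensions discussed in this paper arise when the constraint set is larger than a single static mean energy, so that no such purely time-invariant transform exists.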

The purpose of the investigation presented here is to show how flows can sustain self-organizing systems in (sequences of) configurations that are not ground states of equilibrium free energies, and yet remain within proper application of equilibrium principles. The Legendre transform of the complete set of constraints on a maximum-ignorance distribution of microstates simply may have no representation in terms solely of time-invariant macroscopic configuration variables. The complete specification of the ensemble is then intrinsically dynamic, and the projection of its effective potential onto those configuration variables that are static specifies a different ensemble, whose classical observables necessarily omit aspects of the response to heterogeneous coupling. The projected ensemble is a coarse-graining of the one originally specified, and as a result naturally has a larger entropy.
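
The claim that a projected ensemble cannot have lower entropy than the one originally specified can be illustrated with a toy discrete distribution (the variable names and numbers below are illustrative, not the paper's notation): discarding the correlations between a "static" and a "dynamical" variable, keeping only the marginals, is a coarse-graining, and the resulting product ensemble has entropy at least as large.

```python
import numpy as np

def shannon(p):
    """Shannon entropy in nats, ignoring zero entries."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Toy joint distribution over a "static" variable x and a
# "dynamical" variable v.  The correlation between them is the
# information a static-only description discards.
p_joint = np.array([[0.40, 0.05],
                    [0.05, 0.50]])

p_x = p_joint.sum(axis=1)       # static marginal
p_v = p_joint.sum(axis=0)       # dynamical marginal
p_proj = np.outer(p_x, p_v)     # coarse-grained (projected) ensemble

S_full = shannon(p_joint.ravel())
S_proj = shannon(p_proj.ravel())

# Coarse-graining cannot lower the entropy (subadditivity).
assert S_proj >= S_full
```

The inequality is the elementary subadditivity of Shannon entropy; it holds for any joint distribution, which is why the projection in the text "naturally has a larger entropy."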

It is plausible that some of the order in living systems arises in this way, and a long-term goal of this research should be to determine which parts do, and to describe them with proper statistical ensembles. Nothing approaching that level of complexity is attempted in this paper, where not only biology, but even nonlinearity, are removed to obtain the simplest possible type of demonstrative models. It will be argued, though, that the principles instantiated in these models are the same as those that govern the more complex cases of reversible self-organization.

II. INTRODUCTION: FINDING EXTENSIONS TO A PARADIGM

A. What is to be explained

Self-organization (SO) is in some sense defined by opposition to a paradigm. Biologists don’t seem to find it strange that structured living matter exists and persists; they see the origins of drift and selection as the aspects of persistence to be explained [1]. Computer scientists have viewed the emergence of complex order in rule-based systems, like cellular automata, as a possible model for sources of order in nature, but work primarily to identify which details of mechanism might be shared [2]. That emergent order should be possible is commonly accepted in the world of algorithms, and the question of “‘how much’ or ‘what kind of’ order would be surprising” appears not to arise. Only in physics is self-organization the unifying and puzzling concept to be understood among diverse systems with emergent order.

The reason self-organization can be a unitary concept in physics, and not just a collection of separate problems (each linked to the context where it occurs) is that superficially it appears to lead to configurations violating the second law of thermodynamics [3, 4]. The classical configurations created moment by moment, and rendered dynamically stable in SO systems, are not ground states of the best approximate effective potentials one can construct for them from equilibrium statistical mechanics. They may be more richly structured, and as a result have entropies much lower, and free energies higher, than similarly constituted ground states of equilibrium systems.

The obvious common feature of SO systems, not shared by the examples where equilibrium statistical mechanics predicts the right ground states, is that the SO systems are open, and couple to heterogeneous environmental reservoirs or gradients. Both properties are essential, as openness to uniform environments is the assumption already made of equilibrium. But while it is easy to say that heterogeneous environmental coupling, and the flows induced by it, make equilibrium models invalid, the question remains how one predicts the correct dynamical states. The tendency of macroscopic ensembles to drift into disordered states is still present in SO systems, and must be related quantitatively to whatever induces order from flows.

Thus SO is not a puzzle because it leads to rare or anomalous experiences of nature; indeed everything from weather systems and life [5], to oscillating chemical reactions [6] and avalanches [7], makes it familiar. The concept exists only through the failure of equilibrium methods to generalize to heterogeneous inputs. This paper shows how one class of SO systems can be incorporated within a statistical framework, and how the puzzle of ordered instantaneous states is resolved for that class.

B. Dissipation versus conservation

Anomalous behavior of expected entropy measures from equilibrium thermodynamics is an adequate signature to define SO phenomenologically, and to imply that some new principles are needed to account for organization around flows. However, it fails to make a further distinction between intrinsically irreversible and reversible SO processes, for which the needed principles may be different.

The entropy of an ensemble, and its potential to change if the macro- or microstate representations evolve over time, arise from a formally well-understood interaction between the dynamics of the microstates, and the coarse-graining that defines the ensemble, or with respect to which it is subsequently measured [8]. A formal theory of dissipative systems requires a sequence of ensemble descriptions, in which the number of bits of information needed to identify the microstate observed in one instance grows with the time at which the observation is made. A formal description of the decrease in some subset of marginal entropies (the formation of structure), against the background of such a continuously changing parametrization, has not been carried out. A more phenomenological approach that appears to capture the leading behaviors of many dissipative, self-organizing systems is called the theory of “dissipative structures” [9].

It is fortunate that SO also occurs in much simpler systems, whose ordered states form reversible engines that power the replication of their own operating machinery [10, 11]. Reversibility implies that the amount of information needed to specify the ensemble is time-invariant, in whatever basis the specifying measurements are being made. The existence of heat flows and work accumulation attests that the classical measurement basis has an ever-changing relation to the microstates of the system. In particular, different fractions of the bits are needed along each measured component, to specify the same distribution at different times. The fact that the ignorance about the microstates remains fixed, though, suggests that it will generally be possible to find a representation in which it is manifestly time-independent.

Such a thermal “diagonalization” will be derived here, for a class of linear quantum harmonic oscillator models that capture the important features of actual reversible self-organizing engines. It will be shown that there is no time-independent representation that factors into fixed values of configuration variables, themselves independent of the time when they are measured. Furthermore, finite rates and timescales will be intrinsic to the model dynamics, so the time-independent representation will not be achieved by taking quasistatic limits. Nevertheless, the complete description of the ensemble remains essentially an “equilibrium” construction. Converting from the time-independent, to time-dependent representations, the classical thermodynamic notions of heat flow will be recovered, along with the associated “informational entropies” that are reduced when structures are formed. In keeping with the observation that both entropies must be counted to free the second law of thermodynamics from Maxwell demons [12], it will be found that structure formation is just another instance of the familiar equilibrium process of refrigeration.
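
The two facts at work here, that reversible dynamics leaves total ignorance fixed while the entropy of a component can fall, can be checked in a minimal numerical sketch. The model below is not the paper's oscillator system; it is a toy pair of two-level components at different effective "temperatures," coupled by a reversible excitation-exchange interaction. Under the unitary evolution, the total von Neumann entropy is exactly invariant, while the marginal entropy of one component drops, i.e. that component is refrigerated.

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy in nats from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# Two two-level "components" at different effective temperatures
# (diagonal populations chosen for illustration only).
rho_a = np.diag([0.6, 0.4])     # hotter, more mixed component
rho_b = np.diag([0.9, 0.1])     # colder, more ordered component
rho = np.kron(rho_a, rho_b)

# Reversible coupling: excitation exchange in the basis |00>,|01>,|10>,|11>.
H = np.zeros((4, 4))
H[1, 2] = H[2, 1] = 1.0

# Unitary evolution U(t) = exp(-iHt), built from the eigenbasis of H.
evals, V = np.linalg.eigh(H)
def U(t):
    return V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

def marginal_a(rho4):
    """Partial trace over component b."""
    r = rho4.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3)

S_tot0 = vn_entropy(rho)
S_a0 = vn_entropy(marginal_a(rho))

t = np.pi / 2                   # time of a complete exchange
rho_t = U(t) @ rho @ U(t).conj().T
S_tot = vn_entropy(rho_t)
S_a = vn_entropy(marginal_a(rho_t))

# Total ignorance is invariant under the reversible dynamics...
assert abs(S_tot - S_tot0) < 1e-9
# ...while component a has been "refrigerated" into a more
# ordered (lower-entropy) marginal state.
assert S_a < S_a0
```

The invariance of the total entropy is exactly the reversibility statement above; the falling marginal entropy is the structure-forming refrigeration, and no bits have been lost, only moved between components and their correlations.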

It will become clear in the presentation that the difference between dissipative and conservative structures is one of the mathematics of coarse-grainings by which they are described. That is enough to make the distinction fundamental, since each method must be coherent for its scale of description. However, the inference of averages of observables from ensembles of experimental measurements is only part of the specification of classical states, because it leaves open the question whether the experiments create maximum-ignorance ensembles consistent with those measurements. Thus the same phenomenon may admit more than one level of statistical description, and it will remain unresolved whether the mechanisms of reversible SO are relevant also to cases usually treated as irreversible.

C. Layout of the paper

A central concern of the paper is to make as little algebra as possible illustrate some very general principles. Therefore the linear oscillator system to be analyzed in detail is presented relatively late, and phenomenological discussions of several physical systems are presented first to give the subsequent model motivation and context. The algebra is intended to show how various physical systems can take on one or another property of self-organization, but much of the physics is contained in the discussion and comparison of familiar examples, even where detailed microscopic models of them are not attempted.

The main concern of Sec. III is how closing the reservoirs of a reversible SO system should produce oscillatory dynamics, which are then naturally interpreted in terms of storage and inertia, or equivalently capacitance and inductance. The phase space topology of the resulting transport cycles will provide the mental picture for the rest of the paper. Three familiar physical examples will simply be discussed qualitatively as SO systems, to highlight different aspects of this picture. Specific concerns are: where the information carried in structure comes from, the role of nonlinearities in creating the apparent “attractor” nature of self-organized states (and why they can be omitted here), and the ability to trace information flow to quantum coherences, while not making any special use of coherence beyond that already responsible for classical dynamics.

Much of the reason refrigeration is an attractive concept, with which to discuss self-organization, is that it fits so naturally into the explanation of biochemistry. Sec. IV therefore takes a deliberately explicit detour into Schrödinger’s original description of the unusual entropy flows of life, and shows how the same processes correspond to details of the physical systems just presented. Some rather surprising modern findings suggest that some fundamental biochemical processes may be describable in the simple terms of reversible SO.

When the physical features discussed in these two sections are captured in an explicit dynamical model, it will be so simple as to be nearly trivial microscopically. The model is chosen this way to magnify the point that everything about heat and information flows comes from the interaction of microscale dynamics with a prescription for coarse-graining that defines the equivalent notions “maximum-ignorance thermal ensemble”, or “classical state”. Careful use of these terms will make all of the following results essentially obvious from unpacking the definitions, so Sec. V gives a careful review of the terms as they are used here.

Sec. VI (at last) introduces the actual oscillator model, and relates the underlying quantum structure and coupling to the classical high-temperature dynamics. The coarse-grained description is derived, and shown to have the correct heat flows to be interpreted as the engines of Sec. III, and the correct information flows to satisfy various definitions of self-organized. Examples will also show how easily reversibility is lost from overly restrictive classes of macrostate description, even though it is readily apparent in more subtle (time-nonlocal) parametrizations.

Sec. VII abstracts the example results to general statements about the structure of effective thermal potentials for heterogeneous environmental couplings. It retains the example variables, for the sake of having a definite notation, but ties the results explicitly back to observations about living matter, showing that essentially the same mistakes one makes by treating life in flow-independent terms are reproduced here.

There are five appendices which do the elementary algebra associated with quantum oscillators, coherent states, thermal density matrices and their coherent-state representations, and aspects of coarse-graining and state preparation. One or another aspect of the content will be familiar to most readers, in which case the appendices simply establish notation. However, several specific tricks, such as the use of coherent-state representations to impose heterogeneous thermal initial conditions, or the surprising fact that all marginals of the resulting distributions remain exactly thermal, are new to this paper. The appendices are written so that they can be read as an essentially independent thread on technical methods, leaving interpretation issues to the main text.

III. THREE GUIDING PHYSICAL EXAMPLES

While the essential opposition of SO to static entropy maximization may be definable abstractly, it will always be possible to find physical systems that instantiate different aspects of this opposition in different ways, and there probably is no single notation or model appropriate to capture them all. Organized states may be steady or transient, asymptotic or recurrent, and the order may be expressed in spatial, temporal, or even compositional variables under different characterizations. In particular, the steadiness of the ordered form, or its entropy difference from a flow-less equivalent, may depend on nonlinearities of either the microscopic or macroscopic evolution, and the type of order formed. To disentangle these, and see what minimally is necessary for a system to be called “self-organizing”, it is best to consider closely related physical instances. Three are chosen here, each of which introduces a separate step in understanding either the principle or common features of the phenomenology.

The first point to appreciate is that one should understand dynamical heat flows and structure formation the same way the quasistatic case was understood [13]. One should embed the open SO system in an explicit model of the environment, and then close that environment so that the states of the whole system can be indexed explicitly, and their evolution described. There is no particular value in doing this for a dissipative structure-forming system, since even in a closed system entropy is not expected to be conserved. For reversible systems, though, it affords a great simplification.

The second point to argue is that the nonlinearities present in many actual SO systems are not the ultimate origin of their organizing behavior, and so can be omitted in looking for the simplest possible dynamical model. This will be done by finding a linear system whose SO properties have point by point correspondences with those in the first example, even though they are not usually described as self-organized.

Finally, a familiar generalization of the simplest linear example will help illustrate how dynamical nonlinearity at the classical level can naturally prolong the lifetime of the ordered state, and lead to regular limits with unbounded environments. The combination of these physical arguments will be invoked to keep the actual explicitly modeled dynamics simple, while extrapolating their conclusions to more complex cases.

A. Closing dynamical reservoirs

The simplest natural model of reversible SO is the traveling-wave thermoacoustic engine [14], analyzed elsewhere as a dynamical phase transition [10, 11]. It is a periodic, gas-filled acoustic resonator coupled thermally to two reservoirs, which spontaneously generates a traveling sound wave if the temperatures of its two reservoirs differ. The order parameter for the phase transition is the amplitude of the sound wave, and its phase degeneracy in time (or around the resonator) arises from a spontaneous symmetry breaking by Goldstone’s theorem.

These engines have a classical description in terms of linear acoustics of perfect gases, and require only those forms of conduction between the engine volume and the reservoirs that have sensible reversible limits. Since no internal structure is assumed for the reservoirs, their energy-carrying excitations may also be taken to be phonons, leading to a system model with one species of excitation. The coherent traveling wave arises from condensation of phonons from the maximum-entropy thermal distribution, into a ground state of the resonator, an entropy-reducing process. Because the coherent wave is also the “machinery” of the engine, condensation has an additional interpretation as structure-forming.

These engines thus satisfy one criterion sometimes given for self-organization: entropy reduction in a component from its equilibrium condition [3]. They can also be shown to satisfy a different criterion – increase in the statistical complexity of the component [15] – as follows. The simplest description of all the regularities of a quiescent resonator is its temperature, whereas the simplest description of the ordered state must specify its mean temperature, and the amplitude and phase of the sound wave, or some set of values equivalent to these. The most compact representation of the regularities of the ordered state is therefore larger than for the disordered state, which is the definition of increased statistical complexity.

The thermoacoustic systems were purposely designed as engines, and their work output increases the coherent phonon count if the engine is not coupled to a load. When this phonon pumping into the coherent state is identified as essentially a process of refrigeration, the systems obtain a compact description as coupled engine/refrigerator pairs, a representation with applications to biochemistry.

Idealized traveling-wave thermoacoustic engines are reversible, which means both that classically, they have Carnot efficiency between their reservoirs, and that they operate in reverse as refrigerators. If a traveling wave is already present, along with a temperature difference of sign opposite the sign that drives that wave, the inertia of the wave will be slowly consumed as it pumps entropy from the lower to the higher-temperature reservoir. For reservoirs with positive, finite specific heats, this suggests a natural phenomenology:

If the reservoirs of a reversible thermoacoustic engine are made finite, and set at different temperatures, the process of self-organization of sound from a quiescent into a coherent state in the engine is initially indistinguishable from the case when they are infinite. However, the pumping of heat from finite reservoirs must equalize and eventually reverse the temperature difference, and should culminate in a quiescent state with the sign of temperature difference reversed. This quiescent state must then collapse into a self-organized traveling wave in the opposite direction, which runs until it restores the initial thermal inequality.

While such oscillating structure provides a generalized framework of heat and work flows in closed reversible SO systems, it is inconvenient to try to model these particular engines in microscopic detail, because thermoacoustic phonon creation and transport are moderately nonlinear. The phase space topology of their cycles, though, and the types of structures formed, immediately suggest a linear system with the same essential features. They also have a feature that it will be desirable to preserve in simpler models: the growth rate of phonons in the engine is proportional to the temperature difference in the reservoirs, a direct consequence of reversibility.

B. Isolating the linear limit

The cycling of the thermoacoustic engine between finite reservoirs is analogous to the cycling of the current in a capacitor/inductor (LC) circuit [16], except that familiar LC circuits are linear, and their states of order are not usually called self-organized. Therefore it is useful to make the correspondence of the two systems precise, and show that the LC circuit is a sufficient guide for the algebraic model.

Like all-phonon engines, LC circuits have one species of excitation (electrons), and in a dilute-gas limit where the implications of Fermi statistics have already been absorbed in forming the conductor [17], these can be treated as spinless, first-quantized particles. The chemical potential for electrons in a capacitor is the voltage, and for phonons in a resonator, the temperature.

The order parameter, and the classical degree of freedom that receives the “work” from current flow, is the inductor’s magnetic field. Maxwell’s equations relate the change of current to the energy lost or gained by the field, and the resulting inductance is the inertia of the system. Moreover, the charge and entropy states of the conductor are much like the phonon states of the resonator. The average particle count is the same independent of the state of current. Macrostates with currents simply correspond to displacements of charges from symmetric to asymmetric populations of wavenumber microstates.

The rate of growth of the current in an LC circuit is also proportional to the voltage difference of the capacitor plates, the analog of the temperature dependence in the thermoacoustic case. The only difference, and the source of linearity, is that the rate of current growth is not also proportional to the current density in the LC case, whereas it is proportional to the sound amplitude in the engine.
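
The consequence of this one difference can be made concrete in a toy integration (rate constants and step sizes below are arbitrary illustrative values, not parameters of either physical system): at fixed driving difference, the linear law gives current growing linearly in time from a quiescent start, while the multiplicative engine law amplifies a small seed geometrically.

```python
# Early-time growth from a small seed, holding the driving
# difference (voltage or temperature gap) fixed.
dt, steps = 1e-3, 1000
drive = 1.0        # fixed V (circuit) or delta-T (engine), arbitrary units
k = 1.0            # illustrative rate constant

I = 0.0            # LC current:        dI/dt = k * drive
A = 1e-3           # engine amplitude:  dA/dt = k * drive * A
I_hist, A_hist = [], []
for _ in range(steps):
    I += k * drive * dt
    A += k * drive * A * dt
    I_hist.append(I)
    A_hist.append(A)

# Linear law: current after n steps is proportional to elapsed time.
assert abs(I_hist[-1] - k * drive * dt * steps) < 1e-9
# Multiplicative law: equal time intervals multiply the amplitude by
# equal factors, i.e. the growth is geometric (exponential).
ratio_early = A_hist[99] / A_hist[0]
ratio_late = A_hist[999] / A_hist[900]
assert abs(ratio_early - ratio_late) < 1e-3
```

The multiplicative law is what makes the quiescent engine state unstable to seeding, while the linear LC law merely integrates the drive; this is the precise sense in which the LC circuit is the linear limit of the engine.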

It is worth noting, too, that LC circuits are classic systems to model quantum mechanically, because of their extreme simplicity. The phase correlations among quantum states can be traced explicitly through the correspondence principle to their expression in classical dynamics at high temperature. In order to understand conservation of entropy, and especially its “flow” as heat from one component to another, it will probably always be necessary to resort to microscopically explicit mechanisms. All that is essential is that the mechanisms responsible for SO not be required to make special use of low temperature or coherence effects, beyond those already responsible for classical dynamics.

The final difference between LC current formation and thermoacoustic sound generation is that a vast number of wavevectors is differentially populated in a classical electric current, whereas only the two directions of the fundamental mode are populated in most thermoacoustic experiments. This can be traced to the fact that the coupling points of thermoacoustic resonators are close compared to the sound wavelength, whereas they are distant compared to the electronic de Broglie wavelength. The cooperative effect of nonlinear amplification in the engine also plays a part.

The only significance of these engineering details for self-organization, though, is that much less information is stored in the thermoacoustic wave, relative to the entropy of the whole volume of gas, than is stored in the electric current relative to the Fermi gas. Precisely because the informational entropy so easily gets lost in familiar engines, it is ignored in classical conservation laws like Carnot’s theorem. In systems like the circuit, where it is comparable to the background, it cannot be ignored without violating the second law at leading order, so it is noticed and made part of the thermal accounting. The linear model below will have the same classical equations of motion as the LC circuit (exactly), and will have a natural small-parameter expansion separating the “thermal” from the “informational” entropies.

The most important feature of the LC example that does not capture the intuition of self-organization is that the ordered state is not “favored”; it is just one stage of a harmonic oscillation, with continuously varying order parameter. The sense that self-organized states should somehow be basins of attraction is probably responsible for the almost exclusive pursuit of nonlinear, dissipative SO models. The third example will therefore be a small modification of the LC circuit, to show how classical nonlinearity can create the reversible equivalent of basins of attraction, by standard phase-space mapping.

C. Prolonging ordered states

An equivalent description of the limitation of the LC circuit as an intuitive model of SO is that it has no regular limit as the capacitance (reservoir capacity) diverges. The frequency of oscillation goes to zero, and even at fixed initial voltage, the maximum current diverges, the initial growth of current from quiescence becoming linear in time for longer and longer periods.

Natural systems are regarded as self-organizing precisely when they attain non-equilibrium states in a way that does not depend on details of the reservoir. The LC circuit can be given this property by making the inductor the rotor coil of a DC electric motor [16]. Work is then stored in both magnetic fields and mechanical motion, while the back-EMF induced by rotation sets an upper limit to the steady-state angular velocity attainable with a given voltage drop. While an ideal, nondissipative DC motor will in general have complex oscillations of current and rotation, the magnitudes of both can be kept finite as the capacitance diverges, as long as the starting voltage is held fixed.

The dynamical configuration variables natural to the LC circuit will be charge/current oscillations of different phases and magnitudes. For the DC motor, these will be replaced with more complex histories, in which the intervals of motor rotation in a given direction will become longer and longer as the capacitance diverges. The histories are still classically deterministic, and degenerate with respect to time translations, until they are specified by initial conditions. However, because the intervals of running will become more prolonged relative to the onset transients, more information about the onset time will become encoded in (generally time-nonlocal) correlations of the running state, and disappear from instantaneous configurational descriptions.

As will be shown below (in a simpler example), classical description based on time-local properties is an instance of a coarse-graining, which may lose some detailed information about the specification of a thermal ensemble, but still keep the leading information that there is a dynamically ordered state. The speculation is that this type of classical nonlinearity, together with a weakly entropy-producing coarse-graining, can account for the attraction property of SO states in systems operating near reversible limits.

IV. WHAT IS LIFE

The three physical examples give a conceptual framework from which to re-examine the order in living systems. The way thermoacoustic engines "build" their own structure from the work the existing structure delivers is not obviously different in principle from the way living systems build large biomolecules by "condensation" of their constituents from solution. The information contained in the thermoacoustic condensed phase is a one-parameter cartoon of the much richer information contained in the compositional and network structure of a living cell.

A gratuitous feature of the thermoacoustic engines is the coherence of the ordered state, which the chemical currents through biological reaction chains seldom if ever have. However, the LC circuit and DC motor examples show that coherence is not fundamental to the process of organization, and their electronic currents are an obviously closer analogue to chemical currents in reaction networks or cycles.

Discussing life in terms of engines and refrigerators significantly sharpens the phenomenological characterization given by Schrödinger's 1945 What is Life? [3]. He essentially defines life by just the opposition to the equilibrium paradigm elaborated above for general self-organization. Living systems are physically distinguished by their ability to extract "negative entropy" from their environments, and Schrödinger proposed explicitly that new principles would be needed to explain the emergence and stability of such structures. While this characterization is true at some level (as a reference to the excess free energy of living over nonliving matter), it glosses over the essential conditions under which negative entropy (structural information) transfer occurs, and has led to widely varying speculations about what new principles would be required.

In the models developed below, a self-building engine simply rejects a little more entropy at its exhaust phase than it imports in the intake phase. This correction to the classical Carnot theorem is precisely the information added to (entropy removed from) the engine's internal state. That there should be such corrections is expected, since the engine itself comprises a pair of reservoirs acted on by refrigeration. The engine is only able to do this when there is a potential between the reservoirs, which presupposes some bottleneck to direct (dissipative) conduction. Thus, the state in which the engine leaves its "environment" is not one accessible by the environment if the engine is removed (at least not on the same timescales) [5].

The conditions under which living matter decreases its entropy are thus no different from any other conditions in which work can be extracted to power a refrigerator. Just as engines may be chemical instead of thermal, the chemical version of refrigeration is the most natural tool from thermodynamics with which to explain the structure of biomatter.

A. Is biochemistry reversible?

In principle any chemical reaction can be run in either a forward or its reverse direction by appropriate buffering of the reagents. However, the free energy differences of many biomolecules are enough larger than the chemical potential differences of their physiological buffers that, individually, they are essentially irreversible. A case in point is the hydrolysis of pyrophosphate to orthophosphates, the energy delivery mechanism of ATP [18].

It is therefore a somewhat surprising empirical observation that many networks of cellular reactions attain large fractions of ideal efficiency, and there are even some reversals that can be driven at physiological conditions. DNA polymerization, powered by ATP, can be reversed to depolymerization under some conditions, by changing the tension on the complement strand [19]. More remarkably, the tricarboxylic acid (TCA, or Krebs) cycle, the backbone of intermediary metabolism, is known to operate in both oxidative (forward) and reductive (reverse) directions, in organisms with mostly common internal chemical environments [20].

If it can be shown that reaction networks like the TCA cycle can be taken near reversible limits (under appropriate conditions on temperature, diffusion, or buffering) without qualitatively changing their self-replicating character, it will be possible to ask whether these are stabilized in nature by essentially the mechanism studied here. The universality of the Krebs cycle might then be accounted for with "new principles" of only the weakest sort: namely, application of the underlying assumptions of equilibrium statistical mechanics to those time-dependent cases where they are already valid.

B. Generalized flow ground states

Recall that the paradox defining self-organization was that apparent equilibrium ground states are not stable in the presence of flows, while apparently more structured sequences of configurations are. When equilibrium ground states are stable, it is because arbitrary perturbations away from them rapidly evolve into correlation structures irresolvable by the coarse-graining that defines the equilibrium effective potential.

The framework built here introduces what could be called a generalized flow ground state [5]. It is the minimizer of the effective potential appropriately generalizing the static free energy to incorporate heterogeneous environmental constraints, and to account for the flows they induce. A physical understanding of the universality of the Krebs cycle would be a calculation showing that supra-thermal population of its chemical species characterizes a generalized flow ground state for matter, within primordial geochemical or current photosynthetic boundary conditions.


V. LANGUAGE FOR COARSE-GRAINING AND THERMODYNAMICS

One can discuss thermoacoustic engines, LC circuits, motors, and life, in the language of equilibrium entropies computed from their various distributions of particles, excitations, or chemicals. Often the distributions are stable, and can be inferred statistically from ensemble measurements of the systems in question. However, building thermodynamic descriptions is not just a problem in statistical inference; it hinges on the additional stipulation that the variables listed give the entire knowledge of the system, and only at that point do they take on the interpretation of classical state variables. The reason equilibrium descriptions of flow-supporting systems predict incorrect structures is that they fail to satisfy this criterion of completeness.

The one reliable language in which to discuss the correspondence of classical to microscopic states in thermodynamics is Jaynes's language of maximum ignorance [21]. It offers a natural way to relate systems of measurement to both the preparation and characterization of ensembles of microstates, and then to assign the values of measured quantities to uniquely defined classical states. Maximum ignorance is preferable to an assumption of ergodicity, as a basis for the construction of ensemble descriptions of closed systems, because it separates the potentially unbounded time interval on which those systems may be sampled from the processes or interactions responsible for uncertainty about them. In reversible systems, this separation is fundamental, and when the maximum-ignorance principle is properly applied, it becomes more or less obvious why reversibility is thermodynamically equivalent to equilibrium.

This section relates the concepts of measurement, coarse-graining, and classical state specification, in terms of maximum ignorance. It shows how classical states specified by different subsets of average observables with the same values are defined as different states, and reminds the reader that a state is not classical by virtue of large size, but by correspondence with a distribution of more finely grained states. Most important, since ergodic evolution cannot be used as the fundamental justification for thermodynamics of a linear system, it shows how the concept of "classical state" survives as a description of the relations between system preparation and later characterization, through a basis of measurement choices.

The definitions will first be listed, and then a simple argument enumerated showing that reversibility is by definition a version of equilibrium. It will become clear in the following sections why simply tracing the macroscopic averages of observables over time defines a continuous reparametrization of the classical state variables that often fails to be reversible, even though an equally simple basis including time-nonlocal history descriptions may be possible.

A. Definitions

• A coarse-graining is a many-to-one map from densities over microstates to densities over microstates, satisfying the conditions set forth in Ref. [8]. It defines an equivalence relation on the densities in the domain, with each equivalence class being the preimage of a unique density in the range.

• Any coarse-graining introduces some collection of variables, whose values index the equivalence classes. Because it implies maximal ignorance of the microstate, given the class, the values of the indexing variables label the complete set of constraints used to restrict the distribution of microstates. If the constraints are averages of observables, it may be possible to refer physically to the class as the maximum-ignorance distribution consistent with the constraints. Shannon entropy is then the measure of ignorance.

• A classical state is a particular set of values of the indexing variables created by some coarse-graining. When the indexing variables are physical (such as averages of observables), they can inherit a topology or group structure which then characterizes the collection of classical states.

• In closed systems, measurements which control some but not all of the microscopic degrees of freedom may be used to prepare classical states. Each such preparation algorithm corresponds to a maximum-entropy distribution, constrained to yield the given values from a completely specified set of coarse-grained measurements. The classical state then corresponds to the ensemble of instances admitted by the preparation algorithm, and it is sensible to ask what distributions of subsequent measurements are obtained in that state.

• By definition, the entropy of any classical state, labeled under the coarse-graining in which it was prepared, is fixed. The entropy in any other coarse-graining can only be equal or larger, because the prepared state variables are the complete set of constraints placed on the distribution [8]. Note that coarse-grainings based on measurements of even "the same" configuration variables at different times are in general different. Entropy increase of a classical state under subsequent measurements thus reflects the ignorance of the values of the constraint variables with which the state was prepared.

• When there is more than one coarse-graining that yields the same entropy for a classical state, there is more than one system of measurements sufficient to place the same constraints on the distribution in question. There must then be a bijection between the possible values of state variables in the two systems. If the two coarse-grainings are based on configuration variables at different times, the classical state is said to evolve reversibly. Note that there can be multiple coarse-grainings compatible with the same values for some state variables, which admit reversible evolution, or there may be no reversible evolution consistent with a given coarse-graining. (Examples of both cases will arise below.)

• Thermodynamic reversibility is thus a property of the system of measurements specifying a coarse-graining, which any given system of measurements may or may not have.

• Note that classical states at different levels in a hierarchy of coarse-grainings are by definition different, even if they have the same values of the state variables they share. A collection of classical states may thus constitute a refinement of a classical state at a coarser scale. The entropy of a classical state is the entropy of the finest coarse-graining in which it can be described, and that entropy is equal to the entropy in the preparation variables.
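The entropy ordering asserted in these definitions is easy to exhibit numerically. The sketch below is a toy illustration with assumed numbers, not a model from this paper: it builds the maximum-ignorance distribution over six microstates subject to one average constraint, solving for the Lagrange multiplier by bisection, and compares it with the coarser description in which the constraint is dropped. Removing a constraint can only raise the Shannon entropy.

```python
import numpy as np

states = np.arange(1, 7)          # six microstate labels
target_mean = 4.5                 # the single average constraint

def gibbs(lam):
    """Maximum-ignorance distribution exp(-lam*x)/Z for multiplier lam."""
    w = np.exp(-lam * states)
    return w / w.sum()

# Bisection on the multiplier: the constrained mean is monotone decreasing in lam.
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if gibbs(mid) @ states > target_mean:
        lo = mid          # mean too large: need a larger multiplier
    else:
        hi = mid
p = gibbs(0.5 * (lo + hi))

S_constrained = -(p @ np.log(p))    # entropy with the average constraint imposed
S_unconstrained = np.log(6.0)       # entropy of the coarser (uniform) description

assert abs(p @ states - target_mean) < 1e-6
assert 0 < S_constrained < S_unconstrained
```

The same monotonicity holds for any nested pair of constraint sets: a description that imposes fewer constraints on the same ensemble never has a smaller maximum-ignorance entropy.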

B. Reversibility as equilibrium

It was shown in Ref. [10], through extensions of the formalism of finite-temperature field theory, that for a particular model of engines, reversibility was indistinguishable from equilibrium by a change of variables. That result could have been anticipated on the following more general grounds.

1. Hamiltonian dynamics implies that microstates have an indexing that is independent of time.

2. Thermodynamic reversibility implies that macrostates have an indexing that is independent of time (there is a bijection between any two indexings at different times, so either may be treated as canonical). This is just the statement that the classical state variables of a reversible system evolve deterministically.

3. Formally, thermodynamics is a time-independent theory of classical state variables. The real content of this definition is that a fixed set of conditions – usually those that specify initial preparation criteria – is used to constrain the indefinite set of measurements of an ensemble of microstates that can be made on an unbounded time interval. Cautioned by the definitions above, one sees that this does not automatically imply that the state variables have a representation in terms of either time-independent, or even time-local, configuration values. All deterministic classical theories have some time-independent representation, but they are not in general static.

FIG. 1: Diagram of the Bloch-crystal engine. The big circle represents the annular resonator, even though it only has two independent oscillator components. Axes of symmetry for spatial standing waves are marked x and y, and axes of spatial standing waves coupled to the reservoirs are at angles θ0 and −θ0 relative to x. Spatial modes in each reservoir are indexed m = 0, …, M_{L,R} − 1 in each of Left and Right sectors. There is a number-exchange coupling between standing waves in the resonator and the m = 0 spatial oscillator of the corresponding reservoir.

4. Thus the assumptions of "equilibrium thermodynamics" define not so much a theory of stasis as the more general theory of fixed ensemble constraints, given once and forever, and this includes the cases of classical reversibility. The price of restricting attention to static representations is that they can often admit only constraints imposed uniformly by the external world. Explicitly time-dependent representations naturally arise from initial constraints that are heterogeneous in a basis of non-eigenstates. (Indeed, this is the main demonstration below.)

VI. MODELING ENGINE AND RESERVOIR DYNAMICS: THE BLOCH CRYSTAL ENGINE

A linear model illustrating the phenomena discussed up to this point is shown in Fig. 1. It is built from an even number M of linear, quantum harmonic oscillators with identical level spacing. Two oscillators are considered to be the engine, and the remaining M − 2 are collected into symmetric "Left" and "Right" reservoirs. Definitions and notation for the quantum harmonic oscillator are reviewed in App. A, along with a number of manipulations of scalar and vector operators that will be used in the following constructions. Definitions of coarse-grainings, and the forms that will be applied to this system, are given in App. B.

Elementary excitations along orthogonal axes x and y in the engine are created by two raising operators a†_x and a†_y, respectively. The excitations can be standing-wave phonons, in which case this is identical to the (second-quantized) low-energy effective description of the thermoacoustic engine in Ref. [11]. Alternatively, they can be a low-density approximate description of electrons in opposite wavevector states in a one-dimensional inductor coil.

The Hermitian conjugate operators to a†_x and a†_y are called a_x and a_y, and the excitation number operators are n_x and n_y. In terms of these, the free Hamiltonian for the engine is just its total excitation number:
\[ H^E_0 \equiv n_x + n_y . \tag{1} \]

The reservoirs are given slightly more structure, as Bloch crystals, with the nearest-neighbor Hamiltonian
\[ H^L_0 \equiv \sum_{m=0}^{M_L-2} \left\{ n_{L\,m} - \frac{\gamma}{2} \left[ a^{\dagger}_{L\,m}\, a_{L\,m-1} + a^{\dagger}_{L\,m}\, a_{L\,m+1} \right] \right\} \tag{2} \]
for the left reservoir, and similarly for L → R. The index m is periodically identified, and M_L = M_R, so that M = 2M_L + 2.

The creation operator for a normalized wavevector state in the L reservoir is
\[ a^{\dagger}_{L\,k} \equiv \frac{1}{\sqrt{M_L - 1}} \sum_{m=0}^{M_L-2} e^{-ikm}\, a^{\dagger}_{L\,m} , \tag{3} \]
so that in terms of the associated number operators
\[ H^L_0 \equiv \sum_{k} n_{L\,k} \left( 1 - \gamma \cos k \right) , \tag{4} \]
with the sum over k ∈ (0, …, M − 1) × 2π/M, and similarly for L → R.
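The dispersion relation in Eq. (4) can be checked numerically: the single-particle matrix of the nearest-neighbor ring Hamiltonian is circulant, and its spectrum is exactly 1 − γ cos k. A minimal sketch, with an assumed ring of N sites standing in for the reservoir oscillators and an assumed value of γ:

```python
import numpy as np

N = 8            # assumed number of sites on the periodic ring
gamma = 0.4      # assumed Bloch exchange coupling

# Single-particle matrix of Eq. (2): 1 on the diagonal,
# -gamma/2 on nearest neighbors, periodically identified.
h = np.eye(N)
for m in range(N):
    h[m, (m - 1) % N] += -gamma / 2
    h[m, (m + 1) % N] += -gamma / 2

# Eq. (4): eigenvalues 1 - gamma*cos(k) at the allowed k = 2*pi*j/N
k = 2 * np.pi * np.arange(N) / N
analytic = 1 - gamma * np.cos(k)

assert np.allclose(np.sort(np.linalg.eigvalsh(h)), np.sort(analytic))
```

The band edges 1 ∓ γ correspond to k = 0 and k = π, so for γ < 1 all single-particle frequencies stay positive, as the thermal construction below requires.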

The coupling of the engine to the reservoirs is a simplified version of the coupling used for real thermoacoustic engines, without the nonlinearity. A convenient form comes from the interaction Hamiltonian
\[ H_{\rm int} = -\frac{g}{\sqrt{2}\cos\theta_0} \left[ a^{\dagger}_{\theta_0} a_{L\,0} + a^{\dagger}_{-\theta_0} a_{R\,0} + {\rm h.c.} \right] = -g \left[ a^{\dagger}_x a_{S\,0} + \tan\theta_0\, a^{\dagger}_y a_{A\,0} + {\rm h.c.} \right] , \tag{5} \]
where the standing-wave excitations at angles ±θ0 are created by the operators
\[ a^{\dagger}_{\theta_0} \equiv \cos\theta_0\, a^{\dagger}_x + \sin\theta_0\, a^{\dagger}_y \tag{6} \]
and
\[ a^{\dagger}_{-\theta_0} \equiv \cos\theta_0\, a^{\dagger}_x - \sin\theta_0\, a^{\dagger}_y , \tag{7} \]
respectively.

respectively. Subscript 0 on the reservoir operators de-notes spatial index m = 0, since k = 0 would make theBloch ring pointless. g is the coupling strength of theengine to the reservoirs, and may be taken small or oforder unity, as desired.

The operators in the first line of Eq. (5) do not have orthogonal canonical commutation relations at general θ0, so the second line gives an expansion in operators that do, with symmetric and antisymmetric lowering operators in the reservoirs defined as
\[ a_{S\,0} \equiv \frac{a_{L\,0} + a_{R\,0}}{\sqrt{2}} \tag{8} \]
and
\[ a_{A\,0} \equiv \frac{a_{L\,0} - a_{R\,0}}{\sqrt{2}} . \tag{9} \]

Everywhere h.c. denotes the Hermitian conjugate of the terms that appear explicitly.

The closed engine/reservoir system evolves microscopically under the Hamiltonian
\[ H = H^E_0 + H^L_0 + H^R_0 + H_{\rm int} . \tag{10} \]

There is a spatial basis {x, y, m_L, m_R}, which naturally decomposes into components on which heterogeneous environmental couplings can be imposed. (Alternatively, one could use {x, y, k_L, k_R}.) The excitations in these bases differ from excitations in the eigenstate basis by a unitary transformation of the raising and lowering operators, and this property leads to nontrivial flow of particles among the spatial or wavenumber projections.

A. Preparation of heterogeneous thermal initial conditions

A convenient separation of scales can be achieved, between the fundamental oscillator frequency (set to one in these units) and the frequency of particle exchange, by taking the angle θ0 very close to π/4 in the operating state. It will be assumed below that θ0 < π/4, as in Fig. 1.

This coupling admits a very convenient way to impose heterogeneous thermal initial conditions, because at θ0 = π/4 the engine operators are orthogonal, with canonical commutation relations. Therefore one can imagine starting with a "preparation coupling"

\[ H_{\rm int,\,prep} = -\frac{g}{\sqrt{2}\cos\theta_0} \left[ a^{\dagger}_l a_{L\,0} + a^{\dagger}_r a_{R\,0} + {\rm h.c.} \right] = -\frac{g}{\sqrt{2}\cos\theta_0} \left[ a^{\dagger}_x a_{S\,0} + a^{\dagger}_y a_{A\,0} + {\rm h.c.} \right] , \tag{11} \]
where a†_l and a†_r are defined as in Eq. (6) and Eq. (7), respectively, except with θ0 → π/4. The entire left and right sectors will then decouple, each can be prepared in a thermal state at an independently specified temperature, and the density matrix for the ensemble will then be the product of densities for the two sectors. Coupling can then be introduced with perturbative strength ν ≡ g (1 − tan θ0), by simply rotating the reservoir contact points slightly toward the x direction.

The easiest case in which to understand the origin of particle transport, and indeed the only case where time-local measurements will lead to reversible dynamics, is M_L = M_R = 1. Letting the Bloch exchange coupling γ → 0, the whole preparation Hamiltonian can be written in matrix form as
\[ H_{\rm prep} = \begin{bmatrix} a^{\dagger}_l & a^{\dagger}_{L\,0} \end{bmatrix} \begin{bmatrix} 1 & -\frac{g}{\sqrt{2}\cos\theta_0} \\ -\frac{g}{\sqrt{2}\cos\theta_0} & 1 \end{bmatrix} \begin{bmatrix} a_l \\ a_{L\,0} \end{bmatrix} + \begin{bmatrix} a^{\dagger}_r & a^{\dagger}_{R\,0} \end{bmatrix} \begin{bmatrix} 1 & -\frac{g}{\sqrt{2}\cos\theta_0} \\ -\frac{g}{\sqrt{2}\cos\theta_0} & 1 \end{bmatrix} \begin{bmatrix} a_r \\ a_{R\,0} \end{bmatrix} . \tag{12} \]
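A quick numerical confirmation of the diagonalization of Eq. (12), with assumed illustrative values of g and θ0: each 2 × 2 block has eigenvalues 1 ∓ g/(√2 cos θ0), with the lower one carried by the symmetric combination of operators.

```python
import numpy as np

g, theta0 = 0.3, np.pi / 4 - 0.05        # assumed illustrative values
c = g / (np.sqrt(2) * np.cos(theta0))    # off-diagonal coupling in Eq. (12)

block = np.array([[1.0, -c], [-c, 1.0]])
evals = np.linalg.eigvalsh(block)        # ascending order

assert np.allclose(evals, [1 - c, 1 + c])
# the symmetric combination (a_l + a_L0)/sqrt(2) carries the lower eigenvalue 1 - c
sym = np.array([1.0, 1.0]) / np.sqrt(2)
assert np.allclose(block @ sym, (1 - c) * sym)
```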

Left and right eigenstate excitations are manifestly created by operators (a†_l ± a†_{L0})/√2 and (a†_r ± a†_{R0})/√2, both with eigenvalues 1 ∓ g/(√2 cos θ0).

It is shown in App. C that thermal densities can be written in terms of Gaussian integrals over coherent states, and the notation K is introduced for the kernel matrix of the Gaussian integral. The eigenvalues of K for homogeneous thermal densities are the inverses of the mean occupation numbers in the eigenstate basis. In the left and right preparation bases, these take the approximate forms at high temperature
\[ K^L_{\pm} \equiv e^{\beta^L \left( 1 \mp g/\sqrt{2}\cos\theta_0 \right)} - 1 \;\rightarrow\; \beta^L \left( 1 \mp g/\sqrt{2}\cos\theta_0 \right) \tag{13} \]
and
\[ K^R_{\pm} \equiv e^{\beta^R \left( 1 \mp g/\sqrt{2}\cos\theta_0 \right)} - 1 \;\rightarrow\; \beta^R \left( 1 \mp g/\sqrt{2}\cos\theta_0 \right) . \tag{14} \]

If g/(√2 cos θ0) is chosen close to one, the population in each sector can be dominated by the symmetric (low-frequency) eigenstate to any desired degree, reducing the analysis of this problem to that of single-particle thermal states. After all the properties of that limit are understood, the case of general g etc. can be examined.
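Both statements, the high-temperature limit of Eqs. (13)-(14) and the dominance of the symmetric mode as g/(√2 cos θ0) approaches one, follow directly from the Bose occupation formula n = 1/(e^{βε} − 1). A check with assumed illustrative numbers:

```python
import numpy as np

beta = 0.05                      # high temperature: beta * epsilon << 1
c = 0.95                         # g/(sqrt(2) cos theta0), chosen close to one
eps = np.array([1 - c, 1 + c])   # the two sector eigenfrequencies

K = np.exp(beta * eps) - 1       # kernel eigenvalues, Eqs. (13)-(14)
n = 1 / K                        # mean occupation numbers (inverses of K)

# high-temperature limit: K -> beta * eps
assert np.allclose(K, beta * eps, rtol=0.1)
# the symmetric (low-frequency) mode dominates the population as c -> 1
assert n[0] / n[1] > 10
```

As c → 1 the lower eigenfrequency 1 − c → 0, so its occupation diverges like 1/[β(1 − c)] while the upper mode stays of order 1/(2β), which is the single-mode limit invoked in the text.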

Not only x and y excitation numbers, but also those in l and r and various traveling-wave bases, will be of interest in the analysis that follows. Therefore it is useful to remark that all of these bases differ from each other only by unitary transformations of the creation operators, and it is shown in App. D that the marginal distributions of any Gaussian coherent-state densities are exactly thermal in any such bases. Further, the expected mean excitation numbers are just the diagonal elements of K^{−1} in the corresponding representation (Eq. (D1)), so it is useful to define mean-excitation matrices
\[ n^L \equiv \left( K^L \right)^{-1} \tag{15} \]
and
\[ n^R \equiv \left( K^R \right)^{-1} , \tag{16} \]
from which all the engine excitation numbers can be derived by unitary transformation.

B. A reflection on photosynthetic life [20]

It is worth pausing at this point to notice a similarity of the engine preparation just described, with the initial conditions for photosynthetic life on earth. If the primordial geochemistry of hydrothermal vents is put aside as a (possibly important) transient condition, from which life has since become independent, there are two reservoirs of energy available to couple to surface chemistry. One is the background microwave spectrum at ∼ 300 K, and the other is the visible spectrum at ∼ 6000 K. The microwave spectrum couples quasi-elastically to vibrational and rotational molecular excitations in near-thermal equilibrium, while the visible spectrum couples very nearly elastically to electronic transitions.

Living systems have the elaborate reaction network called metabolism to convert high-energy chemical bonds (redox couples) to energy-carrying molecules (e.g., ATP) and activated substrates, which can then be built into the various compartments and machinery of cells [18]. Without photosynthesis, this machinery can exist in short-term equilibrium with the microwave background, while visible light exists in a sort of steady-state balance (not quite the same as equilibrium) with electronic transitions. Quantum selection rules, however, prevent the conversion of excited electronic states into the stable redox couples accessible to metabolism, for all but a few exceptional molecules.

The electronic and vibrational states of matter in this picture are effectively like the left and right standing-wave engine states of this model, and the photon backgrounds to which they couple are like the associated reservoir excitations. This may have been the condition in the earliest stages of life, when the reservoir of sunlight was available but unused. The importance of the geochemical "transient" at this stage may have been that it provided redox couples directly, and so supported the emergence of metabolism, in a separate story that set the stage for photosynthesis.

The discovery of the two classes of chromophores (chlorophylls and rhodopsins) effectively introduced a coupling between the electronic and vibrational reservoirs, by way of the metabolic reaction network. These molecules solve the difficult problem of converting radiation-excited electronic states to stable high-energy chemical bonds, before the excited states can re-radiate elastically. The introduction of chromophores to an already-established metabolism initiates the dynamics of photosynthetic transport, in something like the way the θ0 overlap will initiate particle transport in the next subsection.

C. Single mode-driven oscillations

It was actually possible to diagonalize the preparation Hamiltonian in either left and right sectors, or in the basis {x, S0, y, A0}. However, the thermal initial conditions of interest are only diagonal in the left/right basis, whereas the eigenstates with θ0 < π/4 require diagonalization in {x, S0, y, A0}. The matrix representation of the dynamical Hamiltonian is
\[ H = \begin{bmatrix} a^{\dagger}_x & a^{\dagger}_{S\,0} \end{bmatrix} \begin{bmatrix} 1 & -g \\ -g & 1 \end{bmatrix} \begin{bmatrix} a_x \\ a_{S\,0} \end{bmatrix} + \begin{bmatrix} a^{\dagger}_y & a^{\dagger}_{A\,0} \end{bmatrix} \begin{bmatrix} 1 & -g\tan\theta_0 \\ -g\tan\theta_0 & 1 \end{bmatrix} \begin{bmatrix} a_y \\ a_{A\,0} \end{bmatrix} , \tag{17} \]
and its eigenstate excitations are created by (a†_x ± a†_{S0})/√2, with eigenvalue 1 ∓ g, and (a†_y ± a†_{A0})/√2, with eigenvalue 1 ∓ g tan θ0.

No difficulty is incurred because the basis in which the initial thermal projection factors is different from the eigenstate basis. App. E shows that, when the coherent-state representation of thermal density matrices is used for the L and R sectors, the density which is their product simply defines an M-dimensional kernel K, in which the factor matrices K^L and K^R become diagonal blocks (Eq. (E6)). The partition function has a basis-independent definition, from which the expected excitation numbers at any time are easily extracted by suitable similarity transform of K or its inverse, n.

The t = 0 transformation from {l, L0, r, R0} to {x, S0, y, A0} operators puts the mean number matrix in the representation
\[ n_0 \equiv \frac{1}{2} \begin{bmatrix} n^L + n^R & n^L - n^R \\ n^L - n^R & n^L + n^R \end{bmatrix} , \tag{18} \]
where the blocks n^L and n^R are defined by Eq. (15) and Eq. (16), respectively.

Since the eigenstates advance their phases according to the eigenvalues of Eq. (17), n evolves by similarity transform with the diagonal time-evolution operator. Because of the interaction Hamiltonian used to define the preparation of the sectors, the full eigenstates are superpositions of the sector eigenstates with the same engine/reservoir symmetry or asymmetry. In other words, in the eigenbasis, the block factors n^L and n^R are themselves diagonal, and the only non-identity contribution to the similarity transform comes from the energy difference matrix
\[ E_A - E_S = \nu \begin{bmatrix} 1 & \\ & -1 \end{bmatrix} . \tag{19} \]

Supposing that the coupling between the L and R sectors was turned on at t = 0, the number matrix at time t, called n_t, is just
\[ n_t \equiv \frac{1}{2} \begin{bmatrix} n^L + n^R & \left( n^L - n^R \right) e^{-i \left( E_A - E_S \right) t} \\ \left( n^L - n^R \right) e^{i \left( E_A - E_S \right) t} & n^L + n^R \end{bmatrix} . \tag{20} \]
It is clear that under transformation back to {l, L0, r, R0}, the mean l and r number densities oscillate with frequency ν between the values induced by the two initially imposed temperatures. The x and y mean numbers meanwhile remain constant at the average of the l and r means.
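The oscillation described by Eq. (20) can be reproduced numerically for M_L = M_R = 1. The sketch below uses assumed parameter values and is a check of the linear model, not code from this paper: it builds the single-particle Hamiltonian of Eq. (17), prepares the two sectors at different temperatures in their preparation eigenbases, and evolves the mean-number matrix by similarity transform. The l occupation reaches the initial r occupation after half a period π/ν, while the x occupation stays pinned at their average.

```python
import numpy as np

g, theta0 = 0.3, np.pi / 4 - 0.05      # assumed illustrative values
nu = g * (1 - np.tan(theta0))          # exchange frequency nu = g(1 - tan theta0)
c = g / (np.sqrt(2) * np.cos(theta0))
betaL, betaR = 0.5, 2.0                # two assumed preparation temperatures

def sector_n(beta):
    """Mean-number matrix of one sector in its {l, L0}-type spatial basis:
    thermal occupations 1/(exp(beta*(1 -/+ c)) - 1) on the (1, +/-1)/sqrt2 eigenmodes."""
    V = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    occ = 1 / (np.exp(beta * np.array([1 - c, 1 + c])) - 1)
    return V @ np.diag(occ) @ V.T

# initial n in the basis {l, L0, r, R0}: product of the two sector densities
n_lr0 = np.zeros((4, 4))
n_lr0[:2, :2] = sector_n(betaL)
n_lr0[2:, 2:] = sector_n(betaR)

# basis change {l, L0, r, R0} -> {x, S0, y, A0}, from Eqs. (8)-(9) and theta0 -> pi/4
s = 1 / np.sqrt(2)
T = np.array([[s, 0, s, 0],
              [0, s, 0, s],
              [s, 0, -s, 0],
              [0, s, 0, -s]])
n0 = T @ n_lr0 @ T.T

# single-particle Hamiltonian of Eq. (17) in {x, S0, y, A0}
h = np.eye(4)
h[0, 1] = h[1, 0] = -g
h[2, 3] = h[3, 2] = -g * np.tan(theta0)

def n_at(t):
    """Evolve the mean-number matrix by similarity transform with the
    time-evolution operator, and return it in the {l, L0, r, R0} basis."""
    e, W = np.linalg.eigh(h)
    U = W @ np.diag(np.exp(1j * e * t)) @ W.T.conj()
    return (T.T @ (U @ n0 @ U.conj().T) @ T).real

nl0, nr0 = n_lr0[0, 0], n_lr0[2, 2]
half = np.pi / nu
# after half a period, l carries r's initial occupation
assert abs(n_at(half)[0, 0] - nr0) < 1e-8
# the l + r total is conserved at all times
assert abs(n_at(half / 3)[0, 0] + n_at(half / 3)[2, 2] - (nl0 + nr0)) < 1e-8
# the x occupation stays at the average of the l and r means
nx = (T @ n_at(0.7) @ T.T)[0, 0]
assert abs(nx - 0.5 * (nl0 + nr0)) < 1e-8
```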

D. Bases for entropy accounting

It is shown in App. D that the marginal distributions for which the matrix (20) gives mean numbers are all exactly thermal, in any excitation basis, at any t. There is thus a well-defined effective temperature and a well-defined entropy for each marginal, both simple functions of the associated mean occupation number. Now, the first premise of any classical description of an SO system is that there is a division into "system" and "environment" components. Each is supposed to have a sensibly defined entropy and equation of state, as these marginals apparently do. If the system can evolve reversibly, the sum of the factor entropies is supposed to be constant under the evolution.

In this system at M_{L,R} = 1, the only sensible component decomposition permitted by the interaction Hamiltonians is into the position states m = 0 for the reservoirs, and some basis of excitations for the engine. As argued in App. B, the factor entropies should be the marginal entropies of these position-diagonal excitation numbers. However, the symmetric and antisymmetric number densities in the L and R preparation eigenbases oscillate harmonically with frequencies ±ν, and their interference, together with the S/A interference, causes the position-diagonal numbers to oscillate at multiple frequencies, generally resulting in no simple conservation laws for entropy. Approximate conservation laws can be induced, with particular number measurements and conditions on the coupling, but it is instructive to understand when this works and why the assumption of a number-based description generally fails, even though all of the marginals are thermal.

It is clear why sums of marginal entropies need not be conserved in general. The operation of factoring into components (replacing joint distributions with products of marginals) is a coarse-graining, which can lose information about system/environment correlations. The criterion of thermal reversibility demands that the information lost be independent of the various transformations to which the system/environment couple is subject, whether through the system’s own internal dynamics or through externally imposed boundary conditions. Note that this is stronger than a requirement of empirical reversibility, which may be satisfied if the system merely returns to states with the same classical descriptions, after having evolved away from them.

Empirical reversibility need not imply thermal reversibility to be consistent with the second law. Precisely because the factored description has already lost information about the true specification of the ensemble, subsequent coarse grainings based on the same factorization could do better or worse at representing the maximum possible (complete) set of constraints, and thus cause either sign of the entropy change. The task of constructing a reversible description is one of finding bases for measurement that count all of the constraints without redundancy.


It is worth noting that this is not merely a question of extensivity of the entropy, with the information lost somehow proportional to the energy represented in the interaction terms, relative to the factor-diagonal terms in the Hamiltonian. Under transformations, the interaction terms can mediate arbitrary correlation of factor excitations, leading to total marginal entropy changes under evolution that can be extensive in the factor sizes. The origin of the entropy change is the fact that in closed systems, the factor states do not evolve ergodically independently of each other. When the system is coupled to the environment primarily through these interaction terms, correlations may be frozen indefinitely. (Presumably this is the case with the distributions of biomolecules in living cells.)

In the example here, one way to let the marginals recover some conservation law is to take g/(√2 cos θ0) close to one. In that case, the low-frequency states account for essentially all of the population in any basis, and there is a redundancy of the information contained in the reservoir m = 0 and the engine states. It then becomes possible to use reservoir marginal distributions as proxies for part of the order in the engine, allowing engine bases to be explored to account for any additional order not measurable in the factored reservoir marginals alone. Note that this limit is not necessitated by anything fundamental; it is used to compensate for the prejudice that, because the marginals are thermal, the constraints can be inferred from their temperatures alone.

Quantum mechanically, standing-wave and traveling-wave excitations are not independent in an engine with only two degrees of freedom. However, classically, there may be information represented in the mean excitation number in one basis, which is only contained in the joint probabilities across different excitation numbers in the other basis. Thus exploring the various number bases in the engine will be the key to recovering an approximate description of the complete constraints on the distribution of interfering microstates. The necessary excitations beyond those already considered are created by the operators

a†+ = (a†x − i a†y) / √2    (21)

and

a†− = (−i a†x + a†y) / √2 .    (22)

It follows from the commutation relations that all of

nx + ny = nl + nr = nL0 + nR0 = n+ + n− (23)

are equal and time-independent, and from Eq. (20) that nx − ny ≡ 0. Unitary transformation to the l, r, L0, R0, or from these to the +, − bases at any time, gives

(nl − nr)t ≈ (nl − nr)0 cos νt (24)

and

(n+ − n−)t ≈ −(nl − nr)0 sin νt (25)

from Eq. (20). Furthermore, because the populations of both engine and reservoir L and R modes come entirely from the interference of the same pair of states, (nL0 − nR0)t ≈ (nl − nr)t. This is how the choice of coupling makes nL0 and nR0 informational proxies for the l and r coherence in the engine.

At high temperature, the thermal entropy of Eq. (C6) (for any one component) is asymptotically equal to the logarithm of the mean excitation number, so to second order in fluctuation amplitudes,

d/dt [S(nL0) + S(nR0)] ≠ 0,    (26)

and also

d/dt [S(n+) + S(n−)] ≠ 0,    (27)

though total excitation numbers in the two sectors are preserved independently. The phase offset between the two oscillations, however, implies that to the same order

d/dt [S(nL0) + S(nR0) + S(n+) + S(n−)] ≈ 0.    (28)
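The entropy bookkeeping of Eqs. (26)-(28) can be illustrated with the high-temperature form S(n) ≈ log n. In the sketch below the four marginal mean numbers oscillate a quarter cycle out of phase, as in Eqs. (24)-(25); the mean occupation, amplitude, and frequency are illustrative values, not taken from the paper.

```python
import numpy as np

nbar, delta, nu = 2.0, 0.1, 1.0
t = np.linspace(0.0, 2.0 * np.pi, 400)

# High-temperature marginal entropies, S(n) ~ log n (the Eq. (38) regime).
S_L0 = np.log(nbar + delta * np.cos(nu * t))
S_R0 = np.log(nbar - delta * np.cos(nu * t))
S_p  = np.log(nbar - delta * np.sin(nu * t))   # "+" traveling sector
S_m  = np.log(nbar + delta * np.sin(nu * t))   # "-" traveling sector

pair = S_L0 + S_R0        # oscillates at second order in delta
full = pair + S_p + S_m   # constant to second order in delta
```

Either pair of marginal entropies alone oscillates at order delta², while the four-term sum is constant to that order, with residual variation only at order delta⁴.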

A classical description of a reversible process is usually considered to be a set of time-local configuration measurements, which can be made on components of a system, and in terms of which the sum of entropies is constant. It should be clear from this example that reversibility is a lot to ask of a coarse-graining that must also be local in time and factorable into components. Under almost any weakening of the restrictions on the example, the basis used here will no longer yield a reversible description, and the informational entropies will be obscured by a larger background of fluctuations due to inadequacies of the coarse-graining. However, with some sensitivity to the structure of the dynamics, it will generally be possible to preserve factorability if one can use time-nonlocal measurements. First, though, it is of interest to understand heat and information flows in the current model, where they can be computed locally.

E. The heat flow interpretation

Entropy change always accompanies the transfer of excitations. It is therefore possible to assign an entropy flow to various particle currents, which then takes on the interpretation of heat in classical thermodynamics. To show that this simplified model is appropriately interpreted as an engine/reservoir system, it is necessary to show that the heat flows obey those classically associated with engine cycles.

Because the model has been constructed with a topological correspondence to the thermoacoustic case, there


is a natural phenomenology of thermoacoustic engine transport to map to it. The traveling-wave states + and − correspond to the transporting excitations. Classically, the entropy transported from the R to the L reservoir is proportional to the excess number of + over − excitations, while the transport from L to R is proportional to the − over + excess. Meanwhile, the rate of growth of the + excitation number is proportional to the temperature difference of R over L, while the growth rate of − is proportional to the L − R temperature difference. This model is linear, so there will not be the additional proportionality of + and − growth with the current amplitudes of + and −.

Only the total particle flow into reservoirs will be directly constrained by the local interaction Hamiltonians. However, at ML,R = 1, nL0 → nL, nR0 → nR, while also (nL0 − nR0)t ≈ (nl − nr)t. By Equations (24-25),

d/dt [(nL − nR)/(nL + nR)] = ν (n+ − n−)/(n+ + n−)    (29)

and

d/dt [(n+ − n−)/(n+ + n−)] = ν (nR − nL)/(nR + nL) .    (30)

Recognizing that for thermal distributions the mean excitation number is proportional to the temperature, Eq. (30) satisfies the first part of the thermoacoustic phenomenology: that the growth of structure is proportional to the driving temperature difference.
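Equations (29)-(30) form a closed harmonic pair for the normalized number differences. A minimal midpoint integration (the initial values are illustrative) confirms that an initial temperature asymmetry converts into traveling-wave excess and back:

```python
nu = 1.0
X, Y = 0.5, 0.0          # X = (nL-nR)/(nL+nR), Y = (n+ - n-)/(n+ + n-)
dt = 1.0e-4
steps = 31416            # integrate to t ~ pi, half an oscillation period

for _ in range(steps):
    # Eqs. (29)-(30): dX/dt = nu*Y, dY/dt = -nu*X (explicit midpoint scheme).
    kX1, kY1 = nu * Y, -nu * X
    kX2 = nu * (Y + 0.5 * dt * kY1)
    kY2 = -nu * (X + 0.5 * dt * kX1)
    X, Y = X + dt * kX2, Y + dt * kY2
```

After half a period the sign of the reservoir asymmetry is reversed, while X² + Y² is an invariant of the pair.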

To infer local particle flows between the engine and the reservoir, it is natural to split the interaction Hamiltonian as

Hint ≡ HLint + HRint ,    (31)

with

HLint ≡ − [g/(√2 cos θ0)] [a†θ0 aL0 + h.c.]    (32)

and

HRint ≡ − [g/(√2 cos θ0)] [a†−θ0 aR0 + h.c.] .    (33)

In the Heisenberg picture, this leads to the expression for the change in the number of particles nL, due to interaction with n+, as

dnL+/dt ≡ i [HLint, n+] ,    (34)

and the change in nR from interaction with n+ as

dnR+/dt ≡ i [HRint, n+] .    (35)

The sum of operators interacting with n+ satisfies the engine conservation law

dnL+/dt + dnR+/dt = i [H, n+] = −dn+/dt ,    (36)

while the sum for nL satisfies the reservoir conservation law

dnL+/dt + dnL−/dt = dnL/dt .    (37)

Both conservation laws remain true at general ML,R, and there are symmetric constructions for the (−) and R sectors, respectively.

Because the entropy change for a thermal distribution is a function only of the mean excitation number, the splitting of particle currents allows a similar splitting of entropy changes into “flows”. In a large-n limit, where

dS(n)/dn → 1/n ,    (38)

the change in the L reservoir entropy from + currents is

dSL+/dt ≡ ⟨i [HLint, n+]⟩ / nL    (39)

and similarly for the R reservoir entropy

dSR+/dt ≡ ⟨i [HRint, n+]⟩ / nR .    (40)
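The limit quoted in Eq. (38) can be checked from the standard entropy of a single thermal (Bose) mode as a function of its mean occupation, a textbook result not derived in this excerpt:

```latex
S(n) = (n+1)\log(n+1) - n\log n
\qquad\Longrightarrow\qquad
\frac{dS}{dn} = \log\!\left(1 + \frac{1}{n}\right)
\;\to\; \frac{1}{n} \quad (n \gg 1).
```

The same expansion gives S(n) ≈ log n + 1 at large n, consistent with the high-temperature statement preceding Eq. (26).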

By construction,

dSL+/dt + dSL−/dt = dSL/dt ,    (41)

and likewise for L → R. Since the whole-system sum of marginal entropies is conserved only to second order in fluctuations, it makes sense only to expand Equations (39-40) to that order, giving

d/dt (SL+ + SR+) = − [2/(nL + nR)] dn+/dt − [2 (nL − nR)/(nL + nR)²] ⟨i [(HLint − HRint), n+]⟩ .    (42)

In the g/(√2 cos θ0) → 1 limit that is dominated by a single mode in both x and y, the operator algebra of Eq. (31) gives the simplified relation

i ⟨[(HLint − HRint), (n+ + n−)]⟩ = ν (n+ − n−) ,    (43)

plus error terms of order 1 − g/(√2 cos θ0) relative to the terms that are kept. A similar expansion for the antisymmetric number sum gives

i ⟨[(HLint − HRint), (n+ − n−)]⟩ = 2g [(ny + 1/2) − tan θ0 (nx + 1/2)] .    (44)

Using Eq. (38) for n+, it follows that

d/dt (SL+ + SR+) = −dS+/dt + 2g [(nL − nR)/(nL + nR)] [tan θ0 (nx + 1/2) − (ny + 1/2)] / (nx + ny) .    (45)


Classical Carnot’s theorem would have Eq. (45) identically zero, because for a reversible process the entropy transport out of R by (+) would exactly equal that into L, so the sum of two inward entropy transports would have to vanish. In this problem, Eq. (45) has both symmetric and antisymmetric nonzero terms. The symmetric term is precisely the information (negative of the change in entropy) stored in the + standing wave, supplying the informational correction to the classical theorem needed to preclude Maxwell demons. The second term is totally ±-antisymmetric, and describes an artificial entropy transport which exactly cancels between the two traveling waves, and thus is never actually delivered to either reservoir, or to the engine modes either.

At this point it is useful to make a formal distinction between those entropies considered informational in origin, and those conventionally regarded as thermal. The thermal entropies that pass through an engine come from those correlations necessary to specify the state of the environment. They are originally projected onto the measurements of one reservoir and later transferred to the other’s, but never resolved in the engine at any point. The informational entropies are the uncertainties of the actual state of the engine, which may change if the engine’s composition is measured as part of the system characterization.

The informational entropy change is the sum −dS+/dt − dS−/dt, from Eq. (42) and its (−)-counterpart. By Eq. (25), it is quadratic in fluctuations, and it has natural interpretations in terms both of self-organization and of flows. The reduction in S+ + S− is just the knowledge gained about the system from the constraint on its current. A state with zero mean current maximizes what would normally be computed as the “equilibrium” free energy. The imposition of a mean current as a constraint would produce just the distribution and entropy computed here as its maximum-ignorance solution. In this case, a constraint on the ± current excess arises through the dynamics, as the expression of the initial reservoir heterogeneity, and so appears self-generated from the perspective of the time-local coarse-graining.

Meanwhile, the thermal entropy passing through the engine is (one half of) the antisymmetric combination

d/dt (SL+ − SR+) = [2/(nL + nR)] ⟨i [(HLint − HRint), n+]⟩ + [2 (nL − nR)/(nL + nR)²] dn+/dt .    (46)

Making use of the same operator identities and single-mode limits as above, this exchanged entropy evaluates to

d/dt (SL+ − SR+) = −2g [tan θ0 (nx + 1/2) − (ny + 1/2)] / (nx + ny) + ν (n+ − n−)/(n+ + n−) − ν [(nL − nR)/(nL + nR)]² .    (47)

The first term is again a ±-antisymmetric combination associated with nonuniform coupling to the standing waves. It could be set to zero with suitable populations of x and y, but never actually accumulates anywhere and is essentially an artifact. The leading-order entropy actually exchanged is linear in n+ − n−, thus obeying the thermoacoustic Carnot relation. The quadratic correction is of the same order as the informational entropy, and describes how it is drawn differentially from the two reservoirs.

This model, then, does strictly what was described above. When an appropriate basis is specified, the engine traveling modes appear to transport information about the environment from one reservoir to the other in the process of supporting particle currents. When the temperatures differ, it is not energy, but thermal entropy whose flux is conserved at leading order. The engine uses the resulting excess of energy to augment its own structure, or vice versa. When engine order is growing, though, classical Carnot’s theorem is not exactly respected. The entropy the engine rejects at the low-temperature reservoir is slightly greater than what it takes in at high temperature, by just the change in its own structural information.

F. More modes and nonlocal state variables

When ML,R > 1 or g/(√2 cos θ0) ≪ 1, there will be no way to induce even quadratic-order entropy conservation, for any sum of marginal entropies computed only from time-local mean excitation numbers. Every basis of states in the reservoirs will be populated by interfering eigenmodes with different projections onto the engine. While the sum of reservoir entropies will be constant at linear order in fluctuations, the “exchanged” thermal entropy from each reservoir component will be proportional to a separate traveling current in the engine. These currents will all have different effective temperatures at any time t, even though their distributions combine to form a thermal distribution for total traveling current, which may have yet another, arbitrarily related, temperature. The resulting picture is one in which both classical Carnot’s theorem, and all informational entropies, are lost against a background of entropy fluctuation created by inadequacies of the coarse-graining. At early times, the entropy change is always an increase, recovering the usual picture of an irreversible process.

It is easy to see, though, that the problem with this example is not that the distribution responsible for the process is differently specified than that for an equilibrium; it is that the presumptions on the definition of reversibility are too narrow. Every mean particle number can be Fourier transformed and resolved into the different-frequency components associated with the various L/R interfering eigenstates. For each such pair, the Carnot


theorem and informational corrections of the last section can be carried out exactly as when ML,R = 1. The only difference is that the set of state variables needed to specify the true set of constraints on the distribution are inferred from a time-nonlocal measurement process. The Fourier transform need not even be of infinite duration; the populations at finite subsets of frequencies can be obtained with bounded error from transforms over finite time intervals, and in that way a regulated approximation to the conserved entropy recovered.

In this sense even the definition of reversibility can be extended, as long as the entropy and other terms in the effective potential are capable of parametrizing a distribution by segments of its history, and are not limited to strictly time-local snapshots. The essential requirement on reversibility is that a description regularly approximating the preparation description be recoverable at any time after the evolution has started. There is no reason a finite time interval cannot be admitted to make the measurements, as long as the required length of the interval is bounded. (One must not have to wait for the Poincaré cycle to complete, or do anything else that does not scale reasonably with large system size.) But this is precisely what is ensured if both the set of accessible system states, and the constraints on them, are specified once and thus fixed.
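The time-nonlocal recovery described here amounts to estimating mode populations from a finite-window Fourier transform. A sketch with two interfering frequencies (all numbers illustrative) shows the amplitudes recovered with error bounded by the inverse window length:

```python
import numpy as np

# Two interfering eigenmode pairs at illustrative frequencies and amplitudes.
nu1, nu2 = 1.0, 2.3
a1, a2 = 0.7, 0.4
T = 200.0                                      # finite observation window
t = np.linspace(0.0, T, 20000, endpoint=False)
signal = a1 * np.cos(nu1 * t) + a2 * np.cos(nu2 * t)

def amplitude(nu):
    """Finite-window Fourier estimate of the component at frequency nu;
    the error is O(1/T) rather than zero, i.e. a regulated approximation."""
    return 2.0 * abs(np.mean(signal * np.exp(-1j * nu * t)))
```

Frequencies absent from the signal give estimates of order 1/T rather than exactly zero, which is the bounded error referred to in the text.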

VII. GENERALIZING THE TIME-INDEPENDENT ENTROPY

In the linear oscillator model, a very simple interference pattern causes the constraints on a single distribution of microstates to oscillate, projecting with different strengths on different components of mean excitation number over time. Since the change of basis created by referring to measurements at different times has no effect on the specification of the distribution, it must be compatible with an equilibrium statistical description.

To see the form the description must take, it is useful to introduce some case-specific notation. The total excitation number in the engine is denoted nT ≡ nx + ny, while the time-independent difference is nxy ≡ nx − ny.

The differences, between l and r or + and − sectors, are functions of time, and as such are not suitable order parameters if not so referenced. Therefore let nlr,0 ≡ nl(0) − nr(0) denote either the value of the number difference at a reference time t = 0, or equivalently the whole function for which that difference is the sole constraint. Similarly, let n+−,0 ≡ n+(0) − n−(0) denote an initial value constraint for (±), or the whole history that follows from that constraint.

In terms of these, the projection of the matrix n onto the engine modes, in an xy basis for the raising and lowering operators, becomes

n0 = (1/2) [ nT + nxy       nlr,0 + i n+−,0
             nlr,0 − i n+−,0    nT − nxy ] .    (48)

In the case (ML,R = 1, g/(√2 cos θ0) → 1) when transformations in this basis are reversible, the engine modes listed give a total account of the correlations, so additional entropies from marginal distributions in the reservoirs would redundantly count the actual freedoms of the system.

The entropy of the most general configuration constrained only by number density, and represented with a Gaussian-coherent ensemble, is given by Eq. (C12) in terms of n of Eq. (48). If an average β value is defined as

β ≡ (βL + βR)/2 ,    (49)

it is possible to write the equivalent of an equilibrium free energy for the heterogeneously constrained system as

βF ≡ βLnL + βRnR − S. (50)

Elementary algebra shows that S is maximized, and F minimized, at n+−,0 = 0. Along this curve, the trace expression (C12) for S factors, and the free energy decomposes to the sum of equilibrium forms

βF → βLFL0 + βRFR0 . (51)

In this sense, along a hyperplane of the possible classical configurations, the free energies look exactly like their equilibrium counterparts, even though the system is dynamical after t = 0.

Because F is minimized by an order parameter that refers to a time-dependent classical history, this minimum is an example of a generalized flow ground state. It is instructive to ask what would be the closest approximation to this solution, obtained if one were to exclude explicit time dependence from the free energy. In this problem, that amounts to taking some static projection of F, and trying to interpret it as an equilibrium free energy. The value β of Eq. (49) was chosen so that only the heterogeneous Legendre transform pair, and the dependence of the entropy on the time-varying order parameters, would be excluded from the static projection.

This choice of “effective” equilibrium temperature leads to the expansion

F = 2nT − (1/β) Sstat + 2 [(βL − βR)/(βL + βR)] nlr,0 − (1/β) [S − Sstat] ,    (52)

where Sstat ≡ S(nT, nxy, 0, 0). Now, the correct (dynamical) solution from Eq. (51) leads to a total number of particles in x and y of

nT → 1/βL + 1/βR ,    (53)

evenly divided between the two modes and time-independent. In contrast, direct minimization of the first two terms of Eq. (52) gives the expected total number

nT → 2/β = (1/βL + 1/βR) [1 − ((βL − βR)/(βL + βR))²] .    (54)


The result (54) is clearly inconsistent with the x and y particle numbers that would be observed, indicating that the static parts of Eq. (52) are not the “best” equilibrium approximation.

A better approximation could be obtained by setting 2/β = 1/βL + 1/βR and incorporating a constant term ∼ nT (βL − βR) into the “dynamic” corrections, so that the static part alone would give the correct value of both x and y numbers from Eq. (54). In that case, however, the l, r, + and − numbers would be equal, and the entropy higher than any of the best-characterized instantaneous entropies in the dynamical case.

This situation may be compared retrospectively to the description of life by Schrödinger. It may be possible to parametrize an equilibrium free energy in either case that gives the correct average energy spectrum of time-independent system excitations. These correspond to the mean x and y numbers in the example, and to the kinetic temperature in biology. However, if the real system supports flows, the relative populations of the excitations coupled to the flows will be wrongly predicted by the equilibrium description. These are the l and r or + and − numbers, at alternating times, in the example, or the distributions of “improbable” biomolecules in cells. In either case, the instantaneous entropies one would assign from measurements characterizing all of the system structure are lower than those predicted by the equilibrium approximation.

The solution to this paradox is to include the additional terms beyond the static part of Eq. (52), or presumably of its biological counterpart. This both removes the ambiguity associated with choice of β in making the static projection, and recovers a true maximum-ignorance ensemble specified by all of the imposed constraints.

A more interesting aspect of the equilibrium approximation is the meaning of the entropy one would assign to an instantaneous characterization that accounted for all of the actual structure. If the measurements were of l and r with x and y numbers, the added structure could easily be interpreted in terms of an engine/reservoir decomposition, because the asymmetric temperature on the reservoirs would provide an instantaneous constraint on the engine. However, this interpretation would be spurious, because it clearly represents the same order as would be seen in + and − one quarter cycle later, at which time the reservoir temperatures are equalized. The proper interpretation is that instantaneous values of dynamical classical state variables are not the state variables of equilibrium characterizations, even if momentarily they look like such. Rather, they are arbitrary characterizations of histories, and it is the histories to which the heterogeneous environments are coupled.

Acknowledgments

I am grateful to Cosma Shalizi, Harold Morowitz, Anita Goel, Walter Fontana, and Jim Crutchfield for discussions and references that shaped the development of these ideas.

APPENDIX A: HARMONIC OSCILLATOR BASICS

This appendix establishes definitions and notation for one-dimensional (scalar) and M-dimensional (vector) simple quantum harmonic oscillators. Special attention is given to coherent states, and convenient ways of constructing them in the vector case.

1. One dimensional oscillation

The algebra of the linear, one-dimensional quantum harmonic oscillator is generated by a raising operator a†, and its Hermitian conjugate lowering operator, a [22]. These are normalized by the commutation relation

[a, a†] = 1 .    (A1)

Linear combinations of the raising and lowering operators give the position operator

x ≡ (1/2) (a + a†) ,    (A2)

and the momentum operator

p ≡ (i/2) (a − a†) .    (A3)

The number operator, which gives the excitation number of a state above the ground state, is defined as

n ≡ a†a. (A4)

In a first-quantized application, a† raises the energy of a single particle above the ground state, and a lowers it [22]. In a second-quantized application, a† increases the number of particles in a state, while a reduces it [23]. In the second case a† and a are referred to as creation and annihilation operators, and n as the occupation number operator. The position and momentum operators are then interpreted as the field strength of the excitation and its canonically conjugate momentum. Nothing about the algebra depends on this interpretation, but the creation/annihilation usage will be more natural in describing thermoacoustic or electromagnetic examples.

The state space of the one-dimensional oscillator is built from a ground state (first-quantized language), or vacuum state (second-quantized language), defined by the operation a |0〉 ≡ 0 |0〉. Eigenstates of the number


operator are built by repeated action of the raising operator on the vacuum, and normalized as

|n〉 ≡ [(a†)^n / √(n!)] |0〉 .    (A5)

The commutation relation (A1) then gives

n |n〉 ≡ n |n〉 . (A6)

A scalar coherent state is defined for any complex number ξ as

|ξ〉 ≡ e^{−|ξ|²/2} Σ_{n=0}^{∞} [ξ^n (a†)^n / n!] |0〉 ≡ e^{−|ξ|²/2} Σ_{n=0}^{∞} [ξ^n / √(n!)] |n〉 .    (A7)

It is an eigenstate of the annihilation operator, with

a |ξ〉 ≡ ξ |ξ〉 . (A8)

From the definitions of the position and momentum operators, it follows that the expectations

〈ξ | x | ξ〉 = Re (ξ) (A9)

and

〈ξ | p | ξ〉 = Im (ξ) . (A10)

When time evolution is defined below, it will become apparent that these expectation values evolve by the classical equations of motion of a simple harmonic oscillator. Thus, coherent states have the interpretation of “classical” wave packets under the correspondence principle.

The coherent-state expectation of the quantized version of a classical Hamiltonian, whether a mechanical particle energy or a field intensity, is

〈ξ | x² + p² | ξ〉 = 〈ξ | n | ξ〉 + 1/2 = |ξ|² + 1/2 .    (A11)

The expected excitation number is |ξ|². Excitation number in coherent states is Poisson distributed, with the probability of number n in state ξ defined and evaluated as

Pξ(n) ≡ |〈ξ | n〉|² = e^{−|ξ|²} |ξ|^{2n} / n! .    (A12)
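The Poisson statistics of Eq. (A12) can be checked directly; the amplitude ξ below is an arbitrary illustrative choice:

```python
import numpy as np
from math import exp, factorial

xi = 1.5                      # illustrative coherent-state amplitude
mean = abs(xi) ** 2           # expected excitation number, per Eq. (A11)

# Number distribution of Eq. (A12): P_xi(n) = e^{-|xi|^2} |xi|^{2n} / n!
P = np.array([exp(-mean) * mean ** n / factorial(n) for n in range(60)])
```

The distribution normalizes to one and its mean number reproduces |ξ|², as the correspondence-principle picture of a classical wave packet requires.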

2. Oscillation in more than one dimension

Vector simple harmonic oscillators are defined by raising and lowering operators with subscript m, which create excitations independently in some number M of dimensions. The vector ground state will be denoted as before, and eigenstates of the vector number operator are now indexed by a vector-valued excitation number, which will again be denoted n. With proper contraction rules of row with column vectors, this will create no confusion, because the vector oscillator is an exact formal extension of the scalar case.

Vector number states are created by the product of raising operators

|n〉 ≡ Π_{m=1}^{M} [(a†m)^{nm} / √(nm!)] |0〉 .    (A13)

Coherent states are indexed by a vector ξ with complex coefficients, and created from the vacuum by

|ξ〉 ≡ Π_{m=1}^{M} e^{−|ξm|²/2} Σ_{nm=0}^{∞} [(a†m ξm)^{nm} / nm!] |0〉 .    (A14)

The basis of excitations indexed by m is called orthogonal if raising operators at different m commute. In an orthogonal basis, it is very convenient to define the contraction a†ξ ≡ a†m ξm as the raising operator for an excitation along direction ξ. These may or may not be canonically normalized, depending on the value of |ξ|². The reason to introduce them is that the multinomial expansion

(a†ξ)^N = Σ_{n; Σ_{m=1}^{M} nm = N} [N! / (n1! · · · nM!)] Π_{m=1}^{M} (a†m ξm)^{nm}    (A15)

implies the compact representation

|ξ〉 ≡ e^{−|ξ|²/2} Σ_{N=0}^{∞} [(a†ξ)^N / N!] |0〉 .    (A16)

Thus any vector coherent state may be regarded as a scalar coherent state created by the appropriate raising operator. The magnitude of ξ occurring in the normalization is just the scalar product |ξ|² ≡ ξ†ξ.

3. Basis transformation and time evolution

The full commutation algebra in an orthogonal basis of canonically normalized raising and lowering operators is defined to be

[am, a†n] = δmn .    (A17)

The lowering operators may be transformed to any other basis by a unitary transformation aµ ≡ vµm am, if the corresponding raising operators undergo the inverse transformation a†ν ≡ a†n vnν. It then follows that in the new basis

[aµ, a†ν] = vµm [am, a†n] vnν = vµm vmν = δµν ,    (A18)

so the transformed operators are again orthogonal and canonically normalized. In the appendices and the text,


superscript greek indices will be reserved for operators which create eigenstates of some Hamiltonian, and subscript roman indices will denote all other bases. Typically these will be bases in which a coupled system factors into engine and reservoir components, and in which heterogeneous preparation conditions are block-diagonal. Geometric objects like a†ξ have component representations in any basis, and thus define the conjugate transformation rules for the complex vector ξ: a†ξ = a†n vnµ vµm ξm ≡ a†µ ξµ.

The number operator for excitations of Hamiltonian eigenstate µ is defined as nµ ≡ a†µ aµ (no sum). The Hamiltonian assigns energy to µ-excitations as

[H, a†µ] ≡ Eµ a†µ ,    (A19)

and hence can be written

H = Σµ Eµ nµ .    (A20)

States evolve in the Schrödinger picture under the time-evolution operator e^{iHt}. When this is applied to coherent states, the subscript t will be introduced as the time index, so that

|ξt〉 ≡ eiHt |ξ0〉 . (A21)

|ξt〉 is created at any time by the same relation (A16), with time evolution introducing only the phase shifts

ξµt ≡ eiEµtξµ0 . (A22)

APPENDIX B: DENSITY MATRICES AND COARSE-GRAININGS

It is useful to introduce the definitions of coarse graining, and the examples that will be used in the text, because probability notation arises that will be used in later appendices. The starting definition is that, for {|ψ〉} some collection of quantum states, any density matrix can be written as a sum of outer products

ρ ≡ Σψ ρψ |ψ〉 〈ψ| .    (B1)

A coarse graining of ρ is a map from ρ to some other density ˜ρ which averages out some of the information in ρ [8]. A particular map used in the text will be called the annular coarse graining, defined in terms of a set of number states |n〉 by

ρA ≡ Σn Tr(ρ |n〉 〈n|) |n〉 〈n| ≡ Σn Pρ(n) |n〉 〈n| .    (B2)

This map removes information in the relative phases of different |n〉 components, in which ρ may not be diagonal. The nature of the averaging can be understood by applying it to the outer product of a coherent state. Such a product corresponds to a ball in a classical phase space, the x and p values of whose center are the real and imaginary parts of some complex vector ξ. If this ball represents ρ, the coarse-grained density ρA uniformly populates the annulus in the phase space with mean radius |ξ|, and radial variance comparable to that in the original ρ.

A few lines of algebra show that the map (B2) manifestly satisfies the two conditions on a coarse-graining set forth in Ref. [8]. It is idempotent,
\[ \left( \rho_A \right)_A = \rho_A , \tag{B3} \]
and states made typical by the coarse-grained distribution are also typical in the original (fine-grained) distribution:
\[ \mathrm{Tr}\left( \rho \log \rho_A \right) = \mathrm{Tr}\left( \rho_A \log \rho_A \right) . \tag{B4} \]

The entropy of any density is defined as
\[ S_\rho \equiv -\mathrm{Tr}\left( \rho \log \rho \right) , \tag{B5} \]
so it follows that the entropy of the coarse-grained density under the same definition is
\[ S_{\rho_A} \equiv -\sum_n P_\rho(n) \log P_\rho(n) , \tag{B6} \]
a function only of the occupation-number probabilities. The conditions defining a coarse-graining, together with the concavity of the logarithm, imply that $S_{\rho_A} \geq S_\rho$, for any $\rho$.
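The inequality and the two coarse-graining conditions are easy to check numerically. The following sketch (not from the paper; the density matrix is a randomly generated example) implements the annular map of Eq. (B2) as a projection onto the diagonal in the number basis:

```python
import numpy as np

def annular(rho):
    # Annular coarse-graining (B2): keep only the diagonal of rho
    # in the number basis, discarding relative phases.
    return np.diag(np.diag(rho))

def entropy(rho):
    # Von Neumann entropy -Tr(rho log rho), via eigenvalues.
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# a random full-rank density matrix as a stand-in for rho
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

rho_A = annular(rho)

# B3: the map is idempotent
assert np.allclose(annular(rho_A), rho_A)
# B4: states typical of rho_A are typical of rho
log_rho_A = np.diag(np.log(np.diag(rho_A).real))
assert np.isclose(np.trace(rho @ log_rho_A).real,
                  np.trace(rho_A @ log_rho_A).real)
# coarse-graining can only raise the entropy
assert entropy(rho_A) >= entropy(rho) - 1e-12
```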

A second stage of coarse-graining can be applied to $\rho$, by marginalization of $\rho_A$. The marginal probability of occupation $n_m$ of some single component $m$ in the density $\rho$ is defined as
\[ P_\rho(n_m) \equiv \sum_{\{ n_k \},\, k \neq m} P_\rho(n) . \tag{B7} \]
Marginalization of the probability of vector excitation number $n$ is replacement of the joint probability with the product of component marginals, denoted
\[ \bar P_\rho(n) \equiv \prod_m P_\rho(n_m) . \tag{B8} \]
The marginal coarse-graining of $\rho$ is defined as
\[ \rho_M \equiv \sum_n \bar P_\rho(n)\, |n\rangle \langle n| . \tag{B9} \]

Marginalization produces an entropy that is a sum of the marginal entropies of each component $m$:
\[ S_{\rho_M} \equiv \sum_m \left[ -\sum_{n_m} P_\rho(n_m) \log P_\rho(n_m) \right] . \tag{B10} \]

This coarse-graining is performed whenever an interacting engine/reservoir system is factored into separate “engine” and “reservoir” components, which are assumed to have independently well-defined entropies and equations of state. The marginalization need not be complete, though when $m$ denotes intra-component eigenstates that are coupled only through the inter-component interactions, it effectively is.
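Because marginalization replaces a joint distribution by the product of its marginals, the second-stage entropy (B10) can only exceed the joint entropy (B6). A minimal numerical illustration (the correlated joint occupation probabilities are made up):

```python
import numpy as np

# made-up correlated joint distribution P(n1, n2), with n in {0, 1}
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])

def H(p):
    # Shannon entropy of a probability array
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

P1, P2 = P.sum(axis=1), P.sum(axis=0)   # component marginals (B7)

# B10 as a sum of marginal entropies, versus the joint entropy (B6)
assert H(P1) + H(P2) >= H(P.ravel()) - 1e-12
```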


APPENDIX C: THERMAL DENSITIES AND THEIR COHERENT-STATE REPRESENTATIONS

All the classical states in this paper are built from thermal density matrices. These are maximum-ignorance distributions consistent with fixed expected energy [8, 21], and are defined in terms of an inverse temperature $\beta$ as
\[ \rho_\beta \equiv \frac{1}{Z} \sum_n |n\rangle \langle n| \cdot e^{-\beta \sum_\mu n_\mu E_\mu} . \tag{C1} \]
The normalization factor $Z$ is called the partition function, and equals
\[ Z = \sum_n e^{-\beta \sum_\mu n_\mu E_\mu} . \tag{C2} \]

Since the energy eigenvalues $E_\mu$ will be given units of frequency, $\beta$ will have units of time.

Thermal densities may alternatively be written as integrals over outer products of coherent states. In a basis of eigenstate excitations, the vector thermal density takes the form
\[ \rho_\beta = \frac{1}{Z} \int \left( \prod_\mu \frac{e^{\beta E_\mu}}{\pi}\, d\xi^*_\mu\, d\xi^\mu \right) e^{-\xi^*_\mu K^\mu{}_\nu \xi^\nu}\, |\xi\rangle \langle \xi| . \tag{C3} \]

A kernel matrix $K$ is introduced by the Gaussian integral, which is diagonal in the eigenstate basis, with eigenvalues $K^\mu{}_\nu \equiv \delta^\mu_\nu \left( e^{\beta E_\mu} - 1 \right)$. This $K$ may be checked to recover the thermal occupation-number probabilities, by evaluating the trace defined in Eq. (B2):

\[ P_{\rho_\beta}(n_\mu) = \frac{1}{Z_\mu\, n_\mu!} \int_0^\infty e^{\beta E_\mu}\, d|\xi^\mu|^2 \exp\left( -e^{\beta E_\mu} |\xi^\mu|^2 \right) |\xi^\mu|^{2 n_\mu} . \tag{C4} \]
Here all $\xi$ component integrals evaluate to one except at $\mu$, and the normalization $Z_\mu$ is the partition function for the density over the $\mu$ eigenstate alone. The exponential integral differs from $n_\mu!$ only by the normalization $\exp\left( -\beta n_\mu E_\mu \right)$, which with $Z_\mu$ recovers the thermal distribution.

The mean excitation number at any $\mu$ is similarly easy to evaluate by Gaussian integration, as the trace
\[ \mathrm{Tr}\left( \rho_\beta n_\mu \right) = \frac{1}{Z} \int \left( \prod_\mu \frac{e^{\beta E_\mu}}{\pi}\, d\xi^*_\mu\, d\xi^\mu \right) e^{-\xi^*_\mu K^\mu{}_\nu \xi^\nu}\, |\xi^\mu|^2 \equiv \bar n_\mu = \left( K^{-1} \right)^\mu{}_\mu , \tag{C5} \]
where a notation $\bar n_\mu$ has been introduced for the mean. The population is the inverse of the $\mu$ eigenvalue of $K$, the correct thermal result.

The entropy of a thermal density is a sum of marginal entropies in the eigenstate basis. Evaluating these as functions of the mean occupation numbers gives
\[ S_{\rho_\beta} = \sum_\mu S\!\left( \bar n_\mu \right) \equiv \sum_\mu \left[ \left( \bar n_\mu + 1 \right) \log\left( \bar n_\mu + 1 \right) - \bar n_\mu \log \bar n_\mu \right] . \tag{C6} \]
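As a numerical sanity check (a sketch, not from the paper; $\beta$ and $E$ are arbitrary illustrative values), the closed form of Eq. (C6) can be compared against the Shannon entropy of the geometric occupation distribution it summarizes:

```python
import numpy as np

beta, E = 0.9, 1.4                       # made-up illustrative values
nbar = 1.0 / (np.exp(beta * E) - 1.0)    # thermal mean occupation

# closed form from Eq. (C6)
S_formula = (nbar + 1) * np.log(nbar + 1) - nbar * np.log(nbar)

# Shannon entropy of the geometric (thermal) occupation distribution,
# truncated where the tail is numerically negligible
n = np.arange(400)
P = (1 - np.exp(-beta * E)) * np.exp(-beta * E * n)
S_shannon = -np.sum(P * np.log(P))

assert np.isclose(S_formula, S_shannon, atol=1e-10)
```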

The preceding equations reduce the vector problem to a collection of simple scalar evaluations, but the physical content of any interesting problem is much more apparent in a geometric notation. The Gaussian kernel can be written in the basis-independent form
\[ \xi^*_\mu K^\mu{}_\nu \xi^\nu = \xi^\dagger K \xi . \tag{C7} \]

The associated measure over complex vectors $\xi$ is invariant under unitary transformations, and so is also defined from the product measure in the eigenstate basis as
\[ \prod_\mu d\xi^*_\mu\, d\xi^\mu = d\xi^\dagger d\xi . \tag{C8} \]

In this geometric representation, the thermal density (C3) becomes
\[ \rho_\beta = \frac{\mathrm{Det}\left( K + I \right)}{Z \pi^M} \int d\xi^\dagger d\xi\, e^{-\xi^\dagger K \xi}\, |\xi\rangle \langle \xi| , \tag{C9} \]
while the partition function is the ratio of determinants
\[ Z = \frac{\mathrm{Det}\left( K + I \right)}{\pi^M} \int d\xi^\dagger d\xi\, e^{-\xi^\dagger K \xi} = \frac{\mathrm{Det}\left( K + I \right)}{\mathrm{Det}\, K} . \tag{C10} \]
The eigenstate occupation numbers are eigenvalues of the diagonal matrix $K^{-1}$, which thus defines a basis-independent mean number matrix
\[ \bar n \equiv K^{-1} . \tag{C11} \]
The thermal entropy (C6) then has a representation which is manifestly invariant under unitary transformation of the $\xi$ basis:
\[ S_{\rho_\beta} = \mathrm{Tr}\left[ \left( \bar n + 1 \right) \log\left( \bar n + 1 \right) - \bar n \log \bar n \right] . \tag{C12} \]

It is immediately apparent that thermal density matrices are a proper subset of the Gaussian-coherent density matrices, and that Equations (C9)-(C12) hold for a general kernel matrix $K$ with all positive eigenvalues. In particular, the trace form for the entropy can always be reduced to a sum over marginals of the eigenvalues of $K$, which need not be those of any Hamiltonian. This property of Gaussian-coherent representations will furnish a very easy way to impose conditions of heterogeneous temperature on system components whose basis states do not diagonalize the fully interacting Hamiltonian.
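The determinant and trace forms are easy to verify numerically. The following sketch (not from the paper; the frequencies and the random unitary are made-up illustrations) checks the partition-function ratio (C10) and the basis invariance of the entropy (C12):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.7
E = np.array([1.0, 2.0, 3.5])           # made-up eigenfrequencies
K = np.diag(np.exp(beta * E) - 1.0)     # thermal kernel eigenvalues

# C10: partition function as a ratio of determinants, versus the
# direct mode-by-mode geometric sums over occupation numbers
Z = np.linalg.det(K + np.eye(3)) / np.linalg.det(K)
Z_direct = np.prod(1.0 / (1.0 - np.exp(-beta * E)))
assert np.isclose(Z, Z_direct)

# C11: mean number matrix, here diagonal; C6: entropy from marginals
nbar = 1.0 / np.diag(K)
S_marg = np.sum((nbar + 1) * np.log(nbar + 1) - nbar * np.log(nbar))

# C12: the trace form is invariant under a random unitary rotation
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
w = np.linalg.eigvalsh(Q @ np.diag(nbar) @ Q.conj().T)
S_rot = np.sum((w + 1) * np.log(w + 1) - w * np.log(w))
assert np.isclose(S_marg, S_rot)
```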

APPENDIX D: GAUSSIAN COHERENT REPRESENTATIONS GIVE THERMAL MARGINALS

A property of Gaussian-coherent densities, which will be useful in the analysis of the model engine, is that all of their marginals are exactly thermal, under any basis related to the eigenstates of $K$ by unitary transformation. This will lead to surprising ways of hiding order when the unitary transformations are generated by the system's own dynamical evolution.

To prove this result, define a basis-independent raising operator $a^\dagger_\sigma \equiv a^\dagger \sigma \equiv a^\dagger_\mu \sigma^\mu$, in terms of an arbitrary complex vector $\sigma$ normalized to $\sigma^\dagger \sigma = 1$. The conjugate lowering operator is $a_\sigma \equiv \sigma^\dagger a \equiv \sigma^*_\mu a^\mu$, and the number operator for excitations along the $\sigma$ direction is $n_\sigma \equiv a^\dagger_\sigma a_\sigma$. Using the representation (A16) for the coherent state $|\xi\rangle$, the mean $\sigma$-excitation number in density $\rho_\beta$ evaluates simply to
\[ \mathrm{Tr}\left( \rho_\beta n_\sigma \right) = \frac{\mathrm{Det}\, K}{\pi^M} \int d\xi^\dagger d\xi\, e^{-\xi^\dagger K \xi} \left| \sigma^\dagger \xi \right|^2 = \sigma^\dagger K^{-1} \sigma , \tag{D1} \]
the generalization of Eq. (C5).
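Eq. (D1) can be checked by sampling the Gaussian weight directly. This Monte Carlo sketch (not from the paper; the kernel values and $\sigma$ are made up) estimates the mean $\sigma$-excitation number and compares it to $\sigma^\dagger K^{-1} \sigma$:

```python
import numpy as np

rng = np.random.default_rng(3)
K = np.diag([0.8, 1.5, 3.0])                   # made-up kernel eigenvalues
s = rng.normal(size=3) + 1j * rng.normal(size=3)
s /= np.linalg.norm(s)                         # sigma† sigma = 1

# sample the normalized weight (Det K / pi^M) e^{-xi† K xi}: per mode,
# xi_mu is a complex Gaussian with <|xi_mu|^2> = 1 / K_mu
std = np.sqrt(0.5 / np.diag(K))
xi = std * (rng.normal(size=(200_000, 3)) + 1j * rng.normal(size=(200_000, 3)))

mc = np.mean(np.abs(xi @ s.conj()) ** 2)       # Monte Carlo <|sigma† xi|^2>
exact = (s.conj() @ np.linalg.inv(K) @ s).real # D1: sigma† K^{-1} sigma
assert np.isclose(mc, exact, rtol=0.05)
```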

Meanwhile, the marginal probability of exactly $n_\sigma$ excitations extracts only $n_\sigma$ powers of the $\sigma$ component of $\xi$, generalizing Eq. (C4). A weight factor is added to the Gaussian kernel from the normalization of the $\sigma$ component of $|\xi\rangle$, which cancels against the polynomial sum in all other orthogonal components. The resulting expectation is an elementary Gaussian integral generalizing the Gamma function of the scalar case:
\[ P_\rho(n_\sigma) = \frac{\mathrm{Det}\, K}{\pi^M n_\sigma!} \int d\xi^\dagger d\xi\, e^{-\xi^\dagger K \xi - |\sigma^\dagger \xi|^2} \left| \sigma^\dagger \xi \right|^{2 n_\sigma} = \frac{\mathrm{Det}\, K}{\mathrm{Det}\left( K + \sigma\sigma^\dagger \right)} \left[ \sigma^\dagger \left( K + \sigma\sigma^\dagger \right)^{-1} \sigma \right]^{n_\sigma} . \tag{D2} \]

Its important property is that $P_\rho(n_\sigma)$ is properly normalized, while the ratio at different values of $n_\sigma$ is a power of $\sigma^\dagger \left( K + \sigma\sigma^\dagger \right)^{-1} \sigma$, making the distribution exponential in $n_\sigma$, or thermal. It is unnecessary to evaluate the more complex matrix inverse $\left( K + \sigma\sigma^\dagger \right)^{-1}$, as it is related to the mean excitation number by
\[ \frac{ \sigma^\dagger \left( K + \sigma\sigma^\dagger \right)^{-1} \sigma }{ 1 - \sigma^\dagger \left( K + \sigma\sigma^\dagger \right)^{-1} \sigma } = \sigma^\dagger K^{-1} \sigma . \tag{D3} \]
Similarly, the normalization of the marginal distribution has the simple evaluation
\[ \frac{\mathrm{Det}\, K}{\mathrm{Det}\left( K + \sigma\sigma^\dagger \right)} = \frac{1}{1 + \sigma^\dagger K^{-1} \sigma} . \tag{D4} \]
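Both relations are instances of standard rank-one update identities: (D3) follows from the Sherman-Morrison inverse and (D4) from the matrix determinant lemma. A numerical check (with a made-up diagonal $K$ and random $\sigma$, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
K = np.diag(rng.uniform(0.5, 3.0, size=M))      # made-up positive kernel
s = rng.normal(size=M) + 1j * rng.normal(size=M)
s /= np.linalg.norm(s)                           # sigma† sigma = 1

Kup = K + np.outer(s, s.conj())                  # rank-one update
y = (s.conj() @ np.linalg.inv(Kup) @ s).real
x = (s.conj() @ np.linalg.inv(K) @ s).real

# D3: Sherman-Morrison relates the updated inverse to sigma† K^{-1} sigma
assert np.isclose(y / (1 - y), x)

# D4: the matrix determinant lemma gives the marginal normalization
detratio = (np.linalg.det(K) / np.linalg.det(Kup)).real
assert np.isclose(detratio, 1 / (1 + x))

# the thermal marginal P(n_sigma) = detratio * y**n_sigma sums to one
assert np.isclose(detratio / (1 - y), 1.0)
```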

APPENDIX E: EXAMPLES OF STATE PREPARATION

Gaussian-coherent densities provide a flexible class within which to combine maximum ignorance with various constraints. The model in the text will only explore heterogeneous-temperature constraints, but it is worth a tiny digression to show how coherence could also be introduced. This has applications to what transducers do in real thermoacoustic systems, but serves even more to emphasize the role of measurement in the definition of classical states, by means of projection.

1. Stern-Gerlach projections

One way to introduce dynamics and asymmetry into an initially equilibrated engine/reservoir system is to mechanically excite the engine, and allow it to pump the reservoirs to unequal potentials. If this were done with a transducer in the thermoacoustic example, it could be modeled in the densities of the last section by inserting a linear combination of 1 and a projection $\delta^2\!\left( \xi^m - \bar\xi^m \right)$, for an arbitrarily chosen component $m$ and coherent initial condition $\bar\xi^m$.

This is a remarkably simple way to combine a constraint of partial coherence with a previous constraint of fixed mean energy, preserving maximum ignorance otherwise. It corresponds roughly to what is done in a Stern-Gerlach experiment. A thermal maximum-ignorance ensemble is prepared by an oven, whether of electrons with spin and momentum, or an engine/reservoir system thermalized uniformly. A measuring apparatus is used to distinguish only a subset of the degrees of freedom (momentum or the value of $\xi^m$), reliably or stochastically. The original ensemble is then culled according to the measured values, to produce a new ensemble in which the only reduction in ignorance comes from the explicit constraint afforded by the measurement. This serves as a paradigm for the procedure by which any classical state is related to an explicitly prescribed set of coarse-grained measurements.

2. Product-thermal initial conditions

Just as all marginal distributions from a Gaussian-coherent density are thermal, arbitrary heterogeneous thermal initial conditions can be imposed with such a density, in any basis related by unitary transformation to the eigenstate basis. For this appendix, suppose that the coordinates $\xi$ define the Gaussian integral for the raising operators of the model in Sec. VI.

In the model it was possible to choose a “preparation Hamiltonian” (11) whose eigenstates were products of eigenstates in Left and Right sectors, denoted $L$ and $R$. Each sector included all of the excitations in its respective reservoir, and a sector-unique linear combination of the excitations in the engine. With respect to this decomposition, write the column vector of complex $\xi$ coefficients
\[ \xi = \begin{bmatrix} \xi^L \\ \xi^R \end{bmatrix} . \tag{E1} \]
This basis decomposition is related by an orthogonal transformation to the $\{x, S, y, A\}$ basis of eigenstate excitations of the fully interacting engine.

For the subset of coefficients $\xi^L$, a standard thermal density matrix is given by
\[ \rho^L_{\beta_L} = \frac{\mathrm{Det}\left( K^L + I \right)}{Z^L \pi^{M_L}} \int d\xi^{L\dagger} d\xi^L\, e^{-\xi^{L\dagger} K^L \xi^L} \left| \xi^L \right\rangle \left\langle \xi^L \right| , \tag{E2} \]


in terms of inverse temperature $\beta_L$, per Eq. (C9). The corresponding density for $\xi^R$ is given in terms of a $\beta_R$ as
\[ \rho^R_{\beta_R} = \frac{\mathrm{Det}\left( K^R + I \right)}{Z^R \pi^{M_R}} \int d\xi^{R\dagger} d\xi^R\, e^{-\xi^{R\dagger} K^R \xi^R} \left| \xi^R \right\rangle \left\langle \xi^R \right| . \tag{E3} \]
Coherent states for the full system are products of coherent states for the $L$ and $R$ factors, by application of the Binomial theorem to Eq. (A16). Thus,
\[ |\xi\rangle = \left| \xi^L, \xi^R \right\rangle , \tag{E4} \]
and the product density, as long as the two sectors are decoupled, is simply
\[ \rho = \rho^L_{\beta_L}\, \rho^R_{\beta_R} . \tag{E5} \]

The product (E5) is itself a Gaussian integral over the states (E4). If the sector coefficients are reassembled into the column vector (E1), the kernel of that integral has the block-diagonal form
\[ K = \begin{bmatrix} K^L & \\ & K^R \end{bmatrix} . \tag{E6} \]

As long as the system is evolving under the preparation Hamiltonian, the phases of the separate $\xi^L$ and $\xi^R$ cancel from their respective Gaussian integrals, because $K^L$ and $K^R$ are diagonal in the sector eigenbases. Thus the time index need not be specified explicitly for either the factor or product densities to be well-defined.

To use the density (E5) to specify the same distribution at other times, suppose that interactions are turned on at some time labeled 0. Whatever external couplings may have been used to produce thermal distributions in the $L$ and $R$ factors (by ergodic evolution or however) should have been removed anytime prior to 0. Then take the coefficient vector $\xi$ to be the explicit coherent-state vector $\xi_0$ of Eq. (A22). The evolving density specified by these heterogeneous initial conditions is defined at all later times by taking the Gaussian integral over $\xi_0$, with $K$ fixed, and allowing the coherent states $|\xi\rangle$ to evolve as in the Schrödinger picture. This is how a fixed uncertainty specifies a distribution on which measurements may be made over an indefinite time interval.

Alternatively, something like the Heisenberg picture may be adopted, by taking the Gaussian integral over the same coefficients $\xi_t$ used to index the coherent states. The preparation basis is related to the eigenbasis by an orthogonal transformation, and the eigenbasis evolves under the diagonal time-evolution matrix. Thus the preparation basis at time 0 is related to the eigenbasis at any other time by a unitary transformation, and the measure is invariant under these transformations. Thus the only change in the Gaussian integral is by similarity transform of the kernel $K$, from the block-diagonal form (E6) at time 0 to whatever basis is desired at time $t$. The inverse matrix $\bar n_t$ evolves under the identical similarity transform, worked out for the examples of interest in the text in Eq. (20).
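This Heisenberg-style bookkeeping can be sketched numerically. The following toy stand-in (two modes, with made-up frequencies, rotation angle, and sector kernels; not the actual model of Sec. VI) evolves the mean number matrix by the similarity transform and confirms that the total mean excitation is conserved while individual components oscillate:

```python
import numpy as np

# hypothetical two-mode example: an orthogonal map O between the
# preparation basis and the eigenbasis, eigenfrequencies E, and a
# block-diagonal kernel K0 encoding unequal sector temperatures
E = np.array([1.0, 1.3])
theta = 0.4
O = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
K0 = np.diag([0.5, 2.0])

def n_at(t):
    # evolution operator expressed in the preparation basis
    U = O.T @ np.diag(np.exp(1j * E * t)) @ O
    # similarity transform of the mean number matrix n = K0^{-1}
    return U @ np.linalg.inv(K0) @ U.conj().T

n0, nt = n_at(0.0), n_at(2.7)
# the unitary evolution redistributes occupation between components...
assert not np.isclose(nt[0, 0].real, n0[0, 0].real)
# ...but conserves the total mean excitation number
assert np.isclose(np.trace(nt).real, np.trace(n0).real)
```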

[1] Ref from Guy’s paper; W. Fontana, “The arrival of thefittest”; see also G. Hoelzer, J. Pepper, and E. Smith,“On the logical relation between Self Organization andNatural Selection”, in preparation.

[2] J. Holland, Emergence; find appropriate references to theCS approach therein, so I don’t have to cite Wolfram.

[3] E. F. Schrodinger, What is Life?

[4] H. J.Morowitz, Energy flow in biology

[5] E. Smith, “Generalizing the notion of emergence”, in pro-ceedings of the, J. P. Crutchfield and C. R. Shalizi eds.

[6] Belousov and Zhabotinsky,[7] P. Bak, Tang, and Weisenfeld,[8] M. Gell-Mann and S. Lloyd, Complexity

[9] Nicolis and I. Prigogine,[10] E. Smith, “Carnot’s theorem as Noether’s theorem for

thermoacoustic engines”, Phys. Rev. E58, 1999[11] E. Smith, “Statistical mechanics of self-driven Carnot cy-

cles”, Phys. Rev. E60, 1998[12] is this C. F. Bennet? See also M. Gell-Mann, The quark

and the jaguar (if it is in there)[13] K. Huang, Statistical mechanics

[14] P. Ceperley, J. Acoust. Soc. Am. (three articles); For atutorial overview of the closely related irreversible ther-moacoustic engines, see G. W. Swift, “Thermoacoust en-gines”, J. Acoust. Soc. Am.

[15] C. R. Shalizi, PhD Disssertation, 2001.[16] Purcell, Electromagnetism See Ch. for LC circuits, and

Ch. for the back-EMF induced in electric motors.[17] J. G. Polchinski, “Renormalization and the Fermi sur-

face”,[18] L. Stryer, Biochemistry

[19] Get ref. Thanks to A. Goel for this example.[20] Thanks to H. Morowitz for this example, and the related

observations on photochemistry that follow. The generaldiscussion of intermediary metabolism and its role in theorigin of life is developed in H. J. Morowitz, Beginnings

of Cellular Life

[21] E. T. Jaynes, , and references therein[22] D. Park, Quantum Mechanics or whatever it’s called, for

all this stuff.[23] S. Weinberg, The quantum theory of fields, Vol. I