
ERGODIC THEORY OF THE BURGERS EQUATION

YURI BAKHTIN

    1. Introduction

The ultimate goal of these lectures is to present the ergodic theory of the Burgers equation with random force in the noncompact setting. Developing such a theory has been considered a hard problem since the first publication on the randomly forced inviscid Burgers equation [EKMS00]. It was solved in a recent work [BCK] for forcing of Poissonian type.

The Burgers equation is a basic fluid dynamics model, and the main motivation for the study of ergodicity for the Burgers equation probably comes from statistical hydrodynamics, where one is interested in the description of statistically steady regimes of fluid flows. It can also be interpreted as a growth model, and the main idea of [BCK] is to look at the Burgers equation as a model of last-passage percolation type. This allowed us to use various tools from the theory of first- and last-passage percolation.

The first several sections play an introductory role. We begin with an introduction to stochastic stability in Section 2. In Section 3 we briefly discuss the progress in the ergodic theory of another important hydrodynamics model, the Navier-Stokes system with random force. Section 4 is an introduction to the Burgers equation. Section 5 is a discussion of the ergodic theory of the Burgers equation with random force in the compact setting. In Section 6 we introduce the Poissonian forcing for the Burgers equation. In Section 7 we state the ergodic results from [Bak12] in the quasi-compact setting. In Section 8 we state the main results, and the proof is given in Sections 9 through 13. In Section 14 we give some concluding remarks.

Although many of the proofs given here are detailed and rigorous, we often give only the ideas behind the proof, referring the reader to the details in [BCK].

    2. Stability in stochastic dynamics

We begin with a very simple example, a deterministic linear dynamical system with one stable fixed point.

A discrete dynamical system is given by a transformation $f$ of a phase space $X$. For our example, we take the phase space $X$ to be the real line $\mathbb{R}$ and define the transformation $f$ by $f(x) = ax$, $x \in \mathbb{R}$, where $a$ is a real number between 0 and 1. To any point $x \in X$ one can associate its forward orbit $(x_n)_{n=0}^{\infty}$, a sequence of points obtained from $x_0 = x$ by iterations of the map $f$, i.e., $x_n = f(x_{n-1})$ for all

Lecture notes for the Summer School on Probability and Statistical Physics in St. Petersburg, June 2012. The author was partially supported by NSF CAREER Award DMS-0742424.


$n \in \mathbb{N}$:

\[
\begin{aligned}
x_0 &= x = f^0(x), \\
x_1 &= f(x_0) = f^1(x), \\
x_2 &= f(x_1) = f \circ f(x) = f^2(x), \\
x_3 &= f(x_2) = f \circ f \circ f(x) = f^3(x), \\
&\;\;\vdots
\end{aligned}
\]

A natural question in the theory of dynamical systems is the behavior of the forward orbit $(x_n)_{n=0}^{\infty}$ as $n \to \infty$, where $n$ plays the role of time. In other words, we may be interested in what happens to the initial condition $x$ in the long run under the evolution defined by the map $f$. In our simple example, the analysis is straightforward.

Namely, zero is the unique fixed point of the transformation: $f(0) = 0$, and since $x_n = a^n x$, $n \in \mathbb{N}$, and $a \in (0,1)$, we conclude that as $n \to \infty$, $x_n$ converges to that fixed point exponentially fast. Therefore, 0 is a stable fixed point, or a one-point global attractor for the dynamical system $(\mathbb{R}, f)$, i.e., its domain of attraction coincides with $\mathbb{R}$. So, due to the contraction that is present in the map $f$, there is a fast loss of memory in the system: no matter what the initial condition is, it gets forgotten in the long run, and the points $x_n = f^n(x)$ approach the stable fixed point 0 as $n \to \infty$.
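As a quick numerical illustration (a sketch, not from the notes; the parameter values $a = 0.5$ and $x_0 = 10$ are arbitrary choices), the exponential convergence of the forward orbit to the fixed point 0 can be observed directly:

```python
def orbit(x0, a, n_steps):
    """Forward orbit (x_0, x_1, ..., x_{n_steps}) of the map f(x) = a*x."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(a * xs[-1])
    return xs

# with a in (0, 1) the orbit decays geometrically: x_n = a**n * x_0
xs = orbit(10.0, 0.5, 20)
```

After 20 steps the orbit is already within $10 \cdot 2^{-20} \approx 10^{-5}$ of the fixed point.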

Notice that so far we have always assumed that the evolution begins at time 0, but the picture would not change if we assumed that the evolution begins at any other starting time $n_0 \in \mathbb{Z}$. In fact, since the map $f$ is invertible in our example, the full (two-sided) orbit $(x_n)_{n \in \mathbb{Z}} = (f^n x)_{n \in \mathbb{Z}}$ indexed by $\mathbb{Z}$ is well-defined for any $x \in \mathbb{R}$.

Let us now modify the dynamical system of our first example a little by adding noise, i.e., a random perturbation that will kick the system out of equilibrium. Let us consider some probability space $(\Omega, \mathcal{F}, \mathsf{P})$ rich enough to support a sequence $(\xi_n)_{n \in \mathbb{Z}}$ of independent Gaussian random variables with mean 0 and variance $\sigma^2$. For every $n \in \mathbb{Z}$ we will now define a random map $f_{n,\omega}: \mathbb{R} \to \mathbb{R}$ by

\[
f_{n,\omega}(x) = ax + \xi_n(\omega).
\]

This model is known as an autoregressive model of order 1 (AR(1)).

A natural analogue of a forward orbit from our first example would be a stochastic process $(X_n)_{n \ge n_0}$ emitted at time $n_0$ from a point $x$, i.e., satisfying $X_{n_0} = x$ and, for all $n \ge n_0 + 1$,

\[
X_n = aX_{n-1} + \xi_n. \tag{1}
\]
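A sample trajectory of this random dynamics is easy to simulate (a minimal sketch with assumed parameter values; `sample_path` is a hypothetical helper, not notation from the notes):

```python
import random

def sample_path(x, n0, n, a=0.5, sigma=1.0, seed=0):
    """Trajectory (X_{n0}, ..., X_n) of X_k = a*X_{k-1} + xi_k started from x."""
    rng = random.Random(seed)
    path = [x]
    for _ in range(n0 + 1, n + 1):
        path.append(a * path[-1] + rng.gauss(0.0, sigma))
    return path

path = sample_path(3.0, 0, 50)  # X_0 = 3, run up to time 50
```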

We want to describe the long-term behavior of the resulting random dynamical system and study its stability properties. However, this is not as straightforward as in the deterministic case. It is clear that there is no fixed point that serves all maps $f_{n,\omega}$ at the same time. The solution of the equation $f_{n,\omega}(x) = x$ for some $n$ may be irrelevant for all other values of $n$. Still, the system exhibits a pull from infinity towards the origin, and contraction. In fact,

\[
|f_{n,\omega}(x) - f_{n,\omega}(y)| = |(ax + \xi_n) - (ay + \xi_n)| = a|x - y|, \quad x, y \in \mathbb{R},
\]


and for the random map $\Phi^{\omega}_{m,n}$ corresponding to the random dynamics between times $m$ and $n$ and defined by

\[
\Phi^{\omega}_{m,n}(x) = f_{n,\omega} \circ f_{n-1,\omega} \circ \dots \circ f_{m+2,\omega} \circ f_{m+1,\omega}(x),
\]

we obtain

\[
|\Phi^{\omega}_{m,n}(x) - \Phi^{\omega}_{m,n}(y)| = a^{n-m}|x - y|. \tag{2}
\]
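The contraction (2) can be checked numerically: two different initial points, driven by the same realization of the noise, approach each other at exactly the deterministic rate $a^{n-m}$, since the noise cancels in the difference. (A sketch; the parameter values are assumptions.)

```python
import random

def compose_maps(x, noises, a=0.5):
    """Apply f_{m+1,omega}, ..., f_{n,omega} to x for a fixed noise sequence."""
    for xi in noises:
        x = a * x + xi
    return x

rng = random.Random(1)
noises = [rng.gauss(0.0, 1.0) for _ in range(30)]  # one fixed noise realization

a = 0.5
d = abs(compose_maps(10.0, noises, a) - compose_maps(-10.0, noises, a))
# the noise cancels in the difference, so d = a**30 * |10 - (-10)|
```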

The contraction established above should imply some kind of stability. In the random dynamics setting there are several nonequivalent notions of fixed points and their stability.

One way to deal with stability is at the level of distributions. We notice that due to the i.i.d. property of the sequence $(\xi_n)$, the process $X_n$ defined above is a homogeneous Markov process with one-step transition probability

\[
P(x, A) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_A e^{-\frac{(y - ax)^2}{2\sigma^2}}\, dy.
\]
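For sets of the form $A = (-\infty, b]$, this transition probability is just the distribution function of the Gaussian law $N(ax, \sigma^2)$ evaluated at $b$, computable via the error function (a sketch; the parameter values $a = 0.5$, $\sigma = 1$ are assumptions):

```python
import math

def transition_cdf(x, b, a=0.5, sigma=1.0):
    """P(x, (-inf, b]): the chain moves from x to a*x plus N(0, sigma^2) noise."""
    return 0.5 * (1.0 + math.erf((b - a * x) / (sigma * math.sqrt(2.0))))

p = transition_cdf(0.0, 0.0)  # from x = 0 the next state is N(0, 1), so p = 1/2
```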

If instead of a deterministic initial condition $x$, say, at time 0 we have a random initial condition $X_0$ independent of $(\xi_n)_{n \ge 1}$ and distributed according to a distribution $\mu_0$, then the distribution of $X_1$ is given by

\[
\mu_1(A) = \int_{x \in \mathbb{R}} \mu_0(dx)\, P(x, A),
\]

and a probability measure $\mu$ on $\mathbb{R}$ is called invariant or stationary if

\[
\mu(A) = \int_{x \in \mathbb{R}} \mu(dx)\, P(x, A)
\]

for all Borel sets $A$. If the dynamics is initiated with an initial value distributed according to an invariant distribution, then the resulting process $(X_n)$ is stationary, i.e., its distribution is invariant under time shifts.

Studying the stability of a Markov dynamical system at the level of distributions involves identifying the invariant distributions and establishing convergence of the distribution of $X_n$ to one of the stationary ones.

In our example, if $X_0$ is independent of $\xi_1$ and normally distributed with zero mean and variance $D$, then $X_1 = aX_0 + \xi_1$ is also centered Gaussian, with variance $a^2 D + \sigma^2$. So the distributions at time 0 and time 1 coincide if $a^2 D + \sigma^2 = D$, i.e., $D = \sigma^2/(1 - a^2)$. Therefore, the centered Gaussian distribution with variance $\sigma^2/(1 - a^2)$ is invariant and gives rise to a stationary process. There are several ways to establish the uniqueness of this invariant distribution, e.g., the celebrated coupling method introduced by Doeblin in the 1930s, which also allows one to prove that for any deterministic initial data, the distribution of $X_n$ converges exponentially fast, in total variation, to the unique invariant distribution as $n \to \infty$.
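Both the fixed-point computation and the convergence toward the invariant distribution can be checked numerically (a rough sketch; the parameter values and the sample size are arbitrary choices):

```python
import random

a, sigma = 0.5, 1.0
D = sigma ** 2 / (1.0 - a ** 2)  # variance of the invariant Gaussian law

# one step of the dynamics maps N(0, D) to N(0, a^2 * D + sigma^2) = N(0, D)
assert abs(a ** 2 * D + sigma ** 2 - D) < 1e-12

# crude Monte Carlo check: the empirical second moment along one long
# trajectory approaches D = sigma^2 / (1 - a^2) = 4/3
rng = random.Random(0)
x, acc, n = 0.0, 0.0, 200_000
for _ in range(n):
    x = a * x + rng.gauss(0.0, sigma)
    acc += x * x
empirical_variance = acc / n
```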

Another way to approach stability is by studying random attractors. Let us convince ourselves that in our example the random attractor contains only one point, and that point is a global solution $(X_n)_{n \in \mathbb{Z}}$ defined by

\[
X_n = X(\xi_n, \xi_{n-1}, \xi_{n-2}, \dots) = \xi_n + a\xi_{n-1} + a^2 \xi_{n-2} + \dots.
\]
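One can check numerically that a truncation of this series satisfies the recursion (1) up to rounding (a sketch; the index convention `xi[k]` standing for $\xi_{n-k}$ is an assumption made for this illustration):

```python
import random

a = 0.5
rng = random.Random(2)
xi = [rng.gauss(0.0, 1.0) for _ in range(60)]  # xi[k] plays the role of xi_{n-k}

def X(shift):
    """Truncated series for X_{n - shift} = sum_k a**k * xi_{n - shift - k}."""
    return sum(a ** k * xi[shift + k] for k in range(len(xi) - shift))

# the recursion X_n = a * X_{n-1} + xi_n holds exactly for the
# consistently truncated sums, so err is pure floating-point noise
err = abs(X(0) - (a * X(1) + xi[0]))
```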


Clearly, this series converges with probability 1. Moreover, for any $n \in \mathbb{Z}$,

\[
\begin{aligned}
aX_{n-1} + \xi_n &= \xi_n + a(\xi_{n-1} + a\xi_{n-2} + a^2\xi_{n-3} + \dots) \\
&= \xi_n + a\xi_{n-1} + a^2\xi_{n-2} + a^3\xi_{n-3} + \dots \\
&= X_n,
\end{aligned}
\]

and $(X_n)_{n \in \mathbb{Z}}$ is indeed a two-sided orbit, i.e., a global (in time) solution of equation (1) defined on the entire $\mathbb{Z}$. Notice that $X_n$ is a functional of the history of the noise process up to time $n$. Since the noise process is stationary, the thus constructed $(X_n)_{n \in \mathbb{Z}}$ is also a stationary process. In fact, $X_n$ is centered Gaussian with variance

\[
\sigma^2 + a^2\sigma^2 + a^4\sigma^2 + \dots = \sigma^2/(1 - a^2),
\]

which confirms our previous computation of the invariant distribution. Let us now interpret the global solution $(X_n)_{n \in \mathbb{Z}}$ as a global attractor. We know from the contraction estimate (2) that for any $x \in \mathbb{R}$,

\[
|\Phi^{\omega}_{m,n}(x) - X_n| = |\Phi^{\omega}_{m,n}(x) - \Phi^{\omega}_{m,n}(X_m)| = a^{n-m}|x - X_m|. \tag{3}
\]

Using the stationarity of $X_m$ and applying integration by parts, we see that $\sum_m \mathsf{P}\{|X_m| > a^{-|m|/2}\} \le \sum_m a^{|m|/2}\, \mathsf{E}|X_0| < \infty$, so that, by the Borel-Cantelli lemma, with probability 1 we have $|X_m| \le a^{-|m|/2}$ for all but finitely many $m$, and the right-hand side of (3) converges to 0 almost surely as $m \to -\infty$.
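The pullback picture behind (3) can also be illustrated numerically: for one fixed noise realization, start the dynamics from the same point at earlier and earlier times $m$ and look at the value at time 0; by (3) these values settle down as $m \to -\infty$. (A sketch with assumed parameter values.)

```python
import random

a = 0.5
rng = random.Random(3)
xi = {n: rng.gauss(0.0, 1.0) for n in range(-40, 1)}  # fixed noise xi_n, n <= 0

def pullback(x, m):
    """Run X_n = a*X_{n-1} + xi_n from X_m = x up to time 0."""
    for n in range(m + 1, 1):
        x = a * x + xi[n]
    return x

# starting 30 steps earlier changes the value at time 0 only by
# a**10 times a moderate factor, so the two values nearly coincide
gap = abs(pullback(0.0, -10) - pullback(0.0, -40))
```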


and, moreover, introduce a natural distribution on these points, called the sample measure associated to the history of the forcing; see [Cra08], [LY88], and [CSG11].

Notice that the invariant distribution in our example results from a balance between two factors. One factor is the decay, or dissipation, due to the contractive dynamics. In the absence of randomness the system would simply equilibrate to the stable fixed point. The second one is the random forcing that keeps the system from resting at equilibrium, and the stronger that influence is, the more spread out the resulting stationary distribution becomes. This mechanism is typical for many physical systems.

One can loosely define ergodic theory as the study of statistical patterns in dynamical systems in stationary regimes. One of the basic questions of the ergodic theory of a dynamical system is the description of its stationary regimes. In fact, taking measurements of a system and averaging them over time makes sense only in a stationary regime. Moreover, different stationary regimes may produce different limiting values for these averages, thus making it an important task to characterize all stationary regimes for a system.

    The main