jl305/4a2/lecturenotes2.pdf · 2019. 11. 8.
Materials from
LeVeque R. J. 2002. Finite Volume Methods for Hyperbolic Problems, Cambridge
University Press.
Hirsch C. 1988-1990 Numerical Computation of Internal and External Flows, Volumes
1 and 2, Wiley.
1 Conservation Laws
The simplest example of a one-dimensional conservation law is the partial differential equation (PDE)

q_t + [f(q)]_x = 0,   (1.1)

where q(x, t) is a vector of m conserved quantities and f(q) is the flux function. This can be rewritten in the quasilinear form

q_t + f'(q) q_x = 0,   (1.2)

where the Jacobian matrix f'(q) satisfies certain conditions (the hyperbolicity conditions stated in Section 3). For example, the one-dimensional Euler equations are

q = (ρ, ρu, ρE)^T,   f(q) = (ρu, ρu^2 + p, (ρE + p)u)^T.   (1.3)
To develop high-resolution methods for the Euler equations, one can start from a one-
dimensional scalar linear advection equation and extend the method in the following
steps:
1. The first-order upwind method for a one-dimensional scalar equation.
2. Second-order methods for a one-dimensional scalar equation.
3. High-order (TVD) methods for a one-dimensional scalar equation.
4. One-dimensional linear hyperbolic systems.
5. One-dimensional nonlinear hyperbolic systems (the Euler equations).
6. Two-dimensional nonlinear hyperbolic systems (the two-dimensional Euler equations) on Cartesian meshes using directional splitting operators.
7. Two-dimensional nonlinear hyperbolic systems (the two-dimensional Euler equations) on curvilinear meshes.
2 Finite Volume Methods
Figure 1: Illustration of a finite volume method for updating the cell average Q_i^n by fluxes at the cell edges. Shown in x-t space.
In one space dimension, a finite volume method is based on subdividing the spatial
domain into intervals (the finite volumes, also called grid cells) and keeping track of
an approximation to the integral of q over each of these volumes. In each time step
we update these values using approximations to the flux through the endpoints of the
intervals.
Denote the i-th grid cell by

C_i = (x_{i-1/2}, x_{i+1/2}),   (2.4)

as shown in Fig. 1. The value Q_i^n will approximate the average value over the i-th interval at time t_n:

Q_i^n ≈ (1/Δx) ∫_{x_{i-1/2}}^{x_{i+1/2}} q(x, t_n) dx ≡ (1/Δx) ∫_{C_i} q(x, t_n) dx,   (2.5)
where Δx = x_{i+1/2} − x_{i-1/2} is the length of the cell. For simplicity we will generally
assume a uniform grid, but this is not required.
The integral form of the conservation law gives

(d/dt) ∫_{C_i} q(x, t) dx = f(q(x_{i-1/2}, t)) − f(q(x_{i+1/2}, t)).   (2.6)

Integrating (2.6) in time from t_n to t_{n+1} yields

∫_{C_i} q(x, t_{n+1}) dx − ∫_{C_i} q(x, t_n) dx = ∫_{t_n}^{t_{n+1}} f(q(x_{i-1/2}, t)) dt − ∫_{t_n}^{t_{n+1}} f(q(x_{i+1/2}, t)) dt.
Rearranging this and dividing by Δx gives

(1/Δx) ∫_{C_i} q(x, t_{n+1}) dx = (1/Δx) ∫_{C_i} q(x, t_n) dx − (1/Δx) [ ∫_{t_n}^{t_{n+1}} f(q(x_{i+1/2}, t)) dt − ∫_{t_n}^{t_{n+1}} f(q(x_{i-1/2}, t)) dt ].   (2.7)
This suggests that we should study numerical methods of the form

Q_i^{n+1} = Q_i^n − (Δt/Δx) (F_{i+1/2}^n − F_{i-1/2}^n),   (2.8)

where F_{i-1/2}^n is some approximation to the average flux along x = x_{i-1/2}:

F_{i-1/2}^n ≈ (1/Δt) ∫_{t_n}^{t_{n+1}} f(q(x_{i-1/2}, t)) dt.   (2.9)
If we can approximate this average flux based on the values Qn, then we will have a fully
discrete method. See Fig. 1 for a schematic of this process.
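As a sketch, the fully discrete update (2.8) can be written in a few lines. The names `fv_update` and `flux` are illustrative, not part of the notes, and periodic boundaries are assumed for simplicity; concrete flux choices are developed in the sections below.

```python
def fv_update(Q, flux, dt, dx):
    """One step of the flux-differencing form (2.8):
    Q_i^{n+1} = Q_i^n - dt/dx * (F_{i+1/2} - F_{i-1/2}).
    Q is a list of cell averages with periodic boundaries;
    flux(q_left, q_right) approximates the average flux at an interface."""
    n = len(Q)
    # F[i] approximates the flux at interface x_{i-1/2}
    F = [flux(Q[(i - 1) % n], Q[i]) for i in range(n)]
    return [Q[i] - dt / dx * (F[(i + 1) % n] - F[i]) for i in range(n)]
```

For example, passing `flux=lambda ql, qr: u * ql` (with u > 0) recovers the upwind method of the next section.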
2.1 The Upwind Method for Advection
For the constant-coefficient advection equation q_t + u q_x = 0 (with u > 0), Fig. 2(a) indicates that the flux through the left edge of the cell is entirely determined by the value Q_{i-1}^n in the cell to the left. This suggests defining the numerical flux as
F_{i-1/2}^n = u Q_{i-1}^n.   (2.10)

Figure 2: Characteristics for the advection equation, showing the information that flows into cell C_i during a single time step. (a) For a small enough time step, the flux at x_{i-1/2} depends only on the values in the neighboring cells; only on Q_{i-1}^n in this case where u > 0. (b) For a larger time step, the flux should depend on values farther away.
This leads to the standard first-order upwind method for the advection equation,

Q_i^{n+1} = Q_i^n − (uΔt/Δx) (Q_i^n − Q_{i-1}^n).   (2.11)

Note that this can be rewritten as

(Q_i^{n+1} − Q_i^n)/Δt + u (Q_i^n − Q_{i-1}^n)/Δx = 0.   (2.12)
We are primarily interested in finite volume methods, and so other interpretations of the upwind method are valuable. Fig. 3 shows a geometric viewpoint. We approximate q as a constant function within each cell at time t_n. This defines a piecewise constant function at time t_n with the value Q_i^n in cell C_i. As time evolves, this piecewise constant function advects to the right with velocity u, and the jump between states Q_{i-1}^n and Q_i^n shifts a distance uΔt into cell C_i. At the end of the time step we compute a new cell average Q_i^{n+1} in order to repeat this process. To compute Q_i^{n+1} we must average the piecewise constant function shown in the top of Fig. 3 over the cell. This results in a convex combination of Q_{i-1}^n and Q_i^n (i.e., the weights are both nonnegative and sum to 1):
Figure 3: Wave-propagation interpretation of the upwind method for advection. The bottom pair of graphs shows data at time t_n, represented as a piecewise constant function. Over time Δt this function shifts by a distance uΔt, as indicated in the middle pair of graphs. We view the discontinuity that originates at x_{i-1/2} as a wave W_{i-1/2}. The top pair shows the piecewise constant function at the end of the time step after advecting. The new cell averages Q_i^{n+1} in each cell are then computed by averaging this function over each cell. (a) shows a case with u > 0, while (b) shows u < 0.
Q_i^{n+1} = (uΔt/Δx) Q_{i-1}^n + (1 − uΔt/Δx) Q_i^n.
This is simply the upwind method, since a rearrangement gives (2.11).
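A minimal sketch of the upwind update (2.11), assuming u > 0 and periodic boundaries; the function name `upwind_step` is an illustrative choice, not from the notes.

```python
def upwind_step(Q, u, dt, dx):
    """First-order upwind method (2.11) for q_t + u q_x = 0 with u > 0,
    periodic boundaries: Q_i^{n+1} = Q_i - (u dt/dx)(Q_i - Q_{i-1})."""
    assert u > 0 and u * dt / dx <= 1.0  # CFL condition
    nu = u * dt / dx  # Courant number
    n = len(Q)
    # Convex combination of Q_{i-1} and Q_i, as derived above
    return [Q[i] - nu * (Q[i] - Q[(i - 1) % n]) for i in range(n)]
```

With uΔt/Δx = 1 the update shifts the data exactly one cell, as the convex-combination form above predicts.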
Above, the upwind method was derived as a special case of the approach referred to
as the REA algorithm, for reconstruct-evolve-average. These are one-word summaries
of the three steps involved.
Algorithm (REA)

1. Reconstruct a piecewise polynomial function q^n(x, t_n), defined for all x, from the cell averages Q_i^n. In the simplest case this is a piecewise constant function that takes the value Q_i^n in the i-th grid cell, i.e.,

   q^n(x, t_n) = Q_i^n for all x ∈ C_i.

2. Evolve the equation exactly (or approximately) with this initial data to obtain q^n(x, t_{n+1}) a time Δt later.

3. Average this function over each grid cell to obtain new cell averages

   Q_i^{n+1} = (1/Δx) ∫_{C_i} q^n(x, t_{n+1}) dx.

This whole process is then repeated in the next time step.
2.2 The REA Algorithm with Piecewise Linear Reconstruction
In the previous example, we derived the upwind method by reconstructing a piecewise constant function q^n(x, t_n) from the cell averages Q_i^n. To achieve better than first-order accuracy, we must use a better reconstruction than a piecewise constant function. From the cell averages Q_i^n we can construct a piecewise linear function of the form

q^n(x, t_n) = Q_i^n + σ_i^n (x − x_i)   for x_{i-1/2} ≤ x ≤ x_{i+1/2},   (2.13)
where

x_i = (1/2)(x_{i-1/2} + x_{i+1/2}) = x_{i-1/2} + Δx/2   (2.14)
is the center of the i-th grid cell and σ_i^n is the slope on the i-th cell. The linear function (2.13) on the i-th cell is defined in such a way that its value at the cell center x_i is Q_i^n. More importantly, the average value of q^n(x, t_n) over cell C_i is Q_i^n (regardless of the slope σ_i^n), so that the reconstructed function has the cell average Q_i^n. This is crucial in developing conservative methods for conservation laws. Note that steps 2 and 3 of Algorithm (REA) are conservative in general, and so the algorithm is conservative provided we use a conservative reconstruction in step 1, as we have in (2.13).
For the scalar advection equation q_t + u q_x = 0, we can easily solve the equation with this data, and compute the new cell averages as required in step 3 of Algorithm (REA). We have

q^n(x, t_{n+1}) = q^n(x − uΔt, t_n).

Until further notice we will assume that u > 0 and present the formulas for this particular case. The corresponding formulas for u < 0 should be easy to derive, and we will see a better way to formulate the methods in the general case. Suppose also that |uΔt/Δx| ≤ 1, as is required by the CFL condition. Then it is straightforward to compute that (see Fig. 4)
Q_i^{n+1} = (uΔt/Δx) (Q_{i-1}^n + (1/2)(Δx − uΔt) σ_{i-1}^n) + (1 − uΔt/Δx) (Q_i^n − (1/2) uΔt σ_i^n)
         = Q_i^n − (uΔt/Δx)(Q_i^n − Q_{i-1}^n) − (1/2)(uΔt/Δx)(Δx − uΔt)(σ_i^n − σ_{i-1}^n).   (2.15)
This is the upwind method with a correction term that depends on the slopes.
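The update (2.15) can be sketched directly; here `rea_linear_step` is an illustrative name, u > 0 and periodic boundaries are assumed, and the slopes `sigma` are supplied by the caller (the next section discusses how to choose them).

```python
def rea_linear_step(Q, sigma, u, dt, dx):
    """Piecewise-linear REA update (2.15) for u > 0, periodic boundaries:
    the upwind method plus a slope-dependent correction term.
    sigma[i] is the reconstruction slope on cell i."""
    nu = u * dt / dx  # Courant number
    n = len(Q)
    return [Q[i] - nu * (Q[i] - Q[(i - 1) % n])
            - 0.5 * nu * (dx - u * dt) * (sigma[i] - sigma[(i - 1) % n])
            for i in range(n)]
```

With all slopes zero the correction term vanishes and the update reduces to the upwind method, matching the remark in Section 2.3.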
2.3 Choice of Slopes
Choosing slopes σ_i^n = 0 gives the upwind method for the advection equation, since the final term in (2.15) drops out. To obtain a second-order accurate method we want to choose nonzero slopes in such a way that σ_i^n approximates the derivative q_x over the i-th grid cell. Three obvious possibilities are

Centered slope:  σ_i^n = (Q_{i+1}^n − Q_{i-1}^n) / (2Δx)   (Fromm),   (2.16)
Figure 4: Piecewise linear reconstruction and correction of the flux: the dark shaded area flows to the right cell and the light shaded area remains in the same cell.
Upwind slope:    σ_i^n = (Q_i^n − Q_{i-1}^n) / Δx   (Beam-Warming),   (2.17)

Downwind slope:  σ_i^n = (Q_{i+1}^n − Q_i^n) / Δx   (Lax-Wendroff).   (2.18)
The centered slope might seem like the most natural choice to obtain second-order
accuracy, but in fact all three choices give the same formal order of accuracy, and it is
the other two choices that give methods we have already derived using the Taylor series
expansion. Only the downwind slope results in a centered three-point method, and this
choice gives the Lax-Wendroff method. The upwind slope gives a fully-upwinded 3-point
method, which is simply the Beam-Warming method.
The centered slope (2.16) may seem the most symmetric choice at first glance, but
because the reconstructed function is then advected in the positive direction, the final
updating formula turns out to be a nonsymmetric four-point formula. This method is
known as Fromm’s method.
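The three candidate slopes (2.16)-(2.18) can be collected in one helper; the function name `slope` and the string keys are illustrative, and periodic boundaries are assumed.

```python
def slope(Q, i, dx, choice):
    """Candidate slopes (2.16)-(2.18) on cell i, periodic boundaries:
    'centered' (Fromm), 'upwind' (Beam-Warming), 'downwind' (Lax-Wendroff)."""
    n = len(Q)
    if choice == "centered":
        return (Q[(i + 1) % n] - Q[(i - 1) % n]) / (2 * dx)
    if choice == "upwind":
        return (Q[i] - Q[(i - 1) % n]) / dx
    if choice == "downwind":
        return (Q[(i + 1) % n] - Q[i]) / dx
    raise ValueError(choice)
```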
To compare the typical behavior of the upwind and Lax-Wendroff methods, Fig. 5
shows numerical solutions to the scalar advection equation qt + qx = 0, which is solved
on the unit interval up to time t = 1 with periodic boundary conditions. Hence the
solution should agree with the initial data, translated back to the initial location. The
data, shown as a solid line in each plot, consists of both a smooth pulse and a square-wave pulse. Fig. 5(a) shows the results when the upwind method is used. Excessive dissipation of the solution is evident. Fig. 5(b) shows the results when the Lax-Wendroff method is used instead. The smooth pulse is captured much better, but the square wave gives rise to an oscillatory solution.

Figure 5: Tests on the advection equation with different linear methods. Results at time t = 1 and t = 5 are shown, corresponding to 1 and 5 revolutions through the domain in which the equation q_t + q_x = 0 is solved with periodic boundary conditions: (a) upwind, (b) Lax-Wendroff. [claw/book/chap6/compareadv]
2.4 Oscillations
Second-order methods such as Lax-Wendroff or Beam-Warming (and also Fromm's method) give oscillatory approximations to discontinuous solutions. This can be easily understood using the interpretation of Algorithm (REA).
Consider the Lax-Wendroff method, for example, applied to piecewise constant data with values

Q_i^n = 1 if i ≤ J,   Q_i^n = 0 if i > J.

Choosing slopes in each grid cell based on the Lax-Wendroff prescription (2.18) gives the piecewise linear function shown in Fig. 6(a). The slope σ_i^n is nonzero only for i = J.

Figure 6: (a) Grid values Q^n and reconstructed q^n(·, t_n) using Lax-Wendroff slopes. (b) After advection with uΔt = Δx/2. The dots show the new cell averages Q^{n+1}. Note the overshoot.
The function q^n(x, t_n) has an overshoot with a maximum value of 1.5 regardless of Δx. When we advect this profile a distance uΔt and then compute the average over the J-th cell, we will get a value that is greater than 1 for any Δt with 0 < uΔt < Δx. The worst case is when uΔt = Δx/2, in which case q^n(x, t_{n+1}) is shown in Fig. 6(b) and Q_J^{n+1} = 1.125. In the next time step this overshoot will be accentuated, while in cell J − 1 we will now have a positive slope, leading to a value Q_{J-1}^{n+1} that is less than 1. This oscillation then grows with time.
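The overshoot is easy to reproduce numerically. The sketch below applies one Lax-Wendroff step (downwind slopes (2.18) in the form (2.15)) to the step data of Fig. 6; the function name `lw_step` is illustrative and periodic boundaries are assumed.

```python
def lw_step(Q, u, dt, dx):
    """One Lax-Wendroff step: form (2.15) with downwind slopes (2.18),
    u > 0, periodic boundaries."""
    n = len(Q)
    nu = u * dt / dx
    sigma = [(Q[(i + 1) % n] - Q[i]) / dx for i in range(n)]
    return [Q[i] - nu * (Q[i] - Q[(i - 1) % n])
            - 0.5 * nu * (dx - u * dt) * (sigma[i] - sigma[(i - 1) % n])
            for i in range(n)]

# Step data as in Fig. 6, with u*dt = dx/2: the new average in cell J
# overshoots to 1.125, independent of dx.
Q = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]   # J = 2
Qn = lw_step(Q, u=1.0, dt=0.5, dx=1.0)
```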
The slopes proposed in the previous section were based on the assumption that the
solution is smooth. Near a discontinuity there is no reason to believe that introducing
this slope will improve the accuracy. On the contrary, if one of our goals is to avoid
nonphysical oscillations, then in the above example we must set the slope to zero in the J-th cell. Any σ_J^n < 0 will lead to Q_J^{n+1} > 1, while a positive slope wouldn't make much sense. On the other hand, we don't want to set all slopes to zero all the time,
or we simply have the first-order upwind method. Where the solution is smooth we
want second-order accuracy. Moreover, we will see below that even near a discontinuity,
once the solution is somewhat smeared out over more than one cell, introducing nonzero
slopes can help keep the solution from smearing out too far, and hence will significantly
increase the resolution and keep discontinuities fairly sharp, as long as care is taken to
avoid oscillations.
This suggests that we must pay attention to how the solution is behaving near the i-th cell in choosing our formula for σ_i^n. (Hence the resulting updating formula will be nonlinear even for the linear advection equation.) Where the solution is smooth, we want
to choose something like the Lax-Wendroff slope. Near a discontinuity we may want to
limit this slope, using a value that is smaller in magnitude in order to avoid oscillations.
Methods based on this idea are known as slope-limiter methods. This approach was
introduced by van Leer in a series of papers where he developed the approach known
as MUSCL (monotonic upstream-centered scheme for conservation laws) for nonlinear
conservation laws.
2.5 Total Variation
How much should we limit the slope? Ideally we would like to have a mathematical
prescription that will allow us to use the Lax-Wendroff slope whenever possible, for
second-order accuracy, while guaranteeing that no nonphysical oscillations will arise. To
achieve this we need a way to measure oscillations in the solution. This is provided by
the notion of the total variation of a function. For a grid function Q we define

TV(Q) = Σ_{i=-∞}^{∞} |Q_i − Q_{i-1}|.   (2.19)
For an arbitrary function q(x) we can define

TV(q) = sup Σ_{j=1}^{N} |q(ξ_j) − q(ξ_{j-1})|,   (2.20)

where the supremum is taken over all subdivisions of the real line −∞ = ξ_0 < ξ_1 < ... < ξ_N = ∞. Note that for the total variation to be finite, Q or q must approach constant values q± as x → ±∞.
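For a finite grid function, the sum (2.19) is finite and trivial to compute; the sketch below uses the illustrative name `total_variation`, with an optional flag for periodic data (an assumption for the examples in these notes, which use periodic boundaries).

```python
def total_variation(Q, periodic=False):
    """Total variation (2.19) of a grid function:
    TV(Q) = sum_i |Q_i - Q_{i-1}|."""
    tv = sum(abs(Q[i] - Q[i - 1]) for i in range(1, len(Q)))
    if periodic:  # include the wrap-around jump for periodic data
        tv += abs(Q[0] - Q[-1])
    return tv
```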
Definition. A two-level method is called total variation diminishing (TVD) if, for any set of data Q^n, the values Q^{n+1} computed by the method satisfy

TV(Q^{n+1}) ≤ TV(Q^n).   (2.21)
If a method is TVD, then in particular data that is initially monotone, say

Q_i^n ≥ Q_{i+1}^n for all i,

will remain monotone in all future time steps. Hence if we discretize a single propagating discontinuity (as in Fig. 6), the discontinuity may become smeared in future time steps but cannot become oscillatory. This property is especially useful, and we make the following definition.
Definition. A method is called monotonicity-preserving if

Q_i^n ≥ Q_{i+1}^n for all i

implies that

Q_i^{n+1} ≥ Q_{i+1}^{n+1} for all i.
Any TVD method is monotonicity-preserving.
2.6 TVD Methods Based on the REA Algorithm
How can we derive a method that is TVD? One easy way follows from the reconstruct-
evolve-average approach to deriving methods described by Algorithm (REA). Suppose
that we perform the reconstruction in such a way that
TV(q^n(·, t_n)) ≤ TV(Q^n).   (2.22)
Then the method will be TVD. The reason is that the evolving and averaging steps
cannot possibly increase the total variation, and so it is only the reconstruction that we
need to worry about.
In the evolve step we clearly have

TV(q^n(·, t_{n+1})) = TV(q^n(·, t_n))   (2.23)

for the advection equation, since q^n simply advects without changing shape. The total variation turns out to be a very useful concept in studying nonlinear problems as well; we will see later that a wide class of nonlinear scalar conservation laws also has this property, that the true solution has a non-increasing total variation.

It is a simple exercise to show that the averaging step gives

TV(Q^{n+1}) ≤ TV(q^n(·, t_{n+1})).   (2.24)
Combining (2.22), (2.23) and (2.24) then shows that the method is TVD.
2.7 Slope-Limiter Methods
Setting σ_i^n = 0 gives the first-order upwind method, which is TVD for the advection equation. The upwind method may smear solutions but cannot introduce oscillations.
One choice of slope that gives second-order accuracy for smooth solutions while still satisfying the TVD property is the minmod slope

σ_i^n = minmod((Q_i^n − Q_{i-1}^n)/Δx, (Q_{i+1}^n − Q_i^n)/Δx),   (2.25)

where the minmod function of two arguments is defined by

minmod(a, b) = a if |a| < |b| and ab > 0,
               b if |b| < |a| and ab > 0,
               0 if ab ≤ 0.   (2.26)
If a and b have the same sign, then this selects the one that is smaller in modulus, else
it returns zero.
Rather than defining the slope on the i-th cell by always using the downwind difference (which would give the Lax-Wendroff method), or by always using the upwind difference (which would give the Beam-Warming method), the minmod method compares the two slopes and chooses the one that is smaller in magnitude. If the two slopes have different signs, then the value Q_i^n must be a local maximum or minimum, and in this case we must set σ_i^n = 0.
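The minmod function (2.26) and the resulting slope (2.25) are a few lines of code; the names are illustrative and periodic boundaries are assumed.

```python
def minmod(a, b):
    """minmod function (2.26): the argument of smaller modulus if a and b
    have the same sign, zero otherwise."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def minmod_slope(Q, i, dx):
    """Minmod slope (2.25) on cell i, periodic boundaries."""
    n = len(Q)
    return minmod((Q[i] - Q[(i - 1) % n]) / dx,
                  (Q[(i + 1) % n] - Q[i]) / dx)
```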
Fig. 8(a) shows results using the minmod method for the advection problem considered
previously. We see that the minmod method does a fairly good job of maintaining good
accuracy in the smooth hump and also sharp discontinuities in the square wave, with no
oscillations.
Sharper resolution of discontinuities can be achieved with other limiters that do not
reduce the slope as severely as minmod near a discontinuity. Fig. 7 (a) shows some
sample data representing a discontinuity smeared over two cells, along with the minmod
slopes. Fig. 7 (b) shows that we can increase the slopes in these two cells to twice the
value of the minmod slopes and still have (2.22) satisfied. This sharper reconstruction will lead to sharper resolution of the discontinuity in the next time step than we would obtain with the minmod slopes.

Figure 7: Grid values Q^n and reconstructed q^n(·, t_n) using (a) minmod slopes, (b) superbee or MC slopes. Note that these steeper slopes can be used and still have the TVD property.
One choice of limiter that gives the reconstruction of Fig. 7(b), while still giving second-order accuracy for smooth solutions, is the so-called superbee limiter introduced by Roe:

σ_i^n = maxmod(σ_i^{(1)}, σ_i^{(2)}),   (2.27)

where

σ_i^{(1)} = minmod((Q_i^n − Q_{i-1}^n)/Δx, 2(Q_{i+1}^n − Q_i^n)/Δx),
σ_i^{(2)} = minmod(2(Q_i^n − Q_{i-1}^n)/Δx, (Q_{i+1}^n − Q_i^n)/Δx).
Each one-sided slope is compared with twice the opposite one-sided slope. Then the
maxmod function in (2.27) selects the argument with larger modulus. In regions where
the solution is smooth this will tend to return the larger of the two one-sided slopes, but
will still be giving an approximation to qx, and hence we expect second-order accuracy.
We will see later that the superbee limiter is also TVD in general.
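The superbee construction (2.27) can be sketched as follows (illustrative names, periodic boundaries assumed). On data smeared over two cells as in Fig. 7, the selected slope is twice the minmod slope.

```python
def minmod(a, b):
    """minmod function (2.26)."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def maxmod(a, b):
    """Select the argument of larger modulus."""
    return a if abs(a) > abs(b) else b

def superbee_slope(Q, i, dx):
    """Superbee slope (2.27): each one-sided slope is compared with twice
    the opposite one, and maxmod picks the steeper candidate."""
    n = len(Q)
    left = (Q[i] - Q[(i - 1) % n]) / dx
    right = (Q[(i + 1) % n] - Q[i]) / dx
    s1 = minmod(left, 2 * right)
    s2 = minmod(2 * left, right)
    return maxmod(s1, s2)
```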
Fig. 8(b) shows the same test problem as before but with the superbee method. The discontinuity stays considerably sharper. On the other hand, we see a tendency of the smooth hump to become steeper and squared off. This is sometimes a problem with superbee: by choosing the larger of the neighboring slopes, it tends to steepen smooth transitions near inflection points.

Figure 8: Tests on the advection equation with different high-resolution methods, as in Fig. 5: (a) minmod limiter, (b) superbee limiter. [claw/book/chap6/compareadv]
2.8 Flux Formulation with Piecewise Linear Reconstruction
The slope-limiter methods described above can be written as flux-differencing methods
of the form (2.8). The updating formulas derived above can be manipulated algebraically
to determine what the numerical flux function must be. Alternatively, we can derive
the numerical flux by computing the exact flux through the interface xi−1/2 using the
piecewise linear solution qn(x, t), by integrating uqn(xi−1/2, t) in time from tn to tn+1.
For the advection equation this is easy to do, and we find that

F_{i-1/2}^n = u Q_{i-1}^n + (1/2) u (Δx − uΔt) σ_{i-1}^n.

Using this in the flux-differencing formula (2.8) gives

Q_i^{n+1} = Q_i^n − (uΔt/Δx)(Q_i^n − Q_{i-1}^n) − (1/2)(uΔt/Δx)(Δx − uΔt)(σ_i^n − σ_{i-1}^n).
If we also consider the case u < 0, then we find that in general the numerical flux for a slope-limiter method is

F_{i-1/2}^n = u Q_{i-1}^n + (1/2) u (Δx − uΔt) σ_{i-1}^n   if u ≥ 0,
F_{i-1/2}^n = u Q_i^n − (1/2) u (Δx + uΔt) σ_i^n   if u ≤ 0,   (2.28)

or, equivalently,

F_{i-1/2}^n = u Q_{i-1}^n + (1/2)|u|(Δx − |u|Δt) σ_{i-1}^n   if u ≥ 0,
F_{i-1/2}^n = u Q_i^n + (1/2)|u|(Δx − |u|Δt) σ_i^n   if u ≤ 0.   (2.29)
Rather than associating a slope σ_i^n with the i-th cell, the idea of writing the method in terms of fluxes between cells suggests that we should instead associate our approximation to q_x with the cell interface x_{i-1/2} where F_{i-1/2}^n is defined. Across the interface x_{i-1/2} we have a jump

ΔQ_{i-1/2}^n = Q_i^n − Q_{i-1}^n,   (2.30)

and this jump divided by Δx gives an approximation to q_x. This suggests that we write the flux (2.29) as

F_{i-1/2}^n = u^- Q_i^n + u^+ Q_{i-1}^n + (1/2)|u| (1 − |u|Δt/Δx) δ_{i-1/2}^n,   (2.31)

where u^- = min(u, 0), u^+ = max(u, 0), and

δ_{i-1/2}^n = a limited version of ΔQ_{i-1/2}^n.   (2.32)
If δ_{i-1/2}^n is the jump ΔQ_{i-1/2}^n itself, then (2.31) gives the Lax-Wendroff method. From the form (2.31), we see that the Lax-Wendroff flux can be interpreted as a modification to the upwind flux (2.10). By limiting this modification we obtain a different form of the high-resolution methods.
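The interface flux (2.31) is easy to sketch; `limited_flux` is an illustrative name, and the limited jump `delta` is supplied by the caller (Section 2.9 describes how to compute it from a limiter function).

```python
def limited_flux(q_left, q_right, delta, u, dt, dx):
    """Numerical flux (2.31) at an interface:
    F = u^- Q_i + u^+ Q_{i-1} + 0.5 |u| (1 - |u| dt/dx) delta,
    where delta is a limited version of the jump Q_i - Q_{i-1}."""
    um, up = min(u, 0.0), max(u, 0.0)
    return (um * q_right + up * q_left
            + 0.5 * abs(u) * (1 - abs(u) * dt / dx) * delta)
```

Passing `delta = q_right - q_left` gives the Lax-Wendroff flux, and `delta = 0` gives the upwind flux, as noted above.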
2.9 Flux Limiters
From the above discussion it is natural to view the Lax-Wendroff method as the basic second-order method based on piecewise linear reconstruction, since defining the jump δ_{i-1/2}^n in (2.32) in the most obvious way as ΔQ_{i-1/2}^n at the interface x_{i-1/2} results in that method. Other second-order methods have fluxes of the form (2.31) with different choices of δ_{i-1/2}^n. The slope-limiter methods can then be reinterpreted as flux-limiter methods by choosing δ_{i-1/2}^n to be a limited version of the jump, as in (2.32). In general we will set

δ_{i-1/2}^n = φ(θ_{i-1/2}^n) ΔQ_{i-1/2}^n,   (2.33)

where

θ_{i-1/2}^n = ΔQ_{I-1/2}^n / ΔQ_{i-1/2}^n.   (2.34)
The index I here is used to represent the interface on the upwind side of x_{i-1/2}:

I = i − 1 if u > 0,   I = i + 1 if u < 0.   (2.35)

The ratio θ_{i-1/2}^n can be thought of as a measure of the smoothness of the data near x_{i-1/2}. Where the data is smooth we expect θ_{i-1/2}^n ≈ 1 (except at extrema). Near a discontinuity we expect that θ_{i-1/2}^n may be far from 1.
The function φ(θ) is the flux-limiter function, whose value depends on the smoothness.
Setting φ(θ) = 1 for all θ gives the Lax-Wendroff method, while setting φ(θ) = 0 gives
upwind. More generally we might want to devise a limiter function φ that has values
near 1 for θ ≈ 1 , but that reduces (or perhaps increases) the slope where the data is
not smooth.
There are many other ways one might choose to measure the smoothness of the data
besides the variable θ defined in (2.34). However, the framework proposed above results in very simple formulas for the function φ corresponding to many standard methods, including all the methods discussed so far.
In particular, note the nice feature that choosing
φ(θ) = θ
results in the Beam-Warming method. We also find that Fromm's method can be obtained by choosing

φ(θ) = (1 + θ)/2.

In summary,

Linear methods:
  upwind:        φ(θ) = 0,
  Lax-Wendroff:  φ(θ) = 1,
  Beam-Warming:  φ(θ) = θ,
  Fromm:         φ(θ) = (1 + θ)/2.   (2.36)
High-resolution limiters:
  minmod:    φ(θ) = minmod(1, θ),
  superbee:  φ(θ) = max(0, min(1, 2θ), min(2, θ)),
  MC:        φ(θ) = max(0, min((1 + θ)/2, 2, 2θ)),
  van Leer:  φ(θ) = (θ + |θ|)/(1 + |θ|).   (2.37)
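The limiter functions (2.36)-(2.37) translate directly into code; the dictionary keys are illustrative names.

```python
def minmod(a, b):
    """minmod function (2.26)."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

# Flux-limiter functions phi(theta) from (2.36)-(2.37).
limiters = {
    "upwind":       lambda th: 0.0,
    "lax_wendroff": lambda th: 1.0,
    "beam_warming": lambda th: th,
    "fromm":        lambda th: 0.5 * (1 + th),
    "minmod":       lambda th: minmod(1.0, th),
    "superbee":     lambda th: max(0.0, min(1.0, 2 * th), min(2.0, th)),
    "mc":           lambda th: max(0.0, min((1 + th) / 2, 2.0, 2 * th)),
    "van_leer":     lambda th: (th + abs(th)) / (1 + abs(th)),
}
```

Note that every second-order choice satisfies φ(1) = 1, consistent with θ ≈ 1 in smooth regions.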
A wide variety of other limiters have also been proposed in the literature. The dispersive nature of the Lax-Wendroff method also causes a slight shift in the location of the smooth hump, a phase error, that is visible in Fig. 5, particularly at the later time t = 5. Another advantage of using limiters is that this phase error can be essentially eliminated. Fig. 10 shows a computational example where the initial data consists of a wave packet, a high-frequency signal modulated by a Gaussian. With a dispersive method such a packet will typically propagate at an incorrect speed corresponding to the numerical group velocity of the method. The Lax-Wendroff method is clearly quite dispersive. The high-resolution method shown in Fig. 10(c) performs much better. There is some dissipation of the wave, but much less than with the upwind method.
Figure 9: Limiter functions φ(θ). (a) The shaded region shows where function values must lie for the method to be TVD. The second-order linear methods have functions φ(θ) that leave this region. (b) The shaded region is the Sweby region of second-order TVD methods. The minmod limiter lies along the lower boundary. (c) The superbee limiter lies along the upper boundary. (d) The MC limiter is smooth at φ = 1.
2.10 TVD Limiters
For simple limiters such as minmod, it is clear from the derivation as a slope limiter that
the resulting method is TVD, since it is easy to check that (2.22) is satisfied. For more
complicated limiters we would like to have an algebraic proof that the resulting method
is TVD. A fundamental tool in this direction is the following theorem of Harten, which
can be used to derive explicit algebraic conditions on the function φ required for a TVD
method.
Figure 10: Tests on the advection equation with different methods on a wave packet.
Results at time t = 1 and t = 10 are shown, corresponding to 1 and 10
revolutions through the domain in which the equation qt + qx = 0 is solved
with periodic boundary conditions. [claw/book/chap6/wavepacket]
Theorem (Harten): see page 116 of LeVeque (2002).
3 Godunov’s Method for Hyperbolic Linear Systems
In one space dimension, a homogeneous first-order system of partial differential equations
in x and t has the form
q_t(x, t) + A q_x(x, t) = 0   (3.38)

in the simplest constant-coefficient linear case. Here q : R × R → R^m is a vector with m components, and A is a constant m × m real matrix. This system is hyperbolic if the matrix A has m real eigenvalues λ^p and a corresponding set of m linearly independent eigenvectors r^p. A nonlinear system (1.1) is hyperbolic if the Jacobian matrix f'(q) satisfies these hyperbolicity conditions.
3.1 Riemann Problems and Shock Tubes
A fundamental tool in the development of finite volume methods is the Riemann problem,
which is simply the hyperbolic equation together with special initial data. The data is
piecewise constant with a single jump discontinuity at some point, say x = 0:

q(x, 0) = q_l if x < 0,   q(x, 0) = q_r if x > 0.   (3.39)
If Qi−1 and Qi are the cell averages in two neighboring grid cells on a finite volume
grid, then by solving the Riemann problem with ql = Qi−1 and qr = Qi, we can obtain
information that can be used to compute a numerical flux and update the cell averages
over a time step. For hyperbolic problems, the solution to the Riemann problem is
typically a similarity solution, a function of x/t alone, and consists of a finite set of
waves that propagate away from the origin with constant wave speeds.
The solution to a Riemann problem of the Euler equations typically has a contact
discontinuity and two nonlinear waves, each of which may be either a shock or a rarefac-
tion wave, depending on ql and qr. The structure of a typical Riemann solution is shown
in Fig. 11 (see also Fig. 12). The contact discontinuity is sometimes called the entropy
wave, since it carries a jump in entropy. The first and third wave families are called
acoustic waves, since in the small-disturbance limit these reduce to acoustics equations.
Figure 11: Typical solution to the Riemann problem for the Euler equations.
The REA algorithm (reconstruct-evolve-average) developed in section 2.1 for the
scalar advection equation can be naturally extended to the hyperbolic system. In order
to implement this procedure, we must be able to solve the hyperbolic equation in step
2. If we are starting with piecewise constant data, this can be done using the theory of
Riemann problems. The general approach of Algorithm (REA) was originally proposed
by Godunov as a method for solving the nonlinear Euler equations of gas dynamics.
3.2 The Numerical Flux Function for Godunov’s Method
For a hyperbolic system, the exact solution qn(x, tn+1) will typically contain several
discontinuities and we must compute its integral over each grid cell in order to determine
the new cell averages Qn+1i . However, it turns out to be easy to determine the numerical
flux function F that corresponds to Godunov’s method. Recall that the numerical flux
F_{i-1/2}^n should approximate the time average of the flux at x_{i-1/2} over the time step,

F_{i-1/2}^n ≈ (1/Δt) ∫_{t_n}^{t_{n+1}} f(q(x_{i-1/2}, t)) dt.
In general the function q(xi−1/2, t) varies with t, and we certainly don’t know this
variation of the exact solution. However, we can compute this integral exactly if we
replace q(x, t) by the function qn(x, t) defined in Algorithm (REA) using Godunov’s
piecewise constant reconstruction. Clearly q^n(x_{i-1/2}, t) is constant over the time interval t_n < t < t_{n+1}. The Riemann problem centered at x_{i-1/2} has a similarity solution that is constant along rays (x − x_{i-1/2})/(t − t_n) = constant, and looking at the value along (x − x_{i-1/2})/(t − t_n) = 0 gives the value of q^n(x_{i-1/2}, t). Denote this value by Q↓_{i-1/2} = q↓(Q_{i-1}^n, Q_i^n).

Figure 12: Solution to the Sod shock-tube problem for the Euler equations.

This suggests defining the numerical flux F_{i-1/2}^n by
F_{i-1/2}^n = (1/Δt) ∫_{t_n}^{t_{n+1}} f(q↓(Q_{i-1}^n, Q_i^n)) dt = f(q↓(Q_{i-1}^n, Q_i^n)).   (3.40)
This gives a simple way to implement Godunov's method for a general system of conservation laws:

1. Solve the Riemann problem at x_{i-1/2} to obtain q↓(Q_{i-1}^n, Q_i^n).
2. Define the flux F_{i-1/2}^n = F(Q_{i-1}^n, Q_i^n) by (3.40).
3. Apply the flux-differencing formula (2.8).
Godunov’s method is often presented in this form.
3.3 First-Order Godunov's Method
Assume that the matrix A has m real eigenvalues λ^p and a corresponding set of m linearly independent eigenvectors r^p. Define the diagonal matrices

Λ^+ = diag((λ^1)^+, (λ^2)^+, ..., (λ^m)^+),   Λ^- = diag((λ^1)^-, (λ^2)^-, ..., (λ^m)^-),   (3.41)
where (λ^p)^+ = max(λ^p, 0) and (λ^p)^- = min(λ^p, 0). Now define

A^+ = R Λ^+ R^{-1},   A^- = R Λ^- R^{-1},   (3.42)

where R is the matrix whose columns are the eigenvectors r^p, and note that

A = A^+ + A^- = R Λ R^{-1}.   (3.43)

This gives a useful splitting of the coefficient matrix A into pieces essential for right-going and left-going propagation.
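The splitting (3.41)-(3.43) is a direct computation from the eigendecomposition; a sketch using NumPy follows (the name `split_matrix` is illustrative, and A is assumed diagonalizable with real eigenvalues, i.e., hyperbolic).

```python
import numpy as np

def split_matrix(A):
    """Split A = A+ + A- via its eigendecomposition (3.41)-(3.43):
    A+ = R Lambda+ R^{-1}, A- = R Lambda- R^{-1}."""
    lam, R = np.linalg.eig(A)          # eigenvalues and eigenvector matrix
    Rinv = np.linalg.inv(R)
    Ap = R @ np.diag(np.maximum(lam, 0.0)) @ Rinv   # right-going part
    Am = R @ np.diag(np.minimum(lam, 0.0)) @ Rinv   # left-going part
    return Ap, Am
```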
Consider a linear system of three equations solved with λ^1 < 0 < λ^2 < λ^3, as shown in Fig. 13. The function q^n(x, t_{n+1}) will typically have three discontinuities in the grid cell C_i, at the points x_{i-1/2} + λ^2 Δt, x_{i-1/2} + λ^3 Δt, and x_{i+1/2} + λ^1 Δt.
For a linear system, the solution to the Riemann problem can be expressed as a set of waves,

Q_i − Q_{i-1} = Σ_{p=1}^m α_{i-1/2}^p r^p ≡ Σ_{p=1}^m W_{i-1/2}^p,   (3.44)

where r^p is an eigenvector of A with eigenvalue λ^p, and W_{i-1/2}^p ≡ α_{i-1/2}^p r^p. The value of q in the Riemann solution along x = x_{i-1/2} is

Q↓_{i-1/2} = q↓(Q_{i-1}, Q_i) = Q_{i-1} + Σ_{p: λ^p < 0} W_{i-1/2}^p.   (3.45)
In the linear case f(Q↓_{i-1/2}) = A Q↓_{i-1/2}, so

F_{i-1/2}^n = A Q_{i-1} + Σ_{p: λ^p < 0} A W_{i-1/2}^p.   (3.46)

Since W_{i-1/2}^p is an eigenvector of A with eigenvalue λ^p, this can be rewritten as

F_{i-1/2}^n = A Q_{i-1} + Σ_{p=1}^m (λ^p)^- W_{i-1/2}^p.   (3.47)

Figure 13: An illustration of the process of Algorithm (REA) for the case of a linear system of three equations. The Riemann problem is solved at each cell interface, and the wave structure is used to determine the exact solution a time Δt later. The wave W_{i-1/2}^2, for example, has moved a distance λ^2 Δt into the cell.
Alternatively, we could start with the formula

Q↓_{i-1/2} = Q_i − Σ_{p: λ^p > 0} W_{i-1/2}^p   (3.48)

and obtain

F_{i-1/2}^n = A Q_i − Σ_{p=1}^m (λ^p)^+ W_{i-1/2}^p.   (3.49)

The flux-differencing formula gives

Q_i^{n+1} = Q_i^n − (Δt/Δx) (F_{i+1/2}^n − F_{i-1/2}^n).   (3.50)
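The Godunov flux (3.47) for a linear system can be sketched as follows (illustrative name `godunov_flux`; A is assumed hyperbolic, i.e., diagonalizable with real eigenvalues).

```python
import numpy as np

def godunov_flux(A, Ql, Qr):
    """Godunov flux (3.47) for the linear system q_t + A q_x = 0:
    decompose Qr - Ql into waves alpha^p r^p as in (3.44), then add the
    left-going contributions (lambda^p)^- W^p to A Ql."""
    lam, R = np.linalg.eig(A)
    alpha = np.linalg.solve(R, Qr - Ql)   # wave strengths alpha^p (3.44)
    lam_minus = np.minimum(lam, 0.0)      # (lambda^p)^-
    # Sum of (lambda^p)^- alpha^p r^p, assembled as R @ (lam_minus * alpha)
    return A @ Ql + R @ (lam_minus * alpha)
```

In the scalar case this reduces to the upwind flux: u Ql for u > 0 and u Qr for u < 0.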
3.4 High-Resolution Methods for Linear Systems
The slope-limiter or flux-limiter methods can be extended to systems of equations. This
is most easily done in the flux-limiter framework. First recall that the Lax-Wendroff
26
method can be written in flux-differencing form if we define the flux by
F(Q^n_{i−1}, Q^n_i) = (1/2)A(Q^n_{i−1} + Q^n_i) − (1/2)(∆t/∆x)A²(Q^n_i − Q^n_{i−1}). (3.51)
Since A = A+ + A−, we can rewrite this as
F(Q^n_{i−1}, Q^n_i) = (A+Q^n_{i−1} + A−Q^n_i) + (1/2)|A|(I − (∆t/∆x)|A|)(Q^n_i − Q^n_{i−1}), (3.52)

where |A| = A+ − A−.
In the form (3.52), we see that the Lax-Wendroff flux can be viewed as being composed
of the upwind flux plus a correction term, just as for the scalar advection equation.
To define a flux-limiter method we must limit the magnitude of this correction term
according to how the data is varying. But for a system of equations, ∆Qi−1/2 = Qi−Qi−1
is a vector, and it is not so clear how to compare this vector with the neighboring jump
vector ∆Qi−3/2 or ∆Qi+1/2. It is also not clear which neighboring jump to consider,
since the upwind direction is different for each eigencomponent. The solution, of course,
is that we must decompose the correction term in (3.52) into eigencomponents and limit
each scalar eigencoefficient separately, based on the algorithm for scalar advection.
We can rewrite the correction term as
(1/2)|A|(I − (∆t/∆x)|A|)(Q^n_i − Q^n_{i−1}) = (1/2)|A|(I − (∆t/∆x)|A|) Σ_{p=1}^{m} α^p_{i−1/2} rp, (3.53)
where rp are the eigenvectors of A and α^p_{i−1/2} are the coefficients of the jump
Q^n_i − Q^n_{i−1} in this eigenvector basis. The flux-limiter method is defined by replacing
each scalar coefficient α^p_{i−1/2} by a limited version, based on the scalar formulas of
Section 2.9:
α̃^p_{i−1/2} = α^p_{i−1/2} φ(θ^p_{i−1/2}), (3.54)

where

θ^p_{i−1/2} = α^p_{I−1/2} / α^p_{i−1/2}, with I = i − 1 if λp > 0 and I = i + 1 if λp < 0, (3.55)

and φ is one of the limiter functions of Section 2.9. The flux function for the flux-
limiter method is then
F_{i−1/2} = A+Qi−1 + A−Qi + F̃_{i−1/2}, (3.56)

where the first term is the upwind flux and the correction flux F̃_{i−1/2} is defined by
F̃_{i−1/2} = (1/2)|A|(I − (∆t/∆x)|A|) Σ_{p=1}^{m} α̃^p_{i−1/2} rp. (3.57)
Note that in the case of a scalar equation, we can take r1 = 1 as the eigenvector
of A = u, so that ∆Q_{i−1/2} = α^1_{i−1/2}, which is what we called δ_{i−1/2} in Section 2.9.
The formula (3.57) then reduces to (2.31). Also note that the correction flux F̃_{i−1/2}
(and hence F_{i−1/2}) depends not only on Qi−1 and Qi, but also on Qi−2 and Qi+1 in general,
because neighboring jumps are used in defining the limited values α̃^p_{i−1/2} in (3.54).
The flux-limiter method thus has a five-point stencil rather than the three-point stencil
of the Lax-Wendroff method.
Note that |A|rp = |λp|rp, so that (3.57) may be rewritten as
F̃_{i−1/2} = (1/2) Σ_{p=1}^{m} |λp| (1 − (∆t/∆x)|λp|) α̃^p_{i−1/2} rp. (3.58)
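A minimal NumPy sketch of the limited flux (3.54)-(3.58), assuming periodic boundaries and the minmod limiter as the choice of φ (the function names are mine, not from the notes):

```python
import numpy as np

def minmod(theta):
    """Minmod limiter: phi(theta) = max(0, min(1, theta))."""
    return np.maximum(0.0, np.minimum(1.0, theta))

def limited_flux(Q, A, dt, dx):
    """High-resolution flux (3.56)-(3.58) for q_t + A q_x = 0 with
    periodic boundaries; returns F with F[i] = F_{i-1/2}."""
    lam, R = np.linalg.eig(A)
    lam, R = lam.real, R.real             # hyperbolic: eigenvalues are real
    Rinv = np.linalg.inv(R)
    Ap = R @ np.diag(np.maximum(lam, 0)) @ Rinv
    Am = R @ np.diag(np.minimum(lam, 0)) @ Rinv
    dQ = Q - np.roll(Q, 1, axis=0)        # Delta Q_{i-1/2}, stored at index i
    alpha = dQ @ Rinv.T                   # alpha[i, p]: eigencoefficients of the jump
    theta = np.empty_like(alpha)
    for p in range(len(lam)):
        a = alpha[:, p]
        # theta = alpha_{I-1/2}/alpha_{i-1/2}, I upwind: i-1 if lam>0, i+1 if lam<0
        aI = np.roll(a, 1 if lam[p] > 0 else -1)
        theta[:, p] = np.where(a != 0, aI / np.where(a != 0, a, 1.0), 1.0)
    atil = alpha * minmod(theta)          # limited coefficients, (3.54)
    nu = 0.5 * np.abs(lam) * (1.0 - dt/dx * np.abs(lam))
    Ftil = (atil * nu) @ R.T              # correction flux, (3.58)
    return np.roll(Q, 1, axis=0) @ Ap.T + Q @ Am.T + Ftil   # (3.56)
```

Where the data varies smoothly (θ near 1) this flux coincides with the Lax-Wendroff flux (3.51); near extrema the limiter reduces the correction toward the upwind flux.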
4 Godunov’s Method for Nonlinear Systems
Godunov’s method can be easily generalized to nonlinear systems if we can solve the
nonlinear Riemann problem at each cell interface, and this gives the natural generalization
of the first-order upwind method. The cell average is updated by the formula

Q^{n+1}_i = Q^n_i − (∆t/∆x)(F^n_{i+1/2} − F^n_{i−1/2}),

with

F^n_{i−1/2} = f(q↓(Q^n_{i−1}, Q^n_i)).
4.1 Approximate Riemann Solvers
To apply Godunov’s method on a system of equations we need only determine q↓(ql, qr),
the state along x/t = 0 based on the Riemann data ql and qr. The process of solving
the Riemann problem is often quite expensive, even though in the end we use very little
information from this solution. It is often not necessary to compute the exact solution
in order to obtain good results. A wide variety of approximate Riemann solvers have
been proposed that are much cheaper and yet give equally good results in many cases.
For given data Qi−1 and Qi, an approximate Riemann solver defines a function
Q̂_{i−1/2}(x/t) that approximates the true similarity solution to the Riemann problem with
data Qi−1 and Qi. This function will typically consist of a set of m waves W^p_{i−1/2}
propagating at speeds s^p_{i−1/2}, with

Qi − Qi−1 = Σ_{p=1}^{m} W^p_{i−1/2}. (4.59)
These waves and speeds will also be needed in defining high-resolution methods based
on the approximate Riemann solver.
To be conservative, the approximate Riemann solution q̂(x, t) = ŵ(x/t) must have the
property that, for M sufficiently large,

∫_{−M}^{M} ŵ(ξ) dξ = M(ql + qr) + f(ql) − f(qr). (4.60)

Note that the exact Riemann solution w(x/t) has this property, as seen from the integral
form of the conservation law over [−M, M] × [0, 1].
One natural way to obtain ŵ(x/t) is to compute the exact Riemann solution of some
modified conservation law qt + [f̂(q)]x = 0, with a flux function f̂(q) that is presumably
easier to work with than the original flux f(q). By using the integral form of this
conservation law over [−M, M] × [0, 1], we see that the condition (4.60) will be satisfied
provided that

f̂(qr) − f̂(ql) = f(qr) − f(ql). (4.61)
4.2 Roe’s Approximate Riemann Solver
One of the most popular approaches is to replace the nonlinear problem qt + [f(q)]x = 0
by a linearized problem defined locally at each cell interface,
qt + Ai−1/2qx = 0. (4.62)
The matrix Ai−1/2 is chosen to be some approximation to f′(q) valid in a neighborhood
of the data Qi−1 and Qi . To determine Ai−1/2 in a reasonable way, Roe suggested that
the following conditions should be imposed on the matrix Ai−1/2 :
1. Ai−1/2 is diagonalizable with real eigenvalues, so that (4.62)
is hyperbolic,
2. Ai−1/2 → f′(q) as Qi−1, Qi → q, so that the method is
consistent with the original conservation law.
3. Ai−1/2(Qi −Qi−1) = f(Qi)− f(Qi−1).
We might take, for example,
Ai−1/2 = f′(Qi−1/2) (4.63)
where Qi−1/2 is some average of Qi−1 and Qi.
Condition (3) has two effects. First, it is required by (4.61) and guarantees that the
method is conservative. Second, in the special case where ql and qr are connected by a
single shock wave or contact discontinuity, the approximate Riemann solution agrees with
the exact Riemann solution. This follows from the fact that the Rankine-Hugoniot
condition is satisfied for ql and qr in this case, so

f(qr) − f(ql) = s(qr − ql)

for some speed s (the speed of the shock or contact). Combined with (3), this shows that
qr − ql must, in this situation, be an eigenvector of Ai−1/2 with eigenvalue s, and so the
approximate solution q(x, t) also consists of this single jump qr − ql propagating with
speed s.
From condition (3), we can compute the numerical flux as

F_{i−1/2} = f(Qi−1) + A−_{i−1/2}(Qi − Qi−1) (4.64)

or as

F_{i−1/2} = f(Qi) − A+_{i−1/2}(Qi − Qi−1). (4.65)

Averaging these two expressions gives a third version, which is symmetric in Qi−1 and
Qi:

F_{i−1/2} = (1/2)[f(Qi−1) + f(Qi)] − (1/2)|A_{i−1/2}|(Qi − Qi−1). (4.66)
This form is often called Roe’s method and has the form of the unstable centered flux
plus a viscous correction term.
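The equivalence of (4.64)-(4.66) is easy to check numerically. The following sketch (mine, not from the notes) uses a linear flux f(q) = Aq, for which A itself trivially satisfies condition (3):

```python
import numpy as np

def roe_fluxes(ql, qr, A):
    """The three flux formulas (4.64)-(4.66) for the linear flux
    f(q) = A q, where A itself satisfies Roe condition (3)."""
    lam, R = np.linalg.eig(A)
    lam, R = lam.real, R.real             # hyperbolic: eigenvalues are real
    Rinv = np.linalg.inv(R)
    Ap = R @ np.diag(np.maximum(lam, 0)) @ Rinv
    Am = R @ np.diag(np.minimum(lam, 0)) @ Rinv
    Aabs = Ap - Am                        # |A| = A+ - A-
    F1 = A @ ql + Am @ (qr - ql)          # (4.64)
    F2 = A @ qr - Ap @ (qr - ql)          # (4.65)
    F3 = 0.5 * (A @ ql + A @ qr) - 0.5 * Aabs @ (qr - ql)   # (4.66)
    return F1, F2, F3
```

Since A(qr − ql) = (Ap + Am)(qr − ql), all three expressions reduce to Aql + Am(qr − ql), so they agree to rounding error.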
4.3 Roe Solver for the Euler Equations
For the Euler equations (1.3) with the polytropic equation of state, the Jacobian matrix
f′(q) is

f′(q) =
⎡ 0                     1                 0     ⎤
⎢ (1/2)(γ − 3)u²        (3 − γ)u          γ − 1 ⎥ (4.67)
⎣ (1/2)(γ − 1)u³ − uH   H − (γ − 1)u²     γu    ⎦
where
H = E + p/ρ = h + (1/2)u² (4.68)
is the total specific enthalpy. The eigenvalues are
λ1 = u− c, λ2 = u, λ3 = u+ c, (4.69)
as for the coefficient matrix resulting from the primitive equations. They agree because
the two forms are equivalent and should yield the same characteristic speeds. The
eigenvectors will appear different, of course, in these different variables. We have
r1 = (1, u − c, H − uc)^T, r2 = (1, u, (1/2)u²)^T, r3 = (1, u + c, H + uc)^T. (4.70)
Roe proposed the averages

û = (√ρ_{i−1} u_{i−1} + √ρ_i u_i)/(√ρ_{i−1} + √ρ_i) (4.71)

for the velocity,

Ĥ = (√ρ_{i−1} H_{i−1} + √ρ_i H_i)/(√ρ_{i−1} + √ρ_i) (4.72)

for the total specific enthalpy, and

ĉ = √((γ − 1)(Ĥ − (1/2)û²)) (4.73)
for the sound speed. The eigenvalues and eigenvectors of the Roe matrix are then
obtained by evaluating (4.69) and (4.70) at this averaged state. The coefficients α^p_{i−1/2}
in the wave decomposition

δ = Qi − Qi−1 = α1 r1 + α2 r2 + α3 r3 (4.74)
can be obtained by inverting the matrix of right eigenvectors, which leads to the following
formulas:
α2 = (γ − 1)[(Ĥ − û²)δ1 + ûδ2 − δ3]/ĉ²,

α3 = [δ2 + (ĉ − û)δ1 − ĉα2]/(2ĉ), (4.75)

α1 = δ1 − α2 − α3.
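The Roe averages (4.71)-(4.73) and wave strengths (4.75) can be sketched as follows; this is my own illustration, assuming the conserved state q = (ρ, ρu, ρE) of (1.3) and a polytropic gas with γ = 1.4 by default:

```python
import numpy as np

def roe_decomposition(qL, qR, gamma=1.4):
    """Roe averages (4.71)-(4.73) and wave strengths (4.75) for the 1D
    Euler equations with conserved state q = (rho, rho*u, rho*E)."""
    def primitives(q):
        rho, mom, rhoE = q
        u = mom / rho
        p = (gamma - 1.0) * (rhoE - 0.5 * rho * u**2)   # polytropic EOS
        H = (rhoE + p) / rho                            # total specific enthalpy
        return rho, u, H
    rl, ul, Hl = primitives(qL)
    rr, ur, Hr = primitives(qR)
    sl, sr = np.sqrt(rl), np.sqrt(rr)
    u = (sl * ul + sr * ur) / (sl + sr)            # (4.71)
    H = (sl * Hl + sr * Hr) / (sl + sr)            # (4.72)
    c = np.sqrt((gamma - 1.0) * (H - 0.5 * u**2))  # (4.73)
    d = qR - qL                                    # delta in (4.74)
    a2 = (gamma - 1.0) * ((H - u**2) * d[0] + u * d[1] - d[2]) / c**2
    a3 = (d[1] + (c - u) * d[0] - c * a2) / (2 * c)
    a1 = d[0] - a2 - a3
    lam = np.array([u - c, u, u + c])              # (4.69)
    R = np.array([[1.0, 1.0, 1.0],                 # columns r1, r2, r3 of (4.70)
                  [u - c, u, u + c],
                  [H - u * c, 0.5 * u**2, H + u * c]])
    return np.array([a1, a2, a3]), lam, R
```

By construction the decomposition reproduces the interface jump: R applied to the vector of wave strengths recovers δ = Qi − Qi−1, as in (4.74).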
4.4 High-Resolution Methods for Nonlinear Systems
Godunov’s method based on approximate Riemann solvers can be extended to high-
resolution methods for nonlinear systems using essentially the same approach as was
introduced in subsection 3.4 for linear systems.
Figure 14: The three basic steps of Godunov’s method.
5 Time Integration Methods for Space-discretized Equations
The methods discussed so far have all been fully discrete methods, discretized in both
space and time. At times it is useful to consider the discretization process in two stages,
first discretizing only in space, leaving the problem continuous in time. This leads to a
system of ordinary differential equations in time, called the semidiscrete equations. We
then discretize in time using any standard numerical method for systems of ordinary
differential equations (ODEs). This approach thus reduces the PDE to a large system of ODEs.
A large number of methods are available for the solution of the system of ordinary
differential equations. Consider a set of m coupled first-order differential equations for
34
Figure 15: Second-order Godunov-type scheme for the linear convection equation.
the functions yi , i = 1, 2, ... , m, having the general form
dyi/dt = fi(t, y1, y2, ..., ym), i = 1, . . . , m, (5.76)
where the functions fi on the right-hand side are known. The formula for the Euler
method is
yn+1 = yn + ∆tf(tn, yn) (5.77)
35
t
y(t)
Figure 16:
which advances a solution from tn to tn+1. The formula is unsymmetrical: It advances the
solution through an interval ∆t, but uses derivative information only at the beginning
of that interval (see Fig. 16). There are several reasons that Euler’s method is not
recommended for practical use, among them, (i) the method is not very accurate when
compared to other, fancier, methods run at the equivalent stepsize, and (ii) neither is it
very stable.
Consider, however, the use of a step like (5.77) to take a "trial" step to the midpoint
of the interval. Then use the values of both t and y at that midpoint to compute the
real step across the whole interval. Fig. 17 illustrates the idea. In equations,

k1 = ∆t f(tn, yn),
k2 = ∆t f(tn + (1/2)∆t, yn + (1/2)k1),
yn+1 = yn + k2 + O((∆t)³). (5.78)
This symmetrization cancels out the first-order error term, making the method second
order. (5.78) is called the second-order Runge-Kutta or midpoint method.
By far the most often used is the classical fourth-order Runge-Kutta formula:
Figure 17: Illustration of the midpoint (second-order Runge-Kutta) method in the (t, y(t)) plane.
k1 = ∆t f(tn, yn),
k2 = ∆t f(tn + ∆t/2, yn + k1/2),
k3 = ∆t f(tn + ∆t/2, yn + k2/2),
k4 = ∆t f(tn + ∆t, yn + k3),
yn+1 = yn + k1/6 + k2/3 + k3/3 + k4/6 + O((∆t)⁵). (5.79)
The fourth-order Runge-Kutta method requires four evaluations of the right-hand side
per step ∆t (see Fig.18). This will be superior to the midpoint method (5.78) if at least
twice as large a step is possible with (5.79) for the same accuracy.
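A minimal sketch of the classical step (5.79), with a simple fixed-step driver; the function names are mine:

```python
def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step, formula (5.79)."""
    k1 = dt * f(t, y)
    k2 = dt * f(t + dt/2, y + k1/2)
    k3 = dt * f(t + dt/2, y + k2/2)
    k4 = dt * f(t + dt, y + k3)
    return y + k1/6 + k2/3 + k3/3 + k4/6

def integrate(f, t0, y0, dt, nsteps):
    """Advance y' = f(t, y) from t0 over nsteps steps of size dt."""
    t, y = t0, y0
    for _ in range(nsteps):
        y = rk4_step(f, t, y, dt)
        t += dt
    return y
```

For y' = −y with y(0) = 1, integrating to t = 1 with ∆t = 0.01 reproduces e^{−1} to roughly the (∆t)⁴ global accuracy the formula promises.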
5.1 Central scheme revisited
Consider the piecewise linear reconstruction of the form (2.13):
q^n(x, tn) = Q^n_i + σ^n_i (x − xi) for x_{i−1/2} ≤ x ≤ x_{i+1/2} .
Setting x = xi ±∆x/2 within cell i gives the interface values
Figure 18: Illustration of the fourth-order Runge-Kutta method.
Q^L_{i+1/2} = Q^n_i + (1/2)σ^n_i ∆x , Q^R_{i−1/2} = Q^n_i − (1/2)σ^n_i ∆x .
If the first-order scheme is defined by the upwind numerical flux
F_{i+1/2}(Q^n_i, Q^n_{i+1}) = uQ^n_i ,

the second-order space-accurate numerical flux is obtained from

F^{(2)}_{i+1/2}(Q^L_{i+1/2}, Q^R_{i+1/2}) = uQ^L_{i+1/2} .
If we choose the downwind slope σi = (Q^n_{i+1} − Q^n_i)/∆x,

F^{(2)}_{i+1/2} = F_{i+1/2}(Q^L_{i+1/2}, Q^R_{i+1/2}) = (u/2)(Q^n_{i+1} + Q^n_i) .
The flux-difference formula leads to the central scheme:

Q^{n+1}_i = Q^n_i − (∆t/∆x)(F^{(2)}_{i+1/2} − F^{(2)}_{i−1/2}) = Q^n_i − (u∆t/2∆x)(Q^n_{i+1} − Q^n_{i−1}) .
This scheme is linearly unconditionally unstable. The instability arises from the first-
order time differencing, whose second-order truncation error, −(∆t/2)qtt = −(u²∆t/2)qxx,
is not compensated by a similar term from the second-order space difference. A general
formulation of second-order space- and time-accurate upwind schemes can be obtained
as follows, based on the midpoint method (5.78).
The first step defines intermediate values after a propagation over a time interval
∆t/2:
Q̄_i = Q^n_i − (∆t/2∆x)(F_{i+1/2} − F_{i−1/2}), (5.80)
where F is a first-order numerical flux.
The second step defines the interface variables as second-order extrapolations from the
intermediate values:

Q^L_{i+1/2} = Q̄_i + (1/2)σ^n_i ∆x, Q^R_{i+1/2} = Q̄_{i+1} − (1/2)σ^n_{i+1} ∆x . (5.81)
The last step defines the second-order numerical flux as

F^{(2)}_{i+1/2} = F_{i+1/2}(Q^L_{i+1/2}, Q^R_{i+1/2}), (5.82)

and the final scheme is

Q^{n+1}_i = Q^n_i − (∆t/∆x)(F^{(2)}_{i+1/2} − F^{(2)}_{i−1/2}). (5.83)
Applied to the first-order upwind numerical flux and upwind slope σi,
Q̄_i = Q^n_i − (u∆t/2∆x)(Q^n_i − Q^n_{i−1}) ,

Q^L_{i+1/2} = Q̄_i + (1/2)(Q^n_i − Q^n_{i−1}) = (1/2)(3Q^n_i − Q^n_{i−1}) − (u∆t/2∆x)(Q^n_i − Q^n_{i−1}) ,

Q^{n+1}_i = Q^n_i − (u∆t/∆x)(Q^L_{i+1/2} − Q^L_{i−1/2})
         = Q^n_i − (u∆t/2∆x)(3Q^n_i − 4Q^n_{i−1} + Q^n_{i−2}) + (1/2)(u∆t/∆x)²(Q^n_i − 2Q^n_{i−1} + Q^n_{i−2}) ,
the above scheme becomes identical to the second-order upwind scheme of Warming
and Beam. Similarly, applied to the first-order downwind numerical flux and downwind
slope σi, the above scheme becomes identical to the second-order central scheme of
Lax-Wendroff.
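The identification with the Warming-Beam scheme can also be checked numerically. The following sketch (mine, not from the notes, assuming u > 0 and periodic boundaries) implements both the two-step scheme (5.80)-(5.83) and the direct Warming-Beam update:

```python
import numpy as np

def two_step_upwind(Q, u, dt, dx):
    """Two-step scheme (5.80)-(5.83) with first-order upwind flux
    F_{i+1/2} = u*Q_i and upwind slope sigma_i = (Q_i - Q_{i-1})/dx,
    for u > 0 and periodic boundaries."""
    F1 = u * np.roll(Q, 1)                  # F_{i-1/2} = u Q_{i-1}, stored at i
    Qbar = Q - dt / (2*dx) * (np.roll(F1, -1) - F1)   # (5.80), half-step values
    sigma = (Q - np.roll(Q, 1)) / dx        # upwind slope at time n
    QL = Qbar + 0.5 * sigma * dx            # (5.81), left state at i+1/2
    F2 = u * QL                             # (5.82), stored at index i
    return Q - dt/dx * (F2 - np.roll(F2, 1))          # (5.83)

def warming_beam(Q, u, dt, dx):
    """Direct Warming-Beam update for comparison."""
    nu = u * dt / dx
    return (Q - 0.5*nu * (3*Q - 4*np.roll(Q, 1) + np.roll(Q, 2))
              + 0.5*nu**2 * (Q - 2*np.roll(Q, 1) + np.roll(Q, 2)))
```

Since every step is linear in Q, the two updates agree to rounding error for arbitrary data, confirming the identification made above.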