MAT-VET-F 20008
Degree Project 15 hp, June 2020

Numerical Analysis of the Two Dimensional Wave Equation
Using Weighted Finite Differences for Homogeneous and Heterogeneous Media

Anton Holmberg, Martin Nilsson Lind, Christian Böhme



Teknisk-naturvetenskaplig fakultet (Faculty of Science and Technology), UTH-enheten. Visiting address: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0. Postal address: Box 536, 751 21 Uppsala. Telephone: 018 – 471 30 03. Fax: 018 – 471 30 00. Website: http://www.teknat.uu.se/student

Abstract

Numerical Analysis of the Two Dimensional Wave Equation

Anton Holmberg, Martin Nilsson Lind, Christian Böhme

This thesis discusses properties arising when finite differences are implemented for solving the two dimensional wave equation on media with various properties. Both homogeneous and heterogeneous surfaces are considered. The time derivative of the wave equation is discretised using a weighted central difference scheme, dependent on a variable parameter gamma. Stability and convergence properties are studied for some different values of gamma. The report furthermore features an introduction to solving large sparse linear systems of equations using so-called multigrid methods. The linear systems emerge from the finite difference discretisation scheme. A conclusion is drawn stating that values of gamma in the unconditionally stable region provide the best computational efficiency. This holds true as the multigrid based numerical solver exhibits optimal or near optimal scaling properties.

ISSN: 1401-5757, MAT-VET-F 20008
Examinator (Examiner): Martin Sjödin
Ämnesgranskare (Subject reader): Maria Strømme
Handledare (Supervisor): Maya Neytcheva


Populärvetenskaplig sammanfattning (Popular Science Summary)

Wave motion is a commonly occurring phenomenon in nature. Being able to describe the behaviour of waves with mathematical exactness is therefore important in many respects, ranging from the description of seismic vibrations in an earthquake to the exploitation of waves in a wave power plant. Since the 18th century, such wave phenomena have been described mathematically by the so-called wave equation, a hyperbolic partial differential equation (PDE). This equation describes how waves propagate and oscillate in time.

One problem with the wave equation is that, like most PDEs, it has analytical solutions only in a very limited number of cases. Idealisations must therefore often be made, for example regarding the geometry of the problem, for an exact solution to be obtained. If physical phenomena that are inherently nonlinear, such as friction and elasticity, appear in the problem, analytical solutions often become impossible altogether. If the wave equation cannot be solved exactly without simplifications that compromise the physical correctness, one must instead rely on numerical analysis. Numerical analysis provides an approximation of the solution, so a trade-off must be made between the error of the solution and the desired problem size. Several approaches exist for solving PDEs numerically. The most classical method is the finite difference method (FDM), which replaces the derivatives of the equation with finite differences. Implementing FDM often results in a linear system of equations that must be solved to obtain the numerical solution. Depending on the desired accuracy of the chosen finite difference method, the size of this system will vary. If the system is large, the solution method must be chosen with care, since less suitable choices can lead to very long computation times.

This project revolves around the implementation and investigation of finite differences for the two-dimensional wave equation. A central difference is used for the spatial derivatives, and the less established γ-method for the time discretisation. In the γ-method, the properties of the discretisation depend on a variable parameter γ. The γ-method is investigated here for several different cases of the wave equation, with respect to stability and rate of convergence. Furthermore, AGMG, the method used here to solve the linear systems resulting from the finite differences, is examined. The project also includes a basic introduction to the family of numerical solution methods to which AGMG belongs: the so-called multigrid methods.


Contents

1 Introduction 6

2 Theory 7
   2.1 The Wave Equation 7
   2.2 Finite Differences 8
   2.3 Spatial Discretisation 8
       2.3.1 Square Domain 8
   2.4 The γ-method 10
       2.4.1 Stability of the Time Discretisation 11
       2.4.2 Error Estimates and Convergence of the Fully Discretised Problem 12
       2.4.3 Numerical Dispersion 14
   2.5 Convergence of Eigenvalues 14
   2.6 Analysis for Variable Wave Propagation Speed 15
   2.7 Linear Systems of Equations 16
       2.7.1 The Gradient and Conjugate Gradient Methods 17
       2.7.2 Preconditioning Techniques 19
       2.7.3 The Multigrid Method 21
       2.7.4 The Multigrid Method as a Preconditioner 23
       2.7.5 Algebraic Multigrid 23
       2.7.6 Aggregation-Based Algebraic Multigrid 23

3 Numerical Study 23
   3.1 Verification of the Discretisation Error 24
       3.1.1 Constant Coefficients 24
       3.1.2 Variable Coefficients 25
       3.1.3 Computational Complexity 27
       3.1.4 Computation of the Error 28
   3.2 Implementation 28

4 Results 29
   4.1 Convergence of the γ-method 29
       4.1.1 Constant Wave Speed 29
       4.1.2 Non-Constant Wave Speed 32
   4.2 Computational Complexity 33
   4.3 Scaling of the AGMG-method 33

5 Discussion 34
   5.1 Convergence of the γ-method 34
       5.1.1 Constant Wave Speed 34
       5.1.2 Non-Constant Wave Speed 34
   5.2 Computational Complexity 35
   5.3 Scaling of the AGMG-method 36

6 Conclusion 36


1 Introduction

The wave equation is applicable in a wide range of areas in science and engineering. Finding a solution to the equation is thus often of great value. Analytical solutions are in many cases impossible to obtain, and so numerical methods must be considered. This work focuses on a particular finite difference discretisation scheme that allows for high order accuracy and for efficient numerical solution of the arising large and sparse linear systems of equations.

This work is limited to solving the wave equation on a two dimensional square domain, where the focus lies on different forms of the wave equation itself. The cases of both homogeneous and heterogeneous domains are considered. The time discretisation is of special interest in this project, since a not so widely used method referred to as the γ-discretisation is employed. The γ-discretisation uses a parameter γ that can be set differently in order to control the accuracy and stability of the discretisation. When choosing γ, various factors are of interest. One is the convergence of the discretisation: how fast will the discretisation error decrease as the grid and time step are increasingly refined? Another such factor is computational efficiency: will the γ-method converge in a reasonable amount of time? Choosing γ > 0 results in an implicit method, meaning a system of equations must be solved.

The purpose of this work is to reach a conclusion regarding how some choices of the parameter γ are more favourable than others, especially in terms of computational efficiency. To reach such a conclusion, a detailed study of how γ affects the stability, the convergence and the characteristics of the resulting linear system of equations must be performed. The solver used for the linear systems is referred to as AGMG (aggregation-based algebraic multigrid), see [1]. AGMG belongs to a family of numerical methods called multigrid. This project introduces the concept of multigrid at a basic level, as it is the foundation on which AGMG relies.

An overview of other general principles and numerical solution methods, such as the conjugate gradient method and the principle of preconditioning, is also provided, as they are crucial for understanding AGMG. The project further studies the scaling properties of AGMG, which is known to have an optimal or near optimal computational complexity.


2 Theory

2.1 The Wave Equation

The wave equation is a hyperbolic partial differential equation (PDE). Thewave equation with forcing can be written on the following form:

utt = c²∆u + f(x, y, z, t). (2.1)

Here u(x, y, z, t) is the displacement of the wave, c is the wave propagation speed, ∆ is the Laplace operator and f(x, y, z, t) is a forcing term acting on the medium. If the problem is limited to a two dimensional analysis, equation (2.1) can be written

∂²u/∂t² = c²(∂²u/∂x² + ∂²u/∂y²) + f(x, y, t). (2.2)

In (2.1) and (2.2) the medium is homogeneous; thus the wave propagation speed is constant throughout the entire domain. If the medium is non-homogeneous, the wave equation can be generalised to

∂²u/∂t² = ∇ · (c²∇u) + f(x, y, z, t). (2.3)

In two dimensions, with q = c²(x, y), this becomes

∂²u/∂t² = ∂/∂x (q(x, y) ∂u/∂x) + ∂/∂y (q(x, y) ∂u/∂y) + f(x, y, t). (2.4)

The wave equation can be solved for different geometries of the domain. The domain is here denoted by Ω and its boundary by ∂Ω. The numerical solution of the wave equation requires initial conditions (IC) for u and its time derivative, as well as boundary conditions (BC). The initial conditions are denoted by u(x, y, 0) = g0(x, y) and ut(x, y, 0) = g1(x, y). The boundary conditions can be of different types. Here we consider boundary conditions of Dirichlet type on the whole boundary ∂Ω, namely u(x, y, t) = h(x, y, t) for (x, y) ∈ ∂Ω.


2.2 Finite Differences

When solving differential equations numerically there are several discretisation methods that can be used. A classical numerical method is the finite difference method (FDM). The idea of FDM is to approximate the derivatives in a differential equation by finite differences. In this way the solution is approximated at discrete points only; for a given discretisation, no information is provided on the intervals between the points.

2.3 Spatial Discretisation

2.3.1 Square Domain

For the spatial derivatives a central difference approximation is used. For a two-dimensional square domain discretised with equidistant mesh-points, the resulting five-point scheme can be written

uxx + uyy ≈ ∆h u = (1/h²)[−4u(x, y, t) + u(x + h, y, t) + u(x − h, y, t) + u(x, y + h, t) + u(x, y − h, t)]. (2.5)

Figure 1 displays the resulting stencil.

Figure 1: The five-point stencil of the central difference in (2.5).
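As an illustration (not part of the thesis code; the function and variable names are our own), the five-point operator (2.5) can be applied directly to the interior points of a grid of values. A minimal NumPy sketch:

```python
import numpy as np

def apply_laplacian(U, h):
    """Apply the five-point discrete Laplacian (2.5) to the interior
    points of the grid function U; boundary values enter only as data."""
    L = np.zeros_like(U)
    L[1:-1, 1:-1] = (-4.0 * U[1:-1, 1:-1]
                     + U[2:, 1:-1] + U[:-2, 1:-1]
                     + U[1:-1, 2:] + U[1:-1, :-2]) / h**2
    return L

# For the quadratic u = x^2 + y^2 the five-point formula is exact: Delta u = 4.
n = 5
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
U = X**2 + Y**2
print(apply_laplacian(U, h)[2, 2])   # -> 4.0
```

Since the stencil differentiates quadratics exactly, reproducing ∆u = 4 gives a simple correctness check.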


Figure 2: The spatial discretisation of a square domain with equidistant mesh-points; the nodes Uij, i, j = 1, 2, 3, are shown.

The indexes of the mesh-points are defined as in Figure 2. This is a convenient way to index the mesh-points since it matches the indexing of matrix elements. The values of the solution at the mesh-points are then rearranged into a vector, so that the finite difference can be expressed as a matrix-vector multiplication. When choosing a columnwise ordering the resulting u-vector becomes:

u = (U11, U21, U31, U12, . . . , U33)^T. (2.6)

When applying the stencil, see Figure 1, to the entire domain a resulting N × N matrix arises, where N is the number of degrees of freedom in the system. This matrix has the following form:

L =
| T  I              |
| I  T  I           |
|    I  T  ⋱        |
|       ⋱  ⋱   I    |
|           I   T   | ,  where T = tridiag(1, −4, 1) and I is the identity of the block size. (2.7)

Here each row represents the equation produced by the central difference approximation at one point of the mesh.
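The matrix (2.7) need not be assembled entry by entry: with the columnwise ordering it has the well-known Kronecker-product form L = I ⊗ T₁ + T₁ ⊗ I, where T₁ = tridiag(1, −2, 1). A minimal NumPy sketch (illustrative only; a practical code would use a sparse format, and the 1/h² scaling of ∆h is left out here):

```python
import numpy as np

def laplacian_matrix(n):
    """Assemble the n^2 x n^2 five-point Laplacian matrix of (2.7) for an
    n x n grid with columnwise (lexicographic) ordering, via L = I⊗T + T⊗I."""
    T = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    I = np.eye(n)
    return np.kron(I, T) + np.kron(T, I)

L = laplacian_matrix(3)
print(L.shape)    # -> (9, 9)
print(L[4, 4])    # -> -4.0: the centre node U22 couples to its four neighbours
```

The entry L[4, 4] corresponds to the centre node U22 of Figure 2; its four off-diagonal ones sit at the neighbour positions prescribed by the columnwise ordering.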

2.4 The γ-method

For the time derivative in the wave equation we use a discretisation technique called the γ-method, following the derivations in [2]. The γ-method is a weighted central difference approximation. To discretise the wave equation in time, the γ-method is applied to equation (2.2) in the following way:

u(x, t+k) − 2u(x, t) + u(x, t−k) = k²[γutt(x, t+k) + (1−2γ)utt(x, t) + γutt(x, t−k)], t = k, 2k, . . . (2.8)

where γ is a variable parameter in the interval [0, 1/2]. Using the discrete Laplace operator described by the resulting matrix in Section 2.3.1, we get the fully discretised scheme:

[I − γc²k²∆h][uh(x, t+k) − 2uh(x, t) + uh(x, t−k)] = c²k²∆h uh(x, t) + k²[γf(x, t+k) + (1−2γ)f(x, t) + γf(x, t−k)]. (2.9)

Here uh is the finite difference approximation of u(x, y, t) at a point x in the mesh and t = k, 2k, . . .. From (2.9) it is clear that choosing γ = 0 makes the method explicit, while for γ > 0 the method is implicit. In the implicit case a linear system of equations must be solved at every time step.
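A single time step of (2.9) then amounts to solving one linear system. A minimal dense NumPy sketch of such a step (illustrative; the names are our own, and a practical implementation would use sparse matrices and an efficient solver such as AGMG):

```python
import numpy as np

def gamma_step(u_now, u_prev, A, c, k, gamma, f_bar=0.0):
    """One step of (2.9): solve (I - gamma*c^2*k^2*A) d = c^2*k^2*A u^n + k^2*f_bar
    for d = u^{n+1} - 2u^n + u^{n-1}, where A stands for Delta_h (with its 1/h^2)."""
    n = A.shape[0]
    B = np.eye(n) - gamma * (c * k) ** 2 * A
    rhs = (c * k) ** 2 * (A @ u_now) + k ** 2 * f_bar
    d = np.linalg.solve(B, rhs)   # for gamma = 0, B = I and the step is explicit
    return 2.0 * u_now - u_prev + d

# Single-eigenmode check: A v = -lam v behaves like u'' = -c^2*lam*u,
# whose exact solution is u(t) = cos(c*sqrt(lam)*t).
lam, c, k = 4.0, 1.0, 0.1
A1 = np.array([[-lam]])
u_prev = np.array([1.0])
u_now = np.array([np.cos(c * np.sqrt(lam) * k)])
for _ in range(100):
    u_prev, u_now = u_now, gamma_step(u_now, u_prev, A1, c, k, gamma=0.25)
print(abs(u_now[0]) <= 1.01)   # -> True: bounded for gamma >= 1/4, as predicted
```

The one-mode test exercises exactly the recursion analysed in Section 2.4.1 below.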


2.4.1 Stability of the Time Discretisation

As the solution is advanced step-by-step in time, the stability of the discretised scheme must be analysed so that initial perturbations are not amplified as t → ∞. The stability analysis is most easily performed for the case with homogeneous Dirichlet boundaries and the forcing term set to zero. It is further assumed that the wave propagation speed is constant, i.e. the displacement of the wave is described by (2.2). Denote the eigenvalues and eigenvectors of ∆h by λi and vi(x) respectively, where vi(x) = 0 on ∂Ωh. Then the eigenvalue problem of the discretised Laplace operator can be written

∆hvi = λivi, (2.10)

with vi(x) = 0 on the boundary. To solve the fully discretised difference scheme (2.9) we now make the following ansatz:

uh(x, t) = (μi)^(t/k) vi(x), t = 0, k, 2k, . . . (2.11)

Here it is assumed that the initial data is an eigenfunction of the discrete Laplace operator, i.e. that it can be written as u0(x) = vi(x). This is no restriction, since any perturbation of the initial data can be written as a linear combination of the eigenfunctions vi(x), x ∈ Ωh. Substituting the ansatz into equation (2.9) yields

(1 + γc²k²λi)(μi² − 2μi + 1)(μi)^(t/k−1) vi(x) = −c²k²λi (μi)^(t/k) vi(x), t = k, 2k, . . . (2.12)

From this we can write

μi² − 2μi + 1 = −μiτi, (2.13)

where τi can be expressed as

τi = c²k²λi / [1 + γc²k²λi]. (2.14)

Solving this second order equation we get

(μi)1,2 = 1 − τi/2 ± √((τi/2)² − τi). (2.15)

From equations (2.11) and (2.13) we conclude that the solutions of the ansatz will be bounded if and only if |μi| ≤ 1 + ξk, where ξ > 0 is a parameter.


This is called the von Neumann stability condition. Its validity is motivated by |μi|^(t/k) ≤ e^(ξt) for all t > 0 [2]. Multiplying the two roots of (2.13) yields the value 1. Therefore |μi| = 1, i.e. the perturbations are not amplified in time, if and only if τi ≤ 4. Using equation (2.14), the following relation is obtained:

c²k²λi ≤ 4[1 + γc²k²λi].

From this one can conclude that for the perturbations to be bounded, either γ ≥ 1/4, or, if γ < 1/4,

(ck)² ≤ (4 / (1 − 4γ)) · (1 / maxi λi) (2.16)

must be satisfied. In other words, if γ is chosen greater than or equal to 1/4, the discretised scheme is unconditionally stable; it is A-stable. Otherwise, if γ is less than 1/4, the scheme is only conditionally stable, and the time step must be chosen sufficiently small. For the unit square problem the eigenvalues are bounded as maxi λi ≤ 8/h² [2]. Thus, for γ = 0 the stability condition according to equation (2.16) becomes k ≤ h/(√2 c). For γ = 1/12 the stability condition is k ≤ √3 h/(2c). Why γ = 1/12 is a case of particular interest is described in Section 2.4.2.
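The amplification factors can be checked numerically. The sketch below (our own illustration, in Python) computes the roots of (2.13) and confirms that |μi| = 1 whenever τi ≤ 4, which by (2.14) always holds for γ = 1/4:

```python
import cmath

def amplification_roots(tau):
    """Roots of mu^2 - (2 - tau)*mu + 1 = 0, i.e. equation (2.13)."""
    b = 1.0 - 0.5 * tau
    disc = cmath.sqrt(b * b - 1.0)
    return b + disc, b - disc

def tau_of(gamma, c, k, lam):
    """tau_i from equation (2.14)."""
    return (c * k) ** 2 * lam / (1.0 + gamma * (c * k) ** 2 * lam)

# gamma = 1/4: tau = x/(1 + x/4) < 4 for every x = (c*k)^2 * lam_i,
# so both roots lie on the unit circle regardless of the time step.
for x in (0.1, 10.0, 1e6):
    m1, m2 = amplification_roots(tau_of(0.25, 1.0, 1.0, x))
    print(round(abs(m1), 12), round(abs(m2), 12))   # -> 1.0 1.0
```

For τ > 4 the roots become real with one of them exceeding 1 in modulus, which is the conditionally stable regime γ < 1/4 with too large a time step.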

2.4.2 Error Estimates and Convergence of the Fully Discretised Problem

Consider next the behaviour of the time and space discretisation error when using the γ-method in time and the central difference in space. To analyse this, one considers the local truncation error. The operator form (2.9), when divided by k², can be expressed as

Lh,k uh := (1/k²)[I − γc²k²∆h][uh(x, t+k) − 2uh(x, t) + uh(x, t−k)] − c²∆h uh(x, t) = f̄(x, t), (2.17)

with f̄(x, t) = γf(x, t+k) + (1−2γ)f(x, t) + γf(x, t−k).

Then the local truncation error reads

Lh,k u − f̄(x, t) = Lh,k(u − uh). (2.18)

Let us now apply Taylor expansion to the central difference approximations, where the solution is assumed to be sufficiently smooth. The truncation error then becomes:

Lh,k(u − uh) = (1/k²)[I − γc²k²∆h][u(x, t+k) − 2u(x, t) + u(x, t−k)]
  − c²∆h u(x, t) − [γf(x, t+k) + (1−2γ)f(x, t) + γf(x, t−k)]

= [I − γc²k²∆h][utt(x, t) + (k²/12)utttt(x, t) + O(k⁴)]
  − c²∆h u(x, t) − f(x, t) − γk²ftt(x, t) + O(k⁴)

= utt(x, t) − c²∆u(x, t) − f(x, t) + (k²/12)utttt(x, t) − γc²k²(∆u)tt − γk²ftt(x, t)
  − (c²h²/12)(uxxxx + uyyyy) + O(k⁴) + O(h²k²) + O(h⁴). (2.19)

Using the differential equation utt = c²∆u + f this results in:

Lh,k(u − uh) = (1/12 − γ)k²utttt(x, t) − (c²h²/12)(uxxxx + uyyyy) + O(k⁴) + O(h²k²) + O(h⁴). (2.20)

The discretisation error is now defined by eh := ‖u − uh‖. By using equation (2.20) we see that the discretisation error depends on the parameter γ in the following way:

(a) ‖u − uh‖ ≤ CT(k² + h²), h, k → 0, 0 ≤ t ≤ T,
(b) ‖u − uh‖ ≤ CT(k⁴ + h²), h, k → 0, 0 ≤ t ≤ T, if γ = 1/12,

where CT is some constant independent of k and h. It is clear that if γ ≠ 1/12 the scheme has second order convergence in both time and space, as for the regular central difference. When γ = 1/12, however, the method shows fourth order convergence in time. This allows the time step to be chosen somewhat larger for the same accuracy, as long as the stability criterion is satisfied.
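The orders in (a) and (b) are easy to observe numerically. Restricting the semi-discrete problem to a single Laplacian eigenmode gives the scalar test equation u'' = −ω²u, for which the γ-method reads (1 + γk²ω²)(u^{n+1} − 2u^n + u^{n−1}) = −k²ω²u^n. The following Python sketch (our own illustration) estimates the observed order of convergence in time by halving k:

```python
import math

def solve_mode(gamma, omega, k, n_steps):
    """gamma-method for u'' = -omega^2 u with exact solution cos(omega*t),
    started from the exact values u(0) and u(k)."""
    factor = (k * omega) ** 2 / (1.0 + gamma * (k * omega) ** 2)
    u_prev, u_now = 1.0, math.cos(omega * k)
    for _ in range(n_steps - 1):
        u_prev, u_now = u_now, 2.0 * u_now - u_prev - factor * u_now
    return u_now

def observed_order(gamma, omega=2.0, T=1.0, k=0.01):
    exact = math.cos(omega * T)
    e1 = abs(solve_mode(gamma, omega, k, round(T / k)) - exact)
    e2 = abs(solve_mode(gamma, omega, k / 2, round(2 * T / k)) - exact)
    return math.log2(e1 / e2)   # halving k: order p gives a ratio of 2^p

print(round(observed_order(0.0)))        # -> 2: second order in time
print(round(observed_order(1.0 / 12)))   # -> 4: fourth order in time
```

The spatial error is absent here, so the experiment isolates the k² term of (a) and the k⁴ term of (b).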

For hyperbolic problems it is necessary to comment on the so-called Courant-Friedrichs-Lewy (CFL) condition, which is important in particular for explicit methods. The CFL condition requires that the numerical domain of dependence contains the physical domain of dependence for a numerical scheme to be convergent. For the explicit scheme, i.e. γ = 0, the necessary CFL condition for the wave equation states that the numerical wave speed h/k of the discrete problem must be at least as large as the physical wave speed c of the continuous problem [2]. For the implicit scheme the CFL condition is automatically satisfied, since the numerical domain of dependence is the whole (x, y)-plane. This is motivated by the fact that the inverse of the operator [I − γc²k²∆h] in equation (2.9) is a full matrix, i.e. any element in the solution vector is affected by all elements in the right hand side.

2.4.3 Numerical Dispersion

Part of the error observed when solving the wave equation numerically can be attributed to a phenomenon called numerical dispersion. Numerical dispersion occurs when the numerical wave propagates with a different speed than the physical one. To describe this, one can introduce the numerical dispersion number d = ω/(cl), where ω is the angular frequency and l is the wave number. An analysis of the numerical dispersion number for the discretised scheme can be performed to see to what degree this affects the error of the solution [2].

2.5 Convergence of Eigenvalues

When solving an eigenvalue problem, it is important to know how the numerical eigenvalues depend on the spatial discretisation parameter h. The eigenvalue problem of the Laplace operator is expressed by

L(u) ≡ ∆u = λu. (2.21)

It is straightforward to see that on a square domain Ω = [0, 1] × [0, 1] with homogeneous Dirichlet BC (u = 0 on ∂Ω) the eigenfunctions of (2.21) are

u = sin(kπx) sin(lπy), k = 1, 2, . . . , l = 1, 2, . . . (2.22)

For the continuous problem one obtains

∆u = −[(kπ)² + (lπ)²] sin(kπx) sin(lπy). (2.23)

Now we pose the question: after applying the central difference approximation, will the numerical eigenvalues approach the exact ones as h goes to zero? Applying the central difference we obtain

Lh u = (1/h²)[sin(kπ(i−1)h) sin(lπjh) + sin(kπ(i+1)h) sin(lπjh)
  + sin(kπih) sin(lπ(j−1)h) + sin(kπih) sin(lπ(j+1)h)
  − 4 sin(kπih) sin(lπjh)], (2.24)


which can be rewritten in the form

Lh u = (1/h²) sin(kπih) sin(lπjh) [2 cos(kπh) + 2 cos(lπh) − 4]. (2.25)

With further trigonometric manipulation, (2.25) can be expressed as

Lh u = (1/h²)[−4 sin²(kπh/2) − 4 sin²(lπh/2)] sin(kπih) sin(lπjh), (2.26)

and finally

Lh u = −[k²π² · sin²(kπh/2)/(kπh/2)² + l²π² · sin²(lπh/2)/(lπh/2)²] sin(kπih) sin(lπjh). (2.27)

Now, using the property lim_{x→0} sin(x)/x = 1, we conclude that the numerical eigenvalues approach the analytical ones as h → 0. By applying a Taylor expansion to sin(x)/x, the rate at which this happens can be seen to be O(h²). The numerical eigenvalues thus converge with second order to those of the continuous problem.
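The second order convergence of the eigenvalues can be verified directly from (2.27): halving h should reduce the error by a factor of about four. A small Python sketch (our own illustration):

```python
import math

def discrete_eigenvalue(k, l, h):
    """Eigenvalue of the five-point Laplacian for the mode
    sin(k*pi*x)sin(l*pi*y), read off from equation (2.27)."""
    return -(4.0 / h**2) * (math.sin(k * math.pi * h / 2) ** 2
                            + math.sin(l * math.pi * h / 2) ** 2)

def continuous_eigenvalue(k, l):
    return -((k * math.pi) ** 2 + (l * math.pi) ** 2)

k, l = 2, 3
e_h = abs(discrete_eigenvalue(k, l, 0.01) - continuous_eigenvalue(k, l))
e_h2 = abs(discrete_eigenvalue(k, l, 0.005) - continuous_eigenvalue(k, l))
print(round(e_h / e_h2, 2))   # -> 4.0, i.e. second order convergence
```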

2.6 Analysis for Variable Wave Propagation Speed

Now consider the spatial discretisation for the case with non-homogeneous wave speed. As before, equation (2.4) is sampled at each mesh point. We write:

∂²/∂t² u(xi, yj, tn) = ∇ · (q(xi, yj)∇u(xi, yj, tn)) + f(xi, yj, tn). (2.28)

The term ∇ · (q(xi, yj)∇u(xi, yj, tn)) must be discretised in a different manner compared to the simple Laplace operator in the constant wave speed case. This is done by first discretising the outer derivative. It should be mentioned that using the chain rule is not recommended, since it leads to a more difficult discretisation and at some point a loss of physical interpretation [3]. We write

φ = q(x, y)∇u(x, y, t). (2.29)

Now a central difference is applied to the divergence of φ at the point (x, y) = (xi, yj):

∇ · φ ≈ (φ_{i+1/2, j} − φ_{i−1/2, j})/h + (φ_{i, j+1/2} − φ_{i, j−1/2})/h, (2.30)

with equidistant mesh points. Then, tackling the inner derivative:

φ_{i+1/2, j} ≈ q_{i+1/2, j} (u^n_{i+1,j} − u^n_{i,j})/h. (2.31)

The same methodology is then applied at the three other half points. Finally we can write:

∂²/∂t² u(xi, yj, tn) ≈ (1/h²)[ q_{i+1/2, j}(u^n_{i+1,j} − u^n_{i,j}) − q_{i−1/2, j}(u^n_{i,j} − u^n_{i−1,j})
  + q_{i, j+1/2}(u^n_{i,j+1} − u^n_{i,j}) − q_{i, j−1/2}(u^n_{i,j} − u^n_{i,j−1}) ]. (2.32)
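A NumPy sketch of the stencil (2.32) is given below (illustrative only; the half-point coefficients q_{i±1/2,j} are approximated here by arithmetic averages of neighbouring mesh-point values, which is an assumption on our part, as the text does not specify how they are obtained):

```python
import numpy as np

def apply_divgrad(U, q, h):
    """Apply the variable-coefficient stencil (2.32) to interior points.
    q is sampled at the mesh points; half-point values are taken as
    arithmetic averages (other choices, e.g. harmonic means, exist)."""
    qe = 0.5 * (q[1:-1, 1:-1] + q[2:, 1:-1])    # q_{i+1/2, j}
    qw = 0.5 * (q[1:-1, 1:-1] + q[:-2, 1:-1])   # q_{i-1/2, j}
    qn = 0.5 * (q[1:-1, 1:-1] + q[1:-1, 2:])    # q_{i, j+1/2}
    qs = 0.5 * (q[1:-1, 1:-1] + q[1:-1, :-2])   # q_{i, j-1/2}
    out = np.zeros_like(U)
    out[1:-1, 1:-1] = (qe * (U[2:, 1:-1] - U[1:-1, 1:-1])
                       - qw * (U[1:-1, 1:-1] - U[:-2, 1:-1])
                       + qn * (U[1:-1, 2:] - U[1:-1, 1:-1])
                       - qs * (U[1:-1, 1:-1] - U[1:-1, :-2])) / h**2
    return out

# Consistency check: for constant q = c^2 the scheme must reduce to
# c^2 times the five-point Laplacian of Section 2.3.1.
n = 5
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
U = X**2 + Y**2
q = 2.0 * np.ones_like(U)
print(apply_divgrad(U, q, h)[2, 2])   # -> 8.0, i.e. q * Delta(u) = 2 * 4
```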

This scheme leads to a matrix of the same form as discussed in Section 2.3.1, but with different elements on the diagonals. The γ-discretisation can then be applied for the time discretisation as discussed in the previous sections. Going back to Section 2.4.1, one can note that the derived stability condition depends on the wave propagation speed, so this condition must now be revised. For example, it was derived that for γ = 1/12 the stability condition is k ≤ √3 h/(2c). The time step is now limited by the maximal value of the wave speed coefficient, written c̄ = √(max_{x,y∈Ω} q(x, y)). For γ = 1/12 the stability condition thus becomes k ≤ √3 h/(2c̄).

2.7 Linear Systems of Equations

When γ ≠ 0 in equation (2.9), i.e. in the implicit case, a linear system of equations has to be solved at every time step. As refinement of the discretisation leads to an increasing number of mesh-points, the size of the system grows: the better the approximation of the PDE, the larger the system becomes. Finite difference methods generally produce sparse systems; in our case the matrix is five-diagonal, and also symmetric and positive definite. For time and computational efficiency a suitable solution method must be used. A general linear system of equations in matrix form reads:

Ax = b, (2.33)

where x is the solution vector.

A plethora of methods have been developed for solving such linear systems of equations. For sparse systems of large dimensions, the so-called direct solution methods are unfavourable due to their relatively large computational and memory demands. Examples of such direct methods are the classical Gauss elimination techniques, including LU-factorisation. For large sparse systems, iterative solvers are preferred.

2.7.1 The Gradient and Conjugate Gradient Methods

Since the matrix A ∈ R^{m×m} in (2.33) is symmetric and positive definite, a whole class of optimisation methods can be used to obtain the solution vector x. Among these is the method of steepest descent, which is based on the observation that solving Ax = b is equivalent to minimising the function

F(x) = (1/2)x^T Ax − b^T x, x ∈ R^m, (2.34)

since the stationary point is the x satisfying

∇F(x) = Ax − b = 0.

To minimise (2.34), an iterative method can be constructed. First a search direction dk is defined, and then F(xk + αdk) is minimised with solution αk. The next iterate is then defined as

xk+1 = xk + αk dk. (2.35)

With F defined as in (2.34), the exact αk can be computed directly as

αk = gk^T dk / (dk^T A dk), (2.36)

where gk is the residual vector

gk = b − Axk = −∇F(xk).

In each iteration the approximation moves towards the minimum in the direction opposite to the gradient, i.e. dk = gk. The method is thus referred to as the gradient method or the method of steepest descent [4].

However, as the method of steepest descent follows the gradient at each iteration, the resulting path shows a "zigzag" behaviour, as illustrated in Figure 3.


Figure 3: The method of steepest descent; the iterates zigzag across the contour lines of F, starting from x0.

In practice the method often requires too many iterations to reach an acceptable tolerance. Used on sparse problems, the convergence rate is often as poor as for Jacobi's method or similar methods [5]. The problem lies in the fact that the current search direction and the next can be almost parallel with respect to the inner product induced by the matrix A, defined as ⟨x, y⟩_A := x^T Ay. This is a so-called energy inner product. The method is improved when the search directions dk are orthogonal with respect to this inner product; such vectors are called conjugate. A method can now be formulated where the next search direction is constructed as a linear combination of the current residual and the previous search directions. Since all the search directions are forced to be conjugate, the method is referred to as the conjugate gradient method (CG). The conjugate gradient method belongs to a group of numerical methods called Krylov subspace methods [6] and can be seen as an accelerated version of the gradient method [5]; favourably, it avoids the "zigzag" behaviour of the gradient method. An important feature of CG is that, in exact arithmetic, it is a direct solver, since it solves the system in a finite number of iterations. In practice, due to rounding errors, the search directions will not be exactly conjugate, and a truly exact solution is never achieved. For large systems, CG is more efficient when used with a stopping criterion on the residual. Algorithms for both the standard and the preconditioned conjugate gradient methods are displayed in Algorithm 1. The preconditioned conjugate gradient is commonly abbreviated PCG, and the principles of preconditioners are discussed in Section 2.7.2.


Unpreconditioned CG:
    x = x0
    r = A*x - b
    delta0 = (r, r)
    g = -r
    repeat:
        h = A*g
        tau = delta0/(g, h)
        x = x + tau*g
        r = r + tau*h
        delta1 = (r, r)
        if delta1 <= eps, stop
        beta = delta1/delta0
        g = -r + beta*g
        delta0 = delta1

Preconditioned CG:
    x = x0
    r = A*x - b; solve C*h = r
    delta0 = (r, h)
    g = -h
    repeat:
        h = A*g
        tau = delta0/(g, h)
        x = x + tau*g
        r = r + tau*h; solve C*h = r
        delta1 = (r, h)
        if delta1 <= eps, stop
        beta = delta1/delta0
        g = -h + beta*g
        delta0 = delta1

Algorithm 1: Principle algorithms of the unpreconditioned and preconditioned conjugate gradient methods.
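Algorithm 1 translates almost line by line into runnable code. The NumPy sketch below (our own illustration, using a Jacobi diagonal preconditioner as an example of C) implements both variants; `precond` applies C⁻¹ to a vector, and passing None gives plain CG:

```python
import numpy as np

def conjugate_gradient(A, b, x0, eps=1e-10, precond=None, max_iter=1000):
    """(Preconditioned) conjugate gradient following Algorithm 1."""
    solve_C = precond if precond is not None else (lambda r: r)
    x = x0.copy()
    r = A @ x - b
    h = solve_C(r)
    delta0 = r @ h
    g = -h
    for it in range(max_iter):
        Ag = A @ g
        tau = delta0 / (g @ Ag)
        x = x + tau * g
        r = r + tau * Ag
        h = solve_C(r)
        delta1 = r @ h
        if delta1 <= eps:
            break
        g = -h + (delta1 / delta0) * g
        delta0 = delta1
    return x, it + 1

# small SPD test system, preconditioned with the diagonal of A (Jacobi)
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30.0 * np.eye(30)     # SPD by construction
b = rng.standard_normal(30)
x, its = conjugate_gradient(A, b, np.zeros(30), precond=lambda r: r / np.diag(A))
print(np.linalg.norm(A @ x - b) < 1e-3, its <= 30)   # -> True True
```

In exact arithmetic CG on this 30-dimensional system would terminate within 30 iterations; in floating point it is simply stopped by the residual criterion.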

2.7.2 Preconditioning Techniques

Iterative methods are very attractive, particularly for sparse linear systems, as they have an optimal computational complexity per iteration: each iteration consists in general of one matrix-vector multiplication, a few vector updates and some scalar products. Thus, for sparse linear systems the cost per iteration is linear in the number of degrees of freedom. Iterative methods, however, risk being inefficient if the matrix of the system is ill-conditioned. For an ill-conditioned matrix the number of iterations may become large and rounding errors may accumulate, possibly resulting in stagnation or even divergence of the method. How well the matrix is conditioned depends on the distribution of its eigenvalues, usually measured by the so-called condition number [7]. For symmetric positive definite matrices, as in the target problem, the condition number is defined simply as the ratio κ(A) = λmax/λmin. For the matrices at hand, κ(A) = O(h⁻²). Thus, when we decrease h to reduce the discretisation error, we increase the condition number of the corresponding matrix. To mitigate the high condition number, so-called preconditioning techniques are used. If we choose the preconditioner C = A, then C⁻¹A is the identity matrix and the system is solved in one iteration. This, however, is infeasible, since constructing A⁻¹ costs more than solving the system with it. Therefore, a huge scientific effort has been put into constructing good preconditioners that satisfy the following conditions:

• C⁻¹A has a smaller condition number than A.
• C is easy and cheap to construct.
• Solving systems with C is much cheaper than with A.
• The construction of and solution with C is easily parallelisable.

Clearly, achieving all four goals at once is not easy, and some of them are somewhat contradictory. Still, for some classes of matrices such preconditioners exist and are referred to as being of "optimal order". An optimal order preconditioner implies two things. First, κ(C⁻¹A) = O(1), meaning that it does not depend on the discretisation parameters k and h; the iterative method then converges in a number of iterations independent of the number of degrees of freedom. Second, the cost to apply the preconditioner is linear in the number of degrees of freedom. Preconditioners that possess the above properties include some multigrid, multilevel and domain decomposition techniques.
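The growth κ(A) = O(h⁻²) can be illustrated with the known eigenvalues of the five-point Laplacian on the unit square (cf. Section 2.5). A small Python sketch (our own illustration):

```python
import math

def condition_number(n):
    """kappa = lambda_max/lambda_min for the five-point Laplacian on the
    unit square with n x n interior points (h = 1/(n+1)), using the known
    eigenvalues lambda_{k,l} = (4/h^2)(sin^2(k*pi*h/2) + sin^2(l*pi*h/2))."""
    h = 1.0 / (n + 1)
    lam = lambda k, l: (4.0 / h**2) * (math.sin(k * math.pi * h / 2) ** 2
                                       + math.sin(l * math.pi * h / 2) ** 2)
    return lam(n, n) / lam(1, 1)

print(round(condition_number(15)))   # h = 1/16 -> 103
print(round(condition_number(31)))   # h = 1/32 -> 414, roughly 4x larger
```

Halving h roughly quadruples κ(A), so an unpreconditioned iterative solver needs progressively more iterations on finer grids.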

In general the preconditioner can be applied to a linear system in several ways. Left preconditioning reads

C⁻¹Ax = C⁻¹b, (2.37)

where C is the preconditioner. Right preconditioning can generally be expressed as

AC⁻¹Cx = b. (2.38)

Here one must first solve AC⁻¹y = b for y and then Cx = y.
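To make the left-preconditioned form (2.37) concrete, here is a small NumPy sketch using the simplest choice C = diag(A) (the Jacobi preconditioner) on a deliberately badly scaled SPD matrix. The test matrix is our own toy example, not taken from the thesis:

```python
import numpy as np

n = 100
d = np.logspace(0, 4, n)                     # diagonal spanning four orders of magnitude
A = np.diag(d)
A[np.arange(n - 1), np.arange(1, n)] = 0.1   # weak symmetric coupling; A stays
A[np.arange(1, n), np.arange(n - 1)] = 0.1   # diagonally dominant, hence SPD

C_inv = np.diag(1.0 / np.diag(A))            # Jacobi preconditioner C = diag(A)
B = C_inv @ A                                # left-preconditioned matrix C^{-1}A

print("kappa(A)       = %.1e" % np.linalg.cond(A))
print("kappa(C^{-1}A) = %.1e" % np.linalg.cond(B))
```

For this matrix, whose ill-conditioning comes purely from bad scaling, the diagonal preconditioner brings the condition number down to O(1); for the Laplacian-type matrices in the thesis, where the diagonal is constant, stronger preconditioners such as multigrid are needed instead.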

One of the simplest preconditioning methods is the incomplete LU factorisation (ILU). The standard full LU factorisation computes upper and lower triangular matrices such that A = LU. The incomplete LU factorisation instead computes triangular matrices L and U such that A can be written as

A = LU − R. (2.39)

Here R is a residual matrix; LU is thus only an approximation of the matrix A. ILU can then be used as a preconditioner that improves the efficiency of other iterative solvers, such as the aforementioned conjugate gradient method. The incomplete LU factorisation is relatively easy and inexpensive to compute, but may still be inefficient as it often requires many iterations. However, there are alternative versions of ILU that can remedy this to some degree [7].
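As a sketch of how ILU works, the following NumPy implementation of ILU(0) (the standard zero fill-in variant; our own illustration, not the thesis code) factorises a small five-point Laplacian and checks the defining property that LU matches A exactly on the sparsity pattern of A, so the residual R lives only outside that pattern:

```python
import numpy as np

def ilu0(A):
    """Incomplete LU factorisation with zero fill-in: entries are only
    updated on the sparsity pattern of A, so A = LU - R with R nonzero
    only outside the pattern."""
    a = A.copy().astype(float)
    n = a.shape[0]
    pattern = A != 0
    for i in range(1, n):
        for k in range(i):
            if pattern[i, k]:
                a[i, k] /= a[k, k]                     # multiplier L[i, k]
                for j in range(k + 1, n):
                    if pattern[i, j]:                  # drop all fill-in
                        a[i, j] -= a[i, k] * a[k, j]
    L = np.tril(a, -1) + np.eye(n)
    U = np.triu(a)
    return L, U

# 2D five-point Laplacian on a 5x5 interior grid (25 unknowns)
m = 5
A = np.zeros((m * m, m * m))
for i in range(m):
    for j in range(m):
        r = i * m + j
        A[r, r] = 4.0
        if j > 0:     A[r, r - 1] = -1.0
        if j < m - 1: A[r, r + 1] = -1.0
        if i > 0:     A[r, r - m] = -1.0
        if i < m - 1: A[r, r + m] = -1.0

L, U = ilu0(A)
R = L @ U - A
print("max |R| on the pattern of A:", np.abs(R[A != 0]).max())   # ~ 0
print("max |R| overall:            ", np.abs(R).max())           # nonzero fill
```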

2.7.3 The Multigrid Method

Multigrid is a family of numerical methods that was first developed for solving the linear systems of equations arising from discretised elliptic PDEs, where the solution is defined on a specific mesh. The multigrid idea is based on two principles: error smoothing and coarse grid correction. The error smoothing is performed using an iterative method such as the Jacobi method or the Gauss-Seidel method. The profile of the error might at some levels be very sharp; the important feature of these iterative methods is that the error is smoothed out at each iteration. Coarse grid correction uses the principle that a quantity which is smooth on one grid can, without essential loss of information, be approximated on a less refined grid. In the multigrid algorithm this coarse grid correction (CGC) principle is applied to the error, as the error is a so-called correction quantity.

The theory of multigrid is largely based on Fourier analysis, and so the above mentioned principles of error smoothing and CGC can easily be illustrated in such a context. This is done by applying a Fourier expansion to the error:

e_h(x, y) = ∑_{k,l=1}^{n−1} α_{k,l} sin(kπx) sin(lπy), (2.40)

as φ_h^{k,l} = sin(kπx) sin(lπy) are the eigenfunctions of the discrete Laplace operator. The fact that the iterative methods have a smoothing effect can be interpreted as follows: the high frequency components of the error (i.e. when k, l are large) become small, whilst the low frequency components (i.e. when k, l are small) remain more or less unaltered. The concept of high and low frequencies of the error is also applicable to the CGC principle. The low frequencies of the fine grid will be visible as high frequencies on the coarse grid. The high frequencies, however, will not be visible on the coarse grid, as there they coincide with the low frequencies, in what is referred to as an aliasing phenomenon [8].
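The selective damping of high frequencies can be checked directly. The following NumPy sketch (1D stand-in for the 2D analysis; weight ω = 2/3 is a common choice for weighted Jacobi, and the function names are ours) applies Jacobi sweeps to a low- and a high-frequency error mode of the homogeneous system, where the iterate itself is the error:

```python
import numpy as np

n = 63                                   # interior points, h = 1/(n+1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# 1D discrete Laplacian
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi_sweeps(A, e, num, omega=2.0 / 3.0):
    """Weighted Jacobi applied to A e = 0: the iterate IS the error."""
    D = np.diag(A)
    for _ in range(num):
        e = e - omega * (A @ e) / D
    return e

low = np.sin(3 * np.pi * x)              # low-frequency error mode (k = 3)
high = np.sin(60 * np.pi * x)            # high-frequency error mode (k = 60)

for name, e0 in (("low,  k = 3 ", low), ("high, k = 60", high)):
    e = jacobi_sweeps(A, e0, 10)
    print(f"{name}: reduction after 10 sweeps = "
          f"{np.linalg.norm(e) / np.linalg.norm(e0):.2e}")
```

The high-frequency mode is reduced by several orders of magnitude after a few sweeps, while the low-frequency mode is barely touched; this is exactly the behaviour that coarse grid correction is designed to complement.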

So far only one step of coarsening has been considered. This is referred to as "two-grid", but the method can reach higher efficiency when more than two grids are used. The alternation between different meshes allows us to introduce the multigrid algorithm cycles. Figure 4 displays the so-called V- and W-cycles, where the solution is produced using four levels of different grid refinements. The classical implementation of multigrid uses predetermined grid sizes in the operating cycle; this is often referred to as geometric multigrid. At the lowest level direct methods are often used, since on such coarse grids they do not necessarily decrease the computational efficiency. The direct solver is represented by the red dots in Figure 4.


Figure 4: Four level V- and W-cycles, where level 1 is the finest and level 4 is the coarsest grid.

The multigrid cycle can be summarised heuristically as follows: Gauss-Seidel or Jacobi iterations (or similar methods) are performed on the different grid levels, where they decrease the corresponding high frequency errors. As the process covers all frequencies, the overall error is reduced rapidly [8]. The principal multigrid algorithm is displayed in Algorithm 2.

Procedure MG: u(k) ← MG(u(k), f(k), k, {ν(j)}, j = 1, …, k);

    if k = 0, then solve A(0)u(0) = f(0) exactly or by smoothing,
    else
        u(k) ← S1(k)(u(k), f(k)),        perform s1 pre-smoothing steps,
        Correct the residual:
            r(k) = A(k)u(k) − f(k),      form the current residual,
            r(k−1) ← R(r(k)),            restrict the residual to the next coarser grid,
            e(k−1) ← MG(0, r(k−1), k − 1, {ν(j)}, j = 1, …, k − 1);
            e(k) ← P(e(k−1)),            prolong the error from the next coarser to the current grid,
            u(k) = u(k) − e(k),          update the solution,
        u(k) ← S2(k)(u(k), f(k)),        perform s2 post-smoothing steps.
    endif
end Procedure MG


Algorithm 2: Principal algorithm of multigrid.

2.7.4 The Multigrid Method as a Preconditioner

Multigrid can be used as a preconditioner to increase the efficiency of an iterative solver when dealing with large sparse linear systems of equations. Here a multigrid iteration operator in matrix form is used in a similar manner as discussed in Section 3.6.2. Multigrid preconditioning is widely used to accelerate the conjugate gradient method [9].

2.7.5 Algebraic Multigrid

Requiring a sequence of geometric meshes is a disadvantage in various cases, where important features of the problem are not resolved on coarse meshes or where such meshes are simply not available. In such cases the so-called algebraic multigrid methods are preferable in comparison to the previously discussed geometric multigrid. Algebraic multigrid uses the same ingredients as geometric multigrid, but does so without meshes, aiming to achieve the same efficiency.

2.7.6 Aggregation-Based Algebraic Multigrid

AGMG is an implementation of the algebraic multigrid method. In AGMG the algebraic multigrid preconditioner is used to accelerate the conjugate gradient method (PCG) [10]. AGMG does not operate on the regular multigrid V- or W-cycle, but on a so-called K-cycle (also known as the "nonlinear algebraic multilevel iteration (AMLI) cycle"), which can be seen as a "W-cycle with Krylov acceleration at all intermediate levels" [10].

3 Numerical Study

The method is tested using MATLAB. For the accuracy studies the \-operator in MATLAB, which is a direct solver, is used to solve the linear systems. In the performance comparison AGMG is used instead. For this project we obtained an academic licence for the sequential version of the AGMG software, which can be called from a MATLAB function. All the tests are carried out on a square domain Ω = [0, 1] × [0, 1]. Since the γ-method uses three points in time, the first step needs to be computed with a different method. This is done using Euler's implicit method in most cases. However, for the eigenfunction solution a more accurate method is required to get the correct order of accuracy for the γ-method; consequently the trapezoidal rule is used. For all of the tests the boundary condition is u = 0 for x, y ∈ ∂Ω.

3.1 Verification of the Discretisation Error

3.1.1 Constant Coefficients

To verify the implementation of the method, an analytic solution is needed. One option is to use the regular eigenfunction solutions to the wave equation without a forcing term. The disadvantage of this is mainly that the effect of the forcing term is not tested, but also that it can be difficult to isolate the temporal error from the spatial one, or vice versa. This is due to the conditional stability for some values of γ, which makes it difficult to choose a time step that differs enough from the spatial step to see the convergence of the different errors.

The other option is to use a manufactured solution. Some function that fits the boundary conditions is chosen as the solution. It is then substituted into the equation, and a forcing term is picked so that the chosen function solves the equation. If the function used is a polynomial of low enough degree, it can be represented exactly by a finite Taylor series, meaning that a finite difference method can solve the equation exactly. This can then be utilised to isolate the temporal or spatial error.
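The Taylor-series argument can be checked in a few lines. The following sketch (pure Python, function and variable names ours) shows that the central second difference is exact on the quadratic spatial factor x² − x used in solution (3.1) below, but only second-order accurate on the quartic factor x⁴ − x used in (3.2):

```python
def second_difference(g, x, h):
    """Standard second-order central difference approximation of g''(x)."""
    return (g(x - h) - 2.0 * g(x) + g(x + h)) / h**2

quadratic = lambda x: x**2 - x   # exact: third and higher derivatives vanish
quartic = lambda x: x**4 - x     # O(h^2) truncation error remains

x = 0.3
for h in (0.1, 0.05):
    err_quad = abs(second_difference(quadratic, x, h) - 2.0)
    err_quart = abs(second_difference(quartic, x, h) - 12.0 * x**2)
    print(f"h = {h}: quadratic error = {err_quad:.1e}, quartic error = {err_quart:.1e}")
```

The quadratic error is at rounding level for every h, which is what lets the polynomial solutions below isolate the temporal or the spatial error.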

Three different functions are used to test the method. First, a polynomial of sixth order in time and second order in space:

u(x, y, t) = (x² − x)(y² − y)(t⁶ + 1),

f(x, y, t) = 30t⁴(x² − x)(y² − y) − c²(2(y² − y)(t⁶ + 1) + 2(x² − x)(t⁶ + 1)), (3.1)

so that there is only a temporal error. Then, a polynomial of first order in time and fourth order in space:

u(x, y, t) = (x⁴ − x)(y⁴ − y)(t + 1),

f(x, y, t) = −c²(12x²(y⁴ − y)(t + 1) + 12y²(x⁴ − x)(t + 1)), (3.2)

so that there is only a spatial error. Lastly, the simplest eigenfunction solution:

u(x, y, t) = sin(πx) sin(πy) cos(√2 cπt). (3.3)


For all of the solutions, u(x, y, 0) is the initial condition. The time step and mesh size are set equal to get fair comparisons. The wave speed is set to c = 0.4 to achieve numerical stability in all cases.

3.1.2 Variable Coefficients

When considering the case with variable coefficients, it is the spatial discretisation that is being tested. Thus, for convenience, γ = 1/2 is used, as it grants unconditional stability when testing the convergence. Three different functions describing the wave speed are examined. First, a continuous function with a fairly gentle gradient is used:

q(x, y) = arctan x + arctan y. (3.4)

Figure 5: How the wave speed varies over the domain with q(x, y) as in equation (3.4).

To find an exact solution, the same approach as in the constant wave speed case is used. A function that fits the boundary condition is chosen as the solution, and a forcing term is picked so that the function solves the equation. In this case a polynomial solution is picked, of high enough degree that the discrete solution is not exact:

u(x, y, t) = (x⁴ − x)(y⁴ − y)(t⁴ + 1),

f(x, y, t) = ∇ · (q(x, y)∇u) − u_tt. (3.5)


The source term, f(x, y, t), is computed symbolically using the symbolic package in MATLAB. The initial condition is the exact solution evaluated at t = 0. The second case tested is still continuous, but with a much steeper gradient:

q(x, y) = arctan(w(x − 0.25)) − arctan(w(−0.25)), (3.6)

where w is a parameter that controls how steep the function is. In our tests it is set to w = 50.

Figure 6: How the wave speed varies over the domain with q(x, y) as in equation (3.6).

The exact solution is manufactured in the same way as in the previous case, and is hence the same as in (3.5), but f(x, y, t) is of course different since q(x, y) is different. The initial condition is the exact solution evaluated at t = 0. The third case is a domain with a discrete jump in the coefficients:

q(x, y) = { 1, for x < 0.5;  3, for x ≥ 0.5 }. (3.7)


Figure 7: How the wave speed varies over the domain with q(x, y) as in equation (3.7).

In this case there is no known exact solution; instead a very precise numerical solution is used to compute the error. The initial condition is set as a Gaussian bell function:

u(x, y, 0) = exp(−((x − 0.8)² + (y − 0.5)²)/0.01),

with the source term set to zero.

3.1.3 Computational Complexity

To study the computational complexity of the γ-method with AGMG, the variable coefficient case is studied with q(x, y) as in equation (3.4). A tolerance level is set for the absolute error of the solution in the final time step. The time step k and the mesh size h are then adjusted to achieve stability, according to the criterion given in (2.16), and to satisfy the given tolerance. Since AGMG uses a direct solver for smaller systems, and we want to test the scaling of the actual AGMG method, the tolerance needs to be set quite low. This results in fairly large systems.

The tolerance is first set to 5e-5 and then to 1e-6. The total execution time and the number of iterations required by AGMG in every time step are recorded. For the first tolerance the final time is set to T = 2, and with the finer tolerance the final time is T = 1. In this way the number of iterations in time is kept the same, so that if the number of degrees of freedom in the matrix is doubled, the total execution time should be about doubled too. The tests are done using γ = 0, 1/12, 1/2.

Then, AGMG is tested by itself using the matrix corresponding to the constant coefficient case with γ = 1/2, k = h and c = 1, but with a random right hand side. Solution time and number of iterations are recorded. The test is done on fairly large matrices to see if the method behaves optimally, i.e. that the number of iterations stays the same for the different problem sizes, and that the total execution time increases linearly with the problem size.

3.1.4 Computation of the Error

To compute the error we use the L2 norm of the absolute and relative error in the final time step. The absolute error is defined as

E_abs = √( h² ∑_{i=1}^{N} ∑_{j=1}^{N} (u^h_{i,j} − u_{i,j})² )

and the relative error as

E_rel = √( ∑_{i=1}^{N} ∑_{j=1}^{N} (u^h_{i,j} − u_{i,j})² / ∑_{i=1}^{N} ∑_{j=1}^{N} u²_{i,j} ).

3.2 Implementation

The boundary conditions are implemented in three steps. First, the effect of the boundary points on the right hand side is taken into account at the three time levels. The second step is changing the matrix so that the boundary points in the solution are equal to the boundary points in the right hand side. The final step is to force the boundary points in the right hand side to have the values desired in the solution. This is a classical methodology when implementing boundary conditions for finite difference problems. The implementation can deal with homogeneous and non-homogeneous Dirichlet boundary conditions, see "main wave test.m" and "dirichlet.m" in [11].
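Steps two and three of this methodology can be sketched as follows. This is a generic NumPy illustration of the technique on a toy tridiagonal system, not the thesis's MATLAB code; the function name is ours:

```python
import numpy as np

def apply_dirichlet(A, b, boundary_idx, boundary_vals):
    """Enforce u[i] = g[i] on the listed boundary unknowns by replacing the
    corresponding matrix rows with identity rows (step two) and setting the
    right hand side to the prescribed values (step three)."""
    A = A.copy()
    b = b.copy()
    A[boundary_idx, :] = 0.0
    A[boundary_idx, boundary_idx] = 1.0
    b[boundary_idx] = boundary_vals
    return A, b

# toy system: 5 unknowns, the first and last are boundary points with u = 0
n = 5
A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.ones(n)
A, b = apply_dirichlet(A, b, np.array([0, n - 1]), np.array([0.0, 0.0]))
u = np.linalg.solve(A, b)
print(u)   # u[0] and u[-1] are exactly the prescribed boundary values
```

Overwriting rows rather than eliminating columns keeps the implementation simple; the first step described above (moving the boundary contributions to the right hand side) is what restores symmetry-consistent data when the boundary values are nonzero.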

The variable coefficient matrix is implemented with a continuously defined function for the coefficients. Instead of averaging the values at the mesh points, the values of the function between the points are used; this is, however, easily changed. The coefficients corresponding to the different derivatives are placed in their respective off-diagonals and in the main diagonal, see "variable coeff matrix.m" in [11].

4 Results

4.1 Convergence of the γ-method

4.1.1 Constant Wave Speed

When looking at the temporal error using the case in (3.1), the following results are obtained.

Table 1: Convergence for the polynomial solution (3.1) with γ = 0

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      1.69e-04         -        -
1089     0.03125     4.09e-05         4.14     2.05
4225     0.015625    1.00e-05         4.07     2.03
16641    0.0078125   2.49e-06         4.04     2.01
66049    0.0039063   6.19e-07         4.02     2.01

Table 2: Convergence for the polynomial solution (3.1) with γ = 1/12

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      6.93e-07         -        -
1089     0.03125     5.02e-08         13.8     3.79
4225     0.015625    3.37e-09         14.9     3.90
16641    0.0078125   2.18e-10         15.4     3.95
66049    0.0039063   1.39e-11         15.7     3.97

Table 3: Convergence for the polynomial solution (3.1) with γ = 1/2

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      7.37e-04         -        -
1089     0.03125     1.90e-04         3.88     1.95
4225     0.015625    4.84e-05         3.93     1.97
16641    0.0078125   1.22e-05         3.96     1.99
66049    0.0039063   3.07e-06         3.98     1.99


For the cases γ = 1/2 and γ = 0 the convergence approaches second order when h decreases, see Tables 1 and 3. For γ = 1/12 fourth order convergence is observed, see Table 2.

Then, looking at the error from the space discretisation using equation (3.2), the following results are obtained.

Table 4: Convergence for the polynomial solution (3.2) with γ = 0

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      9.00e-04         -        -
1089     0.03125     2.33e-04         3.87     1.95
4225     0.015625    5.91e-05         3.93     1.98
16641    0.0078125   1.49e-05         3.97     1.99
66049    0.0039063   3.74e-06         3.98     1.99

Table 5: Convergence for the polynomial solution (3.2) with γ = 1/12

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      9.00e-04         -        -
1089     0.03125     2.33e-04         3.87     1.95
4225     0.015625    5.91e-05         3.93     1.98
16641    0.0078125   1.49e-05         3.97     1.99
66049    0.0039063   3.74e-06         3.98     1.99

Table 6: Convergence for the polynomial solution (3.2) with γ = 1/2

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      9.00e-04         -        -
1089     0.03125     2.33e-04         3.87     1.95
4225     0.015625    5.91e-05         3.93     1.98
16641    0.0078125   1.49e-05         3.97     1.99
66049    0.0039063   3.74e-06         3.98     1.99

Here in Tables 4-6 second order convergence is observed for all the cases.


Then, looking at the error for the eigenfunction solution (3.3), the following results are obtained.

Table 7: Convergence for the eigenfunction solution (3.3) with γ = 0

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      7.62e-04         -        -
1089     0.03125     1.93e-04         3.95     1.98
4225     0.015625    4.84e-05         3.98     1.99
16641    0.0078125   1.21e-05         3.99     2
66049    0.0039063   3.04e-06         4        2

Table 8: Convergence for the eigenfunction solution (3.3) with γ = 1/12

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      1.09e-03         -        -
1089     0.03125     2.80e-04         3.90     1.96
4225     0.015625    7.08e-05         3.95     1.98
16641    0.0078125   1.78e-05         3.98     1.99
66049    0.0039063   4.46e-06         3.99     2

Table 9: Convergence for the eigenfunction solution (3.3) with γ = 1/2

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      2.71e-03         -        -
1089     0.03125     7.14e-04         3.8      1.92
4225     0.015625    1.82e-04         3.91     1.97
16641    0.0078125   4.61e-05         3.96     1.99
66049    0.0039063   1.16e-05         3.98     1.99

Here in Tables 7-9 the convergence approaches second order as h decreases.


4.1.2 Non-Constant Wave Speed

Table 10: Convergence for wave speed with quite mellow gradient, γ = 1/2

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      5.28e-03         -        -
1089     0.03125     1.36e-03         3.88     1.95
4225     0.015625    3.46e-04         3.94     1.98
16641    0.0078125   8.72e-05         3.97     1.99
66049    0.0039063   2.19e-05         3.98     1.99

In Table 10 the result for the solution presented in (3.5) is shown, where the wave speed is the arctan function presented in (3.4). Here the convergence approaches second order as h decreases.

Table 11: Convergence for wave speed with steep gradient, γ = 1/2

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      2.67e-02         -        -
1089     0.03125     2.94e-03         9.08     3.18
4225     0.015625    1.12e-04         26.2     4.71
16641    0.0078125   3.58e-05         3.13     1.64
66049    0.0039063   9.03e-06         3.96     1.99
263169   0.0019531   2.27e-06         3.99     2.00

In Table 11 the result for the solution presented in equation (3.5) is shown, where the wave speed is the steeper, almost vertical, arctan function presented in equation (3.6). Here the convergence approaches second order as h decreases, although at first the convergence is somewhat faster.

Table 12: Convergence with discrete jump in wave speed, γ = 1/2

DOF      k = h       Absolute error   Factor   Rate
289      0.0625      1.07e-01         -        -
1089     0.03125     1.04e-01         1.03     0.0392
4225     0.015625    6.79e-02         1.54     0.620
16641    0.0078125   3.51e-02         1.93     0.951

In Table 12 the result for the case with a discrete jump in the wave speed, see equation (3.7), is shown. Here the convergence approaches first order as h decreases.

4.2 Computational Complexity

Table 13: Solution time with tolerance 5e-5 and final time T = 2

γ      DOF      k         Absolute error   Time (s)   Iterations in AGMG
1/2    66049    k = h     1.16e-05         117        6
1/12   66049    k = h/2   1.21e-05         221        2
0      66049    k = h/2   1.21e-05         199        -

Table 14: Solution time with tolerance 1e-6 and final time T = 1

γ      DOF      k         Absolute error   Time (s)   Iterations in AGMG
1/2    263169   k = h     4.44e-07         496        6
1/12   263169   k = h/2   7.47e-07         964        4
0      263169   k = h/2   8.69e-07         832        -

In Tables 13 and 14 the results of the computational complexity tests are shown. The parameters were set as described in Section 3.1.3. The total solution time seems to increase linearly with the problem size, and γ = 1/2 is the fastest in both cases.

4.3 Scaling of the AGMG-method

Table 15: Scaling of the AGMG method with tolerance 1e-6

DOF        Total time (s)   Solution time (s)   Setup time (s)   Iterations
263169     7.50e-02         6.20e-02            1.30e-02         6
1050625    4.56e-01         2.06e-01            2.50e-01         6
4198401    1.09             8.95e-01            1.14             6
16785409   11.41            6.37                5.04             6

In Table 15 the result of the scaling test of AGMG is presented, with parameters as given in Section 3.1.3. The number of iterations stays the same independent of the number of degrees of freedom, and the total solution time increases by about one order of magnitude at every increase in size.


5 Discussion

5.1 Convergence of the γ-method

5.1.1 Constant Wave Speed

For the three different solutions tested in the constant coefficient case, the results are consistent with the theory in Sections 2.4.2 and 2.5. For the eigenfunction solution and the polynomial with an error bounded by the space discretisation, second order convergence is observed. For the polynomial solution that isolates the error from the time discretisation, (3.1), there is second order convergence for values of γ ≠ 1/12. For γ = 1/12 we observe fourth order convergence. Unfortunately, the fourth order convergence will be difficult to observe in a "real-world" scenario, since the stability criterion forces the discretisation steps in space and time to be about the same size, see (2.16).

5.1.2 Non-Constant Wave Speed

The case when the medium is non-homogeneous has several interesting physical interpretations. A wave propagation speed c(x, y) described by a continuous function (see Figure 5) can to some degree be interpreted as varying elasticity in the material the wave propagates through. The case where c(x, y) is a discontinuous function (see Figure 7) represents propagation of waves through two different materials. This is of interest, for example, when studying seismic phenomena. When the vibrations caused by an earthquake reverberate through the surroundings, different materials will affect the vibrations in different ways. Such inter-material wave transfers can be modelled by discontinuities in a similar manner as in this work.

In the first variable coefficient case, see (3.4), the solution exhibits second order convergence, Table 10. This is expected according to the theory, since the spatial discretisation should be second order even when the wave coefficient is not constant. Furthermore, the temporal discretisation is the same as in the constant coefficient case, so the accuracy in time should be unchanged.

In the second case, see (3.6), where the coefficients are still continuous but have a much steeper gradient, the convergence varies a bit at first but approaches second order when h gets small enough, Table 11. Since the error is quite large at first compared to the previously discussed case, our belief is that this is due to the source term being very steep, i.e. containing very short wavelengths that cannot be represented on a coarser mesh. Then, when the mesh goes from being too coarse to being sufficiently refined for those wavelengths to be represented, the convergence is for a while faster than expected.

In the case with a discrete jump in the coefficients, see (3.7), the rate of convergence deteriorates to first order, Table 12. It can be shown that this is expected for a central difference in one dimension, see proposition two in [12], and it is not far-fetched that this extends to the two dimensional case. So, if possible, when computing a real case it might be better to approximate a discrete jump in the coefficients with a very steep continuous function, to keep the order of accuracy. As mentioned in Section 3.1.2, the convergence rate in this case is obtained by comparing with a more accurate numerical solution. The validity of this is motivated by the fact that the method has been verified to work for the other cases.

5.2 Computational Complexity

In both of the tests γ = 1/2 is the fastest, Tables 13 and 14. This is due to the fact that k does not have to be reduced for stability reasons, so only half the number of iterations in time are required. The values of γ that do not grant unconditional stability will always require more iterations in time as long as the maximum value of the coefficients, max(q(x, y)), is larger than some constant, depending on γ, defined by the stability criterion given in equation (2.16). The case γ = 1/12 allows a higher convergence rate, as derived in Section 2.4.2, giving us the opportunity to choose a larger time step relative to the space discretisation and thus optimise the computational efficiency. This advantage, however, never really comes into play, as the size of the time step is still limited by the stability criterion.

The only unconditionally stable case tested was γ = 1/2, but this might not be the fastest choice within the unconditionally stable region: the matrix depends on γ, and a lower γ might need one AGMG iteration less, which could make it faster. This could be settled by further testing.

Since the solution time increases linearly with the number of degrees of freedom in the matrix, the computational complexity is optimal. The number of AGMG iterations is not constant for γ = 1/12; it probably would be if the matrix were larger, but that was not feasible to test due to hardware constraints. However, the time still scaled almost linearly with the problem size. For γ = 1/2 the iteration count is constant, so there the method is definitely optimal.


5.3 Scaling of the AGMG-method

The factor between the total times for the different sizes is not completely consistent, but it seems to increase slightly less than one order of magnitude when the problem size is increased, Table 15. The times are quite short, so the inconsistencies might just be background processes interfering, or the CPU not boosting its clock frequency equally in the different tests. Even larger problem sizes would probably have been better, since they could have reduced some of these effects; unfortunately this was not possible due to limited RAM. However, the number of iterations stayed the same for all sizes, so the method scaled optimally, as expected.

6 Conclusion

The γ-method is, in summary, attractive, as it offers second order convergence and can be unconditionally stable, meaning no stringent requirements on the time step, while still being fairly simple to implement. However, a good solver for the linear system is required, such as AGMG or some other optimal or near optimal method; otherwise performance will be poor in the implicit case.

We conclude that the unconditionally stable case, i.e. when 1/4 ≤ γ ≤ 1/2, is in many cases preferable, as the choice of time step is not limited by the stability criterion. This is convenient since, when using a central difference in space, the errors from the time and space discretisations are balanced, so choosing a finer time step than the mesh size does not give better accuracy; it only takes longer to compute. The case with the theoretically best convergence rate, γ = 1/12, was somewhat deprived of its potential advantages by the stability condition, leading in our case to longer computations than for γ = 1/2.

If one wants to compute a case where the coefficients have discrete jumps, it would probably be better to approximate those with very steep continuous functions, since using this method with a discontinuity results in lower accuracy. Generally, finite differences can be less suitable for solving discontinuous problems; other alternatives, e.g. finite elements, might work better.

This work has covered some of the interesting aspects of implementing the γ-discretisation to solve the wave equation. There are, however, many further aspects that could be studied. One example is a more specific study of which γ is optimal in terms of computational efficiency. The value of γ changes the matrix of the system and affects the number of iterations required in AGMG. This work was limited to a comparison of only a few values, γ = 0, γ = 1/12 and γ = 1/2; an improvement of the study would be to test other values in the unconditionally stable region in order to fully optimise the method. It would furthermore be of interest to extend the study to other geometries, e.g. a circular domain, where a special discretisation technique called Shortley-Weller can be used. Research could be carried out regarding whether the order of accuracy is affected when handling curved boundaries, as finite differences generally suffer on more complex geometries.


References

[1] Y. Notay. AGMG software and documentation. [Online]. Available: http://agmg.eu.

[2] O. Axelsson, "Finite difference methods for hyperbolic problems," in Finite difference methods, ser. Encyclopedia of Computational Mechanics. John Wiley & Sons, Ltd, 2004.

[3] H. P. Langtangen, "Generalization, variable wave velocity," in Finite difference methods for the wave equation. Department of Informatics, University of Oslo, 2016.

[4] J. C. Strikwerda, "The method of steepest descent and the conjugate gradient method," in Finite Difference Schemes and Partial Differential Equations. Wadsworth & Brooks/Cole Advanced Books & Software, 2004.

[5] P. Knabner and L. Angermann, "Gradient and the conjugate gradient method," in Numerical Methods for Elliptic and Parabolic Differential Equations. Springer-Verlag, 2003.

[6] Y. Saad, "Basic concepts in linear systems," in Iterative Methods for Sparse Linear Systems. PWS Publishing Company, 1996.

[7] Y. Saad, "Preconditioning techniques," in Iterative Methods for Sparse Linear Systems. PWS Publishing Company, 1996.

[8] U. Trottenberg, C. Oosterlee, and A. Schüller, "Multigrid introduction," in Multigrid. Academic Press, 2001.

[9] P. Wesseling, "Multigrid algorithms," in Multigrid Methods. Edwards Inc., 2004.

[10] Y. Notay and P. Vassilevski, "Recursive Krylov-based multigrid cycles," Numer. Linear Algebra Appl., vol. 15, pp. 473-487, 2008.

[11] A. Holmberg, C. Böhme, and M. Nilsson Lind. (2020). Kandjobb gamma-metoden. [Online]. Available: https://github.com/Antonholmbergg/Kandjobb_gammametoden.git.

[12] A. Sei and W. Symes, "Error analysis of numerical schemes for the wave equation in heterogeneous media," Appl. Numer. Math., vol. 15, pp. 465-480, Nov. 1994. doi: 10.1016/0168-9274(94)00036-0.
