


Numerical Analysis of Partial Differential Equations


PURE AND APPLIED MATHEMATICS

A Wiley Series of Texts, Monographs, and Tracts

Founded by RICHARD COURANT
Editors Emeriti: MYRON B. ALLEN III, DAVID A. COX, PETER HILTON, HARRY HOCHSTADT, PETER LAX, JOHN TOLAND

A complete list of the titles in this series appears at the end of this volume.


Numerical Analysis of Partial Differential Equations

S. H. Lui Department of Mathematics

University of Manitoba Winnipeg, Manitoba, Canada

WILEY A JOHN WILEY & SONS, INC., PUBLICATION


Copyright © 2011 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representation or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print, however, may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Lui, S. H. (Shaun H.), 1961–
Numerical analysis of partial differential equations / S.H. Lui.

p. cm. — (Pure and applied mathematics, a Wiley-Interscience series of texts, monographs, and tracts)

Includes bibliographical references and index. ISBN 978-0-470-64728-8 (hardback) 1. Differential equations, Partial—Numerical solutions. I. Title. QA377.L84 2011 518'.64—dc23 2011013570

Printed in the United States of America.

oBook ISBN: 978-1-118-11113-0  ePDF ISBN: 978-1-118-11110-9  ePub ISBN: 978-1-118-11111-6

10 9 8 7 6 5 4 3 2 1


CONTENTS

Preface
Acknowledgments

1 Finite Difference
1.1 Second-Order Approximation for Δ
1.2 Fourth-Order Approximation for Δ
1.3 Neumann Boundary Condition
1.4 Polar Coordinates
1.5 Curved Boundary
1.6 Difference Approximation for Δ²
1.7 A Convection-Diffusion Equation
1.8 Appendix: Analysis of Discrete Operators
1.9 Summary and Exercises

2 Mathematical Theory of Elliptic PDEs
2.1 Function Spaces
2.2 Derivatives
2.3 Sobolev Spaces
2.4 Sobolev Embedding Theory
2.5 Traces
2.6 Negative Sobolev Spaces
2.7 Some Inequalities and Identities
2.8 Weak Solutions
2.9 Linear Elliptic PDEs
2.10 Appendix: Some Definitions and Theorems
2.11 Summary and Exercises

3 Finite Elements
3.1 Approximate Methods of Solution
3.2 Finite Elements in 1D
3.3 Finite Elements in 2D
3.4 Inverse Estimate
3.5 L² and Negative-Norm Estimates
3.6 Higher-Order Elements
3.7 A Posteriori Estimate
3.8 Quadrilateral Elements
3.9 Numerical Integration
3.10 Stokes Problem
3.11 Linear Elasticity
3.12 Summary and Exercises

4 Numerical Linear Algebra
4.1 Condition Number
4.2 Classical Iterative Methods
4.3 Krylov Subspace Methods
4.4 Direct Methods
4.5 Preconditioning
4.6 Appendix: Chebyshev Polynomials
4.7 Summary and Exercises

5 Spectral Methods
5.1 Trigonometric Polynomials
5.2 Fourier Spectral Method
5.3 Orthogonal Polynomials
5.4 Spectral Galerkin and Spectral Tau Methods
5.5 Spectral Collocation
5.6 Polar Coordinates
5.7 Neumann Problems
5.8 Fourth-Order PDEs
5.9 Summary and Exercises

6 Evolutionary PDEs
6.1 Finite Difference Schemes for Heat Equation
6.2 Other Time Discretization Schemes
6.3 Convection-Dominated Equations
6.4 Finite Element Scheme for Heat Equation
6.5 Spectral Collocation for Heat Equation
6.6 Finite Difference Scheme for Wave Equation
6.7 Dispersion
6.8 Summary and Exercises

7 Multigrid
7.1 Introduction
7.2 Two-Grid Method
7.3 Practical Multigrid Algorithms
7.4 Finite Element Multigrid
7.5 Summary and Exercises

8 Domain Decomposition
8.1 Overlapping Schwarz Methods
8.2 Orthogonal Projections
8.3 Non-overlapping Schwarz Method
8.4 Substructuring Methods
8.5 Optimal Substructuring Methods
8.6 Summary and Exercises

9 Infinite Domains
9.1 Absorbing Boundary Conditions
9.2 Dirichlet-Neumann Map
9.3 Perfectly Matched Layer
9.4 Boundary Integral Methods
9.5 Fast Multipole Method
9.6 Summary and Exercises

10 Nonlinear Problems
10.1 Newton's Method
10.2 Other Methods
10.3 Some Nonlinear Problems
10.4 Software
10.5 Program Verification
10.6 Summary and Exercises

Answers to Selected Exercises
References
Index


PREFACE

This is a book on the numerical analysis of partial differential equations (PDEs). This beautiful subject studies the theory behind algorithms used to approximate solutions of PDEs. The ultimate goal is to design methods which are accurate and efficient. It is a relatively young field which draws upon powerful theory from many branches of mathematics, both pure and applied.

The contents in this book are suitable for a two-semester course at the senior undergraduate or beginning graduate level, for students in mathematical sciences and engineering. The emphasis is on elliptic PDEs, with one chapter discussing evolutionary PDEs. A prerequisite for reading this book is a solid undergraduate course in analysis. Some exposure to numerical analysis and PDEs is helpful.

Our aim is to offer an introduction to most of the important concepts in the numerical analysis of PDEs. The two-dimensional Poisson equation is the model problem upon which the various methods are analyzed. Because of the simplicity of this equation, the analysis can almost always be carried out in full. In this sense, this book is entirely self-contained; the student does not need to consult other books for the proof of a theorem. The only exception is the chapter on the mathematical theory of elliptic PDEs, where very few proofs are given. Many students taking this course may have no prior experience in this area and thus this chapter is simply an introduction to the topic by examples. A proper treatment of PDEs requires several semesters, which most students cannot accommodate in their programs. Some may argue that there is too much emphasis on the Poisson equation; however, our response is that for an introductory course, this equation is appropriate because it gives a good picture of the kinds of results expected without the complications involved in more general PDEs. Convection-diffusion equations and nonlinear equations are of course interesting, but they belong to more advanced courses and many topics there are still under active research. Other omitted topics include finite volume methods, discontinuous Galerkin methods, meshless methods, Monte Carlo methods, wavelets, eigenvalue problems, inverse problems, free boundary value problems, etc. Implementation and visualization issues are not discussed at all.

The topics covered are the three main discretization methods for elliptic PDEs: finite difference, finite element and spectral methods. These are presented in Chapters 1, 3 and 5, respectively. In between are discussions on the mathematical theory of elliptic PDEs in Chapter 2 and numerical linear algebra in Chapter 4. Time-dependent PDEs make a brief appearance in Chapter 6. Multigrid and domain decomposition are covered in Chapters 7 and 8; these are among the most efficient techniques for solving PDEs today. Chapter 9 contains a discussion of PDEs posed on infinite domains. The main issue here is how to pose the boundary condition on the artificial boundary which is necessary on a finite computational domain. Methods for nonlinear problems are briefly described in Chapter 10. Here, we also describe some important nonlinear problems from many fields of science and engineering. These can serve as computing projects for students from different disciplines.

Each chapter can be, and has been, expanded by other authors into a course by itself! Most chapters can be covered in approximately 10 hours. The exceptions are Chapter 3 (finite elements) and Chapter 5 (spectral methods), which require about 15 hours each to cover all sections; Chapter 7 (multigrid), Chapter 9 (infinite domains) and Chapter 10 (nonlinear problems) need about five hours each.

A few words about the ordering of the chapters are called for. The material on the finite difference method requires few prerequisites and thus is placed in the first chapter. The analysis of the finite element and spectral methods uses the language of Sobolev spaces and their properties, which are conveniently covered in the second chapter. Having seen the structured matrices in the finite difference method and the unstructured matrices in the finite element method, readers are well motivated to appreciate and comprehend the issues in numerical linear algebra in Chapter 4. A possible alternative ordering of the first part of the book is to discuss Sobolev spaces first, followed by the three discretization techniques and finally numerical linear algebra. The problem is that the material on PDE theory appears to many students as abstract, dry and unmotivated. Furthermore, the discussion on numerical PDEs does not begin until the fourth week. This book does not have to be read in the order presented, but Chapter 2 should precede all subsequent chapters except 4, 9 and 10. Chapters 6 through 9 can be read in any order, but they rely heavily on material from the first four chapters.

No one learns mathematics by reading alone. Exercises are an integral part of this book and students are encouraged to try them. They are essential for reinforcing the material and many extend theories and techniques developed in the text. Both theoretical and programming problems are available, with the former prefixed by E


and the latter prefixed by P. Answers to selected written exercises are given. Although this book does not emphasize the implementation of the algorithms, readers willing to invest time on the programming exercises will gain a much better appreciation of the subject. As already mentioned, about half of the chapters can be covered in about four weeks. The time frame can easily extend by one to two weeks per chapter if students do a substantial fraction of the written and programming exercises.

Almost all material in this book is well known to numerical analysts and has been gleaned from various sources listed in the bibliography. References to texts or monographs are generally given in place of the original articles. There are already excellent texts on each of the areas discussed in this book, but there does not appear to be one which covers all the topics here. The present book should serve as a good preparation for more advanced work. Readers are encouraged to send their comments and corrections to [email protected]. A webpage http://home.cc.umanitoba.ca/~luish/numpde has been created for this book. It will contain errata as well as some MATLAB programs.


ACKNOWLEDGMENTS

I have learned a great deal from the books and papers of many mathematicians (a partial list appears in the References), whose ideas I have followed in this book. In fact, I was studying several topics for the first time as I was writing this book; experts in the fields should have no difficulty pointing out my level of ignorance. I sincerely thank my colleagues Susanne Brenner, Raymond Chan, Qiang Du, Martin Gander, Laurence Halpern, Ronald Haynes, Felix Kwok, Jie Shen, Xue-Cheng Tai and Justin Wan for reading parts of this book on short notice and making many insightful suggestions. This book is poorer because I have not the time, energy and/or expertise to implement all their proposed changes. I take this opportunity to thank the many students who have taken courses based on earlier drafts of this book. They pointed out many typos and inaccuracies and made numerous suggestions which have immeasurably improved the presentation. Of course, I am responsible for all remaining mistakes in the book. It has truly been a humbling and rewarding experience having so many wonderful students in my classes.

Michael Doob, Amy Hendrickson and Jonas Lippuner have been extremely helpful with the finer points of LaTeX. I also thank my editor Susanne Steitz-Filler and the staff at John Wiley & Sons, Inc. Finally, financial support from NSERC and the University of Manitoba is gratefully acknowledged.


CHAPTER 1

FINITE DIFFERENCE

The finite difference method was one of the first numerical methods used to solve partial differential equations (PDEs). It replaces differential operators by finite differences, and the PDE becomes a (finite) system of equations. Its simplicity and ease of computer implementation make it a popular choice for PDEs defined on regular geometries. One drawback is that it becomes awkward when the geometry is not regular or cannot be mapped to a regular geometry. Another disadvantage is that its error analysis is not as sharp as that of the other methods covered in this book (finite element and spectral methods). A recurring theme is that the analysis and properties of discrete operators mimic those of the differential operators. Examples include integration by parts, the maximum principle, the energy method, Green's function and the Poincaré-Friedrichs inequality.

1.1 SECOND-ORDER APPROXIMATION FOR Δ

Consider the Poisson equation

$$-\Delta u = f \ \text{ on } \Omega, \qquad u = 0 \ \text{ on } \partial\Omega. \qquad (1.1)$$



Here,

$$\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$$

is the Laplacian in two dimensions. Throughout this chapter, except the sections on polar coordinates and curved boundaries, Ω is the unit square (0,1)² = {(x, y) : 0 < x, y < 1}.

The Poisson equation is a fundamental equation which arises in elasticity, electromagnetism, fluid mechanics and many other branches of science and engineering. Because an explicit expression of the solution is available only in a few exceptional cases, we often rely on numerical methods to approximate the solution. The goal of the subject of numerical analysis of PDEs is to design a numerical method which approximates the solution accurately and efficiently. Roughly speaking, a method is accurate if the computed solution differs from the exact solution by an amount which goes to zero as h, a discretization parameter, goes to zero. A method is efficient if the computation and storage requirements of the method do not grow quickly as a function of the size of the input data, f, in the case of the Poisson equation. The remaining pages of this book will be preoccupied with these issues and will culminate in algorithms (multigrid and domain decomposition) which are optimal in the sense that the amount of work to approximate the solution is no more than a linear function of the size of the input.

We take a uniform grid of size h = 1/n, where n is a positive integer. Let

$$\Omega_h = \{x_{ij} = (ih, jh),\ 1 \le i, j \le n-1\}$$

denote the set of interior grid points and

$$\partial\Omega_h = \{(0, jh),\ (1, jh),\ (jh, 0),\ (jh, 1),\ 1 \le j \le n-1\}$$

denote the boundary grid points. [Observe that the corner points (0,0), (1,0), (0,1), (1,1) are not in ∂Ω_h.] Define Ω̄_h = Ω_h ∪ ∂Ω_h, which consists of (n + 1)² − 4 points.

The finite difference method seeks the solution of the PDE at the grid points in Ω_h. Specifically, the (n − 1)² unknowns are u_ij ≈ u(ih, jh), 1 ≤ i, j ≤ n − 1. We obtain (n − 1)² equations by approximating the differential equation by a finite difference approximation at each interior grid point. That is,

$$\frac{4u_{ij} - u_{i+1,j} - u_{i-1,j} - u_{i,j+1} - u_{i,j-1}}{h^2} = f_{ij} := f(x_{ij}).$$

(These equations will be derived later when we discuss the consistency of the scheme.) The boundary values are u_{0j} = u_{nj} = u_{i0} = u_{in} = 0, 1 ≤ i, j ≤ n − 1; they are known from the boundary condition. The system of linear equations is denoted by the discrete Poisson equation

$$-\Delta_h u_h = f_h.$$

Here, u_h is the vector of unknowns arranged in an order so that Δ_h is block tridiagonal:

$$u_h = [u_{11}, \dots, u_{n-1,1},\ u_{12}, \dots, u_{n-1,2},\ \dots,\ u_{n-1,n-1}]^T,$$


$f_h = [f_{11}, \dots, f_{n-1,n-1}]^T$ and

$$-\Delta_h = \frac{1}{h^2}\begin{bmatrix} T & -I & & \\ -I & T & -I & \\ & \ddots & \ddots & \ddots \\ & & -I & T \end{bmatrix}, \qquad T = \begin{bmatrix} 4 & -1 & & \\ -1 & 4 & -1 & \\ & \ddots & \ddots & \ddots \\ & & -1 & 4 \end{bmatrix}.$$

Both T and I above are (n − 1) × (n − 1) matrices, with I the identity matrix. An alternative representation of Δ_h is given by the molecule

$$\frac{1}{h^2}\begin{bmatrix} & 1 & \\ 1 & -4 & 1 \\ & 1 & \end{bmatrix}.$$
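For concreteness, the block tridiagonal matrix can be assembled from one-dimensional pieces with Kronecker products. A minimal numpy sketch (the function name and test sizes are illustrative, not from the text):

```python
import numpy as np

def neg_laplacian(n):
    """Assemble -Delta_h on the unit square with h = 1/n, using
    -Delta_h = (1/h^2)(I kron K + K kron I), where K is the
    (n-1) x (n-1) tridiagonal matrix with 2 on the diagonal and
    -1 on the off-diagonals."""
    h = 1.0 / n
    m = n - 1
    K = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    I = np.eye(m)
    return (np.kron(I, K) + np.kron(K, I)) / h**2

A = neg_laplacian(4)
print(A.shape)            # (9, 9): one row per interior grid point
print(A[0, 0] * 0.25**2)  # diagonal entry times h^2 gives 4.0
```

The ordering of unknowns matches the text: the first index varies fastest, so the x-direction differences sit inside each diagonal block and the y-direction differences couple neighboring blocks.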

Before proceeding further, we define some norms which measure the size of vectors and matrices. First consider the infinity norm on R^N. For any x ∈ R^N, define

$$|x|_\infty = \max_{1 \le i \le N} |x_i|.$$

For any N × N matrix A, we claim that

$$|A|_\infty := \sup_{x \in \mathbb{R}^N \setminus 0} \frac{|Ax|_\infty}{|x|_\infty} = \max_{1 \le i \le N} \sum_{j=1}^N |a_{ij}|.$$

In other words, |A|_∞ equals the maximum row sum of the matrix. To see this, for any x ∈ R^N \ 0,

$$|Ax|_\infty = \max_{1 \le i \le N} \Big|\sum_{j=1}^N a_{ij}x_j\Big| \le \max_{1 \le i \le N} \sum_{j=1}^N |a_{ij}|\,|x_j| \le |x|_\infty \max_{1 \le i \le N} \sum_{j=1}^N |a_{ij}|.$$

Hence $|A|_\infty \le \max_{1\le i\le N} \sum_{j=1}^N |a_{ij}|$. To show equality, suppose A ≠ 0 and the maximum row sum occurs on row i. Define x_j = sign(a_{ij}). Then |x|_∞ = 1 and $|Ax|_\infty = \sum_{j=1}^N |a_{ij}|$. As an application, we note that |Δ_h|_∞ = 8h^{−2}.

Another useful norm is the Euclidean (or 2-) norm defined by $|x|_2^2 = x^T x$ and, for a matrix A,

$$|A|_2 = \sup_{x \ne 0} \frac{|Ax|_2}{|x|_2}.$$

One useful property is $|A|_2 = \sqrt{\sigma_1}$, where σ₁ is the largest eigenvalue of AᵀA. If A is symmetric, then |A|₂ = |λ₁| while |A^{−1}|₂ = |λ_N|^{−1}, where λ₁ and λ_N are eigenvalues of A of largest and smallest magnitude, respectively.
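Both norm identities are easy to sanity-check numerically; a small sketch (the test matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# |A|_inf equals the maximum absolute row sum
max_row_sum = np.abs(A).sum(axis=1).max()
print(np.linalg.norm(A, np.inf), max_row_sum)  # both 7.0

# |A|_2 equals the square root of the largest eigenvalue of A^T A
sigma1 = np.linalg.eigvalsh(A.T @ A).max()
print(np.isclose(np.linalg.norm(A, 2), np.sqrt(sigma1)))  # True
```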


Two norms ‖·‖₁ and ‖·‖₂ in a normed vector space X are said to be equivalent if there are positive constants c_i such that for every v in X, c₁‖v‖₁ ≤ ‖v‖₂ ≤ c₂‖v‖₁. It is well known that any two norms on a finite-dimensional space are equivalent. This means that we can use |·|₂ or |·|_∞, whichever is more convenient. Although c₁ and c₂ are independent of v ∈ X, they do depend on N, the dimension of the space.

Finally, we define C^r(Ω̄), for 0 ≤ r < ∞, as the space of r times continuously differentiable functions on Ω̄ with norm

$$\|v\|_{C^r(\bar\Omega)} = \max_{0 \le |\alpha| \le r}\ \sup_{x \in \bar\Omega} |D^\alpha v(x)|.$$

Here D^α v denotes any derivative of v up to rth order, where α is a multi-index. For instance, if α = (2,1), then D^α v denotes the third derivative v_xxy. We abbreviate C⁰(Ω̄) as C(Ω̄). If v ∈ C^r(Ω̄), then D^α v ∈ C(Ω̄) for every α such that |α| := α₁ + α₂ ≤ r. Define

$$\|v\|'_{C^r(\bar\Omega)} = \max_{|\alpha| = r}\ \sup_{x \in \bar\Omega} |D^\alpha v(x)|. \qquad (1.2)$$

This is not a norm but it comes up frequently in the analysis of finite difference schemes. In the one-dimensional case,

$$\|v\|_{C^2([0,1])} = \max_x \big(|v(x)|,\ |v'(x)|,\ |v''(x)|\big), \qquad \|v\|'_{C^2([0,1])} = \max_x |v''(x)|.$$

In this book, all functions are real unless otherwise specified. Also, c appears in many places and denotes a positive constant whose value may differ in different occurrences. The same remark applies to c_1, c_2, C, C_1, C_2, etc.

Next, we shall demonstrate several properties of −Δ_h. These are crucial in estimating the error of the approximate solution.

Positive Definiteness

Let (·, ·) be the L² inner product defined by

$$(u, v) = \int_\Omega uv$$

for all square-integrable functions u and v defined on Ω. It is well known that −Δ is a self-adjoint and positive definite operator. This means that for all smooth functions u, v, w vanishing on ∂Ω with w ≠ 0,

$$(-\Delta u, v) = (u, -\Delta v) \quad\text{and}\quad (-\Delta w, w) > 0.$$

(A rigorous justification of self-adjointness is non-trivial since it requires checking the domain of the operator.)

We now show that the discrete operator has analogous properties. By inspection, −Δ_h is symmetric; we next show that it is positive definite. This is done in two


different ways. The first way uses summation by parts, which is a discrete integration by parts. Let v be any smooth function which vanishes at x = 0 and x = 1. Then one form of integration by parts is

$$-\int_0^1 v(x)v''(x)\,dx = -v(x)v'(x)\Big|_0^1 + \int_0^1 (v'(x))^2\,dx = \int_0^1 (v'(x))^2\,dx.$$

Now let v_h be any non-zero vector defined on Ω_h with coordinates v_ij. Extend its definition to be zero on ∂Ω_h. Then

$$-h^2 v_h^T \Delta_h v_h = \sum_{i,j=1}^{n-1} v_{ij}\,(4v_{ij} - v_{i+1,j} - v_{i-1,j} - v_{i,j+1} - v_{i,j-1})$$

$$= \sum_{i,j=1}^{n-1} \big[\,v_{ij}(v_{ij} - v_{i+1,j}) + v_{ij}(v_{ij} - v_{i-1,j}) + v_{ij}(v_{ij} - v_{i,j+1}) + v_{ij}(v_{ij} - v_{i,j-1})\,\big]$$

$$= \sum_{i=1}^{n-1}\sum_{j=1}^{n-1} v_{ij}(v_{ij} - v_{i+1,j}) + \sum_{i=0}^{n-2}\sum_{j=1}^{n-1} v_{i+1,j}(v_{i+1,j} - v_{ij}) + \sum_{i=1}^{n-1}\sum_{j=1}^{n-1} v_{ij}(v_{ij} - v_{i,j+1}) + \sum_{i=1}^{n-1}\sum_{j=0}^{n-2} v_{i,j+1}(v_{i,j+1} - v_{ij})$$

$$= \sum_{i=0}^{n-1}\sum_{j=1}^{n-1} (v_{ij} - v_{i+1,j})^2 + \sum_{i=1}^{n-1}\sum_{j=0}^{n-1} (v_{ij} - v_{i,j+1})^2.$$

Define the discrete forward difference operators

$$\delta_h^x v_{ij} = \frac{v_{i+1,j} - v_{ij}}{h}, \qquad \delta_h^y v_{ij} = \frac{v_{i,j+1} - v_{ij}}{h}$$

and

$$\delta_h^x v_h = [\delta_h^x v_{01}, \dots, \delta_h^x v_{n-1,1},\ \delta_h^x v_{02}, \dots, \delta_h^x v_{n-1,2},\ \dots,\ \delta_h^x v_{0,n-1}, \dots, \delta_h^x v_{n-1,n-1}]^T,$$

$$\delta_h^y v_h = [\delta_h^y v_{10}, \dots, \delta_h^y v_{n-1,0},\ \delta_h^y v_{11}, \dots, \delta_h^y v_{n-1,1},\ \dots,\ \delta_h^y v_{1,n-1}, \dots, \delta_h^y v_{n-1,n-1}]^T.$$

The above calculation can be compactly summarized by

$$-v_h^T \Delta_h v_h = |\delta_h^x v_h|_2^2 + |\delta_h^y v_h|_2^2 \qquad (1.3)$$

for any vector v_h. This is summation by parts, which is clearly a discrete version of integration by parts. If v_h is not the zero vector, then −v_h^T Δ_h v_h > 0 and so −Δ_h is positive definite.

A second method to show positive definiteness is by discrete Fourier analysis. An explicit spectral decomposition for Δ_h is available:

$$-\Delta_h q^{(ij)} = \lambda_{ij}\, q^{(ij)}, \qquad 1 \le i, j \le n-1, \qquad (1.4)$$


where

$$q^{(ij)} = 2h\,\sin i\pi x\,\sin j\pi y, \qquad \lambda_{ij} = \frac{4}{h^2}\left(\sin^2\frac{i\pi h}{2} + \sin^2\frac{j\pi h}{2}\right)$$

with (x, y) ∈ Ω_h. Note that the eigenvectors {q^{(ij)}} are orthonormal. Since all the eigenvalues are positive, −Δ_h is positive definite.
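The spectral decomposition (1.4) can be verified numerically by comparing the eigenvalue formula with the eigenvalues of the assembled matrix; a sketch (the Kronecker assembly and grid size are illustrative):

```python
import numpy as np

n = 8
h = 1.0 / n
m = n - 1

# assemble -Delta_h via Kronecker products
K = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
L = (np.kron(np.eye(m), K) + np.kron(K, np.eye(m))) / h**2

# predicted eigenvalues: lambda_ij = (4/h^2)(sin^2(i pi h/2) + sin^2(j pi h/2))
i, j = np.meshgrid(np.arange(1, n), np.arange(1, n))
lam = (4 / h**2) * (np.sin(i * np.pi * h / 2)**2 + np.sin(j * np.pi * h / 2)**2)

print(np.allclose(np.sort(np.linalg.eigvalsh(L)), np.sort(lam.ravel())))  # True
```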

The proof of the eigenvalue relation (1.4) is by a straightforward calculation. The (r, s) component of the vector $-h^2\Delta_h q^{(ij)}/(2h)$ is

$$2\sin i\pi rh\,\sin j\pi sh - \sin i(r+1)\pi h\,\sin j\pi sh - \sin i(r-1)\pi h\,\sin j\pi sh + \text{(finite difference of second } y \text{ derivative)}$$

$$= 2\sin i\pi rh\,\sin j\pi sh - 2\sin i\pi rh\,\cos i\pi h\,\sin j\pi sh + \cdots$$

$$= 2\sin i\pi rh\,\sin j\pi sh\,(1 - \cos i\pi h) + \cdots$$

$$= 4\sin i\pi rh\,\sin j\pi sh \left[\sin^2\frac{i\pi h}{2} + \sin^2\frac{j\pi h}{2}\right] = h^2\lambda_{ij}\,\sin i\pi rh\,\sin j\pi sh,$$

which gives $-\Delta_h q^{(ij)} = \lambda_{ij} q^{(ij)}$.

Since −Δ_h is symmetric positive definite, |Δ_h|₂ is equal to its largest eigenvalue while |Δ_h^{−1}|₂ is equal to the inverse of its smallest eigenvalue. Hence we obtain |Δ_h|₂ = 8h^{−2} sin²((n − 1)πh/2) ≤ 8h^{−2} and

$$|\Delta_h^{-1}|_2 = \frac{h^2}{8\sin^2(\pi h/2)} \le \frac{1}{8} \qquad (1.5)$$

if h ≤ 1. This is because the middle expression in (1.5) is an increasing function of h for 0 < h ≤ 1. The condition number of −Δ_h, which is defined as the ratio of the largest eigenvalue to the smallest eigenvalue of −Δ_h, is bounded above by a constant multiple of h^{−2}. The condition number is a fundamental quantity in numerical analysis and we shall encounter it many times in this book.

Consistency

Let r be a positive integer. The discretization −Δ_h is said to be consistent of order r if

$$|\Delta_h R_h v - R_h \Delta v|_\infty \le c\,\|v\|_{C^{r+2}(\bar\Omega)}\, h^r$$

holds for all v ∈ C^{r+2}(Ω̄) vanishing on ∂Ω, where c is a constant independent of h and v. The restriction R_h is an operator from C(Ω̄) to the (n − 1)²-vectors whose components are v(x_ij), where x_ij ∈ Ω_h:

$$(R_h v)(P) = v(P), \qquad P \in \Omega_h.$$

In other words, R_h restricts a continuous function to the grid points. Consistency is, roughly speaking, a measure of how well the discrete operator approximates the continuous operator at the grid points.


Theorem 1.1 Second-order consistency of Δ_h. For any v ∈ C⁴(Ω̄) vanishing on ∂Ω,

$$|\Delta_h R_h v - R_h \Delta v|_\infty \le c\,\|v\|'_{C^4(\bar\Omega)}\, h^2.$$

Proof: Let v_ij = v(ih, jh). By Taylor's Theorem,

$$v_{i\pm1,j} = v_{ij} \pm \frac{\partial v(x_{ij})}{\partial x}h + \frac{\partial^2 v(x_{ij})}{\partial x^2}\frac{h^2}{2} \pm \frac{\partial^3 v(x_{ij})}{\partial x^3}\frac{h^3}{6} + \frac{\partial^4 v(\xi_\pm)}{\partial x^4}\frac{h^4}{24}$$

for some ξ_± ∈ Ω. Adding the above two equations and rearranging, we obtain

$$\frac{v_{i+1,j} - 2v_{ij} + v_{i-1,j}}{h^2} = \frac{\partial^2 v(x_{ij})}{\partial x^2} + E^x_{ij}, \qquad |E^x_{ij}| \le c\,\|v\|'_{C^4(\bar\Omega)}\, h^2.$$

Adding a similar equation for the second y derivative, we have

$$\Delta_h R_h v = R_h \Delta v + E, \qquad |E|_\infty \le c\,\|v\|'_{C^4(\bar\Omega)}\, h^2.$$

This is the desired result. □
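Second-order consistency can also be observed numerically by applying the stencil to a smooth function and halving h; a sketch (the test function is illustrative):

```python
import numpy as np

def consistency_error(n):
    """Max-norm of Delta_h R_h v - R_h Delta v for
    v = sin(pi x) sin(pi y), which vanishes on the boundary."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    V = np.sin(np.pi * X) * np.sin(np.pi * Y)
    exact = -2 * np.pi**2 * V[1:-1, 1:-1]        # Delta v at interior points
    approx = (V[2:, 1:-1] + V[:-2, 1:-1] + V[1:-1, 2:] + V[1:-1, :-2]
              - 4 * V[1:-1, 1:-1]) / h**2        # Delta_h R_h v
    return np.abs(approx - exact).max()

e1, e2 = consistency_error(16), consistency_error(32)
print(e1 / e2)  # close to 4, consistent with the O(h^2) bound
```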

The domain of Δ_h is R^N, where N = (n − 1)². It can also be thought of as the set of real-valued functions defined on Ω_h. For a vector v_h, we sometimes use the notation v_h(P) to denote the component of v_h corresponding to the point P ∈ Ω_h. Yet another equivalent description of the domain of Δ_h is the set of real-valued functions defined on Ω̄_h vanishing on ∂Ω_h. Sometimes it is convenient if the domain can be expanded to include vectors/functions which need not vanish at the boundary. For such a vector v_h, define

$$-(\bar\Delta_h v_h)_{ij} = \frac{4v_{ij} - v_{i+1,j} - v_{i-1,j} - v_{i,j+1} - v_{i,j-1}}{h^2}, \qquad 1 \le i, j \le n-1. \qquad (1.6)$$

That is, the domain of Δ̄_h is R^{(n+1)²−4}, or equivalently, the set of functions defined on Ω̄_h, and the range is the set of N-vectors corresponding to values of the discrete Laplacian applied to v_h at Ω_h. For instance,

$$-(\bar\Delta_h v_h)_{11} = \frac{4v_{11} - v_{21} - v_{01} - v_{12} - v_{10}}{h^2},$$

and so the boundary values v_{01}, v_{10} also contribute. A more general definition of consistency does not require the function v to vanish on the boundary. The condition for second-order consistency in this case is

$$|\bar\Delta_h \bar R_h v - R_h \Delta v|_\infty \le c\,\|v\|'_{C^4(\bar\Omega)}\, h^2,$$

where R̄_h restricts v onto Ω̄_h. In case v vanishes on ∂Ω_h, then Δ̄_h R̄_h v = Δ_h R_h v.


Stability

The discretization −Δ_h is said to be stable with respect to |·|_∞ if |Δ_h^{−1}|_∞ is bounded independently of h. Thus stability implies that for the scheme −Δ_h u_h = f_h := R_h f, the ratio |u_h|_∞ over |f_h|_∞ is bounded independently of h, which is certainly a very desirable property. It also means that a small change in the data leads to a small change in the solution for all small h. Suppose the data f_h is perturbed by δ. The new solution is v_h = −Δ_h^{−1}(f_h + δ). Then u_h − v_h = Δ_h^{−1}δ and consequently |u_h − v_h|_∞ ≤ |Δ_h^{−1}|_∞ |δ|_∞. Thus a stable scheme means that the change in the solution is bounded by a constant multiple of the size of the perturbation δ. We shall discuss a technique called the discrete maximum principle which can be used to show stability of a scheme.

Discrete Maximum Principle

The (weak) maximum principle states that if u ∈ C²(Ω) ∩ C(Ω̄) and Δu ≥ 0 on Ω, then the maximum value of u is achieved on the boundary. Similarly, if Δu ≤ 0, then the minimum value of u is achieved on the boundary. For the Poisson equation (1.1), this says that if u is smooth and f ≤ 0 on Ω, then u ≤ 0 on Ω̄. This information is obtained without first solving the PDE. We shall show below that a similar statement holds for the discrete problem.

If w_h is a vector, by w_h ≥ 0 we mean that each component of w_h is non-negative. Now we are ready for:

Theorem 1.2 Discrete Maximum Principle. If Δ̃_h v_h ≥ 0 on Ω_h, then the maximum value of v_h is attained on ∂Ω_h.

Proof: Suppose Δ̃_h v_h ≥ 0. We emphasize that v_h does not vanish on ∂Ω_h in general. If the maximum value of v_h occurs at a boundary point, then there is nothing to prove. Otherwise, suppose the maximum value of v_h occurs at x_{ij} ∈ Ω_h. Now

[−4v_{ij} + v_{i+1,j} + v_{i−1,j} + v_{i,j+1} + v_{i,j−1}] / h² ≥ 0

implies that

(v_{i+1,j} + v_{i−1,j} + v_{i,j+1} + v_{i,j−1}) / 4 ≥ v_{ij},

since v_{ij} is the maximum, while each term on the left is at most v_{ij}. It can now be inferred that v_{ij} = v_{i+1,j} = v_{i−1,j} = v_{i,j+1} = v_{i,j−1}. Repeatedly applying this argument, we conclude that v_h is constant on Ω̄_h and, in particular, also achieves the maximum on ∂Ω_h. □
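To see the theorem in action, one can take a vector whose discrete Laplacian vanishes identically, for example the restriction of v(x, y) = x² − y² (the consistency error involves only fourth derivatives, which are zero here), and check that its maximum over the grid occurs on the boundary. A small NumPy sketch, illustrative only:

```python
import numpy as np

n = 20; h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
V = X**2 - Y**2          # Δv = 2 - 2 = 0, and Δ̃_h v_h = 0 exactly on the grid

# five-point discrete Laplacian at the interior points
lap = (V[2:, 1:-1] + V[:-2, 1:-1] + V[1:-1, 2:] + V[1:-1, :-2]
       - 4.0 * V[1:-1, 1:-1]) / h**2

interior_max = V[1:-1, 1:-1].max()
boundary_max = max(V[0, :].max(), V[-1, :].max(), V[:, 0].max(), V[:, -1].max())
print(interior_max, boundary_max)   # the boundary maximum dominates
```

Here Δ̃_h v_h = 0 up to roundoff, so the hypothesis of Theorem 1.2 holds, and indeed the interior maximum never exceeds the boundary maximum (attained at the corner (1, 0)).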

The theorem below states that our scheme −Δ_h u_h = f_h is indeed stable. Actually, this has already been proven for the 2-norm; see (1.5). Below, 1 stands for the vector of all ones.

Theorem 1.3 Stability of Δ_h. |Δ_h^{−1}|_∞ ≤ 1/8.


SECOND-ORDER APPROXIMATION FOR Δ

Proof: Given any f_h defined on Ω_h, define u_h = −Δ_h^{−1} f_h. The goal is to show that

|Δ_h^{−1} f_h|_∞ / |f_h|_∞ = |u_h|_∞ / |f_h|_∞ ≤ 1/8.

Define w(x, y) = [(x − 1/2)² + (y − 1/2)²] / 4 and define the vector w_h on Ω̄_h by

w_{ij} = w(ih, jh) = [(ih − 1/2)² + (jh − 1/2)²] / 4.   (1.7)

We now show that Δ̃_h w_h = 1. It is easy to see that Δw = 1 and w_h = R̄_h w, so 1 − Δ̃_h w_h = R_h Δw − Δ̃_h R̄_h w = 0 since the consistency error involves fourth derivatives of w, which all vanish. Hence Δ̃_h w_h = 1. Of course, one can also verify this equality by a direct calculation. (The terms 1/2 in w give the smallest estimate of |Δ_h^{−1}|_∞.)

Note that w_h is non-negative everywhere and its maximum occurs along ∂Ω̄_h, with |w_h|_∞ = 1/8. Now

Δ̃_h(|f_h|_∞ w_h + u_h) = |f_h|_∞ 1 − f_h ≥ 0.

By the discrete maximum principle, |f_h|_∞ w_h + u_h achieves its maximum on ∂Ω̄_h. For (ih, jh) ∈ Ω_h, recalling that u_h vanishes on ∂Ω_h,

u_{ij} ≤ |f_h|_∞ w_{ij} + u_{ij} ≤ |f_h|_∞ |w_h|_∞ = |f_h|_∞ / 8.

Similarly,

Δ̃_h(|f_h|_∞ w_h − u_h) = |f_h|_∞ 1 + f_h ≥ 0,

which implies that

−u_{ij} ≤ |f_h|_∞ w_{ij} − u_{ij} ≤ |f_h|_∞ |w_h|_∞ = |f_h|_∞ / 8.

These inequalities together mean |u_h|_∞ ≤ |f_h|_∞ / 8, and hence the result. □
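The bound of Theorem 1.3 can be verified directly for small n. Because (−Δ_h)^{−1} has nonnegative entries (a consequence of the discrete maximum principle), its ∞-norm equals the maximum of the solution of −Δ_h u_h = 1. A NumPy sketch, assembling −Δ_h by Kronecker products, illustrative only:

```python
import numpy as np

def inv_inf_norm(n):
    """Compute |Δ_h^{-1}|_∞ for the 5-point Laplacian on the unit square,
    h = 1/n. Since (-Δ_h)^{-1} is entrywise nonnegative, the ∞-norm of the
    inverse is the max component of the solution of -Δ_h u_h = 1."""
    h = 1.0 / n
    m = n - 1
    T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    A = (np.kron(np.eye(m), T) + np.kron(T, np.eye(m))) / h**2   # matrix of -Δ_h
    u = np.linalg.solve(A, np.ones(m * m))
    return np.max(np.abs(u))

norms = [inv_inf_norm(n) for n in (8, 16, 32)]
print(norms)   # all bounded by 1/8 = 0.125, uniformly in h
```

The computed values stay below 1/8 for every h, in agreement with the theorem (the limiting value is the maximum of the solution of −Δu = 1, which is strictly smaller than 1/8).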

Convergence

The scheme −Δ_h u_h = f_h := R_h f is said to be convergent of order r if

|R_h u − u_h|_∞ ≤ c ‖u‖_{C^{r+2}(Ω̄)} h^r

holds for some constant c independent of h and u. Here u is the exact solution of the Poisson equation and is assumed to lie in C^{r+2}(Ω̄).

The following is one of the main results of this chapter: our scheme for the Poisson equation is convergent of order 2.



Theorem 1.4 Second-order convergence of Δ_h. Suppose the solution u of (1.1) is in C⁴(Ω̄). Then |u_h − R_h u|_∞ ≤ c ‖u‖_{C⁴(Ω̄)} h².

Proof: Let e_h = u_h − R_h u. Then

Δ_h e_h = Δ_h u_h − Δ_h R_h u
        = −f_h − Δ_h R_h u
        = −R_h f − Δ_h R_h u
        = R_h Δu − Δ_h R_h u,

and thus e_h = Δ_h^{−1}(R_h Δu − Δ_h R_h u). Since the scheme is stable and consistent,

|e_h|_∞ ≤ c ‖u‖_{C⁴(Ω̄)} h².

□

Let u, v ∈ C(Ω̄) and u_h = R_h u, v_h = R_h v. Define the discrete L² inner product

(u_h, v_h)_h = h² Σ_{i,j=1}^{n−1} u_{ij} v_{ij}   (1.8)

and the discrete L² norm of v_h by

|v_h|_h = (v_h, v_h)_h^{1/2} = h |v_h|₂.   (1.9)

As the name implies, this discrete norm approximates the L² norm of v. An analogous convergence result in the discrete L² norm is

|e_h|_h ≤ c ‖u‖_{C⁴(Ω̄)} h².   (1.10)

In the proof of Theorem 1.4, it can be seen that consistency and stability imply convergence. This is a general principle which occurs frequently in numerical analysis. It is of great practical importance since consistency is usually relatively easy to check by a Taylor expansion, while stability often follows from the analogous property of the differential operator. These two properties are enough to yield convergence, which is ultimately what we want and is non-trivial to establish directly. An abstract version of this principle will be given at the end of the next chapter. See Exercise E1 for a converse of this theorem.

Note that a stable method need not be convergent. For instance, if the discrete Poisson equation is solved using the identity operator (i.e., define u_h = I f_h = f_h), then this method is stable but not convergent.
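The convergence rate of Theorem 1.4 is easily observed in practice. In the sketch below (NumPy, illustrative), the exact solution u = sin(πx) sin(πy) with f = 2π² sin(πx) sin(πy) is used, and the maximum error is recorded as h is halved; successive error ratios should be close to 4 = 2²:

```python
import numpy as np

def solve_poisson(n, f):
    # assemble -Δ_h via Kronecker products and solve -Δ_h u_h = f_h
    h = 1.0 / n
    m = n - 1
    T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    A = (np.kron(np.eye(m), T) + np.kron(T, np.eye(m))) / h**2
    x = np.linspace(h, 1.0 - h, m)          # interior grid points
    X, Y = np.meshgrid(x, x, indexing="ij")
    U = np.linalg.solve(A, f(X, Y).ravel()).reshape(m, m)
    return U, X, Y

u_exact = lambda x, y: np.sin(np.pi * x) * np.sin(np.pi * y)
f = lambda x, y: 2.0 * np.pi**2 * u_exact(x, y)    # f = -Δu

errors = []
for n in (8, 16, 32):
    U, X, Y = solve_poisson(n, f)
    errors.append(np.max(np.abs(U - u_exact(X, Y))))
ratios = [errors[k] / errors[k + 1] for k in (0, 1)]
print(errors, ratios)   # ratios near 4 indicate second-order convergence
```

This numerical experiment is the standard way to confirm the order of a scheme when the exact solution is known.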

Discrete Energy Method

Thus far, we have discussed two methods to show stability: the discrete maximum principle and discrete Fourier analysis. In the latter case, stability is with respect to



|·|₂; see (1.5). The discrete energy method is a third alternative. We first illustrate the energy method for the continuous problem −Δu = f for a smooth function u which vanishes on ∂Ω. Multiply the PDE by u and then integrate by parts to obtain

c ‖u‖² ≤ ∫_Ω |∇u|² = ∫_Ω f u ≤ ‖f‖ ‖u‖,   (1.11)

where

‖u‖² = ∫_Ω u²

is the square of the L² norm of the function u. The first inequality of (1.11) is known as the Poincaré–Friedrichs inequality and will appear again in the next and subsequent chapters. The second inequality in (1.11) is the Cauchy–Schwarz inequality. From (1.11), we immediately obtain

‖u‖ ≤ ‖f‖ / c.

Using the summation by parts formula (1.3) and the discrete Poincaré–Friedrichs inequality

2 |v_h|₂² ≤ |S_h^x v_h|₂² + |S_h^y v_h|₂²,

which will be shown later, we have

2 |v_h|₂² ≤ |S_h^x v_h|₂² + |S_h^y v_h|₂² = −v_h^T Δ_h v_h,

meaning that the smallest eigenvalue of −Δ_h is at least 2. This follows from the variational characterization of λ_min, the smallest eigenvalue of a symmetric matrix A:

λ_min = min_{x ≠ 0} x^T A x / |x|₂².

(An analogous characterization holds for the maximum eigenvalue, where min is replaced by max.) Hence the stability result |Δ_h^{−1}|₂ ≤ 1/2 is obtained.
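The eigenvalue bound can be confirmed numerically. The smallest eigenvalue of −Δ_h has the well-known closed form 8 sin²(πh/2)/h², which increases towards 2π² ≈ 19.74 as h → 0 and so sits well above the crude bound of 2. A NumPy sketch, illustrative only:

```python
import numpy as np

def lambda_min(n):
    # smallest eigenvalue of -Δ_h on the unit square with h = 1/n
    h = 1.0 / n
    m = n - 1
    T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
    A = (np.kron(np.eye(m), T) + np.kron(T, np.eye(m))) / h**2
    return np.linalg.eigvalsh(A)[0]   # eigvalsh returns ascending eigenvalues

mins = [lambda_min(n) for n in (4, 8, 16)]
print(mins)   # increasing towards 2π² ≈ 19.74, all well above 2
```

The agreement with the closed form 8n² sin²(π/(2n)) also serves as a check on the assembly of −Δ_h.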

To show the discrete Poincaré–Friedrichs inequality, notice that for 1 ≤ i, j ≤ n − 1,

|v_{ij}| ≤ Σ_{k=0}^{i−1} |v_{k+1,j} − v_{k,j}|.



Recall that v_{0j} = v_{nj} = 0 for all j. Hence, by the Cauchy–Schwarz inequality,

v_{ij}² ≤ [ Σ_{k=0}^{n−1} |v_{k+1,j} − v_{k,j}| ]²
       ≤ h² Σ_{k=0}^{n−1} ((S_h^x v_h)_{kj})² · Σ_{k=0}^{n−1} 1²
       = h Σ_{k=0}^{n−1} ((S_h^x v_h)_{kj})²,

since n h² = h.

Thus

Σ_{j=1}^{n−1} Σ_{i=1}^{n−1} v_{ij}² ≤ Σ_{j=1}^{n−1} Σ_{k=0}^{n−1} ((S_h^x v_h)_{kj})²,

or |v_h|₂² ≤ |S_h^x v_h|₂². Adding this to a similar inequality for S_h^y v_h results in the desired inequality.
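Both the discrete Poincaré–Friedrichs inequality and the summation by parts identity behind it can be checked on random data. The sketch below (NumPy, illustrative) fills the interior of the grid with random values, forms the forward differences S_h^x v_h and S_h^y v_h, and verifies 2|v_h|₂² ≤ |S_h^x v_h|₂² + |S_h^y v_h|₂² = −v_h^T Δ_h v_h:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16; h = 1.0 / n
V = np.zeros((n + 1, n + 1))
V[1:-1, 1:-1] = rng.standard_normal((n - 1, n - 1))   # arbitrary interior values,
                                                      # zero on the boundary

Sx = (V[1:, 1:-1] - V[:-1, 1:-1]) / h   # forward differences in x, k = 0,...,n-1
Sy = (V[1:-1, 1:] - V[1:-1, :-1]) / h   # forward differences in y

lhs = 2.0 * np.sum(V[1:-1, 1:-1] ** 2)        # 2 |v_h|_2^2
rhs = np.sum(Sx ** 2) + np.sum(Sy ** 2)       # |S^x_h v_h|_2^2 + |S^y_h v_h|_2^2

# summation by parts: the same quantity as the quadratic form -v_h^T Δ_h v_h
lap = (V[2:, 1:-1] + V[:-2, 1:-1] + V[1:-1, 2:] + V[1:-1, :-2]
       - 4.0 * V[1:-1, 1:-1]) / h**2
quad = -np.sum(V[1:-1, 1:-1] * lap)

print(lhs <= rhs, np.isclose(rhs, quad))
```

The identity rhs = quad holds exactly (up to roundoff) precisely because v_h vanishes on the boundary, which is where summation by parts produces no boundary terms.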

Discrete Green's Function

The final technique to show stability is now introduced. Recall that the solution of

−Δu = f on Ω,   u = 0 on ∂Ω

can be written as

u(P) = ∫_Ω G(P, Q) f(Q) dQ,   P ∈ Ω,   (1.12)

where G is the Green's function, which is the solution of

−ΔG(P, Q) = δ(P − Q),   Q ∈ Ω,
G(P, Q) = 0,   Q ∈ ∂Ω.

In the above, P ∈ Ω is fixed, Δ is taken with respect to Q, and δ is the delta distribution defined by ∫_Ω δ(P − Q) g(Q) dQ = g(P) for any continuous g. If P ∈ ∂Ω, define G(P, Q) = 0 for all Q ∈ Ω̄.

We now discuss a discrete analog of the Green's function. For a fixed P ∈ Ω_h, define G_h(P, ·) as the solution of

−Δ_h G_h(P, ·) = h^{−2} δ_P,   (1.13)

where δ_P is the vector with each entry equal to zero except for the one corresponding to P, which takes on the value one. When P ∈ ∂Ω_h, define G_h(P, Q) = 0 for all



Q ∈ Ω̄_h. By the discrete maximum principle, G_h(P, ·) ≥ 0. Another property is G_h(P, Q) = G_h(Q, P) for P, Q ∈ Ω_h. It is not difficult to see that the solution of −Δ_h u_h = f_h is

u_h(P) = u_h · δ_P = u_h · (−h² Δ_h G_h(P, ·)) = h² f_h · G_h(P, ·) = (G_h(P, ·), f_h)_h.   (1.14)

[See (1.8) for the definition of the discrete inner product (·, ·)_h.] Clearly, (1.14) is a discrete form of (1.12).

Let 1 be the vector of all ones. The following fact is useful.

Proposition 1.5 For any P ∈ Ω̄_h,

0 ≤ G_h(P, ·),   (G_h(P, ·), 1)_h ≤ 1/8.   (1.15)

Proof: For P ∈ Ω_h, the inequality 0 ≤ G_h(P, ·) follows from the discrete maximum principle. Define

v_h(P) = (G_h(P, ·), 1)_h = h² Σ_{Q ∈ Ω_h} G_h(P, Q).

Observe that

Δ_h v_h = −1.   (1.16)

By the discrete maximum principle, v_h ≥ 0. Recall that w_h defined in (1.7) satisfies Δ̃_h w_h = 1. Thus Δ̃_h(v_h + w_h) = 0. By the discrete maximum principle, the maximum of v_h + w_h occurs on ∂Ω̄_h. Since v_h vanishes on ∂Ω_h and the maximum of w_h is 1/8,

0 ≤ v_h(P) ≤ (v_h + w_h)(P) ≤ 1/8.

□

Stability now follows easily. For P ∈ Ω_h, use (1.14) and the above proposition to get

|u_h(P)| ≤ (G_h(P, ·), 1)_h |f_h|_∞ ≤ |f_h|_∞ / 8,

recovering our earlier result. We have discussed four techniques to show stability with respect to the norms |·|_∞ and |·|₂ for the discrete Laplacian: discrete Fourier analysis, the maximum principle, the energy method and the Green's function. For a more general (possibly nonlinear) differential operator with variable coefficients, typically only a small subset of these tools is applicable.
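The discrete Green's function is easy to compute explicitly for small n, and all of the properties used above (nonnegativity, symmetry, the representation (1.14), and the bound (1.15)) can be confirmed. A NumPy sketch, illustrative only:

```python
import numpy as np

n = 16; h = 1.0 / n; m = n - 1
T = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = (np.kron(np.eye(m), T) + np.kron(T, np.eye(m))) / h**2   # matrix of -Δ_h

# row P of G solves -Δ_h G_h(P,·) = h^{-2} δ_P, i.e. G = A^{-1} / h^2
G = np.linalg.inv(A) / h**2

# representation formula (1.14): u_h(P) = (G_h(P,·), f_h)_h = h^2 (G f)(P)
rng = np.random.default_rng(1)
f = rng.standard_normal(m * m)
u = np.linalg.solve(A, f)        # direct solution of -Δ_h u_h = f_h
u_rep = h**2 * (G @ f)           # solution via the discrete Green's function

row_sums = h**2 * G.sum(axis=1)  # (G_h(P,·), 1)_h for each P
print(row_sums.max())            # bounded by 1/8, as in Proposition 1.5
```

Note that the row sums are exactly the vector v_h of the proof of Proposition 1.5, i.e. the solution of −Δ_h v_h = 1, so their maximum repeats the |Δ_h^{−1}|_∞ computation from a different angle.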



Figure 1.1 After a rotation by angle π/4 about the origin, cell i goes to cell i + 1 for 1 ≤ i ≤ 7 while cell 8 goes to cell 1. Of course, cell 0 retains its original position.

Rotationally Invariant Scheme

The Laplacian operator is rotationally invariant, meaning that Δ and any rotation operator commute. Fix any angle φ and define the rotation operator R_φ by R_φ v(r, θ) = v(r, θ + φ). Here v is a function defined in polar coordinates. Hence the statement that the Laplacian is rotationally invariant means that

R_φ Δv = Δ R_φ v

holds for all φ and all twice differentiable functions v. In some applications, image processing to name one example, it is highly desirable to preserve this property as much as possible. For a uniform discretization in both directions, this means that a discrete Laplacian Δ_h should satisfy the criterion for a rotation angle φ equal to any integer multiple of π/4. In this section, we adopt the convention that u_{ij} is the approximation of the function u at the centre of the square cell with four corners ((i ± 0.5)h, (j ± 0.5)h). See Figure 1.1. Consider the infinite vectors u_h and v_h defined by

u_{ij} = 1 if i ≥ 1 and u_{ij} = 0 otherwise;   v_{ij} = 1 if i + j ≥ 1 and v_{ij} = 0 otherwise.

Here i, j range over the integers so that we need not be concerned about boundaries. Note that u_h is v_h rotated by an angle of π/4. It is simple to check that |Δ_h u_h|_∞ = h^{−2} while |Δ_h v_h|_∞ = 2h^{−2}. Thus Δ_h does not respect grid rotational invariance.
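These sup norms can be reproduced numerically. The sketch below (NumPy, illustrative) builds the step function u_{ij} = 1 for i ≥ 1 and its π/4-rotation v_{ij} = 1 for i + j ≥ 1 on a large window of the integer lattice and applies the five-point stencil; the two sup norms differ by a factor of 2 even though one function is just a rotation of the other:

```python
import numpy as np

n = 40; h = 1.0 / n
idx = np.arange(-n, n + 1)
I, J = np.meshgrid(idx, idx, indexing="ij")
U = (I >= 1).astype(float)        # step across a vertical line
V = (I + J >= 1).astype(float)    # the same step rotated by π/4

def sup_lap(W):
    # |Δ_h W|_∞ over the interior of the window (the jump lines lie inside it)
    L = (W[2:, 1:-1] + W[:-2, 1:-1] + W[1:-1, 2:] + W[1:-1, :-2]
         - 4.0 * W[1:-1, 1:-1]) / h**2
    return np.max(np.abs(L))

print(sup_lap(U), sup_lap(V))   # h^{-2} versus 2 h^{-2}
```

The window is finite but the stencil output is nonzero only along the jump lines, which pass through the interior, so the computed maxima equal the values h^{−2} and 2h^{−2} quoted above.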

Observe that Δ̂_h defined by the molecule

Δ̂_h = (1/(2h²)) [ 1   0   1 ]
                 [ 0  −4   0 ]
                 [ 1   0   1 ]