
Computers and Mathematics with Applications 65 (2013) 712–730


An upscaling method using coefficient splitting and its applications to elliptic PDEs

Xinguang He a, Lijian Jiang b,∗

a College of Resource and Environment Science, Hunan Normal University, Changsha, 410081, China
b Applied Mathematics and Plasma Physics, Los Alamos National Laboratory, NM 87545, United States

Article info

Article history: Received 10 July 2012; Received in revised form 7 November 2012; Accepted 13 December 2012

Keywords: Upscaling method; Green's function; Stochastic elliptic equations; Parameter space dimension reduction; Collocation

Abstract

In this paper, we develop an upscaling method using coefficient splitting techniques. Green's function is constructed using the differential operator associated with the first part of the splitting. An effective upscaling coefficient is recursively calculated by Green's function. The computation of the upscaling process involves some independent steps. Combining the proposed upscaling method with the stochastic collocation method, we present a stochastic space reduction collocation method, where the stochastic collocation is performed on a stochastic space of lower dimension than the full-dimensional stochastic space. We thoroughly analyze the convergence of the proposed upscaling method for both deterministic and stochastic elliptic PDEs. Computational complexity is also addressed for the stochastic upscaling method. A number of numerical tests are presented to confirm the convergence analysis.

Published by Elsevier Ltd

1. Introduction

Owing to measurement errors and a lack of knowledge of physical properties, many subsurface flow models contain random parameters (e.g., hydraulic conductivity), which represent the inherent uncertainty. A prevailing approach to quantifying the uncertainty is to treat the random parameters as a random process or field. Such models thus become stochastic partial differential equations (SPDEs), and the model's output can be accurately predicted by efficiently solving the derived SPDEs. On the other hand, the underlying porous media of subsurface flow usually exhibit strong heterogeneities and span a wide range of length scales. The simulation of subsurface flow in heterogeneous porous media therefore requires large-scale numerical simulations. Although more powerful computers and new simulation techniques are continuously being developed, the grid required in practical flow simulations is too large to explicitly account for all effects of microscopic heterogeneities. The presence of heterogeneities and uncertainties poses a great challenge for simulations.

Let Ω be a set of outcomes and D be a bounded domain in Rd with a Lipschitz boundary. We consider the stochastic elliptic boundary value problem: seek a random field u(x, ω) : D̄ × Ω −→ R such that u(x, ω) almost surely (a.s.) satisfies the following equation:

−div(k(x, ω)∇u(x, ω)) = f(x) in D,  u(x, ω) = 0 on ∂D,  (1.1)

where k(x, ω) is a random field. In subsurface applications, Eq. (1.1) represents the stationary single-phase flow equation, k(x, ω) refers to the permeability field, and u(x, ω) refers to the pressure. The random permeability field k(x, ω) often varies over

∗ Corresponding author.
E-mail addresses: [email protected] (X. He), [email protected], [email protected] (L. Jiang).

0898-1221/$ – see front matter. Published by Elsevier Ltd. doi:10.1016/j.camwa.2012.12.007


different scales in space, and resolving the finest scale would result in a large number of unknowns. Obtaining fully resolved fine-scale numerical solutions for such problems is infeasible in practice. Thus, simplified equations derived by homogenizing [1,2] or upscaling the coefficient play a crucial role, as they can effectively capture the subgrid heterogeneity. For numerical implementation, the random field k(x, ω) is usually approximated by a high-dimensional random parameter. This brings great difficulty in computing the statistical output quantities of interest.

Upscaling methods lump small-scale details of the medium into a few representative macroscopic parameters on a coarse scale, which preserve the large-scale behavior of the medium and are more appropriate for simulations. The numerical calculation of upscaled parameters (absolute permeability and pseudo relative permeability) usually requires solving local problems on the fine scale. Depending on the flow information used for these calculations, upscaling procedures can be roughly classified as local, global, and quasi-global. Local methods solve a local problem on a target coarse block under some assumed local boundary conditions. Local methods are the most efficient and work well for separable scales, but may not perform well for models with strongly non-separable scales. This calls for global techniques, which require fine-scale simulations over the entire domain. Global upscaling methods may provide more suitable coarse-scale parameters for models with non-separable scales, though the computational demands are much more significant than those of local methods. Quasi-global approaches offer a compromise, as they introduce approximate global information into the upscaling procedure. For comprehensive discussions on these upscaling methods, we refer to [3–8]. In these upscaling methods, the local problem with the fine-scale parameter (or coefficient) is solved on a fine grid, and this results in full-scale upscaling methods.

In this paper we develop an upscaling method using a parameter splitting technique. The splitting technique was proposed in the setting of MsFEM in the recent work [9], where the performance of the splitting technique is investigated on fine scales by MsFE projection (multiscale basis functions). In the MsFEM, a set of multiscale basis functions is iteratively generated using Green's kernel on a fine-scale mesh over each coarse grid block. Here we consider the efficacy of the splitting on coarse scales by the proposed upscaling. In the upscaling method, an effective macroscopic coefficient is recursively calculated on each coarse-scale block by Green's function. Although the proposed upscaling shares some similarities with the work in [9], there exist some differences. The iterative process in the upscaling is slightly different from that used in [9]. The convergence analysis for the upscaling is straightforward and results in an optimal convergence rate with respect to the iteration number. In contrast, the convergence analysis in [9] is more complicated, and the convergence rate there is not optimal with respect to the iteration number (cf. Theorem 3.2 and Theorem 4.2 in [9]). In the work [10], the authors proposed a Green-function-based (GFB) multiscale method, which decomposes a boundary value problem with random microstructure into a slow-scale deterministic problem and a fast-scale stochastic one by incorporating a generalized variational principle. To reduce the number of dimensions of a stochastic differential equation, Xu in the work [11] developed a random field-based orthogonal expansion method and combined it with the GFB multiscale method to form a GFB multiscale stochastic FEM. The proposed upscaling based on splitting of random fields extends the central idea in the works [10,11]. We split the parameter k (e.g., the coefficient in Eq. (1.1)) into two parts, k = k0 + k1. Then we construct Green's kernel using the differential operator associated with k0. Green's kernel is used to construct a sequence of multiscale "bubble functions", which are utilized to calculate the upscaling parameter, and it offers an efficient way to compute these bubble functions. The upscaling parameter is computed in an iterative manner by using Green's kernel. The splitting of k is flexible and can be controlled to lead to desirable convergence of the iterative procedure. If k0 is not defined on full scales, then Green's kernel may be computed cheaply. This situation occurs if the permeability field k can be represented by a coarse part plus a fine-scale perturbation. Compared to standard full-scale upscaling methods, the proposed upscaling method can offer an effective upscaling parameter on coarse scales.

The purpose of this paper is to use the proposed upscaling method to quantify the uncertainty through the statistical moments of the upscaling solution to Eq. (1.1). For simulation, the random parameter is usually approximated by a truncated Karhunen–Loève expansion or a truncated polynomial chaos expansion. This results in a deterministic system with a high-dimensional parameter. The most challenging part of solving the high-dimensional system is discretizing the high-dimensional random parameter. There exist a few methods for the discretization of the random space; a broad survey of these methods can be found in [12,13]. Among them, Monte Carlo methods and stochastic collocation methods have been widely used. Both generate completely decoupled systems, each of which has the same size as the deterministic system. This is suitable for parallel computing and amenable to relatively high-dimensional random inputs. In Monte Carlo methods, a large number of samples are randomly chosen and separate solves for each of the samples are used to determine the statistical behavior of the solutions. The convergence of Monte Carlo methods is usually slow. Unlike Monte Carlo methods, stochastic collocation requires independent solves at fixed, specifically chosen collocation points. In turn, this type of method has the capability to provide better accuracy than Monte Carlo with a smaller number of samples. An extensive comparison between Monte Carlo and sparse grid stochastic collocation methods, together with a discussion of the computational demand of the sparse grid method, can be found in [14]. The results in [14] show that the sparse grid stochastic collocation method outperforms the Monte Carlo method for problems with a moderately large number of random variables.

However, for stochastic collocation methods, the high-dimensional random parameter entails the curse of dimensionality. Although the Smolyak sparse grid technique (see, e.g., [15–17]) can alleviate the curse of dimensionality, the difficulty imposed by the high dimensionality is not yet completely overcome. In this paper, we propose a stochastic dimension reduction collocation method built on the proposed upscaling method. The stochastic collocation is performed on a lower-dimensional random space. If we use the Karhunen–Loève expansion to split the random field in Eq. (1.1), i.e., k(x, Θ) = k0(x, Θ0) + k1(x, Θ),


then the random field k0(x, Θ0) has a lower dimension than the random field k(x, Θ). We apply the stochastic collocation to Green's kernel associated with k0(x, Θ0) to obtain the upscaling parameter at an arbitrary sample. The proposed upscaling method with the stochastic collocation substantially reduces the dimension of the random space where the stochastic interpolation is computed. The proposed upscaling method offers the possibility of simulating a model in heterogeneous porous media with high-dimensional random parameters.

In the setting of partial differential equations, we present a convergence analysis of the proposed upscaling method for both deterministic and stochastic partial differential equations. A complexity analysis is also presented for the calculation of the upscaling parameter and for the stochastic collocation method.

The rest of the paper is organized as follows. In Section 2 we present the splitting technique which is used to calculate the upscaling parameter, and we provide an efficient computational algorithm. In Section 3, a convergence analysis is derived for deterministic elliptic PDEs. Section 4 is devoted to the convergence analysis for stochastic elliptic PDEs and the stochastic collocation methods; we also discuss the complexity of the stochastic collocation methods in that section. In Section 5, a number of numerical examples are presented to confirm the theoretical results. Some conclusions and closing remarks are made in Section 6.

2. Formulation of iterative upscaling methods

We consider the following elliptic equation:

−div(k(x)∇u(x)) = f(x) in D,  u(x) = 0 on ∂D,  (2.2)

where k is a heterogeneous scalar function. We assume that the coefficient function k(x) in (2.2) admits the following splitting:

k(x) = k0(x) + k1(x), (2.3)

where k(x) and k0(x) satisfy the following boundedness condition:

0 < a0 ≤ k(x) ≤ a1, 0 < b0 ≤ k0(x) ≤ b1, ∀x ∈ D. (2.4)

In the splitting (2.3), k0(x) represents the coarse-scale information of k(x) and k1(x) the fine-scale information of k(x). For simplicity of presentation, we will suppress the spatial variable x in the rest of the paper when no ambiguity occurs.

Remark 2.1. In the case of homogenization, the separable-scale coefficient on each coarse cell K can be written as

k(x, x/ϵ) = khom(x) + O((ϵ/H)^p)  for 1 ≤ p < 4,

where khom(x) is the homogenization of k(x, x/ϵ) (Refs. [1,2,18,8]). Here H is the size of the grid used to compute khom(x). This is a special splitting of the form k = k0 + k1 for separable scales.

We define some notation for the rest of the paper. Lp(D) (1 ≤ p ≤ ∞) denotes the Lebesgue space. The norm of L2(D) is denoted by ∥·∥0,D. H1(D) is the usual Sobolev space equipped with norm ∥·∥1,D and seminorm |·|1,D. In the paper, (·, ·) is the usual L2 inner product. We define an energy norm on a sub-domain D′ by |||v|||D′ := ∥√k ∇v∥0,D′. We let TH be a uniform coarse partition of D and K be a representative coarse block with diam(K) = H (see Fig. 2.1). Let h be the diameter of the underlying fine mesh in K.

2.1. Iterative upscaling approximation

Following [19,3], we define the local equation on each coarse grid block K by

−div(k∇φei) = 0 in K,  φei = li on ∂K.  (2.5)

We can choose different boundary conditions for Eq. (2.5), e.g., the linear pressure drop condition, the pressure drop no-flow condition, and the periodic condition (Refs. [19,3]). To improve accuracy, global information can be incorporated into the boundary condition (see [7,20,21]). To simplify the presentation, we take the linear pressure drop condition for Eq. (2.5) in this paper, i.e., li = x · ei, where ei (i = 1, . . . , d) is the unit vector along the ith direction. Then the standard upscaling coefficient k∗ is computed by

k∗ei = (1/|K|) ∫K k∇φei dx := ⟨k∇φei⟩K,  i = 1, . . . , d.  (2.6)
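As a concrete illustration of the local problem (2.5) and the averaging (2.6), the following sketch (added here for illustration; it is not the authors' code) treats the one-dimensional analogue on K = (0, 1), where the exact upscaled coefficient of −(kφ′)′ = 0 with the linear boundary data l = x is the harmonic mean of k. The Python/NumPy script solves the discrete cell problem and compares the averaged flux with that value.

import numpy as np

# 1D analogue of the cell problem (2.5)-(2.6) on K = (0, 1):
# solve -(k(x) phi'(x))' = 0 with phi(0) = 0, phi(1) = 1 (linear pressure drop l = x),
# then average the flux k * phi' over K to obtain the upscaled coefficient.
rng = np.random.default_rng(0)
n = 200                                # number of fine cells in K
h = 1.0 / n
k = np.exp(rng.normal(size=n))         # piecewise-constant heterogeneous coefficient

# P1 finite element system for the interior nodes (the common factor 1/h is dropped from both sides)
main = k[:-1] + k[1:]
off = -k[1:-1]
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = np.zeros(n - 1)
b[-1] = k[-1] * 1.0                    # contribution of the boundary value phi(1) = 1

phi = np.concatenate(([0.0], np.linalg.solve(A, b), [1.0]))
flux = k * np.diff(phi) / h            # cellwise k * dphi/dx
k_upscaled = flux.mean()               # discrete analogue of <k grad(phi)>_K in (2.6)
k_harmonic = 1.0 / np.mean(1.0 / k)    # exact upscaled coefficient in 1D
print(k_upscaled, k_harmonic)          # the two values agree up to round-off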


Fig. 2.1. Schema of a representative coarse grid K .

We extend the boundary condition of Eq. (2.5) onto the entire coarse block K and still denote it by li (i = 1, . . . , d). We split φei by φei = li + ξei, where ξei solves the following equation:

−div(k∇ξei) = div(k∇li) in K,  ξei = 0 on ∂K.  (2.7)

We will use the splitting (2.3) and an iterative process to approximate ξei. To this end, we define ξei,0 by

−div(k0∇ξei,0) = div(k∇li) in K,  ξei,0 = 0 on ∂K.  (2.8)

We then recursively define a sequence of functions ξei,j, j = 1, 2, . . . , which solve the following equation:

−div(k0∇ξei,j) = div(k1∇ξei,j−1) in K,  ξei,j = 0 on ∂K.  (2.9)

Then we set

φei,J = li + Σ_{j=0}^{J} ξei,j.  (2.10)

We define the new upscaling coefficient k∗J, which is associated with φei,J, by

k∗J ei = (1/|K|) ∫K k∇φei,J dx := ⟨k∇φei,J⟩K,  i = 1, . . . , d.  (2.11)

We note that the upscaling coefficients k∗ and k∗J are full tensors (d × d matrices). To simplify the presentation, we will suppress the subscripts ei and i from φei, φei,J, ξei,j, and li when no confusion occurs.

2.2. Computation of the upscaling coefficient

By Eq. (2.11), the proposed upscaling coefficient k∗J inherently depends on φJ, where φJ = l + Σ_{j=0}^{J} ξj. So the computation of k∗J depends on the computation of the "bubble" functions ξj (j = 0, . . . , J). From their definitions, i.e., Eqs. (2.8) and (2.9), we find that they are all associated with the differential operator L0 := −div(k0∇). Furthermore, ξj (j = 0, . . . , J) can be formally written as L0^{−1} f, where f is the source term in the equation of ξj. Green's function can be viewed as the inverse of the operator, and we will use it to obtain L0^{−1}.


Let G(x, y) be Green's function associated with the operator L0, where x ∈ K and y ∈ K. Green's function G(x, y) solves the equation

−div(k0∇G(x, y)) = δ(x, y) in K,  G(x, y) = 0 on ∂K,  (2.12)

where δ(x, y) is the Dirac delta function. Using Green's function and Eq. (2.8), we have

ξ0(x) = ∫K G(x, y) divy(k(y)∇y l(y)) dy = − ∫K ∇yG(x, y) · k(y)∇y l(y) dy.  (2.13)

By Eq. (2.9), we compute ξj (j = 1, . . . , J) by performing

ξj(x) = ∫K G(x, y) divy(k1(y)∇yξj−1(y)) dy = − ∫K ∇yG(x, y) · k1(y)∇yξj−1(y) dy.  (2.14)

The computational complexity of the proposed upscaling method depends on the computation of ξj, j = 0, 1, . . . , J. We investigate the computation of ξj in terms of matrix operations. Let xp be a vertex of the underlying fine grid in K and nK be the number of internal fine vertices in K. Let ℓp(x) (p = 1, . . . , nK) be the standard finite element basis function (e.g., linear/bilinear functions) at the fine internal vertex xp in K (see Fig. 2.1). We define the vector function L(x) by

L(x) := (ℓ1(x), . . . , ℓnK(x))^T.

To compute ξj (j = 0, 1, . . . , J) in matrix form, we need to introduce some notation. We define the vector v by

v = ∫K ∇L(x) ⊗ [k(x)∇l(x)] dx,

where ⊗ denotes the tensor product. Let M0 and M1 be the stiffness matrices corresponding to the operators −div(k0∇) and −div(k1∇), respectively. Then

M0 = ∫K ∇L(x) ⊗ [k0(x)∇L^T(x)] dx,  M1 = ∫K ∇L(x) ⊗ [k1(x)∇L^T(x)] dx.

We have the following theorem to compute ξj (j = 0, . . . , J) in the setting of the finite element method. In comparison with Eqs. (2.13) and (2.14), the notation ξj is slightly abused in the following theorem.

Theorem 2.1. Let ξj(x) (j = 0, . . . , J) be the finite element approximations on the underlying fine grid in K. Then

ξj(x) = (−1)^{j+1} (M0^{−1}L(x))^T (M1M0^{−1})^j v,  j = 0, 1, . . . , J.  (2.15)

Proof. Let us still use G(x, y) to represent the finite element solution of Eq. (2.12) on the underlying fine grid. Then a straightforward calculation implies that

G(x, y) = (M0^{−1}L(x))^T L(y).  (2.16)

By Eqs. (2.13) and (2.16), we have

ξ0(x) = − ∫K ∇yG(x, y) · k(y)∇l(y) dy
      = − ∫K ∇y[(M0^{−1}L(x))^T L(y)] · k(y)∇l(y) dy
      = − (M0^{−1}L(x))^T ∫K ∇yL(y) ⊗ k(y)∇l(y) dy
      = − (M0^{−1}L(x))^T v.  (2.17)

Due to Eqs. (2.14) and (2.16), it follows that

ξ1(x) = − ∫K ∇yG(x, y) · k1(y)∇ξ0(y) dy
      = (−1)^2 (M0^{−1}L(x))^T [∫K ∇yL(y) ⊗ k1(y)∇L^T(y) dy] M0^{−1}v
      = (−1)^2 (M0^{−1}L(x))^T (M1M0^{−1}) v.  (2.18)

Page 6: An upscaling method using coefficient splitting and its applications to elliptic PDEs

X. He, L. Jiang / Computers and Mathematics with Applications 65 (2013) 712–730 717

By recursively using this procedure, we have for j = 2, . . . , J,

ξj(x) = (−1)^{j+1} (M0^{−1}L(x))^T (M1M0^{−1})^j v.

The proof is complete.
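Formula (2.15) can be checked directly at the linear-algebra level: the finite element iterates satisfy M0c0 = −v and M0cj = −M1cj−1 for j ≥ 1, where cj is the coefficient vector of ξj with respect to the basis L(x), so that ξj(x) = L(x)^T cj. The following NumPy sketch (an added illustration with randomly generated stand-in matrices, not the authors' code) confirms that the closed form (2.15) reproduces this recursion.

import numpy as np

rng = np.random.default_rng(1)
nK = 40                                                              # stand-in for the number of interior fine nodes in K
B0 = rng.normal(size=(nK, nK)); M0 = B0 @ B0.T + nK * np.eye(nK)     # symmetric positive definite stand-in for M0
B1 = rng.normal(size=(nK, nK)); M1 = 0.1 * (B1 + B1.T)               # symmetric stand-in for M1
v = rng.normal(size=nK)
J = 5

# direct recursion: M0 c0 = -v and M0 cj = -M1 c_{j-1} (weak forms of (2.8) and (2.9))
c = [np.linalg.solve(M0, -v)]
for j in range(1, J + 1):
    c.append(np.linalg.solve(M0, -M1 @ c[-1]))

# closed form (2.15): since M0 is symmetric, the coefficient vector of xi_j is
# (-1)^(j+1) M0^{-1} (M1 M0^{-1})^j v
M0inv = np.linalg.inv(M0)
for j in range(J + 1):
    closed = (-1) ** (j + 1) * M0inv @ np.linalg.matrix_power(M1 @ M0inv, j) @ v
    print(j, np.max(np.abs(closed - c[j])))                          # differences at round-off level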

We precompute the vector v and the matrices M0 and M1. Since v, M0 and M1 only depend on the local information in K, this is suitable for parallel computation. Moreover, Theorem 2.1 implies that the computations of ξj (j = 0, . . . , J) are independent of each other and suitable for parallel computation as well. By Theorem 2.1, we can obtain the upscaling coefficient k∗J by performing direct matrix–vector multiplications. In fact, we can show that

k∗J ei = Ai + (M0^{−1}R)^T [Σ_{j=0}^{J} (−1)^{j+1}(M1M0^{−1})^j] vi,  (2.19)

where

Ai = ⟨k∇li⟩K,  R = (⟨k∇L^T⟩K)^T,  and  vi = ∫K ∇L(x) ⊗ [k(x)∇li(x)] dx.

For the standard upscaling method, we can similarly show that

k∗ei = Ai − (M^{−1}R)^T vi,  (2.20)

where M = ∫K ∇L(x) ⊗ [k(x)∇L^T(x)] dx. On comparing Eqs. (2.19) and (2.20), we find that the computation of k∗J is comparable to the computation of k∗ in a parallel setting.

Remark 2.2. Since M0 is the stiffness matrix associated with the coarse-scale information k0(x) and M is the stiffness matrix associated with the full-scale information k(x), the condition number of M0 is usually smaller than that of M. Consequently, the computation of M0^{−1} is more efficient than that of M^{−1}, provided that an iterative algorithm (e.g., the conjugate gradient method) is employed to compute the inverse of the matrix.

Remark 2.3. If the eigenvalues of M1M0^{−1} are less than 1 in modulus, then Eq. (2.19) implies that k∗J converges as J → ∞. This is consistent with Assumption 3.1 for the convergence analysis.
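Remark 2.3 can be illustrated numerically: with M = M0 + M1, summing the alternating series in (2.19) over all j gives back −M^{−1}, so k∗J approaches the standard upscaled coefficient k∗ of (2.20) geometrically when the spectral radius of M1M0^{−1} is below one. The sketch below (an added illustration with randomly generated stand-in matrices, not the authors' code) shows this behavior.

import numpy as np

rng = np.random.default_rng(2)
nK, d = 40, 2
B0 = rng.normal(size=(nK, nK)); M0 = B0 @ B0.T + nK * np.eye(nK)     # SPD stand-in for M0
B1 = rng.normal(size=(nK, nK)); M1 = 0.5 * (B1 + B1.T)               # symmetric stand-in for M1
M = M0 + M1                                                          # stand-in for the full stiffness matrix
A = rng.normal(size=(d, d))                                          # stand-in for the columns A_i = <k grad(l_i)>_K
R = rng.normal(size=(nK, d))                                         # stand-in for R
V = rng.normal(size=(nK, d))                                         # columns are the vectors v_i

rho = np.max(np.abs(np.linalg.eigvals(M1 @ np.linalg.inv(M0))))
print("spectral radius of M1 M0^{-1}:", rho)                         # below 1 for this choice

# standard upscaling (2.20): k* e_i = A_i - (M^{-1} R)^T v_i
k_star = A - np.linalg.solve(M, R).T @ V

# iterative upscaling (2.19): k*_J e_i = A_i + (M0^{-1} R)^T [sum_{j=0}^J (-1)^(j+1) (M1 M0^{-1})^j] v_i
M0invR = np.linalg.solve(M0, R)
M1M0inv = M1 @ np.linalg.inv(M0)
S = np.zeros((nK, nK))
term = np.eye(nK)
for J in range(8):
    S = S + (-1) ** (J + 1) * term                                   # add the J-th term of the series
    term = M1M0inv @ term
    k_star_J = A + M0invR.T @ S @ V
    print(J, np.max(np.abs(k_star_J - k_star)))                      # decays roughly like rho^(J+1)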

3. Convergence analysis for the iterative upscaling methods

For the convergence analysis, we make the following assumption.

Assumption 3.1. Assume that the splitting k = k0 + k1 on K satisfies

ηK := ∥k1/k0∥L∞(K) < 1.  (3.21)

If ηK < 1 fails, then we can slightly modify the splitting of k. Let SK be a constant on K such that

SK > sup_{x∈K} max{(k1(x) − k0(x))/2, −k0(x)}.  (3.22)

Then the modified splitting k = k̃0 + k̃1 := (k0 + SK) + (k1 − SK) satisfies the assumption (3.21).
Let φ = l + ξ and φJ = l + Σ_{j=0}^{J} ξj, where ξ solves Eq. (2.7) and ξj (j = 0, . . . , J) solve Eq. (2.8) or Eq. (2.9).

Then we have the following lemma.

Lemma 3.1. Let ηK be defined in Assumption 3.1. Then

|||φ − φJ|||K ≤ αK ηK^{J+1} |||l|||K,

where αK = ∥√(k0/k)∥L∞(K) ∥√(k/k0)∥L∞(K).

Proof. Let ξ0 solve Eq. (2.8). Then integration by parts gives

(k0∇ξ0, ∇ξ0)K = −(k∇l, ∇ξ0)K = −(√(k/k0) √k∇l, √k0∇ξ0)K.


It follows immediately that

∥√k0 ∇ξ0∥0,K ≤ ∥√(k/k0)∥L∞(K) |||l|||K.  (3.23)

Let ξj (j = 1, 2, . . .) solve Eq. (2.9). A similar argument as above gives

∥√k0 ∇ξj∥0,K ≤ ηK ∥√k0 ∇ξj−1∥0,K.  (3.24)

Recursively using (3.23) and (3.24), we have

∥√k0 ∇ξj∥0,K ≤ ηK^{j} ∥√(k/k0)∥L∞(K) |||l|||K.  (3.25)

By adding Eq. (2.8) and the sequence of equations in (2.9), we have

−div(k0∇ Σ_{j=0}^{J} ξj) = div(k∇l) + div(k1∇ Σ_{j=0}^{J−1} ξj) in K,  Σ_{j=0}^{J} ξj = 0 on ∂K.  (3.26)

Since Σ_{j=0}^{J−1} ξj = Σ_{j=0}^{J} ξj − ξJ, Eq. (3.26) reduces to

−div(k∇ Σ_{j=0}^{J} ξj) = div(k∇l) − div(k1∇ξJ) in K,  Σ_{j=0}^{J} ξj = 0 on ∂K.  (3.27)

We note that ξ solves Eq. (2.7). Subtracting Eq. (3.27) from Eq. (2.7), we get

−div(k∇(ξ − Σ_{j=0}^{J} ξj)) = div(k1∇ξJ) in K,  ξ − Σ_{j=0}^{J} ξj = 0 on ∂K.  (3.28)

Performing integration by parts and using the Cauchy–Schwarz inequality for Eq. (3.28), we have

|||ξ − Σ_{j=0}^{J} ξj|||K ≤ ∥k1/(√k √k0)∥L∞(K) ∥√k0 ∇ξJ∥0,K ≤ ∥k1/(√k √k0)∥L∞(K) ηK^{J} ∥√(k/k0)∥L∞(K) |||l|||K ≤ αK ηK^{J+1} |||l|||K,

where we have used (3.25) in the second step. Hence

|||φ − φJ|||K = |||ξ − Σ_{j=0}^{J} ξj|||K ≤ αK ηK^{J+1} |||l|||K.

This completes the proof.

Remark 3.1. By assumption (2.4), it follows that αK ≤ √(a1b1/(a0b0)).

Remark 3.2. Noting that |∇l| = 1, the proof of Lemma 3.1 implies that

|||φ − φJ|||K ≤ α̃K |K|^{1/2} ηK^{J+1},  (3.29)

where α̃K = ∥√(k0/k)∥L∞(K) ∥k/√k0∥L∞(K).

Using Lemma 3.1, we have the following theorem.

Theorem 3.1. Let k∗ and k∗J be defined in (2.6) and (2.11). Then on the coarse grid K,

∥k∗ − k∗J∥L∞ ≤ CK ηK^{J+1},

where CK = α̃K ∥√k∥L∞(K) and α̃K is defined in Remark 3.2.


Proof. By Eqs. (2.6) and (2.11), we get on the coarse grid K,

∥k∗ − k∗J∥L∞ = ∥⟨k∇φ⟩K − ⟨k∇φJ⟩K∥L∞ = ∥⟨k∇(φ − φJ)⟩K∥L∞ ≤ ∥√k∥L∞(K) |||φ − φJ|||K / √|K| ≤ CK ηK^{J+1},

where (3.29) is used in the last step. The proof is done.

Let u∗ solve the following equation:

−div(k∗∇u∗) = f in D,  u∗ = 0 on ∂D.  (3.30)

Let u∗J solve the equation

−div(k∗J∇u∗J) = f in D,  u∗J = 0 on ∂D.  (3.31)

Let η = maxK∈TH ηK, where TH is a coarse partition of D. Then we have the following theorem.

Theorem 3.2. Let u∗ and u∗J solve Eqs. (3.30) and (3.31), respectively. Then

∥u∗ − u∗J∥1,D ≤ C ∥∇u∗J∥0,D η^{J+1},

where C = C(maxK∈TH (CK/a0), D).

Proof. Thanks to Eqs. (3.30) and (3.31), it follows that

−div(k∗∇(u∗ − u∗J)) = f + div((k∗ − k∗J)∇u∗J) + div(k∗J∇u∗J) = div((k∗ − k∗J)∇u∗J).

By using integration by parts and the Cauchy–Schwarz inequality, we apply Theorem 3.1 to get

∥∇(u∗ − u∗J)∥0,D ≤ (∥k∗ − k∗J∥L∞(D)/a0) ∥∇u∗J∥0,D ≤ maxK∈TH (CK/a0) ∥∇u∗J∥0,D η^{J+1}.  (3.32)

By the Poincaré inequality, it follows that

∥u∗ − u∗J∥0,D ≤ C(D) ∥∇(u∗ − u∗J)∥0,D.  (3.33)

Combining (3.32) and (3.33) completes the proof.

Let u be the solution of (2.2) on the fine mesh. Then, using the triangle inequality and Theorem 3.2, we have the following corollary immediately.

Corollary 3.3. Let u and u∗J solve Eqs. (2.2) and (3.31), respectively. Then

∥u − u∗J∥0,D ≤ ∥u − u∗∥0,D + C η^{J+1}.

Remark 3.3. If only local information is used for the upscaling, e.g., li = x · ei in Eq. (2.10), then

∥u − u∗J∥0,D ≤ C1 (h/H) + C2 H + C3 h + C4 η^{J+1}.

This can be verified by Corollary 3.3 and the result in [22] (page 66). We see that the spatial resonance error O(h/H) occurs in the local upscaling. If li in Eq. (2.10) represents some global information and is used for the upscaling, then the resonance error can be significantly reduced and much better accuracy is achieved (see [7,20]).

4. Upscaling of stochastic fields

Let the coefficient k(x, ω) of Eq. (1.1) be a stochastic process with finite second moments. Then k(x, ω) can be parameterized as a finite-dimensional random process by using a truncated Karhunen–Loève expansion (KLE) (Ref. [12]). Let Y(x, ω) be a stochastic process and define its covariance function cov[Y] : D × D −→ R by

cov[Y](x1, x2) = cov[Y(x1), Y(x2)] = E[(Y(x1) − E[Y(x1)])(Y(x2) − E[Y(x2)])].


The function cov[Y] induces an integral operator TY : L2(D) −→ L2(D) by

TY g(·) = ∫D cov[Y](x, ·) g(x) dx  ∀g ∈ L2(D).

The operator TY is compact and self-adjoint. Consequently, there exist eigenpairs (λm, bm(x))_{m≥1} of TY such that

(bi, bj)L2(D) = δij,  λ1 ≥ λ2 ≥ · · · ≥ λm ≥ · · · ,  lim_{m→∞} λm = 0.

Define the mutually uncorrelated random variables

θi(ω) := (1/√λi) ∫D (Y(x, ω) − E[Y](x)) bi(x) dx,  i = 1, 2, . . . .

Then the KLE of Y(x, ω) follows. Here we assume that Y(x, ω) admits the following truncated KLE, i.e.,

Y(x, Θ(ω)) = E[Y] + Σ_{i=1}^{n} √λi bi(x) θi(ω),  (4.34)

where Θ := (θ1, . . . , θm, θm+1, . . . , θn) := (Θ0, Θ1) ∈ Rn, with Θ0 := (θ1, . . . , θm) ∈ Rm and Θ1 ∈ Rn−m. For the analysis, we consider k(x, Θ) to be a logarithmic stochastic field, i.e., k(x, Θ) := exp(Y(x, Θ)). Then we define the splitting of k(x, Θ) by

k(x, Θ) = k0(x, Θ0) + k1(x, Θ),  (4.35)

where k0(x, Θ0) := exp(E[Y] + Σ_{i=1}^{m} √λi bi(x) θi(ω)) (m < n) and k1(x, Θ) = k(x, Θ) − k0(x, Θ0).

The eigenvalues λi play an important role in controlling |k1/k0|. To this end, we define the energy ratio E(m) by

E(m) = Σ_{i=1}^{m} √λi / Σ_{i=1}^{n} √λi.

Then we can show that ∥k1(x, Θ)/k0(x, Θ0)∥L∞(D×Ω) is proportional to 1 − E(m) under certain conditions.
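To make the splitting (4.35) and the role of E(m) concrete, the following sketch (added here for illustration; the one-dimensional grid and the Gaussian covariance are illustrative choices in the spirit of the covariance (5.37) used in Section 5) builds a discrete KLE of Y, forms k, k0, and k1, and reports 1 − E(m) together with the empirical ratio max|k1/k0|.

import numpy as np

rng = np.random.default_rng(3)
N, n, m = 200, 25, 15                                     # grid points, truncation level, reduced dimension
x = np.linspace(0.0, 1.0, N)
sigma2, lc = 1.0, 0.15
C = sigma2 * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * lc ** 2))   # covariance of Y

# discrete KLE: eigenpairs of the covariance operator (uniform quadrature weight h)
h = x[1] - x[0]
lam, b = np.linalg.eigh(h * C)
idx = np.argsort(lam)[::-1]
lam = np.maximum(lam[idx][:n], 0.0)                       # guard against tiny negative round-off values
b = b[:, idx][:, :n] / np.sqrt(h)                         # L2-normalized eigenfunctions

theta = rng.uniform(-2.5, 2.5, size=n)                    # one fixed realization of Theta
Y_full = (np.sqrt(lam) * b * theta).sum(axis=1)           # E[Y] taken to be zero here
Y_red = (np.sqrt(lam[:m]) * b[:, :m] * theta[:m]).sum(axis=1)

k = np.exp(Y_full)                                        # k(x, Theta)
k0 = np.exp(Y_red)                                        # k0(x, Theta_0) as in (4.35)
k1 = k - k0

E_m = np.sqrt(lam[:m]).sum() / np.sqrt(lam).sum()         # energy ratio E(m)
print("1 - E(m) =", 1.0 - E_m)
print("max |k1/k0| =", np.max(np.abs(k1 / k0)))           # small when 1 - E(m) is small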

Lemma 4.1 ([9]). Let ∥θi∥L∞(Ω) ≤ Cθ uniformly for all m + 1 ≤ i ≤ n. If cov[Y](x1, x2) is piecewise analytic in D × D, then there exists a constant CY such that for m large enough

∥k1(x, Θ)/k0(x, Θ0)∥L∞(D×Ω) ≤ CY (1 − E(m)),

where CY = (7/4) Cθ max_{m+1≤i≤n} |bi|L∞(D).

We define L2(Ω) to be the space of square integrable functions with respect to the probability measure ρ(ω)dω, where ρ(ω) is the joint probability density function of Θ. We can use Theorem 3.2 to derive an error estimate for the stochastic elliptic equation (1.1) using the proposed upscaling. By using Theorem 3.2 and Lemma 4.1, we immediately have the following theorem.

Theorem 4.1. Suppose that the assumptions in Lemma 4.1 hold. If m is large enough such that 1 − E(m) is well below 1, then

∥u∗ − u∗J∥H1(D)⊗L2(Ω) ≤ C ∥∇u∗J∥L2(D)⊗L2(Ω) (1 − E(m))^{J+1}.

Theorem 4.1 shows that the convergence rate of the proposed upscaling method for the stochastic equation (1.1) depends on the energy ratio E(m).

For stochastic simulation, we can use Monte Carlo methods, whose main disadvantage is slow convergence. To overcome this disadvantage, here we use stochastic collocation methods to discretize the random parameter space; the finite element method on the coarse grid is used to discretize the spatial variable. Combined with the proposed upscaling method, we can perform the stochastic collocation on a lower-dimensional parameter space of dimension m. This approach reduces the dimension of the parameter space for practical stochastic collocation computations.

Let Θ0^1, Θ0^2, . . . , Θ0^s ⊂ Rm be s collocation points scattered in the random parameter space, associated with an interpolation operator Im. Let v(Θ0) ∈ C(Rm) be a deterministic solution depending on the parameter Θ0. Then, given a realization Θ0 ∈ Rm, the collocation solution vm is defined by vm(Θ0) := Imv(Θ0). We usually use the roots of an orthogonal polynomial (e.g., Hermite polynomial or Chebyshev polynomial) to find the collocation points. One can select different collocation points and use a different interpolation operator Im to obtain different stochastic collocation methods, for example, full-tensor product collocation [23] and Smolyak sparse grid collocation [15].
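For a single random dimension, the collocation solution Im v is simply a polynomial interpolant through the values of v at the chosen nodes. The short sketch below (added for illustration; the map v is a stand-in for a quantity such as one entry of the upscaled coefficient) interpolates at Chebyshev nodes and evaluates the interpolant at arbitrary samples.

import numpy as np

def chebyshev_nodes(s, a=-1.0, b=1.0):
    # roots of the Chebyshev polynomial of degree s, mapped to [a, b]
    t = np.cos((2 * np.arange(1, s + 1) - 1) * np.pi / (2 * s))
    return 0.5 * (a + b) + 0.5 * (b - a) * t

def lagrange_interp(nodes, values, theta):
    # evaluate the Lagrange interpolant I_m v at the points theta
    out = np.zeros_like(theta, dtype=float)
    for i, xi in enumerate(nodes):
        li = np.ones_like(theta, dtype=float)
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (theta - xj) / (xi - xj)
        out += values[i] * li
    return out

v = lambda th: np.exp(0.5 * th) / (2.0 + np.sin(th))      # stand-in parameter-to-output map
nodes = chebyshev_nodes(7, -2.5, 2.5)                     # seven collocation points in the Theta_0 direction
vals = v(nodes)                                           # seven independent deterministic "solves"

samples = np.random.default_rng(4).uniform(-2.5, 2.5, 1000)
err = np.max(np.abs(lagrange_interp(nodes, vals, samples) - v(samples)))
print("max interpolation error over 1000 samples:", err)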

If the coefficient k is a stochastic field, Green's function in (2.12) depends on the random parameter Θ0 ∈ Rm, i.e., G = G(Θ0). Hence the associated Green's matrix M0^{−1}(Θ0) depends on Θ0 as well. We define the interpolation of M0^{−1}(Θ0) by

M̂0^{−1}(Θ0) := Im M0^{−1}(Θ0).


For an arbitrary realization Θ := (Θ0, Θ1) ∈ Rm × Rn−m, we use (2.19) to define a modified interpolation operator Ĩm for the proposed upscaling coefficient, i.e.,

[Ĩm k∗J(Θ)]_i := Ai(Θ) + (M̂0^{−1}(Θ0) R(Θ))^T [Σ_{j=0}^{J} (−1)^{j+1}(M1(Θ) M̂0^{−1}(Θ0))^j] vi(Θ).  (4.36)

Here [·]_i denotes the ith column of the matrix inside the brackets. By (4.36), we compute Ai(Θ), M̂0^{−1}(Θ0), R(Θ), M1(Θ), and vi(Θ), i = 1, . . . , d, to obtain the interpolated upscaling coefficient Ĩm k∗J(Θ). Since the computations of Ai(Θ), R(Θ), M1(Θ), and vi(Θ) (i = 1, . . . , d) are cheap, we compute them directly without using stochastic interpolation. We note that all these computations are independent of each other. The dominant computation lies in M̂0^{−1}(Θ0) and depends considerably on the dimension m of the random parameter space. Using the interpolation M̂0^{−1}(Θ0) in (4.36) may possibly deteriorate the convergence of the iterative upscaling method. We can utilize the modified splitting technique defined in (3.22) to alleviate the deterioration; the numerical results in Section 5.3 confirm this.

If we use the standard upscaling method defined in (2.20), then the upscaling coefficient k∗(Θ) depends on the high-dimensional parameter Θ ∈ Rn. The stochastic interpolation for the upscaling coefficient k∗(Θ) is performed in the full random space Rn (n > m). If n is large, the number of collocation nodes is large and the interpolation on Rn becomes computationally expensive and prohibitive.

Let H(n + L, n) denote the number of interpolation nodes for Smolyak sparse grid collocation in dimension n at interpolation level L [15]. Although Smolyak sparse grid collocation requires much fewer nodes than the full-tensor product collocation to achieve similar accuracy, the number of nodes H(n + L, n) increases very quickly as n increases. To use the Smolyak sparse grid collocation method for the upscaling methods, we need to compute the Green's matrix M0^{−1} for the iterative upscaling, and M^{−1} for the standard upscaling method, at each collocation node. Consequently, the ratio α of the computational cost of the Green's matrix between the proposed upscaling method and the standard upscaling method at collocation nodes is approximately given by

α = H(m + L, m) / H(n + L, n) ≈ (m/n)^L  for m ≫ 1, n ≫ 1,

where we have used the fact H(n + L, n) ≈ (2^L/L!) n^L for n ≫ 1 (see [15]). If a Smolyak sparse grid interpolation is performed on the m-dimensional space, the computation of the interpolation is O(m H(m + L, m)) [15,13], which behaves like O(m^{L+1}). This implies the necessity of reducing the random dimensions for Smolyak sparse grid collocation.

We will apply Smolyak sparse grid collocation for the numerical tests. The stochastic approximation of the Smolyak sparse grid collocation method depends on the total number of sparse grid collocation nodes and the dimension m of the random parameter space. The convergence analysis in [16] implies that the convergence of Smolyak sparse grid collocation is exponential with respect to the number of Smolyak nodes but depends on the parameter dimension m. This exponential convergence rate behaves algebraically for m ≫ 1.
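As a rough worked example of the cost ratio α under the asymptotic node count H(n + L, n) ≈ (2^L/L!) n^L quoted above, the snippet below (added for illustration) evaluates the ratio for the dimensions used later in Section 5.3 (n = 22, m = 8) at levels L = 1 and L = 2; the exact node counts of a particular sparse grid rule will differ, so this is only an order-of-magnitude estimate.

import math

def approx_nodes(n, L):
    # asymptotic Smolyak node count H(n + L, n) ~ (2^L / L!) * n^L for n >> 1
    return 2 ** L / math.factorial(L) * n ** L

n, m = 22, 8
for L in (1, 2):
    alpha = approx_nodes(m, L) / approx_nodes(n, L)       # approximately (m / n)^L
    print(L, approx_nodes(m, L), approx_nodes(n, L), alpha)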

Although we focus on the proposed upscaling method combined with the stochastic collocation method in this paper, we note that the proposed approach can also be used with Monte Carlo sampling to reduce the number of samples in the high-dimensional random space. This can be realized by using the splitting and projecting the high-dimensional stochastic space onto the lower-dimensional space. For Monte Carlo simulations, a large number of samples are chosen in the high-dimensional stochastic space. A smaller number of samples are scattered in the lower-dimensional space by projecting those high-dimensional samples. Then, we cluster the smaller number of lower-dimensional samples and apply the proposed iterative upscaling method, through which we can use much fewer samples for the Green's functions and the computational efficiency can be improved. It is worth exploring the proposed upscaling with the Monte Carlo method in future work.

5. Numerical experiments

In this section, we present several representative numerical experiments to verify the analysis and demonstrate the performance of the proposed upscaling method.

5.1. Deterministic upscaling coefficient

In this subsection we concentrate on illustrating the iterative upscaling coefficient described in Section 2. To be more specific, we would like to verify the convergence rate of the deterministic upscaling coefficient on a single coarse block K as described in Theorem 3.1. In all the tests of this subsection, we consider a coefficient generated by a truncated KLE. Let Y(x, Θ) be a stochastic field and define the logarithmic stochastic field k(x, Θ) := exp(Y(x, Θ)). In our case we use the following covariance function for the stochastic field Y:

cov[Y](x1, y1; x2, y2) := σ² exp(−|x1 − x2|²/(2lx²) − |y1 − y2|²/(2ly²)),  (5.37)


Fig. 5.2. KLE coefficient decomposition (k, k0, and k1) generated on a 32 × 32 mesh; n = 25, m = 15.

where σ² is the variance, and lx and ly denote the correlation lengths in the x-direction and y-direction, respectively. Here we take σ² = 1.5, lx = 0.15, and ly = 0.08. In this subsection we consider the coefficient generated on a 32 × 32 uniform mesh. For all examples in the subsection, we truncate the KLE at n = 25 terms to get the full coefficient k. Then, in order to split the coefficient accordingly, we choose a variety of m to obtain k0 and employ Eq. (4.35) for the splitting. Fig. 5.2 shows a sample of the KLE coefficient splitting for m = 15. Since the analysis here is built in the deterministic setting, we use the same, fixed θi (i = 1, . . . , n) in Eq. (4.34) for all related tests. Let us recall the error estimate in Theorem 3.1:

∥k∗ − k∗J∥L∞ ≤ CK ηK^{J+1}.  (5.38)

In particular, we would like to emphasize that when ηK = ∥k1/k0∥L∞(K) < 1, the convergence of the upscaling coefficient sequence k∗J is expected from the analysis. In order to verify the theoretical result (5.38), we take the relative error ∥k∗ − k∗J∥L∞/∥k∗∥L∞ as a measure of the error of the upscaling coefficients with a variety of splitting configurations.

Fig. 5.3 shows three representative cases of splitting where ηK = ∥k1/k0∥L∞(K) < 1. More specifically, these examples result from the cases where m = 15, m = 16, and m = 17 terms are used in the KLE splitting. From Fig. 5.3, we observe that the relative error of the iterative upscaling coefficient decreases monotonically as the number of iterations J in the proposed upscaling increases, and it remains very small throughout for each case. We also note that the errors decrease rapidly in the first three steps of the iteration. This is because the error estimate ηK^{J+1} quickly decreases as J increases. This demonstrates the convergence of k∗J to k∗ as J → ∞. In addition, we also observe that a smaller value of ηK yields a smaller error.

We note that the results in Fig. 5.3 use a relatively large number of terms to get k0. This is a natural choice since the analysis requires ηK < 1 for the convergence of the iterative upscaling coefficient k∗J. By the analysis presented in Section 3, we can deal with the case when the original coefficient splitting gives ηK > 1. If this happens, we find a constant SK as described in Section 3 such that the convergence of the upscaling coefficient is ensured. To be more specific, we take

SK = sup_{x∈K} max{(k1(x) − k0(x))/2, −k0(x)} + ε,  (5.39)

where ε is a small positive constant such that the inequality (3.22) is satisfied. Then the modified splitting k = (k0 + SK) + (k1 − SK) = k̃0 + k̃1 is constructed, and the analysis implies a similar convergence for the modified splitting. Fig. 5.4 illustrates two cases where m = 8 and m = 12. For these results we take ε = 0.5 to obtain η∗K = ∥k̃1/k̃0∥L∞(K) < 1. In these two cases, the original ηK was found to be 3.160 and 1.730 for m = 8 and m = 12, respectively. We find that the application of (5.39) yields the desired result, and we observe the convergence of the upscaling coefficient sequence using the modified splitting.
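The modification (5.39) is straightforward to apply cellwise. The sketch below (added for illustration, with a synthetic coefficient pair rather than the KLE fields used in the experiments) computes ηK for a given splitting and, whenever ηK ≥ 1, forms SK and verifies that the modified ratio drops below one.

import numpy as np

rng = np.random.default_rng(5)
# synthetic cellwise values of k0 and k on a coarse block K
k0 = np.exp(rng.normal(size=1000))
k = np.maximum(k0 + 2.0 * rng.normal(size=1000), 0.05)    # keep k positive, cf. (2.4)
k1 = k - k0

eta = np.max(np.abs(k1 / k0))
print("original eta_K =", eta)

if eta >= 1.0:
    eps = 0.5
    SK = max(np.max((k1 - k0) / 2.0), np.max(-k0)) + eps  # Eq. (5.39)
    k0_mod, k1_mod = k0 + SK, k1 - SK                     # modified splitting
    eta_mod = np.max(np.abs(k1_mod / k0_mod))
    print("S_K =", SK, " modified eta_K =", eta_mod)      # strictly below one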

At present, our numerical results show that the estimates offered in Theorem 3.1 are solidly grounded in either setting (ηK < 1 or ηK ≥ 1). In order to elaborate on the "order" of convergence offered in Eq. (5.38), we also test the estimate in a slightly different manner. To be more specific, we fix a value of J and plot ηK vs. err∞ := ∥k∗ − k∗J∥L∞/∥k∗∥L∞ on a log scale. In doing so, we hope to at least recover the exponent J + 1 from a power trend line err∞ ≈ c ηK^{p}. Here we take the


Fig. 5.3. Relative error of the iterative upscaling coefficients versus the iteration number J for different splittings (ηK ≈ 0.908, 0.776, 0.630).

Fig. 5.4. Relative error of the iterative upscaling coefficients versus the iteration number J for different modified KLE splittings (η∗K ≈ 0.919, 0.843).

iteration number J = 1, 3, 5 for the test. In this test example we use a variety of splitting configurations to obtain different values of ηK. Fig. 5.5 illustrates the log-scale plots as well as the exponent p obtained from the power trend line. In all cases we obtain exponents which exceed the initially proposed value of J + 1. In particular, we obtain an exponent of 2.483 for J = 1, an exponent of 4.769 for J = 3, and an exponent of 6.932 for J = 5. These results are expected due to the fact that the constant CK, which involves ∥k/√k0∥L∞(K) (see Theorem 3.1), on the right-hand side of Eq. (5.38) contributes to a stronger convergence rate. In turn, one may expect a convergence rate with a maximum exponent J + 1 + δ (0 < δ < 1) from Fig. 5.5.

5.2. Deterministic elliptic solution

In this subsection, we consider the convergence of the iterative upscaling solution of Eq. (2.2) with f(x) = 1. In particular, we would like to compare the standard upscaling solution u∗ with the iterative upscaling solution u∗J. By Theorem 3.2, u∗J converges to u∗ as J → ∞. In order to verify this theoretical result, we test a coefficient generated by the KLE with the covariance function (5.37). Here the variance σ² = 1.5 and the correlation lengths are lx = 0.15, ly = 0.08. In the KLE, we still use a set of fixed random parameters and fix a single realization of the random field, analogous to Section 5.1. In this subsection, the fine-scale coefficient field is defined on a 256 × 256 fine mesh and the upscaling solutions are obtained by using a 16 × 16 coarse mesh.

To begin, we truncate the KLE at n = 20 terms to define the full coefficient k = k0 + k1. Fig. 5.6 shows a representative realization of a KLE coefficient splitting. In order to verify the convergence property in Theorem 3.2, we are


Fig. 5.5. Relative error of upscaling coefficient vs. ηK in the log scale for J = 1, 3, 5.

interested in calculating a variety of the iterative upscaling solutions u∗J for J = 1, 2, . . . . As a measure of the error of the solutions u∗J, we take the relative error ∥u∗J − u∗∥1,D/∥u∗∥1,D. We can obtain k0 by using a relatively large number of terms m in the KLE splitting. As before, this is a natural choice since the analysis requires that η = maxK∈TH ηK < 1 for the convergence of the iterative upscaling solution. However, we may often take a relatively small number of terms m for k0 in practical applications. Thus the original splitting may yield η > 1. If this happens, we can find a constant SK defined by Eq. (5.39) on each coarse element K ∈ TH such that the convergence is ensured. Then on each coarse element K, the field k = (k0 + SK) + (k1 − SK) = k̃0 + k̃1 is created, η∗ = maxK∈TH η∗K < 1 is achieved, and the analysis in Section 3 suggests that the iterative upscaling solution u∗J converges in a similar way.

Fig. 5.7 illustrates three representative cases where we take ε = 0.2, ε = 1.0, and ε = 1.9 such that η∗ < 1. For these results, we use a relatively small number of terms m = 8 in the KLE splitting configuration to get k0. For the case where m = 8, the original η = 1.816. We see that the application of (5.39) to each coarse element yields the desired convergence of u∗J as J increases. More specifically, the relative error of the iterative upscaling solution decreases monotonically as J increases and remains very small throughout for the cases where η∗ = 0.805 and η∗ = 0.659. We also find that the relative errors decrease rapidly in the first three steps of the iteration for all these cases. In addition, we also see that the smaller the η∗ value is, the better the accuracy obtained. To visualize the approximation, we plot the iterative upscaling solution u∗J, the standard upscaling solution u∗, and the error (u∗J − u∗) in Fig. 5.8 for η∗ = 0.805 and J = 2. From Fig. 5.8, we see that the iterative upscaling solution provides good agreement with the standard upscaling result.

We now illustrate the convergence behavior of the iterative upscaling method as a function of mesh resolution. For a reference solution on the fine mesh, we compute the elliptic solution on the finest mesh 256 × 256 by the bilinear finite element method. We compute the upscaling solutions (standard upscaling u∗ and iterative upscaling u∗J) on different coarse meshes, i.e., 8 × 8, 16 × 16, 32 × 32, and 64 × 64. We would like to make a comparison between the iterative upscaling solutions and the standard upscaling solutions on the different coarse meshes. To compare the upscaling solutions with the reference solution on the fine mesh, we reconstruct the fine-scale reference solution by restricting the fine reference solution to the coarse vertices. We denote the reconstructed fine-scale reference solution by R(u). In the test, we compute u∗J by the iterative upscaling method for η∗ = 0.805 and J = 2. We plot the three relative errors in Fig. 5.9:

e1 := ∥R(u) − u∗∥H1(D)/∥R(u)∥H1(D),  e2 := ∥R(u) − u∗J∥H1(D)/∥R(u)∥H1(D),  e3 := ∥u∗ − u∗J∥H1(D)/∥R(u)∥H1(D).

We observe that the relative errors e1 and e2 decrease as the number of coarse elements increases, that is, the upscaling solutions converge as the coarse mesh is refined. This result supports the classic upscaling theory (Ref. [3]). We also see that the relative errors e1 and e2 remain very small when the coarsening factor is appropriate. Furthermore, we observe that the relative error e3 between the iterative upscaling solution and the standard upscaling solution remains very small throughout. This indicates that the proposed upscaling method gives nearly identical results to the standard upscaling method. By a careful observation, we see that e3 decreases slowly as the coarse mesh becomes finer. This result nicely confirms Theorem 3.2, because the constant C in Theorem 3.2 depends on CK, and CK may decrease as the coarse grid K is refined.


Fig. 5.6. A sample of KLE splitting on a 256 × 256 mesh; n = 20,m = 8.

Fig. 5.7. Relative error of the iterative upscaling solutions versus the iteration number J for different modified coefficient splittings (η∗ ≈ 0.958, 0.805, 0.659).


Fig. 5.8. Comparison of the elliptic solution between the iterative upscaling method and the standard upscaling method for J = 2 (panels: u∗J, u∗, and u∗J − u∗).

Fig. 5.9. Relative errors e1, e2, and e3 versus different dimensions of coarse-grid cells.

5.3. Stochastic upscaling solution using the parameter reduction collocation

In this subsection, we consider the parameter reduction collocation method described in Section 4. We are mainly interested in testing the performance of the random parameter reduction collocation with the iterative upscaling method and comparing it with the standard upscaling method with Monte Carlo sampling. For the tests, we take the variance σ² = 1.0 and the correlation lengths lx = 0.15, ly = 0.08 in the covariance function (5.37) to construct a stochastic field. The associated KLE is truncated at n = 22 terms, and the random field k(x, Θ) is defined on a 96 × 96 fine mesh. We assume that the random normal parameters θi (i = 1, 2, . . . , 22) form a 22-dimensional vector Θ in the hypercube [−2.5, 2.5]²². We take N = 500 samples for the stochastic simulations, and the upscaling computations are performed on a 12 × 12 coarse mesh.

In order to verify the convergence result in Theorem 4.1, we first recall the energy ratio E(m) = Σ_{i=1}^{m} √λi / Σ_{i=1}^{n} √λi, where the λi are the eigenvalues from the KLE. In general, 1 − E(m) quickly decreases as m increases. As more terms for k0 in the KLE typically give smaller errors for the KLE approximation, we expect consistent behavior between 1 − E(m) and the mean and variance of the relative errors ∥û∗J − u∗∥1,D/∥u∗∥1,D and ∥u∗J − u∗∥1,D/∥u∗∥1,D, where û∗J denotes the iterative upscaling solution with the parameter reduction approach, u∗J denotes the iterative upscaling solution with Monte Carlo sampling, and u∗ denotes the standard upscaling solution with Monte Carlo sampling. We would like to use these errors as benchmarks for comparison with the proposed parameter reduction method. In particular, we are interested


Fig. 5.10. The mean and variance of the relative errors ∥û∗J − u∗∥1,D/∥u∗∥1,D and ∥u∗J − u∗∥1,D/∥u∗∥1,D versus 1 − E(m) for J = 2 (level 1 and level 2 collocation, and Monte Carlo sampling).

in comparing the standard upscaling method with Monte Carlo sampling versus the proposed upscaling method with the parameter reduction collocation approach.

In Fig. 5.10, we present the simulation results when the iteration number is J = 2 and the interpolation levels are L = 1 and L = 2. Here we use Smolyak sparse grid interpolation. From Fig. 5.10, we first observe that the relative errors from the iterative upscaling method with parameter reduction collocation clearly decrease as the interpolation level increases. Here we note that the total error of the proposed method can be decomposed into two components: one is the splitting error and the other is the interpolation error. As we can see from Fig. 5.10, the level 2 collocation errors are relatively close to those from the iterative upscaling method with Monte Carlo sampling, and this slight discrepancy may be viewed as a trade-off for the increased efficiency of the proposed parameter reduction method. At the same time, we also note that as 1 − E(m) increases, the mean and variance of the relative errors also increase. In particular, we see that using fewer terms for the construction of k0 in the KLE gives errors that grow algebraically with respect to 1 − E(m) for the proposed parameter reduction method with level 2 and for the Monte Carlo sampling method.

In addition to the better accuracy obtained by using a higher interpolation level, Fig. 5.10 shows that the relative error from all the methods does not exceed 0.25%. Although the results from level 1 do not closely match those from the Monte Carlo method, an error of less than 0.25% may be acceptable for many practical applications. In particular, we should note that the results from interpolation level 1 show that the proposed method with a low level may not be sensitive with respect to 1 − E(m), and keeping fewer terms for k0 in the KLE coefficient splitting may not lose much accuracy for the stochastic elliptic solution. Next, we consider the performance of the iterative upscaling method with respect to the iteration number J employed in the proposed upscaling method. In Fig. 5.11, we offer a comparison between the mean and variance of the relative error obtained from the proposed parameter reduction solution with interpolation level 2 and the iterative upscaling solution with Monte Carlo sampling for J = 1 and J = 3. From Fig. 5.11, we observe that increasing the number of iterations of the proposed upscaling method achieves better accuracy in the mean quantity. At the same time, we see that as the number of iterations J increases, the variance of the relative error also increases. This is because the variance may propagate as more iterations are employed in the iterative upscaling method.

To substantially reduce the dimension of the random parameter space, many fewer terms in the KLE are kept for k0(x, Θ0). Then ηK := ∥k1(x, Θ)/k0(x, Θ0)∥L∞(K×Ω) ≥ 1 may happen as dim(Θ0) := m becomes smaller and smaller. In this case, the modified splitting k = k̃0 + k̃1 is constructed so that the inequality (3.22) is satisfied almost surely. Fig. 5.12 depicts the relationship between the moments (mean and variance) of the relative errors and the dimension reduction n − m. In the test, the dimension of Θ0 is reduced from 22 down to 8. When n − m ≤ 8 (here n = 22), ηK < 1 and the original splitting ensures the convergence of the proposed upscaling. When n − m ≥ 8, ηK becomes larger than 1 for a few K ∈ TH, and we then employ the modified splitting for those coarse blocks. From Fig. 5.12, we see that the three errors (from level 1 collocation, level 2 collocation, and Monte Carlo sampling) become almost identical when n − m = 14. This is because the splitting error dominates the collocation error when n − m becomes considerably large.

Finally, we visualize the statistical comparisons among the fine-scale reference solution u (computed on the fine mesh), the iterative upscaling solution u∗_J obtained with the parameter reduction approach, and the standard upscaling solution u∗ obtained with Monte Carlo sampling.


Fig. 5.11. Comparison of the mean and variance of the relative errors for J = 1 and J = 3. Left panel: mean of relative error versus 1 − E(m); right panel: variance of relative error versus 1 − E(m). Curves: level 2 collocation and Monte Carlo sampling, each for J = 1 and J = 3.

Fig. 5.12. The mean and variance of the relative errors ∥u∗_J − u∗∥1,D/∥u∗∥1,D versus the dimension reduction n − m. Left panel: mean of relative errors; right panel: variance of relative errors. Curves: level 1 collocation, level 2 collocation, and Monte Carlo sampling. Here n = 22 and J = 2.

Here we have used the same reconstruction procedure R(u) for the fine-scale solution u as in the case described in Section 5.2. In Fig. 5.13, we plot the means of u∗_J and R(u), together with the difference between the mean of u∗_J and the mean of R(u), i.e., E[u∗_J(x)] − E[R(u)(x)], and the difference between the mean of u∗ and the mean of R(u), i.e., E[u∗(x)] − E[R(u)(x)]. Here we take m = 16, J = 2, and L = 2. In addition to visualizing the mean, we also plot the variance. To this end, in Fig. 5.14 we depict the variances of u∗_J and R(u), together with the difference between the variance of u∗_J and the variance of R(u), i.e., Var[u∗_J(x)] − Var[R(u)(x)], and the difference between the variance of u∗ and the variance of R(u), i.e., Var[u∗(x)] − Var[R(u)(x)]. From Figs. 5.13 and 5.14, we see that the iterative upscaling solution with the parameter reduction approach agrees well with both the standard upscaling solution and the reference solution R(u). This further verifies the accuracy of the iterative upscaling method with the parameter space reduction collocation.
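The moment comparisons in Figs. 5.13 and 5.14 amount to pointwise differences of sample means and sample variances. A minimal sketch of this post-processing step (the array shapes and names below are assumptions, not the authors' code) is:

```python
import numpy as np

def moment_differences(upscaled_samples, reference_samples):
    """Pointwise differences of sample means and sample variances between an
    ensemble of upscaled solutions and an ensemble of reference solutions.

    Both inputs are arrays of shape (n_samples, n_dof)."""
    uJ = np.asarray(upscaled_samples)
    ur = np.asarray(reference_samples)
    mean_diff = uJ.mean(axis=0) - ur.mean(axis=0)   # E[u*_J] - E[R(u)]
    var_diff = uJ.var(axis=0) - ur.var(axis=0)      # Var[u*_J] - Var[R(u)]
    return mean_diff, var_diff
```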


Fig. 5.13. Comparison of the mean value of the upscaling solutions and reference solution R(u).

Fig. 5.14. Comparison of the variance value of the upscaling solutions and reference solution R(u).

6. Concluding remarks

Mathematical models with uncertainties can be represented as stochastic partial differential equations (SPDEs). The model's output can be accurately predicted by efficiently solving the associated SPDEs. Solving these SPDEs is quite challenging when the random inputs (e.g., the coefficient) vary over multiple scales in space and contain inherent uncertainties. In this paper, we used a splitting technique to develop an iterative upscaling method. The upscaling coefficient is iteratively generated using Green's kernel, which is based on the first differential operator of the splitting. The proposed upscaling method was applied to deterministic and stochastic elliptic equations, and it can considerably reduce the dimension of the random parameter space for stochastic problems. Combining the iterative upscaling method with the stochastic collocation method, we proposed a parameter space reduction collocation method. We thoroughly analyzed the convergence of the proposed upscaling method for both deterministic and stochastic elliptic equations. The proposed upscaling method and the parameter space reduction approach involve a


few independent computations; this is desirable for a parallel implementation. The iterative upscaling method provides an approach to solving multiscale problems with high-dimensional inputs. The performance of the proposed method was confirmed by a number of numerical tests. We remark that the proposed approach has the potential to tackle a stochastic model posed in a high-dimensional stochastic space whenever the dimensions can essentially be reduced to a small number of effective dimensions (particularly a single effective one), for example, in a KLE where the random parameters corresponding to very small eigenvalues are negligible.

Although the proposed method gives an encouraging perspective on stochastic dimension reduction, there remain some limitations and room for improving its capability. The proposed splitting method works well for random fields with moderate variance. However, in some practical stochastic models, such as geological models, the random fields may have very large variance, and their correlation lengths may be quite small and highly anisotropic. In this case, the proposed splitting method may not substantially reduce the number of random parameters, and the resulting modeling error may not be small. Some numerical results in Section 5.3 illustrate this limitation. To overcome it, we can combine the coefficient splitting technique with high-dimensional model reduction techniques [24] to develop a more efficient stochastic dimension reduction approach with broader applications. Further investigation of this approach is worth pursuing in future work.

Acknowledgments

X. He acknowledges that this work was funded by the National Natural Science Foundation of China grant 41272271 and the Specialized Research Fund for the Doctoral Program of Higher Education of China grant 20094306120007. L. Jiang thanks the Chinese NSF for support under grant 10901050. L. Jiang also acknowledges support from the Department of Energy at Los Alamos National Laboratory under contract DE-AC52-06NA25396 and from the DOE Office of Science Advanced Scientific Computing Research (ASCR) program in Applied Mathematical Sciences. We thank the reviewers for their comments, which helped improve the paper.

References

[1] G. Allaire, R. Brizzi, A multiscale finite element method for numerical homogenization, Multiscale Model. Simul. 4 (2005) 790–812.
[2] V. Jikov, S. Kozlov, O. Oleinik, Homogenization of Differential Operators and Integral Functionals, Springer-Verlag, 1994 (translated from Russian).
[3] X.H. Wu, Y. Efendiev, T.Y. Hou, Analysis of upscaling absolute permeability, Discrete Contin. Dyn. Syst. Ser. B 2 (2002) 185–204.
[4] Ph. Renard, G. de Marsily, Calculating effective permeability: a review, Adv. Water Res. 20 (1997) 253–278.
[5] C.L. Farmer, Upscaling: a review, Internat. J. Numer. Methods Fluids 40 (2002) 63–78.
[6] Y. Chen, L.J. Durlofsky, M. Gerritsen, X.H. Wen, A coupled local–global upscaling approach for simulating flow in highly heterogeneous formations, Adv. Water Res. 26 (2003) 1041–1060.
[7] Y. Chen, L.J. Durlofsky, Efficient incorporation of global effects in upscaled models of two-phase flow and transport in heterogeneous formations, Multiscale Model. Simul. 5 (2006) 445–475.
[8] J.D. Moulton, J.E. Dendy Jr., J.M. Hyman, The black box multigrid numerical homogenization algorithm, J. Comput. Phys. 142 (1998) 80–108.
[9] L. Jiang, M. Presho, A resourceful splitting technique with applications to deterministic and stochastic multiscale finite element methods, Multiscale Model. Simul. 10 (2012) 954–985.
[10] X.F. Xu, X. Chen, L. Shen, A Green-function-based multiscale method for uncertainty quantification of finite body random heterogeneous materials, Comput. Struct. 87 (2009) 1416–1426.
[11] X.F. Xu, Stochastic computation based on orthogonal expansion of random fields, Comput. Methods Appl. Mech. Engrg. 200 (2011) 2871–2881.
[12] M. Kleiber, T.D. Hien, The Stochastic Finite Element Method, Wiley, New York, 1992.
[13] D. Xiu, J.S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM J. Sci. Comput. 27 (2005) 1118–1139.
[14] C.G. Webster, Sparse grid stochastic collocation techniques for the numerical solution of partial differential equations with random input data, Ph.D. Dissertation, The Florida State University, 2007.
[15] V. Barthelmann, E. Novak, K. Ritter, High dimensional polynomial interpolation on sparse grids, Adv. Comput. Math. 12 (2000) 273–288.
[16] F. Nobile, R. Tempone, C.G. Webster, A sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal. 46 (2008) 2309–2345.
[17] S. Smolyak, Quadrature and interpolation formulas for tensor products of certain classes of functions, Soviet Math. Dokl. 4 (1963) 240–243.
[18] A. Gloria, An analytic framework for the numerical homogenization of monotone elliptic operators and quasiconvex energies, Multiscale Model. Simul. 5 (2006) 996–1043.
[19] R. Ewing, O. Iliev, R. Lazarov, I. Rybak, J. Willems, A simplified method for upscaling composite materials with high contrast of the conductivity, SIAM J. Sci. Comput. 31 (2009) 2568–2586.
[20] H. Owhadi, L. Zhang, Metric-based upscaling, Comm. Pure Appl. Math. 60 (2007) 675–723.
[21] L. Jiang, Y. Efendiev, I. Mishev, Mixed multiscale finite element methods using approximate global information based on partial upscaling, Comput. Geosci. 14 (2010) 319–341.
[22] O. Iliev, I. Rybak, On numerical upscaling for flows in heterogeneous porous media, Comput. Methods Appl. Math. 8 (2008) 60–76.
[23] I. Babuška, F. Nobile, G. Zouraris, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM J. Numer. Anal. 45 (2007) 1005–1034.
[24] H. Rabitz, Ö.F. Aliş, J. Shorter, K. Shim, Efficient input–output model representation, Comput. Phys. Commun. 117 (1999) 11–20.