
Variance reduction through robust design of boundary conditions for stochastic hyperbolic systems of equations

Jan Nordström^a, Markus Wahlsten^b

^a Department of Mathematics, Computational Mathematics, Linköping University, SE-581 83 Linköping, Sweden ([email protected]).

^b Department of Mathematics, Computational Mathematics, Linköping University, SE-581 83 Linköping, Sweden ([email protected]).

Abstract

We consider a hyperbolic system with uncertainty in the boundary and initial data. Our aim is to show that different boundary conditions give different convergence rates for the variance of the solution. This means that, with the same knowledge of the data, we can get a more or less accurate description of the uncertainty in the solution. A variety of boundary conditions are compared, and both analytical and numerical estimates of the variance of the solution are presented. As applications, we study the effect of this technique on Maxwell's equations as well as on a subsonic outflow boundary for the Euler equations.

Keywords: uncertainty quantification, hyperbolic system, initial boundary value problems, well posed, stability, boundary conditions, stochastic data, variance reduction, robust design, summation-by-parts

1. Introduction

In most real-world applications based on partial differential equations, data is not perfectly known and typically varies in a stochastic way. There are essentially two different techniques to quantify the resulting uncertainty in the solution. Non-intrusive methods [1, 2, 3, 4, 5, 6, 7] use multiple runs of existing deterministic codes for a particular statistical input. Standard quadrature techniques, often in combination with sparse grid techniques [8], can be used to obtain the statistics of interest. Intrusive methods [9, 10, 11, 12, 13, 14, 15, 16] are based on polynomial chaos expansions, leading to a system of equations for the expansion coefficients. This implies that new, specific non-deterministic codes must be developed. The statistical properties

Preprint submitted to Journal of Computational Physics November 1, 2014

are obtained by a single run for a larger system of equations. There are also examples of semi-intrusive methods [18, 17]. The different procedures are compared in [19, 20] and a review is found in [21].

In this paper we take a step back from the technical developments mentioned above and focus on fundamental questions for the governing initial boundary value problem, in particular on the influence of boundary conditions. Our aim is to minimize the uncertainty, or variance, of the solution for a given stochastic input. The variance reduction technique in this paper is closely related to the well-posedness of the governing initial boundary value problem. In particular, it depends on the sharpness of the energy estimate, which in turn depends on the boundary conditions.

The technique used in this paper is directly applicable to linear hyperbolic problems such as Maxwell's equations, the elastic wave equations and the linearised Euler equations, where the uncertainty is known and limited to the data of the problem. For simplicity and clarity, the theoretical derivations are done in one space dimension and for one stochastic variable. The extension to multiple space dimensions and stochastic variables is straightforward and would add more technical details but no principal problems.

We begin by deriving general strongly well-posed boundary conditions for our generic hyperbolic problem [22, 23, 54, 33]. These boundary conditions are implemented in a finite difference scheme using summation-by-parts (SBP) operators [24, 29, 30, 31] and weak boundary conditions [38, 39, 40, 41]. Once both the continuous and semi-discrete problems have sharp energy estimates, we turn to the stochastic nature of the problem.

We show how to use the previously derived estimates for the initial boundary value problem in order to bound and reduce the variance in the stochastic problem. Finally, we exemplify the theoretical development by numerical calculations, where the statistical moments are computed using the non-intrusive methodology with multiple solves for different values of the stochastic variable [36, 37]. The statistical moments are calculated with quadrature formulas based on the probability density function [34, 35].

The remainder of the paper proceeds as follows. In section 2 the continuous problem is defined and requirements for well-posedness on the boundary operators for homogeneous and non-homogeneous boundary data are derived. In section 3 we present the semi-discrete formulation and derive stability conditions. Section 4 presents the stochastic formulation of the problem together with estimates of the variance of the solution. We illustrate and analyze the


variance for a model problem in section 5. In section 6 we study the implications of this technique for Maxwell's equations and for a subsonic outflow boundary condition for the Euler equations. Finally, in section 7 we summarize and draw conclusions.

2. The continuous problem

The hyperbolic system of equations with stochastic data that we consider is,

u_t + Au_x = F(x, t, ξ),  0 ≤ x ≤ 1, t ≥ 0
H_0 u = g_0(t, ξ),        x = 0,     t ≥ 0
H_1 u = g_1(t, ξ),        x = 1,     t ≥ 0
u(x, 0, ξ) = f(x, ξ),     0 ≤ x ≤ 1, t = 0,    (1)

where u = u(x, t, ξ) is the solution and ξ is the variable describing the stochastic variation of the problem. In general ξ is a vector of multiple stochastic variables, but for the purpose of this paper, one suffices. H_0 and H_1 are boundary operators defined on the boundaries x = 0 and x = 1. A is a symmetric M × M matrix which is independent of ξ. F(x, t, ξ) ∈ R^M, f(x, ξ) ∈ R^M, g_0(t, ξ) ∈ R^M and g_1(t, ξ) ∈ R^M are the data of the problem.

Remark 1. The limitation to one space and stochastic dimension in (1) is for clarity only. Multiple space and stochastic dimensions add to the technical complexity (for example more complicated quadrature to obtain the statistics of interest), but no principal problems would occur.

In this initial part of the paper, we do not focus on the stochastic part of the problem; that will come later in section 4. We derive conditions for (1) to be well posed, and focus on the boundary operators H_0 and H_1.

2.1. Well-posedness

Letting F = 0, we multiply (1) by u^T and integrate in space. By rearranging and defining ‖u‖² = ∫_Ω u^T u dx we get,

‖u‖²_t = (u^T A u)_{x=0} − (u^T A u)_{x=1}.    (2)

Due to the fact that A is symmetric, we have

A = XΛX^T,  X = [X_+, X_−],  Λ = [Λ_+, 0; 0, Λ_−].    (3)


In (3), X_+ and X_− are the eigenvectors related to the positive and negative eigenvalues, respectively. The eigenvalue matrix Λ is divided into the diagonal blocks Λ_+ and Λ_− containing the positive and negative eigenvalues, respectively. Using (2) and (3) we get,

‖u‖²_t = (X^T u)_0^T Λ (X^T u)_0 − (X^T u)_1^T Λ (X^T u)_1.    (4)
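The splitting in (3) can be computed numerically. The following sketch (assuming NumPy, with a hypothetical symmetric example matrix) forms X_+, X_−, Λ_+ and Λ_− and checks that they reassemble A:

```python
import numpy as np

def split_eigendecomposition(A, tol=1e-12):
    """Decompose a symmetric A as A = X Lam X^T and split the eigenpairs
    into positive and negative parts, as in equation (3)."""
    lam, X = np.linalg.eigh(A)        # ascending eigenvalues, orthonormal X
    pos, neg = lam > tol, lam < -tol
    Xp, Xn = X[:, pos], X[:, neg]     # X_+ and X_-
    Lp, Ln = np.diag(lam[pos]), np.diag(lam[neg])  # Lambda_+ and Lambda_-
    return Xp, Xn, Lp, Ln

# Hypothetical symmetric 2x2 example
A = np.array([[0.0, 2.0], [2.0, 0.0]])
Xp, Xn, Lp, Ln = split_eigendecomposition(A)
assert np.allclose(Xp @ Lp @ Xp.T + Xn @ Ln @ Xn.T, A)  # A is recovered
```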

The boundary conditions we consider are of the form

H_0 u = (X_+^T − R_0 X_−^T) u_0 = g_0,  x = 0
H_1 u = (X_−^T − R_1 X_+^T) u_1 = g_1,  x = 1,    (5)

where R_0 and R_1 are matrices of appropriate size. The relation (5) means that we specify the ingoing characteristic variables in terms of the outgoing ones and given data, see [22, 23].

2.1.1. The homogeneous case

By using (5) in (4) and neglecting the data, we obtain,

‖u‖²_t = (X_−^T u)_0^T [R_0^T Λ_+ R_0 + Λ_−] (X_−^T u)_0 − (X_+^T u)_1^T [R_1^T Λ_− R_1 + Λ_+] (X_+^T u)_1.    (6)

From (6) we conclude that the energy is bounded for homogeneous boundary conditions if,

R_0^T Λ_+ R_0 + Λ_− ≤ 0,  R_1^T Λ_− R_1 + Λ_+ ≥ 0.    (7)

Consequently, (7) puts a restriction on R_0 and R_1. Note that with R_0 = R_1 = 0 we have the so-called characteristic boundary conditions.
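Condition (7) is easy to test numerically for candidate boundary matrices. A minimal sketch, assuming NumPy and using, as an illustration, the scalar values Λ_+ = 3, Λ_− = −1 of the model problem in section 5:

```python
import numpy as np

def homogeneous_bc_ok(R0, R1, Lp, Ln, tol=1e-12):
    """Check the well-posedness conditions (7):
    R0^T Lam+ R0 + Lam- <= 0 and R1^T Lam- R1 + Lam+ >= 0."""
    M0 = R0.T @ Lp @ R0 + Ln
    M1 = R1.T @ Ln @ R1 + Lp
    return (np.linalg.eigvalsh(M0) <= tol).all() and \
           (np.linalg.eigvalsh(M1) >= -tol).all()

# Scalar illustration with Lam+ = [3], Lam- = [-1]: (7) reduces to
# 3*R0^2 - 1 <= 0 and -R1^2 + 3 >= 0, i.e. |R0| <= 1/sqrt(3), |R1| <= sqrt(3).
Lp, Ln = np.array([[3.0]]), np.array([[-1.0]])
assert homogeneous_bc_ok(np.array([[0.5]]), np.array([[1.0]]), Lp, Ln)
assert not homogeneous_bc_ok(np.array([[0.8]]), np.array([[1.0]]), Lp, Ln)
```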

We summarize the result in

Proposition 1. The problem (1) with the boundary conditions (5), subject to condition (7) and zero boundary data, is well-posed.

Proof. By integration of (6), subject to (7), we get

‖u‖² ≤ ‖f‖².    (8)

The estimate (8) implies uniqueness. Existence is guaranteed by the fact that we use the correct number of boundary conditions in (5).


2.1.2. The non-homogeneous case

We now consider the case with non-zero data. Using (5) in (4) and keeping the data gives us,

‖u‖²_t = [(X_−^T u)_0; g_0]^T [R_0^T Λ_+ R_0 + Λ_−, R_0^T Λ_+; Λ_+ R_0, Λ_+] [(X_−^T u)_0; g_0]
       − [(X_+^T u)_1; g_1]^T [R_1^T Λ_− R_1 + Λ_+, R_1^T Λ_−; Λ_− R_1, Λ_−] [(X_+^T u)_1; g_1],    (9)

where, in the block notation, semicolons separate the rows of block vectors and block matrices.

We can now add and subtract g_0^T G_0 g_0 and g_1^T G_1 g_1, where G_0 and G_1 are positive semi-definite matrices, since g_0 and g_1 are bounded. The result is,

‖u‖²_t = [(X_−^T u)_0; g_0]^T M_0 [(X_−^T u)_0; g_0] − [(X_+^T u)_1; g_1]^T M_1 [(X_+^T u)_1; g_1]
       + g_0^T (Λ_+ + G_0) g_0 − g_1^T (Λ_− − G_1) g_1,    (10)

where

M_0 = [R_0^T Λ_+ R_0 + Λ_−, R_0^T Λ_+; Λ_+ R_0, −G_0],  M_1 = [R_1^T Λ_− R_1 + Λ_+, R_1^T Λ_−; Λ_− R_1, G_1].

For (10) to lead to an estimate, M_0 must be negative semi-definite and M_1 positive semi-definite. We start by considering M_0,

M_0 = [R_0^T Λ_+ R_0 + Λ_−, (Λ_+ R_0)^T; Λ_+ R_0, −G_0].    (11)

To prove that (11) is negative semi-definite we multiply M_0 by a matrix from the left and right, and obtain

M̃_0 = [I, C_0; 0, I]^T [R_0^T Λ_+ R_0 + Λ_−, (Λ_+ R_0)^T; Λ_+ R_0, −G_0] [I, C_0; 0, I].    (12)

Evaluating (12), requiring that R_0^T Λ_+ R_0 + Λ_− is strictly negative definite and choosing C_0 = −(R_0^T Λ_+ R_0 + Λ_−)^{−1} (Λ_+ R_0)^T, gives us the block diagonal matrix

M̃_0 = [R_0^T Λ_+ R_0 + Λ_−, 0; 0, −(Λ_+ R_0)(R_0^T Λ_+ R_0 + Λ_−)^{−1}(Λ_+ R_0)^T − G_0].

With the Schur complement −(Λ_+ R_0)(R_0^T Λ_+ R_0 + Λ_−)^{−1}(Λ_+ R_0)^T − G_0 being negative semi-definite, M̃_0 is negative semi-definite. This means that the choice,

G_0 ≥ −(Λ_+ R_0)(R_0^T Λ_+ R_0 + Λ_−)^{−1}(Λ_+ R_0)^T,    (13)

and the requirement that R_0^T Λ_+ R_0 + Λ_− is negative definite, see (7), make M̃_0 negative semi-definite, and hence also M_0.

By using exactly the same technique for the matrix M_1, we conclude that we can choose the matrices R_0, R_1 such that M_0 and M_1 are negative and positive semi-definite, respectively, and hence obtain an energy estimate also in the non-homogeneous case. Note that we only need to prove that there exists a certain choice of G_0 and G_1 which bounds (9); the particular choices of matrices are irrelevant.

We summarize the result in

Proposition 2. The problem (1) with boundary conditions (5) and condition (7), with strict definiteness, is strongly well-posed.

Proof. By integrating (10) with the conditions (7) and (13), we get

‖u‖² ≤ ‖f‖² + ∫_0^T g_0^T (Λ_+ + G_0) g_0 + g_1^T (|Λ_−| + G_1) g_1 dt.    (14)

The relation (7) guarantees uniqueness. Existence is guaranteed by the correct number of boundary conditions in (5).

Remark 2. If we can estimate the solution in terms of all the data, the problem is strongly well-posed; if zero boundary data is necessary for obtaining an estimate, it is well-posed. See [43, 44, 45] for more details on well-posedness.

Remark 3. The fact that we can get a more or less sharp energy estimate depending on the choice of R_0 and R_1 will have implications for the variance of the solution of the stochastic problem.

3. The semi-discrete problem

It is convenient to introduce the Kronecker product, defined for any M × N matrix A and P × Q matrix B as

A ⊗ B = [a_11 B, …, a_1N B; …; a_M1 B, …, a_MN B].

We now consider a finite difference approximation of (1) using SBP operators, see [24, 25, 26]. The boundary conditions are implemented weakly using the penalty technique with simultaneous approximation terms (SAT) described in [30, 31, 38, 39, 40]. The semi-discrete SBP-SAT formulation of (1) is,

u_t + (D ⊗ A) u = (P^{−1} E_0 ⊗ Σ_0)((I_N ⊗ H_0) u − e_0 ⊗ g_0)
                + (P^{−1} E_N ⊗ Σ_N)((I_N ⊗ H_1) u − e_N ⊗ g_1),
u(0) = f.    (15)

In (15), D = P^{−1}Q is the difference operator, P is a positive definite matrix and Q satisfies Q + Q^T = diag[−1, 0, …, 0, 1]. E_0 and E_N are zero matrices except for element (1, 1) = 1 and (N, N) = 1, respectively. Difference operators of this form exist for orders 2, 4, 6, 8 and 10, see [24, 25, 26, 28]. The boundary data g_0 and g_1 are defined as,

g_0 = [g_0, 0]^T,  g_1 = [0, g_1]^T,

where g_0 and g_1 are the original boundary data of problem (1). The numerical solution u is a vector organized as,

u = [u_0, …, u_i, …, u_N]^T,  u_i = [u_0, …, u_j, …, u_M]_i^T,

where u_ij approximates u_j(x_i). The vectors e_0 and e_N are zero except for the first and last element, respectively, which is one. Σ_0 and Σ_N are penalty matrices of appropriate size. H_0 and H_1 are defined as

H_0 = [I_+, −R_0; 0, 0] X^T,  H_1 = [0, 0; −R_1, I_−] X^T.    (16)

In (16), I_+ and I_− are identity matrices of the same size as Λ_+ and Λ_−, and R_0, R_1 are the matrices in the boundary conditions (5). Note that H_0 and H_1 in (16) are expanded versions of H_0 and H_1 in (5).
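The building blocks of (15) can be illustrated with the standard second-order diagonal-norm SBP operator. The sketch below (assuming NumPy) builds P and Q and verifies the SBP property Q + Q^T = diag[−1, 0, …, 0, 1]:

```python
import numpy as np

def sbp_second_order(N, h):
    """Second-order SBP first-derivative operator D = P^{-1} Q on N+1 points.
    P is the diagonal norm and Q satisfies Q + Q^T = diag(-1, 0, ..., 0, 1)."""
    P = h * np.eye(N + 1)
    P[0, 0] = P[N, N] = h / 2.0
    Q = 0.5 * (np.diag(np.ones(N), 1) - np.diag(np.ones(N), -1))
    Q[0, 0], Q[N, N] = -0.5, 0.5
    return P, Q

N, h = 20, 1.0 / 20
P, Q = sbp_second_order(N, h)
B = np.zeros((N + 1, N + 1)); B[0, 0], B[N, N] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)              # the SBP property
D = np.linalg.solve(P, Q)                   # D = P^{-1} Q
x = np.linspace(0.0, 1.0, N + 1)
assert np.allclose(D @ x, np.ones(N + 1))   # exact derivative of a linear function
```

The SBP property is what makes the discrete energy method below mimic the continuous integration-by-parts argument term by term.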

3.1. Stability

In this section we prove stability using the discrete energy method. We follow essentially the same route as in section 2 for the continuous problem.

3.1.1. The homogeneous case

To prove stability for the homogeneous case (g_0 = g_1 = 0) we multiply equation (15) by u^T (P ⊗ I_M) from the left and add the transpose. By defining the discrete norm ‖u‖²_{P⊗I} = u^T (P ⊗ I) u, and using Q + Q^T = diag[−1, 0, …, 0, 1] and A = XΛX^T, we obtain,

d/dt ‖u‖²_{P⊗I} = u_0^T (XΛX^T + Σ_0 H_0 + (Σ_0 H_0)^T) u_0 − u_N^T (XΛX^T − Σ_N H_1 − (Σ_N H_1)^T) u_N.    (17)

In (17), u_0 and u_N denote the vector u at grid points zero and N, respectively. Rewriting (17) and using the fact that XX^T = I yields,

d/dt ‖u‖²_{P⊗I} = (X^T u)_0^T (Λ + X^T Σ_0 H_0 X + (X^T Σ_0 H_0 X)^T)(X^T u)_0 − (X^T u)_N^T (Λ − X^T Σ_N H_1 X − (X^T Σ_N H_1 X)^T)(X^T u)_N.    (18)

Let Σ̃_0 = X^T Σ_0 and Σ̃_N = X^T Σ_N, where we choose Σ_0 and Σ_N such that Σ̃_0 and Σ̃_N are block diagonal, that is

Σ̃_0 = [Σ̃_0^+, 0; 0, Σ̃_0^−],  Σ̃_N = [Σ̃_N^+, 0; 0, Σ̃_N^−].

The sizes of Σ̃_0^+, Σ̃_0^− and Σ̃_N^+, Σ̃_N^− correspond to the sizes of Λ_+ and Λ_−, respectively. By the splitting

(X^T u)_0 = [(X_+^T u)_0; (X_−^T u)_0],  (X^T u)_N = [(X_+^T u)_N; (X_−^T u)_N],

we can rewrite (18) as,

d/dt ‖u‖²_{P⊗I} = [(X_+^T u)_0; (X_−^T u)_0]^T B_0 [(X_+^T u)_0; (X_−^T u)_0]
               − [(X_+^T u)_N; (X_−^T u)_N]^T B_N [(X_+^T u)_N; (X_−^T u)_N],    (19)

where

B_0 = [Λ_+ + 2Σ̃_0^+, −Σ̃_0^+ R_0; −R_0^T Σ̃_0^+, Λ_−],  B_N = [Λ_+, Σ̃_N^− R_1; R_1^T Σ̃_N^−, Λ_− − 2Σ̃_N^−].

To obtain an estimate, the matrices B_0 and B_N must be negative and positive semi-definite, respectively.

We first consider the left boundary. Note that the boundary term at x = 0 in the continuous energy estimate (4) can be written as,

[(X_+^T u)_0; (X_−^T u)_0]^T [0, 0; 0, R_0^T Λ_+ R_0 + Λ_−] [(X_+^T u)_0; (X_−^T u)_0].    (20)


We know from (7) that (20) is negative semi-definite. To take advantage of that, we rewrite B_0 in the following way,

B_0 = [0, 0; 0, R_0^T Λ_+ R_0 + Λ_−] + [Λ_+ + 2Σ̃_0^+, −Σ̃_0^+ R_0; −R_0^T Σ̃_0^+, −R_0^T Λ_+ R_0].    (21)

We consider the second matrix in (21), which we rewrite as,

B_0^(2) = [I, 0; 0, R_0]^T [Λ_+ + 2Σ̃_0^+, −Σ̃_0^+; −Σ̃_0^+, −Λ_+] [I, 0; 0, R_0].    (22)

By choosing Σ̃_0^+ = −Λ_+ in (22) we find

B_0^(2) = [I, 0; 0, R_0]^T (C ⊗ Λ_+) [I, 0; 0, R_0],  C = [−1, +1; +1, −1].

The matrix C has the eigenvalues −2 and 0, which means that C ⊗ Λ_+, and hence B_0^(2) and B_0, are negative semi-definite. This choice of Σ̃_0, together with Σ_0 = XΣ̃_0, makes B_0 negative semi-definite. We finally let Σ̃_0^− = 0, since it has no importance for stability. Exactly the same procedure for the right boundary matrix B_N leads to the following result.

Proposition 3. The approximation (15) with the boundary operators (16) and penalty coefficients

Σ_0 = XΣ̃_0,  Σ_N = XΣ̃_N,    (23)

where,

Σ̃_0 = −[Λ_+, 0; 0, 0],  Σ̃_N = [0, 0; 0, Λ_−],    (24)

is stable.

Proof. Time-integration of (19) with the penalty coefficients (23), (24) yields

‖u‖²_{P⊗I} ≤ ‖f‖²_{P⊗I}.    (25)

Remark 4. Note the similarity between the continuous estimate (8) and the discrete estimate (25).


3.1.2. The non-homogeneous case

By using the same technique as in the homogeneous case, but keeping the data, we obtain the following quadratic form,

d/dt ‖u‖²_{P⊗I} = [(X_+^T u)_0; (X_−^T u)_0; g_0]^T B_0 [(X_+^T u)_0; (X_−^T u)_0; g_0]
               − [(X_+^T u)_N; (X_−^T u)_N; g_1]^T B_N [(X_+^T u)_N; (X_−^T u)_N; g_1],    (26)

where

B_0 = [Λ_+ + 2Σ̃_0^+, −Σ̃_0^+ R_0, −Σ̃_0^+; −R_0^T Σ̃_0^+, Λ_−, 0; −Σ̃_0^+, 0, 0],
B_N = [Λ_+, Σ̃_N^− R_1, 0; R_1^T Σ̃_N^−, Λ_− − 2Σ̃_N^−, Σ̃_N^−; 0, Σ̃_N^−, 0],

and we recognize the top 2 × 2 blocks from the homogeneous case (19). To obtain an estimate we must prove that the matrices B_0 and B_N are negative and positive semi-definite, respectively.

We first consider the left boundary. Note that the boundary term at x = 0 in the continuous energy estimate (10) involves,

[(X_+^T u)_0; (X_−^T u)_0; g_0]^T [0, 0, 0; 0, R_0^T Λ_+ R_0 + Λ_−, R_0^T Λ_+; 0, Λ_+ R_0, −G_0] [(X_+^T u)_0; (X_−^T u)_0; g_0].    (27)

Proposition 2 implies that the matrix in (27) is negative semi-definite. To take advantage of that, we rewrite B_0 in the following way,

B_0 = [0, 0, 0; 0, R_0^T Λ_+ R_0 + Λ_−, R_0^T Λ_+; 0, Λ_+ R_0, −G_0]
    + [Λ_+ + 2Σ̃_0^+, −Σ̃_0^+ R_0, −Σ̃_0^+; −R_0^T Σ̃_0^+, −R_0^T Λ_+ R_0, −R_0^T Λ_+; −Σ̃_0^+, −Λ_+ R_0, −Λ_+]
    + [0, 0, 0; 0, 0, 0; 0, 0, Λ_+ + G_0].

We focus on the second matrix above and rewrite it as,

B_0^(2) = [I, 0, 0; 0, R_0, 0; 0, 0, I]^T [Λ_+ + 2Σ̃_0^+, −Σ̃_0^+, −Σ̃_0^+; −Σ̃_0^+, −Λ_+, −Λ_+; −Σ̃_0^+, −Λ_+, −Λ_+] [I, 0, 0; 0, R_0, 0; 0, 0, I].    (28)


By choosing Σ̃_0^+ = −Λ_+ in (28) we get

B_0^(2) = [I, 0, 0; 0, R_0, 0; 0, 0, I]^T (C ⊗ Λ_+) [I, 0, 0; 0, R_0, 0; 0, 0, I],  C = [−1, +1, +1; +1, −1, −1; +1, −1, −1].

The matrix C has the eigenvalues −3, 0, 0, which means that C ⊗ Λ_+, and hence B_0^(2) and B_0, are negative semi-definite. This choice of Σ̃_0, together with Σ_0 = XΣ̃_0, makes B_0 negative semi-definite. We finally let Σ̃_0^− = 0, since it has no importance for stability. By a similar procedure for the right boundary matrix B_N we arrive at

Proposition 4. The approximation (15) with the boundary operators (16) and penalty coefficients (23), (24) is strongly stable.

Proof. By time-integration of (26) with the penalty coefficients (23), (24) we obtain,

‖u‖²_{P⊗I} ≤ ‖f‖²_{P⊗I} + ∫_0^T g_0^T (Λ_+ + G_0) g_0 + g_1^T (|Λ_−| + G_1) g_1 dt.    (29)

The estimate (29) includes both the initial and boundary data.

Remark 5. Note that the conditions in Propositions 3 and 4 are identical. If we can estimate the solution in terms of non-zero boundary data, the approximation is strongly stable. If zero boundary data is necessary for obtaining an estimate, it is called stable. See [43, 44] for more details.

Remark 6. Notice the similarity between the continuous estimate (14) and the discrete estimate (29).

4. The stochastic problem

In this section we focus on the stochastic properties of (1), which we formulate as,

u_t + Au_x = F(x, t, ξ) = E[F](x, t) + δF(x, t, ξ)
H_0 u(0, t, ξ) = g_0(t, ξ) = E[g_0](t) + δg_0(t, ξ)
H_1 u(1, t, ξ) = g_1(t, ξ) = E[g_1](t) + δg_1(t, ξ)
u(x, 0, ξ) = f(x, ξ) = E[f](x) + δf(x, ξ).    (30)


In (30), E[·] denotes the expected value, or mean, and δ indicates the deviation from the mean. E[·] is defined as,

E[X(ξ)] = ∫_{−∞}^{∞} X(ξ) f_ξ(ξ) dξ,

where f_ξ is the probability density function of the stochastic variable ξ. By taking the expected value of (30) and letting E[u] = v, we obtain,

v_t + Av_x = E[F](x, t)
H_0 v(0, t) = E[g_0](t)
H_1 v(1, t) = E[g_1](t)
v(x, 0) = E[f](x).    (31)

Taking the difference between (30) and (31), and using the notation e = u − E[u] for the deviation from the mean, leads to,

e_t + Ae_x = δF(x, t, ξ)
H_0 e(0, t, ξ) = δg_0(t, ξ)
H_1 e(1, t, ξ) = δg_1(t, ξ)
e(x, 0, ξ) = δf(x, ξ).    (32)

Remark 7. Note that (32) is of the same form as (30). Therefore, since (30) has an energy estimate in terms of the data, so does (32). Note also that the data in (32) is the deviation from the mean.

4.1. The variance formulation

The energy method applied to (32) (multiply by e^T and integrate in space) leads, as in the general non-homogeneous case (9), to,

‖e‖²_t = [e_0^−; δg_0]^T [R_0^T Λ_+ R_0 + Λ_−, R_0^T Λ_+; Λ_+ R_0, Λ_+] [e_0^−; δg_0]
       − [e_1^+; δg_1]^T [R_1^T Λ_− R_1 + Λ_+, R_1^T Λ_−; Λ_− R_1, Λ_−] [e_1^+; δg_1].    (33)

Unlike the derivation of (10), where our ambition was to bound the solution in terms of the data, our ambition is now to evaluate the right-hand side of (33).


To ease the notation we introduce e_0^+ = (X_+^T e)_0, e_0^− = (X_−^T e)_0, e_1^+ = (X_+^T e)_1 and e_1^− = (X_−^T e)_1. By replacing δg_0 and δg_1 in some parts of the boundary terms in (33), using the boundary conditions, we get,

‖e‖²_t − (R_0 e_0^−)^T Λ_+ (R_0 e_0^−) − (e_0^−)^T Λ_− (e_0^−) + (R_1 e_1^+)^T Λ_− (R_1 e_1^+) + (e_1^+)^T Λ_+ (e_1^+)
= δg_0^T Λ_+ δg_0 + (e_0^+ − R_0 e_0^−)^T Λ_+ (R_0 e_0^−)
− δg_1^T Λ_− δg_1 − (e_1^− − R_1 e_1^+)^T Λ_− (R_1 e_1^+)
+ (R_0 e_0^−)^T Λ_+ (e_0^+ − R_0 e_0^−)
− (R_1 e_1^+)^T Λ_− (e_1^− − R_1 e_1^+).    (34)

Taking the average of (33) and (34) cancels the (R_0 e_0^−)^T Λ_+ (R_0 e_0^−) and (R_1 e_1^+)^T Λ_− (R_1 e_1^+) terms and gives us

‖e‖²_t − (e_0^−)^T Λ_− (e_0^−) + (e_1^+)^T Λ_+ (e_1^+)
= δg_0^T Λ_+ δg_0 + (δg_0 + e_0^+)^T Λ_+ (R_0 e_0^−)
− δg_1^T Λ_− δg_1 − (δg_1 + e_1^−)^T Λ_− (R_1 e_1^+).    (35)

Note that

E[‖e‖²] = E[∫_0^1 e² dx] = ∫_0^1 E[e²] dx = ∫_0^1 E[(u − E[u])²] dx = ∫_0^1 Var[u] dx = ‖Var[u]‖_1.    (36)

Taking the expected value of (35) and using (36) gives us,

‖Var[u]‖_t − E[(e_0^−)^T Λ_− (e_0^−)] + E[(e_1^+)^T Λ_+ (e_1^+)]
= E[δg_0^T Λ_+ δg_0] + E[(δg_0 + e_0^+)^T Λ_+ (R_0 e_0^−)]
− E[δg_1^T Λ_− δg_1] − E[(δg_1 + e_1^−)^T Λ_− (R_1 e_1^+)].    (37)

Equation (37) gives us a clear description of when and how much the variance decays, depending on i) the boundary operators, and ii) the correlation of the stochastic boundary data.

The estimate (37) has a semi-discrete counterpart, which is strongly stable as was shown in Proposition 4. We do not repeat that derivation for the deviation e, but use (37) in our discussion below on the effect on the variance.

4.2. Dependence on stochastically varying data

In this section we study the effects of different boundary conditions for different stochastic properties of the initial and boundary data.


4.2.1. Zero variance on the boundary

If we have perfect knowledge of the boundary data, that is δg_0 = 0 and δg_1 = 0, and use this in (37), we obtain,

‖Var[u]‖_t = E[(R_0 e_0^−)^T Λ_+ (R_0 e_0^−)] + E[(e_0^−)^T Λ_− (e_0^−)] − E[(R_1 e_1^+)^T Λ_− (R_1 e_1^+)] − E[(e_1^+)^T Λ_+ (e_1^+)].    (38)

From (38) we conclude that any non-zero value of R_0 and R_1 gives a positive contribution to the growth of the L1-norm of the variance of the solution. The optimal choice of R_0 and R_1 for zero variance on the boundary is R_0 = R_1 = 0. Note also that the signs of R_0 and R_1 cancel in the quadratic forms.

4.2.2. Decaying variance on the boundary

Next, assume that the variance of the boundary data is non-zero but decays with time. The difference in the energy estimate (37) between the characteristic and non-characteristic case includes the term,

E[(δg_0 + e_0^+)^T Λ_+ (R_0 e_0^−)] = 2E[(e_0^+)^T Λ_+ R_0 e_0^−] − E[(R_0 e_0^−)^T Λ_+ R_0 e_0^−],    (39)

where we used δg_0 = e_0^+ − R_0 e_0^−. With a similar argument we conclude that,

E[(δg_1 + e_1^−)^T Λ_− (R_1 e_1^+)] = 2E[(e_1^−)^T Λ_− R_1 e_1^+] − E[(R_1 e_1^+)^T Λ_− R_1 e_1^+].    (40)

Furthermore, the correlation coefficient between e_0^+ and R_0 e_0^− is,

ρ_{e_0^+, R_0 e_0^−} = E[(e_0^+ − E[e_0^+])^T (R_0 e_0^− − E[R_0 e_0^−])] / ( √E[(e_0^+ − E[e_0^+])²] √E[(R_0 e_0^− − E[R_0 e_0^−])²] )
                     = E[(e_0^+)^T R_0 e_0^−] / ( √E[(e_0^+)²] √E[(R_0 e_0^−)²] ).    (41)

Similarly,

ρ_{e_1^−, R_1 e_1^+} = E[(e_1^−)^T R_1 e_1^+] / ( √E[(e_1^−)²] √E[(R_1 e_1^+)²] ).    (42)

It follows from (39), (40) and the correlation coefficients in (41) and (42) that the contribution to the decay of the variance depends on the correlation between the characteristic variables at the boundaries.

A negative correlation lowers the variance and a positive correlation increases the variance. By choosing the initial data to be sufficiently correlated, we can possibly make E[(δg_0 + e_0^+)^T Λ_+ (R_0 e_0^−)] − E[(δg_1 + e_1^−)^T Λ_− (R_1 e_1^+)] negative, and hence get a greater variance decay than in the characteristic case (when R_0 = R_1 = 0). For long times, however, the variance decay is well approximated by the zero boundary data case (38), which means that the characteristic boundary conditions are optimal.

Remark 8. Note that even though the initial data and boundary data depend on the same single random variable, they are not fully correlated if they depend differently on the random variable.

Notice also that as the data decays, the boundary terms can be approximated as,

e_0^+ = R_0 e_0^− + δg_0 ≈ R_0 e_0^−,
e_1^− = R_1 e_1^+ + δg_1 ≈ R_1 e_1^+.    (43)

A consequence of (43) is that the characteristic variables at the boundaries become more and more correlated as the boundary data decays. Using (43) in (41) and (42) ensures that the correlation coefficients are positive. With zero data we get a perfect correlation between the characteristic variables.

4.2.3. Large variance on the boundary

In the case where the variance of the boundary data is large, no general conclusions can be drawn, since the terms E[δg_0^T Λ_+ (R_0 e_0^−)], −E[(R_0 e_0^−)^T Λ_+ δg_0], −E[δg_1^T Λ_− (R_1 e_1^+)] and E[(R_1 e_1^+)^T Λ_− δg_1] will dominate.

Remark 9. For multiple stochastic variables, the estimation of the correlation effects would be more complicated, but in principle the same.

5. A detailed analysis of a model problem

We consider the problem (1), where for simplicity and clarity we choose A = XΛX^T to be the 2 × 2 matrix

A = [1, 2; 2, 1],  X = (1/√2)[1, 1; 1, −1],  Λ = [3, 0; 0, −1].    (44)

The boundary conditions are of the type (5). In this simplified case R_0 and R_1 are scalars.
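A quick numerical check of the decomposition (44), assuming NumPy:

```python
import numpy as np

# The model matrix and its eigendecomposition from (44).
A = np.array([[1.0, 2.0], [2.0, 1.0]])
X = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
Lam = np.diag([3.0, -1.0])
assert np.allclose(X @ Lam @ X.T, A)    # A = X Lam X^T
assert np.allclose(X @ X.T, np.eye(2))  # X is orthogonal
```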


5.1. The spectrum

To quantify the effect of boundary conditions on the decay of the variance, we compute the continuous and discrete spectrum for our model problem. For details on this technique, see [22, 23, 43, 49, 50, 51, 52, 53].

5.1.1. The spectrum for the continuous problem

Consider the problem (1) with the matrix given in (44). Laplace transforming (1) gives the following system of ordinary differential equations,

su + Au_x = 0,                      0 ≤ x ≤ 1,
H_0 u = (X_+^T − R_0 X_−^T) u = 0,  x = 0,
H_1 u = (X_−^T − R_1 X_+^T) u = 0,  x = 1,    (45)

where we have used zero data, since the data does not influence the spectrum. The system of ordinary differential equations with the boundary conditions (45) forms an eigenvalue problem. The ansatz u = e^{κx} ψ leads to,

(sI + κA) ψ = 0.    (46)

Equation (46) has a non-trivial solution only when sI + κA is singular. Hence, κ is given by |sI + κA| = 0, which leads to κ_1 = s and κ_2 = −s/3. Equation (46) also provides the eigenvectors, and the general solution is

u = α_1 e^{sx} [1; −1] + α_2 e^{−sx/3} [1; 1].    (47)

We insert (47) into the boundary conditions, which gives,

Eα = [−R_0, 1; e^s, −R_1 e^{−s/3}] [α_1; α_2] = 0.    (48)

A non-trivial solution of (48) requires s-values such that

|E| = R_0 R_1 e^{−s/3} − e^s = 0.

The roots of |E| = 0, which form the spectrum of the continuous problem (1), are

s = (3/4) ln|R_0 R_1| + 3nπi/2,          n ∈ Z, if R_0 R_1 > 0,
s = (3/4) ln|R_0 R_1| + 3nπi/2 + 3πi/4,  n ∈ Z, if R_0 R_1 < 0.    (49)

Note that no spectrum is defined when R_0 and/or R_1 are equal to zero, which corresponds to the characteristic boundary conditions. Note also that the real part of s is negative for |R_0 R_1| < 1.

The continuous spectrum given by (49) tells us how the analytical solution grows or decays in time. That growth or decay is determined by the specific boundary conditions used, as can be seen in (48).
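The roots (49) can be verified by substituting them back into |E| = 0. A small sketch, assuming NumPy:

```python
import numpy as np

def continuous_spectrum(R0R1, n):
    """A root of |E| = R0*R1*exp(-s/3) - exp(s) = 0, following equation (49)."""
    s = 0.75 * np.log(abs(R0R1)) + 1.5j * np.pi * n
    if R0R1 < 0:
        s += 0.75j * np.pi
    return s

# Substituting the roots back into |E| gives zero (up to rounding).
for R0R1 in (0.5, -0.5):
    for n in (-1, 0, 2):
        s = continuous_spectrum(R0R1, n)
        assert abs(R0R1 * np.exp(-s / 3.0) - np.exp(s)) < 1e-12
```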

5.1.2. The spectrum for the semi-discrete problem

Consider the semi-discrete problem (15) with homogeneous boundary conditions, rearranged as

u_t = M u,  M = −(D ⊗ A) + (P^{−1}E_0 ⊗ Σ_0 H_0) + (P^{−1}E_N ⊗ Σ_N H_1).

The eigenvalues of the matrix M form the discrete spectrum of (15). Figure 1 shows the discrete and continuous spectrum for different discretizations and combinations of R_0 and R_1. Similar to the continuous case, the discrete spectrum tells us how the numerical solution grows or decays in time.

Figure 1 shows that the discrete spectrum converges to the continuous one as the number of grid points increases. The relation (49) shows that the real part of s decreases when |R_0 R_1| decreases, which is also seen in figure 1.

Figure 2 shows the discrete spectrum for the characteristic case (where no continuous spectrum exists), when both R_0 and R_1 are zero. Note that the eigenvalues move to the left in the complex plane as the number of grid points increases.

5.2. Accuracy of the model problem

The order of accuracy p is defined as,

p = log_2 ( ‖e_h‖_P / ‖e_{h/2}‖_P ),  ‖e_h‖ = ‖u_a − u_h‖,

where e_h is the error, u_h the computed solution using grid spacing h, and u_a the manufactured analytical solution supplying the data. The manufactured analytical solution is used to compute the exact error e_h, see [47, 48].
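The computation of p from two error norms is elementary; a sketch, assuming NumPy and hypothetical error values from a second-order scheme:

```python
import numpy as np

def observed_order(err_h, err_h2):
    """Observed order p = log2(||e_h|| / ||e_{h/2}||) from the error on a grid
    with spacing h and on a refined grid with spacing h/2."""
    return np.log2(err_h / err_h2)

# Hypothetical errors from a 2nd-order scheme: halving h reduces the error 4x.
assert abs(observed_order(4.0e-3, 1.0e-3) - 2.0) < 1e-12
```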

The time-integration is done using the classical 4th-order explicit Runge-Kutta scheme [27]. Table 1 shows the p-values for different grid sizes in space using SBP-SAT schemes of order two and three, see [46]. In all simulations below we use the SBP-SAT scheme of order three with a mesh containing 50 grid points in space and 1000 grid points in time.



Figure 1: The discrete and continuous spectrum for different grids and values of R0, R1.

SBP operator   Nx = 20   Nx = 40   Nx = 80   Nx = 160
2nd order      2.293     2.078     2.021     2.006
3rd order      3.250     3.089     3.034     3.014

Table 1: The order of accuracy for the 2nd and 3rd order SBP-SAT schemes for different numbers of grid points in space.



Figure 2: The discrete spectrum for characteristic boundary conditions and different grids.

5.3. Random distributions

We consider uniformly and normally distributed randomness denotedU(a, b) and N (µ, σ2). The uniform distribution denoted U(a, b) is definedby the parameters a and b which denotes the minimum and maximum valueof the distribution respectively. The probability density function f for auniformly distributed random variable is

f(x) =

1

b−afor a ≤ x ≤ b,

0 for x < a, or x > b.

The normal distribution denoted N (µ, σ2) is defined using its mean µ andvariance σ2. The probability density function for the normal distribution is

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.

The numerical simulations are carried out using the scheme in section 3 with an additional discretization of the random variable ξ = [ξ0, ξ1, ..., ξS] using 80 grid points, which gives the variance to four significant digits. The variance is computed using a Gaussian quadrature rule where the weights are calculated using the stochastic variable's probability density function [34, 35].
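For ξ ∼ N(0, 1), such a quadrature-based variance computation can be sketched with NumPy's Gauss-Hermite rule (probabilists' weight). This is an illustration of the idea in [34], not the authors' code; the test functional ξ³ mimics the ξ³-dependence used in the data below and has the known variance E[ξ⁶] = 15:

```python
import numpy as np

def variance_gauss_hermite(g, n=80):
    """Var[g(xi)] for xi ~ N(0, 1) by Gauss-Hermite quadrature.

    hermegauss uses the probabilists' weight exp(-x^2/2), so the raw
    weights sum to sqrt(2*pi); dividing by that constant turns them
    into weights of a probability measure."""
    x, w = np.polynomial.hermite_e.hermegauss(n)
    w = w / np.sqrt(2.0 * np.pi)
    mean = np.sum(w * g(x))
    return float(np.sum(w * (g(x) - mean) ** 2))

var_u = variance_gauss_hermite(lambda xi: xi ** 3)   # E[xi^6] = 15
```

With 80 nodes the rule is exact for polynomials up to degree 159, far beyond what the functionals here require.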

[Figure 3: curves for R0R1 = 0, 0.25, 0.5, 0.75 and 1.0; axes: t vs. ||Var[u]||.]

Figure 3: The L1-norm of the variance as a function of time for a normally distributed ξ for characteristic and non-characteristic boundary conditions. Zero variance on the boundary.

5.4. Zero variance on the boundary

To investigate the zero variance case on the boundary, we prescribe,

g_0(t,\xi) = 0, \qquad g_1(t,\xi) = 0, \qquad f(x,\xi) = \left[\, 2 + 2\sin(2\pi x)\xi^3,\; 1 - 3\sin(2\pi x)\xi \,\right]^T,

as the initial and boundary conditions. Figures 3 and 4 show the variance decay with time, comparing characteristic and non-characteristic boundary conditions when using a normally distributed (ξ ∼ N(0, 1)) and a uniformly distributed (ξ ∼ U(−√3, √3)) ξ respectively. Note the similarity between studying variance decay in the case of zero variance on the boundary and analyzing convergence to steady state, see [49, 50, 51].
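The interval (−√3, √3) is presumably chosen so that the uniform variable matches the zero mean and unit variance of N(0, 1). The cubed variable ξ³ entering the data nevertheless has a different variance under the two distributions (27/7 versus 15), consistent with the smaller variance magnitudes in the uniform case. A small check, using NumPy's Gauss-Legendre rule (an illustrative sketch, not the authors' code):

```python
import numpy as np

# U(-sqrt(3), sqrt(3)) has mean (a + b)/2 = 0 and variance (b - a)^2/12 = 1,
# i.e. the same first two moments as N(0, 1).
a, b = -np.sqrt(3.0), np.sqrt(3.0)
mean = 0.5 * (a + b)
var = (b - a) ** 2 / 12.0

# Gauss-Legendre nodes mapped to (a, b); halving the weights (which sum
# to 2 on (-1, 1)) gives the weights of the uniform probability measure.
x, w = np.polynomial.legendre.leggauss(20)
x = 0.5 * (b - a) * x + 0.5 * (a + b)
w = 0.5 * w
var_cube = float(np.sum(w * x ** 6))   # Var[xi^3] = E[xi^6] = 27/7 here
```

For ξ ∼ N(0, 1) the corresponding value is E[ξ⁶] = 15, roughly four times larger.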

We clearly see from figures 3 and 4 that the analytical predictions were correct and that characteristic boundary conditions give a smaller variance than non-characteristic boundary conditions. We also see that the theoretical predictions from the spectrum calculations, indicating further decay for decreasing |R0R1|, are confirmed.

The similarity of the results in figures 3 and 4, despite the difference in stochastic distribution, is typical for all the calculations performed. In the rest of this section we only include results obtained using ξ ∼ N(0, 1).

[Figure 4: curves for R0R1 = 0, 0.25, 0.5, 0.75 and 1.0; axes: t vs. ||Var[u]||.]

Figure 4: The L1-norm of the variance as a function of time for a uniformly distributed ξ for characteristic and non-characteristic boundary conditions. Zero variance on the boundary.

5.5. Decaying variance on the boundary

For the case of decaying variance on the boundary, we choose,

g_0(t,\xi) = 3 + 3e^{-1.1t}\cos(2\pi t)\xi^3 - R_0\left(2 + 3e^{-1.1t}\cos(2\pi t)\xi\right),
g_1(t,\xi) = 2 + 3e^{-1.1t}\cos(2\pi t)\xi - R_1\left(3 + 3e^{-1.1t}\cos(2\pi t)\xi^3\right),
f(x,\xi) = \left[\, 3 + 3\xi^3,\; 2 + 3\xi \,\right]^T,

as initial and boundary data. In figure 5 below we compare the L1-norm of the variance between the characteristic and non-characteristic boundary conditions for different combinations of R0 and R1.

Figure 6 shows the correlation between the characteristic variables at the boundaries for one of the cases. In order to study how the correlation between the characteristic variables at the boundary affects the variance, we show the case R0R1 = 0.25 from t = 0 to t = 1 in figures 7 and 8. Figure 5 shows that the characteristic boundary conditions give us the smallest variance for long times. The enhanced decay of the variance with decreasing R0R1 from the spectrum analysis is also confirmed.

By studying the expanded figures 7 and 8 we see a negative correlation in the approximate region of t = 0.4 to t = 0.7 and an indication of a faster

[Figure 5: curves for R0R1 = 0, 0.25, 0.5, 0.75 and 1.0; axes: t vs. ||Var[u]||.]

Figure 5: The L1-norm of the variance as a function of time for characteristic and non-characteristic boundary conditions. Decaying variance on the boundary.

[Figure 6, two panels: left and right boundary; axes: t vs. correlation ρ.]

Figure 6: The correlation between the characteristic variables at the boundaries for the non-characteristic boundary condition (R0R1 = 0.25). Decaying variance on the boundary.

[Figure 7, two panels: left and right boundary on t ∈ [0, 1]; axes: t vs. correlation ρ.]

Figure 7: A blow-up of the correlation between the characteristic variables at the boundaries for the non-characteristic boundary condition (R0R1 = 0.25). Decaying variance on the boundary.


Figure 8: The L1-norm of the variance as a function of time for characteristic and non-characteristic boundary conditions (R0R1 = 0.25). Decaying variance on the boundary.


R0R1                      0      0.25     0.5     0.75      1.0
∫_0^T ||Var[u]|| dt    62.6      82.7    193.4    581.8   2427.7

Table 2: The integral of the L1-norm of the variance for different values of R0 and R1, for ξ ∼ N(0, 1) and T = 10.

decay of the variance than in the characteristic case. We also see in figure 6 that, as the data decays, the characteristic variables at the boundaries become increasingly correlated.

Finally, we compare the different cases by integrating the L1-norm of the variance from zero to T, which can be interpreted as the total amount of uncertainty. Table 2 shows the total L1-norm of the variance of the solution for the five different cases. The characteristic boundary condition gives in total the lowest L1-norm of the variance and confirms the spectrum analysis.
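The totals in Table 2 are time integrals of the sampled norm ||Var[u]||; such an integral can be approximated with a simple composite trapezoidal rule. A sketch, with a hypothetical decaying norm in place of the paper's data:

```python
import numpy as np

def total_uncertainty(t, var_norm):
    """Approximate the time integral of ||Var[u]|| with the trapezoidal rule."""
    return float(np.sum(0.5 * (var_norm[1:] + var_norm[:-1]) * np.diff(t)))

# Hypothetical exponentially decaying variance norm on t in [0, 10]
# (a stand-in for the sampled curves in figure 3, not the paper's values).
t = np.linspace(0.0, 10.0, 1001)
total = total_uncertainty(t, 40.0 * np.exp(-t))
# Analytic value of the test integral: 40 * (1 - e^{-10}).
```

A lower total indicates a more variance-reducing boundary condition over the whole time interval.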

5.6. Large variance on the boundary

For the case of non-decaying variance on the boundary, we choose,

g_0(t,\xi) = 3 + 3\cos(2\pi t)\xi^3 - R_0\left(2 + 3\cos(2\pi t)\xi\right),
g_1(t,\xi) = 2 + 3\cos(2\pi t)\xi - R_1\left(3 + 3\cos(2\pi t)\xi^3\right),
f(x,\xi) = \left[\, 3 + 3\xi^3,\; 2 + 3\xi \,\right]^T,

as the initial and boundary data. In the figures below we compare the L1-norm of the variance between the characteristic and non-characteristic boundary conditions for different combinations of R0 and R1.

Although there is no clear theoretical support for what happens when we have a large variance on the boundary, the trend from the previous cases prevails. With the exception of the case R0R1 = 0.25, figure 9 shows that the characteristic boundary conditions give us the smallest variance for long times. In this case there is no convergence of the correlation between the characteristic variables on the boundaries.

Finally, we compare the different cases by integrating the L1-norm of the variance from zero to T, which can be interpreted as the total amount of uncertainty. Table 3 shows the total L1-norm of the variance of the solution for the five different cases. A decreasing R0R1 leads to a decreasing L1-norm of the variance.

[Figure 9: curves for R0R1 = 0, 0.25, 0.5, 0.75 and 1.0; axes: t vs. ||Var[u]||.]

Figure 9: The L1-norm of the variance as a function of time for characteristic and non-characteristic boundary conditions. Large variance on the boundary.

[Figure 10, two panels: left and right boundary; axes: t vs. correlation ρ.]

Figure 10: The correlation between the characteristic variables at the boundaries for the non-characteristic boundary condition (R0R1 = 0.25). Large variance on the boundary.


R0R1                      0      0.25      0.5     0.75      1.0
∫_0^T ||Var[u]|| dt   747.3     602.6   1157.5   2280.5   4738.9

Table 3: The integral of the L1-norm of the variance for different values of R0 and R1, for ξ ∼ N(0, 1) and T = 10.

6. Applications

6.1. An application in fluid mechanics

We study different subsonic outflow boundary conditions for the Euler equations with random boundary data. The linearized one-dimensional symmetrized form of the Euler equations with frozen coefficients is, see [42],

U_t + A U_x = 0. \quad (50)

In (50),

U = \left[ \frac{\bar{c}}{\sqrt{\gamma}\,\bar{\rho}}\,\rho,\;\; u,\;\; \frac{1}{\bar{c}\sqrt{\gamma(\gamma-1)}}\,T \right]^T, \qquad
A = \begin{bmatrix} \bar{u} & \frac{\bar{c}}{\sqrt{\gamma}} & 0 \\[4pt] \frac{\bar{c}}{\sqrt{\gamma}} & \bar{u} & \sqrt{\frac{\gamma-1}{\gamma}}\,\bar{c} \\[4pt] 0 & \sqrt{\frac{\gamma-1}{\gamma}}\,\bar{c} & \bar{u} \end{bmatrix},

and A = X \Lambda X^T, where

X = \begin{bmatrix} -\sqrt{\frac{\gamma-1}{\gamma}} & \frac{1}{\sqrt{2\gamma}} & \frac{1}{\sqrt{2\gamma}} \\[4pt] 0 & \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\[4pt] \frac{1}{\sqrt{\gamma}} & \sqrt{\frac{\gamma-1}{2\gamma}} & \sqrt{\frac{\gamma-1}{2\gamma}} \end{bmatrix}, \qquad
\Lambda = \begin{bmatrix} \bar{u} & 0 & 0 \\ 0 & \bar{u}+\bar{c} & 0 \\ 0 & 0 & \bar{u}-\bar{c} \end{bmatrix}.

The perturbation variables ρ, u, p, T, c and γ represent the normalized density, velocity, pressure, temperature, speed of sound and ratio of specific heats. The overbar denotes variables at the constant state where A is calculated. We have used ū = 1, c̄ = 2, ρ̄ = 1 and γ = 1.4.
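With these values the eigenvalues of A are ū − c̄ = −1, ū = 1 and ū + c̄ = 3, so the flow is subsonic and exactly one characteristic enters through the outflow boundary. A quick numerical check of this (a sketch, not the authors' code):

```python
import numpy as np

u_bar, c_bar, gamma = 1.0, 2.0, 1.4

# Symmetrized Euler coefficient matrix at the frozen constant state.
k = c_bar * np.sqrt((gamma - 1.0) / gamma)
A = np.array([[u_bar, c_bar / np.sqrt(gamma), 0.0],
              [c_bar / np.sqrt(gamma), u_bar, k],
              [0.0, k, u_bar]])

# A is symmetric, so eigvalsh applies; sorted eigenvalues are u-c, u, u+c.
eigvals = np.sort(np.linalg.eigvalsh(A))
# One negative eigenvalue: one ingoing characteristic at a subsonic outflow.
```

The single ingoing characteristic is why exactly one scalar condition (characteristic variable, pressure or velocity) is imposed at the outflow below.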

The boundary conditions are of the type (5), where in this case R0 and R1 are matrices of size 2 × 1 and 1 × 2 respectively. Together with the characteristic case we study the following two settings of R0 and R1 in the non-characteristic case,


Case                              Characteristic   Pressure   Velocity
∫_0^T ||Var[u]|| dt (Normal)            2579.0       3264.0     3143.0
∫_0^T ||Var[u]|| dt (Uniform)            663.9        840.3      809.1

Table 4: The integral of the L1-norm of the variance for the different choices of the matrices R0 and R1, for ξ ∼ N(0, 1) and ξ ∼ U(−√3, √3).

Characteristic:  R_0 = [0, 0]^T,  R_1 = [0, 0],
Pressure:        R_0 = [0, 0]^T,  R_1 = [0, -1],    (51)
Velocity:        R_0 = [0, 0]^T,  R_1 = [0, +1].    (52)

In (51) and (52), R0 and R1 are chosen such that the conditions for an energy estimate in (7) are satisfied. Note that the non-characteristic boundary condition (51) corresponds to specifying the pressure p and (52) corresponds to specifying the velocity u.

For completeness, we introduce the characteristic variables,

X^T U = \left[ \frac{1}{\sqrt{\gamma-1}\,\bar{\rho}\,\bar{c}}\,(p - \bar{c}^2\rho),\;\; \frac{1}{\sqrt{2}\,\bar{\rho}\,\bar{c}}\,(p + \bar{c}\bar{\rho}u),\;\; \frac{1}{\sqrt{2}\,\bar{\rho}\,\bar{c}}\,(p - \bar{c}\bar{\rho}u) \right]^T.

The non-decaying randomness in the boundary data is given by ρ = u = p = 0.1 + 3 cos(2πt)ξ³. In all calculations below we use the characteristic boundary conditions on the inflow boundary. On the outflow boundary, three different boundary conditions are considered: we either specify the ingoing characteristic variable, the pressure or the velocity. We restrict ourselves to the study of large variance on the boundary.

We show results for ξ ∼ N(0, 1) and ξ ∼ U(−√3, √3) and compare the L1-norm of the variance for the characteristic, pressure and velocity boundary conditions respectively. Figures 11 and 12 show that even without a decaying variance on the boundary, the characteristic boundary conditions give the smallest variance. We also compare the different boundary conditions by integrating the L1-norm of the variance from zero to T. Table 4 shows the total L1-norm of the variance of the solution for the different cases. As seen from Table 4, the characteristic boundary condition gives in total the lowest L1-norm of the variance.


Figure 11: The L1-norm of the variance as a function of time for ξ ∼ N(0, 1) and the characteristic, pressure and velocity boundary conditions.


Figure 12: The L1-norm of the variance as a function of time for ξ ∼ U(−√3, √3) and the characteristic, pressure and velocity boundary conditions.


6.2. An application in electromagnetics

In this section we study Maxwell's equations in two dimensions. The one-dimensional theoretical developments in sections 2 and 3 for well-posedness and stability are completely analogous in 2D and are not reiterated here.

The relation between electric and magnetic fields is given by, see [54]

\mu\,\frac{\partial H}{\partial t} = -\nabla \times E, \qquad \epsilon\,\frac{\partial E}{\partial t} = \nabla \times H - J,
\nabla \cdot (\epsilon E) = \rho, \qquad \nabla \cdot (\mu H) = 0, \quad (53)

where E, H, J, ρ, ε and µ represent the electric field, magnetic field, electric current density, charge density, permittivity and permeability. In this example we let ρ = 1, ε = 1 and µ = 1. By letting J = 0 we can write (53) in matrix form as

S u_t + A u_x + B u_y = 0,

where u = [H_z, E_x, E_y]^T and

S = \begin{bmatrix} \mu & 0 & 0 \\ 0 & \epsilon & 0 \\ 0 & 0 & \epsilon \end{bmatrix}, \qquad
A = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

Furthermore we introduce the eigendecomposition

\Lambda_A = \Lambda_B = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}, \qquad
X_A = \begin{bmatrix} -\frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \\[4pt] -\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\[4pt] 0 & 1 & 0 \end{bmatrix}, \qquad
X_B = \begin{bmatrix} \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\[4pt] 0 & -1 & 0 \\[4pt] \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \end{bmatrix}.

The zero eigenvalue in Λ_A and Λ_B does not lead to a strict inequality in (7). However, also in this case we can find a G in (12) such that the matrix M becomes positive semi-definite. The theoretical conclusions remain the same.

The boundary conditions are of the type (5), where in this case R is represented at the North, South, East and West boundaries (see figure 13) as R_N, R_S, R_E and R_W, which are matrices of size 2 × 1 and 1 × 2. Together with the characteristic case (CHA) we study the following cases

BC1:  R_N = [+1/2, 0]^T,  R_S = [0, -1/2],  R_E = [+1/2, 0]^T,  R_W = [0, +1/2],
BC2:  R_N = [-1/2, 0]^T,  R_S = [0, +1/2],  R_E = [-1/2, 0]^T,  R_W = [0, -1/2],
CHA:  R_N = [0, 0]^T,  R_S = [0, 0],  R_E = [0, 0]^T,  R_W = [0, 0].    (54)


Figure 13: A schematic of the domain.

In (54), BC1 represents specifying a linear combination of H_z, E_x, E_y at the North and East boundaries and H_z, E_x at the South and West boundaries. In BC2 we specify a linear combination of H_z, E_x, E_y at the North and South boundaries and H_z, E_x at the East and West boundaries. R_N, R_S, R_E and R_W are chosen such that, together with Λ_A and Λ_B, we can find a G which makes the matrix M in (12) positive semi-definite.

For completeness, we introduce the characteristic variables,

X_A^T u = \left[ -\frac{H_z + E_x}{\sqrt{2}},\;\; E_y,\;\; -\frac{H_z - E_x}{\sqrt{2}} \right]^T, \qquad
X_B^T u = \left[ \frac{H_z + E_y}{\sqrt{2}},\;\; -E_x,\;\; \frac{H_z - E_y}{\sqrt{2}} \right]^T.

When constructing boundary and initial data we assume randomness in H_z, E_x and E_y, given by

H_z = E_x = E_y = sin(2πx) sin(2πy) + 1 + 3 cos(2πt)ξ.

In the figures below we compare the L1-norm of the variance for BC1, BC2 and CHA for ξ ∼ N(0, 1) and ξ ∼ U(−√3, √3). We also compare the total variance for the three different cases in Table 5. As can be seen, the trend from the previous sections remains, namely that the characteristic boundary condition (CHA) gives the smallest variance.

7. Summary and conclusions

We have studied how the choice of boundary conditions for randomly varying initial and boundary data influences the variance of the solution for a hyperbolic system of equations. Initially, the stochastic nature of the problem was ignored and the general theory for initial boundary value problems and


Figure 14: The L1-norm of the variance as a function of time for ξ ∼ N(0, 1) and BC1, BC2 and CHA.


Figure 15: The L1-norm of the variance as a function of time for ξ ∼ U(−√3, √3) and BC1, BC2 and CHA.


Case                              CHA      BC1       BC2
∫_0^T ||Var[u]|| dt (Normal)      74.9     87.1     106.6
∫_0^T ||Var[u]|| dt (Uniform)    921.1   1070.7    1311.8

Table 5: The integral of the L1-norm of the variance for the different choices of the matrices R_N, R_S, R_E and R_W, for ξ ∼ N(0, 1) and ξ ∼ U(−√3, √3).

the related stability theory was used. Energy estimates for the continuous problem and stability estimates for the numerical scheme were derived for both homogeneous and non-homogeneous boundary conditions.

Next, the stochastic nature of the problem was considered and it was shown how the variance of the solution depends on the variance of the boundary data and the correlation between the characteristic variables at the boundaries. We also confirmed the close relation between the sharpness of the energy estimate and the size of the variance in the solution.

The continuous and discrete spectrum of the problem was employed to draw conclusions about the quantitative variance decay of the solution. Numerical results for a hyperbolic model problem supporting our theoretical derivations were presented for three different cases.

With perfect knowledge of the boundary data, the optimal choice in terms of variance reduction is the characteristic boundary conditions. With decaying variance on the boundary, the correlation between initial and boundary data decides which type of boundary conditions gives the lowest variance for short times. For long times, the characteristic boundary conditions are superior. With large variance on the boundary, the conclusions drawn above continue to hold, although no clear theoretical support exists for that case.

In a fluid mechanics application, we studied different subsonic outflow boundary conditions with large non-decaying randomness in the data for the Euler equations. The numerical studies showed that the characteristic boundary condition is more variance reducing than two common non-characteristic boundary conditions, namely specifying the pressure or the normal velocity.

In the application in electromagnetics, we again studied the characteristic boundary condition together with two non-characteristic boundary conditions with large non-decaying randomness in the data. The numerical results for Maxwell's equations are in line with the fluid mechanics application and show that the characteristic boundary condition gives a faster variance decay than the non-characteristic ones.


The general conclusions from this investigation are: i) with the same knowledge of data, we can get a more or less accurate description of the uncertainty in the solution, ii) the size of the variance depends to a large extent on the boundary conditions, and iii) the characteristic boundary conditions are generally a good choice.

Acknowledgements

The UMRIDA project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. ACP3-GA-2013-605036.

References

[1] D. Xiu, G. E. Karniadakis, The Wiener-Askey polynomial chaos for stochastic differential equations, SIAM Journal on Scientific Computing 24 (2) (2002) 619-644.

[2] D. Xiu, J. S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM Journal on Scientific Computing 27 (3) (2005) 1118-1139.

[3] X. Wan, G. E. Karniadakis, An adaptive multi-element generalized polynomial chaos method for stochastic differential equations, Journal of Computational Physics 209 (2005) 617-642.

[4] X. Wan, G. E. Karniadakis, Long-term behavior of polynomial chaos in stochastic flow simulations, Computer Methods in Applied Mechanics and Engineering 195 (2006) 5582-5596.

[5] I. Babuška, F. Nobile, R. Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM Journal on Numerical Analysis 45 (3) (2007) 1005-1034.

[6] J. A. S. Witteveen, H. Bijl, Efficient quantification of the effect of uncertainties in advection-diffusion problems using polynomial chaos, Numerical Heat Transfer, Part B: Fundamentals 53 (5) (2008) 437-465.

[7] D. Xiu, Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, 2010.

[8] B. Ganapathysubramanian, N. Zabaras, Sparse grid collocation schemes for stochastic natural convection problems, Journal of Computational Physics 225 (1) (2007) 652-685.

[9] R. G. Ghanem, P. D. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, 1991.

[10] O. P. Le Maitre, O. M. Knio, H. N. Najm, R. G. Ghanem, A stochastic projection method for fluid flow. I. Basic formulation, Journal of Computational Physics 173 (2001) 481-511.

[11] D. Xiu, J. Shen, Efficient stochastic Galerkin methods for random diffusion equations, Journal of Computational Physics 228 (2) (2009) 266-281.

[12] D. Gottlieb, D. Xiu, Galerkin method for wave equations with uncertain coefficients, Communications in Computational Physics 3 (2) (2007) 505-518.

[13] P. Pettersson, G. Iaccarino, J. Nordström, Numerical analysis of the Burgers' equation in the presence of uncertainty, Journal of Computational Physics 228 (2009) 8394-8412.

[14] P. Pettersson, G. Iaccarino, J. Nordström, An intrusive hybrid method for discontinuous two-phase flow under uncertainty, Computers & Fluids 86 (2013) 228-239.

[15] P. Pettersson, A. Doostan, J. Nordström, On stability and monotonicity requirements of finite difference approximations of stochastic conservation laws with random viscosity, Computer Methods in Applied Mechanics and Engineering 258 (2013) 134-151.

[16] P. Pettersson, G. Iaccarino, J. Nordström, A stochastic Galerkin method for the Euler equations with Roe variable transformation, Journal of Computational Physics 257 (Part A) (2014) 481-500.

[17] P. G. Constantine, A. Doostan, G. Iaccarino, A hybrid collocation/Galerkin scheme for convective heat transfer problems with stochastic boundary conditions, International Journal for Numerical Methods in Engineering 80 (2009) 868-880.

[18] R. Abgrall, P. M. Congedo, A semi-intrusive deterministic approach to uncertainty quantification in non-linear fluid flow problems, Journal of Computational Physics 235 (2013) 828-845.

[19] J. Bäck, F. Nobile, L. Tamellini, R. Tempone, Implementation of optimal Galerkin and collocation approximations of PDEs with random coefficients, ESAIM: Proceedings 33 (2011) 10-21.

[20] R. S. Tuminaro, E. T. Phipps, C. W. Miller, H. C. Elman, Assessment of collocation and Galerkin approaches to linear diffusion equations with random data, International Journal for Uncertainty Quantification 1 (1) (2011) 19-33.

[21] D. Xiu, Fast numerical methods for stochastic computations: a review, Communications in Computational Physics 5 (2-4) (2008) 242-272.

[22] H.-O. Kreiss, Initial boundary value problems for hyperbolic systems, Communications on Pure and Applied Mathematics 23 (3) (1970) 277-298.

[23] H.-O. Kreiss, B. Gustafsson, Boundary conditions for time dependent problems with an artificial boundary, Journal of Computational Physics 30 (1979) 333-351.

[24] B. Strand, Summation by parts for finite difference approximations for d/dx, Journal of Computational Physics 110 (1) (1994) 47-67.

[25] K. Mattsson, Boundary procedures for summation-by-parts operators, Journal of Scientific Computing 18 (1) (2003) 133-153.

[26] M. H. Carpenter, J. Nordström, D. Gottlieb, A stable and conservative interface treatment of arbitrary spatial accuracy, Journal of Computational Physics 148 (1999) 341-365.

[27] J. C. Butcher, Coefficients for the study of Runge-Kutta integration processes, Journal of the Australian Mathematical Society 3 (2) (1963) 185-201.

[28] K. Mattsson, M. Almquist, A solution to the stability issues with block norm summation by parts operators, Journal of Computational Physics 253 (2013) 418-442.

35

[29] H.O. Kreiss, G. Scherer, Finite element and finite difference methods forhyperbolic partial differential equations, in: C. De Boor (Ed.), Math-ematical Aspects of Finite Elements in Partial Differential Equation,Academic Press, New York, 1974

[30] M. Carpenter, J. Nordstrom, D. Gottlieb, A stable and conservativeinterface treatment of arbitrary spatial accuracy, Journal of Computa-tional Physics 148 (1999) 341-365.

[31] K. Mattsson, J. Nordstrom, Summation by parts operators for finite dif-ference approximations of second derivatives, Journal of ComputationalPhysics 199 (2004) 503-540.

[32] J. Nordstrom, R. Gustafsson, High order finite difference approximationsof electromagnetic wave propagation close to material discontinuities,Journal of Scientific Computing 18 (2003) 215-234.

[33] J. Nordstrom, Error bounded schemes for time-dependent hyperbolicproblems, SIAM Journal on Scientific Computing 30 (2007) 46-59.

[34] G. H. Golub and J. H. Welsch. Calculation of Gauss quadrature rules.Technical report, Stanford, CA, USA, 1967.

[35] S. Smolyak. Quadrature and interpolation formulas for tensor productsof certain classes of functions. Soviet Mathematics, Doklady, 4:240-243,1963.

[36] R. Abgrall. A simple, flexible and generic deterministic approach touncertainty quantifications in non linear problems: application to fluidflow problems. Rapport de recherche, 2008.

[37] P. J. Roache. Quantification of uncertainty in computational fluid dy-namics. Annual Review of Fluid Mechanics, 29:123-160, 1997.

[38] M.H. Carpenter, D. Gottlieb, S. Abarbanel, Time-stable boundary con-ditions for finite-difference schemes solving hyperbolic systems: Method-ology and application to high-order compact schemes, Journal of Com-putational Physics 111 (1994) 220-236.

[39] M. Svard, M. Carpenter, J. Nordstrom, A stable high-order finite dif-ference scheme for the compressible Navier-Stokes equations: far-field

36

boundary conditions, Journal of Computational Physics 225 (2007)1020-1038.

[40] M. Svärd, J. Nordström, A stable high-order finite difference scheme for the compressible Navier-Stokes equations: No-slip wall boundary conditions, Journal of Computational Physics 227 (2008) 4805-4824.

[41] J. Berg, J. Nordström, Stable Robin solid wall boundary conditions for the Navier-Stokes equations, Journal of Computational Physics 230 (2011) 7519-7532.

[42] S. Abarbanel, D. Gottlieb, Optimal time splitting for two- and three-dimensional Navier-Stokes equations with mixed derivatives, Journal of Computational Physics 41 (1981) 1-43.

[43] B. Gustafsson, H.-O. Kreiss, J. Oliger, Time Dependent Problems and Difference Methods, John Wiley & Sons, New York, 1995.

[44] J. Nordström, M. Svärd, Well-posed boundary conditions for the Navier-Stokes equations, SIAM Journal on Numerical Analysis 43 (2005) 1231-1255.

[45] S. Ghader, J. Nordström, Revisiting well-posed boundary conditions for the shallow water equations, Dynamics of Atmospheres and Oceans 66 (2014) 1-9.

[46] M. Svärd, J. Nordström, On the order of accuracy for difference approximations of initial-boundary value problems, Journal of Computational Physics 218 (2006) 333-352.

[47] P. J. Roache, Code verification by the method of manufactured solutions, Journal of Fluids Engineering, Transactions of the ASME 124 (2002) 4-10.

[48] L. Shunn, F. Ham, P. Moin, Verification of variable-density flow solvers using manufactured solutions, Journal of Computational Physics 231 (2012) 3801-3827.

[49] B. Engquist, B. Gustafsson, Steady state computations for wave propagation problems, Mathematics of Computation 49 (1987) 39-64.

[50] J. Nordström, The influence of open boundary conditions on the convergence to steady state for the Navier-Stokes equations, Journal of Computational Physics 85 (1989) 210-244.

[51] J. Nordström, S. Eriksson, P. Eliasson, Weak and strong wall boundary procedures and convergence to steady-state of the Navier-Stokes equations, Journal of Computational Physics 231 (2012) 4867-4884.

[52] J. Nordström, Conservative finite difference formulations, variable coefficients, energy estimates and artificial dissipation, Journal of Scientific Computing 29 (2006) 375-404.

[53] J. Nordström, S. Eriksson, Fluid structure interaction problems: the necessity of a well posed, stable and accurate formulation, Communications in Computational Physics 8 (2010) 1111-1138.

[54] J. Nordström, R. Gustafsson, High order finite difference approximations of electromagnetic wave propagation close to material discontinuities, Journal of Scientific Computing 18 (2) (2003) 215-234.
