
OPTIMAL CONTROL OF SYSTEMS GOVERNED BY PDES WITH RANDOM PARAMETER FIELDS

ROXANA TANASE∗, CATALIN TRENCHEA†, AND IVAN YOTOV‡

Abstract. We present methods for the optimal control / parameter identification of systems governed by partial differential equations with random input data. We consider several identification objectives that either minimize the expectation of a tracking cost functional or minimize the difference of desired statistical quantities in the appropriate spatial L^p norm (including higher-order moments, hence allowing one to match any statistics, e.g., the variance, skewness, kurtosis, etc.).

A specific problem of parameter identification of a linear elliptic PDE that describes flow of a fluid in a porous medium with an uncertain permeability field is examined. We present numerical results to study the consequences of the moment-tracking approximation and the efficiency of the method. The stochastic parameter identification algorithm integrates an adjoint-based deterministic algorithm with the sparse grid stochastic collocation mixed-FEM approach.

We also derive rigorous error estimates for fully discrete problems, using the Fink–Rheinboldt theory for the approximation of solutions of a class of nonlinear problems.

Key words.

AMS subject classifications.

1. Introduction. Driven by the needs of applications both in industry and other sciences, the field of inverse problems has undergone a tremendous growth within the last two decades, with recent emphasis placed, more than before, on nonlinear problems. Advances in this field and the development of sophisticated numerical techniques for treating the direct problems allow one to address and solve industrial inverse problems of high complexity.

Parameter estimation is an important field in the area of modeling physical or biological processes. The set of parameters that maximizes the model's agreement with experimental data, i.e., the ideal parameter set, can be used to yield important insight into a given system. It can help scientists more clearly describe the behavior of the system, predict behavioral changes in the system during pathological situations, and assess the efficacy of various corrective options. In addition, once those ideal parameters have been found, other mathematical techniques can be used to obtain further insight into the system's behavior. Local sensitivity analysis at the optimal parameter set can be used to assess the local importance of the parameters.

As mathematical/computational models become more complex in order to better describe physical systems, parameter estimation can grow in difficulty and cost due to the increase in the number of parameters and, consequently, in computational runtime. The problem of calibrating a model in a reasonable amount of time depends more and more on efficient methods of parameter estimation.

There is a vast literature on estimating parameters that arise in partial differential equations using different techniques. The most studied approach to stochastic inverse problems is the Bayesian approach [57, 42, 56, 35, 14, 10, 37, 46, 45, 47, 44, 6, 23, 51, 17, 24, 39, 3, 43, 2, 11].

∗Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260 ([email protected]). Partially supported by NSF Grant DMS-1522574 and Air Force Grant FA 9550-12-1-0191.
†Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260 ([email protected]). Partially supported by NSF Grant DMS-1522574 and Air Force Grant FA 9550-12-1-0191.
‡Department of Mathematics, University of Pittsburgh, Pittsburgh, PA 15260 ([email protected]).

Following [28], we consider the other approach, the adjoint-variable method. It relies on deterministic simulators and, unlike the Bayesian approach, requires no a priori assumptions on the design variables. The first type of cost functional we investigate with this approach, which is in ubiquitous use in the literature (see, e.g., [5, 13, 12, 34, 33, 59]), is defined as the expected value of a cost functional used in deterministic control and identification problems. In the second type, we apply the deterministic cost functional to the expected value of the mismatch; that is, we attempt to minimize the spatial L² mismatch of the expected values (and/or higher-order moments) of the solutions of the PDE and the given target function. This type of functional was considered, from a computational point of view and for a small number of parameters, in [60].

The adjoint variable-based method solves a large class of optimization, inverse, parameter estimation, and optimal control problems. It is one of the gradient-based techniques in which the gradient of the cost functional with respect to the unknown parameters is calculated indirectly by solving an adjoint equation. Although an additional cost arises from solving the adjoint equation, the gradients of the cost functional with respect to all parameters are obtained at once. Thus, the total cost to obtain these gradients is independent of the number of parameters and amounts roughly to the cost of solving two partial differential equations (PDEs). From a control theory point of view, the algorithm is based on the Pontryagin maximum principle, since it iteratively solves the necessary conditions for optimality. From an optimization point of view, the algorithm is a gradient descent in which the gradient of the cost functional is efficiently computed via the adjoint variable-based method.
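To illustrate the mechanism (an illustration only, not the method analyzed below), the following Python/NumPy sketch computes the gradient of a least-squares functional constrained by a small discrete state equation A(θ)u = f with one state solve and one adjoint solve; the one-parameter diffusion model and all names are hypothetical.

```python
import numpy as np

def state_matrix(theta, n=50):
    # hypothetical 1D diffusion stiffness matrix with conductivity kappa = exp(theta)
    h = 1.0 / (n + 1)
    kappa = np.exp(theta) * np.ones(n + 1)
    main = (kappa[:-1] + kappa[1:]) / h
    off = -kappa[1:-1] / h
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

def cost_and_gradient(theta, f, u_obs):
    """J(theta) = 1/2 ||u(theta) - u_obs||^2 with A(theta) u = f; adjoint-based gradient."""
    A = state_matrix(theta)
    u = np.linalg.solve(A, f)              # one state solve
    q = np.linalg.solve(A.T, u - u_obs)    # one adjoint solve
    # dJ/dtheta = -q^T (dA/dtheta) u; here dA/dtheta = A since kappa = exp(theta)
    return 0.5 * np.sum((u - u_obs) ** 2), -q @ (A @ u)

if __name__ == "__main__":
    n, f = 50, np.ones(50)
    u_obs = np.linalg.solve(state_matrix(0.3, n), f)   # synthetic observation
    J, g = cost_and_gradient(0.0, f, u_obs)
    eps = 1e-6
    J_eps, _ = cost_and_gradient(eps, f, u_obs)
    print(g, (J_eps - J) / eps)            # adjoint gradient vs finite difference
```

The same structure carries over when θ is a vector: the two solves are unchanged, and only the inner product -qᵀ(∂A/∂θ_k)u is evaluated once per component.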

The paper is organized as follows. In §2, we state the specific PDE we consider as a constraint for the stochastic inverse problems, and also list assumptions about that PDE and the nature of the stochastic inputs we consider. In §3, we precisely state the stochastic control and identification problems we consider, including the definitions of the two types of cost functionals. Existence and uniqueness results are stated, and the first-order optimality conditions which optimal states and controls or parameters, as the case may be, must satisfy are derived. In §4, we derive error estimates for the fully discrete parameter identification problems using the Fink–Rheinboldt theory. Finally, in §5, we provide some numerical examples which we use to illustrate the better matching results as well as the better computational efficiency resulting from using the second approach for defining stochastic cost functionals.

2. Problem setting. Let D be a convex bounded polygonal domain in R^d, d = 1, 2, 3, and (Ω, F, P) a complete probability space, where Ω is the set of outcomes, F ⊂ 2^Ω is the σ-algebra of events, and P : F → [0, 1] is a probability measure. The general framework for the stochastic inverse problem is the following: we seek random parameters, coefficients κ(ω, x) and/or forcing terms f(ω, x), with x ∈ D, ω ∈ Ω, that minimize the mismatch between stochastic measured and simulated data. We denote by W(D) a Banach space of functions v : D → R and define the stochastic Banach space L²_P(Ω; W(D)), consisting of Banach-space-valued functions that have finite second moments:
\[
L^2_P(\Omega;W(D)) = \Big\{ v:\Omega\to W(D) \;\Big|\; v \text{ is strongly measurable},\ \int_\Omega \|v(\omega,\cdot)\|^2_{W(D)}\,dP(\omega) < +\infty \Big\}.
\]

2.1. State Equations. Soil properties are difficult to measure on the whole spatial domain; therefore the material properties used in the simulation of groundwater flows are usually flawed by uncertainties. There has recently been an increasing interest in the modeling and computational aspects of the uncertainties of groundwater flow [32, 50, 52] and porous media; see, e.g., [18, 26, 54, 38, 53, 58].

We consider the groundwater flow problem in a region D ⊂ R^d, where the flux is related to the hydraulic head gradient by Darcy's law. We model the uncertainties in the soil by describing the conductivity coefficient κ as a random field denoted κ(ω, x). Similarly, the stochastic forcing term f(ω, x) models the uncertainty in the sources and sinks (see, e.g., [1, 55, 15] and the references therein). Therefore the hydraulic head p and velocity u are also random fields satisfying the elliptic stochastic partial differential equation (SPDE):

(2.1)
\[
u(\omega,x) = -\kappa(\omega,x)\nabla p(\omega,x) \ \text{ in } \Omega\times D, \qquad
\nabla\cdot u = f \ \text{ in } \Omega\times D, \qquad
p = 0 \ \text{ on } \Omega\times\partial D.
\]

In order to write an appropriate weak formulation for (2.1), we need to introduce the Hilbert space (see [27])
\[
H(\mathrm{div},D) = \big\{ v\in (L^2(D))^d \mid \nabla\cdot v\in L^2(D) \big\},
\]
with the corresponding norm
\[
\|v\|_{H(\mathrm{div},D)} = \big(\|v\|^2_{L^2(D)} + \|\nabla\cdot v\|^2_{L^2(D)}\big)^{1/2}.
\]
Currently, numerical methods for Darcy flow consider two different approaches: the first uses the primal formulation, which involves solving a Poisson equation for the pressure; the second uses a mixed formulation, with velocity and pressure as the variables of interest.

We now make the following assumptions concerning the abstract state equations (2.1):

A1) the solution (u, p) to (2.1) has realizations in the Banach spaces H(div, D) and L²(D), respectively, i.e., u(ω, ·) ∈ H(div, D), p(ω, ·) ∈ L²(D) almost surely, and for all ω ∈ Ω
\[
\|u(\omega,\cdot)\|_{H(\mathrm{div},D)} + \|p(\omega,\cdot)\|_{L^2(D)} \le C\,\|f(\omega,\cdot)\|_{L^2(D)},
\]
where C is a constant independent of the realization ω ∈ Ω;

A2) the forcing term f ∈ L²_P(Ω; L²(D)) is such that the solution (u, p) is unique and bounded in L²_P(Ω; H(div, D)) and L²_P(Ω; L²(D)), respectively.

The linear elliptic SPDE (2.1), with κ(ω, ·) uniformly bounded and coercive, i.e., there exist κ_min > 0 and κ_max < ∞ such that

(2.2)
\[
P\big[\omega\in\Omega : \kappa_{\min} \le \kappa(\omega,x) \le \kappa_{\max} \ \ \forall x\in D\big] = 1,
\]

and f(ω, ·) square integrable with respect to P, satisfies assumptions A1 and A2 (see [4, 49]).

We shall assume that D is a bounded open subset of R^d, either with smooth boundary (of class C² for instance) or convex. This implies that for every f ∈ L²_P(Ω; L²(D)), problem (2.1) has a unique solution (u, p) ∈ L²_P(Ω; H(div, D)) × L²_P(Ω; L²(D)).

We will denote the expected value of a random variable X(ω) with probability density function (p.d.f.) ρ by

(2.3)
\[
\mathbb{E}[X] = \int_\Omega X(\omega)\,dP(\omega) = \int_{\mathbb{R}} x\,\rho(x)\,dx.
\]

The usual multiplication by test functions v ∈ H(div, D) and w ∈ L²(D) and subsequent application of Green's theorem in the system (2.1) yield the standard weak mixed formulation, namely: find u(ω, x) ∈ L²_P(Ω; H(div, D)) and p(ω, x) ∈ L²_P(Ω; L²(D)) such that

(2.4)
\[
\begin{aligned}
&\mathbb{E}\big[(\kappa^{-1}u, v)_{(L^2(D))^d} - (p, \nabla\cdot v)_{L^2(D)}\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u, w)_{L^2(D)}\big] = \mathbb{E}\big[(f, w)_{L^2(D)}\big], &&\forall w\in L^2(D).
\end{aligned}
\]

Throughout the rest of the paper, for simplicity of notation, the inner product in L²(D) or (L²(D))^d will be denoted by (·, ·).
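In computations, the expectations in (2.3)-(2.4) are approximated by sample averages or by quadrature against the density ρ. A minimal NumPy illustration of (2.3), with an assumed uniform random variable (not a quantity from this paper), is:

```python
import numpy as np

rng = np.random.default_rng(0)

# X(omega) = Y^2 with Y ~ U(-1, 1); density rho(y) = 1/2 on [-1, 1]
samples = rng.uniform(-1.0, 1.0, size=200_000)
E_sample = np.mean(samples ** 2)              # E[X] = int_Omega X(omega) dP(omega)

# the same expectation written against the density, E[X] = int_R x^2 rho(x) dx
y = np.linspace(-1.0, 1.0, 2001)
E_quadrature = np.trapz(y ** 2 * 0.5, y)

print(E_sample, E_quadrature)                 # both approximate 1/3
```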

3. Generalized stochastic inverse problems. First we define the admissible set of conductivity coefficients,

(3.1)
\[
A_{ad} = \big\{\kappa\in L^\infty(\Omega;L^\infty(D)) \mid \kappa(\omega,x) \text{ satisfies (2.2)}\big\};
\]

then, given κ ∈ A_ad, let the admissible set of states and controls be defined as

(3.2)
\[
B_{ad} = \big\{(u,p,f) \mid u\in L^2_P(\Omega;H(\mathrm{div},D)),\ p\in L^2_P(\Omega;L^2(D)),\ f\in L^2_P(\Omega;L^2(D))\big\}.
\]

Finally, given f ∈ L²_P(Ω; L²(D)), let the admissible set of states and coefficients be described as

(3.3)
\[
C_{ad} = \big\{(u,p,\kappa) \mid u\in L^2_P(\Omega;H(\mathrm{div},D)),\ p\in L^2_P(\Omega;L^2(D)),\ \kappa\in A_{ad}\big\}.
\]

We also introduce the stochastic target functions: p̄ ∈ L²_P(Ω; L²(D)), a given, possibly perturbed, observation of the pressure, and ū ∈ L²_P(Ω; H(div, D)), a given, possibly perturbed, observation of the Darcy velocity.

3.1. Stochastic optimal control problems. In this section we consider a general class of minimization problems for solving the stochastic inverse problem for the random forcing function f(ω, x) and the solution (u(ω, x), p(ω, x)) satisfying (2.1) a.s. Here we assume as given the input random process κ ∈ A_ad and the targets p̄ ∈ L²_P(Ω; L²(D)) and ū ∈ L²_P(Ω; H(div, D)), and we want to recover (u*_J, p*_J, f*_J) such that

(3.4)
\[
J(u^*_J, p^*_J, f^*_J) = \inf_{(u,p,f)\in B_{ad}} \big\{ J(u,p,f) : \text{subject to (2.1)} \big\},
\]

where J(u, p, f) is a given stochastic functional constructed to track the desired random fields (ū, p̄) or the statistical quantities of interest (QoI) of such stochastic functions. This leads to the following definition.

Definition 3.1 (Stochastic optimal control). A 3-tuple (u*_J, p*_J, f*_J) ∈ B_ad satisfying (2.1) a.s., for which the infimum in (3.4) is attained, is called the stochastic optimal solution, and the control f*_J is referred to as the stochastic optimal control.

In what follows we describe two functionals, denoted J1(u, p, f) and J2(u, p, f), used to solve stochastic optimal control problems. The first functional, defined in (3.5), is the standard classical approach, based on stochastic least-squares approximation. The second functional, defined in (3.8), uses statistical tracking objectives and is easily generalizable to higher-order moments, similarly to (3.20). We derive the corresponding adjoint equations, state the necessary conditions for existence and uniqueness of the stochastic optimal solution, and prove the necessary conditions for optimality.

3.1.1. The optimal control problem using stochastic least squares minimization. For given data κ ∈ A_ad, we consider the following optimal control problem associated with a stochastic elliptic boundary value problem:

(P.1) Minimize the cost functional

(3.5)
\[
J_1(u,p,f) = \mathbb{E}\Big[\frac12\|u(\omega,\cdot)-\bar u(\omega,\cdot)\|^2_{(L^2(D))^d} + \frac12\|p(\omega,\cdot)-\bar p(\omega,\cdot)\|^2_{L^2(D)}\Big]
 + \mathbb{E}\Big[\frac{\alpha}{2}\|f(\omega,\cdot)\|^2_{L^2(D)}\Big],
\]

over all (u, p, f) ∈ B_ad, subject to the stochastic mixed state equations (2.1).

Using standard techniques (see, e.g., [40, 41, 7, 9, 48, 31, 30, 29]) one can prove that problem (3.5)-(2.1) has a unique optimal solution, characterized by a maximum-principle-type result. We introduce the co-state elliptic equations, written in weak mixed form:

(3.6)
\[
\begin{aligned}
&\mathbb{E}\big[(\kappa^{-1}q, v) - (z, \nabla\cdot v)\big] = -\mathbb{E}\big[(u-\bar u, v)\big], &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot q, w)\big] = \mathbb{E}\big[(p-\bar p, w)\big], &&\forall w\in L^2(D).
\end{aligned}
\]

We now state the necessary conditions for optimality in problem (P.1).

Proposition 3.2. (u, p, f) is the unique optimal solution of problem (3.5)-(2.4) if and only if there exists a co-state (q, z) ∈ L²_P(Ω; H(div, D)) × L²_P(Ω; L²(D)) such that (u, p, f, q, z) satisfies the following optimality conditions:

(3.7)
\[
\begin{aligned}
&\mathbb{E}\big[(\kappa^{-1}u, v) - (p, \nabla\cdot v)\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u, w)\big] = \mathbb{E}\big[(f, w)\big], &&\forall w\in L^2(D),\\
&\mathbb{E}\big[(\kappa^{-1}q, v) - (z, \nabla\cdot v)\big] = -\mathbb{E}\big[(u-\bar u, v)\big], &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot q, w)\big] = \mathbb{E}\big[(p-\bar p, w)\big], &&\forall w\in L^2(D),\\
&\mathbb{E}\big[(z+\alpha f,\, f_s - f)\big] \ge 0, &&\forall (u,p,f_s)\in B_{ad}.
\end{aligned}
\]

The proof of this result follows in a manner similar to that of the next result, Theorem 3.3. We note that it is possible to solve the coupled optimality system in one shot; see, e.g., [40].

3.1.2. The optimal control problem utilizing statistical tracking objectives. Now we aim at matching expected values, i.e., we consider the following problem:

(P.2) Minimize the cost functional

(3.8)
\[
J_2(u,p,f) = \frac12\|\mathbb{E}u(\cdot,x)-\mathbb{E}\bar u(\cdot,x)\|^2_{(L^2(D))^d}
 + \frac12\|\mathbb{E}p(\cdot,x)-\mathbb{E}\bar p(\cdot,x)\|^2_{L^2(D)}
 + \frac{\alpha}{2}\int_D \mathbb{E}f^2(\cdot,x)\,dx,
\]

over all (u, p, f) ∈ B_ad, subject to the stochastic mixed state equations (2.1).

Remark 3.1. Note that we have
\[
\int_D \big[\mathbb{E}u(\cdot,x)-\mathbb{E}\bar u(\cdot,x)\big]^2 dx \le \mathbb{E}\big(\|u-\bar u\|^2_{L^2(D)}\big), \qquad
\int_D \big[\mathbb{E}p(\cdot,x)-\mathbb{E}\bar p(\cdot,x)\big]^2 dx \le \mathbb{E}\big(\|p-\bar p\|^2_{L^2(D)}\big),
\]
which justifies the functional (3.8).
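These are Jensen-type inequalities and can be checked numerically on synthetic random fields. The fields below are invented solely for the check and do not come from the model problem:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nsamples = 64, 5000
x = np.linspace(0.0, 1.0, nx)

# synthetic random fields p(omega, x) and target pbar(omega, x)
Y = rng.uniform(-1.0, 1.0, size=(nsamples, 3))
Z = rng.uniform(-1.0, 1.0, size=(nsamples, 3))
modes = np.sin(np.pi * np.outer(np.arange(1, 4), x))          # shape (3, nx)
p = Y @ modes
pbar = 0.5 * Y @ modes + 0.1 * Z @ modes

lhs = np.trapz((p.mean(axis=0) - pbar.mean(axis=0)) ** 2, x)  # ||Ep - Epbar||^2_{L2(D)}
rhs = np.mean(np.trapz((p - pbar) ** 2, x, axis=1))           # E ||p - pbar||^2_{L2(D)}
print(lhs, rhs, lhs <= rhs)   # lhs <= rhs, consistent with Remark 3.1
```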

Theorem 3.3. The 3-tuple (u, p, f) is the unique optimal solution of problem (3.8)-(2.4) if and only if there exists a co-state (q, z) ∈ L²_P(Ω; H(div, D)) × L²_P(Ω; L²(D)) such that (u, p, f, q, z) satisfies the following optimality conditions:

(3.9)
\[
\begin{aligned}
&\mathbb{E}\big[(\kappa^{-1}u, v) - (p, \nabla\cdot v)\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u, w)\big] = \mathbb{E}\big[(f, w)\big], &&\forall w\in L^2(D),\\
&\mathbb{E}\big[(\kappa^{-1}q, v) - (z, \nabla\cdot v)\big] = -\mathbb{E}\big[(\mathbb{E}u-\mathbb{E}\bar u, v)\big], &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot q, w)\big] = \mathbb{E}\big[(\mathbb{E}p-\mathbb{E}\bar p, w)\big], &&\forall w\in L^2(D),\\
&\mathbb{E}\big[(z+\alpha f,\, f_s - f)\big] \ge 0, &&\forall (u,p,f_s)\in B_{ad}.
\end{aligned}
\]

Proof. The sensitivity equations corresponding to the state equations (2.4) are

(3.10)
\[
\begin{aligned}
&\mathbb{E}\big[(\kappa^{-1}u_s, v) - (p_s, \nabla\cdot v)\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u_s, w)\big] = \mathbb{E}\big[(f_s, w)\big], &&\forall w\in L^2(D),
\end{aligned}
\]

where f_s ∈ L²_P(Ω; L²(D)), p_s ∈ L²_P(Ω; L²(D)), and u_s ∈ L²_P(Ω; H(div, D)). Then the optimality condition for problem (3.8) reads

(3.11)
\[
\begin{aligned}
0 \le{}& \frac{dJ_2(u|_f,p|_f,f)}{df}\,f_s \equiv \frac{dJ_2(u,p,f)}{d(u,p,f)}(u_s,p_s,f_s)\\
={}& \int_D \mathbb{E}[u_s(\cdot,x)]\cdot\mathbb{E}[u(\cdot,x)-\bar u(\cdot,x)]\,dx
 + \int_D \mathbb{E}[p_s(\cdot,x)]\,\mathbb{E}[p(\cdot,x)-\bar p(\cdot,x)]\,dx
 + \alpha\int_D \mathbb{E}[f f_s]\,dx\\
={}& \int_D \mathbb{E}\big[u_s(\cdot,x)\cdot\mathbb{E}[u(\cdot,x)-\bar u(\cdot,x)]\big]dx
 + \int_D \mathbb{E}\big[p_s(\cdot,x)\,\mathbb{E}[p(\cdot,x)-\bar p(\cdot,x)]\big]dx
 + \alpha\int_D \mathbb{E}[f f_s]\,dx
 \quad\text{(since $\mathbb{E}[u-\bar u]$, $\mathbb{E}[p-\bar p]$ are deterministic)}\\
={}& \int_D \mathbb{E}\big[-u_s\cdot\kappa^{-1}q + z\,\nabla\cdot u_s\big]\,dx
 + \int_D \mathbb{E}\big[p_s\,\nabla\cdot q\big]\,dx + \alpha\int_D \mathbb{E}[f f_s]\,dx
 \quad\text{(by (3.9) with $v=u_s$ and $w=p_s$)}\\
={}& \mathbb{E}\Big[\int_D\big(-u_s\cdot\kappa^{-1}q + z\,\nabla\cdot u_s\big)\,dx\Big]
 + \mathbb{E}\Big[\int_D p_s\,\nabla\cdot q\,dx\Big] + \alpha\,\mathbb{E}\Big[\int_D f f_s\,dx\Big]
 \quad\text{(by Fubini's theorem)}\\
={}& \mathbb{E}\Big[\int_D\big(-u_s\cdot\kappa^{-1}q + z\,\nabla\cdot u_s\big)\,dx\Big]
 + \mathbb{E}\Big[\int_D \kappa^{-1}u_s\cdot q\,dx\Big] + \mathbb{E}\Big[\int_D \alpha f f_s\,dx\Big]
 \quad\text{(by (3.10) with $v=q$)}\\
={}& \mathbb{E}\Big[\int_D f_s\,z\,dx\Big] + \mathbb{E}\Big[\int_D \alpha f f_s\,dx\Big]
 \quad\text{(by (3.10) with $w=z$)}\\
={}& \mathbb{E}\Big[\int_D (z+\alpha f)\,f_s\,dx\Big] = \mathbb{E}\big[(z+\alpha f,\, f_s)\big],
 \qquad \forall\,(u_s,p_s,f_s)\in \mathrm{Tan}_{B_{ad}}(u,p,f),
\end{aligned}
\]
i.e., z + αf ∈ N_{B_ad}, where we have used the fact that E[u(·,x) − ū(·,x)] is a deterministic quantity, the adjoint equations (3.9), Fubini's theorem, the sensitivity equations (3.10), and the definition of the normal cone. Here Tan_{B_ad} denotes the tangent cone, while N_{B_ad} is the normal cone (see [16]). □

The necessary and sufficient conditions (3.9) resemble the optimality system (3.7); the only difference is in the adjoint equations, which now have a deterministic right-hand side. Nevertheless, the adjoint variables (q, z) are still stochastic quantities.

3.2. Stochastic parameter identification problems. We also study the identification of the coefficient κ in the stochastic boundary value problem (2.1). In the deterministic case, for the direct problem, where κ is given, the existence and uniqueness results are well known; see, e.g., [36]. The linear deterministic inverse problem related to (2.1) has been studied in, e.g., [7, 48]; for the nonlinear deterministic case see, e.g., [8].

For the identification problem, we are given possibly perturbed observations ū, p̄ corresponding to the state variables u and p, respectively, and we must determine κ in (2.1) such that u(κ) = ū and p(κ) = p̄ in Ω × D. Of course, such a κ may not exist.

3.2.1. Parameter identification using stochastic least squares minimization. The least squares approach leads us to the minimization problem:

(P.3) Minimize the cost functional

(3.12)
\[
J_3(u,p,\kappa) = \mathbb{E}\Big[\frac12\|u(\omega,\cdot)-\bar u(\omega,\cdot)\|^2_{(L^2(D))^d} + \frac12\|p(\omega,\cdot)-\bar p(\omega,\cdot)\|^2_{L^2(D)}\Big]
 + \mathbb{E}\Big[\frac{\beta}{2}\|\kappa(\omega,\cdot)\|^2_{L^2(D)}\Big],
\]

over all (u, p, κ) ∈ C_ad, subject to the stochastic mixed state equations (2.1).

We introduce the co-state elliptic equations for problem (P.3):

(3.13)
\[
\begin{aligned}
&\mathbb{E}\big[((\kappa^*)^{-1}q, v) - (\eta,\nabla\cdot v)\big] = \mathbb{E}\big[-(u^*-\bar u, v)\big], &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot q, w)\big] = \mathbb{E}\big[(p^*-\bar p, w)\big], &&\forall w\in L^2(D).
\end{aligned}
\]

Theorem 3.4. Let (u*, p*, κ*) be an optimal solution of problem (3.12)-(2.4). Then there exists a co-state (q, η) ∈ L²_P(Ω; H(div, D)) × L²_P(Ω; L²(D)) such that (u*, p*, κ*, q, η) satisfies the following optimality conditions:

(3.14)
\[
\begin{aligned}
&\mathbb{E}\big[((\kappa^*)^{-1}u^*, v) - (p^*, \nabla\cdot v)\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u^*, w)\big] = \mathbb{E}\big[(f, w)\big], &&\forall w\in L^2(D),\\
&\mathbb{E}\big[((\kappa^*)^{-1}q, v) - (\eta, \nabla\cdot v)\big] = \mathbb{E}\big[-(u^*-\bar u, v)\big], &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot q, w)\big] = \mathbb{E}\big[(p^*-\bar p, w)\big], &&\forall w\in L^2(D),\\
&\kappa^*(\omega,x) = \max\Big\{\kappa_{\min},\,\min\Big\{\tfrac1\beta(\kappa^*)^{-2}u^*(\omega,x)\cdot q(\omega,x),\,\kappa_{\max}\Big\}\Big\}, &&\text{a.e. in } \Omega\times D.
\end{aligned}
\]

Proof. The sensitivity equations are

(3.15)
\[
\begin{aligned}
&\mathbb{E}\big[((\kappa^*)^{-1}u_s - (\kappa^*)^{-2}\kappa_s u^*, v) - (p_s, \nabla\cdot v)\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u_s, w)\big] = 0, &&\forall w\in L^2(D),
\end{aligned}
\]
where (u_s, p_s, κ_s) ∈ Tan C_ad(u*, p*, κ*).

Let S_ad = {(u, p, κ) ∈ C_ad : (u, p, κ) satisfies the state equations (2.1)} be the set of admissible states and parameters for problem (3.12). We introduce the tangent cone to the set S_ad at (u, p, κ) ∈ S_ad,

(3.16)
\[
\mathrm{Tan}\,S_{ad}(u,p,\kappa) = \big\{(u_s,p_s,\kappa_s) \text{ satisfying the sensitivity equations (3.15)},\;
u_s\in L^2_P(\Omega;H(\mathrm{div},D)),\; p_s\in L^2_P(\Omega;L^2(D)),\; \kappa_s\in \mathrm{Tan}\,A_{ad}\big\}.
\]

Recall that if
\[
J(u^*,p^*,\kappa^*) = \inf_{(u,p,\kappa)\in S_{ad}} J(u,p,\kappa)
\]
and the functional J(u, p, κ) is Gateaux differentiable, then necessarily

(3.17)
\[
\frac{dJ(u^*,p^*,\kappa^*)}{d(u,p,\kappa)}(u_s,p_s,\kappa_s) \ge 0 \quad \text{for all } (u_s,p_s,\kappa_s)\in \mathrm{Tan}\,S_{ad}(u^*,p^*,\kappa^*),
\]
where dJ(u*,p*,κ*)/d(u,p,κ) ≡ dJ(u(κ*),p(κ*),κ*)/dκ stands for the Gateaux derivative of J at (u*, p*, κ*) ∈ S_ad, and (u*, p*, κ*) ≡ (u(κ*), p(κ*), κ*). Applying the optimality principle given by (3.17), it follows that the optimality condition for problem (3.12) reads
\[
\begin{aligned}
0 \le{}& \frac{dJ_3(u(\kappa^*),p(\kappa^*),\kappa^*)}{d\kappa}\,\kappa_s \equiv \frac{dJ_3(u^*,p^*,\kappa^*)}{d(u,p,\kappa)}(u_s,p_s,\kappa_s)\\
={}& \mathbb{E}\Big[\int_D u_s\cdot(u^*-\bar u)\,dx\Big] + \mathbb{E}\Big[\int_D p_s\,(p^*-\bar p)\,dx\Big] + \mathbb{E}\Big[\beta\int_D \kappa^*\kappa_s\,dx\Big]\\
={}& \mathbb{E}\Big[\int_D\big(-u_s\cdot(\kappa^*)^{-1}q + \eta\,\nabla\cdot u_s\big)\,dx\Big]
 + \mathbb{E}\Big[\int_D p_s\,\nabla\cdot q\,dx\Big] + \beta\,\mathbb{E}\Big[\int_D \kappa^*\kappa_s\,dx\Big]
 \quad\text{(by (3.13) with $v=u_s$ and $w=p_s$)}\\
={}& \mathbb{E}\Big[-\int_D u_s\cdot(\kappa^*)^{-1}q\,dx\Big]
 + \mathbb{E}\Big[\int_D p_s\,\nabla\cdot q\,dx\Big] + \beta\,\mathbb{E}\Big[\int_D \kappa^*\kappa_s\,dx\Big]
 \quad\text{(by (3.15) with $w=\eta$)}\\
={}& \mathbb{E}\Big[-\int_D (\kappa^*)^{-2}\kappa_s\,u^*\cdot q\,dx - \int_D p_s\,\nabla\cdot q\,dx\Big]
 + \mathbb{E}\Big[\int_D p_s\,\nabla\cdot q\,dx\Big] + \beta\,\mathbb{E}\Big[\int_D \kappa^*\kappa_s\,dx\Big]
 \quad\text{(by (3.15) with $v=q$)}\\
={}& \mathbb{E}\Big[\int_D\big(-(\kappa^*)^{-2}u^*\cdot q + \beta\kappa^*\big)\kappa_s\,dx\Big],
\end{aligned}
\]
for all (u_s, p_s, κ_s) ∈ Tan S_ad(u*, p*, κ*). Here we have used the adjoint equations (3.13) and the sensitivity equations (3.15). Since κ_s is an arbitrary element of Tan A_ad(κ*), this variational inequality yields the pointwise projection formula for κ* in (3.14). □
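The last relation in (3.14) is a pointwise projection of the stationarity value (1/β)(κ*)^{-2} u*·q onto [κ_min, κ_max]. A hedged sketch of the corresponding clipped update, as it might be used inside an iterative solver (array names are illustrative, not from the paper), is:

```python
import numpy as np

def kappa_update(u, q, kappa_old, beta, kappa_min, kappa_max):
    """Pointwise projected update suggested by (3.14):
       kappa = clip( (1/beta) * kappa_old**(-2) * (u . q), kappa_min, kappa_max ).
       u, q: arrays of shape (..., d); kappa_old: array of shape (...)."""
    unconstrained = (u * q).sum(axis=-1) / (beta * kappa_old ** 2)
    return np.clip(unconstrained, kappa_min, kappa_max)

# tiny smoke test on random data
rng = np.random.default_rng(2)
u = rng.normal(size=(40, 40, 2))
q = rng.normal(size=(40, 40, 2))
kappa = np.full((40, 40), 1.5)
print(kappa_update(u, q, kappa, beta=1e-6, kappa_min=1.0, kappa_max=2.0).min())
```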

3.2.2. Parameter identification utilizing statistical tracking objectives. For the identification problem matching expected values, given a possibly perturbed observation (ū, p̄) corresponding to the state variables u, p, we seek κ in (2.1) such that Eu(κ) = Eū and Ep(κ) = Ep̄ in D. Therefore we consider the problem:

(P.4) Minimize the cost functional

(3.18)
\[
J_4(u,p,\kappa) = \frac12\int_D\big[\mathbb{E}u(\cdot,x)-\mathbb{E}\bar u(\cdot,x)\big]^2 dx
 + \frac12\int_D\big[\mathbb{E}p(\cdot,x)-\mathbb{E}\bar p(\cdot,x)\big]^2 dx
 + \frac{\beta}{2}\int_D \mathbb{E}\kappa^2(\cdot,x)\,dx,
\]

over all (u, p, κ) ∈ C_ad, subject to the stochastic state equations (2.1).

Theorem 3.5. Let (u, p, κ) be an optimal solution of problem (3.18)-(2.4). Then there exists a co-state (q, η) ∈ L²_P(Ω; H(div, D)) × L²_P(Ω; L²(D)) such that (u, p, κ, q, η) satisfies the following optimality conditions:

(3.19)
\[
\begin{aligned}
&\mathbb{E}\big[(\kappa^{-1}u, v) - (p, \nabla\cdot v)\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u, w)\big] = \mathbb{E}\big[(f, w)\big], &&\forall w\in L^2(D),\\
&\mathbb{E}\big[(\kappa^{-1}q, v) - (\eta, \nabla\cdot v)\big] = \mathbb{E}\big[-(\mathbb{E}u-\mathbb{E}\bar u, v)\big], &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot q, w)\big] = \mathbb{E}\big[(\mathbb{E}p-\mathbb{E}\bar p, w)\big], &&\forall w\in L^2(D),\\
&\kappa(\omega,x) = \max\Big\{\kappa_{\min},\,\min\Big\{\tfrac1\beta\,\kappa^{-2}\,u(\omega,x)\cdot q(\omega,x),\,\kappa_{\max}\Big\}\Big\}, &&\text{a.e. in } \Omega\times D.
\end{aligned}
\]

Proof. See the proof of Theorem 3.6. □

For the problem of matching covariance and/or higher-order moments, the cost functional used in problem (3.18) can be generalized as follows. Assume we are interested in moments up to order L, and f ∈ L^L_P(Ω; L^{2L-2}(D)); then:

(P.5) Minimize the cost functional

(3.20)
\[
J_5(u,p,\kappa) = \sum_{\ell=1}^L \frac{\alpha_{u,\ell}}{2\ell}\int_D \big[\mathbb{E}u^\ell(\cdot,x)-\mathbb{E}\bar u^\ell(\cdot,x)\big]^2 dx
 + \sum_{\ell=1}^L \frac{\alpha_{p,\ell}}{2\ell}\int_D \big[\mathbb{E}p^\ell(\cdot,x)-\mathbb{E}\bar p^\ell(\cdot,x)\big]^2 dx
 + \frac{\beta}{2}\int_D \mathbb{E}\kappa^2(\cdot,x)\,dx,
\]

over all (u, p, κ) ∈ C_ad, subject to the stochastic state equations (2.1).
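For reference, a sample-average evaluation of the pressure part of the moment-tracking terms in (3.20), with illustrative weights α_{p,ℓ} and synthetic data, could read as follows (a sketch only, not the paper's implementation):

```python
import numpy as np

def moment_tracking(p, pbar, x, alphas):
    """Sum over l of alpha_l/(2l) * || E[p^l] - E[pbar^l] ||^2_{L2(D)},
       with p, pbar of shape (nsamples, nx) and x the spatial grid."""
    total = 0.0
    for ell, alpha in enumerate(alphas, start=1):
        diff = np.mean(p ** ell, axis=0) - np.mean(pbar ** ell, axis=0)
        total += alpha / (2 * ell) * np.trapz(diff ** 2, x)
    return total

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 101)
p = rng.normal(loc=np.sin(np.pi * x), scale=0.1, size=(2000, x.size))
pbar = rng.normal(loc=np.sin(np.pi * x), scale=0.1, size=(2000, x.size))
print(moment_tracking(p, pbar, x, alphas=[1.0, 1.0, 1.0]))   # tracks mean, 2nd, 3rd moments
```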

We introduce the co-state elliptic equations for problem (P.5):

(3.21)
\[
\begin{aligned}
&\mathbb{E}\big[((\kappa^*)^{-1}q, v) - (\eta,\nabla\cdot v)\big]
 = -\mathbb{E}\Big[\sum_{\ell=1}^L \alpha_{u,\ell}\,(u^*)^{\ell-1}\big(\mathbb{E}(u^*)^\ell - \mathbb{E}\bar u^\ell,\, v\big)\Big],\\
&\mathbb{E}\big[(\nabla\cdot q, w)\big]
 = \mathbb{E}\Big[\sum_{\ell=1}^L \alpha_{p,\ell}\,(p^*)^{\ell-1}\big(\mathbb{E}(p^*)^\ell - \mathbb{E}\bar p^\ell,\, w\big)\Big],
\end{aligned}
\]
for all v ∈ H(div, D), w ∈ L²(D).

Theorem 3.6. Let (u*, p*, κ*) be an optimal solution of problem (3.20)-(2.4). Then there exists a co-state (q, η) ∈ L²_P(Ω; H(div, D)) × L²_P(Ω; L²(D)) such that (u*, p*, κ*, q, η) satisfies the following optimality conditions:

(3.22)
\[
\begin{aligned}
&\mathbb{E}\big[((\kappa^*)^{-1}u^*, v) - (p^*, \nabla\cdot v)\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u^*, w)\big] = \mathbb{E}\big[(f, w)\big], &&\forall w\in L^2(D),\\
&\mathbb{E}\big[((\kappa^*)^{-1}q, v) - (\eta, \nabla\cdot v)\big]
 = -\mathbb{E}\Big[\sum_{\ell=1}^L \alpha_{u,\ell}\,(u^*)^{\ell-1}\big(\mathbb{E}(u^*)^\ell - \mathbb{E}\bar u^\ell,\, v\big)\Big], &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot q, w)\big]
 = \mathbb{E}\Big[\sum_{\ell=1}^L \alpha_{p,\ell}\,(p^*)^{\ell-1}\big(\mathbb{E}(p^*)^\ell - \mathbb{E}\bar p^\ell,\, w\big)\Big], &&\forall w\in L^2(D),\\
&\kappa^*(\omega,x) = \max\Big\{\kappa_{\min},\,\min\Big\{\tfrac1\beta(\kappa^*)^{-2}u^*(\omega,x)\cdot q(\omega,x),\,\kappa_{\max}\Big\}\Big\}, &&\text{a.e. in } \Omega\times D.
\end{aligned}
\]

Proof. The sensitivity equations are

(3.23)
\[
\begin{aligned}
&\mathbb{E}\big[((\kappa^*)^{-1}u_s - (\kappa^*)^{-2}\kappa_s u^*, v) - (p_s, \nabla\cdot v)\big] = 0, &&\forall v\in H(\mathrm{div},D),\\
&\mathbb{E}\big[(\nabla\cdot u_s, w)\big] = 0, &&\forall w\in L^2(D),
\end{aligned}
\]
where (u_s, p_s, κ_s) ∈ Tan C_ad(u*, p*, κ*). Applying the optimality principle given by (3.17), it follows that the optimality condition for problem (3.20) reads
\[
\begin{aligned}
0 \le{}& \frac{dJ_5(u(\kappa^*),p(\kappa^*),\kappa^*)}{d\kappa}\,\kappa_s \equiv \frac{dJ_5(u^*,p^*,\kappa^*)}{d(u,p,\kappa)}(u_s,p_s,\kappa_s)\\
={}& \sum_{\ell=1}^L \int_D \alpha_{u,\ell}\,\mathbb{E}\big[u_s\cdot(u^*)^{\ell-1}\big]\,\mathbb{E}\big[(u^*)^\ell-\bar u^\ell\big]\,dx
 + \sum_{\ell=1}^L \int_D \alpha_{p,\ell}\,\mathbb{E}\big[p_s\,(p^*)^{\ell-1}\big]\,\mathbb{E}\big[(p^*)^\ell-\bar p^\ell\big]\,dx
 + \beta\int_D \mathbb{E}\big[\kappa^*\kappa_s\big]\,dx\\
={}& \sum_{\ell=1}^L \int_D \alpha_{u,\ell}\,\mathbb{E}\big[u_s\cdot(u^*)^{\ell-1}\,\mathbb{E}\big((u^*)^\ell-\bar u^\ell\big)\big]\,dx
 + \sum_{\ell=1}^L \int_D \alpha_{p,\ell}\,\mathbb{E}\big[p_s\,(p^*)^{\ell-1}\,\mathbb{E}\big((p^*)^\ell-\bar p^\ell\big)\big]\,dx
 + \beta\int_D \mathbb{E}\big[\kappa^*\kappa_s\big]\,dx\\
={}& \int_D \mathbb{E}\big[-u_s\cdot(\kappa^*)^{-1}q + \eta\,\nabla\cdot u_s\big]\,dx
 + \int_D \mathbb{E}\big[p_s\,\nabla\cdot q\big]\,dx + \beta\int_D \mathbb{E}\big[\kappa^*\kappa_s\big]\,dx
 \quad\text{(by (3.21) with $v=u_s$ and $w=p_s$)}\\
={}& \mathbb{E}\Big[\int_D\big(-u_s\cdot(\kappa^*)^{-1}q + \eta\,\nabla\cdot u_s\big)\,dx\Big]
 + \mathbb{E}\Big[\int_D p_s\,\nabla\cdot q\,dx\Big] + \beta\,\mathbb{E}\Big[\int_D \kappa^*\kappa_s\,dx\Big]
 \quad\text{(by Fubini's theorem)}\\
={}& \mathbb{E}\Big[-\int_D u_s\cdot(\kappa^*)^{-1}q\,dx\Big]
 + \mathbb{E}\Big[\int_D p_s\,\nabla\cdot q\,dx\Big] + \beta\,\mathbb{E}\Big[\int_D \kappa^*\kappa_s\,dx\Big]
 \quad\text{(by (3.23) with $w=\eta$)}\\
={}& \mathbb{E}\Big[-\int_D (\kappa^*)^{-2}\kappa_s\,u^*\cdot q\,dx - \int_D p_s\,\nabla\cdot q\,dx\Big]
 + \mathbb{E}\Big[\int_D p_s\,\nabla\cdot q\,dx\Big] + \beta\,\mathbb{E}\Big[\int_D \kappa^*\kappa_s\,dx\Big]
 \quad\text{(by (3.23) with $v=q$)}\\
={}& \mathbb{E}\Big[\int_D\big(-(\kappa^*)^{-2}u^*\cdot q + \beta\kappa^*\big)\kappa_s\,dx\Big],
\end{aligned}
\]
for all (u_s, p_s, κ_s) ∈ Tan S_ad(u*, p*, κ*). Here we have used the adjoint equations (3.21) and the sensitivity equations (3.23). As in the proof of Theorem 3.4, the resulting variational inequality yields the projection formula for κ* in (3.22). □

4. Error estimates for the parameter identification problems under open-interval parameter and finite-dimensional noise assumptions. For the reader's convenience, we provide a brief derivation of error estimates [28] under the following assumptions:

(i) (Finite-dimensional noise) κ(ω, x) and f(ω, x) can be expressed in terms of a finite number of independent random variables with bounded support, y1(ω), ..., yN(ω).

(ii) The coefficient κ belongs to the open interval (κ_min, κ_max), i.e.,

(4.1)
\[
P\big[\omega\in\Omega : \kappa_{\min} < \kappa(\omega,x) < \kappa_{\max} \ \ \forall x\in D\big] = 1.
\]

Using the finite-dimensional noise assumption (i), the solution (u, p) of (2.1) depends on the realization ω ∈ Ω through the value taken by the random vector y(ω) = (y1(ω), ..., yN(ω)), i.e., u = u(y(ω), x), p = p(y(ω), x). Therefore, we can replace the probability space (Ω, F, P) with (Γ, B(Γ), ρ(y)dy), where Γ = y(Ω) denotes the image of the random vector y, B(Γ) denotes the Borel σ-algebra on Γ, and ρ(y)dy denotes the distribution measure of the vector y, with ρ(y) : Γ → R+ denoting the joint probability density function of y. Then the stochastic domain is Γ = ∏_{n=1}^N Γn, where Γn = yn(Ω) denotes the image of the random variable yn, and likewise ρ(y) = ∏_{n=1}^N ρn(yn), where ρn(yn) : Γn → R+ denotes the probability density function of the random variable yn.

Denoting

(4.2)
\[
\kappa_1 = \tfrac12\big(\kappa_{\min}+\kappa_{\max}\big), \qquad
\kappa_2 = \tfrac1\pi\big(\kappa_{\max}-\kappa_{\min}\big),
\]

we have that, under assumption (ii), the map ν ↦ κ1 + κ2 arctan(ν) in

(4.3)
\[
\kappa(\omega,x) = \kappa_1 + \kappa_2\arctan\big(\nu(\omega,x)\big)
\]

is a bijection from R onto (κ_min, κ_max) and yields a re-parametrization of problems (P.3)-(P.5) without any restriction on the new variable ν:
\[
\min_{(u,p,\kappa)\in C_{ad}} \big\{ J_i(u,p,\kappa) : \text{subject to (2.4)} \big\}
= \min_{(u,p,\nu)\in C^\nu_{ad}} \big\{ J_i(u,p,\kappa(\nu)) : \text{(2.4), (4.3)} \big\},
\]
where
\[
C^\nu_{ad} = \big\{(u,p,\kappa) \mid u\in L^2_P(\Omega;H(\mathrm{div},D)),\ p\in L^2_P(\Omega;L^2(D)),\ \kappa\in L^\infty(\Omega;L^\infty(D))\big\}.
\]

The optimality systems for problems (P.3)-(P.5) can now be expressed in terms of ν instead of κ. For i = 3, 4, 5, we denote by ν_{Ji} the optimal parameter, by u_{Ji}, p_{Ji} the corresponding state variables, and by q_{Ji}, η_{Ji} the adjoint states.
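The re-parametrization (4.2)-(4.3) can be verified numerically; the small sketch below (with arbitrarily chosen bounds, purely for illustration) checks that ν ↦ κ1 + κ2 arctan(ν) maps into (κ_min, κ_max) and is inverted by ν = tan((κ − κ1)/κ2):

```python
import numpy as np

kappa_min, kappa_max = 1.0, 2.0                  # illustrative bounds
kappa1 = 0.5 * (kappa_min + kappa_max)           # (4.2)
kappa2 = (kappa_max - kappa_min) / np.pi

def kappa_of_nu(nu):
    return kappa1 + kappa2 * np.arctan(nu)       # (4.3): R -> (kappa_min, kappa_max)

def nu_of_kappa(kappa):
    return np.tan((kappa - kappa1) / kappa2)     # inverse map

nu = np.linspace(-50.0, 50.0, 11)
kappa = kappa_of_nu(nu)
assert np.all((kappa > kappa_min) & (kappa < kappa_max))
assert np.allclose(nu_of_kappa(kappa), nu)
print(kappa.min(), kappa.max())
```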

In view of (2.3) and the above considerations, the state equations (2.4) become

(4.4)
\[
\begin{aligned}
&\int_D \Big(\big(\kappa_1+\kappa_2\arctan(\nu_{J_i}(y,x))\big)^{-1} u_{J_i}(y,x)\cdot v(x) - p_{J_i}(y,x)\,\nabla\cdot v(x)\Big)\,dx = 0,\\
&\int_D \nabla\cdot u_{J_i}(y,x)\,w(x)\,dx = \int_D f(y,x)\,w(x)\,dx,
\end{aligned}
\]

for all v ∈ H(div, D), w ∈ L²(D), ρ-a.e. in Γ. Similarly, the adjoint equations are

(4.5)
\[
\begin{aligned}
&\int_D \Big(\big(\kappa_1+\kappa_2\arctan(\nu_{J_i}(y,x))\big)^{-1} q_{J_i}(y,x)\cdot v(x) - \eta_{J_i}(y,x)\,\nabla\cdot v(x)\Big)\,dx\\
&\qquad=
\begin{cases}
-\displaystyle\int_D \big(u_{J_3}(y,x)-\bar u(y,x)\big)\cdot v(x)\,dx,\\[2pt]
-\displaystyle\int_D \big(\mathbb{E}u_{J_4}(\cdot,x)-\mathbb{E}\bar u(\cdot,x)\big)\cdot v(x)\,dx,\\[2pt]
-\displaystyle\int_D \sum_{\ell=1}^L \alpha_{u,\ell}\big(u_{J_5}(y,x)\big)^{\ell-1}\big(\mathbb{E}(u_{J_5}(\cdot,x))^\ell-\mathbb{E}(\bar u(\cdot,x))^\ell\big)\,v(x)\,dx,
\end{cases}\\
&\int_D \nabla\cdot q_{J_i}(y,x)\,w(x)\,dx =
\begin{cases}
\displaystyle\int_D \big(p_{J_3}(y,x)-\bar p(y,x)\big)\,w(x)\,dx,\\[2pt]
\displaystyle\int_D \big(\mathbb{E}p_{J_4}(\cdot,x)-\mathbb{E}\bar p(\cdot,x)\big)\,w(x)\,dx,\\[2pt]
\displaystyle\int_D \sum_{\ell=1}^L \alpha_{p,\ell}\,p_{J_5}^{\ell-1}(y,x)\big(\mathbb{E}p_{J_5}^\ell(\cdot,x)-\mathbb{E}\bar p^\ell(\cdot,x)\big)\,w(x)\,dx,
\end{cases}
\end{aligned}
\]
for i = 3, 4, 5 respectively, and the optimality condition reads

(4.6)
\[
\nu_{J_i}(y,x) = \tan\Big(\frac{1}{\kappa_2}\Big(\big(\tfrac1\beta\,u_{J_i}\cdot q_{J_i}\big)^{1/3}-\kappa_1\Big)\Big).
\]

In particular, using (4.3) and (4.6), we have that

(4.7)
\[
(\kappa_{\min},\kappa_{\max}) \ni \kappa(\nu_{J_i}) = \big(\tfrac1\beta\,u_{J_i}\cdot q_{J_i}\big)^{1/3} \quad\text{a.s., a.e.},
\]

hence the optimality system for problems (P.3)-(P.5) can be written as
\[
\begin{aligned}
&\int_D \Big(\big(\tfrac1\beta\,u_{J_i}(y,x)\cdot q_{J_i}(y,x)\big)^{-1/3} u_{J_i}(y,x)\cdot v(x) - p_{J_i}(y,x)\,\nabla\cdot v(x)\Big)\,dx = 0,\\
&\int_D \nabla\cdot u_{J_i}(y,x)\,w(x)\,dx = \int_D f(y,x)\,w(x)\,dx,\\
&\int_D \Big(\big(\tfrac1\beta\,u_{J_i}(y,x)\cdot q_{J_i}(y,x)\big)^{-1/3} q_{J_i}(y,x)\cdot v(x) - \eta_{J_i}(y,x)\,\nabla\cdot v(x)\Big)\,dx\\
&\qquad=
\begin{cases}
-\displaystyle\int_D \big(u_{J_3}(y,x)-\bar u(y,x)\big)\cdot v(x)\,dx,\\[2pt]
-\displaystyle\int_D \big(\mathbb{E}u_{J_4}(\cdot,x)-\mathbb{E}\bar u(\cdot,x)\big)\cdot v(x)\,dx,\\[2pt]
-\displaystyle\int_D \sum_{\ell=1}^L \alpha_{u,\ell}\big(u_{J_5}(y,x)\big)^{\ell-1}\big(\mathbb{E}(u_{J_5}(\cdot,x))^\ell-\mathbb{E}(\bar u(\cdot,x))^\ell\big)\,v(x)\,dx,
\end{cases}\\
&\int_D \nabla\cdot q_{J_i}(y,x)\,w(x)\,dx =
\begin{cases}
\displaystyle\int_D \big(p_{J_3}(y,x)-\bar p(y,x)\big)\,w(x)\,dx,\\[2pt]
\displaystyle\int_D \big(\mathbb{E}p_{J_4}(\cdot,x)-\mathbb{E}\bar p(\cdot,x)\big)\,w(x)\,dx,\\[2pt]
\displaystyle\int_D \sum_{\ell=1}^L \alpha_{p,\ell}\,p_{J_5}^{\ell-1}(y,x)\big(\mathbb{E}p_{J_5}^\ell(\cdot,x)-\mathbb{E}\bar p^\ell(\cdot,x)\big)\,w(x)\,dx,
\end{cases}
\end{aligned}
\]
for i = 3, 4, 5, respectively.

Page 14: OPTIMAL CONTROL OF SYSTEMS GOVERNED BY PDES WITH … · OPTIMAL CONTROL OF SYSTEMS GOVERNED BY PDES WITH RANDOM PARAMETER FIELDS ROXANA TANASE , CATALIN TRENCHEA y, AND IVAN YOTOV

The error estimates we derive make use of the Fink-Rheinboldt theory [19, 20,21, 22] concerning the approximation of a class of nonlinear problems. For the sakeof completeness, we state the relevant results of that theory, specialized to our needs.

The parameter-dependent nonlinear problems considered are of the type

(4.8) F (θ) ≡ F (ζ, λ) = 0,

where F : Θ = Z ×Λ→ Ξ is a Cr, r ≥ 1, Fredholm mapping of index 1 from an opensubset S of a real Banach space Θ into another Banach space Ξ. In our context, Z isan infinite-dimensional space whereas Λ ⊂ R is a one-dimensional parameter space.We are interested in approximations of solutions of the infinite-dimensional problem(4.8) obtained by solving a finite-dimensional (discretized) approximate problem

(4.9) F (θ) ≡ F (ζ, λ) = 0,

where F is a mapping from the discretized space Θ = Z × Λ into another discretizedspace Ξ.

Suppose we have a point θ0 ∈ Θ which may not solve the problem (4.8) butinstead certainly solves the problem

(4.10) F (θ) ≡ F (ζ0, λ0) = ξ0 with ξ0 := F (θ0)

so that it lies on the solution manifold Mξ0 of this equation. If DF (θ) ∈ L (Θ,Ξ)denotes the total Frechet derivative of F at the point θ, then a point θ ∈ S is referredto as a a regular point of F if DF (θ) is surjective; a point ξ ∈ Ξ is a regular valueof F if F (−1)(ξ) contains only regular points. If ξ ∈ Ξ is a regular value of F , thenMξ ≡ F (−1)(ξ) is an m-dimensional Cr-manifold in X without boundary.

A basis on Λ can serve as a local coordinate system for the manifold Mξ0 at apoint θ0 ∈Mξ0 provided that kerDζF (θ0) = 0, where DζF (θ0) ∈ L (Z, Y ) denotesthe Z-partial derivative of F at the point θ0.

Theorem 4.1. [20] Let ξ0 denote a regular value of F and let θ0 ∈Mξ0 . Supposethat Θ = Z × Λ → Ξ, dim Λ = 1, is a splitting such that kerDζF (θ0) = 0. Setθ0 = (ζ0, λ0) with ζ0 ∈ Z and λ0 ∈ Λ. Then, there exist open neighborhoods B ⊂ Λ ofλ0 and U ⊂ Θ of θ0 and a unique Cr-function ζ : B → Z such that ζ(λ0) = ζ0 and

Mξ0 ∩ U = θ ∈ Θ : θ =(ζ(λ), λ

), λ ∈ B.

Any θ0 ∈ S where kerDζF (θ0) = 0 is referred to as a nonsingular point of F(with respect to the splitting X = Z ⊕ Λ); otherwise, θ0 is called a singular point.Thus, if θ0 ∈ S is a nonsingular point, then kerDζF (θ0) is an isomorphism of Z ontoΞ.

We next choose a finite-dimensional subspace Z ⊂ Z, a projection QZ → Z ofZ onto Z, and an isomorphism J : Z → Ξ. With Θ = Z × Λ, Ξ = JZ, and theprojection P : Ξ→ Ξ specified by P = JQJ−1, we then define a function F : S → Ξby F (θ) = PF (θ), θ ∈ S. Then, corresponding to (4.9), we have the finite-dimensional(discretized) problem

(4.11) F (θ0) = ξ0 with ξ0 := F (θ0).

Theorem 4.2. [20] Suppose that the mapping F is of class Cr, r ≥ 2, and thatthe second Frechet derivative D2

θF is bounded on bounded subsets of its domain S.

14

Page 15: OPTIMAL CONTROL OF SYSTEMS GOVERNED BY PDES WITH … · OPTIMAL CONTROL OF SYSTEMS GOVERNED BY PDES WITH RANDOM PARAMETER FIELDS ROXANA TANASE , CATALIN TRENCHEA y, AND IVAN YOTOV

Consider a point θ0 ∈ S such that ξ0 = F (θ0) is a regular value of F and assume thatkerDζF (θ0) = 0. Set θ0 = (ζ0, λ0), λ0 ∈ Λ, and let θ : B ⊂ Λ→ S, θ(λ) =

(ζ(λ), λ

)denote the solution of (4.8) as given by Theorem 4.1. Set Z = Z∩Θ and suppose that

ker PDζF (θ0)|Z = 0. Then, there exist a closed ball B0 ⊂ B centered at λ0 and a

Cr-function ζ : B0 → Z such that θ(λ) =(ζ(λ), λ

)solves (4.10) for each λ ∈ B0 and

‖ζ(λ)− ζ(λ)‖Z ≤ C‖(IZ − Q)(ζ(λ)− ζ0)‖Z ∀λ ∈ B0,(4.12)

where C is a constant independent of λ.

We focus on the optimality system corresponding to the optimal parameter identification problem (P.4). The error analyses for problems (P.3) and (P.5) are very similar. Also, the optimal control problems (P.1) and (P.2) can be treated in a similar, albeit simpler, manner. Thus, for the sake of economy of exposition, from now on we drop the subscript (·)_{J4}, so that, e.g., u_{J4} = u, p_{J4} = p, q_{J4} = q, η_{J4} = η.

We recast the optimality system (4.4)-(4.6) into a form that fits the above framework, so that we can apply Theorem 4.2, without major modifications, to derive an error estimate for solutions of the optimality system for the parameter identification problem (P.4).

We begin by choosing the state space Z = [L²_ρ(Γ; H(div, D)) × L²_ρ(Γ; L²(D))]² and Λ = R+, with ζ = (u, p, q, η) and λ = 1/β, so that Θ = Z × Λ = [L²_ρ(Γ; H(div, D)) × L²_ρ(Γ; L²(D))]² × R+. Then the range space Ξ is the dual space of Z with respect to the standard duality pairing

(4.13)
\[
(\psi,\varphi) = \int_D\int_\Gamma \varphi\,\psi\,\rho(y)\,dy\,dx \qquad \text{for } \varphi\in Z \text{ and } \psi\in\Xi.
\]

We also choose the space S = [L⁶_ρ(Γ; H(div, D) ∩ L⁶(D)) × L²_ρ(Γ; L²(D))]² × R+ ⊂ Θ.

For convenience, we choose to define the function F(θ) and its derivatives weakly, in terms of the duality pairing. Following (4.6) and (4.3), for each θ = ((u, p, q, η), β) ∈ Θ we denote

(4.14)
\[
\nu(\theta) = \tan\Big(\frac{1}{\kappa_2}\Big(\big(\tfrac1\beta\,u\cdot q\big)^{1/3}-\kappa_1\Big)\Big)
\quad\text{and}\quad
\kappa(\theta) = \kappa_1 + \kappa_2\arctan(\nu(\theta)),
\]

and define F(θ) by

(4.15)
\[
\begin{aligned}
\big(F((u,p,q,\eta),\beta),(\tilde u,\tilde p,\tilde q,\tilde\eta)\big)
={}& \big(\kappa^{-1}(\theta)\,u,\tilde u\big) - \big(p,\nabla\cdot\tilde u\big) + \big(\nabla\cdot u - f,\tilde p\big)\\
&+ \big(\kappa^{-1}(\theta)\,q,\tilde q\big) - \big(\eta,\nabla\cdot\tilde q\big) + \big(\mathbb{E}u-\mathbb{E}\bar u,\tilde q\big)
+ \big(\nabla\cdot q - (\mathbb{E}p-\mathbb{E}\bar p),\tilde\eta\big)
\qquad \forall\,(\tilde u,\tilde p,\tilde q,\tilde\eta)\in Z,
\end{aligned}
\]

so that the optimality conditions (4.4)-(4.6) for problem (P.4) become: seek (u, p, q, η) ∈ Z = [L²_ρ(Γ; H(div, D)) × L²_ρ(Γ; L²(D))]² such that

(4.16)
\[
\big(F((u,p,q,\eta),\beta),(\tilde u,\tilde p,\tilde q,\tilde\eta)\big) = 0 \qquad \forall\,(\tilde u,\tilde p,\tilde q,\tilde\eta)\in Z.
\]

Let Ẑ ⊂ Z denote a finite-dimensional subspace. Then, from (4.15), for any θ ∈ Θ the discrete operator F̂(θ) = P̂F(θ) is defined by

(4.17)
\[
\begin{aligned}
\big(\hat F((u,p,q,\eta),\beta),(\tilde u,\tilde p,\tilde q,\tilde\eta)\big)
={}& \big(\kappa^{-1}(\theta)\,u,\tilde u\big) - \big(p,\nabla\cdot\tilde u\big) + \big(\nabla\cdot u - f,\tilde p\big)\\
&+ \big(\kappa^{-1}(\theta)\,q,\tilde q\big) - \big(\eta,\nabla\cdot\tilde q\big) + \big(\mathbb{E}u-\mathbb{E}\bar u,\tilde q\big)
+ \big(\nabla\cdot q - (\mathbb{E}p-\mathbb{E}\bar p),\tilde\eta\big)
\qquad \forall\,(\tilde u,\tilde p,\tilde q,\tilde\eta)\in \hat Z,
\end{aligned}
\]

and the discrete problem (4.9) becomes: seek (û, p̂, q̂, η̂) ∈ Ẑ ⊂ Z such that

(4.18)
\[
\big(\hat F((\hat u,\hat p,\hat q,\hat\eta),\beta),(\tilde u,\tilde p,\tilde q,\tilde\eta)\big) = 0 \qquad \forall\,(\tilde u,\tilde p,\tilde q,\tilde\eta)\in \hat Z.
\]

Note that, through the use of the weak definition of the various mappings, the projection operator Q̂ and the isomorphism J are implicitly induced through the duality pairing. In fact, J is the Riesz mapping connecting Z = [L²_ρ(Γ; H(div, D)) × L²_ρ(Γ; L²(D))]² with its dual space, and Q̂ : Z → Ẑ is the projection with respect to the inner product in [L²_ρ(Γ; H(div, D)) × L²_ρ(Γ; L²(D))]².

To derive error estimates by applying Theorem 4.2 to our setting, we first needto verify the assumptions of that theorem and those of Theorem 4.1.

The first partial Fréchet derivative D_ζF(θ0) ∈ L(Θ, Ξ) is given by, with ζ = (u, p, q, η), θ0 = ((u0, p0, q0, η0), β0), θ1 = ((u1, p1, q1, η1), β1),

(4.19)
\[
\begin{aligned}
\big(D_\zeta F(\theta_0)\theta_1,\tilde\zeta\big)
={}& \big(\kappa^{-1}(\theta_0)\,u_1,\tilde u\big) + \big(\tilde\kappa(\theta_0,\theta_1)\,u_0,\tilde u\big) - (p_1,\nabla\cdot\tilde u) + (\nabla\cdot u_1,\tilde p)\\
&+ \big(\kappa^{-1}(\theta_0)\,q_1,\tilde q\big) + \big(\tilde\kappa(\theta_0,\theta_1)\,q_0,\tilde q\big) - (\eta_1,\nabla\cdot\tilde q) + (\mathbb{E}u_1,\tilde q)
 + (\nabla\cdot q_1 - \mathbb{E}p_1,\tilde\eta)
\end{aligned}
\]
for all ζ̃ = (ũ, p̃, q̃, η̃) ∈ Z, where
\[
\tilde\kappa(\theta_0,\theta_1) = -\frac{1}{3\beta_0}\,\kappa^{-4}(\theta_0)\big(u_1\cdot q_0 + u_0\cdot q_1\big).
\]
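The coefficient κ̃(θ0, θ1) is the directional derivative of κ^{-1}(θ) = ((1/β) u·q)^{-1/3} with respect to (u, q) at fixed β; this pointwise identity can be confirmed by a finite-difference check. The scalar sketch below uses made-up sample values and is only an illustration:

```python
import numpy as np

beta = 1e-2
u0, q0 = np.array([1.2, -0.4]), np.array([0.8, 0.3])   # sample point values
u1, q1 = np.array([0.1, 0.2]), np.array([-0.3, 0.5])   # perturbation direction

def kappa_inv(u, q):
    return ((u @ q) / beta) ** (-1.0 / 3.0)

kappa0 = ((u0 @ q0) / beta) ** (1.0 / 3.0)
# kappa_tilde(theta0, theta1) = -1/(3*beta) * kappa0**(-4) * (u1.q0 + u0.q1), cf. (4.19)
kappa_tilde = -(u1 @ q0 + u0 @ q1) / (3.0 * beta * kappa0 ** 4)

eps = 1e-6
fd = (kappa_inv(u0 + eps * u1, q0 + eps * q1) - kappa_inv(u0, q0)) / eps
print(kappa_tilde, fd)     # the two values agree to O(eps)
```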

The Fréchet derivative DF(θ0) ∈ L(Θ, Ξ) is then given by
\[
\big(DF(\theta_0)\theta_1,\tilde\zeta\big) = \big(D_\zeta F(\theta_0)\theta_1,\tilde\zeta\big) + \big(D_\beta F(\theta_0)\theta_1,\tilde\zeta\big),
\]
where
\[
\big(D_\beta F(\theta_0)\theta_1,\tilde\zeta\big)
= \frac{\beta_1}{3\beta_0^2}\big(\kappa^{-4}(\theta_0)\,u_0\cdot q_0\,u_0,\,\tilde u\big)
+ \frac{\beta_1}{3\beta_0^2}\big(\kappa^{-4}(\theta_0)\,u_0\cdot q_0\,q_0,\,\tilde q\big).
\]

The second Fréchet derivative D²_θF(θ) ∈ L(Θ × Θ, Ξ), with θ2 = ((u2, p2, q2, η2), β2), is given by

(4.20)
\[
\big(D^2_\theta F(\theta_0)\theta_1\theta_2,\tilde\zeta\big)
= \big(\chi(\theta_0,\theta_1,\theta_2)\,u_0,\tilde u\big) + \big(\psi(\theta_0,\theta_2)\,u_1 + \psi(\theta_0,\theta_1)\,u_2,\tilde u\big)
+ \big(\chi(\theta_0,\theta_1,\theta_2)\,q_0,\tilde q\big) + \big(\psi(\theta_0,\theta_2)\,q_1 + \psi(\theta_0,\theta_1)\,q_2,\tilde q\big)
\]
for all ζ̃ = (ũ, p̃, q̃, η̃) ∈ Z, where
\[
\begin{aligned}
\chi(\theta_0,\theta_1,\theta_2) ={}& \frac{\beta_2}{3\beta_0^2\kappa^4(\theta_0)}\big(u_1\cdot q_0+u_0\cdot q_1\big)
+ \frac{\beta_1}{3\beta_0^2\kappa^4(\theta_0)}\big(u_2\cdot q_0+u_0\cdot q_2\big)\\
&- \frac{1}{3\beta_0\kappa^4(\theta_0)}\Big(u_1\cdot q_2+u_2\cdot q_1+\frac{2\beta_1\beta_2}{\beta_0^2}\,u_0\cdot q_0\Big)\\
&+ \frac{4}{9\beta_0^2\kappa^7(\theta_0)}\Big(u_1\cdot q_0+u_0\cdot q_1-\frac{\beta_1}{\beta_0}\,u_0\cdot q_0\Big)\Big(u_2\cdot q_0+u_0\cdot q_2-\frac{\beta_2}{\beta_0}\,u_0\cdot q_0\Big),
\end{aligned}
\]
and
\[
\psi(\theta_0,\theta_i) = \frac{1}{3\kappa^4(\theta_0)}\Big(\frac{\beta_i}{\beta_0^2}\,u_0\cdot q_0 - \frac{1}{\beta_0}\,u_i\cdot q_0 - \frac{1}{\beta_0}\,u_0\cdot q_i\Big), \qquad i=1,2,
\]
with κ(θ) defined as in (4.14).

Proposition 4.3. The mapping (4.15), F : Θ → Ξ, where
\[
\Theta \equiv Z\times\Lambda = \big[L^2_\rho(\Gamma;H(\mathrm{div},D))\times L^2_\rho(\Gamma;L^2(D))\big]^2\times\mathbb{R}_+, \qquad
\Xi = Z^* \ \text{(duality with respect to (4.13))},
\]
is of class C² on
\[
S = \big[L^6_\rho(\Gamma;H(\mathrm{div},D)\cap L^6(D))\times L^2_\rho(\Gamma;L^2(D))\big]^2\times\mathbb{R}_+ \subset \Theta,
\]
and D²_θF is bounded on all bounded sets of S.

Proof. Starting with (4.20), a straightforward but somewhat tedious calculation shows that F is a C² mapping on S and that D²_θF is bounded on bounded sets of S. □

Proposition 4.4. Let θ0 = ((u0, p0, q0, η0), β0) be a point in S such that ξ0 = F(θ0) is a regular value of F. Then ker D_{(u,p,q,η)}F(θ0) = {0}. Moreover, if Ẑ ⊂ Z, then ker D_{(u,p,q,η)}F̂(θ0)|_{Ẑ} = {0} as well.

Proof. Without loss of generality [20], i.e., by redefining the origin in Θ, we may choose θ0 = ((u, p, q, η), β) with β in a bounded interval of R+ and u, q such that ((1/β) u·q)^{1/3} = ½(κ_min + κ_max). Then from (4.2) and (4.14) we have κ(θ0) = κ1, and therefore (4.19) reads
\[
\begin{aligned}
\big(D_\zeta F(\theta_0)\theta_1,\tilde\zeta\big)
={}& \frac{1}{\kappa_1}(u_1,\tilde u) - \frac{1}{3\beta\kappa_1^4}\big((u_1\cdot q+u\cdot q_1)\,u,\tilde u\big) - (p_1,\nabla\cdot\tilde u) + (\nabla\cdot u_1,\tilde p)\\
&+ \frac{1}{\kappa_1}(q_1,\tilde q) - \frac{1}{3\beta\kappa_1^4}\big((u_1\cdot q+u\cdot q_1)\,q,\tilde q\big) - (\eta_1,\nabla\cdot\tilde q) + (\mathbb{E}u_1,\tilde q) + (\nabla\cdot q_1-\mathbb{E}p_1,\tilde\eta),
\qquad \forall\,\tilde\zeta=(\tilde u,\tilde p,\tilde q,\tilde\eta)\in Z,
\end{aligned}
\]
from which it easily follows that ker D_{(u,p,q,η)}F(θ0) = {0}. The result for the discrete operator F̂ follows in the same manner. □

We have now verified all the assumptions of Theorems 4.1 and 4.2, specialized to our setting. The following error estimate then follows directly from Theorem 4.2.

Theorem 4.5. Let (u(β), p(β), q(β), η(β)) ∈ Z = [L²_ρ(Γ; H(div, D)) × L²_ρ(Γ; L²(D))]² and (û(β), p̂(β), q̂(β), η̂(β)) ∈ Ẑ ⊂ Z denote solutions of the optimality system (4.16) and of its discretization (4.18), respectively. Then,

(4.21)
\[
\begin{aligned}
&\|u(\beta)-\hat u(\beta)\|_{L^2_\rho(\Gamma,H(\mathrm{div},D))} + \|p(\beta)-\hat p(\beta)\|_{L^2_\rho(\Gamma,L^2(D))}
+ \|q(\beta)-\hat q(\beta)\|_{L^2_\rho(\Gamma,H(\mathrm{div},D))} + \|\eta(\beta)-\hat\eta(\beta)\|_{L^2_\rho(\Gamma,L^2(D))}\\
&\qquad\le C \inf_{(\tilde u,\tilde p,\tilde q,\tilde\eta)\in \hat Z}
\Big\{\|u(\beta)-\tilde u\|_{L^2_\rho(\Gamma,H(\mathrm{div},D))} + \|p(\beta)-\tilde p\|_{L^2_\rho(\Gamma,L^2(D))}
+ \|q(\beta)-\tilde q\|_{L^2_\rho(\Gamma,H(\mathrm{div},D))} + \|\eta(\beta)-\tilde\eta\|_{L^2_\rho(\Gamma,L^2(D))}\Big\}.
\end{aligned}
\]

5. Numerical Experiments.

5.1. Sensitivity Analysis for the Parameter Estimation. Consider the state equations:

(5.1)
\[
u = -\kappa\nabla p \ \text{ in } \Omega\times D, \qquad
\nabla\cdot u = f \ \text{ in } \Omega\times D, \qquad
p = 0 \ \text{ on } \Omega\times\partial D.
\]

We introduce the adjoint equations:

(5.2)
\[
\nabla\cdot q = p - \bar p \ \text{ in } \Omega\times D, \qquad
\kappa^{-1}q + \nabla\eta = -(u-\bar u) \ \text{ in } \Omega\times D, \qquad
\eta = 0 \ \text{ on } \Omega\times\partial D.
\]

Define the cost functional:
\[
J_3(Y_i,\ i=1,\dots,N) = \frac12\,\mathbb{E}\big[\|u(Y_i,\cdot)-\bar u(Y_i,\cdot)\|^2_{L^2(D)}\big]
 + \frac12\,\mathbb{E}\big[\|p(Y_i,\cdot)-\bar p(Y_i,\cdot)\|^2_{L^2(D)}\big]
 + \frac{\beta}{2}\,\mathbb{E}\big[\|\kappa(Y_i,\cdot)\|^2_{L^2(D)}\big].
\]

We assume that the map κ → u is differentiable. Then the sensitivity equations are the following:

(5.3)
\[
u+\varepsilon u_s = -(\kappa+\varepsilon\kappa_s)\nabla(p+\varepsilon p_s) \ \text{ in } \Omega\times D, \qquad
\nabla\cdot(u+\varepsilon u_s) = f \ \text{ in } \Omega\times D, \qquad
p_s = 0 \ \text{ on } \Omega\times\partial D.
\]

Multiplying out and using the state equations (5.1), we get:

(5.4)
\[
\kappa^{-1}u_s - \kappa^{-2}\kappa_s u = -\nabla p_s \ \text{ in } \Omega\times D, \qquad
\nabla\cdot u_s = 0 \ \text{ in } \Omega\times D, \qquad
p_s = 0 \ \text{ on } \Omega\times\partial D.
\]

Next, we multiply by the adjoint variables q, η and integrate over D (for a.e. ω ∈ Ω):

(5.5)
\[
\int_D \kappa^{-1}u_s\cdot q - \int_D \kappa^{-2}\kappa_s\,u\cdot q = -\int_D \nabla p_s\cdot q, \qquad
\int_D (\nabla\cdot u_s)\,\eta = 0.
\]

Integrating by parts:

(5.6)
\[
\int_D \kappa^{-1}u_s\cdot q - \int_D \kappa^{-2}\kappa_s\,u\cdot q = \int_D p_s\,\nabla\cdot q, \qquad
\int_D u_s\cdot\nabla\eta = 0.
\]

There are no boundary terms since η = 0 and p_s = 0 on Ω × ∂D. Taking expectation and adding the first two identities of (5.6):
\[
\mathbb{E}\Big[\int_D u_s\cdot\big(\kappa^{-1}q+\nabla\eta\big)\Big] - \mathbb{E}\Big[\int_D \kappa^{-2}\kappa_s\,u\cdot q\Big] = \mathbb{E}\Big[\int_D p_s\,\nabla\cdot q\Big].
\]

Using the adjoint equations (5.2), the previous identity becomes:
\[
-\mathbb{E}\Big[\int_D u_s\cdot(u-\bar u)\Big] - \mathbb{E}\Big[\int_D \kappa^{-2}\kappa_s\,u\cdot q\Big] = \mathbb{E}\Big[\int_D p_s\,(p-\bar p)\Big].
\]
Combining this identity with the definition of J3, the Gateaux derivative of J3 in the direction κ_s is therefore
\[
dJ_3\cdot\kappa_s = \mathbb{E}\Big[\int_D \big(u_s\cdot(u-\bar u) + p_s\,(p-\bar p) + \beta\,\kappa\,\kappa_s\big)\Big]
= \mathbb{E}\Big[\int_D \big(-\kappa^{-2}\,u\cdot q + \beta\,\kappa\big)\kappa_s\Big];
\]
with κ parametrized by Y = (Y_1, ..., Y_N), the components dJ/dY_n used in the algorithm below are obtained by taking κ_s = ∂κ/∂Y_n.

We give below the pseudocode for the adjoint variable-based algorithm, as in [25]:

INITIALIZATION: i ← 1, RelError ← 1000, choose initial conditions for Y, ε ← 1, ε ← 2ε/3
while RelError > tol do
    ε ← 3ε/2, i ← i + 1
    Solve the adjoint equations (for the adjoint variables)
    Standard gradient update: Y_{i+1} = Y_i − ε dJ/dY_i
    Solve the state equations
    Evaluate J^{(i)}
    while J^{(i)} > J^{(i-1)} do
        ε ← ε/10
        Standard gradient update
        Solve the state equations
        Evaluate J^{(i)}
    end while
    RelError ← |J^{(i)} − J^{(i-1)}| / |J^{(i)}|
end while
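A compact Python rendering of this loop is sketched below; it assumes user-supplied callbacks `solve_state`, `solve_adjoint`, `gradient`, and `cost`, and it is not the authors' MATLAB implementation:

```python
import numpy as np

def adjoint_descent(Y0, solve_state, solve_adjoint, gradient, cost,
                    eps0=1.0, tol=1e-4, max_iter=500):
    """Gradient descent with backtracking line search, following the pseudocode above.
       solve_state(Y) -> state, solve_adjoint(Y, state) -> adjoint,
       gradient(Y, state, adjoint) -> dJ/dY, cost(Y, state) -> J."""
    Y = np.asarray(Y0, dtype=float)
    state = solve_state(Y)
    J_old = cost(Y, state)
    eps = 2.0 * eps0 / 3.0                      # so the first trial step uses eps0
    for _ in range(max_iter):
        eps *= 1.5                              # try a larger step again
        adjoint = solve_adjoint(Y, state)
        dJ = gradient(Y, state, adjoint)
        Y_new = Y - eps * dJ                    # standard gradient update
        state_new = solve_state(Y_new)
        J_new = cost(Y_new, state_new)
        while J_new > J_old:                    # backtracking: shrink until J decreases
            eps /= 10.0
            Y_new = Y - eps * dJ
            state_new = solve_state(Y_new)
            J_new = cost(Y_new, state_new)
        rel_error = abs(J_new - J_old) / abs(J_new)
        Y, state, J_old = Y_new, state_new, J_new
        if rel_error <= tol:
            break
    return Y, J_old
```

The inner while loop is the backtracking line search of the pseudocode: the step ε is enlarged at the start of each outer iteration and shrunk by factors of 10 until the cost decreases.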

The numerical experiments were performed using MATLAB R2012a on the square domain [0, 1] × [0, 1]. The convergence is computed on a 40 × 40 spatial mesh, with Dirichlet boundary conditions. For solving equation (5.1) numerically, an upwind scheme is used to find the effective diffusion coefficient and central differences are used for the hydraulic gradient. We assume the true random diffusion coefficient κ and the exact solution p to be given by:
\[
\kappa(\omega,x) = (1+x^2+y^2) + \frac{1}{N^*}\sum_{n=1}^N \cos(n\pi x)\,\cos(n\pi y)\,Y_n(\omega), \qquad
p(\omega,x) = \sum_{n=1}^N \sin(n\pi x)\,\sin(n\pi y)\,Y_n(\omega),
\]
and then we calculate the source f(ω, x).
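For reference, these fields can be tabulated on the 40×40 grid as in the NumPy sketch below; the normalization constant N* and the way Y is supplied are assumptions made for illustration (here N* = N):

```python
import numpy as np

def diffusion_and_pressure(Y, nx=40, ny=40, nstar=None):
    """kappa(omega, x) = 1 + x^2 + y^2 + (1/N*) sum_n cos(n pi x) cos(n pi y) Y_n,
       p(omega, x)     =                 sum_n sin(n pi x) sin(n pi y) Y_n."""
    Y = np.asarray(Y, dtype=float)
    N = Y.size
    nstar = N if nstar is None else nstar
    x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
    kappa = 1.0 + x ** 2 + y ** 2
    p = np.zeros_like(x)
    for n in range(1, N + 1):
        kappa += np.cos(n * np.pi * x) * np.cos(n * np.pi * y) * Y[n - 1] / nstar
        p += np.sin(n * np.pi * x) * np.sin(n * np.pi * y) * Y[n - 1]
    return kappa, p

# deterministic test case of Section 5.2: Y_n = 0.5 for n = 1..5
kappa, p = diffusion_and_pressure(0.5 * np.ones(5))
print(kappa.min(), kappa.max(), p.max())
```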

To understand the dynamics that the computational model produces, we first present some sample simulations. The tolerance was taken to be 10^{-4}, the step size for the adjoint algorithm is ε = 1, and the coefficient in the cost functional is β = 10^{-6}.

5.2. Numerical Experiments for the Deterministic Elliptic Case. The exact values used for producing simulated measurements were Y_i = 0.5, i = 1, ..., N, in the formula for the diffusion coefficient κ. Figures 5.1(a) and 5.1(b) show the plots of the cost functional J and of its base-10 logarithm. The trajectories of the N = 5 values Y_i, the cross-section of the target solution versus the estimated solution, and the cross-section of the target diffusion versus the estimated diffusion are presented in Figures 5.1(c), (d), and (e), respectively.

Fig. 5.1: Deterministic case: (a) cost functional J; (b) log10(J); (c) N = 5 trajectories of the Y's; (d) cross-section of target solution versus estimated solution; (e) cross-sections of target diffusion versus estimated diffusion, for a 40×40 grid; tol = 10^{-4}, ε = 1, β = 10^{-6}. The exact values of the Y's are 0.5.

5.3. Numerical Experiments for the Stochastic Elliptic Case. The exact values used for producing simulated measurements were uniformly distributed random numbers for the Y_i, i = 1, ..., N. To understand the dynamics that the computational model produces, we present sample simulations on a 40×40 spatial mesh, where we first considered 10 realizations (see Figures 5.2 and 5.3) and then 50 realizations (see Figures 5.4 and 5.5) for our stochastic model.

In the first simulation, where only 10 realizations were considered, we observed that to achieve the same tolerance of 10^{-4} for the relative error, the cost functional J3 needs only 27 iterations, whereas J4 and J5 require 76 and 61 iterations, respectively. By plotting cross-sections, we observed that the estimated solutions corresponding to J3, J4, or J5 approximate the mean of the target solution very well, while the variance of the target solution is better approximated when using the solution corresponding to the J3 cost functional. For the cross-sections of the diffusion coefficient, the mean and variance of the estimated diffusion approximate the mean and variance of the target diffusion coefficient less accurately. One explanation is that the cost functionals minimize the difference between the estimated and target solutions, whereas the difference between the estimated and target diffusion coefficients never enters the formulas of the cost functionals.

In the second simulation, with 50 realizations, the cost functional J3 needs only 22 iterations, whereas J4 and J5 require 47 and 95 iterations, respectively. By plotting cross-sections, it was again observed that the estimated solutions corresponding to J3, J4, or J5 approximate the mean of the target solution very well, whereas for the variance the solution corresponding to J5 is closer to the variance of the target solution. Looking at the cross-sections of the diffusion coefficient, the mean and variance of the estimated diffusion again approximate the mean and variance of the target diffusion coefficient less accurately.

Fig. 5.2: (a) J, (b) log10(J), and cross-sections for: (c) target solution, (d) target diffusion, (e) mean of target diffusion vs. mean of estimated diffusion, (f) variance of target diffusion vs. variance of estimated diffusion. Grid considered is 40×40, tol = 10^{-4}, ε = 1, β = 10^{-6}, runs = 10. The target values of the N = 5 Y's are random.

Fig. 5.3: Cross-sections for: (a) mean of target solution vs. mean of estimated solution, (b) variance of target solution vs. variance of estimated solution, (c) forcing function f, (d) mean convergence in L2 norm of the estimated solution. Grid considered is 40×40, tol = 10^{-4}, ε = 1, β = 10^{-6}, runs = 10. The target values of the N = 5 Y's are random.

Fig. 5.4: (a) J, (b) log10(J), and cross-sections for: (c) target solution, (d) target diffusion, (e) mean of target diffusion vs. mean of estimated diffusion, (f) variance of target diffusion vs. variance of estimated diffusion. Grid considered is 40×40, tol = 10^{-4}, ε = 1, β = 10^{-6}, runs = 50. The target values of the N = 5 Y's are random.

Fig. 5.5: Cross-sections for: (a) mean of target solution vs. mean of estimated solution, (b) variance of target solution vs. variance of estimated solution, (c) forcing function f, (d) mean convergence in L2 norm of the estimated solution. Grid considered is 40×40, tol = 10^{-4}, ε = 1, β = 10^{-6}, runs = 50. The target values of the N = 5 Y's are random.


REFERENCES

[1] R.J. Adler, The geometry of random fields, John Wiley & Sons, Chichester, 1981.
[2] Alen Alexanderian, Noemi Petra, Georg Stadler, and Omar Ghattas, A-optimal design of experiments for infinite-dimensional Bayesian linear inverse problems with regularized ℓ0-sparsification, SIAM J. Sci. Comput. 36 (2014), no. 5, A2122–A2148. MR 3257642
[3] Douglas Allaire and Karen Willcox, A mathematical and computational framework for multifidelity design and analysis with computer models, Int. J. Uncertain. Quantif. 4 (2014), no. 1, 1–20. MR 3249674
[4] Ivo Babuška, Fabio Nobile, and Raúl Tempone, A stochastic collocation method for elliptic partial differential equations with random input data, SIAM J. Numer. Anal. 45 (2007), no. 3, 1005–1034 (electronic). MR 2318799 (2008e:65372)
[5] Velamur Asokan Badri Narayanan and Nicholas Zabaras, Stochastic inverse heat conduction using a spectral approach, Internat. J. Numer. Methods Engrg. 60 (2004), no. 9, 1569–1593. MR 2068927
[6] Guillaume Bal, Ian Langmore, and Youssef Marzouk, Bayesian inverse problems with Monte Carlo forward models, Inverse Probl. Imaging 7 (2013), no. 1, 81–105. MR 3031839
[7] H.T. Banks and K. Kunisch, Estimation techniques for distributed parameter systems, Birkhäuser, Boston, 1989.
[8] V. Barbu and K. Kunisch, Identification of nonlinear elliptic equations, Appl. Math. Optim. 33 (1996), 139–167.
[9] Viorel Barbu, Analysis and control of nonlinear infinite-dimensional systems, Mathematics in Science and Engineering, vol. 190, Academic Press Inc., Boston, MA, 1993. MR 1195128 (93j:49002)
[10] I. Bilionis and N. Zabaras, Solution of inverse problems with limited forward solver evaluations: a Bayesian perspective, Inverse Problems 30 (2014), no. 1, 015004, 32. MR 3151683
[11] Tan Bui-Thanh and Omar Ghattas, An analysis of infinite dimensional Bayesian inverse shape acoustic scattering and its numerical approximation, SIAM/ASA J. Uncertain. Quantif. 2 (2014), no. 1, 203–222. MR 3283906
[12] Peng Chen and Alfio Quarteroni, Weighted reduced basis method for stochastic optimal control problems with elliptic PDE constraint, SIAM/ASA J. Uncertain. Quantif. 2 (2014), no. 1, 364–396. MR 3283913
[13] Peng Chen, Alfio Quarteroni, and Gianluigi Rozza, Stochastic optimal Robin boundary control problems of advection-dominated elliptic equations, SIAM J. Numer. Anal. 51 (2013), no. 5, 2700–2722. MR 3115461
[14] Peng Chen, Nicholas Zabaras, and Ilias Bilionis, Uncertainty propagation using infinite mixture of Gaussian processes and variational Bayesian inference, J. Comput. Phys. 284 (2015), 291–333. MR 3303621
[15] G. Christakos, Random field models in earth sciences, Academic Press, New York, NY, 1992.
[16] Francis Clarke, Functional analysis, calculus of variations and optimal control, Graduate Texts in Mathematics, vol. 264, Springer, London, 2013. MR 3026831
[17] Tiangang Cui, Youssef M. Marzouk, and Karen E. Willcox, Data-driven model reduction for the Bayesian solution of inverse problems, Internat. J. Numer. Methods Engrg. 102 (2015), no. 5, 966–990. MR 3341244
[18] M. Dentz, D.M. Tartakovsky, E. Abarca, A. Guadagnini, X. Sanchez-Vila, and J. Carrera, Variable-density flow in porous media, J. Fluid Mech. 561 (2006), 209–235.
[19] James P. Fink and Werner C. Rheinboldt, On the discretization error of parametrized nonlinear equations, SIAM J. Numer. Anal. 20 (1983), no. 4, 732–746. MR 708454 (85i:65072)
[20] James P. Fink and Werner C. Rheinboldt, Solution manifolds and submanifolds of parametrized equations and their discretization errors, Numer. Math. 45 (1984), no. 3, 323–343. MR 769244 (86d:58018)
[21] James P. Fink and Werner C. Rheinboldt, Local error estimates for parametrized nonlinear equations, SIAM J. Numer. Anal. 22 (1985), no. 4, 729–735. MR 795950 (87a:65106)
[22] James P. Fink and Werner C. Rheinboldt, A geometric framework for the numerical study of singular points, SIAM J. Numer. Anal. 24 (1987), no. 3, 618–633. MR 888753 (88h:65118)
[23] M. Frangos, Y. Marzouk, K. Willcox, and B. van Bloemen Waanders, Surrogate and reduced-order modeling: a comparison of approaches for large-scale statistical inverse problems, Large-scale inverse problems and quantification of uncertainty, Wiley Ser. Comput. Stat., Wiley, Chichester, 2011, pp. 123–149. MR 2856654
[24] D. Galbally, K. Fidkowski, K. Willcox, and O. Ghattas, Non-linear model reduction for uncertainty quantification in large-scale inverse problems, Internat. J. Numer. Methods Engrg. 81 (2010), no. 12, 1581–1608. MR 2642821 (2011a:65024)
[25] M.R. Garvie, P. Maini, and C. Trenchea, An efficient and robust numerical algorithm for estimating parameters in Turing systems, J. Comput. Phys. 229 (2010), no. 19, 7058–7071.
[26] R. Ghanem and S. Dham, Stochastic finite element analysis for multiphase flow in heterogeneous porous media, Transport in Porous Media 32 (1998), 239–262.
[27] V. Girault and P.-A. Raviart, Finite element methods for Navier-Stokes equations, Springer Series in Computational Mathematics, vol. 5, Springer-Verlag, Berlin, 1986, Theory and algorithms.
[28] M. Gunzburger, C. Trenchea, and C. Webster, Error estimates for a stochastic collocation approach to identification and control problems for random elliptic PDEs, Tech. report, University of Pittsburgh, 2015.
[29] Max D. Gunzburger, Hyung-Chun Lee, and Jangwoon Lee, Error estimates of stochastic optimal Neumann boundary control problems, SIAM J. Numer. Anal. 49 (2011), no. 4, 1532–1552.
[30] V. Isakov, Inverse problems for partial differential equations, second ed., Applied Mathematical Sciences, vol. 127, Springer, New York, 2006. MR 2193218 (2006h:35279)
[31] Kazufumi Ito and Karl Kunisch, Lagrange multiplier approach to variational problems and applications, Advances in Design and Control, vol. 15, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2008. MR 2441683 (2009g:49001)
[32] A. Keese and H.G. Matthies, Parallel computation of stochastic groundwater flow, NIC Symposium 2004, Proceedings 20 (2003), 399–408.
[33] D.P. Kouri, M. Heinkenschloss, D. Ridzal, and B.G. van Bloemen Waanders, A trust-region algorithm with adaptive stochastic collocation for PDE optimization under uncertainty, SIAM J. Sci. Comput. 35 (2013), no. 4, A1847–A1879. MR 3073358
[34] D.P. Kouri, M. Heinkenschloss, D. Ridzal, and B.G. van Bloemen Waanders, Inexact objective function evaluations in a trust-region algorithm for PDE-constrained optimization under uncertainty, SIAM J. Sci. Comput. 36 (2014), no. 6, A3011–A3029. MR 3293456
[35] Jesper Kristensen and Nicholas J. Zabaras, Bayesian uncertainty quantification in the evaluation of alloy properties with the cluster expansion method, Comput. Phys. Commun. 185 (2014), no. 11, 2885–2892. MR 3252545
[36] O. Ladyzhenskaya and N. Ural'tseva, Équations aux dérivées partielles de type elliptique, Dunod, Paris, 1968.
[37] Toni Lassila, Andrea Manzoni, Alfio Quarteroni, and Gianluigi Rozza, A reduced computational and geometrical framework for inverse problems in hemodynamics, Int. J. Numer. Methods Biomed. Eng. 29 (2013), no. 7, 741–776. MR 3081353
[38] Yu.N. Lazarev, P.V. Petrov, and D.M. Tartakovsky, Interface dynamics in randomly heterogeneous porous media, Advances in Water Resources 28 (2005), 393–403.
[39] Chad Lieberman, Karen Willcox, and Omar Ghattas, Parameter and state model reduction for large-scale statistical inverse problems, SIAM J. Sci. Comput. 32 (2010), no. 5, 2523–2542. MR 2684726 (2011h:65007)
[40] J.-L. Lions, Contrôle optimal de systèmes gouvernés par des équations aux dérivées partielles, Avant propos de P. Lelong, Dunod, Paris, 1968. MR 0244606 (39 #5920)
[41] J.-L. Lions, Some methods in the mathematical analysis of systems and their control, Kexue Chubanshe (Science Press), Beijing, 1981. MR 664760 (84m:49003)
[42] Xiang Ma and Nicholas Zabaras, An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method, Inverse Problems 25 (2009), no. 3, 035013, 27. MR 2480183 (2010e:65082)
[43] James Martin, Lucas C. Wilcox, Carsten Burstedde, and Omar Ghattas, A stochastic Newton MCMC method for large-scale statistical inverse problems with application to seismic inversion, SIAM J. Sci. Comput. 34 (2012), no. 3, A1460–A1487. MR 2970260
[44] Youssef Marzouk and Dongbin Xiu, A stochastic collocation approach to Bayesian inference in inverse problems, Commun. Comput. Phys. 6 (2009), no. 4, 826–847. MR 2672325 (2011g:62072)
[45] Youssef M. Marzouk and Habib N. Najm, Dimensionality reduction and polynomial chaos acceleration of Bayesian inference in inverse problems, J. Comput. Phys. 228 (2009), no. 6, 1862–1902. MR 2500666 (2010b:65023)
[46] Youssef M. Marzouk, Habib N. Najm, and Larry A. Rahn, Stochastic spectral methods for efficient Bayesian solution of inverse problems, J. Comput. Phys. 224 (2007), no. 2, 560–586. MR 2330284 (2008b:65074)
[47] H.N. Najm, B.J. Debusschere, Y.M. Marzouk, S. Widmer, and O.P. Le Maître, Uncertainty quantification in chemical systems, Internat. J. Numer. Methods Engrg. 80 (2009), no. 6-7, 789–814. MR 2583862
[48] Pekka Neittaanmäki, Jürgen Sprekels, and Dan Tiba, Optimization of elliptic systems, Springer Monographs in Mathematics, Springer, New York, 2006, Theory and applications. MR 2183776 (2006j:49003)
[49] F. Nobile, R. Tempone, and C.G. Webster, An anisotropic sparse grid stochastic collocation method for partial differential equations with random input data, SIAM J. Numer. Anal. 46 (2008), no. 5, 2411–2442. MR 2421041 (2009c:65331)
[50] H. Osnes and H.P. Langtangen, An efficient probabilistic finite element method for stochastic groundwater flow, Advances in Water Resources 22 (1998), 185–195.
[51] J. Ray, Y.M. Marzouk, and H.N. Najm, A Bayesian approach for estimating bioterror attacks from patient data, Stat. Med. 30 (2011), no. 2, 101–126. MR 2758268
[52] D.M. Tartakovsky, Probabilistic risk analysis in subsurface hydrology, Geophys. Res. Lett. 34 (2007), L05404.
[53] D.M. Tartakovsky, A. Guadagnini, and M. Riva, Stochastic averaging of nonlinear flows in heterogeneous porous media, J. Fluid Mech. 492 (2003), 47–62.
[54] D.M. Tartakovsky and S.P. Neuman, Extension of “Transient flow in bounded randomly heterogeneous domains 1. Exact conditional moment equations and recursive approximations”, Water Resour. Res. 35 (1999), no. 6, 1921–1925.
[55] E. Vanmarcke, Random Fields: Analysis and Synthesis, 3rd ed., The MIT Press, Cambridge, MA, 1988.
[56] Jiang Wan and Nicholas Zabaras, A Bayesian approach to multiscale inverse problems using the sequential Monte Carlo method, Inverse Problems 27 (2011), no. 10, 105004, 25. MR 2835979 (2012h:62365)
[57] Jingbo Wang and Nicholas Zabaras, Hierarchical Bayesian models for inverse problems in heat conduction, Inverse Problems 21 (2005), no. 1, 183–206. MR 2146171 (2006a:80005)
[58] D. Xiu and D.M. Tartakovsky, Numerical methods for differential equations in random domains, SIAM J. Sci. Comput. 28 (2006), no. 3, 1167–1185.
[59] N. Zabaras, Solving stochastic inverse problems: a sparse grid collocation approach, Large-scale inverse problems and quantification of uncertainty, Wiley Ser. Comput. Stat., Wiley, Chichester, 2011, pp. 291–319. MR 2856661
[60] N. Zabaras and B. Ganapathysubramanian, A scalable framework for the solution of stochastic inverse problems using a sparse grid collocation approach, J. Comput. Phys. 227 (2008), no. 9, 4697–4735. MR 2406554 (2009i:65013)
