
Page 1: Iterative Regularization Methods for Inverse Problems: Lecture 1

Iterative Regularization Methods for Inverse Problems: Lecture 1

Thorsten Hohage

Institut für Numerische und Angewandte Mathematik
Georg-August-Universität Göttingen

Madrid, April 4, 2011

Page 2: Iterative Regularization Methods for Inverse Problems: Lecture 1

Outline

1 Definitions of inverse problems and ill-posedness

2 Examples of inverse problems

3 Introduction to regularization methods

4 Basics of linear regularization theory

Page 3: Iterative Regularization Methods for Inverse Problems: Lecture 1

Keller’s definition of an inverse problem

“We call two problems inverses of one another if the formulation of each involves all or part of the solution of the other. Often, for historical reasons, one of the two problems has been studied extensively for some time, while the other is newer and not so well understood. In such cases, the former problem is called the direct problem, while the latter is called the inverse problem.”

J. B. Keller. Inverse Problems. Am. Math. Mon., 83:107–118, 1976.

Page 4: Iterative Regularization Methods for Inverse Problems: Lecture 1

some examples ...

What are the questions to which the answer is:
1. Chicken Suzuki
2. Washington, Irving
3. Nine, W

from J. B. Keller. Inverse Problems. Am. Math. Mon., 83:107–118, 1976.

Page 5: Iterative Regularization Methods for Inverse Problems: Lecture 1

some examples ...

What are the questions to which the answer is:
1. answer: Chicken Suzuki
   question: What is the name of the only surviving kamikaze pilot?
2. answer: Washington, Irving
   question: What is the capital of the United States, Max?
3. answer: Nine, W
   question: Do you spell your name with a V, Herr Wagner?

from J. B. Keller. Inverse Problems. Am. Math. Mon., 83:107–118, 1976.


Page 11: Iterative Regularization Methods for Inverse Problems: Lecture 1

Hadamard’s definition of well-posedness

Definition
A problem is called well-posed if
1. there exists a solution to the problem (existence),
2. there is at most one solution to the problem (uniqueness),
3. the solution depends continuously on the data (stability).

Otherwise the problem is called ill-posed.

Page 12: Iterative Regularization Methods for Inverse Problems: Lecture 1

ill-posedness of inverse problems

In many cases one of two problems, which are inverse to each other, is ill-posed. In this case we call it the inverse problem and the other one the direct or forward problem. All inverse problems we will consider in the following are ill-posed.

Page 13: Iterative Regularization Methods for Inverse Problems: Lecture 1

discussion of the first two Hadamard criteria

• Existence of a solution to the inverse problem is clear if the data space is defined as the set of solutions to the direct problem.
• A solution may fail to exist if the data are perturbed by noise. This problem will be addressed below.
• Uniqueness of a solution to an inverse problem is often not easy to show. Obviously, it is an important issue.
• If uniqueness is not guaranteed by the given data, then either additional data have to be observed or the set of admissible solutions has to be restricted using a-priori information on the solution. In other words, a remedy against non-uniqueness can be a reformulation of the problem.

Page 14: Iterative Regularization Methods for Inverse Problems: Lecture 1

discussion of the third Hadamard criterion

• Among the three Hadamard criteria, a failure to meet the third one is the most delicate to deal with.
• In this case inevitable measurement and round-off errors can be amplified by an arbitrarily large factor and make a computed solution completely useless.
• Until the beginning of the last century it was generally believed that for natural problems the solution will always depend continuously on the data: “natura non facit salti” (nature does not make jumps).
• Only in the second half of the last century was it realized that a huge number of problems arising in science and technology are ill-posed in any reasonable mathematical setting.

Page 15: Iterative Regularization Methods for Inverse Problems: Lecture 1

ill-posedness in terms of operator equations

Suppose the inverse problem can be formulated as an operator equation

$$F(x) = y,$$

where x denotes the unknown solution and y the given data. Then the inverse problem is well-posed in the sense of Hadamard if
1. F is surjective (existence),
2. F is injective (uniqueness),
3. $F^{-1}$ is continuous (stability).

Typically, the third condition is violated for inverse problems!

Page 16: Iterative Regularization Methods for Inverse Problems: Lecture 1

Outline

1 Definitions of inverse problems and ill-posedness

2 Examples of inverse problems

3 Introduction to regularization methods

4 Basics of linear regularization theory

Page 17: Iterative Regularization Methods for Inverse Problems: Lecture 1

example 1: numerical differentiation

problem: Given a noisy signal g : ℝ → ℝ, estimate its derivative.

A classical example where this appears is the estimation of the density of a random variable:
• Let X₁, ..., Xₙ be independent copies of a random variable X with values in [0,1] and unknown density f.
• We can estimate the distribution function

$$g(x) := \int_0^x f(t)\,dt = P(X \le x)$$

of f from our data by

$$g_n(x) := \frac{1}{n}\,\#\{i : X_i \le x\}.$$
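As a quick illustration (not on the original slides; the Beta-distributed sample is only an assumed stand-in for data whose density is unknown in practice), the empirical distribution function takes a few lines of numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.beta(2.0, 5.0, size=1000)  # i.i.d. sample on [0,1]; density f unknown in practice

def g_n(x):
    """Empirical distribution function g_n(x) = (1/n) * #{i : X_i <= x}."""
    return np.mean(X <= x)

print(g_n(0.25))  # estimate of P(X <= 0.25)
```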

Page 18: Iterative Regularization Methods for Inverse Problems: Lecture 1

operator formulation

We define the forward problem to be the evaluation of the integral

$$(T_D f)(x) := \int_0^x f(t)\,dt \qquad \text{for } x \in [0,1]$$

for a given f ∈ C([0,1]). The inverse problem consists in solving the equation

$$T_D f = g$$

for a given g ∈ C([0,1]) satisfying g(0) = 0, or equivalently computing f = g′.

Page 19: Iterative Regularization Methods for Inverse Problems: Lecture 1

ill-posedness of numerical differentiation

• Obviously, the equation $T_D f = g$ has a solution f in C([0,1]) if and only if g ∈ C¹([0,1]).
• The inverse problem is well-posed if we choose the norm ‖g‖∞ + ‖g′‖∞ in the image space.
• However, the error in our data is only bounded with respect to the supremum norm.

Page 20: Iterative Regularization Methods for Inverse Problems: Lecture 1

ill-posedness of numerical differentiation

Let us assume that we are given noisy data $g^\delta \in C([0,1])$ satisfying

$$\|g^\delta - g\|_\infty \le \delta$$

with noise level 0 < δ < 1. The functions

$$g_n^\delta(x) := g(x) + \delta \sin\frac{nx}{\delta}, \qquad x \in [0,1],$$

n = 2, 3, 4, ..., satisfy this error bound, but for the derivatives

$$(g_n^\delta)'(x) = g'(x) + n \cos\frac{nx}{\delta}, \qquad x \in [0,1],$$

we find that

$$\|(g_n^\delta)' - g'\|_\infty = n.$$

Hence, the error in the solutions blows up without bound as n → ∞ although the error in the data is bounded by δ. This shows the ill-posedness of the problem.
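This blow-up is easy to reproduce numerically. The following sketch (illustrative, not from the slides; the choices of g and δ are arbitrary) evaluates the perturbations $g_n^\delta$ on a grid and compares the sup-norm errors in the data and in the derivative:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)
g = np.sin(2 * np.pi * x)               # smooth exact data
delta = 1e-3                            # noise level

for n in [2, 10, 100]:
    g_err = delta * np.sin(n * x / delta)   # g_n^delta - g, sup-norm <= delta
    dg_err = n * np.cos(n * x / delta)      # (g_n^delta)' - g', sup-norm = n
    print(n, np.max(np.abs(g_err)), np.max(np.abs(dg_err)))
```

The data error stays at the level δ while the derivative error grows like n.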

Page 21: Iterative Regularization Methods for Inverse Problems: Lecture 1

example 2: backwards heat equation

forward problem: Given f ∈ L²([0,1]), find g(x) = u(x,T) (T > 0), where u : [0,1] × [0,T] → ℝ satisfies

$$\partial_t u(x,t) = \Delta u(x,t), \qquad x \in (0,1),\ t \in (0,T),$$
$$u(0,t) = u(1,t) = 0, \qquad t \in (0,T],$$
$$u(x,0) = f(x), \qquad x \in [0,1].$$

Physical interpretation: f may describe a temperature profile at time t = 0. On the boundaries of the interval [0,1] the temperature is kept at 0. The task is to find the temperature at time t = T.

Page 22: Iterative Regularization Methods for Inverse Problems: Lecture 1

backwards heat equation: inverse problem

The inverse problem consists in finding the initial temperature given the temperature at time t = T.

inverse problem: Given g ∈ L²([0,1]), find initial values f ∈ L²([0,1]) such that the corresponding solution u to the direct problem satisfies u(·,T) = g.

Page 23: Iterative Regularization Methods for Inverse Problems: Lecture 1

separation of variables

Let $f_n := \sqrt{2}\int_0^1 \sin(\pi n x)\, f(x)\, dx$ denote the Fourier coefficients of f with respect to the complete orthonormal system $\{\sqrt{2}\sin(\pi n\,\cdot) : n = 1, 2, \dots\}$ of L²([0,1]). A separation of variables leads to the formal solution

$$u(x,t) = \sqrt{2}\sum_{n=1}^{\infty} f_n\, e^{-\pi^2 n^2 t} \sin(n\pi x).$$

Introducing the operator $T_{BH} : L^2([0,1]) \to L^2([0,1])$ by

$$(T_{BH} f)(x) := \int_0^1 \Big( 2\sum_{n=1}^{\infty} e^{-\pi^2 n^2 T} \sin(n\pi x)\sin(n\pi y) \Big) f(y)\, dy,$$

we may formulate the inverse problem as an integral equation of the first kind

$$T_{BH} f = g.$$

Page 24: Iterative Regularization Methods for Inverse Problems: Lecture 1

severe ill-posedness of the inverse problem

• The direct solution operator $T_{BH}$ damps out high frequency components with an exponentially decreasing factor $e^{-\pi^2 n^2 T}$.
• Therefore, in the inverse problem a data error in the n-th Fourier component of g is amplified by the factor $e^{\pi^2 n^2 T}$! This shows that the inverse problem is severely ill-posed.
• The inverse problem does not have a solution for arbitrary g ∈ L²([0,1]).
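The size of these damping and amplification factors can be made concrete with a few lines of code (illustrative; T = 0.1 is an arbitrary choice):

```python
import numpy as np

T = 0.1
n = np.arange(1, 6)
sigma = np.exp(-np.pi**2 * n**2 * T)  # damping of the n-th Fourier component
print(sigma)        # roughly 3.7e-01, 1.9e-02, 1.4e-04, 1.4e-07, 1.9e-11
print(1.0 / sigma)  # amplification of a data error in that component
```

Already the fifth Fourier component of a data error is amplified by about 10¹¹.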

Page 25: Iterative Regularization Methods for Inverse Problems: Lecture 1

example 3: computerized tomography

forward problem: Given the absorption coefficient for x-rays in a slice of the body, compute all shadow images by integration!

inverse problem: Given all shadow images, find the absorption coefficient!

Page 26: Iterative Regularization Methods for Inverse Problems: Lecture 1

Radon transform

• Mathematically the forward problem of computerized tomography is described by the Radon transform

$$R : L^2(B) \to L^2(S^1 \times [-1,1])$$

with $B := \{x \in \mathbb{R}^2 : |x| \le 1\}$ and $S^1 := \partial B$.
• For a direction $\vartheta \in S^1$ define an orthogonal direction by $\vartheta^\perp := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\vartheta$, extend $f \in L^2(B)$ by 0 to $\mathbb{R}^2$, and define

$$(Rf)(\vartheta, s) := \int_{-\infty}^{\infty} f(s\vartheta + t\vartheta^\perp)\, dt.$$

Johann Radon, 1887–1956

Page 27: Iterative Regularization Methods for Inverse Problems: Lecture 1

example 4: deconvolution problems in imaging

confocal fluorescence microscopy
• Focused light excites fluorescent markers in the object.
• 3d imaging of living cells

Page 28: Iterative Regularization Methods for Inverse Problems: Lecture 1

the Hubble space telescope

• In early 1990 the Hubble Space Telescope was launched into low-earth orbit, outside of the disturbing atmosphere, in order to provide images with an unprecedented spatial resolution.

• Unfortunately, soon after launch a manufacturing error in the main mirror was detected, causing severe spherical aberrations in the images.

• Therefore, before the space shuttle Endeavour visited the telescope in 1993 to fix the error, astronomers employed inverse problem techniques to improve the blurred images.

Page 29: Iterative Regularization Methods for Inverse Problems: Lecture 1

deblurring problem

• The true image f and the blurred image g are related by a first kind integral equation

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} k(x,y;x',y')\, f(x',y')\, dx'\, dy' = g(x,y),$$

where k is the blurring function.
• $k(\,\cdot\,; x_0, y_0)$ describes the blurred image of a point source at $(x_0, y_0)$.
• It is usually assumed that k is spatially invariant, i.e.

$$k(x,y;x',y') = h(x - x',\, y - y'), \qquad x, x', y, y' \in \mathbb{R}.$$

h is called the point spread function.

Page 30: Iterative Regularization Methods for Inverse Problems: Lecture 1

operator equation

• For a spatially invariant psf the direct problem is described by the convolution operator

$$(T_{DB} f)(x,y) := \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(x - x',\, y - y')\, f(x',y')\, dx'\, dy'.$$

• The inverse problem is then described by the operator equation

$$T_{DB} f = g.$$

• In principle the solution can be computed by the Fourier convolution theorem:

$$f = \frac{1}{2\pi}\, \mathcal{F}^{-1}\big(1/\hat{h}\big)\, \mathcal{F} g.$$

• The multiplication by $1/\hat{h}$ is unstable since $\hat{h} := \mathcal{F}h$ vanishes asymptotically for large arguments. Therefore the inverse problem is ill-posed.
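A one-dimensional sketch of this instability (illustrative only: the Gaussian psf, grid size, noise level, and cut-off threshold are all assumed values) shows the naive Fourier division blowing up while a crude spectral cut-off keeps the error moderate:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = np.linspace(-1.0, 1.0, N, endpoint=False)
f = (np.abs(x) < 0.3).astype(float)            # true "image" (1d for simplicity)
h = np.exp(-(x / 0.05) ** 2)                   # Gaussian point spread function
h /= h.sum()

H = np.fft.fft(np.fft.ifftshift(h))            # transfer function F(h)
g = np.real(np.fft.ifft(H * np.fft.fft(f)))    # blurred image
g_delta = g + 1e-4 * rng.standard_normal(N)    # noisy data

f_naive = np.real(np.fft.ifft(np.fft.fft(g_delta) / H))   # divides by nearly vanishing F(h)
mask = np.abs(H) > 1e-3                                    # crude spectral cut-off
F_reg = np.where(mask, np.fft.fft(g_delta) / np.where(mask, H, 1.0), 0.0)
f_reg = np.real(np.fft.ifft(F_reg))

print(np.max(np.abs(f_naive - f)))   # astronomically large
print(np.max(np.abs(f_reg - f)))     # of order 1 near the jumps, small elsewhere
```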

Page 31: Iterative Regularization Methods for Inverse Problems: Lecture 1

example 5: inverse scattering problems

acoustic waves:
• c = speed of sound
• U(t,x) = velocity potential: −∂U/∂t = pressure, grad U = density · velocity
• ω = frequency
• k = ω/c = wave number

• wave equation: $\frac{\partial^2 U}{\partial t^2} = c^2 \Delta U$
• time-harmonic waves: $U(x,t) = \operatorname{Re}\big(u(x)\, e^{-i\omega t}\big)$
• Helmholtz equation: $\Delta u + k^2 u = 0$
• boundary conditions:
  • sound-soft obstacles Ω: the pressure vanishes at the boundary, i.e. u = 0 on ∂Ω
  • sound-hard obstacles Ω: the normal component of the velocity vanishes at the boundary, i.e. ∂u/∂ν = 0 on ∂Ω

Page 32: Iterative Regularization Methods for Inverse Problems: Lecture 1

forward problem

Ω ⊂ ℝᵐ compact obstacle, ℝᵐ \ Ω connected
$u^i(x) = e^{-ik\,x\cdot d}$ incident plane wave with direction |d| = 1
k ≈ (diam Ω)⁻¹ wave number

Given the domain Ω and the incident field $u^i$, find the scattered field $u^s$ such that the total field $u := u^i + u^s$ satisfies
1. the Helmholtz equation $\Delta u + k^2 u = 0$ in $\mathbb{R}^m \setminus \Omega$,
2. the Sommerfeld radiation condition $r^{\frac{m-1}{2}}\left(\frac{\partial u^s}{\partial r} - ik\, u^s\right) \to 0$ as $r \to \infty$,
3. either the boundary condition u = 0 or ∂u/∂ν = 0 on ∂Ω.

Page 33: Iterative Regularization Methods for Inverse Problems: Lecture 1

inverse obstacle scattering problem

• The scattered field $u^s$ has the asymptotic behavior

$$u^s(x) = \frac{e^{ik|x|}}{|x|^{\frac{m-1}{2}}} \left( u_\infty\left(\frac{x}{|x|}\right) + O\left(\frac{1}{|x|}\right) \right), \qquad |x| \to \infty.$$

The amplitude factor $u_\infty$, which is an analytic function of the direction $\frac{x}{|x|}$, is called the far field pattern or scattering amplitude of $u^s$.
• Given far field patterns $u_\infty$ corresponding to one or many incident waves $u^i$, find the shape of the scatterer Ω!

Page 34: Iterative Regularization Methods for Inverse Problems: Lecture 1

example 6: Electrical Impedance Tomography (EIT)

Let Ω ⊂ ℝᵐ describe an electrically conducting medium.

• u voltage, σ > 0 conductivity
• E = −grad u electric field, j electric current
• H magnetic field, I = σ ∂u/∂ν boundary current

• derivation of the partial differential equation in Ω:
Ohm's law: j = σE = −σ grad u
Maxwell eq.: rot H = j + ∂ₜ(εE)
Apply div and use the identity div rot = 0 and the assumption ∂ₜE = 0:

$$0 = \operatorname{div} j = -\operatorname{div}(\sigma \operatorname{grad} u) \quad \text{in } \Omega.$$

• conservation of charge (Gauss's law): $\int_{\partial\Omega} I\, ds = 0$
• Since the voltage is only determined up to a constant, we can normalize u by $\int_{\partial\Omega} u\, ds = 0$.

Page 35: Iterative Regularization Methods for Inverse Problems: Lecture 1

Electrical Impedance Tomography (EIT)

Direct problem: Given σ, determine the voltage u|∂Ω on the boundary for all current distributions I satisfying the compatibility condition $\int_{\partial\Omega} I\, ds = 0$ by solving the elliptic boundary value problem

$$-\operatorname{div}(\sigma \operatorname{grad} u) = 0 \quad \text{in } \Omega,$$
$$\sigma \frac{\partial u}{\partial \nu} = I \quad \text{on } \partial\Omega$$

with the normalization condition $\int_{\partial\Omega} u\, ds = 0$. In other words, determine the Neumann-to-Dirichlet map $\Lambda_\sigma : H^{-1/2}(\partial\Omega) \to H^{1/2}(\partial\Omega)$ defined by $\Lambda_\sigma I := u|_{\partial\Omega}$.

Inverse problem: Given measurements of the voltage distribution u|∂Ω for all current distributions I, i.e. given the Neumann-to-Dirichlet map Λσ, reconstruct the conductivity σ.

Page 36: Iterative Regularization Methods for Inverse Problems: Lecture 1

Outline

1 Definitions of inverse problems and ill-posedness

2 Examples of inverse problems

3 Introduction to regularization methods

4 Basics of linear regularization theory

Page 37: Iterative Regularization Methods for Inverse Problems: Lecture 1

singular value decomposition (SVD)

• X, Y Hilbert spaces
• T ∈ L(X,Y) compact, dim T(X) = ∞
• P orthogonal projection onto N(T) := {f ∈ X : Tf = 0}

Theorem
There exist singular values σ₀ ≥ σ₁ ≥ ··· > 0 and orthonormal systems {u₀, u₁, ...} ⊂ X and {v₀, v₁, ...} ⊂ Y such that

$$f = \sum_{n=0}^{\infty} \langle f, u_n\rangle\, u_n + Pf, \qquad Tf = \sum_{n=0}^{\infty} \sigma_n \langle f, u_n\rangle\, v_n$$

for all f ∈ X. The σₙ are uniquely determined by T and satisfy $\lim_{n\to\infty} \sigma_n = 0$. (σₙ, uₙ, vₙ) is called a singular system of T.

Page 38: Iterative Regularization Methods for Inverse Problems: Lecture 1

alternative formulation of the SVD

• Assume for simplicity that N(T) = {0} and $\overline{T(X)} = Y$.
• Then {uₙ : n ∈ ℕ} ⊂ X and {vₙ : n ∈ ℕ} ⊂ Y are Hilbert bases.
• Denote by $U : X \to l^2(\mathbb{N})$, $Uf := (\langle f, u_n\rangle)_{n\in\mathbb{N}}$, and $V : Y \to l^2(\mathbb{N})$, $Vg := (\langle g, v_n\rangle)_{n\in\mathbb{N}}$, the corresponding unitary operators, and let $\Sigma : l^2(\mathbb{N}) \to l^2(\mathbb{N})$, $(\Sigma a)_n := \sigma_n a_n$.
• Then the following diagram commutes:

      X   --- T -->   Y
      | U             | V
      v               v
    l²(ℕ) --- Σ --> l²(ℕ)

• In other words, T = V*ΣU. (This generalizes the matrix SVD.)

Page 39: Iterative Regularization Methods for Inverse Problems: Lecture 1

sequence model

• By the previous formulation of the SVD, every operator equation Tf = g with a compact operator in Hilbert spaces can be reduced to a multiplication operator Σ on the sequence space l²(ℕ).
• In other words, the operator equation Tf = g decouples via the SVD into a countable number of scalar linear equations

$$\sigma_n f_n = g_n, \qquad n \in \mathbb{N},$$

for the Fourier coefficients $f_n := \langle f, u_n\rangle$ and $g_n := \langle g, v_n\rangle$.
• Recall, however, that $\lim_{n\to\infty}\sigma_n = 0$. Therefore, $(g_n/\sigma_n)_{n\in\mathbb{N}}$ does not necessarily belong to l²(ℕ) for $(g_n)_{n\in\mathbb{N}} \in l^2(\mathbb{N})$. This yields:

Page 40: Iterative Regularization Methods for Inverse Problems: Lecture 1

Picard criterion

Theorem (Picard)
Under the assumptions above the equation Tf = g is solvable if and only if the Picard criterion

$$\sum_{n=0}^{\infty} \frac{1}{\sigma_n^2}\, |\langle g, v_n\rangle|^2 < \infty$$

is satisfied. Then the solution is given by

$$f = \sum_{n=0}^{\infty} \frac{1}{\sigma_n} \langle g, v_n\rangle\, u_n.$$

Page 41: Iterative Regularization Methods for Inverse Problems: Lecture 1

sequence model with noise

• Now assume that the data g = Tf† are not given exactly, but only with some measurement error err,

$$g^\delta = g + \mathrm{err},$$

with a known bound ‖err‖ ≤ δ.
• Solving the equations $\sigma_n f_n^\delta = g_n^\delta$ yields

$$f_n^\delta = \frac{g_n^\delta}{\sigma_n} = f_n^\dagger + \frac{\mathrm{err}_n}{\sigma_n}.$$

• Since $\lim_{n\to\infty}\sigma_n = 0$, this shows that we have

infinite noise amplification!

Page 42: Iterative Regularization Methods for Inverse Problems: Lecture 1

truncated SVD (spectral cut-off)

• One remedy is to choose some cut-off parameter α and define

$$f_n^\delta := \begin{cases} g_n^\delta / \sigma_n, & \text{if } \sigma_n^2 > \alpha, \\ 0, & \text{else.} \end{cases}$$

• This ensures that the noise amplification is bounded by $1/\sqrt{\alpha}$.
• Disadvantage: The computation of the truncated SVD requires explicit knowledge of an SVD of T. This is only known in a few exceptional cases, and the numerical computation of an SVD is often prohibitively expensive. (A matrix version is sketched below.)
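In the finite-dimensional setting the truncated SVD is a few lines of numpy; this sketch is illustrative (the discretized integration operator from example 1 serves as an assumed test problem, and the noise level and cut-off values are arbitrary):

```python
import numpy as np

def tsvd_solve(A, g_delta, alpha):
    """Truncated SVD: invert only the singular values with sigma^2 > alpha."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    inv = np.where(s**2 > alpha, 1.0 / np.maximum(s, np.sqrt(alpha)), 0.0)
    return Vt.T @ (inv * (U.T @ g_delta))

rng = np.random.default_rng(1)
n = 200
A = np.tril(np.ones((n, n))) / n        # discretization of (T_D f)(x) = int_0^x f(t) dt
x = (np.arange(n) + 0.5) / n
f_true = np.sin(2 * np.pi * x)
g_delta = A @ f_true + 1e-3 * rng.standard_normal(n)

for alpha in [1e-8, 1e-4, 1e-2]:        # too small / reasonable / too large
    err = np.linalg.norm(tsvd_solve(A, g_delta, alpha) - f_true) / np.sqrt(n)
    print(alpha, err)
```

The error is large for too small α (noise amplification) and for too large α (over-smoothing), illustrating the trade-off discussed later.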

Page 43: Iterative Regularization Methods for Inverse Problems: Lecture 1

Tikhonov regularization

• $f_\alpha^\delta$ = minimizer of the strictly convex, quadratic functional

$$J_\alpha(f) := \|Tf - g^\delta\|_Y^2 + \alpha \|f - f_0\|_X^2,$$

α > 0 regularization parameter, f₀ ∈ X initial guess.
• First order optimality conditions (necessary and sufficient due to strict convexity): for all h ∈ X we have

$$0 = J_\alpha'[f_\alpha^\delta]h = 2\left\langle Tf_\alpha^\delta - g^\delta,\, Th \right\rangle_Y + 2\alpha \left\langle f_\alpha^\delta - f_0,\, h \right\rangle_X.$$

• explicit formula (a matrix transcription is sketched below):

$$f_\alpha^\delta = (T^*T + \alpha I)^{-1}\left(T^* g^\delta + \alpha f_0\right)$$

Page 44: Iterative Regularization Methods for Inverse Problems: Lecture 1

iterated Tikhonov regularization

• Once we have computed the Tikhonov solution $f_\alpha^\delta$ we may find a better approximation by applying Tikhonov regularization again using $f_\alpha^\delta$ as initial guess f₀.
• This leads to iterated Tikhonov regularization:

$$f_{\alpha,0}^\delta := 0, \qquad f_{\alpha,m}^\delta := (T^*T + \alpha I)^{-1}\left(T^* g^\delta + \alpha f_{\alpha,m-1}^\delta\right), \quad m \ge 1.$$

• Note that only one operator T*T + αI has to be inverted to compute $f_{\alpha,m}^\delta$ for any m ∈ ℕ. If we use, e.g., an LU factorization to apply $(T^*T + \alpha I)^{-1}$, the computation of $f_{\alpha,m}^\delta$ for m ≥ 2 is not much more expensive than the computation of $f_{\alpha,1}^\delta$; see the sketch below.
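A sketch of this reuse argument in matrix form (assuming a discretized operator A; a Cholesky factorization stands in for the LU factorization here, since A^T A + αI is symmetric positive definite):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def iterated_tikhonov(A, g_delta, alpha, m):
    """m steps of iterated Tikhonov regularization, starting from f_{alpha,0} = 0.
    The matrix A^T A + alpha I is factorized once and reused in every step."""
    n = A.shape[1]
    factor = cho_factor(A.T @ A + alpha * np.eye(n))   # one factorization ...
    Atg = A.T @ g_delta
    f = np.zeros(n)
    for _ in range(m):
        f = cho_solve(factor, Atg + alpha * f)         # ... m cheap triangular solves
    return f
```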

Page 45: Iterative Regularization Methods for Inverse Problems: Lecture 1

Landweber iteration

• Introduce the output least squares functional

$$J(f) := \|Tf - g^\delta\|_Y^2.$$

• The direction of steepest descent at f is given by $-\operatorname{grad} J(f) = -T^*(Tf - g^\delta)$ (up to a factor 2, which can be absorbed into the step length).
• This leads to the following iteration formula for minimizing J, known as Landweber iteration:

$$f_{n+1}^\delta := f_n^\delta - \mu T^*\big(Tf_n^\delta - g^\delta\big),$$

where μ > 0 is a step-length parameter (see the sketch below).
• If $f_0^\delta = 0$, an induction argument shows that

$$f_n^\delta = \sum_{j=0}^{n-1} (I - \mu T^*T)^j\, \mu T^* g^\delta.$$

Page 46: Iterative Regularization Methods for Inverse Problems: Lecture 1

functional calculus

• Let A : X → X be compact and self-adjoint, and let {uₙ : n ∈ ℕ} ⊂ X be a Hilbert basis of eigenvectors with

$$A u_n = \lambda_n u_n.$$

• For a bounded function $\varphi : \sigma(A) \to \mathbb{R}$ on the spectrum $\sigma(A) := \{\lambda_n : n \in \mathbb{N}\} \cup \{0\}$ define $\varphi(A) \in L(X)$ by

$$\varphi(A) f := \sum_{n=0}^{\infty} \varphi(\lambda_n)\, \langle f, u_n\rangle\, u_n.$$

• Properties:
  • $p(A) = \sum_{j=0}^{m} p_j A^j$ if $p(t) = \sum_{j=0}^{m} p_j t^j$
  • $(\alpha\varphi + \beta\psi)(A) = \alpha\varphi(A) + \beta\psi(A)$ for α, β ∈ ℝ
  • $(\varphi\cdot\psi)(A) = \varphi(A)\,\psi(A)$
  • $\|\varphi(A)\| \le \sup_{t\in\sigma(A)} |\varphi(t)|$

Page 47: Iterative Regularization Methods for Inverse Problems: Lecture 1

spectral regularization methods

• Consider a family of functions $q_\alpha : \sigma(T^*T) \to \mathbb{R}$, α > 0, such that
  • $\lim_{\alpha \to 0} q_\alpha(t) = \frac{1}{t}$ for all t > 0,
  • $\sup_{t \in \sigma(T^*T)} |q_\alpha(t)| \le \frac{C_e}{\alpha}$ for all α > 0.
• Define a regularized solution to $Tf = g^\delta$ by

$$f_\alpha^\delta := q_\alpha(T^*T)\, T^* g^\delta.$$

• Examples (a generic implementation is sketched below):

  regularization method            q_α(t)
  spectral cut-off                 (1/t) · χ_[α,∞)(t)
  Tikhonov regularization          1/(α+t)
  iterated Tikhonov (n steps)      ((t+α)ⁿ − αⁿ) / (t(t+α)ⁿ)
  Landweber iteration (n = 1/α)    Σ_{j=0}^{n−1} (1−μt)ʲ

Page 48: Iterative Regularization Methods for Inverse Problems: Lecture 1

estimation of the approximation error

• We study the error of the reconstruction $f_\alpha := q_\alpha(T^*T)\, T^* g$ for exact data $g = Tf^\dagger$.
• This so-called approximation error is given by

$$f^\dagger - f_\alpha = \big(I - q_\alpha(T^*T)\,T^*T\big) f^\dagger = r_\alpha(T^*T)\, f^\dagger,$$

where $r_\alpha(t) := 1 - t\,q_\alpha(t)$.
• Assumption: $|r_\alpha(t)| \le C_r$ for all α > 0 and t ≥ 0.
• Examples:

  regularization method            q_α(t)                          r_α(t)
  spectral cut-off                 (1/t) · χ_[α,∞)(t)              χ_[0,α)(t)
  Tikhonov regularization          1/(α+t)                         α/(α+t)
  iterated Tikhonov (n steps)      ((t+α)ⁿ − αⁿ) / (t(t+α)ⁿ)       (α/(α+t))ⁿ
  Landweber iteration (n = 1/α)    Σ_{j=0}^{n−1} (1−μt)ʲ           (1−μt)ⁿ

Page 49: Iterative Regularization Methods for Inverse Problems: Lecture 1

estimation of the propagated data noise error

• The total error for noisy data can be estimated as follows:

$$\|f^\dagger - f_\alpha^\delta\| \le \|f^\dagger - f_\alpha\| + \|f_\alpha - f_\alpha^\delta\|.$$

• For the propagated data noise error we obtain the estimate

$$\|f_\alpha - f_\alpha^\delta\|^2 = \|q_\alpha(T^*T)\,T^*(g - g^\delta)\|^2 = \|T^* q_\alpha(TT^*)\,\mathrm{err}\|^2$$
$$= \big\langle TT^* q_\alpha(TT^*)\,\mathrm{err},\; q_\alpha(TT^*)\,\mathrm{err}\big\rangle \le \|TT^* q_\alpha(TT^*)\|\,\|q_\alpha(TT^*)\|\,\|\mathrm{err}\|^2$$
$$\le \Big(\sup_{t \ge 0} |t\,q_\alpha(t)|\Big)\Big(\sup_{t \ge 0} |q_\alpha(t)|\Big)\,\delta^2 \le (1 + C_r)\,\frac{C_e}{\alpha}\,\delta^2,$$

since $t\,q_\alpha(t) = 1 - r_\alpha(t)$ implies $\sup_{t \ge 0} |t\,q_\alpha(t)| \le 1 + C_r$.

Page 50: Iterative Regularization Methods for Inverse Problems: Lecture 1

Outline

1 Definitions of inverse problems and ill-posedness

2 Examples of inverse problems

3 Introduction to regularization methods

4 Basics of linear regularization theory

Page 51: Iterative Regularization Methods for Inverse Problems: Lecture 1

regularization methods: notation

• We consider a family of continuous (not necessarily linear) operators $R_\alpha : Y \to X$, defined for α in some index set A, which approximate the unbounded operator $T^{-1}$.
• Let $\alpha : (0,\infty) \times Y \to A$ be a parameter choice rule. For given noisy data $g^\delta \in Y$ with noise level δ > 0 such that $\|g^\delta - g\| \le \delta$, the exact solution is approximated by

$$T^{-1} g \approx R_{\alpha(\delta, g^\delta)}\, g^\delta.$$

Examples:
• Tikhonov regularization: $R_\alpha = (\alpha I + T^*T)^{-1} T^*$
• spectral cut-off: $R_\alpha g := \sum_{n : \sigma_n \ge \alpha} \frac{1}{\sigma_n} \langle g, v_n\rangle\, u_n$

Page 52: Iterative Regularization Methods for Inverse Problems: Lecture 1

regularization methods: definition

Definition

• The pair (R, α) is called a regularization method for the problem Tf = g if the worst case error tends to 0 with the noise level, i.e.

$$\sup\left\{\left\|R_{\alpha(\delta,g^\delta)}\, g^\delta - T^{-1}g\right\| : g^\delta \in Y,\ \|g^\delta - g\| \le \delta\right\} \to 0 \quad \text{as } \delta \to 0$$

for all g ∈ R(T).
• α is called an a-priori parameter choice rule if α(δ, g^δ) depends only on δ. Otherwise α is called an a-posteriori parameter choice rule.

Page 53: Iterative Regularization Methods for Inverse Problems: Lecture 1

error decomposition

Let α ∈ A = (0,∞) and assume that the $R_\alpha$ are linear operators with $R_\alpha g \to T^{-1}g$ as α → 0 for all g ∈ R(T). Then the total error can be decomposed by the triangle inequality

$$\|R_\alpha g^\delta - T^{-1}g\| \le \|R_\alpha g^\delta - R_\alpha g\| + \|R_\alpha g - T^{-1}g\|$$

into
• a propagated data noise error $\|R_\alpha g^\delta - R_\alpha g\| \le \delta \|R_\alpha\|$, which explodes as α → 0, and
• an approximation error $\|R_\alpha g - T^{-1}g\|$, which tends to 0 as α → 0.

We have a trade-off between accuracy (small α) and stability (large α).

Page 54: Iterative Regularization Methods for Inverse Problems: Lecture 1

balancing the error components

Page 55: Iterative Regularization Methods for Inverse Problems: Lecture 1

Morozov’s discrepancy principle

For a fixed parameter τ ≥ 1 choose the largest α > 0 for which $\|Tf_\alpha^\delta - g^\delta\| \le \tau\delta$. Here $f_\alpha^\delta := R_\alpha g^\delta$ denotes the reconstruction for the regularization parameter α:

$$\alpha(\delta, g^\delta) := \sup\left\{\alpha > 0 : \|Tf_\alpha^\delta - g^\delta\| \le \tau\delta\right\}.$$

Do not try to fit the noise!

For iterative methods such as Landweber iteration, the discrepancy principle consists in stopping the iteration at the first index N for which $\|Tf_N^\delta - g^\delta\| \le \tau\delta$ (see the sketch below).
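For Landweber iteration the discrepancy principle is a simple stopping test inside the loop; a sketch under the same matrix conventions as in the earlier examples (τ = 1.1 and the iteration budget are arbitrary choices):

```python
import numpy as np

def landweber_discrepancy(A, g_delta, delta, mu, tau=1.1, max_iter=100000):
    """Landweber iteration stopped at the first N with ||A f_N - g_delta|| <= tau*delta."""
    f = np.zeros(A.shape[1])
    for N in range(max_iter):
        residual = A @ f - g_delta
        if np.linalg.norm(residual) <= tau * delta:
            return f, N
        f -= mu * (A.T @ residual)
    return f, max_iter  # budget exhausted before reaching the discrepancy level
```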

Page 56: Iterative Regularization Methods for Inverse Problems: Lecture 1

numerical realization of the discrepancy principle

• For Tikhonov regularization and most other regularization methods the function $\alpha \mapsto \|Tf_\alpha^\delta - g^\delta\|$ is monotonically increasing. Therefore, α(δ, g^δ) can be found by a simple bisection algorithm; a sketch follows below.
• Faster convergence can be achieved by Newton's method applied to β := 1/α as the unknown. The function $\beta \mapsto \|Tf_{1/\beta}^\delta - g^\delta\|^2$ turns out to be convex and monotonically decreasing. Therefore, Newton's method is globally convergent.
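A bisection sketch for Tikhonov regularization, done in log α for robustness (an assumption: the initial bracket [lo, hi] must actually contain the crossing point for the data at hand):

```python
import numpy as np

def alpha_discrepancy(A, g_delta, delta, tau=1.1, steps=60, lo=1e-14, hi=1e4):
    """Largest alpha with ||A f_alpha - g_delta|| <= tau * delta, found by
    bisection using the monotonicity of alpha -> ||A f_alpha - g_delta||."""
    n = A.shape[1]
    AtA, Atg = A.T @ A, A.T @ g_delta
    def res(alpha):
        f = np.linalg.solve(AtA + alpha * np.eye(n), Atg)
        return np.linalg.norm(A @ f - g_delta)
    for _ in range(steps):
        mid = np.sqrt(lo * hi)        # geometric midpoint: bisection in log(alpha)
        if res(mid) <= tau * delta:
            lo = mid                  # residual still small enough: alpha may grow
        else:
            hi = mid
    return lo
```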

Page 57: Iterative Regularization Methods for Inverse Problems: Lecture 1

error-free parameter choice rules

Theorem (Bakushinskii)
Let T : X → Y be one-to-one with dense range. Assume there exists a regularization method (Rα, α) for Tf = g with a parameter choice rule α(δ, g^δ) which depends only on g^δ, but not on δ. Then $T^{-1}$ is continuous.

Page 58: Iterative Regularization Methods for Inverse Problems: Lecture 1

negative results

We consider regularization methods (Rα, α) which satisfy the following assumption:

$R_\alpha : Y \to X$, α ∈ A ⊂ (0,∞), are linear operators and

$$\lim_{\delta \to 0}\, \sup\left\{\alpha(\delta, g^\delta) : g^\delta \in Y,\ \|g^\delta - g\| \le \delta\right\} = 0.$$

Then $R_\alpha$ converges pointwise to $T^{-1}$:

$$\lim_{\alpha \to 0} R_\alpha g = T^{-1} g \quad \text{for all } g \in R(T).$$

Theorem
Assume that $T^{-1}$ is unbounded and the assumptions above hold true. Then
• the operators $R_\alpha$ cannot be uniformly bounded with respect to α, and
• the operators $R_\alpha T$ cannot be norm convergent to I as α → 0.

Page 59: Iterative Regularization Methods for Inverse Problems: Lecture 1

arbitrarily slow convergence

Theorem
Assume that there exist a regularization method (Rα, α) for Tf = g and a continuous function φ : [0,∞) → [0,∞) with φ(0) = 0 such that

$$\sup\left\{\left\|R_{\alpha(\delta,g^\delta)}\, g^\delta - T^{-1}g\right\| : g^\delta \in Y,\ \|g^\delta - g\| \le \delta\right\} \le \varphi(\delta)$$

for all g ∈ R(T) with ‖g‖ ≤ 1 and all δ > 0. Then $T^{-1}$ is continuous.