CANADIAN APPLIED MATHEMATICS QUARTERLY, Volume 12, Number 1, Spring 2004

INVERSION OF 2D NMR DATA

CHRISTOPHER BOSE AND ROBERT PICHÉ

Based on results obtained at the Seventh Annual PIMS Industrial Problem Solving Workshop, May 2003. Original problem submitted by Schlumberger Ltd., Calgary, Alberta.

1 Introduction Schlumberger Limited is a multinational company supplying oilfield and information services to a worldwide energy market. These services include both exploration and production tools ranging through seismic and remote sensing, well-logging and reservoir optimization. The problem described in this report is related to well-logging via Nuclear Magnetic Resonance (NMR), a relatively new and developing tool with potential to reveal a range of reservoir properties including porosity and saturation, as well as physical properties of the petroleum deposit.

In order to recover this information from NMR spectra the company must have an effective, efficient and robust algorithm to perform inversion from the dataset to the unknown probability distribution on magnetic relaxation times. This ill-posed problem is encountered in diverse areas of magnetic imaging and there does not appear to be an 'off-the-shelf' solution which the company can apply to its problem. Company scientists have developed a sophisticated algorithm which performs well on some simple test datasets, but they are interested in knowing if there are simpler approaches which could work effectively, or if some limited but useful properties of the density are accessible with a totally different approach.

Our report is organised as follows. In Section 2 we present a careful and complete description of the problem and the work already done by the company. In Section 3 we discuss Truncated Singular Value Regularisation and Tikhonov Regularisation and show how some 'off-the-shelf' Matlab code may be used to good effect on the test datasets provided by the company. In Section 4 we show that one can incorporate higher order regularisation into the

AMS subject classification: Primary 47A52; Secondary 15A29.
Keywords: Inverse problems, image processing, regularisation, Kronecker product.

Copyright © Applied Mathematics Institute, University of Alberta.


company's existing algorithm, answering one specific question raised at the beginning of the workshop. In Section 5 we record our unsuccessful attempt to establish an iterative algorithm for the positively constrained inversion. Finally, in the last section we review our conclusions and make suggestions for future work.

The authors are pleased to acknowledge the excellent work done by other participants in the Schlumberger workshop group at the IPSW. First, Lalitha Venkataramanan, Schlumberger's representative at the workshop, did an admirable job of presenting the problem and filling in the background material as required during the week. Lalitha proved to be an outstanding mathematical colleague and it is a pleasure to acknowledge her contribution. Our six student participants showed a lot of stubborn tenacity in mastering the material that was being discussed, and have contributed much to the exposition in this article. Their names and affiliations are as follows: Zhenlu Cui (Florida State University), Xinghua Deng and Qian Wang (University of Alberta), Ying Han (McGill University), Qingguo Li (Simon Fraser University) and Lin Zhou (New Jersey Institute of Technology).

2 Problem description Schlumberger is interested in using Nuclear Magnetic Resonance (NMR) analysis for exploration in the oil and gas industry. The model problem presented to our group at the workshop involved the recovery of a two-dimensional probability distribution f(x, y) on magnetic field relaxation times in two directions: x, the so-called longitudinal relaxation time, and y, the transverse relaxation time. The data collected is known to be a convolved image of the relaxation time distribution according to the following formula

(1) d(τ1, τ2) = ∫∫ (1 − 2 e^(−τ1/x)) e^(−τ2/y) f(x, y) dx dy.

Other types of data can be collected, involving different convolution kernels, but the forward model in any case is in the form of a 2-D Fredholm integral of the first kind. Since the transformation involves a smooth kernel, it is well known that the corresponding inverse problem is ill-posed [5, p. 2]. Nonetheless, it is important for the company's program to provide some sort of stable and computationally tractable inversion scheme.

The continuous forward model (1) is mainly of theoretical interest since in practice, data is collected at discrete values in the τ1τ2-domain. Therefore, for the rest of this analysis we will assume the data function d is replaced by a data matrix D of dimension m2 × m1. Convolution kernels are similarly


discretized as matrices K1 and K2 with dimensions m1 × n1 and m2 × n2, respectively, and a discrete form of equation (1) is rewritten as

(2) D = K2 F K1^T.

The discrete density F is now an n2 × n1 matrix. K1 and K2 are (generally) rank-deficient with infinite condition number and singular values decaying quickly to zero. So, as expected, the ill-posed problem leads to an ill-conditioned finite-dimensional inversion (2).

Three sets of test data were provided to our group for use during the workshop. Distribution files F were of size n2 × n1 = 100 × 100. Kernel discretisation led to K1 and K2 of size 30 × 100 and 4000 × 100, respectively. There are reasonable grounds for the asymmetric choice in the discretisation grid here. For test inversion problems we replace the data D computed from equation (2) by

D + E = K2 F K1^T + E,

where E is mean-zero Gaussian noise. The object is to recover F. The choice of signal-to-noise ratio for the various test files will be discussed later in the numerical results section.
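This noisy-data construction is easy to reproduce; the following NumPy sketch (our variable names, with small illustrative dimensions rather than the 30 × 100 and 4000 × 100 kernels of the test data) scales mean-zero Gaussian noise to a prescribed relative Frobenius norm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; the workshop data had K1: 30x100, K2: 4000x100, F: 100x100.
m1, n1, m2, n2 = 6, 5, 8, 5
K1 = rng.random((m1, n1))
K2 = rng.random((m2, n2))
F = rng.random((n2, n1))

D_clean = K2 @ F @ K1.T

# Scale mean-zero Gaussian noise E so that ||E||_Fro is a chosen
# fraction (5%, as in the paper's tests) of ||D_clean||_Fro.
E = rng.standard_normal(D_clean.shape)
E *= 0.05 * np.linalg.norm(D_clean, 'fro') / np.linalg.norm(E, 'fro')

D = D_clean + E
noise_ratio = np.linalg.norm(D - D_clean, 'fro') / np.linalg.norm(D_clean, 'fro')
```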

Suppose for the moment we take a completely naive point of view and convert our problem to a standard one-dimensional least squares approximation

(3) vec(F) = argmin_f ‖Kf − vec(D)‖²

where vec(·) represents the operator making a vector from a matrix by stacking columns, K = K1 ⊗ K2 is the Kronecker product of the convolution kernels and f = vec(F). Since K is huge (m1m2 × n1n2 = 120,000 × 10,000) and dense, we may have difficulty fitting it into the RAM memory of a PC even if we ignore the computational complexity of the positivity constraint f ≥ 0 and the ill-posed nature of the high-dimensional inversion! For example, in [3] a similar problem arising in medical imaging was analysed using a CRAY supercomputer. Therefore, we conclude that a numerically reasonable approach of the type required by the company should try to work directly with the factored form (2). This observation was known to the company scientists. For this reason, most of our analysis will be centred on the factored problem of the type

(4) F = argmin_{F≥0} ‖K2 F K1^T − D‖²_Fro
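The equivalence between the factored form (2) and the stacked problem (3) rests on the identity vec(K2 F K1^T) = (K1 ⊗ K2) vec(F), with vec stacking columns. A quick NumPy check on small matrices (illustrative sizes only):

```python
import numpy as np

rng = np.random.default_rng(1)
m1, n1, m2, n2 = 4, 3, 5, 2
K1 = rng.random((m1, n1))
K2 = rng.random((m2, n2))
F = rng.random((n2, n1))

def vec(A):
    # Stack columns, as in the paper (Fortran / column-major order).
    return A.reshape(-1, order='F')

lhs = vec(K2 @ F @ K1.T)
rhs = np.kron(K1, K2) @ vec(F)
err = np.max(np.abs(lhs - rhs))
```

At the full problem size the right-hand side is exactly the computation one must avoid: np.kron(K1, K2) would be a dense 120,000 × 10,000 matrix.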


where ‖·‖_Fro denotes the Frobenius matrix norm.

The problem proposer (L.V.) described a three-step approach to the optimization in (4) which the company has found to be effective on the test datasets. First, the problem dimension is significantly reduced by projection. The range of this projection is related to the singular value decomposition (SVD) truncation of the convolution matrices. Next, the ill-conditioned (but lower-dimensional) problem is regularised as a positively constrained Tikhonov optimization in unfactored form

(5) vec(F) = argmin_f ( ‖Kf − vec(D)‖² + λ²‖f‖² )

where λ is the regularisation parameter. Finally, this constrained problem is solved by the method of Butler, Reeds and Dawson [1] (BRD), which transforms it to an unconstrained optimization with respect to a derived objective function. Details, including methods to choose the regularisation parameter and performance on the test problems, may be found in [11].

With this background in place, our group was asked to consider three lines of investigation.

First, are there other numerically tractable (and possibly simpler) approaches to the inversion problem (2)? In the next section we describe three answers to this question. First we consider using truncated singular value decomposition (TSVD) and Tikhonov regularisation on the factored form (4), greatly reducing the computational and algorithmic complexity of previous methods. Performance on the test datasets is presented. We also consider briefly a direct Galerkin-type approach.

Next, we were asked to consider the possibility of extending the BRD method described by the proposer to higher-order Tikhonov regularization. In particular, can we replace the problem (5) with

(6) F = argmin_{F≥0} { ‖K2 F K1^T − D‖²_Fro + λ²( ‖LF‖²_Fro + ‖F L^T‖²_Fro ) }

where L invokes the discrete first derivatives on the square matrix F? We present mixed results for this second question: we can transform the problem (6) into a standard problem of the type (5), to which the BRD method can subsequently be applied, but we cannot arrange that the transformed problem has the desirable Kronecker product structure. In a slightly different direction, we consider whether the company's idea for an iterative algorithm can be adapted to higher-order regularisation. Unfortunately, the same problems which led to the use of the BRD method appear to confound this approach as well.


Finally, the company scientists believe that it may not be necessary to obtain complete inversion of the problem but that some macroscopic information about the distribution of relaxation times (moments or (x1, x2)-correlations, for example) may be sufficient. While this question may be amenable to a Galerkin approach, without prior information about a restricted class of possible distributions our group saw no tractable way to make progress in this direction during the week of the workshop.

3 TSVD and Tikhonov regularisation

The theory Let us first set up a unified framework for these two well-known regularisation methods in non-factored problems. Consider the discrete linear system

(7) Kf = d

where K is m × n. Let K = UΣV^T be an SVD, where Σ is m × n diagonal, U is m × m orthogonal, and V is n × n orthogonal. The Tikhonov and TSVD regularised solutions of (7) are

(8) f_reg = V φ(Σ^T) U^T d

where

Tikhonov: φ(σ) = σ/(σ² + λ²);        TSVD: φ(σ) = 1/σ if |σ| > λ, 0 otherwise,

is applied elementwise. The nonnegative regularisation parameter λ affects the amount of smoothing of the regularised solution. Its value can be selected by minimising the generalised cross validation (GCV) function

G(λ) = ( ‖K f_reg − d‖ / trace(UΣφ(Σ^T)U^T − I) )² = ( ‖c ∘ (U^T d)‖ / sum(c) )²

where the vector norm is Euclidean, ∘ is the Hadamard product (elementwise multiplication), and

c = diag(Σφ(Σ^T)) − 1.
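As an illustration of the unfactored theory, both filter functions can be applied through the SVD exactly as in (8); the NumPy sketch below (our function and variable names) implements the regularised solution for a generic K:

```python
import numpy as np

def regularised_solution(K, d, lam, method='tikhonov'):
    # f_reg = V phi(Sigma^T) U^T d, formula (8), with phi applied per singular value.
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    if method == 'tikhonov':
        phi = s / (s**2 + lam**2)      # phi(sigma) = sigma / (sigma^2 + lambda^2)
    else:
        phi = np.zeros_like(s)         # TSVD: keep 1/sigma only where sigma > lambda
        keep = s > lam
        phi[keep] = 1.0 / s[keep]
    return Vt.T @ (phi * (U.T @ d))

# Sanity check: for tiny lambda on a well-conditioned K, both filters
# reproduce the ordinary least squares solution.
rng = np.random.default_rng(2)
K = rng.random((6, 4)) + 4.0 * np.eye(6, 4)
f_true = rng.random(4)
d = K @ f_true
f_tik = regularised_solution(K, d, 1e-10, 'tikhonov')
f_tsvd = regularised_solution(K, d, 1e-10, 'tsvd')
```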

Now remember that the coefficient matrix in (7) has the Kronecker product structure

K = K1 ⊗ K2


where K1 is m1 × n1 and K2 is m2 × n2 with m1m2 ≥ n1n2. As we have observed, the key to effective algorithms is to rewrite formulae in ways that avoid explicitly forming the Kronecker product of full matrices.

Let

F = reshape_{n2×n1}[f] and D = reshape_{m2×m1}[d].

Then the linear system (7) is obtained by applying the vec operator to both sides of the matrix equation

(9) K2 F K1^T = D.

This equation does not involve the Kronecker product. Similar techniques can be used to eliminate expensive Kronecker products from the regularisation formulae, as follows.

Let K1 = U1 Σ1 V1^T and K2 = U2 Σ2 V2^T be SVDs. Then

(U1 ⊗ U2)(Σ1 ⊗ Σ2)(V1 ⊗ V2)^T

is an SVD of K [7, Thm 4.2.15]. The formula (8) for the regularised solution can therefore be written

(10) F_reg = reshape_{n2×n1}[f_reg]
           = reshape_{n2×n1}[(V1 ⊗ V2) φ(Σ1^T ⊗ Σ2^T)(U1 ⊗ U2)^T d]
           = V2 · reshape_{n2×n1}[φ(Σ1^T ⊗ Σ2^T) vec(U2^T D U1)] · V1^T.

The Kronecker product formation and multiplication in (10) only involve diagonal matrices, so the formula can be implemented efficiently with appropriate data structures. Further savings are possible by using the "economy size" version of the SVD of K2 when m2 > n2.
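A minimal NumPy sketch of formula (10) for the Tikhonov filter (our names; the explicit Kronecker solve is included only as a small-scale cross-check and would be infeasible at the full problem size):

```python
import numpy as np

def factored_tikhonov(K1, K2, D, lam):
    # F_reg = V2 * reshape[ phi(S1^T x S2^T) vec(U2^T D U1) ] * V1^T, eq. (10).
    U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)
    S = np.outer(s2, s1)              # all products sigma2_i * sigma1_j
    Phi = S / (S**2 + lam**2)         # Tikhonov filter, elementwise
    core = Phi * (U2.T @ D @ U1)      # vec/reshape handled implicitly
    return V2t.T @ core @ V1t         # V2 * core * V1^T

# Cross-check against the explicit (small) Kronecker formulation.
rng = np.random.default_rng(3)
K1 = rng.random((6, 4)); K2 = rng.random((7, 3))
D = rng.random((7, 6))
lam = 0.1
F_fac = factored_tikhonov(K1, K2, D, lam)

K = np.kron(K1, K2)
d = D.reshape(-1, order='F')
f = np.linalg.solve(K.T @ K + lam**2 * np.eye(K.shape[1]), K.T @ d)
F_ref = f.reshape((3, 4), order='F')
err = np.max(np.abs(F_fac - F_ref))
```

The factored routine never forms a matrix larger than the two kernels and the n2 × n1 filter grid, which is the whole point of (10).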

Similarly, the formula for the GCV function can be written

G(λ) = ( ‖C ∘ (U2^T D U1)‖_Fro / sum(C) )²

where

C = reshape_{m2×m1}[diag((Σ1 ⊗ Σ2) φ(Σ1^T ⊗ Σ2^T))] − 1.

Here again the only Kronecker products are of diagonal matrices.
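Once the two small SVDs are in hand, this Kronecker-free GCV function can be evaluated cheaply over a grid of λ values; the sketch below (our names, Tikhonov filter, and a plain grid search in place of a proper 1-D minimiser) is one way to do it:

```python
import numpy as np

rng = np.random.default_rng(4)
m1, n1, m2, n2 = 6, 4, 7, 3
K1 = rng.random((m1, n1)); K2 = rng.random((m2, n2))
D = rng.random((m2, m1))

U1, s1, _ = np.linalg.svd(K1)   # full U1: m1 x m1
U2, s2, _ = np.linalg.svd(K2)   # full U2: m2 x m2

B = U2.T @ D @ U1               # computed once, reused for every lambda

def gcv(lam):
    # sigma*phi(sigma) on the full m2 x m1 grid; products involving a zero
    # singular value contribute sigma*phi(sigma) = 0.
    sphi = np.zeros((m2, m1))
    prod = np.outer(s2, s1)     # n2 x n1 block of nonzero products
    sphi[:n2, :n1] = prod**2 / (prod**2 + lam**2)   # Tikhonov filter
    C = sphi - 1.0
    return (np.linalg.norm(C * B, 'fro') / C.sum())**2

lams = np.logspace(-4, 1, 200)
G = np.array([gcv(l) for l in lams])
lam_best = lams[np.argmin(G)]
```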


Numerical results The problem proposer provided K1 (of size 30 × 100), K2 (of size 4000 × 100), and three different 100 × 100 F matrices. Measurement data was generated by adding zero-mean pseudorandom noise E to K2 F K1^T; the noise variance was set so that ‖E‖_Fro = 0.05 ‖K2 F K1^T‖_Fro.

Each of the following regularisations (including SVD and GCV curve computations) took about 12 seconds to compute in Matlab 5.2 on a 30 MB memory partition of a 400 MHz Powerbook. The GCV minimisation appears to select reasonable regularisation parameters, and Tikhonov and TSVD regularisation give about the same results for all three models. The data and Matlab code are available at http://alpha.cc.tut.fi/~piche/ipsw2003/

[Figure: model A, Tikhonov regularisation. Panels: true distribution F, noisy data, GCV curve G_Tikh with minimum GCV at λ = 1.2906, and regularised solution F_Tikh.]


[Figure: model A, TSVD regularisation. Panels: true distribution F, noisy data, GCV curve G_tsvd with minimum GCV at λ = 1.9375, and regularised solution F_tsvd.]


[Figure: model B, Tikhonov regularisation. Panels: true distribution F, noisy data, GCV curve G_Tikh with minimum GCV at λ = 3.022, and regularised solution F_Tikh.]


[Figure: model B, TSVD regularisation. Panels: true distribution F, noisy data, GCV curve G_tsvd with minimum GCV at λ = 3.3163, and regularised solution F_tsvd.]


[Figure: model C, Tikhonov regularisation. Panels: true distribution F, noisy data, GCV curve G_Tikh with minimum GCV at λ = 4.5564, and regularised solution F_Tikh.]


[Figure: model C, TSVD regularisation. Panels: true distribution F, noisy data, GCV curve G_tsvd with minimum GCV at λ = 5.3422, and regularised solution F_tsvd.]


Parameterised methods Our group briefly considered the possibility of using a Galerkin approach to the inversion problem (1). Thus, we make the ansatz

(11) f(x, y) = Σ_i f(x, y, P_i)

where the f(x, y, P_i) are a finite set of parameterised basis functions with parameter values P_i.

We remark that this approach reduces to the analysis of the previous paragraphs by the choice of the basis functions as 'delta functions'

f(·, F_ij) = F_ij δ_(xi, yj)(·)

centred on the points of the discretisation lattice, and the optimal parameter selection is the matrix F of the previous analysis. As we have discovered, this is a high-dimensional, ill-posed and (because of the positivity requirement) nonlinear problem. Our question then is this: can a judicious choice of basis functions lead to a significantly smaller parameter space (instead of the 10,000-dimensional space already encountered)? Of course the ill-posed nature of the problem must reappear in the Galerkin method as the number of basis functions increases, no matter how cleverly this basis is chosen.

The following are a few examples of basis functions which seem well suited to the test problems given to the group.

1. Gaussian functions (P = {x0, y0, σ, P}):

g(x, y) = (1/(√(2π)σ)) e^(−x̃^T P^T P x̃), where x̃ = (x − x0, y − y0)^T.

2. Box functions with centre at (x0, y0), base dimensions 2a × 2b, and height 1/(4ab).

3. Pyramid functions with centre at (x0, y0), base dimensions 2a × 2b, and height 3/(4ab).

For example, when using a basis consisting of one box function, P = {c, x0, y0, a, b}, to approximate f(x, y), the Fredholm integral becomes

(12) D_P(τ1, τ2) = (c/(4ab)) ∫_{x0−a}^{x0+a} ∫_{y0−b}^{y0+b} (1 − 2 e^(−τ1/x)) e^(−τ2/y) dy dx
                 = (c/(4ab)) ∫_{x0−a}^{x0+a} (1 − 2 e^(−τ1/x)) dx ∫_{y0−b}^{y0+b} e^(−τ2/y) dy.


There is no closed-form solution to this integral; it must be evaluated numerically.
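A sketch of such a numerical evaluation of (12), using a hand-rolled composite Simpson rule in NumPy (the box parameters below are arbitrary illustrative values, not taken from the test data):

```python
import numpy as np

def simpson(f, a, b, n=200):
    # Composite Simpson rule on [a, b]; n must be even.
    x = np.linspace(a, b, n + 1)
    w = np.ones(n + 1); w[1:-1:2] = 4; w[2:-1:2] = 2
    return (b - a) / (3 * n) * np.sum(w * f(x))

def D_box(tau1, tau2, c, x0, y0, a, b):
    # Eq. (12): the kernel separates, so two 1-D quadratures suffice.
    Ix = simpson(lambda x: 1 - 2 * np.exp(-tau1 / x), x0 - a, x0 + a)
    Iy = simpson(lambda y: np.exp(-tau2 / y), y0 - b, y0 + b)
    return c / (4 * a * b) * Ix * Iy

val = D_box(tau1=0.5, tau2=0.3, c=1.0, x0=2.0, y0=1.5, a=0.5, b=0.5)
```

The separation of the double integral into a product of two 1-D quadratures is what keeps this cheap enough to sit inside a nonlinear optimisation loop.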

Based on this simple parameterization, the problem of approximating f(x, y) is transformed into a nonlinear optimization problem, stated as follows: for given τ1 ∈ [0, T1], τ2 ∈ [0, T2] and data D(τ1, τ2), the objective is to find

P = {c, x0, y0, a, b}, c ≥ 0

such that

F = argmin_{f(x,y,P)} ‖D_P − D‖²_Fro.

Unfortunately, due to time constraints we were unable to conduct numerical tests on this optimization problem during the week of the workshop.

We note that even though this overly simplified approach has no hope of establishing fine structure of the underlying density, it would be interesting to see if macroscopic properties desired by the company scientists could be isolated with such a relatively low-dimensional parameterisation. On the other hand, the method depends on a priori information about the density, likely a fatal flaw for any robust numerical package of the type required by the company.

4 Higher-order Tikhonov regularisation The Tikhonov regularised solution described in Section 3 is the minimiser of the objective function

(13) (1/2)‖Kf − d‖² + (1/2)λ²‖f‖² = (1/2)‖K2 F K1^T − D‖²_Fro + (1/2)λ²‖F‖²_Fro.

A more general regularisation has the objective function's second term in the form

(1/2)λ²‖LF‖²_Fro

where the operator L is chosen to penalise undesired features of the solution. When L has the same Kronecker product structure as K, then it is straightforward to develop efficient regularisation algorithms along the lines of the previous section.

In this section we will consider the more difficult problem of incorporating


a non-factored regularisation term. For example, setting

L = [  1               ]
    [ −1   1           ]
    [     −1   1       ]
    [        ⋱    ⋱    ]
    [          −1   1  ]
    [              −1  ]        ((n+1) × n),

the discrete first derivative, and changing the regularisation term in (13) to

(1/2)λ²( ‖LF‖²_Fro + ‖F L^T‖²_Fro )

in effect regularises by the boundary value problem

Δ_d F = 0,
F(1, j) = F(n, j) = 0 for all j,
F(i, 1) = F(i, n) = 0 for all i,

where Δ_d denotes the discrete Laplacian. The boundary conditions ensure that L has trivial kernel, which will be useful for us later. Similar considerations would allow regularisation with respect to higher order derivatives, for example replacing L with the discrete Laplacian operator plus appropriate boundary conditions to ensure a trivial kernel.
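The (n+1) × n first-difference matrix above is easy to construct and check; in particular, L^T L is the familiar tridiagonal discrete Laplacian with Dirichlet boundary conditions. A NumPy sketch (our function name):

```python
import numpy as np

def first_difference(n):
    # (n+1) x n: row 0 is [1, 0, ...], interior rows hold the [-1, 1] stencil,
    # and the last row is [..., 0, -1]; the extra rows impose zero boundary values.
    L = np.zeros((n + 1, n))
    for i in range(n):
        L[i, i] += 1.0
        L[i + 1, i] -= 1.0
    return L

n = 6
L = first_difference(n)

# L^T L is the 1-D discrete Laplacian with Dirichlet boundary conditions:
# tridiagonal with 2 on the diagonal and -1 off the diagonal.
A = L.T @ L
expected = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
```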

The estimation of F is equivalent to solving the following problem:

(14) F = argmin_{F≥0} { ‖K2 F K1^T − D‖²_Fro + λ²( ‖LF‖²_Fro + ‖F L^T‖²_Fro ) },

where K1, K2 are the convolution kernels and D is noisy data. Note that in this section we are keeping the notation simple by assuming a square, n × n unknown F, but the method trivially extends to rectangular F.

The first term in the two-dimensional problem in (14) can be transformed to a one-dimensional problem as before:

‖K2 F K1^T − D‖²_Fro = ‖Kf − d‖²,

where the vectors f = vec(F) and d = vec(D) are obtained from the matrices F and D, respectively, and K = K1 ⊗ K2.


Next, let

LF = L F I = (I ⊗ L) f = L1 f,
F L^T = I F L^T = (L ⊗ I) f = L2 f.

Then

‖LF‖²_Fro + ‖F L^T‖²_Fro = ‖L1 f‖² + ‖L2 f‖²
                         = f^T L1^T L1 f + f^T L2^T L2 f
                         = f^T (L1^T L1 + L2^T L2) f
                         = f^T L̃^T L̃ f
                         = ‖L̃ f‖²,

where L̃ is the upper triangular Cholesky factor of the symmetric, positive definite matrix L1^T L1 + L2^T L2.

Assume for the moment that L̃^(−1) is positive, in the sense that L̃^(−1) g ≥ 0 whenever g ≥ 0. Then with g = L̃ f the objective function in (14) becomes one-dimensional:

min_{f≥0} ( ‖Kf − d‖² + λ²‖L̃f‖² ) = min_{L̃^(−1)g ≥ 0} ( ‖K̃g − d‖² + λ²‖g‖² )
                                   ≤ min_{g≥0} ( ‖K̃g − d‖² + λ²‖g‖² ),

where K̃ = K L̃^(−1). In this case we suggest taking vec(F) = L̃^(−1) g as an estimate of the minimiser in (14).

Regarding the assumptions made in the previous paragraph, we note that it is a straightforward calculation to show that L̃^T L̃ = L1^T L1 + L2^T L2 is a banded, symmetric, positive definite matrix with non-positive off-diagonal elements. In [10] it is shown that such Stieltjes matrices have non-negative (elementwise) inverses. While we have not been able to prove the same thing for the Cholesky factor L̃, we believe it to be true for the general class of discrete differentiation operators that we have in mind for applications. In particular, all of our numerical examples have exhibited this property. We suggest that the general fact may already be known in the literature and, if not,


it would make an interesting problem for future investigation. Perhaps a more interesting and important issue is to show that the value of the optimization problem above, posed in terms of g ≥ 0, is the same as the value of the f ≥ 0 problem, in order to justify our use of L̃^(−1) g as a rigorous estimate for vec(F) above.

Finally, as we have pointed out before, methods which involve unfactorised convolutions are computationally unwieldy and should be avoided. One strategy to deal with this involves replacing the higher-order regularisation term L by its nearest Kronecker product L ≈ L1 ⊗ L2. In [9] this approach has been applied to the case of regularization by the discrete H^q Sobolev seminorm. In this case, the approximation is not difficult to compute, and favourable numerical results are obtained on test problems of a reasonable size.

All of these points merit further investigation.

5 Duality Extending the notation of the previous section, we define

(15) Q(f) = (1/2)‖Kf − d‖² + (1/2)λ²‖Df‖²

and rewrite the optimization problem

(16) Minimize Q(f), subject to f ≥ 0.

Here we are assuming that the regularisation operator D and regularisation parameter λ have been given to us in advance.

Standard duality analysis and the principle of strong duality imply the Karush-Kuhn-Tucker (KKT) necessary optimality conditions on f and µ (the Lagrangian dual vector):

f ≥ 0,  µ ≥ 0,  Σ_i µ_i f_i = 0,  ∇Q(f) = µ.

A straightforward calculation gives

∇Q(f) = K^T(Kf − d) + λ² D^T D f.

Substituting this result into the KKT conditions yields our basic optimality conditions

(17) K_i^T(Kf − d) = −λ² D_i^T D f  if f_i > 0,
     K_i^T(Kf − d) ≥ −λ² D_i^T D f  if f_i = 0.


Here K_i and D_i denote the i-th columns of K and D, respectively. An important point to be made here is that the conditions (17) are equivalent to the KKT optimality conditions.

It is possible to write the conditions (17) as a closed-form expression involving f. First we write

(18) g = λ^(−2)(d − Kf),

after which we find

D^T D f = max[0, K^T g].

Consider now the case of first-order regularisation where D = I. Then

f = max[0, K^T g]

and it is tempting to attempt to recover f via an iterative scheme. However, in practice this approach leads to serious convergence problems, as described in [11]. It is exactly at this point that the BRD method [1] provides a way to avoid a direct iterative approach. Details are to be found in [11].

The proposer has asked whether an iterative method can be salvaged or, failing that, whether the BRD method can be applied when D ≠ I. We were not able to answer this question clearly during the week of the workshop; however, we record here for completeness some observations made by both the workshop members and the problem proposer.

First, suppose we define Γ = diag(‖D_i‖²), the diagonal part of D^T D; Γ is strictly positive on the diagonal. Writing D^T D f = (D^T D − Γ) f + Γ f, we can rewrite the above closed-form expression as

(19) f = max[0, Γ^(−1)(K^T g − (D^T D − Γ) f)].

Here we are using the fact that the max operator commutes with the positive diagonal matrix Γ. If we denote by f̄ the best least squares (unregularised) solution, we can further simplify (19) as

f = max[0, Γ^(−1)(λ^(−2) K^T K (f̄ − f) − (D^T D − Γ) f)],

so the iterative properties of the map

f ↦ Γ^(−1) λ^(−2) K^T K f̄ − Γ^(−1)(λ^(−2) K^T K + D^T D − Γ) f

need to be explored. In our opinion, the main barrier to convergence is the nonlinear effect invoked by the max operator in the above iterative scheme.


6 Conclusions and future work In this report we have shown how relatively simple, off-the-shelf code can be effectively applied to solve Fredholm integrals of the first kind through TSVD and Tikhonov regularisation. Higher-order regularisation can also be incorporated, with some additional technical difficulties depending on the nature of the regularising operator. Iterative schemes for solving regularised problems are known in the literature, but work remains in order to apply these ideas to the present setting.

Future work

Bidiagonalisation vs SVD: Elden's bidiagonalisation algorithm [2] for computing Tikhonov regularised solutions is normally faster than the SVD-based formula (8). Developing a version of Elden's algorithm that exploits the Kronecker product structure would be a good research topic. The work of Fausett and Fulton [4] could be a starting point. However, we expect that a Matlab implementation (without MEX files) of such an algorithm would probably not be any faster than the SVD-based algorithm presented here.

Factored form of higher-order regularisation: It should be straightforward to develop efficient algorithms when the regularisation operator has factored form. The technique of regularisation by factored approximations to the discrete Sobolev seminorm as developed in [9] shows promise in this regard. More generally, devising such factored penalisation operators is an interesting topic for future work. With all such higher-order methods, the connection between the first-order regularised problem and the higher-order problem should be investigated.

Nonnegative constraints: A number of iterative methods are available for regularisation with nonnegative constraints on the solution [12, Chapter 9]. It should be straightforward to recode these algorithms to exploit Kronecker product structure. Again, the key to obtaining efficient code is to eliminate expensive Kronecker products from formulae appearing in the algorithm. For example, the gradient projection method involves the objective function and the gradient. The Tikhonov regularisation objective function (13) has the gradient

(20) K^T(Kf − d) + λ²f = vec(K2^T(K2 F K1^T − D) K1 + λ²F).

The right-hand-side formulae of (13) and (20) are the ones to use in the iterative algorithm.
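The identity (20) can be checked numerically in a few lines of NumPy (column-stacking vec, as before; illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(6)
m1, n1, m2, n2 = 5, 4, 6, 3
K1 = rng.random((m1, n1)); K2 = rng.random((m2, n2))
F = rng.random((n2, n1)); D = rng.random((m2, m1))
lam = 0.3

vec = lambda A: A.reshape(-1, order='F')   # stack columns

# Left side of (20): gradient in stacked form, with K = K1 x K2.
K = np.kron(K1, K2)
f, d = vec(F), vec(D)
lhs = K.T @ (K @ f - d) + lam**2 * f

# Right side: the factored form, which never builds the Kronecker product.
rhs = vec(K2.T @ (K2 @ F @ K1.T - D) @ K1 + lam**2 * F)
err = np.max(np.abs(lhs - rhs))
```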


REFERENCES

1. J. P. Butler, J. A. Reeds and S. V. Dawson, Estimating solutions of first kind integral equations with nonnegative constraints and optimal smoothing, SIAM J. Numer. Anal. 18(3) (1981), 381–397.

2. L. Elden, Algorithms for the regularization of ill-conditioned least squares problems, BIT 17 (1977), 134–145.

3. A. E. English, K. P. Whittall, M. L. G. Joy and R. M. Henkelman, Quantitative two-dimensional time correlation relaxometry, Magn. Reson. Med. 22 (1991), 425–434.

4. Donald W. Fausett and Charles T. Fulton, Large least squares problems involving Kronecker products, SIAM J. Matrix Anal. Appl. 15(1) (1994), 219–227.

5. C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, Boston, 1984.

6. Per Christian Hansen, Regularization Tools: a Matlab Package for Analysis and Solution of Discrete Ill-Posed Problems, http://www.imm.dtu.dk/~pch, 2001.

7. Roger A. Horn and Charles R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1991.

8. Julie Kamm and James G. Nagy, Kronecker product and SVD approximations in image restoration, Linear Algebra Appl. 284 (1998), 177–192.

9. Robert Piche, Regularization operators for multidimensional inverse problems with Kronecker product structure, European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2004) (P. Neittaanmaki, T. Rossi, K. Majava, O. Pironneau, eds.), Jyvaskyla, 24–28 July 2004.

10. R. Varga, Matrix Iterative Analysis, Prentice-Hall, 1962, p. 85.

11. L. Venkataramanan, Y. Song and M. D. Hurlimann, Solving Fredholm integrals of the first kind with tensor product structure in 2 and 2.5 dimensions, IEEE Trans. Signal Process. 50(5) (2002), 1017–1026.

12. Curtis R. Vogel, Computational Methods for Inverse Problems, SIAM, 2002.

DEPARTMENT OF MATHEMATICS AND STATISTICS, UNIVERSITY OF VICTORIA, P.O. BOX 3045, STN CSC, VICTORIA, BC, CANADA V8W 3P4
E-mail address: [email protected]

MATHEMATICS, TAMPERE UNIVERSITY OF TECHNOLOGY, P.O. BOX 553, FIN-33101 TAMPERE, FINLAND
E-mail address: [email protected]