Applied Mathematics and Computation 233 (2014) 203–213
An iteration method for stable analytic continuation
http://dx.doi.org/10.1016/j.amc.2014.01.053 © 2014 Elsevier Inc. All rights reserved.
The project is supported by the Natural Science Foundation of Jiangsu Province of China for Young Scholars (No. BK20130118), the Fundamental Funds for the Central Universities (No. JUSRP1033) and the NNSF of China (Nos. 11171136, 11271163, 11371174). Corresponding author at: School of Science, Jiangnan University, Wuxi 214122, Jiangsu Province, PR China.
E-mail addresses: [email protected], [email protected] (H. Cheng).
Hao Cheng (a,b,*), Chu-Li Fu (b), Yuan-Xiang Zhang (b)
a: School of Science, Jiangnan University, Wuxi 214122, Jiangsu Province, PR China
b: School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, Gansu Province, PR China
Keywords: Numerical analytic continuation; Ill-posed problem; Iteration regularization method; Error estimate; A posteriori parameter choice
Abstract
In the present paper we consider the problem of numerical analytic continuation for an analytic function $f(z)=f(x+iy)$ on a strip domain $\Omega^+=\{z=x+iy\in\mathbb{C}\mid x\in\mathbb{R},\ 0<y<y_0\}$. The key difference from the known methods is that a novel iteration regularization method with a priori and a posteriori parameter choice rules is presented. A comparison with other methods in the numerical aspect is also discussed.
1. Introduction
The problem of numerical analytic continuation is a very interesting and meaningful topic with many practical applications, e.g., in medical imaging [1], the inversion of the Laplace transform [6], inverse scattering problems [10] and so on. Take the scattering problem for example: determining the pion-nucleon coupling constant and the cross sections with unstable particles by analytic continuation of the scattering data is frequently encountered; we refer the reader to [2] for details.
In [7,8], we considered the following numerical analytic continuation problem with different non-iterative regularization methods.
Problem 1.1. Let
$$\Omega^+ = \{z=x+iy\in\mathbb{C}\mid x\in\mathbb{R},\ 0<y<y_0\}$$
be a strip domain in the complex plane $\mathbb{C}$, where $i$ is the imaginary unit and $y_0$ is a positive constant. The function $f(z)=f(x+iy)$ is analytic in $\overline{\Omega^+}$. The data are given only on the real axis $y=0$, i.e., $f(z)|_{y=0}=f(x)$ is known approximately, and the noisy data are denoted by $f^\delta(x)$. We will reconstruct the function $f(z)$ on $\Omega^+$ from the data $f^\delta(x)$.
This problem is severely ill-posed, and it is therefore important to study highly efficient algorithms for it. In [9], the authors gave a mollification method with the Dirichlet kernel on another strip domain $\Omega=\{z=x+iy\in\mathbb{C}\mid x\in\mathbb{R},\ |y|<r\}$, where $r>0$ is a constant. But the mollification of [9] cannot be applied directly to the domain $\Omega^+$, since the Dirichlet kernel function cannot be obtained on it. In [7,8] the authors gave the Fourier and modified Tikhonov methods, respectively, and obtained order-optimal error estimates together with effective algorithms. However, the error estimates there are available only when the regularization parameter is chosen by an a priori rule. In the present paper a new regularization method of iteration type
for solving this problem will be given. Although this method has been used by Deng and Liu to deal with the sideways parabolic equation [3,4], we give another way to choose the regularization parameter by an a posteriori rule. We use this method to solve Problem 1.1; we not only give a priori and a posteriori rules for choosing the regularization parameter with a strict theoretical analysis, but also make comparisons in the numerical aspect with other methods.
The outline of the paper is as follows. In Section 2, a Hölder-type error estimate is obtained for the a priori parameter choice rule. The a posteriori parameter choice rule is given in Section 3, which also leads to a Hölder-type error estimate. For the convenience of comparison with other methods, the same numerical examples as in [7,8] are considered in Section 4; they demonstrate the effectiveness of the new method.
2. A priori parameter choice
Let $\hat g$ denote the Fourier transform of a function $g$, defined by

$$\hat g(\xi) := \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-ix\xi} g(x)\,dx. \qquad (2.1)$$
Assume that

$$f(\cdot+iy)\in L^2(\mathbb{R}) \quad \text{for } 0\le y\le y_0, \qquad (2.2)$$

where the norm $\|f\|$ in $L^2(\mathbb{R})$ is defined by

$$\|f\| := \left(\int_{-\infty}^{\infty} |f(x)|^2\,dx\right)^{\frac12}.$$

By the Parseval formula there holds $\|f\| = \bigl(\int_{-\infty}^{\infty} |\hat f(\xi)|^2\,d\xi\bigr)^{\frac12}$. From [7,8], we know
$$\widehat{f(\cdot+iy)}(\xi) = e^{-y\xi}\hat f(\xi), \qquad (2.3)$$

or equivalently,
$$f(x+iy) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{ix\xi} e^{-y\xi} \hat f(\xi)\,d\xi. \qquad (2.4)$$
Assume that the exact data $f(x)$ and the measured data $f^\delta(x)$ both belong to $L^2(\mathbb{R})$ and that the noise level satisfies

$$\|f - f^\delta\| \le \delta, \qquad (2.5)$$
and there is an a priori bound $E$ such that

$$\|f(\cdot+iy_0)\| \le E. \qquad (2.6)$$
For simplicity, we decompose $\mathbb{R}$ into the parts $I$ and $W$, where

$$I := \{\xi\in\mathbb{R} : \xi < 0\}, \qquad W := \{\xi\in\mathbb{R} : \xi \ge 0\}. \qquad (2.7)$$
For every $g(\xi)\in L^2(\mathbb{R})$, define

$$g_+(\xi) := \begin{cases} g(\xi), & \xi \ge 0,\\ 0, & \xi < 0,\end{cases} \qquad g_-(\xi) := \begin{cases} 0, & \xi \ge 0,\\ g(\xi), & \xi < 0;\end{cases}$$

then $g(\xi) = g_+(\xi) + g_-(\xi)$ and $L^2(\mathbb{R}) = L^2(I)\oplus L^2(W)$. For $\xi\in W$, the problem is well-posed, and we can take the regularization approximation solution in the frequency domain as
$$\widehat{f_k^\delta(\cdot+iy)}(\xi) = e^{-y\xi}\hat f^\delta(\xi), \quad \xi\in W,\ k=1,2,\ldots \qquad (2.8)$$
For $\xi\in I$, we introduce an iteration scheme of the following form:

$$\widehat{f_k^\delta(\cdot+iy)}(\xi) = (1-\lambda)\,\widehat{f_{k-1}^\delta(\cdot+iy)}(\xi) + \lambda e^{-y\xi}\hat f^\delta(\xi), \quad \xi\in I,\ k=1,2,\ldots \qquad (2.9)$$

with initial guess $\widehat{f_0^\delta(\cdot+iy)}(\xi)$ and $\lambda = e^{y_0\xi} < 1$, which plays an important role in the convergence proof. From formula (2.9), it is easy to see that

$$\widehat{f_k^\delta(\cdot+iy)}(\xi) = (1-\lambda)^k\,\widehat{f_0^\delta(\cdot+iy)}(\xi) + \sum_{i=0}^{k-1}(1-\lambda)^i \lambda e^{-y\xi}\hat f^\delta(\xi) = (1-\lambda)^k\,\widehat{f_0^\delta(\cdot+iy)}(\xi) + \bigl(1-(1-\lambda)^k\bigr) e^{-y\xi}\hat f^\delta(\xi). \qquad (2.10)$$
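The closed form (2.10) can be checked against direct iteration of (2.9) at a single frequency. The following Python sketch does this for a few scalar values of $\lambda$; the variable names are illustrative choices of ours, not notation from the paper.

```python
import numpy as np

# For a fixed frequency xi < 0, lam = e^{y0*xi} is a scalar in (0, 1) and the
# iteration (2.9) reads v_k = (1 - lam) * v_{k-1} + lam * t, where t stands
# for e^{-y*xi} * fhat_delta(xi).  With v_0 = 0 the closed form (2.10) gives
# v_k = (1 - (1 - lam)^k) * t.
rng = np.random.default_rng(1)
for lam in (0.1, 0.5, 0.9):
    t = rng.standard_normal()          # plays the role of e^{-y xi} fhat^delta(xi)
    v, k = 0.0, 25
    for _ in range(k):
        v = (1 - lam) * v + lam * t    # one step of (2.9)
    closed = (1 - (1 - lam) ** k) * t  # closed form (2.10), zero initial guess
    assert abs(v - closed) < 1e-12
```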
Therefore, the approximate solution of Problem 1.1 has the following form in the frequency domain:

$$\widehat{f_k^\delta(\cdot+iy)}(\xi) = \begin{cases} e^{-y\xi}\hat f^\delta(\xi), & \xi\ge 0,\\[2pt] (1-\lambda)^k\,\widehat{f_0^\delta(\cdot+iy)}(\xi) + \bigl(1-(1-\lambda)^k\bigr)e^{-y\xi}\hat f^\delta(\xi), & \xi < 0, \end{cases} \qquad (2.11)$$
or equivalently,

$$f_k^\delta(x+iy) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{ix\xi}\, \widehat{f_k^\delta(\cdot+iy)}(\xi)\,d\xi, \qquad (2.12)$$

where $\widehat{f_k^\delta(\cdot+iy)}(\xi)$ is given by (2.11).
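To give a concrete picture of how (2.11)-(2.12) act on discrete data, here is a hedged Python sketch using NumPy's FFT. The function name, the uniform grid, and the noise-free sanity check against Example 3.1 are our own illustrative assumptions, not part of the paper.

```python
import numpy as np

def continue_analytic(f_delta, x, y, y0, k):
    """Evaluate the regularized continuation (2.11)-(2.12) by FFT.

    Sketch: assumes f_delta samples f^delta on a uniform grid x covering an
    interval large enough that f is negligible at the endpoints.
    """
    n = len(x)
    dx = x[1] - x[0]
    xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # frequencies of convention (2.1)
    fhat = np.fft.fft(f_delta)
    damp = np.ones(n)
    neg = xi < 0
    lam = np.exp(y0 * xi[neg])                 # lambda = e^{y0 xi} in (0, 1)
    damp[neg] = 1.0 - (1.0 - lam) ** k         # damping factor from (2.11)
    return np.fft.ifft(np.exp(-y * xi) * damp * fhat)

# Noise-free sanity check against Example 3.1: f(z) = exp(-z^2).
x = np.linspace(-10, 10, 256, endpoint=False)
y = 0.3
approx = continue_analytic(np.exp(-x**2), x, y, y0=1.0, k=10**6)
exact = np.exp(y**2 - x**2) * (np.cos(2 * x * y) - 1j * np.sin(2 * x * y))
assert np.max(np.abs(approx - exact)) < 1e-6
```

With noisy data, $k$ would instead be chosen as in Theorem 2.2 ($k=\lfloor E/\delta\rfloor$) or by the discrepancy principle of Section 3.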
Lemma 2.1. For $0\le\lambda<1$ and $c\ge 1$, there hold

$$(1-\lambda)^c \lambda \le \frac{1}{c+1}, \qquad (2.13)$$

and

$$\frac{1-(1-\lambda)^c}{\lambda} \le c. \qquad (2.14)$$
Proof. Denote $\Phi(\lambda) = (1-\lambda)^c\lambda$ and $\Psi(\lambda) = 1-(1-\lambda)^c - c\lambda$. Then $\Phi'(\lambda) = -c(1-\lambda)^{c-1}\lambda + (1-\lambda)^c$ and $\Psi'(\lambda) = c(1-\lambda)^{c-1} - c$. Setting $\Phi'(\lambda)=0$ gives $\lambda = \frac{1}{c+1}$. Since $\Phi(0)=\Phi(1)=0$, $\Phi$ attains its unique maximum at $\lambda=\frac{1}{c+1}$, and therefore

$$\Phi(\lambda) = (1-\lambda)^c\lambda \le \Phi\Bigl(\frac{1}{c+1}\Bigr) = \Bigl(\frac{c}{c+1}\Bigr)^c\frac{1}{c+1} \le \frac{1}{c+1},$$

which is (2.13). Because $\Psi'(\lambda)\le 0$ for $c\ge 1$ and $\Psi(0)=0$, we obtain $\Psi(\lambda)\le 0$, which is (2.14). The proof of the lemma is completed. □
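The two inequalities are easy to spot-check numerically. A small Python sketch (the grid and tolerances are arbitrary choices of ours):

```python
import numpy as np

# Spot-check (2.13) and (2.14) over a grid of lambda in (0, 1), several c >= 1.
lam = np.linspace(1e-6, 1 - 1e-6, 1000)
for c in (1.0, 2.5, 10.0, 100.0):
    assert np.all((1 - lam) ** c * lam <= 1 / (c + 1) + 1e-12)   # (2.13)
    assert np.all((1 - (1 - lam) ** c) / lam <= c + 1e-9)        # (2.14)
```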
Theorem 2.2. Let $f(x+iy)$ be the solution with exact data, and let $f_k^\delta(x+iy)$ be its regularization approximation given by (2.12) with $\widehat{f_0^\delta(\cdot+iy)}(\xi) = 0$. Let assumptions (2.5) and (2.6) be satisfied and take $k = \lfloor\frac{E}{\delta}\rfloor$, where $\lfloor t\rfloor$ denotes the largest integer not exceeding $t$. Then there holds the estimate

$$\|f(\cdot+iy) - f_k^\delta(\cdot+iy)\| \le 2E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}} + \delta. \qquad (2.15)$$
Proof. By the Parseval formula and the triangle inequality, we have

$$\|f(\cdot+iy) - f_k^\delta(\cdot+iy)\|^2 = \bigl\|\widehat{f(\cdot+iy)} - \widehat{f_k^\delta(\cdot+iy)}\bigr\|^2 = \bigl\|\widehat{f(\cdot+iy)} - \widehat{f_k^\delta(\cdot+iy)}\bigr\|_{L^2(I)}^2 + \bigl\|\widehat{f(\cdot+iy)} - \widehat{f_k^\delta(\cdot+iy)}\bigr\|_{L^2(W)}^2. \qquad (2.16)$$
Case I: For $\xi\in W$, combining (2.3), (2.5) and (2.12), we have

$$\bigl\|\widehat{f(\cdot+iy)} - \widehat{f_k^\delta(\cdot+iy)}\bigr\|_{L^2(W)} = \|e^{-y\xi}\hat f(\xi) - e^{-y\xi}\hat f^\delta(\xi)\|_{L^2(W)} \le \|\hat f - \hat f^\delta\|_{L^2(W)} \le \|\hat f - \hat f^\delta\| \le \delta. \qquad (2.17)$$
Case II: For $\xi\in I$, combining (2.3), (2.5), (2.6) and (2.12), we have

$$\begin{aligned} \bigl\|\widehat{f(\cdot+iy)} - \widehat{f_k^\delta(\cdot+iy)}\bigr\|_{L^2(I)} &= \|e^{-y\xi}\hat f(\xi) - (1-(1-\lambda)^k)e^{-y\xi}\hat f^\delta(\xi)\|_{L^2(I)} \\ &\le \|e^{-y\xi}\hat f(\xi) - (1-(1-\lambda)^k)e^{-y\xi}\hat f(\xi)\|_{L^2(I)} + \|(1-(1-\lambda)^k)e^{-y\xi}(\hat f(\xi) - \hat f^\delta(\xi))\|_{L^2(I)} \\ &= \|(1-\lambda)^k e^{(y_0-y)\xi}\, e^{-y_0\xi}\hat f(\xi)\|_{L^2(I)} + \|(1-(1-\lambda)^k)e^{-y\xi}(\hat f(\xi) - \hat f^\delta(\xi))\|_{L^2(I)} \\ &\le E\sup_{\xi<0}(1-\lambda)^k e^{(y_0-y)\xi} + \delta\sup_{\xi<0}\bigl(1-(1-\lambda)^k\bigr)e^{-y\xi}. \qquad (2.18)\end{aligned}$$
In the following proof, we take $\xi_0 = -\frac{1}{y_0}\ln\frac{E}{\delta}$. For the first term on the right-hand side of (2.18), using $e^{(y_0-y)\xi} = \lambda e^{-y\xi}$ and Lemma 2.1, we have

$$E\sup_{\xi<0}(1-\lambda)^k e^{(y_0-y)\xi} \le \max\Bigl\{ E\sup_{\xi_0\le\xi<0}(1-\lambda)^k\lambda e^{-y\xi},\ E\sup_{\xi<\xi_0}(1-\lambda)^k e^{(y_0-y)\xi} \Bigr\} \le \max\Bigl\{ \frac{E e^{-y\xi_0}}{k+1},\ E e^{(y_0-y)\xi_0} \Bigr\}.$$

Since $k = \lfloor\frac{E}{\delta}\rfloor$, we have $k+1 > \frac{E}{\delta}$ and $k \le \frac{E}{\delta}$; noting that $e^{-y\xi_0} = (\frac{E}{\delta})^{\frac{y}{y_0}}$ and $e^{(y_0-y)\xi_0} = (\frac{\delta}{E})^{1-\frac{y}{y_0}}$, we therefore obtain

$$E\sup_{\xi<0}(1-\lambda)^k e^{(y_0-y)\xi} \le \max\Bigl\{ E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}},\ E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}} \Bigr\} = E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}}, \qquad (2.19)$$
and for the second term on the right-hand side of (2.18),
$$\begin{aligned} \delta\sup_{\xi<0}\bigl(1-(1-\lambda)^k\bigr)e^{-y\xi} &\le \max\Bigl\{ \delta\sup_{\xi_0\le\xi<0}\bigl(1-(1-\lambda)^k\bigr)e^{-y\xi},\ \delta\sup_{\xi<\xi_0}\frac{1-(1-\lambda)^k}{\lambda}\,e^{(y_0-y)\xi} \Bigr\} \\ &\le \max\bigl\{ \delta e^{-y\xi_0},\ \delta k\, e^{(y_0-y)\xi_0} \bigr\} \le \max\bigl\{ E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}},\ E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}} \bigr\} = E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}}. \qquad (2.20)\end{aligned}$$
Combining inequalities (2.17), (2.19) and (2.20), the proof of this theorem is completed. □
3. The discrepancy principle
In this section, we discuss an a posteriori stopping rule for the iterative scheme (2.11); it is based on the discrepancy principle of Morozov [5,11] in the following form:
$$\bigl\|\hat f^\delta - \widehat{f_\gamma^\delta(\cdot+iy)}\big|_{y=0}\bigr\| = \tau\delta, \qquad (3.1)$$
where $\tau > 1$ is a constant and $\gamma$ denotes the regularization parameter. In the numerical experiments, we take the iteration depth

$$k = \arg\min_{j\in\{\lfloor\gamma\rfloor,\,\lfloor\gamma\rfloor+1\}} \bigl\|\hat f^\delta - \widehat{f_j^\delta(\cdot+iy)}\big|_{y=0}\bigr\|, \qquad (3.2)$$
With the initial guess $\widehat{f_0^\delta(\cdot+iy)}(\xi) = 0$, (3.1) simplifies to

$$\bigl\|\hat f^\delta - \widehat{f_\gamma^\delta(\cdot+iy)}\big|_{y=0}\bigr\| = \|\hat f^\delta - (1-(1-\lambda)^\gamma)\hat f^\delta\|_{L^2(I)} = \|(1-\lambda)^\gamma\hat f^\delta\|_{L^2(I)} = \tau\delta. \qquad (3.3)$$
The following results are obvious.
Lemma 3.1. Set $\rho(\gamma) = \bigl\|\hat f^\delta - \widehat{f_\gamma^\delta(\cdot+iy)}\big|_{y=0}\bigr\| = \|(1-\lambda)^\gamma\hat f^\delta\|_{L^2(I)}$. If $0 < \tau\delta < \|\hat f^\delta\|_{L^2(I)}$, then

(a) $\rho$ is a continuous function;
(b) $\lim_{\gamma\to 0^+}\rho(\gamma) = \|\hat f^\delta\|_{L^2(I)}$;
(c) $\lim_{\gamma\to+\infty}\rho(\gamma) = 0$;
(d) $\rho$ is a strictly decreasing function.
Remark 3.2. From Lemma 3.1, we know that there exists a unique solution $\gamma$ of Eq. (3.1).
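Since $\rho$ is continuous and strictly decreasing, the root of (3.1) can be found, for instance, by bracketing and bisection. The following Python sketch is one possible realization; the discretization of the $L^2(I)$ norm, the function name, and the synthetic data are our own assumptions, not prescriptions from the paper.

```python
import numpy as np

def discrepancy_gamma(fhat_delta, xi, dxi, y0, tau, delta, gamma_max=1e8):
    """Solve rho(gamma) = tau * delta by doubling + bisection (Lemma 3.1)."""
    neg = xi < 0
    lam = np.exp(y0 * xi[neg])                 # lambda = e^{y0 xi} on I
    weight = np.abs(fhat_delta[neg]) ** 2

    def rho(g):                                # discrete L2(I) norm of (1-lam)^g fhat
        return np.sqrt(np.sum((1 - lam) ** (2 * g) * weight) * dxi)

    lo, hi = 0.0, 1.0
    while rho(hi) > tau * delta and hi < gamma_max:
        lo, hi = hi, 2.0 * hi                  # double until the root is bracketed
    for _ in range(60):                        # bisection on the bracket
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rho(mid) > tau * delta else (lo, mid)
    return 0.5 * (lo + hi)

# Synthetic check: the returned gamma satisfies rho(gamma) ~ tau * delta.
xi = np.linspace(-20, 20, 4001)
dxi = xi[1] - xi[0]
fhat = np.exp(-xi**2 / 4)
tau, delta = 1.1, 1e-3
g = discrepancy_gamma(fhat, xi, dxi, 1.0, tau, delta)
res = np.sqrt(np.sum((1 - np.exp(xi[xi < 0])) ** (2 * g) * fhat[xi < 0] ** 2) * dxi)
assert abs(res - tau * delta) < 1e-6
```

The iteration depth $k$ is then taken from $\gamma$ as in (3.2).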
Lemma 3.3. Set $\omega_\gamma^\delta(\cdot,y) = e^{-y\xi}\hat f(\xi) - (1-(1-\lambda)^\gamma)e^{-y\xi}\hat f^\delta(\xi)$. Then the following inequality holds:

$$\|\omega_\gamma^\delta(\cdot,y)\|_{L^2(I)} \le \|\omega_\gamma^\delta(\cdot,y_0)\|_{L^2(I)}^{\frac{y}{y_0}}\, \|\omega_\gamma^\delta(\cdot,0)\|_{L^2(I)}^{1-\frac{y}{y_0}}. \qquad (3.4)$$
Proof. Note that

$$\omega_\gamma^\delta(\cdot,0) = \hat f(\xi) - (1-(1-\lambda)^\gamma)\hat f^\delta(\xi), \qquad \omega_\gamma^\delta(\cdot,y_0) = e^{-y_0\xi}\hat f(\xi) - (1-(1-\lambda)^\gamma)e^{-y_0\xi}\hat f^\delta(\xi).$$

By Hölder's inequality with exponents $\frac{y_0}{y}$ and $\frac{y_0}{y_0-y}$,

$$\begin{aligned} \|\omega_\gamma^\delta(\cdot,y)\|_{L^2(I)}^2 &= \int_{\xi<0} e^{-2y\xi}\,\bigl|\hat f(\xi) - (1-(1-\lambda)^\gamma)\hat f^\delta(\xi)\bigr|^2\,d\xi \\ &= \int_{\xi<0} \bigl(e^{-2y_0\xi}\bigr)^{\frac{y}{y_0}} \bigl|\hat f - (1-(1-\lambda)^\gamma)\hat f^\delta\bigr|^{\frac{2y}{y_0}} \bigl|\hat f - (1-(1-\lambda)^\gamma)\hat f^\delta\bigr|^{2(1-\frac{y}{y_0})}\,d\xi \\ &\le \Bigl(\int_{\xi<0} e^{-2y_0\xi}\bigl|\hat f - (1-(1-\lambda)^\gamma)\hat f^\delta\bigr|^2\,d\xi\Bigr)^{\frac{y}{y_0}} \Bigl(\int_{\xi<0}\bigl|\hat f - (1-(1-\lambda)^\gamma)\hat f^\delta\bigr|^2\,d\xi\Bigr)^{1-\frac{y}{y_0}} \\ &= \|\omega_\gamma^\delta(\cdot,y_0)\|_{L^2(I)}^{\frac{2y}{y_0}}\, \|\omega_\gamma^\delta(\cdot,0)\|_{L^2(I)}^{2(1-\frac{y}{y_0})}. \qquad (3.5) \end{aligned}$$
From (3.3) and (2.13), and noticing that $\gamma$ should be greater than one in practical computation, we know
$$\begin{aligned} \tau\delta = \|(1-\lambda)^\gamma\hat f^\delta\|_{L^2(I)} &\le \|(1-\lambda)^\gamma(\hat f^\delta - \hat f)\|_{L^2(I)} + \|(1-\lambda)^\gamma\hat f\|_{L^2(I)} \\ &= \|(1-\lambda)^\gamma(\hat f^\delta - \hat f)\|_{L^2(I)} + \|(1-\lambda)^\gamma\lambda\, e^{-y_0\xi}\hat f\|_{L^2(I)} \\ &\le \delta\sup_{0<\lambda<1}(1-\lambda)^\gamma + E\sup_{0<\lambda<1}(1-\lambda)^\gamma\lambda \le \delta + \frac{E}{\gamma}, \end{aligned}$$

i.e.,

$$\gamma\delta \le \frac{E}{\tau-1}. \qquad (3.6) \quad\square$$
Lemma 3.4. The following inequality holds:

$$\|\omega_\gamma^\delta(\cdot,0)\|_{L^2(I)} \le (\tau+1)\delta. \qquad (3.7)$$

Proof. By the triangle inequality and (3.3), there holds

$$\|\omega_\gamma^\delta(\cdot,0)\|_{L^2(I)} = \|\hat f(\xi) - (1-(1-\lambda)^\gamma)\hat f^\delta(\xi)\|_{L^2(I)} \le \|\hat f - \hat f^\delta\|_{L^2(I)} + \|(1-\lambda)^\gamma\hat f^\delta\|_{L^2(I)} \le \delta + \tau\delta = (\tau+1)\delta. \qquad (3.8) \quad\square$$
Lemma 3.5. The following inequality holds:

$$\|\omega_\gamma^\delta(\cdot,y_0)\|_{L^2(I)} \le \frac{\tau E}{\tau-1}. \qquad (3.9)$$

Proof. By the triangle inequality and (2.14), we know

$$\begin{aligned} \|\omega_\gamma^\delta(\cdot,y_0)\|_{L^2(I)} &= \|e^{-y_0\xi}\hat f(\xi) - (1-(1-\lambda)^\gamma)e^{-y_0\xi}\hat f^\delta(\xi)\|_{L^2(I)} \\ &\le \|e^{-y_0\xi}\hat f - (1-(1-\lambda)^\gamma)e^{-y_0\xi}\hat f\|_{L^2(I)} + \|(1-(1-\lambda)^\gamma)e^{-y_0\xi}(\hat f - \hat f^\delta)\|_{L^2(I)} \\ &= \|(1-\lambda)^\gamma e^{-y_0\xi}\hat f\|_{L^2(I)} + \Bigl\|\frac{1-(1-\lambda)^\gamma}{\lambda}(\hat f^\delta - \hat f)\Bigr\|_{L^2(I)} \le E + \gamma\delta. \qquad (3.10) \end{aligned}$$

Substituting (3.6) into (3.10), we obtain

$$\|\omega_\gamma^\delta(\cdot,y_0)\|_{L^2(I)} \le \frac{\tau E}{\tau-1}. \qquad (3.11) \quad\square$$
Theorem 3.6. Let $f(x+iy)$ be the exact solution of Problem 1.1, and let $f_\gamma^\delta(x+iy)$ be its regularization approximation defined by (2.12) with $f_0^\delta(x+iy) = 0$. If the a priori bound (2.6) is valid and the iteration (2.10) is stopped by the discrepancy principle (3.1), then

$$\|f(\cdot+iy) - f_\gamma^\delta(\cdot+iy)\| \le C E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}} + \delta, \qquad (3.12)$$

where $C = (\tau+1)\bigl(\frac{\tau}{(\tau-1)(\tau+1)}\bigr)^{\frac{y}{y_0}}$.
Proof.

$$\|f(\cdot+iy) - f_\gamma^\delta(\cdot+iy)\| = \bigl\|\widehat{f(\cdot+iy)} - \widehat{f_\gamma^\delta(\cdot+iy)}\bigr\| \le \bigl\|\widehat{f(\cdot+iy)} - \widehat{f_\gamma^\delta(\cdot+iy)}\bigr\|_{L^2(I)} + \bigl\|\widehat{f(\cdot+iy)} - \widehat{f_\gamma^\delta(\cdot+iy)}\bigr\|_{L^2(W)}. \qquad (3.13)$$
For $\xi\in W$, it is easy to see that

$$\|f(\cdot+iy) - f_\gamma^\delta(\cdot+iy)\|_{L^2(W)} = \|e^{-y\xi}\hat f^\delta(\xi) - e^{-y\xi}\hat f(\xi)\|_{L^2(W)} \le \|\hat f^\delta - \hat f\| \le \delta. \qquad (3.14)$$
For $\xi\in I$, combining (3.5), (3.7) and (3.9), we have

$$\|f(\cdot+iy) - f_\gamma^\delta(\cdot+iy)\|_{L^2(I)} = \|\omega_\gamma^\delta(\cdot,y)\|_{L^2(I)} \le \Bigl(\frac{\tau E}{\tau-1}\Bigr)^{\frac{y}{y_0}} \bigl((\tau+1)\delta\bigr)^{1-\frac{y}{y_0}} = C E^{\frac{y}{y_0}}\delta^{1-\frac{y}{y_0}}, \qquad (3.15)$$
where $C = (\tau+1)\bigl(\frac{\tau}{(\tau-1)(\tau+1)}\bigr)^{\frac{y}{y_0}}$.
Combining (3.14) and (3.15), the proof of Theorem 3.6 is completed. h
4. Numerical examples
In this section some numerical examples are given to verify the validity of the iteration regularization method proposed in this paper. We use the fast Fourier transform and its inverse to carry out the numerical experiments. In all of them we take $y_0 = 1$ and fix the domain
$$\Omega^+ = \{z = x+iy\in\mathbb{C} : |x|\le 10,\ 0 < y < 1\}.$$
Suppose the vector $F$ contains samples of the function $f(x)$; we obtain the perturbed data through

$$F^\delta = F + \varepsilon\,\mathrm{randn}(\mathrm{size}(F)), \qquad (4.1)$$

where the function "randn(·)" generates arrays of random numbers whose elements are normally distributed with mean 0 and variance $\sigma^2 = 1$. The error level is given by
$$\delta = \|F^\delta - F\|_{l^2} := \sqrt{\frac{1}{M+1}\sum_{n=1}^{M+1}\bigl|F^\delta(n) - F(n)\bigr|^2}, \qquad (4.2)$$

where we usually choose $M = 100$, and $F_k^\delta(x+iy)$ denotes the discrete regularization solution of $F(x+iy)$.
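In Python, the perturbation (4.1) and the discrete noise level (4.2) can be reproduced along the following lines; the seed and $\varepsilon$ are arbitrary illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 100
x = np.linspace(-10, 10, M + 1)
F = np.exp(-x**2)                      # samples of f(x)|_{y=0} from Example 3.1
epsilon = 1e-2
F_delta = F + epsilon * rng.standard_normal(F.shape)   # perturbation (4.1)
delta = np.sqrt(np.mean(np.abs(F_delta - F) ** 2))     # noise level (4.2)
assert 0.5 * epsilon < delta < 1.5 * epsilon           # delta is of size epsilon
```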
In all numerical experiments, we compute the approximation $F_k^\delta(x+iy)$ according to (2.11) and (2.12), taking $E = \|f\|_{l^2(\mathbb{R})}$, $\tau = 1.1$, the a priori parameter $k = \lfloor\frac{E}{\delta}\rfloor$, and the a posteriori parameter $k$ according to (3.1). To compare the regularization effect, we use the same examples as in [7,8]. The absolute error $e_a(F(\cdot+iy))$ and the relative error $e_r(F(\cdot+iy))$ are defined by
$$e_a(F(\cdot+iy)) := \|F_k^\delta(\cdot+iy) - F(\cdot+iy)\|_{l^2},$$
Fig. 1. Example 1: (a) the real part at $y=0.1$; (b) the imaginary part at $y=0.1$; (c) the real part at $y=0.9$; (d) the imaginary part at $y=0.9$.
$$e_r(F(\cdot+iy)) := \frac{\|F_k^\delta(\cdot+iy) - F(\cdot+iy)\|_{l^2}}{\|F(\cdot+iy)\|_{l^2}},$$
respectively. For simplicity, the modified Tikhonov method [8], the Fourier method [7], and the iteration method are abbreviated as MTM, FM, and IM, respectively.
Example 3.1. The function
$$f(z) = e^{-z^2} = e^{-(x+iy)^2} = e^{y^2-x^2}(\cos 2xy - i\sin 2xy)$$
is analytic in the domain
$$\Omega^+ = \{z = x+iy\in\mathbb{C}\mid x\in\mathbb{R},\ 0 < y < 1\},$$
with
$$f(z)|_{y=0} = e^{-x^2}\in L^2(\mathbb{R}),$$
and
$$\mathrm{Re}\,f(z) = e^{y^2-x^2}\cos 2xy, \qquad \mathrm{Im}\,f(z) = -e^{y^2-x^2}\sin 2xy.$$
Fig. 1 compares the real and imaginary parts of the exact solution $f(z)$ and the approximate solution $f_k^\delta(z)$ at $y=0.1$ and $y=0.9$ for noise levels $\varepsilon = 10^{-2}, 10^{-3}$ under the a priori rule. Figs. 2 and 3 compare the a priori and a posteriori rules for the real and imaginary parts of the exact $f(z)$ and the approximate solution $f_k^\delta(z)$ at $y=0.1$ and $y=0.9$ for noise levels $\varepsilon = 10^{-2}, 10^{-3}$. From Figs. 1-3, we find that the smaller $\varepsilon$ is, the better the computed approximation is, and the larger $y$ is, the worse the computed approximation is. Although the a posteriori parameter choice is independent of the a priori bound $E$, it clearly also works well.
Tables 1-3 compare the three regularization methods. From these tables, we see that the numerical results of IM are comparable with those of MTM and FM.
Fig. 2. Example 1: $\varepsilon = 10^{-2}$; (a) the real part at $y=0.1$; (b) the imaginary part at $y=0.1$; (c) the real part at $y=0.9$; (d) the imaginary part at $y=0.9$.
Fig. 3. Example 1: $\varepsilon = 10^{-3}$; (a) the real part at $y=0.1$; (b) the imaginary part at $y=0.1$; (c) the real part at $y=0.9$; (d) the imaginary part at $y=0.9$.
Table 1
Example 3.1: noise level $\varepsilon = 10^{-1}$.

y     MTM               FM                IM
      e_a      e_r      e_a      e_r      e_a      e_r
0.3   0.0820   0.3010   0.1265   0.5086   0.0805   0.3083
0.6   0.1298   0.3636   0.1698   0.5211   0.1423   0.4623
0.9   0.3486   0.6227   0.3786   0.7406   0.2510   0.4659
Table 2
Example 3.1: noise level $\varepsilon = 10^{-2}$.

y     MTM               FM                IM
      e_a      e_r      e_a      e_r      e_a      e_r
0.3   0.0068   0.0250   0.0201   0.0809   0.0122   0.0466
0.6   0.0268   0.0751   0.0330   0.1013   0.0300   0.0975
0.9   0.0837   0.1494   0.1373   0.2686   0.0878   0.2026
Table 3
Example 3.1: noise level $\varepsilon = 10^{-3}$.

y     MTM               FM                IM
      e_a      e_r      e_a      e_r      e_a      e_r
0.3   0.0018   0.0066   0.0110   0.0444   0.0011   0.0042
0.6   0.0076   0.0212   0.0170   0.0523   0.0057   0.0186
0.9   0.0732   0.1308   0.0210   0.0411   0.0330   0.0761
Fig. 4. Example 2: $\varepsilon = 10^{-1}$; (a) the real part at $y=0.1$; (b) the imaginary part at $y=0.1$; (c) the real part at $y=0.9$; (d) the imaginary part at $y=0.9$.
Fig. 5. Example 2: $\varepsilon = 10^{-2}$; (a) the real part at $y=0.1$; (b) the imaginary part at $y=0.1$; (c) the real part at $y=0.9$; (d) the imaginary part at $y=0.9$.
Table 4
Example 3.2: noise level $\varepsilon = 10^{-1}$.

y     MTM               FM                IM
      e_a      e_r      e_a      e_r      e_a      e_r
0.3   0.0659   0.0839   0.2453   0.3231   0.0379   0.0500
0.6   0.1540   0.1598   0.1264   0.1341   0.0800   0.0932
0.9   0.4042   0.3217   0.1611   0.1299   0.1850   0.1783
Table 5
Example 3.2: noise level $\varepsilon = 10^{-2}$.

y     MTM               FM                IM
      e_a      e_r      e_a      e_r      e_a      e_r
0.3   0.0287   0.0366   0.0252   0.0332   0.0166   0.0219
0.6   0.0912   0.0946   0.0638   0.0677   0.0761   0.0886
0.9   0.2968   0.2363   0.1290   0.1040   0.2725   0.2626
Fig. 6. Example 2: $\varepsilon = 10^{-2}$; (a) the real part at $y=0.1$; (b) the imaginary part at $y=0.1$; (c) the real part at $y=0.5$; (d) the imaginary part at $y=0.5$.
Example 3.2. The function
$$f(z) = \cos z = \cos(x+iy) = \cosh y\cos x - i\sinh y\sin x$$
is also an analytic function in the domain
$$\Omega^+ = \{z = x+iy\in\mathbb{C}\mid x\in\mathbb{R},\ 0 < y < 1\},$$
with
$$f(z)|_{y=0} = \cos x,$$
and
$$\mathrm{Re}\,f(z) = \cosh y\cos x, \qquad \mathrm{Im}\,f(z) = -\sinh y\sin x.$$
Figs. 4 and 5 compare the a priori and a posteriori rules for the real and imaginary parts of the exact $f(z)$ and the approximate solution $f_k^\delta(z)$ at $y=0.1$ and $y=0.9$ for noise levels $\varepsilon = 10^{-1}, 10^{-2}$. Although $\cos x\notin L^2(\mathbb{R})$, Figs. 4 and 5 show that the computed approximation is still good.
From Tables 4 and 5, we again see that the numerical results of IM are comparable with those of MTM and FM.
Example 3.3. If the function $f$ is a "piece" of an analytic function, then this approximation is often still possible. This example is typical.
Let
$$f(z) = \sqrt{36 - z^2} = \begin{cases} \sqrt{36 - (x+iy)^2}, & |x| < 6,\\ 0, & |x| \ge 6. \end{cases}$$
Fig. 6 compares the a priori and a posteriori rules for the real and imaginary parts of the exact $f(z)$ and the approximate solution $f_k^\delta(z)$ at $y=0.1$ and $y=0.5$ with $\varepsilon = 10^{-2}$; it shows that our method is also effective for a "piece" of an analytic function.
5. Conclusion
In this paper a new iteration regularization method is given for solving the numerical analytic continuation problem on a strip domain. A priori and a posteriori rules for choosing the regularization parameter, with a strict theoretical analysis, are presented. In the numerical aspect, the comparison with other methods shows that the proposed method works effectively.
References
[1] R.G. Airapetyan, A.G. Ramm, Numerical inversion of the Laplace transform from the real axis, J. Math. Anal. Appl. 248 (2000) 572-587.
[2] G.F. Chew, Proposal for determining the pion-nucleon coupling constant from the angular distribution for nucleon-nucleon scattering, Phys. Rev. 112 (2) (1958) 1380-1383.
[3] Y.J. Deng, Z.H. Liu, Iteration methods on sideways parabolic equations, Inverse Prob. 25 (2009) 095004 (14pp).
[4] Y.J. Deng, Z.H. Liu, New fast iteration for determining surface temperature and heat flux of general sideways parabolic equation, Nonlinear Anal. Real World Appl. 12 (1) (2011) 156-166.
[5] H.W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, Kluwer Academic Publishers, Boston, 1996.
[6] C.L. Epstein, Introduction to the Mathematics of Medical Imaging, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2008.
[7] C.L. Fu, Z.L. Deng, X.L. Feng, F.F. Dou, A modified Tikhonov regularization for stable analytic continuation, SIAM J. Numer. Anal. 47 (2009) 2982-3000.
[8] C.L. Fu, F.F. Dou, X.L. Feng, Z. Qian, A simple regularization method for stable analytic continuation, Inverse Prob. 24 (2008) 06500 (15pp).
[9] D.N. Hào, H. Sahli, Stable analytic continuation by mollification and the fast Fourier transform, in: Methods of Complex and Clifford Analysis, SAS Int. Publ., Delhi, 2004, pp. 143-152.
[10] K. Miller, G.A. Viano, On the necessity of nearly-best-possible methods for analytic continuation of scattering data, J. Math. Phys. 14 (8) (1973) 1037-1048.
[11] A.N. Tikhonov, V.Y. Arsenin, Solutions of Ill-posed Problems, John Wiley and Sons, New York, 1977.