Statistics and Probability Letters 94 (2014) 153–161
A note on processes with random stationary increments
Haimeng Zhang a,∗, Chunfeng Huang b,c
a Department of Mathematics and Statistics, University of North Carolina at Greensboro, Greensboro, NC 27412, USA
b Department of Statistics, Indiana University, Bloomington, IN 47408, USA
c Department of Geography, Indiana University, Bloomington, IN 47408, USA
Article info
Article history: Received 7 November 2013; Received in revised form 14 May 2014; Accepted 14 July 2014; Available online 21 July 2014.
Keywords: Intrinsic stationarity; Variogram; Spectrum; Structure function; Inversion formula

Abstract
When the correlation theory is considered for processes with random stationary increments, Yaglom (1955) has developed the spectral representation theory. In this note, we complete this development by obtaining the inversion formula of the spectrum in terms of the structure function.
© 2014 Elsevier B.V. All rights reserved.
1. Introduction
When a process is assumed to have random stationary nth increments, Yaglom (1955) has studied its correlation structure and developed the spectral representation theory. The results in Yaglom (1955) have profound impacts on various fields, including the studies of fractional Brownian motion (Mandelbrot and Van Ness, 1968), intrinsic random functions (Solo, 1992; Huang et al., 2009), and self-similar processes (Unser and Blu, 2007; Blu and Unser, 2007). They also have a strong connection to the concept of generalized random processes (Itô, 1954; Gel'fand, 1955; Gel'fand and Vilenkin, 1964; Yaglom, 1987).
The covariance function is used to characterize second-order dependency. When the process is assumed to be stationary, its corresponding spectral density is widely used to describe the periodic components and frequencies (Brockwell and Davis, 2009). The spectrum and the covariance function are connected through the inversion of the Fourier transformation. When the process is intrinsically stationary, the variogram often replaces the covariance function to model the dependency (Cressie, 1993). Furthermore, when the process has stationary nth increments, the notion of the structure function (Yaglom, 1955) or generalized covariance function (Matheron, 1973) is developed. Such applications are essential in universal kriging (Cressie, 1993; Chilès and Delfiner, 2012). The spectral representation of such a structure function or generalized covariance has been obtained in Yaglom (1955) and Matheron (1973). In this note, we derive the inversion formula, in which the spectrum is represented by the structure function. We are not aware of such an explicit formula in the literature. This enhances the understanding of the process: our theorem offers a way to estimate the spectral function from an easily estimated structure function. In addition, by directly applying our formula, one can derive the spectral density function of the commonly used power variogram.
∗ Corresponding author. Tel.: +1 336 334 5836; fax: +1 336 334 5949. E-mail addresses: [email protected] (H. Zhang), [email protected] (C. Huang).
http://dx.doi.org/10.1016/j.spl.2014.07.016
2. Main results
In Yaglom (1955), a random process {X(t), t ∈ ℝ} is considered, where its nth difference with step τ > 0 is defined as
\[ \Delta_{\tau}^{(n)} X(t) = \sum_{k=0}^{n} (-1)^k \binom{n}{k} X(t - k\tau). \]
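As a concrete illustration (ours, not from the paper), the nth difference can be coded directly from the definition above; the function name and the test polynomial are our own choices:

```python
# A minimal sketch of the nth difference Delta^{(n)}_tau X(t); our own helper,
# not code from the paper.
from math import comb

def nth_difference(X, t, n, tau):
    """Delta^{(n)}_tau X(t) = sum_{k=0}^{n} (-1)^k C(n, k) X(t - k*tau)."""
    return sum((-1) ** k * comb(n, k) * X(t - k * tau) for k in range(n + 1))

# The nth difference annihilates every polynomial of degree below n, which is
# why the increments can be stationary even though X itself is not.
linear = lambda t: 3.0 * t + 1.0
assert nth_difference(linear, 5.0, 2, 0.5) == 0.0
```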
Such a random nth increment is called stationary if its second moment
\[ E\{\Delta_{\tau_1}^{(n)} X(t+s)\,\Delta_{\tau_2}^{(n)} X(s)\} \equiv D^{(n)}(t; \tau_1, \tau_2) \]
exists and does not depend on s. Yaglom (1955) termed this the structure function of such increments, and showed that it has the spectral representation
\[ D^{(n)}(t; \tau_1, \tau_2) = \int_{-\infty}^{\infty} e^{it\lambda}\, \frac{(1 - e^{-i\tau_1\lambda})^n (1 - e^{i\tau_2\lambda})^n (1 + \lambda^2)^n}{\lambda^{2n}}\, dF(\lambda), \tag{1} \]
where the spectral function F(λ) is a non-decreasing bounded measure. When τ1 = τ2 = τ, the above structure function becomes
\[ D_{\tau}^{(n)}(t) \equiv D^{(n)}(t; \tau, \tau) = \int_{-\infty}^{\infty} e^{it\lambda}\, \frac{2^n (1 - \cos(\tau\lambda))^n (1 + \lambda^2)^n}{\lambda^{2n}}\, dF(\lambda). \tag{2} \]
In our next theorem, we obtain the inversion formula, expressing the spectral function in terms of the structure function.
Theorem 1. Let λ1 < λ2 be continuity points of F(λ). We have
\[ F(\lambda_2) - F(\lambda_1) = \lim_{T\to\infty} \frac{1}{2\pi} \int_{-T}^{T} D_{\tau}^{(n)}(t)\, dt \int_{\lambda_1}^{\lambda_2} e^{-i\lambda t}\, q(\lambda; n, \tau)\, d\lambda, \tag{3} \]
where
\[ q(\lambda) \equiv q(\lambda; n, \tau) = \frac{\lambda^{2n}}{2^n (1 - \cos(\tau\lambda))^n (1 + \lambda^2)^n}. \]
Here τ > 0 is selected so that the function (1 − cos(λτ))/λ² has no zeros for λ ∈ [λ1, λ2].
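The kernel q(λ; n, τ) and the admissibility condition on τ can be sketched as follows (a minimal illustration of ours; the helper names are our own choices). Since 1 − cos(τλ) vanishes exactly at λ = 2πk/τ for integers k, while (1 − cos(τλ))/λ² tends to τ²/2 ≠ 0 as λ → 0, the condition only rules out nonzero multiples of 2π/τ lying in [λ1, λ2]:

```python
# Our own helpers illustrating Theorem 1's kernel and the choice of tau;
# names and example intervals are hypothetical.
from math import cos, pi, floor, ceil

def q(lam, n, tau):
    """q(lam; n, tau) = lam^{2n} / [2^n (1 - cos(tau*lam))^n (1 + lam^2)^n]."""
    return lam ** (2 * n) / ((2 * (1 - cos(tau * lam))) ** n * (1 + lam ** 2) ** n)

def tau_is_admissible(tau, lam1, lam2):
    """True if no zero lam = 2*pi*k/tau (k != 0) of 1 - cos(tau*lam) lies in [lam1, lam2]."""
    k_lo, k_hi = ceil(lam1 * tau / (2 * pi)), floor(lam2 * tau / (2 * pi))
    return not any(k != 0 for k in range(k_lo, k_hi + 1))

assert tau_is_admissible(1.0, 1.0, 5.0)       # first zero 2*pi/tau ~ 6.28 lies beyond 5
assert not tau_is_admissible(2.0, 1.0, 5.0)   # pi ~ 3.14 is a zero inside [1, 5]
```

Note that for n = 0 the kernel collapses to q ≡ 1, consistent with Remark 1.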
Remark 1. When n = 0, \(\Delta_{\tau}^{(n)} X(t) = X(t)\); that is, the process is itself stationary. The structure function becomes a conventional covariance function, and the inversion formula in Theorem 1 is just the regular Fourier inversion (Hannan, 1970; Doob, 1953).
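Remark 1 can be illustrated numerically (our sketch, not from the paper): take the stationary covariance D(t) = e^{−|t|}, whose spectral density is 1/(π(1+λ²)), and recover F(λ2) − F(λ1) = (arctan λ2 − arctan λ1)/π from the truncated integral in (3) with n = 0 and q ≡ 1. By symmetry of D, the computation reduces to a real integral; all grid values are our own choices:

```python
# Our numerical illustration of the n = 0 (regular Fourier) inversion.
from math import exp, sin, atan, pi

def inversion_n0(l1, l2, T=40.0, m=80000):
    """(1/2pi) int_{-T}^{T} e^{-|t|} int_{l1}^{l2} e^{-i*lam*t} dlam dt, reduced by symmetry."""
    dt = T / m
    total = 0.5 * (l2 - l1)             # half-weighted integrand value at t = 0 (trapezoid rule)
    for i in range(1, m + 1):
        t = i * dt
        total += exp(-t) * (sin(l2 * t) - sin(l1 * t)) / t
    return total * dt / pi

l1, l2 = 0.5, 2.0
approx = inversion_n0(l1, l2)
exact = (atan(l2) - atan(l1)) / pi      # F(l2) - F(l1) for spectral density 1/(pi*(1+lam^2))
assert abs(approx - exact) < 1e-4
```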
The proof of Theorem 1 follows the lines of the classical inversion formula for Fourier transforms; see, for example, Hannan (1970, Chapter 2) and Doob (1953, Chapter XI). Here we denote by φ(λ) = 1_{[λ1,λ2]}(λ) the indicator function and let h(λ) = φ(λ)q(λ). Similar to Yaglom (1955, page 96), τ > 0 is selected such that, for λ ∈ [λ1, λ2], (1 − cos(τλ))/λ² has no zeros and so is bounded away from zero. Therefore, there exists C_h > 0 such that 0 ≤ h(λ) ≤ C_h < ∞ for λ ∈ ℝ. Moreover, h(λ) is continuous for λ ∈ [λ1, λ2] and is absolutely integrable. Hence its Fourier transform exists and is given by
\[ \hat h(t) = \int_{-\infty}^{\infty} e^{-i\nu t} h(\nu)\, d\nu, \quad t \in \mathbb{R}. \]
The following properties of \(\hat h(t)\) are easily obtained: \(\hat h(-t) = \overline{\hat h(t)}\), and \(\hat h(t)\) is uniformly continuous and bounded for t ∈ ℝ.
Lemma 1. For the Fourier transform \(\hat h(t)\) of h(ν), we have
\[ \lim_{T\to\infty} \frac{1}{2\pi} \int_{-T}^{T} \Big(1 - \frac{|t|}{T}\Big) e^{i\nu t}\, \hat h(t)\, dt = h(\nu). \tag{4} \]
Moreover, the convergence is bounded and uniform over all ν ∈ ℝ.
Proof. First, since h(λ) is absolutely integrable, Fubini's theorem gives
\[ \frac{1}{2\pi} \int_{-T}^{T} \Big(1 - \frac{|t|}{T}\Big) e^{i\nu t}\, \hat h(t)\, dt = \int_{-\infty}^{\infty} h(\lambda)\, d\lambda\; \frac{1}{2\pi} \int_{-T}^{T} e^{i(\nu-\lambda)t} \Big(1 - \frac{|t|}{T}\Big)\, dt. \]
Now the inner integral can be simplified as follows. When λ ≠ ν,
\[ \int_{-T}^{T} e^{i(\nu-\lambda)t} \Big(1 - \frac{|t|}{T}\Big)\, dt = 2 \int_0^T \cos((\lambda-\nu)t) \Big(1 - \frac{t}{T}\Big)\, dt = \frac{2}{T}\, \frac{1 - \cos((\lambda-\nu)T)}{(\lambda-\nu)^2} = \frac{1}{T} \left( \frac{\sin\frac{(\lambda-\nu)T}{2}}{\frac{\lambda-\nu}{2}} \right)^{\!2} \to 0, \quad \text{as } T \to \infty. \]
Note that the above limit holds uniformly on any finite interval of λ not containing ν. When λ = ν,
\[ \int_{-T}^{T} \Big(1 - \frac{|t|}{T}\Big)\, dt = 2 \int_0^T \Big(1 - \frac{t}{T}\Big)\, dt = T \to \infty, \quad \text{as } T \to \infty. \]
Write
\[ \delta_T(\lambda - \nu) = \frac{1}{2\pi T} \left( \frac{\sin\frac{(\lambda-\nu)T}{2}}{\frac{\lambda-\nu}{2}} \right)^{\!2}, \]
with the convention that it takes the value T/2π when λ = ν; then
\[ \frac{1}{2\pi} \int_{-T}^{T} \Big(1 - \frac{|t|}{T}\Big) e^{i\nu t}\, \hat h(t)\, dt = \int_{-\infty}^{\infty} h(\lambda)\, \delta_T(\lambda - \nu)\, d\lambda. \]
In fact, the right-hand side above is the so-called Dirichlet-type integral in the literature. Next we show that
\[ \left| \int_{-\infty}^{\infty} h(\lambda)\, \delta_T(\lambda - \nu)\, d\lambda - h(\nu) \right| \to 0, \quad \text{as } T \to \infty, \]
uniformly in ν. For every ε > 0, by the uniform continuity of h, choose η > 0 such that |h(x + ν) − h(ν)| < ε/3 whenever |x| < η. In addition, for this ε > 0, choose K > 0 such that
\[ \int_{|x| > K} \delta_T(x)\, dx < \frac{\varepsilon}{6 C_h}. \]
For the selected η and K, since \(\int_{\mathbb{R}} \delta_T(x)\, dx = 1\), we have the decomposition
\[ \left| \int_{-\infty}^{\infty} h(\lambda)\, \delta_T(\lambda - \nu)\, d\lambda - h(\nu) \right| \le \int_{-\infty}^{\infty} |h(\lambda) - h(\nu)|\, \delta_T(\lambda - \nu)\, d\lambda = \int_{-\infty}^{\infty} |h(x + \nu) - h(\nu)|\, \delta_T(x)\, dx \]
\[ = \left( \int_{|x| < \eta} + \int_{\eta \le |x| \le K} + \int_{|x| > K} \right) |h(x + \nu) - h(\nu)|\, \delta_T(x)\, dx \equiv I + II + III, \]
say. Now we bound each of these integrals.
\[ I < \frac{\varepsilon}{3} \int_{\mathbb{R}} \delta_T(x)\, dx = \frac{\varepsilon}{3}, \qquad III \le 2 C_h \int_{|x| > K} \delta_T(x)\, dx < \frac{\varepsilon}{3}. \]
Furthermore, for the given η, K > 0, choose L such that 4/(Lη²) < ε/(12 C_h K). Then for T > L and η ≤ |x| ≤ K,
\[ \delta_T(x) \le \frac{4}{T\eta^2} \le \frac{4}{L\eta^2} < \frac{\varepsilon}{12 C_h K}. \]
Hence,
\[ II = \int_{\eta \le |x| \le K} |h(x + \nu) - h(\nu)|\, \delta_T(x)\, dx < \frac{\varepsilon}{12 C_h K} \int_{\eta \le |x| \le K} |h(x + \nu) - h(\nu)|\, dx \le \frac{\varepsilon}{3}. \]
Putting it all together, for T > L with L not depending on ν, we have
\[ \left| \int_{-\infty}^{\infty} h(\lambda)\, \delta_T(\lambda - \nu)\, d\lambda - h(\nu) \right| < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon. \]
Finally, taking ε = 1, for T large we have
\[ \int_{-\infty}^{\infty} h(\lambda)\, \delta_T(\lambda - \nu)\, d\lambda \le h(\nu) + 1 \le C_h + 1, \tag{5} \]
uniformly bounded. □
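The kernel δ_T above is the classical Fejér-type kernel. A quick numerical look (ours, not part of the proof) confirms the two properties the argument uses, namely unit total mass and concentration at 0 as T grows; the grid sizes are arbitrary choices:

```python
# Our numerical illustration of the kernel delta_T used in the proof of Lemma 1.
from math import sin, pi

def delta_T(x, T):
    """(1/(2*pi*T)) * (sin(x*T/2) / (x/2))^2, with the convention T/(2*pi) at x = 0."""
    if x == 0.0:
        return T / (2 * pi)
    return (sin(x * T / 2) / (x / 2)) ** 2 / (2 * pi * T)

def mass(T, K=100.0, m=200000):
    """Trapezoid approximation of int_{-K}^{K} delta_T(x) dx."""
    dx = 2 * K / m
    return dx * sum(delta_T(-K + i * dx, T) for i in range(m + 1))

assert abs(mass(10.0) - 1.0) < 5e-3              # total mass 1, up to the O(1/(T*K)) tail
assert delta_T(0.0, 40.0) > delta_T(0.0, 10.0)   # peak grows: concentration at 0
```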
Lemma 2. For \(\hat h(t)\), we have
\[ \lim_{T\to\infty} \frac{1}{2\pi}\, \frac{1}{T} \int_{-T}^{T} |t|\, e^{i\nu t}\, \hat h(t)\, dt = 0, \tag{6} \]
and moreover, for all T > 0 there exists M_0 > 0 not depending on ν such that
\[ \left| \frac{1}{T} \int_{-T}^{T} |t|\, e^{i\nu t}\, \hat h(t)\, dt \right| \le M_0. \tag{7} \]
Proof. We first note that, for λ ∈ [λ1, λ2], q(λ) has a continuous derivative, and hence there exists a constant C_q > 0 such that |q′(λ)| ≤ C_q. Observe that
\[ \int_{-T}^{T} |t|\, e^{i\nu t}\, \hat h(t)\, dt = \int_0^T t \big( e^{i\nu t}\, \hat h(t) + e^{-i\nu t}\, \hat h(-t) \big)\, dt = 2 \int_0^T t\, dt \int_{\lambda_1}^{\lambda_2} \cos((\lambda - \nu)t)\, q(\lambda)\, d\lambda, \]
and apply integration by parts to obtain
\[ \int_{\lambda_1}^{\lambda_2} \cos((\lambda - \nu)t)\, q(\lambda)\, d\lambda = \frac{1}{t} \Big( \sin((\lambda_2 - \nu)t)\, q(\lambda_2) - \sin((\lambda_1 - \nu)t)\, q(\lambda_1) - \int_{\lambda_1}^{\lambda_2} \sin((\lambda - \nu)t)\, q'(\lambda)\, d\lambda \Big), \]
which implies
\[ \int_0^T t\, dt \int_{\lambda_1}^{\lambda_2} \cos((\lambda - \nu)t)\, q(\lambda)\, d\lambda = q(\lambda_2) \int_0^T \sin((\lambda_2 - \nu)t)\, dt + q(\lambda_1) \int_0^T \sin((\nu - \lambda_1)t)\, dt - \int_{\lambda_1}^{\lambda_2} q'(\lambda)\, d\lambda \int_0^T \sin((\lambda - \nu)t)\, dt. \tag{8} \]
Again, the Fubini’s theorem was used. Noticing that |q′(λ)| ≤ Cq when λ ∈ [λ1, λ2], we have 1T T
−T|t|eiνt h(t)dt
≤ 2(q(λ2) + q(λ1) + Cq(λ2 − λ1)),
implying (7) with M_0 denoting the right-hand side of the above inequality. Moreover, for the first two terms of (8), if ν ≠ λ1, λ2,
\[ \frac{1}{T} \left| q(\lambda_2) \int_0^T \sin((\lambda_2 - \nu)t)\, dt \right| \le \frac{2\, q(\lambda_2)}{|\lambda_2 - \nu|\, T}, \qquad \frac{1}{T} \left| q(\lambda_1) \int_0^T \sin((\nu - \lambda_1)t)\, dt \right| \le \frac{2\, q(\lambda_1)}{|\nu - \lambda_1|\, T}, \]
so both terms tend to zero as T tends to infinity. For the third term,
\[ \int_{\lambda_1}^{\lambda_2} q'(\lambda)\, d\lambda \int_0^T \sin((\lambda - \nu)t)\, dt = \int_{\lambda_1}^{\lambda_2} q'(\lambda)\, \frac{\sin^2\!\big(\frac{(\lambda-\nu)T}{2}\big)}{\frac{\lambda-\nu}{2}}\, d\lambda, \]
where the ratio takes its limit 0 when λ = ν. After dividing by T and applying the same approach used in proving (4), one obtains
\[ \frac{1}{2\pi T} \int_{\lambda_1}^{\lambda_2} q'(\lambda)\, d\lambda \int_0^T \sin((\lambda - \nu)t)\, dt \to 0, \quad \text{as } T \to \infty. \]
This shows the pointwise convergence in (6) for ν ∈ ℝ. □
Lemma 3. For h(ν), we have
\[ h(\nu) = \lim_{T\to\infty} \frac{1}{2\pi} \int_{-T}^{T} e^{i\nu t}\, \hat h(t)\, dt, \tag{9} \]
and the convergence is bounded.
Proof. Notice the following:
\[ \frac{1}{2\pi} \int_{-T}^{T} e^{i\nu t}\, \hat h(t)\, dt = \frac{1}{2\pi} \int_{-T}^{T} \Big(1 - \frac{|t|}{T}\Big) e^{i\nu t}\, \hat h(t)\, dt + \frac{1}{2\pi T} \int_{-T}^{T} |t|\, e^{i\nu t}\, \hat h(t)\, dt. \]
The result now follows from Lemmas 1 and 2, together with (5) and (7). □
Proof of Theorem 1. We first notice that
\[ \mathrm{var}\big(\Delta_{\tau}^{(n)} X(t)\big) = D_{\tau}^{(n)}(0) = \int_{-\infty}^{\infty} \frac{2^n (1 - \cos\tau\lambda)^n (1 + \lambda^2)^n}{\lambda^{2n}}\, dF(\lambda) < \infty, \tag{10} \]
since \(\Delta_{\tau}^{(n)} X(t)\) is second-order stationary. Noting that \(\hat h(t) = \int_{\lambda_1}^{\lambda_2} e^{-i\lambda t} q(\lambda)\, d\lambda\), the right-hand side of (3) becomes
\[ \frac{1}{2\pi} \lim_{T\to\infty} \int_{-T}^{T} \hat h(t)\, dt \int_{-\infty}^{\infty} e^{it\lambda}\, q^{-1}(\lambda)\, dF(\lambda) = \lim_{T\to\infty} \int_{-\infty}^{\infty} q^{-1}(\lambda)\, dF(\lambda)\; \frac{1}{2\pi} \int_{-T}^{T} e^{it\lambda}\, \hat h(t)\, dt. \]
The interchange of integrals in the last identity is due to Fubini's theorem, through the boundedness of \(\hat h(t)\) and (10) over (t, λ) ∈ [−T, T] × (−∞, ∞). From Lemma 3 and the integrability condition (10), we apply the dominated convergence theorem to get
\[ \lim_{T\to\infty} \int_{-\infty}^{\infty} q^{-1}(\lambda)\, dF(\lambda)\; \frac{1}{2\pi} \int_{-T}^{T} e^{it\lambda}\, \hat h(t)\, dt = \int_{-\infty}^{\infty} q^{-1}(\lambda) \left( \lim_{T\to\infty} \frac{1}{2\pi} \int_{-T}^{T} e^{it\lambda}\, \hat h(t)\, dt \right) dF(\lambda) \]
\[ = \int_{-\infty}^{\infty} q^{-1}(\lambda)\, h(\lambda)\, dF(\lambda) = \int_{-\infty}^{\infty} \varphi(\lambda)\, dF(\lambda) = F(\lambda_2) - F(\lambda_1), \]
since λ1 and λ2 are continuity points of F. □
Remark 2. Under the assumptions of Theorem 1, if \(D_{\tau}^{(n)}(t)\) is continuous on every finite interval [−T, T], T > 0, then we can apply Fubini's theorem on (t, ν) ∈ [−T, T] × [λ1, λ2] to obtain
\[ F(\lambda_2) - F(\lambda_1) = \lim_{T\to\infty} \frac{1}{2\pi} \int_{\lambda_1}^{\lambda_2} q(\nu)\, d\nu \int_{-T}^{T} e^{-i\nu t}\, D_{\tau}^{(n)}(t)\, dt. \]
Furthermore, if the limit
\[ \lim_{T\to\infty} \frac{1}{2\pi} \int_{-T}^{T} e^{-i\nu t}\, D_{\tau}^{(n)}(t)\, dt \]
exists, denoted by \(f_D(\nu) \equiv f_D(\nu; n, \tau)\), then for continuity points λ1 < λ2 of F,
\[ F(\lambda_2) - F(\lambda_1) = \int_{\lambda_1}^{\lambda_2} f_D(\nu)\, q(\nu)\, d\nu. \]
Remark 3. From Remark 2, if one further assumes that the spectral function F(λ) is absolutely continuous with spectral density f(λ), then
\[ f(\lambda) = f_D(\lambda)\, q(\lambda), \quad \lambda \in \mathbb{R}. \tag{11} \]
This inversion formula furthers the development in Yaglom (1955). When the process is stationary, the spectral density plays an important role (Brockwell and Davis, 2009). Likewise, when the process has stationary nth increments, such a spectral density helps in understanding the process (Mandelbrot and Van Ness, 1968; Solo, 1992). In addition, our theorem offers a way to estimate this function from an easily estimated structure function.
Remark 4. From Remark 3, it is quite interesting to note that the spectral density function f(λ) given by (11) is free of τ. We notice that the structure function \(D_{\tau}^{(n)}(t) = E[\Delta_{\tau}^{(n)} X(s+t)\, \Delta_{\tau}^{(n)} X(s)]\) is positive definite. By Bochner's theorem (see, for example, Brockwell and Davis, 2009), if its spectral density f_D(λ) exists, it has the spectral representation
\[ D_{\tau}^{(n)}(t) = \int_{-\infty}^{\infty} e^{it\lambda}\, f_D(\lambda)\, d\lambda. \]
Comparing this representation with (2), we obtain (11), and it is clear that f(λ) is free of τ. Therefore, the spectral function exists regardless of which step τ one takes in differencing the process.
Remark 5. One of the most widely used real-valued processes with stationary nth random increments is the case n = 1, under which \(\Delta_\tau X(t) = X(t+\tau) - X(t)\) is assumed to be stationary. This is usually termed an intrinsically stationary process in the literature. Here, the variogram 2γ(τ) = var(X(t+τ) − X(t)) is commonly used in spatial statistics (Cressie, 1993; Stein, 1999; Huang et al., 2011a; Chilès and Delfiner, 2012). One can show (Yaglom, 1987, Section 23) that
\[ D_{\tau}^{(1)}(t) = \gamma(t+\tau) + \gamma(t-\tau) - 2\gamma(t), \qquad \gamma(\tau) = \tfrac{1}{2} D_{\tau}^{(1)}(0), \quad \tau > 0. \]
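The two identities above can be checked mechanically for Brownian motion, where cov(X(a), X(b)) = min(a, b) and γ(u) = |u|/2 (our own sketch; the base point s = 3.0 is an arbitrary choice and the result does not depend on it):

```python
# Our numerical check of D^{(1)}_tau(t) = gamma(t+tau) + gamma(t-tau) - 2*gamma(t)
# for Brownian motion; not code from the paper.

def D1(t, tau, s=3.0):
    """E[(X(s+t+tau)-X(s+t))(X(s+tau)-X(s))] expanded with cov(X(a), X(b)) = min(a, b)."""
    return (min(s + t + tau, s + tau) - min(s + t + tau, s)
            - min(s + t, s + tau) + min(s + t, s))

gamma = lambda u: abs(u) / 2.0          # semivariogram of Brownian motion

tau = 0.5
for t in [0.0, 0.2, 0.4, 0.9]:
    identity = gamma(t + tau) + gamma(t - tau) - 2.0 * gamma(t)
    triangle = max(tau - abs(t), 0.0)   # the triangular form (tau - |t|) I_{|t| < tau}
    assert abs(D1(t, tau) - identity) < 1e-12
    assert abs(D1(t, tau) - triangle) < 1e-12
```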
There are a few things we can learn from our development. First, it is clear that \(D_{\tau}^{(1)}(t)\) is a covariance function, which directly implies Theorem 4 in Ma (2004); note that here we do not assume γ(·) to be bounded, as imposed by Ma (2004). Second, from (11), one has
\[ f(\lambda) = \left( \lim_{T\to\infty} \frac{1}{2\pi} \int_{-T}^{T} D_{\tau}^{(1)}(u)\, e^{-iu\lambda}\, du \right) \cdot \frac{\lambda^2}{2(1+\lambda^2)(1-\cos\lambda\tau)}, \quad \lambda > 0. \tag{12} \]
This gives the spectral density of a structure function directly; hence one can obtain the spectral density for the variogram as well. For example, consider Brownian motion X(t), which has the covariance function cov(X(s), X(t)) = min(s, t); its structure function is then \(D_{\tau}^{(1)}(u) = (\tau - |u|)\, I_{|u| < \tau}\). Therefore, from (12), the spectral density is given by
\[ f(\lambda) = \left( \lim_{T\to\infty} \frac{1}{2\pi} \int_{-T}^{T} (\tau - |u|)\, I_{|u|<\tau}\, e^{-iu\lambda}\, du \right) \cdot \frac{\lambda^2}{2(1+\lambda^2)(1-\cos\lambda\tau)} = \frac{1}{2\pi(1+\lambda^2)}, \quad \lambda > 0. \]
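A numerical sanity check (ours, not from the paper): feeding the triangular structure function into (12) should reproduce 1/(2π(1+λ²)) for any admissible τ. The grid size is an arbitrary choice:

```python
# Our numerical verification of the Brownian-motion example in Remark 5.
from math import cos, pi

def f_from_structure(lam, tau, m=20000):
    """f_D(lam) = (1/2pi) int_{-tau}^{tau} (tau-|u|) cos(u*lam) du (trapezoid), times q(lam; 1, tau)."""
    du = 2 * tau / m
    us = [-tau + i * du for i in range(m + 1)]
    vals = [(tau - abs(u)) * cos(u * lam) for u in us]
    fD = du * (sum(vals) - 0.5 * (vals[0] + vals[-1])) / (2 * pi)
    return fD * lam ** 2 / (2 * (1 + lam ** 2) * (1 - cos(lam * tau)))

for lam in [0.5, 1.0, 2.0]:
    exact = 1.0 / (2 * pi * (1 + lam ** 2))
    assert abs(f_from_structure(lam, tau=1.0) - exact) < 1e-6
```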
If one uses the spectral representation of the variogram in Christakos (1984, equation 35), or Yaglom (1987, equation 4.250a), the spectral density in \(\gamma(\tau) = 2\int_0^{\infty} (1 - \cos\tau\lambda)\, f_\gamma(\lambda)\, d\lambda\) takes the form
\[ f_\gamma(\lambda) = \frac{1+\lambda^2}{\lambda^2}\, f(\lambda) = \frac{1}{2\pi\lambda^2}. \]
This is commonly viewed as the spectral density function of Brownian motion.
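This last claim can also be verified numerically (our sketch): integrating 1 − cos(τλ) against f_γ(λ) = 1/(2πλ²) should return the Brownian-motion semivariogram γ(τ) = τ/2. The cutoff L and the ~1/L tail correction are our own numerical choices:

```python
# Our numerical check that f_gamma(lam) = 1/(2*pi*lam^2) recovers gamma(tau) = tau/2.
from math import cos, pi

def gamma_from_spectrum(tau, L=2000.0, m=100000):
    """gamma(tau) = 2 int_0^inf (1 - cos(tau*lam)) f_gamma(lam) dlam, f_gamma = 1/(2*pi*lam^2)."""
    dl = L / m
    total = 0.5 * tau ** 2 / 2          # half-weighted limit value tau^2/2 at lam = 0
    for i in range(1, m + 1):
        lam = i * dl
        total += (1 - cos(tau * lam)) / lam ** 2
    # trapezoid over [0, L], plus the ~1/L tail of int_L^inf dlam/lam^2
    return (total * dl + 1.0 / L) / pi

assert abs(gamma_from_spectrum(1.0) - 0.5) < 1e-3
assert abs(gamma_from_spectrum(2.0) - 1.0) < 1e-3
```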
Remark 6. Besides the Brownian motion in Remark 5, the power variogram 2γ(u) = |u|^α, 0 < α < 2, is widely studied and commonly used in practice (Yaglom, 1987; Cressie, 1993; Huang et al., 2011b). It appears that the spectral density function of the power variogram can be obtained through the variogram representation (see, for example, page 407 of Yaglom, 1987, or page 532 of Schoenberg, 1938). In this note, by directly applying (12), we obtain its spectral density function through structure functions.
Proposition 1. The spectral density function for the power variogram 2γ(u) = |u|^α, 0 < α < 2, is given by
\[ f(\lambda) = \frac{\Gamma(\alpha+1)}{2\pi(1+\lambda^2)}\, \frac{\sin(\alpha\pi/2)}{\lambda^{\alpha-1}}, \quad \lambda > 0. \tag{13} \]
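Proposition 1 can be sanity-checked numerically (our sketch, not part of the proof): at α = 1, formula (13) must reduce to the Brownian-motion density of Remark 5, and for 0 < α < 1 the truncated structure-function integral from the proof below can be compared with (13) directly. All grid and cutoff values are arbitrary choices of ours:

```python
# Our numerical cross-check of the power-variogram spectral density (13).
from math import gamma as Gamma, sin, cos, pi

def f_power(lam, alpha):
    """Spectral density (13) of the power variogram 2*gamma(u) = |u|^alpha."""
    return Gamma(alpha + 1) * sin(alpha * pi / 2) / (2 * pi * (1 + lam ** 2) * lam ** (alpha - 1))

# Consistency at alpha = 1: (13) reduces to 1/(2*pi*(1+lam^2)).
assert abs(f_power(2.0, 1.0) - 1.0 / (2 * pi * (1 + 4.0))) < 1e-12

def f_numeric(lam, alpha=0.5, tau=1.0, T=300.0, m=60000):
    """Trapezoid version of f = q * (1/2pi) int_0^T g(u) cos(u*lam) du, g as in the proof."""
    du = T / m
    g = lambda u: (u + tau) ** alpha + abs(u - tau) ** alpha - 2 * u ** alpha
    vals = [g(i * du) * cos(i * du * lam) for i in range(m + 1)]
    fD = du * (sum(vals) - 0.5 * (vals[0] + vals[-1])) / (2 * pi)
    return fD * lam ** 2 / (2 * (1 + lam ** 2) * (1 - cos(lam * tau)))

# alpha = 0.5: the truncated integral matches (13) within the chosen tolerance.
assert abs(f_numeric(1.0) - f_power(1.0, 0.5)) < 1e-3
```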
To prove this proposition, we need the following lemma.
Lemma 4. Let 0 < α < 2 and λ > 0.
1. For each fixed τ > 0, we have
\[ \int_{\tau}^{\infty} \big( (u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha \big) \cos(u\lambda)\, du < \infty. \tag{14} \]
2. For 1 ≤ α < 2,
\[ \lim_{\tau\to0^+} \int_0^{\infty} \frac{(u+\tau)^\alpha + |u-\tau|^\alpha - 2u^\alpha}{\tau^2}\, \cos(u\lambda)\, du = \alpha(\alpha-1) \int_0^{\infty} \frac{\cos(u\lambda)}{u^{2-\alpha}}\, du, \tag{15} \]
and for 0 < α < 1,
\[ \lim_{\tau\to0^+} \int_0^{\infty} \frac{(u+\tau)^\alpha + |u-\tau|^\alpha - 2u^\alpha}{\tau^2}\, \cos(u\lambda)\, du = \alpha\lambda \int_0^{\infty} \frac{\sin(u\lambda)}{u^{1-\alpha}}\, du. \tag{16} \]
The proof of Lemma 4 is tedious and is deferred to the Appendix.
Proof of Proposition 1. The case α = 1 is given in Remark 5. We now first consider the existence of the limit f_D(λ) appearing in Remark 5. Notice that for τ > 0,
\[ \lim_{T\to\infty} \frac{1}{2\pi} \int_{-T}^{T} D_{\tau}^{(1)}(u)\, e^{-iu\lambda}\, du = \lim_{T\to\infty} \frac{1}{4\pi} \int_{-T}^{T} \big( |u+\tau|^\alpha + |u-\tau|^\alpha - 2|u|^\alpha \big) e^{-iu\lambda}\, du \]
\[ = \frac{1}{2\pi} \int_0^{\infty} \big( (u+\tau)^\alpha + |u-\tau|^\alpha - 2u^\alpha \big) \cos(u\lambda)\, du < \infty, \]
which is finite by (14) in Lemma 4. Therefore, we apply (12) to get
\[ f(\lambda) = \frac{\lambda^2}{4\pi(1+\lambda^2)(1-\cos\lambda\tau)} \int_0^{\infty} \cos(u\lambda)\, \big( (u+\tau)^\alpha + |u-\tau|^\alpha - 2u^\alpha \big)\, du. \]
Note that f(λ) is free of τ, so taking the limit as τ → 0⁺ and applying Lemma 4, we have
\[ f(\lambda) = \lim_{\tau\to0^+} \frac{\lambda^2}{4\pi(1+\lambda^2)} \int_0^{\infty} \frac{(u+\tau)^\alpha + |u-\tau|^\alpha - 2u^\alpha}{1-\cos(\lambda\tau)}\, \cos(u\lambda)\, du \]
\[ = \frac{1}{4\pi(1+\lambda^2)} \left( \lim_{\tau\to0^+} \int_0^{\infty} \frac{(u+\tau)^\alpha + |u-\tau|^\alpha - 2u^\alpha}{\tau^2}\, \cos(u\lambda)\, du \right) \left( \lim_{\tau\to0^+} \frac{\lambda^2\tau^2}{1-\cos(\tau\lambda)} \right) \]
\[ = \begin{cases} \dfrac{\alpha(\alpha-1)}{2\pi(1+\lambda^2)} \displaystyle\int_0^{\infty} \frac{\cos(u\lambda)}{u^{2-\alpha}}\, du, & \text{if } 1 \le \alpha < 2, \\[3mm] \dfrac{\alpha\lambda}{2\pi(1+\lambda^2)} \displaystyle\int_0^{\infty} \frac{\sin(u\lambda)}{u^{1-\alpha}}\, du, & \text{if } 0 < \alpha < 1, \end{cases} \]
\[ = \frac{\Gamma(\alpha+1)}{2\pi(1+\lambda^2)}\, \frac{\sin(\alpha\pi/2)}{\lambda^{\alpha-1}}, \]
giving (13). In the second step we used \(\lim_{\tau\to0^+} \lambda^2\tau^2/(1-\cos(\tau\lambda)) = 2\); the last equality is due to (3.761.9) and (3.761.4) of Gradshteyn and Ryzhik (2007). □
Acknowledgments
The authors appreciate the comments from an anonymous reviewer, which greatly enhanced the quality of the paper. The authors also acknowledge the support of NSF-DMS 1208847 and NSF-DMS 1412343 for this work.
Appendix
Proof of Lemma 4. When α = 1, the results are trivial; hence we only prove (14)–(16) for α ≠ 1. To this end, we make use of the binomial series expansion of (1+x)^α (see, for example, Fischer, 1983, pages 406–412). In particular,
\[ (1+x)^\alpha = \sum_{k=0}^{l} \binom{\alpha}{k} x^k + R_{l+1}(x), \quad \text{for } 0 < x < 1, \qquad (1+x)^\alpha = \sum_{k=0}^{l} \binom{\alpha}{k} x^k + R^*_{l+1}(x), \quad \text{for } -1 < x < 0, \]
with
\[ |R_{l+1}(x)| \le \Big| \binom{\alpha}{l+1} \Big|\, x^{l+1} \text{ for } 0 < \alpha < 2, \qquad |R^*_{l+1}(x)| \le \begin{cases} \dfrac{|(\alpha)_{l+1}|}{l!}\, |x|^{l+1}, & \text{if } \alpha > 1, \\[2mm] \dfrac{|(\alpha)_{l+1}|}{l!}\, \dfrac{|x|^{l+1}}{(1+x)^{1-\alpha}}, & \text{if } \alpha < 1, \end{cases} \]
where (α)_k = α(α−1)⋯(α−k+1). We now use this expansion for (1+τ/u)^α and (1−τ/u)^α with u > τ > 0 and l = 3:
\[ \Big(1+\frac{\tau}{u}\Big)^{\!\alpha} = 1 + \alpha\frac{\tau}{u} + \frac{\alpha(\alpha-1)}{2!}\frac{\tau^2}{u^2} + \frac{(\alpha)_3}{3!}\frac{\tau^3}{u^3} + R_4(\tau/u), \]
\[ \Big(1-\frac{\tau}{u}\Big)^{\!\alpha} = 1 - \alpha\frac{\tau}{u} + \frac{\alpha(\alpha-1)}{2!}\frac{\tau^2}{u^2} - \frac{(\alpha)_3}{3!}\frac{\tau^3}{u^3} + R^*_4(-\tau/u), \]
with
\[ |R_4(\tau/u)| \le \Big| \binom{\alpha}{4} \Big| (\tau/u)^4 \text{ for } 0 < \alpha < 2, \qquad |R^*_4(-\tau/u)| \le \begin{cases} \dfrac{|(\alpha)_4|}{3!}\, (\tau/u)^4, & \text{if } \alpha > 1, \\[2mm] \dfrac{|(\alpha)_4|}{3!}\, \dfrac{(\tau/u)^4}{(1-\tau/u)^{1-\alpha}}, & \text{if } 0 < \alpha < 1. \end{cases} \]
Hence,
\[ \Big(1+\frac{\tau}{u}\Big)^{\!\alpha} + \Big(1-\frac{\tau}{u}\Big)^{\!\alpha} - 2 = \alpha(\alpha-1)\frac{\tau^2}{u^2} + R_4(\tau/u) + R^*_4(-\tau/u). \]
That is, for u > τ,
\[ \frac{(u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha}{\tau^2} - \frac{\alpha(\alpha-1)}{u^{2-\alpha}} = \frac{u^\alpha}{\tau^2} \Big[ \Big(1+\frac{\tau}{u}\Big)^{\!\alpha} + \Big(1-\frac{\tau}{u}\Big)^{\!\alpha} - 2 \Big] - \frac{\alpha(\alpha-1)}{u^{2-\alpha}} \]
\[ = \frac{u^\alpha}{\tau^2} \Big[ \alpha(\alpha-1)\frac{\tau^2}{u^2} + R_4(\tau/u) + R^*_4(-\tau/u) \Big] - \frac{\alpha(\alpha-1)}{u^{2-\alpha}} = \frac{u^\alpha}{\tau^2} R_4(\tau/u) + \frac{u^\alpha}{\tau^2} R^*_4(-\tau/u). \]
1. When 1 ≤ α < 2, the above quantity can be bounded by
\[ \left| \frac{(u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha}{\tau^2} - \frac{\alpha(\alpha-1)}{u^{2-\alpha}} \right| \le \frac{u^\alpha}{\tau^2} \Big| \binom{\alpha}{4} \Big| \Big(\frac{\tau}{u}\Big)^4 + \frac{u^\alpha}{\tau^2} \frac{|(\alpha)_4|}{3!} \Big(\frac{\tau}{u}\Big)^4 \equiv \frac{C(\alpha)\tau^2}{u^{4-\alpha}}, \tag{17} \]
where \(C(\alpha) = \big| \binom{\alpha}{4} \big| + |(\alpha)_4|/3! > 0\) depends solely on α. Therefore, for M ≫ τ,
\[ \left| \int_M^{\infty} \cos(u\lambda)\, \frac{(u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha}{\tau^2}\, du - \alpha(\alpha-1) \int_M^{\infty} \frac{\cos(u\lambda)}{u^{2-\alpha}}\, du \right| \]
\[ \le \int_M^{\infty} \left| \frac{(u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha}{\tau^2} - \frac{\alpha(\alpha-1)}{u^{2-\alpha}} \right| du \le C(\alpha)\tau^2 \int_M^{\infty} \frac{du}{u^{4-\alpha}} = \frac{C(\alpha)\tau^2}{3-\alpha}\, \frac{1}{M^{3-\alpha}} \to 0, \quad \text{as } M \to \infty. \]
2. When 0 < α < 1, we have
\[ \left| \frac{(u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha}{\tau^2} - \frac{\alpha(\alpha-1)}{u^{2-\alpha}} \right| \le \frac{u^\alpha}{\tau^2} \Big| \binom{\alpha}{4} \Big| \Big(\frac{\tau}{u}\Big)^4 + \frac{u^\alpha}{\tau^2} \frac{|(\alpha)_4|}{3!} \Big(\frac{\tau}{u}\Big)^4 \frac{1}{(1-\tau/u)^{1-\alpha}}, \tag{18} \]
and so
\[ \left| \int_M^{\infty} \cos(u\lambda)\, \frac{(u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha}{\tau^2}\, du - \alpha(\alpha-1) \int_M^{\infty} \frac{\cos(u\lambda)}{u^{2-\alpha}}\, du \right| \]
\[ \le \Big| \binom{\alpha}{4} \Big| \tau^2 \int_M^{\infty} \frac{du}{u^{4-\alpha}} + \frac{1}{\tau^2} \frac{|(\alpha)_4|}{3!} \int_M^{\infty} \frac{u^\alpha (\tau/u)^4}{(1-\tau/u)^{1-\alpha}}\, du \]
\[ \le \frac{\big| \binom{\alpha}{4} \big| \tau^2}{3-\alpha}\, \frac{1}{M^{3-\alpha}} + \frac{|(\alpha)_4|}{3!}\, \frac{\tau^2}{(M-\tau)^{1-\alpha}} \int_M^{\infty} \frac{du}{u^3} = \frac{\big| \binom{\alpha}{4} \big| \tau^2}{3-\alpha}\, \frac{1}{M^{3-\alpha}} + \frac{|(\alpha)_4|}{3!}\, \frac{\tau^2}{2(M-\tau)^{1-\alpha} M^2} \to 0, \quad \text{as } M \to \infty. \]
Note that for 0 < α < 2,
\[ \int_M^{\infty} \frac{\cos(u\lambda)}{u^{2-\alpha}}\, du \to 0 \text{ as } M \to \infty \;\Longrightarrow\; \int_{\tau}^{\infty} \frac{(u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha}{\tau^2}\, \cos(u\lambda)\, du < \infty, \]
and so (14) holds, completing the first part. For the second part, when 1 < α < 2, we consider the difference
\[ \left| \int_0^{\infty} \frac{(u+\tau)^\alpha + |u-\tau|^\alpha - 2u^\alpha}{\tau^2}\, \cos(u\lambda)\, du - \alpha(\alpha-1) \int_0^{\infty} \frac{\cos(u\lambda)}{u^{2-\alpha}}\, du \right| \]
\[ \le \left| \int_0^{\tau} \frac{(u+\tau)^\alpha + (\tau-u)^\alpha - 2u^\alpha}{\tau^2}\, \cos(u\lambda)\, du \right| + \left| \int_0^{\tau} \alpha(\alpha-1)\, \frac{\cos(u\lambda)}{u^{2-\alpha}}\, du \right| \]
\[ \quad + \left| \int_{\tau}^{\infty} \frac{(u+\tau)^\alpha + (u-\tau)^\alpha - 2u^\alpha}{\tau^2}\, \cos(u\lambda)\, du - \alpha(\alpha-1) \int_{\tau}^{\infty} \frac{\cos(u\lambda)}{u^{2-\alpha}}\, du \right| \equiv I + II + III. \]
Note that from (17),
\[ II + III \le \alpha(\alpha-1) \int_0^{\tau} \frac{du}{u^{2-\alpha}} + C(\alpha)\tau^2 \int_{\tau}^{\infty} \frac{du}{u^{4-\alpha}} = \alpha\tau^{\alpha-1} + \frac{C(\alpha)}{3-\alpha}\, \tau^{\alpha-1} = \Big( \alpha + \frac{C(\alpha)}{3-\alpha} \Big) \tau^{\alpha-1} \to 0, \]
as τ → 0. For term I, applying L'Hôpital's rule, one can easily obtain that I → 0 as τ → 0. When 0 < α < 1, we cannot use (18) for this purpose; we first convert to the case 1 < α + 1 < 2 and then apply (15). Using integration by parts, one obtains
\[ \int_0^{\infty} \cos(u\lambda)\, \big( (u+\tau)^\alpha + |u-\tau|^\alpha - 2u^\alpha \big)\, du = \frac{\lambda}{1+\alpha} \Big[ \int_{\tau}^{\infty} \sin(u\lambda)\, \big( (u+\tau)^{\alpha+1} + (u-\tau)^{\alpha+1} - 2u^{\alpha+1} \big)\, du \]
\[ \quad + \int_0^{\tau} \sin(u\lambda)\, \big( (u+\tau)^{\alpha+1} - (\tau-u)^{\alpha+1} - 2u^{\alpha+1} \big)\, du \Big]. \]
After dividing by τ², one can apply the same reasoning as before to show that
\[ \lim_{\tau\to0^+} \int_{\tau}^{\infty} \sin(u\lambda)\, \frac{(u+\tau)^{\alpha+1} + (u-\tau)^{\alpha+1} - 2u^{\alpha+1}}{\tau^2}\, du = \alpha(\alpha+1) \int_0^{\infty} \frac{\sin(u\lambda)}{u^{1-\alpha}}\, du. \]
In addition, applying L'Hôpital's rule, one easily has
\[ \lim_{\tau\to0^+} \int_0^{\tau} \sin(u\lambda)\, \frac{(u+\tau)^{\alpha+1} - (\tau-u)^{\alpha+1} - 2u^{\alpha+1}}{\tau^2}\, du = 0. \]
Combining the two limits above gives (16). This completes the proof of Lemma 4. □
References
Blu, T., Unser, M., 2007. Self-similarity: Part II - optimal estimation of fractal processes. IEEE Trans. Signal Process. 55, 1364–1378.
Brockwell, P.J., Davis, R.A., 2009. Time Series: Theory and Methods, second ed. Springer, New York.
Chilès, J., Delfiner, P., 2012. Geostatistics: Modeling Spatial Uncertainty, second ed. Wiley, New York.
Christakos, G., 1984. On the problem of permissible covariance and variogram models. Water Resour. Res. 20 (2), 251–265.
Cressie, N., 1993. Statistics for Spatial Data, revised ed. Wiley, New York.
Doob, J.L., 1953. Stochastic Processes. Wiley, New York.
Fischer, E., 1983. Intermediate Real Analysis. Springer, New York.
Gel'fand, I.M., 1955. Generalized random processes. Dok. Acad. Nauk SSSR 100, 853–856.
Gel'fand, I.M., Vilenkin, N.Y., 1964. Applications of Harmonic Analysis (Generalized Functions), Vol. 4. Academic Press, New York.
Gradshteyn, I.S., Ryzhik, I.M., 2007. Tables of Integrals, Series, and Products, seventh ed. Academic Press, New York.
Hannan, E.J., 1970. Multiple Time Series. Wiley, New York.
Huang, C., Hsing, T., Cressie, N., 2011a. Nonparametric estimation of variogram and its spectrum. Biometrika 98, 775–789.
Huang, C., Yao, Y., Cressie, N., Hsing, T., 2009. Multivariate intrinsic random functions for cokriging. Math. Geosci. 41, 887–904.
Huang, C., Zhang, H., Robeson, S., 2011b. On the validity of commonly used covariance and variogram functions on the sphere. Math. Geosci. 43, 721–733.
Itô, K., 1954. Stationary random distributions. Mem. College Sci. Univ. Kyoto, Series A 28, 209–223.
Ma, C., 2004. The use of the variogram in construction of stationary time series models. J. Appl. Probab. 41, 1093–1103.
Mandelbrot, B.B., Van Ness, J.W., 1968. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 10, 422–437.
Matheron, G., 1973. The intrinsic random functions and their applications. Adv. Appl. Probab. 5, 439–468.
Schoenberg, I., 1938. Metric spaces and positive definite functions. Trans. Amer. Math. Soc. 44, 522–536.
Solo, V., 1992. Intrinsic random functions and the paradox of 1/f noise. SIAM J. Appl. Math. 52, 270–291.
Stein, M., 1999. Interpolation of Spatial Data: Some Theory for Kriging. Springer, Berlin.
Unser, M., Blu, T., 2007. Self-similarity: Part I - splines and operators. IEEE Trans. Signal Process. 55, 1352–1363.
Yaglom, A.M., 1955. Correlation theory of processes with random stationary nth increments. Mat. Sb. 37, 141–196 (in Russian). English translation in American Mathematical Society Translations: Series 2, 8, 87–141. American Mathematical Society, Providence, RI, 1958.
Yaglom, A.M., 1987. Correlation Theory of Stationary and Related Random Functions, Vol. I. Springer, New York.