

COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES

MATHIEU SINN∗ AND KARSTEN KELLER†

Abstract. For a zero-mean Gaussian process, the covariances of zero crossings can be expressed as the sum of quadrivariate normal orthant probabilities. In this paper, we demonstrate the evaluation of zero crossing covariances using one-dimensional integrals. Furthermore, we provide asymptotics of zero crossing covariances for large time lags and derive bounds and approximations. Based on these results, we analyze the variance of the empirical zero crossing rate. We illustrate the applications of our results by autoregressive (AR), fractional Gaussian noise and fractionally integrated autoregressive moving average (FARIMA) processes.

Key words. zero crossing, binary time series, quadrivariate normal orthant probability, AR(1) process, fractional Gaussian noise, FARIMA(0,d,0) process

1. Introduction. Indicators of zero crossings are widely applied in various fields of engineering and natural science, such as the analysis of vibrations, the detection of signals in the presence of noise and the modelling of binary time series. A large body of literature has been devoted to zero crossing analysis. Dating back to the 1940s, telephony engineers found that replacing the original speech signal with rectangular waves having the same zero crossings retained high intelligibility [5]. Since the beginning of digital processing of speech signals, empirical rates of zero crossings have been used for the detection of pitch frequencies and to distinguish voiced and unvoiced intervals [11, 19].

For a discrete-time stationary Gaussian process or a sampled random sinusoid, the zero crossing rate is related to the first-order autocorrelation and to the dominant spectral frequency. Kedem [14] has developed estimators for autocorrelations and spectral frequencies by higher order zero crossings and shows diverse applications. Ho and Sun [12] have proved that the empirical zero crossing rate is asymptotically normally distributed if the autocorrelations of the Gaussian process decay faster than $k^{-1/2}$. Coeurjolly [7] has proposed to use zero crossings to estimate the Hurst parameter in fractional Gaussian noise; this approach applies more generally to the estimation of monotonic functionals of the first-order autocorrelation. Coeurjolly's estimator has been applied to the analysis of hydrological time series [16] and atmospheric turbulence data [20].

Up to now, no closed-form expression is known for the variance of the empirical zero crossing rate. Basically, covariances of zero crossings are sums and products of four-dimensional normal orthant probabilities, which in general can be evaluated only numerically. Abrahamson [1] derives an expression involving two-dimensional integrals for the special case of orthoscheme probabilities and represents any four-dimensional normal orthant probability as a linear combination of six orthoscheme probabilities. For certain simpler correlation structures, Cheng [6] proposes expressions involving the dilogarithm function. Kedem [14] and Damsleth and El-Shaarawi [9] introduce approximations for processes with short memory. Most recent approaches apply Monte Carlo sampling for four dimensions and higher (see [8] for an overview).

∗Universität zu Lübeck, Institut für Mathematik, Wallstr. 40, D-23560 Lübeck, Germany ([email protected]).

†Universität zu Lübeck, Institut für Mathematik, Wallstr. 40, D-23560 Lübeck, Germany ([email protected]).



In this paper, we propose a simple formula for the exact numerical evaluation of zero crossing covariances and derive their asymptotics, bounds and approximations. The results are obtained by analyzing partial derivatives of four-dimensional orthant probabilities with respect to correlation coefficients. In Theorem 3.4, we give a representation of zero crossing covariances and four-dimensional normal orthant probabilities by the sum of four one-dimensional integrals. By a Taylor expansion, we derive Theorem 4.1, which gives asymptotics of zero crossing covariances for large time lags. In particular, when the autocorrelation function of the underlying process decreases to 0 with the same order of magnitude as a function $f(k)$, the zero crossing covariances decrease to 0 with the same order of magnitude as $(f(k))^2$.

Theorem 5.3 states sufficient conditions on the autocorrelation structure of the underlying process to obtain lower and upper bounds by setting equal certain correlation coefficients in the orthant probabilities. Approximations of these expressions are given by Theorem 5.5.

In Theorem 6.1 we establish asymptotics of the variance of the empirical zero crossing rate. Furthermore, we discuss how the previous results can be used for a numerical evaluation of the variance. In Section 7, we apply the results to zero crossings in AR(1) processes, fractional Gaussian noise and FARIMA(0,d,0) models.

2. Preliminaries. Let $Y = (Y_k)_{k \in \mathbb{Z}}$ be a stationary and non-degenerate zero-mean Gaussian process on some probability space $(\Omega, \mathcal{A}, \mathbf{P})$ with autocorrelations $\rho_k = \mathrm{Corr}(Y_0, Y_k)$ for $k \in \mathbb{Z}$. For $k \in \mathbb{Z}$, let

$$C_k := \mathbf{1}_{\{Y_k > 0,\, Y_{k+1} < 0\}} + \mathbf{1}_{\{Y_k < 0,\, Y_{k+1} > 0\}}$$

be the indicator of a zero crossing at time $k$. Since $Y$ is stationary, $\mathbf{P}(C_k = 1)$ is constant in $k$, and the empirical zero crossing rate $c_n := \frac{1}{n} \sum_{k=1}^{n} C_k$ is an unbiased estimator of $\mathbf{P}(C_0 = 1)$, that is, $\mathbf{E}(c_n) = \mathbf{P}(C_0 = 1)$ for all $n \in \mathbb{N}$. Denote the covariance of zero crossings by

$$\gamma_k := \mathrm{Cov}(C_0, C_k)$$

for $k \in \mathbb{Z}$. The variance of $c_n$ is given by

$$\mathrm{Var}(c_n) = \frac{1}{n^2} \Big( n \gamma_0 + 2 \sum_{k=1}^{n-1} (n-k)\, \gamma_k \Big). \tag{2.1}$$
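Formula (2.1) is straightforward to evaluate once the $\gamma_k$ are available; below is a minimal numeric sketch, assuming NumPy and that the covariances are supplied as an array.

```python
import numpy as np

def var_cn(gamma, n):
    """Variance of the empirical zero crossing rate c_n via formula (2.1),
    given gamma[k] = Cov(C_0, C_k) for k = 0, ..., n-1."""
    k = np.arange(1, n)
    return (n * gamma[0] + 2.0 * np.sum((n - k) * gamma[k])) / n**2

# For Gaussian white noise (rho_1 = 0): gamma_0 = 1/4 and gamma_k = 0 for
# k >= 1, so Var(c_n) = 1/(4n).
n = 100
gamma = np.zeros(n)
gamma[0] = 0.25
print(var_cn(gamma, n))  # 0.0025
```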

This paper investigates the evaluation of $\gamma_0, \gamma_1, \gamma_2, \ldots$. Next, we give closed-form expressions for $\gamma_0$ and $\gamma_1$ based on well-known formulas for the evaluation of bi- and trivariate normal orthant probabilities.

2.1. Orthant probabilities. For a strictly positive definite and symmetric matrix $\Sigma \in \mathbb{R}^{n \times n}$ with $n \in \mathbb{N}$, let $\phi(\Sigma, \cdot)$ denote the Lebesgue density of the $n$-dimensional normal distribution with zero means and covariance matrix $\Sigma$, that is,

$$\phi(\Sigma, \mathbf{x}) = \big((2\pi)^n |\Sigma|\big)^{-\frac{1}{2}} \exp\Big(-\frac{1}{2}\, \mathbf{x}^T \Sigma^{-1} \mathbf{x}\Big)$$

for $\mathbf{x} \in \mathbb{R}^n$, where $|\Sigma|$ denotes the determinant of $\Sigma$. The $n$-dimensional normal orthant probability with respect to $\Sigma$ is given by

$$\Phi(\Sigma) := \int_{[0,\infty)^n} \phi(\Sigma, \mathbf{x})\, d\mathbf{x}.$$


If $Z = (Z_1, Z_2, \ldots, Z_n)$ is a non-degenerate zero-mean Gaussian random vector and $\Sigma = \mathrm{Cov}(Z) = (\mathrm{Cov}(Z_i, Z_j))_{i,j=1}^{n}$ is the covariance matrix of $Z$, then

$$\mathbf{P}(Z_1 > 0, Z_2 > 0, \ldots, Z_n > 0) = \Phi(\Sigma).$$

Furthermore, if $a_1, a_2, \ldots, a_n > 0$ and $A = \mathrm{diag}(\sqrt{a_1}, \sqrt{a_2}, \ldots, \sqrt{a_n})$ is the $n \times n$ diagonal matrix with entries $\sqrt{a_1}, \sqrt{a_2}, \ldots, \sqrt{a_n}$ on the main diagonal, then $A \Sigma A$ is the covariance matrix of $AZ = (\sqrt{a_1} Z_1, \sqrt{a_2} Z_2, \ldots, \sqrt{a_n} Z_n)$. Consequently, $\Phi(A \Sigma A) = \Phi(\Sigma)$. By choosing $a_1 = a_2 = \ldots = a_n = a$ and $a_i = \mathrm{Var}(Z_i)$ for $i = 1, 2, \ldots, n$, respectively, we obtain

$$\Phi(\Sigma) = \Phi(a \cdot \Sigma) \quad\text{and}\quad \Phi(\mathrm{Corr}(Z)) = \Phi(\mathrm{Cov}(Z)). \tag{2.2}$$

The following closed-form expressions for two- and three-dimensional normal orthant probabilities are well-known (see, e.g., [2]).

Lemma 2.1. Let $(Z_1, Z_2, Z_3)$ be a zero-mean non-degenerate Gaussian random vector and $\rho_{ij} = \mathrm{Corr}(Z_i, Z_j)$ for $i, j \in \{1, 2, 3\}$. Then

$$\mathbf{P}(Z_1 > 0, Z_2 > 0) = \frac{1}{4} + \frac{1}{2\pi} \arcsin \rho_{12},$$

$$\mathbf{P}(Z_1 > 0, Z_2 > 0, Z_3 > 0) = \frac{1}{8} + \frac{1}{4\pi} \arcsin \rho_{12} + \frac{1}{4\pi} \arcsin \rho_{13} + \frac{1}{4\pi} \arcsin \rho_{23}.$$
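The bivariate formula in Lemma 2.1 can be checked against a generic numerical routine; a sketch, assuming SciPy is available, and using the fact that for a zero-mean normal vector $\mathbf{P}(Z_1 > 0, Z_2 > 0) = \mathbf{P}(Z_1 < 0, Z_2 < 0)$ equals the CDF at the origin:

```python
import numpy as np
from scipy.stats import multivariate_normal

def orthant_2d(rho):
    """Closed form from Lemma 2.1: P(Z1 > 0, Z2 > 0)."""
    return 0.25 + np.arcsin(rho) / (2.0 * np.pi)

# By symmetry of the zero-mean normal, P(Z > 0) = P(Z < 0) = CDF at the origin.
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])
numeric = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([0.0, 0.0])
print(abs(numeric - orthant_2d(rho)) < 1e-5)  # True
```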

Lemma 2.1 allows us to derive a closed-form expression for the probability of a change, namely,

$$\mathbf{P}(C_0 = 1) = 1 - \mathbf{P}(Y_0 > 0, Y_1 > 0) - \mathbf{P}(Y_0 < 0, Y_1 < 0) = \frac{1}{2} - \frac{1}{\pi} \arcsin \rho_1. \tag{2.3}$$

Furthermore,

$$\gamma_0 = \mathbf{P}(C_0 = 1)\,(1 - \mathbf{P}(C_0 = 1)) = (1 - 2\,\mathbf{P}(Y_0 > 0, Y_1 > 0))\; 2\,\mathbf{P}(Y_0 > 0, Y_1 > 0) = \frac{1}{4} - \frac{1}{\pi^2} (\arcsin \rho_1)^2 \tag{2.4}$$

and

$$\gamma_1 = \mathbf{P}(C_0 = 1, C_1 = 1) - (\mathbf{P}(C_0 = 1))^2 = 2\,\mathbf{P}(Y_0 > 0, -Y_1 > 0, Y_2 > 0) - (1 - 2\,\mathbf{P}(Y_0 > 0, Y_1 > 0))^2 = \frac{1}{2\pi} \arcsin \rho_2 - \frac{1}{\pi^2} (\arcsin \rho_1)^2. \tag{2.5}$$

If $k > 1$, then $\gamma_k$ can be expressed as the sum and product, respectively, of bi- and quadrivariate normal orthant probabilities,

$$\gamma_k = \mathrm{Cov}(1 - C_0, 1 - C_k) = 2\,\mathbf{P}(Y_0 > 0, Y_1 > 0, Y_k > 0, Y_{k+1} > 0) + 2\,\mathbf{P}(Y_0 > 0, Y_1 > 0, -Y_k > 0, -Y_{k+1} > 0) - 4\,\mathbf{P}(Y_0 > 0, Y_1 > 0)\,\mathbf{P}(Y_k > 0, Y_{k+1} > 0). \tag{2.6}$$

Note that, in general, no closed-form expression is available for normal orthant probabilities of dimension $n \geq 4$.
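The closed forms (2.4) and (2.5) translate into one-liners; a sketch, assuming NumPy (the AR(1) autocorrelations $\rho_k = a^k$ used at the end are purely illustrative):

```python
import numpy as np

def gamma0(rho1):
    """Formula (2.4): Var(C_0) as a function of rho_1."""
    return 0.25 - (np.arcsin(rho1) / np.pi) ** 2

def gamma1(rho1, rho2):
    """Formula (2.5): Cov(C_0, C_1) as a function of rho_1, rho_2."""
    return np.arcsin(rho2) / (2.0 * np.pi) - (np.arcsin(rho1) / np.pi) ** 2

# Illustrative AR(1)-type autocorrelations rho_k = a^k:
a = 0.5
print(gamma0(a), gamma1(a, a**2))
```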


2.2. Context of the investigations. We consider the problem of evaluating $\gamma_k$ for $k > 1$ in a more general context. For this, let $\mathcal{R}$ denote the set of $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in [-1, 1]^6$ such that the matrix

$$\Sigma(\mathbf{r}) := \begin{pmatrix} 1 & r_1 & r_2 & r_3 \\ r_1 & 1 & r_4 & r_5 \\ r_2 & r_4 & 1 & r_6 \\ r_3 & r_5 & r_6 & 1 \end{pmatrix}$$

is strictly positive definite, that is, $\mathbf{x}^T \Sigma(\mathbf{r})\, \mathbf{x} > 0$ for $\mathbf{x} \in \mathbb{R}^4 \setminus \{\mathbf{0}\}$. Note that $\Sigma(\mathcal{R})$ is the set of $4 \times 4$ correlation matrices of non-degenerate Gaussian random vectors, and $\mathbf{r} \in \mathcal{R}$ implies that all components of $\mathbf{r}$ lie within $(-1, 1)$.

For $h \in [-1, 1]$ consider the diagonal matrix

$$I_h := \mathrm{diag}(1, h, h, h, h, 1).$$

If $\mathbf{r}, \mathbf{s} \in \mathcal{R}$, then $\mathbf{x}^T \Sigma(h \cdot \mathbf{r} + (1-h) \cdot \mathbf{s})\, \mathbf{x} = h\, \mathbf{x}^T \Sigma(\mathbf{r})\, \mathbf{x} + (1-h)\, \mathbf{x}^T \Sigma(\mathbf{s})\, \mathbf{x} > 0$ for all $\mathbf{x} \in \mathbb{R}^4 \setminus \{\mathbf{0}\}$ and $h \in [0, 1]$; in other words, $\mathcal{R}$ is convex. Furthermore, $\mathbf{r} \in \mathcal{R}$ implies $I_{-1}\, \mathbf{r} \in \mathcal{R}$. This can be seen as follows: If $\mathbf{r} \in \mathcal{R}$, then there exists a zero-mean non-degenerate Gaussian random vector $Z = (Z_1, Z_2, Z_3, Z_4)$ such that $\mathrm{Cov}(Z) = \Sigma(\mathbf{r})$. Since $Z' = (Z'_1, Z'_2, Z'_3, Z'_4) := (Z_1, Z_2, -Z_3, -Z_4)$ is also non-degenerate Gaussian, the matrix $\mathrm{Cov}(Z') = \Sigma(I_{-1}\, \mathbf{r})$ is strictly positive definite, too, and hence $I_{-1}\, \mathbf{r} \in \mathcal{R}$. Now, since $I_1\, \mathbf{r} = \mathbf{r}$ and

$$I_h\, \mathbf{r} = \frac{1+h}{2}\, I_1\, \mathbf{r} + \frac{1-h}{2}\, I_{-1}\, \mathbf{r}$$

for all $\mathbf{r} \in \mathcal{R}$ and $h \in [-1, 1]$, the convexity of $\mathcal{R}$ implies that $I_h\, \mathbf{r} \in \mathcal{R}$ for all $\mathbf{r} \in \mathcal{R}$ and $h \in [-1, 1]$, a fact we will repeatedly use in the rest of the paper.
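The set $\mathcal{R}$ and the mappings $I_h$ translate directly into code; a sketch, assuming NumPy (the sample vector `r` is an illustrative AR(1)-type choice, not from the paper):

```python
import numpy as np

def sigma(r):
    """The 4x4 matrix Sigma(r) for r = (r1, ..., r6)."""
    r1, r2, r3, r4, r5, r6 = r
    return np.array([[1, r1, r2, r3],
                     [r1, 1, r4, r5],
                     [r2, r4, 1, r6],
                     [r3, r5, r6, 1]], dtype=float)

def in_R(r):
    """r lies in R iff Sigma(r) is strictly positive definite."""
    return np.linalg.eigvalsh(sigma(r)).min() > 0

def Ih(h, r):
    """The mapping r -> I_h r with I_h = diag(1, h, h, h, h, 1)."""
    return np.array([1, h, h, h, h, 1]) * np.asarray(r, dtype=float)

# Illustrative AR(1)-type correlations (rho_1, rho_2, rho_3) = (0.5, 0.25, 0.125):
r = (0.5, 0.25, 0.125, 0.25, 0.25, 0.5)
print(in_R(r), all(in_R(Ih(h, r)) for h in np.linspace(-1, 1, 21)))  # True True
```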

For $\mathbf{r} \in \mathcal{R}$, write

$$\Phi(\mathbf{r}) = \Phi(\Sigma(\mathbf{r})),$$

and define

$$\Psi(\mathbf{r}) := 2\,\Phi(\mathbf{r}) + 2\,\Phi(I_{-1}\, \mathbf{r}) - 4\,\Phi(I_0\, \mathbf{r}). \tag{2.7}$$

Note that if $Z = (Z_1, Z_2, Z_3, Z_4)$ is a zero-mean non-degenerate Gaussian random vector with covariance matrix $\mathrm{Cov}(Z) = \Sigma(\mathbf{r})$, then

$$\Psi(\mathbf{r}) = 2\,\mathbf{P}(Z_1 > 0, Z_2 > 0, Z_3 > 0, Z_4 > 0) + 2\,\mathbf{P}(Z_1 > 0, Z_2 > 0, -Z_3 > 0, -Z_4 > 0) - 4\,\mathbf{P}(Z_1 > 0, Z_2 > 0)\,\mathbf{P}(Z_3 > 0, Z_4 > 0).$$

Thus, according to (2.6),

$$\gamma_k = \Psi(\rho_1, \rho_k, \rho_{k+1}, \rho_{k-1}, \rho_k, \rho_1) \tag{2.8}$$

for $k > 1$. The evaluation of $\Phi$ and $\Psi$ is the main concern of this paper. In Sec. 3 and Sec. 4, we consider the general problem to evaluate $\Phi(\mathbf{r})$ and $\Psi(\mathbf{r})$ for arbitrary $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in \mathcal{R}$. In Sec. 5, we focus on the special case where $r_1 = r_6$ and $r_2 = r_5$.


3. Numerical evaluation. The following lemma establishes basic equations and closed-form expressions for $\Phi$ and $\Psi$ in some special cases.

Lemma 3.1. For every $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in \mathcal{R}$,

$$\Psi(\mathbf{r}) = \Psi(I_{-1}\, \mathbf{r}) = \Psi(-r_1, -r_2, r_3, r_4, -r_5, -r_6) = \Psi(-r_1, r_2, -r_3, -r_4, r_5, -r_6).$$

If $r_2 = r_3 = r_4 = r_5 = 0$, then $\Psi(\mathbf{r}) = 0$ and

$$\Phi(\mathbf{r}) = \Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin r_1\Big)\Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin r_6\Big). \tag{3.1}$$

Proof. The first equation follows by the definition of $\Psi$ and $I_0\, I_{-1} = I_0$. Now, let $Z = (Z_1, Z_2, Z_3, Z_4)$ be zero-mean Gaussian with $\mathrm{Cov}(Z) = \Sigma(\mathbf{r})$. Define $Z' = (Z'_1, Z'_2, Z'_3, Z'_4) := (Z_1, -Z_2, -Z_3, Z_4)$ and $\mathbf{r}' := (-r_1, -r_2, r_3, r_4, -r_5, -r_6)$. Since $\mathrm{Cov}(Z') = \Sigma(\mathbf{r}')$, the second equation follows because

$$\begin{aligned}
\Psi(\mathbf{r}) &= \mathrm{Cov}\big(\mathbf{1}_{\{Z_1>0,\,Z_2>0\}} + \mathbf{1}_{\{Z_1<0,\,Z_2<0\}},\; \mathbf{1}_{\{Z_3>0,\,Z_4>0\}} + \mathbf{1}_{\{Z_3<0,\,Z_4<0\}}\big) \\
&= \mathrm{Cov}\big(\mathbf{1}_{\{Z'_1>0,\,Z'_2<0\}} + \mathbf{1}_{\{Z'_1<0,\,Z'_2>0\}},\; \mathbf{1}_{\{Z'_3>0,\,Z'_4<0\}} + \mathbf{1}_{\{Z'_3<0,\,Z'_4>0\}}\big) = \Psi(\mathbf{r}'),
\end{aligned}$$

where the last equality uses $\mathrm{Cov}(1-A, 1-B) = \mathrm{Cov}(A, B)$. Applying $\Psi(\mathbf{r}) = \Psi(I_{-1}\, \mathbf{r})$ to $\mathbf{r} = (-r_1, -r_2, r_3, r_4, -r_5, -r_6)$ yields the third equation. Now, assume $r_2 = r_3 = r_4 = r_5 = 0$. Since $\mathbf{r} = I_{-1}\, \mathbf{r} = I_0\, \mathbf{r}$, we obtain $\Psi(\mathbf{r}) = 0$. Furthermore, if $Z = (Z_1, Z_2, Z_3, Z_4)$ is zero-mean non-degenerate Gaussian with $\mathrm{Cov}(Z) = \Sigma(\mathbf{r})$, then $(Z_1, Z_2)$ and $(Z_3, Z_4)$ are independent. Thus, (3.1) follows from Lemma 2.1.

Note that bounds for $\Psi(\mathbf{r})$ can be obtained by the Berman inequality, namely,

$$|\Psi(\mathbf{r})| \leq \frac{2}{\pi} \sum_{k=2}^{5} \frac{|r_k|}{\sqrt{1 - r_k^2}}$$

(see [17]). In the remaining part of this section, we show how to compute $\Psi(\mathbf{r})$ and $\Phi(\mathbf{r})$ for any $\mathbf{r} \in \mathcal{R}$ by the numerical evaluation of four one-dimensional integrals. According to a formula first given by David [10], this also allows to evaluate normal orthant probabilities of dimension $n = 5$. Next, we derive explicit formulas for the partial derivatives of $\Phi$ and $\Psi$ with respect to $r_2$, $r_3$, $r_4$ and $r_5$.

3.1. Partial derivatives. For $\mathbf{r} \in \mathcal{R}$ and $i, j \in \{1, 2, 3, 4\}$, let $\sigma'_{ij}(\mathbf{r})$ denote the $(i,j)$-th component of the inverse $(\Sigma(\mathbf{r}))^{-1}$ of $\Sigma(\mathbf{r})$. It is well-known that the inverse and any principal submatrix of a symmetric strictly positive definite matrix is symmetric and strictly positive definite (see [13]). Now, for fixed $k \in \{1, 2, \ldots, 6\}$, let $\{i, j\}$ with $i \neq j$ be the unique subset of $\{1, 2, 3, 4\}$ such that $r_k$ does not lie in the $i$-th row and $j$-th column of $\Sigma(\mathbf{r})$. Using the so-called reduction formula for normal orthant probabilities (see [18], [4]), we obtain the first partial derivative of $\Phi$ with respect to $r_k$,

$$\frac{\partial \Phi}{\partial r_k}(\mathbf{r}) = \frac{1}{2\pi\sqrt{1 - r_k^2}}\, \Phi\left(\begin{pmatrix} \sigma'_{ii}(\mathbf{r}) & \sigma'_{ij}(\mathbf{r}) \\ \sigma'_{ij}(\mathbf{r}) & \sigma'_{jj}(\mathbf{r}) \end{pmatrix}^{-1}\right)$$

for $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in \mathcal{R}$. Note that the argument of $\Phi$ is the inverse of a principal submatrix of $(\Sigma(\mathbf{r}))^{-1}$ and thus strictly positive definite. By the first equation in (2.2),

$$\frac{\partial \Phi}{\partial r_k}(\mathbf{r}) = \frac{1}{2\pi\sqrt{1 - r_k^2}}\, \Phi\left(\begin{pmatrix} \sigma'_{ii}(\mathbf{r}) & -\sigma'_{ij}(\mathbf{r}) \\ -\sigma'_{ij}(\mathbf{r}) & \sigma'_{jj}(\mathbf{r}) \end{pmatrix}\right).$$


Now, let $\sigma_{ij}(\mathbf{r}) = -|\Sigma(\mathbf{r})|\, \sigma'_{ij}(\mathbf{r})$ if $i \neq j$, and $\sigma_{ij}(\mathbf{r}) = |\Sigma(\mathbf{r})|\, \sigma'_{ij}(\mathbf{r})$ if $i = j$, for $i, j \in \{1, 2, 3, 4\}$. By the second equation in (2.2) and the formula for two-dimensional normal orthant probabilities in Lemma 2.1,

$$\frac{\partial \Phi}{\partial r_2}(\mathbf{r}) = \frac{1}{2\pi\sqrt{1 - r_2^2}} \Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin \frac{\sigma_{24}(\mathbf{r})}{\sqrt{\sigma_{22}(\mathbf{r})\, \sigma_{44}(\mathbf{r})}}\Big), \tag{3.2}$$

$$\frac{\partial \Phi}{\partial r_3}(\mathbf{r}) = \frac{1}{2\pi\sqrt{1 - r_3^2}} \Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin \frac{\sigma_{23}(\mathbf{r})}{\sqrt{\sigma_{22}(\mathbf{r})\, \sigma_{33}(\mathbf{r})}}\Big), \tag{3.3}$$

$$\frac{\partial \Phi}{\partial r_4}(\mathbf{r}) = \frac{1}{2\pi\sqrt{1 - r_4^2}} \Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin \frac{\sigma_{14}(\mathbf{r})}{\sqrt{\sigma_{11}(\mathbf{r})\, \sigma_{44}(\mathbf{r})}}\Big), \tag{3.4}$$

$$\frac{\partial \Phi}{\partial r_5}(\mathbf{r}) = \frac{1}{2\pi\sqrt{1 - r_5^2}} \Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin \frac{\sigma_{13}(\mathbf{r})}{\sqrt{\sigma_{11}(\mathbf{r})\, \sigma_{33}(\mathbf{r})}}\Big) \tag{3.5}$$

for $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in \mathcal{R}$. Note that $\sigma_{ij}(\mathbf{r})$ is equal to the determinant of the matrix obtained by deleting the $i$-th row and the $j$-th column of $\Sigma(\mathbf{r})$, multiplied with $(-1)^{i+j+1}$ if $i \neq j$ (see [13]). We obtain

$$\begin{aligned}
\sigma_{11}(\mathbf{r}) &= 1 - r_4^2 - r_5^2 - r_6^2 + 2 r_4 r_5 r_6, &(3.6)\\
\sigma_{22}(\mathbf{r}) &= 1 - r_2^2 - r_3^2 - r_6^2 + 2 r_2 r_3 r_6, &(3.7)\\
\sigma_{33}(\mathbf{r}) &= 1 - r_1^2 - r_3^2 - r_5^2 + 2 r_1 r_3 r_5, &(3.8)\\
\sigma_{44}(\mathbf{r}) &= 1 - r_1^2 - r_2^2 - r_4^2 + 2 r_1 r_2 r_4, &(3.9)\\
\sigma_{13}(\mathbf{r}) &= r_2 - r_1 r_4 + r_3 r_4 r_5 - r_2 r_5^2 - r_3 r_6 + r_1 r_5 r_6, &(3.10)\\
\sigma_{14}(\mathbf{r}) &= r_3 - r_1 r_5 + r_2 r_4 r_5 - r_3 r_4^2 - r_2 r_6 + r_1 r_4 r_6, &(3.11)\\
\sigma_{23}(\mathbf{r}) &= r_4 - r_1 r_2 + r_2 r_3 r_5 - r_4 r_3^2 - r_5 r_6 + r_1 r_3 r_6, &(3.12)\\
\sigma_{24}(\mathbf{r}) &= r_5 - r_1 r_3 + r_2 r_3 r_4 - r_5 r_2^2 - r_4 r_6 + r_1 r_2 r_6. &(3.13)
\end{aligned}$$
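The cofactor identities (3.6)-(3.13) can be verified numerically; a sketch, assuming NumPy (indices are 0-based in the code, 1-based in the text):

```python
import numpy as np

def sigma_mat(r):
    """The 4x4 matrix Sigma(r)."""
    r1, r2, r3, r4, r5, r6 = r
    return np.array([[1, r1, r2, r3],
                     [r1, 1, r4, r5],
                     [r2, r4, 1, r6],
                     [r3, r5, r6, 1]], dtype=float)

def sigma_ij(r, i, j):
    """sigma_ij(r): the minor deleting row i and column j of Sigma(r),
    multiplied with (-1)^(i+j+1) if i != j (0-based indices here)."""
    S = sigma_mat(r)
    minor = np.delete(np.delete(S, i, axis=0), j, axis=1)
    sign = 1.0 if i == j else (-1.0) ** (i + j + 1)
    return sign * np.linalg.det(minor)

# Check e.g. (3.10): sigma_13 = r2 - r1 r4 + r3 r4 r5 - r2 r5^2 - r3 r6 + r1 r5 r6
r = (0.5, 0.25, 0.125, 0.25, 0.25, 0.5)  # illustrative point of R
r1, r2, r3, r4, r5, r6 = r
closed = r2 - r1*r4 + r3*r4*r5 - r2*r5**2 - r3*r6 + r1*r5*r6
print(abs(sigma_ij(r, 0, 2) - closed) < 1e-12)  # True
```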

The following Corollary is an immediate consequence of (3.2)-(3.5) and (3.6)-(3.13).

Corollary 3.2. For every $\mathbf{r} \in \mathcal{R}$, the partial derivatives of $\Phi$ of any order exist and are continuous at $\mathbf{r}$.

The next lemma gives the partial derivatives of $\Psi$ with respect to $r_i$ for $i = 2, 3, 4, 5$. For $\mathbf{r} \in \mathcal{R}$, let

$$\psi_i(\mathbf{r}) := \frac{\partial \Psi}{\partial r_i}(\mathbf{r}).$$

Lemma 3.3. For every $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in \mathcal{R}$,

$$\psi_2(\mathbf{r}) = \frac{1}{\pi^2 \sqrt{1 - r_2^2}}\, \arcsin \frac{\sigma_{24}(\mathbf{r})}{\sqrt{\sigma_{22}(\mathbf{r})\, \sigma_{44}(\mathbf{r})}}, \tag{3.14}$$

$$\psi_3(\mathbf{r}) = \frac{1}{\pi^2 \sqrt{1 - r_3^2}}\, \arcsin \frac{\sigma_{23}(\mathbf{r})}{\sqrt{\sigma_{22}(\mathbf{r})\, \sigma_{33}(\mathbf{r})}}, \tag{3.15}$$

$$\psi_4(\mathbf{r}) = \frac{1}{\pi^2 \sqrt{1 - r_4^2}}\, \arcsin \frac{\sigma_{14}(\mathbf{r})}{\sqrt{\sigma_{11}(\mathbf{r})\, \sigma_{44}(\mathbf{r})}}, \tag{3.16}$$

$$\psi_5(\mathbf{r}) = \frac{1}{\pi^2 \sqrt{1 - r_5^2}}\, \arcsin \frac{\sigma_{13}(\mathbf{r})}{\sqrt{\sigma_{11}(\mathbf{r})\, \sigma_{33}(\mathbf{r})}}. \tag{3.17}$$


Proof. Let $i \in \{2, 3, 4, 5\}$. Here we denote by $I_h$ the mapping $\mathbf{r} \mapsto I_h\, \mathbf{r}$ from $\mathcal{R}$ onto itself. By the definition of $\Psi$,

$$\psi_i(\mathbf{r}) = 2\, \frac{\partial \Phi}{\partial r_i}(\mathbf{r}) + 2\, \frac{\partial (\Phi \circ I_{-1})}{\partial r_i}(\mathbf{r}) - 4\, \frac{\partial (\Phi \circ I_0)}{\partial r_i}(\mathbf{r}). \tag{3.18}$$

With (3.1) we obtain $(\Phi \circ I_0)(\mathbf{r}) = \big(\frac{1}{4} + \frac{1}{2\pi} \arcsin r_1\big)\big(\frac{1}{4} + \frac{1}{2\pi} \arcsin r_6\big)$. Thus, $\mathbf{r} \mapsto (\Phi \circ I_0)(\mathbf{r})$ is constant in $r_i$ and, consequently, the last term on the right side of (3.18) is equal to 0. Furthermore, because $\frac{\partial I_{-1}}{\partial r_i}(\mathbf{r}) = -1$, the chain rule of differentiation yields

$$\frac{\partial (\Phi \circ I_{-1})}{\partial r_i}(\mathbf{r}) = -\frac{\partial \Phi}{\partial r_i}(I_{-1}(\mathbf{r})).$$

According to (3.6)-(3.13), $\sigma_{ij}(I_{-1}(\mathbf{r})) = \sigma_{ij}(\mathbf{r})$ if $(i,j) \in \{(1,1), (2,2), (3,3), (4,4)\}$, and $\sigma_{ij}(I_{-1}(\mathbf{r})) = -\sigma_{ij}(\mathbf{r})$ if $(i,j) \in \{(1,3), (1,4), (2,3), (2,4)\}$. Since $x \mapsto \arcsin x$ is an odd function, inserting $I_{-1}(\mathbf{r})$ instead of $\mathbf{r}$ into (3.2)-(3.5) yields (3.14)-(3.17).

3.2. Integral representation. Next we state the main result of this section. Note that a similar representation of $\Psi(\mathbf{r})$ as in (3.19) is used for the proof of the Berman inequality (see above).

Theorem 3.4. For every $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in \mathcal{R}$,

$$\Psi(\mathbf{r}) = \sum_{i=2}^{5} r_i \int_0^1 \psi_i(I_h\, \mathbf{r})\, dh, \tag{3.19}$$

$$\Phi(\mathbf{r}) = \Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin r_1\Big)\Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin r_6\Big) + \frac{1}{8\pi} \sum_{i=2}^{5} \arcsin r_i + \frac{\Psi(\mathbf{r})}{4}, \tag{3.20}$$

$$\Phi(\mathbf{r}) - \Phi(I_{-1}\, \mathbf{r}) = \frac{1}{4\pi} \sum_{i=2}^{5} \arcsin r_i. \tag{3.21}$$

Proof. Let $\mathbf{r} \in \mathcal{R}$. Since $I_h\, \mathbf{r} \in \mathcal{R}$ for all $h \in [0, 1]$, the mapping

$$[0, 1] \ni h \mapsto u(h) := \Psi(I_h\, \mathbf{r})$$

is well-defined, being the concatenation of $h \mapsto I_h\, \mathbf{r}$ and $\Psi$. Clearly, $u(1) = \Psi(\mathbf{r})$ and, by Lemma 3.1, $u(0) = 0$. Since $\Psi$ has continuous partial derivatives (see Lemma 3.3), $u$ is differentiable at $h$ for all $h \in [0, 1]$, and hence, by the Fundamental Theorem of Calculus and the chain rule of differentiation,

$$\Psi(\mathbf{r}) = u(0) + \int_0^1 u'(h)\, dh = \int_0^1 \sum_{i=2}^{5} r_i\, \frac{\partial \Psi}{\partial r_i}(I_h\, \mathbf{r})\, dh = \sum_{i=2}^{5} r_i \int_0^1 \psi_i(I_h\, \mathbf{r})\, dh.$$

Analogously, define $v(h) := \Phi(I_h\, \mathbf{r})$ for $h \in [0, 1]$. According to (3.2)-(3.5) and (3.14)-(3.17),

$$\frac{\partial \Phi}{\partial r_i}(\mathbf{r}) = \frac{1}{8\pi\sqrt{1 - r_i^2}} + \frac{\psi_i(\mathbf{r})}{4}$$


for $i = 2, 3, 4, 5$. Consequently,

$$\begin{aligned}
\int_0^1 v'(h)\, dh &= \sum_{i=2}^{5} r_i \int_0^1 \frac{\partial \Phi}{\partial r_i}(I_h\, \mathbf{r})\, dh \\
&= \frac{1}{8\pi} \sum_{i=2}^{5} r_i \int_0^1 \frac{dh}{\sqrt{1 - r_i^2 h^2}} + \frac{1}{4} \sum_{i=2}^{5} r_i \int_0^1 \psi_i(I_h\, \mathbf{r})\, dh \\
&= \frac{1}{8\pi} \sum_{i=2}^{5} \arcsin r_i + \frac{\Psi(\mathbf{r})}{4}.
\end{aligned}$$

According to (3.1),

$$v(0) = \Phi(I_0\, \mathbf{r}) = \Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin r_1\Big)\Big(\frac{1}{4} + \frac{1}{2\pi} \arcsin r_6\Big),$$

and hence (3.20) follows. (3.21) is an immediate consequence of (3.20), Lemma 3.1 and the fact that $x \mapsto \arcsin x$ is an odd function.

Note that for $i = 2, 3, 4, 5$, the mappings $h \mapsto \psi_i(I_h\, \mathbf{r})$ have bounded derivatives on $[0, 1]$. (The derivatives are easily obtained from (3.14)-(3.17).) Moreover, for fixed $\mathbf{r} \in \mathcal{R}$, upper and lower bounds can be given in closed form, which allows to evaluate the integrals in Theorem 3.4 numerically to any desired precision.
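Theorem 3.4 yields a practical evaluation scheme. The sketch below, assuming SciPy is available, implements (3.19) and (3.20) with one-dimensional quadrature and cross-checks $\Phi(\mathbf{r})$ against a generic multivariate-normal CDF routine (the test point `r` is an illustrative AR(1)-type choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import multivariate_normal

def sigma_mat(r):
    r1, r2, r3, r4, r5, r6 = r
    return np.array([[1, r1, r2, r3], [r1, 1, r4, r5],
                     [r2, r4, 1, r6], [r3, r5, r6, 1]], dtype=float)

def cof(r, i, j):
    # Signed cofactor sigma_ij(r) as in (3.6)-(3.13); 0-based indices.
    S = sigma_mat(r)
    minor = np.delete(np.delete(S, i, 0), j, 1)
    sign = 1.0 if i == j else (-1.0) ** (i + j + 1)
    return sign * np.linalg.det(minor)

def psi(r, i):
    # psi_i(r) for i in {2,...,5} (1-based, as in (3.14)-(3.17)).
    pairs = {2: (1, 3), 3: (1, 2), 4: (0, 3), 5: (0, 2)}  # 0-based cofactor indices
    a, b = pairs[i]
    arg = cof(r, a, b) / np.sqrt(cof(r, a, a) * cof(r, b, b))
    return np.arcsin(arg) / (np.pi**2 * np.sqrt(1.0 - r[i - 1] ** 2))

def Ih(h, r):
    return np.array([1, h, h, h, h, 1]) * np.asarray(r, float)

def Psi(r):
    # Formula (3.19): sum of four one-dimensional integrals.
    return sum(r[i - 1] * quad(lambda h, i=i: psi(Ih(h, r), i), 0.0, 1.0)[0]
               for i in range(2, 6))

def Phi(r):
    # Formula (3.20).
    base = (0.25 + np.arcsin(r[0]) / (2 * np.pi)) * (0.25 + np.arcsin(r[5]) / (2 * np.pi))
    return base + sum(np.arcsin(r[i - 1]) for i in range(2, 6)) / (8 * np.pi) + Psi(r) / 4

r = (0.5, 0.25, 0.125, 0.25, 0.25, 0.5)
mvn = multivariate_normal(mean=np.zeros(4), cov=sigma_mat(r)).cdf(np.zeros(4))
print(abs(Phi(r) - mvn) < 1e-4)
```

Note that `multivariate_normal.cdf` evaluates the four-dimensional orthant probability by quasi-Monte Carlo integration, whereas the representation above needs only four one-dimensional quadratures.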

4. Asymptotically equivalent expressions. For fixed $n \in \mathbb{N}$, let $(\mathbf{r}(k))_{k\in\mathbb{N}}$, $(\mathbf{s}(k))_{k\in\mathbb{N}}$ be sequences of vectors in $\mathbb{R}^n$ with $\mathbf{r}(k) = (r_1(k), r_2(k), \ldots, r_n(k))$ and $\mathbf{s}(k) = (s_1(k), s_2(k), \ldots, s_n(k))$ for $k \in \mathbb{N}$. We write

$$\mathbf{r}(k) \sim \mathbf{s}(k)$$

and say that $(\mathbf{r}(k))_{k\in\mathbb{N}}$ and $(\mathbf{s}(k))_{k\in\mathbb{N}}$ are asymptotically equivalent iff $r_i(k) \sim s_i(k)$ for all $i \in \{1, 2, \ldots, n\}$, that is, $\lim_{k\to\infty} \frac{r_i(k)}{s_i(k)} = 1$ where $\frac{0}{0} := 1$.

The following theorem relates asymptotics of special sequences in $\mathcal{R}$ and asymptotics of the corresponding values of $\Psi$. In Corollary 5.1, this result is used for deriving asymptotics of zero crossing covariances.

Theorem 4.1. Let $(\mathbf{r}(k))_{k\in\mathbb{N}}$ be a sequence in $\mathcal{R}$. If there exist an $f: \mathbb{N} \to \mathbb{R}$ with $\lim_{k\to\infty} f(k) = 0$ and an $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6) \in \mathbb{R}^6$ with $|\alpha_1|, |\alpha_6| < 1$ such that $\mathbf{r}(k) \sim (\alpha_1, \alpha_2 f(k), \alpha_3 f(k), \alpha_4 f(k), \alpha_5 f(k), \alpha_6)$, then

$$\Psi(\mathbf{r}(k)) \sim (f(k))^2\, \frac{q(\boldsymbol{\alpha})}{2\pi^2 \sqrt{(1 - \alpha_1^2)(1 - \alpha_6^2)}} + O\big((f(k))^4\big),$$

where $q(\boldsymbol{\alpha}) = \alpha_1 \alpha_6 \sum_{i=2}^{5} \alpha_i^2 - 2\alpha_1(\alpha_2\alpha_3 + \alpha_4\alpha_5) - 2\alpha_6(\alpha_2\alpha_4 + \alpha_3\alpha_5) + 2(\alpha_2\alpha_5 + \alpha_3\alpha_4)$.

Proof. Let $\mathbf{r}(k) = (r_1(k), r_2(k), r_3(k), r_4(k), r_5(k), r_6(k))$ for $k \in \mathbb{N}$. According to Corollary 3.2, Taylor's Theorem asserts for each $k \in \mathbb{N}$ the existence of $h_1(k) \in [0, 1]$ and $h_2(k) \in [-1, 0]$ such that

$$\begin{aligned}
\Phi(\mathbf{r}(k)) = \Phi(I_0\, \mathbf{r}(k)) &+ \sum_{i=2}^{5} r_i(k)\, \frac{\partial \Phi}{\partial r_i}(I_0\, \mathbf{r}(k)) + \frac{1}{2} \sum_{i,j=2}^{5} r_i(k) r_j(k)\, \frac{\partial^2 \Phi}{\partial r_i \partial r_j}(I_0\, \mathbf{r}(k)) \\
&+ \frac{1}{6} \sum_{i,j,l=2}^{5} r_i(k) r_j(k) r_l(k)\, \frac{\partial^3 \Phi}{\partial r_i \partial r_j \partial r_l}(I_0\, \mathbf{r}(k)) \\
&+ \frac{1}{24} \sum_{i,j,l,m=2}^{5} r_i(k) r_j(k) r_l(k) r_m(k)\, \frac{\partial^4 \Phi}{\partial r_i \partial r_j \partial r_l \partial r_m}\big((I_0 + h_1(k)(I_1 - I_0))\, \mathbf{r}(k)\big)
\end{aligned}$$

and, using the fact that $I_0\, I_{-1} = I_0$,

$$\begin{aligned}
\Phi(I_{-1}\, \mathbf{r}(k)) = \Phi(I_0\, \mathbf{r}(k)) &- \sum_{i=2}^{5} r_i(k)\, \frac{\partial \Phi}{\partial r_i}(I_0\, \mathbf{r}(k)) + \frac{1}{2} \sum_{i,j=2}^{5} r_i(k) r_j(k)\, \frac{\partial^2 \Phi}{\partial r_i \partial r_j}(I_0\, \mathbf{r}(k)) \\
&- \frac{1}{6} \sum_{i,j,l=2}^{5} r_i(k) r_j(k) r_l(k)\, \frac{\partial^3 \Phi}{\partial r_i \partial r_j \partial r_l}(I_0\, \mathbf{r}(k)) \\
&+ \frac{1}{24} \sum_{i,j,l,m=2}^{5} r_i(k) r_j(k) r_l(k) r_m(k)\, \frac{\partial^4 \Phi}{\partial r_i \partial r_j \partial r_l \partial r_m}\big((I_0 - h_2(k)(I_{-1} - I_0))\, \mathbf{r}(k)\big).
\end{aligned}$$

Since $I_0 + h(I_1 - I_0) = I_h$ and $I_0 + h(I_{-1} - I_0) = I_{-h}$ for $h \in [0, 1]$,

$$\begin{aligned}
2\,\Phi(\mathbf{r}(k)) + 2\,\Phi(I_{-1}\, \mathbf{r}(k)) = 4\,\Phi(I_0\, \mathbf{r}(k)) &+ 2 \sum_{i,j=2}^{5} r_i(k) r_j(k)\, \frac{\partial^2 \Phi}{\partial r_i \partial r_j}(I_0\, \mathbf{r}(k)) \\
&+ \frac{1}{12} \sum_{i,j,l,m=2}^{5} r_i(k) r_j(k) r_l(k) r_m(k)\, \frac{\partial^4 \Phi}{\partial r_i \partial r_j \partial r_l \partial r_m}(I_{h_1(k)}\, \mathbf{r}(k)) \\
&+ \frac{1}{12} \sum_{i,j,l,m=2}^{5} r_i(k) r_j(k) r_l(k) r_m(k)\, \frac{\partial^4 \Phi}{\partial r_i \partial r_j \partial r_l \partial r_m}(I_{h_2(k)}\, \mathbf{r}(k)). \quad (4.1)
\end{aligned}$$

Under the assumptions, $r_{i_1}(k)\, r_{i_2}(k) \cdots r_{i_n}(k) \sim \alpha_{i_1} \alpha_{i_2} \cdots \alpha_{i_n}\, (f(k))^n$ for all $i_1, i_2, \ldots, i_n \in \{2, 3, 4, 5\}$ with $n \in \mathbb{N}$. According to the definition of $\Psi$, inserting these asymptotically equivalent expressions into (4.1) yields

$$\Psi(\mathbf{r}(k)) \sim 2\, (f(k))^2 \sum_{i,j=2}^{5} \alpha_i \alpha_j\, \frac{\partial^2 \Phi}{\partial r_i \partial r_j}(I_0\, \mathbf{r}(k)) + (f(k))^4 R(k) \tag{4.2}$$

with

$$R(k) = \frac{1}{12} \sum_{i,j,l,m=2}^{5} \alpha_i \alpha_j \alpha_l \alpha_m \Big(\frac{\partial^4 \Phi}{\partial r_i \partial r_j \partial r_l \partial r_m}(I_{h_1(k)}\, \mathbf{r}(k)) + \frac{\partial^4 \Phi}{\partial r_i \partial r_j \partial r_l \partial r_m}(I_{h_2(k)}\, \mathbf{r}(k))\Big).$$

Note that $|\alpha_1|, |\alpha_6| < 1$ implies $I_0\, \boldsymbol{\alpha} \in \mathcal{R}$. Furthermore, according to Corollary 3.2, the second derivatives of $\Phi$ are continuous in $I_0\, \boldsymbol{\alpha}$. Since $\lim_{k\to\infty} I_0\, \mathbf{r}(k) = I_0\, \boldsymbol{\alpha}$, we obtain

$$\lim_{k\to\infty} \frac{\partial^2 \Phi}{\partial r_i \partial r_j}(I_0\, \mathbf{r}(k)) = \frac{\partial^2 \Phi}{\partial r_i \partial r_j}(I_0\, \boldsymbol{\alpha})$$

for all $i, j \in \{2, 3, 4, 5\}$. Inserting (4.3)-(4.6) from Lemma 4.2 below with $\mathbf{r} = \boldsymbol{\alpha}$ into (4.2), we obtain

$$\Psi(\mathbf{r}(k)) \sim 2\, (f(k))^2\, \frac{q(\boldsymbol{\alpha})}{4\pi^2 \sqrt{(1 - \alpha_1^2)(1 - \alpha_6^2)}} + (f(k))^4 R(k),$$

with $q(\boldsymbol{\alpha})$ as given above. In order to prove that $(f(k))^4 R(k) = O\big((f(k))^4\big)$, we show $\sup_{k\in\mathbb{N}} R(k) < \infty$. Because $\lim_{k\to\infty} \mathbf{r}(k) = I_0\, \boldsymbol{\alpha} = \lim_{k\to\infty} I_{-1}\, \mathbf{r}(k)$, the set

$$S := \{I_0\, \boldsymbol{\alpha}\} \cup \bigcup_{k\in\mathbb{N}} \{\mathbf{r}(k),\, I_{-1}\, \mathbf{r}(k)\}$$

is closed in $\mathbb{R}^6$. Since $S \subset [-1, 1]^6$, the convex hull of $S$ is compact. Now, because

$$\bar{S} := \bigcup_{k\in\mathbb{N}} \{I_h\, \mathbf{r}(k) : h \in [-1, 1]\}$$

is a subset of the convex hull of $S$ and the fourth partial derivatives of $\Phi$ are continuous at every point of $\mathcal{R}$ (see Corollary 3.2),

$$\sup \frac{\partial^4 \Phi}{\partial r_i \partial r_j \partial r_l \partial r_m}(\bar{S}) < \infty$$

for all $i, j, l, m \in \{2, 3, 4, 5\}$, and hence the result follows.

Lemma 4.2. For $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in \mathcal{R}$, the second partial derivatives of $\Phi$ with respect to $r_2, r_3, r_4, r_5$ at $I_0\, \mathbf{r}$ are given by

$$\frac{\partial^2 \Phi}{\partial r_i^2}(I_0\, \mathbf{r}) = \frac{r_1 r_6}{4\pi^2 \sqrt{(1 - r_1^2)(1 - r_6^2)}} \quad\text{for } i = 2, 3, 4, 5, \tag{4.3}$$

$$\frac{\partial^2 \Phi}{\partial r_2 \partial r_3}(I_0\, \mathbf{r}) = \frac{\partial^2 \Phi}{\partial r_4 \partial r_5}(I_0\, \mathbf{r}) = \frac{-r_1}{4\pi^2 \sqrt{(1 - r_1^2)(1 - r_6^2)}}, \tag{4.4}$$

$$\frac{\partial^2 \Phi}{\partial r_2 \partial r_4}(I_0\, \mathbf{r}) = \frac{\partial^2 \Phi}{\partial r_3 \partial r_5}(I_0\, \mathbf{r}) = \frac{-r_6}{4\pi^2 \sqrt{(1 - r_1^2)(1 - r_6^2)}}, \tag{4.5}$$

$$\frac{\partial^2 \Phi}{\partial r_2 \partial r_5}(I_0\, \mathbf{r}) = \frac{\partial^2 \Phi}{\partial r_3 \partial r_4}(I_0\, \mathbf{r}) = \frac{1}{4\pi^2 \sqrt{(1 - r_1^2)(1 - r_6^2)}}. \tag{4.6}$$

Proof. Let $k \in \{2, 3, 4, 5\}$. According to (3.2)-(3.5), there exist unique numbers $i, j \in \{1, 2, 3, 4\}$ such that $\frac{\partial \Phi}{\partial r_k}(\mathbf{r}) = \frac{1}{2\pi}\, f(\mathbf{r}) \big(\frac{1}{4} + \frac{1}{2\pi}\, g(\mathbf{r})\big)$ with

$$f(\mathbf{r}) = \frac{1}{\sqrt{1 - r_k^2}} \quad\text{and}\quad g(\mathbf{r}) = \arcsin \frac{\sigma_{ij}(\mathbf{r})}{\sqrt{\sigma_{ii}(\mathbf{r})\, \sigma_{jj}(\mathbf{r})}}.$$

Clearly, $f(I_0\, \mathbf{r}) = 1$ and $\frac{\partial f}{\partial r_l}(I_0\, \mathbf{r}) = 0$ for $l = 2, 3, 4, 5$, and hence $\frac{\partial^2 \Phi}{\partial r_k \partial r_l}(I_0\, \mathbf{r}) = \frac{1}{4\pi^2}\, \frac{\partial g}{\partial r_l}(I_0\, \mathbf{r})$. Since $\sigma_{ij}(I_0\, \mathbf{r}) = 0$ and the first derivative of $\arcsin$ at 0 is 1, we obtain

$$\frac{\partial^2 \Phi}{\partial r_k \partial r_l}(I_0\, \mathbf{r}) = \frac{1}{4\pi^2 \sqrt{\sigma_{ii}(I_0\, \mathbf{r})\, \sigma_{jj}(I_0\, \mathbf{r})}}\, \frac{\partial \sigma_{ij}}{\partial r_l}(I_0\, \mathbf{r}).$$

Now, the result follows by (3.10)-(3.13).


5. Bounds and approximations. Next, we apply the previous results to vectors $\mathbf{r} = (r_1, r_2, r_3, r_4, r_5, r_6) \in \mathcal{R}$ with $r_1 = r_6$ and $r_2 = r_5$. Let $\pi^*(\mathbf{r}) := (r_1, r_2, r_3, r_4, r_2, r_1)$ for $\mathbf{r} = (r_1, r_2, r_3, r_4) \in (-1, 1)^4$, and

$$\mathcal{R}^* := \{\mathbf{r} \in (-1, 1)^4 \,|\, \pi^*(\mathbf{r}) \in \mathcal{R}\}.$$

Clearly, $\mathbf{r} = (r_1, r_2, r_3, r_4) \in \mathcal{R}^*$ if and only if

$$\Sigma(\pi^*(\mathbf{r})) = \begin{pmatrix} 1 & r_1 & r_2 & r_3 \\ r_1 & 1 & r_4 & r_2 \\ r_2 & r_4 & 1 & r_1 \\ r_3 & r_2 & r_1 & 1 \end{pmatrix}$$

is strictly positive definite. Because $\pi^*$ is linear and $\mathcal{R}$ is convex, $\mathbf{r}, \mathbf{s} \in \mathcal{R}^*$ implies $\pi^*((1-h)\, \mathbf{r} + h\, \mathbf{s}) = (1-h)\, \pi^*(\mathbf{r}) + h\, \pi^*(\mathbf{s}) \in \mathcal{R}$ and hence $(1-h)\, \mathbf{r} + h\, \mathbf{s} \in \mathcal{R}^*$ for all $h \in [0, 1]$. Thus, $\mathcal{R}^*$ is convex. Now, define

$$\Psi^*(\mathbf{r}) := \Psi(\pi^*(\mathbf{r}))$$

for $\mathbf{r} \in \mathcal{R}^*$. According to (2.8),

$$\gamma_k = \Psi^*(\rho_1, \rho_k, \rho_{k+1}, \rho_{k-1}) \tag{5.1}$$

for $k > 1$. The following corollary is a special case of Theorem 4.1.

Corollary 5.1. Let $(\mathbf{r}(k))_{k\in\mathbb{N}}$ be a sequence in $\mathcal{R}^*$ and assume $f: \mathbb{N} \to \mathbb{R}$ is a function with $\lim_{k\to\infty} f(k) = 0$.

(i) If $\mathbf{r}(k) \sim (\alpha_1, \alpha_2 f(k), \alpha_3 f(k), \alpha_4 f(k))$ for some vector $\boldsymbol{\alpha} = (\alpha_1, \alpha_2, \alpha_3, \alpha_4)$ with $|\alpha_1| < 1$, then

$$\Psi^*(\mathbf{r}(k)) \sim (f(k))^2\, \frac{q(\boldsymbol{\alpha})}{2\pi^2 (1 - \alpha_1^2)} + O\big((f(k))^4\big),$$

where $q(\boldsymbol{\alpha}) = \alpha_1^2 (2\alpha_2^2 + \alpha_3^2 + \alpha_4^2) - 4\alpha_1 \alpha_2 (\alpha_3 + \alpha_4) + 2(\alpha_2^2 + \alpha_3 \alpha_4)$.

(ii) If $f(k+1) \sim \beta f(k)$ for some $\beta \neq 0$ and there exists an $\alpha$ with $|\alpha| < 1$ such that $\mathbf{r}(k) \sim (\alpha, f(k), f(k+1), f(k-1))$, then

$$\Psi^*(\mathbf{r}(k)) \sim (f(k))^2\, \frac{(2 - \alpha(\beta + \beta^{-1}))^2}{2\pi^2 (1 - \alpha^2)} + O\big((f(k))^4\big).$$

(iii) If the assumptions of (ii) hold with $\beta = 1$, then

$$\Psi^*(\mathbf{r}(k)) \sim \frac{2\, (f(k))^2\, (1 - \alpha)}{\pi^2 (1 + \alpha)} + O\big((f(k))^4\big).$$

Proof. (i) follows by Theorem 4.1 and the fact that $\pi^*(\mathbf{r}(k))$ is asymptotically equivalent to $(\alpha_1, \alpha_2 f(k), \alpha_3 f(k), \alpha_4 f(k), \alpha_2 f(k), \alpha_1)$.

(ii) is a special case of (i) where $\mathbf{r}(k) \sim (\alpha, f(k), \beta f(k), f(k)/\beta)$ and thus

$$q(\alpha, 1, \beta, 1/\beta) = \alpha^2 (2 + \beta^2 + \beta^{-2}) - 4\alpha(\beta + \beta^{-1}) + 4 = \big(2 - \alpha(\beta + \beta^{-1})\big)^2.$$

Now, (iii) is obvious.
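The algebraic identity for $q$ used in the proof of (ii) can be spot-checked numerically; a sketch, assuming NumPy:

```python
import numpy as np

def q(a1, a2, a3, a4):
    """q(alpha) from Corollary 5.1 (i)."""
    return (a1**2 * (2*a2**2 + a3**2 + a4**2)
            - 4*a1*a2*(a3 + a4) + 2*(a2**2 + a3*a4))

# Identity from the proof of (ii): q(alpha, 1, beta, 1/beta) = (2 - alpha(beta + 1/beta))^2
ok = all(abs(q(a, 1.0, b, 1.0/b) - (2 - a*(b + 1/b))**2) < 1e-10
         for a in np.linspace(-0.9, 0.9, 7)
         for b in [0.5, 0.8, 1.0, 1.25, 2.0])
print(ok)  # True
```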


5.1. Lower and upper bounds. Theorem 5.3 below gives sufficient conditions on $\mathbf{r} \in \mathcal{R}^*$ to obtain lower and upper bounds for $\Psi^*(\mathbf{r})$ by setting $r_2, r_3, r_4$ all equal to $r_3$ and to $r_4$, respectively. We first prove the following lemma.

Lemma 5.2. For every $\mathbf{r} = (r_1, r_2, r_3, r_4) \in \mathcal{R}^*$,

$$\frac{\partial \Psi^*}{\partial r_2}(\mathbf{r}) = \frac{2}{\pi^2 \sqrt{1 - r_2^2}}\, \arcsin \frac{\sigma_{13}(\pi^*(\mathbf{r}))}{\sqrt{\sigma_{11}(\pi^*(\mathbf{r}))\, \sigma_{22}(\pi^*(\mathbf{r}))}}, \tag{5.2}$$

$$\frac{\partial \Psi^*}{\partial r_3}(\mathbf{r}) = \frac{1}{\pi^2 \sqrt{1 - r_3^2}}\, \arcsin \frac{\sigma_{23}(\pi^*(\mathbf{r}))}{\sigma_{22}(\pi^*(\mathbf{r}))}, \tag{5.3}$$

$$\frac{\partial \Psi^*}{\partial r_4}(\mathbf{r}) = \frac{1}{\pi^2 \sqrt{1 - r_4^2}}\, \arcsin \frac{\sigma_{14}(\pi^*(\mathbf{r}))}{\sigma_{11}(\pi^*(\mathbf{r}))}, \tag{5.4}$$

and

$$\begin{aligned}
\sigma_{13}(\pi^*(\mathbf{r})) &= r_2 - r_1 r_3 + r_2 r_3 r_4 - r_2^3 - r_1 r_4 + r_2 r_1^2, &(5.5)\\
\sigma_{14}(\pi^*(\mathbf{r})) &= r_3 - 2 r_1 r_2 + r_4 r_2^2 - r_3 r_4^2 + r_4 r_1^2, &(5.6)\\
\sigma_{23}(\pi^*(\mathbf{r})) &= r_4 - 2 r_1 r_2 + r_3 r_2^2 - r_4 r_3^2 + r_3 r_1^2. &(5.7)
\end{aligned}$$

Proof. The validity of (5.5)-(5.7) directly follows from (3.10)-(3.12). Furthermore,

$$\frac{\partial \Psi^*}{\partial r_2}(\mathbf{r}) = \frac{\partial \Psi}{\partial r_2}(\pi^*(\mathbf{r})) + \frac{\partial \Psi}{\partial r_5}(\pi^*(\mathbf{r})) \quad\text{and}\quad \frac{\partial \Psi^*}{\partial r_i}(\mathbf{r}) = \frac{\partial \Psi}{\partial r_i}(\pi^*(\mathbf{r})) \quad\text{for } i = 3, 4.$$

Since $\sigma_{11}(\pi^*(\mathbf{r})) = \sigma_{44}(\pi^*(\mathbf{r}))$, $\sigma_{22}(\pi^*(\mathbf{r})) = \sigma_{33}(\pi^*(\mathbf{r}))$ and $\sigma_{13}(\pi^*(\mathbf{r})) = \sigma_{24}(\pi^*(\mathbf{r}))$ (compare to (3.6)-(3.9), (3.10) and (3.13)), we obtain (5.2)-(5.4) by equations (3.14)-(3.17) in Lemma 3.3.

Theorem 5.3. Let $\mathbf{r} = (r_1, r_2, r_3, r_4) \in \mathcal{R}^*$ with $r_4, r_2 \geq r_3 \geq 0$. For $h \in [0, 1]$, define $\mathbf{s}_h := (1-h) \cdot \mathbf{r} + h \cdot (r_1, r_3, r_3, r_3)$ and $\mathbf{t}_h := (1-h) \cdot \mathbf{r} + h \cdot (r_1, r_4, r_4, r_4)$.

1. If $1 + r_1 - 2r_3 > 0$ and $\sigma_{13}(\pi^*(\mathbf{s}_h)), \sigma_{14}(\pi^*(\mathbf{s}_h)) \geq 0$ for all $h \in [0, 1]$, then

$$\Psi^*(\mathbf{r}) \geq \Psi^*(r_1, r_3, r_3, r_3). \tag{5.8}$$

2. If $1 + r_1 - 2r_4 > 0$ and $\sigma_{13}(\pi^*(\mathbf{t}_h)), \sigma_{23}(\pi^*(\mathbf{t}_h)) \geq 0$ for all $h \in [0, 1]$, then

$$\Psi^*(r_1, r_4, r_4, r_4) \geq \Psi^*(\mathbf{r}). \tag{5.9}$$

Proof. 1. First, note that the set of eigenvalues of $\Sigma(\pi^*(r_1, r_3, r_3, r_3))$ is given by $\{1 - r_1,\, 1 + r_1 - 2r_3,\, 1 + r_1 + 2r_3\}$. Under the assumptions, each eigenvalue is strictly larger than 0, so $(r_1, r_3, r_3, r_3) \in \mathcal{R}^*$. Because $\mathcal{R}^*$ is convex, we have $\mathbf{s}_h \in \mathcal{R}^*$ for all $h \in [0, 1]$. Hence, $f(h) := \Psi^*(\mathbf{s}_h)$ is well-defined for all $h \in [0, 1]$. Since $f(0) = \Psi^*(\mathbf{r})$ and $f(1) = \Psi^*(r_1, r_3, r_3, r_3)$, it is sufficient to show that $h \mapsto f(h)$ is monotonically decreasing on $[0, 1]$, or, equivalently,

$$f'(h) = (r_3 - r_2)\, \frac{\partial \Psi^*}{\partial r_2}(\mathbf{s}_h) + (r_3 - r_4)\, \frac{\partial \Psi^*}{\partial r_4}(\mathbf{s}_h) \leq 0$$

for all $h \in [0, 1]$. With the assumptions $r_3 - r_2 \leq 0$ and $r_3 - r_4 \leq 0$, a sufficient condition for this inequality to be satisfied is $\sigma_{13}(\pi^*(\mathbf{s}_h)) \geq 0$ and $\sigma_{14}(\pi^*(\mathbf{s}_h)) \geq 0$ for all $h \in [0, 1]$ (compare to (5.2) and (5.4)).


2. Analogously, define $g(h) := \Psi^*(\mathbf{t}_h)$, and note that a sufficient condition for

$$g'(h) = (r_4 - r_2)\, \frac{\partial \Psi^*}{\partial r_2}(\mathbf{t}_h) + (r_4 - r_3)\, \frac{\partial \Psi^*}{\partial r_3}(\mathbf{t}_h) \geq 0$$

is given by $\sigma_{13}(\pi^*(\mathbf{t}_h)) \geq 0$ and $\sigma_{23}(\pi^*(\mathbf{t}_h)) \geq 0$ for all $h \in [0, 1]$.

As the proof of Theorem 5.3 shows, a sufficient condition for strict inequality in (5.8) is given by $r_4 > r_3$ and $\sigma_{14}(\pi^*(\mathbf{s}_h)) > 0$ for some $h \in [0, 1]$, or $r_2 > r_3$ and $\sigma_{13}(\pi^*(\mathbf{s}_h)) > 0$ for some $h \in [0, 1]$. Analogously, a sufficient condition for strict inequality in (5.9) is given by $r_4 > r_3$ and $\sigma_{23}(\pi^*(\mathbf{t}_h)) > 0$ for some $h \in [0, 1]$, or $r_4 > r_2$ and $\sigma_{13}(\pi^*(\mathbf{t}_h)) > 0$ for some $h \in [0, 1]$.

The next lemma gives easily verifiable conditions for the assumptions of Theorem5.3.

Lemma 5.4. Let r = (r1, r2, r3, r4) ∈ R∗ with r1 ≤ 0 and r2, r3, r4 ≥ 0. Thenσ13(sh), σ14(sh) > 0 and σ13(th), σ23(th) > 0 for all h ∈ [0, 1].

Proof. For fixed h ∈ [0, 1], let sh = (s1, s2, s3, s4). Clearly, s1 ≤ 0 and s2,s3, s4 ∈ [0, 1). Because σ13(π∗(sh)) ≥ s2 − s32 and σ14(π∗(sh)) ≥ s3 − s3s

24, we

obtain σ13(sh), σ14(sh) > 0. Analogously, let th = (t1, t2, t3, t4), and note thatσ23(π∗(th)) ≥ t4 − t4t

23.

5.2. Approximations of the bounds. Next, we analyze approximations ofthe lower and upper bounds of Ψ∗(r) given by Theorem 5.3. Let R∗∗ be the set ofr = (r1, r2) ∈ (−1, 1)2 such that π∗∗(r) := (r1, r2, r2, r2) ∈ R∗ or, equivalently,

Σ(π∗∗(r)) =

1 r1 r2 r2r1 1 r2 r2r2 r2 1 r1r2 r2 r1 1

is strictly positive definite. Since the set of eigenvalues of Σ(π∗∗(r)) is given by1− r1, 1 + r1 + 2r2, 1 + r1 − 2r2,

R∗∗ = (r1, r2) ∈ (−1, 1)2 | 2 |r2| < 1 + r1 . (5.10)

For r ∈ R∗∗, define

Φ∗∗(r) := Φ(π∗∗(r)) and Ψ∗∗(r) := Ψ(π∗∗(r)).

Note that, σii(π∗∗(r)) = (1 − r1)(1 + r1 − 2r22) for i = 1, 2, 3, 4 and σ13(π∗∗(r)) =σ14(π∗∗(r)) = σ23(π∗∗(r)) = σ24(π∗∗(r)) = r2(1 − r1)2 (compare to (3.6)-(3.13)).Hence, according to (3.2)-(3.5),

∂Φ∗∗

∂r2(r) =

∂Φ∂r2

(π∗∗(r)) +∂Φ∂r3

(π∗∗(r)) +∂Φ∂r4

(π∗∗(r)) +∂Φ∂r5

(π∗∗(r)) (5.11)

=2

π√

1− r22

(14

+12π

arcsinr2(1− r1)

1 + r1 − 2r22

).

By formula (3.19), we obtain the integral representation

Ψ∗∗(r) =4r2π2

∫ 1

0

1√1− r22h

2arcsin

r2(1− r1)h1 + r1 − 2r22h2

dh

=4π2

∫ r2

0

1√1− t2

arcsin(1− r1)t

1 + r1 − 2t2dt.

Page 14: COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES · 2012. 10. 11. · Kedem [14] has developed estimators for autocorrelations and ... Basically, covariances of zero crossings

14 M. SINN AND K. KELLER

As the following theorem shows, Ψ∗∗(r) can be approximated monotonically frombelow by successively adding further terms of the Taylor expansion of Ψ∗∗(r) in (r1, 0).

Theorem 5.5. For every r = (r1, r2) ∈ R∗∗,

∂lΦ∗∗

∂lr2((r1, 0)T ) ≥ 0 for l ∈ N0, (5.12)

Ψ∗∗(r) = 4∞∑

l=1

r2l2

(2l)!∂2lΦ∗∗

∂2lr2(r1, 0). (5.13)

Proof. Let r = (r1, r2) ∈ R∗∗. We define f(x) := 12π arcsinx for x ∈ (−1, 1), and

g1(x) := x(1 − r1), g2(x) := 11+r1−2x2 , g(x) := g1(x) · g2(x) and h(x) := f(g(x)) for

x ∈(− 1+r1

2 , 1+r12

). Clearly, Φ∗∗(r1, 0) ≥ 0, hence (5.12) is true for l = 0. According

to (5.10), |r2| < 1+r12 , so (5.11) yields ∂Φ∗∗

∂r2(r) = f ′(r2) + 4f ′(r2)h(r2). Applying

Leibniz’s rule gives

∂lΦ∗∗

∂lr2((r1, 0)T ) = f (l)(0) + 4

l−1∑k=0

(l − 1k

)f (k+1)(0)h(l−1−k)(0)

= f (l)(0) + 4l∑

k=1

(l − 1k − 1

)f (k)(0)h(l−k)(0)

for l ∈ N. Note that arcsinx =∑∞

n=03·5·...·(2n−1)

2·4·...·(2n)·(2n+1)x2n+1, so f (l)(0) ≥ 0. Therefore,

in order to prove (5.12), it is sufficient to show that h(l)(0) ≥ 0 for all l ∈ N.

Let g2(x) = f2(f1(x)) with f1(x) := 1+r1−2x2, f2(x) := 1x . Note that f (l)

1 (0) 6= 0only if l ∈ 0, 2. For each l ∈ N, we can write g(l)

2 (0) = (f2 f1)(l)(0) as the sumof terms f (k)

2 (f1(0)) · f (i1)1 (0) · f (i2)

1 (0) · . . . · f (ik)1 (0) with k, i1, i2, . . . , ik ∈ N which

satisfy i1 + i2 + . . .+ ik = l. Each term can only be non-zero if i1 = i2 = . . . = ik = 2,hence a necessary condition for g(l)

2 (0) 6= 0 is that l is even. Moreover, g(l)2 (0) > 0

in this case, since f (k)2 (f1(0)) = (−1)kk!(1 + r1)−(k+1) and f

(2)1 (0) = −4. Note that

g(k)1 (0) 6= 0 only if k = 1, consequently, by Leibniz’s rule, g(l)(0) = l·g(1)

1 (0)·g(l−1)2 (0) =

l · (1− r1) · g(l−1)2 (0) for all l ∈ N, and hence g(l)(0) ≥ 0 for all l ∈ N.

Now, similarly as above, we can write h(l)(0) = (f g)(l)(0) for each l ∈ N asthe sum of products consisting of factors of the form f (k)(g(0)) = f (k)(0) and g(m)(0)with k,m ∈ N, implying h(l)(0) ≥ 0.

In order to prove (5.13), first note that g2 and g = g1g2 have power series expan-sions at 0 with the radius of convergence 1+r1

2 , and g((− 1+r1

2 , 1+r12

)) ⊂ (−1, 1). Since

f has a power series expansion at 0 with the radius of convergence 1, according to ele-mentary properties of power series, the mapping · 7→ ∂Φ∗∗

∂r2(r1, ·) = f ′(·)+4f ′(·)f(g(·))

has a power series expansion at 0 with the radius of convergence 1+r12 , and hence it

also holds for the mapping · 7→ Φ∗∗(r1, ·).Now, note that r2 ∈

(− 1+r1

2 , 1+r12

)(see (5.10)). Therefore, according to the

Page 15: COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES · 2012. 10. 11. · Kedem [14] has developed estimators for autocorrelations and ... Basically, covariances of zero crossings

COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES 15

definition of Ψ,

Ψ∗∗(r) = 2 Φ∗∗(r) + 2 Φ∗∗(r1,−r2)− 4 Φ∗∗(r1, 0)

= 2∞∑

l=0

rl2

l!∂lΦ∗∗

∂lr2(r1, 0) + 2

∞∑l=0

(−r2)l

l!∂lΦ∗∗

∂lr2(r1, 0)− 4 Φ∗∗(r1, 0)

= 4∞∑

l=1

r2l2

(2l)!∂2lΦ∗∗

∂2lr2(r1, 0).

The proof is complete.

6. The variance of the empirical zero crossing rate. In this section, weapply the previous results to the analysis of the variance of empirical zero crossingrates. Recall formula (2.1),

Var(cn) =1n2

(nγ0 + 2

n−1∑k=1

(n− k) γk

).

In order to evaluate Var(cn) numerically, we can use formulas (2.4) and (2.5) for thecomputation of γ0 and γ1. For k > 1, formula (5.1) yields

γk = Ψ∗(ρ1, ρk, ρk+1, ρk−1) ,

and the right hand side can be evaluated numerically using the integral representationof Ψ given in (3.19).

When n is large, an “exact” numerical evaluation of γk for every k = 0, 1, . . . , n−1is time-consuming. A quick way for getting approximate values of Var(cn) is to useapproximations of γk in terms of the function Ψ∗∗. If the assumptions of Theorem5.3 are satisfied, this yields upper and lower bounds for γk. A further speed-up canbe achieved by using the finite-order approximations of Ψ∗∗ provided by Theorem5.5. For instance, when the autocorrelations of Y are not too large, one can use thefirst-order approximation

γϑ(k) ≈ 2(1− ρϑ(1))π2(1 + ρϑ(1))

(ρϑ(k))2.

An alternative method for computing approximate values of Var(cn) is to use theexact values of γk for k = 2, 3, . . . until the relative error of the approximations fallsbelow a given threshold ε > 0, and then to use the approximations of γk. If therelative error does not get larger than ε anymore, then also the relative error of theresulting approximation of Varϑ(cn) is not larger than ε. For the calculations behindFigures 7.1-7.3, we have used this method with the threshold ε = 0.001.

The following theorem establishes asymptotics of Var(cn).Theorem 6.1. Suppose there exists a mapping f : N → R such that ρk ∼ f(k).(i) If |f(k)| = o(k−β) with β > 1

2 , then σ2 := γ0 + 2∑∞

k=1 γk <∞ and

Var(cn) ∼ σ2 n−1 .

(ii) If f(k) = αk−12 for some α ∈ (−1, 1) \ 0, then

Var(cn) ∼ 4α2(1− ρ1)π2(1 + ρ1)

lnnn

.

Page 16: COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES · 2012. 10. 11. · Kedem [14] has developed estimators for autocorrelations and ... Basically, covariances of zero crossings

16 M. SINN AND K. KELLER

(iii) If f(k) = αk−β for some α ∈ (−1, 1) \ 0 and β ∈ (0, 12 ), then

Var(cn) ∼ 4α2(1− ρ1)π2(1 + ρ1)(1− 2β)

n−2β .

Proof. (i) According to Corollary 5.1 (i), we have γk = O((f(k))2), which showsthat

∑∞k=1 |γk| <∞. By the Dominated Convergence Theorem, we obtain

n−1∑k=1

n− k

nγk ∼ lim

n→∞

∞∑k=1

maxn− k

n, 0

γk

=∞∑

k=1

limn→∞

maxn− k

n, 0

γk =

∞∑k=1

γk .

Now, with formula (2.1), the result follows.(ii) Note that f(k) ∼ f(k + 1) and thus, according to Corollary 5.1 (iii),

γk ∼2α2 (1− ρ1)π2(1 + ρ1)

k−1 .

Using the fact that∑n−1

k=1 k−1 ∼ lnn, we obtain

n−1∑k=1

γk ∼2α2 (1− ρ1)π2(1 + ρ1)

lnn . (6.1)

Furthermore, we have

n−1∑k=1

γk −n−1∑k=1

n− k

nγk =

n−1∑k=1

k

nγk

∼ 1n

n−1∑k=1

2α2 (1− ρ1)π2(1 + ρ1)

= o(lnn),

which shows that

n−1∑k=1

γk ∼n−1∑k=1

n− k

nγk .

According to formula (2.1), we obtain

Var(cn) ∼ 2n

n−1∑k=1

γk ,

and together with (6.1) the statement follows.(iii) The proof is similar to (ii), using the fact

∑n−1k=1 k

−2β ∼ 11−2β n

1−2β .

7. Examples. In this section, we apply the previous results to empirical zerocrossing rates in AR(1) processes, fractional Gaussian noise and ARFIMA(0,d,0) pro-cesses.

Page 17: COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES · 2012. 10. 11. · Kedem [14] has developed estimators for autocorrelations and ... Basically, covariances of zero crossings

COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES 17

7.1. AR(1) processes. Assume that Y is an AR(1) process with autoregres-sive coefficient a ∈ (−1, 1), that is, Y is stationary, non-degenerate and zero-meanGaussian with the autocorrelations ρk = ak for k ∈ N0 (where 00 := 1). Accordingto formula (2.3),

P(C0 = 1) =12− 1π

arcsin a, (7.1)

hence the higher the autoregressive coefficient, the lower the probability of a zerocrossing.

Fig. 7.1. Variance of cn in AR(1) processes for a ∈ (−1, 1) and n = 10, 11, . . . , 100.

By using the method explained in Sec. 6, we can evaluate the variance of cn.Figure 7.1 displays the values of Var(cn) for n = 10, 11, . . . , 100 and a ranging in(−1, 1). For fixed n, the variance of cn tends to 0 as a tends to −1 and 1, respectively.According to (7.1), the probability of a zero crossing is equal to 1 and 0 in these limitcases, and thus cn is P-almost surely equal to 1 and 0, respectively.

For fixed a, the variance of cn is decreasing in n. In particular, according toTheorem 6.1 (i),

Var(cn) ∼ σ2 n−1

where σ2 := γ0 + 2∑∞

k=1 γk < ∞. In the case a = 0, formulas (2.4) and (2.5) yieldγ0 = 1

4 and γ1 = 0. Furthermore, according to Lemma 3.1,

γk = Ψ∗(0, 0, 0, 0) = 0

for all k > 1. Therefore, Var(cn) = 14n in this case.

Remarkably, Var(cn) is always identical for a and −a. In fact, one can show thatγk is identical for a and −a for all k ∈ Z. For k = 0 and k = 1, this is an immediateconsequence of formulas (2.4) and (2.5) and the fact that (arcsin a)2 = (arcsin(−a))2.For k > 1, this is true because, according to Lemma 3.1,

Ψ∗(a, ak, ak+1, ak−1) = Ψ∗(−a, (−a)k, (−a)k+1, (−a)k−1).

Page 18: COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES · 2012. 10. 11. · Kedem [14] has developed estimators for autocorrelations and ... Basically, covariances of zero crossings

18 M. SINN AND K. KELLER

7.2. Fractional Gaussian noise. Assume that Y is fractional Gaussian noise(fGn) with the Hurst parameter H ∈ (0, 1), that is, Y is stationary, non-degenerateand zero-mean Gaussian with the autocorrelations

ρk =12(|k + 1|2H − 2|k|2H + |k − 1|2H

)for k ∈ Z. With ρ1 = 22H−1 − 1, we obtain that the probability of a zero crossing isgiven by

P(C0 = 1) =12− 1π

arcsin(22H−1 − 1)

= 1− 2π

arcsin 2H−1, (7.2)

where the second equation follows from arcsinx = 2 arcsin√

(1 + x)/2− π2 . Thus, the

larger the Hurst parameter, the lower the probability of a zero crossing.

Fig. 7.2. Variance of cn in fGn for H ∈ (0, 1) and n = 10, 11, . . . , 100.

Figure 7.2 displays Var(cn) for n = 10, 11, . . . , 100 and H ranging in (0, 1). Forfixed n, the variance tends to 0 as H tends to 1. Note that the probability of a zerocrossing is 0 in the limit case (see (7.2)), and thus cn is almost surely equal to 0.

Next, we derive asymptotics of Var(cn). It is well-known that ρk ∼ H(2H −1)k2H−2 as k →∞ (see [3]). According to Theorem 6.1 (i), we obtain

Var(cn) ∼ σ2 n−1

for H < 34 , where σ2 := γ0 +2

∑∞k=1 γk <∞. In the case H = 1

2 , where ρk = 0 for allk > 0, we obtain Var(cn) = 1

4n by the same argument as in the case a = 0 for AR(1)processes. If H = 3

4 , then Theorem 6.1 (ii) yields

Var(cn) ∼ 9 (√

2− 1)16π2

lnnn

Page 19: COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES · 2012. 10. 11. · Kedem [14] has developed estimators for autocorrelations and ... Basically, covariances of zero crossings

COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES 19

(in particular, H2(2H − 1)2(22−2H − 1) = 964 (√

2− 1) in this case). Finally, if H > 34 ,

then Theorem 6.1 (iii) yields

Var(cn) ∼ 4H2 (2H − 1)2 (22−2H − 1)π2 (4H − 3)

n4H−4.

7.3. ARFIMA(0,d,0) processes. If Y is an ARFIMA(0,d,0) process with thefractional differencing parameter d ∈

(− 1

2 ,12

), then Y is stationary, non-degenerate

and zero-mean Gaussian with the autocorrelations

ρk =Γ(1− d) Γ(k + d)Γ(d) Γ(k + 1− d)

for k ∈ Z. With ρ1 = d1−d , we obtain

P(C0 = 1) =12− 1π

arcsind

1− d

for the probability of a zero crossing.

Fig. 7.3. Variance of cn in ARFIMA(0,d,0) for d ∈(− 1

2, 12

)and n = 10, 11, . . . , 100.

Figure 7.3 displays the variance of cn for n = 10, 11, . . . , 100 and d ∈(− 1

2 ,12

).

The picture is very similar to Figure 7.2. In particular, the variance is only slowlydecreasing for large parameter values and tends to 0 as d tends to 1

2 .Next, we derive asymptotics of Var(cn). It is well-known that ρk ∼ Γ(1−d)

Γ(d) k2d−1

as k →∞ (see [3]). According to Theorem 6.1 (i), we obtain

Var(cn) ∼ σ2 n−1

for d < 14 , where σ2 := γ0 + 2

∑∞k=1 γk < ∞. In the case d = 0, where ρk = 0 for all

k > 0, we obtain Var(cn) = 14n . If d = 1

4 , then Theorem 6.1 (ii) yields

Var(cn) ∼2

(Γ( 3

4 ))2

π2(Γ( 1

4 ))2

lnnn

Page 20: COVARIANCES OF ZERO CROSSINGS IN GAUSSIAN PROCESSES · 2012. 10. 11. · Kedem [14] has developed estimators for autocorrelations and ... Basically, covariances of zero crossings

20 M. SINN AND K. KELLER

In the case d > 14 , Theorem 6.1 (iii) yields

Var(cn) ∼ 4 (Γ(1− d))2 (1− 2d)π2 (Γ(d))2 (4d− 1)

n4d−2.

REFERENCES

[1] Abrahamson, I. G., Orthant probabilities for the quadrivariate normal distribution. Ann. Math.Statist. 35 (1964), 1685-708.

[2] Bacon, R. H., Approximation to multivariate normal orthant probabilities. Ann. Math. Statist.34 (1963), 191-98.

[3] Beran, J., Statistics for Long-Memory Processes. London: Chapman and Hall (1994).[4] Berman, S. M., Sojourns and Extremes of Stochastic Processes. Wadsworth and Brooks/Cole,

Pacific Grove, California (1992).[5] Chang, S., Pihl, G. E. and Essigmann, M. W., Representations of speech sounds and some of

their statistical properties, Proc. IRE, Vol. 39 (1951), 147-53.[6] Cheng, M. C., The orthant probability of four Gaussian variates. Ann. Math. Statist. 40 (1969),

152-61.[7] Coeurjolly, J. F., Simulation and identification of the fractional Brownian motion: A biblio-

graphical and comparative study. J. Stat. Software 5 (2000).[8] Craig, P., A new reconstruction of multivariate normal orthant probabilities. Journal of the

Royal Statistical Society: Series B (Statistical Methodology) Volume 70 Issue 1 (2008), 227- 243.

[9] Damsleth, E. and El-Shaarawi, A. H., Estimation of autocorrelation in a binary time series.Stochastic Hydrol. Hydraul. 2 (1988), 61-72.

[10] David, F. N., A note on the evaluation of the multivariate normal integral. Biometrika 40(1953), 458-459.

[11] Ewing, G. and Taylor, J., Computer recognition of speech using zero-crossing information.IEEE Transactions on Audio and Electroacoustics, Volume 17, Issue 1 (1969), 37 - 40.

[12] Ho, H.-C. and Sun, T. C., A central limit theorem for noninstantaneous filters of a stationaryGaussian process. J. Multivariate Anal. 22 (1987), 144-55.

[13] Horn, R. A. and Johnson, C. R., Matrix Analysis, Cambridge University Press (1985).[14] Kedem, B., Time Series Analysis by Higher Order Crossings. New York: IEEE Press (1994).[15] Keenan, D. MacRae, A Time Series Analysis of Binary Data, Journal of the American Statis-

tical Association, Vol. 77, No. 380 (1982), 816-21.[16] Markovic, D. and Koch, M., Sensitivity of Hurst parameter estimation to periodic signals in

time series and filtering approaches. Geophysical Research Letters 32 (2005), L17401.[17] Piterbarg, V. I., Asymptotic Methods in the Theory of Gaussian Processes and Fields. American

Mathematical Society, Providence, Rhode Island (1996).[18] Plackett, R. L., A reduction formula for normal multivariate integrals. Biometrika 41 (1954),

351-60.[19] Rabiner, L. R. and Schafer, R. W., Digital processing of speech signals. London: Prentice-Hall

(1978).[20] Shi, B., Vidakovic, B., Katul, G. and Albertson, J. D., Assessing the effects of atmospheric

stability on the fine structure of surface layer turbulence using local and global multiscaleapproaches. Physics of Fluids 17 (2005), 055104.