
ON THE ESTIMATION OF THE PARAMETERS OF MULTIVARIATE STABLE DISTRIBUTIONS

Yu. Davydov
Laboratoire de Statistique et Probabilités,

Université des Sciences et Technologies de Lille,

F-59655 Villeneuve d’Ascq, France

and

V. Paulauskas
Department of Mathematics and Informatics,

Vilnius University, Naugarduko 24, Vilnius 2006

Lithuania

Abstract

In this paper asymptotic normality is proved for a new estimator of the spectral measure of a multivariate stable distribution. An estimator for the density of a multivariate stable distribution is also proposed, and its properties are investigated. The dependence of a stable density on the exponent α and on the spectral measure is studied.

1. INTRODUCTION

Suppose that we have a sample X_j = (X_{j1}, . . . , X_{jd}), j = 1, 2, . . . , N, taken from a multivariate distribution F which belongs to the domain of normal attraction of a multivariate stable distribution G_{α,Γ}, whose characteristic function (ch.f.) is given by the following expression

$$
\varphi_{\alpha,\Gamma}(t)=
\begin{cases}
\exp\Bigl\{-\int_{S^{d-1}}|(t,s)|^{\alpha}\bigl(1-i\,\operatorname{sign}((t,s))\tan\tfrac{\pi\alpha}{2}\bigr)\Gamma(ds)\Bigr\}, & \alpha\ne 1,\\[1ex]
\exp\Bigl\{-\int_{S^{d-1}}|(t,s)|\bigl(1+i\,\tfrac{\pi}{2}\operatorname{sign}((t,s))\ln|(t,s)|\bigr)\Gamma(ds)\Bigr\}, & \alpha=1,
\end{cases}
$$

where S^{d−1} = {x ∈ R^d : ‖x‖ = 1}, 0 < α < 2, and Γ is a finite measure. The pair (α, Γ) completely characterizes the multivariate stable distribution, but for our purposes it will be convenient to introduce one more parameter σ^α = Γ(S^{d−1}), which allows us to consider the normalized spectral measure Γ̃ = σ^{−α}Γ. It will therefore be convenient to use the following parameterization of multivariate stable distributions: we say that a random vector (r.v.) Y has a multivariate stable distribution G = G_{α,σ,Γ} with parameters (α, σ, Γ) if its ch.f. is of the form

$$
\varphi_{\alpha,\sigma,\Gamma}(t)=E e^{i(t,Y)}=
\begin{cases}
\exp\Bigl\{-\sigma^{\alpha}\int_{S^{d-1}}|(t,s)|^{\alpha}\bigl(1-i\,\operatorname{sign}((t,s))\tan\tfrac{\pi\alpha}{2}\bigr)\Gamma(ds)\Bigr\}, & \alpha\ne 1,\\[1ex]
\exp\Bigl\{-\sigma\int_{S^{d-1}}|(t,s)|\bigl(1+i\,\tfrac{\pi}{2}\operatorname{sign}((t,s))\ln|(t,s)|\bigr)\Gamma(ds)\Bigr\}, & \alpha=1.
\end{cases}
\tag{1}
$$


Here Γ is a normalized spectral measure, i.e. Γ(S^{d−1}) = 1. We are interested in the problem of estimating these parameters from the data X_j, j = 1, 2, . . . , N. Suppose that we have constructed consistent estimators α_N, σ_N and Γ_N and we know some asymptotic properties of these estimators, allowing us to state that α_N − α, σ_N − σ and Γ_N − Γ are small (for the last quantity it is necessary to define more precisely what we mean by saying that a signed measure is small). Let g(x) ≡ g_{α,σ,Γ}(x) be the density of G_{α,σ,Γ}; then one can ask how to estimate the quantity

$$
\sup_{x\in\mathbb{R}^d}\bigl|g(x)-g_{\alpha_N,\sigma_N,\Gamma_N}(x)\bigr| \tag{2}
$$

and how to construct the estimator for g in practice. These problems, as well as the problem of generating values of a multivariate stable r.v. with given parameters, are rather difficult, and in this paper we propose an approach to them. The list of papers devoted to the generation of stable vectors or the estimation of their parameters is not long; we can mention Byczkowski et al. (1993), Cheng and Rachev (1995), Davydov, Paulauskas and Rackauskas (1999) (since we shall refer to this paper many times, we shall use the abbreviation [DPR] for it) and the recent papers Nolan (1998) and Davydov and Nagaev (1999).

The main emphasis in the paper is on the estimation of the spectral measure Γ. The estimation of the exponent α has been investigated more intensively, and at present one can propose several possible procedures for estimating α. For a moment let us suppose that the sample X_i, i = 1, 2, . . . , N, is taken from a stable distribution (the general case of the domain of attraction of a stable vector is similar) and, for simplicity of writing, let Γ be symmetric. Then it is known that for each t ∈ S^{d−1} the projections (X_i, t) = Y_{i,t}, i = 1, . . . , N, form a sample from a one-dimensional stable random variable with characteristic exponent α, so we can use any of the estimators known for the one-dimensional case. But here it is appropriate to mention that the problem of the choice of the direction t ∈ S^{d−1} is not trivial. Although for every direction for which Y_{1,t} is not degenerate we obtain a consistent estimator, the asymptotic properties of these estimators may depend on the scale parameter of Y_{i,t}, which is

$$
\sigma^{\alpha}\int_{S^{d-1}}|\langle t,x\rangle|^{\alpha}\,\Gamma(dx)
$$

and thus depends on t, σ and Γ. Most probably it is possible to optimize the choice of t by considering the rate of convergence and the limit laws of the estimators under consideration. A better approach was therefore proposed in [DPR], where in the d-dimensional setting an estimator of the exponent α was constructed which is independent of the dimension d and of the spectral measure. Although in that paper it was stated that the estimator is applicable only in the case 0 < α < 1, by changing some arguments it is possible to show that it can be applied in the whole range of exponents 0 < α < 2, and we intend to address this problem in a forthcoming paper.

In the same paper, [DPR], an estimator of the normalized spectral measure Γ was proposed, and we now recall it. Suppose that N = n², and divide the sample X_j, j = 1, 2, . . . , N, into n groups V_{n1}, . . . , V_{nn}, each containing n vectors from the sample.


Let M^{(1)}_{ni} = max{‖X_j‖ : X_j ∈ V_{ni}} and let X_{ni} = X_{j(n,i)}, where the index j(n, i) is such that M^{(1)}_{ni} = ‖X_{j(n,i)}‖. We set

$$
\Theta_{ni}=\frac{X_{ni}}{\|X_{ni}\|},\qquad i=1,2,\dots,n. \tag{3}
$$

The random vectors Θ_{n1}, . . . , Θ_{nn} are i.i.d., and in [DPR] it is proved that the empirical distribution based on the sample Θ_{n1}, . . . , Θ_{nn} is a consistent estimator for the spectral measure Γ, namely,

$$
\Gamma_n(B)=\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}_B(\Theta_{ni})\xrightarrow{\text{a.s.}}\Gamma(B). \tag{4}
$$

Here 1_B denotes the indicator function of a set B. In [DPR] asymptotic normality for an estimator of α was proved; here we consider the asymptotic properties of the estimator Γ_n.
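To make the construction (3)–(4) concrete, here is a minimal sketch in Python/NumPy, assuming the sample is stored row-wise in an array X with N = n² rows; the function name and the indicator argument are illustrative helpers, not part of the paper.

```python
import numpy as np

def spectral_measure_estimate(X, indicator_B):
    """Sketch of the estimator (3)-(4): split the sample into n groups,
    take the direction of the largest vector in each group, and return
    the fraction of these directions falling in the set B."""
    N, d = X.shape
    n = int(np.sqrt(N))
    assert n * n == N, "the construction assumes N = n**2"

    groups = X[: n * n].reshape(n, n, d)          # groups V_{n1}, ..., V_{nn}
    norms = np.linalg.norm(groups, axis=2)        # norms within each group
    idx = norms.argmax(axis=1)                    # index j(n, i) of the maximum
    X_ni = groups[np.arange(n), idx]              # the vectors X_{ni}

    Theta = X_ni / np.linalg.norm(X_ni, axis=1, keepdims=True)   # Theta_{ni}
    return np.mean([indicator_B(theta) for theta in Theta])      # Gamma_n(B)
```

For instance, with d = 2 one may take indicator_B = lambda th: float(th[0] > 0) to estimate the mass of the right half-circle.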

Since Γ_n in (4) gives an estimator for the normalized spectral measure, it remains to estimate the scale parameter σ. Let us assume again that the sample is taken from a stable distribution. We can use moment-type estimators. If Y is a r.v. with ch.f. ϕ_{α,σ,Γ}, then the ch.f. of Ȳ = σ^{−1}Y is ϕ_{α,1,Γ}; therefore, for any 0 < p < α,

$$
E\|Y\|^{p}=\sigma^{p}\,C(\alpha,p,\Gamma),
$$

where C(α, p, Γ) = E‖Ȳ‖^p. Thus as the estimator for σ we can take

$$
\sigma_N=\Biggl(\frac{1}{N\,C(\alpha,p,\Gamma)}\sum_{j=1}^{N}\|X_j\|^{p}\Biggr)^{1/p}. \tag{5}
$$

Another possibility is again to use "directional" estimators and the scale parameter of Y_t = (Y, t). Namely, as the estimator for σ we can take

$$
\sigma_N=\sigma_N(t)=\Biggl(\frac{1}{N\,C_1(\alpha,p,\Gamma,t)}\sum_{i=1}^{N}|(X_i,t)|^{p}\Biggr)^{1/p}, \tag{6}
$$

where C_1(α, p, Γ, t) = (∫_{S^{d−1}} |⟨t, x⟩|^α Γ(dx))^{1/α} (E|Y_0|^p)^{1/p} and Y_0 is a one-dimensional stable random variable with ch.f. exp{−|t|^α}.

Both estimators (5) and (6) depend on the unknown parameters α and Γ; therefore, to evaluate C(α, p, Γ) and C_1(α, p, Γ, t) we must use the estimated values α_n and Γ_n of these parameters, that is, in (5) and (6) we take C(α_n, p, Γ_n) and C_1(α_n, p, Γ_n, t), respectively.
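Purely as an illustration of how (5) would be applied, the sketch below assumes the normalizing constant C = C(α, p, Γ) is supplied from outside (in practice evaluated at the estimated parameters α_n, Γ_n, for example by simulation); the function name is a hypothetical helper, not something defined in the paper.

```python
import numpy as np

def scale_estimate(X, p, C):
    """Moment-type estimator (5) of the scale parameter sigma.

    X : sample array of shape (N, d).
    p : moment order, 0 < p < alpha.
    C : the constant C(alpha, p, Gamma) = E||Y_bar||**p, passed in as a number.
    """
    norms_p = np.linalg.norm(X, axis=1) ** p
    return (norms_p.mean() / C) ** (1.0 / p)
```

The directional variant (6) is obtained in the same way, replacing the norms by |(X_i, t)| for a fixed direction t and using C_1(α, p, Γ, t) in place of C.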

The paper is organized as follows. Section 2 contains some auxiliary results, and in Section 3 we give estimates for the quantity (2). From the estimates formulated in that section one can derive estimates for the difference of probabilities

$$
|G_{\alpha_1,\sigma_1,\Gamma_1}(B)-G_{\alpha_2,\sigma_2,\Gamma_2}(B)|
$$


for any Borel set B ⊂ R^d. This problem is of independent interest, and one can mention that even in the Gaussian case (α = 2) the dependence of Gaussian probabilities on the covariance matrix is not a simple one.

There is some overlap between Sections 2 and 3 and the paper Davydov and Nagaev (1999), since these results were obtained at the same time in Lille, Vilnius and Toruń.

The main result of the paper is contained in Section 4. Under the additional condition (28), in Theorem 3 we prove asymptotic normality for the self-normalized estimator of the spectral measure Γ on a fixed set B ⊂ S^{d−1}. In the last section we propose an estimator of the density of a stable r.v. which can be constructed in practice. The accuracy of this estimator is assessed using the results of Section 3.

2. AUXILIARY RESULTS

Let (X, ρ) be a complete separable metric space and let Π and M stand for the class of all probability measures and the class of all finite measures on X, respectively. Let m be some metric on Π. For any Q ∈ M we denote Q̄ = (Q(X))^{−1}Q, so that Q̄ ∈ Π. Let Q_i ∈ M, q_i = Q_i(X), i = 1, 2. We define

$$
d_m(Q_1,Q_2)=|q_1-q_2|+m(\bar Q_1,\bar Q_2)\min\{q_1,q_2\}.
$$

Proposition 1. Suppose that m satisfies m(ν_1, ν_2) ≤ 1 for all ν_1, ν_2 ∈ Π. Then d_m is a metric on M with the following properties:

a) dm(ν1, ν2) = m(ν1, ν2) for νi ∈ Π , i = 1, 2 ;

b) for all a > 0, d_m(aQ_1, aQ_2) = a d_m(Q_1, Q_2);

c) if Q ∈ Π, a > 0, b > 0, then d_m(aQ, bQ) = |a − b|;

d) if m metrizes the weak convergence in Π, then d_m metrizes the weak convergence in M;

e) if Q_n weakly converges to Q, then d_π(Q_n, Q) → 0, where π is the Prokhorov metric on Π;

f) Let X = ∪_i A_i, where A_i ∩ A_j = ∅ for i ≠ j, the A_i are Borel sets in X, and let x_i ∈ A_i. Let us define the "discretization" operator F ≡ F_{{A_i,x_i}} : X → X, F(x) = x_i if x ∈ A_i, and define for any Borel set A a new discrete measure

$$
(QF^{-1})(A)=\sum_{i:\,x_i\in A}Q(A_i).
$$

Then

$$
d_m(Q,QF^{-1})=Q(X)\,m(\bar Q,\bar Q F^{-1});
$$


h) let Q_1 and Q_2 be two discrete measures with the same support, i.e., Q_1 = Σ_k p_k δ_{x_k}, Q_2 = Σ_k q_k δ_{x_k}. Let p = Σ_k p_k, q = Σ_k q_k. Suppose that the metric m is majorized by the total variation metric ‖·‖_var. Then

$$
d_m(Q_1,Q_2)\le 3\|Q_1-Q_2\|_{\mathrm{var}}.
$$

Remark. Without the assumption m(ν_1, ν_2) ≤ 1 the triangle inequality for d_m can be violated. We can introduce another function, d̃_m(Q_1, Q_2) = |q_1 − q_2| + m(Q̄_1, Q̄_2), on M × M, which is a metric without this assumption, but then we lose property b). Since most commonly used metrics m are bounded by 1, it is more convenient to work with the metric d_m.
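For two discrete measures on a common finite support, the metric d_m is easy to compute explicitly. The sketch below takes for m half of the total variation distance between the normalized measures, which is bounded by 1 and therefore admissible in Proposition 1; this particular choice of m is an illustrative assumption, not the paper's.

```python
import numpy as np

def d_m(p_masses, q_masses):
    """d_m(Q1, Q2) = |q1 - q2| + m(Q1_bar, Q2_bar) * min(q1, q2)
    for discrete measures Q1 = sum_k p_k delta_{x_k}, Q2 = sum_k q_k delta_{x_k},
    with m = half the total variation distance of the normalized measures."""
    p_masses = np.asarray(p_masses, dtype=float)
    q_masses = np.asarray(q_masses, dtype=float)
    q1, q2 = p_masses.sum(), q_masses.sum()                  # total masses
    m = 0.5 * np.abs(p_masses / q1 - q_masses / q2).sum()    # m(Q1_bar, Q2_bar)
    return abs(q1 - q2) + m * min(q1, q2)
```

For probability measures (q1 = q2 = 1) this reduces to m(Q̄_1, Q̄_2), in agreement with property a).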

Proof of Proposition 1. The verification that d_m is a metric is easy and is omitted. Properties a)–c) follow from the definition of d_m. Let us prove d). For Q, Q_n ∈ M, let q = Q(X), q_n = Q_n(X). If d_m(Q_n, Q) → 0, then q_n → q and m(Q̄_n, Q̄) → 0. Therefore, by the assumption, Q̄_n ⇒ Q̄, where ⇒ stands for weak convergence of distributions. Let h be a bounded continuous function on X. Then

$$
\Bigl|\int_X h(x)\,dQ_n(x)-\int_X h(x)\,dQ(x)\Bigr|\le\Delta_1+\Delta_2,
$$

where

$$
\Delta_1=\Bigl|\int_X h(x)\,q_n\,d\bar Q_n(x)-\int_X h(x)\,q_n\,d\bar Q(x)\Bigr|=q_n\Bigl|\int h\,d\bar Q_n-\int h\,d\bar Q\Bigr|\to 0,
$$

$$
\Delta_2=\Bigl|\int_X h(x)\,q_n\,d\bar Q(x)-\int_X h(x)\,q\,d\bar Q(x)\Bigr|\le|q_n-q|\int_X|h|\,d\bar Q\to 0,
$$

as n → ∞.

e) can be proved in a similar way. Property f) follows from the relation

$$
(aQ)F^{-1}=a\,(QF^{-1}),
$$

valid for all a > 0. To prove h), without loss of generality we suppose that q ≤ p. By the definition and the assumption,

$$
d_m(Q_1,Q_2)=|p-q|+m(\bar Q_1,\bar Q_2)\,q\le\|Q_1-Q_2\|_{\mathrm{var}}+q\|\bar Q_1-\bar Q_2\|_{\mathrm{var}}.
$$

It is easy to note that

$$
q\|\bar Q_1-\bar Q_2\|_{\mathrm{var}}
=q\sum_k\Bigl|\frac{p_k}{p}-\frac{q_k}{q}\Bigr|
\le q\sum_k\Bigl|\frac{p_k}{p}-\frac{p_k}{q}\Bigr|+\sum_k|p_k-q_k|
=|p-q|+\sum_k|p_k-q_k|\le 2\|Q_1-Q_2\|_{\mathrm{var}}.
$$


The proposition is proved.

The next proposition is close in spirit to the estimates given in Byczkowski et al. (1993). As in property e), π stands for the Prokhorov metric. By ω_h we denote the modulus of continuity of a function h.

Proposition 2. Let Q_1, Q_2 ∈ M, q_i = Q_i(X), q_1 < q_2. Let h be a real-valued continuous function on X, 0 ≤ h(x) ≤ 1 for all x. Then

$$
\Delta:=\Bigl|\int_X h\,dQ_1-\int_X h\,dQ_2\Bigr|
\le d_\pi(Q_1,Q_2)+q_1\,\omega_h\bigl(q_1^{-1}d_\pi(Q_1,Q_2)\bigr). \tag{7}
$$

Proof. By Lemma 1 from Davydov (1997) we have

$$
\Delta\le\Bigl|\int_X h\,q_1\,d\bar Q_1-\int_X h\,q_1\,d\bar Q_2\Bigr|
+|q_1-q_2|\,\Bigl|\int_X h\,d\bar Q_2\Bigr|
\le(q_2-q_1)+\pi(\bar Q_1,\bar Q_2)\,q_1+\omega_{q_1 h}\bigl(\pi(\bar Q_1,\bar Q_2)\bigr)
=d_\pi(Q_1,Q_2)+q_1\,\omega_h\bigl(\pi(\bar Q_1,\bar Q_2)\bigr).
$$

Since π(Q̄_1, Q̄_2) ≤ q_1^{−1} d_π(Q_1, Q_2), we get (7), and the proof is complete.

Remark 1. If Q_1, Q_2 ∈ Π, then (7) coincides with an estimate of Lemma 1 from Davydov (1997).

Remark 2. If the function h is Lipschitz, i.e., |h(x) − h(y)| ≤ Cρ(x, y), then

$$
\Delta\le(1+C)\,d_\pi(Q_1,Q_2). \tag{8}
$$

If the function h has a modulus of continuity satisfying ω_h(δ) ≤ Cδ^α, 0 < α < 1, then

$$
\Delta\le d_\pi^{\alpha}(Q_1,Q_2)\bigl[C q_1^{1-\alpha}+d_\pi^{1-\alpha}(Q_1,Q_2)\bigr]
$$

or, since d_π(Q_1, Q_2) ≤ q_2,

$$
\Delta\le d_\pi^{\alpha}(Q_1,Q_2)\bigl(C q_1^{1-\alpha}+q_2^{1-\alpha}\bigr). \tag{9}
$$

Proposition 3. Let ν_1 and ν_2 be two measures on S^{d−1}, q_i = ν_i(S^{d−1}), and let θ_0 ∈ S^{d−1}. Then

$$
\Delta:=\Bigl|\int_{S^{d-1}}|\langle\theta_0,\theta\rangle|^{\alpha}\,\nu_1(d\theta)
-\int_{S^{d-1}}|\langle\theta_0,\theta\rangle|^{\alpha}\,\nu_2(d\theta)\Bigr|
\le
\begin{cases}
(q_1^{1-\alpha}+q_2^{1-\alpha})\bigl(d_\pi(\nu_1,\nu_2)\bigr)^{\alpha}, & 0<\alpha<1,\\[0.5ex]
(1+\alpha)\,d_\pi(\nu_1,\nu_2), & 1\le\alpha<2.
\end{cases}
\tag{10}
$$


Proof. We apply Proposition 2 with X = S^{d−1}, Q_i = ν_i, and h(θ) = |⟨θ_0, θ⟩|^α. If 0 < α < 1, then for all a, b > 0 we have |a^α − b^α| ≤ |a − b|^α, therefore ω_h(δ) ≤ δ^α. If 1 ≤ α < 2, using the inequality |a^α − b^α| ≤ α|a − b|, valid for 0 ≤ a, b ≤ 1, we get ω_h(δ) ≤ αδ. Now from (8) and (9) we get (10), and the proposition is proved.

3. ESTIMATES OF THE DIFFERENCE OF TWO STABLE DENSITIES

Now we are prepared to estimate the quantity given in (2). We subdivide this procedure into two steps: first we compare densities having the same exponent α but different spectral measures, and then we estimate the difference of densities with different exponents but with the same spectral measure.

Let g_i(x) = g_{α,σ_i,Γ_i}(x), i = 1, 2, and, for simplicity of writing, let ν_i = σ_i^α Γ_i, q_i = σ_i^α (= ν_i(S^{d−1})). We consider symmetric measures Γ_i, i = 1, 2, for the following reason: in the non-symmetric case the parameterization of multivariate stable laws is not continuous with respect to α in the neighborhood of 1 (due to the presence of the function tan(πα/2) in the expression of the ch.f.), and we would be forced to impose some technical conditions. Denoting

$$
I(t,\nu,\alpha)=\int_{S^{d-1}}|\langle t,s\rangle|^{\alpha}\,\nu(ds),
$$

we introduce the functions I_i(t) = I(t, ν_i, α), i = 1, 2. For t ∈ R^d we denote θ_t = t/‖t‖ ∈ S^{d−1}. Then I_i(t) = ‖t‖^α I_i(θ_t). We assume that both stable measures are non-degenerate, i.e.

$$
\min_{\theta\in S^{d-1}}I_i(\theta)>0,\qquad i=1,2. \tag{11}
$$

As usual, a ∧ b = min{a, b}, a ∨ b = max{a, b}.

Now we formulate the first estimate, comparing the density functions g_i of two stable random vectors with the same exponent α.

Theorem 1. Suppose that the spectral measures ν_i, i = 1, 2, satisfy (11). Then

$$
\delta\equiv\delta(g_1,g_2)=\sup_{x\in\mathbb{R}^d}|g_1(x)-g_2(x)|
\le C(\alpha,d)\,K\,L\,\bigl(d_\pi(\nu_1,\nu_2)\bigr)^{\beta}, \tag{12}
$$

where β = 1 ∧ α,

$$
C(\alpha,d)=2^{1-d}\alpha^{-1}\pi^{-d/2}\,\Gamma\Bigl(\frac{d}{\alpha}+1\Bigr)\Bigl(\Gamma\Bigl(\frac{d}{2}\Bigr)\Bigr)^{-1}, \tag{13}
$$

$$
K=K(\nu_1,\nu_2)=\bigl[(1\vee\alpha)(q_1\wedge q_2)^{1-\beta}+(q_1\vee q_2)^{1-\beta}\bigr], \tag{14}
$$

$$
L=L(\nu_1,\nu_2)=\Bigl(\min_{\theta\in S^{d-1}}\min\{I_1(\theta),I_2(\theta)\}\Bigr)^{-\frac{d}{\alpha}-1}.
$$

Here Γ is the well-known gamma-function.

Remark. In Byczkowski et al. (1993) a similar result was proved in the case where ν_2 is a discrete measure constructed from ν_1.
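Since the constants in Theorem 1 are fully explicit, the bound (12) can be evaluated numerically. The sketch below does this for given α, d, total masses q_i, a lower bound on the functions I_i over the sphere, and the distance d_π(ν_1, ν_2); the function name and its argument list are illustrative.

```python
import math

def theorem1_bound(alpha, d, q1, q2, I_min, d_pi):
    """Right-hand side of (12): C(alpha, d) * K * L * d_pi**beta.

    I_min is the minimum over the sphere of min(I_1, I_2), assumed
    positive by (11); d_pi is the distance d_pi(nu_1, nu_2)."""
    beta = min(1.0, alpha)
    C = (2 ** (1 - d) / alpha * math.pi ** (-d / 2)
         * math.gamma(d / alpha + 1) / math.gamma(d / 2))      # constant (13)
    K = (max(1.0, alpha) * min(q1, q2) ** (1 - beta)
         + max(q1, q2) ** (1 - beta))                          # constant (14)
    L = I_min ** (-d / alpha - 1)                              # the factor L
    return C * K * L * d_pi ** beta
```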


Proof. By the inversion formula for a ch.f. we have

$$
\delta\le(2\pi)^{-d}\int_{\mathbb{R}^d}\bigl|\exp\{-\|t\|^{\alpha}I_1(\theta_t)\}-\exp\{-\|t\|^{\alpha}I_2(\theta_t)\}\bigr|\,dt. \tag{15}
$$

Making the change of variables t → (r, θ), r = ‖t‖, θ = t/‖t‖ ∈ S^{d−1}, we get

$$
\delta\le(2\pi)^{-d}\int_{S^{d-1}}\chi_{d-1}(d\theta)\int_{\mathbb{R}_+}r^{d-1}\bigl|e^{-r^{\alpha}I_1(\theta)}-e^{-r^{\alpha}I_2(\theta)}\bigr|\,dr, \tag{16}
$$

where χ_{d−1} is the normalized Haar measure on S^{d−1}. Let m = m(θ) = I_1(θ) ∧ I_2(θ), M = M(θ) = I_1(θ) ∨ I_2(θ). Then

$$
J(\theta)=\int_{\mathbb{R}_+}r^{d-1}\bigl|e^{-r^{\alpha}I_1(\theta)}-e^{-r^{\alpha}I_2(\theta)}\bigr|\,dr
=\frac{1}{\alpha}\int_{\mathbb{R}_+}r^{\frac{d}{\alpha}-1}e^{-rm}\bigl|1-e^{-r(M-m)}\bigr|\,dr. \tag{17}
$$

Taking into account the elementary inequality |1 − e^{−x}| ≤ |x| for x > 0, we easily get

$$
J(\theta)\le\frac{1}{\alpha}(M-m)\,m^{-\frac{d}{\alpha}-1}\,\Gamma\Bigl(\frac{d}{\alpha}+1\Bigr). \tag{18}
$$

Since M − m = |I_1(θ) − I_2(θ)|, applying Proposition 3, inequality (10), we get

$$
|I_1(\theta)-I_2(\theta)|\le K\bigl(d_\pi(\nu_1,\nu_2)\bigr)^{\beta}, \tag{19}
$$

with K given in (14). Collecting the inequalities (15)–(19) we get (12), and Theorem 1 is proved.

Corollary 1. Let g_n, n ≥ 1, be a sequence of stable densities with exponent α and spectral measures ν_n. Suppose that ν_n ⇒ ν, and let g be the stable density corresponding to ν. Then

$$
\sup_x|g_n(x)-g(x)|\to 0.
$$

Proof. From Proposition 1, e), it follows that d_π(ν_n, ν) → 0; therefore it is sufficient to verify that the constants K_n = K(ν_n, ν) defined in (14) are bounded.

Corollary 2. If ν_i = σ^α Γ_i, with Γ_i probability measures on S^{d−1}, then

$$
\sup_x|g_1(x)-g_2(x)|\le C(\alpha,d,\sigma)\,L_1^{-\frac{d}{\alpha}-1}\bigl(\pi(\Gamma_1,\Gamma_2)\bigr)^{\beta}, \tag{20}
$$

where β = 1 ∧ α, C(α, d, σ) = C(α, d)(2 ∨ (1 + α))σ^{−d},

$$
L_1=L_1(\Gamma_1,\Gamma_2)=\min_{\theta\in S^{d-1}}\min\bigl(I(\theta,\Gamma_1,\alpha),I(\theta,\Gamma_2,\alpha)\bigr),
$$

and C(α, d) is defined in (13).


Corollary 3. If ν_i = σ_i^α Γ, then

$$
\sup_x|g_1(x)-g_2(x)|\le C(\alpha,d)\,\widetilde K(\nu_1,\nu_2)\,L_2\,|\sigma_1^{\alpha}-\sigma_2^{\alpha}|^{\beta}, \tag{21}
$$

where

$$
\widetilde K(\nu_1,\nu_2)=K(\nu_1,\nu_2)\cdot(\sigma_1\wedge\sigma_2)^{-d-\alpha},\qquad
L_2=\Bigl(\min_{\theta\in S^{d-1}}I(\theta,\Gamma,\alpha)\Bigr)^{-\frac{d}{\alpha}-1}.
$$

Now we consider two stable densities with the same spectral measure ν but having different exponents. Let ϕ_i(x) = g_{α_i,σ_i,Γ}(x), where Γ is a symmetric probability measure and q = σ_1^{α_1} = σ_2^{α_2}. Again let ν = σ_1^{α_1}Γ, and let us denote

$$
J_i(t)=I(t,\nu,\alpha_i).
$$

We assume that the linear support (in R^d) of the measure ν is not concentrated in a proper subspace of R^d; then

$$
\min_{\theta\in S^{d-1}}J_i(\theta)>0,\qquad i=1,2.
$$

Theorem 2. If 0 < α_i < 2, i = 1, 2, then

$$
\Delta=\Delta(\varphi_1,\varphi_2)=\sup_x|\varphi_1(x)-\varphi_2(x)|\le C(\alpha_1,\alpha_2,\nu,d)\,|\alpha_2-\alpha_1|, \tag{22}
$$

where C(α_1, α_2, ν, d) is given in (27).

Proof. Similarly to the proof of Theorem 1, using the relation J_i(t) = ‖t‖^{α_i}J_i(θ_t), we have

$$
\Delta\le(2\pi)^{-d}\int_{\mathbb{R}^d}\bigl|\exp\{-\|t\|^{\alpha_1}J_1(\theta_t)\}-\exp\{-\|t\|^{\alpha_2}J_2(\theta_t)\}\bigr|\,dt. \tag{23}
$$

Now

$$
V=\bigl|\exp\{-\|t\|^{\alpha_1}J_1(\theta_t)\}-\exp\{-\|t\|^{\alpha_2}J_2(\theta_t)\}\bigr|\le V_1+V_2, \tag{24}
$$

where

$$
V_1=\exp\{-\|t\|^{\alpha_1}J_1(\theta_t)\}\bigl|\exp\{-\|t\|^{\alpha_1}(J_2(\theta_t)-J_1(\theta_t))\}-1\bigr|,
$$

$$
V_2=\exp\{-\|t\|^{\alpha_2}J_2(\theta_t)\}\bigl|1-\exp\{-(\|t\|^{\alpha_1}-\|t\|^{\alpha_2})J_2(\theta_t)\}\bigr|.
$$

Without loss of generality we assume that α_2 > α_1.

Using the elementary inequalities

$$
|1-a^{x}|\le a^{|x|}\,|\ln a|\,|x|,\qquad a>0,
$$

$$
|1-e^{-z}|\le|z|\,e^{|z|},
$$

we can estimate

$$
\sup_{\theta\in S^{d-1}}|J_1(\theta)-J_2(\theta)|\le C_1(\alpha_1,\alpha_2,\nu)\,|\alpha_1-\alpha_2|,
$$

$$
\bigl|\|t\|^{\alpha_2}-\|t\|^{\alpha_1}\bigr|\le\|t\|^{\alpha_2}\bigl|\ln\|t\|\bigr|\,|\alpha_2-\alpha_1|,
$$

$$
V_1\le C_1(\alpha_1,\alpha_2,\nu)\,\|t\|^{\alpha_1}|\alpha_2-\alpha_1|\exp\bigl\{-\|t\|^{\alpha_1}\bigl(J_1(\theta_t)-\bigl|J_1(\theta_t)-J_2(\theta_t)\bigr|\bigr)\bigr\},
$$

where C_1 ≡ C_1(α_1, α_2, ν) = q sup_{0<a<1} a^{α_2}|ln a|. Let us denote J_i^0 = min_{θ∈S^{d−1}} J_i(θ). We assume that |α_2 − α_1| is sufficiently small, so that

$$
C_2\equiv C_2(\alpha_1,\alpha_2,\nu)=J_1^{0}-C_1(\alpha_1,\alpha_2,\nu)\,|\alpha_2-\alpha_1|>0.
$$

Thus

$$
V_1\le C_1\|t\|^{\alpha_1}\exp\{-C_2\|t\|^{\alpha_1}\}\,|\alpha_2-\alpha_1|. \tag{25}
$$

The quantity V_2 must be estimated more carefully:

$$
V_2\le\bigl|\|t\|^{\alpha_1}-\|t\|^{\alpha_2}\bigr|\,J_2(\theta_t)\exp\bigl\{-\bigl(\|t\|^{\alpha_2}-\bigl|\|t\|^{\alpha_1}-\|t\|^{\alpha_2}\bigr|\bigr)J_2(\theta_t)\bigr\};
$$

we consider separately ‖t‖ > 1 and ‖t‖ ≤ 1. In the first case ‖t‖^{α_2} − |‖t‖^{α_1} − ‖t‖^{α_2}| = ‖t‖^{α_1}, since we assumed α_2 > α_1, and in the second case we can simply estimate

$$
\exp\bigl\{-\bigl(\|t\|^{\alpha_2}-\bigl|\|t\|^{\alpha_1}-\|t\|^{\alpha_2}\bigr|\bigr)J_2(\theta_t)\bigr\}\le C_3\exp\bigl\{-\|t\|^{\alpha_1}J_2^{0}\bigr\},
$$

with C_3 = exp{J_2^0 + q}. Therefore we get

$$
V_2\le C_4\,\|t\|^{\alpha_2}\bigl|\ln\|t\|\bigr|\exp\{-J_2^{0}\|t\|^{\alpha_1}\}\,|\alpha_2-\alpha_1| \tag{26}
$$

with C_4 = qC_3. From (23), (24), (25) and (26) we get (22) with

$$
C(\alpha_1,\alpha_2,\nu,d)=(2\pi)^{-d}\int_{\mathbb{R}^d}\Bigl\{C_1\|t\|^{\alpha_1}\exp\{-C_2\|t\|^{\alpha_1}\}
+C_4\|t\|^{\alpha_2}\bigl|\ln\|t\|\bigr|\exp\{-J_2^{0}\|t\|^{\alpha_1}\}\Bigr\}\,dt. \tag{27}
$$

It is possible to get a more concise expression for C(α_1, α_2, ν, d) by passing to polar coordinates in the integral in (27) and using the gamma function, but since the constants are not our main concern in this paper, we leave the expression as it is.

4. THE ASYMPTOTIC NORMALITY OF THE ESTIMATOR Γn

Now we consider the asymptotic properties of the estimator Γ_n for the spectral measure, given in (4). We recall that Γ in formula (1) stands for the normalized spectral measure of a multivariate stable random vector with distribution G_{α,σ,Γ}. As in the introduction, let X_j, j = 1, 2, . . . , N = n², be a sample taken from the distribution G_{α,σ,Γ} (or from a distribution which is in the domain of normal attraction of G_{α,σ,Γ}).

We define the random vectors Θ_{ni}, i = 1, 2, . . . , n, by means of (3), and for a fixed Borel set B on S^{d−1} the estimator Γ_n(B) by means of (4). We shall prove asymptotic normality for Γ_n(B) for a fixed set B. Another possibility is to consider this estimator as a sequence of processes indexed by some class of Borel sets on S^{d−1} and to try to prove a limit theorem (as n → ∞) for this sequence.


Since this approach requires a more sophisticated technique from the theory of empirical processes indexed by sets, we confine ourselves to the comparatively simple case.

Let us denote Γ(B) = a, 1_B(Θ_{ni}) = η_{ni}, i = 1, 2, . . . , n. Then

$$
Z_n:=\Gamma_n(B)-\Gamma(B)=\frac{1}{n}\sum_{i=1}^{n}(\eta_{ni}-a),
$$

$$
\sqrt{n}\,Z_n=n^{-1/2}\sum_{i=1}^{n}(\eta_{ni}-a)=n^{-1/2}\sum_{i=1}^{n}(\eta_{ni}-E\eta_{ni})+\sqrt{n}\,(E\eta_{n1}-a).
$$

Denote γ_n = Eη_{n1} − a, b_n² = E(η_{n1} − Eη_{n1})², b² = a(1 − a),

$$
S_n=n^{-1/2}\sum_{i=1}^{n}(\eta_{ni}-E\eta_{ni}),\qquad
T_n=\frac{\sqrt{n}\,Z_n}{\Bigl(n^{-1}\sum_{i=1}^{n}\eta_{ni}^{2}-\bigl(n^{-1}\sum_{i=1}^{n}\eta_{ni}\bigr)^{2}\Bigr)^{1/2}}.
$$

The assumption that X_1 belongs to the domain of normal attraction of G_{α,1,Γ} means that

$$
\lim_{s\to\infty}s^{\alpha}P\Bigl\{\frac{X_1}{\|X_1\|}\in B,\ \|X_1\|>s\Bigr\}=\Gamma(B).
$$

Now we suppose that a stronger relation holds: for sufficiently large s and for some β > 0,

$$
s^{\alpha}P\Bigl\{\frac{X_1}{\|X_1\|}\in B,\ \|X_1\|>s\Bigr\}=\Gamma(B)+O(s^{-\beta}). \tag{28}
$$

The main result of this section can be stated as follows.

Theorem 3. Suppose that (28) holds with β > α/2. Then, as n → ∞,

$$
T_n\Rightarrow N(0,1), \tag{29}
$$

where, as usual, N(0, 1) stands for the standard normal law.

Remark. In the proof we also get asymptotic normality for √n Z_n, but the variance of the limiting normal law depends on the unknown parameter Γ(B), which we are estimating.
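For a fixed set B, the statistic T_n and the resulting normal-approximation interval for Γ(B) are computed directly from the indicator values η_{ni} = 1_B(Θ_{ni}); the sketch below assumes these values are already available (for example from the estimator sketched in the introduction), and the function names are illustrative.

```python
import numpy as np

def self_normalized_statistic(eta, a):
    """T_n from Section 4 for a fixed Borel set B.

    eta : 0/1 values eta_{ni} = 1_B(Theta_{ni}), i = 1, ..., n.
    a   : the value Gamma(B) being tested.
    By Theorem 3, T_n is approximately N(0, 1) for large n."""
    eta = np.asarray(eta, dtype=float)
    n = eta.size
    gamma_nB = eta.mean()                        # Gamma_n(B)
    s2 = (eta ** 2).mean() - gamma_nB ** 2       # self-normalizing variance
    return np.sqrt(n) * (gamma_nB - a) / np.sqrt(s2)

def confidence_interval(eta, z=1.96):
    """Approximate 95% interval for Gamma(B), obtained by inverting (29)."""
    eta = np.asarray(eta, dtype=float)
    n = eta.size
    gamma_nB = eta.mean()
    half = z * np.sqrt((eta ** 2).mean() - gamma_nB ** 2) / np.sqrt(n)
    return gamma_nB - half, gamma_nB + half
```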

Proof of Theorem 3. It is easy to see that we get (29) if we prove the following relations:

$$
S_n\Rightarrow N(0,b^{2}), \tag{30}
$$

$$
\sqrt{n}\,\gamma_n\to 0, \tag{31}
$$


$$
\frac{1}{n}\sum_{i=1}^{n}\eta_{ni}^{2}-\Bigl(\frac{1}{n}\sum_{i=1}^{n}\eta_{ni}\Bigr)^{2}\xrightarrow{P}b^{2}. \tag{32}
$$

As in the proof of Theorem 9 from [DPR], (30) follows from the central limit theorem for a triangular array with i.i.d. random variables in each row and the relation

$$
b_n^{2}:=E(\eta_{n1}-E\eta_{n1})^{2}=E\eta_{n1}^{2}-(E\eta_{n1})^{2}=a+\gamma_n-(a+\gamma_n)^{2}\to b^{2},
$$

which holds if (31) holds.

Also, it is easy to see that applying the law of large numbers for triangular arrays (for details see the proof of Theorem 9 in the above-mentioned paper) we obtain (32). Thus, in order to prove the theorem it remains to establish relation (31). We shall prove a more general result.

Lemma 1. If (28) holds for some β > 0, then

$$
|\gamma_n|\le C(\alpha,\beta)\max\bigl(n^{-1},n^{-\beta/\alpha}\bigr). \tag{33}
$$

Proof of Lemma 1. In order to prove (33) we need to show that

$$
P\{\Theta_{ni}\in B\}=\Gamma(B)+R_n, \tag{34}
$$

with remainder term R_n = O(max(n^{−1}, n^{−β/α})). Let us denote

$$
G_n(x)=P\Bigl\{\max_{1\le i\le n-1}\|X_i\|\le x\Bigr\}.
$$

Using the definition of Θ_{ni}, it is not difficult to see that

$$
P\{\Theta_{ni}\in B\}=n\int_0^{\infty}P\Bigl\{\frac{X_1}{\|X_1\|}\in B,\ \|X_1\|>r\Bigr\}\,G_n(dr).
$$

Denote G̃_n(x) = G_n(x n^{1/α}). Assumption (28) implies (recall that Γ(S^{d−1}) = 1) that for large s

$$
P\{\|X_1\|>s\}=s^{-\alpha}+O(s^{-\alpha-\beta}). \tag{35}
$$

Therefore it is easy to get the relation

$$
\lim_{n\to\infty}\widetilde G_n(x)=G_0(x)=
\begin{cases}
e^{-x^{-\alpha}}, & x>0,\\
0, & x\le 0.
\end{cases}
$$

Using (28) and the fact that ∫_0^∞ y^{−α} dG_0(y) = 1, we have (34) with R_n = Σ_{i=1}^{4} R_{ni}, where

$$
R_{n1}=n\int_0^{s_0}P\bigl\{X_1\|X_1\|^{-1}\in B,\ \|X_1\|>r\bigr\}\,dG_n(r),
$$

$$
R_{n2}=-\Gamma(B)\int_0^{\bar s_0}y^{-\alpha}\,dG_0(y),
$$

$$
R_{n3}=\Gamma(B)\int_{\bar s_0}^{\infty}y^{-\alpha}\,d\bigl(\widetilde G_n(y)-G_0(y)\bigr),
$$

$$
R_{n4}=C n^{-\beta/\alpha}\int_{\bar s_0}^{\infty}y^{-\alpha-\beta}\,d\widetilde G_n(y).
$$


Here s̄_0 = s_0 n^{−1/α}, and we shall choose s_0 later. It is easy to see that

$$
R_{n1}\le n\bigl(1-P\{\|X_1\|>s_0\}\bigr)^{n}=n(1-rn^{-1})^{n}\le n e^{-\frac{1}{2}r},
$$

where r = r(n, s_0) = nP{‖X_1‖ > s_0} ≥ ½ n s_0^{−α}. For the last inequality we have used (35). Thus, if we choose

$$
s_0=\Bigl(\frac{n}{K\ln n}\Bigr)^{1/\alpha}
$$

with sufficiently large K, then we get

$$
R_{n1}=o(n^{-1}). \tag{36}
$$

Simple calculations show that

$$
R_{n2}=o(n^{-1}). \tag{37}
$$

The main remainder term is R_{n3}, and to estimate it we must first estimate the difference G̃_n(y) − G_0(y). Although there are papers devoted to rates of convergence in limit theorems for maxima of independent random variables, we did not try to apply known results: a rather simple expansion of the logarithm gives the following estimates, which are sufficient for our purposes.

Lemma 2. Let X_i, i ≥ 1, be i.i.d. random vectors satisfying (35). Then for y ≥ cn^{−1/α}

$$
|\widetilde G_n(y)-G_0(y)|\le C(\alpha,\beta)\,e^{-y^{-\alpha}}\bigl(n^{-\beta/\alpha}y^{-\alpha-\beta}+n^{-1}y^{-2\alpha}\bigr) \tag{38}
$$

and

$$
\sup_{y}|\widetilde G_n(y)-G_0(y)|\le C(\alpha,\beta)\max\bigl(n^{-1},n^{-\beta/\alpha}\bigr). \tag{39}
$$

Proof of Lemma 2. To prove (38) we write

$$
\ln\widetilde G_n(y)=n\ln\bigl(1-P\{\|X_1\|>n^{1/\alpha}y\}\bigr),
$$

then use (35) and the expansion of the logarithm. After rather simple calculations we get (38). Since for 0 < y < cn^{−1/α} both terms G̃_n(y) and G_0(y) are of order o(n^{−1}), (39) follows from (38). Lemma 2 is proved.

Now we can estimate the term R_{n3}. Integrating by parts, we get

$$
|R_{n3}|\le\Gamma(B)\bigl(R^{(1)}_{n3}+R^{(2)}_{n3}\bigr), \tag{40}
$$

where

$$
R^{(1)}_{n3}=\bar s_0^{-\alpha}\,|\widetilde G_n(\bar s_0)-G_0(\bar s_0)|,\qquad
R^{(2)}_{n3}=\alpha\int_{\bar s_0}^{\infty}|\widetilde G_n(y)-G_0(y)|\,y^{-\alpha-1}\,dy.
$$


Since s̄_0 = (K ln n)^{−1/α} ≥ Cn^{−1/α}, we can use (38) to estimate both quantities R^{(j)}_{n3}, j = 1, 2. After simple calculations we get

$$
R^{(1)}_{n3}=o(n^{-1}),\qquad
R^{(2)}_{n3}\le C(\alpha,\beta)\max\bigl(n^{-1},n^{-\beta/\alpha}\bigr). \tag{41}
$$

Here it is appropriate to mention that if we used (39) instead of (38) when estimating R^{(2)}_{n3}, we would get an additional factor ln n.

In a similar way we estimate R_{n4}:

$$
R_{n4}=c\,n^{-\beta/\alpha}\int_{\bar s_0}^{\infty}y^{-\alpha-\beta}\,d\widetilde G_n(y)
=c\,n^{-\beta/\alpha}\bigl(R^{(1)}_{n4}+R^{(2)}_{n4}\bigr), \tag{42}
$$

where

$$
R^{(1)}_{n4}=\int_{\bar s_0}^{\infty}y^{-\alpha-\beta}\,dG_0(y),\qquad
R^{(2)}_{n4}=\int_{\bar s_0}^{\infty}y^{-\alpha-\beta}\,d\bigl(\widetilde G_n(y)-G_0(y)\bigr).
$$

It is easy to see that

$$
R^{(1)}_{n4}\le C(\alpha,\beta) \tag{43}
$$

and R^{(2)}_{n4} can be estimated in the same way as R_{n3}:

$$
R^{(2)}_{n4}\le C(\alpha,\beta)\max\bigl(n^{-1},n^{-\beta/\alpha}\bigr). \tag{44}
$$

Collecting the estimates (36), (37), (40), (41)–(44) we get (33). Lemma 1 is proved, and since (31) follows from (33) if β > α/2, Theorem 3 is also proved.

5. CONSTRUCTION AND THE PROPERTIES OF AN ESTIMATOR FOR A STABLE DENSITY

Now we have all the ingredients which are theoretically necessary to estimate the density of a stable random vector. We may assume that from a sample we can construct estimators for the parameters α, σ, Γ, say α_n, σ_n, Γ_n. Then Theorem 1, its corollaries and Theorem 2 allow us to estimate the quantity (2). Still, there remains the practical question of how to construct a stable density with the estimated parameters. It was stressed in Byczkowski et al. (1993) and Nolan (1998) that only in the case of a discrete spectral measure is the construction of the density computationally feasible. On the other hand, in the first of these papers it was demonstrated that in a sense discrete spectral measures are "dense": for any spectral measure Γ one can find a discrete spectral measure such that the corresponding stable densities are arbitrarily close in the uniform distance.

In this section we adapt the proposed estimator of the spectral measure to estimate the stable density, and we assume that d ≥ 2. As earlier, let X_1, . . . , X_N be a sample from a stable


distribution G_{α,σ,Γ} and let Γ_n be the estimator of Γ defined in (4). Now we define a partition of S^{d−1} as a collection of sets {A_j, j = 1, . . . , m}, ∪_j A_j = S^{d−1}, A_j ⊂ S^{d−1}, A_k ∩ A_j = ∅ for k ≠ j, diam(A_j) < C(d)m^{−1/(d−1)}. Let x_j ∈ A_j, j = 1, . . . , m, be such that A_j ⊂ V(x_j, C(d)m^{−1/(d−1)}), where V(x, r) stands for the open ball in R^d with center x and radius r. Let Π_m : S^{d−1} → S^{d−1}, Π_m(x) = Σ_{j=1}^{m} x_j 1_{A_j}(x), and ν_m = νΠ_m^{−1}, Γ_m = ΓΠ_m^{−1}. As an estimator for Γ_m we take the discrete measure with mass Γ_n(A_j) at the point x_j, j = 1, . . . , m, and we denote this measure by Γ_{n,m}. Let ν_{n,m} = σ_n^α Γ_{n,m}. The measure ν_{n,m} is discrete and we take it as an estimator for the spectral measure ν; that is, in (2) as the estimator for g we take g_{α_n,ν_{n,m}}. To construct this density we can use the algorithm given in Nolan (1998) (see formula (4.1) therein). Denoting by ‖·‖_∞ the uniform norm, ‖f‖_∞ = sup_x |f(x)|, we have

$$
\|g_{\alpha,\sigma,\Gamma}-g_{\alpha_n,\nu_{n,m}}\|_{\infty}
\le\|g_{\alpha,\nu}-g_{\alpha_n,\nu}\|_{\infty}+\|g_{\alpha_n,\nu}-g_{\alpha_n,\nu_{n,m}}\|_{\infty}. \tag{45}
$$

Applying Theorems 1 and 2 we have

$$
\|g_{\alpha,\nu}-g_{\alpha_n,\nu}\|_{\infty}=O\bigl(|\alpha-\alpha_n|\bigr), \tag{46}
$$

$$
\|g_{\alpha_n,\nu}-g_{\alpha_n,\nu_{n,m}}\|_{\infty}
=O\Bigl(\bigl(|\sigma^{\alpha}-\sigma_n^{\alpha}|+\pi(\Gamma,\Gamma_{n,m})\bigr)^{\beta}\Bigr), \tag{47}
$$

where β = min(1, α_n). Although in this paper we do not deal with the asymptotic properties of α − α_n and σ^α − σ_n^α, we now estimate the quantity π(Γ, Γ_{n,m}), which constitutes the main term in the estimate of the quantity ‖g_{α,σ,Γ} − g_{α_n,ν_{n,m}}‖_∞.
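To illustrate the discretization, consider d = 2, where the partition {A_j} can be taken to consist of m equal arcs of the unit circle. The sketch below turns the directions Θ_{ni} into the discrete measure Γ_{n,m}, i.e. the masses Γ_n(A_j) placed at the arc midpoints x_j; the equal-arc partition and the function name are illustrative assumptions.

```python
import numpy as np

def discretize_spectral_measure(Theta, m):
    """Discrete estimator Gamma_{n,m} for d = 2.

    Theta : array of shape (n, 2) of unit vectors Theta_{ni} from (3).
    m     : number of arcs A_j partitioning the unit circle.
    Returns (points, masses): midpoints x_j of the arcs and the empirical
    masses Gamma_n(A_j)."""
    angles = np.arctan2(Theta[:, 1], Theta[:, 0]) % (2 * np.pi)
    edges = np.linspace(0.0, 2 * np.pi, m + 1)

    counts, _ = np.histogram(angles, bins=edges)   # how many Theta_{ni} per arc
    masses = counts / len(Theta)                   # Gamma_n(A_j)

    mid = 0.5 * (edges[:-1] + edges[1:])           # arc midpoints
    points = np.column_stack((np.cos(mid), np.sin(mid)))
    return points, masses
```

The pair (points, masses), together with estimates of α and σ, is the kind of discrete input required by density algorithms of the type discussed in Nolan (1998).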

Theorem 4. Let Γ be a symmetric spectral measure in the representation (1), and let assumption (28) hold with β > α/2. Then for any 0 < γ < 1/(2d), with m = n^{γ(d−1)}, we have with probability 1, for all sufficiently large n,

$$
\pi\bigl(\Gamma,\Gamma_{n,m}\bigr)\le n^{-\gamma}. \tag{48}
$$

Proof of Theorem 4. From the definition of the metric π and the construction of the map Π_m we have

$$
\pi(\Gamma,\Gamma_m)\le C(d)\,m^{-\frac{1}{d-1}}. \tag{49}
$$

For any two probability measures µ_1 and µ_2 we have π(µ_1, µ_2) ≤ ‖µ_1 − µ_2‖_var, and for discrete measures with the same support ‖µ_1 − µ_2‖_var = Σ_k |p_k − q_k|, where p_k and q_k are the masses of µ_1 and µ_2, respectively. Therefore, taking into account that both measures Γ_m and Γ_{n,m} are discrete with support {x_j, j = 1, . . . , m}, and denoting p_{nj} = E 1_{A_j}(Θ_{ni}), j = 1, . . . , m, we have the following estimates:

$$
\begin{aligned}
P\bigl\{\pi(\Gamma_m,\Gamma_{n,m})>\delta\bigr\}
&\le P\bigl\{\|\Gamma_m-\Gamma_{n,m}\|_{\mathrm{var}}>\delta\bigr\}
=P\Bigl\{\sum_{j=1}^{m}|\Gamma_n(A_j)-\Gamma(A_j)|>\delta\Bigr\}\\
&\le\sum_{j=1}^{m}P\bigl\{|\Gamma_n(A_j)-\Gamma(A_j)|>\delta m^{-1}\bigr\}\\
&\le\sum_{j=1}^{m}P\bigl\{|\Gamma_n(A_j)-p_{nj}|>\delta m^{-1}-|p_{nj}-\Gamma(A_j)|\bigr\}\\
&\le\sum_{j=1}^{m}P\bigl\{|\Gamma_n(A_j)-p_{nj}|>\delta m^{-1}-C\max\bigl(n^{-1},n^{-\beta/\alpha}\bigr)\bigr\}.
\end{aligned}
\tag{50}
$$

For the last estimate we used (33). Now we set m = n^{γ(d−1)}, δ = n^{−γ} with 0 < γ < 1/(2d), and since β > α/2, from (50) we get

$$
P\bigl\{\pi(\Gamma_m,\Gamma_{n,m})>n^{-\gamma}\bigr\}
\le\sum_{j=1}^{m}P\Bigl\{|\Gamma_n(A_j)-p_{nj}|>\tfrac{1}{2}n^{-d\gamma}\Bigr\}. \tag{51}
$$

Since Γ_n(A_j) is a sum of i.i.d. random variables, we can apply a moment inequality (Theorem 2.10 from Petrov (1995)). We get that for any integer k there exists a constant C(k), depending only on k, such that

$$
P\bigl\{|\Gamma_n(A_j)-p_{nj}|>\delta\bigr\}\le C(k)\,\delta^{-2k}n^{-k}p_{nj}. \tag{52}
$$

From (51), (52) and the relation Σ_{j=1}^{m} p_{nj} = 1, we get that for any 0 < γ < 1/(2d) and for any integer k

$$
P\bigl\{\pi(\Gamma_m,\Gamma_{n,m})>n^{-\gamma}\bigr\}\le C(k)\,n^{-k(1-2d\gamma)}.
$$

Applying the Borel–Cantelli lemma we conclude that for any 0 < γ < 1/(2d), with probability 1, for all sufficiently large n,

$$
\pi(\Gamma_m,\Gamma_{n,m})\le n^{-\gamma}. \tag{53}
$$

(49) and (53) prove the theorem.

Usually the estimators α_n and σ_n^α have a √n convergence rate; then from (45)–(47) and Theorem 4 we get that with probability one the rate of convergence to zero of the quantity ‖g_{α,σ,Γ} − g_{α_n,ν_{n,m}}‖_∞ is close to n^{−β/(2d)} with β = min(1, α). Taking into account that n = √N, where N is the size of the initial sample, we see that this rate is rather slow. At present we do not know the right order of convergence to zero of the proposed estimator, and it seems that this is not an easy problem.

Another problem is how restrictive the assumption (28) is. Even if X_1 is a multivariate stable vector, it is not known whether (28) holds in the general case. This can be explained by the fact that the expansions of multivariate stable densities g_{α,σ,Γ}(x) have not been investigated to the same extent as in the one-dimensional case. There is a paper of Arkhipov (1989), where asymptotic expansions of g(x) are given, but under the additional assumption that the spectral measure Γ has a density which is itself sufficiently smooth. Thus, under this additional assumption, from the results of Arkhipov one can derive the relation (28) with β = α.

The last remark concerns the assumption of symmetry of the spectral measure. Since this assumption is caused by the discontinuity of the parameterization (1) of stable laws at the point α = 1, it is easy to see that, restricting the values of α to lie outside an interval (1 − ε, 1 + ε), we can obtain all the formulated results without the symmetry assumption, but with a possible dependence of the constants on ε. Most probably, a better way to overcome this difficulty would be to use another parameterization of multivariate stable laws.

REFERENCES

Arkhipov S. V. (1989), The density functions asymptotic representation in the case of multidimensional strictly stable distributions, in: Stability Problems for Stochastic Models, Lecture Notes in Math., 1412, Springer, Berlin.

Byczkowski T., Nolan J. P., and Rajput B. (1993), Approximation of Multidimensional Stable Densities, J. of Mult. Anal., 46(1), 13–31.

Cheng B. N., and Rachev S. T. (1995), Multivariate Stable Future Prices, Math. Finance, 5, 133–153.

Davydov Yu. (1997), On the rate of strong convergence for convolutions, J. Math. Sciences, 83, 3, 393–396.

Davydov Yu., Nagaev A. V. (1999), Theoretical aspects of simulation of random vectors having a symmetric stable distribution, submitted to J. Multivar. Anal.

Davydov Yu., Paulauskas V., and Rackauskas A. (1999), More on p-stable convex sets in Banach spaces, to appear in J. Theor. Probab.

Nolan J. P. (1998), Multivariate stable distributions: approximation, estimation, simulation and identification, in: R. Adler, R. E. Feldman and M. Taqqu (eds), A Practical Guide to Heavy Tails, p. 509–525, Birkhäuser, Boston.

Petrov V. V. (1995), Limit Theorems of Probability Theory. Sequences of Independent Random Variables. Clarendon Press, Oxford.

Samorodnitsky G., Taqqu M. S. (1994), Stable Non-Gaussian Random Processes. Stochastic Models with Infinite Variance, Chapman & Hall, New York, London.
