ASYMPTOTIC EXPANSIONS IN THE CENTRAL LIMIT THEOREM IN HILBERT SPACE
V. Bentkus UDC 519.214.5
Let S_n be the sum of n independent random elements of the Hilbert space H, and let f: H → R¹ be a sufficiently smooth function whose derivatives grow no faster than exponentials. In this paper we find asymptotic expansions for Ef(S_n). These expansions imply asymptotic expansions for probabilities of the form P{w(S_n) < r}, where the function w: H → R¹ is a polynomial of the second degree.
1. Basic Results
All results of the paper are obtained for differently distributed summands (cf. Secs. 2, 3, 4). Here we give formulations for identically distributed summands.
We introduce some notation: H is a real separable Hilbert space with scalar product (·, ·) and norm |·|; X, X_1, ..., X_n are independent identically distributed random elements of the space H, EX = 0, E|X|² = 1; Y, Y_1, ..., Y_n are independent identically distributed centered Gaussian random elements, while the covariances of X and Y coincide, cov X = cov Y; S_n = (X_1 + ... + X_n)/√n; X^{√n} = X 1{|X| ≤ √n}, X_{√n} = X 1{|X| > √n} is the truncation of X at level √n; S_n^{√n} = (X_1^{√n} + ... + X_n^{√n})/√n.

Further, d_h^l f(x) = f^{(l)}(x) h^l is the l-fold derivative of the function f: H → R¹ at the point x ∈ H in the direction h ∈ H, and ‖f^{(l)}(x)‖ is the norm of the multilinear form f^{(l)}(x). If the function f is differentiable a sufficient number of times, then, as is familiar, d_{h_1} d_{h_2} f(x) = d_{h_2} d_{h_1} f(x). Hence, if we are given a polynomial P(z_1, ..., z_n) with real (or complex) coefficients in n formal commuting variables z_1, ..., z_n, then the operator P(d_{h_1}, ..., d_{h_n}) is well defined in the natural sense.
We recall that the Edgeworth-Cramér polynomials P̃_k(m_2, ..., m_{k+2}) of the formal variables m_2, ..., m_{k+2} are defined by means of the expansion in formal power series

exp { t^{-2} [ ln ( 1 + Σ_{j=2}^∞ m_j t^j / j! ) - m_2 t² / 2 ] } = Σ_{k=0}^∞ P̃_k(m_2, ..., m_{k+2}) t^k

(cf. [1]). We transform the polynomials P̃_k into new polynomials P_k by the substitution m_{l_1} ⋯ m_{l_s} → z_1^{l_1} ⋯ z_s^{l_s}. Of the information about P_k we need only the fact that

P_k(z_1, ..., z_k) = Σ a_{k, l_1, ..., l_s} z_1^{l_1} ⋯ z_s^{l_s},

where the collection of nonzero coefficients a_{k, l_1, ..., l_s} depends only on k; here a_{k, l_1, ..., l_s} can be nonzero only if l_1 + ... + l_s ≤ k + 2s, k + 2 ≥ l_1 ≥ ... ≥ l_s ≥ 2. This becomes clear in the course of the proof of Theorem 1.1. In particular,

P_0 = 1,  P_1 = z_1³/6,  P_2 = z_1⁴/24 - z_1² z_2²/8 + z_1³ z_2³/72.
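The first two of these polynomials can be checked directly from the defining expansion: writing u = Σ_{j≥2} m_j t^j/j! and using ln(1 + u) = u - u²/2 + O(u³), one finds (a routine verification of the coefficients quoted above, carried out in the m-variables before the substitution):

```latex
t^{-2}\Bigl[\ln\Bigl(1+\sum_{j\ge 2}\frac{m_j t^j}{j!}\Bigr)-\frac{m_2 t^2}{2}\Bigr]
  = \frac{m_3}{6}\,t+\Bigl(\frac{m_4}{24}-\frac{m_2^2}{8}\Bigr)t^2+O(t^3),
\qquad\text{hence}\qquad
\tilde P_1=\frac{m_3}{6},\quad
\tilde P_2=\frac{m_4}{24}-\frac{m_2^2}{8}+\frac{m_3^2}{72},
```

after exponentiating the series; the substitution m_{l_1} ⋯ m_{l_s} → z_1^{l_1} ⋯ z_s^{l_s} then yields the expressions for P_1 and P_2 given above.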
On the function f: H → R¹ we shall impose the conditions (here c_1, c_2, γ, a ≥ 0, α < 1/10)

|f(x)| ≤ c_1 + c_2 |x|^γ,   (1.1)

‖f^{(k)}(x)‖ ≤ a e^{α|x|},  k = 1, ..., m.   (1.2)
Throughout this paper (if nothing is said to the contrary), we shall assume that all random elements which occur are independent.
THEOREM 1.1. Let the integer p ≥ 2, E|X|^p < ∞, and let the function f satisfy (1.1) with γ = p and (1.2) with m = 3p - 3. Then
Institute of Mathematics and Cybernetics, Academy of Sciences of the Lithuanian SSR. Translated from Litovskii Matematicheskii Sbornik (Lietuvos Matematikos Rinkinys), Vol. 24, No. 3, pp. 29-50, July-September, 1984. Original article submitted June 17, 1983.

0363-1672/84/2403-0210$08.50 © 1985 Plenum Publishing Corporation
Ef(S_n) = Ef(Y) + Σ_{k=1}^{p-2} n^{-k/2} E P_k(d_{X_1}, ..., d_{X_k}) f(Y) + r,

where the remainder r admits the estimate

|r| ≤ c a n^{-(p-2)/2} { (1 + c_2 E|X|^p) E|X_{√n}|^p + E|X^{√n}|^{p+1}/√n },   (1.3)

the constant c = c(α, p, c_1, c_2); moreover,

|E P_k(d_{X_1}, ..., d_{X_k}) f(Y)| ≤ c(α, k) a E|X|^{k+2}.
We introduce some notation: l(s) = (l_1, ..., l_s) is a positive integral multiindex, l(s)! = l_1! ⋯ l_s!, |l(s)| = l_1 + ... + l_s; the differential operator

Q_s = Q_s(l(s)) = (d_{X_1}^{l_1} - d_{Y_1}^{l_1}) ⋯ (d_{X_s}^{l_s} - d_{Y_s}^{l_s}).

The symbol Σ' will denote summation over all multiindices l(s) = (l_1, ..., l_s) satisfying 3 ≤ l_1 ≤ p, 3 ≤ l_2 ≤ p - l_1 + 2, ..., 3 ≤ l_s ≤ p - (l_1 + ... + l_{s-1}) + 2(s - 1).
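Under the reading of the constraints adopted here, the admissible multiindices can be enumerated mechanically. The following minimal sketch (the function names are ours, not the paper's) lists the l(s) with |l(s)| = 2s + k that enter the k-th term of the expansions below:

```python
def admissible(p, s):
    """All multiindices l(s) = (l_1, ..., l_s) with
    3 <= l_i <= p - (l_1 + ... + l_{i-1}) + 2*(i-1)."""
    result = [()]
    for i in range(s):
        extended = []
        for prefix in result:
            upper = p - sum(prefix) + 2 * i  # the bound for component i+1
            extended.extend(prefix + (l,) for l in range(3, upper + 1))
        result = extended
    return result

def terms_of_B(p, k):
    """Multiindices entering the k-th term: 1 <= s <= k, |l(s)| = 2s + k."""
    return [l for s in range(1, k + 1)
            for l in admissible(p, s) if sum(l) == 2 * s + k]
```

For p = 4, for instance, the first term involves only l = (3), while the second collects l = (4) and l = (3, 3); the number of multiindices grows quickly with k.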
THEOREM 1.2. Let the integer p ≥ 2, E|X|^p < ∞, and let the function f satisfy (1.1) with γ = p and (1.2) with m = 3p - 3. Then

Ef(S_n) = Ef(Y) + B_1 + ... + B_{p-2} + r,

where the remainder r admits estimate (1.3),

B_k = Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} C_n^s n^{-|l(s)|/2} E Q_s f(√(1 - s/n) Y) / l(s)!,

|B_k| ≤ c(α, k) a n^{-k/2} E|X|^{k+2},  k = 1, ..., p - 2.
THEOREM 1.3. Let (1.2) hold with m = 3p - 3. Then

Ef(S_n^{√n}) = Ef(Y) + A_1 + ... + A_{p-2} + r_1,

where

A_k = Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} C_n^s n^{-|l(s)|/2} E Q_s^{√n} f(√(1 - s/n) Y) / l(s)!,   (1.4)

Q_s^{√n} being the operator Q_s with the elements X_i replaced by the truncated elements X_i^{√n}, and

|A_k| ≤ c(α, k) a ( E|X_{√n}|² + n^{-k/2} E|X^{√n}|^{k+2} ).   (1.5)

If in addition (1.1) holds, then

Ef(S_n) = Ef(Y) + A_1 + ... + A_{p-2} + r_2,

where A_k is defined in (1.4) and satisfies (1.5), and the remainder r_2 admits the estimate

|r_2| ≤ c(α, p, γ, c_1, c_2) a { E|X_{√n}|² + n^{-(p-1)/2} E|X^{√n}|^{p+1} + c_2 E|X_{√n}|^γ n^{(2-γ)/2} + c_2 E|X|^γ E|X_{√n}|² }.
Remark 1.4. Estimate (1.5) for the terms A_k of the asymptotic expansion of Theorem 1.3 can be improved. We denote by supp f the closure of the set {x: f(x) ≠ 0}. Then on the right side of (1.5) one can write the factor sup_{1≤s≤k} P{√(1 - s/n) Y ∈ supp f}. Analogous corrections also hold for Theorems 1.1 and 1.2.
Götze [2] found the asymptotic expansion of Theorem 1.1 with remainder r = O(n^{-(p-1)/2} E|X|^{p+1}) under the assumption that the function f has bounded derivatives. This result can be considered as a generalization of the familiar estimate of Paulauskas [3] and Zolotarev [4]: Ef(S_n) - Ef(Y) = O(n^{-1/2} E|X|³) if ‖f^{(3)}(x)‖ ≤ 1. Our expansion in Theorem 1.1 in the case E|X|³ < ∞ implies Ef(S_n) - Ef(Y) = O(n^{-1/2}); here we impose the restriction |f(x)| ≤ c_1 + c_2|x|³ (which obviously follows from ‖f^{(3)}(x)‖ ≤ 1) and the relatively weak restriction ‖f^{(l)}(x)‖ ≤ a e^{α|x|}, 0 ≤ l ≤ 3, α < 1/10, on the growth of the derivatives. This result is, apparently, new even in the finite-dimensional case: in Götze and Hipp [5], for the validity of the analogous asymptotic expansions in a finite-dimensional space, the derivatives of the function f have to grow no faster than a polynomial.
The form of the terms of the above-cited asymptotic expansions is similar to the form of the asymptotic expansions given by Bergström [14] and Statulyavichus [15] for random elements with values in a finite-dimensional space.
It seems to us that the asymptotic expansions of Theorems 1.2 and 1.3 are preferable to the expansion of Theorem 1.1 for the following reasons. Firstly, the expansion of Theorem 1.3 is more precise, since it implies the expansions of Theorems 1.2 and 1.1; secondly, the expansions of Theorems 1.3 and 1.2 have a comparatively simple and easily visible form; thirdly, for the existence of the expansion of Theorem 1.3 the condition E|X|² < ∞ suffices, rather than the condition E|X|^p < ∞. As to the possible deficiencies of the expansions of Theorems 1.2 and 1.3, one should note that these expansions do not have the property of uniqueness of the terms A_1, ..., A_{p-2}, in contrast with the classical expansion in power series in Theorem 1.1.
We note that the asymptotic expansions for smooth functionals of the present paper can be carried over to the case of separable Banach spaces with the orders of the remainder terms preserved.
In what follows we give asymptotic expansions for characteristic functions. In our subsequent paper we shall show that these expansions imply asymptotic expansions for probabilities of the form P{w(S_n) < r}, where w: H → R¹ is a polynomial of the second order and r ∈ R¹. The outline of the proof of the asymptotic expansions for characteristic functions hardly differs from that for smooth functionals. The only essential difference is that in the estimates of the remainder term we must not lose the factor characterizing the decrease of the characteristic function. Here certain methods developed in Götze [10], Yurinskii [6], Zalesskii [7], Nagaev [8], and Zalesskii and Sazonov [9] turn out to be useful.
We let u: H → R¹ be a continuous linear functional, C: H → H a bounded symmetric operator, w(x) = (Cx, x) + u(x); Π(x) = Π(x, ..., x), x ∈ H, an m-linear form; φ(x) = Π(x) exp{itw(x)}. We also denote by X̃ the symmetrization of the random element X.
Definition 1.5. Let V = (V_1, ..., V_n) ∈ ∏_{j=1}^{n} {X̃_j^{√n}, Ỹ_j}, let the integers s and l be nonnegative, and let N = [(n - s)/(l + 1)] be the greatest integer in (n - s)/(l + 1). By the characteristic χ(t, s, l) = χ(t, s, l, X, C) of the sum S_n we mean

χ(t, s, l) = sup_V max_{1≤k≤N} E^{1/2} exp { 2it ( C Σ_{j=1}^{k} V_j , Σ_{j=k+1}^{N} V_j ) }.
Definition 1.6. Let the integers s and l be nonnegative, N = [(n - s)/(l + 2)]. By the Gaussian characteristic of the sum S_n (or the characteristic of the Gaussian vector Y) we mean

χ₀(t, s, l) = E^{1/2} exp { 4it (N/n) (C Y_1, Y_2) },

where the independent random elements Y, Y_1, Y_2 are identically distributed.
THEOREM 1.7. Let the integer p ≥ 2. Then

Eφ(S_n^{√n} + a) = Eφ(Y + a) + A_1 + ... + A_{p-2} + r,

where

A_k = Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} C_n^s n^{-|l(s)|/2} E Q_s^{√n} φ(√(1 - s/n) Y + a) / l(s)!,

|A_k| ≤ c(m, k, |C|, |u|) ‖Π‖ (1 + |t|^{3k})(1 + |a|^{3k+m}) χ(t, k, 3k + m + 1)( E|X_{√n}|² + n^{-k/2} E|X^{√n}|^{k+2} ),

|r| ≤ c(m, p, |C|, |u|) ‖Π‖ (1 + |t|^{3p-3})(1 + |a|^{3p+m-3}) χ(t, p - 1, 3p + m - 2)( E|X_{√n}|^p + n^{-(p-1)/2} E|X^{√n}|^{p+1} ).
If Π(x) ≡ 1, in the estimates for A_k and r one can set ‖Π‖ = 1, m = 0 and replace the factors 1 + |t|^{3k}, 1 + |t|^{3p-3} by |t| + |t|^{3k}, |t| + |t|^{3p-3}, respectively.
THEOREM 1.8. Let the integer p ≥ 2, E|X|^p < ∞. Then the quantity

B_k = Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} C_n^s n^{-|l(s)|/2} E Q_s φ(√(1 - s/n) Y + a) / l(s)!

admits the estimate

|B_k| ≤ c_1 ‖Π‖ (1 + |t|^{3k})(1 + |a|^{3k+m}) χ₀(t, k, 3k + m + 1) n^{-k/2} E|X|^{k+2}.

Further one has the estimates

|A_k - B_k| ≤ c_2 ‖Π‖ (1 + |t|^{3k})(1 + |a|^{3k+m}) χ₀(t, k, 3k + m + 1) n^{-k/2} E|X_{√n}|^{k+2},

| Σ_{k=1}^{p-2} A_k - Σ_{k=1}^{p-2} B_k | ≤ c_2 ‖Π‖ (1 + |t|^{3p-3})(1 + |a|^{3p+m-3}) χ₀(t, p - 1, 3p + m - 2) n^{-(p-2)/2} ( E|X_{√n}|^p + E|X^{√n}|^{p+1}/√n ),

|E P_k(d_{X_1}, ..., d_{X_k}) φ(Y + a)| ≤ c_2 ‖Π‖ (1 + |t|^{3k})(1 + |a|^{3k+m}) χ₀(t, k, 3k + m + 1) n^{-k/2} E|X|^{k+2},  k = 1, ..., p - 2.

The constants c_1 = c_1(m, k, |C|, |u|), c_2 = c_2(m, p, |C|, |u|). If Π(x) ≡ 1, then in all the estimates of the theorem one can set ‖Π‖ = 1, m = 0 and replace the factors 1 + |t|^{3k}, 1 + |t|^{3p-3} by |t| + |t|^{3k}, |t| + |t|^{3p-3}, respectively.
2. Auxiliary Results
Throughout the entire paper we shall use the following notation:
X_1, ..., X_n are independent random elements with values in the space H, having mean zero and satisfying P{X_j = 0} < 1, j = 1, ..., n; S_n = X_1 + ... + X_n; Y_1, ..., Y_n are independent Gaussian random elements such that EY_j = EX_j, cov Y_j = cov X_j, j = 1, ..., n; X_j' = X_j 1{|X_j| ≤ 1}, X_j'' = X_j 1{|X_j| > 1}, where 1_A is the indicator of the event A; S_n' = X_1' + ... + X_n';

Λ_s = Σ_{j=1}^{n} E|X_j''|^s;  L_s = Σ_{j=1}^{n} E|X_j|^s;  M_s = Σ_{j=1}^{n} E|X_j'|^s;

Tr A is the trace of the operator A: H → H.
LEMMA 2.1. Let α < 1/10, Tr cov S_n = 1,

(V_1, ..., V_n) ∈ ∏_{j=1}^{n} {X_j', Y_j, 0},  V = V_1 + ... + V_n.

Then

E exp{α|V|} ≤ c(α) < ∞.   (2.1)
Remark. It is proved in [7], Lemma 1, that for q ≥ 0 one has E|V|^q ≤ c(q) < ∞. Obviously this fact follows from (2.1). We note also that the condition α < 1/10 can be relaxed somewhat.
Proof. Suppose to be definite that V_i = X_i', 1 ≤ i ≤ k; V_i = Y_i, k < i ≤ l; V_i = 0, i > l. Letting X = Σ_{i=1}^{k} X_i', Y = Σ_{i=k+1}^{l} Y_i, we have

E exp{α|V|} ≤ E exp{α|X|} E exp{α|Y|}.   (2.2)

Since E exp{ε|Y|²} < ∞ for small ε (according to a familiar theorem of Skorokhod-Fernique-Shepp), it remains to estimate E exp{α|X|}. Letting X̄ = X - EX, B² = Σ_{i=1}^{k} E|X_i' - EX_i'|², we get
P{|X| ≥ x} ≤ P{|X̄| + |EX| ≥ x} ≤ P{|X̄| ≥ x - 1} ≤ 2 exp{ -(x - 1)² / (2B² + 8(x - 1)) } ≤ 2 exp{ -(x - 1)/10 }

if x ≥ 2 (we apply the analog of Bernshtein's inequality in a Hilbert space, cf. Yurinskii [11], Corollary on p. 491). Hence

E exp{α|X|} ≤ c(α) + Σ_{n=2}^{∞} e^{αn} P{|X| ≥ n} ≤ c(α) + 2 Σ_{n=2}^{∞} e^{αn - (n-1)/10} = c(α) < ∞

if α < 1/10. The lemma is proved.
The following lemma was proved for the case of identically distributed summands and f(x) = |x|^γ by Zalesskii and Sazonov [9, Lemma 1].
LEMMA 2.2. Let the function f: H → R be measurable, γ ≥ 0, |f(x)| ≤ c_1 + c_2|x|^γ, Tr cov S_n = 1. Then

Δ := |Ef(S_n) - Ef(S_n')| ≤ (c_1 + c(γ)c_2) Σ_{j=1}^{n} P{|X_j| ≥ 1} + c(γ)c_2 Λ_γ + c(γ)c_2 n^{(γ-2)/2} M_γ Σ_{j=1}^{n} P{|X_j| ≥ 1}.
Proof. Letting I_j = 1{|X_j| ≤ 1}, we have

Δ ≤ |E ∏ I_j (f(S_n) - f(S_n'))| + Σ E(1 - I_j)(|f(S_n)| + |f(S_n')|) ≤ c_1 Σ E(1 - I_j) + c_2 Σ E(1 - I_j)(|S_n|^γ + |S_n'|^γ)

(the sums and products are taken over j, 1 ≤ j ≤ n). For the rest the proof is an almost literal repetition of the proof of Lemma 1 in [9]. Here one should use Lemma 2.1 and the estimate E|S_n'|^γ ≤ c(γ) n^{(γ-2)/2} M_γ, which follows from the familiar inequality

E|S_n|^s ≤ c(s) E ( Σ_{j=1}^{n} |X_j|² )^{s/2},  s ≥ 2,

and the obvious estimate

( Σ_{j=1}^{n} t_j )^{s/2} ≤ n^{(s-2)/2} Σ_{j=1}^{n} t_j^{s/2},  s ≥ 2,  t_1, ..., t_n ≥ 0.
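The two elementary moment bounds in play here, the power-mean estimate just used and the interpolation inequality of Lemma 2.3 below, are easy to sanity-check numerically. A minimal sketch (the arrays and the two-point distribution are arbitrary test data, not from the paper):

```python
def power_mean_gap(t, s):
    """Slack in (t_1+...+t_n)^{s/2} <= n^{(s-2)/2} (t_1^{s/2}+...+t_n^{s/2})."""
    n = len(t)
    return n ** ((s - 2) / 2) * sum(x ** (s / 2) for x in t) - sum(t) ** (s / 2)

def L(order, values, probs):
    """L_s = E|X|^s for one discrete random variable (the n = 1 case)."""
    return sum(q * abs(v) ** order for v, q in zip(values, probs))
```

With E|X|² = 1 (so that Tr cov = 1 in the n = 1 case), the interpolation bound of Lemma 2.3 reads L(s) ≤ L(p)^{(s-2)/(p-2)} for p ≥ s ≥ 2.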
LEMMA 2.3. Let Tr cov S_n = 1. Then for p ≥ s ≥ 2 one has L_s ≤ L_p^{(s-2)/(p-2)}. If p ≥ s_1, s_2 ≥ 2, then

(Λ_{s_1} + L_{s_1})(Λ_{s_2} + L_{s_2}) ≤ 3(Λ_2 + L_{s_1+s_2-2}).

Proof. The first inequality is proved almost word for word like the analogous one for real random variables (cf. Petrov [12, Lemma 2]). The second inequality obviously follows from the first, if one notes that L_s ≥ L_2 ≥ 1.

LEMMA 2.4. If Tr cov S_n = 1, then for p ≥ 2 one has

E|Y_j|^p ≤ c(p)( E|X_j'|^p + E|X_j''|² ),   (2.3)

Σ_{j=1}^{n} E|Y_j|^p ≤ c(p)(Λ_2 + L_p).   (2.4)
Proof. Let the Gaussian vector Z with values in the Banach space B have mean zero. Then for p, q > 0

c_1(p, q)(E|Z|^p)^{1/p} ≤ (E|Z|^q)^{1/q} ≤ c_2(p, q)(E|Z|^p)^{1/p}.   (2.5)

In fact, if the Gaussian vector Z is concentrated in a finite-dimensional subspace of the space B, then (2.5) coincides with a well-known result of Hoffmann-Jørgensen. If Z is not concentrated in a finite-dimensional subspace, then, as is well known, Z can be represented in the form of an almost-surely convergent series of independent one-dimensional Gaussian vectors, and (2.5) is preserved by continuity. (2.3) follows from the inequalities

E|Y_j|^p ≤ c(p) E^{p/2}|Y_j|² = c(p) E^{p/2}|X_j|² ≤ c(p)( E|X_j'|^p + E|X_j''|² ),

and estimate (2.4) follows from (2.3).
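In the one-dimensional case the two-sided bound (2.5) can be made explicit: for a standard Gaussian Z one has E|Z|^p = 2^{p/2} Γ((p+1)/2)/√π, a standard fact (not a formula of the paper), so any two normalized moments are comparable. A small sketch:

```python
import math

def abs_moment(p):
    """E|Z|^p for Z ~ N(0, 1): 2^{p/2} * Gamma((p+1)/2) / sqrt(pi)."""
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

def norm_p(p):
    """(E|Z|^p)^{1/p}, the quantity compared on both sides of (2.5)."""
    return abs_moment(p) ** (1 / p)
```

The map p → norm_p(p) is nondecreasing (Lyapunov's inequality), which gives the left half of (2.5) with c_1 = 1; the right half amounts to norm_p(q) ≤ c(p, q) norm_p(p).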
3. Asymptotic Expansions of Smooth Functionals
We introduce notation: S_k' = X_1' + ... + X_k'; Z_k = Y_1 + ... + Y_k;

Z_{k,n}(i(q)) = Z_{k,n}(i_1, ..., i_q) = Y_{k+1} + ... + Y_n,

where the summands Y_{i_1}, ..., Y_{i_q} are omitted;

Q_s' = Q_s'(l(s), j(s)) = (d_{X'_{j_1}}^{l_1} - d_{Y_{j_1}}^{l_1}) ⋯ (d_{X'_{j_s}}^{l_s} - d_{Y_{j_s}}^{l_s}),

l = l(s) = (l_1, ..., l_s),  j = j(s) = (j_1, ..., j_s).

LEMMA 3.1. Let α < 1/10, a ≥ 0, Tr cov S_n = 1, and let the function g: H → R¹ be m + 1 times Fréchet differentiable,

‖g^{(k)}(x)‖ ≤ a e^{α|x|},  x ∈ H,  k = 0, ..., m + 1.

Then

Δ := Eg(S_k' + Z_{k,n}(i(q))) - Eg(Z_k + Z_{k,n}(i(q))) = r + Σ_{l=3}^{m} (1/l!) Σ_{j=1}^{k} E(d_{X_j'}^{l} - d_{Y_j}^{l}) g(S'_{j-1} + Z_{j,n}(i(q))),

where the remainder r admits the estimate

|r| ≤ c(a, α, m)(Λ_2 + L_{m+1}).
Proof. Let W_j = S'_{j-1} + Z_{j,n}(i(q)). Then Δ = Δ_1 + ... + Δ_k, where Δ_j = Eg(W_j + X_j') - Eg(W_j + Y_j). To estimate Δ_j we expand the function g in a Taylor series with remainder in integral form. We get

Δ_j = r_1 + Σ_{l=0}^{m} (1/l!) E(d_{X_j'}^{l} - d_{Y_j}^{l}) g(W_j),   (3.1)

where

r_1 = r_1' + r_1'' = c(m) ∫_0^1 (1 - τ)^m E d_{X_j'}^{m+1} g(W_j + τ X_j') dτ - c(m) ∫_0^1 (1 - τ)^m E d_{Y_j}^{m+1} g(W_j + τ Y_j) dτ.

The term with index l = 0 in (3.1) is equal to zero, and the terms with indices l = 1, 2 can be estimated as follows:

|E(d_{X_j'} - d_{Y_j}) g(W_j)| = |E d_{EX_j'} g(W_j)| ≤ a E|X_j''| E exp{α|W_j|} ≤ c(α, a) E|X_j''|²

[using Lemma 2.1],

|E(d_{X_j'}² - d_{Y_j}²) g(W_j)| ≤ c(α, a) E|X_j''|².

We estimate r_1''. Applying Lemmas 2.1 and 2.4, we have

|r_1''| ≤ c(m) E ∫_0^1 |Y_j|^{m+1} ‖g^{(m+1)}(W_j + τ Y_j)‖ dτ ≤ c(a, m) E|Y_j|^{m+1} e^{α|Y_j|} E exp{α|W_j|} ≤ c(α, a, m) E|Y_j|^{m+1} ≤ c(α, a, m)( E|X_j''|² + E|X_j'|^{m+1} ).

The estimation of r_1' is done more simply and we omit it:

|r_1'| ≤ c(α, a, m) E|X_j'|^{m+1}.

Summation over j concludes the proof of the lemma.
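The telescoping step Δ = Δ_1 + ... + Δ_k at the start of this proof is purely algebraic and can be checked exactly on discrete distributions, where every expectation is a finite sum. A one-dimensional sketch (all distributions below are invented test data):

```python
from itertools import product

def expect(g, dists):
    """E g(V_1 + ... + V_n) for independent discrete V_j; each dist is a
    list of (value, probability) pairs, so the expectation is an exact sum."""
    total = 0.0
    for outcome in product(*dists):
        prob, val = 1.0, 0.0
        for v, q in outcome:
            prob *= q
            val += v
        total += prob * g(val)
    return total

def lindeberg_sum(g, xs, ys):
    """Sum of the swap terms Delta_j = E g(W_j + X_j) - E g(W_j + Y_j),
    where W_j collects X_1..X_{j-1} and Y_{j+1}..Y_n; the sum telescopes
    to E g(X_1+...+X_n) - E g(Y_1+...+Y_n)."""
    n = len(xs)
    return sum(expect(g, xs[:j + 1] + ys[j + 1:]) - expect(g, xs[:j] + ys[j:])
               for j in range(n))
```

The identity holds for any g and any distributions; smoothness only enters later, when each Δ_j is Taylor-expanded.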
The asymptotic expansion of the following Theorem 3.2 is the most general and precise in the present section. All the remaining asymptotic expansions for smooth functionals are obtained by transformations and estimates of terms of the expansion of Theorem 3.2.
THEOREM 3.2. Let the integer p ≥ 2, α < 1/10, a ≥ 0, Tr cov S_n = 1, and let the function f: H → R¹ be 3(p - 1) times Fréchet differentiable,

‖f^{(k)}(x)‖ ≤ a e^{α|x|},  x ∈ H,  k = 0, ..., 3(p - 1).

Then for n ≥ p - 2 one has the asymptotic expansion

Ef(S_n') = r + Ef(Z_n) + Σ_{s=1}^{p-2} Σ' Σ'' (1/l(s)!) E Q_s' f(Z_n(j(s))) = r + Ef(Z_n) + A_1 + ... + A_{p-2},   (3.2)

A_k = Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} Σ'' (1/l(s)!) E Q_s' f(Z_n(j(s))),   (3.3)

where Z_n(j(s)) = Z_{0,n}(j(s)), the summation in the sum Σ' is over all l(s) = (l_1, ..., l_s) satisfying 3 ≤ l_1 ≤ p, 3 ≤ l_2 ≤ p - l_1 + 2, ..., 3 ≤ l_s ≤ p - (l_1 + ... + l_{s-1}) + 2(s - 1), and the summation in the sum Σ'' is over all j(s) = (j_1, ..., j_s) satisfying 1 ≤ j_s < j_{s-1} < ... < j_1 ≤ n. One has the estimates

|A_k| ≤ c(α, k, a)(Λ_2 + L_{k+2}),  k = 1, ..., p - 2,   (3.4)

|r| ≤ c(α, p, a)(Λ_2 + L_{p+1}).   (3.5)
Proof. Let Δ = Ef(S_n') - Ef(Z_n). We show that for k = 1, ..., p - 1 one has the expansion

Δ = C_k + B_k + r_k,   (3.6)

where

B_1 = 0,

B_k = Σ_{s=1}^{k-1} Σ' Σ'' (1/l(s)!) E Q_s' f(Z_n(j(s))),  2 ≤ k ≤ p - 1,

C_k = Σ'_{l(k)} Σ''_{j(k)} (1/l(k)!) E Q_k' f(S'_{j_k - 1} + Z_{j_k, n}(j(k-1))),  1 ≤ k ≤ p - 2,

C_{p-1} = 0,

and the remainder r_k admits the estimate

|r_k| ≤ c(α, p, a)(Λ_2 + L_{p+1}).   (3.7)
This will prove the validity of the expansion (3.2) under the hypotheses of the theorem, since (3.6) coincides with (3.2) for k = p - 1.
For k = 1 expansion (3.6) has the form

Δ = r_1 + Σ_{l=3}^{p} Σ_{j=1}^{n} (1/l!) E(d_{X_j'}^{l} - d_{Y_j}^{l}) f(S'_{j-1} + Z_{j,n})

and coincides with the expansion of Lemma 3.1, if in the hypotheses of this lemma one sets g = f, m = p, k = n.
Let us assume that we have already proved

Δ = C_{k-1} + B_{k-1} + r_{k-1},  1 < k - 1 ≤ p - 2.

Letting

D_k = Σ'_{l(k-1)} Σ''_{j(k-1)} (1/l(k-1)!) E Q'_{k-1} f(Z_n(j(k-1))),

we have D_k + B_{k-1} = B_k and

Δ = C_{k-1} + B_{k-1} + r_{k-1} = C_k + B_k + (C_{k-1} - D_k - C_k) + r_{k-1}.

Hence (3.6) will be proved if we verify that

|C_{k-1} - D_k - C_k| ≤ c(α, p, a)(Λ_2 + L_{p+1}).   (3.8)

In the proof of (3.8), to simplify the writing it is convenient to introduce the notation

V = (V_1, ..., V_{k-1}) ∈ ∏_{m=1}^{k-1} {X'_{j_m}, Y_{j_m}};   (3.9)
the summation sign Σ_V denotes summation over all V in (3.9) for fixed j_1, ..., j_{k-1};

e_i = V_i / |V_i|,  1 ≤ i ≤ k - 1;  g(x) = d_{e_1}^{l_1} ⋯ d_{e_{k-1}}^{l_{k-1}} f(x);

the sign E' means that the expectation is taken over all random elements standing to the right of E' except for V_1, ..., V_{k-1}.

Substituting the expressions for C_{k-1}, D_k and C_k and estimating each summand of the sum obtained in modulus, taking into account the notation introduced, we get

|C_{k-1} - D_k - C_k| ≤ Σ'_{l(k-1)} Σ''_{j(k-1)} (1/l(k-1)!) Σ_V E |V_1|^{l_1} ⋯ |V_{k-1}|^{l_{k-1}} | E' g(S'_{j_{k-1}-1} + Z_{j_{k-1}, n}(j(k-2))) - E' g(Z_n(j(k-1))) - Σ_{l=3}^{p-|l(k-1)|+2(k-1)} Σ_{j_k=1}^{j_{k-1}-1} (1/l!) E'(d_{X'_{j_k}}^{l} - d_{Y_{j_k}}^{l}) g(S'_{j_k-1} + Z_{j_k, n}(j(k-1))) |.   (3.10)

For k - 1 = 1, ..., p - 2 one has p + 2k - 1 ≤ 3p - 3. Consequently,

‖g^{(p-|l(k-1)|+2(k-1)+1)}(x)‖ ≤ ‖f^{(p+2k-1)}(x)‖ ≤ a e^{α|x|},

and to estimate the moduli under the summation signs in (3.10) we can apply Lemma 3.1. This modulus does not exceed

c(α, a, p)(Λ_2 + L_{p-|l(k-1)|+2k-1}).   (3.11)
We also have

Σ_V E |V_1|^{l_1} ⋯ |V_{k-1}|^{l_{k-1}} = (E|X'_{j_1}|^{l_1} + E|Y_{j_1}|^{l_1}) ⋯ (E|X'_{j_{k-1}}|^{l_{k-1}} + E|Y_{j_{k-1}}|^{l_{k-1}}).   (3.12)

By (3.11) and (3.12) it follows from (3.10) that

|C_{k-1} - D_k - C_k| ≤ c(α, a, p) Σ'_{l(k-1)} (1/l(k-1)!) ∏_{m=1}^{k-1} ( Σ_{j=1}^{n} (E|X_j'|^{l_m} + E|Y_j|^{l_m}) ) (Λ_2 + L_{p-|l(k-1)|+2k-1}) ≤

(we apply Lemmas 2.3 and 2.4)

≤ c(α, a, p) Σ'_{l(k-1)} (1/l(k-1)!)(Λ_2 + L_{p+1}) ≤ c(α, a, p)(Λ_2 + L_{p+1}).

Thus (3.8), and with it the expansion (3.2), are proved under the hypotheses of the theorem.
It remains to prove (3.4). Taking into account the notation (3.9) and estimating the modulus of each summand under the summation signs in (3.3), we get

|A_k| ≤ c(α, k, a) Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} Σ'' Σ_V E |V_1|^{l_1} ⋯ |V_s|^{l_s} ≤

(Lemma 2.4)

≤ c(α, k, a) Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} (Λ_2 + L_{l_1}) ⋯ (Λ_2 + L_{l_s}) ≤

(Lemma 2.3)

≤ c(α, k, a)(Λ_2 + L_{k+2}).

The theorem is proved.
From Lemma 2.2 and Theorem 3.2 we get

COROLLARY 3.3. Let the hypotheses of Theorem 3.2 hold and in addition

|f(x)| ≤ c_1 + c_2 |x|^γ,  c_1, c_2, γ ≥ 0.   (3.13)

Then

Ef(S_n) = Ef(Z_n) + A_1 + ... + A_{p-2} + r,

where

|r| ≤ c(α, p, a)(Λ_2 + L_{p+1}) + c(γ)c_2 Λ_γ + [c_1 + c(γ)c_2(1 + n^{(γ-2)/2} M_γ)] Λ_2   (3.14)

(the quantities A_1, ..., A_{p-2} satisfy the conditions of Theorem 3.2).
COROLLARY 3.4 (identically distributed summands). Let the random elements X_1, ..., X_n be identically distributed. Then

A_k = Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} C_n^s (1/l(s)!) E (d_{X_1'}^{l_1} - d_{Y_1}^{l_1}) ⋯ (d_{X_s'}^{l_s} - d_{Y_s}^{l_s}) f(√(1 - s/n) Z_n).

Proof. Since the summands in the formula for A_k are identically distributed, one can sum over j(s) and note that C_n^s = Σ''_{j(s)} 1.
Proof of Theorem 1.3. In Theorem 3.2 and Corollaries 3.3 and 3.4 we replace X_1, ..., X_n by X_1/√n, ..., X_n/√n.
THEOREM 3.5. Let the hypotheses of Theorem 3.2 hold, L_p < ∞, and

|f(x)| ≤ c_1 + c_2 |x|^γ  (c_1, c_2, γ ≥ 0).   (3.15)

Then

Ef(S_n) = Ef(Z_n) + B_1 + ... + B_{p-2} + r,

where

B_k = Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} Σ'' (1/l(s)!) E Q_s f(Z_n(j(s)))

(Q_s denotes the operator Q_s' with X'_{j_i} replaced by X_{j_i}),

|B_k| ≤ c(α, k, a) L_{k+2},  1 ≤ k ≤ p - 2,

|r| ≤ c(α, p, a, γ, c_1, c_2) { (1 + c_2 n^{(γ-2)/2} M_γ) Λ_2 + L_{p+1} }.
Proof. We verify that

|A_k - B_k| ≤ c(α, k, a) Λ_{k+2},  1 ≤ k ≤ p - 2.   (3.16)

Let

V = (V_1, ..., V_s) ∈ ∏_{i=1}^{s} {X'_{j_i}, Y_{j_i}}.

We set V_m' = V_m if V_m = Y_{j_m}, and V_m' = X_{j_m} if V_m = X'_{j_m}. We have

|A_k - B_k| ≤ Σ_{s=1}^{k} Σ'_{|l(s)|=2s+k} Σ'' Σ_V (1/l(s)!) |E(d_{V_1}^{l_1} ⋯ d_{V_s}^{l_s} - d_{V_1'}^{l_1} ⋯ d_{V_s'}^{l_s}) f(Z_n(j(s)))|.   (3.17)

The expression under the summation sign in (3.17) does not exceed

Σ_{i=1}^{s} |E d_{V_1'}^{l_1} ⋯ d_{V_{i-1}'}^{l_{i-1}} (d_{V_i}^{l_i} - d_{V_i'}^{l_i}) d_{V_{i+1}}^{l_{i+1}} ⋯ d_{V_s}^{l_s} f(Z_n(j(s)))| = δ_1 + ... + δ_s.   (3.18)

We estimate δ_i. If V_i = Y_{j_i}, then δ_i = 0. Now if V_i = X'_{j_i}, then

δ_i ≤ c(α, a) E |V_1'|^{l_1} ⋯ |V_{i-1}'|^{l_{i-1}} |X''_{j_i}|^{l_i} |V_{i+1}|^{l_{i+1}} ⋯ |V_s|^{l_s}.

Hence, summing and applying the analog of Lemma 2.4 for the moments, we have

Σ'' Σ_V Σ_{i=1}^{s} δ_i ≤ c(α, a) Σ_{i=1}^{s} Λ_{l_i} M_β,   (3.19)

where β = |l(s)| - l_i (since |l(s)| = 2s + k). We have

Λ_{l_i} M_β ≤ c(k) Λ_{l_i} ≤ c(k) Λ_{k+2}   (3.20)

(Lemma 2.3 and the inequality l_i ≤ k + 2). Substituting (3.20) into (3.19), and then the estimate obtained into (3.17), we get (3.16). (3.16) obviously implies the estimate for the remainder r under the conditions of the theorem. The estimate |B_k| ≤ c L_{k+2} is proved more simply than (3.16) and we omit it. The theorem is proved.
Proof of Theorem 1.2. In Theorem 3.5 one renormalizes and takes into account the fact that the summands are identically distributed.
Proof of Theorem 1.1. The expansion of Theorem 1.1 is derived from the expansion of Theorem 1.2. For this, with the help of an expansion in a Taylor series of each term of the expansion of Theorem 1.2, it is necessary to get rid of the factor √(1 - s/n), then to show that the differentiations in Gaussian directions can be replaced by differentiations in non-Gaussian directions, and to use the fact that the coefficients of an expansion in power series are determined uniquely.
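The reduction just sketched rests on an elementary observation: for fixed s, any power of the factor √(1 - s/n) expands into integer powers of n^{-1}. A standard binomial sketch (not a formula of the paper):

```latex
\Bigl(1-\frac{s}{n}\Bigr)^{j/2}
  = \sum_{i=0}^{N}\binom{j/2}{i}\Bigl(-\frac{s}{n}\Bigr)^{i}
    + O\bigl(n^{-N-1}\bigr),
```

so that, once the Taylor step (3.23) below has traded the argument √(1 - s/n) Y for derivatives evaluated at Y, only terms of the form n^{-k/2} E(⋯) f(Y) survive, which is exactly the shape required in (3.29).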
We begin the proof. We shall adhere to the notation from the formulation of the theorem. In addition we let

V = (V_1, ..., V_s) ∈ ∏_{i=1}^{s} {X_i^{√n}, Y_i},  U = (Y_1, ..., Y_q).

The terms of the expansion of Theorem 1.2 have the form

c(s) n^{-|l(s)|/2} E d_{V_1}^{l_1} ⋯ d_{V_s}^{l_s} f(√(1 - s/n) Y),   (3.21)

where 1 ≤ k ≤ p - 2, 1 ≤ s ≤ k, |l(s)| = 2s + k, 3 ≤ l_m ≤ p for 1 ≤ m ≤ s; below we write d_V^l = d_{V_1}^{l_1} ⋯ d_{V_s}^{l_s} and d_U^j = d_{Y_1}^{j_1} ⋯ d_{Y_q}^{j_q}.
We consider the expression of more general form [omitting the inessential factor c(s) here]

Δ := n^{m-(|l|+|j|)/2} E d_V^l d_U^j f(√(1 - s/n) Y)   (3.22)

such that ρ := p - 2 + 2m - |l| - |j| ≥ 0 [here j = (j_1, ..., j_q), j_i ≥ 1].

Representing the Gaussian element Y in the form Y = √(1 - s/n) Z_1 + √(s/n) Z_2, L(Y) = L(Z_1) = L(Z_2), Z_1, Z_2 independent, and using the expansion of the function d_V^l d_U^j f in a Taylor series in powers of √(s/n) Z_2, we get

Δ = r + n^{m-(|l|+|j|)/2} E d_V^l d_U^j f(Y) - Σ_{i=1}^{ρ} n^{m-(|l|+|j|+i)/2} (s^{i/2}/i!) E d_{Z_2}^{i} d_V^l d_U^j f(√(1 - s/n) Y),   (3.23)

where the remainder r admits the estimate

|r| ≤ c(α, p, a) n^{-(p-2)/2} ( E|X_{√n}|^p + n^{-1/2} E|X^{√n}|^{p+1} ).   (3.24)
We verify (3.24). We have

|r| ≤ c(p) n^{-(p-1)/2} E |V_1|^{l_1} ⋯ |V_s|^{l_s} · E sup_{0≤τ≤1} |Z_2|^{ρ+1} ‖f^{(|l|+|j|+ρ+1)}(√(1 - s/n) Y + τ√(s/n) Z_2)‖.   (3.25)

Further,

E sup_{0≤τ≤1} |Z_2|^{ρ+1} ‖f^{(p-1+2m)}(√(1 - s/n) Y + τ√(s/n) Z_2)‖ ≤ c(α, a) E exp{α|Y|} E|Z_2|^{ρ+1} exp{α|Z_2|} ≤ c(α, a, p).   (3.26)

Applying Lemma 2.4, and then the analog of Lemma 2.3 for moments, we have

E |V_1|^{l_1} ⋯ |V_s|^{l_s} ≤ c(p) E|X|^{l_1} ⋯ E|X|^{l_s} ≤ c(p)(E|X|^p)^{(|l|-2s)/(p-2)} ≤ c(p) E|X|^p,   (3.27)

since E|X|^p ≥ 1, (|l| - 2s)/(p - 2) = k/(p - 2) ≤ 1.
By (3.25), (3.26), and (3.27), for the validity of (3.24) it remains to prove the inequality

n^{-1/2} E|X|^p ≤ c(p)( E|X_{√n}|^p + n^{-1/2} E|X^{√n}|^{p+1} ).   (3.28)

Two cases are possible: a) E|X_{√n}|^p ≤ 1/2; b) E|X_{√n}|^p > 1/2. In case a)

1 ≤ E|X|^p ≤ 1/2 + E|X^{√n}|^p ≤ 2 E|X^{√n}|^p ≤ 2 (E|X^{√n}|^{p+1})^{p/(p+1)} ≤ c(p) E|X^{√n}|^{p+1}.

In case b)

n^{-1/2} E|X|^p ≤ n^{-1/2} (1 + E|X^{√n}|^{p+1}) + E|X_{√n}|^p ≤ 3 E|X_{√n}|^p + n^{-1/2} E|X^{√n}|^{p+1},

from which (3.28) follows.
Using (3.23), we prove that there exist differential operators P̄_k = P̄_k(d_{X_1}, d_{X_2}, ...) such that

Ef(S_n) = Ef(Y) + Σ_{k=1}^{p-2} n^{-k/2} E P̄_k f(Y) + r,   (3.29)

where the remainder r admits the estimate from the conditions of the theorem being proved. For this we represent the expansion of Theorem 1.2 in the form of a sum of terms of the form (3.22), and then we express each of them by means of (3.23). After this, each new term containing the factor √(1 - s/n) we represent repeatedly with the help of (3.23). Repeating this procedure, after c(p) steps we come to an expansion of the form (3.29) with certain P̄_k = P̄_k(d_{X_1}, ..., d_{Y_1}, d_{Y_2}, ...). It is known (cf. [2], Lemma 4.1) that if the function g is sufficiently smooth, then E d_{Y_1}^{2l} g(Y_1) can be represented in the form c(l) E d_{Y_1}² ⋯ d_{Y_l}² g(Y_1). Since E d_{Y_1}^{2l+1} g(Y_1) = 0, we can assume that the differential operators in (3.29) are independent of d_{Y_1}, ..., d_{Y_n}.
Let the function g: R¹ → C¹ be sufficiently smooth and have bounded derivatives, and let the random variable X be bounded. Then (cf. [2])

Eg(S_n) = Eg(Y) + Σ_{k=1}^{p-2} n^{-k/2} E P_k g(Y) + O(n^{-(p-1)/2})

(P_k are the Edgeworth-Cramér polynomials), and it follows from (3.29) that

Eg(S_n) = Eg(Y) + Σ_{k=1}^{p-2} n^{-k/2} E P̄_k g(Y) + O(n^{-(p-1)/2}).

Since the coefficients of an expansion in power series are unique,

E P_k g(Y) = E P̄_k g(Y).   (3.30)

If we take g(x) = e^{itx}, t, x ∈ R¹, and let m_s = EX^s, then (3.30) becomes an equation for a pair of polynomials in the variables t ∈ R¹, (m_3, ..., m_p) ∈ R^{p-2}. It is known that there exists a bounded random variable X_0 with moments m_1⁰ = 0, m_2⁰ = 1, m_3⁰, ..., m_p⁰ such that the quantities m_1⁰, m_2⁰, m_3⁰, ..., m_{s-1}⁰, m_s⁰ + ε for small ε are also moments of a bounded random variable (this assertion can be obtained with the help of a linear transformation of the measure from Theorem IV.4.5 in [13]). Based on this fact we can now deduce comparatively simply that the polynomials P_k and P̄_k coincide. Thus, in (3.29) the polynomials P̄_k are the Edgeworth-Cramér polynomials. Theorem 1.1 is proved.
4. Asymptotic Expansions of Characteristic Functions
In the present section the notation concerning the random elements XI,...,Xn is the same as in Secs. 2 and 3.
Definition 4.1. Let V = (V_1, ..., V_n) ∈ ∏_{i=1}^{n} {X_i', Y_i}, let the integers s and l be nonnegative, the set N = {1, ..., n}, the sets A_1, ..., A_{l+1} ⊂ N disjoint, and the set D ⊂ N. By the characteristic χ(t, s, l) = χ(t, s, l, X_1, ..., X_n, C) of the sum S_n we mean the quantity

χ(t, s, l) = sup_V max E^{1/2} exp { 2it ( C Σ_{j∈B} Ṽ_j , Σ_{r∈A_q\B} Ṽ_r ) },

where the maximum is taken over all D with card D ≤ s, all partitions A_1 ∪ ... ∪ A_{l+1} = N\D, all 1 ≤ q ≤ l and all B ⊂ A_q. By the Gaussian characteristic χ₀(t, s, l) of the sum S_n we mean the analogous quantity with the elements V_j replaced by the Gaussian elements Y_j:

χ₀(t, s, l) = max E^{1/2} exp { 2it ( C Σ_{j∈B} Ỹ_j , Σ_{r∈A_q\B} Ỹ_r ) }.
It is easy to verify that χ(t, s, l) ≤ χ(t, s + 1, l) and χ(t, s, l) ≤ χ(t, s, l + 1). Analogous inequalities are also valid for χ₀.
LEMMA 4.2. Let Tr cov S_n = 1, let the form Θ(x) = Θ(x, ..., x), x ∈ H, be k-linear, and let a ∈ H. Then for each (V_1, ..., V_n) ∈ ∏_{j=1}^{n} {0, X_j', Y_j} such that among V_1, ..., V_n no more than s coincide with zero, one has

|E Θ(Z) exp{itw(Z + a)}| ≤ c(k) ‖Θ‖ χ(t, s, k + 1),

where Z = V_1 + ... + V_n.
Proof. Let D denote the set of those indices j for which V_j = 0, and let A_1, ..., A_{k+2} be a partition of the set {1, ..., n}\D. We write the sum Z in the form Z = F_1 + ... + F_{k+2}, where

F_q = Σ_{j∈A_q} V_j,  1 ≤ q ≤ k + 2.

Using the k-linearity of the form Θ, we have

|E Θ(Z) e^{itw(Z+a)}| ≤ Σ_{i_1=1}^{k+2} ⋯ Σ_{i_k=1}^{k+2} |E Θ(F_{i_1}, ..., F_{i_k}) e^{itw(Z+a)}|.

Among F_{i_1}, ..., F_{i_k} at least one of F_1, ..., F_{k+2} is missing. Let F_q be missing. Then, using the estimate of Lemma 2.1, we get

|E Θ(F_{i_1}, ..., F_{i_k}) e^{itw(Z+a)}| ≤ ‖Θ‖ E |F_{i_1}| ⋯ |F_{i_k}| |E_{F_q} e^{itw(Z+a)}| ≤ c(k) ‖Θ‖ inf_{B_1 ∪ B_2 = A_q} E^{1/2} exp { 2it ( C Σ_{j∈B_1} Ṽ_j , Σ_{r∈B_2} Ṽ_r ) }.   (4.1)

In the last inequality we have used the estimate

|E exp{itw(ξ + η + ζ)}| ≤ E^{1/2} exp{2it(C ξ̃, η̃)},

which holds for any independent ξ, η, ζ. This estimate is proved just as in the case C = I, u = 0 (cf. Yurinskii [6], Lemma 2.1). The assertion of the lemma follows in an obvious way from (4.1).
LEMMA 4.3. If the functions f, g: H → C¹ are differentiable sufficiently many times, then

d_{h_1} ⋯ d_{h_m} (fg)(x) = Σ d_{A_1} f(x) d_{A_2} g(x),   (4.2)

where the sum is taken over all partitions of the set {h_1, ..., h_m} into disjoint subsets A_1, A_2 [if A = {g_1, ..., g_s}, then we let d_A f(x) = d_{g_1} ⋯ d_{g_s} f(x)].

The function exp{itw(x)} is infinitely Fréchet differentiable and

d_{h_1} ⋯ d_{h_m} exp{itw(x)} = exp{itw(x)} Σ (it)^q d_{A_1} w(x) ⋯ d_{A_q} w(x),   (4.3)

where the sum is taken over all partitions A_1 ∪ ... ∪ A_q = {h_1, ..., h_m} into disjoint nonempty subsets A_1, ..., A_q (q can depend on the partition) whose cardinality does not exceed two.

The proof goes by means of induction on m. To prove (4.3) it is necessary to take into account that w'''(x) ≡ 0.
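For m = 2, formula (4.3) reads explicitly (a direct check under the stated w(x) = (Cx, x) + u(x)):

```latex
d_{h_1} d_{h_2}\, e^{itw(x)}
  = e^{itw(x)}\bigl[(it)^2\, d_{h_1}w(x)\, d_{h_2}w(x)
                   + it\, d_{h_1}d_{h_2}w(x)\bigr],
\qquad
d_h w(x) = u(h) + 2(Cx, h),\quad
d_{h_1}d_{h_2}w(x) = 2(Ch_1, h_2),
```

and all derivatives of w of order three and higher vanish, which is what forces the partition sets A_i in (4.3) to have at most two elements.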
LEMMA 4.4. Let the form Π(x), x ∈ H, be m-linear, Tr cov S_n = 1, (V_1, ..., V_n) ∈ ∏_{j=1}^{n} {0, X_j', Y_j}; here no more than s among V_1, ..., V_n coincide with zero, Z = V_1 + ... + V_n, φ(x) = Π(x) exp{itw(x)}. Then the function z ↦ Φ(z) = Eφ(Z + z) is infinitely Fréchet differentiable, Φ^{(l)}(z) = Eφ^{(l)}(Z + z), and for l = 1, 2, ... one has

‖Φ^{(l)}(z)‖ ≤ c(|C|, |u|, m, l) ‖Π‖ (1 + |t|^l)(1 + |z|^{l+m}) χ(t, s, m + l + 1).   (4.4)

If Π(x) ≡ 1, (4.4) can be improved to

‖Φ^{(l)}(z)‖ ≤ c(l, |u|, |C|)(|t| + |t|^l)(1 + |z|^l) χ(t, s, l + 1).   (4.5)
P r o o f . We o n l y p r o v e ( 4 . 4 ) , s i n c e t he p r o o f o f ( 4 . 5 ) i s s i m p l e r . L e t Ihz[ = . . . = !h;[ = l . The first equation in Lemma 4.3 implies
!,,~ ~,o (,7) hz... h, !~ ~,'IEd~,TT (Z+z)dAe ~`" {z+~'], ( 4 . 6 )
221
where the sum g' is taken over all disjoint partitions AIUA,_ -'~-~n,,..,h~l,
The second equation in Lemma 4.3 implies
"Z ~ eit~ �9 i , EdA~flt +~).J,~.: (z+z):'..~c(l)(l+ t I) V,, Ed~ h ( Z + z ) d ~ w ( Z + z ) . . . d ~ , w ( Z + z ) ( 4 . 7 )
where the sum g" is over all disjoint partitions B~U .. UB,=A~ into nonempty subsets Bz,..., Bq of cardinality no greater than two.
Let
~'~ ={g~, ", g-:i, 0~<y~l , B1 ={f,}, -. B ; �9 , ~ = , f ~ ) , O<~'d.<< 42: ,
t h e s e t s B ~ + z , . . . , B q h a v e two e l e m e n t s , {g~ . . . . ,g , , J ; . . . . ,f.~} c {hi . . . . . h,}. S i n c e w ' ( Z + z ) h = u ( h ) + 2(CZ, h) + 2 ( C z , h ) , w"(Z + z ) h g = 2 (Ch , g ) , m u l t i p l y i n g and u s i n g t h e s y m m e t r y and ( m - y ) - l i n e a r i t y o f t h e f o r m dAlg(Z + z) = c (m, y ) ~ ( Z + z , . . . , Z + z , gz . . . . , g y ) i n t h e f i r s t m - - u variables, we get
$$|E\,d_{A_1}\Pi(Z+z)\,d_{B_1}w(Z+z)\ldots d_{B_q}w(Z+z)\,e^{itw(Z+z)}|\le c(|C|,|u|,m,l)\sum{}'''\,\bigl|E\,\Pi(\underbrace{Z,\ldots,Z}_{\gamma_1\ \text{times}},\underbrace{z,\ldots,z}_{\gamma_2\ \text{times}},g_1,\ldots,g_\gamma)\,(CZ,f_{i_1})\ldots(CZ,f_{i_\theta})(Cz,f_{i_{\theta+1}})\ldots(Cz,f_{i_\mu})\,e^{itw(Z+z)}\bigr|=c(|C|,|u|,m,l)\sum{}'''\,|E\,\Xi(Z)e^{itw(Z+z)}|,\qquad(4.8)$$
where $\gamma_1,\gamma_2\ge 0$, $\gamma_1+\gamma_2=m-\gamma$, $i_1,\ldots,i_\mu$ is a permutation of the numbers $1,\ldots,\mu$, $0\le\theta\le\mu$, and in the sum $\sum'''$ the number of summands is bounded above by a number which depends only on $m$ and $l$. The expression $\Xi$ on the right side of (4.8) is a multilinear form in the variable $Z$. Its order is equal to $\gamma_1+\theta$ and does not exceed $m+l$, and the norm does not exceed $\|\Xi\|\le c(l,m,|C|)\,\|\Pi\|\,(1+|z|^{m+l})$. Hence, applying Lemma 4.2, we have
$$|E\,\Xi(Z)e^{itw(Z+z)}|\le c(l,m,|C|)\,\|\Pi\|\,(1+|z|^{l+m})\,\varkappa(t,s,m+l+1).\qquad(4.9)$$
Combining (4.6)-(4.9), we get (4.4). The lemma is proved.
LEMMA 4.5. Let $a\in H$, $h_1,\ldots,h_l\in H$, $|h_1|=\ldots=|h_l|=1$, $\Pi(x)$, $x\in H$, be an $m$-linear form, $\varphi(x)=\Pi(x)\exp\{itw(x)\}$, $\Theta(x)=d_{h_1}\ldots d_{h_l}\varphi(x)$. Then for integral $p\ge 2$ one has
$$\Delta:=E\,\Theta(\bar S_k+Z_n(j^{(q)})+a)-E\,\Theta(Z_k+Z_n(j^{(q)})+a)=r+\sum_{l=3}^{p}\frac{1}{l!}\sum_{j=1}^{k}E\,(d_{\bar X_j}^{l}-d_{Y_j}^{l})\,\Theta(\bar S_{j-1}+Z_n(j^{(q)})+a),$$
where the remainder r admits the estimate
$$|r|\le c(p,s,m,|C|,|u|)\,\|\Pi\|\,(1+|t|^{p+m+1})(1+|a|^{p+s+m+1})\,\varkappa(t,q+1,p+s+m+2)\,(\Lambda_2+L_{p+1}).\qquad(4.10)$$
If $\Pi(x)\equiv 1$, (4.10) can be improved to
$$|r|\le c(p,s,|C|,|u|)\,(1+|t|^{p+1})(1+|a|^{p+s+1})\,\varkappa(t,q+1,p+s+2)\,(\Lambda_2+L_{p+1})\qquad(4.11)$$
(we are using the notation introduced at the beginning of Sec. 3).
The proof is analogous to the proof of Lemma 3.1, with the difference that instead of the a priori estimates of the derivatives of the function g in Lemma 3.1 one should use the estimates of Lemma 4.4.
We only prove (4.10), since the proof of (4.11) is analogous. We have
$$\Delta=\Delta_1+\ldots+\Delta_k,\qquad(4.12)$$
where
$$\Delta_j=E\,\Theta(\bar X_j+b)-E\,\Theta(Y_j+b),\qquad b=b_j=\bar S_{j-1}+Z_n(j^{(q)})+a.$$
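The decomposition (4.12) is the usual Lindeberg-type telescoping; schematically (our sketch, suppressing the shift $Z_n(j^{(q)})+a$, which rides along unchanged):

```latex
% Replace the summands by their Gaussian counterparts one at a time;
% the total error telescopes into a sum of single-swap differences.
E\,\Theta(\bar X_1+\dots+\bar X_k) - E\,\Theta(Y_1+\dots+Y_k)
  = \sum_{j=1}^{k}\bigl[E\,\Theta(W_j+\bar X_j) - E\,\Theta(W_j+Y_j)\bigr],
\qquad
W_j = \bar X_1+\dots+\bar X_{j-1}+Y_{j+1}+\dots+Y_k .
```

Each single-swap difference is then expanded in a Taylor series, which is what the next step carries out.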
Expanding the function $x\mapsto\Theta(x+b)$ in a Taylor series with remainder in integral form, we have
$$\Delta_j=\Delta_{j0}+\ldots+\Delta_{j,p+1},\qquad(4.13)$$
where
$$\Delta_{j\gamma}=E\,d_{\bar X_j}^{\gamma}\Theta(b)/\gamma!-E\,d_{Y_j}^{\gamma}\Theta(b)/\gamma!,\quad 0\le\gamma\le p,\qquad \Delta_{j,p+1}=I_1-I_2,$$
$$I_1=E\int_0^1(1-\tau)^p\,d_{\bar X_j}^{p+1}\Theta(\tau\bar X_j+b)\,d\tau/p!,\qquad I_2=E\int_0^1(1-\tau)^p\,d_{Y_j}^{p+1}\Theta(\tau Y_j+b)\,d\tau/p!.$$
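The expansion (4.13) rests on the one-dimensional Taylor formula with remainder in integral form, applied to $\tau\mapsto\Theta(\tau h+b)$; we record it here for reference:

```latex
% Taylor formula with integral remainder for g(tau) = Theta(tau*h + b);
% taking tau = 1 with h the random summand being swapped yields
% the terms of (4.13).
g(1) = \sum_{\gamma=0}^{p}\frac{g^{(\gamma)}(0)}{\gamma!}
       + \frac{1}{p!}\int_0^1 (1-\tau)^p\, g^{(p+1)}(\tau)\,d\tau,
\qquad
g^{(\gamma)}(\tau) = d_h^{\gamma}\Theta(\tau h + b).
```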
We estimate $I_1$:
$$I_1=E\int_0^1(1-\tau)^p\,d_{\bar X_j}^{p+1}\Theta(\tau\bar X_j+b)\,d\tau/p!.$$
For this we apply Lemma 4.4:
$$|I_1|\le c(p)\sup_{0\le\tau\le1}E\,|\bar X_j|^{p+1}\,\|\Theta^{(p+1)}(\tau\bar X_j+b)\|\le c(p,s,m,|C|,|u|)\,\|\Pi\|\,(1+|t|^{p+m+1})(1+|a|^{p+s+m+1})\,\varkappa(t,q+1,p+s+m+2)\,E\,|\bar X_j|^{p+1}(1+|\bar X_j|^{p+s+m+1}),$$
$$E\,|\bar X_j|^{p+1}(1+|\bar X_j|^{p+s+m+1})\le 2E\,|\bar X_j|^{p+1}.$$
One estimates $I_2$ analogously, but here one should estimate the moments of the random element $Y_j$ in terms of the moments of the random element $X_j$ with the help of Lemma 2.4. Hence
$$|\Delta_{j,p+1}|\le c(p,s,m,|C|,|u|)\,\|\Pi\|\,(1+|t|^{p+m+1})(1+|a|^{p+s+m+1})\,\varkappa(t,q+1,p+s+m+2)\,(E|\bar X_j|^{p+1}+E|X_j|^{p+1}).\qquad(4.14)$$
The remaining terms $\Delta_{j\gamma}$ are estimated analogously and admit estimates with the right side of (4.14). Moreover, $\Delta_{j0}=0$. Summing all these estimates in (4.13) and (4.12), we get the expansion (4.10). The lemma is proved.
THEOREM 4.6. Let $\varphi(x)=\Pi(x)\exp\{itw(x)\}$, where $\Pi(x)$, $x\in H$, is an $m$-linear form, $\operatorname{Tr}\operatorname{cov}S_n=1$, and the integer $p\ge2$. Then
$$E\varphi(S_n+a)-E\varphi(Z_n+a)=r+\sum_{s=1}^{p-2}\sum_{l(s)}{}'\sum_{j(s)}{}''\frac{1}{l(s)!}\,E\,Q_{l(s)}\varphi(Z_n(j^{(s)})+a)=\Delta_1+\ldots+\Delta_{p-2}+r,$$
$$\Delta_k=\sum_{s=1}^{k}\sum_{\substack{l(s)\\ |l(s)|=2s+k}}{}'\sum_{j(s)}{}''\frac{1}{l(s)!}\,E\,Q_{l(s)}\varphi(Z_n(j^{(s)})+a),$$
where the summation in the sum $\sum'$ is over all $l(s)=(l_1,\ldots,l_s)$ satisfying $3\le l_1\le p,\ 3\le l_2\le p-l_1+2,\ \ldots,\ 3\le l_s\le p-(l_1+\ldots+l_{s-1})+2(s-1)$, and the summation in the sum $\sum''$ is over all $j(s)=(j_1,\ldots,j_s)$ satisfying $1\le j_s<j_{s-1}<\ldots<j_1\le n$. One has
$$|\Delta_k|\le c(m,k,|C|,|u|)\,\|\Pi\|\,(1+|t|^{3k})(1+|a|^{3k+m})\,\varkappa(t,k,3k+m+1)\,(\Lambda_2+L_{k+2}),\qquad(4.15)$$
$$|r|\le c(p,m,|C|,|u|)\,\|\Pi\|\,(1+|t|^{3p-2})(1+|a|^{3p+m-2})\,\varkappa(t,p-1,3p-2+m)\,(\Lambda_2+L_{p+1}).\qquad(4.16)$$
If $\Pi(x)\equiv 1$, (4.15) and (4.16) can be improved by setting $\|\Pi\|=1$, $m=0$ and replacing the factors $1+|t|^{3k}$, $1+|t|^{3p-2}$ by $|t|+|t|^{3k}$, $|t|+|t|^{3p-2}$, respectively.
The proof of the theorem is analogous to the proof of Theorem 3.2, with the difference that instead of Lemma 3.1 one should use Lemma 4.5. Hence we omit the literally repetitive part of the arguments. Moreover, we also omit the improvement of (4.15) and (4.16) when $\Pi(x)\equiv 1$, since this case is simpler than the case of general $\Pi(x)$.
Let $\Delta=E\varphi(S_n+a)-E\varphi(Z_n+a)$. The theorem will be proved if we verify that for $k=1,\ldots,p-1$ one has the decomposition
$$\Delta=C_k+\Delta_1+\ldots+\Delta_{k-1}+r_k,\qquad(4.17)$$
where
$$C_k=\sum_{s=1}^{k-1}\sum_{l(s)}{}'\sum_{j(s)}{}''\frac{1}{l(s)!}\,E\,Q_{l(s)}\varphi(S_{j_s-1}+Z_n(j^{(s)})+a),\qquad 1\le k\le p-2,\quad C_{p-1}=0,$$
and the remainder $r_k$ admits the estimate (4.18) below. For $k=1$ the decomposition (4.17) coincides with the decomposition of Lemma 4.5, if one takes
s = 0 there. Assuming that (4.17) is true for $1,\ldots,k-1$, we prove that it is also true for $k$. Defining the quantities $D_k$ and $G_k$ just as in the proof of Theorem 3.2, we get that for (4.17) to be valid it suffices to verify that
$$|C_{k-1}-D_k-G_k|\le c(p,m,|C|,|u|)\,\|\Pi\|\,(1+|t|^{3p-2})(1+|a|^{3p+m-2})\,\varkappa(t,p-1,3p-2+m)\,(\Lambda_2+L_{k+2}).\qquad(4.18)$$
The verification of (4.18) is analogous to the verification of (3.8), but instead of Lemma 3.1 one should use Lemma 4.5. We omit this verification. The relation (4.17) thus proved implies the decomposition from the conditions of the theorem and (4.16).
In proving (4.15), analogously to the proof of (3.4) [cf. (3.13)], we arrive at the necessity of estimating the expression
$$\varkappa_*=|E\,d_{h_1}\ldots d_{h_l}\varphi(Z_n(j^{(s)})+a)|,\qquad |h_1|=\ldots=|h_l|=1.\qquad(4.19)$$
Lemma 4.4 gives an estimate of this expression; here the characteristic $\varkappa$ in the estimate of this lemma can be replaced by the Gaussian characteristic $\varkappa_g$, in view of the fact that in the sum $Z_n(j^{(s)})$ there are only Gaussian summands:
$$\varkappa_*\le c(m,k,|C|,|u|)\,\|\Pi\|\,(1+|t|^{3k})(1+|a|^{3k+m})\,\varkappa_g(t,k,3k).\qquad(4.20)$$
(4.15) is proved. The theorem is proved.
COROLLARY 4.7 (Identically Distributed Summands). Let the random elements $X_1,\ldots,X_n$ be identically distributed. Then
$$\Delta_k=\sum_{s=1}^{k}\sum_{\substack{l(s)\\ |l(s)|=2s+k}}{}'\frac{1}{l(s)!}\binom{n}{s}\,E\,Q_{l(s)}\varphi(Z_n(j^{(s)})+a).$$
Proof of Theorem 1.7. One must renormalize in Theorem 4.6 and Corollary 4.7.
THEOREM 4.8. Let the hypotheses of Theorem 4.6 hold and $\Lambda_p<\infty$. Then the quantity
$$B_k=\sum_{s=1}^{k}\sum_{\substack{l(s)\\ |l(s)|=2s+k}}{}'\sum_{j(s)}{}''\frac{1}{l(s)!}\,E\,Q_{l(s)}\varphi(Z_n(j^{(s)})+a)$$
admits the estimate
$$|B_k|\le c_1\,\|\Pi\|\,(1+|t|^{3k})(1+|a|^{3k+m})\,\varkappa(t,k,3k+m+1)\,M_{k+2}.\qquad(4.21)$$
In addition one has
$$|\Delta_k-B_k|\le c_1\,\|\Pi\|\,(1+|t|^{3k})(1+|a|^{3k+m})\,\varkappa(t,k,3k+m+1)\,\Lambda_{k+3},\qquad k=1,\ldots,p-2.\qquad(4.22)$$
The constant $c_1=c_1(m,k,|C|,|u|)$. If $\Pi(x)\equiv 1$, (4.21) and (4.22) can be improved: one can set $m=0$, $\|\Pi\|=1$ and replace the factor $1+|t|^{3k}$ by $|t|+|t|^{3k}$.
The proof of the theorem is analogous to the proof of Theorem 3.5, while in the course of the proof the expression (4.19) appears, which is estimated in (4.20). The theorem is proved.
Proof of Theorem 1.8. Some of the inequalities of Theorem 1.8 are direct consequences of Theorem 4.8 and the fact that the summands are identically distributed (here it is again necessary to renormalize). The estimates concerning the Edgeworth-Cramér polynomials are verified analogously to the proof of Theorem 1.1. A minor difference is that it is necessary to use Lemma 4.4. As a result we arrive at the polynomials $\tilde P_k$, introduced in the course of the proof of Theorem 1.1. But it was proved there that $\tilde P_k=P_k$. The theorem is proved.
LITERATURE CITED

1. A. Bikyalis, "Remainder terms in asymptotic expansions for characteristic functions and their derivatives," Liet. Mat. Rinkinys, 7, No. 4, 571-582 (1967).
2. F. Götze, "On Edgeworth expansions in Banach spaces," Ann. Probab., 9, No. 5, 852-859 (1981).
3. V. I. Paulauskas, "Convergence of certain functionals of sums of independent random variables in a Banach space," Liet. Mat. Rinkinys, 16, No. 3, 103-121 (1976).
4. V. M. Zolotarev, "Approximation of distributions of sums of independent random variables with values in infinite-dimensional spaces," Teor. Veroyatn. Primen., 21, No. 4, 741-757 (1976).
5. F. Götze and C. Hipp, "Asymptotic expansions in the central limit theorem under moment conditions," Z. Wahr. Verw. Geb., 42, 67-87 (1978).
6. V. V. Yurinskii, "Precision of the normal approximation of the probability of landing in a ball," Teor. Veroyatn. Primen., 27, No. 2, 270-278 (1982).
7. B. A. Zalesskii, "Estimate of the precision of the normal approximation in Hilbert space," Teor. Veroyatn. Primen., 27, No. 2, 279-285 (1982).
8. S. V. Nagaev, "On accuracy of normal approximation for distribution of sum of independent Hilbert space-valued random variables," in: Fourth USSR-Japan Symposium on Probability Theory and Mathematical Statistics, Tbilisi, 1982. Abstracts of Communications, Vol. II, pp. 130-131.
9. B. A. Zalesskii and V. V. Sazonov, "Proximity of moments for a normal approximation in a Hilbert space," Teor. Veroyatn. Primen., 28, No. 2, 252-263 (1983).
10. F. Götze, "Asymptotic expansions for bivariate von Mises functionals," Z. Wahr. Verw. Geb., 50, No. 3, 333-355 (1979).
11. V. V. Yurinskii, "Exponential inequalities for sums of random vectors," J. Multivar. Anal., 6, No. 4, 473-499 (1976).
12. V. V. Petrov, Sums of Independent Random Variables, Springer-Verlag (1975).
13. S. Karlin and W. J. Studden, Chebyshev Systems and Their Application to Analysis and Statistics, Krieger (1966).
14. H. Bergström, "On asymptotic expansions of probability functions," Skandinavisk Aktuarietidskrift, Nos. 1-2, 1-34 (1951).
15. V. A. Statulyavichus, "Asymptotic expansion of the characteristic function of the sum of independent random variables," Liet. Mat. Rinkinys, 2, No. 2, 227-232 (1962).
OPTIMAL STATISTICAL ESTIMATORS OF SPECTRAL DENSITY IN L 2
R. Bentkus UDC 519.24
I. Introduction
In this paper we study statistical estimators which can be represented as smoothed periodograms of the spectral density of a stationary random sequence $X_t$, $t=\ldots,-1,0,1,\ldots$. Such estimators are completely determined by their spectral windows (s.w.) or, equivalently, their covariance windows (c.w.). As loss function we take the square of the norm in the space $L_2(-\pi,\pi)$. A priori information about the sequence $X_t$ is expressed by the condition $X\in\mathfrak{X}$, where $\mathfrak{X}$ is some set of random sequences. For two forms of the set $\mathfrak{X}$, defined by restrictions on the covariance function and the fourth cumulant of the sequence $X_t$, for each fixed sample size $N$ we give the minimax c.w. We study the rate of convergence of the maximal risk to zero as $N\to\infty$, both in the case of the c.w. found in the paper and in the case of classical estimators. We establish the best order of decrease of risk which can be attained by certain widely applied s.w. and c.w.
Let $X_t$, $t=\ldots,-1,0,1,\ldots$ be a real random sequence which is stationary in the broad sense, with mean $EX_t=0$, covariance function (c.f.) $c_k$ and spectral density (s.d.) $f(\lambda)$, i.e.,
$$EX_{t+k}X_t=c_k=\int_{-\pi}^{\pi}f(\lambda)e^{ik\lambda}\,d\lambda,\qquad k=\ldots,-1,0,1,\ldots\qquad(1.1)$$
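When the c.f. is absolutely summable, (1.1) can be inverted by the standard Fourier relation (recorded here for reference):

```latex
% Inverse of (1.1): the spectral density as the Fourier series of the
% covariance function (valid, e.g., when \sum_k |c_k| < \infty).
f(\lambda) = \frac{1}{2\pi}\sum_{k=-\infty}^{\infty} c_k\, e^{-ik\lambda},
\qquad \lambda\in(-\pi,\pi].
```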
Institute of Mathematics and Cybernetics, Academy of Sciences of the Lithuanian SSR. Translated from Litovskii Matematicheskii Sbornik (Lietuvos Matematikos Rinkinys), Vol. 24, No. 3, pp. 51-69, July-September, 1984. Original article submitted November 9, 1983.
0363-1672/84/2403-0225$08.50 © 1985 Plenum Publishing Corporation