
On estimation of finite population variance

Javid Shabbir ∗

Department of Statistics

Quaid-i-Azam University

Islamabad 45320

Pakistan

Sat Gupta †

Department of Mathematical Sciences

University of North Carolina at Greensboro

383 Bryan Building

Greensboro, NC 27402

USA

Abstract

Following Searls (1964), we propose an estimator for estimating the finite population variance. This estimator is a combination of the Singh et al. (1973) and Prasad and Singh (1992) estimators, and it improves upon the Singh et al. (1973), Prasad and Singh (1992), and several other estimators under certain conditions. The validity of the proposed estimator is examined using seven numerical examples.

Keywords: Auxiliary variable, bias, mean square error, variance, efficiency.

1. Introduction

Estimating the finite population variance has great significance in various fields such as industry, agriculture, and medical and biological sciences, where we come across populations which are likely to be skewed.

∗E-mail: [email protected]
†E-mail: [email protected]

Journal of Interdisciplinary Mathematics
Vol. 9 (2006), No. 2, pp. 405–419
© Taru Publications


Many authors have used the information on auxiliary variables in estimating the population mean $(\bar{Y})$ and population variance $(S_y^2)$ of the study variable $y$. Singh et al. (1988) studied the estimation of variance by using information on both the mean $(\bar{X})$ and variance $(S_x^2)$ of an auxiliary variable $x$. Upadhyaya and Singh (2001) estimated the population standard deviation $(S_y)$ by using the information on the auxiliary variable. Other important work in this area is by Singh et al. (1973), Das and Tripathy (1978), Upadhyaya and Singh (1983), Isaki (1983), Srivastava and Jhajj (1980, 1983), Prasad and Singh (1990, 1992), Gandge et al. (1991), Garcia and Cebrian (1996), Cebrian and Garcia (1997), and Biradar and Singh (1998). In this paper we focus on the estimation of the finite population variance. The following notations are used throughout the paper.

Let $y$ and $x$ be the study and auxiliary variables respectively, measured on a simple random sample of size $n$ drawn without replacement (SRSWOR) from a population of size $N$. Let $\bar{y}$ and $\bar{x}$ be the sample means, and let
$$ s_y^2 = \sum_{i=1}^{n}(y_i - \bar{y})^2/(n-1) \quad\text{and}\quad s_x^2 = \sum_{i=1}^{n}(x_i - \bar{x})^2/(n-1) $$
be the sample variances of the study and auxiliary variables respectively. Let us define
$$ \delta_0 = (s_y^2 - S_y^2)/S_y^2, \quad \delta_1 = (s_x^2 - S_x^2)/S_x^2, \quad \delta_2 = (\bar{x} - \bar{X})/\bar{X}. $$
Therefore $E(\delta_i) = 0$ for $i = 0, 1, 2$, and
$$ E(\delta_0^2) = (\lambda_{40}-1)\gamma = \lambda_{40}^*\gamma, \quad E(\delta_1^2) = (\lambda_{04}-1)\gamma = \lambda_{04}^*\gamma, \quad E(\delta_2^2) = C_x^2\gamma, $$
$$ E(\delta_0\delta_1) = (\lambda_{22}-1)\gamma = \lambda_{22}^*\gamma, \quad E(\delta_0\delta_2) = \lambda_{21}C_x\gamma, $$
where $\gamma = (1-f)/n$ and $f = n/N$. Also, let $C_x^2 = \mu_{02}/\bar{X}^2$ and $\lambda_{pq} = \mu_{pq}/(\mu_{20}^{p/2}\mu_{02}^{q/2})$, where $\mu_{pq} = \sum_{i=1}^{N}(y_i - \bar{Y})^p(x_i - \bar{X})^q/(N-1)$. For simplicity, we assume that $N$ is quite large as compared to $n$, so that we can ignore the finite population correction term.
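To make the notation concrete, the following is a minimal sketch (ours, not part of the paper) of how the constants $C_x$, $\lambda_{21}$, $\lambda_{40}^*$, $\lambda_{04}^*$ and $\lambda_{22}^*$ could be computed from complete population data; the function name and the use of NumPy are our own choices for illustration.

```python
import numpy as np

def population_constants(y, x):
    """Return C_x, lambda_21 and the starred moment ratios for population
    vectors y (study variable) and x (auxiliary variable)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    N = y.size
    dy, dx = y - y.mean(), x - x.mean()

    # mu_pq = sum (y_i - Ybar)^p (x_i - Xbar)^q / (N - 1)
    def mu(p, q):
        return np.sum(dy**p * dx**q) / (N - 1)

    # lambda_pq = mu_pq / (mu_20^{p/2} mu_02^{q/2})
    def lam(p, q):
        return mu(p, q) / (mu(2, 0) ** (p / 2) * mu(0, 2) ** (q / 2))

    return {
        "Cx": np.sqrt(mu(0, 2)) / x.mean(),
        "lambda21": lam(2, 1),
        "lambda40_star": lam(4, 0) - 1.0,
        "lambda04_star": lam(0, 4) - 1.0,
        "lambda22_star": lam(2, 2) - 1.0,
    }
```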

We consider the following estimators from various sources, all of which are based on knowledge of $\bar{X}$ and $S_x^2$.

2. Various variance estimators

We consider the following estimators. The expressions $B(\cdot)$, $V(\cdot)$ and $M(\cdot)$ denote the bias, variance and mean square error respectively of the different estimators. We use $M(\cdot)$ instead of $V(\cdot)$ where $B(\cdot)$ is zero.

(i) Usual variance estimator

$$ t_0 = s_y^2. \qquad (2.1) $$


The bias and variance of $t_0$, to the first degree of approximation, are given by
$$ B(t_0) = 0 \qquad (2.2) $$
and
$$ M(t_0) = \frac{S_y^4}{n}\,\lambda_{40}^*. \qquad (2.3) $$

(ii) Singh et al. estimator

Singh et al. (1973) considered the following estimator

$$ t_1 = \alpha_1 s_y^2, \qquad (2.4) $$

where $\alpha_1$ is a Searls (1964) constant to be determined later.

Using the first order approximation, the bias and MSE of $t_1$ are given by
$$ B(t_1) = \frac{S_y^2}{n}\left[n(\alpha_1 - 1)\right] \qquad (2.5) $$
and
$$ M(t_1) = \frac{S_y^4}{n}\left[n(\alpha_1 - 1)^2 + \alpha_1^2\lambda_{40}^*\right]. \qquad (2.6) $$

The MSE of $t_1$ is optimum for $\alpha_1 = \dfrac{n}{n + \lambda_{40}^*} = \alpha_1^*$ (say) and is given by
$$ M^*(t_1) = \frac{S_y^4}{n}\left[\frac{n\lambda_{40}^*}{(n + \lambda_{40}^*)}\right]. \qquad (2.7) $$
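The following small sketch (ours, not from the paper) evaluates the Searls-type estimator $t_1$ and its minimum MSE (2.7) from the summary quantities; the function names are hypothetical, and `S2y` denotes $S_y^2$.

```python
def t1_estimate(s2y, n, lam40_star):
    """Searls-type estimator t1 of (2.4) at its optimum constant alpha_1^*."""
    alpha1_opt = n / (n + lam40_star)            # alpha_1^* derived below (2.6)
    return alpha1_opt * s2y

def t1_min_mse(S2y, n, lam40_star):
    """Minimum MSE of t1, eq. (2.7)."""
    return (S2y**2 / n) * (n * lam40_star / (n + lam40_star))
```

Since $\alpha_1^* < 1$, the estimator shrinks $s_y^2$ slightly, trading a small bias for a smaller mean square error.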

(iii) Das and Tripathy estimator

Das and Tripathy (1978) considered the following estimator

$$ \text{(a)}\quad t_2 = s_y^2\left(\frac{\bar{X}}{\bar{x}}\right)^{\alpha_2}, \qquad (2.8) $$

where α2 is a constant to be determined later.

The bias and MSE of $t_2$, to the first degree of approximation, are given by
$$ B(t_2) = \frac{S_y^2}{n}\left[\frac{\alpha_2(\alpha_2 + 1)}{2}C_x^2 - \alpha_2\lambda_{21}C_x\right] \qquad (2.9) $$
and
$$ M(t_2) = \frac{S_y^4}{n}\left[\lambda_{40}^* + \alpha_2^2 C_x^2 - 2\alpha_2\lambda_{21}C_x\right]. \qquad (2.10) $$


The MSE of $t_2$ is minimum for $\alpha_2 = \dfrac{\lambda_{21}}{C_x} = \alpha_2^*$ (say), and is given by
$$ M^*(t_2) = \frac{S_y^4}{n}\left[\lambda_{40}^* - \lambda_{21}^2\right]. \qquad (2.11) $$

Das and Tripathy (1978) considered another estimator, given below:
$$ \text{(b)}\quad t_3 = s_y^2\left(\frac{\bar{X}}{\bar{X} + \alpha_3(\bar{x} - \bar{X})}\right), \qquad (2.12) $$

where α3 is a constant to be determined later.

The bias and MSE of $t_3$, to the first degree of approximation, are given by
$$ B(t_3) = \frac{S_y^2}{n}\left[\alpha_3^2 C_x^2 - \alpha_3\lambda_{21}C_x\right] \qquad (2.13) $$
and
$$ M(t_3) = \frac{S_y^4}{n}\left[\lambda_{40}^* + \alpha_3^2 C_x^2 - 2\alpha_3\lambda_{21}C_x\right]. \qquad (2.14) $$

The MSE of $t_3$ is optimum for $\alpha_3 = \dfrac{\lambda_{21}}{C_x} = \alpha_3^*$ (say) and is given by
$$ M^*(t_3) = \frac{S_y^4}{n}\left[\lambda_{40}^* - \lambda_{21}^2\right]. \qquad (2.15) $$

Das and Tripathy (1978) considered a third estimator which is given by

$$ \text{(c)}\quad t_4 = s_y^2\left(\frac{S_x^2}{S_x^2 + \alpha_4(s_x^2 - S_x^2)}\right), \qquad (2.16) $$

where α4 is a constant to be determined later.

The bias and MSE of $t_4$, to the first degree of approximation, are given by
$$ B(t_4) = \frac{S_y^2}{n}\left[\alpha_4^2\lambda_{04}^* - \alpha_4\lambda_{22}^*\right] \qquad (2.17) $$
and
$$ M(t_4) = \frac{S_y^4}{n}\left[\lambda_{40}^* + \alpha_4^2\lambda_{04}^* - 2\alpha_4\lambda_{22}^*\right]. \qquad (2.18) $$

The MSE of $t_4$ is optimum for $\alpha_4 = \dfrac{\lambda_{22}^*}{\lambda_{04}^*} = \alpha_4^*$ (say) and is given by
$$ M^*(t_4) = \frac{S_y^4}{n}\left[\lambda_{40}^* - \frac{\lambda_{22}^{*2}}{\lambda_{04}^*}\right]. \qquad (2.19) $$
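The minimum MSEs (2.11), (2.15) and (2.19) of the Das and Tripathy estimators can be evaluated directly from the moment constants. A short sketch (ours, with hypothetical function names; `S2y` denotes $S_y^2$):

```python
def min_mse_t2_t3(S2y, n, lam40_star, lam21):
    """Common minimum MSE of t2 and t3, eqs. (2.11) and (2.15);
    both attain it at alpha^* = lambda_21 / C_x."""
    return (S2y**2 / n) * (lam40_star - lam21**2)

def min_mse_t4(S2y, n, lam40_star, lam04_star, lam22_star):
    """Minimum MSE of t4, eq. (2.19), attained at alpha_4^* = lambda*_22 / lambda*_04."""
    return (S2y**2 / n) * (lam40_star - lam22_star**2 / lam04_star)
```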


(iv) Isaki estimator

Isaki (1983) introduced the following variance ratio estimator

$$ t_5 = s_y^2\left(\frac{S_x^2}{s_x^2}\right). \qquad (2.20) $$

The bias and MSE of $t_5$, to the first degree of approximation, are given by
$$ B(t_5) = \frac{S_y^2}{n}\left[\lambda_{04}^* - \lambda_{22}^*\right] \qquad (2.21) $$
and
$$ M(t_5) = \frac{S_y^4}{n}\left[\lambda_{40}^* + \lambda_{04}^* - 2\lambda_{22}^*\right]. \qquad (2.22) $$
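A minimal sketch (ours, not from the paper) of Isaki's ratio estimator (2.20) and its first-order MSE (2.22); the function names are hypothetical, and `s2y`, `s2x`, `S2x`, `S2y` denote $s_y^2$, $s_x^2$, $S_x^2$, $S_y^2$.

```python
def t5_estimate(s2y, s2x, S2x):
    """Isaki (1983) variance ratio estimator, eq. (2.20)."""
    return s2y * (S2x / s2x)

def t5_mse(S2y, n, lam40_star, lam04_star, lam22_star):
    """First-order MSE of t5, eq. (2.22)."""
    return (S2y**2 / n) * (lam40_star + lam04_star - 2 * lam22_star)
```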

(v) Singh et al. estimator

Singh et al. (1988) considered the following estimator

$$ \text{(a)}\quad t_6 = W_1 s_y^2 + W_2(\bar{X} - \bar{x}), \qquad (2.23) $$

where $W_1$ and $W_2$ are suitably chosen constants which need not add up to one.

The bias and MSE of $t_6$, to the first degree of approximation, are given by
$$ B(t_6) = \frac{S_y^2}{n}\left[n(W_1 - 1)\right] \qquad (2.24) $$
and
$$ M(t_6) = \frac{S_y^4}{n}\left[W_1^2(n + \lambda_{40}^*) + W_2^2\frac{S_x^2}{S_y^4} - 2W_1 W_2\lambda_{21}\frac{S_x}{S_y^2} - 2W_1 n + n\right]. \qquad (2.25) $$

The optimum values of $W_1$ and $W_2$ after minimizing $M(t_6)$ are
$$ W_1 = \frac{n}{(n + \lambda_{40}^*) - \lambda_{21}^2} = W_1^*\ \text{(say)} $$
and
$$ W_2 = \frac{n S_y^2\lambda_{21}}{S_x\left[(n + \lambda_{40}^*) - \lambda_{21}^2\right]} = W_2^*\ \text{(say)}. $$
Substituting the optimum values of $W_1$ and $W_2$ in (2.25), we get the minimum MSE of $t_6$, which is given by
$$ M^*(t_6) = \frac{S_y^4}{n}\left[\frac{n(\lambda_{40}^* - \lambda_{21}^2)}{(n + \lambda_{40}^* - \lambda_{21}^2)}\right]. \qquad (2.26) $$


Singh et al. (1988) also considered the following estimator

$$ \text{(b)}\quad t_7 = W_3 s_y^2 + W_4(S_x^2 - s_x^2), \qquad (2.27) $$

where $W_3$ and $W_4$ are suitably chosen constants which need not add up to one.

The bias and MSE of $t_7$, to the first degree of approximation, are given by
$$ B(t_7) = \frac{S_y^2}{n}\left[n(W_3 - 1)\right] \qquad (2.28) $$
and
$$ M(t_7) = \frac{S_y^4}{n}\left[W_3^2(n + \lambda_{40}^*) + W_4^2\lambda_{04}^*\frac{S_x^4}{S_y^4} - 2W_3 W_4\lambda_{22}^*\frac{S_x^2}{S_y^2} - 2W_3 n + n\right]. \qquad (2.29) $$

The optimum values of $W_3$ and $W_4$ after minimizing $M(t_7)$ are
$$ W_3 = \frac{n\lambda_{04}^*}{\lambda_{04}^*(n + \lambda_{40}^*) - \lambda_{22}^{*2}} = W_3^*\ \text{(say)} $$
and
$$ W_4 = \frac{n\lambda_{22}^*\left(\dfrac{S_y}{S_x}\right)^2}{\lambda_{04}^*(n + \lambda_{40}^*) - \lambda_{22}^{*2}} = W_4^*\ \text{(say)}. $$
Substituting these optimum values in (2.29), we get the optimum MSE given by
$$ M^*(t_7) = \frac{S_y^4}{n}\left[\frac{n(\lambda_{40}^*\lambda_{04}^* - \lambda_{22}^{*2})}{\lambda_{04}^*(n + \lambda_{40}^*) - \lambda_{22}^{*2}}\right]. \qquad (2.30) $$

(vi) Prasad and Singh estimator

Prasad and Singh (1992) introduced the following estimator

$$ t_8 = \alpha_8\left(\frac{s_y^2 S_x^2}{s_x^2}\right), \qquad (2.31) $$

where α8 is a Searls (1964) constant to be determined later.

The bias and MSE of $t_8$, to the first degree of approximation, are given by
$$ B(t_8) = \frac{S_y^2}{n}\left[\alpha_8(n + \lambda_{04}^* - \lambda_{22}^*) - n\right] \qquad (2.32) $$
and
$$ M(t_8) = \frac{S_y^4}{n}\left[\alpha_8^2(n + \lambda_{40}^* + 3\lambda_{04}^* - 4\lambda_{22}^*) - 2\alpha_8(n + \lambda_{04}^* - \lambda_{22}^*) + n\right]. \qquad (2.33) $$


The MSE of $t_8$ is optimum for $\alpha_8 = \dfrac{n + \lambda_{04}^* - \lambda_{22}^*}{n + \lambda_{40}^* + 3\lambda_{04}^* - 4\lambda_{22}^*} = \alpha_8^*$ (say) and is given by
$$ M^*(t_8) = \frac{S_y^4}{n}\left[n - \frac{(n + \lambda_{04}^* - \lambda_{22}^*)^2}{(n + \lambda_{40}^* + 3\lambda_{04}^* - 4\lambda_{22}^*)}\right]. \qquad (2.34) $$
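A sketch (ours, with hypothetical function names) of the Prasad and Singh (1992) optimum Searls constant $\alpha_8^*$ and the minimum MSE (2.34); `S2y` denotes $S_y^2$.

```python
def t8_optimum_alpha(n, lam40_star, lam04_star, lam22_star):
    """Optimum Searls constant alpha_8^* for t8 of (2.31)."""
    num = n + lam04_star - lam22_star
    den = n + lam40_star + 3 * lam04_star - 4 * lam22_star
    return num / den

def t8_min_mse(S2y, n, lam40_star, lam04_star, lam22_star):
    """Minimum MSE of t8, eq. (2.34)."""
    num = n + lam04_star - lam22_star
    den = n + lam40_star + 3 * lam04_star - 4 * lam22_star
    return (S2y**2 / n) * (n - num**2 / den)
```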

(vii) Garcia and Cebrian estimator

Garcia and Cebrian (1996) introduced the following estimator

$$ t_9 = s_y^2\left(\frac{S_x^2}{s_x^2}\right)^{\alpha_9}, \qquad (2.35) $$

where α9 is a constant to be determined later.

The bias and MSE of $t_9$, to the first degree of approximation, are given by
$$ B(t_9) = \frac{S_y^2}{n}\left[\frac{\alpha_9(\alpha_9 + 1)}{2}\lambda_{04}^* - \alpha_9\lambda_{22}^*\right] \qquad (2.36) $$
and
$$ M(t_9) = \frac{S_y^4}{n}\left[\lambda_{40}^* + \alpha_9^2\lambda_{04}^* - 2\alpha_9\lambda_{22}^*\right]. \qquad (2.37) $$

The MSE of $t_9$ is optimum for $\alpha_9 = \dfrac{\lambda_{22}^*}{\lambda_{04}^*} = \alpha_9^*$ (say) and is given by
$$ M^*(t_9) = \frac{S_y^4}{n}\left[\lambda_{40}^* - \frac{\lambda_{22}^{*2}}{\lambda_{04}^*}\right]. \qquad (2.38) $$

(viii) Upadhyaya and Singh estimator

Upadhyaya and Singh (2001) suggested the following estimator

$$ \text{(a)}\quad t_{10} = s_y^2 + \alpha_{10}(\bar{X} - \bar{x}), \qquad (2.39) $$

where α10 is a constant to be determined later.

The bias and MSE of $t_{10}$, to the first degree of approximation, are given below:
$$ B(t_{10}) = 0 \qquad (2.40) $$
and
$$ M(t_{10}) = \frac{S_y^4}{n}\left[\lambda_{40}^* + \alpha_{10}^2\frac{S_x^2}{S_y^4} - 2\alpha_{10}\frac{S_x}{S_y^2}\lambda_{21}\right]. \qquad (2.41) $$

The MSE of $t_{10}$ is optimum for $\alpha_{10} = \lambda_{21}\dfrac{S_y^2}{S_x} = \alpha_{10}^*$ (say) and is given by
$$ M^*(t_{10}) = \frac{S_y^4}{n}\left[\lambda_{40}^* - \lambda_{21}^2\right]. \qquad (2.42) $$


Upadhyaya and Singh (2001) suggested another estimator

$$ \text{(b)}\quad t_{11} = s_y^2 + \alpha_{11}(S_x^2 - s_x^2), \qquad (2.43) $$

where α11 is a constant to be determined later.

The bias and MSE of $t_{11}$, to the first degree of approximation, are given by
$$ B(t_{11}) = 0 \qquad (2.44) $$
and
$$ M(t_{11}) = \frac{S_y^4}{n}\left[\lambda_{40}^* + \alpha_{11}^2\frac{S_x^4}{S_y^4}\lambda_{04}^* - 2\alpha_{11}\frac{S_x^2}{S_y^2}\lambda_{22}^*\right]. \qquad (2.45) $$

The MSE of $t_{11}$ is optimum for $\alpha_{11} = \dfrac{\lambda_{22}^*}{\lambda_{04}^*}\dfrac{S_y^2}{S_x^2} = \alpha_{11}^*$ (say) and is given by
$$ M^*(t_{11}) = \frac{S_y^4}{n}\left[\lambda_{40}^* - \frac{\lambda_{22}^{*2}}{\lambda_{04}^*}\right]. \qquad (2.46) $$

Upadhyaya and Singh (2001) considered yet another estimator given by

$$ \text{(c)}\quad t_{12} = s_y^2\left(\frac{\bar{X}}{\bar{x}}\right). \qquad (2.47) $$

The bias and MSE of $t_{12}$, to the first degree of approximation, are given by
$$ B(t_{12}) = \frac{S_y^2}{n}\left[C_x^2 - \lambda_{21}C_x\right] \qquad (2.48) $$
and
$$ M(t_{12}) = \frac{S_y^4}{n}\left[\lambda_{40}^* + C_x^2 - 2\lambda_{21}C_x\right]. \qquad (2.49) $$

3. Proposed estimator

Following Searls (1964), we propose an estimator which is given by

$$ t_P = \lambda\, t_m, \qquad (3.1) $$

where λ is the Searls (1964) constant whose value is to be determined later.

Here $t_m$ is a combination of the Singh et al. (1973) and Prasad and Singh (1992) estimators and is defined as
$$ t_m = K_1 s_y^2 + K_2 s_y^2\left(\frac{S_x^2}{s_x^2}\right), \qquad (3.2) $$
where $K_1$, $K_2$ are weights such that $K_1 + K_2 = 1$.
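A minimal sketch (ours, not from the paper) of the combined estimator (3.2) and the proposed estimator (3.1); the function names are hypothetical, and `K2` and `lam` are to be supplied, e.g. at the optimum values derived below.

```python
def t_m(s2y, s2x, S2x, K2):
    """Combined estimator t_m of (3.2), with K1 = 1 - K2."""
    K1 = 1.0 - K2
    return K1 * s2y + K2 * s2y * (S2x / s2x)

def t_P(s2y, s2x, S2x, K2, lam):
    """Proposed estimator t_P = lambda * t_m of (3.1)."""
    return lam * t_m(s2y, s2x, S2x, K2)
```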


From (3.2), we have
$$ t_m = K_1 S_y^2(1 + \delta_0) + K_2 S_y^2(1 + \delta_0)(1 + \delta_1)^{-1}, $$
$$ t_m - S_y^2 = S_y^2\delta_0 + K_2 S_y^2(-\delta_1 - \delta_0\delta_1 + \delta_1^2) + \text{higher order terms}. \qquad (3.3) $$
Simplifying (3.3) after ignoring the higher order terms, we get the bias of $t_m$ as
$$ B(t_m) = E(t_m - S_y^2) = \frac{S_y^2}{n}\left[K_2(\lambda_{04}^* - \lambda_{22}^*)\right]. \qquad (3.4) $$

Similarly, the MSE of $t_m$ is given by
$$ M(t_m) = \frac{S_y^4}{n}\left[\lambda_{40}^* + K_2^2\lambda_{04}^* - 2K_2\lambda_{22}^*\right]. \qquad (3.5) $$
Differentiating (3.5) with respect to $K_2$ and equating to zero, we get
$$ K_2 = \frac{\lambda_{22}^*}{\lambda_{04}^*} = K_2^*\ \text{(say)}. $$
Substituting the optimum value of $K_2$ in (3.5), we get
$$ M^*(t_m) = \frac{S_y^4}{n}\left(\lambda_{40}^* - \frac{\lambda_{22}^{*2}}{\lambda_{04}^*}\right). \qquad (3.6) $$

The expression in (3.6) is equal to the variance of the linear regression estimator for the population variance, $S_{lr}^2 = s_y^2 + b(S_x^2 - s_x^2)$, where $b = \dfrac{s_y^2\lambda_{22}^*}{s_x^2\lambda_{04}^*}$ is the sample regression coefficient. By (3.1) and (3.3), the proposed estimator becomes
$$ t_P = \lambda\left[S_y^2\{1 + \delta_0 + K_2(-\delta_1 - \delta_0\delta_1 + \delta_1^2)\}\right], $$
$$ t_P - S_y^2 = S_y^2\left[(\lambda - 1) + \lambda\{\delta_0 + K_2(-\delta_1 - \delta_0\delta_1 + \delta_1^2)\}\right]. \qquad (3.7) $$

Solving (3.7) to the first order of approximation, the bias of $t_P$ is given by
$$ B(t_P) = \frac{S_y^2}{n}\left[\lambda\{n + K_2(\lambda_{04}^* - \lambda_{22}^*)\} - n\right]. \qquad (3.8) $$

From (3.7), the MSE of $t_P$ is given by
$$ M(t_P) = E(t_P - S_y^2)^2 = S_y^4\, E\left[(\lambda - 1) + \lambda\{\delta_0 + K_2(-\delta_1 - \delta_0\delta_1 + \delta_1^2)\}\right]^2, $$
$$ M(t_P) = \frac{S_y^4}{n}\left[\lambda^2(n + \lambda_{40}^* + K_2^2\lambda_{04}^* + 2K_2\lambda_{04}^* - 4K_2\lambda_{22}^*) - 2\lambda(n + K_2\lambda_{04}^* - K_2\lambda_{22}^*) + n\right]. \qquad (3.9) $$


Setting $\dfrac{\partial M(t_P)}{\partial\lambda} = 0$, we get
$$ \lambda = \frac{n + K_2(\lambda_{04}^* - \lambda_{22}^*)}{n + \lambda_{40}^* + K_2^2\lambda_{04}^* + 2K_2\lambda_{04}^* - 4K_2\lambda_{22}^*} = \lambda^*\ \text{(say)}. \qquad (3.10) $$

Substituting (3.10) in (3.1) yields the optimum estimator (OE), which is given by
$$ t_P^{(0)} = \left[\frac{n + K_2(\lambda_{04}^* - \lambda_{22}^*)}{n + \lambda_{40}^* + K_2^2\lambda_{04}^* + 2K_2\lambda_{04}^* - 4K_2\lambda_{22}^*}\right] t_m. \qquad (3.11) $$
Now (3.11) can be used if the parameters $\lambda_{04}$, $\lambda_{40}$ and $\lambda_{22}$ are known. These are stable quantities, as discussed by Murthy (1967) and Sahai and Sahai (1985). The constant $K_2$ can also be fixed by the experimenter.

Substituting (3.10) in (3.9), we get the optimum MSE of $t_P$, i.e. the MSE of $t_P^{(0)}$, which is given by
$$ M(t_P^{(0)}) = \frac{S_y^4}{n}\left[n - \frac{\{n + K_2(\lambda_{04}^* - \lambda_{22}^*)\}^2}{n + \lambda_{40}^* + K_2^2\lambda_{04}^* + 2K_2(\lambda_{04}^* - 2\lambda_{22}^*)}\right]. \qquad (3.12) $$

Now, substituting $K_2 = \dfrac{\lambda_{22}^*}{\lambda_{04}^*} = K_2^*$ (say) in (3.11), we get the estimator $t_P^{(0)*}$, which is given by
$$ t_P^{(0)*} = \frac{n + \lambda_{22}^* - (\lambda_{22}^{*2}/\lambda_{04}^*)}{n + \lambda_{40}^* + 2\lambda_{22}^* - 3(\lambda_{22}^{*2}/\lambda_{04}^*)}\; t_m^{(0)}, \qquad (3.13) $$
where
$$ t_m^{(0)} = K_1^* s_y^2 + K_2^* s_y^2\left(\frac{S_x^2}{s_x^2}\right). \qquad (3.14) $$

The optimum MSE of $t_P^{(0)*}$ is given by
$$ M(t_P^{(0)*}) = \frac{S_y^4}{n}\left[n - \frac{\left\{n + \lambda_{22}^* - \dfrac{\lambda_{22}^{*2}}{\lambda_{04}^*}\right\}^2}{n + \lambda_{40}^* + 2\lambda_{22}^* - 3\dfrac{\lambda_{22}^{*2}}{\lambda_{04}^*}}\right]. \qquad (3.15) $$
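A sketch (ours, with hypothetical function names) of the optimum Searls constant (3.10) evaluated at $K_2 = K_2^* = \lambda_{22}^*/\lambda_{04}^*$ and of the minimum MSE (3.15); `S2y` denotes $S_y^2$.

```python
def lambda_star_at_K2_opt(n, lam40_star, lam04_star, lam22_star):
    """Searls constant lambda^* of (3.10) with K2 set to its optimum value."""
    K2 = lam22_star / lam04_star
    num = n + K2 * (lam04_star - lam22_star)
    den = (n + lam40_star + K2**2 * lam04_star
           + 2 * K2 * lam04_star - 4 * K2 * lam22_star)
    return num / den

def min_mse_tP_opt(S2y, n, lam40_star, lam04_star, lam22_star):
    """Minimum MSE of t_P^(0)*, eq. (3.15); A and B match Section 4."""
    A = n + lam22_star - lam22_star**2 / lam04_star
    B = n + lam40_star + 2 * lam22_star - 3 * lam22_star**2 / lam04_star
    return (S2y**2 / n) * (n - A**2 / B)
```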

4. Efficiency comparison

The comparison of the estimator $t_P^{(0)*}$ with the other estimators $t_i$ $(i = 0, 1, \ldots, 12)$ discussed in Section 2 can be made with respect to $t_0$. We can find


the relative efficiency (RE) of $t_j$ as
$$ \mathrm{RE} = \frac{M(t_0)}{M(t_j)\ \text{or}\ M^*(t_j)}\times 100, \qquad j = 1, 2, \ldots, 12,\ P^{(0)*}. $$

Define:
$$ A = n + \lambda_{22}^* - \frac{\lambda_{22}^{*2}}{\lambda_{04}^*}; \qquad B = n + \lambda_{40}^* + 2\lambda_{22}^* - 3\frac{\lambda_{22}^{*2}}{\lambda_{04}^*}; $$
$$ C = \frac{n\lambda_{40}^*}{(n + \lambda_{40}^*)}; \qquad D = n + \lambda_{04}^* - \lambda_{22}^*; $$
$$ E = n + \lambda_{40}^* + 3\lambda_{04}^* - 4\lambda_{22}^*; \qquad F = n(\lambda_{40}^* - \lambda_{21}^2); $$
$$ G = (n + \lambda_{40}^* - \lambda_{21}^2); \qquad H = n(\lambda_{04}^*\lambda_{40}^* - \lambda_{22}^{*2}); $$
$$ I = \lambda_{04}^*(n + \lambda_{40}^*) - \lambda_{22}^{*2}. $$

It is easy to verify the following conditions under which the proposed estimator will be better than the other estimators discussed earlier (a numerical check is sketched after the list).

Conditions:

(i) $M(t_P^{(0)*}) < M(t_0)$ if $\lambda_{40}^* + (A^2/B) - n > 0$.

(ii) $M(t_P^{(0)*}) < M^*(t_1)$ if $C + (A^2/B) - n > 0$.

(iii) $M(t_P^{(0)*}) < M^*(t_k)$ $(k = 2, 3, 10)$ if $\lambda_{40}^* + (A^2/B) - (n + \lambda_{21}^2) > 0$.

(iv) $M(t_P^{(0)*}) < M^*(t_l)$ $(l = 4, 9, 11)$ if $\lambda_{40}^* + (A^2/B) - (n + (\lambda_{22}^{*2}/\lambda_{04}^*)) > 0$.

(v) $M(t_P^{(0)*}) < M(t_5)$ if $\lambda_{40}^* + \lambda_{04}^* + (A^2/B) - (n + 2\lambda_{22}^*) > 0$.

(vi) $M(t_P^{(0)*}) < M^*(t_6)$ if $(F/G) + (A^2/B) - n > 0$.

(vii) $M(t_P^{(0)*}) < M^*(t_7)$ if $(H/I) + (A^2/B) - n > 0$.

(viii) $M(t_P^{(0)*}) < M^*(t_8)$ if $(A^2/B) - (D^2/E) > 0$.

(ix) $M(t_P^{(0)*}) < M(t_{12})$ if $\lambda_{40}^* + C_x^2 + (A^2/B) - (n + 2\lambda_{21}C_x) > 0$.
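The nine left-hand sides above can be evaluated numerically. The following sketch (ours, with a hypothetical function name) returns them from the summary constants; for instance, with the Data 1 constants of Section 5 the returned values agree with the first column of Table 3 up to rounding.

```python
def check_conditions(n, lam40s, lam04s, lam22s, lam21, Cx):
    """Left-hand sides of conditions (i)-(ix); each must be positive."""
    A = n + lam22s - lam22s**2 / lam04s
    B = n + lam40s + 2 * lam22s - 3 * lam22s**2 / lam04s
    C = n * lam40s / (n + lam40s)
    D = n + lam04s - lam22s
    E = n + lam40s + 3 * lam04s - 4 * lam22s
    F = n * (lam40s - lam21**2)
    G = n + lam40s - lam21**2
    H = n * (lam04s * lam40s - lam22s**2)
    I = lam04s * (n + lam40s) - lam22s**2
    AB = A**2 / B
    return {
        "(i)":    lam40s + AB - n,
        "(ii)":   C + AB - n,
        "(iii)":  lam40s + AB - (n + lam21**2),
        "(iv)":   lam40s + AB - (n + lam22s**2 / lam04s),
        "(v)":    lam40s + lam04s + AB - (n + 2 * lam22s),
        "(vi)":   F / G + AB - n,
        "(vii)":  H / I + AB - n,
        "(viii)": AB - D**2 / E,
        "(ix)":   lam40s + Cx**2 + AB - (n + 2 * lam21 * Cx),
    }
```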

5. Description of data

For comparison, we consider the following seven data sets from various sources.

Data 1 ([Cochran (1977), p. 325]).

Y : number of persons per block, X : number of rooms per block,

$S_y^2 = 214.69$, $S_x^2 = 56.76$, $\lambda_{40}^* = 1.2387$, $\lambda_{04}^* = 1.3523$, $\lambda_{22}^* = 0.5432$, $\lambda_{21} = 0.4536$, $C_x = 0.1281$, $C_y = 0.1450$, $\bar{X} = 58.8$, $\rho = 0.6515$, $n = 10$.


Data 2 ([Cochran (1977), p. 152]).

Y : number of inhabitants in 1930, X : number of inhabitants in 1920,

$S_y^2 = 16447.18$, $S_x^2 = 11829.11$, $\lambda_{40}^* = 4.3177$, $\lambda_{04}^* = 3.8078$, $\lambda_{22}^* = 4.045$, $\lambda_{21} = 1.8274$, $C_x = 0.988$, $C_y = 0.9504$, $\bar{X} = 114.625$, $\rho = 0.9931$, $n = 16$.

Data 3 ([Cochran (1977), p. 203]).

Y : actual weight of peaches on each tree,

X : eye estimate of weight of peaches on each tree,

$S_y^2 = 99.81$, $S_x^2 = 85.09$, $\lambda_{40}^* = 0.9249$, $\lambda_{04}^* = 1.5932$, $\lambda_{22}^* = 1.1149$, $\lambda_{21} = 0.1875$, $C_x = 0.1621$, $C_y = 0.1840$, $\bar{X} = 56.9$, $\rho = 0.9937$, $n = 10$.

Data 4 ([Sukhatme and Sukhatme (1970), p. 185]).

Y : wheat acreage in 1937, X : wheat acreage in 1936,

$S_y^2 = 26456.89$, $S_x^2 = 22355.76$, $\lambda_{40}^* = 2.1842$, $\lambda_{04}^* = 1.2030$, $\lambda_{22}^* = 1.5597$, $\lambda_{21} = 0.6665$, $C_x = 0.5625$, $C_y = 0.6163$, $\bar{X} = 265.8$, $\rho = 0.977$, $n = 10$.

Data 5 ([Upadhyaya and Singh (2001)]).

Y : census population in year 1971, X : census population in year 1961,

$S_y^2 = 71899173.02$, $S_x^2 = 40608000.69$, $\lambda_{40}^* = 39.8536$, $\lambda_{04}^* = 47.1567$, $\lambda_{22}^* = 42.7615$, $\lambda_{21} = 5.9786$, $C_x = 2.1971$, $C_y = 2.1118$, $\bar{X} = 2900.4$, $\rho = 0.9948$, $n = 142$.

Data 6 ([Singh et al. (1988)]).

Y : the number of agriculture laborers for 1971,

X : the number of agriculture laborers for 1961,

$S_y^2 = 3187.30$, $S_x^2 = 1654.40$, $\lambda_{40}^* = 23.8969$, $\lambda_{04}^* = 36.8898$, $\lambda_{22}^* = 24.8142$, $\lambda_{21} = 3.4347$, $C_x = 1.6198$, $C_y = 1.4451$, $\bar{X} = 25.111$, $\rho = 0.7273$, $n = 30$.

Data 7 ([Singh (2003)]).

Y: amount (in $1000) of nonreal estate farm loans in different states during 1997,

X: amount (in $1000) of real estate farm loans in different states during 1997.

$S_y^2 = 342021.5$, $S_x^2 = 1176526$, $\lambda_{40}^* = 2.5822$, $\lambda_{04}^* = 3.5247$, $\lambda_{22}^* = 1.8411$, $\lambda_{21} = 0.9387$, $C_x = 1.2352$, $C_y = 1.0529$, $\bar{X} = 878.16$, $\rho = 0.8038$, $n = 8$.


Using the above data sets, the REs of the different estimators are given in Table 1. Table 2 gives the optimum values of the constants used in the different estimators, and Table 3 gives the verification of the various conditions.

Table 1
Relative efficiencies (in percentage) of different estimators with respect to t0

Estimator            Data 1    Data 2      Data 3    Data 4     Data 5     Data 6    Data 7
t0                   100.00    100.00      100.00    100.00     100.00     100.00    100.00
t1                   112.39    126.99      109.25    121.84     128.07     179.66    132.28
tk (k = 2, 3, 10)    119.92    441.34      103.95    125.53     969.69     197.50    151.80
tl (l = 4, 9, 11)    121.38    20834.24    639.15    1347.98    3698.20    331.90    159.34
t5                    82.33    12162.54    320.81     815.61    2679.59    214.32    106.50
t6                   132.31    468.33      113.20     147.37     997.75    277.16    184.08
t7                   133.77    20861.23    648.40    1369.82    3726.26    411.55    191.62
t8                   112.95    13129.45    391.84     818.14    3162.84    825.50    215.02
t12                  108.76    256.56      103.88     124.75     216.48    155.24    144.34
tP(0)*               143.14    24990.12    749.18    1434.48    4389.51    851.15    241.03

In Table 1, the estimator $t_5$ is inferior for Data 1, but its efficiency is much better for the other data sets. The efficiency of the proposed estimator $t_P^{(0)*}$ is much better than that of the other estimators for all data sets.
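The Table 1 figures can be reproduced from the summary constants of Section 5. The following quick check (ours) evaluates the RE of $t_P^{(0)*}$ for Data 1, using $M(t_0) = S_y^4\lambda_{40}^*/n$ and (3.15); $S_y^4$ cancels in the ratio.

```python
# Data 1 constants from Section 5
n, lam40s, lam04s, lam22s = 10, 1.2387, 1.3523, 0.5432

A = n + lam22s - lam22s**2 / lam04s
B = n + lam40s + 2 * lam22s - 3 * lam22s**2 / lam04s
re_tP = lam40s / (n - A**2 / B) * 100

print(round(re_tP, 2))   # approximately 143.14, matching the Table 1 entry
```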

Table 2
Optimum values of various constants used in different estimators

Constant       Data 1    Data 2    Data 3    Data 4     Data 5      Data 6    Data 7
α1*             0.89      0.79      0.91      0.82        0.78       0.56      0.76
α2* = α3*       3.54      1.85      1.16      1.18        2.72       2.12      0.76
α4* = α9*       0.41      1.06      0.70      1.30        0.91       0.67      0.52
α8*             0.82      1.01      0.93      1.01        0.96       0.64      0.70
α10*           12.93    276.34      2.03    117.94    67455.53     269.15    295.99
α11*            1.52      1.48      0.82      1.53        0.82       0.78      0.96
W1*             0.91      0.94      0.92      0.85        0.97       0.71      0.82
W2*            11.72    260.42      1.86    100.46    65558.07     191.79    244.09
W3*             0.91      1.00      0.99      0.98        0.99       0.81      0.83
W4*             1.38      1.47      0.81      1.51        1.59       1.04      0.13
λ*              0.88      1.01      0.95      1.03        0.97       0.71      0.78


Table 3
Verification of the conditions derived in the efficiency comparison

Condition     Data 1    Data 2    Data 3    Data 4    Data 5    Data 6    Data 7
(i)            0.373     4.300     0.801     2.032    38.945    21.089     1.511
(ii)           0.234     3.383     0.723     1.640    30.212    10.494     0.881
(iii)          0.167     0.961     0.766     1.587     3.202     9.292     0.629
(iv)           0.155     0.003     0.021     0.010     0.169     4.392     0.549
(v)            0.639     0.018     0.165     0.115     0.579     8.343     1.354
(vi)           0.071     0.905     0.694     1.330     3.086     5.815     0.331
(vii)          0.061     0.003     0.019     0.007     0.162     2.999     0.276
(viii)         0.231     0.016     0.113     0.115     0.352     0.084     0.129
(ix)           0.273     1.665     0.767     1.598    17.502    12.586     0.717

6. Conclusion

From Table 1, it is observed that the estimator $t_P^{(0)*}$ is more efficient than all of the other estimators $t_i$ $(i = 0, 1, \ldots, 12)$ for all data sets. The estimators $t_l$ $(l = 4, 9, 11)$, $t_7$ and $t_8$ also perform well, but are not better than $t_P^{(0)*}$. According to Table 3, all of the conditions derived in Section 4 for the superiority of $t_P^{(0)*}$ hold comfortably for all data sets, indicating that the proposed estimator may prove better in most situations.

References

[1] R. S. Biradar and H. P. Singh (1998), Predictive estimators of finite population variance, Calcutta Statistical Association Bulletin, Vol. 48, pp. 229–235.

[2] W. G. Cochran (1977), Sampling Techniques, John Wiley and Sons, New York.

[3] A. A. Cebrian and R. M. Garcia (1997), Variance estimation using auxiliary information: an almost multivariate ratio estimator, Metrika, Vol. 45, pp. 171–178.

[4] A. K. Das and T. P. Tripathy (1978), Use of auxiliary information in estimating the finite population variance, Sankhya, C, pp. 139–148.

[5] R. M. Garcia and A. A. Cebrian (1996), Repeat substitution method: the ratio estimator for the population variance, Metrika, Vol. 43, pp. 101–105.

[6] S. N. Gandge, S. G. Prabhu and Ajgaonkar (1991), The Searl's technique in regression method of estimation, Metrika, Vol. XLIX (1–4), pp. 255–262.

[7] C. T. Isaki (1983), Variance estimation using auxiliary information, Journal of the American Statistical Association, Vol. 78, pp. 117–123.

[8] M. N. Murthy (1967), Sampling Theory and Methods, Calcutta Statistical Publishing Society, Calcutta.

[9] B. Prasad and H. P. Singh (1990), Some improved ratio-type estimators of finite population variance in sample surveys, Communications in Statistics: Theory and Methods, Vol. 19 (3), pp. 1127–1139.

[10] B. Prasad and H. P. Singh (1992), Unbiased estimators of finite population variance using auxiliary information in sample surveys, Communications in Statistics: Theory and Methods, Vol. 21 (5), pp. 1367–1376.

[11] A. Sahai and A. Sahai (1985), On efficient use of auxiliary information, Journal of Statistical Planning and Inference, Vol. 12, pp. 203–212.

[12] D. T. Searls (1964), Utilization of known coefficient of kurtosis in the estimation procedure of variance, Journal of the American Statistical Association, Vol. 59, pp. 1225–1226.

[13] H. P. Singh, L. N. Upadhyaya and U. D. Namjoshi (1988), Estimation of finite population variance, Current Science, Vol. 57 (24), pp. 1331–1334.

[14] J. Singh, B. N. Pandey and K. Hirano (1973), On the utilization of known coefficient of kurtosis in the estimation procedure of variance, Annals of the Institute of Statistical Mathematics, Vol. 21, pp. 51–55.

[15] S. Singh (2003), Advanced Sampling Theory with Applications, Kluwer Academic Press.

[16] P. V. Sukhatme and B. V. Sukhatme (1970), Sampling Theory of Surveys with Applications, Asia Publishing House, New Delhi.

[17] S. K. Srivastava and H. S. Jhajj (1980), A class of estimators using auxiliary information to estimate the population variance, Sankhya, Vol. 42, C, pp. 87–96.

[18] S. K. Srivastava and H. S. Jhajj (1983), Class of estimators of mean and variance using auxiliary information when correlation coefficient is known, Biometrical Journal, Vol. 25 (4), pp. 401–409.

[19] D. S. Tracy and H. P. Singh (1999), An improved class of estimators for finite population mean in sample survey, Biometrical Journal, Vol. 41 (7), pp. 890–895.

[20] L. N. Upadhyaya and H. P. Singh (1983), Use of auxiliary information in estimating the population variance, Mathematics Forum, Vol. 6 (2), pp. 33–36.

[21] L. N. Upadhyaya and H. P. Singh (2001), Estimation of the population standard deviation using auxiliary information, American Journal of Mathematics and Management Sciences, Vol. 21 (3-4), pp. 345–358.
