
Journal of Applied Statistics, Vol. 39, No. 11, November 2012, 2389–2411

Estimating the parameters of a bathtub-shaped distribution under progressive type-II censoring

Manoj Kumar Rastogi^a, Yogesh Mani Tripathi^a and Shuo-Jye Wu^b,*

^a Department of Mathematics, Indian Institute of Technology Patna, Patna 800013, India; ^b Department of Statistics, Tamkang University, Tamsui, New Taipei City 25137, Taiwan

*Corresponding author. Email: [email protected]

(Received 30 January 2012; final version received 8 July 2012)

We consider the problem of estimating the unknown parameters, the reliability function and the hazard function of a two-parameter bathtub-shaped distribution on the basis of a progressively type-II censored sample. Maximum likelihood estimators and Bayes estimators are derived for the two unknown parameters, the reliability function and the hazard function. The Bayes estimators are obtained against squared error, LINEX and entropy loss functions, and approximate Bayes estimators against these loss functions are also obtained using the Lindley approximation method. Numerical comparisons are made among the various proposed estimators in terms of their mean square error values, and specific recommendations are given. Finally, two data sets are analyzed to illustrate the proposed methods.

Keywords: Bayes estimator; hazard function; Lindley approximation method; maximum likelihood method; reliability function

1. Introduction

In various life-testing and reliability studies, experiments are usually terminated before the failure times of all items are observed. Consequently, complete information on failure times cannot be obtained for all items. Such situations arise during experimentation because of the removal or loss of items before they actually fail. In general, however, such experiments are purposeful and preplanned in order to save the time and cost associated with testing. Data obtained through such experiments are called censored data. The two well-known censoring schemes are type-I and type-II censoring. Consider an experiment in which n items are placed on a test. In the type-I censoring case, the data consist of the lifetimes of items that failed up to a time T0, whereas in the type-II censoring case, the data consist of the lifetimes of the r ≤ n (r pre-specified) items that failed first; indeed, in this censoring scheme only the r smallest lifetimes are observed. In the type-I censoring scheme, the time T0 is prefixed and the number of failures observed up to T0 is random.




In the type-II censoring scheme, the situation is the opposite in the sense that the number of failures observed is fixed, while the time at which the experiment is terminated is random. There exists a vast literature dealing with inferential problems under type-I and type-II censoring for several parametric statistical distributions. We refer to Lawless [14], Sinha [23] and Balakrishnan and Cohen [4] for a detailed discussion in this direction.

The usual type-I and type-II censoring schemes do not allow for the removal of items other than at the terminal point of the experiment. Cohen [7,8] was one of the earliest to study a more general censoring scheme in which removal of items is also allowed in between. This general scheme is referred to as the progressive type-II censoring scheme. Suppose that n units are placed on a life-testing experiment. Under the progressive censoring scheme, a random sample of size m (1 ≤ m ≤ n) of observed lifetimes is obtained in the following m steps. At the first failure instant, r1 of the n − 1 surviving items are randomly removed from the experiment. Next, at the second failure instant, r2 of the n − r1 − 2 surviving items are randomly removed from the experiment. Likewise, the experiment continues until the mth failure is observed. At this time, the remaining rm = n − m − r1 − · · · − rm−1 surviving items are removed from the experiment. We refer to this as the progressive type-II censoring scheme (r1, . . . , rm).

Notice that in the progressive type-II censoring scheme the ri's are fixed before the commencement of the experiment. It is also easily observed that if r1 = · · · = rm−1 = rm = 0, then m = n and this corresponds to the complete-sample case, whereas if r1 = · · · = rm−1 = 0, then rm = n − m and this corresponds to the conventional type-II right-censoring scheme. Thus, complete sampling and type-II right censoring are special cases of the progressive type-II censoring scheme. Statistical inference on the parameters of different distributions under progressively censored samples has been studied by several researchers; for example, we refer to Cohen [7], Cohen and Norgaard [9], Balakrishnan and Aggarwala [3], Ng [20], Balakrishnan et al. [5], Lin et al. [15], Balakrishnan [2] and Madi and Raqab [17].

Based on a progressively type-II censored sample, we consider the problem of estimating the parameters of a bathtub-shaped distribution proposed by Chen [6], under both the classical and the Bayesian frameworks. The probability density function (PDF) and cumulative distribution function (CDF) of a random variable X having a two-parameter bathtub-shaped distribution are, respectively, given by

$$f_X(x) = \lambda\beta x^{\beta-1}\exp\{\lambda(1 - e^{x^{\beta}}) + x^{\beta}\}, \quad x > 0, \qquad (1)$$

and

$$F_X(x) = 1 - \exp\{\lambda(1 - e^{x^{\beta}})\}, \quad x > 0, \qquad (2)$$

where λ > 0 and β > 0. The reliability function of this distribution is R(t) = exp{λ(1 − e^{t^β})}, t > 0, and the hazard function is H(t) = λβ t^{β−1} e^{t^β}, t > 0. The parameter β is commonly known as the shape parameter of fX(x). The parameter λ does not affect the shape of H(t), whereas β does: when β < 1, H(t) is bathtub-shaped; otherwise the distribution has an increasing hazard function. The class of lifetime distributions with bathtub-shaped hazard rate is quite useful in practice, as the lifetimes of certain electronic and mechanical products have such characteristics. Several probability models with bathtub-shaped failure rate functions have been proposed and studied in the literature. Among others, we refer to Nadarajah and Kotz [19], Gurvich et al. [10], Hjorth [11], Rajarshi and Rajarshi [21], Mudholkar and Srivastava [18], Wang [24], Reed [22], Jiang et al. [12], Xie et al. [27] and the references cited therein for detailed reviews of bathtub-shaped distributions and for some important inferential results.
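As a quick check (this short derivation is ours, not part of the original text), the reliability and hazard functions quoted above follow directly from Equations (1) and (2):

$$R(t) = 1 - F_X(t) = \exp\{\lambda(1 - e^{t^{\beta}})\}, \qquad H(t) = \frac{f_X(t)}{1 - F_X(t)} = \frac{\lambda\beta t^{\beta-1}\exp\{\lambda(1 - e^{t^{\beta}}) + t^{\beta}\}}{\exp\{\lambda(1 - e^{t^{\beta}})\}} = \lambda\beta t^{\beta-1} e^{t^{\beta}}, \quad t > 0.$$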

Chen [6] considered the interval estimation problem when a type-II censored sample is available from fX(x). The author provided an exact confidence interval for the parameter β and an exact confidence region for the parameters β and λ, together with a numerical example to support the result.


Wu et al. [25] proposed an exact confidence interval for β under type-II censored data. Wu [26] took up the problem of estimating the parameters of a bathtub-shaped distribution under a progressively type-II censored sample. In that article, the author obtained the maximum likelihood estimates (MLEs) of the parameters β and λ, and established an exact confidence interval for β as well as a joint confidence region for β and λ. A numerical example was given to show the superiority of the proposed confidence intervals over those obtained using normal approximations. Kim et al. [13] investigated the problem of estimating the parameters of the exponentiated Weibull distribution under a progressively type-II censored sample. The CDF of the model under consideration has the form $(1 - e^{-x^{\alpha}})^{\beta}$, x > 0, where α > 0 and β > 0. The authors derived Bayes estimators under squared error and LINEX loss functions by assuming conjugate priors for the parameters α and β. The Bayes estimator of the reliability function was also obtained. Finally, the performance of the maximum likelihood estimators and the Bayes estimators was compared numerically in terms of root mean square error (MSE), and specific recommendations were given.

The rest of this article is organized as follows. The MLEs of the unknown parameters λ, β, R(t) and H(t) are derived in Section 2. The corresponding Bayes estimators are obtained in Section 3 under different loss functions, namely squared error, LINEX and entropy. These Bayes estimators are also derived using the Lindley approximation method in Section 4. In Section 5, a numerical comparison is made between the various estimators in terms of their MSE values and some comments are made. In Section 6, two examples are presented to illustrate the proposed methods. Finally, some conclusions are given in Section 7.

2. Maximum likelihood estimation

Suppose that n independent units taken from a population are placed on a test, with the corresponding lifetimes being identically distributed having PDF and CDF as defined in Equations (1) and (2), respectively. Let X = (X1:m:n, . . . , Xm:m:n) be a progressively type-II censored sample taken from fX(x) with the censoring scheme (r1, . . . , rm). The joint PDF of a progressively type-II censored sample is (see [3] for details) given by

$$f_{X_{1:m:n},\ldots,X_{m:m:n}}(x_{1:m:n},\ldots,x_{m:m:n}) = C\prod_{i=1}^{m} f_X(x_{i:m:n})\{1 - F_X(x_{i:m:n})\}^{r_i}, \qquad (3)$$

where $C = n(n - r_1 - 1)\cdots(n - r_1 - \cdots - r_{m-1} - m + 1)$. Utilizing Equations (1)–(3), the likelihood function of λ and β is obtained as

$$L(\lambda, \beta \mid x) \propto \lambda^{m}\beta^{m}\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\exp\{\lambda(1 - e^{x_{i:m:n}^{\beta}}) + x_{i:m:n}^{\beta}\}\,[\exp\{\lambda(1 - e^{x_{i:m:n}^{\beta}})\}]^{r_i}, \qquad (4)$$

where x = (x1:m:n, . . . , xm:m:n). The corresponding likelihood equations for λ and β are

$$\frac{\partial \log L}{\partial \lambda} = \frac{m}{\lambda} + s = 0, \qquad (5)$$

and

$$\frac{\partial \log L}{\partial \beta} = \frac{m}{\beta} + \sum_{i=1}^{m}\log x_{i:m:n} + \sum_{i=1}^{m} x_{i:m:n}^{\beta}\log x_{i:m:n}\,[1 - \lambda(1 + r_i)\, e^{x_{i:m:n}^{\beta}}] = 0, \qquad (6)$$

where $s = \sum_{i=1}^{m}(1 + r_i)(1 - e^{x_{i:m:n}^{\beta}})$. The MLEs of the parameters λ and β, say λ̂ and β̂, are the solutions of Equations (5) and (6). Unfortunately, analytic solutions for λ̂ and β̂ are not available, and a numerical technique must be applied to obtain them.


Now, to obtain the asymptotic normal distribution of the MLEs, we evaluate the following quantities:

$$\frac{\partial^2 \log L}{\partial \lambda^2} = -\frac{m}{\lambda^2}, \qquad (7)$$

$$\frac{\partial^2 \log L}{\partial \beta^2} = -\frac{m}{\beta^2} + \sum_{i=1}^{m} x_{i:m:n}^{\beta}[\log x_{i:m:n}]^2 - \lambda\sum_{i=1}^{m}(1 + r_i)\, e^{x_{i:m:n}^{\beta}} x_{i:m:n}^{\beta}[\log x_{i:m:n}]^2[1 + x_{i:m:n}^{\beta}], \qquad (8)$$

and

$$\frac{\partial^2 \log L}{\partial \lambda\,\partial \beta} = \frac{\partial^2 \log L}{\partial \beta\,\partial \lambda} = -\sum_{i=1}^{m}(1 + r_i)\, e^{x_{i:m:n}^{\beta}} x_{i:m:n}^{\beta}\log x_{i:m:n}. \qquad (9)$$

The Fisher information matrix, say I(β, λ), can be obtained from Equations (7)–(9). It is well known that, under some regularity conditions, (β̂, λ̂) is approximately distributed as bivariate normal with mean (β, λ) and covariance matrix I⁻¹(β, λ). Furthermore, using the invariance property of MLEs, the MLEs of R(t) and H(t) are obtained as $\hat R(t) = \exp\{\hat\lambda(1 - e^{t^{\hat\beta}})\}$ and $\hat H(t) = \hat\lambda\hat\beta t^{\hat\beta-1} e^{t^{\hat\beta}}$, respectively.
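The observed information evaluated at the MLEs gives a plug-in estimate of I⁻¹(β, λ) and hence normal-approximation confidence intervals. A minimal sketch (ours; the function name and interface are illustrative assumptions), assuming the MLEs have already been computed:

```python
import numpy as np

def asymptotic_ci(x, r, lam_hat, beta_hat, z=1.96):
    """Sketch: observed information from Equations (7)-(9), evaluated at the
    MLEs, and the resulting 95% normal-approximation intervals for lambda, beta."""
    x, r = np.asarray(x, float), np.asarray(r, float)
    m = len(x)
    xb, lx = x ** beta_hat, np.log(x)
    l_ll = -m / lam_hat ** 2                                                      # Eq. (7)
    l_bb = (-m / beta_hat ** 2 + np.sum(xb * lx ** 2)
            - lam_hat * np.sum((1 + r) * np.exp(xb) * xb * lx ** 2 * (1 + xb)))   # Eq. (8)
    l_lb = -np.sum((1 + r) * np.exp(xb) * xb * lx)                                # Eq. (9)
    cov = np.linalg.inv(-np.array([[l_ll, l_lb], [l_lb, l_bb]]))
    se_lam, se_beta = np.sqrt(np.diag(cov))
    return ((lam_hat - z * se_lam, lam_hat + z * se_lam),
            (beta_hat - z * se_beta, beta_hat + z * se_beta))
```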

3. Bayes estimation

In this section, we derive Bayes estimators for the unknown parameters β and λ, R(t) and H(t). Let X = (X1:m:n, . . . , Xm:m:n) be a progressively type-II censored sample taken from the two-parameter bathtub-shaped lifetime distribution defined in Equation (1). We assume that λ and β are stochastically independent and are a priori distributed as Gamma(b, a) and Gamma(d, c), respectively. The joint prior distribution of (λ, β) can be written as

$$\pi(\lambda, \beta) \propto \lambda^{b-1} e^{-a\lambda}\,\beta^{d-1} e^{-c\beta}, \quad \lambda > 0,\ \beta > 0,\ a > 0,\ b > 0,\ c > 0,\ d > 0. \qquad (10)$$

Next, using Equations (4) and (10), the joint posterior density function of λ and β is obtained as

$$\pi(\lambda, \beta \mid x) = \frac{\lambda^{m+b-1}\beta^{m+d-1}}{k}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta},$$

where

$$k = \int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d-1}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta$$

is the normalizing constant.

We derive Bayes estimators of λ, β, R(t) and H(t) against three different loss functions: squared error, LINEX and entropy. If μ is the parameter to be estimated by an estimator μ̂, then the squared error loss function is defined as LSB(μ, μ̂) = (μ̂ − μ)², while the LINEX loss function is defined as

$$L_{LB}(\mu, \hat\mu) = p(e^{h(\hat\mu-\mu)} - h(\hat\mu - \mu) - 1), \quad p > 0,\ h \neq 0,$$

where p and h are the shape and scale parameters of the loss function LLB. For a detailed exposition on the loss function LLB, one may refer to the paper by Zellner [28].


Without loss of generality, we take p to be 1. It is well known that under the loss function LSB the Bayes estimator of μ is the posterior mean of μ. However, in the case of the loss function LLB, the Bayes estimator of μ is given by the expression

$$-\frac{1}{h}\log\{E(e^{-h\mu} \mid X)\},$$

where the expectation is taken with respect to the posterior distribution of μ. Finally, the entropy loss function is defined as

$$L_{EB}(\mu, \hat\mu) \propto \left(\frac{\hat\mu}{\mu}\right)^{q} - q\log\left(\frac{\hat\mu}{\mu}\right) - 1, \quad q \neq 0.$$

In this case, the Bayes estimator of μ is given by $(E(\mu^{-q} \mid x))^{-1/q}$, whenever the expectation exists.

Now, we use these basic results to obtain Bayes estimators of the unknown parameters under the progressive type-II censored sample.

3.1 Squared error loss function

In this case, the Bayes estimators of λ and β are obtained to be, respectively,

$$\hat\lambda_{SB} = E(\lambda \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b}\beta^{m+d-1}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta,$$

and

$$\hat\beta_{SB} = E(\beta \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta.$$

Furthermore, the Bayes estimator of R(t) against the loss function LSB is obtained as

$$\hat R_{SB}(t) = E(R(t) \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d-1}\, e^{\lambda(1-e^{t^{\beta}})}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta,$$

while the estimator of H(t) is obtained as

$$\hat H_{SB}(t) = E(H(t) \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b}\beta^{m+d}\, t^{\beta-1} e^{t^{\beta}}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta.$$
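None of these double integrals is available in closed form, but each squared-error Bayes estimate is simply a posterior expectation and can be approximated numerically. The sketch below is ours (the default grid ranges are assumptions and must be wide enough to cover the posterior mass for the data at hand); it evaluates the posterior kernel of this section on a grid and returns a ratio of sums:

```python
import numpy as np

def posterior_expectation(g, x, r, a, b, c, d,
                          lam_grid=np.linspace(1e-4, 1.0, 400),
                          beta_grid=np.linspace(1e-3, 3.0, 400)):
    """Sketch: E[g(lambda, beta) | x] under the posterior of Section 3, by
    brute-force evaluation of the unnormalized posterior kernel on a grid."""
    x, r = np.asarray(x, float), np.asarray(r, float)
    m = len(x)
    L, B = np.meshgrid(lam_grid, beta_grid, indexing="ij")
    xb = x[None, None, :] ** B[..., None]                   # x_i^beta over the grid
    s = np.sum((1.0 + r) * (1.0 - np.exp(xb)), axis=-1)     # s(beta)
    log_kernel = ((m + b - 1) * np.log(L) + (m + d - 1) * np.log(B)
                  + (B - 1) * np.sum(np.log(x)) + np.sum(xb, axis=-1)
                  - L * (a - s) - c * B)
    w = np.exp(log_kernel - log_kernel.max())               # unnormalized posterior weights
    return np.sum(g(L, B) * w) / np.sum(w)

# Squared-error (posterior-mean) estimates, e.g.:
# lam_sb = posterior_expectation(lambda lam, beta: lam,  x, r, a, b, c, d)
# rsb_t  = posterior_expectation(lambda lam, beta: np.exp(lam * (1 - np.exp(t ** beta))),
#                                x, r, a, b, c, d)
```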

3.2 LINEX loss function

Under the LINEX loss function LLB, the Bayes estimator of λ is given by

$$\hat\lambda_{LB} = -\frac{1}{h}\log\{E(e^{-h\lambda} \mid X)\}, \quad h \neq 0,$$

where

$$E(e^{-h\lambda} \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d-1}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a+h-s)}\, e^{-c\beta}\, d\lambda\, d\beta,$$


and the Bayes estimator of β is obtained as

$$\hat\beta_{LB} = -\frac{1}{h}\log\{E(e^{-h\beta} \mid X)\}, \quad h \neq 0,$$

where

$$E(e^{-h\beta} \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d-1}\, e^{-(h+c)\beta}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, d\lambda\, d\beta.$$

The desired estimator of R(t) is given by

$$\hat R_{LB}(t) = -\frac{1}{h}\log\{E(e^{-h\, e^{\lambda(1-e^{t^{\beta}})}} \mid X)\} = -\frac{1}{h}\log\{E(e^{-hR(t)} \mid X)\}, \quad h \neq 0,$$

where

$$E(e^{-hR(t)} \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d-1}\, e^{-h\, e^{\lambda(1-e^{t^{\beta}})}}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta.$$

In a similar manner, the Bayes estimator of H(t) is obtained as

$$\hat H_{LB}(t) = -\frac{1}{h}\log\{E(e^{-h\lambda\beta t^{\beta-1} e^{t^{\beta}}} \mid X)\} = -\frac{1}{h}\log\{E(e^{-hH(t)} \mid X)\}, \quad h \neq 0,$$

where

$$E(e^{-hH(t)} \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d-1}\, e^{-h\lambda\beta t^{\beta-1} e^{t^{\beta}}}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta.$$

3.3 Entropy loss function

We evaluate the Bayes estimator of λ against the LEB loss function as

$$\hat\lambda_{EB} = \{E(\lambda^{-q} \mid X)\}^{-1/q},$$

where

$$E(\lambda^{-q} \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-q-1}\beta^{m+d-1}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta,$$

and the desired estimator of β is given by

$$\hat\beta_{EB} = \{E(\beta^{-q} \mid X)\}^{-1/q},$$

where

$$E(\beta^{-q} \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d-q-1}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta.$$


Next, the Bayes estimator of R(t) is evaluated as

$$\hat R_{EB}(t) = \{E(e^{-q\lambda(1-e^{t^{\beta}})} \mid X)\}^{-1/q} = \{E(R(t)^{-q} \mid X)\}^{-1/q}, \quad q \neq 0,$$

where

$$E(R(t)^{-q} \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-1}\beta^{m+d-1}\, e^{-q\lambda(1-e^{t^{\beta}})}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta.$$

Proceeding in a similar manner, we evaluate the estimator of H(t) as

$$\hat H_{EB}(t) = \{E((\lambda\beta t^{\beta-1} e^{t^{\beta}})^{-q} \mid X)\}^{-1/q} = \{E(H(t)^{-q} \mid X)\}^{-1/q}, \quad q \neq 0,$$

where

$$E(H(t)^{-q} \mid X) = \frac{1}{k}\int_0^{\infty}\!\int_0^{\infty}\lambda^{m+b-q-1}\beta^{m+d-q-1}\, t^{-q(\beta-1)}\, e^{-q t^{\beta}}\left(\prod_{i=1}^{m} x_{i:m:n}^{\beta-1}\right) e^{\sum_{i=1}^{m} x_{i:m:n}^{\beta}}\, e^{-\lambda(a-s)}\, e^{-c\beta}\, d\lambda\, d\beta.$$
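Since each of these estimators is again a (transformed) posterior expectation, they can be computed by reusing a generic posterior-expectation routine such as the grid sketch given after Section 3.1. For example (illustrative only; x, r, a, b, c, d are assumed to be in scope):

```python
import numpy as np

h, q = 0.5, 0.5   # example loss-function constants (assumptions, not from the paper)
lam_lb = -(1.0 / h) * np.log(posterior_expectation(lambda lam, beta: np.exp(-h * lam),
                                                   x, r, a, b, c, d))
lam_eb = posterior_expectation(lambda lam, beta: lam ** (-q),
                               x, r, a, b, c, d) ** (-1.0 / q)
```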

4. Lindley’s approximation

In the previous section, we obtained several Bayes estimators of λ, β and of the reliability and hazard functions under different loss functions, namely squared error, LINEX and entropy. All these estimators are in the form of a ratio of two integrals, which cannot be simplified into a closed form. However, using the approach developed by Lindley [16], we obtain approximate Bayes estimators of λ, β, R(t) and H(t).

To illustrate the method, consider the ratio of integrals

$$I(X) = \frac{\int_{(\lambda,\beta)} u(\lambda,\beta)\, e^{l(\lambda,\beta)+\rho(\lambda,\beta)}\, d(\lambda,\beta)}{\int_{(\lambda,\beta)} e^{l(\lambda,\beta)+\rho(\lambda,\beta)}\, d(\lambda,\beta)}, \qquad (11)$$

where u(λ, β) is a function of λ and β only, l(λ, β) is the corresponding log-likelihood and ρ(λ, β) = log π(λ, β). Let (λ̂, β̂) denote the MLE of (λ, β). Applying Lindley's approximation, the ratio of integrals I(X) as defined in Equation (11) can be approximated as

$$\hat I(X) = u(\hat\lambda, \hat\beta) + \tfrac{1}{2}[(u_{\lambda\lambda} + 2u_\lambda\rho_\lambda)\sigma_{\lambda\lambda} + (u_{\beta\lambda} + 2u_\beta\rho_\lambda)\sigma_{\beta\lambda} + (u_{\lambda\beta} + 2u_\lambda\rho_\beta)\sigma_{\lambda\beta} + (u_{\beta\beta} + 2u_\beta\rho_\beta)\sigma_{\beta\beta}] + \tfrac{1}{2}[(u_\lambda\sigma_{\lambda\lambda} + u_\beta\sigma_{\lambda\beta})(l_{\lambda\lambda\lambda}\sigma_{\lambda\lambda} + l_{\lambda\beta\lambda}\sigma_{\lambda\beta} + l_{\beta\lambda\lambda}\sigma_{\beta\lambda} + l_{\beta\beta\lambda}\sigma_{\beta\beta}) + (u_\lambda\sigma_{\beta\lambda} + u_\beta\sigma_{\beta\beta})(l_{\beta\lambda\lambda}\sigma_{\lambda\lambda} + l_{\lambda\beta\beta}\sigma_{\lambda\beta} + l_{\beta\lambda\beta}\sigma_{\beta\lambda} + l_{\beta\beta\beta}\sigma_{\beta\beta})],$$

where uλλ denotes the second derivative of the function u(λ, β) with respect to λ evaluated at λ = λ̂ and β = β̂ (and similarly for the other partial derivatives of u), and σij is the (i, j)th element of the matrix [−lij]⁻¹. All remaining quantities appearing in the above expression of Î(X) are evaluated at the MLEs and have the following representations:


$$l_{\lambda\lambda} = \left.\frac{\partial^2 l}{\partial\lambda^2}\right|_{\lambda=\hat\lambda,\,\beta=\hat\beta} = -\frac{m}{\hat\lambda^2}, \qquad l_{\lambda\lambda\lambda} = \left.\frac{\partial^3 l}{\partial\lambda^3}\right|_{\lambda=\hat\lambda,\,\beta=\hat\beta} = \frac{2m}{\hat\lambda^3}, \qquad l_{\beta\lambda\lambda} = \left.\frac{\partial^3 l}{\partial\beta\,\partial\lambda^2}\right|_{\lambda=\hat\lambda,\,\beta=\hat\beta} = 0,$$

$$l_{\beta\beta} = \left.\frac{\partial^2 l}{\partial\beta^2}\right|_{\lambda=\hat\lambda,\,\beta=\hat\beta} = -\frac{m}{\hat\beta^2} + \sum_{i=1}^{m} x_{i:m:n}^{\hat\beta}[\log x_{i:m:n}]^2 - \hat\lambda\sum_{i=1}^{m}(1+r_i)\, e^{x_{i:m:n}^{\hat\beta}} x_{i:m:n}^{\hat\beta}[\log x_{i:m:n}]^2[1 + x_{i:m:n}^{\hat\beta}],$$

$$l_{\beta\beta\lambda} = \left.\frac{\partial^3 l}{\partial\beta^2\,\partial\lambda}\right|_{\lambda=\hat\lambda,\,\beta=\hat\beta} = -\sum_{i=1}^{m}(1+r_i)\, x_{i:m:n}^{\hat\beta}[\log x_{i:m:n}]^2\, e^{x_{i:m:n}^{\hat\beta}}(1 + x_{i:m:n}^{\hat\beta}),$$

$$l_{\beta\beta\beta} = \left.\frac{\partial^3 l}{\partial\beta^3}\right|_{\lambda=\hat\lambda,\,\beta=\hat\beta} = \frac{2m}{\hat\beta^3} + \sum_{i=1}^{m} x_{i:m:n}^{\hat\beta}[\log x_{i:m:n}]^3 - \hat\lambda\sum_{i=1}^{m}(1+r_i)\, x_{i:m:n}^{\hat\beta}[\log x_{i:m:n}]^3\, e^{x_{i:m:n}^{\hat\beta}}(1 + 3x_{i:m:n}^{\hat\beta} + x_{i:m:n}^{2\hat\beta}),$$

$$l_{\beta\lambda} = l_{\lambda\beta} = \left.\frac{\partial^2 l}{\partial\beta\,\partial\lambda}\right|_{\lambda=\hat\lambda,\,\beta=\hat\beta} = -\sum_{i=1}^{m}(1+r_i)\, x_{i:m:n}^{\hat\beta}\log(x_{i:m:n})\, e^{x_{i:m:n}^{\hat\beta}},$$

$$\rho_\lambda = \frac{b-1}{\hat\lambda} - a, \qquad \rho_\beta = \frac{d-1}{\hat\beta} - c.$$

We will obtain the approximate Bayes estimators under the three different loss functions in the following subsections.

4.1 Squared error loss function

The approximate Bayes estimator λSB of λ under the loss function LSB is obtained by setting u(λ, β) = λ, uλ = 1, and uλλ = uβ = uββ = uβλ = uλβ = 0. Then,

$$\hat\lambda_{SB} = \hat\lambda + 0.5[2\rho_\lambda\sigma_{\lambda\lambda} + 2\rho_\beta\sigma_{\lambda\beta} + \sigma_{\lambda\lambda}^2 l_{\lambda\lambda\lambda} + \sigma_{\lambda\lambda}\sigma_{\beta\beta} l_{\beta\beta\lambda} + 2\sigma_{\lambda\beta}\sigma_{\beta\lambda} l_{\lambda\beta\beta} + \sigma_{\lambda\beta}\sigma_{\beta\beta} l_{\beta\beta\beta}].$$

Similarly, the approximate Bayes estimator of β under LSB is obtained by setting u(λ, β) = β, uβ = 1, and uλ = uλλ = uββ = uβλ = uλβ = 0. Then,

$$\hat\beta_{SB} = \hat\beta + 0.5[2\rho_\beta\sigma_{\beta\beta} + 2\rho_\lambda\sigma_{\beta\lambda} + \sigma_{\beta\beta}^2 l_{\beta\beta\beta} + 3\sigma_{\lambda\beta}\sigma_{\beta\beta} l_{\lambda\beta\beta} + \sigma_{\lambda\lambda}\sigma_{\beta\lambda} l_{\lambda\lambda\lambda}].$$

In order to obtain the Bayes estimator of R(t), we need the following results:

$$\frac{\partial R(t)}{\partial\lambda} = (1 - e^{t^{\beta}})\, e^{\lambda(1-e^{t^{\beta}})}, \qquad \frac{\partial^2 R(t)}{\partial\lambda^2} = (1 - e^{t^{\beta}})^2\, e^{\lambda(1-e^{t^{\beta}})},$$

$$\frac{\partial R(t)}{\partial\beta} = -\lambda t^{\beta}\log(t)\, e^{t^{\beta}+\lambda(1-e^{t^{\beta}})}, \qquad \frac{\partial^2 R(t)}{\partial\beta^2} = -\lambda t^{\beta}(\log t)^2\, e^{t^{\beta}+\lambda(1-e^{t^{\beta}})}(1 + t^{\beta} - \lambda t^{\beta} e^{t^{\beta}}),$$


and

$$\frac{\partial^2 R(t)}{\partial\beta\,\partial\lambda} = \frac{\partial^2 R(t)}{\partial\lambda\,\partial\beta} = -t^{\beta}\log(t)\, e^{t^{\beta}+\lambda(1-e^{t^{\beta}})}\,(1 + \lambda(1 - e^{t^{\beta}})).$$

Now, the Bayes estimator of R(t) under the loss function LSB is obtained by setting u(λ, β) = R(t), uλ = ∂R(t)/∂λ, uλλ = ∂²R(t)/∂λ², uβ = ∂R(t)/∂β, uββ = ∂²R(t)/∂β², uλβ = ∂²R(t)/∂λ∂β, and uβλ = ∂²R(t)/∂β∂λ. Substituting these quantities into the general expression for Î(X) given above then yields R̂SB(t) = E(R(t) | X).

For obtaining the Bayes estimator of H(t), one needs

$$\frac{\partial H(t)}{\partial\lambda} = \beta t^{\beta-1} e^{t^{\beta}}, \qquad \frac{\partial H(t)}{\partial\beta} = \lambda t^{\beta-1} e^{t^{\beta}}\{1 + \beta\log(t)(1 + t^{\beta})\}, \qquad \frac{\partial^2 H(t)}{\partial\lambda^2} = 0,$$

$$\frac{\partial^2 H(t)}{\partial\beta^2} = \lambda t^{\beta-1}\log(t)\, e^{t^{\beta}}\left[2(1 + t^{\beta}) + \beta\log(t)\{(1 + t^{\beta})^2 + t^{\beta}\}\right],$$

and

$$\frac{\partial^2 H(t)}{\partial\lambda\,\partial\beta} = \frac{\partial^2 H(t)}{\partial\beta\,\partial\lambda} = t^{\beta-1} e^{t^{\beta}}\{1 + \beta\log(t)(1 + t^{\beta})\}.$$

Now, by setting u(λ, β) = H(t), uλ = ∂H(t)/∂λ, uλλ = ∂²H(t)/∂λ², uβ = ∂H(t)/∂β, uββ = ∂²H(t)/∂β², uλβ = ∂²H(t)/∂λ∂β, and uβλ = ∂²H(t)/∂β∂λ, the Bayes estimator of H(t) under the loss function LSB is obtained from the general expression for Î(X) as ĤSB(t) = E(H(t) | X).

4.2 LINEX loss function

Observe that for the parameter λ and the loss function LLB we have u(λ, β) = e^{−hλ}, uλ = −h e^{−hλ}, uλλ = h² e^{−hλ}, and uβ = uββ = uβλ = uλβ = 0. Thus,

$$E(e^{-h\lambda} \mid X) = e^{-h\hat\lambda} + 0.5[u_{\lambda\lambda}\sigma_{\lambda\lambda} + u_\lambda(2\rho_\lambda\sigma_{\lambda\lambda} + 2\rho_\beta\sigma_{\lambda\beta} + \sigma_{\lambda\lambda}^2 l_{\lambda\lambda\lambda} + \sigma_{\lambda\lambda}\sigma_{\beta\beta} l_{\beta\beta\lambda} + 2\sigma_{\lambda\beta}\sigma_{\beta\lambda} l_{\lambda\beta\beta} + \sigma_{\lambda\beta}\sigma_{\beta\beta} l_{\beta\beta\beta})],$$

and hence the corresponding estimator is

$$\hat\lambda_{LB} = -\frac{1}{h}\log\{E(e^{-h\lambda} \mid X)\}.$$


Similarly, for the parameter β we have u(λ, β) = e^{−hβ}, uβ = −h e^{−hβ}, uββ = h² e^{−hβ}, and uλ = uλλ = uβλ = uλβ = 0. We now evaluate the expectation

$$E(e^{-h\beta} \mid X) = e^{-h\hat\beta} + 0.5[u_{\beta\beta}\sigma_{\beta\beta} + u_\beta(2\rho_\beta\sigma_{\beta\beta} + 2\rho_\lambda\sigma_{\beta\lambda} + \sigma_{\beta\beta}^2 l_{\beta\beta\beta} + 3\sigma_{\lambda\beta}\sigma_{\beta\beta} l_{\lambda\beta\beta} + \sigma_{\lambda\lambda}\sigma_{\beta\lambda} l_{\lambda\lambda\lambda})],$$

and the associated estimator is then obtained as

$$\hat\beta_{LB} = -\frac{1}{h}\log\{E(e^{-h\beta} \mid X)\}.$$

Next, setting u(λ, β) = e^{−hR(t)}, uλ = −h e^{−hR(t)}(∂R(t)/∂λ), uλλ = h e^{−hR(t)}{h(∂R(t)/∂λ)² − ∂²R(t)/∂λ²}, uβ = −h e^{−hR(t)}(∂R(t)/∂β), uββ = h e^{−hR(t)}{h(∂R(t)/∂β)² − ∂²R(t)/∂β²}, uλβ = h e^{−hR(t)}{h(∂R(t)/∂λ)(∂R(t)/∂β) − ∂²R(t)/∂λ∂β}, and uβλ = h e^{−hR(t)}{h(∂R(t)/∂β)(∂R(t)/∂λ) − ∂²R(t)/∂β∂λ}, the Bayes estimator of R(t) against the loss function LLB is

$$\hat R_{LB}(t) = -\frac{1}{h}\log\{E(e^{-hR(t)} \mid X)\}, \quad h \neq 0,$$

where E(e^{−hR(t)} | X) is evaluated from the general expression for Î(X) with these choices of u and its derivatives.

To evaluate the Bayes estimator of H(t) against the loss function LLB, one sets u(λ, β) = e^{−hH(t)}, uλ = −h e^{−hH(t)}(∂H(t)/∂λ), uλλ = h e^{−hH(t)}{h(∂H(t)/∂λ)² − ∂²H(t)/∂λ²}, uβ = −h e^{−hH(t)}(∂H(t)/∂β), uββ = h e^{−hH(t)}{h(∂H(t)/∂β)² − ∂²H(t)/∂β²}, uλβ = h e^{−hH(t)}{h(∂H(t)/∂λ)(∂H(t)/∂β) − ∂²H(t)/∂λ∂β}, and uβλ = h e^{−hH(t)}{h(∂H(t)/∂β)(∂H(t)/∂λ) − ∂²H(t)/∂β∂λ}, and then evaluates E(e^{−hH(t)} | X) from the same general expression. Thus,

$$\hat H_{LB}(t) = -\frac{1}{h}\log\{E(e^{-hH(t)} \mid X)\}, \quad h \neq 0.$$

4.3 Entropy loss function

Notice that for the parameter λ and the loss function LEB, one has to set u(λ, β) = λ^{−q}, uλ = −q λ^{−(q+1)}, uλλ = q(q + 1) λ^{−(q+2)}, and uβ = uββ = uβλ = uλβ = 0. Then,

$$E(\lambda^{-q} \mid X) = \hat\lambda^{-q} + 0.5[u_{\lambda\lambda}\sigma_{\lambda\lambda} + u_\lambda(2\rho_\lambda\sigma_{\lambda\lambda} + 2\rho_\beta\sigma_{\lambda\beta} + \sigma_{\lambda\lambda}^2 l_{\lambda\lambda\lambda} + \sigma_{\lambda\lambda}\sigma_{\beta\beta} l_{\beta\beta\lambda} + 2\sigma_{\lambda\beta}\sigma_{\beta\lambda} l_{\lambda\beta\beta} + \sigma_{\lambda\beta}\sigma_{\beta\beta} l_{\beta\beta\beta})].$$


Thus, the approximate Bayes estimator of λ in this case is given by

$$\hat\lambda_{EB} = \{E(\lambda^{-q} \mid X)\}^{-1/q}.$$

Also, for the parameter β we set u(λ, β) = β^{−q}, uβ = −q β^{−(q+1)}, uββ = q(q + 1) β^{−(q+2)}, and uλ = uλλ = uβλ = uλβ = 0. Thus,

$$E(\beta^{-q} \mid X) = \hat\beta^{-q} + 0.5[u_{\beta\beta}\sigma_{\beta\beta} + u_\beta(2\rho_\beta\sigma_{\beta\beta} + 2\rho_\lambda\sigma_{\beta\lambda} + \sigma_{\beta\beta}^2 l_{\beta\beta\beta} + 3\sigma_{\lambda\beta}\sigma_{\beta\beta} l_{\lambda\beta\beta} + \sigma_{\lambda\lambda}\sigma_{\beta\lambda} l_{\lambda\lambda\lambda})],$$

and consequently

$$\hat\beta_{EB} = \{E(\beta^{-q} \mid X)\}^{-1/q}.$$

The corresponding Bayes estimator of R(t) is obtained by setting u(λ, β) = {R(t)}^{−q}, uλ = −q{R(t)}^{−q−1}(∂R(t)/∂λ), uβ = −q{R(t)}^{−q−1}(∂R(t)/∂β), uλλ = q{R(t)}^{−q−2}{(q + 1)(∂R(t)/∂λ)² − R(t)(∂²R(t)/∂λ²)}, uββ = q{R(t)}^{−q−2}{(q + 1)(∂R(t)/∂β)² − R(t)(∂²R(t)/∂β²)}, uλβ = q{R(t)}^{−q−2}{(q + 1)(∂R(t)/∂λ)(∂R(t)/∂β) − R(t)(∂²R(t)/∂λ∂β)}, and uβλ = q{R(t)}^{−q−2}{(q + 1)(∂R(t)/∂β)(∂R(t)/∂λ) − R(t)(∂²R(t)/∂β∂λ)}, so that E({R(t)}^{−q} | X) is evaluated from the general expression for Î(X) with these choices, and hence

$$\hat R_{EB}(t) = \{E(\{R(t)\}^{-q} \mid X)\}^{-1/q}.$$

Finally, let u(λ, β) = {H(t)}^{−q}, uλ = −q{H(t)}^{−q−1}(∂H(t)/∂λ), uβ = −q{H(t)}^{−q−1}(∂H(t)/∂β), uλλ = q{H(t)}^{−q−2}{(q + 1)(∂H(t)/∂λ)² − H(t)(∂²H(t)/∂λ²)}, uββ = q{H(t)}^{−q−2}{(q + 1)(∂H(t)/∂β)² − H(t)(∂²H(t)/∂β²)}, uλβ = q{H(t)}^{−q−2}{(q + 1)(∂H(t)/∂λ)(∂H(t)/∂β) − H(t)(∂²H(t)/∂λ∂β)}, and uβλ = q{H(t)}^{−q−2}{(q + 1)(∂H(t)/∂β)(∂H(t)/∂λ) − H(t)(∂²H(t)/∂β∂λ)}. Evaluating E({H(t)}^{−q} | X) from the same general expression, the Bayes estimator of H(t) against the loss function LEB is obtained as

$$\hat H_{EB}(t) = \{E(\{H(t)\}^{-q} \mid X)\}^{-1/q}.$$
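To make the above concrete, here is a small sketch (ours; the function name and interface are illustrative assumptions) implementing the Lindley-approximated estimators of λ and β under squared error loss, using the derivatives listed at the start of this section and evaluating everything at the MLEs:

```python
import numpy as np

def lindley_sb(x, r, a, b, c, d, lam, beta):
    """Sketch: Lindley-approximated Bayes estimates of lambda and beta under
    squared error loss (Section 4.1), evaluated at the MLEs lam, beta."""
    x, r = np.asarray(x, float), np.asarray(r, float)
    m = len(x)
    xb, lx, ex = x ** beta, np.log(x), np.exp(x ** beta)
    # Second and third log-likelihood derivatives at the MLEs (Section 4)
    l_lll = 2 * m / lam ** 3
    l_ll  = -m / lam ** 2
    l_bb  = (-m / beta ** 2 + np.sum(xb * lx ** 2)
             - lam * np.sum((1 + r) * ex * xb * lx ** 2 * (1 + xb)))
    l_bbl = -np.sum((1 + r) * xb * lx ** 2 * ex * (1 + xb))
    l_bbb = (2 * m / beta ** 3 + np.sum(xb * lx ** 3)
             - lam * np.sum((1 + r) * xb * lx ** 3 * ex * (1 + 3 * xb + xb ** 2)))
    l_bl  = -np.sum((1 + r) * xb * lx * ex)
    rho_l, rho_b = (b - 1) / lam - a, (d - 1) / beta - c
    # sigma_{ij} = (i, j)th element of [-l_{ij}]^{-1}
    sig = np.linalg.inv(-np.array([[l_ll, l_bl], [l_bl, l_bb]]))
    s_ll, s_lb, s_bb = sig[0, 0], sig[0, 1], sig[1, 1]
    lam_sb = lam + 0.5 * (2 * rho_l * s_ll + 2 * rho_b * s_lb
                          + s_ll ** 2 * l_lll + s_ll * s_bb * l_bbl
                          + 2 * s_lb ** 2 * l_bbl + s_lb * s_bb * l_bbb)
    beta_sb = beta + 0.5 * (2 * rho_b * s_bb + 2 * rho_l * s_lb
                            + s_bb ** 2 * l_bbb + 3 * s_lb * s_bb * l_bbl
                            + s_ll * s_lb * l_lll)
    return lam_sb, beta_sb

# e.g., reusing the MLE sketch from Section 2 (illustrative):
# lam_sb, beta_sb = lindley_sb(x, r, a, b, c, d, *chen_mle(x, r))
```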

5. Numerical comparisons

In the previous sections we proposed several estimators of the unknown parameters λ, β, R(t) and H(t). In this section, the performance of all these estimators is evaluated in terms of their MSE values. Exact expressions for the MSEs of the proposed estimators do not exist, so theoretical comparisons between the estimators are difficult; we therefore employ Monte Carlo simulations to compute the MSEs and use these values to compare all the estimators numerically. Bayes estimators are evaluated under the prior assumption that λ and β follow Gamma(b, a) and Gamma(d, c) distributions, respectively.


Table 1. MSE values of all estimates of λ for different choices of (n, m) in Case I.

(n, m) | Scheme | λ (MLE) | λSB | λLB (h = −0.5) | λLB (h = 0.5) | λLB (h = 1.5) | λEB (q = −0.5) | λEB (q = 0.5) | λEB (q = 1.5)

(20,10) (10, 0∗9) 0.015735 0.015520 0.016050 0.015030 0.001364 0.018527 0.017317 0.021454(0∗9, 10) 0.000881 0.000846 0.000860 0.000831 0.000803 0.000736 0.000683 0.000876(1∗10) 0.000975 0.000963 0.000980 0.000947 0.000914 0.000835 0.000724 0.000899

(0∗4, 5∗2, 0∗4) 0.000858 0.000847 0.000962 0.000932 0.000903 0.000830 0.000715 0.000852

(20,15) (5, 0∗14) 0.007105 0.006553 0.004344 0.002958 0.001679 0.003264 0.003029 0.003401(0∗14, 5) 0.000858 0.000827 0.000839 0.000814 0.000790 0.000729 0.000636 0.000753

(0∗7, 5, 0∗7) 0.005637 0.005303 0.003534 0.003080 0.001569 0.002693 0.002410 0.002874

(40,10) (30, 0∗9) 0.001189 0.001187 0.001213 0.001161 0.001111 0.001003 0.000786 0.001080(0∗9, 30) 0.000610 0.000500 0.000505 0.000495 0.000486 0.000473 0.000400 0.000606(1∗9, 21) 0.000593 0.000495 0.000500 0.000490 0.000480 0.000465 0.000385 0.000591

(0∗4, 15∗2, 0∗4) 0.000659 0.000656 0.000664 0.000649 0.000633 0.000598 0.000534 0.000654

(40,20) (20, 0∗19) 0.000660 0.000628 0.000635 0.000621 0.000607 0.000569 0.000500 0.000577(0∗19, 20) 0.000465 0.000456 0.000460 0.000452 0.000444 0.000426 0.000412 0.000463

(1∗20) 0.000469 0.000457 0.000460 0.000453 0.000445 0.000424 0.000400 0.000433(0∗8, 5∗4, 0∗8) 0.000494 0.000424 0.000427 0.000421 0.000415 0.000386 0.000372 0.000392

(40,30) (10, 0∗29) 0.000475 0.000412 0.000415 0.000409 0.000403 0.000375 0.000360 0.000384(0∗29, 10) 0.000482 0.000407 0.000410 0.000403 0.000397 0.000380 0.000358 0.000381

(0∗14, 5∗2, 0∗14) 0.000422 0.000349 0.000352 0.000347 0.000343 0.000329 0.000315 0.000331

(60,10) (50, 0∗9) 0.001117 0.001083 0.001106 0.001061 0.001017 0.000919 0.000720 0.001008(0∗9, 50) 0.000578 0.000468 0.000472 0.000463 0.000455 0.000445 0.000451 0.000574(1∗9, 41) 0.000529 0.000433 0.000437 0.000430 0.000423 0.000414 0.000424 0.000526

(0∗4, 25∗2, 0∗4) 0.000558 0.000534 0.000540 0.000529 0.000518 0.000491 0.000439 0.000554

(60,20) (40, 0∗19) 0.000623 0.000580 0.000586 0.000574 0.000561 0.000506 0.000461 0.000520(0∗19, 40) 0.000344 0.000310 0.000312 0.000308 0.000304 0.000399 0.000310 0.000443(1∗19, 21) 0.000378 0.000353 0.000356 0.000351 0.000347 0.000358 0.000338 0.000376

(0∗8, 10∗4, 0∗8) 0.000347 0.000328 0.000330 0.000326 0.000322 0.000310 0.000298 0.000316

(60,30) (30, 0∗29) 0.000360 0.000393 0.000396 0.000390 0.000384 0.000367 0.000341 0.000378(0∗29, 30) 0.000314 0.000310 0.000311 0.000308 0.000304 0.000296 0.000293 0.000312

(1∗30) 0.000297 0.000298 0.000300 0.000297 0.000293 0.000284 0.000277 0.000286(0∗14, 15∗2, 0∗14) 0.000288 0.000272 0.000273 0.000270 0.000267 0.000259 0.000254 0.000267

(60,40) (20, 0∗39) 0.000382 0.000304 0.000305 0.000302 0.000298 0.000288 0.000276 0.000290(0∗39, 20) 0.000293 0.000281 0.000283 0.000280 0.000277 0.000268 0.000263 0.000271

(0∗18, 5∗4, 0∗18) 0.000251 0.000234 0.000235 0.000233 0.000231 0.000224 0.000221 0.000230

(80,40) (40, 0∗39) 0.000296 0.000286 0.000287 0.000284 0.000281 0.000271 0.000260 0.000285(0∗39, 40) 0.000235 0.000232 0.000233 0.000231 0.000229 0.000225 0.000217 0.000234

(1∗40) 0.000233 0.000220 0.000221 0.000219 0.000217 0.000212 0.000201 0.000232(0∗18, 10∗4, 0∗18) 0.000182 0.000179 0.000190 0.000188 0.000187 0.000183 0.000174 0.000191

(100,50) (50, 0∗49) 0.000219 0.000217 0.000228 0.000226 0.000224 0.000218 0.000203 0.000243(0∗49, 50) 0.000184 0.000181 0.000182 0.000181 0.000180 0.000177 0.000162 0.000183

(1∗50) 0.000169 0.000164 0.000174 0.000173 0.000172 0.000168 0.000170 0.000168

Approximate expressions for all Bayes estimators are obtained using the Lindley method of Section 4. We compute these estimators under squared error, LINEX and entropy loss functions. For the LINEX and entropy loss functions, three different choices, −0.5, 0.5 and 1.5, are considered for both h and q.

The MSE values of all estimators are evaluated using Monte Carlo simulations based on 5000 replications, each consisting of a progressively censored sample of size m drawn from a sample of size n generated from the model (1). Further, for tabulating these MSEs, two different sets of values, namely (1, 0.05, 4, 2) and (2, 0.2, 8, 8), are considered for the hyperparameters (a, b, c, d), and the unknown parameters (λ, β) are taken to be (0.05, 0.5) and (0.1, 1) for these two sets of hyperparameter values, respectively. In fact, the latter set of parameter and hyperparameter values is also the one considered in [26].
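The simulation itself requires generating progressively type-II censored samples from the model (1). A sketch of one replication (ours; function names are illustrative), using the uniform-variates algorithm of Balakrishnan and Aggarwala [3] together with the inverse of the CDF (2):

```python
import numpy as np

def chen_quantile(u, lam, beta):
    """Inverse CDF of Equation (2): F^{-1}(u) = [log(1 - log(1-u)/lambda)]^{1/beta}."""
    return np.log(1.0 - np.log(1.0 - u) / lam) ** (1.0 / beta)

def progressive_type2_sample(lam, beta, r, rng=None):
    """Sketch of one progressively type-II censored sample from Chen's
    distribution; r = (r_1, ..., r_m) is the censoring scheme, n = m + sum(r)."""
    rng = np.random.default_rng() if rng is None else rng
    r = np.asarray(r, dtype=float)
    m = len(r)
    w = rng.uniform(size=m)
    # v_i = w_i^(1 / (i + r_m + r_{m-1} + ... + r_{m-i+1})), i = 1, ..., m
    exponents = np.arange(1, m + 1) + np.cumsum(r[::-1])
    v = w ** (1.0 / exponents)
    u = 1.0 - np.cumprod(v[::-1])          # ordered progressively censored uniforms
    return chen_quantile(u, lam, beta)

# e.g., scheme (10, 0*9) with n = 20, m = 10 in Case I:
# x = progressive_type2_sample(0.05, 0.5, [10] + [0] * 9)
```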


Table 2. MSE values of all estimates of β for different choices of (n, m) in Case I.

(n, m) | Scheme | β (MLE) | βSB | βLB (h = −0.5) | βLB (h = 0.5) | βLB (h = 1.5) | βEB (q = −0.5) | βEB (q = 0.5) | βEB (q = 1.5)

(20,10) (10, 0∗9) 0.005596 0.004339 0.004521 0.004454 0.004403 0.004355 0.004379 0.004409(0∗9, 10) 0.014344 0.009048 0.009244 0.008883 0.008641 0.008838 0.008918 0.009068(1∗10) 0.009288 0.006528 0.006603 0.006462 0.006361 0.006391 0.006486 0.006541

(0∗4, 5∗2, 0∗4) 0.007339 0.005363 0.005405 0.005326 0.005266 0.005242 0.005331 0.005349

(20,15) (5, 0∗14) 0.003672 0.003045 0.003129 0.003100 0.003077 0.003073 0.003079 0.003087(0∗14, 5) 0.005705 0.004483 0.004518 0.004453 0.004405 0.004470 0.004480 0.004525

(0∗7, 5, 0∗7) 0.004145 0.003386 0.004470 0.004147 0.003981 0.003412 0.003420 0.003430

(40,10) (30, 0∗9) 0.004819 0.003873 0.003902 0.003847 0.003804 0.003827 0.003851 0.003873(0∗9, 30) 0.029720 0.015428 0.016242 0.014763 0.013830 0.014197 0.014300 0.014459(1∗9, 21) 0.026377 0.014616 0.015235 0.014103 0.013364 0.013135 0.013732 0.013820

(0∗4, 15∗2, 0∗4) 0.007210 0.005309 0.005355 0.005267 0.005197 0.005172 0.005223 0.005300

(40,20) (20, 0∗19) 0.002472 0.002165 0.002173 0.002158 0.002145 0.002130 0.002157 0.002162(0∗19, 20) 0.006290 0.005013 0.005068 0.004964 0.004881 0.004873 0.004942 0.004961

(1∗20) 0.003952 0.003312 0.003331 0.003294 0.003263 0.003198 0.003286 0.003290(0∗8, 5∗4, 0∗8) 0.002741 0.002336 0.002346 0.002328 0.002312 0.002229 0.002322 0.002391

(40,30) (10, 0∗29) 0.001689 0.001532 0.001536 0.001529 0.001522 0.001500 0.001527 0.001529(0∗29, 10) 0.002611 0.002315 0.002324 0.002307 0.002294 0.002211 0.002312 0.002323

(0∗14, 5∗2, 0∗14) 0.001704 0.001532 0.001537 0.001509 0.001522 0.001509 0.001526 0.001527

(60,10) (50, 0∗9) 0.004461 0.003639 0.003667 0.003614 0.003573 0.003592 0.003614 0.003632(0∗9, 50) 0.036703 0.019584 0.021026 0.018620 0.017461 0.016589 0.017974 0.018494(1∗9, 41) 0.034879 0.020842 0.022048 0.019013 0.017900 0.018011 0.018352 0.018722

(0∗4, 25∗2, 0∗4) 0.007519 0.005581 0.005635 0.005532 0.005445 0.005329 0.005450 0.005698

(60,20) (40, 0∗19) 0.002324 0.002060 0.002067 0.002053 0.002042 0.002006 0.002055 0.002061(0∗19, 40) 0.008906 0.006798 0.006931 0.006678 0.006475 0.006175 0.006530 0.006897(1∗19, 21) 0.007469 0.005899 0.005981 0.005824 0.005697 0.005628 0.005748 0.005796

(0∗8, 10∗4, 0∗8) 0.002731 0.002324 0.002335 0.002315 0.002298 0.002116 0.002204 0.002298(60,30) (30, 0∗29) 0.001542 0.001408 0.001412 0.001404 0.001399 0.001400 0.001404 0.001405

(0∗29, 30) 0.003961 0.003398 0.003423 0.003375 0.003335 0.003328 0.003359 0.003363(1∗30) 0.002302 0.002053 0.002061 0.002046 0.002035 0.002019 0.002046 0.002050

(0∗14, 15∗2, 0∗14) 0.001605 0.001457 0.001460 0.001454 0.001449 0.001435 0.001454 0.001457

(60,40) (20, 0∗39) 0.001191 0.001111 0.001113 0.001110 0.001106 0.001100 0.001110 0.001112(0∗39, 20) 0.002225 0.002024 0.002030 0.002017 0.002007 0.002010 0.002020 0.002027

(0∗18, 5∗4, 0∗18) 0.001232 0.001137 0.001139 0.001135 0.001131 0.001125 0.001133 0.001134

(80,40) (40, 0∗39) 0.001151 0.001073 0.001076 0.001071 0.001068 0.001062 0.001071 0.001076(0∗39, 40) 0.002913 0.002606 0.002619 0.002594 0.002573 0.002576 0.002589 0.002595

(1∗40) 0.001693 0.001548 0.001552 0.001543 0.001536 0.001534 0.001540 0.001541(0∗18, 10∗4, 0∗18) 0.001151 0.001060 0.001062 0.001038 0.001054 0.001058 0.001046 0.001056

(100,50) (50, 0∗49) 0.000882 0.000833 0.000834 0.000832 0.000830 0.000832 0.000838 0.000842(0∗49, 50) 0.002281 0.002078 0.002087 0.002069 0.002054 0.002050 0.002062 0.002072

(1∗50) 0.001314 0.001224 0.001226 0.001221 0.001216 0.001212 0.001220 0.001229

In addition, we have tabulated the MSE values of all estimators of R(t) and H(t) for t = 1. In Tables 1–5, we present the MSE values of the estimators of λ, β, R(t) and H(t), together with the approximate confidence intervals, for the first set of parameter and hyperparameter values (Case I: (λ, β) = (0.05, 0.5) and (a, b, c, d) = (1, 0.05, 4, 2)); for the other set (Case II: (λ, β) = (0.1, 1) and (a, b, c, d) = (2, 0.2, 8, 8)), these values are presented in Tables 6–10. Finally, different combinations of (n, m) and censoring schemes are employed in evaluating these MSE values. The following conclusions are drawn from the tabulated values.

(1) In Tables 1 and 6, the MSE values of the MLE and the different Bayes estimators of λ are presented for different choices of (n, m). For each of these choices, several censoring schemes have been taken into consideration.


Table 3. MSE values of all estimates of R(t) when t = 1 for different choices of (n, m) in Case I.

(n, m) | Scheme | R(t) (MLE) | RSB(t) | RLB(t) (h = −0.5) | RLB(t) (h = 0.5) | RLB(t) (h = 1.5) | REB(t) (q = −0.5) | REB(t) (q = 0.5) | REB(t) (q = 1.5)

(20,10) (10, 0∗9) 0.003294 0.005462 0.003468 0.004369 0.005087 0.003387 0.003892 0.004152(0∗9, 10) 0.002105 0.002832 0.001791 0.001875 0.001962 0.001884 0.001989 0.002095(1∗10) 0.002040 0.002063 0.002017 0.002109 0.002203 0.002119 0.002234 0.002349

(0∗4, 5∗2, 0∗4) 0.002044 0.002081 0.002039 0.002123 0.002209 0.002133 0.002237 0.002342

(20,15) (5, 0∗14) 0.002485 0.002673 0.002633 0.002714 0.002796 0.002666 0.002799 0.002913(0∗14, 5) 0.001747 0.001774 0.001739 0.001810 0.001883 0.001817 0.001904 0.001991

(0∗7, 5, 0∗7) 0.002289 0.002401 0.002371 0.002432 0.002492 0.002436 0.002513 0.002756

(40,10) (30, 0∗9) 0.002449 0.002457 0.002371 0.002504 0.002641 0.002521 0.002691 0.002863(0∗9, 30) 0.001420 0.001613 0.001086 0.001113 0.001141 0.001116 0.001149 0.001184(1∗9, 21) 0.001381 0.001450 0.001164 0.001196 0.001229 0.001199 0.001239 0.001280

(0∗4, 15∗2, 0∗4) 0.001367 0.001421 0.001399 0.001443 0.001488 0.001448 0.001502 0.001558

(40,20) (20, 0∗19) 0.001304 0.001379 0.001359 0.001410 0.001443 0.001404 0.001454 0.001504(0∗19, 20) 0.001067 0.001189 0.000978 0.001001 0.001024 0.001003 0.00103 0.001058

(1∗20) 0.001105 0.001119 0.001106 0.001132 0.001159 0.001135 0.001166 0.001198(0∗8, 5∗4, 0∗8) 0.000931 0.000959 0.000949 0.000969 0.000989 0.000972 0.000994 0.001018

(40,30) (10, 0∗29) 0.00091 0.000959 0.000949 0.000969 0.000990 0.000971 0.000995 0.001025(0∗29, 10) 0.000896 0.000910 0.000901 0.000923 0.000939 0.000921 0.000944 0.000966

(0∗14, 5∗2, 0∗14) 0.000835 0.000871 0.000863 0.000879 0.000894 0.000881 0.000898 0.000917

(60,10) (50, 0∗9) 0.002256 0.002216 0.002155 0.002278 0.002403 0.002292 0.002447 0.002603(0∗9, 50) 0.001097 0.001015 0.001004 0.001026 0.001049 0.001029 0.001057 0.001088(1∗9, 41) 0.001069 0.001084 0.001072 0.001096 0.001121 0.001099 0.001132 0.001166

(0∗4, 25∗2, 0∗4) 0.001314 0.001209 0.001192 0.001225 0.001259 0.001229 0.001271 0.001312

(60,20) (40, 0∗19) 0.001279 0.001341 0.001321 0.001361 0.001402 0.001365 0.001413 0.001461(0∗19, 40) 0.000748 0.000752 0.000746 0.000758 0.000771 0.000760 0.000774 0.000793(1∗19, 21) 0.000884 0.000893 0.000786 0.000805 0.000813 0.000801 0.000817 0.000833

(0∗8, 10∗4, 0∗8) 0.000746 0.000751 0.000735 0.000747 0.000759 0.000748 0.000762 0.000776

(60,30) (30, 0∗29) 0.000907 0.000947 0.000937 0.000957 0.000976 0.000958 0.000981 0.001005(0∗29, 30) 0.000744 0.000750 0.000704 0.000715 0.000726 0.000716 0.000729 0.000743

(1∗30) 0.000617 0.000685 0.000684 0.000687 0.000689 0.000687 0.000690 0.000694(0∗14, 15∗2, 0∗14) 0.000618 0.000616 0.000622 0.00063 0.000639 0.000631 0.000641 0.000651

(60,40) (20, 0∗39) 0.000666 0.000690 0.000684 0.000695 0.000706 0.000696 0.000709 0.000722(0∗39, 20) 0.000671 0.000664 0.000659 0.000668 0.000678 0.000669 0.000680 0.000691

(0∗18, 5∗4, 0∗18) 0.000540 0.000553 0.000551 0.000557 0.000564 0.000557 0.000565 0.000573

(80,40) (40, 0∗39) .000686 0.000714 0.000709 0.00072 0.000731 0.000721 0.000734 0.000748(0∗39, 40) 0.000564 0.000575 0.000541 0.000548 0.000555 0.000548 0.000556 0.000564

(1∗40) 0.000532 0.000537 0.000534 0.000540 0.000547 0.000541 0.000548 0.000556(0∗18, 10∗4, 0∗18) 0.000457 0.000461 0.000459 0.000463 0.000468 0.000464 0.000469 0.000474

(100,50) (50, 0∗49) 0.000544 0.000563 0.000561 0.000567 0.000575 0.000568 0.000576 0.000585(0∗49, 50) 0.000462 0.000467 0.000445 0.000449 0.000454 0.000450 0.000455 0.00046

(1∗50) 0.000426 0.000427 0.000425 0.000429 0.000433 0.000429 0.000434 0.000438

It is observed from these two tables that the Bayes estimators of λ computed under the squared error loss function are better than the corresponding MLE of λ. Among the Bayes estimators obtained under the LINEX loss function, the choice h = 1.5 seems the most reasonable, while for the entropy loss function the choice q = 0.5 results in the minimum MSE values. Overall, the Bayes estimators are superior to the MLE.

(2) For estimating β, it is again clear from Tables 2 and 7 that the Bayes estimators are better than the MLE. Moreover, the choice h = 1.5 seems reasonable among the Bayes estimators evaluated under the LINEX loss function, while for the entropy loss function q = −0.5 is a good choice. Thus, in this case we conclude that the estimators λEB with q = 1.5 and βEB with q = −0.5 are better than their respective competitors for all choices of (n, m) and the different censoring schemes.


Table 4. MSE values of all estimates of H(t) when t = 1 for different choices of (n, m) in Case I.

(n, m) | Scheme | H(t) (MLE) | HSB(t) | HLB(t) (h = −0.5) | HLB(t) (h = 0.5) | HLB(t) (h = 1.5) | HEB(t) (q = −0.5) | HEB(t) (q = 0.5) | HEB(t) (q = 1.5)

(20,10) (10, 0∗9) 0.001424 0.001313 0.001338 0.001289 0.001241 0.001191 0.001025 0.001168(0∗9, 10) 0.000971 0.000708 0.000716 0.000694 0.000685 0.000696 0.000677 0.000895(1∗10) 0.001033 0.000838 0.000849 0.000827 0.000807 0.000801 0.000731 0.000917

(0∗4, 5∗2, 0∗4) 0.00114 0.001007 0.001022 0.000993 0.000966 0.000938 0.000896 0.000925

(20,15) (5, 0∗14) 0.000996 0.000979 0.000991 0.000967 0.000943 0.000917 0.000878 0.000906(0∗14, 5) 0.000949 0.000873 0.000867 0.000732 0.000716 0.000825 0.000719 0.000807

(0∗7, 5, 0∗7) 0.000864 0.000852 0.000861 0.000843 0.000826 0.000807 0.000779 0.000803

(40,10) (30, 0∗9) 0.001334 0.001187 0.001209 0.001166 0.001124 0.001079 0.001013 0.001041(0∗9, 30) 0.001166 0.000903 0.000922 0.000885 0.000854 0.000851 0.000815 0.000832(1∗9, 21) 0.001056 0.000779 0.000792 0.000767 0.000744 0.000743 0.000728 0.000754

(0∗4, 15∗2, 0∗4) 0.000917 0.00077 0.000779 0.00076 0.000742 0.000721 0.000674 0.000674

(40,20) (20, 0∗19) 0.000695 0.000676 0.000682 0.00067 0.000657 0.000642 0.000618 0.000632(0∗19, 20) 0.000484 0.000401 0.000403 0.000401 0.000396 0.000403 0.000330 0.000471

(1∗20) 0.000502 0.000453 0.000456 0.000451 0.000445 0.000443 0.000415 0.000468(0∗8, 5∗4, 0∗8) 0.000514 0.000484 0.000487 0.000481 0.000475 0.000469 0.000458 0.000468

(40,30) (10, 0∗29) 0.000473 0.000468 0.000471 0.000465 0.000459 0.000452 0.000441 0.000450(0∗29, 10) 0.000422 0.000391 0.000392 0.000389 0.000385 0.000384 0.000369 0.000412

(0∗14, 5∗2, 0∗14) 0.000409 0.000399 0.000401 0.000397 0.000393 0.000388 0.000381 0.000389

(60,10) (50, 0∗9) 0.001236 0.001079 0.001098 0.001061 0.001025 0.000992 0.000954 0.000993(0∗9, 50) 0.002398 0.001390 0.001448 0.001340 0.001258 0.001216 0.001035 0.000996(1∗9, 41) 0.002113 0.001299 0.001371 0.001243 0.001166 0.001144 0.000987 0.000952

(0∗4, 25∗2, 0∗4) 0.001049 0.000852 0.000864 0.000841 0.000819 0.000795 0.000728 0.000706

(60,20) (40, 0∗19) 0.000671 0.000642 0.000648 0.000637 0.000626 0.000614 0.000597 0.000615(0∗19, 40) 0.000375 0.000313 0.000314 0.000312 0.000310 0.000316 0.000334 0.000359(1∗19, 21) 0.000383 0.000321 0.000322 0.000321 0.000318 0.000322 0.000307 0.000360

(0∗8, 10∗4, 0∗8) 0.000392 0.000364 0.000366 0.000362 0.000358 0.000354 0.000347 0.000354(60,30) (30, 0∗29) 0.000461 0.000453 0.000455 0.000451 0.000444 0.000438 0.000427 0.000436

(0∗29, 30) 0.000324 0.000280 0.00028 0.000279 0.000278 0.000282 0.000235 0.000317(1∗30) 0.000342 0.000320 0.000321 0.000318 0.000316 0.000314 0.000303 0.000323

(0∗14, 15∗2, 0∗14) 0.000308 0.000297 0.000298 0.000296 0.000293 0.000290 0.000286 0.000291

(60,40) (20, 0∗39) 0.000349 0.000346 0.000348 0.000345 0.000341 0.000338 0.000331 0.000337(0∗39, 20) 0.000299 0.000278 0.000279 0.000277 0.000276 0.000276 0.000260 0.000292

(0∗18, 5∗4, 0∗18) 0.000278 0.000272 0.000273 0.000271 0.000269 0.000267 0.000264 0.000268

(80,40) (40, 0∗39) 0.000342 0.000339 0.000340 0.000337 0.000334 0.00033 0.000324 0.000329(0∗39, 40) 0.000238 0.000215 0.000216 0.000215 0.000214 0.000216 0.000210 0.000234

(1∗40) 0.000242 0.000229 0.00023 0.000229 0.000227 0.000227 0.000228 0.000235(0∗18, 10∗4, 0∗18) 0.000228 0.000221 0.000222 0.000221 0.000219 0.000218 0.000217 0.000221

(100,50) (50, 0∗49) 0.000285 0.000283 0.000284 0.000282 0.000280 0.000278 0.000273 0.000276(0∗49, 50) 0.000187 0.000172 0.000172 0.000172 0.000172 0.000173 0.000169 0.000187

(1∗50) 0.000194 0.000186 0.000186 0.000185 0.000185 0.000185 0.000186 0.000191

However, these two estimators cannot be compared with each other, since for some schemes λEB is better while for other schemes the opposite is true.

(3) The MSE values of all estimators of R(t) are presented in Tables 3 and 8 for different combinations of (n, m) and censoring schemes. Again, the Bayes estimators perform better than the MLE of R(t). Among the estimators obtained from the LINEX loss function, h = −0.5 is a better choice, while for the entropy loss function q = −0.5 is a good choice. Overall, the estimator RLB1 is better than the other estimators for the various censoring schemes considered.

(4) In Tables 4 and 9, the MSE values of all estimators of H(t) are tabulated. Again, the Bayes estimators perform quite well compared with the MLE. From these tables, we conclude that HEB with q = 0.5 is a reasonable choice in this case (Table 4).


Table 5. Approximate confidence intervals for λ and β for different choices of (n, m) in Case I.

(n, m) | Scheme | λ: coverage probability | λ: average length | β: coverage probability | β: average length

(20,10) (10, 0∗9) 0.8626 0.125245 0.9376 0.265029(0∗9, 10) 0.8490 0.113243 0.9464 0.428199(1∗10) 0.8476 0.111649 0.9264 0.329209

(0∗4, 5∗2, 0∗4) 0.8464 0.102742 0.8966 0.272511

(20,15) (5, 0∗14) 0.8750 0.105582 0.9354 0.219128(0∗14, 5) 0.8664 0.104692 0.9396 0.280358

(0∗7, 5, 0∗7) 0.8648 0.096165 0.9232 0.219976

(40,10) (30, 0∗14) 0.8796 0.121056 0.9406 0.254257(0∗14, 30) 0.8728 0.091301 0.9460 0.579322(1∗9, 21) 0.8744 0.090545 0.9384 0.536758

(0∗4, 15∗2, 0∗4) 0.8792 0.087899 0.8908 0.253187

(40,20) (20, 1∗19) 0.8960 0.090362 0.9404 0.183728(0∗19, 20) 0.8946 0.083137 0.9452 0.292981

(1∗20) 0.8952 0.079071 0.9266 0.222165(0∗8, 5∗4, 0∗8) 0.9032 0.073993 0.9268 0.19365

(40,30) (10, 0∗29) 0.9058 0.078131 0.9356 0.151779(0∗29, 10) 0.9032 0.075405 0.9404 0.194786

(0∗14, 5∗2, 0∗14) 0.9092 0.069221 0.9368 0.161612

(60,10) (50, 0∗9) 0.8878 0.119226 0.9418 0.249303(0∗9, 50) 0.8988 0.078034 0.9482 0.69925(1∗9, 41) 0.8952 0.079275 0.9496 0.665319

(0∗4, 25∗2, 0∗4) 0.9062 0.082928 0.8874 0.253239

(60,20) (40, 0∗19) 0.8960 0.087441 0.9408 0.180053(0∗19, 40) 0.8986 0.072039 0.9402 0.346532(1∗19, 21) 0.9056 0.072441 0.9402 0.308664

(0∗8, 10∗4, 0∗8) 0.9202 0.066373 0.9292 0.185855

(60,30) (30, 0∗29) 0.9082 0.072768 0.9374 0.149004(0∗29, 30) 0.9164 0.068684 0.9428 0.237115

(1∗30) 0.9076 0.064747 0.9348 0.178525(0∗14, 15∗2, 0∗14) 0.9084 0.059955 0.9294 0.155245

(60,40) (20, 0∗39) 0.9262 0.064973 0.9446 0.130209(0∗39, 20) 0.9148 0.064314 0.9414 0.178897

(0∗18, 5∗4, 0∗18) 0.9186 0.057243 0.9350 0.139206

(80,40) (40, 0∗39) 0.9196 0.063975 0.9414 0.128805(0∗39, 40) 0.9114 0.059604 0.9436 0.204601

(1∗40) 0.9084 0.056029 0.9288 0.153444(0∗18, 10∗4, 0∗18) 0.9252 0.052302 0.9412 0.135246

(100,50) (50, 0∗49) 0.9192 0.057635 0.9390 0.115207(0∗49, 50) 0.9262 0.053626 0.9440 0.182447

(1∗50) 0.9224 0.050432 0.9426 0.13649

However, in some cases the estimator HLB with h = 1.5 performs better than the other estimators (Table 9).

(5) As the effective sampling proportion m/n increases, the MSEs of all estimators decrease. A similar trend is observed across the various other censoring schemes as well.

(6) In Tables 5 and 10, the 95% asymptotic confidence intervals for λ and β are presented for different combinations of (n, m) and censoring schemes. For the parameter λ, it is seen that when the observed censoring scheme is of type (0∗(m−1), n − m) or (n − m, 0∗(m−1)), the computed interval estimates are marginally wider than under the other schemes.


Table 6. MSE values of all estimates of λ for different choices of (n, m) in Case II.

(n, m) | Scheme | λ (MLE) | λSB | λLB (h = −0.5) | λLB (h = 0.5) | λLB (h = 1.5) | λEB (q = −0.5) | λEB (q = 0.5) | λEB (q = 1.5)

(20,10) (10, 0∗9) 0.00338 0.002623 0.002715 0.002534 0.002367 0.002269 0.002056 0.002995(0∗9, 10) 0.00254 0.001866 0.0019 0.001832 0.001769 0.002022 0.001867 0.002754(1∗10) 0.002687 0.002063 0.002114 0.002014 0.001919 0.001852 0.001779 0.002516

(0∗4, 5∗2, 0∗4) 0.002382 0.001972 0.00202 0.001926 0.001836 0.001773 0.00161 0.002229

(20,15) (5, 0∗14) 0.002228 0.001908 0.001955 0.001862 0.001774 0.001714 0.001573 0.002125(0∗14, 5) 0.00211 0.001601 0.001636 0.001567 0.001501 0.001454 0.001383 0.002027

(0∗7, 5, 0∗7) 0.00207 0.001793 0.00183 0.001757 0.001687 0.001639 0.00151 0.00196

(40,10) (30, 0∗9) 0.003206 0.002364 0.002442 0.002289 0.002149 0.002083 0.001931 0.002817(0∗9, 30) 0.003039 0.002635 0.008443 0.002814 0.001546 0.002425 0.001972 0.003882(1∗9, 21) 0.002808 0.002252 0.004534 0.001718 0.001436 0.001699 0.001003 0.00354

(0∗4, 15∗2, 0∗4) 0.002251 0.001841 0.001878 0.001804 0.001735 0.001694 0.001534 0.002026

(40,20) (20, 0∗19) 0.001602 0.001431 0.001456 0.001406 0.001358 0.001324 0.001233 0.001543(0∗19, 20) 0.001236 0.000921 0.000931 0.000911 0.000892 0.000886 0.000876 0.001218

(1∗20) 0.001248 0.001052 0.001064 0.001039 0.001014 0.001128 0.000966 0.001225(0∗8, 5∗4, 0∗8) 0.001127 0.001014 0.001026 0.001002 0.00098 0.000964 0.000919 0.001105

(40,30) (10, 0∗29) 0.001073 0.001012 0.001025 0.00105 0.000976 0.000957 0.000904 0.001054(0∗29, 10) 0.001075 0.000913 0.000922 0.000904 0.000886 0.000878 0.000861 0.001062

(0∗14, 5∗2, 0∗14) 0.000946 0.000888 0.000897 0.000879 0.000862 0.000859 0.000812 0.000931

(60,10) (50, 0∗9) 0.003244 0.002364 0.002439 0.002292 0.002159 0.002101 0.001959 0.002834(0∗9, 50) 0.004074 0.003887 0.005217 0.004911 0.002113 0.002754 0.001196 0.003573(1∗9, 41) 0.005529 0.005125 0.007412 0.006618 0.00193 0.003098 0.001182 0.003254

(0∗4, 25∗2, 0∗4) 0.002148 0.001668 0.00172 0.001637 0.001578 0.001545 0.001408 0.001908

(60,20) (40, 0∗19) 0.001713 0.00152 0.001546 0.001494 0.001444 0.001412 0.001313 0.00165(0∗19, 40) 0.000911 0.000717 0.000722 0.000711 0.000701 0.000696 0.000689 0.000897(1∗19, 21) 0.00093 0.000727 0.000733 0.000721 0.00071 0.000706 0.000703 0.000916

(0∗8, 10∗4, 0∗8) 0.000909 0.000826 0.000834 0.000818 0.000803 0.000792 0.000757 0.000891

(60,30) (30, 0∗29) 0.001077 0.001004 0.001016 0.000992 0.00097 0.000954 0.000908 0.00106(0∗29, 30) 0.000808 0.000646 0.000651 0.000642 0.000634 0.000634 0.000625 0.000798

(1∗30) 0.000777 0.000688 0.000694 0.000683 0.000673 0.000667 0.000654 0.000768(0∗14, 15∗2, 0∗14) 0.000729 0.000686 0.000691 0.00068 0.00067 0.000661 0.000635 0.000718

(60,40) (20, 0∗39) 0.000833 0.000793 0.000804 0.000786 0.000773 0.000763 0.000733 0.000820(0∗39, 20) 0.000723 0.000626 0.000631 0.000622 0.000614 0.000611 0.000606 0.000715

(0∗18, 5∗4, 0∗18) 0.000645 0.000612 0.000616 0.000608 0.000601 0.000595 0.000578 0.000637

(80,40) (40, 0∗39) 0.000813 0.000771 0.000777 0.000764 0.000752 0.000743 0.000717 0.000803(0∗39, 40) 0.000596 0.000496 0.000498 0.000493 0.000489 0.000489 0.000473 0.000589

(1∗40) 0.000589 0.000539 0.000543 0.000536 0.00053 0.000527 0.000517 0.000582(0∗18, 10∗4, 0∗18) 0.000546 0.000518 0.00052 0.000515 0.00051 0.000506 0.000495 0.000542

(100,50) (50, 0∗49) 0.000664 0.000641 0.000646 0.000637 0.000628 0.000622 0.000601 0.000656(0∗49, 50) 0.000492 0.000424 0.000425 0.000422 0.000419 0.000458 0.000423 0.000487

(1∗50) 0.000474 0.000443 0.000445 0.000441 0.000437 0.000434 0.000427 0.000469

However, the interval estimates of β obtained under censoring schemes of type (n − m, 0∗(m−1)) have smaller average confidence lengths than those under the other schemes. We again observe that as the effective sample size increases, the average confidence lengths of all estimates tend to decrease. These tables also present the coverage probabilities of the associated interval estimates. From the tabulated values, it is observed that the coverage probabilities for λ and β are quite satisfactory; however, most of them lie below the nominal level of 0.95 for the various sampling schemes.


Table 7. MSE values of all estimates of β for different choices of (n, m) in Case II.

(n, m) | Scheme | β (MLE) | βSB | βLB (h = −0.5) | βLB (h = 0.5) | βLB (h = 1.5) | βEB (q = −0.5) | βEB (q = 0.5) | βEB (q = 1.5)

(20,10) (10, 0∗9) 0.03148 0.014942 0.015136 0.014877 0.012071 0.015032 0.015475 0.016174(0∗9, 10) 0.023222 0.015537 0.032048 0.024602 0.020301 0.014442 0.018796 0.024549(1∗10) 0.034858 0.019323 0.021854 0.02003 0.018939 0.020014 0.021883 0.024108

(0∗4, 5∗2, 0∗4) 0.04716 0.019781 0.020703 0.019984 0.01867 0.020079 0.020883 0.021858

(20,15) (5, 0∗14) 0.021678 0.013228 0.013374 0.01314 0.013021 0.013242 0.013395 0.013679
(0∗14, 5) 0.036989 0.01713 0.017463 0.016974 0.01511 0.017174 0.017624 0.018436

(0∗7, 5, 0∗7) 0.023873 0.014159 0.014313 0.014067 0.01405 0.014175 0.014334 0.01462

(40,10) (30, 0∗9) 0.0307 0.015594 0.015871 0.015236 0.015429 0.015591 0.015836 0.016339
(0∗9, 30) 0.253044 0.12169 0.02946 0.017978 0.016507 0.015729 0.015753 0.016856
(1∗9, 21) 0.119986 0.06617 0.029246 0.02105 0.018261 0.021275 0.024394 0.030275

(0∗4, 15∗2, 0∗4) 0.049807 0.021245 0.022978 0.021446 0.020232 0.021456 0.022173 0.023049

(40,20) (20, 0∗19) 0.014013 0.009916 0.01002 0.00984 0.009765 0.009899 0.009928 0.010029
(0∗19, 20) 0.041394 0.018377 0.019028 0.017974 0.017808 0.01823 0.018445 0.019192

(1∗20) 0.023777 0.014455 0.014714 0.014272 0.014107 0.014396 0.014436 0.014644
(0∗8, 5∗4, 0∗8) 0.016184 0.011105 0.011226 0.011014 0.010915 0.011078 0.011088 0.011167

(40,30) (10, 0∗29) 0.008737 0.006896 0.006941 0.006861 0.006824 0.006889 0.006903 0.006949
(0∗29, 10) 0.016674 0.011668 0.011851 0.011525 0.01135 0.011006 0.011576 0.011653

(0∗14, 5∗2, 0∗14) 0.009496 0.007363 0.007427 0.007312 0.007242 0.007142 0.007327 0.007345

(60,10) (50, 0∗9) 0.028074 0.014666 0.014962 0.014482 0.014406 0.014129 0.014798 0.015228
(0∗9, 50) 0.332122 0.252917 0.051331 0.047044 0.016679 0.012731 0.013184 0.013727
(1∗9, 41) 0.329027 0.234804 0.058652 0.03467 0.01951 0.010552 0.010931 0.014691

(0∗4, 25∗2, 0∗4) 0.04322 0.019308 0.019508 0.019445 0.018967 0.019475 0.019992 0.020648

(60,20) (40, 0∗19) 0.013969 0.010025 0.01013 0.009947 0.009864 0.010005 0.010027 0.01012
(0∗19, 40) 0.060522 0.018444 0.019202 0.018175 0.018008 0.018424 0.019253 0.020954
(1∗19, 21) 0.049484 0.019739 0.020429 0.019354 0.01925 0.019608 0.019948 0.020916

(0∗8, 10∗4, 0∗8) 0.015679 0.010831 0.010973 0.010717 0.010565 0.010776 0.010724 0.010737

(60,30) (30, 0∗29) 0.008901 0.007047 0.007102 0.007003 0.006946 0.007031 0.007026 0.007054
(0∗29, 30) 0.026347 0.016017 0.016477 0.015664 0.015256 0.015833 0.015702 0.015842

(1∗30) 0.013475 0.009789 0.009924 0.00968 0.00954 0.009639 0.009702 0.00974
(0∗14, 15∗2, 0∗14) 0.009188 0.007245 0.007306 0.007195 0.007124 0.007123 0.007202 0.00721

(60,40) (20, 0∗39) 0.006353 0.005323 0.005352 0.0053 0.00527 0.005286 0.005316 0.005334
(0∗39, 20) 0.012804 0.009579 0.009715 0.00947 0.009329 0.009329 0.009494 0.009539

(0∗18, 5∗4, 0∗18) 0.006778 0.005634 0.00567 0.005603 0.005559 0.00552 0.005607 0.005612

(80,40) (40, 0∗39) 0.006383 0.005363 0.005395 0.005337 0.0053 0.005253 0.005346 0.005358
(0∗39, 40) 0.018083 0.012659 0.01293 0.012441 0.012156 0.012345 0.012445 0.012496

(1∗40) 0.009419 0.007442 0.007524 0.007374 0.007278 0.007208 0.007373 0.007378
(0∗18, 10∗4, 0∗18) 0.006578 0.005483 0.005521 0.00545 0.005399 0.005405 0.005443 0.005438

(100,50) (50, 0∗49) 0.00493 0.004313 0.004331 0.004300 0.004283 0.004309 0.004312 0.004326
(0∗49, 50) 0.014404 0.010857 0.011058 0.010688 0.010442 0.01056 0.010644 0.010625

(1∗50) 0.00754 0.006284 0.006336 0.006241 0.006177 0.006203 0.006241 0.006243

6. Data analysis

To illustrate the use of the proposed methods, two numerical examples are presented in this section.

Example 1 (Real Life Data) In this example, we consider a real life data set investigated in [26]. It contains the times to failure of n = 50 units put on a life test. Aarset [1] showed that the hazard rate for these data is bathtub-shaped. With respect to the censoring scheme (0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 3, 0, 0, 0), a progressively type-II censored sample of size m = 35 generated from the n units is (see [20]): (0.1, 0.2, 1, 1, 1, 1, 1, 2, 3, 6, 7, 11, 18, 18, 18, 18, 21, 32, 36, 45, 47, 50, 55, 60, 63, 63, 67, 67, 75, 79, 82, 84, 84, 85, 86). The MLEs and different Bayes estimates of λ and β are presented in Table 11.
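For readers who want to reproduce the point estimates, the following sketch is our own illustration (not the authors' code, and the function and variable names are ours) of maximizing the progressively type-II censored log-likelihood numerically; it assumes model (1) is Chen's distribution with survival function exp{λ(1 − e^(t^β))}, which is consistent with the quantities tabulated in this section.

import numpy as np
from scipy.optimize import minimize

# Progressively type-II censored sample and removal scheme of Example 1
x = np.array([0.1, 0.2, 1, 1, 1, 1, 1, 2, 3, 6, 7, 11, 18, 18, 18, 18, 21, 32, 36,
              45, 47, 50, 55, 60, 63, 63, 67, 67, 75, 79, 82, 84, 84, 85, 86])
R = np.zeros(35, dtype=int)
R[[3, 10, 17, 24, 31]] = 3          # the scheme (0, 0, 0, 3, ..., 3, 0, 0, 0) above

def neg_loglik(par):
    # Negative log-likelihood under progressive type-II censoring,
    # assuming S(t) = exp(lam * (1 - exp(t**beta))).
    lam, beta = par
    if lam <= 0 or beta <= 0:
        return np.inf
    tb = x ** beta
    log_S = lam * (1.0 - np.exp(tb))
    log_f = np.log(lam * beta) + (beta - 1.0) * np.log(x) + tb + log_S
    return -np.sum(log_f + R * log_S)

fit = minimize(neg_loglik, x0=[0.05, 0.5], method="Nelder-Mead")
lam_hat, beta_hat = fit.x            # should be close to the MLEs reported in Table 11

The observed information needed for the asymptotic intervals of the previous section can be obtained by numerically differentiating neg_loglik at (lam_hat, beta_hat).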


Table 8. MSE values of all estimates of R(t) when t = 1 for different choices of (n, m) in Case II.

(n, m) Scheme R(t) (MLE) RSB(t) RLB1(t) (h = −0.5) RLB2(t) (h = 0.5) RLB3(t) (h = 1.5) REB1(t) (q = −0.5) REB2(t) (q = 0.5) REB3(t) (q = 1.5)

(20,10) (10, 0∗9) 0.006053 0.004179 0.004033 0.004333 0.004662 0.004398 0.004872 0.005381
(0∗9, 10) 0.004915 0.003083 0.003023 0.003145 0.003279 0.003169 0.003359 0.00357
(1∗10) 0.005087 0.003454 0.003365 0.003546 0.003741 0.003583 0.003862 0.004161

(0∗4, 5∗2, 0∗4) 0.004916 0.003719 0.003625 0.003817 0.004023 0.003858 0.004151 0.004459

(20,15) (5, 0∗14) 0.00452 0.003477 0.003391 0.003566 0.003755 0.003603 0.00387 0.004151
(0∗14, 5) 0.004239 0.002895 0.002831 0.002963 0.003105 0.002988 0.003187 0.003399

(0∗7, 5, 0∗7) 0.003883 0.003079 0.003014 0.003147 0.003288 0.003173 0.00337 0.003575

(40,10) (30, 0∗9) 0.005717 0.003882 0.003762 0.004011 0.004289 0.004068 0.004479 0.004930
(0∗9, 30) 0.004881 0.002731 0.002591 0.003149 0.003704 0.004015 0.004514 0.004871
(1∗9, 21) 0.004223 0.002638 0.002551 0.002848 0.002949 0.003268 0.004274 0.004632

(0∗4, 15∗2, 0∗4) 0.004041 0.003103 0.003044 0.003165 0.003296 0.003192 0.003379 0.003598

(40,20) (20, 0∗19) 0.003084 0.002547 0.002502 0.002593 0.002691 0.002611 0.002746 0.002888
(0∗19, 20) 0.002471 0.001714 0.001695 0.001733 0.001774 0.001740 0.001797 0.001858

(1∗20) 0.002425 0.001895 0.001872 0.001918 0.001968 0.001927 0.001995 0.002068
(0∗8, 5∗4, 0∗8) 0.002225 0.001891 0.001869 0.001914 0.001961 0.001922 0.001987 0.002055

(40,30) (10, 0∗29) 0.002233 0.001971 0.001947 0.001996 0.002048 0.002005 0.002076 0.00215
(0∗29, 10) 0.002065 0.001665 0.001648 0.001683 0.001721 0.001689 0.001742 0.001797

(0∗14, 5∗2, 0∗14) 0.001862 0.001652 0.001633 0.001668 0.001704 0.001674 0.001723 0.001775

(60,10) (50, 0∗9) 0.005647 0.003845 0.003728 0.003975 0.004238 0.004026 0.004426 0.004867
(0∗9, 50) 0.007522 0.007143 0.004163 0.004532 0.004957 0.005033 0.005632 0.005917
(1∗9, 41) 0.005998 0.004988 0.003376 0.003531 0.004459 0.043302 0.004712 0.004975

(0∗4, 25∗2, 0∗4) 0.003859 0.002859 0.00281 0.002911 0.003027 0.002935 0.003203 0.003512

(60,20) (40, 0∗19) 0.003199 0.002647 0.002601 0.002695 0.002795 0.002714 0.002854 0.003002
(0∗19, 40) 0.001856 0.001366 0.001356 0.001377 0.001399 0.001381 0.001412 0.001445
(1∗19, 21) 0.001905 0.001446 0.001434 0.001458 0.001483 0.001462 0.001497 0.001535

(0∗8, 10∗4, 0∗8) 0.001879 0.001638 0.001621 0.001656 0.001692 0.001662 0.001712 0.001764

(60,30) (30, 0∗29) 0.002177 0.001924 0.001901 0.001948 0.001997 0.001956 0.002024 0.002095
(0∗29, 30) 0.001638 0.001263 0.001255 0.001272 0.001291 0.001276 0.001301 0.00133

(1∗30) 0.001623 0.001376 0.001365 0.001387 0.001411 0.001391 0.001424 0.001458
(0∗14, 15∗2, 0∗14) 0.001466 0.00132 0.001310 0.001331 0.001354 0.001335 0.001366 0.001398

(60,40) (20, 0∗39) 0.001723 0.001541 0.001527 0.001554 0.001583 0.001559 0.001598 0.001639
(0∗39, 20) 0.001495 0.001253 0.001244 0.001262 0.001282 0.001266 0.001292 0.00132

(0∗18, 5∗4, 0∗18) 0.001304 0.001198 0.001193 0.001207 0.001224 0.001213 0.001234 0.001259

(80,40) (40, 0∗39) 0.001627 0.001489 0.001475 0.001503 0.001533 0.001508 0.001549 0.001591
(0∗39, 40) 0.001184 0.000968 0.000963 0.000973 0.000983 0.000974 0.000989 0.001005

(1∗40) 0.001203 0.001063 0.001057 0.001069 0.001082 0.001071 0.001089 0.001107
(0∗18, 10∗4, 0∗18) 0.001134 0.001044 0.001038 0.00105 0.001062 0.001052 0.001069 0.001086

(100,50) (50, 0∗49) 0.001333 0.001232 0.001224 0.001241 0.001259 0.001244 0.001268 0.001294
(0∗49, 50) 0.000988 0.000837 0.000834 0.00084 0.000847 0.000841 0.000851 0.000861

(1∗50) 0.000968 0.000876 0.000872 0.00088 0.000888 0.000881 0.000893 0.000905

All Bayes estimates are computed under a noninformative prior distribution. This corresponds to the case when the hyperparameters are set to a = 0, b = 0, c = 0 and d = 0. The asymptotic confidence intervals for β and λ are also presented. The estimates of R(t) and H(t) are tabulated for different choices of t in Tables 12 and 13, respectively. Conclusions similar to those given in Section 5 can easily be drawn from these tables.
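As a quick plug-in check on Tables 12 and 13, the first columns can be reproduced directly from the MLEs. Under the same assumption that model (1) is Chen's distribution, R(t) = exp{λ(1 − e^(t^β))} and the hazard rate is H(t) = λβ t^(β−1) e^(t^β); the minimal sketch below uses the MLEs of Table 11 (our own illustration, not the authors' code).

import numpy as np

lam_hat, beta_hat = 0.031536, 0.310619          # MLEs from Table 11

def reliability(t, lam, beta):
    # R(t) = exp(lam * (1 - exp(t**beta)))
    return np.exp(lam * (1.0 - np.exp(t ** beta)))

def hazard(t, lam, beta):
    # H(t) = lam * beta * t**(beta - 1) * exp(t**beta)
    return lam * beta * t ** (beta - 1.0) * np.exp(t ** beta)

print([reliability(t, lam_hat, beta_hat) for t in (20, 30, 40)])    # cf. Table 12
print([hazard(t, lam_hat, beta_hat) for t in (0.01, 0.05, 0.1)])    # cf. Table 13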

Example 2 (Simulated Data) In this example, we illustrate our proposed methods using a simulated data set. We generated a progressively type-II right censored sample of size m = 40 from a sample of size n = 50 from the model (1) with λ = 1, β = 1 and the censoring scheme (0∗39, 10).


Table 9. MSE values of all estimates of H(t) when t = 1 for different choices of (n, m) in Case II.

(n, m) Scheme H(t) (MLE) HSB(t) HLB1(t) (h = −0.5) HLB2(t) (h = 0.5) HLB3(t) (h = 1.5) HEB1(t) (q = −0.5) HEB2(t) (q = 0.5) HEB3(t) (q = 1.5)

(20,10) (10, 0∗9) 0.016251 0.011739 0.012428 0.011111 0.010053 0.01094 0.010813 0.011871
(0∗9, 10) 0.018354 0.011258 0.011801 0.010875 0.010653 0.01078 0.010093 0.011532
(1∗10) 0.014641 0.01106 0.011582 0.010584 0.009761 0.01044 0.010037 0.010459

(0∗4, 5∗2, 0∗4) 0.015093 0.011225 0.011823 0.010708 0.009861 0.010553 0.010058 0.010756

(20,15) (5, 0∗14) 0.011084 0.008573 0.008886 0.00828 0.007765 0.008218 0.008143 0.008912
(0∗14, 5) 0.009148 0.006667 0.006844 0.006502 0.006213 0.006538 0.006324 0.007609

(0∗7, 5, 0∗7) 0.009412 0.007362 0.007583 0.007156 0.006791 0.007137 0.007009 0.007762

(40,10) (30, 0∗9) 0.015638 0.011369 0.011982 0.010809 0.009864 0.010713 0.010632 0.011486
(0∗9, 30) 0.010342 0.01007 0.070978 0.061954 0.054355 0.010453 0.082318 0.010782
(1∗9, 21) 0.058472 0.048553 0.039159 0.009277 0.008862 0.085264 0.079624 0.096321

(0∗4, 15∗2, 0∗4) 0.019018 0.012121 0.012281 0.011165 0.010635 0.010934 0.010254 0.011237

(40,20) (20, 0∗19) 0.007704 0.006266 0.00642 0.006123 0.005869 0.006117 0.006074 0.006555
(0∗19, 20) 0.005474 0.004499 0.004574 0.004428 0.0043 0.0044312 0.004344 0.004612

(1∗20) 0.005837 0.004795 0.004881 0.004714 0.004576 0.004715 0.004632 0.004918
(0∗8, 5∗4, 0∗8) 0.005735 0.00486 0.004957 0.004768 0.004601 0.004736 0.004658 0.004754

(40,30) (10, 0∗29) 0.005198 0.004484 0.004553 0.004418 0.004301 0.004426 0.004375 0.004695
(0∗29, 10) 0.004352 0.003563 0.003598 0.003534 0.003473 0.003766 0.003691 0.003928

(0∗14, 5∗2, 0∗14) 0.004423 0.003888 0.003943 0.003835 0.003742 0.003828 0.003722 0.003937

(60,10) (50, 0∗9) 0.016023 0.011722 0.012355 0.011146 0.010178 0.011035 0.010796 0.011407
(0∗9, 50) 0.010007 0.009975 0.067543 0.064949 0.062119 0.005782 0.005352 0.005962
(1∗9, 41) 0.009986 0.009959 0.0.009688 0.096965 0.094347 0.005562 0.005353 0.005783

(0∗4, 25∗2, 0∗4) 0.027245 0.014405 0.013923 0.013092 0.013568 0.005332 0.005781 0.005966

(60,20) (40, 0∗19) 0.006689 0.005181 0.005341 0.005034 0.004782 0.004951 0.004662 0.005263
(0∗19, 40) 0.006689 0.005181 0.005341 0.005034 0.004782 0.004951 0.004661 0.005060
(1∗19, 21) 0.005934 0.004901 0.005018 0.004791 0.004593 0.004729 0.004318 0.004958

(0∗8, 10∗4, 0∗8) 0.005712 0.004883 0.004985 0.004785 0.004607 0.004728 0.004548 0.004809

(60,30) (30, 0∗29) 0.005078 0.004404 0.004471 0.004342 0.00423 0.004348 0.004288 0.004581
(0∗29, 30) 0.003431 0.002966 0.002995 0.002939 0.00289 0.002945 0.002861 0.003038

(1∗30) 0.003611 0.003129 0.003161 0.003099 0.003045 0.003106 0.003031 0.003227
(0∗14, 15∗2, 0∗14) 0.003624 0.003235 0.003273 0.003198 0.003131 0.003069 0.003169 0.003224

(60,40) (20, 0∗39) 0.003715 0.003326 0.003364 0.00329 0.003225 0.003294 0.003117 0.003432
(0∗39, 20) 0.002921 0.002507 0.002523 0.002492 0.002467 0.002511 0.002472 0.002679

(0∗18, 5∗4, 0∗18) 0.003116 0.002829 0.002858 0.002802 0.002753 0.002797 0.002682 0.002826

(80,40) (40, 0∗39) 0.003775 0.003397 0.003437 0.003358 0.003288 0.003355 0.003154 0.003443
(0∗39, 40) 0.002417 0.002156 0.002167 0.002145 0.002126 0.002157 0.002088 0.002253

(1∗40) 0.002721 0.002609 0.002627 0.002593 0.002563 0.002592 0.002581 0.002610
(0∗18, 10∗4, 0∗18) 0.002614 0.002398 0.00242 0.002378 0.002342 0.002371 0.002354 0.002378

(100,50) (50, 0∗49) 0.002991 0.002727 0.002749 0.002706 0.002568 0.002642 0.00274 0.002823
(0∗49, 50) 0.001852 0.001688 0.001695 0.001682 0.001671 0.001689 0.001609 0.001748

(1∗50) 0.002033 0.001857 0.001868 0.001846 0.001826 0.001847 0.001832 0.001879

The values of the hyperparameters are taken to be a = b = c = d = 4. The simulated observations are given by (0.016379, 0.021552, 0.025097, 0.053996, 0.079051, 0.130454, 0.161372, 0.163189, 0.181752, 0.212978, 0.217687, 0.226689, 0.230708, 0.269928, 0.330817, 0.335223, 0.344630, 0.358805, 0.382527, 0.385639, 0.397436, 0.433930, 0.435276, 0.443870, 0.490109, 0.512448, 0.538409, 0.576039, 0.605119, 0.611808, 0.696515, 0.762744, 0.764222, 0.767179, 0.816216, 0.819109, 0.848183, 0.867901, 0.908721, 0.911529). As in the previous example, here again we have obtained various estimates of λ and β and the corresponding confidence intervals. These results are tabulated in Table 14. The corresponding estimates of R(t) and H(t) are presented in Tables 15 and 16, respectively.
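A sample of this kind can be generated with the Balakrishnan–Sandhu algorithm: simulate progressively type-II censored uniform order statistics and transform them through the inverse CDF of model (1). The sketch below is our own illustration (the helper names and the seed are arbitrary), again assuming model (1) has CDF F(t) = 1 − exp{λ(1 − e^(t^β))}.

import numpy as np

rng = np.random.default_rng(123)                 # arbitrary seed

def progressive_uniform_sample(R, rng):
    # Balakrishnan-Sandhu algorithm: progressively type-II censored
    # order statistics from the uniform(0, 1) distribution.
    m = len(R)
    W = rng.uniform(size=m)
    gam = np.arange(1, m + 1) + np.cumsum(R[::-1])   # i + R_m + ... + R_{m-i+1}
    V = W ** (1.0 / gam)
    return 1.0 - np.cumprod(V[::-1])                 # U_1 < U_2 < ... < U_m

def chen_quantile(u, lam, beta):
    # inverse of F(t) = 1 - exp(lam * (1 - exp(t**beta)))
    return np.log(1.0 - np.log(1.0 - u) / lam) ** (1.0 / beta)

R = np.array([0] * 39 + [10])                    # scheme (0*39, 10): n = 50, m = 40
u = progressive_uniform_sample(R, rng)
sample = chen_quantile(u, lam=1.0, beta=1.0)     # analogous to the data listed above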


Table 10. Approximate confidence intervals for λ and β for different choices of (n, m) in Case II.

(n, m) Scheme Coverage probability (λ) Average length (λ) Coverage probability (β) Average length (β)

(20,10) (10, 0∗9) 0.8840 0.213319 0.9304 0.621904
(0∗9, 10) 0.8676 0.181062 0.9304 1.07791
(1∗10) 0.8744 0.18278 0.9230 0.798092

(0∗4, 5∗2, 0∗4) 0.8720 0.177289 0.9022 0.649612

(20,15) (5, 0∗14) 0.8836 0.178184 0.9274 0.510072
(0∗14, 5) 0.8886 0.172421 0.9296 0.670825

(0∗7, 5, 0∗7) 0.8816 0.163688 0.9098 0.516662

(40,10) (30, 0∗9) 0.9006 0.204864 0.9318 0.593535
(0∗9, 30) 0.9198 0.168559 0.9532 1.49864
(1∗9, 21) 0.9086 0.150335 0.9464 1.3924

(0∗4, 15∗2, 0∗4) 0.9182 0.14736 0.9018 0.618553

(40,20) (20, 0∗19) 0.9068 0.15294 0.9364 0.426806
(0∗19, 20) 0.9038 0.130917 0.9338 0.721307

(1∗20) 0.9034 0.131148 0.9296 0.5326
(0∗8, 5∗4, 0∗8) 0.9112 0.126172 0.9238 0.432027

(40,30) (10, 0∗29) 0.9118 0.128487 0.9332 0.351713
(0∗29, 10) 0.9082 0.124361 0.9364 0.463513

(0∗14, 5∗2, 0∗14) 0.9084 0.117264 0.9326 0.363862

(60,10) (50, 0∗9) 0.8986 0.204869 0.9308 0.580505
(0∗9, 50) 0.9358 0.169953 0.9694 1.70793
(1∗9, 41) 0.9362 0.167228 0.9662 1.6518

(0∗4, 25∗2, 0∗4) 0.9268 0.150705 0.9050 0.588123

(60,20) (40, 0∗19) 0.9134 0.151221 0.9362 0.420532
(0∗19, 40) 0.9190 0.110635 0.9486 0.862314
(1∗19, 21) 0.9188 0.113495 0.9396 0.76321

(0∗8, 10∗4, 0∗8) 0.9194 0.114645 0.9264 0.447524

(60,30) (30, 0∗29) 0.9180 0.126763 0.9466 0.346396
(0∗29, 30) 0.9234 0.107961 0.9416 0.577965

(1∗30) 0.9232 0.10782 0.9390 0.425726
(0∗14, 15∗2, 0∗14) 0.9244 0.103092 0.9284 0.351152

(60,40) (20, 0∗39) 0.9258 0.111009 0.9422 0.301647
(0∗39, 20) 0.9298 0.104769 0.9406 0.429474

(0∗18, 5∗4, 0∗18) 0.9282 0.097881 0.9396 0.311445

(80,40) (40, 0∗39) 0.9302 0.109944 0.9474 0.288451
(0∗39, 40) 0.9188 0.093402 0.9340 0.49775

(1∗40) 0.9236 0.093183 0.9366 0.364669
(0∗18, 10∗4, 0∗18) 0.9234 0.089016 0.9372 0.293566

(100,50) (50, 0∗49) 0.9328 0.098391 0.9404 0.267114
(0∗49, 50) 0.9380 0.084243 0.9472 0.442456

(1∗50) 0.9316 0.083651 0.9384 0.323859

Table 11. Estimates of λ and β for Example 1.

Parameter MLE SB LB1 (h = −0.5) LB2 (h = 0.5) LB3 (h = 1.5) EB1 (q = −0.5) EB2 (q = 0.5) EB3 (q = 1.5) Var(MLE) C.I. for MLE

λ 0.031536 0.034586 0.034626 0.034547 0.034467 0.033285 0.030632 0.031524 0.000167 (0.006201, 0.056872)
β 0.310619 0.307326 0.307479 0.307174 0.30687 0.306837 0.30588 0.304956 0.000622 (0.261745, 0.359493)


Table 12. Estimates of R(t) for different choices of t for Example 1.

t R(t) (MLE) RSB(t) RLB1(t) (h = −0.5) RLB2(t) (h = 0.5) RLB3(t) (h = 1.5) REB1(t) (q = −0.5) REB2(t) (q = 0.5) REB3(t) (q = 1.5)

20 0.693036 0.690616 0.691531 0.689704 0.687891 0.689301 0.686698 0.684157
30 0.589707 0.590477 0.591489 0.589464 0.587441 0.588759 0.585347 0.582015
40 0.412217 0.418038 0.419016 0.417054 0.415076 0.415645 0.410841 0.406151

Table 13. Estimates of H(t) for different choices of t for Example 1.

t H(t) (MLE) HSB(t) HLB1(t) (h = −0.5) HLB2(t) (h = 0.5) HLB3(t) (h = 1.5) HEB1(t) (q = −0.5) HEB2(t) (q = 0.5) HEB3(t) (q = 1.5)

0.01 0.29763 0.335834 0.340209 0.331283 0.32177 0.320005 0.287402 0.26117
0.05 0.114604 0.126862 0.127433 0.126284 0.125108 0.121625 0.11093 0.102107
0.1 0.07813 0.085706 0.085953 0.085457 0.084954 0.082404 0.075689 0.070089

Table 14. Estimates of λ and β for Example 2.

Parameter MLE SB LB1 (h = −0.5) LB2 (h = 0.5) LB3 (h = 1.5) EB1 (q = −0.5) EB2 (q = 0.5) EB3 (q = 1.5) Var(MLE) C.I. for MLE

λ 1.04543 1.03637 1.04477 1.02804 1.01202 1.0284 1.01303 0.99895 0.033599 (0.686157, 1.404704)
β 1.03618 1.02944 1.03602 1.02291 1.01023 1.02314 1.01088 0.99942 0.026297 (0.718342, 1.354014)

Table 15. Estimates of R(t) for different choices of t for Example 2.

t R(t) (true) R(t) (MLE) RSB(t) RLB1(t) (h = −0.5) RLB2(t) (h = 0.5) RLB3(t) (h = 1.5) REB1(t) (q = −0.5) REB2(t) (q = 0.5) REB3(t) (q = 1.5)

0.25 0.752747 0.755321 0.753217 0.753848 0.752587 0.751333 0.752383 0.750728 0.749099
0.5 0.522714 0.518414 0.522267 0.523114 0.521417 0.519712 0.520626 0.517337 0.514093
1 0.179374 0.165904 0.176717 0.177367 0.17606 0.174728 0.17267 0.164384 0.15681

Table 16. Estimates of H(t) for different choices of t for Example 2.

t H(t) (true) H(t) (MLE) HSB(t) HLB1(t) (h = −0.5) HLB2(t) (h = 0.5) HLB3(t) (h = 1.5) HEB1(t) (q = −0.5) HEB2(t) (q = 0.5) HEB3(t) (q = 1.5)

0.25 1.284025 1.3068 1.29156 1.30313 1.28016 1.25862 1.28281 1.26597 1.25047
0.5 1.648721 1.72032 1.6891 1.71071 1.66816 1.63054 1.67681 1.6534 1.63223
1 2.718282 2.94458 2.93484 3.0998 2.77141 2.55839 2.87711 2.76876 2.67795

7. Conclusions

In this paper we have considered estimation of the unknown parameters λ and β of a bathtub-shaped distribution. We proposed MLEs and Bayes estimators for these unknown parameters. Bayes estimates are computed under different loss functions, namely squared error, LINEX and entropy. We found that the Bayes estimates are superior to the corresponding MLEs. Asymptotic confidence intervals are also obtained. Furthermore, similar estimates for the reliability and hazard functions are also derived. Two numerical examples are also analyzed using the proposed methods of estimation.

Acknowledgements

The authors are thankful to the Editor and referees for their helpful comments on the earlier version of this manuscript.

References

[1] M.V. Aarset, How to identify a bathtub hazard rate, IEEE Trans. Reliab. 36 (1987), pp. 106–108.
[2] N. Balakrishnan, Progressive censoring methodology: An appraisal (with discussion), Test 16 (2007), pp. 211–296.
[3] N. Balakrishnan and R. Aggarwala, Progressive Censoring – Theory, Methods, and Applications, Birkhäuser, Boston, MA, 2000.
[4] N. Balakrishnan and A.C. Cohen, Order Statistics and Inference: Estimation Methods, Academic Press, Boston, MA, 1991.
[5] N. Balakrishnan, N. Kannan, C.T. Lin, and S.J.S. Wu, Inference for the extreme value distribution under progressive type-II censoring, J. Statist. Comput. Simul. 74 (2004), pp. 25–45.
[6] Z. Chen, A new two parameter lifetime distribution with bathtub shaped or increasing failure rate function, Statist. Probab. Lett. 49 (2000), pp. 155–161.
[7] A.C. Cohen, Progressively censored sample in life testing, Technometrics 5 (1963), pp. 327–339.
[8] A.C. Cohen, Life testing and early failure, Technometrics 8 (1966), pp. 539–549.
[9] A.C. Cohen and N.J. Norgaard, Progressively censored sampling in the three parameter gamma distribution, Technometrics 19 (1977), pp. 333–340.
[10] M.R. Gurvich, A.T. Dibenedetto, and S.V. Rande, A new statistical distribution for characterizing the random strength of brittle materials, J. Mater. Sci. 32 (1997), pp. 2559–2564.
[11] U. Hjorth, A reliability distribution with increasing, decreasing, and bathtub-shaped failure rate, Technometrics 22 (1980), pp. 99–107.
[12] H. Jiang, M. Xie, and L.C. Tang, On MLEs of the parameters of a modified Weibull distribution for progressively type-2 censored samples, J. Appl. Statist. 37 (2010), pp. 617–627.
[13] C. Kim, J. Jung, and Y. Chung, Bayesian estimation for the exponentiated Weibull model under type-II progressive censoring, Statist. Papers 52 (2011), pp. 53–70.
[14] J.F. Lawless, Statistical Models and Methods for Lifetime Data, Wiley, New York, 1982.
[15] C.T. Lin, S.J.S. Wu, and N. Balakrishnan, Inference for log-gamma distribution based on progressively type-II censored data, Commun. Statist. – Theory Methods 35 (2006), pp. 1271–1292.
[16] D.V. Lindley, Approximate Bayesian method, Trabajos de Estadística y de Investigación Operativa 31 (1980), pp. 223–237.
[17] M.T. Madi and M.Z. Raqab, Bayesian inference for the generalized exponential distribution based on progressively censored data, Commun. Statist. – Theory Methods 38 (2009), pp. 2016–2029.
[18] G.S. Mudholkar and D.K. Srivastava, Exponentiated Weibull family for analyzing bathtub failure-rate data, IEEE Trans. Reliab. 42 (1993), pp. 299–302.
[19] S. Nadarajah and S. Kotz, The two parameter bathtub-shaped lifetime distribution, Qual. Reliab. Eng. Int. 23 (2007), pp. 279–280.
[20] H.K.T. Ng, Parameter estimation for a modified Weibull distribution for progressively type-II censored samples, IEEE Trans. Reliab. 54 (2005), pp. 374–380.
[21] S. Rajarshi and M.B. Rajarshi, Bathtub distributions: A review, Commun. Statist. – Theory Methods 17 (1988), pp. 2597–2621.
[22] W.J. Reed, A flexible parametric survival model which allows a bathtub-shaped hazard rate function, J. Appl. Statist. 38 (2011), pp. 1665–1680.
[23] S.K. Sinha, Reliability and Life Testing, Wiley, New York, 1987.
[24] F.K. Wang, A new model with bathtub-shaped failure rate using an additive Burr XII distribution, Reliab. Eng. System Safety 70 (2000), pp. 305–312.
[25] J.W. Wu, H.L. Lu, C.H. Chen, and C.H. Wu, Statistical inference about the shape parameter of the new two parameter bathtub-shaped lifetime distribution, Qual. Reliab. Eng. Int. 20 (2004), pp. 607–616.
[26] S.-J. Wu, Estimation of the two parameter bathtub-shaped lifetime distribution with progressive censoring, J. Appl. Statist. 35 (2008), pp. 1139–1150.
[27] M. Xie, Y. Tang, and T.N. Goh, A modified Weibull extension with bathtub-shaped failure rate function, Reliab. Eng. System Safety 76 (2002), pp. 279–285.
[28] A. Zellner, Bayesian estimation and prediction using asymmetric loss function, J. Am. Statist. Assoc. 81 (1986), pp. 446–451.
