
CONTRIBUTIONS TO THE MODELING AND ASSESSMENT
OF OCCUPATIONAL EXPOSURES AND THEIR HEALTH-RELATED EFFECTS

by Robert Harold Lyles

Department of Biostatistics, University of North Carolina

Institute of Statistics Mimeo Series No. 2154T

February 1996

CONTRIBUTIONS TO THE MODELING AND ASSESSMENT
OF OCCUPATIONAL EXPOSURES AND THEIR HEALTH-RELATED EFFECTS

by

Robert Harold Lyles

A dissertation submitted to the faculty of the University of North Carolina at

Chapel Hill in partial fulfillment of the requirements for the degree of Doctor

of Philosophy in the Department of Biostatistics, School of Public Health.

Chapel Hill

1996

Advisor

Reader

Reader


ABSTRACT

Robert H. Lyles. Contributions to the Modeling and Assessment of Occupational Exposures and Their Health-Related Effects (Under the direction of Lawrence L. Kupper)

Statistical contributions are made with applications in the area of occupational epidemiology. In particular, methodology is developed within the framework of reasonable models for shift-long exposure that take into account potentially important sources of variability (e.g., between-worker or day-to-day variability), while also maintaining the well-supported lognormality assumption.

We consider estimation of key population parameters of the distribution of repeated shift-long exposure measurements on workers in plants or factories. Assuming balanced data, uniformly minimum variance unbiased (UMVU) estimators for these parameters are presented under two exposure models. Under one of these models, we study in detail the efficiency of the UMVU estimator for the mean with respect to logical competitors (such as the MLE). The prediction of mean exposure for individual workers is also considered, with emphasis on mean squared error of prediction (MSEP). Theoretical and simulation studies compare the MSEPs of reasonable candidate predictors.

For occupational exposure assessment, we pursue a hypothesis testing strategy emphasizing worker-specific mean exposure as a key predictor of long-term adverse health effects. Large sample-based test statistics suited to both balanced and unbalanced exposure data are derived for testing the hypothesis of interest, and sample size approximation is discussed. Simulation studies of type I error rates and power levels assuming moderate sample sizes are provided.

We present and assess methods for adjusting multiple linear regression analyses for multiplicative (lognormal) measurement error when the goal is to regress a continuous health outcome variable on true mean shift-long exposure (and covariates) over a study period. Consistent estimation and valid inference are emphasized. In light of simulation studies assuming moderate sample sizes, methodological recommendations are made pertaining to specific practical uses in occupational epidemiology.

Nearly all of the proposed methodology is illustrated using actual exposure and health outcome data on workers from various industrial settings.

Page 4: II - NC State Department of · PDF fileLarge sample-based test statistics suited to ... 5.2.3 Sample calculations under Model II 158 5.2.4 Sample calculations for ... Table 2.6: MSEP

ACKNOWLEDGEMENTS

First, I would like to express my appreciation to Larry Kupper, for your extraordinary patience, understanding and guidance. Your contributions to the successful completion of this project go far beyond just excellent technical advice and insight. You have treated me as a colleague and friend throughout, and have made this process an invaluable experience for me.

Secondly, my thanks to the other members of my committee: Mike Symons, Paul Stewart, Lloyd Edwards, and Steve Rappaport. I appreciate your support and insights. I am particularly thankful to Steve for providing so much of the motivation for this work, and for giving me the chance to play a part in such an interesting and fruitful real-world project. The interactions with you and Larry have been a great learning experience. I have also enjoyed and benefited from interacting with our colleagues on the nickel project: Elaine Symanski, Mike Tornero, R.C. Yu, and Kevin Wei.

Finally, I would not be reaching this milestone without the constant love and support of my family, and thanks to each of you for keeping the inquiries about graduation dates to a minimum! To Cindy: I am so proud to be getting to this point alongside you. I would not trade for the world our time here at UNC, or our future together.


Chapter

Table of Contents

Page

1. Literature review and proposed research 1

1.1 Introduction 1

1.2 Background 1

1.3 The lognormal distribution and its application to exposure data 5

1.3.1 Univariate distribution 6

1.3.2 Multivariate distribution 8

1.4 Hypothesis testing for lognormal means and exceedance probabilities 9

1.4.1 Testing the probability of an exceedance 9

1.4.2 Testing the population mean level 11

1.5 Random effects models 13

1.5.1 Notation 13

1.5.2 Estimation 14

1.5.3 Prediction of random effects 20

1.6 Modeling occupational exposures 22

1.6.1 Stationarity 23

1.6.2 Sources of variability 23

1.7 Exposure measurement error 24

1.7.1 Background 24

1.7.2 Concepts 25

1.7.3 Adjusting for measurement error 27

1.7.4 Measurement error in studies of occupational exposure 33

1.8 Proposed research 34

2. Estimation and prediction issues in the study of occupational exposure 38

2.1 Preliminaries 38

2.1.1 Models for exposure 38

2.2 Estimating the population mean and variance 43

2.2.1 Intuitive estimators for mean and variance under Model I 43

2.2.2 UMVU estimators for mean and variance under Models I and II 46

2.2.3 Variances of UMVU estimators for the mean under Models I and II 50

2.2.4 Efficiency considerations for estimating μ_x under Model I 53



2.2.5 Approximate confidence intervals for μ_x under Models I and II 57

2.3 Prediction of worker-specific mean exposures 60

2.3.1 Derivation of "best predictor" of μ_xi under Models I and II 61

2.3.2 Two conditionally unbiased predictors for μ_xi under Model I 66

2.3.3 Mean squared error of prediction for candidate predictors under Model I 67

2.3.4 Mean squared error of prediction for estimated predictors under Model I 70

3. Statistical methods for occupational exposure assessment 76

3.1 Introduction 76

3.2 Workplace exposure assessment issues based on a random sample 77

3.2.1 Sample size for testing exceedance probabilities 77

3.2.2 Testing the population mean level of exposure 80

3.3 Exposure assessment accounting for between-worker variability 87

3.3.1 Null and alternative hypotheses 87

3.3.2 Derivation of test statistics for the balanced case 88

3.3.3 Sample size approximation 94

3.3.4 Handling negative ANOVA estimates of σ_b² 95

3.3.5 Simulation study of test statistics for balanced case 97

3.3.6 Derivation of a Wald-type test statistic for the unbalanced case 103

3.3.7 Simulation study of test statistics for unbalanced case 105

3.4 An exposure assessment method accounting for day-to-day variability 109

3.4.1 Derivation of a test statistic under Model II 109

3.4.2 Sample size approximation and considerations 111

3.4.3 Handling negative ANOVA estimates of σ_d² 113

3.4.4 Simulation study of test statistic under Model II 114

4. Measurement error considerations with applications in occupational epidemiology 118

4.1 Introduction 118

4.1.1 Occupational health motivation 118

4.2 The multiplicative-lognormal measurement error problem 120

4.2.1 Preliminaries 120

4.2.2 Some potential measurement error adjustment methods 123

4.2.3 Large sample arguments for inference using CE and QL methods 127

4.2.4 Simulation study of CE and QL methods 130

4.3 Specific applications in occupational epidemiology 138

4.3.1 Proposed models for real-world applications 138



4.3.2 Details of parameter estimation 140

4.3.3 Incorporation of estimators for r 141

4.3.4 Simulation study applying CE and QL adjustment methods 145

4.3.5 Commentary regarding simulation results 148

5. Applications using actual occupational exposure data 152

5.1 Purpose 152

5.2 Origins of data and calculations 152

5.2.1 Sample calculations under Model I, balanced data 152

5.2.2 Sample calculations under Model I, unbalanced data 156

5.2.3 Sample calculations under Model II 158

5.2.4 Sample calculations for measurement error adjustment under Model I 161

6. Possible extensions and future research 164

6.1 Introduction 164

6.2 Some potential extensions of the work 164

6.2.1 General issues 164

6.2.2 Estimation 165

6.2.3 Prediction 166

6.2.4 Occupational exposure assessment 166

6.2.5 Measurement error adjustment 167

6.2.6 Censored exposure data 168

APPENDIX: Exposure data for examples in Chapter 5 169

REFERENCES 173



List of Tables

Table 2.1: ANOVA table for Model I (2.1.1.1) 39

Table 2.2: ANOVA table for Model II (2.1.1.6) 41

Table 2.3: Bias calculations for intuitive estimators of μ_x under Model I 54

Table 2.4: Variance efficiencies of intuitive estimators of μ_x relative to UMVU under Model I 55

Table 2.5: MSE efficiencies of intuitive estimators of μ_x relative to UMVU under Model I 56

Table 2.6: MSEP efficiencies of alternative predictors relative to μ̂_{xi,BP} under Model I 69

Table 2.7: Estimated MSEP efficiencies of alternative predictors relative to μ̂_{xi,BP} under Model I 74

Table 3.1: Comparison of exact sample sizes with previously published approximation 79

Table 3.2: Comparison of sample sizes approximated for testing the mean of a lognormal distribution 83

Table 3.3: Results of simulations comparing four procedures for testing mean exposure level 85

Table 3.4: Results of simulations assessing performance of proposed test statistics for balanced data 100

Table 3.5: Results of simulations assessing performance of proposed test statistics for balanced data 102

Table 3.6: Results of simulations assessing performance of test statistics for unbalanced data 107

Table 3.7: Sample sizes approximated via (3.4.2.1), for cases parallel to those in Table 3.5 113

Table 3.8: Results of simulations assessing performance of Wald-type test statistic under Model II 115

Table 4.1: Summary of simulation results assessing measurement error adjustment strategies 132

Table 4.2: Summary of simulation results assessing measurement error adjustment strategies 134

Table 4.3: Approximate variances of weights for situations considered in Tables 4.1 and 4.2 138

Table 4.4: Summary of simulation results illustrating specific occupational health application 147

Table 4.5: Performance of CIs based on unconditional Var(Tan) for simulations in Table 4.4 150

Table 5.1: Sample calculations to estimate population mean and variance for nickel data 153

Table 5.2: Sample calculations for workplace exposure assessment using nickel data 156

Table 5.3: Sample calculations to estimate population mean and variance for styrene data 159

Table 5.4: Results of CE and QL analyses of data on 3 groups of workers in the animal feed industry .. 162

Table A1: Nickel dust exposures on maintenance mechanics at a nickel-producing plant 169

Table A2: Nickel dust exposures on furnacemen from a refinery at a nickel-producing complex 170

Table A3: Nickel dust exposures on maintenance mechanics from a mill at a nickel-producing plant 171

Table A4: Styrene exposures on laminators at a boat-manufacturing plant 172



List of Figures

Figure 4.1: Plot of the means of 1000 simulated values of ?3co for various sample sizes (K) 127

Figure 5.1: Plot of the values of three estimated predictors of mean exposures under Model I 154

Figure 5.2: Plot of the values of two estimated predictors of mean exposures under Model I 157

Figure 5.3: Plot of the values of two estimated predictors of mean exposures under Model II 160



Chapter 1: Literature Review and Proposed Research

1.1 INTRODUCTION

Many occupations involve exposure to potentially toxic chemical agents in the workplace. Hence, a major goal of the industrial hygienist is to ensure the control of toxic agents to such an extent that workers' risk of chronic or acute health effects is maintained at or below an acceptable level. Two major governmental agencies were established in 1970 to oversee the setting and enforcement of standards limiting workplace exposures: the Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH). Their duties involve important decisions affecting public health, and hence require the utilization of knowledge from a variety of disciplines, including biology, physiology, pharmacokinetics, environmental science, epidemiology, and statistics. As the base of scientific knowledge regarding exposure to agents in the workplace expands, the role of the statistician as a collaborator in the ongoing effort to control the levels of such exposures continues to increase in importance.

In particular, statisticians can first play an important role in the effort to better characterize the distributions of exposures to various agents in the workplace. Secondly, they can help to develop, interpret and apply statistical models relating exposure to disease risk, a contribution which relates ultimately to the establishment of appropriate limits on exposures to particular agents. Thirdly, given appropriate limits, the statistician may also help by contributing methodology useful for determining whether a particular workplace exposure distribution is acceptable, and if not, for suggesting means by which it may be made so. With regard to the latter effort, issues relating to design and sampling (e.g., how many measurements need to be made, how they should be allocated, and how the sampling should be carried out) are essential. The goal of the research summarized in the following chapters is to make statistical contributions to the understanding and control of occupational exposures in each of the three general ways described above.

1.2 BACKGROUND

There have been many studies of airborne exposures in the workplace. The importance of a careful statistical treatment of the characterization and assessment of such exposures has long been recognized. Oldham and Roach laid a solid foundation for occupational exposure monitoring in the 1950s, with a series of papers describing their study of pneumoconiosis among coal miners. A short listing of more recent studies includes investigation of lead exposures at an alkyl-lead plant (Cope et al., 1979), mercury exposures at a chloralkali plant (Lindstedt et al., 1979), benzene exposures at a petroleum refining plant (Spear et al., 1987), rubber dust exposures at a retreading plant (Kromhout et al., 1988), styrene exposures at a boat manufacturing plant (Yager et al., 1993), and nickel dust exposures at a nickel producing company (Rappaport et al., 1995a).

The first and foremost consideration for the study of occupational exposure is the underlying exposure-disease relationship. One must determine whether the adverse health effect presumed to be related to the toxicant in question results from long-term (chronic), or from short-term (acute), exposure to that toxicant. This determination affects every facet of the exposure assessment, including the choice of the measure of exposure and the appropriate assessment strategy (Rappaport, 1991a), the means of data collection, and the statistical handling of the data. Rappaport (1991b) discusses some of the biological and practical considerations involved.

Typically, exposure measurements result from laboratory analysis of filtered particulates or vapor gathered by means of personal monitors attached directly to workers, and the actual measure of exposure is a time-weighted average (TWA) of the instantaneous exposures received by a worker over a given period (Spear et al., 1986). One important issue is determining the appropriate averaging time (the length of time over which each sample is taken, or equivalently, the length of time over which the instantaneous exposures are integrated to obtain each TWA value). In the environmental literature, one most often encounters discussions of 15-minute and 8-hour (shift-long) TWAs. Logically, attention focuses on the former when short-term effects are under study, and on the latter when chronic effects are expected. While this is most often the case, it is important to pay careful attention to the averaging time and its relevance to the underlying exposure-disease relationship when planning or interpreting an exposure assessment.

Many strategies for assessing workplace exposures have appeared in the literature. Three of the more widely recognized approaches include compliance testing, testing the probability that an arbitrary TWA measurement exceeds a limit, and testing the mean level of exposure against a limit (Rappaport, 1991b). Compliance testing seeks to ascertain with high certainty that no exposure measurement (usually a 15-minute TWA during a given shift) exceeds a limit. This differs in a subtle way from a test focused entirely on the probability that a measurement exceeds a limit, mostly due to the inherent allowance of exceedance in the latter approach. Several types of exposure limits (or standards) have been promulgated by various regulatory agencies. Prior to 1970, the American Conference of Governmental Industrial Hygienists (ACGIH) established so-called Threshold Limit Values (TLVs), which were divided basically into long-term (8-hour) exposure limits and short-term (15-minute) exposure limits (STELs). After 1970, OSHA established new permissible exposure limits (PELs). For a discussion of the history and interpretation of these various types of limits, see the review paper by Rappaport (1991b). For simplicity, we will henceforth refer to most standards as "occupational exposure limits" (OELs), and generally assume that they are appropriate to the given situation.

Interest in the mean exposure concentration is generally highest for toxicants leading to chronic health effects. In their early work, Oldham and Roach (1952) describe a progression from airborne coal dust exposure, to dose, to risk of pneumoconiosis. They define the dose as the product of air concentration per unit air volume, volume of air breathed per unit time, and duration of exposure. They treat coal dust as a chronic toxicant, and focus attention on the long-term mean exposure as opposed to short-term (within-shift) exposures. Oldham (1953) reiterates this focus in a later article. In the discussion of their sampling program, they show remarkable foresight. Among the issues covered or implied in their discussion are the need for full-shift monitoring, the random sampling of workers and shifts, the need for grouping workers due to differences in exposure in different areas of the colliery, and the need for a continuous sampling device. With regard to the final point, the thermal precipitator device used for sampling in the 1950s (Roach, 1959), which was only capable of intermittent sampling, has indeed been replaced by more modern continuously sampling personal monitors. This allows some simplification of the sampling program.

Rappaport (1991b; 1993) proposes a similar conceptual model in terms of a progression from exposure, to dose, to tissue damage, and ultimately to disease risk. The conceptual model involves rate constants for uptake and elimination of the toxicant, as well as for tissue damage and repair. Rappaport argues that, if the kinetic processes occurring during the progression from exposure to tissue damage are linear (which translates into the assumption that certain differential equations involved in the progression are first-order), then tissue damage and ultimate risk of disease are proportional to the long-term mean exposure level, and short-term "peak" exposures are of little significance. For many toxicants, Rappaport maintains this same point of view even if nonlinear kinetics are involved (Rappaport, 1991a), and he concludes from his biological considerations that the assessment of exposures leading to chronic health effects should focus on the long-term mean exposure level experienced by a worker. His recommended sampling strategy revolves around making repeated 8-hour TWA measurements on randomly sampled workers. With regard to serious acute effects, Rappaport recommends monitoring and control of the source of contamination as opposed to exposure assessment programs. For continuous exposures leading to less serious short-term effects, he implies that an assessment strategy focused on the probability that a short-term TWA measurement exceeds an appropriate OEL may be reasonable.

There has been some expression of concern in the environmental literature about the appropriateness of compliance testing, mainly on the grounds that it may encourage employers to take few measurements (Rappaport, 1984; 1991b). While this may be true, a responsible interpretation of the goals of the assessment combined with a sound statistical approach actually leads one to seek higher numbers of measurements in order to improve one's chances of demonstrating compliance (e.g., Symons et al., 1993). The nature of compliance testing has also led to advocacy of so-called "worst-case" sampling (e.g., Leidel and Busch, 1975), whereby one seeks out periods of high exposure for sampling. This strategy has clear benefits in terms of cost, though there is some concern about proper identification of worst-case scenarios. Others feel that this practice should be discarded in favor of random sampling (e.g., Rappaport, 1991b), with the added benefit that occasional monitoring would allow the development over time of large databases which would be useful for the further study of exposure-disease relationships over the full range of exposures.

It is not the purpose of the current research to attempt to settle philosophical debates about exposure monitoring and assessment, but rather to contribute useful statistical methodology that may be applied across a range of exposure assessment settings which have received considerable recent attention in the environmental literature. As such, we will focus mostly on the long-term mean exposure in light of Rappaport's (1991a, 1991b) biological considerations for exposures related to chronic disease. With regard to short-term exposures, we will not go further into compliance testing, but will consider some methodology applicable for testing the probability of exceedance relative to an OEL. Our discussion of methodology regarding exposure-disease relationships will assume the availability of appropriate databases, generally compiled through random sampling schemes designed to provide information over the full range of exposure for the population in question.

As a final background note, we should consider the issue of grouping workers for the purpose of workplace exposure assessment. It has long been recognized (e.g., Oldham and Roach, 1952) that different tasks and locations within a plant, factory, or mine produce gradations in exposure levels. Hence, a widely accepted observational approach is to form groups of workers based on such common factors as job title and location. There is less agreement in terms of whether one should then regard these groups as homogeneous with respect to mean exposure for the purposes of exposure assessment. A good deal of recent research has sought to point out the fact that groups formed on the basis of observation are still rarely homogeneous. Presumably as a result of variation in individuals' tasks and work habits, considerable between-worker variability in 8-hour TWA exposures can be present (Kromhout et al., 1993; Rappaport et al., 1993). One can clearly postulate other potential sources of variability. Hence, while the observational grouping may eliminate systematic sources, there are likely some random sources of variability that remain. For the purposes of this research, we will generally work with observational groups of workers, but will acknowledge the need to account for potential sources of variability in exposure among the members of such groups.

1.3 THE LOGNORMAL DISTRIBUTION AND ITS APPLICATION TO EXPOSURE DATA

The lognormal distribution is often used to describe occupational exposure data. Oldham (1953) reports what he believes to be the first observation that the logarithms of coal dust exposure measurements made with the thermal precipitator appear to be normally distributed. The averaging time for the samples studied by Oldham was three minutes. Roach (1959) reports further empirical evidence that coal dust exposures may be distributed as lognormal; however, this evidence appears to be based upon the observed distributions of somewhat diverse data in terms of averaging times, and may contain repeated measurements on workers either during single shifts or across shifts. More recently, Hines and Spear (1984) found that data on short-term exposures to ethylene oxide in a hospital were described adequately by a lognormal distribution. With regard to 8-hour TWA measurements, Spear et al. (1986) give empirical evidence that shift-long exposures to mercury in a chloralkali plant (Lindstedt et al., 1979) were well approximated by a lognormal distribution. Rappaport (1991b) cites additional references, and reports empirical evidence for lognormality of repeated shift-long exposure data on several workers examined separately at an alkyl-lead plant (Cope et al., 1979).

In general, there are many potential means for generating the lognormal distribution, which may help to explain its frequent occurrence in nature. Koch (1966; 1969) describes some of the processes by which the lognormal distribution may be encountered, either exactly or approximately. In particular, he suggests that biological processes resulting from a sequence of many small steps may be found to obey the lognormal law. Such a proposal is supported under the assumption that the sequential steps arise in a multiplicative manner, under conditions such that some version of the central limit theorem may be applied (hence, independence of the sequential steps is not required). While Koch's review would seem to lend support to the assumption of lognormality due to its ubiquity in nature, he is quick to point out that many distributions (in particular, the Pearson types III and V) may simulate the lognormal (Koch, 1969). This observation certainly supports the idea that some validation of the lognormality assumption should take place before efforts begin to draw data-based conclusions (statistical or otherwise) using methods dependent upon this assumption.

An important early reference regarding lognormal distribution theory is that of Aitchison and Brown (1957). The book by Crow and Shimizu (1988) provides an extensive review of the properties of both univariate and multivariate lognormal distributions, including details regarding parameter estimation. A sampling of some of the basic results follows.

1.3.1 Univariate Distribution

Consider a random variable Y which is normally distributed with mean μ and variance σ² (we will adopt standard notation and write Y ~ N(μ, σ²)). It follows that the random variable X = exp(Y) follows a two-parameter lognormal distribution. The density function of X is

    f(x) = \frac{1}{\sqrt{2\pi}\,\sigma x} \exp\left\{-\frac{(\ln x - \mu)^2}{2\sigma^2}\right\}, \quad x > 0.    (1.3.1.1)

We will write X ~ LN(μ, σ²). Clearly, the range of the random variable X is (0, ∞). We note in passing that there are other forms of univariate lognormal distributions; for instance, one may consider the three-parameter lognormal distribution (Cohen, 1988), which includes an extra location parameter τ and generates random variables in the range (τ, ∞).
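As a numerical sanity check on the density (1.3.1.1), the following sketch integrates it over a fine grid and confirms that the total mass is approximately 1 and that half the mass lies below exp(μ), the median of X. The values of μ and σ are arbitrary illustrations, not figures from the text:

```python
import math

mu, sigma = 1.0, 0.5  # illustrative log-scale parameters (assumed)

def ln_pdf(x):
    """Density (1.3.1.1) of X ~ LN(mu, sigma^2)."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        math.sqrt(2 * math.pi) * sigma * x)

# Trapezoid-rule integration over (0.001, 40]: total mass should be ~1,
# and the mass below exp(mu) should be ~0.5, confirming the median is exp(mu).
h = 0.001
grid = [0.001 + h * i for i in range(40000)]
median = math.exp(mu)
total = sum(h * (ln_pdf(a) + ln_pdf(a + h)) / 2 for a in grid)
below = sum(h * (ln_pdf(a) + ln_pdf(a + h)) / 2 for a in grid if a + h <= median)
print(round(total, 3), round(below, 3))  # approximately 1.0 and 0.5
```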

Given that X ~ LN(μ, σ²), it is quickly verifiable that the r-th moment of X about the origin is given by E(X^r) = exp(rμ + r²σ²/2). The following results are immediate:

    \mathrm{E}(X) = \mu_x = \exp(\mu + \sigma^2/2),    (1.3.1.2)

    \mathrm{Var}(X) = \sigma_x^2 = \exp(2\mu + \sigma^2)\{\exp(\sigma^2) - 1\}.    (1.3.1.3)

Expressions for other moment-related quantities (e.g., skewness and kurtosis coefficients) are easily derived and are given by Shimizu and Crow (1988). Two parameters commonly referred to in the environmental literature are the geometric mean (GM) and geometric standard deviation (GSD) of X. The GM, which is also equivalent to the median of X, is given by exp(μ); the GSD is given by exp(σ). It should be emphasized that μ_x, considered an important parameter related to chronic health effects, depends on both of the parameters (μ and σ²) defining the distribution of Y = ln(X).
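The closed-form expressions (1.3.1.2) and (1.3.1.3) can be checked against a simple Monte Carlo simulation; in this sketch the values of μ and σ are illustrative assumptions rather than figures from the text:

```python
import math
import random

mu, sigma = 0.7, 0.4  # illustrative log-scale parameters (assumed)
random.seed(42)

# Closed-form mean and variance of X = exp(Y), Y ~ N(mu, sigma^2),
# per (1.3.1.2)-(1.3.1.3), plus the geometric mean and GSD
mean_x = math.exp(mu + sigma ** 2 / 2)
var_x = math.exp(2 * mu + sigma ** 2) * (math.exp(sigma ** 2) - 1)
gm, gsd = math.exp(mu), math.exp(sigma)

# Monte Carlo check based on simulated shift-long exposures
xs = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]
mc_mean = sum(xs) / len(xs)
mc_var = sum((x - mc_mean) ** 2 for x in xs) / (len(xs) - 1)

print(round(mean_x, 3), round(mc_mean, 3))  # closed form vs. simulation
print(round(var_x, 3), round(mc_var, 3))
```

Note that mean_x exceeds gm whenever σ > 0, illustrating that μ_x depends on σ² as well as μ.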

The 100(1 − α)-th percentile of the distribution of X is given by:

x₁₋α = exp(μ + z₁₋α σ), (1.3.1.4)

where z₁₋α is the 100(1 − α)-th percentile of the standard normal distribution. By definition, we have Pr(X > x₁₋α) = α.
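Expression (1.3.1.4) can be sketched with the standard library's normal quantile function (the function name is ours):

```python
import math
from statistics import NormalDist

def lognormal_percentile(mu, sigma, p):
    """100p-th percentile of X = exp(Y), Y ~ N(mu, sigma^2): exp(mu + z_p * sigma)."""
    z_p = NormalDist().inv_cdf(p)   # standard normal percentile z_p
    return math.exp(mu + sigma * z_p)
```

For p = 0.5 the percentile reduces to the GM, exp(μ), since z₀.₅ = 0.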

Maximum likelihood (ML) estimation of μx, σx², x₁₋α, and related quantities based on a random sample of size k from the distribution of X is straightforward using the ML estimators μ̂ = Ȳ and σ̂² = {(k − 1)/k} S²y, where Ȳ = k⁻¹ Σᵢ₌₁ᵏ Yᵢ and S²y = (k − 1)⁻¹ Σᵢ₌₁ᵏ (Yᵢ − Ȳ)² are the sample mean and variance of the logged sample values; one simply applies the familiar property regarding ML estimators of functions of parameters. Shimizu (1988) details uniformly minimum variance

unbiased (UMVU) estimation. A brief synopsis of some of the results (adopting much of his notation)

follows.
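The ML plug-in estimators just described may be sketched as follows (a minimal illustration; function name ours; x holds the raw, unlogged measurements):

```python
import math
from statistics import NormalDist

def ml_estimates(x, alpha=0.05):
    """ML plug-in estimates of mu_x and the 100(1-alpha)-th percentile of X,
    from a random sample x of size k (note the divisor k in the ML variance)."""
    k = len(x)
    y = [math.log(v) for v in x]
    mu_hat = sum(y) / k
    sig2_hat = sum((v - mu_hat) ** 2 for v in y) / k     # = ((k-1)/k) * S_y^2
    mu_x_hat = math.exp(mu_hat + sig2_hat / 2.0)         # plug into (1.3.1.2)
    z = NormalDist().inv_cdf(1.0 - alpha)
    pct_hat = math.exp(mu_hat + z * math.sqrt(sig2_hat)) # plug into (1.3.1.4)
    return mu_x_hat, pct_hat
```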

A general class of functions, linear combinations of whose members encompass μx and σx² (as well as other quantities relevant to the X distribution, such as kurtosis and squared skewness), may be expressed as:

θ_{a,b,c} = σ^{2c} exp(aμ + bσ²), (1.3.1.5)

where a, b, and c are real numbers. Based on work originally due to Finney (1941), the UMVU

estimator for (1.3.1.5) is given by

θ̂_{a,b,c} = [Γ((k − 1)/2) / Γ((k − 1)/2 + c)] exp(aȲ) {(k − 1)S²y/2}^c ₀F₁( (k − 1)/2 + c ; (k − 1)(2bk − a²)S²y / (4k) ), (1.3.1.6)

where Γ denotes the gamma function, and ₀F₁ denotes a member of the class of generalized hypergeometric functions (Erdélyi et al., 1953). These may be written as

_pF_q(u₁, …, u_p; v₁, …, v_q; z) = Σⱼ₌₀^∞ [ (u₁)ⱼ ⋯ (u_p)ⱼ / {(v₁)ⱼ ⋯ (v_q)ⱼ} ] (zʲ/j!), (1.3.1.7)

where the u's and v's and z are complex numbers, p and q are nonnegative integers, and (for t a complex number) (t)ⱼ = 1 for j = 0 and (t)ⱼ = t(t + 1)⋯(t + j − 1) for j ≥ 1. From (1.3.1.5) and (1.3.1.6), the UMVU estimators for the mean (μx) and variance (σx²) of X are as follows:

μ̂x,UMVU = exp(Ȳ) ₀F₁( (k − 1)/2 ; (k − 1)²S²y / (4k) ), (1.3.1.8)

σ̂²x,UMVU = exp(2Ȳ) { ₀F₁( (k − 1)/2 ; (k − 1)²S²y / k ) − ₀F₁( (k − 1)/2 ; (k − 1)(k − 2)S²y / (2k) ) }. (1.3.1.9)
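Since ₀F₁ has a rapidly convergent power series, the UMVU mean estimator (1.3.1.8) is easy to compute; a minimal sketch (function names are ours):

```python
import math

def hyp0f1(b, z, tol=1e-13, max_terms=500):
    """0F1(; b; z) = sum_{j>=0} z^j / ((b)_j * j!), via its power series (1.3.1.7)."""
    term, total = 1.0, 1.0
    for j in range(1, max_terms):
        term *= z / ((b + j - 1.0) * j)   # ratio of successive series terms
        total += term
        if abs(term) <= tol * abs(total):
            break
    return total

def umvu_lognormal_mean(x):
    """UMVU estimator (1.3.1.8) of mu_x from a random sample x of size k."""
    k = len(x)
    y = [math.log(v) for v in x]
    ybar = sum(y) / k
    s2 = sum((v - ybar) ** 2 for v in y) / (k - 1)
    return math.exp(ybar) * hyp0f1((k - 1) / 2.0, (k - 1) ** 2 * s2 / (4.0 * k))
```

With S²y = 0 the ₀F₁ factor equals 1 and the estimator reduces to exp(Ȳ).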

Shimizu (1988) provides the exact variance of (1.3.1.6). Attfield and Hewett (1992) utilize

exact expressions for bias and variance to compare the performances of the ML, UMVU, and simple

arithmetic estimators of the lognormal mean, for various values of the GSD. Bar-Shalom et al.

(1975) and Leidel and Busch (1975) present a graphical means of approximately calculating μ̂x,UMVU for fairly limited ranges of the observed Ȳ and S²y. Although in general the percentile x₁₋α does not belong to the class of functions (1.3.1.5), Shimizu (1983) provides its UMVU estimator


and derives its properties.

1.3.2 Multivariate Distribution

One may derive a multivariate version of the two parameter lognormal distribution by

considering a vector Y = (Y₁, Y₂, …, Y_n)′ following a multivariate normal distribution with mean vector μ = (μ₁, μ₂, …, μ_n)′ and variance-covariance matrix Σ = (σᵢⱼ), i, j = 1, …, n. Note that σᵢᵢ = σᵢ² and σᵢⱼ = σⱼᵢ for all (i, j). Using familiar notation, we write Y ~ N_n(μ, Σ). Defining the

vector X′ = {exp(Y₁), exp(Y₂), …, exp(Y_n)}, it follows that X is distributed as (n-dimensional) multivariate lognormal with the following density function (Shimizu, 1988):

f(x₁, x₂, …, x_n) = [ (2π)^{n/2} |Σ|^{1/2} ∏ᵢ₌₁ⁿ xᵢ ]⁻¹ exp{ −½ (ln x − μ)′ Σ⁻¹ (ln x − μ) }. (1.3.2.1)

Here, x = (x₁, x₂, …, x_n)′ and ln x = (ln x₁, ln x₂, …, ln x_n)′.

It is easily verified that each random variable Xᵢ = exp(Yᵢ) is distributed as LN(μᵢ, σᵢ²), i = 1, …, n. Defining s = (s₁, s₂, …, s_n)′, one may use familiar properties of the multivariate normal distribution to verify that the moment of X is E(X₁^{s₁} X₂^{s₂} ⋯ X_n^{s_n}) = exp(s′μ + ½ s′Σs). It follows that

Cov(Xᵢ, Xⱼ) = exp{ μᵢ + μⱼ + ½(σᵢᵢ + σⱼⱼ) }{ exp(σᵢⱼ) − 1 }, (1.3.2.2)

for all i, j = 1, …, n.
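The moment formula E(∏ Xᵢ^{sᵢ}) = exp(s′μ + ½ s′Σs) yields the covariances directly; a minimal sketch (function names are ours; μ and Σ are passed as plain lists):

```python
import math

def ln_moment(s, mu, Sigma):
    """E(prod X_i^{s_i}) = exp(s'mu + s'Sigma s / 2) for X = exp(Y), Y ~ N_n(mu, Sigma)."""
    n = len(mu)
    lin = sum(s[i] * mu[i] for i in range(n))
    quad = sum(s[i] * Sigma[i][j] * s[j] for i in range(n) for j in range(n))
    return math.exp(lin + quad / 2.0)

def ln_cov(i, j, mu, Sigma):
    """Cov(X_i, X_j) = E(X_i X_j) - E(X_i) E(X_j), via the moment formula."""
    ei = [0] * len(mu); ei[i] = 1
    ej = [0] * len(mu); ej[j] = 1
    eij = [a + b for a, b in zip(ei, ej)]
    return ln_moment(eij, mu, Sigma) - ln_moment(ei, mu, Sigma) * ln_moment(ej, mu, Sigma)
```

Taking i = j recovers the univariate variance (1.3.1.3) as a special case.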

ML estimation of quantities which are functions of μ and Σ is straightforward using results involving the multivariate normal distribution. Based on a random sample of k (k > n) vectors, Yᵢ = (Y_{i1}, …, Y_{in})′, from a N_n(μ, Σ) distribution, one works with the complete and sufficient statistics (Ȳ, T). Here, Ȳ′ = (Ȳ.₁, …, Ȳ.ₙ) and T is an (n × n) matrix whose (l, m)-th element is Σᵢ₌₁ᵏ (Y_{il} − Ȳ.l)(Y_{im} − Ȳ.m), with Ȳ.l = k⁻¹ Σᵢ₌₁ᵏ Y_{il} (l, m = 1, …, n). Shimizu (1988) discusses UMVU estimation of a general class of functions expressed as

Ψ_{a,B,c} = |Σ|^c exp( a′μ + tr(ΣB) ), (1.3.2.3)

where a is a real vector, B a real symmetric matrix, and c a real number. Calculation of the UMVU

estimator of (1.3.2.3) is quite complicated, as it involves hypergeometric functions of matrix argument

(Muirhead, 1982) and hence necessitates the use of tables of zonal polynomials [Parkhurst and James

(1974)]. Shimizu (1988) provides further details regarding the properties of the UMVU estimators of

(1.3.1.5) and (1.3.2.3).


1.4 HYPOTHESIS TESTING FOR LOGNORMAL MEANS AND EXCEEDANCE PROBABILITIES

The purpose of this section is to briefly review some procedures that have appeared in both the environmental and statistical literature regarding hypothesis testing based on a random sample from a LN(μ, σ²) distribution.

1.4.1 Testing the Probability of an Exceedance

Let Y represent a normally distributed random variate with mean μ and variance σ². It follows that X = exp(Y) is lognormally distributed with mean μx and variance σx². Now, define θ to be the probability that X exceeds some specified limit L. Hence, θ = Pr(X > L). A determination of potential relevance in exposure assessment might be one of whether or not θ is smaller than some prespecified value A, based on a random sample of presumably lognormal TWA exposure measurements. Consider the following null and alternative hypotheses, which force the data to provide evidence that the workplace environment is "safe":

H₀: θ ≥ A vs. H₁: θ < A.

It is straightforward to show that the above hypotheses may be rewritten as follows:

H₀: μ + z₁₋A σ ≥ ln(L) vs. H₁: μ + z₁₋A σ < ln(L),

where z₁₋A is the 100(1 − A)-th percentile of the standard normal distribution. Clearly, H₀ is the condition that y₁₋A, the 100(1 − A)-th percentile of the Y distribution, is ≥ ln(L) (or equivalently, that x₁₋A ≥ L). Selvin et al. (1987) present the appropriate test statistic for this situation. If {Y₁, Y₂, …, Y_k} represent a random sample of k logged exposure measurements, then one computes the test statistic t = Ȳ + c S_y. The constant c, which Selvin et al. do not explicitly define, is identically equal to k^{−1/2} t′_{k−1,α}(λ), where t′_{k−1,α}(λ) is the 100(α)-th percentile of the noncentral t


distribution with (k − 1) degrees of freedom (df) and noncentrality parameter λ = −√k z₁₋A, and where α is the significance level of the test (see Faulkenberry and Daly, 1968).

As Selvin et al. (1987) mention, t is an upper one-sided tolerance limit. This nomenclature,

and the appropriateness of t as a test statistic for the current situation, are clarified through

examination of a slight rearrangement of a classical tolerance statement (Johnson and Kotz, 1970a):

Pr{ Ȳ + c S_y ≥ μ + z₁₋A σ } = 1 − α.

This statement gives the explicit mathematical representation of t as an upper one-sided tolerance

limit. Under H₀, it is clear that Pr{t < ln(L)} ≤ α. This justifies the use of the rejection rule "reject H₀ if and only if t < ln(L)" for a testing procedure with size ≤ α; if θ is identically equal to A (i.e., μ + z₁₋A σ = ln(L)), the size of the test is exactly α. That this constitutes the uniformly most powerful (UMP) test of H₀ vs. H₁ is confirmed by Land (1988).
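The tolerance-limit test may be sketched numerically. Here we compute c through the classical equivalent form c = t′_{k−1,1−α}(√k z₁₋A)/√k, a sign-symmetric rearrangement of the definition above (the symmetry t′_α(−δ) = −t′_{1−α}(δ) of the noncentral t connects the two); function names are ours:

```python
import math
from statistics import NormalDist
from scipy.stats import nct

def exceedance_test(y, limit, A, alpha=0.05):
    """Test H0: theta >= A vs H1: theta < A from logged measurements y.
    Rejects H0 iff t = ybar + c * s_y falls below ln(limit)."""
    k = len(y)
    ybar = sum(y) / k
    s_y = math.sqrt(sum((v - ybar) ** 2 for v in y) / (k - 1))
    z = NormalDist().inv_cdf(1.0 - A)                              # z_{1-A}
    c = nct.ppf(1.0 - alpha, k - 1, math.sqrt(k) * z) / math.sqrt(k)
    t = ybar + c * s_y                                             # upper tolerance limit
    return t, t < math.log(limit)                                  # (statistic, reject H0?)
```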

For determining the power of this test for a given sample size (k), Selvin et al. (1987) propose

the following formula based on a large-sample approximation (note that there was a transcription

error in their original published manuscript):

(1.4.1.1)

where Φ{·} represents the standard normal cumulative distribution function, and x₁₋A (< L) is supplied by the investigator. Written instead in terms of an assumed value (< A) for θ, this expression becomes

A sample size formula (purportedly to achieve power of at least 1 − β) based on the Selvin et al. approximation is given by Rappaport (1991b):

(1.4.1.2)


Lyles and Kupper (1996a) highlight the fact that (1.4.1.2) tends to substantially underestimate the

required k, and they discuss calculation of the theoretically exact sample size, which has been

approximated by Faulkenberry and Daly (1968).


1.4.2 Testing the Population Mean Level

Again based on a random sample of TWAs presumed to be lognormally distributed, it may

well be of more interest to test whether or not the mean exposure level received by members of a job

group is below a specified limit L′. Let us define null and alternative hypotheses as follows:

H₀: μx ≥ L′ vs. H₁: μx < L′.

Again, note that we force the data to provide evidence of a "safe" workplace environment. We should

note that this point of view has not always been taken historically. For instance, Leidel and Busch

(1975) reverse the above inequalities, so that the onus is on the regulator to identify a plant in

violation of standards. However, more recent authors (e.g., Rappaport and Selvin, 1987) seem to

favor the above specification of hypotheses. Taking logarithms, we may express these hypotheses equivalently as:

H₀: μ + σ²/2 ≥ ln(L′) vs. H₁: μ + σ²/2 < ln(L′).

A test statistic proposed by Rappaport and Selvin (1987) takes the form

R_μ = (μ̂x − L′) / { V̂₀(μ̂x) }^{1/2}, (1.4.2.1)

where μ̂x = exp(Ȳ + S²y/2), and V̂₀(μ̂x) = (L′)² (S²y + S⁴y/2) / (k − 2) is an estimate of Var(μ̂x)

under H₀. For large k and under the equality condition in H₀, the distribution of R_μ approaches that of a standard normal variate. However, recognizing the potential problems with this approximation, Rappaport and Selvin propose the rule "reject H₀ if and only if R_μ < t_{k−1,α}", where t_{k−1,α} is the 100(α)-th percentile of the central t distribution with (k − 1) df. This reference distribution, and the use of (k − 2) in V̂₀(μ̂x), are measures taken by the authors to control the type I error rate for practical sample sizes. They also give the following approximate sample size formula proposed for use with the test statistic (1.4.2.1):


(1.4.2.2)
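The statistic (1.4.2.1) is simple to compute from the raw measurements; a minimal sketch (function name ours; requires k > 2):

```python
import math

def rappaport_selvin_stat(x, limit):
    """R_mu = (mu_hat_x - L') / sqrt(V0_hat), per (1.4.2.1); x holds raw TWAs."""
    k = len(x)
    y = [math.log(v) for v in x]
    ybar = sum(y) / k
    s2 = sum((v - ybar) ** 2 for v in y) / (k - 1)       # S_y^2
    mu_hat = math.exp(ybar + s2 / 2.0)                   # estimated lognormal mean
    v0 = limit ** 2 * (s2 + s2 ** 2 / 2.0) / (k - 2)     # variance estimate under H0
    return (mu_hat - limit) / math.sqrt(v0)
```

One then rejects H₀ when the returned value falls below the 100(α)-th percentile of the central t distribution with (k − 1) df.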

As an alternative, one may address the above hypothesis testing problem by making use of a theoretically exact upper 100(1 − α)% one-sided confidence limit for μx based on results due to Land (1971; 1972; 1973; 1975; 1988). Although this method is apparently not extensively utilized by environmental scientists, it is reviewed by Gilbert (1987). Land's method utilizes the conditional distribution of U given V, where U = (√k/S_y)[Ȳ − ln(L′)] and V = √{(1 − 1/n)S²y + [Ȳ − ln(L′)]²} (Land, 1972). To use it, one would reject H₀ if and only if E_μ ("E" for "exact") is less than L′, where E_μ is an exact upper 100(1 − α)% confidence limit for μx based on the method. One enters tables (Land, 1973) with the values (1 − α) and S_y, and then reads off the corresponding critical value H₁₋α (using the notation of Gilbert, 1987). The test statistic is then computed as

E_μ = exp( Ȳ + S²y/2 + H₁₋α S_y / √(k − 1) ). (1.4.2.3)


Clearly, not all values of S_y are tabulated. Cubic interpolation (Warmus, 1964) is recommended as a remedy. Assuming that the exact value of H₁₋α can be determined, this method constitutes the UMP unbiased test of H₀ vs. H₁ (Land, 1988).

Bar-Shalom et al. (1975) present an alternative derivation of the UMP unbiased test, both for

the above set of null and alternative hypotheses, as well as for the version with the inequalities reversed. Instead of tables, they present graphs by which one may read off one of three decisions, based on the UMP procedure: i) non-compliance (the mean is above the OEL); ii) "no decision" (not enough evidence to make a decision); iii) compliance or "no action" (the mean is below the OEL). The purpose of the graphical presentation is to simplify the application of the method for those in the field; one need only supply the observed values of Ȳ and S_y to read the decision from the graph. The disadvantages of this presentation are that only ranges of k = 3 to 25, Ȳ = −0.9 to 0.3, and S_y = 0 to 0.5 are covered in the graphs, and that interpolation may be necessary even for some values of k in the available range (perhaps calling the appropriate decision into question for

borderline cases). Leidel and Busch (1975) make a similar presentation, with similar limitations.

Lyles and Kupper (1996a) present a comparison of various methods for testing the lognormal

mean exposure level. They also present an alternative test statistic which, while it is relatively easy to compute [like (1.4.2.1)], closely mimics the UMP test statistic (1.4.2.3) in terms of performance over ranges of parameter values typically seen in practice. They provide a potentially useful alternative (to (1.4.2.2)) sample size formula to accompany their test statistic.

Bar-Shalom et al. (1975) also develop approximate and UMP procedures for testing the

lognormal mean exposure level in the presence of non-stationarity (i.e., systematic trends in the mean


level over time). They also present some Bayesian methods for addressing similar issues. Other

references that have appeared in the environmental literature regarding testing lognormal means or

exceedance probabilities include Corn and Esmen (1979), Tuggle (1982), Esmen (1992), and Evans

and Hawkins (1988).

1.5 RANDOM EFFECTS MODELS

The purpose of this section is to briefly review some notation and estimation techniques for a

particular subset of the general linear mixed models, and to introduce the topic of prediction of

random effects within this framework. In later chapters, we will make use of this material with

respect to the modeling of occupational exposure measurements. All of the following material (with

much the same notation) may be found in the text by Searle et al. (1992).

1.5.1 Notation

A very general matrix notation for a linear mixed model is the following:

Y = Xβ + Zu + e, (1.5.1.1)

where Y is an N × 1 outcome vector, X is a known N × p fixed effects design matrix, β is a p × 1 vector of fixed effects, Z is a known N × q random effects design matrix, u is a q × 1 vector of random effects, and e is an N × 1 vector of error terms. It is commonly assumed that E(u) = 0 and E(e) = 0, from which we have the following unconditional and conditional (on the random effects u) models for the mean of Y:

E(Y) = Xβ and E(Y | u) = Xβ + Zu, (1.5.1.2)

where u is being used in particular contexts to represent both a random variable and its realization.

In general, if the model contains r random factors, it is convenient to partition u as u′ = (u₁′, u₂′, …, u_r′), where the j-th subvector uⱼ (j = 1, …, r) contains the same number of elements (qⱼ) as there are levels for the j-th random factor. Note that q = Σⱼ₌₁ʳ qⱼ. We may specify the assumptions regarding the variances and covariances of the random terms in the model as follows:


Var(uⱼ) = σⱼ² Aⱼ (j = 1, …, r) and Var(e) = σₑ² V,

where σⱼ² Aⱼ is a qⱼ × qⱼ variance-covariance matrix pertaining to the j-th subvector of random effects, and σₑ² V is an N × N variance-covariance matrix pertaining to e. Though in the full generality of

(1.5.1.1) it is possible to have nonzero correlation between the elements in u and e, as well as between different subvectors (uⱼ and uⱼ′, say) of u, such generality will not be needed for our purposes. Hence, we take

Cov(uⱼ, u′ⱼ′) = 0 (j ≠ j′) and Cov(u, e′) = 0.

It follows that D = Var(u) is a (q × q) block diagonal matrix with each of the r ordered blocks being given by σⱼ² Aⱼ, j = 1, …, r. The full variance-covariance matrix for the vector Y is thus given by (1.5.1.3). The parameters σₑ² and σⱼ² (j = 1, …, r) are typically known as the variance components. In

many commonly used models, the additional assumption is made that V and/or the Aⱼ's (j = 1, …, r) are identity matrices. This will be the case for the particular variance component models we apply throughout this work. Finally, we consider a random effects model as one which is described by (1.5.1.1), but for which the only fixed effect is the overall mean (i.e., X = 1_N and β = μ_Y, so that E(Y) = μ_Y 1_N).

It is often (but not necessarily) assumed that the joint distribution of all of the random terms

in (1.5.1.1) is multivariate normal (MVN). This assumption facilitates the distributional theory

required for testing hypotheses involving the parameters of the particular model in question.

Although there is no formal means of fully checking the appropriateness of this assumption for a set

of data, there are methods by which particular aspects of it may be addressed (e.g., Dempster and

Ryan, 1985; Lange and Ryan, 1989). In our use of random effects models to describe exposure data

in later chapters, we will generally work under the usual normality assumptions in each case.

Σ = Var(Y) = ZDZ′ + σₑ² V. (1.5.1.3)

1.5.2 Estimation

We will discuss three methods of estimating variance components for a particular subset of the


models encompassed by (1.5.1.1), namely, those we have termed random effects models. These three

methods of estimation are ANOVA, ML, and restricted maximum likelihood (REML). For our

general discussion of ANOVA estimation, we make the simplifying assumptions that i) the data are

balanced, and ii) the data are classified in terms of crossed or nested factors, and possibly interactions

thereof, and are easily summarized by means of an analysis of variance (ANOVA) table. In general,

this implies also that the matrices V and Aⱼ (j = 1, …, r) are identity matrices. Searle et al. (1992)

devote an entire chapter to methods of estimation for such data. They define balanced data as data

for which the number of observations in each "sub-most" cell is the same, where a "cell" is defined by

particular levels of the subclasses into which the data are divided. They also give a set of rules by

which ANOVA tables may be constructed for such data. These rules allow the determination of

degrees of freedom, sums of squares, and mean squares for each line of the table, as well as the associated expected mean squares. As these terms are familiar and Searle et al. give a clear account

of their definitions and genesis, we will not rehash them here.

Searle et al. (1992) give a thorough description of ANOVA, ML and REML estimation. A

purposefully brief synopsis of certain aspects of their presentation now follows. Details are only

provided according to their relevance to work in later chapters.

ANOVA Estimation

The basic idea of ANOVA estimation is simple, especially under the simplifying assumptions

given in i) and ii) above. One only needs to equate the mean squares from the ANOVA table with

their expected values, and then solve the resulting set of equations for the variance components. For

random effects models, there will be as many mean squares as there are individual variance

components (namely, r + 1). Hence, as outlined by Searle et al., a simple rubric for calculating ANOVA estimators is to first define the (r + 1)-dimensional vectors m and σ² as, respectively, the vector of mean squares and the vector of variance components. Then determine the matrix P such that E(m) = Pσ². As P will generally be nonsingular, the ANOVA estimators are calculated as

σ̂² = P⁻¹ m. (1.5.2.1)
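For the one-way random effects model (r = 1), the rubric above reduces to equating E(MSB) = nσ_b² + σ_e² and E(MSW) = σ_e² with the observed mean squares, i.e., P = [[n, 1], [0, 1]]; a minimal sketch (function name ours):

```python
def anova_estimates(msb, msw, n):
    """One-way balanced RE model ANOVA estimators, solving m = P sigma^2 with
    P = [[n, 1], [0, 1]]: E(MSB) = n*sigma_b^2 + sigma_e^2, E(MSW) = sigma_e^2."""
    sigma_e2 = msw                   # within-subject component
    sigma_b2 = (msb - msw) / n       # between-subject component
    return sigma_b2, sigma_e2
```

Note that the between-subject solution is negative whenever MSB < MSW, a point taken up below under "Negative Estimates of Variance Components."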

Under the normality assumption, the resulting estimators are UMVU for balanced data. Searle et al.

discuss this and other properties of the ANOVA estimators, including distributional theory, sampling

variances, hypothesis testing, and the construction of confidence intervals under normality. Particular

results that will be useful in later chapters include the facts that (under normality) the sums of

squares are mutually independent, and if the data are balanced, each mean square is distributed as a


multiple of a chi-square (χ²) random variate. Hence, in many applications, the ANOVA estimators are each linear combinations of multiples of independent χ² variates. An exact expression for the

variance of the vector σ̂² is given by Searle et al. (1992):

Var(σ̂²) = P⁻¹ { d 2[E(Mⱼ)]²/fⱼ } P′⁻¹, (1.5.2.2)

where E(Mⱼ) is the j-th expected mean square (j = 1, …, r), fⱼ is the df of the χ² distribution associated with the j-th mean square (Mⱼ), and the notation {d Gⱼ} is used to represent a diagonal matrix with j-th diagonal element Gⱼ. An unbiased estimator for (1.5.2.2) is

V̂ar(σ̂²) = P⁻¹ { d 2Mⱼ²/(fⱼ + 2) } P′⁻¹. (1.5.2.3)

Maximum Likelihood

ML estimation for the general model (1.5.1.1) is based on maximizing the full likelihood L as a function of the unknown parameters in β and Σ. The likelihood is given by

L = (2π)^{−N/2} |Σ|^{−1/2} exp{ −½ (Y − Xβ)′ Σ⁻¹ (Y − Xβ) }. (1.5.2.4)

Equivalently, one maximizes the log-likelihood ℓ = ln(L), given as

ℓ = −(N/2) ln(2π) − ½ ln|Σ| − ½ (Y − Xβ)′ Σ⁻¹ (Y − Xβ). (1.5.2.5)

Searle et al. give the equations to be solved. Although for certain models closed-form expressions for

the ML estimators exist, they must in general be determined iteratively as solutions to a system of

nonlinear equations. Searle et al. (1992) discuss several of the more popular iterative algorithms,

including the Newton-Raphson, Marquardt, and scoring methods, and the EM ("expectation-maximization")

algorithm. Among the familiar attractive properties of the ML estimation method are the fact that it

allows estimation of all parameters (i.e., those in β and Σ), the ready availability of the estimated

large-sample variance-covariance matrix as the inverse of the information matrix, and asymptotic

normality.

Restricted Maximum Likelihood

The idea behind the REML method of estimation is to take into account the degrees of


freedom for estimating the fixed effects of the model when estimating its variance components. The

simplest and most often given example is the comparison of the REML and ML estimators for variance, based on a random sample {Yᵢ}, i = 1, …, k, from a N(μ, σ²) distribution. The REML estimator (which is also an ANOVA estimator) for σ² is the usual sample variance S²y defined in section 1.3.1. The ML estimator is σ̂² = [(k − 1)/k] S²y. The advantage of the REML estimator here is its unbiasedness (in fact, it is UMVU), whereas the ML estimator is biased. The process of REML estimation has been described as maximizing the portion of L not depending on the fixed effects β.

Searle et al. (1992) give the REML equations for models satisfying (1.5.1.1), and show that they differ

from the ML equations only by small adjustments. They also give details regarding the large-sample

dispersion matrix of the REML estimators.
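The textbook comparison just described is easily checked numerically; a minimal sketch (function name ours):

```python
def variance_estimates(y):
    """REML (= ANOVA, unbiased) and ML estimators of sigma^2 from an i.i.d. sample."""
    k = len(y)
    ybar = sum(y) / k
    ss = sum((v - ybar) ** 2 for v in y)
    return ss / (k - 1), ss / k     # (REML, ML); ML = ((k-1)/k) * REML
```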

For mixed models under conditions i) and ii) stated earlier for ANOVA estimation, the solutions to the REML equations

are identical to the ANOVA estimators (Searle et al., 1992). Hence, under normality, these solutions

share the UMVU property, and it is clear why REML estimation is sometimes preferred to ML, or at

least considered a viable alternative. For the purposes of this research, we will apply REML

estimation only insofar as it is identical to ANOVA estimation for random effects models for balanced

data.

Estimating the Overall Mean

While we have noted that the ML method provides estimates of the fixed effects parameters β,

this is not true with regard to ANOVA or REML estimation. However, for models described by

(1.5.1.1) and under conditions i) and ii) above, it is in fact true that ML estimation of β is equivalent to ordinary least squares (OLS) and generalized least squares (GLS) estimation. Any one of these three estimation methods would lead to the best linear unbiased estimator (BLUE) of the quantity Xβ (Searle et al., 1992), so it is unlikely that the choice of one particular estimation technique over another for the variance components would affect the estimation of β. Moreover,

under conditions i) and ii), for the random effects models as we have described them (i.e., β = μ_Y), the estimate of the overall mean μ_Y is simply the overall sample mean Ȳ. In the case of unbalanced

data the picture is more cloudy, as the equivalence of OLS, GLS, and ML no longer holds. However,

a general formula for the BLUE of Xβ is (Searle et al., 1992):

BLUE(Xβ) = X(X′Σ⁻¹X)⁻ X′Σ⁻¹Y, (1.5.2.6)

which is invariant to the choice of the embedded generalized inverse. Of course, Σ is not generally known; the ML procedure replaces it with its estimate, say Σ̂_ML.
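Expression (1.5.2.6) may be sketched with the Moore-Penrose inverse serving as the generalized inverse (the fitted values are invariant to this choice); the function name is ours:

```python
import numpy as np

def blue_xbeta(X, Sigma, y):
    """BLUE(X beta) = X (X' Sigma^-1 X)^- X' Sigma^-1 y, per (1.5.2.6)."""
    Si = np.linalg.inv(Sigma)
    # pinv supplies a generalized inverse of X' Sigma^-1 X
    beta_hat = np.linalg.pinv(X.T @ Si @ X) @ (X.T @ Si @ y)
    return X @ beta_hat
```

With Σ = I and X = 1_N (the random effects model), the fitted value is simply the overall sample mean, as noted above.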


Negative Estimates of Variance Components

Theoretically, the feasible region for the variance components in the context of (1.5.1.1) is defined by the inequalities σₑ² > 0 and σⱼ² ≥ 0 (j = 1, …, r). However, there is no guarantee against one or more elements of the solution σ̂² of (1.5.2.1) being negative. The same may be said of the

solutions to both the ML and the REML equations for the variance components. For ML and

REML estimation, this problem is generally avoided by constraining the parameter space in such a

way that, in general, if one or more solutions to the equations are negative, the resulting estimates

become 0 and the estimates for the other components are adjusted. Searle et al. (1992) discuss the

issues of implementation. For ANOVA estimation, a common practice is simply to set any negative variance component estimates to 0, which effectively drops the associated random effect from the

model. Searle et al. discuss the implications of negative solutions for variance components,

attributing them in general to inappropriate model selection or inadequate sample size. In a special

case, they also give sample size guidelines for limiting the probability of such a negative solution

given that the assumed model is appropriate.

Confidence Intervals for Variance Components

Some existing methodology that we will make use of in Chapter 3 involves the calculation of

approximate confidence intervals for variance components. Searle et al. (1992) give a review of much

of this methodology. In particular, we cite here results due to Williams (1962) for balanced data (i.e.,

equal numbers of observations on each subject), and due to Burdick and Eickman (1986) for

unbalanced data.

Williams (1962) proposes the following as an approximate 100(1 − 2α)% confidence interval for the between-subject variance (σ_b²) under the one-way random effects (RE) ANOVA model (Searle et al., 1992; also see section 2.1):

{ SSB(1 − F_U/F) / (n χ²_{k−1,U}) , SSB(1 − F_L/F) / (n χ²_{k−1,L}) }. (1.5.2.7)


In the above expression, k represents the number of subjects and n the number of observations per

subject, SSB represents the between-subject sum of squares, and F = MSB/MSW is the ratio of the

between-subject to the within-subject mean squares (we will review this terminology associated with

the one-way RE ANOVA model in detail in section 2.1). Also, χ²_{k−1,L} and χ²_{k−1,U} are critical values of the chi-square distribution with (k − 1) degrees of freedom (df), and F_L and F_U are critical


values from the F distribution with (k − 1) numerator and k(n − 1) denominator df, such that

Pr{ χ²_{k−1,L} ≤ χ²_{k−1} ≤ χ²_{k−1,U} } = 1 − α and Pr{ F_L ≤ F_{k−1,k(n−1)} ≤ F_U } = 1 − α.

Graybill (1976) gives a natural extension of the Williams result (1.5.2.7) that is appropriate for more

general random effects models for balanced data.
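Interval (1.5.2.7) may be sketched as follows, taking equal-tail (α/2) quantiles for both the chi-square and F distributions so that each probability statement above carries content 1 − α (an assumption on our part; the function name is ours):

```python
from scipy.stats import chi2, f

def williams_interval(ssb, msw, k, n, alpha=0.05):
    """Approximate 100(1 - 2*alpha)% CI (1.5.2.7) for sigma_b^2 under the
    balanced one-way RE ANOVA model (k subjects, n observations each)."""
    msb = ssb / (k - 1)
    F = msb / msw                                        # F = MSB/MSW
    chi_lo = chi2.ppf(alpha / 2.0, k - 1)                # chi^2_{k-1,L}
    chi_hi = chi2.ppf(1.0 - alpha / 2.0, k - 1)          # chi^2_{k-1,U}
    f_lo = f.ppf(alpha / 2.0, k - 1, k * (n - 1))        # F_L
    f_hi = f.ppf(1.0 - alpha / 2.0, k - 1, k * (n - 1))  # F_U
    lower = ssb * (1.0 - f_hi / F) / (n * chi_hi)
    upper = ssb * (1.0 - f_lo / F) / (n * chi_lo)
    return lower, upper
```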

For the one-way RE ANOVA model for unbalanced data (again, see section 2.1), Burdick and

Eickman (1986) give the following approximate 100(1 − 2α)% confidence interval for σ_b²:

(1.5.2.8)

In expression (1.5.2.8), we utilize the following definitions:

m = min{nᵢ}, M = max{nᵢ}, h = k / ( Σᵢ₌₁ᵏ nᵢ⁻¹ ),

and S* = (k − 1)⁻¹ { Σᵢ₌₁ᵏ Ȳᵢ.² − k⁻¹ ( Σᵢ₌₁ᵏ Ȳᵢ. )² }.

In the above expressions, χ²_{k−1}(t) represents the 100(t)-th percentile of the chi-square distribution with (k − 1) df, and F_{k−1}^{N−k}(t) represents the 100(t)-th percentile of the F distribution with (k − 1) numerator and (N − k) denominator df. Also, m and M refer, respectively, to the smallest and largest subject-specific numbers of repeated measurements (nᵢ) in the observed set of data, and Ȳᵢ. is the sample mean of the observations on the i-th subject. Based on the above definitions, one computes the quantities L = S*/{f₂(MSW)} − 1/m and U = S*/{f₄(MSW)} − 1/M, and proceeds to calculate the

approximate interval in (1.5.2.8). Simulation studies by Burdick and Eickman (1986) suggest that

this interval should perform well in most cases. Searle et al. (1992) cite the result in (1.5.2.8),

although there are transcription errors in their representation.

Before closing section 1.5.2, we should note that the basic techniques behind ML and REML

estimation are not at all dependent upon the simplifying assumptions that we made for ANOVA


estimation regarding balanced data and the characterization of V and the Aⱼ's (j = 1, …, r) as

identity matrices. In later chapters, we will utilize estimates of variance components from

unbalanced data. For this generalization, we will consider only ML and/or ANOVA estimates.

When the data are unbalanced it is generally true that estimation is computationally more difficult

(there are fewer practical cases for which closed-form estimators exist), comparisons between various

estimation methods are less clear-cut (for example, the equivalence of ANOVA and REML estimates

no longer holds), and in the case of ANOVA estimation, there is no longer a single or "best" method.

With regard to the latter point we refer the reader to the three methods of Henderson (Henderson,

1953), which are summarized in detail by Searle et al. (1992).

Finally, it should be noted that there exist readily available computer software packages for

estimation and hypothesis testing under the general linear mixed model (1.5.1.1). An example is the

SAS MIXED procedure (SAS Institute Inc., 1992), which allows several options for structuring the V

and D matrices and provides both ML and REML estimates. For a discussion of the numerical

maximization algorithms involved, see Wolfinger (1992).

1.5.3 Prediction of Random Effects

Often, in the context of (1.5.1.1), there may be interest in predicting the realized but

unobservable values of some subset of the random effects u. The term "prediction" is fundamentally

different from the term "estimation", due to its focus on random variables as opposed to fixed

unknown parameters. Hence, the criteria for choosing functions of the data Y as predictors of u are

generally somewhat different from the classical criteria for choosing estimators of parameters.

For models satisfying (1.5.1.1) under the conditions that V and the D_j's (j = 1, ..., r) are

identity matrices, Searle et al. (1992) describe in detail three related procedures for predicting the

unobservable random variables u. These three methods are termed best prediction (BP), best linear

prediction (BLP), and best linear unbiased prediction (BLUP). For a synopsis of these three

methods, we will make use of (1.5.1.3), with the blocks of D = Var(u) given by σ²_j I_{q_j} (j = 1, ..., r),

and with V given by I_N. It follows from (1.5.1.1) that

C = Cov(u, Y′) = DZ′ .   (1.5.3.1)

Also, one of the considerations in the derivation of each of the three types of predictors is mean

square error of prediction (Searle et al., 1992), defined by

E{(ũ − u)′A(ũ − u)} ,   (1.5.3.2)


where A is any positive definite symmetric matrix, and ũ denotes a predictor of u.

Best Prediction

According to the notation of Searle et al. (1992), the sole criterion for a "best predictor" is that

it minimizes the mean square error of prediction (1.5.3.2). Searle et al. show that this is satisfied by

the conditional mean of u, given Y. Hence, denoting the best predictor as ũ_BP, we have

ũ_BP = E(u | Y) .   (1.5.3.3)

We note that ũ_BP depends upon the observed data and the (generally unknown) parameters of the

conditional distribution of u given Y. Properties of the best predictor include the fact that it holds

regardless of the joint distribution of (u, Y), and that it is invariant to the choice of A. Also, it is

unbiased in the following sense: its expected value with respect to Y is equal to the expected value of

u. In symbols,

E_Y(ũ_BP) = E(u) .   (1.5.3.4)


Best Linear Prediction

"Best linear predictors" (ũ_BLP) are those which have the property of minimizing the mean

square error of prediction among those predictors that are linear in Y (Searle et al., 1992). Hence, for

some vector a and some matrix B, we may write ũ_BLP = a + BY. Searle et al. show that, in our

notation, the best linear predictor is given by

ũ_BLP = E(u) + CΣ⁻¹(Y − Xβ) ,   (1.5.3.5)

where Σ = Var(Y).

Clearly, ũ_BLP is unbiased, in the same sense as is ũ_BP. Under the normality assumptions often

attendant with (1.5.1.1), the best predictor is identical to the best linear predictor (i.e., ũ_BLP =

ũ_BP).

Best Linear Unbiased Prediction

"Best linear unbiased predictors" accommodate the generalization of allowing one to predict

not just the random effects u, but a linear combination of the form w = L′β + u, where L′β is an


estimable function of the model's fixed effects (Searle et al., 1992). The criteria for BLUPs (which we

will denote by w̃_BLUP) are that they are linear in Y, they are unbiased in the sense that E(w̃_BLUP)

= E(w), and they minimize mean square error of prediction [i.e., (1.5.3.2) with ũ and u replaced by

w̃ and w, respectively] among candidate predictors. Denoting the BLUE(Xβ) of (1.5.2.6) by Xβ°, Searle

et al. (1992) give the following expression for w̃_BLUP:

w̃_BLUP = L′β° + CΣ⁻¹(Y − Xβ°) .   (1.5.3.6)

As a special case, we note that for random effects models (recall then that β = μ_Y) and balanced

data, (1.5.3.6) reduces to

w̃_BLUP = L′Ȳ + CΣ⁻¹(Y − Ȳ 1_N) .   (1.5.3.7)


Henderson (1963) provided one of the early treatments of BLUP; further studies and extensions have

been numerous (e.g., Harville, 1976).

One trait that is common to all of BP, BLP, and BLUP is that, in general, none of them may

be calculated directly for a given set of data. This is because BP assumes knowledge of all of the parameters

of the joint distribution of (u, Y), BLP assumes knowledge of the first and second moments, and

BLUP assumes knowledge of the second moments (Searle et al., 1992; note that the former two

requirements are identical under the usual normality assumptions). Hence, for the practical aspects of

prediction, one must rely on more careful study of predictors that involve the estimation of unknown

parameters. This can be quite involved, even for simple models. As an example, Peixoto and

Harville (1986) studied competing predictors for the one-way RE ANOVA model (see section 2.1) for

balanced data.
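To make the prediction idea concrete, the following minimal sketch computes BLUPs of the random worker effects in a balanced one-way random effects model, treating the variance components as known. The function name, data dimensions, and variance values are illustrative assumptions, not part of the text; the shrinkage form is the standard one for this model.

```python
import numpy as np

def blup_worker_effects(y, sigma2_b, sigma2_w):
    """BLUP of the random worker effects b_i in the balanced one-way
    random effects model y_ij = mu + b_i + e_ij, assuming the variance
    components sigma2_b (between) and sigma2_w (within) are known.
    y is a (k workers) x (n repeats) array of logged measurements."""
    k, n = y.shape
    ybar_i = y.mean(axis=1)                      # worker means
    ybar = y.mean()                              # grand mean
    shrink = n * sigma2_b / (n * sigma2_b + sigma2_w)
    return shrink * (ybar_i - ybar)              # shrunken deviations

rng = np.random.default_rng(1)
b = rng.normal(0.0, 1.0, size=5)                     # true worker effects
y = 2.0 + b[:, None] + rng.normal(0.0, 0.5, (5, 8))  # simulated logged data
print(blup_worker_effects(y, sigma2_b=1.0, sigma2_w=0.25))
```

Note how each predicted effect is the worker's mean deviation pulled toward zero, with less shrinkage as the number of repeats n grows.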

1.6 MODELING OCCUPATIONAL EXPOSURES

Naturally, there have been many efforts to model TWA exposure measurements beyond simply

taking them as a series of independent and identically distributed (i.i.d.) random variates from a

two-parameter lognormal distribution. The purpose of this section is not to discuss all such efforts, but

simply to reference some of the attempts that have been made to account for specific sources of

variability in models for occupational exposures. We should note that there are many references [e.g.,

Spear et al. (1986); Francis et al. (1989)] related to modeling serial correlation (or autocorrelation)


among short-term exposure measurements. Since our main focus in this work relates to long-term

exposures related to chronic health effects, we will not specifically review this portion of the literature.

1.6.1 Stationarity

Before discussing the modeling of occupational exposures, it is useful to define a stationary

time series. Consider a series of random variables {Y(t_i)}, i = 1, ..., m, where t_i represents the time at

which Y(t_i) is observed. In general, {Y(t_i)} is considered a stationary series if the distribution of its

elements is unchanged over time. Hence, for instance, the mean, variance, and autocorrelation

function associated with the Y(t_i) remain the same regardless of the time at which we observe the

series. More formally, "...for any positive integer m and times t_i: i = 1, ..., m, the joint probability

distribution of {Y(t_i + s): i = 1, ..., m} is the same for all values of s" (Diggle, 1990). Historically,

much of the effort to model occupational exposures has relied on the stationarity assumption. Hence,

Rappaport (1991b) recommends that exposure assessment be viewed as "an ongoing activity in which

inferences about the levels of airborne chemicals are made periodically". Any isolated exposure

assessment should be made over a reference time period for which stationarity in the TWA exposures

may be reasonably assumed, unless the assessment specifically accounts for possible systematic (e.g.,

seasonal) trends in the exposure distribution.

1.6.2 Sources of Variability

Many authors have suggested the use of ANOVA methods to describe occupational exposure

data, usually (but not always) after applying the log transformation. Perhaps the most widely

recognized source of variability is that between workers; i.e., it is often believed that differences in

daily tasks or personal habits lead to heterogeneity in exposures across workers. As mentioned in

section 1.2, evidence for such between-worker variability has appeared even in groups which typically

might be classified as "homogeneous" based on factors such as location and job title (Kromhout et al.,

1993). References in the environmental literature which consider accounting for between-worker

variability through ANOVA include Petersen et al. (1986), Rappaport et al. (1988), Heederik et al.

(1991), and Rappaport (1991b). While not specifically referring to ANOVA methods with respect to

the logged exposures, Spear and Selvin (1989) arrive heuristically at a scheme for describing TWA

measurements on the original scale which in fact translates identically to a one-way ANOVA scenario

with random worker effects after a log transformation. This provides some of the motivation for the

work in Rappaport et al. (1995a) and Lyles et al. (1995a and 1995b), which involves the development

of statistical methods for assessing workplace exposures based on assuming the one-way random


effects ANOVA model (Searle et al., 1992) for the logged TWA exposure measurements. This work

will be discussed in detail in Chapter 3.
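As a small illustration of the one-way random effects ANOVA approach for logged TWA exposures, the sketch below computes the usual method-of-moments (ANOVA) variance-component estimates from balanced simulated data. The function name and all numerical settings are hypothetical.

```python
import numpy as np

def anova_variance_components(logx):
    """ANOVA (method-of-moments) estimates of the between- and
    within-worker variance components for a balanced one-way random
    effects model; logx is (k workers) x (n repeats) of logged TWAs."""
    k, n = logx.shape
    means = logx.mean(axis=1)
    msb = n * np.var(means, ddof=1)                   # between-worker mean square
    msw = ((logx - means[:, None]) ** 2).sum() / (k * (n - 1))
    s2_b = max((msb - msw) / n, 0.0)  # truncate a negative estimate at zero
    return s2_b, msw                  # (sigma^2_b hat, sigma^2_w hat)

rng = np.random.default_rng(7)
k, n = 30, 5
b = rng.normal(0.0, np.sqrt(0.5), k)                  # worker effects
logx = 1.0 + b[:, None] + rng.normal(0.0, np.sqrt(0.2), (k, n))
print(anova_variance_components(logx))
```

With the simulated true values (0.5 between, 0.2 within), the estimates land near the truth; truncation at zero mirrors the negative-estimate issue mentioned in connection with Petersen et al. below.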

Researchers have also postulated or attempted to account for other potential sources of

variability in TWA exposures. Spear and Selvin (1989) allude to the possibility of day-to-day

variability in 8-hour TWAs, though they state that little evidence is available suggesting this to be a

significant factor. Samuels et al. (1985) describe styrene exposures from multiple companies through

a hierarchical model with random effects for company and sampling date within company, and they

recommend the continued use of models for exposure data accounting for variance components.

Petersen et al. (1986) describe a pilot study concerned with respirable dust exposures at cement

plants, in which a variance component due to particular jobs performed as well as to subject was

estimated. As they obtained a negative estimate of the subject-to-subject variance component, they

dropped that effect from their final model; the results of the modeling were used to estimate required

sample sizes for the main study to obtain desired precision in an estimate of the mean exposure.

Nicas and Spear (1993a; 1993b) describe a model for a single worker's short term exposure series

which does not involve ANOVA methods. They view the worker's overall distribution of TWAs as a

mixture of distributions corresponding to different daily tasks, and they use weighting on the basis of

time spent on each task to specify this distribution. Ultimately, they compare various survey

sampling plans (including plans involving stratification on the basis of task times) with the aim of

suggesting the best strategy in terms of precision in estimating the worker's mean exposure. While it

is difficult to imagine that resources would allow their methods to be used for a large-scale assessment

program, Nicas and Spear (1993b) do suggest that variation in workers' time spent in various jobs

may be viewed as a source of between-worker variability in 8-hour TWAs.

1.7 EXPOSURE MEASUREMENT ERROR

1.7.1 Background

Exposure-disease models are widely used to assess the potential for adverse health effects as the

result of exposure to toxic airborne contaminants, and to aid in the determination of standards

limiting the levels of such contaminants in the workplace. However, despite the technological

improvements represented in modern personal exposure monitors, it is impossible to accurately

measure the true parameters (whether the mean, a percentile, or some other parameter) of a worker's

exposure distribution. In other words, there is inevitably measurement error in estimates of these

focal parameters of the exposure distribution in any realistic scenario, yet these estimates represent


the only concrete means of relating the occurrence of an adverse health effect to exposure. The

problem is magnified when one hopes to relate cumulative exposure to an adverse health effect. We

define this quantity for a particular worker as

Cumulative Exposure = Σ_{j=1}^{p} μ_j t_j ,   (1.7.1.1)

where t_j represents the length of the j-th time period, and μ_j is the mean exposure suffered by the

worker during that period (e.g., Dement, 1980; Heederik and Miller, 1988). Since any surrogate for

the true mean exposures μ_j (j = 1, ..., p) is subject to measurement error, the error in the surrogate

for cumulative exposure will be exacerbated.
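As a purely numerical illustration of (1.7.1.1), suppose a worker's history spans p = 3 periods; the period lengths and true mean exposures below are hypothetical values chosen only to show the arithmetic.

```python
# Hypothetical work history over p = 3 periods: period lengths t_j (in
# years) and true mean exposures mu_j (in mg/m^3) -- illustrative values.
t = [5.0, 10.0, 3.0]
mu = [2.0, 0.8, 1.5]

# Cumulative exposure (1.7.1.1): sum over j of mu_j * t_j
cumulative = sum(m * tj for m, tj in zip(mu, t))
print(cumulative)  # 22.5 mg/m^3-years
```

In practice each μ_j must be replaced by an error-prone surrogate, so the errors accumulate across the p periods.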

The purpose of this section is to introduce the topic of exposure measurement error, and to

discuss potential means of adjusting for such error when a health effects model involving parameters

of the true exposure distribution is postulated. Though this topic has been treated in the literature

under a myriad of forms for the assumed health effects model, our focus will be the case of multiple

linear regression models. We will also focus exclusively on measurement error (in the case of one or

more continuous predictor variables), as opposed to misclassification (which relates to categorical

predictors). Finally, we will assume that the outcome variable itself is measured without error.

1.7.2 Concepts

Thomas et al. (1993) give an excellent overview of the subject of exposure measurement error.

Some of the basic concepts that they help to clarify in terms of alternative views, characterizations,

and treatments of the measurement error problem include distinguishing between the "structural" and

"functional" approaches, between "non-differential" and "differential" measurement error, and

between "classical" and "Berkson" measurement error models. Briefly, in the structural approach, the

true unknown predictor X (possibly a vector) is regarded as a random variable following some

distribution f(X) in the population. In contrast, in the functional approach (see, for example,

Edwards, 1994), X is treated as a fixed unknown for each individual and is estimated along with other

parameters relating to the health effects model. In the case of non-differential measurement error, the

distribution of the response Y, conditional on the true X and the measured value Z (known as the

"surrogate" for X), is the same as the distribution of Y conditional on X only. In symbols,

f(Y | X, Z) = f(Y | X) .   (1.7.2.1)

In other words, given the value of X, the surrogate Z provides no further information about Y. As


implied by the terminology, this is not the case with differential measurement error. In the classical

measurement error model, the surrogate variable Z is distributed about the true value X; i.e.,

E(Z | X = x) = x .   (1.7.2.2)


In the Berkson model, X is taken to be distributed about Z, so that the roles of X and Z in (1.7.2.2)

are reversed. Thomas et al. (1993) give further details regarding these concepts; also see Fuller

(1987). For the purposes of the current proposed research, we will henceforth consider only the

structural approach, non-differential measurement error, and classical measurement error models.

In the context of public health studies, the measurement error problem may generally be

formulated in terms of three models (Clayton, 1992; Kupper, 1994): (1) the true disease model

("TDM"), relating the response Y to the true predictor X and any other (error-free) covariates (as

mentioned above, this will be a linear model in our context); (2) the measurement error model

("MEM"), relating the true predictor X, the surrogate Z, and (possibly) other covariates in a model

involving (generally random) measurement error; and (3) the predictor distribution model ("PDM"),

describing the distribution of the true predictor, possibly conditional on other covariates assumed to

be measured without error. Usually, the MEM and the PDM will involve "nuisance" parameters

(e.g., those representing unknown means or variances), whose values must either be assumed, or

estimated separately from or in conjunction with the primary parameters of the TDM.

In general, while the surrogate predictor Z is always observed, the true predictor X may either

be latent (unobservable altogether), or observable in principle but not feasibly (i.e., measurements of

the true X may be prohibitively expensive or time-consuming). In either case, one must have some

means of obtaining information about the values of parameters in the MEM and the PDM, unless

these values are to be assumed known. Two common methods of obtaining such information are by

means of either a validation study or a reproducibility study (Thomas et al., 1993). In a validation

study, one uses measurements of both X and Z on subjects to extract the needed information about

the nuisance parameters; clearly, X must be observable. This will generally be possible for only a

small subsample of the study participants. In a reproducibility study, one generally gets at the

nuisance parameters by means of repeated observations of the surrogate Z. Of course, this latter

approach remains applicable when X is latent, which (as previously mentioned) is usually the case

when X relates to the worker's true exposure distribution. As a third option, there is information

about the nuisance parameters in the main study data; this information alone may be adequate, or

may be supplemented by validation or reproducibility data.

With regard to the MEM, it is often assumed that measurement errors are additive. This implies

that the basic form of the MEM most often considered is


Z = X + U ,   (1.7.2.3)

where U represents random measurement error. In this context it is generally assumed that X and U

are independent, and that U has mean 0 and variance σ²_U; it is often assumed in addition that X and

U are both normally distributed. However, the additive model (1.7.2.3) may not be appropriate in all

settings. As Carroll (1989) points out, there is no reason to rule out the possibility that the

measurement error may be multiplicative, as in the following MEM:

Z = XU′ ,   (1.7.2.4)

where U′ has mean 1 and variance σ²_U′. Hence, consideration should be given to the basic form of the

MEM in a particular public health setting. Clearly, the concepts of both the MEM and the PDM

require considerations that are generally ignored in standard studies of exposure-disease relationships,

which focus solely on defining the TDM.
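The contrast between (1.7.2.3) and (1.7.2.4) is easy to check by simulation. The sketch below, with arbitrary distributional choices, generates surrogates under both MEMs and illustrates that a multiplicative lognormal error becomes additive on the log scale, one reason the log transformation is attractive for exposure data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.lognormal(mean=0.5, sigma=0.6, size=n)      # hypothetical true exposures

# Additive MEM (1.7.2.3): Z = X + U with E(U) = 0
z_add = x + rng.normal(0.0, 0.3, size=n)

# Multiplicative MEM (1.7.2.4): Z = X * U' with E(U') = 1
u_mult = rng.lognormal(mean=-0.02, sigma=0.2, size=n)  # E(U') = exp(-0.02 + 0.04/2) = 1
z_mult = x * u_mult

print(x.mean(), z_add.mean(), z_mult.mean())  # both surrogates near E(X)
# On the log scale the multiplicative error is additive:
# log Z = log X + log U', with E(log U') = -0.02
print(np.log(z_mult).mean() - np.log(x).mean())
```

Note that E(Z | X) = X under both MEMs here, yet the two error structures lead to different attenuation factors in section 1.7.3.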

1.7.3 Adjusting for Measurement Error

A great deal of recent research has focused on methods of adjusting for measurement error.

The impetus for this research is the fact that measurement error in the predictor variable can have

serious effects on the results and interpretation of a study relating that predictor to a response

variable. This follows from the fact that, even for linear health effects models, the mean and variance

of Y conditional on X are generally not the same as the mean and variance conditional on Z. Most

commonly, with a classical non-differential measurement error model, the apparent relationship

between Y and X will be attenuated (i.e., biased toward the null) when Z is substituted for X in the

health effects model. Also, Y is generally more highly dispersed conditional on Z than on X, and

apparent main effects or interactions involving other covariates in the TDM can be distorted

conditional on Z (Thomas et al., 1993).

One may find the majority of the proposed methods of adjusting for measurement error

reviewed in Thomas et al. (1993) and/or Carroll (1989). A brief synopsis of some of these methods

now follows.

Simple Correction of Usual Estimates

If one proceeds to regress Y on the surrogate Z, the resulting estimates of the parameters of


interest are biased. An approach that has been put to beneficial use by Rosner et al. (1989; 1992) in

the regression context is to determine an expression for this bias (which should depend on unknown

parameters estimable by means of a validation or reproducibility study), and to correct the usual

estimates using a consistent estimator of the bias. This method is certainly applicable in the linear

regression setting; Rosner et al. (1989) apply it in the logistic regression case under the assumptions of

a rare disease outcome, additive multivariate normally distributed measurement errors, and a

multivariate normal true predictor vector. Stefanski and Carroll (1985) make the method more

generally applicable in the logistic regression setting by means of Taylor expansions.

As an example, consider the simple linear regression model

Y = α + βX + ε ,   (1.7.3.1)

where E(ε) = 0 and Var(ε) = σ²_ε. Also, denote E(X) = μ_X and Var(X) = σ²_X. If the surrogate Z is

defined according to the additive MEM (1.7.2.3), then it is well known (e.g., Carroll, 1989) that the

OLS estimator of the slope parameter when fitting the surrogate regression converges in probability to

λβ, where

λ = σ²_X / (σ²_X + σ²_U) .   (1.7.3.2)

An extension of this analytical result to allow for multiple predictors measured with error, under

multivariate normality assumptions for the true and surrogate predictors, is available (e.g., Thomas

et al., 1993). If, on the other hand, the multiplicative MEM (1.7.2.4) is in effect, then Carroll (1989)

shows that the OLS estimator converges to γβ, where

γ = σ²_X / [σ²_X + (μ²_X + σ²_X) σ²_U′] .   (1.7.3.3)

Given an estimate (based on a consistent estimator) of λ or γ, the simple correction is simply to

divide the OLS estimate by that value. This practice yields a consistent estimator for {3; Taylor

series-based methods may be used to estimate the variance of this corrected estimator.
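A small simulation, using arbitrary parameter values, illustrates both the attenuation in (1.7.3.2) and the simple correction under the additive MEM:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000
sigma2_x, sigma2_u, beta = 1.0, 0.5, 0.8             # hypothetical values
x = rng.normal(2.0, np.sqrt(sigma2_x), n)            # true predictor
z = x + rng.normal(0.0, np.sqrt(sigma2_u), n)        # additive MEM (1.7.2.3)
y = 1.0 + beta * x + rng.normal(0.0, 0.4, n)         # TDM (1.7.3.1)

beta_naive = np.cov(z, y)[0, 1] / np.var(z, ddof=1)  # OLS slope of Y on Z
lam = sigma2_x / (sigma2_x + sigma2_u)               # attenuation factor (1.7.3.2)
beta_corrected = beta_naive / lam                    # simple correction

print(beta_naive, lam * beta, beta_corrected)
```

With these settings λ = 2/3, so the naive slope sits near 0.533 while the corrected estimate recovers β = 0.8.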

Hwang (1986) discusses a multiplicative measurement error problem applicable to a linear

TDM and a generalization of the multiplicative MEM given in (1.7.2.4). Specifically, he considers the

true regression model to be a general linear model of the form

Y = Xβ + ε ,   (1.7.3.4)

where X is an n × p design matrix, β is a p × 1 vector of unknown parameters of interest, and ε is an


n × 1 vector of random error terms. The assumed MEM is

Z = X ⊙ U ,   (1.7.3.5)

where "⊙" denotes the elementwise (Hadamard) product and U is an n × p matrix of random

measurement error terms. Without assuming a known distribution for the (assumed to be i.i.d.) rows

of X, but assuming a known distribution (involving known parameters) for the i.i.d. rows of U,

Hwang (1986) derives a consistent estimator for β that is essentially a particular corrected version of

the usual OLS estimator. In particular, he shows that

β̂_OLS → (A ⊙ M)⁻¹Aβ  almost surely ,   (1.7.3.6)

where β̂_OLS represents the ordinary least squares estimator from the regression of Y on Z, A =

E(X₁X₁′), and M = E(U₁U₁′) (note that X₁′ and U₁′ are taken without loss of generality to be the

first rows of X and U, respectively). The corrected estimator proposed by Hwang is

β̃ = [(Z′Z) ⊘ M]⁻¹Z′Y ,   (1.7.3.7)

where "⊘" denotes elementwise division. Hwang establishes the asymptotic normality of √n(β̃ − β),

and provides consistent estimators for the large-sample dispersion matrix of β̃.

One can verify that (1.7.3.3) and (1.7.3.6) are in agreement when the true regression model is a

simple linear regression as in (1.7.3.1). Also, for the simple linear TDM (1.7.3.1), (1.7.3.7) produces

an estimator that is the usual OLS estimator multiplied by a factor (involving the sample mean and

sample variance of the Z's and the assumed-known measurement error variance σ²_U′) that is

consistent for (1/γ) [see (1.7.3.3)]. We note that this correction is based on the main study data, as

opposed to a validation or reproducibility study.
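Hwang's corrected estimator (1.7.3.7) is straightforward to implement. The sketch below, with entirely hypothetical settings, simulates a two-column design in which only the second column carries multiplicative lognormal error with unit mean, and compares the naive and corrected estimates.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
beta = np.array([1.5, -0.7])                          # hypothetical true beta
x = np.column_stack([np.ones(n), rng.lognormal(0.0, 0.5, n)])
y = x @ beta + rng.normal(0.0, 0.5, n)                # linear TDM (1.7.3.4)

# Multiplicative MEM (1.7.3.5): intercept column error-free (U = 1);
# second column multiplied by lognormal U' with E(U') = 1, Var(U') = 0.3
s2 = np.log(1.0 + 0.3)
u = np.column_stack([np.ones(n), rng.lognormal(-s2 / 2, np.sqrt(s2), n)])
z = x * u                                             # Hadamard product

m = np.array([[1.0, 1.0], [1.0, 1.3]])                # M = E(U1 U1'), known here
beta_naive = np.linalg.lstsq(z, y, rcond=None)[0]
beta_hwang = np.linalg.solve((z.T @ z) / m, z.T @ y)  # corrected estimator (1.7.3.7)
print(beta_naive, beta_hwang)
```

The naive slope is attenuated toward zero, while the corrected estimate lands near the true (1.5, −0.7); the elementwise division by M is exactly the ⊘ operation in (1.7.3.7).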

Regressing Y on E(X I Z)

Many authors have recognized that the regression of the response variable Y on the conditional

expected value of the true predictor X given the surrogate Z tends to alleviate the attenuation

problem. In fact, if the TDM is a simple linear regression and the MEM is of the form (1.7.2.3) with

normality, then this practice yields the same result as does the correction of the OLS estimator based

on (1.7.3.2). Again, in practice, one must provide the needed consistent estimates of the nuisance

parameters. This method has proven effective in the multivariate case (where X, Z, and U are


random vectors) under the assumption that the independent vectors X and U follow a joint

multivariate normal distribution; Thomas et al. (1993) provide an expression for the applicable

conditional expectation. If the conditional variance of Y given Z depends on the value of Z, then a

weighted regression analysis is recommended; however, this is not a problem with a linear TDM and

additive measurement error under normality (Thomas et al., 1993). Some authors (e.g., Schafer,

1989; Pierce et aI., 1992) have explored this option for adjustment in more complex settings, requiring

the modeling of and/or numerical computation of the conditional expectation [E(X I Z)] and the

conditional variance [V(X I Z)]. Whittemore (1989) explores the closely related strategy of surrogate

regression utilizing James-Stein estimates.

Outside the relatively simple context of a linear TDM and an additive MEM with normality

assumptions, it is not always clear that the strategy of regressing Y on E(X I Z) yields consistent

estimators of the parameters in the TDM. Armstrong and Oakes (1982) used simulations to evaluate

the strategy in the case of conditional logistic regression, with a multiplicative MEM and X and U

both lognormally distributed. Their simulations indicated that the resulting estimates were usually

less attenuated than the naive (uncorrected) estimates, but that in some cases they were actually

worse. Liang and Liu (1991) make some clarification of the consistency issue in the generalized linear

models setting, with additive measurement error.
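For the normal, additive-error case described above, the regression-calibration strategy is easy to demonstrate. In the sketch below (arbitrary parameter values, nuisance parameters taken as known), the slope from regressing Y on E(X | Z) recovers β, matching the corrected OLS estimate.

```python
import numpy as np

rng = np.random.default_rng(13)
n = 200_000
mu_x, sigma2_x, sigma2_u, beta = 2.0, 1.0, 0.5, 0.8   # hypothetical values
x = rng.normal(mu_x, np.sqrt(sigma2_x), n)
z = x + rng.normal(0.0, np.sqrt(sigma2_u), n)         # additive MEM (1.7.2.3)
y = 1.0 + beta * x + rng.normal(0.0, 0.4, n)          # TDM (1.7.3.1)

# Under joint normality, E(X | Z) = mu_x + lam * (Z - mu_x),
# with lam the attenuation factor of (1.7.3.2)
lam = sigma2_x / (sigma2_x + sigma2_u)
x_hat = mu_x + lam * (z - mu_x)

slope = np.cov(x_hat, y)[0, 1] / np.var(x_hat, ddof=1)
print(slope)
```

In practice μ_X, σ²_X, and σ²_U would themselves be replaced by consistent estimates from validation, reproducibility, or main study data.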

Structural Equations

A method of analysis that is popular in the social sciences may also be used to correct for

measurement error. This method is based on "linear structural relations" (LISREL), and its

applicability in latent variable settings is discussed by Bollen (1989). The basic idea behind the

approach is to pose linear models relating a set of (possibly latent) response variables to a set of

latent predictor variables, and relating the observed and latent predictor and response variables.

These models allow one to specify the theoretical structure of the variance-covariance matrix of all of

the variables in terms of unknown parameters. The estimation is essentially based on finding the

parameter estimates that minimize the difference (based on some criterion) between the observed

sample variance-covariance matrix and the theoretical one.

Some potential drawbacks to the LISREL approach include the fact that there is usually an

identifiability problem, requiring the imposition of constraints on the model parameters (Thomas et

al., 1993). The particular constraints chosen can have great impact on the results of the analysis. In

addition, the preferred criterion (maximum likelihood) for model fitting requires the assumption that

the variables are jointly distributed as multivariate normal. However, this may not be necessary if

another criterion is used.


Heederik and Miller (1988) apply the linear structural equations approach in the occupational

epidemiology setting. They use the method to demonstrate the potentially serious attenuation effect

in the multiple linear regression of lung function on cumulative dust exposure, controlling for age,

weight, and smoking status.

Maximum Likelihood

In principle, perhaps the ideal way to perform an analysis adjusting for measurement error

would be based upon the full likelihood of the observed data. If θ represents the unknown parameters

of the TDM and τ the unknown "nuisance" parameters, then assuming non-differential measurement

error allows one to write the full likelihood (for k subjects) L_f(θ; τ) = Π_{i=1}^{k} f(Y_i, Z_i; θ, τ) as

L_f(θ; τ) = Π_{i=1}^{k} f(Z_i; τ) ∫ f(Y_i | x; θ) f(x | Z_i; τ) dx .   (1.7.3.8)

Of course, other covariates measured without error could also be accommodated. In theory, one could

maximize this likelihood with respect to θ to obtain estimates of the parameters of interest properly

adjusted for measurement error. Data from a validation study could also be accommodated

simultaneously with efficiency benefits, provided that the distribution of X may be taken to be the

same in the main and validation studies (Thomas et al., 1993; Carroll, 1989).

While ML estimation is attractive, there are drawbacks. In general, the maximization of

(1.7.3.8) will be a daunting task, even if it is possible to supply the required distributions. Carroll

(1989) suggests that some of the difficulties may be circumvented by an approximate ML method

(based on Taylor series); such a method has been attempted by Whittemore and Keller (1988). Still,

Carroll notes that both the usual and approximate ML methods suffer from problems due to the

estimates' occasional tendency to "blow up". Liu and Liang (1991) note that the first term in an

expression analogous to (1.7.3.8) depends only on τ, and hence suggest the possibility that this

marginal likelihood could be used alone to provide estimates of the nuisance parameters, which would

then be incorporated into the remaining part of the full likelihood by means of a pseudo-likelihood

approach (Gong and Samaniego, 1981). However, Liu and Liang note that this accomplishes little in

terms of easing the computational burden.
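To give a flavor of the numerical work involved, the sketch below evaluates one subject's likelihood contribution f(Y_i, Z_i; θ, τ) by Gauss-Hermite quadrature, in the illustrative special case of a normal linear TDM, an additive normal MEM, and a normal PDM; all parameter values are hypothetical. In this special case (Y, Z) is bivariate normal, so the quadrature result can be checked against the closed-form density.

```python
import numpy as np

def norm_pdf(v, mean, var):
    return np.exp(-(v - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def loglik_point(y, z, alpha, beta, mu_x, s2_x, s2_u, s2_e, n_nodes=40):
    """Gauss-Hermite approximation to one subject's contribution
    log f(y, z) = log INT f(y | x) f(z | x) f(x) dx, assuming
    Y | x ~ N(alpha + beta*x, s2_e), Z | x ~ N(x, s2_u), X ~ N(mu_x, s2_x)."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    xk = mu_x + np.sqrt(2.0 * s2_x) * t            # nodes mapped to the PDM
    vals = norm_pdf(y, alpha + beta * xk, s2_e) * norm_pdf(z, xk, s2_u)
    return np.log((w * vals).sum() / np.sqrt(np.pi))

print(loglik_point(2.5, 1.8, alpha=1.0, beta=0.8,
                   mu_x=2.0, s2_x=1.0, s2_u=0.5, s2_e=0.16))
```

Summing such terms over subjects and maximizing over θ (with τ estimated or profiled) is exactly where the computational burden arises; outside the normal case the integral rarely simplifies.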

Quasi-likelihood

As indicated by Carroll (1989), a method applicable to measurement error problems which

maintains some of the attractive properties of ML methods whil(~ reducing the computational burden


is based on quasi-likelihood (Wedderburn, 1974). Basically, a quasi-likelihood is analogous to a

likelihood, except that only the first two moments (as opposed to the full distribution) of the response

variable must be specified. Wedderburn's quasi-likelihood provides an extension to the concept of

generalized linear models (GLMs; McCullagh and Nelder, 1983) beyond the exponential family of

distributions for the response. Still, his "quasi-score" equations for maximizing the quasi-likelihood

turn out to be equivalent to the usual score equations for ML estimation in the GLM. This concept

has led to other useful extensions, including the generalized estimating equations method (Liang and

Zeger, 1986) for modeling discrete and continuous correlated response data.

In the measurement error framework, quasi-likelihood requires specification of only the first

two conditional moments, E(Y | Z) and Var(Y | Z), as opposed to the full conditional distribution

f(Y | Z) that is implicitly required in (1.7.3.8). In general, this eases the maximization task

considerably. Liu and Liang (1991) apply the method in the framework of the GLM, with

misclassification in categorical covariates; Liang and Liu (1991) consider the analogous problem for

continuous covariates measured with error. They show that the appropriate quasi-score function

(which one sets to 0 and solves for the maximum quasi-likelihood estimates) may be written as

S(θ; τ) = Σ_{i=1}^{n} (∂E(Y_i | Z_i)/∂θ)' [Y_i − E(Y_i | Z_i)] / Var(Y_i | Z_i).   (1.7.3.9)

Liang and Liu indicate that one may utilize a consistent estimator τ̂ of the nuisance parameters in

(1.7.3.9), and still obtain a consistent estimate of θ. Often, the conditional variance Var(Y_i | Z_i)

required in (1.7.3.9) involves an additional dispersion parameter that is not among the parameters of

interest (θ) and is not estimable by means of a validation or reproducibility study. An additional

equation is generally employed for simultaneous estimation of this parameter; clarification is given in

Breslow (1990) and Liang and Hanfelt (1994).
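As a toy numerical illustration of solving a quasi-score equation (this sketch is ours, not from the references cited; the mean model E(Y | Z) = exp(θZ) with constant variance φ is an assumed example, and φ cancels when the score is set to 0):

```python
import math

def quasi_score(theta, z, y):
    # S(theta) = sum_i (dE_i/dtheta) * (y_i - E_i) / Var(Y_i | Z_i), with
    # E_i = exp(theta * z_i); the constant dispersion cancels at the root.
    return sum(zi * math.exp(theta * zi) * (yi - math.exp(theta * zi))
               for zi, yi in zip(z, y))

def solve_quasi_score(z, y, lo=-5.0, hi=5.0, tol=1e-12):
    # Bisection on the quasi-score; assumes a sign change on [lo, hi].
    flo = quasi_score(lo, z, y)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        fmid = quasi_score(mid, z, y)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Noiseless data generated with theta = 0.7: the quasi-score root should
# recover 0.7 up to tolerance.
z = [0.5, 1.0, 1.5, 2.0]
y = [math.exp(0.7 * zi) for zi in z]
theta_hat = solve_quasi_score(z, y)
```

Only the first two conditional moments enter the computation, which is the practical appeal of the approach described above.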

As with maximum likelihood, there are regularity conditions which must be satisfied before the

properties of the solutions to the quasi-score equations may be confirmed. Details applicable to the

demonstration of the asymptotic properties of the estimators may be found in Liang and Zeger

(1986), Breslow (1990), and Gong and Samaniego (1981). Approximations of the quasi-likelihood

approach to estimation for measurement error models for which no reasonable parametric form for the

conditional distribution f(Y | Z) is available are discussed by Schafer (1992); however, these methods

tend to require relatively larger sample sizes. Other authors who have considered adaptations of the

quasi-likelihood approach to measurement/misclassification problems include Whittemore and Keller

(1988) and Carroll and Stefanski (1990).


1.7.4 Measurement Error in Studies of Occupational Exposure

As mentioned previously, a good deal of recent attention in the environmental literature has

been devoted to between-worker variation in TWA exposures (Kromhout et al., 1993; Rappaport et

al., 1995a). As also indicated, observed averages of TWAs (as a surrogate for true mean exposure) on

a worker inevitably suffer from measurement error; this may be attributed to within-worker

variability in the observed measurements. Models, such as the familiar one-way RE ANOVA model

(Searle et al., 1992), have been used to characterize these sources of variability when there are

repeated measurements on workers. As this model views the true mean exposure on an arbitrary

worker as a random variable, several authors have begun to consider health effects modeling in

occupational epidemiology as a measurement error problem.

Heederik et al. (1991) and Rappaport et al. (1995b) have considered the simple linear

regression of a continuous health effect (such as lung function) on unobservable true mean exposure to

a toxic agent in the workplace. If β̂_S represents the OLS estimator of the slope parameter in a

regression of the health effect Y on the worker-specific sample mean of n repeated TWA exposure

measurements, and if these exposure measurements conform to the usual one-way RE ANOVA model,

then one may use the same basic calculation that leads to (1.7.3.2) to argue that

β̂_S →p β [1/(1 + λ/n)]   (1.7.4.1)

(Brunekreef, Noy, and Clausing, 1987; Rappaport et al., 1995b). In the above expression, β is the

slope of the regression of Y on the true worker-specific mean exposure, and λ = (σ_w²/σ_b²), the ratio of

the within- to the between-worker variance components. Clearly, the attenuation depends upon the

ratio λ, as well as n. Typically, authors have assumed normality of the random effects (and error

terms) in the ANOVA model, so that (1.7.4.1) may often be most appropriate when one is working

with logged TWA exposure measurements. Liu et al. (1978) give an analogous expression to

(1.7.4.1), applicable when the correlation between Y and true mean exposure is the parameter of

interest. It is clear that generalizations of (1.7.4.1) should be possible in cases where cumulative

exposure (1.7.1.1) is the true predictor of interest, depending on the assumed models for the

components μ_j.

In the occupational exposure setting, most authors tend to utilize (1.7.4.1) as a means of

determining the number of exposure measurements required per worker to reduce the bias in the

estimate β̂_S to an acceptable level. From (1.7.4.1), the magnitude of the relative bias (b) is

1 − (1 + λ/n)^{−1}. Solving for n, one obtains the number of measurements per worker needed to

maintain the bias at some (typically small) level b as


n = λ(1 − b)/b   (1.7.4.2)

(Rappaport et al., 1995b). Liu et al. (1978) give the related result for estimating the true correlation

coefficient. Rappaport et al. (1995b) use (1.7.4.2) together with estimates of the variance ratio λ to

help elucidate the relative merits of TWA styrene exposure measurements as opposed to biological

markers (such as styrene levels in exhaled air). Heederik et al. (1991) suggest that strategies for

grouping workers be based on estimates of λ, in light of (1.7.4.2).
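The attenuation in (1.7.4.1) and the sample-size rule in (1.7.4.2) are simple to compute; a minimal sketch (function names are ours, for illustration only):

```python
def attenuation_factor(lam, n):
    # Multiplier in (1.7.4.1): beta_hat_S estimates beta / (1 + lambda/n),
    # where lam = sigma_w^2 / sigma_b^2 and n = measurements per worker.
    return 1.0 / (1.0 + lam / n)

def n_per_worker(lam, b):
    # (1.7.4.2): measurements per worker needed to hold the magnitude of
    # the relative bias at level b.
    return lam * (1.0 - b) / b

# With substantial within-worker variability (lambda = 2), holding the
# relative bias to 10% requires n = 2 * 0.9 / 0.1 = 18 measurements.
print(n_per_worker(2.0, 0.10))        # 18.0
print(attenuation_factor(2.0, 18.0))  # 0.9
```

This mirrors the way Rappaport et al. (1995b) and Heederik et al. (1991) use estimates of λ to plan sampling.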

While the expression in (1.7.4.2) provides useful insights, little has been done in terms of an

in-depth study involving adjustment for the attenuation phenomenon evident in (1.7.4.1). However,

such a study is of little import except in the interest of evaluating the use of competing estimators for

λ, since each of the methods outlined in the previous section produces a theoretically identical

adjustment to the OLS estimator (Thomas et al., 1993). For logical extensions of the model for

occupational exposures and/or the simple linear health effects model, however, the discrepancies

between the various methods for measurement error adjustment can be pronounced.

1.8 PROPOSED RESEARCH

There is a continuing need for statistical advances in the modeling of occupational exposures to

account for potential sources of variability and other characteristics of TWA exposure distributions.

As plausible models are proposed, their ultimate utility hinges primarily on the fact that they may

aid in the assessment of workplace exposure levels (with relatively sparse data, this task is generally

infeasible without the use of parametric models to reduce the number of unknown parameters).

Hence, statistical techniques relevant to workplace exposure assessment should accompany proposed

models for exposure distributions. One important issue involves the estimation of key parameters

describing the distribution of exposures in a population of workers. In addition, exposure assessments

generally require an exposure limit (or standard) as a benchmark below which it is desired to control

some characteristic (e.g., the mean or some percentile) of each worker's exposure distribution. The

selection of the appropriate characteristic, as well as the determination of the appropriate limit,

depends in theory on the underlying relationship between exposure to the toxic agent in question and

the postulated adverse health effect. The elucidation of this relationship typically requires health

effects models relating exposure to disease. As such, there is also a continuing need for advances

allowing valid interpretations and inferences based on studies employing such models in real-world

scenarios.


In Chapter 2, we consider two models for log-transformed TWA exposure measurements. Each

of these is a special case of the general linear mixed model (1.5.1.1). We will show how incorporating

typical normality assumptions with these models makes them interchangeable with a plausible pair of

models that assume lognormality of the untransformed TWA exposure measurements. The two

models that we consider account for sources of variability (between-worker and day-to-day) that have

received much attention in the recent occupational hygiene literature. They are particularly appealing

for modeling the distribution of 8-hour TWA exposure measurements within a specified group of

workers.

After defining the two models, we will focus on estimating key occupational exposure

parameters for a group of workers. These key exposure parameters are the overall population mean

and variance. We will discuss estimators of these parameters employing ML and/or ANOVA

estimators of variance components, and we will derive (for balanced exposure data) the UMVU

estimators. In the case of the first model (which is the one-way RE ANOVA model) for exposures, we

will assess the extent to which the UMVU estimator for the population mean is preferable by

theoretically examining its efficiency with respect to competing estimators.

As most exposure assessments seek ultimately to characterize the exposure distributions of

individual workers, we will also consider the prediction of worker-specific mean exposures under each

model. As will also be true of our treatment of the estimation problem regarding the population

parameters, this study will differ from typical ones in the linear mixed model setting because we will

be working on the lognormal scale. However, we will make parallels with familiar topics in the area

of prediction, including best linear prediction [BLP; see (1.5.3.5)]. Theoretical results related to mean

squared error of prediction will be derived. We will utilize these results together with simulations to

assess the performances of proposed predictors for worker-specific mean exposure, relative to the use of

worker-specific averages.

Chapter 3 will contain new statistical methodology applicable to the assessment of workplace

exposure levels. We will begin by providing some clarification and describing new methodology

relevant to the simplest case of a random sample of exposures adequately characterized by a

lognormal distribution; this treatment will consider assessment strategies focusing on both long-term

mean exposure and on probabilities of exceedance. We will then proceed to show how analogous

assessment strategies for mean exposure can be formulated and carried out using models accounting

for between-worker and/or day-to-day variability. In particular, under the one-way RE ANOVA

model (accounting for between-worker variability only), we will consider likelihood ratio-, score-, and

Wald-type test statistics applicable to a hypothesis test regarding the probability that an arbitrary


worker's mean exposure level exceeds an applicable limit. We will identify complications that can

arise when applying these test statistics, and we will discuss means for dealing with these problems.

Simulation studies will evaluate the significance and power levels of proposed test statistics over

ranges of the true parameters that appear plausible based on the occupational hygiene literature.

An effort to tie together the previous work regarding the modeling of TWA occupational

exposures with health effects modeling will appear in Chapter 4. In particular, the random effects

models to be considered in Chapters 2 and 3 view the mean of an arbitrary worker's exposure

distribution as a random variable; this provides the impetus for the proposed treatment of prediction

in Chapter 2. As most relevant health effects models seek to evaluate such characteristics of

individuals' exposure distributions as predictors of adverse health outcomes, there is clearly a latent

variable measurement error problem. In the context of multiple linear regression, we will outline this

problem assuming the availability of appropriate exposure data assumed to satisfy the one-way RE

ANOVA model. Our approach will differ from previous work in the occupational setting. Such prior

work has generally treated the exposure data as normal, or at least has taken as additive the

measurement error model relating true and surrogate characteristics of workers' exposure

distributions. In contrast, we will maintain the well-supported assumption of lognormality, and we

will find that the structure imposed when the predictor of interest is the individual's true mean

exposure leads to a multiplicative measurement error model. We will discuss and evaluate potential

methods of adjusting for the measurement error, allowing valid estimation of the exposure-disease

relationship and valid inference. Specific applications will be introduced that allow incorporation of

health outcome and exposure data from disparate groups of industrial workers.

In Chapter 5, we will consider examples of the methodology developed in Chapters 2-4, using

real exposure data and continuous health outcome data. These examples will cover estimation of the

population mean exposure (and the population variance), as well as prediction of worker-specific mean

exposures, according to models accounting for between-worker and day-to-day exposure variability.

We will also see illustrations of the hypothesis testing methodology of Chapter 3 and the

measurement error adjustment procedures discussed in Chapter 4.

Issues warranting future research with regard to extensions of the proposed methodology will

be outlined in Chapter 6. The possibilities include extensions and generalizations of the models

describing occupational exposures and the models relating these exposures to adverse health effects.

We will also consider motivation for further work regarding the estimation of key population exposure

parameters, and the prediction of individual workers' mean exposures. Finally, we will indicate


possible future research in the measurement error context of Chapter 4, and we will make note of the

problem of left-censored ("non-detectable") exposure data, which one may need to consider in

conjunction with exposure assessment strategies like those discussed in Chapter 3.


Chapter 2: Estimation and Prediction Issues in the Study of Occupational Exposure

2.1 PRELIMINARIES

In this chapter, we focus upon the estimation of key population parameters (e.g., mean and

variance) describing the distribution of TWA exposures in a group of workers, and upon the

prediction of mean exposure for individual members of the group. In the following subsection, we

describe the two models for exposure that we will employ for this study. We discuss the estimation

problem in section 2.2, and the prediction problem in section 2.3. Analyses using real exposure data

that illustrate the methods contained in this chapter are presented in Chapter 5.

2.1.1 Models for Exposure

Consider the one-way random effects ANOVA model (Searle et al., 1992):

Model I:   Y_ij = ln(X_ij) = μ_y + β_i + ε_ij   (i = 1,...,k; j = 1,...,n_i),   (2.1.1.1)

where β_i ~ N(0, σ_b²) and ε_ij ~ N(0, σ_w²). Here, Y_ij is the natural logarithm of X_ij, which

represents the j-th TWA measurement taken on the i-th (of k) randomly selected workers from a job

group. The random effect β_i represents the deviation of the i-th worker's log-scale mean

(μ_yi = μ_y + β_i) from the population log-scale mean (μ_y), and ε_ij represents the random deviation of

the j-th measurement on the i-th worker from μ_yi. Note that μ_yi is the expected value of Y_ij,

conditional on β_i fixed. We take all random effects in (2.1.1.1) to be mutually independent. The

parameters σ_b² and σ_w² represent the between- and within-worker components of variability,

respectively. This model is the simplest special case of (1.5.1.1), and we use it extensively throughout

the remainder of this work in the occupational setting. Rappaport et al. (1995a) employ a slight

simplification of methodology due to Lange and Ryan (1989) to show that Model I fits reasonably

well a large number of existing sets of shift-long exposure data taken on nickel production workers.

An ANOVA table associated with Model I is as follows (Searle et al., 1992):


Table 2.1: ANOVA table for Model I (2.1.1.1)

Source of variation | Degrees of freedom (df) | Sum of squares (SS)                                   | Mean square (MS)
Worker              | k − 1                   | SSB = Σ_{i=1}^{k} n_i (Ȳ_i· − Ȳ··)²                   | MSB = SSB/(k − 1)
Within worker       | N − k                   | SSW = Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij − Ȳ_i·)²        | MSW = SSW/(N − k)
Adjusted total      | N − 1                   | SST = Σ_{i=1}^{k} Σ_{j=1}^{n_i} (Y_ij − Ȳ··)²         |

Estimation of model parameters

In Table 2.1, N = Σ_{i=1}^{k} n_i, Ȳ_i· = n_i^{−1} Σ_{j=1}^{n_i} Y_ij, and Ȳ·· = N^{−1} Σ_{i=1}^{k} Σ_{j=1}^{n_i} Y_ij. The usual

ANOVA variance component estimators are as follows (Searle et al., 1992):

σ̂²_{b,an} = (k − 1)(MSB − MSW) / (N − Σ_{i=1}^{k} n_i²/N)   and   σ̂²_{w,an} = MSW.

In general, the MLEs of σ_b² and σ_w² must be found iteratively, and hence may not be expressed in

closed form. For estimating the population mean μ_y, we may consider a special case of the BLUE

(1.5.2.6), which is also the generalized least squares estimator (Searle et al., 1992):

μ̂_y = [Σ_{i=1}^{k} {Ȳ_i·/var(Ȳ_i·)}] / [Σ_{i=1}^{k} {1/var(Ȳ_i·)}],

where var(Ȳ_i·) = σ_b² + σ_w²/n_i. Of course, in practice one must estimate the unknown variance

components in order to compute μ̂_y; as an example, substitution of the MLEs of the variance

components yields the MLE of μ_y.
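The estimators above are straightforward to compute; the following minimal sketch (function and variable names are ours, not from the text) handles possibly unbalanced one-way data and plugs the ANOVA variance component estimates into the GLS mean:

```python
def one_way_anova_fit(groups):
    # groups: list of lists; groups[i] holds the log-scale measurements
    # Y_ij for worker i. Returns the GLS estimate of mu_y (built from the
    # ANOVA variance components) plus the component estimates themselves.
    k = len(groups)
    n_i = [len(g) for g in groups]
    N = sum(n_i)
    ybar_i = [sum(g) / len(g) for g in groups]
    ybar = sum(sum(g) for g in groups) / N
    ssb = sum(ni * (yi - ybar) ** 2 for ni, yi in zip(n_i, ybar_i))
    ssw = sum((y - yi) ** 2 for g, yi in zip(groups, ybar_i) for y in g)
    msb, msw = ssb / (k - 1), ssw / (N - k)
    # ANOVA variance component estimators (unbalanced form).
    sb2 = (k - 1) * (msb - msw) / (N - sum(ni ** 2 for ni in n_i) / N)
    sw2 = msw
    # GLS mean: inverse-variance weighting of the worker means, using
    # var(Ybar_i.) = sigma_b^2 + sigma_w^2 / n_i with estimates plugged in.
    w = [1.0 / (sb2 + sw2 / ni) for ni in n_i]
    mu_y = sum(wi * yi for wi, yi in zip(w, ybar_i)) / sum(w)
    return mu_y, sb2, sw2, msb, msw
```

With balanced data the weights are equal, so the GLS estimate reduces to the grand mean Ȳ·· and σ̂²_{b,an} reduces to (MSB − MSW)/n, as noted below.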

As a special case of Model I, if the data are balanced (i.e., n_i = n for all i), then the ML estimators

of the unknown parameters can be written in closed form as

μ̂_{y,ml} = Ȳ··,   σ̂²_{b,ml} = {[(k − 1)/k]MSB − MSW}/n,   and   σ̂²_{w,ml} = MSW.

In that case, the ANOVA estimators are also the REML estimators, and are given by


σ̂²_{b,an} = (MSB − MSW)/n   and   σ̂²_{w,an} = MSW.

Original-scale model associated with Model I

The following multiplicative random effects model is completely interchangeable with Model I:

X_ij = exp(μ_y + β_i + ε_ij)   (i = 1,...,k; j = 1,...,n_i).   (2.1.1.2)

Clearly, the X_ij's are lognormal random variables. The following results follow immediately:

μ_x = E(X_ij) = exp[μ_y + (σ_b² + σ_w²)/2],   (2.1.1.3)

σ_x² = Var(X_ij) = μ_x²[exp(σ_b² + σ_w²) − 1].   (2.1.1.4)

Since the joint distribution of X_ij and β_i for any i and j is bivariate lognormal, it is straightforward

to define a random variable that is analogous to μ_yi, except on the lognormal scale. This random

variable, which we denote as μ_xi, represents the true unknown mean TWA exposure for the i-th

worker, and may be derived as the conditional expectation of X_ij given β_i fixed:

E(X_ij | β_i fixed) = μ_xi = exp(μ_y + β_i + σ_w²/2).   (2.1.1.5)

As discussed by Rappaport et al. (1995a), μ_xi is a key predictor of chronic health effects assuming

that Model I holds; hence, this random variable will be the focus of much of the work to follow. The

unconditional distribution of μ_xi is itself lognormal, with mean μ_x and variance μ_x²[exp(σ_b²) − 1].

It also follows from Model I that the conditional distribution of X_ij given μ_xi is lognormal, with

mean μ_xi and variance μ_xi²[exp(σ_w²) − 1]. The three lognormal distributions associated with X_ij,

μ_xi, and X_ij given μ_xi are the concrete specifications of those proposed more heuristically in the

industrial hygiene literature by Spear and Selvin (1989).
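The moment relationships in (2.1.1.3)-(2.1.1.5) are easy to check numerically; a minimal sketch (function names are ours, for illustration):

```python
import math

def mu_x(mu_y, sb2, sw2):
    # (2.1.1.3): population mean of X_ij under Model I.
    return math.exp(mu_y + (sb2 + sw2) / 2.0)

def sigma2_x(mu_y, sb2, sw2):
    # (2.1.1.4): population variance of X_ij under Model I.
    m = mu_x(mu_y, sb2, sw2)
    return m * m * (math.exp(sb2 + sw2) - 1.0)

def mu_xi(mu_y, beta_i, sw2):
    # (2.1.1.5): worker-specific mean exposure given beta_i.
    return math.exp(mu_y + beta_i + sw2 / 2.0)

# Consistency check: averaging mu_xi over beta_i ~ N(0, sb2) recovers
# mu_x, since E[exp(beta_i)] = exp(sb2 / 2).
print(mu_x(0.0, 0.5, 0.3))                          # exp(0.4)
print(mu_xi(0.0, 0.0, 0.3) * math.exp(0.5 / 2.0))   # also exp(0.4)
```

The same check reflects why μ_xi is itself lognormal with mean μ_x, as stated above.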

As indicated by Samuels et al. (1985), there may in some instances be a need to account for

day-to-day variability in shift-long exposures. To do so, let us consider the following generalization of

Model I:

Model II:   Y_ij = ln(X_ij) = μ_y + β_i + δ_j + ε_ij   (i = 1,...,k; j = 1,...,n),   (2.1.1.6)


where the new random effect δ_j is included to account for random variation in exposure due to date of

sampling. In particular, note that we have assumed in (2.1.1.6) a balanced sample, in which n

measurements (1 on each of n dates randomly selected over the intended period of study) are obtained

on each of the k workers randomly selected from a job group. (In general, the potential necessity of

accounting for the random date effects {δ_j} is stronger as the amount of same-day sampling of

different workers increases.)

We maintain the same previous specifications (including normality assumptions) for the

random variables β_i and ε_ij; however, we now denote Var(ε_ij) as σ_e² ("e" for "error"). In addition,

we assume that δ_j ~ N(0, σ_d²) for all j. These components of variability (σ_d² and σ_e²) essentially serve to

partition the within-worker variability (σ_w²) accounted for under Model I into a day-to-day and an

error component, respectively. Note also that the {k(n + 1) + n} random terms in (2.1.1.6) are

assumed to be mutually independent. While it will be assumed for the purposes of this research, the

balanced sampling strategy (and realized sample) described above is, of course, not an absolute

requirement. Note that one may still define the i-th worker's log-scale mean as μ_yi = μ_y + β_i, the

conditional mean of Y_ij given β_i fixed (i = 1,...,k). An ANOVA table corresponding to Model II is as

follows:

Table 2.2: ANOVA table for Model II (2.1.1.6)

Source of variation | Degrees of freedom (df) | Sum of squares (SS)                                         | Mean square (MS)
Worker              | k − 1                   | SSB = n Σ_{i=1}^{k} (Ȳ_i· − Ȳ··)²                           | MSB = SSB/(k − 1)
Sampling date       | n − 1                   | SSD = k Σ_{j=1}^{n} (Ȳ·j − Ȳ··)²                            | MSD = SSD/(n − 1)
Within subject/date | (k − 1)(n − 1)          | SSE = Σ_{i=1}^{k} Σ_{j=1}^{n} (Y_ij − Ȳ_i· − Ȳ·j + Ȳ··)²    | MSE = SSE/[(k − 1)(n − 1)]
Adjusted total      | kn − 1                  | SST = Σ_{i=1}^{k} Σ_{j=1}^{n} (Y_ij − Ȳ··)²                 |

In Table 2.2, Ȳ_i· = n^{−1} Σ_{j=1}^{n} Y_ij, Ȳ·j = k^{−1} Σ_{i=1}^{k} Y_ij, and Ȳ·· = (nk)^{−1} Σ_{i=1}^{k} Σ_{j=1}^{n} Y_ij. One

can verify that

E[(MSB, MSD, MSE)'] = (nσ_b² + σ_e², kσ_d² + σ_e², σ_e²)'.

Hence, applying the general result (1.5.2.1) from Searle et al. (1992), we obtain the ANOVA

estimators of the variance components as follows:

σ̂²_{b,an} = (MSB − MSE)/n,   σ̂²_{d,an} = (MSD − MSE)/k,   and   σ̂²_{e,an} = MSE.

ML estimators of these variance components are not obtainable in closed form, and hence their

realized values must be obtained iteratively. However, the ML estimator of the log-scale mean μ_y is

simply the overall sample mean Ȳ··. This result follows because the data are balanced and because

Model II is a random effects model in the sense described in section 1.5.1; Ȳ·· is also the ordinary and

generalized least squares estimator, and the BLUE of μ_y (Searle et al., 1992).
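For balanced two-way data laid out as a k × n matrix (workers by dates), the Table 2.2 quantities and ANOVA estimators can be computed directly; a minimal sketch (names ours):

```python
def two_way_anova_components(Y):
    # Y: k x n list of lists of log-scale measurements, Y[i][j] for
    # worker i on date j (balanced, one measurement per cell).
    k, n = len(Y), len(Y[0])
    ybar_i = [sum(row) / n for row in Y]
    ybar_j = [sum(Y[i][j] for i in range(k)) / k for j in range(n)]
    ybar = sum(ybar_i) / k
    ssb = n * sum((yi - ybar) ** 2 for yi in ybar_i)
    ssd = k * sum((yj - ybar) ** 2 for yj in ybar_j)
    sse = sum((Y[i][j] - ybar_i[i] - ybar_j[j] + ybar) ** 2
              for i in range(k) for j in range(n))
    msb = ssb / (k - 1)
    msd = ssd / (n - 1)
    mse = sse / ((k - 1) * (n - 1))
    # ANOVA variance component estimators under Model II.
    sb2 = (msb - mse) / n
    sd2 = (msd - mse) / k
    se2 = mse
    return ssb, ssd, sse, sb2, sd2, se2
```

A useful check on any implementation is the exact decomposition SSB + SSD + SSE = SST for balanced data.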

Original-scale model associated with Model II

The multiplicative random effects model that is interchangeable with Model II is

X_ij = exp(μ_y + β_i + δ_j + ε_ij)   (i = 1,...,k; j = 1,...,n).   (2.1.1.7)

Again, the X_ij's are lognormal random variates, with the following mean and variance:

μ_x = E(X_ij) = exp[μ_y + (σ_b² + σ_d² + σ_e²)/2],   (2.1.1.8)

σ_x² = Var(X_ij) = μ_x²[exp(σ_b² + σ_d² + σ_e²) − 1].   (2.1.1.9)

Despite the presence of the random date effects (δ_j), the original-scale mean exposure for the i-th

worker may still be derived as the conditional expectation of X_ij given β_i fixed:

μ_xi = E(X_ij | β_i fixed) = exp[μ_y + β_i + (σ_d² + σ_e²)/2].   (2.1.1.10)

Under Model II, the distribution of μ_xi is lognormal, with mean μ_x and variance μ_x²[exp(σ_b²) − 1];


note that this distribution has the same functional form as that for μ_xi under the one-way RE

ANOVA model. It also follows from Model II that the conditional distribution of X_ij given μ_xi is

lognormal, with mean μ_xi and variance μ_xi²[exp(σ_d² + σ_e²) − 1]; hence, we have the three analogous

lognormal distributions associated with Model II.

Bias considerations

As a matter of general interest, it is worth considering what the effect (in terms of bias) would

be if one assumes Model I for balanced data given the sampling scheme associated with the discussion

of Model II. In particular, assuming that Model II is operable, one can show that

E(σ̂²_{w,an}) = σ_d² + σ_e²   and   E(σ̂²_{b,an}) = σ_b² − σ_d²/n,

where σ̂²_{b,an} and σ̂²_{w,an} are as defined in conjunction with Model I. Hence, in general, ignoring

significant day-to-day variability in sets of exposure data containing many measurements taken on

identical dates leads to overestimation of the error variance and underestimation of the between-

worker variance.
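This bias result is easy to verify by simulation: generate data from Model II, apply the Model I ANOVA estimators, and compare the averages with σ_b² − σ_d²/n and σ_d² + σ_e². The simulation settings below are our own illustrative choices:

```python
import random

random.seed(1)
k, n = 10, 5
sb2, sd2, se2 = 1.0, 1.0, 0.5
reps = 4000
acc_b = acc_w = 0.0
for _ in range(reps):
    # Model II data: worker effects, shared date effects, and errors.
    beta = [random.gauss(0.0, sb2 ** 0.5) for _ in range(k)]
    delta = [random.gauss(0.0, sd2 ** 0.5) for _ in range(n)]
    Y = [[beta[i] + delta[j] + random.gauss(0.0, se2 ** 0.5)
          for j in range(n)] for i in range(k)]
    # Model I (one-way) ANOVA quantities, ignoring date effects.
    ybar_i = [sum(row) / n for row in Y]
    ybar = sum(ybar_i) / k
    msb = n * sum((yi - ybar) ** 2 for yi in ybar_i) / (k - 1)
    msw = sum((Y[i][j] - ybar_i[i]) ** 2
              for i in range(k) for j in range(n)) / (k * (n - 1))
    acc_b += (msb - msw) / n   # Model I estimator of sigma_b^2
    acc_w += msw               # Model I estimator of sigma_w^2
mean_b = acc_b / reps          # should approach sb2 - sd2/n = 0.8
mean_w = acc_w / reps          # should approach sd2 + se2 = 1.5
```

The downward bias in the between-worker component and the inflation of the error component appear directly in the two averages.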

2.2 ESTIMATING THE POPULATION MEAN AND VARIANCE

For the following treatment of estimation problems under Models I and II, we will assume

balanced data throughout. Of primary interest is the extent to which UMVU estimators (which are

much more complicated computationally than more intuitive competing estimators) produce gains in

efficiency for estimating the population mean exposure. We will explore this issue in detail under

Model I. In particular, we will compare competing estimators both on the basis of variance alone,

and on the basis of mean squared error (MSE). We will also examine the bias of the MLE, and of a

closely related estimator employing the ANOVA variance components estimators.

2.2.1 Intuitive estimators for mean and variance under Model I

It is clear that the MLEs of (2.1.1.3) and (2.1.1.4) are as follows:

μ̂_{x,1} = exp[μ̂_{y,ml} + (σ̂²_{b,ml} + σ̂²_{w,ml})/2],   (2.2.1.1)

σ̂²_{x,1} = μ̂²_{x,1}[exp(σ̂²_{b,ml} + σ̂²_{w,ml}) − 1].   (2.2.1.2)

As an immediate alternative, we may also consider estimators employing the ANOVA estimators, i.e.,

μ̂_{x,2} = exp[Ȳ·· + (σ̂²_{b,an} + σ̂²_{w,an})/2],   (2.2.1.3)

σ̂²_{x,2} = μ̂²_{x,2}[exp(σ̂²_{b,an} + σ̂²_{w,an}) − 1].   (2.2.1.4)

Recall that Ȳ·· = μ̂_{y,ml}, which is also the BLUE in the balanced case. These estimators are

intuitively appealing, as they utilize well-known estimators in place of the unknown parameters in the

functional expressions.

Expectation and variance of (2.2.1.1) and (2.2.1.3)

To facilitate computation of the exact mean and variance of μ̂_{x,1} and μ̂_{x,2}, we may write the

following:

μ̂_{x,1} = exp(Ȳ·· + a₁MSB + bMSW)

and

μ̂_{x,2} = exp(Ȳ·· + a₂MSB + bMSW),

where a₁ = (k − 1)/(2kn), a₂ = 1/(2n), and b = (n − 1)/(2n). Since Ȳ··, MSB, and MSW are mutually

independent (e.g., Searle et al., 1992), we have

E(μ̂_{x,t}) = E[exp(Ȳ··)] × E[exp(a_tMSB)] × E[exp(bMSW)],   (t = 1,2).

Likewise, since Var(μ̂_{x,t}) = E(μ̂²_{x,t}) − [E(μ̂_{x,t})]², we have

Var(μ̂_{x,t}) = {E[exp(2Ȳ··)] × E[exp(2a_tMSB)] × E[exp(2bMSW)]} − [E(μ̂_{x,t})]²,   (t = 1,2).

Letting τ = E(MSB) = (nσ_b² + σ_w²), we have immediately from the normal moment generating

function that E[exp(Ȳ··)] = exp[μ_y + τ/(2nk)] and E[exp(2Ȳ··)] = exp{2[μ_y + τ/(nk)]}. To work out

the other required expectations, we make use of the following well-known results (e.g., Searle et al.,

1992):


SSB/τ ~ χ²_{k−1}   and   SSW/σ_w² ~ χ²_{k(n−1)}.

Also, we note that, if T ~ χ²_f, then the expectation E[exp(cT)] is finite over the range (c < 1/2), and

is equal to (1 − 2c)^{−f/2} over that range. It follows directly that

E[exp(a_tMSB)] = {1 − 2a_tτ/(k − 1)}^{−(k−1)/2},   (t = 1,2),

and

E[exp(bMSW)] = {1 − 2bσ_w²/[k(n − 1)]}^{−k(n−1)/2};

the corresponding expectations of [exp(2a_tMSB)] and [exp(2bMSW)] are identical to the above, except

that 4a_t replaces 2a_t and 4b replaces 2b, respectively. From these results, we obtain the desired mean

and variance:

E(μ̂_{x,t}) = exp[μ_y + τ/(2nk)] × {1 − 2a_tτ/(k − 1)}^{−(k−1)/2} × {1 − 2bσ_w²/[k(n − 1)]}^{−k(n−1)/2},   (t = 1,2)   (2.2.1.5)

and

Var(μ̂_{x,t}) = exp{2[μ_y + τ/(nk)]} × {1 − 4a_tτ/(k − 1)}^{−(k−1)/2} × {1 − 4bσ_w²/[k(n − 1)]}^{−k(n−1)/2} − [E(μ̂_{x,t})]²,   (t = 1,2).   (2.2.1.6)

There are conditions for the existence of the above means and variances. In particular, E(μ̂_{x,1}) is

defined (i.e., finite) if and only if (τ < kn), and Var(μ̂_{x,1}) is defined if and only if (τ < kn/2).

Similarly, E(μ̂_{x,2}) exists if and only if [τ < n(k − 1)] and Var(μ̂_{x,2}) exists if and only if

[τ < n(k − 1)/2].

Equation (2.2.1.5) provides a direct computational formula for the bias of μ̂_{x,t} (t = 1,2), i.e.,

Bias(μ̂_{x,t}) = [E(μ̂_{x,t}) − μ_x]. Also of interest for later comparisons are the MSEs of these two

intuitively reasonable estimators, i.e.,


MSE(μ̂_{x,t}) = Var(μ̂_{x,t}) + [Bias(μ̂_{x,t})]²,   (t = 1,2).   (2.2.1.7)
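The exact expectation formula (2.2.1.5) can be sanity-checked by simulating Ȳ··, MSB, and MSW directly from the distributional results above; the parameter values below are arbitrary illustrative choices (and satisfy the existence condition τ < kn):

```python
import math, random

def exact_mean_muhat(mu_y, sb2, sw2, k, n, t):
    # (2.2.1.5): exact expectation of mu_hat_{x,t}; t = 1 (ML-based) uses
    # a_1 = (k-1)/(2kn), t = 2 (ANOVA-based) uses a_2 = 1/(2n).
    tau = n * sb2 + sw2
    a = (k - 1) / (2.0 * k * n) if t == 1 else 1.0 / (2.0 * n)
    b = (n - 1) / (2.0 * n)
    f1 = (1.0 - 2.0 * a * tau / (k - 1)) ** (-(k - 1) / 2.0)
    f2 = (1.0 - 2.0 * b * sw2 / (k * (n - 1))) ** (-k * (n - 1) / 2.0)
    return math.exp(mu_y + tau / (2.0 * n * k)) * f1 * f2

# Monte Carlo check for t = 1: Ybar.. is normal, SSB and SSW are scaled
# chi-squares (chi^2_f simulated as Gamma(f/2, scale 2)).
random.seed(7)
mu_y, sb2, sw2, k, n = 0.0, 0.2, 0.2, 15, 5
tau = n * sb2 + sw2
a1, b = (k - 1) / (2.0 * k * n), (n - 1) / (2.0 * n)
acc, reps = 0.0, 50000
for _ in range(reps):
    ybar = random.gauss(mu_y, math.sqrt(tau / (n * k)))
    msb = tau * random.gammavariate((k - 1) / 2.0, 2.0) / (k - 1)
    msw = sw2 * random.gammavariate(k * (n - 1) / 2.0, 2.0) / (k * (n - 1))
    acc += math.exp(ybar + a1 * msb + b * msw)
mc_mean = acc / reps
```

The Monte Carlo average should agree with the closed form to within sampling error, which also makes the bias Bias(μ̂_{x,t}) = E(μ̂_{x,t}) − μ_x directly computable.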

2.2.2 UMVU estimators for mean and variance under Models I and II

To derive UMVU estimators under Models I and II, we make use of the fact that, for random

effects models for balanced data, the overall sample mean Y.., together with the sums of squares, are

complete and sufficient statistics (Searle et al., 1992). Hence, it suffices to find functions of these

statistics that are unbiased for the parameters of interest. The following derivations and results may

be viewed as natural extensions of those for the case of an i.i.d. sample from a lognormal distribution

(e.g., Shimizu, 1988). For Model I, we note that it is possible to derive these UMVU estimators using

existing results (Shimizu, 1988) pertaining to random samples from a multivariate lognormal

distribution; this is because the TWA exposure data vectors from each worker constitute exactly such

a random sample. However, the following derivation is preferable in that it leads to computational

forms for the estimators that require only standard hypergeometric functions, as opposed to zonal

polynomials and hypergeometric functions of matrix argument.

UMVU estimators under Model I

Recalling again that E[exp(Ȳ··)] = exp[μ_y + τ/(2nk)], one can verify that the UMVU estimator for μ_x

under Model I (2.1.1.3) must take the form

μ̂_{x,umvu} = [exp(Ȳ··)] × g₁(SSB, SSW),   (2.2.2.1)

where E[g₁(SSB, SSW)] = exp{(1/2)[(k − 1)σ_b²/k + (nk − 1)σ_w²/(nk)]}. Rearranging terms and

writing the exponentials in terms of series, we obtain

E[g₁(SSB, SSW)] = {Σ_{j=0}^{∞} (1/j!) [(k − 1)τ/(2nk)]^j} × {Σ_{j=0}^{∞} (1/j!) [(n − 1)σ_w²/(2n)]^j}.

Using the distributional results for SSB and SSW (see the preceding section), we have

E(SSB^j) = 2^j Γ[(k − 1)/2 + j] τ^j / Γ[(k − 1)/2]   and   E(SSW^j) = 2^j Γ[k(n − 1)/2 + j] σ_w^{2j} / Γ[k(n − 1)/2].

It follows that


g_1(SSB, SSW) = \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(k-1)}{2kn}\right]^j \frac{\Gamma[(k-1)/2]}{2^j\,\Gamma[(k-1)/2+j]}\,SSB^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(n-1)}{2n}\right]^j \frac{\Gamma[k(n-1)/2]}{2^j\,\Gamma[k(n-1)/2+j]}\,SSW^j\right\}. \qquad (2.2.2.2)

Using (2.2.2.1) and (2.2.2.2), together with the compact notation of generalized hypergeometric

functions [see (1.3.1.7)], we have the desired UMVU estimator:

\hat{\mu}_{x,umvu} = [\exp(\bar{Y}_{..})] \times {}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-1)}{4kn}\,SSB\right) \times {}_0F_1\!\left(\frac{k(n-1)}{2};\; \frac{(n-1)}{4n}\,SSW\right). \qquad (2.2.2.3)
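For reference, (2.2.2.3) is straightforward to evaluate numerically. The sketch below is our own illustration (not part of the original development; all function names are ours): it evaluates {}_0F_1 by direct summation of its defining series and applies it to balanced log-scale data.

```python
import math

def hyp0f1(b, z, tol=1e-12, max_terms=1000):
    """Evaluate 0F1(b; z) = sum_j z^j / [(b)_j j!] by direct series summation,
    using the term recursion t_{j+1} = t_j * z / ((b + j)(j + 1))."""
    term, total = 1.0, 1.0
    for j in range(max_terms):
        term *= z / ((b + j) * (j + 1))
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def umvu_mean_model_I(y):
    """UMVU estimator (2.2.2.3) of mu_x from balanced data y[i][j] = ln(X_ij),
    with k workers and n repeated measurements per worker."""
    k, n = len(y), len(y[0])
    worker_means = [sum(row) / n for row in y]
    grand_mean = sum(worker_means) / k
    # Between- and within-worker sums of squares for the balanced one-way layout
    ssb = n * sum((m - grand_mean) ** 2 for m in worker_means)
    ssw = sum((yij - m) ** 2 for row, m in zip(y, worker_means) for yij in row)
    return (math.exp(grand_mean)
            * hyp0f1((k - 1) / 2, (k - 1) * ssb / (4 * k * n))
            * hyp0f1(k * (n - 1) / 2, (n - 1) * ssw / (4 * n)))
```

When SSB = SSW = 0, both {}_0F_1 factors equal 1 and the estimator reduces to \exp(\bar{Y}_{..}), as expected.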

To derive the UMVU estimator for \sigma_x^2 of (2.1.1.4), we first note that \sigma_x^2 = \psi_1 - \psi_2, where

\psi_1 = \exp[2\mu_y + 2(\sigma_b^2 + \sigma_w^2)] \quad \text{and} \quad \psi_2 = \exp[2\mu_y + \sigma_b^2 + \sigma_w^2].

We also use the fact that E[\exp(2\bar{Y}_{..})] = \exp\{2[\mu_y + \tau/(nk)]\}. It follows that we may write

\hat{\sigma}^2_{x,umvu} = [\exp(2\bar{Y}_{..})] \times [g_2(SSB, SSW) - g_3(SSB, SSW)], \qquad (2.2.2.4)

where E[g_2(SSB, SSW)] = \exp\{2[(k-1)\sigma_b^2/k + (nk-1)\sigma_w^2/(nk)]\} and E[g_3(SSB, SSW)] = \exp\{(k-2)\sigma_b^2/k + (nk-2)\sigma_w^2/(nk)\}. Again using E(SSB^j) and E(SSW^j) and series expansions, we

find that

g_2(SSB, SSW) = \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{2(k-1)}{kn}\right]^j \frac{\Gamma[(k-1)/2]}{2^j\,\Gamma[(k-1)/2+j]}\,SSB^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{2(n-1)}{n}\right]^j \frac{\Gamma[k(n-1)/2]}{2^j\,\Gamma[k(n-1)/2+j]}\,SSW^j\right\} \qquad (2.2.2.5)

and

g_3(SSB, SSW) = \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(k-2)}{kn}\right]^j \frac{\Gamma[(k-1)/2]}{2^j\,\Gamma[(k-1)/2+j]}\,SSB^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(n-1)}{n}\right]^j \frac{\Gamma[k(n-1)/2]}{2^j\,\Gamma[k(n-1)/2+j]}\,SSW^j\right\}. \qquad (2.2.2.6)


Using (2.2.2.4)-(2.2.2.6), we may write the desired UMVU estimator as:

\hat{\sigma}^2_{x,umvu} = [\exp(2\bar{Y}_{..})] \times \left\{{}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-1)}{kn}\,SSB\right) {}_0F_1\!\left(\frac{k(n-1)}{2};\; \frac{(n-1)}{n}\,SSW\right) - {}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-2)}{2kn}\,SSB\right) {}_0F_1\!\left(\frac{k(n-1)}{2};\; \frac{(n-1)}{2n}\,SSW\right)\right\}. \qquad (2.2.2.7)
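The variance estimator can be evaluated in the same spirit. The following sketch is our own illustration (names are ours); it sums each {}_0F_1 series directly and assembles (2.2.2.7) from the two sums of squares.

```python
import math

def hyp0f1(b, z, tol=1e-12, max_terms=1000):
    # Direct series summation of 0F1(b; z).
    term, total = 1.0, 1.0
    for j in range(max_terms):
        term *= z / ((b + j) * (j + 1))
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def umvu_variance_model_I(y):
    """UMVU estimator (2.2.2.7) of sigma_x^2 from balanced y[i][j] = ln(X_ij)."""
    k, n = len(y), len(y[0])
    worker_means = [sum(row) / n for row in y]
    grand_mean = sum(worker_means) / k
    ssb = n * sum((m - grand_mean) ** 2 for m in worker_means)
    ssw = sum((yij - m) ** 2 for row, m in zip(y, worker_means) for yij in row)
    # g2 and g3 are the two products of 0F1 factors in (2.2.2.7)
    g2 = (hyp0f1((k - 1) / 2, (k - 1) * ssb / (k * n))
          * hyp0f1(k * (n - 1) / 2, (n - 1) * ssw / n))
    g3 = (hyp0f1((k - 1) / 2, (k - 2) * ssb / (2 * k * n))
          * hyp0f1(k * (n - 1) / 2, (n - 1) * ssw / (2 * n)))
    return math.exp(2 * grand_mean) * (g2 - g3)
```

With degenerate data (all observations equal), SSB = SSW = 0, so g_2 = g_3 = 1 and the estimate is zero, as it should be.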

UMVU estimators under Model II

For Model II, we have E[\exp(\bar{Y}_{..})] = \exp[\mu_y + (n\sigma_b^2 + k\sigma_d^2 + \sigma_e^2)/(2nk)]; it follows that the

UMVU estimator for \mu_x under Model II (2.1.1.8) must take the form

\hat{\mu}_{x,umvu} = [\exp(\bar{Y}_{..})] \times h_1(SSB, SSD, SSE), \qquad (2.2.2.8)

where E[h_1(SSB, SSD, SSE)] = \exp\{(1/2)[(k-1)\sigma_b^2/k + (n-1)\sigma_d^2/n + (nk-1)\sigma_e^2/(nk)]\}. We

again appeal to standard results for random effects models (e.g., Searle et al., 1992), which ensure

that \bar{Y}_{..}, SSB, SSD, and SSE are mutually independent, and are complete and sufficient statistics.

Also, the following distributional results hold:

\frac{SSB}{w_1} \sim \chi^2_{k-1}, \qquad \frac{SSD}{w_2} \sim \chi^2_{n-1}, \qquad \text{and} \qquad \frac{SSE}{\sigma_e^2} \sim \chi^2_{(k-1)(n-1)},

where w_1 = (n\sigma_b^2 + \sigma_e^2) and w_2 = (k\sigma_d^2 + \sigma_e^2). Again, rearranging terms and writing the exponentials

in terms of series, we have

E[h_1(SSB, SSD, SSE)] = \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(k-1)}{2nk}\right]^j w_1^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(n-1)}{2nk}\right]^j w_2^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(n-1)(k-1)}{2nk}\right]^j (\sigma_e^2)^j\right\}.

Using the distributional results for SSB, SSD, and SSE, we have

E(SSB^j) = \frac{2^j\,\Gamma[(k-1)/2 + j]}{\Gamma[(k-1)/2]}\,w_1^j, \qquad E(SSD^j) = \frac{2^j\,\Gamma[(n-1)/2 + j]}{\Gamma[(n-1)/2]}\,w_2^j,

and

E(SSE^j) = \frac{2^j\,\Gamma[(k-1)(n-1)/2 + j]}{\Gamma[(k-1)(n-1)/2]}\,(\sigma_e^2)^j.

It follows that


h_1(SSB, SSD, SSE) = \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(k-1)}{2kn}\right]^j \frac{\Gamma[(k-1)/2]}{2^j\,\Gamma[(k-1)/2+j]}\,SSB^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(n-1)}{2kn}\right]^j \frac{\Gamma[(n-1)/2]}{2^j\,\Gamma[(n-1)/2+j]}\,SSD^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(n-1)(k-1)}{2kn}\right]^j \frac{\Gamma[(n-1)(k-1)/2]}{2^j\,\Gamma[(n-1)(k-1)/2+j]}\,SSE^j\right\}. \qquad (2.2.2.9)

Equations (2.2.2.8) and (2.2.2.9) give us the UMVU estimator for \mu_x under Model II:

\hat{\mu}_{x,umvu} = [\exp(\bar{Y}_{..})] \times {}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-1)}{4kn}\,SSB\right) \times {}_0F_1\!\left(\frac{n-1}{2};\; \frac{(n-1)}{4kn}\,SSD\right) \times {}_0F_1\!\left(\frac{(k-1)(n-1)}{2};\; \frac{(k-1)(n-1)}{4kn}\,SSE\right). \qquad (2.2.2.10)
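Under Model II, only the sums of squares (now from the balanced two-way crossed layout) and the {}_0F_1 arguments change. The sketch below is our own illustration and assumes one observation per worker-day cell:

```python
import math

def hyp0f1(b, z, tol=1e-12, max_terms=1000):
    # Direct series summation of 0F1(b; z).
    term, total = 1.0, 1.0
    for j in range(max_terms):
        term *= z / ((b + j) * (j + 1))
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def umvu_mean_model_II(y):
    """UMVU estimator (2.2.2.10) of mu_x under Model II, for the balanced
    crossed layout y[i][j] = ln(X_ij), worker i = 1..k, day j = 1..n."""
    k, n = len(y), len(y[0])
    row_means = [sum(row) / n for row in y]
    col_means = [sum(y[i][j] for i in range(k)) / k for j in range(n)]
    grand = sum(row_means) / k
    ssb = n * sum((m - grand) ** 2 for m in row_means)        # workers
    ssd = k * sum((m - grand) ** 2 for m in col_means)        # days
    sse = sum((y[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(k) for j in range(n))            # residual
    return (math.exp(grand)
            * hyp0f1((k - 1) / 2, (k - 1) * ssb / (4 * k * n))
            * hyp0f1((n - 1) / 2, (n - 1) * ssd / (4 * k * n))
            * hyp0f1((k - 1) * (n - 1) / 2,
                     (k - 1) * (n - 1) * sse / (4 * k * n)))
```

Again, constant data give SSB = SSD = SSE = 0 and the estimator collapses to \exp(\bar{Y}_{..}).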

To derive the UMVU estimator for \sigma_x^2 of (2.1.1.9), we first note that \sigma_x^2 = \psi_3 - \psi_4, where

\psi_3 = \exp[2\mu_y + 2(\sigma_b^2 + \sigma_d^2 + \sigma_e^2)] \quad \text{and} \quad \psi_4 = \exp[2\mu_y + \sigma_b^2 + \sigma_d^2 + \sigma_e^2].

We also use the fact that E[\exp(2\bar{Y}_{..})] = \exp\{2[\mu_y + (n\sigma_b^2 + k\sigma_d^2 + \sigma_e^2)/(nk)]\}. In this case, we may write

\hat{\sigma}^2_{x,umvu} = [\exp(2\bar{Y}_{..})] \times [h_2(SSB, SSD, SSE) - h_3(SSB, SSD, SSE)], \qquad (2.2.2.11)

where E[h_2(SSB, SSD, SSE)] = \exp\{2[(k-1)\sigma_b^2/k + (n-1)\sigma_d^2/n + (nk-1)\sigma_e^2/(nk)]\} and E[h_3(SSB, SSD, SSE)] = \exp\{(k-2)\sigma_b^2/k + (n-2)\sigma_d^2/n + (nk-2)\sigma_e^2/(nk)\}. Using E(SSB^j), E(SSD^j), and E(SSE^j) and series expansions, we find in this case that

h_2(SSB, SSD, SSE) = \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{2(k-1)}{kn}\right]^j \frac{\Gamma[(k-1)/2]}{2^j\,\Gamma[(k-1)/2+j]}\,SSB^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{2(n-1)}{kn}\right]^j \frac{\Gamma[(n-1)/2]}{2^j\,\Gamma[(n-1)/2+j]}\,SSD^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{2(n-1)(k-1)}{kn}\right]^j \frac{\Gamma[(n-1)(k-1)/2]}{2^j\,\Gamma[(n-1)(k-1)/2+j]}\,SSE^j\right\} \qquad (2.2.2.12)

and

h_3(SSB, SSD, SSE) = \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(k-2)}{kn}\right]^j \frac{\Gamma[(k-1)/2]}{2^j\,\Gamma[(k-1)/2+j]}\,SSB^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(n-2)}{kn}\right]^j \frac{\Gamma[(n-1)/2]}{2^j\,\Gamma[(n-1)/2+j]}\,SSD^j\right\} \times \left\{\sum_{j=0}^{\infty}\frac{1}{j!}\left[\frac{(n-1)(k-1)+1}{kn}\right]^j \frac{\Gamma[(n-1)(k-1)/2]}{2^j\,\Gamma[(n-1)(k-1)/2+j]}\,SSE^j\right\}. \qquad (2.2.2.13)


From (2.2.2.11)-(2.2.2.13), we obtain the desired UMVU estimator as:

\hat{\sigma}^2_{x,umvu} = [\exp(2\bar{Y}_{..})] \times \left\{{}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-1)}{kn}\,SSB\right) {}_0F_1\!\left(\frac{n-1}{2};\; \frac{(n-1)}{kn}\,SSD\right) {}_0F_1\!\left(\frac{(n-1)(k-1)}{2};\; \frac{(n-1)(k-1)}{kn}\,SSE\right)\right.

\left. -\; {}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-2)}{2kn}\,SSB\right) {}_0F_1\!\left(\frac{n-1}{2};\; \frac{(n-2)}{2kn}\,SSD\right) {}_0F_1\!\left(\frac{(n-1)(k-1)}{2};\; \frac{(n-1)(k-1)+1}{2kn}\,SSE\right)\right\}. \qquad (2.2.2.14)

2.2.3 Variances of UMVU estimators for the mean under Models I and II

The estimators derived in the previous section are of greater utility in conjunction with some

measure of their variability. In this section, we derive the exact variance of the UMVU estimators for

\mu_x under Models I and II. These results are an extension of results provided by Shimizu (1988) for

the case of a random sample from a lognormal distribution. Using similar derivations, one could

derive the variances of the corresponding UMVU estimators for \sigma_x^2; however, we will omit these here.

In section 2.2.4, we will utilize the variance of \hat{\mu}_{x,umvu} derived here under Model I, as we study relative

efficiencies.

Variance of \hat{\mu}_{x,umvu} under Model I

Let us write \hat{\mu}_{x,umvu} of (2.2.2.3) as \{XYZ\}, where X = [\exp(\bar{Y}_{..})], Y = {}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-1)}{4kn}\,SSB\right), and Z = {}_0F_1\!\left(\frac{k(n-1)}{2};\; \frac{(n-1)}{4n}\,SSW\right).

Since X, Y, and Z are mutually independent, we have that Var(\hat{\mu}_{x,umvu}) = [E(X^2)E(Y^2)E(Z^2)] - \mu_x^2.

We have already seen that E(X^2) = E[\exp(2\bar{Y}_{..})] = \exp\{2[\mu_y + \tau/(nk)]\}. Now, let us derive

E(Y^2) = E\left\{\left[{}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-1)}{4kn}\,SSB\right)\right]^2\right\}.

From Erdélyi et al. (1953), we have in general that [{}_0F_1(a;\, b)]^2 = {}_1F_2[(2a-1)/2;\; a,\, 2a-1;\; 4b].

[Note that the specific class of hypergeometric functions denoted as {}_1F_2 falls under definition

(1.3.1.7).] Hence, we may write E(Y^2) = E\{{}_1F_2[c;\, d,\, e;\, f(SSB)]\},


where c = \frac{k-2}{2}, d = \frac{k-1}{2}, e = (k-2), and f = \frac{(k-1)}{kn}.

Substituting back in for c, d, e, and f, and simplifying somewhat in terms of the hypergeometric

function notation, we have

E(Y^2) = E\{{}_1F_2[c;\, d,\, e;\, f(SSB)]\} = {}_1F_1\!\left(\frac{k-2}{2};\; (k-2);\; \frac{2(k-1)\tau}{kn}\right). \qquad (2.2.3.1)

In similar fashion, we may write E(Z^2) = E\{{}_1F_2[g;\, h,\, i;\, j(SSW)]\},

where g = \frac{k(n-1)-1}{2}, h = \frac{k(n-1)}{2}, i = [k(n-1)-1], and j = \frac{(n-1)}{n}.

Substituting back in for g, h, i, and j, and again simplifying somewhat, we find that

E(Z^2) = E\{{}_1F_2[g;\, h,\, i;\, j(SSW)]\} = {}_1F_1\!\left(\frac{k(n-1)-1}{2};\; [k(n-1)-1];\; \frac{2(n-1)\sigma_w^2}{n}\right). \qquad (2.2.3.2)

Some simplification of the final result may be achieved by noting that, in general,

{}_1F_1(a;\; 2a;\; z) = \exp(z/2) \times {}_0F_1\!\left(a + \frac{1}{2};\; \frac{z^2}{16}\right)

[Erdélyi et al. (1953); Shimizu (1988)]. Applying this result to (2.2.3.1) and (2.2.3.2), we have

E(Y^2) = \exp\!\left[\frac{(k-1)\tau}{kn}\right] \times {}_0F_1\!\left(\frac{k-1}{2};\; \frac{[(k-1)\tau]^2}{(2kn)^2}\right) \qquad (2.2.3.3)

and

E(Z^2) = \exp\!\left[\frac{(n-1)\sigma_w^2}{n}\right] \times {}_0F_1\!\left(\frac{k(n-1)}{2};\; \frac{[(n-1)\sigma_w^2]^2}{(2n)^2}\right). \qquad (2.2.3.4)
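The hypergeometric identity {}_1F_1(a; 2a; z) = e^{z/2}\,{}_0F_1(a + 1/2; z^2/16) invoked here is easy to check numerically by summing both defining series; the snippet below is our own illustration (not part of the original text) and asserts agreement at a few argument values.

```python
import math

def hyp0f1(b, z, terms=120):
    # 0F1(b; z) by direct series summation.
    total, term = 1.0, 1.0
    for j in range(terms):
        term *= z / ((b + j) * (j + 1))
        total += term
    return total

def hyp1f1(a, b, z, terms=120):
    # Kummer's confluent hypergeometric 1F1(a; b; z) by direct series.
    total, term = 1.0, 1.0
    for j in range(terms):
        term *= (a + j) * z / ((b + j) * (j + 1))
        total += term
    return total

# Check 1F1(a; 2a; z) = exp(z/2) * 0F1(a + 1/2; z^2/16) at a few points.
for a in (1.0, 2.5, 7.0):
    for z in (0.3, 1.7):
        lhs = hyp1f1(a, 2 * a, z)
        rhs = math.exp(z / 2) * hyp0f1(a + 0.5, z * z / 16)
        assert abs(lhs - rhs) < 1e-10 * abs(lhs)
```

The case a = 1 can also be verified against the closed form {}_1F_1(1; 2; z) = (e^z - 1)/z.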


Using (2.2.3.3)-(2.2.3.4) and simplifying, we obtain finally

Var(\hat{\mu}_{x,umvu}) = \mu_x^2\left\{\exp\!\left[\frac{\tau}{kn}\right] \times {}_0F_1\!\left(\frac{k-1}{2};\; \frac{[(k-1)\tau]^2}{(2kn)^2}\right) \times {}_0F_1\!\left(\frac{k(n-1)}{2};\; \frac{[(n-1)\sigma_w^2]^2}{(2n)^2}\right) - 1\right\}. \qquad (2.2.3.5)
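Expression (2.2.3.5) can be evaluated directly for given parameter values; the sketch below is our own illustration, with parameter names as our own conventions.

```python
import math

def hyp0f1(b, z, tol=1e-14, max_terms=2000):
    # Direct series summation of 0F1(b; z).
    term, total = 1.0, 1.0
    for j in range(max_terms):
        term *= z / ((b + j) * (j + 1))
        total += term
        if abs(term) < tol * abs(total):
            break
    return total

def var_umvu_mean_model_I(mu_y, sigma2_b, sigma2_w, n, k):
    """Exact variance (2.2.3.5) of the UMVU estimator of mu_x under Model I."""
    tau = n * sigma2_b + sigma2_w
    mu_x = math.exp(mu_y + (sigma2_b + sigma2_w) / 2)
    factor = (math.exp(tau / (k * n))
              * hyp0f1((k - 1) / 2, ((k - 1) * tau) ** 2 / (2 * k * n) ** 2)
              * hyp0f1(k * (n - 1) / 2,
                       ((n - 1) * sigma2_w) ** 2 / (2 * n) ** 2))
    return mu_x ** 2 * (factor - 1.0)
```

The variance is always positive (the bracketed factor exceeds 1) and shrinks as the number of workers k grows, as one would expect of an unbiased estimator based on more data.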

Variance of \hat{\mu}_{x,umvu} under Model II

Recycling notation somewhat, let us write \hat{\mu}_{x,umvu} of (2.2.2.10) as \{WXYZ\},

where W = [\exp(\bar{Y}_{..})], X = {}_0F_1\!\left(\frac{k-1}{2};\; \frac{(k-1)}{4kn}\,SSB\right), Y = {}_0F_1\!\left(\frac{n-1}{2};\; \frac{(n-1)}{4kn}\,SSD\right),

and Z = {}_0F_1\!\left(\frac{(k-1)(n-1)}{2};\; \frac{(k-1)(n-1)}{4kn}\,SSE\right).

By mutual independence, we have that Var(\hat{\mu}_{x,umvu}) = [E(W^2)E(X^2)E(Y^2)E(Z^2)] - \mu_x^2. We have

already seen that E(W^2) = E[\exp(2\bar{Y}_{..})] = \exp\{2[\mu_y + (n\sigma_b^2 + k\sigma_d^2 + \sigma_e^2)/(nk)]\}. Using arguments

completely analogous to those in the previous subsection, we find in the case of Model II that

E(X^2) = \exp[(k-1)w_1/(kn)] \times {}_0F_1\!\left(\frac{k-1}{2};\; \frac{[(k-1)w_1]^2}{(2kn)^2}\right), \qquad (2.2.3.6)

E(Y^2) = \exp[(n-1)w_2/(kn)] \times {}_0F_1\!\left(\frac{n-1}{2};\; \frac{[(n-1)w_2]^2}{(2kn)^2}\right), \qquad (2.2.3.7)

and

E(Z^2) = \exp[(k-1)(n-1)\sigma_e^2/(kn)] \times {}_0F_1\!\left(\frac{(k-1)(n-1)}{2};\; \frac{[(k-1)(n-1)\sigma_e^2]^2}{(2kn)^2}\right). \qquad (2.2.3.8)

It follows from (2.2.3.6)-(2.2.3.8) that, under Model II,

Var(\hat{\mu}_{x,umvu}) = \mu_x^2\left\{\exp[(n\sigma_b^2 + k\sigma_d^2 + \sigma_e^2)/(kn)] \times {}_0F_1\!\left(\frac{k-1}{2};\; \frac{[(k-1)w_1]^2}{(2kn)^2}\right) \times {}_0F_1\!\left(\frac{n-1}{2};\; \frac{[(n-1)w_2]^2}{(2kn)^2}\right) \times {}_0F_1\!\left(\frac{(k-1)(n-1)}{2};\; \frac{[(k-1)(n-1)\sigma_e^2]^2}{(2kn)^2}\right) - 1\right\}. \qquad (2.2.3.9)


2.2.4 Efficiency considerations for estimating \mu_x under Model I

As mentioned in section 2.2.1, the intuitively appealing estimators \hat{\mu}_{x,1} and \hat{\mu}_{x,2} [given by

(2.2.1.1) and (2.2.1.3), respectively] for the population mean \mu_x under Model I are biased. In this

section, we examine the extent of this bias for a selection of population parameter values, n, and k.

Using variances derived in the preceding sections, we also examine relative efficiencies of \hat{\mu}_{x,t} (t = 1, 2)

with respect to \hat{\mu}_{x,umvu} of (2.2.2.3). This study of relative efficiencies will be done first ignoring the

bias of \hat{\mu}_{x,t} (t = 1, 2) (i.e., taking ratios of the variances), and then accounting for the bias (i.e.,

taking ratios of the MSEs).

The following tables point out ranges of true parameter values over which the extra effort

required to calculate \hat{\mu}_{x,umvu} of (2.2.2.3) pays its greatest dividends. Of course, since it is impossible

to tabulate all possible cases (with respect to n and k in particular), we have selected particular

conditions for tabulation. These conditions essentially cover the ranges of variance components as

estimated for a large number of groups of nickel industry workers (data from Rappaport et al.,

1995a). The selections of n and k may seem somewhat enigmatic; some clarification of these

selections will be made in Chapter 3. The following study is in the spirit of Attfield and Hewett

(1992), who made analogous comparisons under conditions of an i.i.d. sample from a lognormal

distribution.

Bias of \hat{\mu}_{x,t} (t = 1, 2) under Model I

Table 2.3 summarizes calculations utilizing equation (2.2.1.5). Note that some entries are

missing because the required expectation does not exist. The parameter \phi represents the ratio of the

between- to the within-worker variance component (i.e., \phi = \sigma_b^2/\sigma_w^2). For all entries in Tables 2.3-

2.5, the population parameter \mu_y takes the value -3 (again, this is fairly representative of the groups

considered by Rappaport et al., 1995a).

Efficiencies of \hat{\mu}_{x,t} (t = 1, 2) relative to \hat{\mu}_{x,umvu} under Model I

Table 2.4 displays the ratio of Var(\hat{\mu}_{x,umvu}) [equation (2.2.3.5)] to Var(\hat{\mu}_{x,t}) (t = 1, 2)

[equation (2.2.1.6)] over the same ranges covered by Table 2.3. This measure of efficiency ignores the

bias of \hat{\mu}_{x,t}. In Table 2.5, we display the ratio of MSE(\hat{\mu}_{x,umvu}) to MSE(\hat{\mu}_{x,t}) [equation (2.2.1.7)];

the former MSE is of course identical to Var(\hat{\mu}_{x,umvu}).


Table 2.3: Bias calculations for intuitive estimators of \mu_x under Model I

(\phi^a, \sigma_w^2)   (n, k)    \mu_x^b     Bias(\hat{\mu}_{x,1})^c [MLE]   Bias(\hat{\mu}_{x,2})^c [ANOVA]
(.1, .5)     (7, 26)   .066      3 x 10^-5     2 x 10^-4
(.1, 1)      (7, 26)   .086      1 x 10^-4     6 x 10^-4
(.1, 3)      (6, 32)   .259      .004          .007
(.25, .5)    (3, 25)   .068      9 x 10^-5     5 x 10^-4
(.25, 1)     (3, 24)   .093      5 x 10^-4     .002
(.25, 3)     (2, 37)   .325      .017          .028
(.5, .5)     (2, 26)   .072      2 x 10^-4     9 x 10^-4
(.5, 1)      (2, 25)   .105      .001          .004
(.5, 3)      (2, 26)   .472      .056          .092
(.75, .5)    (2, 19)   .077      4 x 10^-4     .002
(.75, 1)     (2, 19)   .119      .003          .007
(.75, 3)     (2, 20)   .687      .170          .275
(1, .5)      (2, 16)   .082      8 x 10^-4     .003
(1, 1)       (2, 16)   .135      .005          .013
(1, 3)       (2, 17)   1.000     .458          .750
(2.5, .5)    (2, 12)   .119      .006          .015
(2.5, 1)     (2, 12)   .287      .068          .133
(2.5, 3)     (2, 13)   9.488     174.524       630.107
(5, .5)      (2, 10)   .223      .052          .111
(5, 1)       (2, 10)   1.000     2.079         4.939
(5, 3)       (2, 11)   403.429   ***           ***

a: the variance ratio, \sigma_b^2/\sigma_w^2
b: true value of \mu_x \{= \exp[\mu_y + (\sigma_b^2 + \sigma_w^2)/2]\}
c: Bias(\hat{\mu}_{x,t}) = [E(\hat{\mu}_{x,t}) - \mu_x], computed using (2.2.1.5); \hat{\mu}_{x,1} is the MLE and \hat{\mu}_{x,2} uses the ANOVA estimates
*** Signifies entry is non-existent


Table 2.4: Variance efficiencies^a of intuitive estimators of \mu_x relative to UMVU under Model I

(\phi^b, \sigma_w^2)   (n, k)    \mu_x^c     Eff(\hat{\mu}_{x,1})^d [MLE]   Eff(\hat{\mu}_{x,2})^d [ANOVA]
(.1, .5)     (7, 26)   .066      .998    .989
(.1, 1)      (7, 26)   .086      .993    .977
(.1, 3)      (6, 32)   .259      .950    .915
(.25, .5)    (3, 25)   .068      .993    .972
(.25, 1)     (3, 24)   .093      .975    .934
(.25, 3)     (2, 37)   .325      .836    .761
(.5, .5)     (2, 26)   .072      .986    .951
(.5, 1)      (2, 25)   .105      .947    .883
(.5, 3)      (2, 26)   .472      .663    .547
(.75, .5)    (2, 19)   .077      .972    .914
(.75, 1)     (2, 19)   .119      .900    .799
(.75, 3)     (2, 20)   .687      .426    .296
(1, .5)      (2, 16)   .082      .955    .873
(1, 1)       (2, 16)   .135      .839    .704
(1, 3)       (2, 17)   1.000     .200    .105
(2.5, .5)    (2, 12)   .119      .790    .611
(2.5, 1)     (2, 12)   .287      .359    .191
(2.5, 3)     (2, 13)   9.488     ***     ***
(5, .5)      (2, 10)   .223      .331    .150
(5, 1)       (2, 10)   1.000     ***     ***
(5, 3)       (2, 11)   403.429   ***     ***

a: efficiency defined here as Var(\hat{\mu}_{x,umvu})/Var(\hat{\mu}_{x,t}) (t = 1, 2) [equations (2.2.3.5) and (2.2.1.6)]
b: the variance ratio, \sigma_b^2/\sigma_w^2
c: true value of \mu_x \{= \exp[\mu_y + (\sigma_b^2 + \sigma_w^2)/2]\}
d: \hat{\mu}_{x,1} is the MLE; \hat{\mu}_{x,2} uses the ANOVA estimates
*** Signifies entry is non-existent [i.e., Var(\hat{\mu}_{x,t}) is not finite]


Table 2.5: MSE efficiencies^a of intuitive estimators of \mu_x relative to UMVU under Model I

(\phi^b, \sigma_w^2)   (n, k)    \mu_x^c     Eff(\hat{\mu}_{x,1})^d [MLE]   Eff(\hat{\mu}_{x,2})^d [ANOVA]
(.1, .5)     (7, 26)   .066      .998    .988
(.1, 1)      (7, 26)   .086      .993    .973
(.1, 3)      (6, 32)   .259      .947    .903
(.25, .5)    (3, 25)   .068      .993    .969
(.25, 1)     (3, 24)   .093      .974    .926
(.25, 3)     (2, 37)   .325      .825    .737
(.5, .5)     (2, 26)   .072      .985    .946
(.5, 1)      (2, 25)   .105      .945    .869
(.5, 3)      (2, 26)   .472      .647    .519
(.75, .5)    (2, 19)   .077      .971    .904
(.75, 1)     (2, 19)   .119      .896    .779
(.75, 3)     (2, 20)   .687      .411    .279
(1, .5)      (2, 16)   .082      .953    .860
(1, 1)       (2, 16)   .135      .833    .680
(1, 3)       (2, 17)   1.000     .194    .101
(2.5, .5)    (2, 12)   .119      .784    .586
(2.5, 1)     (2, 12)   .287      .350    .182
(2.5, 3)     (2, 13)   9.488     ***     ***
(5, .5)      (2, 10)   .223      .324    .144
(5, 1)       (2, 10)   1.000     ***     ***
(5, 3)       (2, 11)   403.429   ***     ***

a: efficiency defined here as MSE(\hat{\mu}_{x,umvu})/MSE(\hat{\mu}_{x,t}) (t = 1, 2) [equations (2.2.3.5) and (2.2.1.7)]
b: the variance ratio, \sigma_b^2/\sigma_w^2
c: true value of \mu_x \{= \exp[\mu_y + (\sigma_b^2 + \sigma_w^2)/2]\}
d: \hat{\mu}_{x,1} is the MLE; \hat{\mu}_{x,2} uses the ANOVA estimates
*** Signifies entry is non-existent [i.e., Var(\hat{\mu}_{x,t}) is not finite]


Comments on Tables 2.3-2.5

The key points illustrated in Table 2.3 may be summarized as follows. First, both \hat{\mu}_{x,1} (the

MLE) and \hat{\mu}_{x,2} (the estimator using the ANOVA variance component estimators) are biased upward

in all cases considered. For both estimators, the bias increases for fixed \sigma_w^2 as \phi increases; likewise,

the bias increases for fixed \phi as \sigma_w^2 increases. The bias ranges from being essentially negligible (for

low values of \phi and/or \sigma_w^2) to absurd (e.g., when \phi = 2.5 and \sigma_w^2 = 3). In all cases, the bias of the

MLE (2.2.1.1) is smaller than that of the alternative estimator (2.2.1.3).

In Table 2.4, we find that \hat{\mu}_{x,umvu}, which by definition has minimum variance among all

unbiased estimators of \mu_x, also has smaller variance than the two intuitively reasonable estimators

(\hat{\mu}_{x,1} and \hat{\mu}_{x,2}) in all cases considered. Again, we find that the performance of both \hat{\mu}_{x,1} and \hat{\mu}_{x,2}

(this time, relative to \hat{\mu}_{x,umvu}) worsens for fixed \sigma_w^2 as \phi increases and for fixed \phi as \sigma_w^2 increases.

The loss of efficiency as \sigma_w^2 increases for a given value of \phi is quite striking in many cases; clearly

there are cases in which both alternative estimators are woefully inefficient relative to the UMVU.

Table 2.5 displays efficiency calculations that are perhaps more standard, as the bias of \hat{\mu}_{x,t}

(t = 1, 2) is accounted for and the relative MSEs are tabulated. Of course, all entries in this table are

less than or equal to the corresponding entries in Table 2.4. However, the changes are generally

minor.

Based on parameter estimates for many separate groups of workers, we may state that most of

the nickel exposure data to which Rappaport et al. (1995a) apply their proposed exposure assessment

protocol tend to suggest "relatively" low values of the population parameter \phi (= \sigma_b^2/\sigma_w^2). However,

some estimates of \phi significantly greater than 1, as well as estimates of \sigma_w^2 greater than 3, were

obtained. Clearly, our study indicates that \hat{\mu}_{x,2} should never be favored over the MLE (\hat{\mu}_{x,1}). In

general, bias and efficiency calculations are recommended (at least utilizing estimated parameter

values for a particular set of data) before using the MLE in place of the UMVU (2.2.2.3) to estimate

the population mean exposure (\mu_x) in the balanced case under Model I. If the bias and efficiency loss

using the MLE are minimal, one may wish to report it on the basis of its intuitive appeal. However,

a strong argument in general favor of the UMVU estimator may be made: its expectation (which is,

of course, \mu_x) is always finite provided that \mu_x is finite. This is not always true of the MLE (see

Table 2.3 and discussion in Section 2.2.1).

2.2.5 Approximate confidence intervals for \mu_x under Models I and II

In sections 2.2.1-2.2.4, we have considered estimation of the population mean (\mu_x) and

variance (\sigma_x^2), and have derived some related theoretical expressions. In this section, we present

large-sample based approximate 100(1-\alpha)\% confidence intervals (CIs) for \mu_x under Models I and II. Such

an approximate CI is used under Model I (assuming balanced data) by Rappaport et al. (1995a) to

aid in the selection of an intervention strategy in the event that workplace exposures are deemed

unacceptable using an assessment method discussed in detail in Chapter 3.

Assuming that Model I (2.1.1.1) is operable, consider the vector \hat{\psi}_{ml} = (\hat{\mu}_{y,ml}, \hat{\sigma}^2_{b,ml}, \hat{\sigma}^2_{w,ml})'. The asymptotic multivariate normal distribution of \hat{\psi}_{ml} permits construction of an

approximate CI for \ln(\mu_x) = [\mu_y + (\sigma_b^2 + \sigma_w^2)/2], expressed as follows:

[\widehat{\ln(\mu_x)}] \pm z_{1-\alpha/2}\sqrt{\widehat{Var}[\widehat{\ln(\mu_x)}]}, \qquad (2.2.5.1)

where [\widehat{\ln(\mu_x)}] is the MLE of \ln(\mu_x) and \widehat{Var}[\widehat{\ln(\mu_x)}] is the MLE of Var[\widehat{\ln(\mu_x)}] = \{Var(\hat{\mu}_{y,ml}) + [Var(\hat{\sigma}^2_{b,ml}) + Var(\hat{\sigma}^2_{w,ml})]/4 + Cov(\hat{\sigma}^2_{b,ml}, \hat{\sigma}^2_{w,ml})/2\}. (We note that, at least asymptotically, the

covariance between \hat{\mu}_{y,ml} and the two variance component estimators is 0.) A corresponding CI for

\mu_x may be obtained by exponentiating the bounds of the interval in (2.2.5.1).

Interval for balanced data under Model I

For balanced data, the ANOVA estimators \hat{\sigma}^2_{b,an} and \hat{\sigma}^2_{w,an} given in section 2.1.1 are

asymptotically equivalent to the MLEs, and hence are asymptotically distributed (along with

\hat{\mu}_{y,ml} = \bar{Y}_{..}) as multivariate normal. Moreover, empirical studies (not presented here) have shown

that a CI analogous to that based on (2.2.5.1) employing these ANOVA estimators tends to achieve

coverage closer to the nominal 100(1-\alpha)\% level. The appropriate covariance terms (e.g., Searle et

al., 1992) are as follows:

Var(\hat{\mu}_{y,an}) = \tau/(nk) = (nk)^{-1}(n\sigma_b^2 + \sigma_w^2), \qquad Var(\hat{\sigma}^2_{w,an}) = 2\sigma_w^4/[k(n-1)],

Cov(\hat{\sigma}^2_{b,an}, \hat{\sigma}^2_{w,an}) = -2\sigma_w^4/[kn(n-1)], \quad \text{and} \quad Var(\hat{\sigma}^2_{b,an}) = (2/n^2)\{\tau^2/(k-1) + \sigma_w^4/[k(n-1)]\}.

After algebra, we obtain

Var[\ln(\hat{\mu}_{x,2})] = \frac{\tau}{nk} + \frac{\tau^2}{2n^2(k-1)} + \frac{(n-1)\sigma_w^4}{2kn^2}, \qquad (2.2.5.2)

and the associated approximate 100(1-\alpha)\% CI is given by

\exp\left\{\ln(\hat{\mu}_{x,2}) \pm z_{1-\alpha/2}\sqrt{\widehat{Var}[\ln(\hat{\mu}_{x,2})]}\right\}. \qquad (2.2.5.3)


In (2.2.5.3), \hat{\mu}_{x,2} is the intuitive estimator given by (2.2.1.3), and the estimator \widehat{Var}[\ln(\hat{\mu}_{x,2})] is formed

by utilizing the ANOVA estimators in the functional expression (2.2.5.2).

In simulation studies (not detailed here), the approximate interval (2.2.5.3) exhibited better

coverage than normal theory-based intervals that are symmetric about \hat{\mu}_{x,1} or \hat{\mu}_{x,2}. It also

outperformed an alternative normal theory-based interval symmetric about \hat{\mu}_{x,umvu} (2.2.2.3), which

employed an estimate of the theoretical variance in (2.2.3.5). The asymmetric interval (2.2.5.3) likely

does better because the normal approximation is more reasonable (for moderate sample sizes) after

the log transformation of \hat{\mu}_{x,2}.
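The interval (2.2.5.3) takes only a few lines to compute. The sketch below is our own illustration; in particular, truncating a negative ANOVA estimate of \sigma_b^2 at zero is our practical convention and is not prescribed by the text.

```python
import math

def ci_mu_x_balanced(y, z=1.96):
    """Approximate CI (2.2.5.3) for mu_x under Model I, for balanced data
    y[i][j] = ln(X_ij), using the ANOVA variance-component estimators.
    z = 1.96 gives a nominal 95% interval."""
    k, n = len(y), len(y[0])
    worker_means = [sum(row) / n for row in y]
    grand = sum(worker_means) / k
    ssb = n * sum((m - grand) ** 2 for m in worker_means)
    ssw = sum((yij - m) ** 2 for row, m in zip(y, worker_means) for yij in row)
    msb, msw = ssb / (k - 1), ssw / (k * (n - 1))
    s2_w = msw
    s2_b = max((msb - msw) / n, 0.0)        # truncation at 0 is our convention
    tau_hat = n * s2_b + s2_w
    log_mu_hat = grand + (s2_b + s2_w) / 2  # log of the estimator (2.2.1.3)
    var_log = (tau_hat / (n * k)            # expression (2.2.5.2)
               + tau_hat ** 2 / (2 * n ** 2 * (k - 1))
               + (n - 1) * s2_w ** 2 / (2 * k * n ** 2))
    half = z * math.sqrt(var_log)
    return math.exp(log_mu_hat - half), math.exp(log_mu_hat + half)
```

The resulting interval is asymmetric about the point estimate on the original scale, reflecting the log transformation discussed above.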

Interval for unbalanced data under Model I

Given an iterative algorithm for obtaining the MLEs in the unbalanced case, Searle et al.

(1992) give the necessary details from which one can construct the CI in (2.2.5.1). However, in light

of the availability of closed-form expressions (section 2.1.1) for \hat{\sigma}^2_{w,an} and \hat{\sigma}^2_{b,an}, as well as the better

empirical performance of (2.2.5.3) relative to (2.2.5.1) for balanced data, it seems reasonable to

construct an interval analogous to (2.2.5.3) using the ANOVA estimators for unbalanced data.

Let us denote as \hat{\mu}_{y,an} the estimator formed by using the ANOVA estimators to estimate the

generalized least squares estimator of \mu_y (section 2.1.1). Using existing results (e.g., Searle et al.,

1992), we have (at least asymptotically) that

Var(\hat{\mu}_{y,an}) = a = \left\{\sum_{i=1}^{k}\frac{n_i}{\tau_i}\right\}^{-1}, \qquad Var(\hat{\sigma}^2_{w,an}) = b = 2\sigma_w^4/(N - k),

Cov(\hat{\sigma}^2_{w,an}, \hat{\sigma}^2_{b,an}) = c = \frac{-2\sigma_w^4(k-1)}{(N-k)(N - S_2/N)},

and Var(\hat{\sigma}^2_{b,an}) = d, where d is a known but lengthy function of the variance components, N, S_2,

and S_3 (Searle et al., 1992). In the above expressions, \tau_i = (n_i\sigma_b^2 + \sigma_w^2), N = \sum_{i=1}^{k} n_i, S_2 = \sum_{i=1}^{k} n_i^2, and S_3 = \sum_{i=1}^{k} n_i^3. Also, for large

samples, we have Cov(\hat{\mu}_{y,an}, \hat{\sigma}^2_{w,an}) = Cov(\hat{\mu}_{y,an}, \hat{\sigma}^2_{b,an}) = 0. Denoting as \hat{\mu}^*_{x,an} the estimator

utilizing the ANOVA estimators for unbalanced data that is analogous to (2.2.1.3) for balanced data,

we have (for large samples) Var[\ln(\hat{\mu}^*_{x,an})] = \{Var(\hat{\mu}_{y,an}) + [Var(\hat{\sigma}^2_{b,an}) + Var(\hat{\sigma}^2_{w,an})]/4 + Cov(\hat{\sigma}^2_{b,an}, \hat{\sigma}^2_{w,an})/2\} = \{a + (b + 2c + d)/4\}.


Hence, for the unbalanced case, we may consider the following approximate 100(1-\alpha)\% CI:

\exp\left\{\ln(\hat{\mu}^*_{x,an}) \pm z_{1-\alpha/2}\sqrt{\widehat{Var}[\ln(\hat{\mu}^*_{x,an})]}\right\}, \qquad (2.2.5.4)

where we form \widehat{Var}[\ln(\hat{\mu}^*_{x,an})] by using the ANOVA estimators in the functional expression for

Var[\ln(\hat{\mu}^*_{x,an})]. We note that, for unbalanced data, asymptotic normality of the vector \hat{\psi}_{an} =

(\hat{\mu}_{y,an}, \hat{\sigma}^2_{b,an}, \hat{\sigma}^2_{w,an})' has not been established in the literature (Searle et al., 1992). However,

(2.2.5.4), based on a normal approximation, still appears to be a reasonable approximate CI, and has

performed reasonably well in empirical studies (not presented here). We also make use of such a

normal approximation with respect to \hat{\psi}_{an} for unbalanced data in sections 3.3.6 and 4.3.3; further

commentary regarding this subject appears in Chapter 6.

Interval under Model II

Since under Model II we assume balanced data, the ANOVA estimators \hat{\sigma}^2_{b,an}, \hat{\sigma}^2_{d,an}, and \hat{\sigma}^2_{e,an}

given in section 2.1.1 are once again asymptotically equivalent to the MLEs, and hence are

asymptotically distributed (along with \hat{\mu}_{y,ml} = \bar{Y}_{..}) as multivariate normal. In this case, we have

\ln(\mu_x) = [\mu_y + (\sigma_b^2 + \sigma_d^2 + \sigma_e^2)/2], and Var[\ln(\hat{\mu}_x)] = \{Var(\hat{\mu}_y) + [Var(\hat{\sigma}^2_{b,an}) + Var(\hat{\sigma}^2_{d,an}) + Var(\hat{\sigma}^2_{e,an})]/4 + [Cov(\hat{\sigma}^2_{b,an}, \hat{\sigma}^2_{d,an}) + Cov(\hat{\sigma}^2_{b,an}, \hat{\sigma}^2_{e,an}) + Cov(\hat{\sigma}^2_{d,an}, \hat{\sigma}^2_{e,an})]/2\}, where we estimate

\ln(\mu_x) using the ANOVA estimators for the variance components. The variance and covariance terms

are easily obtained using (1.5.2.2), with P as defined in section 2.1.1.

Assuming that we estimate \mu_x analogously to (2.2.1.3) and that we compute \widehat{Var}[\ln(\hat{\mu}_x)] by

utilizing the ANOVA estimators in place of the unknown parameters in the functional expression for

Var[\ln(\hat{\mu}_x)], then we have an approximate 100(1-\alpha)\% CI for \mu_x under Model II that is a direct

counterpart to (2.2.5.3).

Rappaport et al. (1995a) make use of an approximate 100(1-1/k)\% CI for \mu_x based on

(2.2.5.3) to propose an ad hoc method for selecting an appropriate intervention strategy for reducing

workplace exposures. Their proposed procedure also makes use of estimated predictors of mean

exposures computed using some of the methodology of section 2.3; examples of this procedure using

actual exposure data are provided in Chapter 5.

2.3 PREDICTION OF WORKER-SPECIFIC MEAN EXPOSURES

In Section 2.1.1, equations (2.1.1.5) and (2.1.1.10), we introduced the random variable \mu_{xi}


under the specifications of Models I and II, respectively. This random variable, which represents the

overall mean exposure level for an arbitrary worker from the job group in question, is arguably a key

predictor of long-term adverse health effects related to exposure (e.g., Rappaport, 1991b). In

Chapters 3 and 4, respectively, we will develop methodology that utilizes this key predictor in the

context of occupational exposure assessment, and in the context of modeling continuous health

response variables.

In the sections to follow, we apply some of the basic concepts regarding the prediction of

random variables in the context of random effects models to develop candidate predictors for \mu_{xi}

under Models I and II. In the case of Model I, we also develop some theoretical evaluation of the

candidate predictor, and compare its performance with that of intuitively reasonable alternatives.

The concepts that we apply in these developments are discussed by Searle et al. (1992).

2.3.1 Derivation of "best predictor" of \mu_{xi} under Models I and II

Model I

Recall from (2.1.1.5) that, under Model I (2.1.1.1), \mu_{xi} = \exp(\mu_y + \beta_i + \sigma_w^2/2) =

\exp(\mu_y + \sigma_w^2/2) \times \exp(\beta_i). Using a basic definition (see equation 1.5.3.3) of Searle et al. (1992), we may define

the "best predictor" (BP) of the random component \exp(\beta_i) as

BP[\exp(\beta_i)] = E[\exp(\beta_i) \mid X] = E[\exp(\beta_i) \mid X_i], \qquad (2.3.1.1)

where X is the complete (random) vector of exposure data for the group of workers in question, and

Xi is the vector of exposure data for the i-th worker (i = 1,... , k). The second equality in (2.3.1.1)

follows from the assumed independence across workers under Model I; this independence also assures

that the BP has the same functional form for all workers. As discussed in Searle et al. (1992), the

predictor that satisfies (2.3.1.1) achieves minimum mean squared error of prediction.

The derivation of the required conditional expectation in (2.3.1.1) may be approached by

conventional Bayesian methods in the following sense. If \psi_i = \exp(\beta_i), then the BP is simply the

posterior mean of the distribution of \psi_i, given X_i. More specifically, if we consider the prior

distribution \pi(\psi_i) and the conditional distribution f(X_i \mid \psi_i), then the BP is the mean of the

following posterior distribution:

\pi(\psi_i \mid X_i) = \frac{f(X_i \mid \psi_i)\,\pi(\psi_i)}{\int f(X_i \mid \psi_i)\,\pi(\psi_i)\,d\psi_i}\,. \qquad (2.3.1.2)


The posterior distribution \pi(\psi_i \mid X_i) is quite easy to derive using multivariate normal (MVN)

properties and the direct relationships between the normal and lognormal distributions. First,

consider the joint multivariate normal distribution of \beta_i and Y_i, where Y_i = \ln(X_i):

\begin{pmatrix} \beta_i \\ Y_i \end{pmatrix} \sim MVN\left[\begin{pmatrix} 0 \\ \mu_y\mathbf{1}_{n_i} \end{pmatrix},\; \begin{pmatrix} \sigma_b^2 & \sigma_b^2\mathbf{1}'_{n_i} \\ \sigma_b^2\mathbf{1}_{n_i} & \sigma_b^2\mathbf{J}_{n_i} + \sigma_w^2\mathbf{I}_{n_i} \end{pmatrix}\right]. \qquad (2.3.1.3)

From conditional MVN properties, we have

E(\beta_i \mid Y_i = y_i) = \sigma_b^2\mathbf{1}'_{n_i}[\sigma_b^2\mathbf{J}_{n_i} + \sigma_w^2\mathbf{I}_{n_i}]^{-1}(y_i - \mu_y\mathbf{1}_{n_i})

and

Var(\beta_i \mid Y_i = y_i) = \sigma_b^2 - \sigma_b^2\mathbf{1}'_{n_i}[\sigma_b^2\mathbf{J}_{n_i} + \sigma_w^2\mathbf{I}_{n_i}]^{-1}\mathbf{1}_{n_i}\sigma_b^2,

where y_i is the realization of Y_i. The required matrix inverse is a familiar form (e.g., Searle, 1982),

and it follows after algebra that

E(\beta_i \mid y_i) = \gamma_i(\bar{y}_i - \mu_y) \qquad (2.3.1.4)

and

Var(\beta_i \mid y_i) = \sigma_b^2(1 - \gamma_i), \qquad (2.3.1.5)

where \bar{y}_i = (n_i)^{-1}\sum_{j=1}^{n_i} y_{ij} is the realization of the random variable \bar{Y}_i = (n_i)^{-1}\sum_{j=1}^{n_i} Y_{ij}, and

\gamma_i = (n_i\sigma_b^2)(n_i\sigma_b^2 + \sigma_w^2)^{-1}.

Using normal/lognormal distributional properties, it follows that the posterior distribution

\pi(\psi_i \mid X_i) is lognormal. In particular, we obtain from (2.3.1.4) and (2.3.1.5) that

BP[\exp(\beta_i)] = \exp[\gamma_i(\bar{y}_i - \mu_y) + \sigma_b^2(1 - \gamma_i)/2].

We may use a slight stretch of the Searle et al. (1992) definition of best predictors (which considers

only random effects) to propose what we will term the "best predictor" of \mu_{xi}:

\tilde{\mu}_{xi,BP} = \exp(\mu_y + \sigma_w^2/2) \times BP[\exp(\beta_i)] = \exp[\gamma_i\bar{y}_i + (1-\gamma_i)(\mu_y + \sigma_b^2/2) + \sigma_w^2/2]. \qquad (2.3.1.6)


This predictor minimizes the mean squared error of prediction (MSEP); in other words,

E[(\tilde{\mu}_{xi} - \mu_{xi})^2], with \tilde{\mu}_{xi} being any function of the exposure data, is minimized by taking

\tilde{\mu}_{xi} = \tilde{\mu}_{xi,BP}.

A connection between BP[\exp(\beta_i)] and the "best linear predictor" (BLP) as defined by Searle

et al. (1992) is worthy of note. This connection is not immediately obvious, since BP[\exp(\beta_i)] is not

a linear function of either the X or Y data. However, it is straightforward to write BP[\exp(\beta_i)] in the

following way:

BP[\exp(\beta_i)] = \exp[-\gamma_i\mu_y + \sigma_b^2(1-\gamma_i)/2] \times (\tilde{X}_i)^{\gamma_i},

where \tilde{X}_i = \left(\prod_{j=1}^{n_i} X_{ij}\right)^{1/n_i}, the geometric mean of the exposure data for the i-th worker. Hence, just

as for random effects models under normality assumptions, the BP is identical to the BLP (Searle et

al., 1992), so in our case BP[\exp(\beta_i)] is identical to what we may call the "BGP" ("best geometric

predictor") of \exp(\beta_i). The BGP terminology and its definition seem a natural extension that stems

directly out of the specific framework (with lognormality assumptions) in which we are working.
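The predictor (2.3.1.6) is a simple function of the worker-specific log-scale mean. The sketch below (our own illustration) treats the parameters as known; in practice they must be replaced by estimates, yielding the empirical Bayes predictors discussed in Section 2.3.4.

```python
import math

def best_predictor_model_I(ybar_i, n_i, mu_y, sigma2_b, sigma2_w):
    """'Best predictor' (2.3.1.6) of mu_xi under Model I, given the worker's
    log-scale sample mean ybar_i from n_i measurements and the (here assumed
    known) parameters mu_y, sigma_b^2, and sigma_w^2."""
    # Shrinkage weight gamma_i = n_i sigma_b^2 / (n_i sigma_b^2 + sigma_w^2)
    gamma_i = n_i * sigma2_b / (n_i * sigma2_b + sigma2_w)
    return math.exp(gamma_i * ybar_i
                    + (1 - gamma_i) * (mu_y + sigma2_b / 2)
                    + sigma2_w / 2)
```

As n_i grows, \gamma_i \to 1 and the predictor approaches \exp(\bar{y}_i + \sigma_w^2/2); as \sigma_b^2 \to 0, it shrinks entirely to the group-level quantity \exp(\mu_y + \sigma_w^2/2).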

Model II

Recall from (2.1.1.10) that, under Model II (2.1.1.6), \mu_{xi} = \exp[\mu_y + \beta_i + (\sigma_d^2 + \sigma_e^2)/2] =

\exp[\mu_y + (\sigma_d^2 + \sigma_e^2)/2] \times \exp(\beta_i). From definition (1.5.3.3), we have BP[\exp(\beta_i)] = E[\exp(\beta_i) \mid X]

= E[\exp(\beta_i) \mid X_i], which is identical to (2.3.1.1); again, the latter equality stems from the fact that

\beta_i is independent of the exposure data from all but the i-th worker. The problem is once again to

find the posterior distribution (2.3.1.2), except that this time Model II is operable. Using a very

similar derivation to that in the previous subsection, we have under Model II that

\begin{pmatrix} \beta_i \\ Y_i \end{pmatrix} \sim MVN\left[\begin{pmatrix} 0 \\ \mu_y\mathbf{1}_{n} \end{pmatrix},\; \begin{pmatrix} \sigma_b^2 & \sigma_b^2\mathbf{1}'_{n} \\ \sigma_b^2\mathbf{1}_{n} & \sigma_b^2\mathbf{J}_{n} + (\sigma_d^2 + \sigma_e^2)\mathbf{I}_{n} \end{pmatrix}\right]. \qquad (2.3.1.7)

It follows that

E(\beta_i \mid y_i) = \sigma_b^2\mathbf{1}'_{n}[\sigma_b^2\mathbf{J}_{n} + (\sigma_d^2 + \sigma_e^2)\mathbf{I}_{n}]^{-1}(y_i - \mu_y\mathbf{1}_{n})

and

Var(\beta_i \mid y_i) = \sigma_b^2 - \sigma_b^2\mathbf{1}'_{n}[\sigma_b^2\mathbf{J}_{n} + (\sigma_d^2 + \sigma_e^2)\mathbf{I}_{n}]^{-1}\mathbf{1}_{n}\sigma_b^2.

After simplification, we have

E(\beta_i \mid y_i) = \eta(\bar{y}_i - \mu_y) \qquad (2.3.1.8)

and

Var(\beta_i \mid y_i) = \sigma_b^2(1 - \eta), \qquad (2.3.1.9)

where \bar{y}_i = (n)^{-1}\sum_{j=1}^{n} y_{ij} and \eta = (n\sigma_b^2)[n\sigma_b^2 + (\sigma_d^2 + \sigma_e^2)]^{-1}.

In this case, we obtain from (2.3.1.8) and (2.3.1.9) that the mean of the lognormal posterior

distribution \pi(\psi_i \mid X_i) is

BP[\exp(\beta_i)] = \exp[\eta(\bar{y}_i - \mu_y) + \sigma_b^2(1 - \eta)/2].

Using the same adaptation of the Searle et al. (1992) definition, we have our "best predictor" of \mu_{xi}

under Model II:

\tilde{\mu}_{xi,BP} = \exp[\eta\bar{y}_i + (1-\eta)(\mu_y + \sigma_b^2/2) + (\sigma_d^2 + \sigma_e^2)/2]. \qquad (2.3.1.10)

Note that we may write BP[\exp(\beta_i)] under Model II as \exp[-\eta\mu_y + \sigma_b^2(1-\eta)/2] \times (\tilde{X}_i)^{\eta}, so that we

may again attach to it the label "BGP", as defined in the previous subsection. Again, \tilde{\mu}_{xi,BP} of

(2.3.1.10) minimizes the mean squared error for prediction of \mu_{xi}.

Comments about "BP"s for \mu_{xi}

There are several comments about the "best predictors" (2.3.1.6) and (2.3.1.10) that are

worthy of note. We summarize these points as follows:

1). They are unbiased in the sense described by Searle et al. (1992). In other words,

E(\tilde{\mu}_{xi,BP}) = E(\mu_{xi}) = \mu_x, where \mu_{xi} and \mu_x are defined according to the appropriate model (I or II),

64

Page 74: II - NC State Department of · PDF fileLarge sample-based test statistics suited to ... 5.2.3 Sample calculations under Model II 158 5.2.4 Sample calculations for ... Table 2.6: MSEP

fixed) = E{exp[-yi(Jly + /3 i +7 i) + (1 - "Y i)(Jl y + tTV2) + tT~/2]

After simplifying, we obtain

respectively.

2). They involve unknown parameters, and so they must be estimated in practice. Of course, this

precludes the achievement of a "best" predictor in practice, if MSE:P is the criterion. In Section 2.3.4,

we will consider estimated predictors based on (2.3.1.6) under Model I, which may be viewed as

empirical Bayes estimators of Jlzi (e.g., Casella, 1985).

3). They are derived to minimize MSEP, defined as E[(Pzi - Jl~~i)2]. We note that this is only one

of many possible criteria. For instance, some authors (e.g., Casella, 1985; Peixoto and Harville, 1986)

also consider what is termed conditional MSEP, which we may de,fine in our case as E[(Pzi - Jlz i)2 I/3i fixed]. The predictors given by (2.3.1.6) and (2.3.1.10) do not minimize this conditional criterion;

in fact, there may be ranges of parameter values and worker-specific sample sizes over which a simple

predictor (such as Xi' the worker-specific sample mean) achieves IIower conditional MSEP. Also, the

work of Casella (1985) seems to suggest that there may be empiric:al Bayes estimators of PZi, BP that

could perform better than Pzi, BP itself if conditional MSEP is thE~ chosen criterion. The choice of an

appropriate criterion is largely philosophical; we consider here only unconditional MSEP, which is

indeed minimized by (2.3.1.6) and (2.3.1.10).

4). Although we consider here only unconditional MSEP, it is nonetheless instructive to derive the

conditional expected values of Pzi, BP under Models I and II. Under Model I, we have E(Pzi, BP I /3in·

&

I /3i fixed}, where 7 i =ni- 1.E €ij·J=1

E(Pzi, BP I /3i fixed) (2.3.1.11)

Clearly, this is not identical to Jlzi m general. However, we note that E(Pzi, BP I /3i fixed)

approaches Jlzi as "Yi approaches 1 (equivalently, as tTl becomes very large relative to tT~). If"yi

approaches 0 (equivalently, if tT~ dwarfs tTl), then E(Pzi,BP I /3i fixed) approaches the population

mean Jlz of (2.1.1.3).

Similarly, after some algebra, one may show under Model n that

(2.3.1.12)

•Hence, under Model II, E(Pzi BP I /3i fixed) approaches Jlzi as TJ approaches 1; if TJ approaches 0,,

then E(Pzi,BP I l3i fixed) approaches the population mean Jlz of (2.1.1.8).


5). As equations (2.3.1.11) and (2.3.1.12) suggest, μ̂_xi,BP may be viewed as a biased estimator of μ_xi once we condition on the value of the random effect β_i. Some further examination of this bias could be of regulatory significance. In particular, from (2.3.1.11), the conditional expectation of μ̂_xi,BP under Model I exceeds the true μ_xi if and only if the exponential factor on the right side exceeds 1. Equivalently, under Model I,

E(μ̂_xi,BP | β_i fixed) > μ_xi if and only if β_i < (1 + γ_i)σ_b²/2.

One consequence of this fact is that, conditional on β_i fixed, μ̂_xi,BP overestimates (on average) the true mean exposure (μ_xi) of any worker for whom μ_xi is below the population mean exposure (μ_x). Perhaps more importantly from a regulatory standpoint, conditional on β_i fixed, μ̂_xi,BP underestimates (on average) μ_xi for those workers with the higher values of μ_xi. In particular, since 0 < γ_i < 1, the conditional expectation of μ̂_xi,BP is assuredly smaller than μ_xi if β_i exceeds σ_b². This potentially disadvantageous property motivates our consideration in the next subsection of two alternative predictors that are conditionally unbiased.

2.3.2 Two conditionally unbiased predictors for μ_xi under Model I

The simplest and most obvious predictor for μ_xi under Model I is the sample mean, X̄_i = n_i^{-1} Σ_{j=1}^{n_i} X_ij. Clearly, X̄_i is unbiased [i.e., E(X̄_i) = E(μ_xi) = μ_x]. Also, unlike μ̂_xi,BP [see (2.3.1.11)], its conditional expectation is identically equal to μ_xi [i.e., E(X̄_i | β_i fixed) = μ_xi]. The simplicity of this predictor makes it an attractive alternative for occupational hygienists employing random sampling schemes for exposure assessment. However, by definition, its MSEP (which is our chosen criterion) must be higher than that of μ̂_xi,BP for all possible parameter values.

Unlike μ̂_xi,BP, the sample mean X̄_i is a linear (not geometric) function of the exposure data (X). It seems reasonable to expect that a conditionally unbiased predictor that is a geometric function of X might perform better than X̄_i in terms of MSEP. To pursue this idea, we note that μ̂_xi,BP (2.3.1.6) may be written as exp[aȲ_i + b], where a and b are both functions of the fixed parameters of Model I (and are not functions of the observed data). If we force a predictor of this general form to be conditionally unbiased (i.e., we set E{exp[aȲ_i + b] | β_i fixed} equal to μ_xi), we find that we must take a = 1 and b = [(n_i − 1)/(2n_i)]σ_w². Hence, one alternative predictor worth considering is

μ̂_xi,CU = exp{Ȳ_i + [(n_i − 1)/(2n_i)]σ_w²},   (2.3.2.1)

where "CU" stands for "conditionally unbiased".
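The conditional unbiasedness of (2.3.2.1) is easy to confirm numerically. A sketch under Model I, with one worker's random effect held fixed (the parameter values here are hypothetical, chosen only for illustration):

```python
import math
import random

def mu_cu(y_log, sig2_w):
    """Conditionally unbiased predictor (2.3.2.1): exp{Ybar_i + [(n-1)/(2n)]*sig2_w}."""
    n = len(y_log)
    ybar = sum(y_log) / n
    return math.exp(ybar + (n - 1) / (2.0 * n) * sig2_w)

# Monte Carlo check of E(mu_cu | beta_i fixed) = mu_xi (assumed parameter values):
random.seed(1)
mu_y, beta_i, sig2_w, n = -1.0, 0.4, 0.5, 4
mu_xi = math.exp(mu_y + beta_i + sig2_w / 2.0)   # worker-specific true mean
reps = 100_000
avg = sum(
    mu_cu([random.gauss(mu_y + beta_i, math.sqrt(sig2_w)) for _ in range(n)], sig2_w)
    for _ in range(reps)
) / reps
# avg tracks mu_xi to within Monte Carlo error
```

The multiplicative correction exp{[(n−1)/(2n)]σ_w²} exactly offsets the lognormal bias of exp(Ȳ_i) given β_i, which is the content of the choice a = 1, b = [(n_i − 1)/(2n_i)]σ_w².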


In the following subsection, we derive and compare the MSEPs for each of our three candidate predictors [X̄_i, μ̂_xi,CU of (2.3.2.1), and μ̂_xi,BP of (2.3.1.6)].

2.3.3 Mean squared error of prediction for candidate predictors under Model I

MSEP of the sample mean (X̄_i)

Deriving a theoretical expression for MSEP(X̄_i) requires the calculation

MSEP(X̄_i) = E(X̄_i²) − 2E(X̄_iμ_xi) + E(μ_xi²).

Using standard calculations, we first find that Var(X̄_i) = (n_i²)^{-1}{Σ_{j=1}^{n_i} Var(X_ij) + 2Σ_{j<j'} Cov(X_ij, X_ij')} = (n_i²)^{-1}{n_iμ_x²[exp(σ_b² + σ_w²) − 1] + n_i(n_i − 1)μ_x²[exp(σ_b²) − 1]} [note that the covariance terms may be found using equation (1.3.2.2)]. It follows that E(X̄_i²) = Var(X̄_i) + μ_x².

Next, note that E(X̄_iμ_xi) = (n_i)^{-1}E{Σ_{j=1}^{n_i} [exp(μ_y + β_i + ε_ij)] × [exp(μ_y + β_i + σ_w²/2)]} = (n_i)^{-1} exp(2μ_y + σ_w²/2) × E{exp(2β_i) Σ_{j=1}^{n_i} exp(ε_ij)}. After simplifying, we obtain E(X̄_iμ_xi) = exp(2μ_y + 2σ_b² + σ_w²).

Thirdly, E(μ_xi²) = Var(μ_xi) + [E(μ_xi)]², which follows directly from the lognormal distribution of μ_xi and also equals exp(2μ_y + 2σ_b² + σ_w²). Working through the algebra, we have finally

MSEP(X̄_i) = (n_i)^{-1} exp(2μ_y + 2σ_b² + σ_w²)[exp(σ_w²) − 1].   (2.3.3.1)

MSEP of μ̂_xi,CU (2.3.2.1)

The required calculation is

MSEP(μ̂_xi,CU) = E(μ̂²_xi,CU) − 2E(μ̂_xi,CUμ_xi) + E(μ_xi²).


Note that E(μ̂²_xi,CU) = E{exp[2Ȳ_i + (n_i)^{-1}(n_i − 1)σ_w²]}. Using standard calculations and simplifying, we have E(μ̂²_xi,CU) = exp[2μ_y + 2σ_b² + σ_w² + (σ_w²/n_i)].

Next, note that E(μ̂_xi,CUμ_xi) = E{exp{(μ_y + β_i + ε̄_i) + [(n_i − 1)/(2n_i)]σ_w²} × exp(μ_y + β_i + σ_w²/2)} = exp[2μ_y + (2n_i)^{-1}(2n_i − 1)σ_w²] × E{exp(2β_i + ε̄_i)}. Simplifying, we obtain E(μ̂_xi,CUμ_xi) = exp(2μ_y + 2σ_b² + σ_w²), which is also equal to E(μ_xi²). After more algebraic simplification, we have finally

MSEP(μ̂_xi,CU) = exp(2μ_y + 2σ_b² + σ_w²)[exp(σ_w²/n_i) − 1].   (2.3.3.2)

MSEP of μ̂_xi,BP (2.3.1.6)

This time, we require

MSEP(μ̂_xi,BP) = E(μ̂²_xi,BP) − 2E(μ̂_xi,BPμ_xi) + E(μ_xi²).

Proceeding in similar fashion, we first note that E(μ̂²_xi,BP) = E{exp[2γ_iȲ_i + (1 − γ_i)(2μ_y + σ_b²) + σ_w²]} = exp[(1 − γ_i)(2μ_y + σ_b²) + σ_w²] × E[exp(2γ_iȲ_i)]. Following a standard calculation, we obtain E(μ̂²_xi,BP) = exp[2μ_y + (1 + γ_i)σ_b² + σ_w²].

Next, we have E(μ̂_xi,BPμ_xi) = E{exp[γ_iȲ_i + (1 − γ_i)(μ_y + σ_b²/2) + σ_w²/2] × exp(μ_y + β_i + σ_w²/2)} = exp[μ_y + (1 − γ_i)(μ_y + σ_b²/2) + σ_w²] × E[exp(γ_iȲ_i + β_i)]. Working out the required expectation and simplifying, we find that E(μ̂_xi,BPμ_xi) = exp[2μ_y + (1 + γ_i)σ_b² + σ_w²], which is the same as E(μ̂²_xi,BP). Recalling that E(μ_xi²) = exp[2(μ_y + σ_b²) + σ_w²], we arrive at the final result:

MSEP(μ̂_xi,BP) = exp(2μ_y + σ_b² + σ_w²)[exp(σ_b²) − exp(γ_iσ_b²)].   (2.3.3.3)

In Table 2.6, we utilize (2.3.3.1)-(2.3.3.3) to investigate what we may term the efficiency of X̄_i


and μ̂_xi,CU with respect to μ̂_xi,BP (which is known to minimize MSEP). We tabulate the appropriate ratios of MSEP over the same values of n and ranges of (σ_b², σ_w²) as were used in Tables 2.3-2.5.
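The tabulated efficiencies are easy to reproduce from the closed forms (2.3.3.1)-(2.3.3.3), since μ_y cancels in the ratios. A sketch reproducing one Table 2.6 cell (the (φ = 1, σ_w² = .5, n = 2) row; the table's μ_x values correspond to μ_y = −3):

```python
import math

def msep_model1(mu_y, sig2_b, sig2_w, n):
    """Theoretical MSEPs (2.3.3.1)-(2.3.3.3) for Xbar_i, mu_CU, and mu_BP under Model I."""
    gam = n * sig2_b / (n * sig2_b + sig2_w)
    msep_xbar = math.exp(2 * mu_y + 2 * sig2_b + sig2_w) * (math.exp(sig2_w) - 1) / n
    msep_cu = math.exp(2 * mu_y + 2 * sig2_b + sig2_w) * (math.exp(sig2_w / n) - 1)
    msep_bp = math.exp(2 * mu_y + sig2_b + sig2_w) * (
        math.exp(sig2_b) - math.exp(gam * sig2_b)
    )
    return msep_xbar, msep_cu, msep_bp

# phi = sig2_b/sig2_w = 1 with sig2_w = .5, n = 2:
m_xbar, m_cu, m_bp = msep_model1(-3.0, 0.5, 0.5, 2)
print(round(m_bp / m_xbar, 3), round(m_bp / m_cu, 3))  # 0.473 0.541, matching Table 2.6
```

The same three lines of algebra also make the qualitative ordering MSEP(μ̂_BP) < MSEP(μ̂_CU) < MSEP(X̄_i) immediate for any parameter values.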

Table 2.6: MSEP efficiencies^a of alternative predictors relative to μ̂_xi,BP under Model I

(φ^b, σ_w²)   n_i    μ_x^c     Eff(X̄_i)   Eff(μ̂_xi,CU)
(.1, .5)       7     .066      .313        .391
(.1, 1)        7     .086      .233        .372
(.1, 3)        6     .259      .054        .264
(.25, .5)      3     .068      .319        .380
(.25, 1)       3     .093      .232        .337
(.25, 3)       2     .325      .041        .113
(.5, .5)       2     .072      .362        .414
(.5, 1)        2     .105      .257        .341
(.5, 3)        2     .472      .055        .152
(.75, .5)      2     .077      .429        .490
(.75, 1)       2     .119      .302        .400
(.75, 3)       2     .687      .062        .170
(1, .5)        2     .082      .473        .541
(1, 1)         2     .135      .330        .437
(1, 3)         2     1.000     .066        .182
(2.5, .5)      2     .119      .580        .662
(2.5, 1)       2     .287      .397        .525
(2.5, 3)       2     9.488     .075        .205
(5, .5)        2     .223      .627        .716
(5, 1)         2     1.000     .425        .563
(5, 3)         2     403.429   .078        .214

a: efficiency defined here as MSEP(μ̂_xi,BP)/MSEP(μ̂_xi), where μ̂_xi represents X̄_i or μ̂_xi,CU
b: the variance ratio, σ_b²/σ_w²
c: true value of μ_x {= exp[μ_y + (σ_b² + σ_w²)/2]}

Comments regarding Table 2.6

It is clear from Table 2.6 that μ̂_xi,BP is not only more efficient than either of the two alternative predictors (X̄_i and μ̂_xi,CU), but quite strikingly so. In fact, no tabulated efficiency value exceeds 72%, and several values are extremely small (i.e., < 10%). Secondly, as we speculated in section 2.3.2 based on the form of μ̂_xi,CU, this alternative predictor is uniformly more efficient than X̄_i in Table 2.6. Hence, if one chooses to emphasize conditional unbiasedness when selecting a predictor for μ_xi, we conclude that μ̂_xi,CU affords a substantial advantage over the worker-specific sample means. Finally, Table 2.6 shows that, as σ_w² increases for fixed φ, the efficiencies tend to decline markedly; as φ increases for fixed σ_w², the efficiencies generally tend to increase.

An important consideration regarding Table 2.6 is the fact that MSEP(μ̂_xi,BP) and MSEP(μ̂_xi,CU), which are used in the efficiency calculations, take on theoretical values that are not realizable in practice. In the following section, we take into account the parameter estimation required for the practical use of these predictors.

2.3.4 Mean squared error of prediction for estimated predictors under Model I

Clearly, the calculation of the realized value of the simple predictor X̄_i based on a set of occupational exposure data requires no parameter estimation. In contrast, as mentioned previously, the calculation of μ̂_xi,BP (2.3.1.6) in practice requires the estimation of the three unknown parameters (μ_y, σ_b², σ_w²). The resulting estimated predictor, which may be viewed as an empirical Bayes estimate of μ_xi (e.g., Casella, 1985), necessarily sacrifices some or all of the properties possessed by μ̂_xi,BP itself. Similarly, the alternative predictor μ̂_xi,CU (2.3.2.1) must also be estimated in practice, as it involves the unknown parameter σ_w².

In this section, we propose (for balanced data) an empirical Bayes-like estimator for μ_xi based on μ̂_xi,CU, and two such estimators based on μ̂_xi,BP. For the estimator based on μ̂_xi,CU, we derive theoretical properties including unconditional and conditional expectation, and MSEP. By means of a simulation study, we compare the efficiencies of this estimated predictor and of X̄_i with respect to the two estimators based on μ̂_xi,BP.


An empirical Bayes estimator of μ_xi based on μ̂_xi,CU

Recall [equation (2.3.2.1)] that μ̂_xi,CU = exp(Ȳ_i) × exp{[(n_i − 1)/(2n_i)]σ_w²}. Assuming balanced data (i.e., n_i = n, ∀i), we have already derived the UMVU estimator for exp{[(n − 1)/(2n)]σ_w²} as part of the derivation of μ̂_x,umvu in Section 2.2; this UMVU estimator is given by

₀F₁(k(n − 1)/2; (n − 1)SSW/(4n)).

Utilizing this UMVU estimator, we obtain the following empirical Bayes-like estimator of μ_xi based on μ̂_xi,CU:

μ̃_xi,CU = exp(Ȳ_i) × ₀F₁(k(n − 1)/2; (n − 1)SSW/(4n)).   (2.3.4.1)

Using the statistical independence relationships (Ȳ_i ⊥ SSW) and (β_i ⊥ SSW) for balanced data, we can readily verify that μ̃_xi,CU has the following two potentially attractive properties:

E(μ̃_xi,CU) = μ_x

and

E(μ̃_xi,CU | β_i fixed) = exp{[(n − 1)/(2n)]σ_w²} × E[exp(Ȳ_i) | β_i fixed] = μ_xi.

Hence, μ̃_xi,CU is an empirical Bayes estimator of μ_xi that, like μ̂_xi,CU itself (and like X̄_i), is both unconditionally and conditionally unbiased.

MSEP of μ̃_xi,CU (2.3.4.1)

Derivation of the MSEP of the estimated predictor in (2.3.4.1) requires the calculation

MSEP(μ̃_xi,CU) = E(μ̃²_xi,CU) − 2E(μ̃_xi,CUμ_xi) + E(μ_xi²).

First, the independence of Ȳ_i and SSW dictates that E(μ̃²_xi,CU) = E[exp(2Ȳ_i)] × E(Z²), where


Z = ₀F₁(k(n − 1)/2; (n − 1)SSW/(4n)).

Using the calculation of E(Z²) presented in equation (2.2.3.4), we obtain E(μ̃²_xi,CU). Next, recalling that (β_i ⊥ SSW), it follows that E(μ̃_xi,CUμ_xi) = exp{[(n − 1)/(2n)]σ_w²} × E[exp(Ȳ_i + μ_y + β_i + σ_w²/2)]. Simplifying, we have E(μ̃_xi,CUμ_xi) = exp(2μ_y + 2σ_b² + σ_w²), which also equals E(μ_xi²). After more algebraic simplification, we obtain the desired result:

(2.3.4.2)

We utilize the result in (2.3.4.2) shortly, as we compare relative efficiencies of predictors for μ_xi that are potential candidates for practical use.
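The ₀F₁ factor in (2.3.4.1) is available in standard scientific libraries, so the estimated predictor is simple to compute from balanced data. A sketch using SciPy's hyp0f1 (the function name and the small data array are hypothetical):

```python
import numpy as np
from scipy.special import hyp0f1

def mu_tilde_cu(y_log):
    """Empirical Bayes-like estimator (2.3.4.1) from a balanced k x n array of
    log-scale measurements; returns one prediction per worker."""
    y = np.asarray(y_log, dtype=float)
    k, n = y.shape
    ybar = y.mean(axis=1)                       # worker-specific means Ybar_i
    ssw = ((y - ybar[:, None]) ** 2).sum()      # within-worker sum of squares
    # UMVU factor 0F1(k(n-1)/2; (n-1)*SSW/(4n)) replacing exp{[(n-1)/(2n)]*sig2_w}
    z = hyp0f1(k * (n - 1) / 2.0, (n - 1) * ssw / (4.0 * n))
    return np.exp(ybar) * z

preds = mu_tilde_cu([[0.1, 0.4, -0.2], [1.0, 1.3, 0.7]])
```

Because the ₀F₁ factor depends on the data only through SSW, it is common to all workers; only exp(Ȳ_i) varies from worker to worker.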

Two empirical Bayes estimators of μ_xi based on μ̂_xi,BP

We may write μ̂_xi,BP of (2.3.1.6) as follows:

μ̂_xi,BP = (μ_x)^{1−γ_i} × exp[γ_i(Ȳ_i + σ_w²/2)].

Because of the embedded factor γ_i, it is not feasible to obtain an empirical Bayes estimator that fully utilizes UMVU estimation, as we did in (2.3.4.1). However, we consider two reasonable estimators for μ̂_xi,BP. Assuming balanced data, so that γ_i = γ = (nσ_b²)(nσ_b² + σ_w²)^{-1}, we propose the following:

μ̃^(1)_xi,BP = (μ̂_x,ml)^{1−γ̂_ml} × exp[γ̂_ml(Ȳ_i + σ̂²_w,ml/2)]   (2.3.4.3)

and

μ̃^(2)_xi,BP = (μ̂_x,umvu)^{1−γ̂_ml} × exp[γ̂_ml(Ȳ_i + σ̂²_w,ml/2)],   (2.3.4.4)


where γ̂_ml = (nσ̂²_b,ml)(nσ̂²_b,ml + σ̂²_w,ml)^{-1}, μ̂_x,ml is the MLE of μ_x given in (2.2.1.1), and μ̂_x,umvu is the UMVU estimator of μ_x given in (2.2.2.3). It was found in preliminary simulation studies that the use of the MLE for σ_b², as in (2.3.4.3), yields better MSEP than does the use of the ANOVA estimator. Also, it is advantageous to use the true MLE (which sets σ̂²_b,ml to 0 if the solution to the ML equations is negative; Searle et al., 1992) in (2.3.4.3). Hence, we also propose utilizing the true MLEs for estimating unknown parameters in the exponential part of (2.3.4.4).

Like μ̂_xi,BP itself, μ̃^(1)_xi,BP and μ̃^(2)_xi,BP are conditionally biased (i.e., their expectations conditional on β_i fixed are not equal to μ_xi). However, unlike μ̂_xi,BP itself, these two empirical Bayes estimators of μ_xi are also unconditionally biased [i.e., they do not have unconditional expectations identical to E(μ_xi) = μ_x]. Still, in light of the clear superiority of μ̂_xi,BP in Table 2.6, (2.3.4.3) and (2.3.4.4) may prove to be the most desirable predictors in practice, especially if MSEP is the main criterion. In the following subsection, we investigate this possibility.
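Forming the plug-in predictor from balanced data requires only the ML estimates for the one-way random effects model. A sketch (the balanced-data ML forms follow Searle et al., 1992; for simplicity the within-worker estimate is kept at MSW even when the between-worker solution is truncated at zero, which is a labeled simplification, not the text's exact procedure):

```python
import numpy as np

def eb_bp_predictor(y_log):
    """Plug-in empirical Bayes version of the best predictor, in the spirit of
    (2.3.4.3), from a balanced k x n array of log-scale measurements."""
    y = np.asarray(y_log, dtype=float)
    k, n = y.shape
    ybar_i = y.mean(axis=1)                                    # worker means
    mu_hat = y.mean()                                          # MLE of mu_y
    msw = ((y - ybar_i[:, None]) ** 2).sum() / (k * (n - 1))   # within mean square
    msa = n * ((ybar_i - mu_hat) ** 2).sum() / (k - 1)         # among mean square
    sig2_w = msw                                               # ML-type estimate
    sig2_b = max(0.0, ((1 - 1 / k) * msa - msw) / n)           # "true" MLE: floor at 0
    gam = n * sig2_b / (n * sig2_b + sig2_w)                   # estimated gamma
    mu_x_ml = np.exp(mu_hat + (sig2_b + sig2_w) / 2.0)         # plug-in for mu_x
    return mu_x_ml ** (1 - gam) * np.exp(gam * (ybar_i + sig2_w / 2.0))
```

When the ML solution for σ_b² is negative, γ̂ becomes 0 and every worker receives the common plug-in estimate of μ_x, which is exactly the shrinkage behavior one wants from an empirical Bayes predictor.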

Simulation-based assessment of relative efficiencies

The derivation of the theoretical MSEP for μ̃^(1)_xi,BP (2.3.4.3) and μ̃^(2)_xi,BP (2.3.4.4) is problematic. However, we are able to obtain a useful assessment via a simulation study. In Table 2.7, we present such an assessment for the same ranges of parameter values (and n and k) utilized in Tables 2.3-2.5. The values in Table 2.7 are estimated relative efficiencies, which compare the two (unconditionally and conditionally unbiased) predictors X̄_i and μ̃_xi,CU (2.3.4.1) to μ̃^(1)_xi,BP and μ̃^(2)_xi,BP in terms of MSEP. Specifically, we tabulate

Êff(μ̂^(t)_xi) = M̂SEP[μ̃^(t)_xi,BP] / MSEP(μ̂^(t)_xi),  (t = 1, 2),   (2.3.4.5)

where μ̂^(1)_xi = X̄_i, μ̂^(2)_xi = μ̃_xi,CU, and M̂SEP[μ̃^(t)_xi,BP] (t = 1, 2) is actually a simulation-based estimate of the desired MSEP. This simulation-based estimate is computed as

M̂SEP[μ̃^(t)_xi,BP] = (5000k)^{-1} Σ_{s=1}^{5000} Σ_{i=1}^{k} [μ̃^(t,s)_xi,BP − μ^(s)_xi]²,

where μ̃^(t,s)_xi,BP is the realized value of μ̃^(t)_xi,BP in the s-th of 5000 independently generated data sets conforming to Model I, and μ^(s)_xi is the true mean exposure for the i-th simulated worker in the s-th data set.

Because MSEP(μ̂^(t)_xi) (t = 1, 2) is the actual theoretical (not estimated) MSEP of X̄_i or μ̃_xi,CU, computed according to (2.3.3.1) or (2.3.4.2), respectively, Êff(μ̂^(t)_xi) is a reasonable estimator for the actual efficiency in each case.
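The simulation machinery above can be sketched in a few lines. As a self-check, the predictor below is the sample mean X̄_i, whose theoretical MSEP is the closed form (2.3.3.1); the Monte Carlo average should track it. (A scaled-down run with illustrative parameter values, not the dissertation's 5,000-replicate study of the Section 2.3.4 estimators.)

```python
import numpy as np

rng = np.random.default_rng(7)
mu_y, sig2_b, sig2_w, n, k, reps = -1.0, 0.5, 0.5, 3, 10, 5000

# Theoretical MSEP of the sample mean, from (2.3.3.1):
msep_theory = np.exp(2 * mu_y + 2 * sig2_b + sig2_w) * (np.exp(sig2_w) - 1) / n

sq_err = 0.0
for _ in range(reps):
    beta = rng.normal(0.0, np.sqrt(sig2_b), size=k)          # worker random effects
    mu_xi = np.exp(mu_y + beta + sig2_w / 2.0)               # true worker-specific means
    y = mu_y + beta[:, None] + rng.normal(0.0, np.sqrt(sig2_w), size=(k, n))
    xbar = np.exp(y).mean(axis=1)                            # realized predictor values
    sq_err += ((xbar - mu_xi) ** 2).sum()
msep_mc = sq_err / (reps * k)                                # simulation-based MSEP
```

The heavy right tail of the squared-error variate, discussed in the comments following Table 2.7, is visible here too: the Monte Carlo estimate converges slowly when σ_b² and σ_w² are large.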


Table 2.7: Estimated MSEP efficiencies^a of alternative predictors relative to μ̃^(t)_xi,BP (t = 1, 2) under Model I

(φ^b, σ_w²)   (n, k)    Êff(X̄_i)^{c,d}     Êff(μ̃_xi,CU)^{c,d}
(.1, .5)      (7, 26)   .376 (.376)        .466 (.466)
(.1, 1)       (7, 26)   .287 (.287)        .451 (.451)
(.1, 3)       (6, 32)   .071 (.070)        .333 (.328)
(.25, .5)     (3, 25)   .392 (.391)        .464 (.463)
(.25, 1)      (3, 24)   .302 (.300)        .430 (.427)
(.25, 3)      (2, 37)   .065 (.060)        .170 (.158)
(.5, .5)      (2, 26)   .450 (.448)        .511 (.509)
(.5, 1)       (2, 25)   .335 (.329)        .437 (.430)
(.5, 3)       (2, 26)   .097 (.078)        .250 (.202)
(.75, .5)     (2, 19)   .536 (.531)        .607 (.602)
(.75, 1)      (2, 19)   .449 (.422)        .585 (.550)
(.75, 3)      (2, 20)   .129 (.078)        .330 (.198)
(1, .5)       (2, 16)   .575 (.567)        .651 (.642)
(1, 1)        (2, 16)   .485 (.443)        .629 (.575)
(1, 3)        (2, 17)   .251 (.080)        .634 (.201)
(2.5, .5)     (2, 12)   .665 (.601)        .751 (.679)
(2.5, 1)      (2, 12)   .577 (.377*)       .744 (.486)
(2.5, 3)      (2, 13)   .178 (.020*)       .438 (.048)
(5, .5)       (2, 10)   .919 (.639)        1.034 (.720)
(5, 1)        (2, 10)   7.021* (16.798*)   9.010* (21.558*)
(5, 3)        (2, 11)   .002* (.002*)      .004* (.004*)

a: estimated efficiency defined as in (2.3.4.5)
b: the variance ratio, σ_b²/σ_w²
c: first entry relative to μ̃^(1)_xi,BP of (2.3.4.3); entry in parentheses relative to μ̃^(2)_xi,BP of (2.3.4.4)
d: X̄_i = μ̂^(1)_xi and μ̃_xi,CU = μ̂^(2)_xi as used in (2.3.4.5)
*: estimated efficiency must be viewed with caution


Comments regarding Table 2.7

The first thing to note about Table 2.7 is that the empirical Bayes estimators μ̃^(1)_xi,BP and μ̃^(2)_xi,BP, though necessarily less efficient than μ̂_xi,BP itself, still achieve the lowest MSEP in the vast majority of cases. For the most part, the trends (see Table 2.6) of decreasing efficiencies of the alternative predictors for fixed φ as σ_w² increases, and increasing efficiencies for fixed σ_w² as φ increases, are maintained in Table 2.7. Secondly, in all cases but one, the estimated efficiency of the sample mean X̄_i with respect to the empirical Bayes estimators of μ̂_xi,BP (Table 2.7) is seen to be somewhat higher than the corresponding efficiency with respect to μ̂_xi,BP itself (Table 2.6). However, the efficiencies in Table 2.7 are still markedly less than one in most cases.

As we move to the latter cases depicted in Table 2.7, we see some anomalous results; these cases are marked in the table with an asterisk ("*"). In two of the cases (φ = 2.5, σ_w² = 1; φ = 2.5, σ_w² = 3), the estimated efficiency of X̄_i with respect to the empirical Bayes estimator μ̃^(2)_xi,BP is lower than the corresponding theoretical efficiency with respect to μ̂_xi,BP given in Table 2.6. Of course, this is not possible in reality, and reflects a poor simulation-based estimate of MSEP(μ̃^(2)_xi,BP).

In two other cases (φ = 5, σ_w² = 1; φ = 5, σ_w² = 3), the estimated efficiencies are unrealistically high (σ_w² = 1) or low (σ_w² = 3). Again, this reflects instability in the simulation-based estimate of MSEP(μ̃^(t)_xi,BP) (t = 1, 2). In light of this instability, we in fact performed 50,000 (instead of 5,000) simulations for the latter 5 cases [(φ = 2.5, σ_w² = 1) through (φ = 5, σ_w² = 3)]. Even so, we are unable in most of these extreme cases to get a realistic estimate of the true MSEP(μ̃^(t)_xi,BP) (t = 1, 2). Clearly, the random variate [μ̃^(t,s)_xi,BP − μ^(s)_xi]², which is used in the estimated efficiency calculation (2.3.4.5), is unstable in that it can take on extremely high values (with low probability) for certain ranges of φ and σ_w². In fact, there are most likely ranges of these parameters over which the expectation of [μ̃^(t,s)_xi,BP − μ^(s)_xi]², or of μ̃^(t)_xi,BP itself, does not exist. This would be similar to the observations we made in section 2.2.1 with regard to certain estimators for the population mean μ_x.

The anomalous results in Table 2.7 occur over ranges of variance components that seem unlikely in light of most published occupational exposure data (e.g., Kromhout et al., 1993). Nevertheless, in all cases but one (φ = 5, σ_w² = 1), the average squared deviation of the empirical Bayes estimates of μ̂_xi,BP from the true μ_xi over very many replications is markedly smaller than the theoretical MSEP of either X̄_i or μ̃_xi,CU. Hence, we generally recommend the use of these empirical Bayes estimators of the "best predictor" in practice, given that one wishes to minimize MSEP. In terms of whether it is worth the extra effort to calculate μ̂_x,umvu for the use of μ̃^(2)_xi,BP in (2.3.4.4), our recommendations based on Table 2.7 are qualitatively similar to the recommendations that followed Table 2.5 (see section 2.2.4). In general, the gains in efficiency over μ̃^(1)_xi,BP (which employs the more easily calculated MLE of μ_x) tend to become more significant as φ and/or σ_w² increase.


Chapter 3: Statistical Methods for Occupational Exposure Assessment

3.1 INTRODUCTION

In this chapter, we consider statistical methodology appropriate for workplace exposure

assessment involving the analysis of exposure data obtained through random sampling schemes. In

section 3.2, we revisit the well-studied setting in which one obtains a random sample of lognormally

distributed personal exposure measurements. In that setting, we clarify some issues regarding sample

size calculations when the goal is to make inference about the probability that an arbitrary randomly

selected exposure measurement exceeds some (short-term) occupational exposure limit. We also

present a comparative simulation study of the performances of several possible test statistics

(including one based on a newly developed method) for testing the population mean exposure level.

A sample size calculation based on this method is also given. The material in section 3.2 is

summarized in a recent article by Lyles and Kupper (1996a).

In section 3.3, we consider testing an appropriate statistical hypothesis concerning the long-term mean exposure level for workers in a job group, using data adequately summarized by the one-way RE ANOVA model [Model I (2.1.1.1)]. The particular hypothesis testing strategy considered was introduced by Rappaport et al. (1995a). Assuming balanced data, we develop three large sample-based test statistics for addressing this hypothesis. We present an appropriate computational formula for sample size, and we evaluate the competing test statistics empirically. This material is included in a recently submitted manuscript (Lyles, Kupper, and Rappaport, 1995a). The extension of one of these methods to the case of unbalanced data [as described in Lyles, Kupper, and Rappaport, 1995b (submitted)] is also discussed here.

In section 3.4, we extend one of the large sample-based testing procedures to account for day-to-day exposure variability under Model II (2.1.1.6). Sample size approximation is discussed, and an empirical evaluation of the method is presented.

Examples of some of the methodology presented in this chapter, applied to actual shift-long

exposure data, are provided in Chapter 5.


3.2 WORKPLACE EXPOSURE ASSESSMENT ISSUES BASED ON A RANDOM SAMPLE

3.2.1 Sample size for testing exceedance probabilities

Occupational hygienists occasionally wish to assess the probability that an arbitrary short-term exposure measurement (X_i) may exceed an appropriate limit (L), based on an i.i.d. sample {X_1,..., X_k} of such measurements. Assuming lognormality, the appropriate (UMP) test statistic for testing H_0: (θ ≥ A) vs. H_1: (θ < A), where θ = Pr(X_i > L), is an upper one-sided tolerance limit. As previously discussed, this limit is given by (Ȳ + cS_y), where Ȳ and S_y are the sample mean and standard deviation of the logged exposure measurements and c is a multiple of a noncentral t percentile. The details of this approach are recounted in section 1.4.1, where we also cite a sample size approximation (equation 1.4.1.2) given by Rappaport (1991b). The purpose of this brief section is to assess this approximation relative to the exact method, which was approximated by Faulkenberry and Daly (1968).

The power approximation (equation 1.4.1.1) given by Selvin et al. (1987), and the resulting sample size approximation (1.4.1.2) given by Rappaport (1991b), are intended to be used in conjunction with the UMP test statistic, and are purportedly based on large-sample arguments. However, one can show that a Wald-type (Rao, 1965) test statistic for the tolerance problem is given by

Z = √(2k)[μ̂ + z_{1−A}σ̂ − ln(L)] / √[σ̂²(2 + z²_{1−A})],   (3.2.1.1)

where μ̂ and σ̂² are the MLEs of μ and σ², which are respectively the mean and variance of Y_i = ln(X_i). For large k, the distribution of Z approaches the standard normal; hence, based on (3.2.1.1), an approximate expression for the power of the test using Z is derived as

1 − β ≈ Φ{−z_{1−α} + √(2k) ln(L/x_{1−A}) / [σ√(2 + z²_{1−A})]}.   (3.2.1.2)

In equation (3.2.1.2), x_{1−A} represents the 100(1 − A)-th percentile of the distribution of short-term personal exposures, for which L (> x_{1−A}) is the operable regulatory limit. Based on (3.2.1.2), the attendant sample size approximation is:

k = σ²(2 + z²_{1−A})(z_{1−α} + z_{1−β})² / {2[ln(L/x_{1−A})]²}.   (3.2.1.3)

Upon comparing expression (1.4.1.1) with (3.2.1.2), and expression (1.4.1.2) with (3.2.1.3), it is clear


that the power and sample size approximations given by Selvin et al. (1987) and Rappaport (1991b) are identical to those derived from the Wald-type test statistic (3.2.1.1), except for the omission of the term z²_{1−A} in the denominator of (3.2.1.2) and in the numerator of (3.2.1.3). Hence, it is clear that the previously published approximations (1.4.1.1) and (1.4.1.2) overestimate the approximate power and underestimate the sample size requirements of the test based on the Wald-type statistic. It is not immediately clear, in typical situations, how the sample size approximations (1.4.1.2) and (3.2.1.3) perform for estimating the theoretical sample size required for the UMP test discussed in section 1.4.1.
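The two approximations are a one-line computation apiece. A sketch (function names hypothetical), which reproduces the k_a entries of Table 3.1, e.g. 13 and 31 for L/x_{1−A} = 2.0 with σ² = 1:

```python
import math
from scipy.stats import norm

def k_published(ratio, sig2, alpha=0.05, beta=0.20):
    """Previously published approximation (1.4.1.2), omitting the z^2_{1-A} term."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    return math.ceil(sig2 * (za + zb) ** 2 / math.log(ratio) ** 2)

def k_wald(ratio, sig2, A=0.05, alpha=0.05, beta=0.20):
    """Wald-based approximation (3.2.1.3), retaining the z^2_{1-A} term."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(1 - beta)
    zA = norm.ppf(1 - A)
    return math.ceil(sig2 * (2 + zA ** 2) * (za + zb) ** 2 / (2 * math.log(ratio) ** 2))

print(k_published(2.0, 1.0), k_wald(2.0, 1.0))  # 13 31
```

The inflation factor between the two is simply (2 + z²_{1−A})/2, roughly 2.35 when A = 0.05, which explains the large discrepancies visible in Table 3.1.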

Assuming that the specified value of θ is the true value, an exact expression for the power of the UMP test based on the noncentral t distribution is

1 − β = Pr[T'_{k−1}(λ₁) ≤ t_{α; k−1}(λ)],   (3.2.1.4)

where λ = −√k z_{1−A} as introduced in section 1.4.1, and T'_{k−1}(λ₁) represents a random variate distributed as noncentral t with (k − 1) df and noncentrality parameter λ₁ = −√k z_{1−θ}. (The subscript "1" is used to refer to the distribution under the alternative hypothesis H_1.) Then, the exact sample size to provide power of at least (1 − β) may be obtained by finding the smallest integer k such that the following holds:

Pr[T'_{k−1}(λ₁) ≤ t_{α; k−1}(λ)] ≥ 1 − β.   (3.2.1.5)

The solution to (3.2.1.5) was approximated by Faulkenberry and Daly (1968). If, instead of supplying a value for θ, one prefers to compute power or sample size by supplying an assumed value (> 1) for the ratio (L/x_{1−A}) of the applicable limit to the 100(1 − A)-th percentile of the exposure distribution, in addition to an educated guess for σ², the following identity can be used:

θ = 1 − Φ{[ln(L/x_{1−A})]/σ + z_{1−A}}.

One computes the value of θ corresponding to the assumed values for L/x_{1−A} and σ, and then proceeds with the exact calculations after computing λ₁ based on that value of θ.
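The search in (3.2.1.5) is a short loop with modern software. A sketch using SciPy's noncentral t (the sign convention here is equivalent to the text's: we work with −λ and the upper tail; function names are hypothetical):

```python
import math
from scipy.stats import nct, norm

def theta_from_ratio(ratio, sig2, A=0.05):
    """Exceedance probability theta implied by an assumed L/x_{1-A} and sigma^2."""
    return 1.0 - norm.cdf(math.log(ratio) / math.sqrt(sig2) + norm.ppf(1 - A))

def exact_k(theta, A=0.05, alpha=0.05, power=0.80, k_max=1000):
    """Smallest k satisfying the exact power condition (3.2.1.5)."""
    for k in range(2, k_max + 1):
        lam0 = math.sqrt(k) * norm.ppf(1 - A)       # |lambda|, at the H0 boundary
        lam1 = math.sqrt(k) * norm.ppf(1 - theta)   # |lambda_1|, under the true theta
        crit = nct.ppf(1 - alpha, k - 1, lam0)      # critical value of the UMP test
        if nct.sf(crit, k - 1, lam1) >= power:
            return k
    return None

k_e = exact_k(theta_from_ratio(2.0, 1.0))  # cf. the ratio-2.0 column of Table 3.1
```

The returned k is, by construction, the first sample size at which the exact power reaches the target, which is precisely the definition in (3.2.1.5).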

Commonly used statistical software packages (e.g., SAS Institute, Inc., 1990) now make the calculation of the exact sample size via (3.2.1.5) quite feasible in most cases. Table 3.1 compares this exact sample size with that derived from both the Rappaport (1991b) formula (1.4.1.2) and the formula (3.2.1.3) based on the Wald test discussed above. This comparison is made for several values of the assumed ratio (L/x_{1−A}), and for values of σ² corresponding to a range of geometric standard deviations [GSDs; i.e., exp(σ)] from roughly 2 to 6. For illustration, we take the values of α, A, and power (1 − β) to be 0.05, 0.05, and 0.80, respectively. Clearly, the conclusion that "large numbers of observations must be sampled to identify many 'acceptable' environments" (Selvin et al., 1987) is, in fact, based on an extremely optimistic sample size estimation formula. The adjustment inherent in (3.2.1.3) provides a marked improvement, but that formula still substantially underestimates the sample size required for the UMP test in all cases considered in Table 3.1. In light of the accessibility of noncentral t critical values, the theoretically superior UMP test and its attendant exact sample size calculation [based on (3.2.1.5)] are recommended in practice.

Table 3.1: Comparison of exact sample sizes with previously published approximations
(A = 0.05, α = 0.05, power = 0.80)

L/x_{1−A}^b   σ_y² = 0.5^a     σ_y² = 1.0      σ_y² = 1.5       σ_y² = 2.0       σ_y² = 2.5       σ_y² = 3.0
              k_e^c  k_a^d     k_e   k_a       k_e   k_a        k_e   k_a        k_e   k_a        k_e   k_a
1.5           58     19 (45)   107   38 (89)   154   57 (133)   202   76 (177)   249   95 (222)   295   113 (266)
2.0           24     7 (16)    42    13 (31)   59    20 (46)    76    26 (61)    93    33 (76)    109   39 (91)
2.5           16     4 (9)     27    8 (18)    37    12 (26)    47    15 (35)    57    19 (44)    67    23 (52)
3.0           13     3 (7)     20    6 (13)    28    8 (19)     35    11 (25)    42    13 (31)    49    16 (37)

a: GSDs corresponding to the six tabulated values of σ_y² are 2.03, 2.72, 3.40, 4.11, 4.86, and 5.65, respectively
b: assumed ratio of the applicable limit to the 100(1 − A)-th percentile of the exposure distribution
c: k_e = exact sample size required according to (3.2.1.5)
d: k_a = sample size according to (1.4.1.2); number in parentheses is the sample size according to (3.2.1.3), based on the Wald test using (3.2.1.1)


S.!.! Testing the population mean level of exposure


In section 1.4.2, we cite several references (and describe two test statistics) pertaining to the hypothesis testing problem focused on the population mean level (μ_x) of exposure, based on a random sample of (presumably lognormal) personal exposure measurements. Specifically, we consider the test of H0: (μ_x ≥ L') vs. H1: (μ_x < L'), where L' denotes the appropriate regulatory limit for mean exposure. In this section, we present two alternative test statistics, and we compare them empirically with the two previous ones.

First, we recall that the test statistic R_μ (1.4.2.1) proposed by Rappaport and Selvin (1987) is large sample-based; specifically, with its use of an estimated variance under H0, it has features in common with a score-type test statistic. It is also natural to consider a Wald-type statistic, as we did in the previous section on testing exceedance probabilities. Using standard methods (Rao, 1965), one may show that one such Wald-type statistic is given by

Z_μ = {μ̂ + σ̂²/2 - ln(L')} / [V̂(μ̂ + σ̂²/2)]^(1/2) ,    (3.2.2.1)

where μ̂ and σ̂² again represent the MLEs of μ and σ², and where V̂(μ̂ + σ̂²/2) = (σ̂²/k)(1 + σ̂²/2) is the estimated variance of the numerator. By ML theory, as k becomes large, the distribution of Z_μ approaches that of a standard normal variate under the equality condition in H0. Hence, at least for large k, an applicable decision rule for an approximate size α test using Z_μ is "reject H0 if and only if Z_μ < z_α".
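The decision rule just stated is simple to compute. The following sketch is our own illustration (the data-generating values and limit are invented), forming Z_μ from a lognormal sample:

```python
# Our own illustration of the Wald-type rule based on (3.2.2.1); data are invented.
import math
import random
from statistics import NormalDist

def wald_lognormal_mean_test(x, limit, alpha=0.05):
    """Z_mu of (3.2.2.1) for H0: mu_x >= L' vs H1: mu_x < L'."""
    y = [math.log(v) for v in x]
    k = len(y)
    mu_hat = sum(y) / k
    sig2_hat = sum((v - mu_hat) ** 2 for v in y) / k    # MLE of sigma^2 (divisor k)
    num = mu_hat + sig2_hat / 2 - math.log(limit)       # estimated restriction
    var_hat = (sig2_hat / k) * (1 + sig2_hat / 2)       # estimated Var of the numerator
    z_mu = num / math.sqrt(var_hat)
    return z_mu, z_mu < NormalDist().inv_cdf(alpha)     # (statistic, reject H0?)

random.seed(1)
# mean exposure exp(0 + 0.5/2) ~ 1.28, tested against the limit L' = 5
sample = [math.exp(random.gauss(0.0, math.sqrt(0.5))) for _ in range(50)]
z_mu, reject = wald_lognormal_mean_test(sample, limit=5.0)
```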

A similar ML theory-based test could be derived using the original formulations of H0 and H1, in which case we would consider a Wald-type test statistic with numerator (μ̂_x - L') and denominator [V̂(μ̂_x)]^(1/2), where μ̂_x is the ML estimator of μ_x. This statistic would also have an asymptotic standard normal distribution under H0, and would bear closer resemblance to Rappaport and Selvin's (1987) statistic (1.4.2.1). We do not include this statistic in our comparative study since empirical studies reveal its performance to be significantly worse than that of Z_μ, which deals with the estimated restriction after a natural log transformation.

On theoretical grounds, the UMP unbiased size α test (see equation 1.4.2.3) based on the test statistic E_μ due to Land (1971) is arguably the best option. However, the limitations arising from the lack of comprehensive tabulation, as discussed in section 1.4.2, are a drawback. This would seem to provide motivation for the development of a more readily calculable test statistic, whose properties might more closely mimic (for relatively small samples) those of the UMP unbiased test than would those of the large sample-based test statistics (1.4.2.1) and (3.2.2.1). To develop such a test statistic,


let us draw from the tolerance limit approach of the previous section (and section 1.4.1), and consider another linear combination of the sample mean and sample standard deviation of the logged exposure measurements, but this time replacing c by a new constant c'. In an analogous derivation to that applied in the classical tolerance problem, one can show that the statement

Pr{Ȳ + c'S_y < ln(L') | H0: (μ_x = L')} = α

is equivalent to

c' = -t'_{k-1,α}(δ) / √k ,

where δ = -√k σ/2 and T'_{k-1}(δ) represents a random variate distributed as noncentral t with

(k - 1) df and noncentrality parameter δ. Hence, were σ known, the rejection rule "reject H0 if and only if Ȳ + c'S_y < ln(L')", with c' assigned the value -t'_{k-1,α}(δ)/√k, would provide an exact size α test for H0: (μ_x = L') vs. H1: (μ_x < L'). Since σ is not known in practice, we consider an upper bound (c'_u) on c' = -t'_{k-1,α}(δ)/√k. An applicable bound due to Halperin (1963) and reviewed by Johnson and Kotz (1970b) is given by

c'_u = [-δ √((k-1)/k)] / χ_{k-1,α} + t_{k-1,1-α} / √k ,

where χ_{k-1,α} is the positive square root of the 100(α)-th percentile of the chi square distribution with (k - 1) df, and t_{k-1,1-α} is the 100(1 - α)-th percentile of the central t distribution with (k - 1) df. Since δ is unknown, we consider using an estimated upper bound (c̃_u) on c', with δ̂ = -√k S_y/2 replacing δ in c'_u. Our new proposed test statistic is

U_μ = Ȳ + c̃_u S_y ,    (3.2.2.2)

and the associated rejection rule is simply "reject H0 if and only if U_μ < ln(L')". The performance of the suggested test clearly depends upon the extent to which the estimation of σ is "balanced out" by the use of the Halperin bound.
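Computing U_μ requires only central t and chi-square percentiles. A minimal sketch follows, assuming the Halperin-type bound in the form displayed above; the function name and the toy logged measurements are illustrative only:

```python
# Minimal sketch of U_mu (3.2.2.2) using the Halperin-type upper bound on c';
# the function name and the toy data are illustrative.
import math
from scipy.stats import chi2, t as t_dist

def u_mu(y, alpha=0.05):
    """U_mu = Ybar + c_u_tilde * S_y, computed from logged exposures y."""
    k = len(y)
    ybar = sum(y) / k
    s_y = math.sqrt(sum((v - ybar) ** 2 for v in y) / (k - 1))
    delta_hat = -math.sqrt(k) * s_y / 2                 # estimate of delta
    chi_low = math.sqrt(chi2.ppf(alpha, k - 1))         # chi_{k-1,alpha}
    c_u = ((-delta_hat) * math.sqrt((k - 1) / k) / chi_low
           + t_dist.ppf(1 - alpha, k - 1) / math.sqrt(k))
    return ybar + c_u * s_y

y = [0.2, -0.1, 0.4, 0.0, 0.3, -0.2, 0.5, 0.1]          # logged measurements (invented)
u = u_mu(y)
# reject H0: mu_x >= L' if u < ln(L')
```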

One nice feature of the proposed test statistic in (3.2.2.2) is that a sample size approximation is also attendant with its derivation. If σ² were known, an exact expression for the power of the test using Ȳ + c'S_y would be given by

POWER = Pr{T'_{k-1}(δ1) ≤ t'_{k-1,α}(δ)} ,


where δ1 = (-√k/σ)[ln(L'/μ_x) + σ²/2]. With values supplied for α, (1 - β), σ², and an assumed ratio (L'/μ_x) of the applicable limit to the mean, the determination of the minimal sample size required to achieve power of at least (1 - β) would be based on finding the smallest value of k satisfying

t'_{k-1,1-β}(δ1) ≤ t'_{k-1,α}(δ) .    (3.2.2.3)

Note the similarity to expression (3.2.1.5). However, since the test based on U_μ is adjusted to account for estimation of σ², (3.2.2.3) leads to underestimation of the required sample size. To compensate, we apply Halperin's (1963) bounds again, substituting a lower bound on t'_{k-1,α}(δ) and an upper bound on t'_{k-1,1-β}(δ1) in (3.2.2.3). The appropriate value of k is then found as the smallest integer satisfying

t*_{k-1,1-β}(δ1) ≤ t*_{k-1,α}(δ) ,    (3.2.2.4)

where t*_{k-1,α}(δ) = δ√(k-1)/χ_{k-1,α} - t_{k-1,1-α}, and t*_{k-1,1-β}(δ1) = δ1√(k-1)/χ_{k-1,1-β} + t_{k-1,1-β}. The values of δ and δ1 are computed based on the assumed values for σ² and L'/μ_x.
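The smallest k satisfying (3.2.2.4) can be found by direct search. The sketch below is our own illustration of that inequality, with the t* bounds in the form displayed above:

```python
# Direct search for the smallest k satisfying (3.2.2.4); our own sketch.
import math
from scipy.stats import chi2, t as t_dist

def n_u(ratio, sigma2, alpha=0.05, power=0.80, k_max=5000):
    """ratio = L'/mu_x assumed under H1; sigma2 = variance of logged exposures."""
    sigma = math.sqrt(sigma2)
    beta = 1 - power
    for k in range(2, k_max + 1):
        rk = math.sqrt(k - 1)
        delta = -math.sqrt(k) * sigma / 2
        delta1 = (-math.sqrt(k) / sigma) * (math.log(ratio) + sigma2 / 2)
        # lower bound on t'_{k-1,alpha}(delta)
        t_lo = delta * rk / math.sqrt(chi2.ppf(alpha, k - 1)) - t_dist.ppf(1 - alpha, k - 1)
        # upper bound on t'_{k-1,1-beta}(delta1)
        t_hi = delta1 * rk / math.sqrt(chi2.ppf(1 - beta, k - 1)) + t_dist.ppf(1 - beta, k - 1)
        if t_hi <= t_lo:
            return k
    return None
```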

In some cases, the sample size approximation in (3.2.2.4) gives quite a different result than does the Rappaport and Selvin (1987) approximation in (1.4.2.2). Table 3.2 displays computed sample sizes based on these two different approaches. We consider three assumed ratios (1.5, 2, and 4) of the exposure limit to the population arithmetic mean, and a fairly wide range of possible values for σ². The corresponding GSDs range from about 1.8 to 7.4, as shown in Table 3.2. For illustration, we assume α = 0.05 and power (1 - β) = 0.80. The table indicates that, as the GSD increases (and particularly for higher values of the ratio L'/μ_x), the new sample size approximation (3.2.2.4) tends to yield markedly smaller values for k. The implication of this finding cannot be appreciated until we assess whether the sample size computed via (3.2.2.4) achieves the desired level of power.


Table 3.2 Comparison of Sample Sizes Approximated for Testing the Mean of a Lognormal Distribution
(α = 0.05, power = 0.80)

 σ² = 0.35 (GSD = 1.81)           σ² = 2.0 (GSD = 4.11)
 L'/μx (a)  nR (b)  nU (c)        L'/μx   nR    nU
 1.5        23      27            1.5     223   191
 2.0        11      12            2.0      99    71
 4.0         5       5            4.0      44    22

 σ² = 0.50 (GSD = 2.03)           σ² = 3.0 (GSD = 5.65)
 L'/μx      nR      nU            L'/μx   nR    nU
 1.5        35      38            1.5     418   322
 2.0        16      16            2.0     186   118
 4.0         7       7            4.0      83    35

 σ² = 1.0 (GSD = 2.72)            σ² = 4.0 (GSD = 7.39)
 L'/μx      nR      nU            L'/μx   nR    nU
 1.5        84      83            1.5     668   473
 2.0        38      32            2.0     297   171
 4.0        17      11            4.0     132    49

a: assumed ratio of applicable limit to mean of exposure distribution

b: nR = sample size approximated by Rappaport and Selvin (1987) formula (1.4.2.2)

c: nU = sample size approximated by new formula (3.2.2.4)


Simulation Study

In Table 3.3, we present the results of simulations comparing the test statistics R_μ (1.4.2.1), E_μ (1.4.2.3), Z_μ (3.2.2.1), and U_μ (3.2.2.2) on the basis of power and type I error rates; we employ the same selection of possible population values for σ² and L'/μ_x as considered in Table 3.2. The results in each row of Table 3.3 are based on 2000 independently generated data sets under H0 (where L'/μ_x = 1) and, separately, under H1 (where L'/μ_x has the value shown in the table). Sample size (k) for each case was initially determined by solving the inequality (3.2.2.4), with nominal values of α and (1 - β) taken to be 0.05 and 0.80, respectively. If the published tables (Land, 1975) for the UMP unbiased test based on E_μ (1.4.2.3) contain entries for the appropriate value of k, then that value was used in the simulations. Otherwise, the closest available k was used; the actual value of k obtained by use of (3.2.2.4) (purportedly for 80% power) in each case appears in parentheses in Table 3.3. There is a slight tendency towards overestimation of the required k via (3.2.2.4).

For computing the UMP unbiased test statistic (1.4.2.3), we applied cubic interpolation faithfully in each case, although [since it requires equally spaced values of S_y in the tables (Warmus, 1964)] we were forced to use simple linear interpolation in rare cases (i.e., 0.1 < S_y < 0.2). Also, if S_y fell below 0.1, we used the critical value corresponding to S_y = 0.1. It is clear that these complications, along with slight variability due to simulation, prevent us from being able to realize the exact 0.05 type I error rate using (1.4.2.3) in some cases; such difficulties persist even with the most complete tables.

The essential indications of Table 3.3 are as follows: (a) Z_μ (3.2.2.1) tends to be markedly anticonservative, although this tendency naturally lessens somewhat with larger k (hence, the high power associated with Z_μ is misleading); (b) R_μ (1.4.2.1) is conservative in every case we investigated, and clearly displays a comparative lack of power in some situations; (c) E_μ (1.4.2.3) and U_μ (3.2.2.2) seem to perform almost equally well in most cases, both in terms of power and type I error rates. Only in cases of relatively high (for most shift-long exposure data) variance [i.e., σ² > 3 (GSD > 5.7)], or extremely low sample size (i.e., k < 10), do the simulations show the UMP method to be slightly preferable to the new proposed method. In addition, the similarity in power achieved using E_μ and U_μ suggests that the sample size calculation in (3.2.2.4) gives a reasonable approximation for either procedure. The simulations do not directly address the adequacy of the Rappaport and Selvin (1987) sample size approximation (1.4.2.2) when applied prior to using their test statistic, but it is clear from Tables 3.2 and 3.3 that (1.4.2.2) often calls for a larger sample size than is necessary if one intends to utilize (1.4.2.3) or (3.2.2.2). This is not surprising in light of the tendency for the latter two tests to be more powerful.


Table 3.3 Results of Simulations Comparing Four Procedures for Testing Mean Exposure Level
(nominal α = 0.05, nominal power = 0.80)

σ² = 0.35 (GSD = 1.81):

                       Z_μ (a)          R_μ (b)          E_μ (c)          U_μ (d)
 L'/μx (e)  n (f)      α̂ (g) (1-β̂) (h)  α̂     (1-β̂)     α̂     (1-β̂)     α̂     (1-β̂)
 1.5     28 (27) (i)   0.078  0.947     0.043  0.877    0.049  0.891    0.042  0.879
 2.0     12 (12)       0.106  0.968     0.041  0.814    0.053  0.868    0.050  0.876
 4.0      5 (5)        0.161  0.995     0.032  0.546    0.051  0.764    0.057  0.872

σ² = 0.50 (GSD = 2.03):

                       Z_μ              R_μ              E_μ              U_μ
 1.5     36 (38)       0.078  0.909     0.043  0.818    0.048  0.846    0.041  0.822
 2.0     16 (16)       0.102  0.953     0.041  0.806    0.048  0.859    0.046  0.865
 4.0      7 (7)        0.143  0.987     0.033  0.643    0.053  0.832    0.059  0.904

σ² = 1.0 (GSD = 2.72):

                       Z_μ              R_μ              E_μ              U_μ
 1.5     81 (83)       0.075  0.893     0.048  0.823    0.051  0.846    0.046  0.831
 2.0     31 (32)       0.085  0.915     0.032  0.762    0.040  0.813    0.036  0.819
 4.0     11 (11)       0.133  0.958     0.030  0.568    0.044  0.771    0.050  0.841

σ² = 2.0 (GSD = 4.11):

                       Z_μ              R_μ              E_μ              U_μ
 1.5    201 (191)      0.076  0.875     0.049  0.811    0.059  0.837    0.053  0.835
 2.0     71 (71)       0.087  0.877     0.041  0.731    0.053  0.805    0.052  0.820
 4.0     21 (22)       0.106  0.906     0.023  0.471    0.043  0.741    0.050  0.813

σ² = 3.0 (GSD = 5.65):

                       Z_μ              R_μ              E_μ              U_μ
 1.5    301 (322)      0.064  0.808     0.036  0.708    0.045  0.760    0.047  0.774
 2.0    121 (118)      0.075  0.857     0.034  0.694    0.051  0.788    0.055  0.817
 4.0     36 (35)       0.099  0.901     0.021  0.457    0.047  0.761    0.057  0.830

σ² = 4.0 (GSD = 7.39):

                       Z_μ              R_μ              E_μ              U_μ
 1.5    401 (473)      0.075  0.749     0.044  0.648    0.058  0.707    0.068  0.737
 2.0    161 (171)      0.075  0.811     0.030  0.618    0.053  0.741    0.064  0.790
 4.0     51 (49)       0.104  0.853     0.023  0.380    0.058  0.725    0.067  0.799

a: refers to Wald-type test (3.2.2.1)

b: refers to Rappaport and Selvin (1987) test (1.4.2.1)

c: refers to UMP unbiased test (1.4.2.3)

d: refers to proposed new test (3.2.2.2)

e: population value of the ratio of the applicable limit to the mean exposure

f: sample size employed in simulations

g: empirical type I error estimate (proportion rejecting under H0 out of 2000 simulated data sets)

h: empirical power estimate (proportion rejecting under H1 out of 2000 simulated data sets)

i: numbers in parentheses are values of k obtained by the sample size inequality (3.2.2.4) for 80% power

3.3 EXPOSURE ASSESSMENT ACCOUNTING FOR BETWEEN-WORKER VARIABILITY

In this section, we derive statistical methodology for assessing workplace exposures for a group of workers for whom it is assumed that Model I (2.1.1.1) is an adequate descriptor of repeated (natural-logged) shift-long exposure measurements. In section 3.3.1, we introduce a hypothesis testing strategy that motivates the statistical developments to follow.

3.3.1 Null and Alternative Hypotheses

From a regulatory standpoint, there is a clear advantage to having a simple decision rule for assessing whether or not the overall exposure distribution for an entire group of workers is in compliance with specified regulatory guidelines. When the population of shift-long exposure measurements is taken to be i.i.d. lognormal, such a rule is provided in the context of the test for a lognormal mean discussed in section 3.2.2.

In the context of Model I, the individual worker's mean exposure (μ_xi, as introduced in Chapter 2) is arguably the key quantity of interest with regard to occupational health. As such, the following null and alternative hypotheses are appealing, where the parameter θ = Pr(μ_xi > OEL) denotes the probability that the mean exposure for an arbitrary worker (i.e., μ_xi for arbitrary i)


exceeds the occupational exposure limit (OEL) for mean exposure (note that for a particular toxicant, the OEL is the same as L' in the notation of the previous section):

H0: θ ≥ A    vs.    H1: θ < A .

The structure above reflects an emphasis on caution, while defining the occupational hygienist's decision based on a clear and simple dichotomy. For overall regulatory compliance, we require the data to convince us that workplace exposures are "acceptable" in the sense that there must be strong evidence in the data against the state of nature "θ ≥ A", where A is some small probability defined a priori. Spear and Selvin (1989) first discussed the relevance of the quantity θ in the occupational setting.

In the extreme hypothetical case under Model I in which σ_b² = 0, these hypotheses essentially reduce to the assessment of the overall mean exposure against the OEL based on a random sample of shift-long exposures from an underlying lognormal distribution (section 3.2.2). The added complexity of the current problem makes intractable the development of exact methods such as UMP unbiased size α tests. As a result, we focus our attention here on methodology based on the three classical large-sample, likelihood-based hypothesis-testing criteria (Wald, likelihood ratio, and score; Rao, 1965).

3.3.2 Derivation of test statistics for the balanced case

To derive the Wald-, likelihood ratio (LR)-, and score-type test statistics, it is first helpful to re-express our null and alternative hypotheses in the following equivalent manner, based on the definition of μ_xi as exp(μ_y + β_i + σ_w²/2) in Chapter 2:

H0: μ_y + σ_w²/2 + σ_b z_{1-A} ≥ ln(OEL)    vs.    H1: μ_y + σ_w²/2 + σ_b z_{1-A} < ln(OEL) .

The left side of the inequality in H0 is the logarithm of the 100(1 - A)-th percentile of the μ_xi distribution. Our test poses a single scalar restriction upon the parameter space under the equality condition in H0. If we define the parameter vector γ' = (μ_y, σ_w², σ_b²), we may represent this scalar restriction as follows:


R(γ) = μ_y + σ_w²/2 + σ_b z_{1-A} - ln(OEL) = 0 .    (3.3.2.1)

A Wald-type Test

To apply the Wald methodology, we utilize the vector T(γ) = {∂R(γ)/∂μ_y, ∂R(γ)/∂σ_w², ∂R(γ)/∂σ_b²} = {1, 1/2, z_{1-A}/(2σ_b)}, and the inverse of the Fisher expected information matrix, I⁻¹(γ) (Searle et al., 1992, pg. 86). The Wald test statistic against the two-sided alternative hypothesis (θ ≠ A) is

W = {R(γ̂)}²/Δ(γ̂) ,

where γ̂ is the unrestricted MLE of γ, and Δ(γ) = T(γ) I⁻¹(γ) T'(γ) (Rao, 1965). Since we have a scalar restriction, the test statistic may be written in the following intuitively appealing form:

W = {R(γ̂)}² / V̂{R(γ̂)} ,    (3.3.2.2)

where V̂{R(γ̂)} is the estimated variance of R(γ̂) based on a second-order multivariate Taylor series expansion. For large k, the distribution of W approaches χ₁² under H0. To amend this test for the one-sided alternative (θ < A) of interest, we define a Wald-type test statistic as follows:

Z_w = sign{R(γ̂)} √W .    (3.3.2.3)

As k gets large, maximum likelihood theory dictates that the distribution of Z_w approaches N(0,1) under the equality condition in H0. Hence, for an approximately size α test, one may reject H0: θ ≥ A in favor of H1: θ < A when Z_w < z_α.

Although a standard Wald test is based on MLEs with attendant large-sample attributes, a major consideration in the current context is the fact that obtaining personal exposure measurements is relatively time consuming and costly, ensuring that typical samples will seldom be "large" in a statistical sense. With this sample size limitation in mind, empirical studies show that using the ANOVA estimator (and functions thereof) for σ_b² in R(γ̂) and V̂{R(γ̂)} affords a significant improvement in terms of the adequacy of the standard normal reference distribution for practical sample sizes. Hence, the estimators we use for the elements of γ are as follows: μ̂_y = Ȳ.., the overall mean of the logged exposures (this is the MLE as well as the ordinary and generalized least squares estimator of μ_y); and, provided that σ̂_b² > 0, then


σ̂_w² = MSW  and  σ̂_b² = (MSB - MSW)/n ,

as given in section 2.1.1. We treat the case σ̂_b² < 0 separately in section 3.3.4. Note that there is no problem asymptotically with the use of σ̂_b² in place of the MLE, since for balanced data the ANOVA estimators are asymptotically equivalent to the MLEs. The ANOVA estimators are also UMVU, which may explain their better empirical performance.

In terms of the estimated variances and covariances of the elements (μ̂_y, σ̂_w², σ̂_b²), the Taylor series-based expression V̂{R(γ̂)} in (3.3.2.3) may be written as

V̂{R(γ̂)} = V̂(μ̂_y) + (1/4)V̂(σ̂_w²) + [z²_{1-A}/(4σ̂_b²)]V̂(σ̂_b²) + Ĉov(μ̂_y, σ̂_w²) + [z_{1-A}/σ̂_b]Ĉov(μ̂_y, σ̂_b²) + [z_{1-A}/(2σ̂_b)]Ĉov(σ̂_w², σ̂_b²) .

We note that unbiased estimators are available for the variance and covariance terms in the above expression (Searle et al., 1992). However, preliminary empirical work reveals that the resulting Wald-type statistic actually performs somewhat better when we simply estimate these terms by using the ANOVA estimators in place of the unknown variance components in the theoretical expressions for these covariances (also see the related comment in section 3.4). An overall algebraic expression for V̂{R(γ̂)} after adopting this practice is

V̂{R(γ̂)} = {2n(n-1)(k-1)σ̂_b²τ̂ - 2n(k-1)σ̂_b σ̂_w⁴ z_{1-A} + n²(k-1)σ̂_b²σ̂_w⁴ + z²_{1-A}[k(n-1)τ̂² + (k-1)σ̂_w⁴]} / {2kn²(n-1)(k-1)σ̂_b²} ,    (3.3.2.4)

where τ̂ = (nσ̂_b² + σ̂_w²), the estimated expected mean square between workers, and σ̂_b = (σ̂_b²)^(1/2). Note that a simple point estimate of θ may be computed as

θ̂ = 1 - Φ{[ln(OEL) - μ̂_y - σ̂_w²/2]/σ̂_b} .

This estimate can be informative in conjunction with the chosen test statistic, provided that σ̂_b² > 0.
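For the balanced case with σ̂_b² > 0, the computations above can be sketched end to end as follows. This is our own minimal illustration (the function name and toy data are invented), forming the ANOVA estimates, the estimated restriction R(γ̂), the variance (3.3.2.4), and the resulting Z_w and θ̂:

```python
# Our own sketch of the ANOVA-based Wald-type test (3.3.2.3) with variance (3.3.2.4).
import math
from statistics import NormalDist

def wald_exceedance_test(y, oel, A=0.05):
    """y: k lists of n logged exposures (balanced); returns (Z_w, theta_hat)."""
    k, n = len(y), len(y[0])
    ybar_i = [sum(row) / n for row in y]
    ybar = sum(ybar_i) / k
    msw = sum((v - ybar_i[i]) ** 2 for i, row in enumerate(y) for v in row) / (k * (n - 1))
    msb = n * sum((m - ybar) ** 2 for m in ybar_i) / (k - 1)
    sw2, sb2 = msw, (msb - msw) / n                    # ANOVA estimators
    if sb2 <= 0:
        raise ValueError("negative ANOVA estimate; see section 3.3.4")
    sb = math.sqrt(sb2)
    z1A = NormalDist().inv_cdf(1 - A)
    tau = n * sb2 + sw2
    R = ybar + sw2 / 2 + sb * z1A - math.log(oel)      # estimated restriction
    num = (2 * n * (n - 1) * (k - 1) * sb2 * tau
           - 2 * n * (k - 1) * sb * sw2 ** 2 * z1A
           + n ** 2 * (k - 1) * sb2 * sw2 ** 2
           + z1A ** 2 * (k * (n - 1) * tau ** 2 + (k - 1) * sw2 ** 2))
    varR = num / (2 * k * n ** 2 * (n - 1) * (k - 1) * sb2)
    z_w = math.copysign(math.sqrt(R ** 2 / varR), R)   # signed square root of W
    theta_hat = 1 - NormalDist().cdf((math.log(oel) - ybar - sw2 / 2) / sb)
    return z_w, theta_hat

z_w, theta_hat = wald_exceedance_test(
    [[0.0, 0.1, -0.1], [1.0, 1.1, 0.9], [2.0, 2.1, 1.9],
     [3.0, 3.1, 2.9], [4.0, 4.1, 3.9]],
    oel=math.exp(10.0))
```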

Restricted MLEs

For both the LR- and score-type tests, we require restricted ML estimates of the unknown parameters in the one-way random effects ANOVA setting. The applicable restriction is the equality condition under H0 (namely θ = A), given in (3.3.2.1). Under this restriction, we wish to maximize


the multivariate normal log likelihood (Searle et al., 1992, pg. 80), which (excluding a known constant term) is given by

ℓ = -(k/2)ln(τ) - [k(n-1)/2]ln(σ_w²) - SSB/(2τ) - SSW/(2σ_w²) - nk(Ȳ.. - μ_y)²/(2τ) .    (3.3.2.5)

Again, k is the number of workers and n the number of measurements in the (balanced) design, SSB and SSW are the familiar between- and within-subject sums of squares, and τ = (nσ_b² + σ_w²) is the expected mean square between workers.

The maximization of (3.3.2.5) subject to (3.3.2.1) may be achieved by finding the simultaneous solution of a set of nonlinear equations derivable using the method of Lagrangian multipliers (e.g., Taylor, 1955). For implementation, we first define V as follows:

V = ℓ + λ[R(γ)] ,

with λ a constant (the Lagrangian multiplier). To obtain the restricted MLEs, we simultaneously solve the following 4 equations:

∂V/∂λ = R(γ) = 0 ,    (3.3.2.6)

∂V/∂μ_y = kn(Ȳ.. - μ_y)/τ + λ = 0 ,    (3.3.2.7)

∂V/∂σ_b² = -nk/(2τ) + nSSB/(2τ²) + n²k(Ȳ.. - μ_y)²/(2τ²) + λz_{1-A}/(2σ_b) = 0 ,    (3.3.2.8)

∂V/∂σ_w² = -k/(2τ) - k(n-1)/(2σ_w²) + SSB/(2τ²) + SSW/(2σ_w⁴) + nk(Ȳ.. - μ_y)²/(2τ²) + λ/2 = 0 .    (3.3.2.9)

After some algebraic manipulations, we can eliminate λ and reduce the 3 equations (3.3.2.7)-(3.3.2.9) to 2 equations. Written in terms of the observed data and μ_y, σ_w², and σ_b, these are:

(3.3.2.10)

and

(3.3.2.11)

When solved together with (3.3.2.6), they yield the desired restricted MLEs.

To simultaneously solve (3.3.2.6), (3.3.2.10), and (3.3.2.11), we apply a Newton-Raphson


algorithm (e.g., Press et al., 1986). First, let us denote the left sides of these three equations, respectively, as f₁, f₂, and f₃. Combining these into a vector-valued function of the unknown parameter vector ψ = (μ_y, σ_w², σ_b)', we have

f(ψ) = (f₁, f₂, f₃)' .

For the Newton-Raphson approach, we require the (3 × 3) Jacobian matrix J(ψ) = {J_ij}, (i, j = 1, 2, 3), where J_i1 = ∂f_i/∂μ_y, J_i2 = ∂f_i/∂σ_w², and J_i3 = ∂f_i/∂σ_b (i = 1, 2, 3). These 9 derivatives are:

J₁₁ = ∂f₁/∂μ_y = 1

J₁₂ = ∂f₁/∂σ_w² = 1/2

J₁₃ = ∂f₁/∂σ_b = z_{1-A}

J₂₁ = ∂f₂/∂μ_y = σ_w²(z_{1-A} - nσ_b)

J₂₂ = ∂f₂/∂σ_w² = σ_b{σ_w²(n-1) - SSW/k} + (n-1)σ_b τ - 2σ_w²(Ȳ.. - μ_y)(z_{1-A} - nσ_b)

J₂₃ = ∂f₂/∂σ_b = {σ_w²(n-1) - SSW/k}(3nσ_b² + σ_w²) + nσ_w⁴(Ȳ.. - μ_y)

J₃₁ = ∂f₃/∂μ_y = -2nσ_b(Ȳ.. - μ_y) + z_{1-A}τ

J₃₂ = ∂f₃/∂σ_w² = -[σ_b + z_{1-A}(Ȳ.. - μ_y)]

J₃₃ = ∂f₃/∂σ_b = {SSB/k + n(Ȳ.. - μ_y)²} - 2nσ_b[σ_b + z_{1-A}(Ȳ.. - μ_y)] - τ .

Given an initial guess ψ̂₀* (typical unrestricted parameter estimates serve well here), the Newton-Raphson iterative algorithm produces subsequent estimates via

ψ̂*_{r+1} = ψ̂*_r - J⁻¹(ψ̂*_r) f(ψ̂*_r) ,    (3.3.2.12)

where the "*" superscript is used to denote that the estimate is made under the restriction (3.3.2.6).


The final solution vector [ψ̂* = (μ̂_y*, σ̂_w*², σ̂_b*)', the restricted MLE of ψ] is obtained once, for some r, all elements of the (r + 1)-th estimate lie within a specified small distance (tolerance) of the r-th estimate. It is straightforward to implement this algorithm using common statistical software, such as the SAS IML procedure (SAS Institute, Inc., 1989). Provided that σ̂_b* > 0 (see section 3.3.4), we compute the restricted estimate of the between-worker variance as σ̂_b*² = (σ̂_b*)².
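The update (3.3.2.12) is ordinary Newton-Raphson. The following generic sketch is our own illustration of the iteration, using a finite-difference Jacobian and a stand-in two-equation system (not the restricted-likelihood equations themselves):

```python
# Generic Newton-Raphson iteration of the form (3.3.2.12); the demo system
# below is a stand-in with known root (1, 2), not the dissertation's equations.
import numpy as np

def newton_raphson(f, psi0, tol=1e-10, max_iter=100):
    """Iterate psi <- psi - J^{-1} f(psi) until max|f| < tol."""
    psi = np.asarray(psi0, dtype=float)
    for _ in range(max_iter):
        fv = np.asarray(f(psi))
        if np.max(np.abs(fv)) < tol:
            return psi
        h = 1e-6
        J = np.empty((len(fv), len(psi)))
        for j in range(len(psi)):           # central-difference Jacobian
            e = np.zeros_like(psi)
            e[j] = h
            J[:, j] = (np.asarray(f(psi + e)) - np.asarray(f(psi - e))) / (2 * h)
        psi = psi - np.linalg.solve(J, fv)  # the Newton-Raphson update
    return psi

demo = lambda p: [p[0] + p[1] - 3.0, p[0] ** 2 - p[1] + 1.0]
root = newton_raphson(demo, [0.5, 0.5])
```

In practice one would supply f₁, f₂, and f₃ from the restricted-likelihood system and, as noted above, the analytic Jacobian entries in place of the finite-difference approximation.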

Likelihood Ratio- and Score-type Test Statistics

Once we have obtained the restricted MLE vector γ̂* = (μ̂_y*, σ̂_w*², σ̂_b*²)', we compute LR- and score-type test statistics by means of an adjustment to the usual likelihood ratio and score statistics (Rao, 1965) which would apply were the alternative hypothesis two-sided. The Wald test statistic (W) for this situation is given in (3.3.2.2).

The likelihood ratio test statistic is given by

L̃ = 2{ℓ(γ̂) - ℓ(γ̂*)} .    (3.3.2.13)

To assure that L̃ ≥ 0, γ̂ must be the true unrestricted MLE of γ in this case, so that the estimator for σ_b² in computing L̃ differs slightly from the ANOVA estimator used in conjunction with the Wald-type statistic Z_w. In particular, this MLE of σ_b² is {(k-1)MSB - kMSW}/(nk) (see section 2.1.1).

The score statistic is given by

S̃ = S'(γ̂*) I⁻¹(γ̂*) S(γ̂*) ,    (3.3.2.14)

where S(γ̂*) = (∂ℓ/∂μ_y, ∂ℓ/∂σ_w², ∂ℓ/∂σ_b²)' is the score vector and I(γ̂*) the expected information matrix, both evaluated at the restricted MLE γ̂*. An algebraic expression equivalent to (3.3.2.14) is

S̃ = [z_{1-A}(Ȳ.. - μ̂_y*)/(2τ̂*σ̂_b*)][kn(Ȳ.. - μ̂_y*)² + SSB - kτ̂*]/τ̂* + [k(Ȳ.. - μ̂_y*)²/τ̂*]{n + σ̂_w*⁴(z_{1-A} - nσ̂_b*)²/[2(n-1)τ̂*σ̂_b*²]} .

Asymptotically (as k → ∞) under the equality condition in H0, W, L̃, and S̃ are identically distributed as χ₁² random variates (Rao, 1965).

The Wald-type statistic relevant to the one-sided alternative [H1: θ < A] derives from W (3.3.2.2) by first determining whether the estimated restriction R(γ̂) = [μ̂_y + σ̂_w²/2 + σ̂_b z_{1-A} - ln(OEL)] is positive or negative. The resulting test statistic Z_w (3.3.2.3) is given by -√W if R(γ̂) < 0, and by +√W if R(γ̂) > 0 [recall that we form R(γ̂) and W using functions of the ANOVA


estimator σ̂_b²]. As mentioned previously, the asymptotic distribution of Z_w under the equality condition in H0 is standard normal. To define LR- and score-type test statistics based on L̃ and S̃, respectively, we make an identical adjustment. In other words, we define the LR-type statistic (Z_l) and the score-type statistic (Z_s) by applying the same (positive or negative) square root transformation described above, based on R(γ̂), to the respective two-sided counterparts L̃ and S̃. In so doing, we remain consistent while ensuring that Z_l and Z_s have the same asymptotic standard normal distribution as Z_w under the equality condition in H0.

3.3.3 Sample Size Approximation

For a set of assumed values for n, σ_b², σ_w², A, OEL, α, and θ₁ (< A), it is straightforward to derive an approximate formula for the minimum number of workers (k) that are theoretically required to achieve power of at least (1 - β) when testing H0: θ = A versus H1: θ < A using the Wald-type test statistic Z_w of (3.3.2.3). In particular, we require

κ/(V₁)^(1/2) ≥ z_{1-α} + z_{1-β}

for large samples, where κ = (z_{1-θ₁} - z_{1-A})σ_b, and V₁ is formed by plugging the values of σ_b² and σ_w² in place of σ̂_b² and σ̂_w² in expression (3.3.2.4). An approximate solution may be expressed succinctly as

k ≥ C₁(z_{1-α} + z_{1-β})²/κ² ,    (3.3.3.1)

where C₁ = kV₁ does not depend appreciably on k.

Rappaport et al. (1995a) show that one can use the above sample size formula by supplying a value for the ratio of the population mean exposure to the OEL (denoted as r = μ_x/OEL) instead of a value θ₁ for θ. This alternative representation is based on the equality r = exp{σ_b²/2 - σ_b z_{1-θ}}. One can apply the above formula for k after computing the corresponding value of θ₁ using the expression

θ₁ = 1 - Φ{[σ_b²/2 - ln(r)]/σ_b} .


This latter approach may have more intuitive appeal, as users may be more comfortable specifying values of the ratio μ_x/OEL under H1 than values of θ₁.

The sample size computation in (3.3.3.1) involves solving for k assuming a fixed value for n. Alternatively, one could solve for k for a range of values of n to find that combination of n and k minimizing the total number of measurements N (= nk).

The derivation of an approximate sample size formula based on Z_w (3.3.2.3) is possible because the distribution of Z_w under H1 is easily obtained. This is not the case with the LR- and score-type statistics (Z_l and Z_s, respectively), so the resulting formula is only a rough approximation with respect to those procedures.
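The fixed-n computation, and the search over n for the combination minimizing N = nk, can be sketched as below. This is our own illustration based on the Z_w power condition, with V₁ evaluated from (3.3.2.4); all numerical inputs are invented:

```python
# Our own sketch of the sample-size computation for the Wald-type test Z_w,
# with V1 from (3.3.2.4) and a search over n to minimize N = n*k.
import math
from statistics import NormalDist

ndist = NormalDist()

def var_R(k, n, sb2, sw2, z1A):
    """V1: expression (3.3.2.4) evaluated at the assumed variance components."""
    sb = math.sqrt(sb2)
    tau = n * sb2 + sw2
    num = (2 * n * (n - 1) * (k - 1) * sb2 * tau
           - 2 * n * (k - 1) * sb * sw2 ** 2 * z1A
           + n ** 2 * (k - 1) * sb2 * sw2 ** 2
           + z1A ** 2 * (k * (n - 1) * tau ** 2 + (k - 1) * sw2 ** 2))
    return num / (2 * k * n ** 2 * (n - 1) * (k - 1) * sb2)

def k_needed(n, sb2, sw2, A, theta1, alpha=0.05, power=0.80, k_max=10000):
    """Smallest k with kappa / sqrt(V1) >= z_{1-alpha} + z_{1-beta}."""
    kappa = (ndist.inv_cdf(1 - theta1) - ndist.inv_cdf(1 - A)) * math.sqrt(sb2)
    target = ndist.inv_cdf(1 - alpha) + ndist.inv_cdf(power)
    z1A = ndist.inv_cdf(1 - A)
    for k in range(2, k_max + 1):
        if kappa / math.sqrt(var_R(k, n, sb2, sw2, z1A)) >= target:
            return k
    return None

k3 = k_needed(3, 0.3, 0.8, 0.10, 0.02)                 # k for a fixed design n = 3
best = min((n * k_needed(n, 0.3, 0.8, 0.10, 0.02), n)  # (minimal N, best n)
           for n in range(2, 9))
```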

3.3.4 Handling negative ANOVA estimates of σ_b²

As mentioned in section 1.5.2, a difficulty that can arise in the application of random effects ANOVA models is the occurrence of a negative ANOVA variance component estimate. One "remedy" is simply to drop the random effect associated with the negative estimate and fit the reduced model. In the one-way random effects setting, this strategy would amount to assuming that the N = nk personal exposures represent a random sample from a lognormal distribution, so that the state of nature "μ_x < OEL" would be equivalent to "θ = 0", and one would assess regulatory compliance by testing the population mean exposure level using methods such as those explored in section 3.2.2. Lyles et al. (1995a) argue against such a simplification, for specific reasons that they discuss. In particular, it is possible for θ to be quite large, even though the population mean (μ_x) is well below the OEL. The problem is not so much in assuming that the data resemble a random sample, because the event (σ̂_b² < 0) is indeed indicative of a population correlation coefficient [ρ = σ_b²/(σ_b² + σ_w²)] that is not very different from 0 [Searle et al. (1992)]. Rather, the problem is more in assuming that σ_b² is itself identically 0, based only on a negative point estimate.

In maintaining faith in the one-way random effects ANOVA model, most researchers would probably take 0 as the estimate of σ_b² if the ANOVA estimate σ̂_b² ≤ 0. There is a simple expression (as a function of n, k, and φ) for the probability that σ̂_b² ≤ 0 under this model (Searle et al., 1992, pg. 67). In the occupational exposure setting, this probability can be fairly large due to the apparent tendency for φ to be somewhat low in typical applications (Kromhout et al., 1993) and to the desire to keep sample sizes at a minimum in light of the expense involved in measuring exposures.

As it happens, it is also possible to obtain negative (or identically 0) restricted estimates (σ̂_b*) by means of the Newton-Raphson algorithm (see equation 3.3.2.12). Following standard practice, we may obtain "true" restricted MLEs by constraining σ̂_b* to be non-negative. To do so, we set the negative estimate to 0; to maximize ℓ under this condition then requires the following adjustments to

95

Page 105: II - NC State Department of · PDF fileLarge sample-based test statistics suited to ... 5.2.3 Sample calculations under Model II 158 5.2.4 Sample calculations for ... Table 2.6: MSEP

..

and

• * 2

it; = In(OEL) - 0"2' .

(3.3.4.1 )

(3.3.4.2)

The question is: how does the practice of setting the negative estimate (σ̂_b² or σ̃_b²*) to 0 affect the test statistics Z_w, Z_l, and Z_s? Unfortunately, setting σ̂_b² equal to 0 makes Z_w = 0 in (3.3.2.3), regardless of the values of the other parameter estimates. While the use of (3.3.4.1) and (3.3.4.2) in the computation of Z_l and Z_s does not yield similar pathological results, empirical studies show that these test statistics (especially Z_l) tend to be unstable under H_0 when the original unrestricted ANOVA estimate (σ̂_b²) is negative. The event "σ̂_b² < 0" sometimes, but not always, occurs in conjunction with the event "σ̃_b²* < 0".

Based on these findings, we seek an alternative testing procedure that may be utilized in place of Z_w, Z_l, and Z_s in the event that σ̂_b² < 0. Rappaport et al. (1995a) suggest a simple ad hoc procedure that provides a rough solution to the problem, but cannot be used in all circumstances. The procedure developed here is more intuitive and, in practice, performs better overall. It is based on an adaptation of existing methods for assessing the level of the mean of a lognormal distribution based on a random sample (section 3.2.2), in the following way.

The null and alternative hypotheses H_0: θ ≥ Λ and H_1: θ < Λ can be written equivalently as H_0: μ_x ≥ ηOEL versus H_1: μ_x < ηOEL, where η = exp{σ_b²/2 − σ_b z_{1−Λ}}. Since σ̂_b² < 0 suggests a small intraclass correlation, applying methods appropriate for testing whether the population mean (μ_x) of a random sample of lognormal measurements is less than ηOEL should provide a reasonable test of H_0: θ ≥ Λ versus H_1: θ < Λ in that case. Note that η = 1 only if σ_b² is identically 0.

Since σ_b² is unknown, we must estimate η. A conservative estimate is a lower bound on η, which we define by

η̂₁ = exp{σ̂²_b,.95/2 − z_{1−Λ}(σ̂²_b,.95)^{1/2}}.  (3.3.4.3)

The appropriate choice for σ̂²_b,.95 depends upon the parabolic expression within the brackets. It is determined as follows: (i) σ̂²_b,.95 is an approximate 95% upper bound on σ_b², provided that its value falls below z²_{1−Λ}; (ii) if the approximate 95% upper bound on σ_b² falls above z²_{1−Λ}, then set σ̂²_b,.95 equal to z²_{1−Λ}. One can estimate the upper bound using the approximation due to Williams (1962) (see 1.5.2.7); in particular, it is appropriate to use the upper limit of a 90% confidence interval computed according to that approximation. Provided (i) is the case, then, with approximately 95% certainty, a test of

H_0′: μ_x ≥ η̂₁OEL

vs.

H_1′: μ_x < η̂₁OEL

based on a procedure assuming a random sample from a lognormal distribution should be at least as stringent (in terms of the evidence required in the data to reject H_0′) as testing H_0: θ ≥ Λ versus H_1: θ < Λ. We qualify the above statement with "approximately", because the upper bound is approximate and because the data are considered to be "like" a random sample [since (σ̂_b² < 0) gives statistically significant evidence that ρ is small]. If (ii) occurs, then η̂₁ is as conservative as possible based on equation (3.3.4.3); however, at least based on indications of the study by Kromhout et al. (1993), (i) should occur most often unless sample sizes are quite low.

Any one of a number of procedures (see section 3.2.2) could be applied to address H_0′. Based on the test statistic (3.2.2.2), we may consider the following rejection rule (for occasions when σ̂_b² < 0): reject H_0: θ ≥ Λ in favor of H_1: θ < Λ if and only if T = Ȳ.. + d̂S < ln(η̂₁OEL), where

d̂ = …

Here, S² is the usual sample variance of all N = nk measurements, and χ²_{N−1,α} and t_{N−1,1−α} are as defined in section 3.2.2. If σ̂²_b,.95 ≤ 0 (which is, in fact, possible), then we set η̂₁ to 1. Only in this extreme case would one indeed be applying a test of the worker population mean exposure level (μ_x) against the OEL, assuming all observations to be independent, as a surrogate for testing H_0: θ ≥ Λ versus H_1: θ < Λ.
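Because the bracketed exponent σ̂²_b,.95/2 − z_{1−Λ}(σ̂²_b,.95)^{1/2} is a parabola attaining its minimum at σ̂²_b,.95 = z²_{1−Λ}, rules (i) and (ii) amount to capping the upper bound at z²_{1−Λ}. A minimal sketch of the computation of η̂₁ (the function name is ours; the approximate 95% upper bound on σ_b² is assumed to be supplied externally, e.g., from Williams' approximation):

```python
import math
from scipy.stats import norm

def eta1_lower_bound(sigma2b_upper95, Lam=0.05):
    """Conservative lower bound eta1-hat on eta = exp(sigma_b^2/2 - sigma_b*z_{1-Lam}),
    per the capping rules (i)-(ii): truncate the 95% upper bound on sigma_b^2
    at z_{1-Lam}^2, where the exponent attains its minimum."""
    z = norm.ppf(1 - Lam)
    if sigma2b_upper95 <= 0:
        return 1.0                        # extreme case: eta1 is set to 1
    s2 = min(sigma2b_upper95, z * z)      # cases (i) and (ii)
    return math.exp(s2 / 2 - z * math.sqrt(s2))
```

The returned bound lies in (0, 1] and decreases as the upper bound on σ_b² grows, until the cap at z²_{1−Λ} takes effect.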

3.3.5 Simulation Study of Test Statistics for Balanced Case

Since we are utilizing the asymptotic distribution (under H_0) of the proposed test statistics to obtain critical values, a simulation study to assess type I error rates for "practical" sample sizes is warranted. In addition, it is of interest to compare the power of the procedures based on Z_w, Z_l, and Z_s, and to determine the adequacy of the proposed sample size formula (3.3.3.1) as it applies to the Wald-type test and the extent to which it serves as an approximation for the other tests. The


rejection rules that we have constructed in the previous sections dictate that any overall differences in performance among the LR-, score-, and Wald-type tests are due only to cases for which σ̂_b² > 0.

Adjustment to Critical Values

As would be expected, the standard normal reference distribution for Z_w, Z_l, and Z_s is not entirely adequate for the relatively modest sample sizes that we would expect to see in practice. In fact, it is fairly common for Wald tests to be anticonservative in such situations, while score-type tests tend to be comparatively conservative (for example, see the discussion of the simulation results in section 3.2.2). Simulation results to evaluate the type I error rates of the three tests are consistent with these general tendencies. In particular, preliminary studies reveal that the type I error rates of both the Wald- and LR-type tests show a marked tendency to exceed the nominal level when the population parameter φ = σ_b²/σ_w² becomes relatively large. Since it is important from a regulatory standpoint to maintain control of the specified type I error rate α, we contemplate the following simple adjustment (based on the unrestricted parameter estimate φ̂ = σ̂_b²/σ̂_w²) when using Z_w or Z_l:

    If φ̂ > 0.5, then reject at the α* = α/2 level.

This simply entails utilizing z_{α*} as the critical value, as opposed to z_α, when assessing the value of the test statistics Z_w and Z_l. We make no such adjustment attendant with the use of Z_s.

Results

Tables 3.4 and 3.5 summarize the results of simulations covering wide ranges (based on most published occupational exposure data) of φ and σ_w². Each row of the tables gives the values of φ and σ_w² used to generate the data conforming to the one-way random effects ANOVA model; in all cases, we take μ_y = −2. Each row also provides the approximated values of n and k [chosen to minimize N = nk according to the sample size formula (3.3.3.1)]. If φ ≥ 0.5 in the table, then the sample size formula was applied using α* = α/2 to account for the conservative adjustment above based on φ̂, which was implemented in the simulations with respect to Z_w and Z_l. The columns of the tables report, for all three test procedures, the observed overall proportions of cases for which H_0 was rejected out of 5000 independent data sets simulated under the equality condition in H_0 (α̂) and out of 5000 separate data sets simulated under H_1 (1 − β̂). For Table 3.4, we assume a true value of θ = 0.002 for the simulations under H_1, we apply the sample size formula (3.3.3.1) for nominal power of 0.60, and we take Λ = 0.05. For Table 3.5, we assume under H_1 that the ratio τ = μ_x/OEL is 0.2, we apply the alternative version of the sample size approximation for nominal power of 0.80, and we take Λ = 0.10. In all cases, we assume α = 0.05.

With regard to Tables 3.4 and 3.5, it is important to note that the overall empirical estimates [α̂ and (1 − β̂)] actually represent a weighted average of observed estimates conditional on whether or not σ̂_b² < 0; in all cases, the procedure detailed in section 3.3.4 was utilized if σ̂_b² < 0. The observed percentage of data sets for which this occurred for the 5000 simulations used in obtaining α̂ is reported in each row of the tables.

For fixed θ, Table 3.4 reflects the fact that the minimal required sample size based on (3.3.3.1) tends to decrease as the value of φ increases, and that it remains fairly consistent for a particular value of φ. On the other hand, we see from Table 3.5 that the required sample size can be highly variable for a given value of φ when one fixes τ = μ_x/OEL. The nature of this tendency seems to depend upon whether φ is large or small, and it is due to the complicated relationship among τ, θ, and the two variance components.

Given the infeasibility of obtaining exact nominal type I error rates, the ability to control these rates to be at or below the nominal level is arguably the most important feature of a successful hypothesis testing procedure in the regulatory setting under consideration. This is achieved by use of the score-type statistic (Z_s) in every case depicted in Tables 3.4 and 3.5. With only minor exceptions, the same may be said of the Wald-type statistic (Z_w, equation 3.3.2.3), as long as φ ≤ 0.75. The LR-type statistic (Z_l) occasionally permits excursions above the nominal level, although these excursions are small (always less than 2%). The only major excursions occur in Table 3.4, with the use of Z_w for large values of φ (≥ 2.5); these occur despite the fact that α* = α/2 was operable in essentially all of these cases.

For very small values of φ, it is often the case that the observed type I error rates are considerably less than nominal; this is most evident in the first five rows of Table 3.5. This effect is in part due to the conservative nature of the alternative procedure discussed in section 3.3.4 (for σ̂_b² < 0). However, the procedure is quite sensitive to departures from H_0, so that this is generally not a great detriment to power. As might be expected, the alternative procedure is less conservative as φ increases, since its derivation assumes that φ is small; however, this effect is countered by a decreasing probability that σ̂_b² < 0.

The trade-off with regard to the score-type test is evident in Tables 3.4 and 3.5, since it is, at times, by far the least powerful. This effect seems to be most pronounced for relatively small k; at its most extreme, it led to essentially no rejections under H_1 for the data sets summarized in the first row of Table 3.5 (except for the cases for which σ̂_b² < 0). On the other hand, the score-type test often displays competitive power for larger numbers of subjects, and is occasionally the most powerful (in Table 3.5) when a large value of φ forces the conservative adjustment to the critical values for Z_w and Z_l.

In light of the complexity of the problem, the sample size formula (3.3.3.1) appears to provide quite a good approximation for the Wald-type test, which generally achieves power near the nominal level. It is also relatively accurate for the LR-type test. Tables 3.4 and 3.5 indicate that additional sample size will be required for the score-type test, unless k is relatively large (say, > 40).

Table 3.4: Results of Simulations Assessing Performance of Proposed Test Statistics for Balanced Data
(μ_y = −2, θ = 0.002, Λ = 0.05, α = 0.05, power = 0.60)

 φ^a   σ_w²   n    k    Wald^b,c      LR            Score         % σ̂_b² < 0 ^d
 .1    0.5    7   26   .019 (.603)   .047 (.622)   .023 (.471)    6.0
 .1    1.0    7   26   .021 (.605)   .046 (.621)   .020 (.480)    5.9
 .1    3.0    6   32   .032 (.600)   .051 (.624)   .029 (.516)    6.7
 .25   0.5    3   25   .022 (.597)   .046 (.625)   .022 (.484)    7.1
 .25   1.0    3   24   .027 (.571)   .046 (.602)   .025 (.462)    7.8
 .25   3.0    2   37   .029 (.553)   .048 (.604)   .029 (.531)   11.3
 .5    0.5    2   26   .030 (.647)   .058 (.639)   .029 (.588)    4.5
 .5    1.0    2   25   .039 (.636)   .061 (.628)   .035 (.561)    4.9
 .5    3.0    2   26   .052 (.639)   .062 (.616)   .043 (.567)    4.9
 .75   0.5    2   19   .048 (.643)   .069 (.598)   .035 (.525)    2.8
 .75   1.0    2   19   .051 (.642)   .067 (.588)   .034 (.518)    3.0
 .75   3.0    2   20   .070 (.653)   .068 (.596)   .042 (.548)    2.3
 1.0   0.5    2   16   .057 (.630)   .067 (.550)   .031 (.455)    2.2
 1.0   1.0    2   16   .059 (.635)   .064 (.563)   .032 (.472)    2.0
 1.0   3.0    2   17   .071 (.650)   .062 (.569)   .036 (.517)    2.0
 2.5   0.5    2   12   .075 (.646)   .060 (.539)   .029 (.397)    —
 2.5   1.0    2   12   .075 (.663)   .058 (.552)   .030 (.416)    —
 2.5   3.0    2   13   .078 (.662)   .055 (.551)   .033 (.459)    —
 5.0   0.5    2   10   .092 (.647)   .063 (.526)   .028 (.332)    —
 5.0   1.0    2   10   .086 (.642)   .058 (.520)   .024 (.333)    —
 5.0   3.0    2   11   .075 (.656)   .048 (.541)   .028 (.408)    —

a: the variance ratio, σ_b²/σ_w²
b: first number in each cell is the observed proportion of Type I errors (5000 data sets)
c: number in parentheses is the observed proportion rejecting under H_1 (5000 data sets)
d: percentage of the 5000 data sets under H_0 with a negative variance component estimate ("—" indicates < 0.5%)

Table 3.5: Results of Simulations Assessing Performance of Proposed Test Statistics for Balanced Data
(μ_y = −2, τ = 0.20, Λ = 0.10, α = 0.05, power = 0.80)

 φ^a   σ_w²   n     k    Wald^b,c      LR            Score         % σ̂_b² < 0 ^d
 .1    0.5    3     5   .014 (.847)   .017 (.840)   .011 (.393)   43.1
 .1    1.0    3     9   .016 (.789)   .019 (.816)   .013 (.568)   36.7
 .1    3.0    4    25   .032 (.778)   .044 (.811)   .026 (.732)   17.0
 .25   0.5    2     7   .024 (.759)   .028 (.753)   .015 (.409)   34.2
 .25   1.0    2    15   .026 (.743)   .036 (.762)   .019 (.637)   23.1
 .25   3.0    2    67   .044 (.770)   .059 (.786)   .042 (.757)    5.1
 .5    0.5    2    10   .036 (.812)   .048 (.776)   .020 (.572)   16.8
 .5    1.0    2    24   .050 (.812)   .059 (.780)   .036 (.744)    5.3
 .5    3.0    2   114   .056 (.831)   .050 (.806)   .047 (.834)    —
 .75   0.5    2    13   .054 (.812)   .061 (.751)   .027 (.639)    6.4
 .75   1.0    2    32   .059 (.800)   .063 (.756)   .045 (.762)    0.8
 .75   3.0    2   125   .045 (.812)   .039 (.783)   .050 (.842)    —
 1.0   0.5    2    16   .063 (.806)   .065 (.739)   .032 (.678)    1.5
 1.0   1.0    2    41   .049 (.800)   .044 (.751)   .040 (.784)    —
 1.0   3.0    2   119   .041 (.801)   .032 (.769)   .051 (.836)    —
 2.5   0.5    2    38   .051 (.791)   .035 (.737)   .043 (.766)    —
 2.5   1.0    2    71   .043 (.790)   .034 (.754)   .048 (.805)    —
 2.5   3.0    2    49   .045 (.797)   .033 (.749)   .046 (.802)    —
 5.0   0.5    2    61   .044 (.783)   .034 (.742)   .045 (.788)    —
 5.0   1.0    2    56   .042 (.798)   .034 (.749)   .043 (.800)    —
 5.0   3.0    2    17   .059 (.792)   .038 (.711)   .037 (.690)    —

a: the variance ratio, σ_b²/σ_w²
b: first number in each cell is the observed proportion of Type I errors (5000 data sets)
c: number in parentheses is the observed proportion rejecting under H_1 (5000 data sets)
d: percentage of the 5000 data sets under H_0 with a negative variance component estimate ("—" indicates < 0.5%)

3.3.6 Derivation of a Wald-type test statistic for the unbalanced case

The three test statistics presented in section 3.3.2 allow for some methodological choices in the balanced case, and the simulation study in section 3.3.5 provides an aid for making such choices.

However, few existing databases contain completely balanced exposure data, and the practical issues


of environmental sampling make the attainment of balanced data problematic. Hence, in this section,

statistical details allowing for the application of the hypothesis testing strategy introduced in section

3.3.1 are provided for the unbalanced case. The null and alternative hypotheses remain identical. We

restrict our attention here to a Wald-type test, although likelihood ratio- and score-type alternatives

are certainly conceivable.

The Test Statistic

The Wald-type test statistic, generalized to the unbalanced case, has an identical functional representation as its counterpart from section 3.3.2, namely

Z_w = R(γ̂)/[V̂{R(γ̂)}]^{1/2},  (3.3.6.1)

where again γ̂ is the unrestricted MLE of γ. Again, ML theory dictates the standard normal

asymptotic reference distribution. Since, in the case of unbalanced data, the MLE for γ must be found iteratively (Searle et al., 1992), one must employ numerical methods such as those available in the SAS MIXED procedure (SAS Institute, Inc., 1992) to obtain it. For a standard ML theory-based test, these MLEs may then be utilized together with their estimated variances and covariances in the expression for V̂{R(γ̂)} given in terms of these quantities in section 3.3.2; Searle et al. (1992) give expressions for these variances and covariances based on the expected Fisher information matrix.

Based upon the empirical evidence cited in section 3.3.2, it is also worth contemplating an analogous adjustment, in which we calculate a Wald-type statistic by utilizing the ANOVA estimators in place of the MLEs in (3.3.6.1). These ANOVA estimators are given in section 2.1.1.
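For reference, the ANOVA (method-of-moments) estimators for the unbalanced one-way model can be sketched as follows, using the standard coefficient n₀ from Searle et al. (1992); this is an illustrative implementation, not the section 2.1.1 formulas verbatim:

```python
import numpy as np

def anova_estimates(groups):
    """ANOVA estimators (sigma_b^2-hat, sigma_w^2-hat) for unbalanced one-way
    random-effects data; sigma_b^2-hat may be negative, as discussed in the text."""
    k = len(groups)
    n_i = np.array([len(g) for g in groups], dtype=float)
    N = n_i.sum()
    means = np.array([np.mean(g) for g in groups])
    grand = np.concatenate([np.asarray(g, dtype=float) for g in groups]).mean()
    sse = sum(((np.asarray(g, dtype=float) - m) ** 2).sum()
              for g, m in zip(groups, means))
    s2_w = sse / (N - k)                              # within-worker mean square
    msb = (n_i * (means - grand) ** 2).sum() / (k - 1)
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)         # Searle et al.'s n0
    s2_b = (msb - s2_w) / n0
    return s2_b, s2_w
```

With balanced data (all n_i = n), n₀ reduces to n and the estimators coincide with the balanced-case ANOVA estimators.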

Denoting these estimators here simply as (σ̂_b², σ̂_w²), we may utilize them (as mentioned in section 2.2.5) to estimate the generalized least squares estimator of μ_y also given in section 2.1.1, i.e.,

μ̂_y = [Σ_{i=1}^{k} {Ȳ_i./v̂ar(Ȳ_i.)}] / [Σ_{i=1}^{k} {1/v̂ar(Ȳ_i.)}],

where v̂ar(Ȳ_i.) = (σ̂_b² + σ̂_w²/n_i). Defining γ̂′ = (μ̂_y, σ̂_b², σ̂_w²)′, we have

R(γ̂) = μ̂_y + σ̂_w²/2 + z_{1−Λ}σ̂_b − ln(OEL)

and


V̂{R(γ̂)} = â + b̂/4 + [z_{1−Λ}/(4σ̂_b)]·(2ĉ + z_{1−Λ}d̂/σ̂_b),

where σ̂_b = (σ̂_b²)^{1/2}, and â, b̂, ĉ, and d̂ are computed by utilizing σ̂_b² and σ̂_w² in the functional expressions for a, b, c, and d given in section 2.2.5. Recall from that section that â, b̂, ĉ, and d̂ are estimated variances and covariances, whose expressions are provided by Searle et al. (1992); also, for large samples, we have Cov(μ̂_y, σ̂_b²) = Cov(μ̂_y, σ̂_w²) = 0.
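The estimated GLS estimator above is a variance-weighted average of the worker-specific mean log-exposures. A minimal sketch (the function name is ours), assuming the ANOVA estimates σ̂_b² and σ̂_w² are in hand:

```python
import numpy as np

def gls_mu_y(worker_means, n_i, s2_b, s2_w):
    """Estimated GLS estimator of mu_y for unbalanced data: weight each
    worker's mean Ybar_i. by 1/var-hat(Ybar_i.), with
    var-hat(Ybar_i.) = s2_b + s2_w/n_i."""
    worker_means = np.asarray(worker_means, dtype=float)
    n_i = np.asarray(n_i, dtype=float)
    w = 1.0 / (s2_b + s2_w / n_i)    # inverse estimated variances
    return float(np.sum(w * worker_means) / np.sum(w))
```

When the data are balanced (all n_i equal), the weights are constant and the estimator reduces to the unweighted mean of the worker means.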

Assuming that the MLE for γ is utilized, the appropriate rejection rule (at least for large k) is "reject H_0: θ ≥ Λ in favor of H_1: θ < Λ if and only if Z_w < z_α". We will also adopt this rejection rule assuming the use of the ANOVA estimators, although the asymptotic theory is not as straightforward as it was in the balanced case. In particular (as mentioned in section 2.2.5), asymptotic normality of the vector γ̂ has not been established (Searle et al., 1992). We return to this issue briefly in section 6.2.5; however, for our purposes the empirical study in section 3.3.7 will suffice as a source of practical recommendations concerning the estimation of γ in conjunction with (3.3.6.1). The use of the ANOVA estimators has the appealing characteristic that, when the data are balanced, Z_w in (3.3.6.1) is identical to the previous Wald-type statistic (3.3.2.3).

Negative Estimates of σ_b²

As in the balanced case, negative ANOVA estimates of σ_b² are a possibility under Model I; likewise, the true MLE obtained from an iterative procedure such as SAS MIXED (SAS Institute, Inc., 1992) can be identically 0. The procedure detailed in section 3.3.4 for the balanced case is directly generalizable to the unbalanced case, and is recommended for the same intuitive reasons. Hence, we compute the estimated lower bound (η̂₁) as in equation (3.3.4.3), and reject H_0 if and only if T = Ȳ.. + d̂S < ln(η̂₁OEL), where d̂ is as given in section 3.3.4 and Ȳ.. and S are the sample mean and standard deviation of the N (logged) shift-long exposure measurements. The only difference is that the approximate 95% upper bound must be computed using methods appropriate for unbalanced data; for that, we recommend use of the upper limit of an approximate 90% confidence interval according to the method of Burdick and Eickman (1986) (see 1.5.2.8).

3.3.7 Simulation Study of Test Statistics for Unbalanced Case

In this section, we present a simulation study assessing and comparing the performances of the Wald-type test statistics (equation 3.3.6.1) computed using both the MLEs and the ANOVA estimates of the variance components. The same conditions as depicted in Table 3.4 for the balanced


case are considered, except that in this case unbalanced data sets are generated according to the following simple algorithm. First, we generate balanced data sets with k workers and n* logged exposure measurements per worker. The value of n* is chosen via

n* = (n − 1)/p* + 1,

where n is as displayed in Table 3.4 and p* = 0.50 in each instance. Then, independently for each of the k workers in each data set, n_i of the n* observations were retained, with n_i determined as

n_i = 1 + T_i,

where T_i ~ Binomial(n* − 1, p*). Each resulting simulated set of exposure data is unbalanced, with k workers and from 1 to n* measurements on each particular worker. By design, E(n_i) = n, and hence the expected total number of measurements on each simulated group of workers is N = nk. Hence, on average, the simulated data sets for the cases summarized in Table 3.6 had the same number of measurements as for the corresponding balanced case in Table 3.4. This allows for a meaningful assessment of power, given the fact that, for balanced data, N measurements should produce nominal power of roughly 0.60.
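The data-generating algorithm just described can be sketched directly (the function name and the fixed seed are ours):

```python
import numpy as np

rng = np.random.default_rng(2154)

def simulate_unbalanced(k, n, mu_y, s2_b, s2_w, p_star=0.5):
    """Generate one unbalanced data set per the algorithm above:
    start from n* = (n-1)/p* + 1 potential logged measurements per worker,
    then retain n_i = 1 + Binomial(n*-1, p*) of them, so that E(n_i) = n."""
    n_star = int(round((n - 1) / p_star + 1))
    data = []
    for _ in range(k):
        n_i = 1 + rng.binomial(n_star - 1, p_star)
        beta_i = rng.normal(0.0, np.sqrt(s2_b))   # worker random effect
        y = mu_y + beta_i + rng.normal(0.0, np.sqrt(s2_w), size=n_i)
        data.append(y)
    return data
```

For n = 7 and p* = 0.5, each worker receives between 1 and n* = 13 measurements, averaging 7 across workers.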

For each simulated data set, we compute the test statistic (3.3.6.1) two ways. First, we utilize MLEs from SAS PROC MIXED, together with their estimated large-sample dispersion matrix as given by Searle et al. (1992). Secondly, we compute Z_w utilizing the ANOVA estimators according to the formulae for R(γ̂) and V̂{R(γ̂)} presented in section 3.3.6. If the ANOVA estimator for σ_b² is negative, then the decision rule associated with the test based on the ANOVA estimators is determined according to the alternative procedure presented in section 3.3.4 and reviewed in the previous subsection. The same procedure is utilized for the decision rule based on the MLEs, if the MLE for σ_b² is 0. Finally, based on preliminary empirical work and on the empirical studies associated with the balanced case, we also recommend and utilize (for both Wald-type testing procedures) the same conservative adjustment as was presented in section 3.3.5, namely:

    If φ̂ > 0.5, then reject at the α* = α/2 level,

where φ̂ = σ̂_b²/σ̂_w² is computed using the ANOVA estimators.

For purposes of implementation, the results in Table 3.6 are based on 1000 simulated data sets in each case (instead of 5000 as were used for Tables 3.4 and 3.5). Also, for brevity, the intermediate value (namely, 1) of σ_w² considered in Tables 3.4 and 3.5 is omitted here.


Table 3.6: Results of Simulations Assessing Performance of Test Statistics for Unbalanced Data
(μ_y = −2, θ = 0.002, Λ = 0.05, α = 0.05)

 φ^a   σ_w²   E(n_i)^b    k    ML ests.^c,d   ANOVA ests.   % σ̂_b² < 0 ^e
 .1    0.5       7       26   .020 (.654)    .017 (.611)    6.9
 .1    3.0       6       32   .040 (.638)    .035 (.593)    6.2
 .25   0.5       3       25   .028 (.622)    .022 (.569)    7.8
 .25   3.0       2       37   .034 (.580)    .027 (.541)   11.4
 .5    0.5       2       26   .047 (.673)    .033 (.622)    3.0
 .5    3.0       2       26   .089 (.675)    .066 (.606)    4.4
 .75   0.5       2       19   .074 (.701)    .058 (.630)    1.9
 .75   3.0       2       20   .084 (.697)    .066 (.616)    2.3
 1.0   0.5       2       16   .069 (.678)    .036 (.596)    1.8
 1.0   3.0       2       17   .103 (.701)    .077 (.631)    1.6
 2.5   0.5       2       12   .115 (.726)    .080 (.641)    —
 2.5   3.0       2       13   .103 (.724)    .068 (.653)    —
 5.0   0.5       2       10   .113 (.730)    .083 (.640)    —
 5.0   3.0       2       11   .105 (.714)    .080 (.627)    —

a: the variance ratio, σ_b²/σ_w²
b: n_i's are determined as discussed in section 3.3.7
c: first number in each cell is the observed proportion of Type I errors (1000 data sets)
d: number in parentheses is the observed proportion rejecting under H_1 (1000 data sets)
e: percentage of the 1000 data sets under H_0 with a negative ANOVA variance component estimate ("—" indicates < 0.5%)

Results

The results depicted in Table 3.6 are similar to those in Table 3.4 for the balanced case, in that the Wald-type test procedure tends to be somewhat conservative for low values of φ, yet somewhat anticonservative for higher values of φ. The trend toward anticonservativeness appears most marked when we utilize the MLE for γ, for φ ≥ 0.5. Hence, Table 3.6 suggests that, despite its more solid theoretical foundation, the Wald-type test based on the use of the MLE may be less desirable for practical sample sizes than its counterpart which utilizes the ANOVA estimators, at least for values of φ nearing or exceeding 0.5. While it is slightly anticonservative in some of these situations, the test using the ANOVA estimators is significantly more conservative than the standard ML theory-based test; the power gains attendant with the latter test are largely an artifact.

An interesting and potentially useful feature of the Wald-type test using the ANOVA estimators is the fact that it delivers (at least for the cases depicted in Table 3.6) roughly the nominal power associated with the total sample size based on the approximation for balanced data (equation 3.3.3.1). This is despite the fact that the use of p* = 0.5 in the simulation algorithm described above allows for a fairly wide range of the worker-specific sample sizes (n_i) for any particular simulated data set. This fact may be quite beneficial, since no meaningful sample size approximation is really possible assuming a random pattern of unbalancedness in the data. The use of (3.3.3.1) prior to the collection of a sample that is "intended" to be nearly balanced should be quite acceptable in terms of assuring near nominal power (provided the assumed values for the unknown variance components in 3.3.3.1 are reasonable).

3.4 AN EXPOSURE ASSESSMENT METHOD ACCOUNTING FOR DAY-TO-DAY VARIABILITY

Clearly, all of the methodology discussed in section 3.3 in the context of the one-way RE

ANOVA model for (logged) shift-long exposure measurements is extendable to situations in which

Model II (equation 2.1.1.6) is deemed more appropriate to specifically account for day-to-day

variability as a portion of within-worker variability. In this section, we provide the necessary details

for applying Wald-type methodology to address under Model II the same hypothesis testing problem

as discussed in section 3.3. We again consider sample size approximation and the handling of

negative estimates of the between-worker variance component, and we provide an empirical study of

the proposed method.

3.4.1 Derivation of a test statistic under Model II

Again, consider the following null and alternative hypotheses:

H_0: θ ≥ Λ    vs.    H_1: θ < Λ,

where the parameter θ = Pr(μ_xi > OEL) denotes the probability that the mean exposure for an arbitrary worker exceeds the OEL for mean exposure. In this case, we assume that Model II (2.1.1.6) is operable, so that μ_xi = exp[μ_y + β_i + (σ_d² + σ_e²)/2], as in equation (2.1.1.10). In similar fashion as in section 3.3.2, we re-express the null and alternative hypotheses as

H_0: μ_y + (σ_d² + σ_e²)/2 + z_{1−Λ}σ_b ≥ ln(OEL)    vs.    H_1: μ_y + (σ_d² + σ_e²)/2 + z_{1−Λ}σ_b < ln(OEL).

Defining the parameter vector ω′ = (μ_y, σ_b², σ_d², σ_e²), we consider the following scalar restriction under H_0:

R(ω) = μ_y + (σ_d² + σ_e²)/2 + z_{1−Λ}σ_b − ln(OEL) = 0.  (3.4.1.1)

Exactly as described in section 3.3.2, we apply the Wald methodology by utilizing the vector

T(ω) = {∂R(ω)/∂μ_y, ∂R(ω)/∂σ_b², ∂R(ω)/∂σ_d², ∂R(ω)/∂σ_e²} = {1, z_{1−Λ}/(2σ_b), 1/2, 1/2},

and the inverse of the Fisher expected information matrix, I⁻¹(ω). The Wald test statistic against the two-sided alternative hypothesis (θ ≠ Λ) is

W = R(ω̂)²/A(ω̂),

where ω̂ is the MLE of ω, and A(ω̂) = T(ω̂)I⁻¹(ω̂)T′(ω̂). As before, a Wald-type test statistic amended for the one-sided alternative (θ < Λ) may be expressed as

Z_w′ = R(ω̂)/[V̂{R(ω̂)}]^{1/2},  (3.4.1.2)

where ω̂ is the MLE of ω and V̂{R(ω̂)} = A(ω̂) will be the estimated variance of R(ω̂) based on a second-order multivariate Taylor series expansion. Once again, as k gets large, maximum likelihood theory dictates that the distribution of Z_w′ approaches N(0, 1) under the equality condition in H_0.

As discussed in section 2.1.1, the MLE of the unknown parameter vector w under Model II is

not obtainable in closed form; of course, it is available along with the necessary estimated dispersion

matrix via common statistical software such as the SAS MIXED procedure (SAS Institute, Inc.,

1992). However, to facilitate sample size approximation and to remain consistent with the empirical

considerations of the previous sections, we consider an adaptation of (3.4.1.2) utilizing the ANOVA

estimators (see section 2.1.1) for the variance components in place of the MLEs. Since we are

considering the balanced case, these ANOVA estimators are UMVU and are hence asymptotically

equivalent to the MLEs, so that there is no necessary caveat (see section 3.3.6) with regard to the N(0, 1) asymptotic reference distribution.

In conjunction with the ANOVA estimators, we utilize their dispersion matrix (determined according to equation 1.5.2.2), in addition to Var(μ̂_y) = Var(Ȳ..) = (nk)⁻¹(nσ_b² + kσ_d² + σ_e²), to determine the dispersion matrix of ω̂_an = (μ̂_y, σ̂²_b,an, σ̂²_d,an, σ̂²_e,an)′ for use in place of I⁻¹(ω) in the calculations. The calculation using 1.5.2.2 makes use of the expected mean squares given in section 2.1.1. After so doing and working through the matrix algebra, we express the variance A(ω) as

A(ω) = v_y + z²_{1−Λ}v_b/(4n²σ_b²) + v_d/(4k²) + v_e[1/2 − z_{1−Λ}/(2nσ_b) − 1/(2k)]²,  (3.4.1.3)

where v_y = Var(μ̂_y) and v_b, v_d, and v_e are the variances of the respective mean squares MSB, MSD, and MSE (see section 2.1.1), which are the elements of the internal diagonal matrix V(m) = Var{(MSB, MSD, MSE)′} determined according to equation (1.5.2.2). In particular, the diagonal elements of V(m) in this case are 2(nσ_b² + σ_e²)²/(k − 1), 2(kσ_d² + σ_e²)²/(n − 1), and 2σ_e⁴/[(k − 1)(n − 1)], respectively.

We note in passing that an unbiased estimator of V(m) is readily determined as the internal

diagonal matrix in equation (1.5.2.3). However, as also discussed in section 3.3.2 for the balanced

case under Model I, empirical studies tend not to favor the use of this unbiased estimator in

conjunction with the Wald-type test statistic. Hence, our proposed test statistic Z* is calculated according to (3.4.1.2), where we take ω̂ = ω̂_an, and where V̂{R(ω̂)} = Λ̂(ω̂) is calculated by substituting the ANOVA estimators for the unknown variance components in expression (3.4.1.3).

3.4.2 Sample Size Approximation and Considerations

For a set of assumed values for n, σ²_b, σ²_d, σ²_e, A, OEL, α, and θ₁ (< A), we may derive an approximate formula for the minimum number of workers (k) that are theoretically required to achieve power of at least (1 − β) when testing H₀: θ ≥ A versus H₁: θ < A using the Wald-type test statistic Z* of (3.4.1.2). In this case,

POWER = Pr{Z* < z_α | θ = θ₁ < A}

for large samples, where κ′ = (z_{1−θ₁} − z_{1−A})σ_b, and V* = Λ(ω) of expression (3.4.1.3),

computed using the assumed values of σ²_b, σ²_d, and σ²_e. An approximate solution is obtained by finding the smallest value of k such that

(3.4.2.1)

As was the case for the approximation under Model I given in section 3.3.3, one may prefer to supply an assumed value for the ratio of the population mean exposure to the OEL (denoted as T = μ_x/OEL) instead of a value θ₁ for θ. Again, one can apply (3.4.2.1) after computing the corresponding value of θ₁ using the expression

θ₁ = 1 − Φ{σ_b/2 − ln(T)/σ_b}.
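This conversion from T to θ₁ is a one-liner, and it reproduces the θ values tabulated below; a sketch (the helper name is ours):

```python
from math import log, sqrt
from statistics import NormalDist

def theta_from_T(T, sb2):
    """Exceedance fraction implied by T = mu_x/OEL and the between-worker
    variance sb2: theta = 1 - Phi(sigma_b/2 - ln(T)/sigma_b)."""
    sb = sqrt(sb2)
    return 1.0 - NormalDist().cdf(sb / 2.0 - log(T) / sb)
```

For T = 0.20 and σ²_b = 0.5 (the φ′ = 1.0, σ²_d = 0.25 case of Table 3.7), this returns approximately 0.004, agreeing with the tabulated value.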


Finally, the sample size computation in (3.4.2.1) involves solving for k assuming a fixed value for n. As discussed in section 3.3.3, it is advantageous to solve for k over a range of values of n to find the combination of n and k minimizing the total number of measurements N (= nk).
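This two-way search is easy to mechanize. A minimal sketch follows, in which the power condition (3.4.2.1) is abstracted as a caller-supplied predicate, since its ingredients depend on the assumed variance components (the function name and predicate interface are ours):

```python
def minimal_total_sample(power_ok, n_range, k_max=500):
    """For each n, find the smallest k with power_ok(n, k) True, then
    return the (n, k) pair minimizing N = n*k.  power_ok encapsulates
    the power condition of (3.4.2.1)."""
    best = None
    for n in n_range:
        for k in range(2, k_max + 1):
            if power_ok(n, k):
                if best is None or n * k < best[0] * best[1]:
                    best = (n, k)
                break  # smallest adequate k for this n; move to next n
    return best
```

With a toy predicate such as `lambda n, k: n * k >= 60 and k >= 8`, the search over n = 2, …, 30 returns the first (n, k) combination achieving the minimal total N.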

The required sample size (according to 3.4.2.1) assuming that Model II holds can be

significantly larger than that required under similar circumstances under Model I. As a general rule,

the assumed value θ₁ needs to be much lower than the value of A in order for one to obtain reasonable nominal power with fewer than 100 exposure measurements. To illustrate this effect, consider Table 3.7, which gives the approximated (minimal) values of n and k based on (3.4.2.1) for a selection of cases that parallel those displayed in Table 3.5. The only difference is that the within-worker variance (σ²_w) for the cases in Table 3.5 has been divided equally into day-to-day variance (σ²_d) and error variance (σ²_e). As can be seen from the table, the range of required total sample sizes (N) for this set of cases under Model II is much more extreme than the corresponding range under Model I (see Table 3.5); however, the values of θ corresponding to T = 0.2 are identical regardless of whether

Model I or Model II is operable. The approximated sample sizes in Table 3.7 range from quite

manageable to, in many cases, completely unreasonable. This trend toward larger necessary sample

sizes is also present, though not as extreme, for cases analogous to those in Table 3.4, which are the

subject of the simulation study in section 3.4.4.

From a practical standpoint, in conjunction with the general hypothesis testing strategy that is

the subject of sections 3.3 and 3.4, these sample size considerations seem to support sampling in such

a way that few workers are measured on identical dates. In that case, few random date effects

common to workers under Model II would be present, and Model I should be satisfactory.

Unfortunately, same-day sampling of workers tends to reduce sampling costs in its own right, and

hence merits a study such as this one of the practicalities of accounting for random date effects.

Practical considerations seem to warrant efforts to characterize likely ranges of variance components

within particular industrial settings so as to get an idea of whether the benefits of same-day sampling

will outweigh the costs of likely additional repeated sampling required assuming Model II. It may

also be the case that day-to-day variability is generally ignorable in many cases, even assuming same-day sampling. A study analogous to that of Kromhout et al. (1993), but also considering day-to-day

variability, would be useful in this context.


Table 3.7: Sample Sizes Approximated via (3.4.2.1), for Cases Parallel to those in Table 3.5
(T = 0.20, A = 0.10, α = 0.05, power = 0.80)

φ′(a)  σ²_d (= σ²_e)   n    k    θ(b)         φ′(a)  σ²_d (= σ²_e)   n    k    θ(b)
0.1    0.25            5    2    1 × 10⁻¹³    1.0    0.25            4    16   0.004
0.1    0.5             9    3    8 × 10⁻⁸     1.0    0.5             13   32   0.017
0.1    1.5             34   8    0.001        1.0    1.5             41   90   0.036
0.25   0.25            3    5    1 × 10⁻⁶     2.5    0.25            6    44   0.022
0.25   0.5             9    6    3 × 10⁻⁴     2.5    0.5             12   80   0.035
0.25   1.5             42   26   0.011        2.5    1.5             11   48   0.025
0.5    0.25            4    6    3 × 10⁻⁴     5.0    0.25            5    87   0.035
0.5    0.5             10   14   0.004        5.0    0.5             5    80   0.033
0.5    1.5             49   62   0.027        5.0    1.5             3    19   0.009
0.75   0.25            4    10   0.002
0.75   0.5             11   24   0.011
0.75   1.5             50   80   0.034

a: the variance ratio, σ²_b/(σ²_d + σ²_e)
b: the value of θ corresponding to assumed values of σ²_b and T = (μ_x/OEL); these values are identical to the operable values of θ for the corresponding cases in Table 3.5

3.4.3 Handling negative ANOVA estimates of σ²_b

As was the case for the Wald-type test statistic under Model I, a negative ANOVA estimate σ̂²_b poses problems in conjunction with (3.4.1.2). Once again, if we set the estimate to 0, this yields Z* = 0, regardless of the estimated or true values of the other population parameters that should have bearing on the decision to accept or reject H₀. Using an expression of Searle et al. (1992, p. 137), we have the exact probability of a negative ANOVA estimate under


Model II as

Pr(σ̂²_b,an < 0) = Pr{F_{(k−1),(k−1)(n−1)} < σ²_e/(nσ²_b + σ²_e)}.   (3.4.3.1)

We note in passing that a negative ANOVA estimate of σ²_d is also possible, but this contingency is of little concern in conjunction with (3.4.1.2).
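Taking (3.4.3.1) in the Searle et al. (1992) form assumed above, the probability is immediate to evaluate; a sketch using SciPy (the function name is ours):

```python
from scipy.stats import f

def prob_negative_sb2(n, k, sb2, se2):
    """Exact probability that the ANOVA estimator (MSB - MSE)/n of sb2
    is negative under balanced Model II (Searle et al. 1992 form):
    Pr{ F_(k-1),(k-1)(n-1) < se2 / (n*sb2 + se2) }."""
    return f.cdf(se2 / (n * sb2 + se2), k - 1, (k - 1) * (n - 1))
```

As expected, the probability shrinks as σ²_b grows relative to σ²_e, consistent with the rarity of negative estimates reported in the simulations of section 3.4.4.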

Fortunately, by applying Graybill's (1976) extension to Williams' (1962) result (see section

1.5.2), we may suggest an alternative procedure that is completely analogous to those discussed for

Model I in sections 3.3.4 and 3.3.6. In particular, an approximate 100(1 − 2α)% confidence interval for σ²_b under Model II using Graybill's result is

{ SSB(1 − F_U/F)/(nχ²_{k−1,U}) ,  SSB(1 − F_L/F)/(nχ²_{k−1,L}) }.   (3.4.3.2)

In (3.4.3.2), F = MSB/MSE is the ratio of the between-worker to the error mean square (see Table 2.2). Also, χ²_{k−1,L} and χ²_{k−1,U} are critical values of the chi-square distribution with (k − 1) degrees of freedom (df), and F_L and F_U are critical values from the F distribution with (k − 1) numerator and (k − 1)(n − 1) denominator df, such that

Pr(χ²_{k−1,L} ≤ χ²_{k−1} ≤ χ²_{k−1,U}) = (1 − α)   and   Pr(F_L ≤ F_{k−1,(k−1)(n−1)} ≤ F_U) = (1 − α).

Just as for the case of Model I discussed in section 3.3.4, under Model II the null and alternative hypotheses H₀: θ ≥ A and H₁: θ < A can be written equivalently as H₀: μ_x ≥ ηOEL versus H₁: μ_x < ηOEL, where η = exp{σ²_b/2 − σ_b z_{1−A}}, and μ_x is given by (2.1.1.8). Hence, we may apply the procedure laid out in section 3.3.4 involving the use of an estimated lower bound on η (3.3.4.3); the only difference is that the approximate 95% upper bound on σ²_b is calculated according to (3.4.3.2) instead of (1.5.2.7).
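The interval (3.4.3.2) is straightforward to compute from the sums of squares. The sketch below uses SciPy quantiles and assumes central (1 − α) critical values for both the chi-square and F distributions, as described above (function name ours):

```python
from scipy.stats import chi2, f as f_dist

def graybill_ci_sb2(ssb, mse, n, k, alpha=0.05):
    """Approximate 100(1 - 2*alpha)% CI for sb2 under Model II via (3.4.3.2).
    ssb = between-worker sum of squares; mse = error mean square."""
    df1, df2 = k - 1, (k - 1) * (n - 1)
    F = (ssb / df1) / mse                     # F = MSB/MSE
    FL = f_dist.ppf(alpha / 2, df1, df2)      # lower F critical value
    FU = f_dist.ppf(1 - alpha / 2, df1, df2)  # upper F critical value
    chiL = chi2.ppf(alpha / 2, df1)
    chiU = chi2.ppf(1 - alpha / 2, df1)
    lower = ssb * (1 - FU / F) / (n * chiU)
    upper = ssb * (1 - FL / F) / (n * chiL)
    return lower, upper
```

When F is large (strong between-worker variation), the interval comfortably brackets the point estimate (MSB − MSE)/n; when F is small, the lower limit can fall at or below zero, mirroring the negative-estimate problem discussed above.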

3.4.4 Simulation Study of Test Statistic Under Model II

As the cases considered in Table 3.7 parallel those in Table 3.5, the cases considered for the simulation results in Table 3.8 parallel those in Table 3.4. In other words, the only difference is that


Model II is operable, and that σ²_w in Table 3.4 has been divided equally into day-to-day variance (σ²_d) and error variance (σ²_e). As in Table 3.4, we assume a true value of θ = 0.002 for the simulations under H₁, and we take α = A = 0.05; this time we apply the sample size approximation in (3.4.2.1) for the same nominal power of 0.60. In the event of a negative ANOVA estimate of σ²_b, we apply the alternative procedure as modified for Model II in section 3.4.3. However, this was extremely rare (i.e., it occurred < 0.1% of the time) for all sets of conditions displayed in Table 3.8, which agrees with the calculation of (3.4.3.1) for the various cases.

Table 3.8: Results of Simulations Assessing Performance of Wald-type Test Statistic under Model II
(μ_y = −2, θ = 0.002, A = 0.05, α = 0.05, power = 0.60)

φ′(a)  σ²_d (= σ²_e)   n    k    α̂(b)            (1 − β̂)(c)
0.1    0.25            32   13   0.084 (0.052)d   0.603 (0.503)d
0.1    0.5             33   14   0.081 (0.052)    0.627 (0.518)
0.1    1.5             45   14   0.089 (0.057)    0.620 (0.507)
0.25   0.25            12   14   0.090 (0.060)    0.614 (0.511)
0.25   0.5             12   16   0.091 (0.059)    0.618 (0.508)
0.25   1.5             20   13   0.094 (0.061)    0.606 (0.505)
0.5    0.25            6    14   0.091 (0.059)    0.617 (0.513)
0.5    0.5             6    16   0.091 (0.063)    0.629 (0.529)
0.5    1.5             10   13   0.094 (0.059)    0.617 (0.519)
0.75   0.25            4    15   0.099 (0.067)    0.618 (0.517)
0.75   0.5             5    13   0.098 (0.067)    0.620 (0.527)
0.75   1.5             7    13   0.108 (0.074)    0.619 (0.522)
1.0    0.25            3    15   0.101 (0.068)    0.621 (0.540)
1.0    0.5             3    17   0.114 (0.082)    0.625 (0.538)
1.0    1.5             5    14   0.105 (0.074)    0.614 (0.523)
2.5    0.25            2    11   0.121 (0.084)    0.634 (0.537)
2.5    0.5             2    12   0.112 (0.077)    0.647 (0.564)
2.5    1.5             3    11   0.120 (0.083)    0.626 (0.542)
5.0    0.25            2    8    0.122 (0.090)    0.634 (0.541)
5.0    0.5             2    9    0.123 (0.088)    0.649 (0.563)
5.0    1.5             2    10   0.117 (0.083)    0.631 (0.553)

a: the variance ratio, σ²_b/(σ²_d + σ²_e)
b: observed proportion of Type I errors (5000 data sets)
c: observed proportion rejecting under H₁ (5000 data sets)
d: numbers in parentheses are type I error and power estimates based on rejection at the α/2 = 0.025 level


Results

Table 3.8 indicates that, for constant values of θ, A, and α, required sample sizes tend to increase (for fixed φ′) as σ²_d (= σ²_e) increases, and to decrease [for fixed σ²_d (= σ²_e)] as φ′ increases. Also, fixing θ tends to stabilize the requirements somewhat in comparison with Table 3.7, in which fixing the ratio T = (μ_x/OEL) led to wide variations in the equivalent value of θ and hence wildly varying estimated sample sizes. However, Table 3.8 still illustrates (especially for small values of φ′) an important drawback to extending our hypothesis testing strategy to account for random date effects common to different workers: the sample size requirements can be prohibitive in some cases, particularly because many more observations tend to be required on each worker.

Aside from sample size considerations, a troubling aspect of Table 3.8 is the fact that the type I error rates are well above the nominal (0.05) level in all cases. This is in contrast to the Wald-type test statistic (3.3.2.3) under Model I, which tended to be anticonservative only for high values of the variance ratio (σ²_b/σ²_w). Further considerations in the interest of improving the type I error rate associated with the use of (3.4.1.2) are warranted, and some possible approaches in this direction are mentioned briefly in section 6.2.4. As a rough solution, we might consider rejection of H₀ at the α* = α/2 level, regardless of the likely ranges of the true parameters [recall the similar adjustment based on the estimate of the variance ratio φ in association with test statistics under Model I (sections 3.3.5 and 3.3.7)]. As Table 3.8 suggests, this adjustment affords a substantial improvement in the type I error rates, at the expense of roughly 5 to 10% power for the cases considered. However, excursions above the nominal level of 0.05 still tend to be somewhat troublesome for higher values of the ratio φ′. In line with the tendency toward anticonservativeness, the use of the sample size approximation (3.4.2.1) in conjunction with the standard normal reference distribution seems to produce slightly more than nominal power. Still, a very good approximation to the minimal necessary sample sizes is achieved. It is worth noting that some of the type I error rates in the lower half of Table 3.8 would likely be improved if we allocated sample size to allow more observations per

worker. The total sample size would no longer be the theoretical minimum, but more efficient

estimation of unknown variance components might result in better small-sample performance.


Chapter 4: Measurement Error Considerations with Applications in Occupational Epidemiology

4.1 INTRODUCTION

This chapter provides a detailed study of measurement error adjustment strategies in the

multiple linear regression setting, in which it is assumed that a lognormal exposure variable, a key predictor of a continuous health outcome variable, is measured with multiplicative measurement

error. The relevance of this study to occupational epidemiology is explained, and specific applications

in that area are explored. The material presented in this chapter is also the subject of a recent

manuscript [Lyles and Kupper, 1996b (submitted)]. An example of the methodology, using actual

exposure and health outcome data on three groups of workers from the Dutch animal feed industry, is

given in Chapter 5.

4.1.1 Occupational Health Motivation

Suppose we describe a set of repeated shift-long personal exposure measurements via Model I

(2.1.1.1), for which we alter notation slightly here as follows:

Y_ij = μ_y + δ_i + ε_ij   (i = 1, …, K; j = 1, …, n_i),   (4.1.1.1)

where n_i is the number of (possibly logged) personal exposure measurements (Y_i1, Y_i2, …, Y_in_i) taken on the i-th of K workers. The use of "δ_i" here in place of "β_i" in (2.1.1.1) is to avoid notational problems later, but otherwise the assumptions attendant with (4.1.1.1) are exactly the same as before, including the normality assumptions: δ_i ~ N(0, σ²_b) and ε_ij ~ N(0, σ²_w) for all i and j.

As mentioned in section 1.7.4, recent work in the environmental literature (e.g., Heederik et

al., 1991; Rappaport et al., 1995b) has considered the measurement error problem that arises when a

postulated linear exposure-response model involves the true mean of Y for the i-th worker. More

specifically, consider the model

R_i = α + βμ_yi + e_i   (i = 1, …, K),   (4.1.1.2)

Page 128: II - NC State Department of · PDF fileLarge sample-based test statistics suited to ... 5.2.3 Sample calculations under Model II 158 5.2.4 Sample calculations for ... Table 2.6: MSEP

where R_i is a continuous health outcome for the i-th worker (for example, a measure of lung function), α and β are, respectively, the unknown intercept and slope, and μ_yi = μ_y + δ_i is (unconditionally) a random variable representing the unobservable mean of Y for the i-th worker. In the applications discussed in the literature, Y_ij typically represents the j-th natural-logged shift-long exposure measurement on the i-th randomly selected worker from a job group. Assuming that the exposure data are balanced (i.e., n_i = n ∀ i), and making the logical choice of the observed sample mean (Ȳ_i = n⁻¹ Σ_{j=1}^n Y_ij) as the surrogate for the latent variable μ_yi, then we have a classical additive measurement error problem under normality (Fuller, 1987). So, from (4.1.1.1), the measurement error model takes the specific form

Ȳ_i = μ_yi + ε̄_i,   (4.1.1.3)

where ε̄_i = n⁻¹ Σ_{j=1}^n ε_ij is independent of μ_yi and is also normally distributed.

Under (4.1.1.2) and (4.1.1.3), there is little need for a comparative study of measurement error adjustment approaches, since all of the standard ones discussed in section 1.7.3 essentially yield identical results. In particular, these approaches produce an adjusted estimator equivalent to correcting the OLS estimator of β from the surrogate regression of R_i on Ȳ_i according to the consistency result (for large K) given in equation (1.7.4.1) [technically speaking, one could assert the stronger property of almost sure convergence using an argument similar to that of Hwang (1986)].

The simple occupational health setting just described is probably unrealistic in many cases.

First, in the case of unbalanced shift-long exposure data, alternative measurement error adjustment

strategies would generally yield different results. Secondly, in most studies of postulated occupational

exposure-response relationships, measurements of the same response and exposure variables are taken

on groups of workers from various areas or job categories within a given plant, so that the parameters

μ_y, σ²_b, and σ²_w may vary across groups. Hence, exposure data from typical occupational health

studies are often more appropriately represented by the model

Y_gij = μ_yg + δ_gi + ε_gij   (g = 1, …, G; i = 1, …, k_g; j = 1, …, n_gi),   (4.1.1.4)

where g indexes the group and i indexes the worker sampled from the g-th group (so that there are a total of K = Σ_{g=1}^G k_g workers). We assume here that the {μ_yg} are fixed effects, and that δ_gi ~ N(0, σ²_bg) and ε_gij ~ N(0, σ²_wg) [g = 1, …, G], again with mutual independence among all these random effects. Thirdly, covariates which may be predictive of the response variable are typically measured on each worker, and they must be considered in any generalization of (4.1.1.2).

The fourth and most fundamental point, as indicated earlier and as also supported by


Rappaport et al. (1995a), is that model (4.1.1.4) is much more appropriate if Y_gij is a logged shift-long exposure measurement. The implication then is that X_gij = exp(Y_gij), the exposure measurement on its original scale, follows a lognormal distribution, as we have assumed throughout the developments in Chapters 2 and 3. Hence, as a direct generalization of equation (2.1.1.5), the true mean exposure for the i-th worker on the original scale is given by μ_xgi = exp(μ_yg + δ_gi + σ²_wg/2), which (unconditionally) is a lognormal random variable. Thus, a potentially much more relevant health effects model is

R_gi = α + βμ_xgi + e_gi   (g = 1, …, G; i = 1, …, k_g),   (4.1.1.5)


possibly generalized to account for covariates, where α and β again represent intercept and slope parameters. As we will see later, the use of logical surrogates for μ_xgi leads to a multiplicative-lognormal measurement error problem with important implications for the analysis of occupational

health data. For this particular problem, a detailed comparison of possible measurement error

adjustment procedures is warranted for at least two reasons. First, no such detailed comparison has

been made. Secondly, the limited resources in most occupational health studies preclude the

possibility of obtaining repeated exposure measurements on each of very many workers, so that

realistic comparisons need to be made in moderate sample size situations. In the following section, we

relax the specific framework attendant with (4.1.1.5) to facilitate a treatment of the general

multiplicative-lognormal measurement error problem in multiple linear regression.

4.2 THE MULTIPLICATIVE-LOGNORMAL MEASUREMENT ERROR PROBLEM

4.2.1 Preliminaries

Following Clayton (1992), let us assume the following "true disease model" (TDM), "measurement error model" (MEM), and "predictor distribution model" (PDM) for i = 1, …, K:

TDM: R_i = α + βX_i + Σ_{t=1}^T γ_t C_it + e_i,   (4.2.1.1)

MEM: Z_i = X_i U_i,   (4.2.1.2)

PDM: X_i ~ lognormal(μ_x, σ²_x).   (4.2.1.3)


In (4.2.1.1), R_i is a continuous response variable, and we assume that the e_i's are i.i.d. with mean 0 and variance σ²_e. Also, X_i represents a true (unobservable or unmeasured) predictor variable of interest, and C_it represents the t-th of T covariates assumed to be related to the response. In (4.2.1.2), Z_i is a surrogate for X_i, and we assume that the U_i's are i.i.d. lognormal(μ_u, σ²_u) and that the U_i's and X_i's are mutually independent random variables. Finally, without loss of generality, we assume here that E(U_i) = 1 ∀ i; this assumption induces the constraint that μ_u = −σ²_u/2.
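The unit-mean constraint on the multiplicative errors is easy to verify by simulation; a brief sketch (the parameter value is illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
su2 = 1.2                        # assumed error variance on the log scale
mu_u = -su2 / 2.0                # constraint forcing E(U) = 1
U = rng.lognormal(mean=mu_u, sigma=np.sqrt(su2), size=200_000)
# multiplicative errors with unit mean: the surrogate Z = X*U is then
# unbiased for X on the original scale
print(U.mean())                  # approximately 1 by construction
```

Without the offset μ_u = −σ²_u/2, the sample mean would instead concentrate near exp(μ_u + σ²_u/2).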

Armstrong (1985) briefly considered this setting in the absence of covariates.

One additional important assumption that we make in this treatment is the following:

(X_i, Z_i) statistically independent of (C_i1, …, C_iT).   (4.2.1.4)

In words, the covariates are assumed to be statistically independent of both the true and the surrogate

exposure variables (though not necessarily independent of one another). This assumption seems

restrictive at first, but it is actually quite plausible. Factors such as age, weight, and smoking status

(which can be risk factors for the adverse health effect under study and even mild confounders in the

observed data) are seldom important determinants of the tasks and work habits that lead to the

distribution of exposures among workers in a job group. As further support for this assumption, note

that (4.1.1.4), which appears to be a very reasonable model for dust exposure in groups of workers

classified according to job title and location within a plant in the nickel-producing industry

(Rappaport et al., 1995a), makes no provision for such fixed factors. Any fixed work-related factors

predictive of exposure are assumed to be taken into account by the physical groupings of workers, so

that (4.1.1.4) holds within each such grouping. Relaxation of the assumption in (4.2.1.4) is

theoretically possible, but for many adjustment strategies it necessitates unlikely distributional

assumptions about the joint distribution of (X_i, Z_i, C_i) or the use of methods for approximating

conditional moments (e.g., Whittemore and Keller, 1988; Schafer, 1989; Schafer, 1992) that may

require very large sample sizes in the case of multiplicative error. For alternative recommendations

regarding the independence assumption (4.2.1.4) for a particular occupational health application, see

section 6.2.5.

Given (4.2.1.2) and the attendant assumptions, the joint distribution of the transformed

variates In(X i ) and In(Zi) is as follows:

(ln(X_i), ln(Z_i))′ ~ N₂[(μ_x, μ_x − σ²_u/2)′, Σ],   Σ = [[σ²_x, σ²_x], [σ²_x, σ²_x + σ²_u]].   (4.2.1.5)


Equivalently, the joint distribution of X_i and Z_i is bivariate lognormal. Some of the features of this joint distribution are the following:

E(X_i) = E(Z_i) = ν_x = exp(μ_x + σ²_x/2),   (4.2.1.6)

Var(X_i) = ν²_x[exp(σ²_x) − 1],   (4.2.1.7)

Var(Z_i) = ν²_x[exp(σ²_x + σ²_u) − 1],   (4.2.1.8)

Cov(X_i, Z_i) = Var(X_i) = ν²_x[exp(σ²_x) − 1].   (4.2.1.9)

Moreover, one can show that the conditional distribution of X_i given Z_i is itself lognormal, with the following conditional mean and variance:

E(X_i | Z_i) = ηZ_i^ψ,   (4.2.1.10)

Var(X_i | Z_i) = η²Z_i^{2ψ}[exp(ψσ²_u) − 1],   (4.2.1.11)

where ψ = {σ²_x/(σ²_x + σ²_u)} and η = exp[μ_x + ψ(σ²_u − μ_x)]. It follows immediately from (4.2.1.4), (4.2.1.10), (4.2.1.11), and the TDM (4.2.1.1) that

E(R_i | Z_i, C_i) = α + βηZ_i^ψ + Σ_{t=1}^T γ_t C_it   (4.2.1.12)

and

Var(R_i | Z_i) = β²η²Z_i^{2ψ}[exp(ψσ²_u) − 1] + σ²_e.   (4.2.1.13)
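The conditional-mean formula (4.2.1.10) can be checked by Monte Carlo; a sketch (parameter values are arbitrary illustrations, and the function name is ours):

```python
import numpy as np

def cond_mean_x(z, mu_x, sx2, su2):
    """E(X | Z = z) = eta * z**psi for the bivariate-lognormal pair (X, Z),
    with psi = sx2/(sx2 + su2) and eta = exp[mu_x + psi*(su2 - mu_x)]."""
    psi = sx2 / (sx2 + su2)
    eta = np.exp(mu_x + psi * (su2 - mu_x))
    return eta * z ** psi

# quick Monte Carlo sanity check of (4.2.1.10)
rng = np.random.default_rng(7)
mu_x, sx2, su2 = -3.0, 1.0, 1.2
X = rng.lognormal(mu_x, np.sqrt(sx2), 500_000)
U = rng.lognormal(-su2 / 2, np.sqrt(su2), 500_000)
Z = X * U
sel = (Z > 0.04) & (Z < 0.06)        # a narrow slice of Z values
mc = X[sel].mean()                   # empirical E(X | Z near 0.05)
th = cond_mean_x(0.05, mu_x, sx2, su2)
```

With these values, ψ ≈ 0.455 and the empirical slice mean agrees with ηZ^ψ evaluated at Z = 0.05 to within simulation error, illustrating the shrinkage of X toward its marginal distribution when σ²_u is appreciable.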

The above results [(4.2.1.6)-(4.2.1.13)] will be useful as we explore possible methods for obtaining estimators for β that are adjusted for multiplicative measurement error. As we do so, it will be helpful to think of Θ = (α, β, γ₁, γ₂, …, γ_T)′ as the (p × 1) vector of primary parameters of interest and τ = (μ_x, σ²_x, σ²_u)′ as the (q × 1) vector of nuisance parameters. (In the current setting, p = T + 2 and q = 3.) As discussed in section 1.7.2, the practical implementation of any valid measurement error adjustment procedure requires information regarding the nuisance parameters. Unless the elements of τ are assumed to be known, these elements must generally be estimated via either a validation or a reproducibility study (Thomas et al., 1993).


4.2.2 Some potential measurement error adjustment methods

In section 1.7.3, we reviewed several possible methods for adjusting statistical analyses for

covariate measurement error. In this section, we provide the necessary details for implementing some

of these procedures in the context of the multiplicative measurement error setting introduced in

section 4.2.1.

Correcting the OLS estimator

One appealing approach to obtaining an adjusted estimator for Θ involves showing that the ordinary unweighted least squares estimator (Θ̂_OLS) from the surrogate regression of R on (Z, C) converges in probability (as K gets large) to some parameter vector involving the true Θ and τ. Given the structure of this vector, a "CO" (for "correction") approach suggests itself. Given (4.2.1.1)-(4.2.1.4), with or without the lognormality assumptions, one can show that

Θ̂_OLS →p MΘ,   (4.2.2.1)

where M = [M₁ 0; 0 M₂] is (p × p), with M₂ = I_T. The constants in M₁ are

φ₁ = E(X){1 − Cov(X, Z)/Var(Z)}

and

φ₂ = Cov(X, Z)/Var(Z).

In the absence of covariates, (4.2.2.1) agrees with a result given by Carroll (1989); one can also verify (4.2.2.1) to be a special case of a result due to Hwang (1986), whose approach may be applied to show that the convergence is, in fact, almost sure. For known φ₁ and φ₂, one obtains a corrected estimator of Θ as

Θ̂_CO = M⁻¹Θ̂_OLS,   (4.2.2.2)

where α̂_CO = α̂_OLS − β̂_OLS(φ₁/φ₂), β̂_CO = β̂_OLS/φ₂, and γ̂_t,CO = γ̂_t,OLS (t = 1, …, T). Hence,


asymptotics dictates adjustment of the estimators of α and β from the surrogate regression, but no adjustment is required for the estimators of γ_t (t = 1, …, T); the latter conclusion follows from the independence assumption (4.2.1.4). Note that (4.2.2.2) does not explicitly agree with Hwang's (1986) corrected estimator, which does not assume (4.2.1.4). Using (4.2.1.6), (4.2.1.8), and (4.2.1.9), we find in particular that

β̂_CO = {[exp(σ²_x + σ²_u) − 1]/[exp(σ²_x) − 1]} β̂_OLS.   (4.2.2.3)

Of course, in practice, one must compute this corrected estimator using estimates of the nuisance parameters in τ. Provided that τ is consistently estimated, the corrected estimator Θ̂_CO is itself consistent.
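Under lognormality, the slope correction (4.2.2.3) is a one-line computation; a sketch (function name ours):

```python
import numpy as np

def beta_co(beta_ols, sx2, su2):
    """Corrected slope per (4.2.2.3): divide the surrogate OLS slope by the
    attenuation factor phi2 = Cov(X,Z)/Var(Z) = (e^sx2 - 1)/(e^(sx2+su2) - 1)."""
    phi2 = np.expm1(sx2) / np.expm1(sx2 + su2)
    return beta_ols / phi2
```

With σ²_x = 1 and σ²_u = 1.2, for example, the OLS slope is inflated by a factor of about 4.67, a far larger correction than the additive-normal analogue with the same variance ratio would give.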

Regressing R on [E(X | Z), C]

As an alternative, consider regressing the response R on C and on the conditional expectation of the true predictor X given (Z, C) (e.g., Thomas et al., 1993). Note that there is no guarantee, in general, that this strategy provides a consistent estimator of Θ (Liang and Liu, 1991). However, evaluation of this approach in our context is greatly simplified by the fact that the TDM (4.2.1.1) is linear. Noting from (4.2.1.4) that E(X | Z, C) = E(X | Z), we consider the estimator

Θ̂_CE = (D_E′D_E)⁻¹D_E′R,   (4.2.2.4)

where R = (R₁, …, R_K)′ and where D_E = (1, E, C₁, C₂, …, C_T) is the (K × p) design matrix. The subscript "CE" stands for "conditional expectation". The vector E contains the elements E(X_i | Z_i) given by (4.2.1.10). Equivalently, the elements of D_E are given by {D_E}_{i,l} = ∂μ_i/∂Θ_l (i = 1, …, K; l = 1, …, p), where μ_i = E(R_i | Z_i, C_i). Under the conditions of (4.2.1.1)-(4.2.1.4), one can verify that E(R | D_E) = D_EΘ. The key result that makes this possible is the fact that E[X_i | E(X_i | Z_i)] = E(X_i | Z_i) according to (4.2.1.2) and (4.2.1.3). It follows that

E(Θ̂_CE | D_E) = Θ.

Hence, for known τ, Θ̂_CE is not only a consistent, but an unbiased, estimator of Θ. In practice, the use of a consistent estimator of τ to calculate E in (4.2.2.4) maintains the consistency of Θ̂_CE, although strict unbiasedness is lost.
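A sketch of the CE fit, assuming the nuisance parameters τ are known (names are ours): construct E(X_i | Z_i) from (4.2.1.10) and run ordinary least squares of R on [1, E, C]:

```python
import numpy as np

def theta_ce(R, Z, C, mu_x, sx2, su2):
    """CE estimator in the spirit of (4.2.2.4): OLS of R on [1, E(X|Z), C],
    with E(X|Z) = eta * Z**psi from (4.2.1.10).  C is a (K x T) array."""
    psi = sx2 / (sx2 + su2)
    eta = np.exp(mu_x + psi * (su2 - mu_x))
    E = eta * Z ** psi
    D = np.column_stack([np.ones_like(Z), E, C])
    return np.linalg.lstsq(D, R, rcond=None)[0]   # (alpha, beta, gamma_1..T)
```

Because the TDM is linear and E(R | D) = DΘ, this fit recovers the true coefficients on average even at moderate K.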


Quasi-Likelihood

As discussed in section 1.7.3, a full likelihood approach (see 1.7.3.8) is often intractable or otherwise infeasible in conjunction with non-standard measurement error problems; such is the case in the current context, even if we assume the e_i's to be i.i.d. normal random variates.

To avoid the necessity of specifying the full conditional density f(R_i | Z_i, C_i; Θ, τ), we may adapt quasi-likelihood ("QL") methods to our setting. This requires solution of the QL "pseudo-score" equations (see 1.7.3.9), in this case expressed as

S(Θ; τ) = Σ_{i=1}^K (∂μ_i/∂Θ)[Var(R_i | Z_i)]⁻¹(R_i − μ_i) = 0.   (4.2.2.5)

Here, S(Θ; τ) represents a system of p equations which are nonlinear in the elements of Θ.

The estimating equations (4.2.2.5) do not directly provide for the estimation of the dispersion parameter σ²_e, although that parameter is embedded in these equations. We may estimate this dispersion parameter by solving an additional ("Pearson-type") estimating equation in conjunction with (4.2.2.5), in similar fashion as discussed by Liang and Hanfelt (1994) and Breslow (1990). This additional estimating equation has the structure

Σ_{i=1}^K {(R_i − μ_i)² − Var(R_i | Z_i)} = 0,   (4.2.2.6)

and is nonlinear in σ²_e. The quasi-likelihood estimator Θ̂_QL of Θ satisfies (4.2.2.5) and (4.2.2.6), both of which in practice involve the use of a consistent estimator of τ; note that the required conditional moments are given by (4.2.1.12) and (4.2.1.13). In section 4.2.4, we briefly discuss methods for solving these equations.
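Because the TDM is linear in Θ, one workable way to solve such pseudo-score and Pearson-type equations is to alternate weighted least squares (weights 1/Var(R_i | Z_i)) with a moment update of σ²_e. The following is our own sketch of that idea, not the dissertation's algorithm:

```python
import numpy as np

def theta_ql(R, Z, C, mu_x, sx2, su2, n_iter=25):
    """Iterative sketch of a QL-type fit: weighted LS with weights
    1/Var(R_i|Z_i), alternated with a Pearson-type update of se2."""
    psi = sx2 / (sx2 + su2)
    eta = np.exp(mu_x + psi * (su2 - mu_x))
    E = eta * Z ** psi                           # E(X_i | Z_i), (4.2.1.10)
    cvar = E ** 2 * np.expm1(psi * su2)          # Var(X_i | Z_i), (4.2.1.11)
    D = np.column_stack([np.ones_like(Z), E, C])
    theta = np.linalg.lstsq(D, R, rcond=None)[0] # start from the unweighted fit
    se2 = np.var(R - D @ theta)
    for _ in range(n_iter):
        V = theta[1] ** 2 * cvar + se2           # Var(R_i | Z_i), (4.2.1.13)
        DtW = D.T * (1.0 / V)
        theta = np.linalg.solve(DtW @ D, DtW @ R)
        resid = R - D @ theta
        # Pearson-type moment update for the dispersion parameter
        se2 = max(np.mean(resid ** 2 - theta[1] ** 2 * cvar), 1e-8)
    return theta, se2
```

At a fixed point, the weighted normal equations coincide with the pseudo-score system for this linear mean function, and the σ²_e update balances squared residuals against their conditional variances.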

Inference Procedures

If one wishes to make inferences using Θ̂_CO in (4.2.2.2), straightforward calculations provide its conditional variance and confirm its approximate normality for large K assuming τ is known; these calculations can be adjusted to account for estimating τ. (By "conditional" variance, we mean conditional on the observed covariate and surrogate exposure data.)

Theoretical arguments required to demonstrate the consistency of Θ̂_QL, and to allow (conditional) inferences using Θ̂_CE and Θ̂_QL, are considered in section 4.2.3. These arguments make use of several matrices, including D_E in (4.2.2.4) and the (K × K) diagonal matrix V, whose elements are the values (estimated in practice) of Var(R_i | Z_i) given by (4.2.1.13). These arguments provide


approximate 100(1 − α)% confidence intervals (CIs) of the form

β̂ ± z_{1−α/2}[V̂ar(β̂)]^{1/2},   (4.2.2.7)

assuming that τ is estimated consistently in a way that is not dependent on the response data. In section 4.2.4, we provide a simulation-based evaluation of these approximate confidence intervals. [One can construct approximate CIs for the other primary parameters (α, γ₁, …, γ_T) analogously.]

Bias Using CO Method

Since the primary goal is to determine effective multiplicative measurement error adjustment strategies for small to moderate sample sizes, a preliminary small-scale simulation study assuming known τ was conducted. A striking finding is illustrated in Figure 4.1, which is a plot of the means of 1,000 independently generated realizations of β̂_CO against the natural log of the sample size K. The TDM assumed was (4.2.1.1) with no covariates, and with parameter values α = 1.5, β = 2.5, σ²_e = 2, μ_x = −3, σ²_x = 1, and σ²_u = 1.2. Note that the arithmetic mean of the 1,000 β̂_CO values for the first sample size (K = 25) grossly overestimates the true value of 2.5. As the sample size increases, the bias naturally lessens; however, the fact that there is still substantial bias when K = 1,000 [ln(K) ≈ 7], and even discernible bias for the largest sample size (K = 40,000) considered, is quite disturbing. Clearly, the rate of convergence in (4.2.2.1) seems to be extremely slow, which may be a peculiarity of the multiplicative measurement error setting; this finding is somewhat atypical of the CO method according to most previous commentary (e.g., Carroll, 1989). Though not shown in Figure 4.1, the corresponding means of estimates based on the CE and QL methods were much closer to the true value of 2.5 for all sample sizes. Similar findings were obtained when considering other sets of parameter values and when using the sample median (instead of the mean) of the estimates as the indicator of location.

The striking effect illustrated in Figure 4.1 was apparently overlooked by Armstrong (1985) and Hwang (1986) due to the lack of extensive numerical studies. Despite the consistency of β̂_CO, there is a possibility of substantial bias (even for known τ) for any reasonable sample size. These considerations argue against the use of the CO method in the multiplicative-lognormal measurement error setting. In what follows, we focus only on the readily available CE and QL approaches. As we will see, these methods perform reasonably well without excessive sample size demands, even when we account for the additional variability due to estimation of τ.


[Figure 4.1 appears here.]

Figure 4.1. Plot of the means of 1000 simulated values of β̂_CO for various sample sizes (K), against the natural log of K. The true value of β is 2.5; other parameters assumed in the simple linear regression are: α = 1.5, σ²_e = 2, μ_x = −3, σ²_x = 1, and σ²_u = 1.2.

4.2.3 Large sample arguments for inference using CE and QL methods

In this section, we lay out and sketch the proof of a proposition allowing approximate inference about the parameters of the TDM (in particular, β), in conjunction with the CE and QL approaches to obtaining appropriately adjusted estimates of these parameters. The simulation study in section 4.2.4 provides some evaluation of these approximations for selections of sample sizes corresponding to a hypothetical main study and its accompanying validation study.

Proposition: Assume that a consistent estimator τ̂ of τ is computed independently of the response vector R. Under appropriate conditions, distributional theory relevant for inference based on the QL and CE procedures may be summarized as follows:

√K (θ̂_QL − θ) →_D N(0, lim_{K→∞} K Σ_QL)     (4.2.3.1)

and

√K (θ̂_CE − θ) →_D N(0, lim_{K→∞} K Σ_CE).     (4.2.3.2)

In the above expressions, V₂₂ is a covariance matrix defined in the following "sketch of proof", V and D_θ are as defined in section 4.2.2, and D_τ is a (K × q) matrix, conceptually similar to D_θ, with (i,l)-th element defined as follows:

d_il = ∂μ_i/∂τ_l   (i = 1,…, K; l = 1,…, q), where μ_i = E(R_i | Z_i, C_i).

Sketch of Proof:

The large sample variance given above for QL is notationally very similar to the general result

of McCullagh (1991), except that here we take into account estimation of nuisance parameters (τ).

Qualitatively, the distributional result is the same as that of Liang and Liu (1991). As they mention,

the proof bears strong resemblance to that of Gong and Samaniego (1981) in the context of

pseudolikelihood estimation, and to that of Liang and Zeger (1986) in the context of generalized

estimating equations. The necessary regularity conditions are analogous to those detailed by Gong

and Samaniego (1981). Briefly, the steps consist of the following:

(i). Assume that τ̂ is a consistent estimator of τ such that

Note also that V₁₂ = 0 given the assumption that τ̂ is functionally independent of R, although this


assumption may be relaxed. It is straightforward to show that V₁₁

(ii). Make a Taylor series expansion of S(θ̂_QL; τ̂) about the true θ, obtaining a first-order approximation for S(θ; τ̂). In so doing, we obtain

(4.2.3.3)

(iii). Make a first-order expansion of S(θ; τ̂) about the true τ. In so doing, we have

(4.2.3.4)

(iv). Use the approximations in (ii). and (iii). to obtain an approximation for (θ̂_QL − θ) as a linear combination of S(θ; τ) and (τ̂ − τ). In conjunction with (i)., this leads to the result in (4.2.3.1). In particular, it follows directly from (4.2.3.3) and (4.2.3.4) that

(4.2.3.5)

If for the moment we write (4.2.3.5) as (θ̂_QL − θ) ≈ A S(θ; τ) − B(τ̂ − τ), we have from (i). that

Since we assume that V₁₂ = 0, this leads directly to the final expression for Σ_QL; note, however, that the above expression also allows for more general cases in which V₁₂ ≠ 0. Also, note from (i). that K⁻¹V₂₂ equals (asymptotically) Var(τ̂).

The result (4.2.3.2) for CE follows from a nearly identical proof after noting that θ̂_CE is the solution to the set of equations

(4.2.3.6)

and after assuming [analogously to (i).] that


Again, V₁₂ = 0 by assumption. It is easily shown that V₁₁ = K⁻¹(D′_θVD_θ).

The distributional results (4.2.3.1) and (4.2.3.2) are the same as we would obtain by assuming that σ²_e is known and not estimated; for details regarding this point, see Breslow (1990). In practice, we apply these results for inference about the parameters θ (in particular, β) by computing Σ̂_QL or Σ̂_CE using data-driven estimates of D_θ, D_τ, V, and V₂₂.

4.2.4 Simulation study of CE and QL methods

To study the performance of the CE and QL methods (with regard to both point and confidence interval estimation), a series of more realistic simulations were conducted in which τ was estimated using separate validation data and for which the asymptotic results described in section 4.2.3 were applied. In these cases, we assume true models (4.2.1.1) containing two covariates (C₁, C₂) generated independently of exposure. These covariates are derived according to distributions deemed reasonable to describe age (in years) and smoking status (0 = nonsmoker, 1 = smoker) among a "typical" group of industrial workers; in particular, age was generated as a transformed beta variable distributed symmetrically over the range 20 to 65, and smoking was generated independently of age (for convenience, not for necessity) as a Bernoulli variate with probability 0.40. Table 4.1 reports the results of 5,000 simulations for each of several settings for which the true value of β is 0.75. Other values used for Table 4.1 are as follows: α = 9, γ₁ = 0.25, γ₂ = 0.4, μ_x = −3, σ²_x = 2, and σ²_u = 1.5. Table 4.2 reports the results of sets of 5,000 simulations in each case, for which β = 2.5, α = 1.5, γ₁ = 0.9, γ₂ = 1.3, μ_x = −3, σ²_x = 1, and σ²_u = 1.2.

Details of Parameter Estimation

For each simulated dataset, we first use the estimates τ̂ and θ̂_CE to solve equation (4.2.2.6) for a dispersion parameter estimate (σ̂²_e,CE) using Newton-Raphson methods. Again using a Newton-Raphson algorithm, we then solve (4.2.2.5) for an initial QL estimate of θ using τ̂ and σ̂²_e,CE. We alternately solve (4.2.2.5) and (4.2.2.6) until convergence yields the final estimates θ̂_QL and σ̂²_e,QL. This iterative approach is essentially equivalent to the procedure outlined by Schafer (1989). Liang and Liu (1991) note that such calculations can also be implemented by modifying existing statistical packages such as GLIM. In the simulations leading to Table 4.1, there were rare instances in which convergence was not obtained for the QL procedure, in which case those runs were dropped from further consideration. However, the convergence rate exceeded 98% for each set of conditions considered. Also, negative estimates (σ̂²_e,CE or σ̂²_e,QL) of the dispersion parameter occurred in a few


instances, due to the moderate sample sizes used. If σ̂²_e,QL was negative, we set σ̂²_e,QL equal to 0 when solving (4.2.2.5) for the final estimate θ̂_QL and when computing the estimated variance of θ̂_QL; likewise, if σ̂²_e,CE was negative, we set σ̂²_e,CE equal to 0 when computing the estimated variance of θ̂_CE. This practice is in agreement with Breslow (1990), and it does not affect the large-sample properties of θ̂_QL or θ̂_CE.

Estimates of τ Using Validation Data

For each simulated main study data set of size K = 75, a separate validation data set consisting of K′ = 10, 25, or 50 (X, Z) pairs was generated, from which we compute the maximum likelihood estimate (MLE) τ̂ of τ and its estimated dispersion matrix K⁻¹V̂₂₂ (see section 4.2.3) based on Fisher's expected information. In particular, standard calculations assuming a random sample of K′ (X, Z) pairs following (4.2.1.5) reveal that the MLE τ̂ = (μ̂_x, σ̂²_x, σ̂²_u)′, where

μ̂_x = K′⁻¹ Σ_{i=1}^{K′} ln(X_i),     (4.2.4.1)

σ̂²_x = K′⁻¹ Σ_{i=1}^{K′} [ln(X_i) − μ̂_x]²,     (4.2.4.2)

and

σ̂²_u = 2{[1 + K′⁻¹ Σ_{i=1}^{K′} ln(X_i/Z_i)²]^{1/2} − 1}.     (4.2.4.3)

Likewise, the estimated asymptotic dispersion matrix is given by

V̂ar(τ̂) = K⁻¹V̂₂₂ = K′⁻¹ diag{σ̂²_x, 2σ̂⁴_x, 4σ̂⁴_u/(2 + σ̂²_u)}.     (4.2.4.4)

Both τ̂ and K⁻¹V̂₂₂ were used in the simulations for estimation and inference via both the QL and CE procedures. Although this procedure is not fully efficient (i.e., it ignores information about τ in the main study data), it sufficiently illustrates and facilitates comparison of the two methods. In these simulations, it is assumed that X has the same distribution in the main study data as in the validation data (Carroll, 1989).
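The validation-data MLEs can be sketched as below. The closed forms follow our reading of (4.2.4.1)-(4.2.4.3), under the assumed lognormal structure ln X ~ N(μ_x, σ²_x) with a unit-mean multiplicative error, i.e. ln(Z/X) ~ N(−σ²_u/2, σ²_u).

```python
import numpy as np

def tau_mle(X, Z):
    """MLEs of (mu_x, sig2_x, sig2_u) from K' validation (X, Z) pairs,
    per the closed forms (4.2.4.1)-(4.2.4.3) as reconstructed above."""
    lx = np.log(X)
    mu_x = lx.mean()
    sig2_x = np.mean((lx - mu_x) ** 2)
    m = np.mean(np.log(X / Z) ** 2)
    sig2_u = 2.0 * (np.sqrt(1.0 + m) - 1.0)
    return mu_x, sig2_x, sig2_u

# Check on simulated validation data with mu_x = -3, sig2_x = 1, sig2_u = 1.2;
# a large K' is used so the estimates should sit near the true values.
rng = np.random.default_rng(1)
Kp = 5000
X = np.exp(rng.normal(-3.0, 1.0, Kp))
U = np.exp(rng.normal(-0.6, np.sqrt(1.2), Kp))
Z = X * U
mu_x, sig2_x, sig2_u = tau_mle(X, Z)
```

Note that σ̂²_u is not simply the sample variance of ln(Z/X); the unit-mean constraint ties the mean of ln U to its variance, which is what produces the square-root form.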


Table 4.1 Summary of simulation results assessing measurement error adjustment strategies
(75 workers in main study, K′ workers in validation study)^a

β = 0.75^b

K′    σ²_e   Estimator     Mean    Median   SD
10    0.5    β̂_OLS         0.42    0.33     0.49
             β̂_CE          1.18    0.80     1.70
             β̂_QL          1.17    0.84     1.62
             V̂ar(β̂_CE)     3.68    1.02     12.29
             V̂ar(β̂_QL)     3.01    0.90     9.84
             Width(CE)     5.43    3.97     5.21
             Width(QL)     5.00    3.71     4.62
      Coverages^c of approximate 95% CIs for β:  CE 0.945   QL 0.944

10    2.0    β̂_OLS         0.42    0.32     0.84
             β̂_CE          1.17    0.78     2.69
             β̂_QL          1.19    0.81     2.63
             V̂ar(β̂_CE)     9.63    2.56     50.80
             V̂ar(β̂_QL)     8.83    2.45     42.65
             Width(CE)     8.65    6.28     8.55
             Width(QL)     8.39    6.14     8.09
      Coverages^c of approximate 95% CIs for β:  CE 0.985   QL 0.981

25    0.5    β̂_OLS         0.41    0.32     0.48
             β̂_CE          0.87    0.72     0.92
             β̂_QL          0.91    0.76     0.92
             V̂ar(β̂_CE)     1.15    0.66     2.02
             V̂ar(β̂_QL)     0.90    0.58     1.05
             Width(CE)     3.65    3.19     2.09
             Width(QL)     3.33    2.99     1.63
      Coverages^c of approximate 95% CIs for β:  CE 0.944   QL 0.941

25    2.0    β̂_OLS         0.40    0.32     0.82
             β̂_CE          0.87    0.74     1.49
             β̂_QL          0.92    0.76     1.57
             V̂ar(β̂_CE)     3.06    1.88     3.94
             V̂ar(β̂_QL)     2.77    1.79     3.32
             Width(CE)     6.06    5.38     3.21
             Width(QL)     5.82    5.25     2.94
      Coverages^c of approximate 95% CIs for β:  CE 0.984   QL 0.977

50    0.5    β̂_OLS         0.39    0.31     0.48
             β̂_CE          0.77    0.68     0.77
             β̂_QL          0.81    0.72     0.81
             V̂ar(β̂_CE)     0.82    0.57     0.90
             V̂ar(β̂_QL)     0.65    0.51     0.56
             Width(CE)     3.23    2.96     1.46
             Width(QL)     2.96    2.80     1.13
      Coverages^c of approximate 95% CIs for β:  CE 0.941   QL 0.938

50    2.0    β̂_OLS         0.44    0.33     0.83
             β̂_CE          0.83    0.75     1.29
             β̂_QL          0.87    0.77     1.38
             V̂ar(β̂_CE)     2.31    1.72     2.06
             V̂ar(β̂_QL)     2.08    1.61     1.69
             Width(CE)     5.51    5.15     2.26
             Width(QL)     5.27    4.98     2.04
      Coverages^c of approximate 95% CIs for β:  CE 0.990   QL 0.982

a: 5,000 simulated datasets in each case
b: other parameters are: α = 9, γ₁ = 0.25, γ₂ = 0.4, μ_x = −3, σ²_x = 2, and σ²_u = 1.5.
c: CIs are calculated according to (4.2.2.7); coverages are reported in terms of proportions of datasets for which the β value of 0.75 is contained within the calculated interval.

Table 4.2 Summary of simulation results assessing measurement error adjustment strategies
(75 workers in main study, K′ workers in validation study)^a

β = 2.5^b

K′    σ²_e   Estimator     Mean    Median   SD
10    0.5    β̂_OLS         0.93    0.82     0.96
             β̂_CE          3.48    2.62     3.91
             β̂_QL          3.43    2.63     3.70
             V̂ar(β̂_CE)     17.28   6.06     53.86
             V̂ar(β̂_QL)     14.96   5.45     44.31
             Width(CE)     12.51   9.65     10.44
             Width(QL)     11.81   9.15     9.51
      Coverages^c of approximate 95% CIs for β:  CE 0.930   QL 0.932

10    2.0    β̂_OLS         0.95    0.80     1.45
             β̂_CE          3.45    2.55     5.52
             β̂_QL          3.48    2.60     5.58
             V̂ar(β̂_CE)     41.53   16.18    93.89
             V̂ar(β̂_QL)     39.96   15.69    91.05
             Width(CE)     20.13   15.77    15.26
             Width(QL)     19.75   15.53    14.97
      Coverages^c of approximate 95% CIs for β:  CE 0.975   QL 0.973

25    0.5    β̂_OLS         0.95    0.82     0.90
             β̂_CE          2.85    2.54     2.34
             β̂_QL          2.87    2.59     2.32
             V̂ar(β̂_CE)     6.22    4.26     6.83
             V̂ar(β̂_QL)     5.40    3.89     5.52
             Width(CE)     8.92    8.09     4.00
             Width(QL)     8.40    7.73     3.52
      Coverages^c of approximate 95% CIs for β:  CE 0.943   QL 0.942

25    2.0    β̂_OLS         0.91    0.76     1.43
             β̂_CE          2.75    2.44     3.80
             β̂_QL          2.81    2.43     3.87
             V̂ar(β̂_CE)     17.21   12.28    20.53
             V̂ar(β̂_QL)     16.39   11.86    18.58
             Width(CE)     14.93   13.73    6.44
             Width(QL)     14.62   13.50    6.17
      Coverages^c of approximate 95% CIs for β:  CE 0.977   QL 0.971

50    0.5    β̂_OLS         0.95    0.81     0.89
             β̂_CE          2.65    2.44     2.01
             β̂_QL          2.67    2.49     2.00
             V̂ar(β̂_CE)     4.57    3.65     3.73
             V̂ar(β̂_QL)     3.93    3.30     2.63
             Width(CE)     7.92    7.49     2.75
             Width(QL)     7.43    7.12     2.28
      Coverages^c of approximate 95% CIs for β:  CE 0.946   QL 0.942

50    2.0    β̂_OLS         0.97    0.80     1.46
             β̂_CE          2.74    2.52     3.43
             β̂_QL          2.77    2.53     3.51
             V̂ar(β̂_CE)     13.45   11.16    9.54
             V̂ar(β̂_QL)     12.72   10.72    8.40
             Width(CE)     13.68   13.09    4.44
             Width(QL)     13.35   12.84    4.15
      Coverages^c of approximate 95% CIs for β:  CE 0.979   QL 0.971

a: 5,000 simulated datasets in each case
b: other parameters are: α = 1.5, γ₁ = 0.9, γ₂ = 1.3, μ_x = −3, σ²_x = 1, and σ²_u = 1.2.
c: CIs are calculated according to (4.2.2.7); coverages are reported in terms of proportions of datasets for which the β value of 2.5 is contained within the calculated interval.

Results and commentary

Tables 4.1 and 4.2 depict the means, medians, and standard deviations of β̂_CE, β̂_QL, and β̂_OLS [the OLS estimator from the naive surrogate regression of R on (Z, C)] for the various sets of conditions we considered. The same summary statistics are also reported for the estimated conditional variances [V̂ar(β̂_CE) and V̂ar(β̂_QL)] and the widths [Width(CE) and Width(QL)] of the approximate confidence intervals (4.2.2.7) based on the distributional results for CE and QL given in section 4.2.3. The coverages of these intervals also appear in Tables 4.1 and 4.2. These results provide some insight into the relative performances of the CE and QL procedures for moderate sample sizes.

The QL method is simply a weighted version of the CE method, as can be seen by comparing S(θ; τ) of (4.2.2.5) with T(θ; τ) in section 4.2.3. Clearly, the weights are the inverted estimates of the conditional variances Var(R_i | Z_i, C_i) = Var(R_i | Z_i) given by (4.2.1.13). In fact, the QL procedure is essentially equivalent to a weighted least squares approach to the regression of R on [E(X | Z, C), C], in which we seek in principle to minimize the function

Σ_{i=1}^{K} w_i [R_i − E(R_i | Z_i, C_i)]²,     (4.2.4.5)

where w_i = [Var(R_i | Z_i, C_i)]⁻¹. However, strict minimization of this function does not generally yield a consistent estimator of θ, and the adaptation typically termed iteratively weighted least squares (e.g., Charnes, Frome, and Yu, 1976) is generally equivalent to quasi-likelihood estimation (McCullagh, 1991). In conjunction with (4.2.2.6), this adaptation of a weighted least squares approach to the regression of R on [E(X | Z, C), C] is identical to the QL procedure as it has been described in the current setting.
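The iteratively weighted least squares idea can be sketched with a standard example; the log-linear mean with variance equal to the mean used here is an illustrative stand-in (quasi-Poisson), not the conditional moments of the current setting. The key feature is that the weights w_i are recomputed from the current fit at each pass.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative data from a log-linear mean model with Var = mean.
x = rng.uniform(0, 2, 300)
X = np.column_stack([np.ones_like(x), x])
theta_true = np.array([0.5, 1.2])
y = rng.poisson(np.exp(X @ theta_true))

theta = np.zeros(2)
for _ in range(50):
    mu = np.exp(X @ theta)
    W = mu                          # weight = (1/Var_i) * (dmu/deta)^2 = mu
    z = X @ theta + (y - mu) / mu   # linearized "working" response
    theta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    if np.max(np.abs(theta_new - theta)) < 1e-10:
        theta = theta_new
        break
    theta = theta_new
```

Each pass is an ordinary weighted least squares solve; iterating to convergence yields the quasi-likelihood estimate, mirroring the equivalence noted above.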

Based on the preceding discussion, one might expect that the main benefit of the more computationally involved QL procedure would be efficiency, leading on average to narrower confidence intervals for β than those produced by the CE procedure. It also seems reasonable to expect that the extent of the gain in efficiency of QL over CE should relate directly to the variability of the weights w_i. To investigate this notion, for each of the settings depicted in Tables 4.1 and 4.2, an estimate of the true variance of the weights w_i using the true parameter values and the delta method was computed (alternatively, one could consider the sample variances of the estimated weights). In particular, the approximation used is:

Var(w_i) = Var[g(Z_i)] ≈ {g′[E(Z_i)]}² Var(Z_i),

where g(Z_i) = [Var(R_i | Z_i)]⁻¹, and E(Z_i), Var(Z_i), and Var(R_i | Z_i) are given by (4.2.1.6), (4.2.1.8), and (4.2.1.13), respectively. As depicted in Table 4.3, increasing the dispersion parameter (σ²_e) from 0.5 to 2.0 has a substantial effect on the estimated variability of the weights; this allows an interesting simulation-based comparison of the efficiencies of β̂_CE and β̂_QL. In a similar vein, Liang and Liu (1991) present some analytical efficiency calculations for an additive-normal measurement error problem with a log-linear TDM.
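The first-order delta method approximation above can be checked by Monte Carlo; the particular weight function g and the moments of Z used here are illustrative stand-ins for (4.2.1.6), (4.2.1.8), and (4.2.1.13).

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative inverse-variance weight g(z) = 1/(a + b*z).
a, b = 1.0, 0.5
g = lambda z: 1.0 / (a + b * z)
g_prime = lambda z: -b / (a + b * z) ** 2

mu_z, var_z = 2.0, 0.04            # stand-in moments of Z
delta_var = g_prime(mu_z) ** 2 * var_z   # {g'[E(Z)]}^2 Var(Z)

# Monte Carlo check of the approximation.
Z = rng.normal(mu_z, np.sqrt(var_z), 200_000)
mc_var = np.var(g(Z))
```

The approximation is accurate here because Var(Z) is small relative to the curvature of g; higher-order terms in the expansion grow with Var(Z)².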

Tables 4.1 and 4.2 illustrate these ideas using simulated data. Note that the conditional variances based on the QL method tend to be smaller than those for the CE method, and that the approximate intervals for QL are, on average, narrower. However, the relative discrepancies are often minor, and they tend to be more pronounced (all other parameters held constant) when we take σ²_e = 0.5 as opposed to 2.0, thereby increasing the variability of the weights w_i (see Table 4.3). As expected, the naive estimator (β̂_OLS) is attenuated, and often severely so. Though not shown in the tables, standard confidence intervals for β based on this surrogate regression lead to wildly misleading inferences in many cases. Comparably with Liang and Liu (1991), the distributions of β̂_CE and β̂_QL tend to be skewed, so that the median may generally be a better indicator of central tendency than is the mean. This effect is less pronounced as the size (K′) of the validation sample increases, and it also diminishes if we increase K. Despite the skewness of these unconditional distributions, the


coverages of the approximate CIs for β tend to be reasonable and quite comparable, although they are somewhat conservative (coverage higher than nominal) in some cases. The overall performances of these two methods appear to be similar in Tables 4.1 and 4.2, except for the efficiency gains from using QL.

Table 4.3 Approximate variances of weights [w_i in (4.2.4.5)] for situations considered in Tables 4.1 and 4.2

Table   β      σ²_e   Var(w_i)^a
1       0.75   0.5    0.564
1       0.75   2.0    0.003
2       2.50   0.5    0.197
2       2.50   2.0    0.001

a: based on delta method approximation using true parameter values in each case

4.3 SPECIFIC APPLICATIONS IN OCCUPATIONAL EPIDEMIOLOGY


This section provides illustration of the CE and QL methodology in the context of the general

occupational health setting described in section 4.1.1. First, we describe the three models for that

setting, which are special cases of models (4.2.1.1)-(4.2.1.3). We proceed to present a brief simulation

study to illustrate and evaluate the methodology of section 4.2 as it applies to this more specific

setting. In Chapter 5, we use real data from a study by Heederik et al. (1991) to further illustrate the

applications.

4.3.1 Proposed models for real-world applications

Assume the availability of repeated shift-long exposure measurements on selected workers from

G job groups, and that these repeated (logged) measurements are appropriately modeled by (4.1.1.4),

where Ygij represents the natural log of the j-th measurement on the i-th (i = 1, ..., kg) worker in the

g-th (g = 1,... , G) group. The associated measurement error problem is a special case of that

discussed in section 4.2, which we outline as follows:


(4.3.1.1)

where μ_xgi = exp(μ_yg + b_gi + σ²_wg/2),  i = 1,…, k_g; g = 1,…, G;

(4.3.1.2)

(4.3.1.3)

Note that (4.3.1.1) is a direct generalization of (4.1.1.5) to include covariates. In concordance with (4.2.1.4), the covariates (C_gi1,…, C_giT) are assumed to be statistically independent of the true and surrogate exposure variables in the underlying population from which the data are obtained.

Note that E(μ_xgi) = μ_xg = exp[μ_yg + (σ²_bg + σ²_wg)/2]; also, recall in reference to (4.3.1.3) that we use the notation of section 1.3.1 to specify the lognormal distribution. Unlike Z_i in section 4.2, the chosen lognormal surrogate exp(Ȳ_gi·) (where Ȳ_gi· = n_gi⁻¹ Σ_{j=1}^{n_gi} Y_gij) in (4.3.1.2) does not have the same expectation as, and is not necessarily more variable than, the true (latent) exposure variable μ_xgi. In particular, E[exp(Ȳ_gi·)] = exp[μ_yg + (σ²_bg + n_gi⁻¹σ²_wg)/2] ≠ μ_xg unless n_gi = 1; also, Var[exp(Ȳ_gi·)] = {E[exp(Ȳ_gi·)]}² × [exp(σ²_bg + n_gi⁻¹σ²_wg) − 1], which may or may not exceed Var(μ_xgi) = μ²_xg[exp(σ²_bg) − 1].

Hence, while the naive surrogate regression replacing μ_xgi by exp(Ȳ_gi·) in (4.3.1.1) certainly leads to a biased estimator of β, whether it is attenuated or inflated depends upon the values of the set of nuisance parameters τ = (τ₁′,…, τ_G′)′, where τ_g = (μ_yg, σ²_bg, σ²_wg)′, g = 1,…, G. The multiplicative errors U_gi are given by exp(ε̄_gi· − σ²_wg/2), and they are independent (though not identically distributed) lognormal random variables. For known τ, using exp(Ȳ_gi·) of (4.3.1.2) as the surrogate exposure variable in conjunction with the CE or QL approach is equivalent to using μ̃_xgi, where μ̃_xgi is the "best predictor" of μ_xgi as defined and discussed in section 2.3.1.

Were we considering only one group, and were the exposure data balanced (i.e., were n_i = n, ∀ i), then a result analogous to (1.7.4.1) would be

β̂_OLS →_p β {exp[(2n)⁻¹(n−1)σ²_w][exp(σ²_b) − 1]} / [exp(σ²_b + n⁻¹σ²_w) − 1].

However, the combination of several groups of workers, together with the unbalancedness of the data within each group, essentially precludes consideration of the CO approach. At any rate, the message in Figure 4.1 leads us to avoid that approach even in the balanced case for a single group.


4.3.2 Details of parameter estimation

For the specific setting proposed in section 4.3.1, parameter estimation may be approached

using a strategy identical to that given in section 4.2.4. We present here a few necessary details for

solving equations (4.2.2.5) and (4.2.2.6); in that direction, consider the following definitions:

μ̃_xgi = E[μ_xgi | exp(Ȳ_gi·)],     (4.3.2.1)

t_gi1 = μ̃_xgi,     (4.3.2.2)

t_gi2 = R_gi − α − β t_gi1 − Σ_{t=1}^{T} γ_t C_git,     (4.3.2.3)

and

φ_gi = [Var(R_gi | exp(Ȳ_gi·), C_gi)]⁻¹.     (4.3.2.4)

Assuming for the sake of illustration that there are 3 covariates (i.e., T = 3), then the set of 5 equations given by (4.2.2.5) may be written in terms of (4.3.2.1)-(4.3.2.4), as follows:

Σ_{g=1}^{G} Σ_{i=1}^{k_g} t_gi2 φ_gi = 0,     (4.3.2.5)

Σ_{g=1}^{G} Σ_{i=1}^{k_g} t_gi1 t_gi2 φ_gi = 0,     (4.3.2.6)

Σ_{g=1}^{G} Σ_{i=1}^{k_g} C_gi1 t_gi2 φ_gi = 0,     (4.3.2.7)

Σ_{g=1}^{G} Σ_{i=1}^{k_g} C_gi2 t_gi2 φ_gi = 0,     (4.3.2.8)

Σ_{g=1}^{G} Σ_{i=1}^{k_g} C_gi3 t_gi2 φ_gi = 0.     (4.3.2.9)

Also, equation (4.2.2.6) for the dispersion parameter σ²_e may be written compactly as:

(4.3.2.10)

The solution of equation (4.3.2.10) as required for the CE approach, and the simultaneous solution of equations (4.3.2.5)-(4.3.2.10) as required for the QL approach, may be achieved using Newton-Raphson routines as discussed in section 4.2.4.

For use in conjunction with the results of section 4.2.3, note that the (K × p) matrix D_θ = (1, E, C₁, C₂,…, C_T), where p = (T + 2) and E is the K-vector whose i-th element is given by t_gi1 of (4.3.2.2). Likewise, the elements of the (K × K) diagonal matrix V⁻¹ are given by φ_gi of (4.3.2.4).

4.3.3 Incorporation of estimators for τ

The repeated exposure measurements on each worker in the g-th group (g = 1,…, G) provide information on τ_g that is more akin to that obtained in a reproducibility, rather than a validation, study (Thomas et al., 1993). Clearly, ML is one option for estimating τ_g and its (3 × 3) dispersion matrix. Assuming certain regularity conditions, utilization of the ML estimators validates inferential procedures based on the asymptotic results in section 4.2.3, due to the well-established asymptotic normal distribution of the vectors √k_g(τ̂_g,ml − τ_g), g = 1,…, G. Recalling that the τ̂_g's are mutually independent random vectors, we would form the block diagonal asymptotic [i.e., for k_g (g = 1,…, G) large] dispersion matrix V̂_τ,ml = V̂ar(τ̂_ml). Based on the general results of section 4.2.3, which assume certain regularity conditions, the corresponding asymptotic dispersion matrices Σ_QL and Σ_CE for the setting described in section 4.3.1 become

Σ_QL = (D′_θV⁻¹D_θ)⁻¹{I_p + (D′_θV⁻¹D_τ)V_τ,ml(D′_τV⁻¹D_θ)(D′_θV⁻¹D_θ)⁻¹}     (4.3.3.1)

and

(4.3.3.2)
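A dispersion matrix of the sandwich form in (4.3.3.1) can be assembled as in the sketch below. All matrices here are random stand-ins with toy dimensions; in practice D_θ, D_τ, V, and V_τ would be the estimated quantities described in this section.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy dimensions: K workers, p mean parameters, q = 3G nuisance parameters.
K, p, q = 40, 5, 6
D_theta = rng.normal(size=(K, p))
D_tau = rng.normal(size=(K, q))
V_inv = np.diag(rng.uniform(0.5, 2.0, K))      # stand-in for estimated V^-1
V_tau = np.diag(rng.uniform(0.01, 0.1, q))     # stand-in dispersion of tau-hat

A = np.linalg.inv(D_theta.T @ V_inv @ D_theta)  # (D_theta' V^-1 D_theta)^-1
B = D_theta.T @ V_inv @ D_tau                   # D_theta' V^-1 D_tau
Sigma_QL = A @ (np.eye(p) + B @ V_tau @ B.T @ A)
```

The leading term A is the dispersion one would report if τ were known; the second term inflates it to reflect the estimation of the nuisance parameters, so Σ_QL − A is positive semidefinite.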

Strictly speaking, the asymptotic theory leading to (4.3.3.1)-(4.3.3.2) pertains to the use of τ̂_ml. However, as demonstrated in conjunction with the approximate CIs for μ_x (section 2.2.5), and with the Wald-type regulatory test statistic for unbalanced data under Model I (section 3.3.6), an argument may be made for using the ANOVA estimators for the variance components and employing them to estimate the generalized least squares estimator of μ_yg. This argument stems from the fact that, for practical sample sizes, the asymptotic results are truly relevant only to the extent to which they permit reasonable approximate inference. The unbiasedness of the ANOVA estimators is an attractive feature that makes them preferable to ML in other circumstances (e.g., see Table 3.6); hence, we will use these estimators for demonstrating applications of the measurement error adjustment strategies presented in this chapter. Accordingly, we modify (4.3.3.1) and (4.3.3.2) by replacing V̂_τ,ml by V̂_τ,an, the (large sample) dispersion matrix of the vector τ̂_an. This


[(3G) × (3G)] block diagonal dispersion matrix is easily computed, using the results cited in section 2.2.5 (the vector of estimators treated in section 2.2.5 is qualitatively the same as τ̂_g,an, so that each block of V̂_τ,an may be computed using the results from that section).

Use of the conditional dispersion matrix

Under ordinary circumstances, an estimate of V_τ,an would be the appropriate dispersion matrix for practical use in conjunction with the modified versions of equations (4.3.3.1)-(4.3.3.2) and the asymptotic theory provided in section 4.2.3. However, it is important to recall that, as in ordinary regression, the theoretical variances Σ_QL and Σ_CE are appropriately viewed as conditional on the observed predictors in the model (in the general notation of section 4.2.3, Σ_QL should really be viewed, asymptotically, as Var(θ̂_QL | Z, C), and similarly for Σ_CE). Hence, for the specific application proposed in section 4.3.1, we have a somewhat atypical situation. For each group, the observed surrogates [exp(Ȳ_gi·), i = 1,…, k_g] for true mean exposure (μ_xgi) are themselves functions of the data used to calculate τ̂_g,an. (Note that this is not the case for more classical settings, such as the general setting for the simulation study in section 4.2.4, in which external validation data are used to estimate τ.) It follows that the appropriate estimated dispersion matrix V̂_τ,an for use in our occupational setting should be adjusted to reflect the conditional variability of τ̂_an, given the {Ȳ_gi·} (g = 1,…, G; i = 1,…, k_g).

Fortunately, the closed-form expressions for the elements of τ̂_an make this adjustment relatively simple in our setting; this is perhaps a further argument in favor of the use of the ANOVA estimators, as opposed to the MLEs. Recall (section 2.1.1) that the ANOVA estimators are:

σ̂²_bg = (k_g − 1)(MSB_g − MSW_g) / (N_g − Σ_{i=1}^{k_g} n²_gi/N_g)   and   σ̂²_wg = MSW_g,

g = 1,…, G, where the appropriate mean squares are given in Table 2.1, and N_g = Σ_{i=1}^{k_g} n_gi. Also, as we have done previously in sections 2.2.5 and 3.3.6, we estimate the generalized least squares estimator of μ_yg as

μ̂_yg = [Σ_{i=1}^{k_g} Ȳ_gi·/v̂ar(Ȳ_gi·)] / [Σ_{i=1}^{k_g} 1/v̂ar(Ȳ_gi·)],

where v̂ar(Ȳ_gi·) = (σ̂²_bg + σ̂²_wg/n_gi).

Now, let us consider the variances of μ̂_yg, σ̂²_bg, and σ̂²_wg, given {Ȳ_g1·, Ȳ_g2·,…, Ȳ_gk_g·}. First, as k_g becomes large, the only variability associated with μ̂_yg is essentially due to the {Ȳ_gi·}; hence (as


would also be true of the MLE), the asymptotic conditional variance of μ̂_yg is 0. Also (see Table 2.1), the conditional variance of MSB_g is 0, since MSB_g is completely determined by the {Ȳ_gi·}. Standard results for random samples from normal populations dictate that MSW_g ⊥ {Ȳ_gi·}; hence,

and

(see section 2.2.5). Finally, since the conditional variance of MSB_g is 0, we have

It follows that (for large kg) the conditional dispersion matrix of ir-9 = (jJ.!Jg' U~g' U~g)1 is given by:

(4.3.3.3) •

(g = 1,...,G), where v22 = Var(U~g I {Ygi})' v 23 = Cov(U~g' u'~g {Ygi})' and v33 = Var(U~g I{Ygi})' For practical use, we may estimate the nonzero terms in (4.3.3.3) by replacing the unknown

variance components by their estimates in the functional expressions. The resulting estimated dispersion matrix forms the g-th block of V̂_τ,an, which is used to estimate Σ_QL and Σ_CE via the modified (to account for the use of the ANOVA estimators) expressions (4.3.3.1) and (4.3.3.2).
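As a concrete illustration, the ANOVA estimators and the conditional dispersion block above can be sketched in Python. This is an illustrative sketch only: the function names are ours, and it assumes σ̂²_bg,an = c(MSB_g − MSW_g) with c = (k_g − 1)/(N_g − Σ n²_gi/N_g), so that conditionally on the worker means only MSW_g varies.

```python
import numpy as np

def anova_estimates(groups):
    """ANOVA variance-component estimates and the GLS estimate of mu_y for
    one group; `groups` is a list of 1-D arrays of logged exposures, one
    array per worker."""
    k = len(groups)
    n = np.array([len(y) for y in groups], dtype=float)
    N = n.sum()
    ybar_i = np.array([y.mean() for y in groups])
    grand = np.concatenate(groups).mean()
    msw = sum(((y - y.mean()) ** 2).sum() for y in groups) / (N - k)
    msb = (n * (ybar_i - grand) ** 2).sum() / (k - 1)
    c = (k - 1) / (N - (n ** 2).sum() / N)
    sb2 = c * (msb - msw)            # between-worker component (ANOVA)
    sw2 = msw                        # within-worker component (ANOVA)
    v = sb2 + sw2 / n                # estimated var of each worker mean
    mu = (ybar_i / v).sum() / (1.0 / v).sum()   # GLS estimate of mu_y
    return mu, sb2, sw2, c, N, k

def conditional_dispersion(sw2, c, N, k):
    """Conditional dispersion of (mu_hat, sb2_hat, sw2_hat) given the worker
    means, as in (4.3.3.3): only MSW contributes variability."""
    vmsw = 2.0 * sw2 ** 2 / (N - k)  # Var(MSW)
    V = np.zeros((3, 3))
    V[1, 1] = c ** 2 * vmsw
    V[1, 2] = V[2, 1] = -c * vmsw
    V[2, 2] = vmsw
    return V
```

Note that for balanced data (n_gi ≡ n) the constant c reduces to 1/n, so the between-worker estimator reduces to the familiar (MSB − MSW)/n.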

Specification of the matrix D_τ

An estimate of the [K × (3G)] matrix D_τ (as defined in section 4.2.3) is readily obtained, after we make the following specifications:

[displayed equations (4.3.3.4)–(4.3.3.8)]

The basic derivatives required for forming the matrix D_τ, written in terms of (4.3.3.4)–(4.3.3.8), are given by:

[displayed equations (4.3.3.9)–(4.3.3.11)]

g = 1,…,G.

Inference in practice

As mentioned in section 4.2.3, in practice we utilize estimates of the matrices D_θ, D_τ, and V, in addition to V̂_τ,an. We estimate these quantities using the operable estimates in place of unknown parameters in the functional expressions. In particular, for inference using the CE or QL method, we utilize β̂_CE or β̂_QL, respectively, in place of β. For both methods, we utilize the ANOVA estimates of the variance components, although the MLEs could also be used. With regard to the use of these ANOVA estimators, we stress again that the asymptotic normality of the vector τ̂_an has not been established (see the brief discussion in section 6.2.5); what has been established is the tendency for τ̂_an to be preferable to τ̂_ml in empirical studies evaluating related statistical methods.

Finally, we compute approximate 100(1 − α)% CIs for β based on the QL and CE methods as

β̂_QL ± z_{1−α/2} [V̂ar(β̂_QL)]^{1/2}        (4.3.3.12)

and

β̂_CE ± z_{1−α/2} [V̂ar(β̂_CE)]^{1/2},       (4.3.3.13)

where V̂ar(β̂_QL) is the (2,2) element of the (estimated) dispersion matrix (4.3.3.1), but with V̂_τ,an replacing V̂_τ,ml; V̂ar(β̂_CE) is obtained analogously, based on the modified version of (4.3.3.2). Clearly, approximate CIs for the other parameters (α, γ_1,…,γ_T) can be constructed analogously, as was mentioned in section 4.2.2.
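These approximate CIs are Wald-type constructions of the form estimate ± z·(standard error); a minimal sketch (the function name is ours):

```python
import math
from statistics import NormalDist

def wald_ci(est, var_est, alpha=0.05):
    """Approximate 100(1 - alpha)% CI: est +/- z_{1-alpha/2} * sqrt(var_est)."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    half = z * math.sqrt(var_est)
    return est - half, est + half
```

The one-sided rejection rule tallied in the simulation tables (reject H0: β = 0 in favor of β < 0) corresponds to the upper limit of such an interval falling below zero.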

4.3.4 Simulation study applying CE and QL adjustment methods

The results from applying the CE and QL methods [as well as the naive regression that ignores measurement error in the surrogate exp(Ȳ_gi)] on data simulated according to the proposed models in section 4.3.1 are provided in Table 4.4. For each of the cases considered, exposure data were generated for G = 3 groups according to model (4.1.1.4) and response data according to model (4.3.1.1). In each case, T = 3 covariates were included in (4.3.1.1); these covariates were generated independently of one another using distributions roughly compatible with the observed distributions of age, height, and pack-years of smoking in the actual data from Heederik et al. (1991) that is used for the example in section 5.2.4. For each simulated data set, the nuisance parameters τ were estimated as described in section 4.3.3, and an estimate of the conditional dispersion matrix [the g-th block of which is given by equation (4.3.3.3)] was calculated, together with the other quantities necessary for estimating Σ_QL and Σ_CE.

For each of the three cases displayed in Table 4.4, the following values were used for the primary and dispersion parameters (θ and σ²_e, respectively): α = 1, β = −1, γ_1 = −0.25, γ_2 = −0.5, γ_3 = 1, and σ²_e = 0.5. However, the three cases considered in the table differ in terms of the value of K used and in terms of the ranges used for the n_gi values. In each case, separately for each of the three groups, unbalanced exposure datasets were derived from balanced ones (for which n_gi = n*, ∀ g and i) according to a simple algorithm identical to the one described in section 3.3.7 for generating unbalanced exposure data for the simulation study evaluating the Wald-type test statistic (3.3.6.1). First, the value of n* was chosen via

n* = (n̄ − 1)/(1 − p*) + 1,


where n̄ is as displayed in Table 4.4 and p* = 0.10 in each instance. Then, independently for each of the k_g = K/3 workers in each group, n_gi of the n* observations were retained, with n_gi = n* − T_gi, where T_gi ~ Binomial(n* − 1, p*). Each resulting simulated set of exposure data for a particular group is unbalanced, with k_g = K/3 workers (g = 1,2,3) and from 1 to n* measurements on each particular worker. By design, E(n_gi) = n̄, and hence the expected total number of measurements on each simulated group of workers is N_g = n̄k_g (g = 1,2,3). Therefore, each full simulated data set contains K workers, with on average N = n̄K total exposure measurements.
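The data-generation scheme just described can be sketched as follows. This is a sketch under stated assumptions: the relations n* = (n̄ − 1)/(1 − p*) + 1 and n_gi = n* − T_gi are our reading of the (partly garbled) text, the exposure model is taken as Y_gij = μ_yg + b_gi + e_gij, and the function name is ours.

```python
import numpy as np

def simulate_group(mu_y, sb2, sw2, k, nbar, pstar=0.10, seed=0):
    """Simulate one group's unbalanced logged-exposure data: choose n*,
    thin each worker's n* observations via a Binomial count, then draw
    Y_gij = mu_y + b_gi + e_gij with b ~ N(0, sb2) and e ~ N(0, sw2)."""
    rng = np.random.default_rng(seed)
    nstar = round((nbar - 1) / (1.0 - pstar)) + 1
    n = nstar - rng.binomial(nstar - 1, pstar, size=k)   # 1 <= n_gi <= n*
    b = rng.normal(0.0, np.sqrt(sb2), size=k)
    return [mu_y + b[i] + rng.normal(0.0, np.sqrt(sw2), size=n[i])
            for i in range(k)]
```

A full simulated data set would combine G = 3 such groups, each with its own nuisance-parameter triple.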

The first case considered in Table 4.4 is designed to mimic roughly (on average) the sample size conditions for the actual data of Heederik et al. (1991), except that the number of workers (and hence the average number of exposure measurements per data set) is increased by roughly 50% (K = 60, n̄ = 4, average N_g = 80, average N = 240). For the second case, we keep roughly the same number of workers as in the observed data, but maintain the 50% sample size inflation by increasing the average number of measurements per worker (K = 39, n̄ = 6, average N_g = 78, average N = 234). For the third case, overall sample sizes are markedly higher (K = 150, n̄ = 3, average N_g = 150, average N = 450). For each of the three cases, 1000 independent sets of data were generated. In rare cases, one or more of the required numerical algorithms failed to converge, in which case the data set was dropped from consideration. However, for each of the three cases, a convergence rate of at least 98% was achieved.

The first two cases displayed in Table 4.4 were chosen to study the effect of varying the allocation of (roughly) the same total sample size under otherwise identical conditions. For these first two cases, the following values were used for the nuisance parameters τ = (τ_1′, τ_2′, τ_3′)′, as reported in the table: μ_y1 = −2, σ²_b1 = 0.2, σ²_w1 = 1.5; μ_y2 = −2, σ²_b2 = 1, σ²_w2 = 2; μ_y3 = −2, σ²_b3 = 3, σ²_w3 = 1. One consequence of this choice of τ is that, for the vast majority of (simulated) workers, the theoretical variance of the surrogate [exp(Ȳ_gi)] for true mean exposure (μ_xgi) is smaller than the theoretical variance of μ_xgi itself. This motivates consideration of the third case, for which the following values for the elements of τ were used: μ_yg = −2, σ²_bg = 0.3, σ²_wg = 0.5 (g = 1,2,3). As a consequence, the theoretical variance of the surrogate [exp(Ȳ_gi)] for the true mean is larger than the variance of μ_xgi for the simulated workers producing the results for the third case in Table 4.4. This situation is more typical of measurement error models in general, and allows illustration of the different types of bias that are possible in the current setting if one ignores measurement error in the surrogate.
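The direction of these variance comparisons can be checked directly, assuming that Ȳ_gi ~ N(μ_yg, σ²_bg + σ²_wg/n_gi) and that true mean exposure is μ_xgi = exp(μ_yg + b_gi + σ²_wg/2); the lognormal identity Var(e^Z) = e^{2m+v}(e^v − 1) for Z ~ N(m, v) then gives the two theoretical variances. The function names and the specific model form are our assumptions.

```python
import math

def var_true_mean(mu_y, sb2, sw2):
    # Var(mu_xgi), where log(mu_xgi) ~ N(mu_y + sw2/2, sb2)
    return math.exp(2.0 * mu_y + sw2) * math.exp(sb2) * (math.exp(sb2) - 1.0)

def var_surrogate(mu_y, sb2, sw2, n):
    # Var(exp(Ybar_gi)), where Ybar_gi ~ N(mu_y, sb2 + sw2/n)
    v = sb2 + sw2 / n
    return math.exp(2.0 * mu_y + v) * (math.exp(v) - 1.0)
```

Under this model, the second group's values from the first two cases (μ_y = −2, σ²_b = 1, σ²_w = 2, n_gi = 4) make the surrogate the less variable quantity, while the third case's values (−2, 0.3, 0.5, n_gi = 3) reverse the inequality, matching the discussion above.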


Table 4.4 Summary of simulation results illustrating specific occupational health application a,b

(G = 3, k_g = 20, K = 60, n̄ = 4) c,d

  Estimator       Mean      Median    SD
  β̂_OLS          −1.266    −1.195    0.564
  β̂_CE           −0.974    −0.897    0.433
  β̂_QL           −0.979    −0.937    0.324
  V̂ar(β̂_CE)      0.206     0.127    0.396
  V̂ar(β̂_QL)      0.086     0.070    0.077
  Width(OLS)      0.743     0.674    0.441
  Width(CE)       1.586     1.397    0.809
  Width(QL)       1.092     1.035    0.350

  Coverages e of approximate 95% CIs for β:   Naive 0.478;   CE 0.895 (0.803 f);   QL 0.897 (0.975 f)

(G = 3, k_g = 13, K = 39, n̄ = 6) c,d

  Estimator       Mean      Median    SD
  β̂_OLS          −1.424    −1.354    0.598
  β̂_CE           −0.943    −0.905    0.401
  β̂_QL           −0.978    −0.942    0.427
  V̂ar(β̂_CE)      0.171     0.126    0.154
  V̂ar(β̂_QL)      0.228     0.089    3.481
  Width(OLS)      1.196     1.050    0.777
  Width(CE)       1.507     1.393    0.591
  Width(QL)       1.301     1.172    1.346

  Coverages e of approximate 95% CIs for β:   Naive 0.548;   CE 0.897 (0.810 f);   QL 0.908 (0.899 f)

(G = 3, k_g = 50, K = 150, n̄ = 3) d,g

  Estimator       Mean      Median    SD
  β̂_OLS          −0.591    −0.575    0.444
  β̂_CE           −0.952    −0.932    0.663
  β̂_QL           −0.956    −0.933    0.667
  V̂ar(β̂_CE)      0.459     0.414    0.200
  V̂ar(β̂_QL)      0.457     0.413    0.200
  Width(OLS)      1.660     1.642    0.314
  Width(CE)       2.599     2.523    0.538
  Width(QL)       2.595     2.520    0.538

  Coverages e of approximate 95% CIs for β:   Naive 0.813;   CE 0.952 (0.294 f);   QL 0.950 (0.300 f)

a: 1,000 simulated datasets in each case
b: primary parameters used are: α = 1, β = −1, γ_1 = −0.25, γ_2 = −0.5, γ_3 = 1, σ²_e = 0.5.
c: τ′ = (μ_y1, σ²_b1, σ²_w1; μ_y2, σ²_b2, σ²_w2; μ_y3, σ²_b3, σ²_w3) = (−2, 0.2, 1.5; −2, 1, 2; −2, 3, 1)
d: n̄ utilized in equation (6.4); E(n_gi) = n̄; E(N) = n̄K
e: CIs are calculated according to (4.3.3.12) and (4.3.3.13); coverages are reported in terms of proportions of datasets for which the true β is contained within the calculated interval.
f: proportion of intervals for which H0: β = 0 is rejected in favor of H1: β < 0.
g: τ′ = (μ_y1, σ²_b1, σ²_w1; μ_y2, σ²_b2, σ²_w2; μ_y3, σ²_b3, σ²_w3) = (−2, 0.3, 0.5; −2, 0.3, 0.5; −2, 0.3, 0.5)

4.3.5 Commentary regarding simulation results

Summary

In general, the simulation results reported in Table 4.4 suggest that the adaptation of the adjustment procedures described in section 4.2 to a specific occupational health setting [(4.3.1.1)–(4.3.1.3)] is potentially quite useful. In each of the three cases, the adjusted estimators β̂_CE and β̂_QL allow us to eliminate most of the bias in the naive estimator β̂_OLS, although the discrepancy between the sample mean and median values of these adjusted estimators still reflects some lack of symmetry in their unconditional distributions (also see section 4.2.4). The coverage probabilities of the approximate CIs based on the theoretical developments in section 4.2.3 are less than the nominal 0.95 in the first two cases, although the discrepancy is not major. For the third case (in which we used larger sample sizes), the coverage achieved is almost exact.

The first two cases illustrate the somewhat atypical inflation phenomenon associated with β̂_OLS under conditions for which the surrogate is generally less variable than the true mean exposure. However, as expected, the third case illustrates the opposite effect (attenuation). In addition to the bias, an arguably more crucial drawback to the surrogate regression is clearly evident in the low coverage probabilities associated with the naive CI for β. Other simulation results (not presented here) verify that a full range of qualitative effects (i.e., inflation, attenuation, excessively wide or narrow CIs) when using the naive β̂_OLS are possible in this setting.

With regard to comparing the performances of the two adjusted estimators (CE and QL), the better power (in terms of the proportion of CIs for which the upper limit is negative) attendant with the QL procedure is evident, particularly for the first two cases presented in Table 4.4. This follows from the weighted nature of the QL approach as illustrated by equation (4.2.4.5). The simulations also indicate that the QL estimator retains less bias than does the CE estimator (both appear to be slightly attenuated, probably due mostly to the necessity of incorporating estimates of the unknown nuisance parameter vector τ). The relatively high sample mean and standard deviation of V̂ar(β̂_QL) in the second case are misleading, and are the result of one or two (out of 1000) simulated data sets that may have led to ill-conditioned matrices. However, if we consider median values, both for these estimated conditional variances and for the widths of the associated CIs, we find that the expected efficiency gains using QL as opposed to CE methods are indeed present in all three cases; in the first two cases, these gains are substantial.

The change in sample size allocation from the first case to the second in Table 4.4 does not produce a marked difference in performance of the measurement error adjustment procedures, although for QL there is a slight tendency toward narrower approximate CIs in the first case. Despite the lack of clear evidence, it is reasonable to expect in general that the performance of the adjusted inference procedures is sensitive to both the total number of subjects and the overall quality of the estimates of the nuisance parameters, and not just to the total number of exposure measurements. The third case considered in Table 4.4 reflects significantly less power to reject H0: (β = 0) than do the first two cases. This is surprising at first, given the larger sample size and the fact that the primary parameters (θ) used are the same for all three cases. However, the combination of nuisance parameters (τ) used in the third case corresponds to markedly smaller variance associated with the true predictor μ_xgi; as would be the case in ordinary regression, this has a negative impact with regard to power.


Finally, based on considerations mentioned in section 4.3, the estimated conditional large sample dispersion matrix of the vector τ̂_an was used in the simulations leading to Table 4.4. However, for comparison, CIs based on the use of the unconditional estimated dispersion matrix were also calculated. The coverages and power estimates based on the use of these CIs for the simulations in Table 4.4 are given in Table 4.5.

Table 4.5 Performance of CIs based on unconditional V̂ar(τ̂_an) for simulations in Table 4.4 a,b

(G = 3, k_g = 20, K = 60, n̄ = 4) c,d,e:    CE 0.910 (0.752 f);   QL 0.918 (0.971 f)

(G = 3, k_g = 13, K = 39, n̄ = 6) c,d,e:    CE 0.904 (0.757 f);   QL 0.920 (0.878 f)

(G = 3, k_g = 50, K = 150, n̄ = 3) d,e,g:   CE 0.955 (0.281 f);   QL 0.954 (0.293 f)

a: 1,000 simulated datasets in each case
b: primary parameters used are: α = 1, β = −1, γ_1 = −0.25, γ_2 = −0.5, γ_3 = 1, σ²_e = 0.5.
c: τ′ = (μ_y1, σ²_b1, σ²_w1; μ_y2, σ²_b2, σ²_w2; μ_y3, σ²_b3, σ²_w3) = (−2, 0.2, 1.5; −2, 1, 2; −2, 3, 1)
d: n̄ utilized in equation (6.4); E(n_gi) = n̄; E(N) = n̄K
e: CIs are calculated according to (4.3.3.12) and (4.3.3.13); coverages are reported in terms of proportions of datasets for which the true β is contained within the calculated interval.
f: proportion of intervals for which H0: β = 0 is rejected in favor of H1: β < 0.
g: τ′ = (μ_y1, σ²_b1, σ²_w1; μ_y2, σ²_b2, σ²_w2; μ_y3, σ²_b3, σ²_w3) = (−2, 0.3, 0.5; −2, 0.3, 0.5; −2, 0.3, 0.5)


Upon comparing the results in Table 4.5 with those in Table 4.4, we find that the coverage probabilities are quite similar in these cases whether we use the conditional or unconditional estimated dispersion matrix of τ̂_an. In fact, as we would expect, the use of the unconditional dispersion matrix provides slightly higher coverage, which is beneficial in the first two cases. In the third case (in which sample sizes were much larger), we see a hint of conservativeness when using the unconditional dispersion matrix, as opposed to coverage that is almost exactly nominal in Table 4.4.

Small-sample considerations

Despite the argument (section 4.3.3) in favor of using Var(τ̂_g | {Ȳ_gi}) of (4.3.3.3), Tables 4.4 and 4.5 suggest that better coverage (at least in smaller sample size situations) may be obtained using the unconditional Var(τ̂_g) when constructing the estimate V̂_τ,an. In general, this choice is a subjective one; we would expect better power and near nominal coverage using the conditional dispersion matrix as sample sizes become large.

Another sample size consideration has to do with the stability of numerical algorithms for obtaining β̂_QL and its estimated variance. For all three cases considered in Table 4.4, a convergence rate of 98% or higher was obtained. However, further simulation studies (not detailed here) suggest that the convergence rate for QL worsens as sample sizes become smaller; also, the estimates V̂ar(β̂_QL) tend to become more unstable. Since this is generally not a problem with the CE method, this method may be favored in some cases when sample sizes are small. Hence, while QL is generally preferable for theoretical reasons, CE is a good substitute if small sample sizes lead to numerical problems.

The modeling strategy proposed for the occupational health setting of section 4.3.1 allows one

to combine exposure and health outcome data from several groups of workers, possibly from different

plants or industries (given, of course, that the same toxicant and health outcome are measured for

each group). This is an advantage in light of real-world sample size limitations. Still, the stability of

required numerical algorithms and the validity of the normal approximations in conjunction with the

adjustment procedures proposed in this chapter require a reasonable data collection effort. In

particular, one should strive for enough workers and exposure measurements per worker within each

group to obtain a fairly accurate estimate of τ_g for all g (g = 1,…,G).


Chapter 5: Examples Using Actual Occupational Exposure Data

5.1 PURPOSE

In Chapters 2 through 4, statistical methodology was developed with intended applications in

occupational epidemiology. Specifically, this methodology addresses estimation, prediction,

hypothesis testing, and measurement error issues with potential practical uses for the analysis of

workplace exposure data and health outcome data on workers. While much of the proposed

methodology has been applied to simulated data sets, no actual exposure or health effects data have

been utilized. The purpose of this chapter is to further illustrate applications of the methodology

using actual exposure and outcome data collected on workers in various industrial settings.

5.2 ORIGINS OF DATA AND CALCULATIONS

5.2.1 Sample calculations under Model I, balanced data

In this section, we use an actual set of (logged) shift-long exposure data collected on a group of

maintenance mechanics from a nickel producing plant to illustrate many of the statistical methods

proposed in Chapters 2 and 3. This set of data (n =2, k =14) represents total nickel dust exposures

(encompassing all forms of nickel as measured by the so-called "total dust" method), and is tabulated in the Appendix (Table A1). For these data, the values of the ANOVA and ML variance component estimates, together with the standard estimate of μ_y (namely, the overall mean of the logged exposure measurements, Ȳ··), are as follows:

Ȳ·· = −3.393, σ̂²_w,an = σ̂²_w,ml = 0.906, σ̂²_b,an = 0.098, and σ̂²_b,ml = 0.060

(for the standard computational formulae, see section 2.1.1).
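For the balanced one-way random-effects model, these ANOVA and ML estimates can be computed directly; a sketch assuming the standard closed forms σ̂²_b,an = (MSB − MSW)/n and, at an interior maximum of the likelihood, σ̂²_w,ml = MSW with σ̂²_b,ml = [(1 − 1/k)MSB − MSW]/n (the function name is ours):

```python
import numpy as np

def balanced_variance_components(y):
    """y: (k, n) array of logged exposures (k workers, n measurements each).
    Returns (grand mean, ANOVA between, within, ML between)."""
    k, n = y.shape
    ybar_i = y.mean(axis=1)
    msw = ((y - ybar_i[:, None]) ** 2).sum() / (k * (n - 1))
    msb = n * ((ybar_i - y.mean()) ** 2).sum() / (k - 1)
    sb2_an = (msb - msw) / n
    sb2_ml = ((1.0 - 1.0 / k) * msb - msw) / n   # ML shrinks the between component
    return y.mean(), sb2_an, msw, sb2_ml
```

These closed forms are consistent with the estimates reported above, in which the ANOVA and ML within-worker estimates coincide (0.906) and the ML between-worker estimate (0.060) is smaller than its ANOVA counterpart (0.098).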

Estimation of population mean and variance

Table 5.1 displays the realized values of various estimators of μ_x (2.1.1.3) and σ²_x (2.1.1.4), based on Model I, for the nickel data. Recall that μ̂_x,1 (2.2.1.1) and σ̂²_x,1 (2.2.1.2) are the MLEs, while μ̂_x,2 (2.2.1.3) and σ̂²_x,2 (2.2.1.4) are intuitive estimators formed by substituting the ANOVA estimates in place of the MLEs for the variance components. Also displayed are the values of the UMVU estimators μ̂_x,umvu (2.2.2.3) and σ̂²_x,umvu (2.2.2.7). In parentheses below each estimate of the mean (μ_x) is the ML estimate of the theoretical variance associated with that estimate. For μ̂_x,1 and μ̂_x,2, this theoretical variance is given by equation (2.2.1.6); for μ̂_x,umvu, it is given by equation (2.2.3.5).

Although the computational formulae for the UMVU estimators and for their estimated variances involve infinite series, it was found that summing the first 10 terms for each required hypergeometric function (0F1) provided more than adequate convergence. [Similar findings for related univariate estimation problems were reported by Shimizu (1988) and by Attfield and Hewett (1992)]. This same truncation was used for the efficiency calculations reported in Tables 2.4 and 2.5.
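The truncated series in question is straightforward to implement; a sketch of the 10-term 0F1 sum (our own helper, not the dissertation's code):

```python
import math

def hyp0f1(b, z, terms=10):
    """0F1(; b; z) = sum_{j>=0} z^j / ((b)_j * j!), truncated after `terms`
    terms, mirroring the 10-term truncation described in the text."""
    total, term = 0.0, 1.0
    for j in range(terms):
        total += term
        term *= z / ((b + j) * (j + 1))   # ratio of consecutive series terms
    return total
```

A quick accuracy check uses the identity 0F1(; 1/2; x²/4) = cosh(x), for which 10 terms already agree to many decimal places at moderate x.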

Table 5.1 Sample calculations to estimate population mean and variance for nickel data a,b
(n = 2, k = 14)

  μ̂_x,umvu        μ̂_x,1           μ̂_x,2           σ̂²_x,umvu   σ̂²_x,1   σ̂²_x,2
  0.0541          0.0545          0.0556          0.0042      0.0048   0.0053
  (1.60 × 10⁻⁴)   (1.67 × 10⁻⁴)   (1.79 × 10⁻⁴)

a: unpublished exposure data tabulated in Table A1
b: numbers in parentheses are MLEs of theoretical variances associated with the estimate

While the discrepancies among the three estimates of the mean in Table 5.1 are minor for these data, note that their relative magnitudes are in agreement with the trends seen in the theoretical bias considerations (see section 2.2.1 and Table 2.3). The same is true of the relative magnitudes of the (estimated) variances associated with these three point estimates of μ_x, in accordance with the considerations illustrated in Table 2.4. Note that we see a similar trend (in magnitude) among the three estimates of σ²_x for these data. The small differences observed with respect to the three estimates of μ_x are consistent with Tables 2.3–2.5, which suggest that for very low values of the variance ratio (φ = σ²_b/σ²_w), the biases associated with μ̂_x,1 and μ̂_x,2 are very small, and their efficiencies relative to μ̂_x,umvu are high. For these data, the point estimate φ̂ = σ̂²_b,an/σ̂²_w,an is 0.11.


Prediction of individual workers' mean exposures


Figure 5.1 displays a plot of various possible predictors of individual mean exposure [μ_xi of equation (2.1.1.5)] for the 14 workers in the nickel data set, and illustrates the ad hoc method proposed by Rappaport et al. (1995a) for recommending an intervention strategy for reducing workplace exposures.

Figure 5.1 Plot of the values of three estimated predictors of mean exposures under Model I: X̄_i (squares), μ̂_xi,CU (triangles), and μ̂_xi,BP (dots), for the 14 workers in the nickel data set (Table A1)

The solid line in Figure 5.1 depicts the value (0.0541) of the UMVU estimator for μ_x. The dashed lines represent the upper and lower limits of an approximate 100(1 − 1/k)% (= 92.9%) confidence interval for μ_x, calculated according to (2.2.5.3). Since no value of μ̂_xi,BP falls outside these bounds, the recommended intervention strategy according to the rule of thumb proposed by Rappaport et al. (1995a) would involve reducing exposure for workers via engineering or administrative controls aimed at the entire group. The intuition behind this rule of thumb is that the values of μ̂_xi,BP displayed in Figure 5.1 suggest that the discrepancy among true mean exposures is minor for this group of workers.

The estimated predictor μ̂_xi,BP displayed for each worker in Figure 5.1 is calculated according to equation (2.3.4.4), which utilizes μ̂_x,umvu. Hence, according to the empirical study summarized in Table 2.7, μ̂_xi,BP is likely the best available estimated predictor in terms of mean squared error of prediction (MSEP). The alternative predictors X̄_i and μ̂_xi,CU (2.3.4.1) are also displayed in the figure; these have the potential advantage of being both unconditionally and conditionally unbiased, with μ̂_xi,CU generally achieving lower MSEP than X̄_i (section 2.3.4 and Table 2.7). Figure 5.1 clearly displays a familiar feature of empirical Bayes-like predictors, as the estimated μ̂_xi,BP's tend to be compressed toward the estimate of the overall population mean μ̂_x. This is the feature that prevents the conditional unbiasedness of μ̂_xi,BP, while at the same time leading to its favorable MSEP properties under the assumed model.

Figure 5.1 displays an extreme case, in which the discrepancy among the various predictors of μ_xi is quite striking. Again, this is likely due to the apparent low variance ratio (φ = σ²_b/σ²_w) for these data. The empirical Bayes-like predictor μ̂_xi,BP picks up the fact that most of the variability in these data is within-worker, and, in familiar fashion, produces values that are tightly distributed about the estimated population mean. As Table 2.6 suggests, this characteristic of μ̂_xi,BP produces the most substantial gains in MSEP in situations, apparently like this one, where φ is small.
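The compression seen in Figure 5.1 is the usual shrinkage behavior; on the log scale it can be illustrated with a generic BLUP-style weight. This is illustrative only — it is not the dissertation's exposure-scale predictor (2.3.4.4), and the function name is ours.

```python
import numpy as np

def shrunken_log_means(ybar_i, n, mu, sb2, sw2):
    """Generic empirical-Bayes-style shrinkage of worker means toward mu:
    the weight w = sb2 / (sb2 + sw2/n) tends to 0 as phi = sb2/sw2 -> 0,
    compressing the predictions toward the population mean."""
    w = sb2 / (sb2 + sw2 / n)
    return mu + w * (np.asarray(ybar_i) - mu)
```

With the nickel data's estimates (σ̂²_b = 0.098, σ̂²_w = 0.906, n = 2), the weight is about 0.18, so predicted values cluster tightly around the estimated population mean, as in the figure.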

Hypothesis test for workplace exposure assessment

To facilitate computation of the three large sample-based test statistics (Z_w, Z_l, and Z_s) derived in section 3.3.2, we note that the applicable OEL for nickel dust is 1 mg/m³ (ACGIH, 1995). Each of these three test statistics addresses the hypothesis testing problem introduced in section 3.3.1, namely H0: (θ ≥ A) vs. H1: (θ < A), where θ = Pr(μ_xi > OEL). For this example, we use the values A = 0.05 and α = 0.05.

For the nickel data, Table 5.2 presents the restricted parameter estimates (see section 3.3.2), along with the values of the test statistics Z_w (3.3.2.3), Z_l [based on (3.3.2.13)], and Z_s [based on (3.3.2.14)]. Note that all three test statistics are in agreement to reject H0. This conclusion is quite consistent with the rough point estimate θ̂ (section 3.3.2), which is very near 0 for these data. It also seems consistent with the fact that all of the estimated μ̂_xi,BP values plotted in Figure 5.1 lie far below the OEL of 1 mg/m³, as does the estimate (and upper confidence bound) for μ_x. Due to the small value of φ̂ = (σ̂²_b,an/σ̂²_w,an), rejection at the α* = α/2 level with regard to Z_w and Z_l (see section 3.3.5) is not necessary for the example in Table 5.2. Since H0 is rejected, there is no essential need to implement the groupwise intervention measures mentioned in conjunction with Figure 5.1. The results of this portion of the analysis of the data in Table A1 are also discussed by Lyles et al. (1995a).


Table 5.2 Sample calculations for workplace exposure assessment using nickel data a,b
(n = 2, k = 14)

  [column headings not recovered]
  0.109   −2.823   1.992   1.002   −3.660   −3.595   −2.231   < 10⁻¹⁶

a: unpublished exposure data tabulated in Table A1
b: A = 0.05, α = 0.05, z_{1−A} = 1.645, z_α = −1.645, OEL = 1 mg/m³

5.2.2 Sample calculations under Model I, unbalanced data

To illustrate methodology from Chapters 2 and 3 appropriate for unbalanced sets of exposure measurements, consider another actual set of (logged) shift-long exposure data obtained on workers in the nickel producing industry. This set of data (k = 12, N = 27, 1 ≤ n_i ≤ 5) represents total nickel dust exposures (again, encompassing all forms of nickel as measured by the "total dust" method) of furnacemen in the refinery of a nickel-producing complex, and is provided in Table A2. For these data, the values of the ANOVA and ML variance component estimates, along with the ML estimate (μ̂_y,ml) and the alternative estimate μ̂_y,an (see sections 2.2.5 and 3.3.6), are as follows:

μ̂_y,an = −0.590, μ̂_y,ml = −0.589, σ̂²_w,an = 1.482, σ̂²_w,ml = 1.426, σ̂²_b,an = 0.138, and σ̂²_b,ml = 0.125

(computational formulae for ANOVA estimators are cited in section 2.1.1).

Prediction of individual workers' mean exposures

Figure 5.2 displays a plot of two possible predictors of μ_xi for the 12 workers in the second nickel industry data set, and illustrates an extension (to the unbalanced case) of the ad hoc method proposed by Rappaport et al. (1995a) for recommending an intervention strategy. The two predictors depicted in Figure 5.2 are X̄_i and an estimated version of μ̂_xi,BP, where in this case μ̂_xi,BP is estimated using ML estimates for unknown parameters in a direct extension of (2.3.4.3) to the unbalanced case. [We do not use an analog to (2.3.4.4), since the UMVU estimator of μ_x is available for the balanced case only].

Figure 5.2 Plot of the values of two estimated predictors of mean exposures under Model I: X̄_i (squares) and μ̂_xi,BP (dots), for the 12 workers in the nickel dust data set (Table A2)

In Figure 5.2, the solid line represents the value (1.205) of the ML estimator for μ_x. The dashed lines represent the upper and lower limits of an approximate 100(1 − 1/k)% (= 91.7%) confidence interval for μ_x, this time calculated according to equation (2.2.5.4). As in the previous example, no estimated values of μ̂_xi,BP fall outside these bounds. Hence, in the event that exposures are deemed unacceptable, an analogous rule of thumb to that proposed for balanced data by Rappaport et al. (1995a) dictates engineering or administrative measures to reduce exposure uniformly for all workers in the group.

Hypothesis test for workplace exposure assessment (σ̂²_b > 0)

Section 3.3.6 discusses the extension of the hypothesis testing strategy of section 3.3.1 to the unbalanced data case. For the nickel data, the Wald-type test statistic (3.3.6.1) makes use of the estimates μ̂_y,an, σ̂²_b,an, and σ̂²_w,an given above (for details, see section 3.3.6). Taking A = 0.10 and α = 0.05, and recalling that OEL = 1 mg/m³, we obtain the following point estimate of θ together with the value of the test statistic:


θ̂ = 0.658, Z_w = 0.925.

The point estimate of θ, as well as the fact that μ̂_x,ml and many of the predicted mean exposures (μ̂_xi,BP) in Figure 5.2 are above the OEL of 1 mg/m³, is consistent with the conclusion based on Z_W

(namely, we do not reject Ho). Hence, the extension of the Rappaport et al. (1995a) protocol would

call for control measures aimed at the group as a whole. This example is also discussed by Lyles et

al. (1995b).
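Under Model I, the parameter θ tested here can be interpreted as the probability that a randomly selected worker's true mean exposure exceeds the OEL. A minimal sketch of that quantity under assumed log-scale normality follows; the exact estimator used in Chapter 3 may differ, and all symbols and arguments below are assumptions, not the dissertation's formulas:

```python
import math

def exceedance_fraction(mu_y, sb2, sw2, oel):
    """Sketch of theta under Model I: the probability that a randomly
    chosen worker's mean exposure exp(mu_y + b_i + sw2/2) exceeds the
    OEL, where b_i ~ N(0, sb2) on the log scale.  Illustrative only;
    the precise definition/estimator in Chapter 3 may differ."""
    z = (math.log(oel) - mu_y - sw2 / 2.0) / math.sqrt(sb2)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # 1 - Phi(z)
```

When the log-scale mean of worker means sits exactly at ln(OEL), this fraction is 0.5, and it decreases as the OEL is raised.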

Hypothesis test for workplace exposure assessment (σ̂²_b < 0)

To illustrate the use of the alternative hypothesis testing procedure (section 3.3.4) developed

for cases in which a negative estimate of σ²_b is encountered, we use a third set of shift-long exposure

data obtained on nickel industry workers. This set of data (k = 20, N = 28, 1 ≤ ni ≤ 4) represents

total nickel dust exposures of a group of maintenance mechanics in the milling operation of a nickel-producing complex, and is also displayed in the Appendix (Table A3). The ANOVA estimate of σ²_b based on these data is -0.046, and the corresponding ANOVA estimate for σ²_w is 1.225 (= MSW).

For this example, we assume the values A = 0.10 and α = 0.05.
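The ANOVA estimates quoted above can be reproduced, up to rounding, from the Table A3 data using the standard method-of-moments estimators for the unbalanced one-way random effects model: σ̂²_w = MSW and σ̂²_b = (MSB - MSW)/n0, with n0 = (N - Σn_i²/N)/(k - 1). This is a sketch using the textbook estimators rather than the computational formulae of section 2.1.1, so a small discrepancy in the last digit is possible:

```python
# Reproducing the ANOVA variance component estimates quoted above from the
# Table A3 log-exposure data (unbalanced one-way random effects model).
groups = [
    [-4.135], [-4.962], [-3.352], [-4.017, -4.828], [-5.116], [-4.135],
    [-4.828, -3.650], [-4.962, -4.828], [-2.659], [-3.963], [-1.136],
    [-4.828], [-5.116], [-3.817], [-5.116], [-5.116], [-4.423], [-3.058],
    [-1.280, -3.863, -4.423], [-3.612, -3.170, -3.912, -5.521],
]
k = len(groups)                                  # 20 workers
N = sum(len(g) for g in groups)                  # 28 measurements
grand = sum(sum(g) for g in groups) / N          # overall log-scale mean
ssw = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in groups)
ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
msw = ssw / (N - k)                              # ANOVA sigma_w^2 = MSW
msb = ssb / (k - 1)
n0 = (N - sum(len(g) ** 2 for g in groups) / N) / (k - 1)
sb2 = (msb - msw) / n0                           # ANOVA sigma_b^2; negative here
```

Running this gives MSW ≈ 1.225 and a negative between-worker component close to the quoted -0.046, which is what triggers the alternative procedure of section 3.3.4.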

In order to apply the procedure discussed in section 3.3.4, we first compute an approximate

95% upper bound on σ²_b. Since the data are unbalanced (see section 3.3.6), we use the upper limit of

an approximate 90% CI calculated via the method of Burdick and Eickman [1986; equation (1.5.2.8)].

To illustrate this calculation for these data using notation from section 1.5.2, we have:

k = 20, N = 28, α = 0.05, f3 = 0.532, f4 = 0.404, m = 1, M = 4, h = 1.171, and S* = 1.037.

From these values, we obtain U = 1.844 and the upper limit σ̂²_b,.95 = 1.331. Since σ̂²_b,.95 < z²_{1-A} = 1.644, we compute δ̂ = 0.443 according to equation (3.3.4.3). Making use of the sample mean (Ȳ.. = -4.065) and sample standard deviation (S = 1.087) of the logged exposures, we compute the test statistic t (based on 3.2.2.2) as described in section 3.3.4, obtaining t = (Ȳ.. + d̂S) = -2.952. Since t < ln(δ̂·OEL) = -0.814, we reject Ho: (θ ≥ 0.10) for these data, as also discussed by Lyles et al. (1995b).
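The final comparison in this example reduces to simple arithmetic. A sketch, treating 0.443 as the quantity obtained from equation (3.3.4.3) (its exact definition is in section 3.3.4, not reproduced here) and taking OEL = 1 mg/m³:

```python
import math

oel = 1.0       # OEL for total nickel dust, mg/m^3
delta = 0.443   # quantity computed via equation (3.3.4.3) in the text
t = -2.952      # test statistic from the logged nickel exposures

critical = math.log(delta * oel)   # about -0.814, as quoted in the text
reject = t < critical              # True: reject Ho (theta >= 0.10)
```

The test statistic lies well below the critical value, so exposures for this group are judged unacceptable at the stated levels.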

5.2.3 Sample calculations under Model II

To illustrate some of the methodology proposed in Chapters 2 and 3 under Model II (2.1.1.6),


we use a balanced subset of an exposure data set collected as part of a study by Yager et al. (1993).

This balanced set of data (n = 5, k = 8) represents personal shift-long styrene exposures on laminators at a boat manufacturing plant. The data are provided in Table A4. For these data, the values of the ANOVA and ML variance component estimates, together with the standard estimate (Ȳ) of μ_y, are

as follows:

μ̂_y = 4.751, σ̂²_b,an = 0.350, σ̂²_b,ml = 0.306, σ̂²_d,an = 0.021, σ̂²_d,ml = 0.019, σ̂²_w,an = 0.158, and σ̂²_w,ml = 0.159

(again, computational formulae for ANOVA estimators are given in section 2.1.1).

Estimation of population mean and variance

Table 5.3 displays the realized values of various estimators of μ_x (2.1.1.8) and σ²_x (2.1.1.9), based on Model II, for the styrene data. In the table, μ̂_x,ml and σ̂²_x,ml are the MLEs, while μ̂_x,an and σ̂²_x,an are formed by substituting the ANOVA estimates in place of the MLEs for the variance components. Hence, while these estimators are not given explicitly in Chapter 2, they are completely analogous to the estimators [(2.2.1.1)-(2.2.1.4)] corresponding to Model I. Also displayed are the values of the UMVU estimators μ̂_x,umvu (2.2.2.10) and σ̂²_x,umvu (2.2.2.14). In parentheses below μ̂_x,umvu is the ML estimate of the theoretical variance (2.2.3.9) associated with it.

Table 5.3 Sample calculations to estimate population mean and variance for styrene data a,b (n = 5, k = 8)

μ̂_x,umvu    μ̂_x,ml    μ̂_x,an    σ̂²_x,umvu    σ̂²_x,ml     σ̂²_x,an
146.39      147.31    150.71    12923.83     13489.01    15835.70
(1171.22)

a: unpublished exposure data from a study by Yager et al. (1993); data are tabulated in Table A4
b: number in parentheses is the MLE of Var(μ̂_x,umvu) from equation (2.2.3.9)

As with the example in section 5.2.1, the UMVU estimate of μ_x is not markedly different from the other two estimates; however, larger discrepancies are seen among the three estimates of σ²_x. The trends in magnitude across the three estimates for each parameter are likely illustrative of actual bias


properties (i.e., as under Model I, it may well be that μ̂_x,ml and μ̂_x,an are both positively biased, and similarly for σ̂²_x,ml and σ̂²_x,an). While this point is not specifically addressed in Chapter 2, one could

readily address it using derivations completely analogous to those that were presented assuming

Model I.
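The two plug-in columns of Table 5.3 can be reproduced, up to rounding of the reported variance component estimates, from the standard lognormal moment formulas with total log-scale variance σ²_y = σ²_b + σ²_d + σ²_w. This is a sketch of the plug-in estimators only; the UMVU estimators are more involved and are not shown:

```python
import math

def lognormal_mean_var(mu_y, sb2, sd2, sw2):
    """Plug-in exposure-scale mean and variance under Model II, using the
    lognormal moment formulas mu_x = exp(mu_y + sy2/2) and
    var_x = mu_x^2 * (exp(sy2) - 1), with sy2 = sb2 + sd2 + sw2.
    Sketch of the estimators whose values appear in Table 5.3."""
    sy2 = sb2 + sd2 + sw2
    mu_x = math.exp(mu_y + sy2 / 2.0)
    var_x = mu_x ** 2 * (math.exp(sy2) - 1.0)
    return mu_x, var_x

# ML variance component estimates for the styrene data:
mu_ml, var_ml = lognormal_mean_var(4.751, 0.306, 0.019, 0.159)
# ANOVA variance component estimates:
mu_an, var_an = lognormal_mean_var(4.751, 0.350, 0.021, 0.158)
```

Evaluating these expressions recovers the ML and ANOVA plug-in entries of Table 5.3 (roughly 147.3 and 150.7 for the means) to within rounding of the three-decimal inputs.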

Prediction of individual workers' mean exposures

Figure 5.3 displays a plot of two possible predictors of μ_xi for the 8 workers in the styrene data set. The two estimated predictors are X̄_i and an empirical Bayes-like estimate μ̂_xi,BP of (2.3.1.10), where in the latter case the unknown variance components in the functional expression are estimated via the MLEs. The plot also depicts the UMVU estimate (146.39) of μ_x, together with the bounds of an approximate 100(1 - 1/k)% (= 87.5%) CI computed as discussed in section 2.2.5.

Figure 5.3 is intended as an illustration of concepts and ideas that have been studied in more detail

under Model I.

[Figure 5.3: plot of estimated predictors of mean exposure (vertical axis, 20-230) against worker index (1-8)]

Figure 5.3 Plot of the values of two estimated predictors of mean exposures under Model II: X̄_i (squares), and μ̂_xi,BP (dots), for the 8 workers in the styrene data set (Table A4)

It is interesting to note the apparent similarity in the realized values of the estimated predictors X̄_i and μ̂_xi,BP in Figure 5.3. Although the detailed study of MSEP in Chapter 2 was carried out assuming Model I, the message in Table 2.7 may be instructive in this case. Specifically,


that table suggests that, under Model I, the discrepancy in MSEP between X̄_i and the estimated μ̂_xi,BP tends to be less pronounced when σ²_b accounts for a large proportion of the variability in Y_ij, particularly when σ²_w is small. Based on the estimated variance components for the styrene data, it appears that σ̂²_b is large compared to (σ̂²_d + σ̂²_w), and that the latter quantity is quite small. Efficiency calculations analogous to those presented in Chapter 2 under Model I could be used to verify whether the similarity between the X̄_i and μ̂_xi,BP values in Figure 5.3 is indeed indicative of general tendencies.

Since at least one estimated μ̂_xi,BP value falls outside the approximate confidence bounds, an extension of the rough rule of thumb proposed by Rappaport et al. (1995a) would tend to support modifications of individual tasks and/or work practices in the event that the styrene exposures for this group of workers are deemed unacceptable. Empirical studies such as those alluded to by Rappaport et al. (1995a) would be required to help to validate this extension in practice.

Hypothesis test for workplace exposure assessment

In order to apply the Wald-type test statistic (3.4.1.2), we first note that the applicable OEL for styrene is 213 mg/m³ (ACGIH, 1995). The details for computing the statistic, which utilizes the estimates μ̂_y, σ̂²_b,an, σ̂²_d,an, and σ̂²_w,an, are provided in section 3.4.1. Assuming A = 0.10 and α = 0.05, we obtain:

θ̂ = 0.189 and Z_W = 0.747,

where θ̂ is a rough point estimate completely analogous to that proposed in section 3.3.2 under Model I. Hence, we do not reject Ho: (θ ≥ A) for the styrene data under Model II.

5.2.4 Sample calculations for measurement error adjustment under Model I

Here, we use a set of data from a study by Heederik et al. (1991) to illustrate the CE and QL

methodology in the context of the specific occupational health setting described in section 4.3.1. The

response variable (Rgi) considered is a measure of lung function [forced expiratory volume (FEV1)]

taken on individual workers in the Dutch animal feed industry after a period of roughly one year

during which repeated personal measurements of inhalable dust exposure were made. In addition to

the single measurement of FEV1 taken on each worker, this set of data contains a total of N = Σ_g Σ_i n_gi = 156 shift-long dust exposure measurements (2 ≤ n_gi ≤ 6) on a total of K = Σ_g k_g = 38


workers, distributed among three separate job groups [unloaders (51 measurements on 14 workers),

facility operators (53 measurements on 12 workers), and expedition workers (52 measurements on 12

workers)]. The overall model assumed to describe these exposure measurements is (4.1.1.4), where

Ygij represents the natural log of the j-th measurement on the i-th worker in the g-th group

(g = 1,2,3). The measurement error problem is described in detail in section 4.3.1, based on the three

models (4.3.1.1), (4.3.1.2), and (4.3.1.3).

The three covariates considered by Heederik et al. are age in years (Cgi1 ), pack years smoked

(Cgi2), and height in centimeters (Cgi3), which we assume [in accordance with (4.2.1.4)] are

statistically independent of the true and surrogate exposure variables in the underlying population

from which the data are obtained.

Results of analysis

The results from applying the CE and QL adjustment methods [as well as the naive regression

that ignores measurement error in the surrogate exp(Ygi)] to the actual set of data are provided in

Table 5.4. Each of the 38 workers was employed in the Dutch animal feed industry during the mid- to late-1980's. The data are available pending permission from the authors of the original source article (Heederik et al., 1991).

Table 5.4 Results of CE and QL analyses of data on 3 groups of workers in the Dutch animal feed industry a,b,c

                        METHOD
Parameter    Naive      CE         QL
α            -5.525     -5.297     -5.289
β            -0.019     -0.008     -0.008
γ1           -0.039     -0.037     -0.037
γ2           -0.019     -0.019     -0.019
γ3            0.064      0.062      0.062
σ²_e          0.378      0.395      0.395

Approximate 95% CIs for β: naive (-0.043, 0.004); CE (-0.025, 0.009); QL (-0.025, 0.009)

a: Data are from Heederik et al. (1991)
b: Response is FEV1, exposure is inhaled dust (G = 3, K = 38, N = Σ_g Σ_i n_gi = 156, 2 ≤ n_gi ≤ 6)
c: Nuisance parameter estimates from exposure data: τ̂ = (μ̂_y1, σ̂²_b1, σ̂²_w1; μ̂_y2, σ̂²_b2, σ̂²_w2; μ̂_y3, σ̂²_b3, σ̂²_w3) = (2.24, 0.26, 1.87; -0.19, 0.32, 0.78; 0.06, 0.08, 0.80)

In Table 5.4, the estimates of α and β for the naive regression are noticeably different from those using the CE and QL adjustment methods, while the estimates of γ1, γ2, and γ3 are nearly equal; this agrees with the consistency result (4.2.2.1). In particular, β̂_OLS is inflated away from zero (as opposed to attenuated), relative to β̂_CE and β̂_QL; recall that we saw a similar tendency for the

first two cases considered in the simulation study of section 4.3.4. Also somewhat atypical of more

standard measurement error problems is the numerical result that the naive CI for β is actually wider than is either of the adjusted CIs. Note that the CE and QL methods produce nearly identical

estimates for this example; this is because the sample variance of the estimated weights for the QL

procedure is very small (namely, 0.001).

In support of the validity of assumption (4.2.2.1) for these data, fitting a model ignoring all covariates leads to the exact same conclusion concerning the lack of significance of the estimate of β. One factor that may contribute to the small (and not statistically significant) estimates of β for this

example is the somewhat imprecise overall assessment (i.e., a single FEV1 measurement) of each worker's lung function following the exposure period. Nevertheless, given a more precise response

measure and/or more workers or exposure measurements per worker, data similar to these might

easily demonstrate major discrepancies between point estimates and inferences based on the biased

naive regression and those based on methods that appropriately account for multiplicative

measurement error. This fact has been demonstrated in section 4.3.4.
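The idea behind the CE substitution can be sketched under Model I: the error-prone surrogate exp(Ȳ_gi) is replaced in the regression by the conditional expectation of the worker's true mean exposure given the observed measurements. The closed form below is illustrative only and is not claimed to be the exact expression of section 4.3.1; the function name and all symbols are assumptions:

```python
import math

def ce_exposure(ybar_i, n_i, mu_y, sb2, sw2):
    """Sketch of a conditional-expectation (CE) substitute for the
    error-prone surrogate exp(ybar_i) under Model I: E[mu_xi | ybar_i],
    where mu_xi = exp(mu_y + b_i + sw2/2) and b_i given ybar_i is normal
    with shrunken mean and reduced variance.  Illustrative only; the
    exact CE expressions of section 4.3.1 may differ."""
    w = sb2 / (sb2 + sw2 / n_i)              # shrinkage weight
    cond_mean = w * (ybar_i - mu_y)          # E[b_i | ybar_i]
    cond_var = (1.0 - w) * sb2               # Var[b_i | ybar_i]
    return math.exp(mu_y + cond_mean + cond_var / 2.0 + sw2 / 2.0)
```

Regressing the response on this substitute (rather than on exp(Ȳ_gi) itself) is what removes the inflation of the naive slope estimate in examples like the one above.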

For calculating approximate CIs for the primary parameters, the estimated conditional

dispersion matrix based on (4.3.3.3) was used. Based on the QL results, approximate 95% CIs for the three parameters (γ1, γ2, γ3) corresponding to the covariates measured by Heederik et al. (1991) are (-0.06, -0.01), (-0.04, 0.003), and (0.034, 0.090), respectively. Hence, while the data do not exhibit significant (at the .05 level) effects attributable to true mean exposure or pack years of smoking, the FEV1 measurements are seen to be affected negatively by increasing age and positively by increasing

height.


Chapter 6: Possible Extensions and Future Research

6.1 INTRODUCTION

In this dissertation, we have explored statistical issues in several distinct but related areas, all

generally associated with occupational health studies. As is usually the case, this particular work

generates certain open-ended questions and ideas for future research. This brief final chapter

highlights some of these issues, and briefly discusses their potential importance.

6.2 SOME POTENTIAL EXTENSIONS OF THE WORK

6.2.1 General issues

Most of the research presented in the preceding chapters has assumed that repeated (logged)

exposure measurements are well described by a one-way RE ANOVA model [Model I (2.1.1.1)]. This

model is the simplest example of a general linear mixed model (1.5.1.1), and one could contemplate

extensions of this research to accommodate the full generality of that class of models (as a start, we

have considered Model II to account specifically for day-to-day variability). However, the decision to

focus mainly on Model I is not simply a matter of convenience. Exposure data are often collected

separately on pre-defined (i.e., by job title and plant location) groups of workers, such that workers

within a specific group are reasonably homogeneous with regard to clearly definable characteristics

that have bearing on mean exposure levels. Assuming such groupings, modeling fixed effects other

than an overall mean for each group may be unnecessary. Also, with shift-long exposure

measurements (the main focus of this dissertation), there is much less of a tendency to see significant

autocorrelations than in the case of short-term exposures, so that the assumed compound symmetric

correlation structure may be quite reasonable. This is generally in concert with the experience of

those involved in the studies from which most of the data for the examples in Chapter 5 were derived.

Indeed, it has been shown (e.g., Rappaport et al., 1995a) that Model I appears to characterize

adequately many group-specific exposure data sets from the nickel producing industry. From a

practical standpoint, we have also seen (section 3.4.2) that an extension of the model for exposure

measurements (i.e., to Model II) tends to increase sample size demands for exposure assessment well

Page 174: II - NC State Department of · PDF fileLarge sample-based test statistics suited to ... 5.2.3 Sample calculations under Model II 158 5.2.4 Sample calculations for ... Table 2.6: MSEP

past the limits of practicality in many cases. This observation would seem to argue for grouping

strategies designed to keep to a minimum the number of components of variability that would

logically need to be accounted for.

Nevertheless, some consideration of extensions of Models I and II may be worthwhile. If

shorter-term exposure measurements (for instance, 15 minute TWAs) are to be considered, an

extension of Model I to allow autocorrelated error terms is intuitively reasonable. In that framework,

a focus on probabilities of exceedance (see sections 1.4.1 and 3.2.1) would be relevant (Rappaport,

1991b). Likewise, assuming a study of shift-long exposures designed so that long series of

measurements closely spaced in time are to be collected, an extension of Model II to allow

autocorrelated date effects would be worth considering. In that scenario, mean exposure would likely

remain the focus.

The health effects model (4.3.1.1) has the clear advantage of allowing one to combine exposure

data on workers from any number of groups, possibly even across plants or factories. However, note

that it was deemed reasonable to assume that the parameters of the exposure distributions (μ_yg, σ²_bg, σ²_wg) vary across these groups; hence, the only advantage gained stems from the reasonable

assumption that the primary regression parameters associated with the TDM (4.3.1.1) are the same

across groups. Extensions of the methodology in Chapters 2 and 3 to allow for the pooling of

exposure data on different groups of workers are also worth contemplating. However, the possible

statistical advantages relate to the assumption that one or more of the parameters (μ_yg, σ²_bg, σ²_wg) is

homogeneous across groups. Deciding whether or not such an assumption is valid requires a large

data collection effort in its own right, but a study to address this issue may well be quite useful in the

interest of establishing the validity of methodological extensions that would reduce sample size

demands in the long run.

6.2.2 Estimation

Extension of the UMVU methodology of section 2.2.2 to the case of general random effects

models for balanced data, so that the overall sample mean and the mean squares represent complete

and sufficient statistics, is straightforward. In particular, a general expression for μ̂_x,umvu in that

setting has already been derived. However, the practical limitation of this methodology is the

necessity for balanced exposure data.

The bias and efficiency considerations presented in Tables 2.3-2.5 clearly show that μ̂_x,umvu

is highly preferable to the MLE under Model I. While no straightforward extension of the UMVU

derivation is possible assuming unbalanced data, the existence of a strong alternative to the MLE in

the balanced case would seem to justify efforts to obtain one in the unbalanced case as well, especially


in light of practical difficulties that often preclude collection of a completely balanced sample.

6.2.3 Prediction

In section 2.3, we focus almost exclusively on mean squared error of prediction (MSEP) as a

criterion for evaluating competing predictors of individual mean exposure. While this focus has

support in the literature (e.g., Searle et al., 1992), there may be interest in considering alternative

criteria. As mentioned in section 2.3.1, one such alternative would be conditional MSEP (e.g.,

Peixoto and Harville, 1986). Also, the motivation behind the proposed predictor μ̂_xi,cu (2.3.2.1)

stems from the fact that conditional unbiasedness may well be a desirable property in the

occupational setting. The derivation of a predictor that minimizes MSEP (or conditional MSEP)

among conditionally unbiased predictors could be useful.

Finally, as mentioned in section 2.3.1, μ̂_xi,BP of (2.3.1.6) and (2.3.1.10) may logically be

termed "best geometric" predictors (BGPs), as a direct extension of the common "best linear"

predictor (BLP) notation [equation (1.5.3.5)]. It would be interesting to contemplate whether a "best

geometric unbiased" predictor could be derived, as a direct extension of the BLUP (1.5.3.6).

Presumably, such a predictor would account for estimating the (log-scale) mean parameter, while still

assuming knowledge of the variance components.

6.2.4 Occupational exposure assessment

As mentioned in Chapter 3, the focus on large-sample ML theory-based hypothesis testing

procedures is largely a matter of feasibility and practicality. An extension of Land's (1971; 1972;

1973; 1975; 1988) UMP unbiased testing procedure for i.i.d. samples from a lognormal population is

desirable, but would be quite complicated (if feasible) even in the balanced case. The practical

difficulties associated with the use of Land's method (see section 3.2.2), such as the need for extensive

tabulation, would also be greatly magnified.

Recognition of potential difficulties with using the asymptotic standard normal reference

distribution motivated the emphasis on simulation studies in Chapter 3. Such consideration also led

to our alternative proposals for dealing with negative variance component estimates and adjusting

significance levels, and to the derivation of likelihood ratio- and score-type tests as alternative

approaches for the balanced case under Model I. Derivation of the latter two procedures as

alternatives to the Wald-type tests under Model II and under Model I in the unbalanced case might

be beneficial. Other strategies aimed at achieving (for small samples) finer control of type I error

rates without a major sacrifice of power could also be of interest. Possible approaches might include


adjustment to the test statistics themselves, or an attempt to pinpoint more precisely their actual

sampling distributions under Ho (which tends to be skewed slightly to the left in small samples).

Computer intensive methods (e.g., bootstrapping) might be considered; however, the reliability of

hypothesis testing and confidence interval estimation based on resampling would still be a function of

the original sample size. In general, Table 3.8 seems to suggest that extra measures to control type I

error rates may be of greater importance as extensions to Model I are considered.

6.2.5 Measurement error adjustment

As mentioned in section 4.2.1, relaxation of the independence assumption (4.2.1.4) would

possibly be worth considering, but would be practically demanding in terms of the necessary sample

sizes to obtain reliable estimates of the conditional moments required for the CE and QL adjustment

procedures. As an alternative, a more promising approach when (4.2.1.4) seems unrealistic might be

to consider a revised grouping of workers. A successful grouping [i.e., one that results in a satisfactory

fit of model (4.1.1.4) to the data] should eliminate the need for any fixed effects associated with

exposure other than the overall (log-scale) mean, thus validating assumption (4.2.1.4).

Further investigation of the idea concerning the use of the conditional (on the predictors in the

TDM) dispersion matrix of the estimated nuisance parameters (τ̂) would be of definite interest, as it

could have applicability in a fairly wide range of measurement error adjustment problems. In this

regard, we should note that (4.3.3.3) is convenient to derive because of the use of the ANOVA

estimators in estimating τ_g (g = 1, ..., G). It was shown in several contexts (sections 2.2.5, 3.3.7, and

4.3.3) that the use of these estimators instead of the MLEs tends to improve the small- to moderate­

sample performance of the proposed statistical methodology. However, assuming unbalanced data

under Model I, purists may prefer the MLE for τ_g in conjunction with these applications because of

well-established asymptotic normality results. To satisfy the purists as to the empirically beneficial

practice of using ANOVA estimators, establishment of asymptotic normality with respect to the

vector τ̂_g,an (section 4.3.3) would be useful. Based on respected sources (e.g., Searle et al., 1992), this

would be a significant and challenging extension to the theory associated with variance component

estimation. References treating large sample theory (e.g., Sen and Singer, 1993) could provide the

tools for approaching this problem.

Extensions of the TDM (4.3.1.1) to incorporate models for survival or categorical data analysis

would be of great interest. This poses a particular challenge in the occupational health setting,

because the existence of the conditional expectation of the response given a lognormal surrogate for

exposure is not guaranteed. For instance, one can show that the required conditional moments for the

CE and QL adjustment procedures described in section 4.3.1 are not finite if model (4.3.1.1) is


assumed to apply to the logit of Rgi when Rgi is a dichotomous response variable.

Assuming a linear TDM, it would also be of interest to contemplate the incorporation of an

index of cumulative exposure [see (1.7.1.1)] to allow for longer-term studies and worker mobility over

time. This would represent a potentially important extension to the work in Chapter 4, and some

investigation of this extension has been initiated.

6.2.6 Censored exposure data

A problem that is sometimes encountered in occupational health studies involves dealing with

TWA exposure data when some values are below a limit of detection. This problem has been dealt

with rather extensively in the environmental statistical literature (e.g., see Gilbert, 1987), but most

discussions assuming lognormal exposure data are based on assuming completely i.i.d. samples. An

attempt to amend some of the methodology proposed in Chapters 2-4 (in particular, the hypothesis

testing strategy in Chapter 3) to allow for non-detects while still accounting for components of

variability could be very useful with respect to certain industries.
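For i.i.d. lognormal data, the standard censored-likelihood approach gives a sense of what such an amendment would involve: detected values contribute density terms and non-detects contribute a CDF term at the limit of detection. A sketch (function name and interface are illustrative; the extension contemplated above would replace this i.i.d. likelihood with one involving the variance components of Models I-II):

```python
import math

def censored_loglik(params, detects, n_nondetect, lod):
    """Log-likelihood for i.i.d. lognormal exposures with non-detects
    below a limit of detection (lod).  Detected values contribute the
    normal density of their logs; each non-detect contributes the normal
    CDF evaluated at log(lod).  Illustrative sketch only."""
    mu, sigma = params
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    ll = 0.0
    for x in detects:
        z = (math.log(x) - mu) / sigma
        ll += -math.log(sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
    ll += n_nondetect * math.log(phi((math.log(lod) - mu) / sigma))
    return ll
```

Maximizing this function over (mu, sigma) recovers the usual censored-data ML estimates; a random-effects version would make the non-detect contribution worker-specific.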


APPENDIX: Exposure data for examples in Chapter 5

Table A1: Nickel dust^a exposures on maintenance mechanics at a nickel-producing plant (n = 2, k = 14)

Worker(i)  Measurement(j)  Y_ij^b     Worker(i)  Measurement(j)  Y_ij^b
1          1               -4.269     8          1               -3.411
1          2               -2.957     8          2               -2.666
2          1               -3.474     9          1               -0.540
2          2               -4.510     9          2               -3.689
3          1               -3.912     10         1               -3.963
3          2               -4.423     10         2               -3.963
4          1               -4.269     11         1               -1.492
4          2               -2.442     11         2               -2.745
5          1               -1.790     12         1               -4.343
5          2               -3.170     12         2               -3.442
6          1               -2.847     13         1               -3.507
6          2               -3.576     13         2               -5.298
7          1               -4.269     14         1               -3.147
7          2               -3.594     14         2               -3.283

a: Dust exposures to all forms of nickel calculated using "total dust" method
b: Y_ij represents the natural log of the j-th shift-long exposure measurement on the i-th worker


Table A2: Nickel dust^a exposures on furnacemen from a refinery at a nickel-producing complex (k = 12, 1 ≤ ni ≤ 5)

Worker(i)  Measurement(j)  Y_ij^b     Worker(i)  Measurement(j)  Y_ij^b
1          1               -0.624     5          1               -1.884
1          2                0.981     5          2                0.057
1          3               -0.166     5          3               -2.040
1          4               -1.784     6          1               -0.155
2          1               -0.254     6          2               -1.754
2          2                0.846     6          3                0.403
2          3                1.411     7          1                0.171
3          1               -0.970     8          1                0.510
3          2               -1.945     9          1               -0.931
3          3               -2.733     10         1               -1.016
4          1                0.828     11         1               -0.423
4          2                2.012     12         1               -2.765
4          3                0.078
4          4               -1.852
4          5               -1.386

a: Dust exposures to all forms of nickel calculated using "total dust" method
b: Y_ij represents the natural log of the j-th shift-long exposure measurement on the i-th worker


Table A3: Nickel dust^a exposures on maintenance mechanics from a mill at a nickel-producing plant (k = 20, 1 ≤ ni ≤ 4)

Worker(i)  Measurement(j)  Y_ij^b     Worker(i)  Measurement(j)  Y_ij^b
1          1               -4.135     12         1               -4.828
2          1               -4.962     13         1               -5.116
3          1               -3.352     14         1               -3.817
4          1               -4.017     15         1               -5.116
4          2               -4.828     16         1               -5.116
5          1               -5.116     17         1               -4.423
6          1               -4.135     18         1               -3.058
7          1               -4.828     19         1               -1.280
7          2               -3.650     19         2               -3.863
8          1               -4.962     19         3               -4.423
8          2               -4.828     20         1               -3.612
9          1               -2.659     20         2               -3.170
10         1               -3.963     20         3               -3.912
11         1               -1.136     20         4               -5.521

a: Dust exposures to all forms of nickel calculated using "total dust" method
b: Y_ij represents the natural log of the j-th shift-long exposure measurement on the i-th worker


Table A4: Styrene exposures on laminators at a boat manufacturing plant^a (n = 5, k = 8)

Worker(i)  Date measured(j)  Y_ij^b    Worker(i)  Date measured(j)  Y_ij^b
1          07/28/87          2.965     5          07/28/87          4.678
1          10/02/87          2.930     5          10/02/87          4.775
1          11/04/87          3.071     5          11/04/87          4.893
1          12/11/87          4.029     5          12/11/87          4.415
1          02/01/88          3.871     5          02/01/88          4.929
2          07/28/87          4.203     6          07/28/87          4.310
2          10/02/87          4.319     6          10/02/87          5.192
2          11/04/87          4.396     6          11/04/87          4.457
2          12/11/87          5.216     6          12/11/87          4.410
2          02/01/88          5.396     6          02/01/88          5.097
3          07/28/87          4.103     7          07/28/87          5.060
3          10/02/87          4.876     7          10/02/87          5.454
3          11/04/87          5.221     7          11/04/87          5.201
3          12/11/87          4.439     7          12/11/87          5.500
3          02/01/88          5.187     7          02/01/88          5.236
4          07/28/87          5.329     8          07/28/87          5.246
4          10/02/87          5.716     8          10/02/87          5.365
4          11/04/87          5.116     8          11/04/87          5.033
4          12/11/87          4.688     8          12/11/87          5.175
4          02/01/88          4.572     8          02/01/88          5.970

a: Data from a study by Yager et al. (1993)
b: Y_ij represents the natural log of the 8-hour TWA measurement on the i-th worker, taken on date j.


REFERENCES

ACGIH (1995) Threshold limit values for chemical substances and physical agents. American Conference of Governmental Industrial Hygienists, Cincinnati, OH.

Aitchison, J. and Brown, J.A.C. (1957) The Lognormal Distribution, Cambridge: Cambridge University Press.

Armstrong, B. (1985) Measurement error in the generalised linear model. Communications in Statistics--Simulation and Computation, 14: 529-544.

Armstrong, B.G. and Oakes, D. (1982) Effects of approximation in exposure assessments on estimates of exposure-response relationships. Scandinavian Journal of Work Environment and Health, 8 (suppl. 1): 20-23.

Attfield, M.D. and Hewett, P. (1992) Exact expressions for the bias and variance of estimators of the mean of a lognormal distribution. American Industrial Hygiene Association Journal, 53(7): 432-435.

Bollen, K.A. (1989) Structural Equations with Latent Variables, New York: John Wiley and Sons.

Breslow, N. (1990) Tests of hypotheses in over-dispersed Poisson regression and other quasi-likelihood models. Journal of the American Statistical Association, 85: 565-571.

Brunekreef, B., Noy, D., and Clausing, P. (1987) Variability of exposure measurements in environmental epidemiology. American Journal of Epidemiology, 125: 892-898.

Burdick, R.K. and Eickman, J. (1986) Confidence intervals on the among group variance component in the unbalanced one-fold nested design. Journal of Statistical Computation and Simulation, 26: 205-219.

Carroll, R.J. (1989) Covariance analysis in generalized linear measurement error models. Statistics in Medicine, 8: 1075-1093.

Carroll, R.J. and Stefanski, L.A. (1990) Approximate quasi-likelihood estimation in models with surrogate predictors. Journal of the American Statistical Association, 85: 652-663.

Casella, G. (1985) An introduction to empirical Bayes data analysis. The American Statistician, 39(2): 83-87.

Charnes, A., Frome, E.L., and Yu, P.L. (1976) The equivalence of generalized least squares and maximum likelihood estimates in the exponential family. Journal of the American Statistical Association, 71: 169-171.

Clayton, D. (1992) Models for the longitudinal analysis of cohort and case-control studies with inaccurately measured exposures. In: Statistical Models for Longitudinal Studies of Health (James H. Dwyer, ed.), Oxford, U.K.: Oxford University Press.

Cohen, A.C. (1988) Three-parameter estimation. In: Lognormal Distributions (E. Crow and K. Shimizu, eds.), New York: Marcel Dekker, Inc.

Cope, R., Panacamo, B., Rinehart, W.E., and Ter Haar, G.L. (1979) Personal monitoring for tetraalkyllead in an alkyllead manufacturing plant. American Industrial Hygiene Association Journal, 40: 372-379.

Corn, M. and Esmen, N.A. (1979) Workplace exposure zones for classification of employee exposures to physical and chemical agents. American Industrial Hygiene Association Journal, 40: 47-57.

Crow, E. and Shimizu, K. (1988) Lognormal Distributions, New York: Marcel Dekker, Inc.

Dement, J.M. (1980) Dose-Response Among Chrysotile Asbestos Workers, Chapel Hill, N.C.: The University of North Carolina at Chapel Hill (Ph.D. Dissertation).

Dempster, A.P. and Ryan, L.M. (1985) Weighted normal plots. Journal of the American Statistical Association, 80(392): 845-850.

Diggle, P.J. (1990) Time Series: A Biostatistical Introduction, Oxford: Oxford University Press.

Edwards, L.J. (1994) An alternative approach to estimation in the functional measurement error problem. Communications in Statistics: Theory and Methods, 23(6): 1651-1664.

Erdélyi, A. et al. (1953) Higher Transcendental Functions, New York: McGraw-Hill, Inc.

Esmen, N.A. (1992) A distribution-free double-sampling method for exposure assessment. Applied Occupational and Environmental Hygiene, 7(9): 613-621.

Evans, J.S. and Hawkins, N.C. (1988) The distribution of Student's t-statistic for small samples from lognormal exposure distributions. American Industrial Hygiene Association Journal, 49(10): 512-515.

Faulkenberry, G.D. and Daly, J.C. (1968) Sample size for tolerance limits on a normal distribution. (Technical report #4, Department of Statistics, Oregon State University).

Finney, D.J. (1941) On the distribution of a variate whose logarithm is normally distributed. Journal of the Royal Statistical Society (Supplement), 7: 155-167.

Francis, M., Selvin, S., Spear, R., and Rappaport, S. (1989) The effect of autocorrelation on the estimation of workers' daily exposures. American Industrial Hygiene Association Journal, 50(1): 37-43.

Fuller, W.A. (1987) Measurement Error Models, New York: John Wiley and Sons, Inc.

Gilbert, R.O. (1987) Statistical Methods for Environmental Pollution Monitoring, New York: Van Nostrand Reinhold, pp. 169-171.

Gong, G. and Samaniego, F.J. (1981) Pseudo maximum likelihood estimation: theory and applications. Annals of Statistics, 9: 861-869.

Graybill, F.A. (1976) Theory and Application of the Linear Model, North Scituate, Mass.: Duxbury.

Halperin, M. (1963) Approximations to the Noncentral t, with Applications. Technometrics, 5: 295-305.

Harville, D.A. (1976) Extension of the Gauss-Markov theorem to include the estimation of random effects. Annals of Statistics, 4: 384-395.

Heederik, D. and Miller, B.G. (1988) Weak associations in occupational epidemiology: adjustment for exposure estimation error. International Journal of Epidemiology, 17(4): 970-974.

Heederik, D., Boleij, J.S.M., Kromhout, H., and Smid, T. (1991) Use and analysis of exposure monitoring data in occupational epidemiology: an example of an epidemiological study in the Dutch animal food industry. Applied Occupational and Environmental Hygiene, 6: 458-464.

Henderson, C.R. (1953) Estimation of variance and covariance components. Biometrics, 9: 226-252.

Henderson, C.R. (1963) Selection index and expected genetic advance. In: Statistical Genetics and Plant Breeding (W.D. Hanson and H.F. Robinson, eds.), Washington, D.C.: National Academy of Sciences and National Research Council Publication No. 982, pp. 141-163.

Hines, C.J. and Spear, R.C. (1984) Estimation of cumulative exposures to ethylene oxide associated with hospital sterilizer operation. American Industrial Hygiene Association Journal, 45(1): 44-47.

Hwang, J.T. (1986) Multiplicative errors-in-variables models with applications to recent data released by the U.S. Department of Energy. Journal of the American Statistical Association, 81(395): 680-688.

Johnson, N. and Kotz, S. (1970a) Continuous Univariate Distributions--1, Boston: Houghton Mifflin Co., pp. 74-75.

Johnson, N. and Kotz, S. (1970b) Continuous Univariate Distributions--2, Boston: Houghton Mifflin Co., p. 210.

Koch, A.L. (1966) The logarithm in biology I. Mechanisms for generating the lognormal distribution exactly. Journal of Theoretical Biology, 12: 276-290.

Koch, A.L. (1969) The logarithm in biology II. Distributions simulating the log-normal. Journal of Theoretical Biology, 23: 251-268.

Kromhout, H., Ikink, H., De Haan, W., Kroppejan, J., and Bos, R. (1988) The relevance of the cyclohexane soluble fraction of rubber dusts and fumes for epidemiological research in the rubber industry. In: Progress in Occupational Epidemiology, Oxford: Elsevier Science, B.V., pp. 387-390.

Kromhout, H., Symanski, E., and Rappaport, S.M. (1993) A comprehensive evaluation of within- and between-worker components of occupational exposure to chemical agents. Annals of Occupational Hygiene, 37(3): 253-270.

Kupper, L.L. (1994) "Errors in variables: implications for the design, analysis, and interpretation of public health research studies" (1st annual University of North Carolina at Chapel Hill Department of Biostatistics Alumni Day lecture, 4/8/94).

Land, C.E. (1971) Confidence intervals for linear functions of the normal mean and variance. Annals of Mathematical Statistics, 42: 1187-1205.

Land, C.E. (1972) An evaluation of approximate confidence interval estimation methods for lognormal means. Technometrics, 14(1): 145-158.

Land, C.E. (1973) Standard confidence limits for linear functions of the normal mean and variance. Journal of the American Statistical Association, 68: 960-963.

Land, C.E. (1975) Tables of confidence limits for linear functions of the normal mean and variance. In: Selected Tables in Mathematical Statistics, Vol. III, Providence, R.I.: American Mathematical Society, pp. 385-419.

Land, C.E. (1988) Hypothesis tests and interval estimates. In: Lognormal Distributions (E. Crow and K. Shimizu, eds.), New York: Marcel Dekker, Inc.

Lange, N. and Ryan, L. (1989) Assessing normality in random effects models. The Annals of Statistics, 17(2): 624-642.

Liang, K-Y. and Hanfelt, J. (1994) On the use of the quasi-likelihood method in teratological experiments. Biometrics, 50: 872-880.

Liang, K-Y. and Liu, X-H. (1991) Estimating equations in generalized linear models with measurement error. In: Estimating Functions (V.P. Godambe, ed.), Oxford, U.K.: Oxford University Press.

Liang, K-Y. and Zeger, S.L. (1986) Longitudinal data analysis using generalized linear models. Biometrika, 73(1): 13-22.

Lindstedt, G., Gottberg, I., Holmgren, B., Jonsson, T., and Karlsson, G. (1979) Individual mercury exposure of chloralkali workers and its relation to blood and urinary mercury levels. Scandinavian Journal of Work Environment and Health, 5: 59-69.

Liu, K., Stamler, J., Dyer, A., McKeever, J., and McKeever, P. (1978) Statistical methods to assess and minimize the role of intra-individual variability in obscuring the relationship between dietary lipids and serum cholesterol. Journal of Chronic Diseases, 31: 399-418.

Liu, X. and Liang, K-Y. (1991) Adjustment for non-differential misclassification error in the generalized linear model. Statistics in Medicine, 10: 1197-1211.

Lyles, R.H. and Kupper, L.L. (1996a) On strategies for comparing occupational exposure data to limits. American Industrial Hygiene Association Journal, 57: 6-15.

Lyles, R.H. and Kupper, L.L. (1996b) A detailed evaluation of adjustment methods for multiplicative measurement error in linear regression, with applications in occupational epidemiology. (submitted to Biometrics).

Lyles, R.H., Kupper, L.L., and Rappaport, S.M. (1995a) Assessing regulatory compliance of occupational exposures via the balanced one-way random effects ANOVA model. (accepted subject to minor revisions, Journal of Agricultural, Biological, and Environmental Statistics).

Lyles, R.H., Kupper, L.L., and Rappaport, S.M. (1995b) A lognormal distribution-based exposure assessment method for unbalanced data. (accepted subject to minor revisions, Annals of Occupational Hygiene).

McCullagh, P. (1991) Quasi-likelihood and estimating functions. In: Statistical Theory and Modelling (D.V. Hinkley, N. Reid, and E.J. Snell, eds.), New York: Chapman and Hall.

McCullagh, P. and Nelder, J.A. (1983) Generalized Linear Models, London: Chapman and Hall.

Muirhead, R.J. (1982) Aspects of Multivariate Statistical Theory, New York: John Wiley and Sons.

National Institute for Occupational Safety and Health (1975) Handbook of Statistical Tests for Evaluating Employee Exposure to Air Contaminants by Y. Bar-Shalom, D. Budenaers, R. Schainker, and A. Segall (USDHEW/NIOSH Pub. No. 75-147), Cincinnati, OH.: NIOSH.

National Institute for Occupational Safety and Health (1975) Statistical Methods for the Determination of Noncompliance with Occupational Health Standards by N.A. Leidel and K.A. Busch (USDHEW/NIOSH Pub. No. 75-159), Cincinnati, OH.: NIOSH.

Nicas, M. and Spear, R.C. (1993a) A task-based statistical model of a worker's exposure distribution: part I--description of the model. American Industrial Hygiene Association Journal, 54(5): 211-220.

Nicas, M. and Spear, R.C. (1993b) A task-based statistical model of a worker's exposure distribution: part II--application to sampling strategy. American Industrial Hygiene Association Journal, 54(5): 221-227.

Oldham, P. and Roach, S.A. (1952) A sampling procedure for measuring industrial dust exposure. British Journal of Industrial Medicine, 9: 112-119.

Oldham, P. (1953) The nature of the variability of dust concentrations at the coal face. British Journal of Industrial Medicine, 10: 227-234.

Parkhurst, A.M. and James, A.T. (1974) Zonal polynomials of order 1 through 12. In: Selected Tables in Mathematical Statistics, Volume II (H.L. Harter and D.B. Owen, eds.), Providence, R.I.: American Mathematical Society.

Peixoto, J.L. and Harville, D.A. (1986) Comparisons of alternative predictors under the balanced one-way random model. Journal of the American Statistical Association, 81: 431-436.

Petersen, M.R., Sanderson, W.T., and Lenhart, S.W. (1986) Application of a pilot study to the development of an industrial hygiene sampling strategy. American Industrial Hygiene Association Journal, 47: 655-658.

Pierce, D.A., Stram, D.O., Vaeth, M., and Schafer, D.W. (1992) The errors-in-variables problem: considerations provided by radiation dose-response analyses of the A-bomb survivor data. Journal of the American Statistical Association, 87(418): 351-359.

Press, W.H., Flannery, B.P., Teukolsky, S.A., and Vetterling, W.T. (1986) Numerical Recipes: The Art of Scientific Computing, Cambridge: Cambridge University Press.

Rao, C.R. (1965) Linear Statistical Inference and Its Applications, New York: John Wiley and Sons, Inc., pp. 350-352.

Rappaport, S.M. (1984) The rules of the game: an analysis of OSHA's enforcement strategy. American Journal of Industrial Medicine, 6: 291-303.

Rappaport, S.M. (1991a) Selection of the measures of exposure for epidemiology studies. Applied Occupational and Environmental Hygiene, 6: 448-457.

Rappaport, S.M. (1991b) Assessment of long-term exposures to toxic substances in air. Annals of Occupational Hygiene, 35(1): 61-121.

Rappaport, S.M. (1993) Biological considerations in assessing exposures to genotoxic and carcinogenic agents. International Archives of Occupational and Environmental Health, 65: S29-S35.


Rappaport, S.M. and Selvin, S. (1987) A method for evaluating the mean exposure from a lognormal distribution. American Industrial Hygiene Association Journal, 48(4): 374-379.

Rappaport, S.M., Spear, R.C., and Selvin, S. (1988) The influence of exposure variability on dose-response relationships. Annals of Occupational Hygiene, 32 (Suppl. 1): 529-537.

Rappaport, S.M., Kromhout, H., and Symanski, E. (1993) Variation of exposure between workers in homogeneous exposure groups. American Industrial Hygiene Association Journal, 54(11): 654-662.

Rappaport, S.M., Lyles, R.H., and Kupper, L.L. (1995a) An exposure-assessment strategy accounting for within- and between-worker sources of variability. Annals of Occupational Hygiene, 39(4): 469-495.

Rappaport, S.M., Symanski, E., Yager, J.W., and Kupper, L.L. (1995b) The relationship between environmental monitoring and biological markers in exposure assessment. Environmental Health Perspectives, 103(suppl. 3): 49-53.

Roach, S.A. (1959) Measuring dust exposure with the thermal precipitator in collieries and foundries. British Journal of Industrial Medicine, 16: 104-122.

Rosner, B., Willett, W.S., and Spiegelman, D. (1989) Correction of logistic regression relative risk estimates and confidence intervals for systematic within-person measurement error. Statistics in Medicine, 8: 1051-1069.

Rosner, B., Willett, W.S., and Spiegelman, D. (1992) Correction of logistic regression relative risk estimates and confidence intervals for random within-person measurement error. American Journal of Epidemiology, 136: 1400-1413.

Samuels, S.J., Lemasters, G.K., and Carson, A. (1985) Statistical methods for describing occupational exposure measurements. American Industrial Hygiene Association Journal, 46(8): 427-433.

SAS Institute, Inc. (1989) SAS/IML Software: Usage and Reference, Version 6, First Edition, Cary, N.C.: SAS Institute, Inc.

SAS Institute, Inc. (1990) SAS Language: Reference, Version 6, First Edition, Cary, N.C.: SAS Institute, Inc.

SAS Institute, Inc. (1992) SAS/STAT Software: Changes and Enhancements, Release 6.07, SAS Technical Report P-229, Cary, N.C.: SAS Institute, Inc.

Schafer, D.W. (1989) Measurement error model estimation using iteratively weighted least squares. (Technical report #136, Department of Statistics, Oregon State University).

Schafer, D.W. (1992) Replacement methods for measurement error models. (Technical report #151, Department of Statistics, Oregon State University).

Searle, S.R. (1982) Matrix Algebra Useful for Statistics, New York: John Wiley and Sons, Inc.

Searle, S.R., Casella, G. and McCulloch, C.E. (1992) Variance Components, New York: John Wiley and Sons, Inc.

Selvin, S., Rappaport, S., Spear, R., Schulman, J., and Francis, M. (1987) A note on the assessment of exposure using one-sided tolerance limits. American Industrial Hygiene Association Journal, 48(2): 89-93.

Sen, P.K. and Singer, J.M. (1993) Large Sample Methods in Statistics: An Introduction with Applications, New York: Chapman and Hall.

Shimizu, K. (1983) Estimation of the quantile of a lognormal distribution. Rep. Stat. Appl. Res., JUSE, 30(2): 28-33.

Shimizu, K. (1988) Point estimation. In: Lognormal Distributions (E. Crow and K. Shimizu, eds.), New York: Marcel Dekker, Inc.

Shimizu, K. and Crow, E.L. (1988) History, genesis, and properties. In: Lognormal Distributions (E. Crow and K. Shimizu, eds.), New York: Marcel Dekker, Inc.

Spear, R.C., Selvin, S., and Francis, M. (1986) The influence of averaging time on the distribution of exposures. American Industrial Hygiene Association Journal, 47(1): 365-368.

Spear, R.C., Selvin, S., Schulman, J., and Francis, M. (1987) Benzene exposure in the petroleum refining industry. Applied Industrial Hygiene, 2: 155-163.

Spear, R.C. and Selvin, S. (1989) OSHA's permissible exposure limits: regulatory compliance versus health risk. Risk Analysis, 9(4): 579-586.

Stefanski, L.A. and Carroll, R.J. (1985) Covariate measurement error in logistic regression. Annals of Statistics, 13: 1335-1351.

Symons, M.J., Chen, C., and Flynn, M.R. (1993) Bayesian nonparametrics for compliance to exposure standards. Journal of the American Statistical Association, 88: 1237-1241.

Taylor, A.E. (1955) Advanced Calculus, Boston: Ginn.

Thomas, D., Stram, D., and Dwyer, J. (1993) Exposure measurement error: influence on exposure-disease relationships and methods of correction. Annual Review of Public Health, 14: 69-93.

Tuggle, R.M. (1982) Assessment of occupational exposure using one-sided tolerance limits. American Industrial Hygiene Association Journal, 43: 338-346.

Warmus, M. (1964) Tables of Lagrange Coefficients for Cubic Interpolations--Part I, Warszawa, Poland: PWN-Polish Scientific Publishers.

Wedderburn, R.W.M. (1974) Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. Biometrika, 61(3): 439-447.

Whittemore, A.S. (1989) Errors-in-variables regression using Stein estimates. The American Statistician, 43: 226-228.

Whittemore, A.S. and Keller, J.B. (1988) Approximations for errors in variables regression. Journal of the American Statistical Association, 83: 1057-1066.

Williams, J.S. (1962) A confidence interval for variance components. Biometrika, 49: 278-281.

Wolfinger, R. (1992) A Tutorial on Mixed Models, Cary, N.C.: SAS Institute, Inc.


Yager, J.W., Paradisin, W.M., and Rappaport, S.M. (1993) Sister-chromatid exchanges in relation to longitudinally measured occupational exposure to low concentrations of styrene. Mutation Research, 319: 155-165.
