
A National and International Debate on Default Uncertainty Factors vs. Data-Derived Uncertainty Factors

This article was downloaded by: [Karolinska Institutet, University Library] On: 09 October 2014, At: 15:23. Publisher: Taylor & Francis. Informa Ltd Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Human and Ecological Risk Assessment: An International Journal. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/bher20

A National and International Debate on Default Uncertainty Factors vs. Data-Derived Uncertainty Factors. Published online: 03 Jun 2010.

To cite this article: (2002) A National and International Debate on Default Uncertainty Factors vs. Data-Derived Uncertainty Factors, Human and Ecological Risk Assessment: An International Journal, 8:4, 895-911

To link to this article: http://dx.doi.org/10.1080/20028091057277


Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions


Human and Ecological Risk Assessment: Vol. 8, No. 4, pp. 895-911 (2002)

1080-7039/02/$.50 © 2002 by ASP

A National and International Debate on Default Uncertainty Factors vs. Data-Derived Uncertainty Factors

The Topic:

Does the use of default uncertainty factors vs. data-derived uncertainty factors enable us to characterize risk more precisely for informed risk management decisions?

The Titans: Default UFs

Dr. Michael Dourson (TERA), Dr. Margaret Miller (FDA), Dr. Jay Murray (Murray & Associates)

The Vikings: Data-Derived UFs

Dr. Richard Davis (ToxSolutions), Ms. Bette Meek (Health Canada), Dr. Bruce Naumann (Merck & Company)

Moderator’s (Dr. Edward Ohanian) Nuggets:

• Best Science

• Risk Characterization

• Economics

• Risk Management

200515.pgs 6/14/02, 1:35 PM895


896 Hum. Ecol. Risk Assess. Vol. 8, No. 4, 2002

Ohanian et al.

PANELISTS’ PRESENTATIONS

THE TITANS: DEFAULT UFS

Dr. Dourson: The debate over whether to use compound-specific or default uncertainty factors to assess risk is now engaged. At issue is whether sufficient data exist to support a movement towards the use of Chemical Specific Adjustment Factors (CSAFs) (Meek et al. 2001), when compared to the default approach of using a value of 10-fold, and which method for applying uncertainty factors, compound-specific or default, provides the best determination of the “safe” dose.**

Aristotle said, “It is the mark of an instructed mind to rest satisfied with the degree of precision which the nature of the subject permits and not to seek an exactness where only an approximation of the truth is possible” (Felter and Dourson 1998). If we assume for the moment that sufficient data do exist to use CSAFs, it is fair to ask whether their use allows risk assessors to assume precision that is not warranted from other aspects of the determination of the “safe” dose. Thus, while the appeal of using CSAFs is perhaps overwhelming from a scientific perspective, the need for caution and responsibility persists, and applying default uncertainty factors to determine the “safe” dose may still be the best method for the protection of public health. Of course, such default factor use should not pretend unwarranted precision either.

In addition, CSAFs, which by their nature are more precise than default factors, may cause more disagreement as to the validity of the “safe” dose. Such disagreement might not be meaningful as part of a comprehensive risk management decision, since the determination of a “safe” dose is often only a minor part. In fact, the added disagreement may serve to increase the speculation and uncertainty associated with the risk management decision.

Furthermore, consensus on the actual “safe” dose is difficult to achieve, because most of the agencies and risk assessment groups involved with estimating these doses use methods that vary somewhat (Haber et al. 2001). For example, a comparison of the WHO and EPA “safe” doses shows that for

— 38 chemicals the “safe” doses are within a 3-fold range of each other, 18 of these are essentially identical;

— 20 chemicals the “safe” doses are in a 3- to 30-fold range of each other;

— 6 chemicals the “safe” doses are in a 30- to 300-fold range of each other; and

— 1 chemical the “safe” dose is 700-fold apart (Dourson and Lu 1995).
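The fold-range binning behind the comparison above can be sketched as follows. The chemical names and dose pairs below are invented illustrations, not the actual Dourson and Lu (1995) data.

```python
# Bin hypothetical pairs of WHO vs. EPA "safe" doses by fold-difference,
# mirroring the ranges used in the comparison above.

def fold_difference(dose_a, dose_b):
    """Symmetric fold-difference between two 'safe' doses."""
    hi, lo = max(dose_a, dose_b), min(dose_a, dose_b)
    return hi / lo

pairs = {  # hypothetical (WHO, EPA) "safe" doses in mg/kg-day
    "chem_A": (0.02, 0.02),   # identical
    "chem_B": (0.1, 0.25),    # within 3-fold
    "chem_C": (0.01, 0.2),    # 3- to 30-fold
    "chem_D": (0.001, 0.09),  # 30- to 300-fold
}

bins = {"<=3-fold": 0, "3-30-fold": 0, "30-300-fold": 0, ">300-fold": 0}
for who_dose, epa_dose in pairs.values():
    f = fold_difference(who_dose, epa_dose)
    if f <= 3:
        bins["<=3-fold"] += 1
    elif f <= 30:
        bins["3-30-fold"] += 1
    elif f <= 300:
        bins["30-300-fold"] += 1
    else:
        bins[">300-fold"] += 1

print(bins)  # {'<=3-fold': 2, '3-30-fold': 1, '30-300-fold': 1, '>300-fold': 0}
```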

Perhaps these differences are primarily due to the year of the evaluation, with older assessments having fewer scientific data. Yet, harmonization in the underlying methods could possibly lessen the amount of uncertainty associated with these “safe” doses more so than a similar effort applied to the use of CSAFs.

Therefore, applying standard default factors that have been established through long-term use and post hoc scientific justification seems to be a better alternative to the use of CSAFs. Risk assessors may falsely increase the precision in the estimation of “safe” doses with the use of CSAFs. This may cause more disagreement as to the validity of the resulting “safe” doses if the use of CSAFs is encouraged.

**Please note that the arguments described here for maintaining the use of the default uncertainty factors are for the purposes of debate only. They do not reflect the author’s underlying opinion.

REFERENCES

Dourson ML and Lu FC. 1995. Safety/risk assessment of chemicals compared for different expert groups. Biomed Environ Sci 8:1-13

Felter S and Dourson ML. 1998. The inexact science of risk assessment (and implications for risk management). Hum Ecol Risk Assess 4:245-51

Haber LT, Dollarhide JS, Maier A, et al. 2001. Noncancer risk assessment: Principles and practice in environmental and occupational settings. In: Bingham E, Cohrssen B, and Powell CH (eds), Patty's Toxicology, Fifth edit, pp 169-232. John Wiley & Sons, NY, NY, USA

Meek M, Renwick A, Ohanian EV, et al. 2001. Guidelines for application of compound specific adjustment factors (CSAF) in dose/concentration response assessment. Comments in Toxicol (in press)

Dr. Miller: The 10-fold safety factor was originally used to account for inter-individual variability because humans vary greatly in their sensitivity to toxicants based upon the state of health, diet, and living conditions. Studies designed to investigate the differences between men and women in the pharmacokinetics and pharmacodynamics of pharmaceuticals show sex differences in the range of three- to eight-fold. It is important to remember, however, that these studies are conducted in healthy individuals with compounds that have been selected to have efficacy in a wide range of people. I think that it is misleading to think that we can take the data from these defined studies with purified compounds and use that data to refine the uncertainty for other classes of compounds. Thus, for most risk analyses one will need to apply the default safety factors.

I am concerned that the public will not accept a move away from the default safety factor if it increases the level of contaminants in the food supply. Society wants the same or greater level of protection that we have been providing with the default safety factors for more than 30 years. I’ve always believed that society accepts the default safety factor not because we’ve been using it but because they believe that it is safe. If one tries to define and refine the estimates of uncertainty and variability and this leads to an increase in the levels of residues in food, it is imperative that we know that the higher levels are safe for all segments of the population.

Dr. Murray: Does the use of data-derived uncertainty factors allow risk assessors to describe uncertainty more accurately?

• A disclaimer: For purposes of this debate, not necessarily what I personally believe.

• Asked to play a role, and delighted to have that opportunity.

• That said, I will now do my best to convince you that default factors are superior to data-derived uncertainty factors.


• They allow risk assessors to more accurately describe uncertainty.

• You have heard a number of compelling arguments from my colleagues.

• I am going to focus on a few additional arguments at the intersection of science and public policy:

No Scientific Consensus on How to Do It.

• Data-derived uncertainty factors are wonderful in theory, but in practice, the science is not sufficiently advanced to allow them to be used with any degree of reliability.

• (Previous speakers have dealt with the fact that) we do not have enough data on most chemicals to use data-derived uncertainty factors.

• But the problem goes far deeper than that.

• Even when the data exist, there is no scientific consensus on how to do it.

• There are no established methods or procedures for determining data-derived uncertainty factors.

• Scientists and regulatory agencies do not agree on how to determine data-derived uncertainty factors.

• This is too important an issue to proceed until scientific consensus is achieved.

• Data-derived uncertainty factors clearly do not allow risk assessors to describe uncertainty more accurately.

• One need only look at the differences in the results of risk assessments based on data-derived uncertainty factors.

• The uncertainty cannot be accurately described when the risk assessment results differ depending on how the data-based uncertainty factors are derived.

Regulatory Agencies Disagree Over Methods and Approach

• IPCS has taken an important first step in this direction; IPCS has published its guidelines for data-derived uncertainty factors.

• But regulatory agencies do not appear to agree with the IPCS approach.

• For example, USEPA recently published a risk assessment on its IRIS database stating: “Others have used different methods to derive uncertainty factors. The U.S. EPA has not yet endorsed any of these approaches, as there are a number of critical unresolved scientific and methodological issues.” (USEPA, February 2001).

• USEPA recognized that until a number of critical scientific and methodological issues are resolved, data-derived uncertainty factors cannot allow risk assessors to describe risk more accurately.

• Until these unresolved scientific and methodological issues are addressed, data-derived uncertainty factors should not be adopted or used by regulatory agencies.

• Otherwise, the credibility of regulatory agencies will be undermined.

• The public relies on regulatory agencies to take a health-protective approachto risk assessment.

• Regulatory agencies should not succumb to pressure to use data-derived uncertainty factors until scientists agree on how to do it.

• There must be consensus among scientific and regulatory agencies before data-derived uncertainty factors can ever be successfully implemented.

There Is No Public Support for Data-Derived Uncertainty Factors

• There is already enough distrust of risk assessment.

• I work with activist groups that have a deep-seated distrust for risk assessment.

• They have no appreciation for or need for data-derived uncertainty factors.

• Must be able to persuade the public of the accuracy and legitimacy of data-derived uncertainty factors.

• Default uncertainty factors are easy to explain and easy to understand.

• Contrast that with data-derived uncertainty factors.

• They are hard to explain and difficult to understand.

• There is no agreement on how to determine them.

• And the results vary, depending on who does the risk assessment.

• Data-derived uncertainty factors must pass the test of public opinion.


• The public will not accept what it cannot understand, and the scientific community has not taken the time to explain data-derived uncertainty factors.

• Have you ever asked a risk manager what he or she thinks of data-derived uncertainty factors?

• Risk managers hate them.

• They don’t understand them, they can’t explain them, and they much prefer default uncertainty factors.

• From the public perspective, default uncertainty factors more accurately and more honestly describe the uncertainty.

There Is No Legislative Support for Data-Derived Uncertainty Factors

• Public wants to go the other way.

• They want simple risk assessments that use default uncertainty factors.

• Politicians will not support legislation and regulations using data-derived uncertainty factors.

• So before embarking on a new approach, it is important that the scientific community be truly united on the approach and methods for data-derived uncertainty factors.

• Otherwise, it will lead to public mistrust and to laws that do not permit the use of data-derived uncertainty factors.

Conclusion

• In conclusion, we should not replace time-tested default uncertainty factors with controversial data-derived uncertainty factors when scientists can’t even agree on how to derive them.

• Uncertainty is more accurately described by default uncertainty factors.

• Until we have standard methods and approaches for establishing data-derived uncertainty factors, and until we can explain how they work to the public and to lawmakers, we should stick to what has worked well for over 50 years: default uncertainty factors.

• To do otherwise would amount to, I think the words were, “culpable neglectof duty or obligation.”


THE VIKINGS: DATA-DERIVED UFS

Dr. Davis: I welcome this opportunity to argue in favor of the proposal that data-derived uncertainty factors more accurately describe the uncertainty of the extrapolations required in risk assessment. Actually, I believe that development of appropriate data offers the potential to eliminate uncertainty for many route extrapolations in non-cancer risk assessment.

First, I want to point out the support for data-derived uncertainty factors, which has been expressed by the World Health Organization. The following statement comes from the EHC document in which the World Health Organization (WHO) adopted the Renwick proposal to divide the inter- and intraspecies UFs into toxicokinetic and toxicodynamic components.

“Because of the imprecision of the default uncertainty factors and in order to maintain credibility of the risk assessment process, the total default uncertainty factor should not exceed 10,000. If the risk assessment leads to a higher factor, then the resulting Tolerable Intake (TI) would be so imprecise as to lack meaning. Such a situation indicates an urgent need for additional data” (WHO IPCS Environmental Health Criteria 170, 1994).

The key phrases relevant to the question we are debating here include: “because of the imprecision of the default uncertainty factors … the resulting TI would be so imprecise as to lack meaning. Such a situation indicates an urgent need for additional data.” Clearly, the WHO Task Group recognizes the inaccuracy of default UFs and recommends data-derived UFs to improve the accuracy of risk assessment.
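The composite-UF bookkeeping behind the 10,000-fold cap in the WHO passage can be sketched as follows; the NOAEL and the particular factors chosen are hypothetical, not values for any real chemical.

```python
# Minimal sketch of deriving a Tolerable Intake from a NOAEL and a set
# of default uncertainty factors, with the WHO/IPCS (EHC 170) cap that
# the total factor should not exceed 10,000.

NOAEL = 50.0  # mg/kg-day, hypothetical critical-study NOAEL

uncertainty_factors = {
    "interspecies": 10,            # animal-to-human default
    "intraspecies": 10,            # human variability default
    "subchronic_to_chronic": 10,   # study-duration adjustment
    "database_deficiency": 10,     # incomplete database
}

total_uf = 1
for value in uncertainty_factors.values():
    total_uf *= value

# Above 10,000 the resulting TI would be too imprecise to be meaningful,
# signaling an urgent need for additional data rather than more factors.
assert total_uf <= 10_000, "composite UF exceeds the WHO credibility cap"

tolerable_intake = NOAEL / total_uf
print(tolerable_intake)  # 0.005 mg/kg-day
```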

WHO is not using the statistical definition of “precision” in the statement above. I believe WHO means inaccuracy when using the term imprecision in this context. Although used synonymously in common speech, the statistical definitions of accuracy and precision are not the same. Accuracy measures closeness to the true or target value, that is, hitting the bullseye. Precision, however, measures consistency, or the spread in hitting the target. In the statistical context, default UFs are highly precise because each default UF is limited to one or at most three values. Default UFs are inaccurate because there are too many variables or sources of uncertainty to be reconciled by each UF. It is not likely that the “true” value would be limited to two to three choices. Data-derived UFs allow greater accuracy but may be less precise.
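The statistical contrast drawn above can be illustrated with toy numbers (the “true” value and all estimates below are invented for illustration only):

```python
# Defaults: one fixed value, so perfectly precise but potentially far
# from the truth. Data-derived estimates: scattered, but centered near
# the truth -- accurate yet less precise.
import statistics

true_value = 6.3                        # hypothetical true interspecies ratio
default_ufs = [10.0, 10.0, 10.0, 10.0]  # the single default, used every time
csaf_estimates = [5.8, 6.9, 6.1, 7.2]   # hypothetical data-derived values

def bias(estimates):
    """Inaccuracy: distance of the mean estimate from the true value."""
    return abs(statistics.mean(estimates) - true_value)

def spread(estimates):
    """Imprecision: population standard deviation of the estimates."""
    return statistics.pstdev(estimates)

# Defaults are precise (zero spread) but inaccurate (large bias);
# data-derived values are accurate (small bias) but less precise.
assert spread(default_ufs) == 0.0
assert bias(default_ufs) > bias(csaf_estimates)
assert spread(csaf_estimates) > spread(default_ufs)
```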

For example, consider the number of processes to be reconciled by the toxicokinetic UF. Table 1 shows the major toxicokinetic processes of absorption, distribution, metabolism, and excretion. But each of these processes involves many other more specific processes, such as biological membrane transport, and is affected by numerous variables, some of which are listed here. Many others could be identified. There can be substantial differences within species or between species for all these processes and the factors affecting them. Therefore, it is highly unlikely that the “true” differences for the universe of chemicals could be reconciled by a couple of default values.

Furthermore, there are a variety of toxicokinetic parameters that characterize specific aspects of the processes discussed above. Any of these parameters shown in Table 2, or others, may be the basis of important differences between or among species for particular chemicals. Many researchers have shown that relative differences (i.e., ratios) in several of these parameters for specific chemicals are not limited to a few default values. The question then arises: which parameter ratios should be used to determine the “true” difference among or between species?

Table 1. Toxicokinetic factors.

Table 2. Toxicokinetic parameters.

Selection of the most relevant parameter for quantifying inter- and intraspecies differences illustrates how data-derived UFs are more accurate. Ultimately, the most important difference we wish to address with the toxicokinetic UF is target tissue concentration. Target tissue concentrations are difficult to measure in humans; however, well-developed PBPK models allow useful estimates. A recent paper by Pelekis et al. (Regul Toxicol Pharmacol 33:12-20, 2001) nicely illustrates this approach. If no such model is available, ratios of plasma concentration data such as AUC or Cmax are likely to provide the next best estimate of relative target tissue concentration. Either way, a toxicokinetic UF derived in this manner has to be more accurate than a default value. The same conclusion could be drawn from a similar approach to determine toxicodynamic UFs.
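The AUC-ratio logic described above can be sketched as follows. All AUC values are hypothetical illustrations, not data for any real chemical, and the simple ratios stand in for the fuller kinetic analysis a real CSAF would require.

```python
# Sketch of chemical-specific toxicokinetic adjustment factors derived
# from plasma AUC ratios, replacing the kinetic portions of the default
# interspecies and intraspecies uncertainty factors.

def interspecies_tk_factor(auc_human, auc_animal):
    """Ratio of human to animal AUC at an equivalent external dose.

    A ratio > 1 means humans attain a higher internal dose than the
    test species for the same administered dose.
    """
    return auc_human / auc_animal

def intraspecies_tk_factor(auc_percentile_95, auc_median):
    """Spread between a sensitive percentile and the typical human."""
    return auc_percentile_95 / auc_median

# Hypothetical plasma AUC data (ug*h/mL) at an equivalent dose:
f_inter = interspecies_tk_factor(auc_human=12.0, auc_animal=3.0)            # 4.0
f_intra = intraspecies_tk_factor(auc_percentile_95=30.0, auc_median=10.0)   # 3.0

# Composite kinetic adjustment; the toxicodynamic components would be
# handled separately (data-derived where possible, default otherwise).
composite = f_inter * f_intra
print(composite)  # 12.0
```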

Therefore, given appropriate and relevant data, UF values can be derived that are more accurate than default values. This benefit of improved accuracy provides incentive and rationale for useful mechanistic research. Although data-derived UFs are less statistically precise, their precision will improve as experience and, ultimately, guidance is developed.

Finally, based on this information, I am sure the opposing debate team will agree that default uncertainty factors stink (in the common, not literal, sense of the word)!

Ms. Meek:

What/Why “Data-Derived”?

• Consider in the context of increasingly data-informed approaches:

— default

— database

— categorical

— chemical-specific kinetic or dynamic data

— BBDR

• Not always necessary but always informative

When Data-Derived?

• Often a function of the economic importance of the substance

• Purpose of the assessment (screening vs. full)


Defining Uncertainty: Data-Derived vs. Default

• Chemical-specific “data-derived” informs us not only for individual compounds but for the framework of risk assessment as a whole

• Permits greater understanding and transparency in defining components of this framework; permits more explicit delineation of the nature of data that informs us

• Debatable whether it reduces uncertainty per se, or educates us concerning sources of uncertainty

• Permits greater transparency in delineation

• What do we really know about uncertainty for default?

Why Default?

• Easy

— Requires less experience/consultation

• Protective

• Seems to work

— If it isn’t broken don’t fix it

• Inertia

— There seems to be an exceedingly high hurdle to move from default. Why? What is it about default? Because we’ve always used it? Because it’s simple to explain? Because we can count on 10 fingers?

Dr. Naumann: When Dr. Abdel-Rahman asked if I would be interested in participating on the panel, the answer was easy. I always look forward to the opportunity to discuss the methods we use to set health-based limits, which more often than not are occupational exposure limits. It was more difficult to answer him when he asked if I wanted to be on the side of the panel that was “for” chemical-specific adjustment factors (CSAFs) or “for” default uncertainty factors. My answer was either one, depending on the available data for a given compound.

As a scientist, it is easy to argue “for” the use of CSAFs because risk assessors should always review and evaluate all available data, including data on interindividual and interspecies differences. Using all of the available data is just good science.

“Informed” regulatory decision-making implies that risk assessors and risk managers have brought to bear all of the information needed to arrive at the ultimate goal: a health-based limit that is protective of the intended population (e.g., general population, patients, workers).

Risk assessors use uncertainty factors to account for what they don’t know as they are trying to predict a true no-effect level for the population. By definition, default UFs are used in the absence of data on interindividual and interspecies differences. We are well aware of the analyses that show that the 10-fold default factors are generally protective, perhaps overly so for certain compounds. However, when default factors are used in the absence of data, some level of uncertainty always remains about how health-protective a limit may be, although it is likely to be reduced significantly.

The distinction between uncertainty and variability needs to be emphasized here. CSAFs are used to characterize the “variability” in kinetics and dynamics in a relevant subset of the intended population. To the extent that this subset is representative of the overall population and provides quantitative information on differences (or similarities), the level of “uncertainty” should be reduced considerably. The risk assessors/risk managers should feel more confident that the health-based limit they derive will provide adequate protection for even the most susceptible individuals. There may be residual uncertainties; however, these would obviously be less than when no data are available on potentially important differences.

We have tried to pattern our internal limit-setting methods after current risk assessment methodologies, but have modified them to suit the unique needs of our industry. In particular, we have the luxury of having a significant amount of human data on which to base our limits. Dr. Sargent discussed the value of human data earlier, particularly in terms of identifying the critical endpoint and no-effect level. The availability of pharmacokinetic and pharmacodynamic data in humans also argues for the primacy of human data in risk assessment, since it permits a characterization of interindividual variability in the species of greatest interest to the risk assessor. By focusing on PK and PD data for potentially susceptible subpopulations, the risk assessor consciously evaluates data for the segment of the population in greatest need of protection.

As a final note, use of CSAFs does not automatically imply a reduction in the “safety factor” used in setting a health-based limit. Sometimes the available data for a given compound suggest a high level of interindividual variability. We have presented several examples in past workshops where the data-derived value was greater than 10. In these cases, use of a default factor would not have provided adequate protection for the sensitive subpopulation.

RESPONSES

Ms. Meek: I was interested in something that Mike presented. You mentioned that comparison of human with animal data indicated that we are not sufficiently protected in some areas. And I was wondering how you were using that as an argument for default?

Reply (Dr. Dourson): I will yield the point that data-derived uncertainty factors might give you more accuracy, but there are other components to estimating a reference dose with which data-derived uncertainty factors are not helping us, and those other factors lead to perhaps large imprecisions. For example, if you have animal data and use a CSAF and get a very nice reference dose, or a tolerable intake or whatever, you might actually have a situation where the humans aren’t responding to that critical effect at all and the safe dose might be much higher. This unknown results in an inaccuracy or imprecision. We can actually plot curves of how often animal data predict human-based RfDs (see Dourson et al. 2001). It is this large degree of imprecision to which I was leading.

Dr. Davis (question for Dr. Murray): Dr. Murray made a statement that default uncertainty factors are easy to explain to the public. I question this on the basis that the values for default uncertainty factors have never been clear. How the UF values of 10 were determined, and where these values come from, has never been well explained. In other words, they were arbitrarily decided and chosen. How do you explain to the public that something this important to an assessment was arbitrarily decided?

Reply (Dr. Murray): Even if we accept your position that default uncertainty factors are arbitrary, the reality is that it is very easy to explain to people. You can talk about uncertainty factors and safety factors and say, look, we estimated what dose level we thought would not have an effect and then we divided it by 10 or 100 or some other number, and the reality is it works. If you talk to risk managers who have to go out and talk to the public about risk, they much prefer default uncertainty factors because they can explain them to the public. If they have to explain data-derived uncertainty factors, which I believe Ms. Meek called transparent, to the public, they are not transparent at all; they are mystical. People can’t begin to understand this, at least not until we take the time to explain how data-derived uncertainty factors work. So for now it’s much easier for the public to understand default uncertainty factors than data-derived uncertainty factors, and the public needs to understand.

Dr. Miller (question to the group): Haven’t we been using data in assigning uncertainty factors forever? In my career at FDA, I have assigned safety factors between 24 and 588 when, based on knowledge and information, I had a reason to use something other than 10. Haven’t we been doing that always? The default is just that, a default in the absence of other information.

Reply (Ms. Meek): While we have sometimes qualitatively considered relevant information, we’ve not been consistent. We have brought, perhaps, data to bear on the question that don’t quantitatively inform us, but we have a gut feel about something and we say let’s make it less than 10. Our approach should be much more consistent and quantitatively driven by the data.

Dr. Murray’s question to Dr. Davis: I think it was one of your first slides where you had 25 different factors, mostly pharmacokinetic factors, if I’m remembering it. And your point was, how can one default uncertainty factor substitute for the variability that you might have in these 25 or so different factors. My question to you is, how many chemicals are there where we have data for those 25 different factors, and therefore, how many chemicals exist where we have all the knowledge we need in order to be able to use data-derived uncertainty factors?

Reply (Dr. Davis): The number of chemicals for which we have such data is irrelevant. When data are available, we can derive more accurate uncertainty factors. Therefore, we encourage the development of necessary data. My point is that it’s a good idea to have justification for producing the data now missing. In any event, though, data-derived factors are more accurate.

Reply (Dr. Dourson): When you have multiple uncertainty factors, you get an estimate of the RfD that’s very imprecise. Maybe not within an order of magnitude; perhaps it’s 2 orders of magnitude. Within this framework, if you have one component that’s a CSAF, you may increase the accuracy of that component. But we all know from our high school chemistry class that our estimate of the RfD is only as precise as our least precise component. If you have other 10-fold uncertainty factors, your overall estimate is still very imprecise. You haven’t gained any precision.
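
[Editors’ note: Dr. Dourson’s precision argument can be illustrated with a small simulation. The sketch below is hypothetical and not from the discussion; the NOAEL of 100 mg/kg-day, the number of factors, and the roughly 3-fold lognormal uncertainty assumed for each 10-fold factor are all invented for illustration.]

```python
import math
import random

random.seed(0)

def rfd_spread(n_uncertain, n_precise=0, trials=10_000):
    """Ratio of the 97.5th to the 2.5th percentile of simulated RfDs.

    Each "uncertain" 10-fold factor varies lognormally by roughly a
    factor of 3 in either direction (95% range); "precise" factors
    (e.g., a well-supported CSAF) are treated as exact.
    """
    noael = 100.0  # hypothetical NOAEL, mg/kg-day
    samples = []
    for _ in range(trials):
        composite = 10.0 ** n_precise  # exact factors contribute no spread
        for _ in range(n_uncertain):
            composite *= 10.0 * math.exp(random.gauss(0.0, math.log(3) / 2))
        samples.append(noael / composite)
    samples.sort()
    return samples[int(0.975 * trials)] / samples[int(0.025 * trials)]

# Three uncertain 10-fold factors vs. two uncertain factors plus one
# exact CSAF: the overall spread narrows somewhat but stays very wide.
print(rfd_spread(3), rfd_spread(2, n_precise=1))
```

Under these assumptions, replacing one of three equally uncertain factors with an exact value shrinks the simulated 95% spread only partially (roughly from about 40-fold to about 20-fold), which is the sense in which the remaining imprecise components still dominate the overall estimate.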

Dr. Stern’s question: Are regulators confident enough to apply the data-derived UF approach to existing chemical-specific risk assessments?

Reply (Dr. Ohanian): In the majority of regulatory risk assessments, chemical-specific information will not be available for some or many of these areas of uncertainty, and a default uncertainty factor has to be used for each category of information for which the data are lacking.

Question to Dr. Stern: You talk about good defaults; now how do you get a good default if it’s not data-derived? Where does the good default come from if you don’t have a database to judge it against? What’s a good default?

Reply (Dr. Stern): This is obviously getting into semantics. A good default, by definition, is one that has been time-tested and seems to work. I would argue that for the most part we don’t have any evidence that 10 doesn’t work. But a good default is one that we can agree on, one that is going to be responsible but not overly protective. But yes, since it’s an uncertain estimate of an uncertain quantity, we can never be sure.

Question from Dr. Schwartz: Has what we’ve done so far with data-derived uncertainty factors perhaps gotten us to the point of saying that default factors are the right approach, they are just inaccurate? Maybe the current defaults are wrong, and maybe we should be staying with default factors, just different ones than we have now.

Reply (Dr. Miller): It would seem to me, from a regulatory perspective, that a default would have to be more conservative in order to provide some impetus for people to move away from the default. We always try to set up a regulatory strategy where generating more information is advantageous. So I think that is one of the problems: a default factor of 10 is probably too close to provide much incentive if people have to go through the work of doing a study. I think the default factor, and I think the WHO says it’s like 10,000; I know Dave Gaylor told me that he thought the default should be 10,000, to be lowered with better information. That provides an incentive for people to do the study. Moving toward a much more conservative safety factor is going to be very difficult for any regulatory agency to implement, even though in theory you would like to have an incentive to do more studies.

Reply (Dr. Schwartz): But what that does is argue to Rick’s point about inertia. We’re comfortable with 10; it’s just an inertia thing. The data that are being developed now seem to point toward the fact that different defaults may be more accurate.

Reply (Ms. Meek): In relation to “inertia”, while we need a fair bit of information for chemical-specific adjustment factors, there are even categorical defaults that most risk assessors would agree are better informed than defaults, but we’ve not introduced these consistently, either. Until we consistently introduce these data-informed modifications, we will not improve the risk assessment paradigm.

Reply (from audience): I agree with the use of defaults in the absence of good data, and I think most of us probably agree with that. But I’m actually in support of data-derived uncertainty factors. I think it was panelist Miller who made the point that we were deluding ourselves with data-derived uncertainty factors, saying that we were going to be more precise, and we’ve had the discussion about precise vs. accurate. I would say, based on a lot of the data we’ve seen today and at other times, that we may be deluding ourselves with the default 10× or other factors. For things like child sensitivity, in fact, in a lot of cases, depending on the chemical, children are not more sensitive, or are less sensitive. So I would argue against the idea that we are deluding ourselves with the data-derived approach.

Reply (Dr. Miller): An extra 10-fold safety factor for children is part of the law regarding the regulation of pesticides in the United States. I personally think that it is a problem when the law defines what the safety factor should be rather than what the level of risk should be. It is totally appropriate for the public, through its elected officials, to determine what level of risk our society is willing to accept.

Reply (from audience): That’s like saying we’ve always done it that way; therefore, we don’t need to change it.

Reply again (Dr. Miller): Well, no, a regulatory agency only carries out a legislative mandate. We don’t decide what we’re going to regulate or what level of risk is appropriate for society. The public, through its elected officials, decides what we are going to regulate and what level of risk is acceptable. The Food and Drug Administration and the Environmental Protection Agency only carry out a legislative mandate. I do not have any control or flexibility in the level of risk that is appropriate for the products that we regulate. For example, the Delaney Clause does not allow carcinogens to be added to the food supply; it says NO. We’ve tried many times to define NO, and every time the courts have assured us that no means no. So the public, through the legislature and courts, tells us what level of risk we are to regulate to.

Reply (from audience): Yes, I understand the regulatory constraints. However, I don’t think we’re deluding ourselves by looking at data-derived uncertainty factors. I think we’re trying to get more toward the truth.

Reply (Dr. Beck): I would like to reiterate something that I think Bette, you were hitting at, and also Alan. Isn’t this debate kind of artificial? I mean, it’s not one or the other. It’s: when do you have enough information that you can develop a data-derived assessment and a data-derived uncertainty factor, and when do you not have enough information and have to go a default way? And then also, how can we inform ourselves regarding the degree of protection we have when we’re relying on default uncertainty factors? Maybe we get some understanding of that from the probabilistic work that Sandy Baird and others have done. I voted for the data-derived factors, of course, because how can you argue against data. But in a way it’s not a totally fair debate that we’ve set up here.

Reply (Dr. Naumann): I agree with you, Barbara. We do need some criteria for deciding when data are adequate to plug into the data-derived process. While it is true that there are no procedures in place now, they are coming very soon. Bette mentioned the IPCS document, which has a lot of detail on the methodologies and explanations on how to use them. It provides sufficient information to help visualize what a normal population is and what a sensitive population is. It doesn’t come right out and define what a sensitive subpopulation is, but I think what that document says, and what it will provide people in terms of guidance, rings loud and clear. At a given level of exposure, the sensitive individual is going to be the one that has the highest blood level at the site of action that is of concern. There are very simple statistical approaches described, e.g., (Mean + 2SD)/Mean, for determining what a data-derived factor should be, based on an understanding of the distribution of blood levels and where that sensitive population is in that distribution.
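
[Editors’ note: the (Mean + 2SD)/Mean calculation mentioned here can be sketched as follows. The blood-level values are invented for illustration; they are not from the IPCS document.]

```python
import statistics

# Hypothetical internal-dose measurements (e.g., AUC of the active moiety)
# in a group of subjects all given the same external dose
blood_levels = [4.1, 5.0, 5.3, 5.8, 6.2, 6.9, 7.4, 8.8, 9.5, 11.0]

mean = statistics.mean(blood_levels)
sd = statistics.stdev(blood_levels)  # sample standard deviation

# Data-derived adjustment factor: a "sensitive" individual is placed at
# mean + 2 SD of the distribution and compared to the average individual
adjustment_factor = (mean + 2 * sd) / mean
print(round(adjustment_factor, 2))  # → 1.62 for these invented values
```

For these invented data the calculation would support a kinetic component of about 1.6, smaller than the roughly 3.16 kinetic half of the 10-fold intraspecies default; of course, such a simple ratio is only meaningful when the distribution of blood levels is adequately characterized.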

Reply to Dr. Miller (Dr. Khodair): Regarding society and who actually mandates regulation: it is true that society mandates the regulation, but we feed them the information. They depend on the scientific information given to them; they don’t know exactly which way to go. If we don’t have enough data for a data-derived uncertainty factor, that does not prohibit us from getting more data instead of relying completely on the default factor.

Reply (Dr. Miller): That may be true. When the Delaney Clause was passed, it was the scientists who went to Congress and told Congress that they couldn’t establish a safe dose for carcinogens. Congress responded by saying that they would have none of it. However, currently many consumers are skeptical of scientists, and they pressure Congress to maintain a conservative position.

Comment from Dr. Davis: When do you move from default to data-derived? You must accept that data-derived is more accurate. Somebody said that default uncertainty factors work. We may not know of a case where using them hasn’t worked; that’s true. But there is a cost to using default factors, because they are inaccurate and they are probably overprotective. So it falls to the regulatory authority to allow the option to develop data and derive a more accurate uncertainty factor. Then it falls to the regulated individual or company to conduct the studies that are needed to make the uncertainty factor more accurate, or pay the costs of using inaccurate default factors. That’s how I see it working, and that’s how I see benefits. The costs of inaccurate safety factors justify the cost of the required research.

Reply to previous question (Dr. Murray): I believe that the public does listen to scientists and takes its cues from scientists. But science and the regulatory community need to communicate effectively with the public so that they get that message. For data-derived uncertainty factors, until the scientific and regulatory community agree on some methodology and an approach, it’s going to be very difficult for the public to perceive that as something that makes sense. Because what they’ll hear is what they always say: you know, you put two scientists in a room together and you’ve got two people who are going to disagree with each other. I’m very sensitive about this because I come from the state that has the mother of all default uncertainty factors, and that’s the 1000-fold uncertainty factor under Proposition 65. If you think about how that came to be in California, it was public dissatisfaction with how chemicals were being regulated. There was distrust for the way it was being done, and the public took it into its own hands and said, we’re going to decide, we’re going to vote on it, and it was voted into law. All the scientists in the world can come in now and say: “Well, we have all this wonderful data, we have enough data to do data-derived uncertainty factors,” but the reality is, it doesn’t matter any more; it’s written into the law and you can’t change that law.

Comment from Dr. Kadry: Data-derived is something very nice; I’m sure that every scientist would like to go through that. However, in the absence of a universal, transparent system for evaluating the data, I think we have to go for the default. Because if you look at today’s speakers, each person looked at the data from a different angle. So unless we have a universal system for how we look at the data and evaluate them, we may have to live with different factors.

Reply to Dr. Murray from Dr. Sargent: I don’t want to debate Prop 65, but I don’t think the voters in California voted on the basis of whether or not they wanted a 1000-fold uncertainty factor. They voted on the question of whether or not they wanted carcinogens and reproductive toxins in their water; that’s what they voted on. But you raised probably the point I think we should consider in all of this, as we all try to be good scientists as well as public health officials. Scientists have lost their credibility so much that people believe lawyers before they believe us. I think that we need to be very careful about how we communicate our science, and I think that’s a very important point you make. Probably the only point that your side wins.

Reply (Dr. Murray): Ed, basically I agree with you in terms of what people were voting on. But the point is, there was a process behind where that 1000-fold was built in. What people actually voted on (and I actually have my ballot from 1986) was: (1) Are you in favor of getting toxic chemicals out of your drinking water? (2) It will cost you hardly anything. That’s what they voted on.

Comment from Dr. Stern: I need to make a distinction here if I’m going to express support for data-derived uncertainty factors: a distinction between chemical-specific data-derived uncertainty factors and what I guess I could call default data-derived uncertainty factors. Which is to say that with the type of analysis that I presented here today, I’m comfortable, naturally. But that’s because it was for a specific chemical, based on chemical-specific data, a lot of data, and three separate analyses, which are almost as good as peer review, all of which agreed. That leaves me pretty comfortable. But when I see a database for pharmaceuticals and I want to apply that to, say, a heavy metal, well, there aren’t too many heavy-metal pharmaceuticals. I say, wait a minute; this is a nice data set, but it is not really relevant to me. And so, I think we need to be very specific about what we mean by data-derived, and as someone who works for a regulatory agency and makes recommendations of scientific validity to a regulatory agency, I might make a positive recommendation in the one case, but I would be unlikely to make a positive recommendation in the other.

Reply (Dr. Dourson): A thought on next steps. A lot of us routinely do risk assessments daily or maybe weekly, and we have the opportunity to look at databases that have uncertainty factors and toxicokinetic and toxicodynamic data, so we have the opportunity to see if data would work. In most cases perhaps it’s not going to work, because we don’t have the appropriate data. But that doesn’t mean we can’t at least go through the thinking process ourselves. If we do have guidance coming out shortly, we might even be able to go through the thinking process and put in some citations about how the thinking went, whether or not we use a default factor or a CSAF. So hopefully next year the IPCS methods will be out, and we can actually bring more examples in pharmaceuticals, and maybe the methylmercury example, into this meeting and show how it worked and how it didn’t.

Ms. Meek: I think the guidance is a critical step in this. What has been prepared is a consensus of a lot of individuals with expertise in various areas, including kinetics and risk assessment. You write down what you know right now, and you apply that accordingly. It is desirable also to have a compendium of examples, and that’s another product we want to establish at the website. So you can come back and revisit the preliminary guidance, but keep moving forward. As a minimum, we should have defined some data requirements for people who want to generate relevant data, and that’s absent at the moment.

Comment from audience: Agencies are moving toward a position where they don’t want to take any action unless whatever scientific data are presented have been peer reviewed to the Nth degree. As in the example given before: if I went to my management and said, here, would you accept this data-derived uncertainty factor in place of 10× for a chemical, say for methylmercury, and, as Alan said, it went through three analyses, not quite peer review but almost, they would probably blow it out of the water, because they don’t want to stick their necks out. I’m still very pro data-driven, but unless it undergoes the highest level of scientific scrutiny, it’s not going to be of any utility in the agency.

Comment (Dr. Kadry): This is the fifth workshop; we come every year, we meet for 1 or 2 days, and then we leave. Why don’t we form some sort of permanent national committee, with members from different agencies, academia, and industry, to examine the issue? Its first mission, when IPCS comes out with the report, would be to examine that, and maybe we can come up with some sort of universal procedure for evaluating the data, which can be reported at the next workshop.
