Historical perspective on risk assessment in the federal government


Toxicology 102 (1995) 29-52

John D. Graham

Center for Risk Analysis, Harvard School of Public Health, 718 Huntington Avenue, Boston, MA 02115, USA

Abstract

This article traces the evolution of risk assessment as an essential analytical tool in the federal government. In many programs and agencies, decisions cannot be made without the benefit of information from risk assessment. Although this analytical tool influences important public health and economic decisions, there is widespread dissatisfaction with the day-to-day practice of risk assessment. The article describes the sources of dissatisfaction that have been voiced by scientists, regulators, interest groups and ordinary citizens. Problems include the use of arbitrary exposure scenarios, the misuse of the ‘carcinogen’ label, the excessive reliance on animal cancer tests, the lack of formal uncertainty analysis, the low priority assigned to noncancer endpoints, the poor communication of risk estimates and the neglect of inequities in the distribution of risk. Despite these limitations, the article argues that more danger rests in efforts to make decisions without any risk assessment. Recent Congressional and Administration interest in risk assessment is encouraging because it offers promise to learn from past mistakes and set in motion steps to enhance the risk assessment process.

Keywords: Risk assessment; Carcinogen; Safety

1. Introduction

The analytical tools of risk assessment, as applied to chemicals and radiation, have assumed a critical role in decision making in the United States. Risk assessments inform decisions at federal agencies such as the Consumer Product Safety Commission, the Department of Defense, the Department of Energy, the Environmental Protection Agency, the Food and Drug Administration and the Occupational Safety and Health Administration. Many states, including California, New Jersey and Wisconsin, also make widespread use of risk assessment.

While risk assessment might appear to be an arcane subject, each day the nation’s public health, environmental resources and economic well-being are affected by the outcomes of risk assessment. At EPA, for example, risk assessments play a significant role in determining how much human exposure is permitted to new chemicals, existing chemicals, pesticides, hazardous wastes, toxic air pollution, drinking water contaminants and surface water pollutants.



The risk assessments underlying cleanup goals at hazardous waste sites ultimately influence how much groundwater will be protected for use by current and future generations. Meanwhile, our nation’s annual investment of $150 billion in environmental protection is influenced by the findings of risk assessments (U.S. EPA, December 1990). Many jobs, new products and industrial facilities are threatened, protected or created by the outcomes of risk assessment.

Recent trends in public policy have magnified the importance of risk assessment and raised expectations for good risk assessment practice (Lave and Males, 1989). Some laws and regulations insist that firms meet numerical levels of acceptable risk (so-called ‘bright lines’) in order to operate production facilities or sell products (Rosenthal et al., 1992). The Clean Air Act Amendments of 1990, for example, require EPA to develop risk-based standards for stationary sources of hazardous air pollutants, after industry has implemented the maximum achievable control technologies (101st Congress, 1990). Federal judges now expect risk assessments in support of regulatory decisions unless legislation specifically indicates otherwise. The related tool of comparative risk assessment, which ranks the relative seriousness of various threats, is now used by EPA and a growing number of states to help set priorities for environmental protection (U.S. EPA, 1987; Science Advisory Board, 1990; The Carnegie Corporation, 1993). Meanwhile, other countries around the globe are scrutinizing America’s risk assessment procedures to determine whether they should be adopted abroad (World Health Organization, 1992).

In light of the growing importance of risk assessment, it is disconcerting that America’s current risk assessment processes do not command a high degree of respect within the scientific community, interest groups and the public at large. The purposes of this report are to: (1) trace the history of risk assessment as an analytical tool in the federal government; (2) describe contemporary concerns about the tool’s current uses in the field; and (3) explain why some form of risk assessment is critical to sound health, safety and environmental decision-making.

Risk assessment, important as it is, is not the solution for all of the problems in policy making. For example, it is the responsibility of the risk manager to establish a process that engenders public trust and credibility. The value of sound risk assessments can be destroyed by poorly trained or motivated risk managers or risk communicators. At the same time, better risk assessment practices can ultimately help risk managers and communicators do a better job.

2. Working definitions

When the phrase ‘risk assessment’ is used, it means different things to different people. Some use the phrase loosely to refer to the entire risk management process, including science, economics, legislation, regulation and communication with the public. Others see risk assessment as an algorithm for producing a regulatory number that will determine how much exposure to a toxic agent is permissible. For the purposes of this report, we use the conceptual definition of risk assessment crafted in 1983 by a committee of the National Academy of Sciences.

According to NAS, risk assessment is ‘the use of the factual base to define the health effects of exposure of individuals or populations to hazardous materials and situations’ (National Academy of Sciences, 1983). The analytical process entails use of scientific data and judgement as well as various ‘default’ assumptions when hard data are lacking. The default assumptions are based on a mix of scientific and policy judgements about how much to err on the side of safety when computing uncertain estimates of risk. In contrast to risk assessment, NAS describes ‘risk management’ as the process of weighing and selecting policy alternatives on the basis of politics, economics, ethics, science and law.

Within the NAS framework, risk assessment entails more than producing a single risk number; it is an organized process of characterizing risk that includes both qualitative and quantitative components. While it is the challenge of risk assessors to characterize threats to human health and the environment, it is ultimately the challenge of risk managers to decide how safe is safe enough.

The separation of risk assessment and management is not always tidy and it is recognized that the two processes must be coordinated (Wilson and Clark, 1991). In this report, we use the phrase ‘risk assessment’ as intended by the original NAS definition, recognizing that routine practice often deviates from what NAS intended.

3. The history of risk assessment

Interest in the toxicity of chemicals can be traced to ancient times (Eckert, 1990). Hippocrates wrote of the toxicity of lead; Pliny and Galen described the toxic effect of mercury and Aristotle and others noted the toxicity of fumes from burning charcoal. Their interest in ‘poisons’ was not always benign. Socrates was executed by being required to drink a cup of poison derived from hemlock! The word toxicity is actually derived from the Greek ‘toxikon pharmakon,’ bow drug or arrow poison. A toxicologist is therefore an expert in poisons (Zapp, 1977).

While toxicological reasoning can be found in ancient times, the science of toxicology has its most important roots in the 16th century. The physician-alchemist Paracelsus established the fundamental principle of toxicology:

‘All substances are poisons: there is none which is not a poison. The right dose differentiates a poison from a remedy’ (Paracelsus, 1991).

Toxicity, the capacity of a substance to produce serious bodily injury or death, is an inherent property of every chemical substance. Even those substances which we ordinarily think of as nontoxic, such as sugar, salt or water, possess the inherent capacity to produce serious bodily injury or death, but to a slight degree relative to other substances which are ordinarily considered toxic (Klaassen and Eaton, 1990).

The Industrial Revolution, and the attendant concerns about occupational disease, spurred broader interest in toxicology and the related disciplines of industrial hygiene and epidemiology. From 1800 to 1950, efforts to protect workers, consumers and the public from toxic chemicals were directed at determining ‘safe’ levels of human exposure. This period also gave rise to the field of public health, which was directed at protecting human populations from hazardous agents and behaviors before diseases might be detected by physicians.


3.1. Determining ‘safe’ levels of exposure

The basic goal of regulatory toxicology is to determine a ‘safe’ level of exposure to a toxic agent such as a chemical or radiation. Intuitively, the safe exposure level is the amount that is sufficiently small that it does not cause any adverse effect on human health.

Toxicologists have developed experimental protocols and algorithms for determining what levels of exposure to a toxic agent should be considered ‘safe’ for regulatory purposes. Since experimentation with humans is usually unethical or impractical, animal tests are used to determine the ‘acceptable daily intake’ (ADI) of a toxic substance. The ADI is now referred to as the ‘reference dose’ (RfD) by EPA. The notion of an ADI (RfD) arises from the belief that the adverse health effects induced by agents exhibit a true no-effect level of exposure (i.e. a strict threshold in the dose-response curve).

Fig. 1 exhibits five hypothetical dose-response relationships. Curve A exhibits a strict threshold; curve B exhibits sublinearity with no strict threshold; curve C exhibits supralinearity with no strict threshold; curve D is a straight-line (linear) relationship. Curve E, the U-shaped dose-response curve, expresses the notion that people can have too little or too much exposure to a substance (e.g. vitamin A). For agents such as aspirin and alcohol, the ‘optimal dose’ may be greater than zero (Graham, 1993). For most health effects of chemical exposure, toxicologists expect a strict threshold as illustrated in curve A (Rodricks, 1992).

Fig. 1. Hypothetical dose-response curves for a toxic agent.


Table 1. Key concepts in regulatory toxicology

Adverse effect. Functional impairment or pathological lesions which may affect the performance of the whole organism, or which reduce an organism’s ability to respond to an additional challenge.

Acceptable daily intake (ADI) or reference dose (RfD). The amount of a chemical in mg per kg body weight per day (or in mg per day for a 70-kg person) which is likely to be without appreciable risk of deleterious effects during a lifetime.

No observed adverse effect level (NOAEL). The dose of a chemical at which there are no statistically or biologically significant increases in the frequency or severity of adverse effects between the exposed population and its appropriate control.

Lowest observed adverse effect level (LOAEL). The lowest dose of chemical in a study or group of studies which produces statistically or biologically significant increases in frequency or severity of adverse effects between the exposed population and its appropriate control.

Source: Dourson and Stara, 1983; Barnes and Dourson, 1988.


Unfortunately, the ‘safe’ level of exposure on curve A, defined as point S on the horizontal axis, cannot be determined precisely due to the limits of laboratory science. Our measurement abilities are constrained by the limits of analytic detection, the difficulty in discerning subtle health effects and the practical limits on the number of animals that can be used in each experiment. Moreover, the ‘safe’ level of exposure, even if it can be determined experimentally, may exhibit a distribution of values in the human population, since some people may be more or less sensitive to chemical exposures than others. The slight variation of sensitivities exhibited by genetically inbred rodents does not provide a good picture of the extent of variation in human sensitivities to toxic agents (Brain et al., 1988).

In the face of such complexities, regulatory toxicologists have devised the following crude procedures for determining ‘safe’ levels (Table 1). Laboratory animal tests of an agent are conducted to establish a ‘no observed adverse effect level’ (NOAEL). Toxicologists then approximate a ‘safe’ level of exposure (the ADI) by dividing the no-effect level by an uncertainty (or safety) factor: ADI = NOAEL/uncertainty factor. At EPA, the ADI is now called the ‘reference dose’ (RfD) to remove the implication of complete safety. Since 100% safety or ‘freedom from risk’ is an elusive goal, public health scientists now emphasize the degree of protection provided (Goldstein, 1990).

The appropriate size of the safety factor has been a matter of discussion for years. In the 1950s, the Food and Drug Administration initiated use of a 100-fold safety factor to account for factors such as differences between humans and animals in sensitivity, the possible presence of sensitive human subpopulations and the possibility of synergistic effects of multiple contaminants in the diet (Lehman and Fitzhugh, 1954). When considering how responses in exposed animals and humans might differ, it is instructive to consider factors such as differences in body size and weight, differences in food requirements, the number of animals tested compared to the size of the human population that may be exposed and the difficulty of estimating human intake of contaminants (Vettorazzi, 1980). More recently, toxicologists have suggested that one factor of ten be thought of as accounting for ‘interspecies variability’ in toxic response while the other factor of ten be thought of as accounting for ‘intraspecies variability’ (Klaassen and Doull, 1980).

The 100-fold factor is frequently used when data from chronic 2-year animal experiments are available. Additional factors (sometimes called ‘modifying factors’) are used when test data are inadequate or are of poor quality.


When only subchronic (90-day) data are available from two species, it has been recommended that an additional factor of up to ten be used to compute the ADI (RfD), thereby creating a possible 1000-fold safety factor (National Academy of Sciences, 1977). If subchronic data from only one species are available, an extra safety margin of two (creating a 2000-fold safety factor) has been suggested. EPA has refined this approach by recommending that an additional uncertainty factor in the interval between 1 and 10 be used in computing the ADI (RfD) if a LOAEL must be used due to uncertainty about how low the NOAEL is (U.S. EPA, 1980).
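To make the arithmetic concrete, the following minimal sketch (in Python) composes the safety factors described above. It is an illustration only, not any agency's official algorithm; the function name and all numerical inputs are hypothetical.

```python
def reference_dose(noael, interspecies=10, intraspecies=10,
                   subchronic=1, loael_to_noael=1, modifying=1):
    """Approximate an ADI/RfD (mg/kg/day) as NOAEL divided by the
    product of uncertainty factors, per the scheme described above.

    subchronic: up to 10 when only 90-day data are available.
    loael_to_noael: 1-10 when a LOAEL must substitute for a NOAEL.
    modifying: additional judgment-based factor for poor-quality data.
    """
    uncertainty_factor = (interspecies * intraspecies * subchronic *
                          loael_to_noael * modifying)
    return noael / uncertainty_factor

# Hypothetical chronic NOAEL of 5 mg/kg/day with the standard 100-fold factor:
print(reference_dose(5.0))                  # 0.05 mg/kg/day
# Only subchronic data available: a further 10-fold factor, 1000-fold total.
print(reference_dose(5.0, subchronic=10))   # 0.005 mg/kg/day
```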

The severity of adverse effects caused by toxic agents can vary from mild cases of reversible skin irritation to death. Historically, the severity of the adverse health effect caused by an agent has not played an explicit role in the choice of safety factors, although some extremely mild biological responses have on occasion not been considered ‘adverse.’ A larger safety factor of 5000 was proposed for irreversible effects such as cancer but, as we shall see, the safety-factor method is no longer applied to carcinogens by U.S. regulatory agencies (except in unusual circumstances) (Weil, 1972). Even for cancer, however, the safety-factor method continues to be used in most countries throughout the world.

The use of NOAELs and safety factors to determine ADIs is today often called ‘risk assessment,’ but the safety-factor method was used by toxicologists for decades before the phrase risk assessment was used in either government or industry. It should also be understood that the choice of safety factors is not strictly a scientific judgment; the RfD has inputs from both policy and science. ‘Risk assessment’ per se began not in the chemical industry but rather in the field of radiation control.

3.2. The influence of radiation biology

Since we had regulatory toxicology, why did we need risk assessment? To answer this question requires a brief digression into the study of radiation.

Many experts in radiation biology postulate that any amount of human exposure to ionizing radiation, no matter how small, may be associated with an increase in the risk of cancer and genetic effects.

While this hypothesis is by no means proven, it was credible enough to spawn the discipline of radiation risk assessment, first as applied to genetic effects and later to cancer.

The idea is to estimate how the incidence of human cancer is influenced by exposure to small doses of radiation. Cancer data from the Japanese atomic bomb survivors and other irradiated human populations have been scrutinized to elucidate dose-response relationships. A series of panels of the National Academy of Sciences (from BEIR I to BEIR V) have struggled to resolve this challenging question (National Research Council, 1990). Perhaps the most famous risk assessment, the 1975 Rasmussen study of core meltdowns at nuclear power plants, used a no-threshold model to estimate the number of cancer deaths following a nuclear reactor accident (Nuclear Regulatory Commission, 1975). Risk assessment is now commonly used at nuclear power plants to estimate potential risks to community residents and target opportunities for risk reduction (Spangler, 1987; Remick, 1992).

The no-threshold hypothesis was discussed in the context of chemicals for years prior to its application to radiation. As non-threshold models began to be applied to radiation, questions were raised about their applicability to chemicals. Advocates of non-threshold models of chemical carcinogenesis used the radiation analogy to buttress their case (Maugh, 1978).

From a biological perspective, the non-threshold hypothesis in carcinogenesis has its roots in three rather different ideas. First, the one-hit model of cancer induction and the closely related ‘linearized multistage model’, now used by EPA, posit that even a single molecule of a chemical carcinogen is capable of triggering the biological processes that lead to tumor formation (Office of Technology Assessment, November 1987). Second, the background rates of human exposure to chemicals and radiation from all sources, natural and manmade, may be so large that the no-effect levels of exposure for individual chemicals are irrelevant (i.e. cumulative exposures to all carcinogenic influences may already exceed any strict thresholds that might exist), unless cancer-causing agents act by numerous different mechanisms (Crump et al., 1976).


Finally, any population thresholds that do exist may be very close to zero if there are some humans who are highly susceptible to contracting cancer (Brain et al., 1988). The fact that one out of every four Americans now dies of cancer is a stark indication that some combination of genetic and environmental influences (including old age) is sufficient to cause numerous cancers in the human population.

Strictly speaking, it is not known whether true no-effect levels of exposure for cancer-causing agents exist at doses below the range of observation in animals or humans. Nonetheless, it is fair to say that the non-threshold hypothesis, which is consistent with experimental data (Upton, 1988; Peto et al., 1991), has presented a powerful challenge to the toxicologist’s conception of ‘safety’. If strict thresholds do not exist for carcinogenic agents, then it is impossible to establish nonzero levels of exposure that are completely safe. It is this line of reasoning that has caused U.S. regulatory agencies to reject the historical practice of applying safety factors to the NOAELs for cancer determined by animal tests (U.S. EPA, 1976).

While the safety-factor approach is still widely employed for adverse health effects other than cancer, the field of carcinogen risk assessment was spawned to address the concern that even minute exposures to a carcinogenic agent may result in a slight increase in cancer risk. Only recently have non-threshold hypotheses been suggested for noncancer effects such as developmental and reproductive damage (Graham, 1989) and they remain quite controversial (Kodell et al., 1991).

The convergence of interest in the carcinogenic effects of radiation and chemicals spurred intellectual interest in the field of risk assessment. A new professional society, the Society for Risk Analysis, was established in 1980 to address this fascinating question and related issues in risk management and communication.

3.3. The dilemma posed by the Delaney clauses

While cancer risk assessment was first applied in the nuclear power industry, the analytic tools were refined and widely used to regulate the food industry beginning in the early 1970s.

In fact, it is the Food and Drug Administration’s efforts to protect the public from carcinogenic agents in the food supply that gave rise to the ‘one chance in a million’ standard of de minimis or negligible risk that has been used (and abused) in various policy contexts (Rodricks, 1988; Kelly, 1991).

Beginning in the 1930s, FDA approved tolerances for food additives that met a standard of ‘reasonable certainty of no harm when used as intended.’ Congress later passed three laws that led the Food and Drug Administration to consider use of risk assessment: the Food Additive Amendments of 1958, the Color Additives Amendment of 1960 and the Animal Drugs Amendment of 1968. Each law contains a ‘Delaney Clause’ stating, with only slight variation in wording, that any additive or animal drug shall not be deemed safe if it is found, after appropriate tests, to induce cancer in man or animal.

The 1968 amendment added a statutory exception to the Delaney Clause (the so-called ‘DES proviso’) that authorizes FDA to approve carcinogenic animal drugs or carcinogenic additives in animal feed, provided that ‘no residue’ of the drug or additive will be found in meat or meat products by an analytical method prescribed by the FDA (Barnard, 1990).

The first use of carcinogen risk assessment occurred when FDA defined an amount of carcinogenic residue in meat that was so tiny that it could be considered ‘no residue’ under the DES proviso. FDA decided that it would not be wise to use the analytic limit of chemical detection to define safety because dramatic improvements in detection technology were allowing measurements of residues in the part-per-billion and part-per-trillion range (Scheuplein, 1986). In light of scientific developments, FDA developed a creative risk assessment method that could be used to define ‘no residue’ as an amount so infinitesimal that any incremental cancer risk that might occur due to the residue would be essentially zero.

In 1973, when FDA was developing this ‘sensitivity of method’ (SOM) procedure, agency scientists recognized that any exposure to a carcinogen, no matter how small, might be associated with a slight increase in cancer risk (Scheuplein, 1987). Moreover, FDA stated that ‘absolute safety can never be conclusively demonstrated experimentally’ (U.S. FDA, 1979).


Using a mathematical procedure developed by statisticians Mantel and Bryan (1961), FDA proposed that quantitative risk assessment be used to define a level of extra cancer risk in a lifetime, originally set at one chance in 100 million, that could be regarded as essentially zero. Several years later, FDA replaced the Mantel-Bryan model with a more protective, linear dose-response model while relaxing the ‘essentially zero’ level of risk from 1 in 100 000 000 to 1 in 1 000 000 (U.S. FDA, 1977).
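As a worked illustration of this ‘essentially zero’ logic, the sketch below inverts a simple one-hit (low-dose linear) model to find the dose corresponding to a target lifetime extra risk. The one-hit form and the potency value are hypothetical stand-ins for exposition, not FDA’s actual SOM calculation.

```python
import math

def extra_risk(dose, q):
    """Lifetime extra cancer risk under a one-hit model: 1 - exp(-q*d),
    which is approximately q*d at low doses."""
    return 1.0 - math.exp(-q * dose)

def virtually_safe_dose(target_risk, q):
    """Dose whose extra risk equals the target, by inverting the model."""
    return -math.log(1.0 - target_risk) / q

q = 0.2  # hypothetical potency, (mg/kg/day)^-1, for illustration only
print(virtually_safe_dose(1e-8, q))  # the original 1-in-100-million criterion
print(virtually_safe_dose(1e-6, q))  # the relaxed 1-in-a-million criterion
```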

FDA was the first agency to make an explicit policy commitment to a nonzero level of cancer risk that would be considered ‘essentially zero’ for policy purposes, although FDA’s final position on the DES proviso was not officially published until 1987 (U.S. FDA, 1987). FDA’s implementation of the DES proviso has not been contested in litigation.

In the mid-1970s, a federal appeals court highlighted FDA’s authority to ignore de minimis matters under the general safety clauses of the Food, Drug and Cosmetic Act (Monsanto Co. vs. Kennedy, 1979). The critical case involved whether unreacted acrylonitrile monomer that migrated from packaging was a food additive and, hence, was covered by the stringent Delaney protections against carcinogenic additives. While FDA said ‘yes’, based on the physical principle that molecules would be exchanged when two substances come into contact, no measurements were available to support FDA’s position. When the Monsanto Company challenged FDA’s decision, a federal appeals court overruled FDA and suggested that de minimis was an appropriate basis for a determination that there was no additive.

In the 1980s, FDA attempted to extend the de minimis principle to the Delaney Clause covering carcinogenic color additives. Using risk assessment methods and the one-in-a-million standard (as used under the DES proviso), FDA sought to treat tiny carcinogenic risks from color additives as de minimis. In 1987 a federal appeals court acknowledged that risks less than one in a million are very small but overruled FDA’s interpretation, suggesting that Congress, when writing the Delaney Clause, had intended an extremely rigid and protective regulatory posture toward color additives. This court did not resolve whether the Delaney Clause covering carcinogenic food additives should be similarly interpreted (Public Citizen vs. Young, 1987).

FDA has avoided the strictness of the Delaney Clause covering food additives by making few rule-making decisions that might trigger Delaney’s prohibitions (Merrill, 1988). A more serious dilemma has emerged at EPA, which must consider the carcinogenicity of pesticide residues on foods when making registration and reregistration decisions about pesticide products.

EPA faces a curious ‘Delaney Paradox’: Section 408 of the Food, Drug and Cosmetic Act regulates residues of pesticides on raw agricultural commodities using a flexible risk-benefit test, while Section 409 of the Act, which covers food additives in processed foods (such as pesticide residues that concentrate in processed foods), includes the rigid Delaney Clause. A plain reading of the Delaney Clause would seem to require zero tolerance levels for all carcinogenic pesticide residues that concentrate in processed foods. The law does permit carcinogenic residues in processed foods that are no greater than the amount in raw food.

EPA chose not to make such a strict reading of the Delaney Clause. Based on the recommendations of a National Academy of Sciences panel (National Research Council, 1987), EPA in 1988 proposed to adopt a uniform ‘negligible risk’ policy for all carcinogenic residues in food, regardless of whether the food is raw or processed (U.S. EPA, 1988). EPA proposed to use FDA’s de minimis risk policy, even though a similar policy had been overruled in 1987 by a federal appeals court addressing color additives.

Environmentalists, led by the Natural Resources Defense Council, challenged EPA’s policy when the Agency sought to permit continued use of four carcinogenic pesticides as food additives under the Delaney Clause. While EPA argued that the cancer risks posed by these pesticides were negligible, a federal appeals court ruled in July 1992 that the Delaney Clause prohibits food additives that are carcinogenic, regardless of whether or not the magnitude of the cancer risk is negligible (Les vs. Reilly, 1993).


This 1992 judicial decision, which the Supreme Court has decided not to review, foreshadows a major debate in Congress about how food additives and pesticides should be regulated in the future. This debate may cover not only the Food, Drug and Cosmetic Act and the associated Delaney Clauses but also EPA’s current authority under the Federal Insecticide, Fungicide and Rodenticide Act to register pesticides based on a flexible balancing of risks and benefits (the ‘unreasonable risk’ test). The future of risk assessment in food safety regulation rests on the outcome of this debate.

3.4. Industry takes OSHA to the Supreme Court

FDA was the first federal agency to embrace carcinogen risk assessment, but the technique was not used on a widespread basis throughout the federal government until resolution of a major struggle that developed during the Carter Administration. Under the leadership of Dr. Eula Bingham, the Occupational Safety and Health Administration (OSHA) proposed in 1977 to regulate all carcinogens in the workplace to the lowest levels of exposure that were technologically and economically feasible. OSHA took this position because safe levels of human exposure to chemical carcinogens could not be demonstrated (OSHA, 1981).

Major industry groups, led by the American Industrial Health Council and the American Petroleum Institute, objected to OSHA’s proposal. Among several lines of argument, industry suggested that a carcinogenic agent should not be regulated unless risk assessments indicate that the size of the cancer risk is likely to be significant. They also argued that OSHA should use the results of risk assessment to facilitate estimating health benefits when benefit-cost analyses of regulatory alternatives are conducted.

Industry opposition to OSHA’s emerging generic cancer policy crystallized in legal challenges to a rule-making proposal to tighten the workplace benzene standard. Citing the work of Harvard physicist Richard Wilson, industry advocates argued that quantitative risk assessment of chemical exposures is feasible and useful. Labor and environmental groups urged OSHA to resist risk assessment on the grounds that it was technically suspect and unethical (i.e. would lead to allowance of unacceptable risks) (OSHA, 1988).


Meanwhile, an analytical unit within EPA, the Carcinogen Assessment Group (CAG), was developing a systematic procedure to perform quantitative risk assessments of environmental exposures to carcinogenic agents (Albert et al., 1977; Anderson and the Carcinogen Assessment Group, 1983). In 1976, under the scientific leadership of Dr. Roy Albert, EPA published the agency’s first interim guidelines on carcinogen risk assessment (U.S. EPA, 1976).

Disputes emerged within the Carter Administration about whether quantitative cancer risk assessment should be used in policy decisions (Carter, 1979). OSHA, in particular, argued that numerical risk assessments were not appropriate for standard-setting decisions. EPA (based on its early experience regulating chlorinated pesticides) and FDA countered that quantitative risk assessment could play a constructive role in both priority-setting and standard-setting decisions. A consensus-building effort on risk assessment by the Interagency Regulatory Liaison Group, led by EPA Administrator Douglas Costle, made some strides forward but was disbanded after the November 1980 election (Interagency Regulatory Liaison Group, 1979; Landy et al., 1990).

Controversy about the legitimacy of risk assessment was ultimately resolved by a divided U.S. Supreme Court. Writing for a plurality of the Court in the benzene case, Justice John Paul Stevens suggested that risk assessment is feasible and that OSHA is required to show that a workplace risk is significant before taking rule-making action to reduce or eliminate the risk. Stevens also suggested that a reasonable person might regard a lifetime cancer risk of one in 1000 as significant yet regard a risk of one in a billion as trivial (Industrial Union Department, 1980).

Justice Thurgood Marshall wrote a bitter dissent, suggesting that Justice Stevens had rewritten the Occupational Safety and Health Act. In a later Supreme Court case involving OSHA’s cotton-dust standard, a majority of the Court reaffirmed the Stevens position but refused to permit cost-benefit analysis when setting permissible exposure limits under the language of the Occupational Safety and Health Act of 1970 (American Textile Manufacturers vs. Donovan, 1981).

The Supreme Court’s affirmation of risk assessment exerted a powerful influence on OSHA’s rule-making process. OSHA now uses risk assessment regularly in its rule-makings on specific chemicals, as reflected in its recent decisions to tighten the benzene and formaldehyde standards. When OSHA neglects risk assessment, it loses in court. For example, OSHA recently attempted to establish permissible exposure limits for 428 air contaminants in a single rule-making. In July 1992, a federal appeals court vacated the OSHA rule-making in part because OSHA had not performed risk assessments for each of the 428 chemicals it sought to regulate (AFL-CIO, July 1992). Unless Congress performs substantial surgery on OSHA’s statute, risk assessment will continue to exert a profound influence on OSHA’s rule-making activities.

The impact of the Supreme Court’s faith in risk assessment has extended far beyond OSHA regulation. In overturning the Consumer Product Safety Commission’s proposed ban of urea formaldehyde foam insulation, a federal appeals court emphasized in 1983 the importance of sound risk assessment in chemical regulation (Gulf South Insulation Co., 1983). More recently, an EPA rule covering air emissions of vinyl chloride was vacated by a federal appeals court because the agency had not used risk assessment properly in its interpretation of the Clean Air Act’s mandate to protect public health with an ‘ample margin of safety.’ The Court emphasized that ‘safe’ does not mean zero risk and that acceptable risks can be gauged with the aid of risk assessment methods (Natural Resources Defense Council vs. EPA, 1987). The 1990 Clean Air Amendments embraced this judicial opinion as the operative statutory interpretation for setting residual emission standards for hazardous air pollutants (Clean Air Act Amendments, October 1990).

3.5. The National Toxicology Program and the Chemical Industry Institute of Toxicology

While the 1970s and 1980s were characterized by litigation over the role of risk assessment in regulation, the creation of two new scientific organizations laid the groundwork for new technical and regulatory developments. In 1978, the Department of Health, Education and Welfare (HEW) launched the National Toxicology Program (NTP) to advance scientific understanding of the human health effects of chemicals (National Academy of Sciences, 1984). At about the same time, the nation’s major chemical and petroleum companies created the Chemical Industry Institute of Toxicology (CIIT). The original role of CIIT included routine toxicity testing of commodity chemicals, but more recently the organization has emphasized mechanistic research that can improve the scientific basis of risk assessment (Neal, 1991). NTP and CIIT are now important organizations in the scientific debate about risk assessment reform.

In the 1960s, the National Cancer Institute began testing of chemicals for carcinogenicity. While government laboratories were equipped primarily to conduct basic experimental research, NCI came under increasing public pressure to perform routine animal testing of commodity chemicals. Since regulatory agencies had particular needs for testing information, the NCI’s process of selecting chemicals for testing became a subject of intense interest among agencies and parties in the private sector.

In 1978, the Secretary of HEW established NTP to strengthen the Department’s activities in the testing of chemicals of public health concern, as well as the development and validation of new and better integrated test methods. The first director of NTP, Dr. David Rall, made it clear that NTP’s role was hazard identification based on animal studies; it was the role of regulatory agencies to determine human exposures to chemicals and quantify risks to human health.

While NTP was originally seen as an interim, coordinating institution, in 1981 the Secretary of the Department of Health and Human Services granted NTP permanent status.


Most of NTP’s funding came from the National Institute for Environmental Health Sciences. Since its inception in 1978, about 500 substances (including mixtures) have been tested for carcinogenicity (Office of Technology Assessment, 1991).

Those who expected that only a small percentage of chemicals would show carcinogenic activity have been surprised by the findings of animal tests. Roughly 50% of the chemicals tested in laboratory animals exhibit some carcinogenic activity (Gold et al., 1989; Huff et al., 1991), although NTP scientists believe the rate of positives may ultimately prove to be as low as 10-15% (Huff and Melnick, 1993). Even so, 10% of the thousands of chemicals in commerce presents a formidable challenge for regulatory agencies.

NTP tests chemicals at or near the so-called ‘maximum tolerated dose,’ which is defined as the largest dose that can be applied to the animals without causing death or chronic toxicity (such as significant loss of body weight). Such large doses are used to help compensate for the small number of animals that are used in the experiments (Huff et al., 1991). Although some tested chemicals cause cancer only at the ‘maximum tolerated dose’ (MTD), many exhibit carcinogenic effects at exposure levels below the MTD (Hoel et al., 1988; Rall, 1991). Critics of the NTP program argue that the doses used in the animal tests are too high to be a biologically relevant indicator of cancer risk at smaller levels of human exposure (Schach von Wittnau, 1987; Ames and Gold, 1990a,b). There is no scientific consensus about whether MTD testing should be retained or revised (National Research Council, 1992).

In addition to the animal tests sponsored by NTP, private organizations are now taking an increasing role in routine toxicity testing. The Toxic Substances Control Act of 1976 and other environmental laws have placed increasing testing burdens on the manufacturers and users of toxic chemicals. The NTP findings, coupled with the increasing number of positive animal tests reported by private organizations, have led to a growing list of chemicals in the federal government’s Annual Report to Congress on Carcinogens (U.S. Department of Health and Human Services, 1990).

In light of the large and growing number of animal carcinogens, the key question becomes how to predict the risks of human exposures on the basis of data from laboratory animals. NTP’s Board of Scientific Counselors recently urged NTP to go beyond routine testing and make greater efforts to discover the implications of animal test results for human risk (U.S. Public Health Service, 1992). The new Director of NTP, Dr. Kenneth Olden, has made a public commitment to advance scientific understanding of why chemicals cause tumors in animals. Interestingly, CIIT has taken a leadership role in this area by advancing scientific understanding of both ‘pharmacokinetics’ (the delivery of chemicals to target organs and cells in the body) and ‘pharmacodynamics’ (the role of chemicals in generating tumors at the organ and cellular level).

Some of CIIT’s recent findings challenge conventional assumptions used in carcinogen risk assessment. For example, risk assessors typically assume that the amount of a carcinogen delivered to target cells in the body is proportional to the amount ingested or inhaled. Yet, CIIT’s research on formaldehyde has revealed nonlinear pharmacokinetics in both rats and monkeys (Casanova and Heck, 1991). Moreover, risk assessors typically assume that even small doses of carcinogens increase cancer risk. Yet, CIIT’s research on certain nongenotoxic chemicals, such as chloroform, suggests that tumors in animals occur only if doses are high enough to cause cell proliferation in the target organ (Goldsworthy et al., 1990). Risk assessment practices are being rethought due to study results from CIIT and other research organizations (such as those from mechanistic research on formaldehyde).

A key unresolved question is how risk assessors should respond to scientific progress. Some would argue that a specific legal burden of proof should be applied to any new scientific evidence that challenges the default assumptions normally used by agency risk assessors. Others argue that risk assessment should be updated periodically using the best available scientific information as determined by qualified scientists. Regardless of how agencies proceed, they will need better analytic tools to characterize how much confidence they have in new mechanistic science compared to the default assumptions.



3.6. Ruckelshaus: risk assessment and management

Despite indications of widespread human exposure to animal carcinogens, the early years of the Reagan Administration were dominated by a program of ‘regulatory relief’ for private industry. This program affected virtually all federal agencies, but the program was particularly controversial at EPA. In the face of growing distrust of EPA’s pro-business dealings, William Ruckelshaus was recruited in 1983 to rejoin the agency and chart a new management strategy. One of his principal themes was the adoption of ‘risk assessment and risk management’ as a principled framework to govern EPA’s policy decisions (Ruckelshaus, 1983, 1985; U.S. EPA, 1983).

Ruckelshaus depended on the major 1983 report from the National Academy of Sciences that called for uniform risk assessment guidelines throughout the federal government. Although this recommendation was never implemented, a 1985 statement of scientific principles regarding chemical carcinogens by the White House Office of Science and Technology Policy offered additional support to the credibility of the federal government’s emerging risk assessment process (Interagency Staff Group on Chemical Carcinogenesis, March 1985).

While some EPA program offices adopted the Ruckelshaus framework more readily than others, the use of risk assessment at EPA increased sharply during the 1980s (Rodricks et al., 1987). New EPA guidelines were proposed in 1984 and finalized in 1986 (U.S. EPA, 1986).

The cancer risk extrapolation procedure developed by EPA’s Carcinogen Assessment Group was refined and made widely available in computer software. The key feature of this procedure was that its output was claimed to be a ‘plausible upper bound’ on the incremental cancer risk associated with any human exposure to a carcinogen (Crump, 1981). The risk assessment procedure also included a carcinogen classification system, which categorized chemicals on the basis of evidence from laboratory and human studies. EPA’s carcinogen classification system, which was modeled after a system devised by the International Agency for Research on Cancer of the World Health Organization, facilitated the development of lists of cancer-causing chemicals that might be of concern to exposed workers, consumers and the general public. Aspects of EPA’s risk assessment procedure began to be used by other federal agencies, such as CPSC and OSHA, and by various state agencies.
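At low doses, the linearized multistage procedure behaves like a straight line through the origin, so a minimal sketch of its output looks like the following. The slope factor and dose are invented values for illustration, not EPA numbers.

```python
def upper_bound_risk(dose, q1_star):
    """Plausible upper bound on lifetime excess cancer risk at a low dose,
    where q1_star is the upper-bound potency (slope factor) from a
    linearized multistage fit. Valid only where q1_star * dose << 1."""
    return q1_star * dose

q1_star = 0.05   # hypothetical slope factor, (mg/kg/day)^-1
dose = 2e-4      # hypothetical lifetime average daily dose, mg/kg/day
print(upper_bound_risk(dose, q1_star))  # ~1e-5 upper-bound excess risk
```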

3.7. Environmentalists use risk assessment

In the 1980s public dissatisfaction with the slow pace of federal regulation of toxic chemicals induced many states to take significant regulatory steps on their own. One of the more notable efforts was Proposition 65, a successful ballot initiative in California. This measure prohibited industrial discharge of carcinogens into water (unless the amount was small enough for no significant impact) and compelled industry to inform citizens of any other carcinogenic exposures (unless emitters could demonstrate that the risk to exposed citizens was less than one chance in 100 000 on a lifetime basis) (Pease et al., 1990).

Led by the Environmental Defense Fund, many environmentalists supported Proposition 65’s use of risk assessment as a strategy to encourage pollution prevention and to breathe meaning into a community’s ‘right to know’ about the health risks of pollution (Roe, 1989).

In addition to Proposition 65, California enacted the Air Toxics ‘Hot Spots’ Information and Assessment Act of 1987. This law mandates the creation of emissions inventories, the conduct of health risk assessments at key industrial facilities and the public reporting of significant risks to all exposed individuals (California Air Pollution Control Officers Association, January 1991). More recently, the California legislature has passed laws requiring existing industrial facilities to reduce toxic emissions until risks are insignificant, as determined by risk assessments. Many environmentalists in California have supported this increasing role for risk assessment in the state’s environmental policy. Other states, such as Wisconsin and New Jersey, are also making increasing use of risk assessments to inform the public about chemical hazards and justify regulatory programs.

Another major national environmental group, the Natural Resources Defense Council, used quantitative risk assessment in 1989 to challenge EPA’s policies regarding reregistration of pesticides (Natural Resources Defense Council, 1989). A major NRDC study, which highlighted the risks of children drinking apple juice contaminated by a carcinogenic metabolite of Alar, was used by the CBS show ‘60 Minutes’ to criticize EPA’s inefficient rule-making process. The study has spurred numerous critiques, rejoinders and follow-up studies (Risk Focus Inc., June 1989; Zeise et al., 1991; Smith, 1992). New legislative proposals in Congress supported by national environmental groups call for the increasing use of risk assessment in pesticide regulation (Statement of Senator Edward M. Kennedy, July 1991).

In debates over the Clean Air Act Amendments of 1990, many environmentalists also supported Republican Senator David Durenberger’s proposal to control air toxics through a combination of technology-based standards and a requirement that each major industrial facility protect a hypothetical ‘maximally exposed individual’ from lifetime cancer risks of one in 10 000 and, eventually, one in 1 000 000 (Doniger, September 1989). Industry groups such as the Chemical Manufacturers Association challenged this particular use of risk assessment, arguing that EPA’s analytical procedure lacks sufficient scientific validity to be used to justify mandatory shutdowns of productive facilities (McBrayer, September 1989). When the Senate revised the Durenberger plan and applied the bright line to the maximally exposed actual person, NRDC and many industry groups abandoned the Senate bill in favor of a House plan that did not constrain how quantitative risk assessment is to be done (Clean Air Act Amendments, October 1990).

Although environmentalists make use of risk assessment in selected advocacy campaigns, there is considerable concern within the environmental community about the ethics of risk-based policy, the technical accuracy of risk estimates and the practicality of risk assessment as a tool to expedite environmental rule-making (Silbergeld, June 1993).

Despite their reservations about risk assessment, environmentalists have not developed any systematic alternatives to risk-based policy that have achieved widespread support. Meanwhile, some environmentalists prefer to work for environmental protection within a risk-based policy framework by shifting the technical burdens of risk assessment from government to industry.

4. Problems with current risk assessment practice

The growing importance of risk assessment has led to increasing scrutiny and criticism of this analytical tool. While some criticisms of current practice are unwarranted (e.g. the claim that all risk estimates are exaggerated), there are genuine problems with current risk assessment practice that need to be corrected. Some flaws in current practice, which cause exaggerated risk estimates, are harmful because they cause unnecessary resource expenditures in the public and private sectors. Other flaws, which lead to underestimation of risk, are harmful because they lead to neglect of significant risks that need to be reduced. In this section, we highlight flaws in current practice and ongoing efforts that are designed to improve practices.

4.1. Use of arbitrary exposure scenarios

In the day-to-day practice of risk assessment, numerical estimates of human risk are often based on single-exposure scenarios that are hypothetical and arbitrary. Consider the following examples:

- a resident is assumed to live 200 m from an industrial source of toxic air pollution, breathing maximum predicted outdoor concentrations of a single chemical for 70 years, 24 h per day (Hawkins, 1991; National Research Council, 1991);
- for 70 years a sport fisherman is assumed to eat 6.5 g per day of contaminated fish caught from specific freshwater river sites near an industrial source of water pollution when fish advisories are in place at these river sites (U.S. EPA, August 1990);
- for 350 days per year a child is assumed to eat 200 mg of dirt per day while playing in the soil at a Superfund site that may be surrounded by fences with few residences nearby (Harris and Burmaster, March 1992).



The major problem with arbitrary exposure scenarios is that risk managers, who are responsible for deciding how much to protect public health, have no clue how many citizens (if any) are actually exposed to the amount of risk indicated in the exposure scenario. With virtually no information about the actual distribution of human exposures and risk, managers have little basis for determining how protective alternative decisions would be. The arbitrariness may lead to too much and too little risk-protection expenditures, depending upon the circumstances.

Arbitrary exposure scenarios also raise procedural questions. In many cases, the risk manager implicitly delegates his or her accountability to a risk assessor who has chosen an unspecified degree of conservatism in the form of an arbitrary exposure scenario. This shift of accountability is undemocratic because it puts too much power in the hands of unelected risk assessors. Unless the risk manager makes an explicit decision about how much money will be expended to protect people from various levels of risk, it is very difficult for interested parties and the public to scrutinize the merits of the decision. While this is ultimately a risk management problem, the risk assessment process can be designed to minimize it.

Modern methods of risk analysis are available to address the challenge of characterizing variability in human exposure to contaminants. Applications of such methods are available in the scientific literature (Paustenbach et al., 1991; Thompson et al., 1992). Similar methods are already employed in many EPA reports on criteria air pollutants (e.g. lead and ozone), but the same methods have yet to be applied to toxic chemicals on a systematic basis.
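The sketch below shows the flavor of such a probabilistic (Monte Carlo) exposure analysis: instead of one arbitrary scenario, input distributions are sampled to produce a distribution of doses whose percentiles a risk manager can inspect. The lognormal parameters here are invented for illustration; a real assessment would fit them to monitoring and survey data.

```python
import math
import random

random.seed(1)  # reproducible illustration

def simulate_daily_dose(n=100_000, conc_gm=1.0, conc_gsd=2.5,
                        intake_gm=0.005, intake_gsd=1.8):
    """Monte Carlo samples of dose = concentration (mg/kg) x intake (kg/day),
    with each input drawn from a lognormal distribution specified by a
    geometric mean (gm) and geometric standard deviation (gsd)."""
    doses = []
    for _ in range(n):
        conc = random.lognormvariate(math.log(conc_gm), math.log(conc_gsd))
        intake = random.lognormvariate(math.log(intake_gm), math.log(intake_gsd))
        doses.append(conc * intake)
    return sorted(doses)

doses = simulate_daily_dose()
print("median dose (mg/day):", doses[len(doses) // 2])
print("95th percentile dose:", doses[int(0.95 * len(doses))])
```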

4.2. Excessive credence to the ‘carcinogen’ classification

The results of laboratory animal bioassays and epidemiological studies play the primary role in decisions about whether a chemical is classified as a ‘carcinogen’ by organizations such as the International Agency for Research on Cancer and the Environmental Protection Agency (Office of Technology Assessment, November 1987). Given the public’s deep fear of cancer, the ‘carcinogen’ label has profound implications in the marketplace, in tort litigation, in implementing community and worker ‘right-to-know’ laws, in certain regulatory decisions and in international trade (Moolenaar, 1992).

Few lay people realize that the qualitative determination of carcinogenicity provides little information about human risk. Without information about the extent of human exposure and the agent’s carcinogenic potency, knowing that a chemical is ‘carcinogenic’ should not be accorded much significance (Center for Risk Analysis, July 1992). This point is widely misunderstood in journalistic coverage of chemical risks, which often draws attention to the mere finding of carcinogenicity as if it were important to human risk. Current risk assessment practice exacerbates this syndrome by reporting the results of official carcinogen classification schemes without adequate explanation. EPA is now considering proposals to refine or even abolish its carcinogen classification scheme (Ashby et al., 1990; U.S. EPA, November 1992).

4.3. Excessive reliance on the findings from animal cancer tests

Due to the paucity of human data, risk assessors rely heavily on the findings from long-term animal studies in the dose-response aspects of cancer risk assessment. These are often the only hard data available. Yet, a good case can be made that current practices place too much faith in the results of laboratory animal bioassays relative to other types of scientific information and judgement. Blind faith in the results of animal studies can result in either under- or over-prediction of human risk, depending upon the circumstances.

If an existing chemical has not yet been tested in a long-term animal bioassay, risk assessors assume (implicitly) that the chemical’s carcino- genic potency is zero. This analytic practice is questionable. If a poorly tested chemical is known to be genotoxic and is structurally similar to another chemical that has tested positive in ani- mal and human studies, then zero is not a sensible potency estimate. In some regulatory decisions

42 J. D. Grcrhum I To.ricoko~y 102 (1995) 29 - 52

about new chemicals, assessments of carcinogenic risk are made on the basis of structure-activity relationships, since long-term data from animals and humans may not be available. Even if a chemical tests negative in the first rodent strain that is tested, the estimated low-dose carcinogenic potency in humans should not necessarily be zero. Proponents of the standard animal bioassays ac- knowledge that current testing protocols cannot detect the carcinogenicity of those chemicals with only moderate carcinogenic potency (Huff and Haseman, 1991). Information on a chemical’s chronic toxicity, genotoxic potential and chemical structure should have some influence on the chemical’s projected carcinogenic potency at low levels of human exposure (Ashby and Tennant, 1988, 1991; Tennant and Ashby 1991).

Once a chemical has tested positive in animals, the resulting linear potency factor for cancer takes on a high presumption of validity, even if phar- macokinetic and pharmacodynamic data suggest that the standard potency calculation may be inappropriate. Some animal carcinogens (e.g. chloroform formaldehyde and unleaded gasoline vapors) now have a substantial body of special- ized information that has yet to be adequately incorporated into standard risk assessments (Gra- ham 1991).

Historically, risk assessors have assumed that the amount of the carcinogen (parent compound or active metabolite) delivered to the target organ in the body is proportional to the amount of the carcinogen administered. Biostatisticians have noted, however, that this assumption will produce inaccurate estimates of low-dose risk in cases where pharmacokinetic processes in the body are nonlinear (Hoel et al., 1983; Krewski et al., 1987). A small, yet growing, number of compounds have been shown to exhibit nonlinear pharmacokinetics (Gehring et al., 1978; Ramsey and Anderson, 1984; Starr and Buck, 1984; Anderson et al., 1987; Corley et al., 1990; Travis et al., 1990), although the relevant data and models are of varying quality and robustness (Farrar et al., 1989; Portier and Kaplan, 1989; Bois et al., 1990; Spear et al., 1991; Hattis et al., 1990; Woodruff et al., 1992). Governmental risk assessments are only beginning to be revised to reflect such chemical-specific information.
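A standard way to express this nonlinearity, used in several of the physiologically based models cited above, is Michaelis-Menten saturation of metabolism; the notation here is generic rather than taken from any one of those papers:

\[
v = \frac{V_{\max}\, C}{K_m + C}
\]

where \(v\) is the rate at which parent compound is metabolized, \(C\) is its concentration at the site of metabolism, \(V_{\max}\) is the maximum metabolic rate and \(K_m\) is the concentration at which metabolism runs at half its maximum. At low concentrations (\(C \ll K_m\)) the delivered dose of an active metabolite rises roughly in proportion to the administered dose, but near saturation (\(C \gg K_m\)) it does not, so a potency factor estimated from high-dose bioassay data can misstate low-dose risk in either direction.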

It has also been traditional to assume that any dose of a carcinogen, no matter how small, will be associated with an increased risk of cancer (Zeise et al., 1987). For some carcinogens, particularly those that are not genotoxic (i.e. do not interact with DNA), recent pharmacodynamic research suggests that low doses of exposure may not be associated with an increase in cancer risk. For example, carcinogens that generate tumors by causing cell proliferation in the target organ may exhibit nonlinear dose-response relationships and in some cases may exhibit strict no-effect levels of exposure (Cohen and Ellwein, 1990; Cohen et al., 1991; Monticello et al., 1992).
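The practical stakes of this assumption can be seen by comparing the two low-dose extrapolation forms at issue; the symbols are generic illustrations, not the notation of any particular guideline:

\[
\text{linear, no threshold:}\quad R(d) \approx q_1^{*}\, d
\qquad\text{versus}\qquad
\text{threshold:}\quad R(d) = 0 \ \ \text{for } d < d_0
\]

where \(R(d)\) is the extra lifetime cancer risk at dose \(d\), \(q_1^{*}\) is an upper-bound potency (slope) factor and \(d_0\) is a no-effect level. Under the first form every reduction in exposure buys some risk reduction; under the second, expenditures that push exposures below \(d_0\) buy none, which is why the choice between them matters so much to risk managers.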

Moreover, the tumors observed in laboratory animal studies may not be relevant to human risk, as is the case with the kidney tumors in male rats caused by inhalation of wholly volatilized unleaded gasoline (Risk Assessment Forum, September 1991). Given current risk assessment conventions, federal agencies have difficulty using this kind of scientific information.

Biologically-based models are being developed that incorporate information beyond the tumor counts from an animal test. Such models have been applied to only a handful of chemicals because the needed data are not available for most chemicals. Such data can be expensive to collect and analyze. If such data are to be generated on a large number of chemicals for the purposes of risk assessment, government agencies will need to adopt a posture of receptivity to the use of new kinds of scientific information.

4.4. The degree of uncertainty in risk estimates is poorly characterized

Most real-world risk assessments produce a single risk number that is intended to represent a plausible upper bound on the cancer risk that may result from a source. When population risk assessments are performed, the final result is often a single number that represents an incidence rate or 'body count' due to a specific factory or product. EPA guidelines stress that the agency's numerical estimates of cancer risk are unlikely to be too low and that the true risk may be as low as zero (U.S. EPA, 1986), although this caveat is not always communicated prominently to the public.


This type of qualitative uncertainty analysis is not as helpful to risk managers as a numerical distribution of uncertainty would be (Finkel, January 1990). The plausible upper bounds that are reported are not necessarily upper bounds, since factors such as indirect exposures and variations in human susceptibility are rarely considered (Finkel, 1989). The possibility that the risk is zero is acknowledged, but no scientific judgement is offered about the likelihood of this possibility, which varies from case to case (Gray and Graham, 1991). The huge range of risk values between zero and the alleged upper bound is implicitly treated as equally likely (since no distinctions are drawn), even though some values may have much more scientific support than others (Sielken, 1989). No central estimates of risk are typically reported (American Industrial Health Council, July 1989).

Incomplete uncertainty analysis makes it very difficult for risk managers to make sound decisions. For instance, when environmental threats are compared for purposes of setting priorities, it is misleading to compare two upper bounds, since scientists may suspect that the upper bound on risk for one exposure is far more plausible than the upper bound on risk for another exposure (Nichols and Zeckhauser, 1988).

A related problem arises in standard setting. If a risk manager is thinking about banning a pesticide, it might be useful to compare the expected reduction in risk to the expected benefits of the pesticide. Agricultural economists would typically report a central estimate of the pesticide's benefits (not the most optimistic or pessimistic estimate) (Zilberman et al., 1993). Since standard risk assessment practices do not report a central estimate of the pesticide's risk, a fair comparison of benefits and costs cannot be performed. The benefits of a pesticide are also very difficult to measure with precision, which makes it doubly important that uncertainty be characterized fairly on both sides of the risk-benefit equation (General Accounting Office, 1991).

If risk assessors did a better job of characterizing the degree of uncertainty in their risk estimates, then the urgent need for additional scientific research to address resolvable uncertainties would be apparent (Hammitt and Cave, 1991). Under current practices, the public is presented with crisp estimates of risk that have the veneer of scientific precision and credibility. Since it seems that agencies have the answers, the need for better science is difficult for Congress and the public to understand (Graham et al., 1988).

Better characterizations of uncertainty in risk assessment would also force risk managers to make the policy judgements about what to do in the face of uncertainty. Insofar as risk managers are not informed about the fragility of risk estimates, they are not in a good position to decide whether to regulate now or await additional scientific information (Finkel and Evans, 1987). Only by reporting numerical distributions of uncertainty can risk assessors make it possible for risk managers to consider the value of additional scientific information. While some risk managers may seek to avoid accountability for policy judgements by hiding behind precise risk numbers, the risk assessment process should not facilitate this behavior by permitting false precision in risk assessment.

Bayesian analytical procedures can be used to generate probabilistic risk estimates using all of the available scientific evidence (Morgan and Henrion, 1990). While agencies have not yet applied such techniques to toxic chemicals, they have been applied by the Nuclear Regulatory Commission in assessments of the safety of the nuclear fuel cycle (Bonano et al., 1990). EPA's air office has also demonstrated the feasibility of such techniques in its assessments of the noncancer health effects of selected criteria air pollutants (Whitfield and Wallsten, 1989).
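A minimal numerical illustration of the difference between a single upper bound and a distribution of uncertainty follows. The lognormal spreads assigned to potency and dose are invented for the example; they stand in for the evidence-based judgements (or formal elicitations) that a real Bayesian analysis would supply.

```python
import numpy as np

# Sketch: propagate uncertainty in potency and dose into a risk distribution,
# then compare the central estimate with the conventional upper percentiles.
# All parameter values are hypothetical, chosen only for illustration.

rng = np.random.default_rng(1)
n = 100_000

# Uncertain slope factor, (mg/kg-day)^-1: geometric mean 0.02, broad spread.
potency = rng.lognormal(mean=np.log(0.02), sigma=1.0, size=n)

# Uncertain lifetime average daily dose, mg/kg-day.
dose = rng.lognormal(mean=np.log(1e-3), sigma=0.7, size=n)

risk = potency * dose  # extra lifetime cancer risk under low-dose linearity

print(f"median risk:          {np.median(risk):.2e}")
print(f"95th percentile risk: {np.percentile(risk, 95):.2e}")
print(f"99th percentile risk: {np.percentile(risk, 99):.2e}")
```

Reporting the median alongside the upper percentiles shows the risk manager how conservative a 'plausible upper bound' actually is and where further research might narrow the spread.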

In November 1991, EPA's Risk Assessment Council developed new agency-wide guidance on risk characterization for risk assessors and risk managers. This guidance discourages single point estimates of risk and encourages agency analysts to provide a complete picture of the scientific basis of the agency's risk assessments (Memorandum from F. Henry Habicht II, February 1992). Thus far, however, the guidance does not appear to be changing risk assessment practices across all EPA programs as was hoped.


4.5. Noncancer health effects may be poorly analyzed

While 'noncancer health effects' is a very loose phrase, it is recognized to encompass important public health concerns such as neurotoxicity, reproductive and developmental toxicity, immunotoxicity and organ-specific toxicity. Many risk assessments of chemical exposure are dominated by concerns about cancer risks from chronic low-level exposures. While a focus on cancer risk is sometimes justified, concerns have been raised that noncancer health effects may be neglected by risk assessors (Summary of Workshop to Review an OMB Report on Risk Assessment and Management, Winter 1992).

If cancer is the most sensitive endpoint for an exposure (i.e. noncarcinogenic effects disappear at the lower levels of exposure where predicted excesses in cancer incidence persist), then the focus on cancer risk may be justified. For example, among those chemicals that have been assessed for both cancer and noncancer effects, the 'safe' levels of exposure for noncancer effects are often 100 to 1000 times higher than the levels of exposure associated with an excess lifetime cancer risk of one in a million (Personal communication from Sandra Baird, August 1993). But many chemicals are not known to cause cancer and, at least for some agents, the systemic (noncancer) effects may occur at lower doses than the doses at which cancer can be detected.
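The arithmetic behind such comparisons is simple; the slope factor and RfD used here are hypothetical values chosen only so the numbers land in the cited range. If a chemical's cancer slope factor is \(q = 4.5\ \text{(mg/kg-day)}^{-1}\), the dose associated with a one-in-a-million risk under low-dose linearity is

\[
d_{10^{-6}} = \frac{10^{-6}}{q} = \frac{10^{-6}}{4.5} \approx 2.2 \times 10^{-7}\ \text{mg/kg-day},
\]

so an RfD of \(2 \times 10^{-4}\) mg/kg-day for the same chemical would sit roughly 900-fold above it, within the 100- to 1000-fold range cited above.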

Historically, there have been cases when risk assessors simply neglected to assess noncancer health and ecological effects, although most governmental risk assessments now consider all types of health effects. Based on the public deliberations in EPA's ongoing reassessment of dioxin, it is apparent that previous risk assessments of dioxin may have neglected some subtle immunological and developmental effects. It should not be assumed that protecting against cancer will automatically protect against all other adverse health effects.

More importantly, technical questions have been raised about whether the traditional safety-factor method of noncancer risk assessment is appropriate. The concept of a strict dose-response threshold may not be appropriate for all noncancer health effects. For example, the neurological effects of lead poisoning are being detected at lower and lower blood lead levels, which suggests that there may not be a strict threshold for these effects (CDC, October 1991).

Even if thresholds exist, questions have been raised about whether the safety-factor method is based on sound science and public policy. For example, NOAEL values are highly sensitive to the number of animals used in toxicity tests and to the spacing of test doses, yet the size of the safety factor is invariant to study design. Use of new 'benchmark dose' techniques may help address some of these concerns (Crump, 1984; Kimmel and Gaylor, 1988).
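For readers unfamiliar with the two approaches, the contrast can be stated compactly; the uncertainty factors shown are common illustrative choices, not a fixed prescription. The traditional method divides an experimental NOAEL by uncertainty factors,

\[
\text{RfD} = \frac{\text{NOAEL}}{UF_1 \times UF_2 \times \cdots} \quad \text{(e.g. 10 for animal-to-human and 10 for human variability)},
\]

whereas the benchmark dose approach fits a dose-response model \(P(d)\) to all of the data and defines the benchmark dose (BMD) as the dose producing a specified extra risk (e.g. 10%):

\[
\frac{P(\text{BMD}) - P(0)}{1 - P(0)} = 0.10,
\]

with a statistical lower confidence limit on the BMD replacing the NOAEL. Because the BMD uses the full dose-response curve and rewards better-designed studies with tighter confidence limits, it addresses the study-design sensitivity noted above.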

LOAEL and NOAEL values are also sensitive to how 'adverse health effect' is defined. This is a crucial issue because scientific capabilities to detect subtle biological effects of chemicals are increasing rapidly. Even if human exposures exceed the ADI (RfD) for a well-defined adverse effect, the implications for risk management are not clear. Current methods cannot be used to estimate the frequency or severity of the adverse health effects that may occur in the population. When ADIs (RfDs) are exceeded for more than one endpoint, current methods do not have a procedure for giving weight to this more serious situation.

The uncertainty factors used by regulatory toxicologists are not designed to incorporate other relevant policy considerations such as the number of people exposed to potentially adverse effects, the feasibility of achieving the ADI (RfD) and the economic costs and health risks that may be associated with setting the ADI (RfD) as a regulatory limit. These factors may be considered by risk managers after the ADI (RfD) is reported (unless risk managers become preoccupied with simply reducing exposures below the ADI).

Insufficient technical and policy attention has been given to a wide range of noncancer health effects that may occur at low doses. Federal and state agencies are beginning to recognize this problem and to devote more resources to perfecting risk assessment procedures for noncancer health effects.


4.6. Confusing communications of risk estimates

When reporting the results of a complex risk assessment, a key challenge is to characterize the risk in a manner that is meaningful to the public. The standard practice is to report a plausible upper bound on the extra lifetime cancer risk to an exposed person (e.g. 4 × 10⁻⁴) or the numerical ratio of human exposure to the ADI (RfD) (e.g. 1.3). Since journalists and opinion leaders do not understand how these numbers are calculated or what they mean, they are in a poor position to provide the public with a sense of perspective about the risk. Unless the presentation of risk estimates improves, we can only expect the public to become more confused as the results of numerous risk assessments are reported in the future.
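The two kinds of numbers in question are easy to compute but hard to interpret; a small sketch with hypothetical inputs shows where figures like 4 × 10⁻⁴ and 1.3 come from.

```python
# Sketch of the two standard risk-communication numbers; all inputs hypothetical.

slope_factor = 0.8   # upper-bound cancer potency, (mg/kg-day)^-1
cancer_dose = 5e-4   # lifetime average daily dose, mg/kg-day
extra_risk = slope_factor * cancer_dose
print(f"upper-bound extra lifetime cancer risk: {extra_risk:.0e}")  # 4e-04

rfd = 1.0e-2         # reference dose (ADI), mg/kg-day
exposure = 1.3e-2    # estimated human dose, mg/kg-day
hazard_quotient = exposure / rfd
print(f"exposure/RfD ratio: {hazard_quotient:.1f}")                 # 1.3
```

Neither output says how many people face the risk or how severe the effect would be, which is precisely why the bare numbers confuse lay audiences.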

One strategy that deserves more consideration by risk assessors is comparison of target risks to other risks encountered by the public in daily life. Early efforts at risk comparison were provocative but were not sensitive to important public values. A growing body of social science research provides guidance on how to compare risks in a thoughtful and responsible manner (National Research Council, 1989; Roth et al., 1990; Morgan et al., 1992). Federal agencies are only beginning to incorporate these insights into their risk assessments and communication strategies.

4.7. Environmental inequity

The environmental justice movement is concerned that a disproportionate share of the risks of environmental pollution is incurred by low-income and minority populations. A growing body of scientific evidence and political advocacy is drawing attention to the inequitable distribution of risks in society (Zimmerman, 1993).

The risk assessment framework is not inherently hostile to concerns about environmental equity. However, the day-to-day practice of risk assessment needs to be refined in order to provide scientific information relevant to environmental equity. For example, rates of fish consumption in various ethnic populations are often much higher than the national average, yet such data are not typically used in the risk assessment process. More generally, risk assessors rarely report information on the racial and ethnic mix of citizens at risk from pollution sources. When analysts make comparisons to provide a sense of perspective about risk, it may be appropriate in some cases to compare environmental risks to the specific types of daily risks (e.g. homicide and violence) faced by low-income and minority populations. Although the risk assessment community is becoming more sensitive to the environmental equity issue, more progress is needed in the years ahead.

5. The danger of proceeding without risk assessment

Frustration with the current risk assessment process is widespread. Anti-regulation critics see current risk-assessment methods as a ploy to exaggerate risk and, thereby, provide technical support for expensive regulatory programs. Pro-regulation critics see the analytical burdens of risk assessment as a ploy to slow down and ultimately paralyze the regulatory process. Many members of the scientific community see risk assessment as an assumption-laden exercise that cannot be validated with direct measurement. While all of these concerns have merit, it is important to reconsider why risk-based policy was developed in the first place.

Without risk assessment, regulators do not know which contaminants, exposures and sources should be targets of regulation. Some form of risk assessment is, therefore, necessary to define which 'pollution' is worth preventing and controlling. Likewise, only risk assessment can begin to tell us whether it is worthwhile to insist on emission reductions greater than what can be achieved by installing the best available control technology. If we are to pursue newer policies such as toxics-use reduction, risk assessment will be necessary to distinguish chemical uses that pose no significant health threat from those uses that present significant risks. Otherwise, the scarce resources devoted to toxics-use reduction will not be expended wisely (Laden and Gray, 1993).

The economic imperative for risk assessment is real. The public and private sectors of the economy cannot afford a qualitative regulatory process that seeks to prevent or clean up all potential hazards from human exposure to toxic agents. In recent ballot initiatives in the states of California (1990), Ohio (1992) and Massachusetts (1992), large majorities of voters rejected ambitious environmental protection policies in part because they did not perceive enough risk reduction to justify the perceived economic costs. In the long run, the American people will demand risk assessment because they will want to know what benefits they are reaping from their investments in environmental protection.

To convey the usefulness of risk assessment, it is instructive to consider the case examples in Table 2.

These examples are offered to make the point that risk assessment has already proven to be an influential tool in risk management decisions. It is a two-edged sword in the sense that sometimes its results favor environmental advocates and sometimes its results favor industry. While it is easy to criticize this tool, the track record of success should not be forgotten.

The technical uncertainties in current risk estimates are large. There is certainly a need for improvements in the science to refine risk assessments. Fortunately, a risk-assessment framework can assist decision makers in allocating scarce research dollars to the most important resolvable uncertainties. Research organizations such as EPA's Office of Research and Development (U.S. EPA, June 1990) and the Chemical Industry Institute of Toxicology (McClellan, 1991) now use a risk assessment framework to guide research planning.

6. Congressional and administration interest in risk assessment

During the clean air debate of 1990, Members of Congress learned that environmentalists and industry officials had basic disagreements about the validity of current risk assessment methods and their proper use in regulatory decision-making. Congress saw that this dispute was likely to arise again in future legislative deliberations about hazardous wastes, pesticides, radiation control, water pollution, drinking water and indoor air quality.

Hence, Congress called for a National Academy of Sciences study of risk assessment methods. This study, chaired by Dr. Kurt Isselbacher of the Harvard Medical School, was released in 1994 and has called for significant reforms of risk assessment practice. Congress also authorized a bipartisan Commission on Risk Assessment and Management to assess how risk assessments should be conducted and used in environmental decision-making. The members of the Commission have been appointed and deliberations should begin shortly.

The 103rd Congress has already demonstrated substantial interest in risk assessment. Senator Daniel Patrick Moynihan (D-NY) has introduced legislation that would require EPA to use risk assessment to rank the importance of environmental problems for the purpose of setting Agency priorities. Senator Bennett Johnston (D-LA) won approval on the Senate floor of an amendment to the EPA Cabinet-elevation bill that would require rules to be accompanied by formal risk analyses. In the House of Representatives, a bipartisan coalition of congressmen has introduced a risk-communication bill aimed at strengthening EPA's procedures for communicating risk estimates to the public. Specific congressional discussions of Superfund reauthorization, reform of food safety regulation (the Delaney clauses) and reauthorization of the Occupational Safety and Health Act are also emphasizing the proper role of risk assessment in public policy.

The Clinton Administration has also indicated a strong interest in risk assessment. For example, the Administration's 1993 executive order on regulatory planning emphasizes the need for comparative risk assessment to help determine an agency's rule-making priorities. A White House task force is currently examining the case for more far-reaching reforms of the federal government's risk assessment process.

Risk assessment is now widely recognized as an essential analytic tool by federal policymakers. While risk assessment is in widespread use and should be used more in the years ahead, numerous problems have been revealed in day-to-day practice. These problems need to be considered carefully as reformers strive to improve national risk policy.


Table 2

Case studies in risk assessment and management

Attempted ban of saccharin. In the late 1970s, FDA attempted to ban saccharin on the grounds that the artificial sweetener caused cancer in animals. The public was skeptical of such charges and pressured Congress to allow continued consumption of saccharin, which had become a very popular product. As questions were raised about the relevance of the animal data to humans, large-scale epidemiological studies were launched to investigate the hypothesis of human cancer risk. Risk assessments based on the new epidemiological data indicated that the risk of saccharin was quite small compared to what was projected on the basis of animal experiments. The reassuring risk assessments based on epidemiology have led to increased confidence that consumers of saccharin are not subjecting themselves to large increases in cancer risk. Recently, laboratory scientists have made significant progress in modeling the mechanisms that caused rodents to develop tumors when exposed to large amounts of saccharin. This research also suggests that current levels of human exposure to saccharin are unlikely to cause significant increases in the risk of bladder cancer (Cohen and Ellwein, 1990).

Phase-out of leaded gasoline. In 1985, EPA decided to begin an accelerated phase-out of leaded gasoline. The decision did not result from any particular pressure from environmental groups or mass media coverage of the ill effects of lead. It resulted primarily from an EPA risk assessment of lead's noncancer health effects (including cardiovascular and neurological effects). Estimates of the quantitative health benefits of phasing out lead in gasoline played a key role in persuading skeptics within both EPA and the Reagan Administration. In this assessment of noncancer endpoints, EPA chose not to use the traditional methods of regulatory toxicology and instead employed modern methods of risk assessment (U.S. EPA, February 1985).

Regulation of refueling vapors at gas stations. During the 1985-1988 period, EPA examined whether to adopt regulations to reduce human exposures to unleaded gasoline vapors that occur during refueling of vehicles. The volatile organic compounds emitted during refueling posed two distinct risks: they aggravate the urban ozone problem, which is associated with both acute and chronic respiratory effects, and they create a potential cancer hazard among people who inhale the vapors. In a complex risk assessment process, the Agency decided to regulate refueling vapors primarily on the basis of ozone control rather than cancer prevention. In this example, EPA's consideration of new mechanistic information did not change the rule-making outcome, but it did induce the agency to base its decision on a more valid scientific rationale (ozone control) (Egan-Keane et al., 1991). Congress subsequently required car companies to control refueling vapors in the design of new vehicles.

Occupational regulation of benzene exposures. OSHA's unsuccessful effort in 1978 to tighten the permissible exposure limit for benzene led to increased interest in assessing quantitatively the leukemogenic risk of benzene exposures. Using epidemiological and animal data, several risk assessors estimated that continuous occupational exposure to benzene for 30-40 years at 10 ppm (8-h time-weighted average) was associated with a significant increase in the risk of contracting certain forms of leukemia (White et al., 1982; Rinsky et al., 1987). On the basis of these risk assessments, organized labor successfully pressured OSHA during the Reagan Administration to tighten the benzene standard to 1 ppm, as Dr. Eula Bingham had proposed originally in 1978.

Reduction of dioxin emissions from paper mills. Under the Clean Water Act, EPA and the states share authority to set risk-based standards to protect the public health from toxic chemicals in surface water. Since 1987, when dioxin was first detected in the effluent of kraft mills, risk-based water quality standards, in conjunction with consumer demand and liability concerns, have played an important role in accelerating technological progress in the paper industry. Mills are rapidly reducing the use of chlorine gas as a bleaching agent. During the last 6 years, the amount of dioxin in effluent and in fish near kraft mills has declined substantially. At many rivers, fish advisories have been lifted as the risks to subsistence and recreational fishermen have been curtailed.

Reregistration of alachlor. Alachlor is a proprietary chemical that is widely used as a commercial weed-killer in growing corn, soybeans and other crops. Although it is classified as a probable human carcinogen (B2) by EPA based on a strong tumor response when administered to animals at high doses, EPA decided to allow its continued use under limited conditions because extensive risk assessments showed that the potential risks from human exposure are acceptable (Shabecoff, December 1987). EPA has insisted that additional monitoring be conducted of surface waters and groundwater to assure that significant risks from use of alachlor do not emerge. In Canada, which lacks a rigorous internal risk assessment process, alachlor was banned even though an external science court recommended that alachlor and metolachlor (the primary competitive product) be regulated in an equivalent fashion (Canada Retains Ban on Alachlor as Health Risk, 1988; Dowd, 1988).

References

AFL-CIO (American Federation of Labor and Congress of Industrial Organizations) vs. OSHA (July 7, 1992) 11th Circuit Court of Appeals.
Albert, R.E., Train, R.E. and Anderson, E.L. (1977) Rationale developed by the Environmental Protection Agency for the assessment of carcinogenic risk. J. Natl. Cancer Inst. 58, 1537-1541.


Ames, B.N. and Gold, L.S. (1990a) Chemical carcinogenesis: Too many rodent carcinogens. Proc. Natl. Acad. Sci. USA 87, 7772-7776.
Ames, B.N. and Gold, L.S. (1990b) Too many rodent carcinogens: Mitogenesis increases mutagenesis. Science 249, 970-971.
American Textile Manufacturers vs. Donovan (1981) 452 U.S. 513-514, n. 32.
American Industrial Health Council (July 1989) Report of an Ad Hoc Study Group, Presentation of Risk Assessments of Carcinogens. Washington, DC.
Anderson, M.E., Clewell, H.J., Gargas, M.L., Smith, F.A. and Reitz, R.H. (1987) Physiologically based pharmacokinetics and the risk assessment process for methylene chloride. Toxicol. Appl. Pharmacol. 87, 185-205.
Anderson, E. and the Carcinogen Assessment Group (1983) Quantitative approaches in use to assess cancer risk. Risk Analysis 3, 277-295.
Ashby, J. and Tennant, R.W. (1988) Chemical structure, salmonella mutagenicity and extent of carcinogenicity as indicators of genotoxic carcinogenesis among 222 chemicals tested in rodents by the U.S. NTP. Mutat. Res. 204, 17-115.
Ashby, J. and Tennant, R.W. (1991) Definitive relationships among chemical structure, carcinogenicity and mutagenicity for 301 chemicals tested by the U.S. NTP. Mutat. Res. 257, 229-306.
Ashby, J. et al. (1990) A scheme for classifying carcinogens. Regul. Toxicol. Pharmacol. 12, 270-295.
Barnard, R.C. (1990) Some regulatory definitions of risk: Interaction of scientific and legal principles. Regul. Toxicol. Pharmacol. 11, 201-211.
Barnes, D.G. and Dourson, M. (1988) Reference dose (RfD): Description and use in health risk assessments. Regul. Toxicol. Pharmacol. 8, 471-486.
Bois, F.Y., Tozer, T.N. and Zeise, L. (1990) Precision and sensitivity analysis of pharmacokinetic models for cancer risk assessment: Tetrachloroethylene in mice, rats and humans. Toxicol. Appl. Pharmacol. 102, 300-315.
Bonano, E.J., Hora, S.C., Keeney, R.L. and von Winterfeldt, D. (May 1990) Elicitation and Use of Expert Judgement in Performance Assessment for High-Level Radioactive Waste Repositories. Report prepared for the Nuclear Regulatory Commission, NUREG/CR-5411.
Brain, J.D., Beck, B.D., Warren, A.J. and Shaikh, R.A. (1988) Variations in Susceptibility to Inhaled Pollutants: Identification, Mechanisms and Policy Implications. Johns Hopkins University Press, Baltimore, MD.
California Air Pollution Control Officers Association (January 1991) Air Toxics 'Hot Spots' Program Risk Assessment Guidelines.
Canada Retains Ban on Alachlor as Health Risk (February 1, 1988) Environ. Health Lett. 6.
The Carnegie Corporation (1993) Risk and the Environment: Improving Regulatory Decision Making. The Carnegie Commission on Science, Technology and Government, New York.
Carter, L.J. (1979) Dispute over cancer risk quantification. Science 203, 1324-1325.
Casanova, M. and Heck, H. (August 1991) The impact of DNA-protein cross-linking studies on quantitative risk assessments of formaldehyde. CIIT Activities 11(8), 1-7.
CDC (U.S. Centers for Disease Control and Prevention) (October 1991) Preventing Lead Poisoning in Young Children. Atlanta, GA.
Center for Risk Analysis (July 1992) Recommendations for Improving Cancer Risk Assessment. Harvard School of Public Health, Boston, MA.
Clean Air Act Amendments of 1990 (October 26, 1990) Conference Report to Accompany S. 1630. 101st Congress, Second Session.
Cohen, S.M. and Ellwein, L.B. (1990) Cell proliferation in carcinogenesis. Science 249, 1007-1011.
Cohen, S.M., Purtilo, D.T. and Ellwein, L.B. (1991) Pivotal role of increased cell proliferation in human carcinogenesis. Cell Prolif. Cancer 4, 371-382.


Corley, R.A., Mendrala, A.L., Smith, F.A., Staats, D.A., Gargas, M.L., Conolly, R.B., Andersen, M.E. and Reitz, R.H. (1990) Development of a physiologically based pharmacokinetic model for chloroform. Toxicol. Appl. Pharmacol. 103, 512-527.
Crump, K.S. (1981) An improved procedure for low-dose carcinogenic risk assessment from animal data. J. Environ. Pathol. Toxicol. 5, 675-684.
Crump, K.S. (1984) A new method for determining allowable daily intakes. Fundam. Appl. Toxicol. 4, 854-871.
Crump, K.S., Hoel, D.G., Langley, C.H. and Peto, R. (1976) Fundamental carcinogenic processes and their implications for low-dose risk assessment. Cancer Res. 36, 2973-2979.
Doniger, D. (September 21, 1989) National Clean Air Coalition. Clean Air Act Amendments of 1989. Hearings before the Senate Committee on Environment and Public Works, 101st Congress, First Session, pp. 28-30.
Dourson, M.L. and Stara, J.F. (1983) Regulatory history and experimental support of uncertainty (safety) factors. Regul. Toxicol. Pharmacol. 3, 224-238.
Dowd, R.M. (1988) The Canadian 'science court' model. Environ. Sci. Tech. 22, 258.
Ecker, W.G. (September 1990) Historical aspects of poisoning and toxicology. Am. J. Forensic Med. Pathol. 11, 261-264.
Egan-Keane, S., Graham, J.D. and Ruder, E. (1991) Unleaded Gasoline Vapors. In: J.D. Graham (Ed), Harnessing Science for Environmental Regulation, Praeger, Westport, CT, pp. 39-62.
EPA (U.S. Environmental Protection Agency) (May 25, 1976) Interim procedures and guidelines for health risks and economic impact statements of suspected carcinogens. Fed. Reg. 41, 21402.
EPA (U.S. Environmental Protection Agency) (1980) Guidelines and methodology used in the preparation of health effects assessment chapters of the consent decree water quality criteria. Fed. Reg. 45, 79347-79357.
EPA (U.S. Environmental Protection Agency) (1984) Proposed guidelines for carcinogen risk assessment. Fed. Reg. 49, 46294.
EPA (U.S. Environmental Protection Agency) (December 1984) Risk assessment and management: Framework for decision making. Washington, DC.
EPA (U.S. Environmental Protection Agency) (February 1985) Costs and benefits of reducing lead in gasoline: Final regulatory impact analysis. Washington, DC.
EPA (U.S. Environmental Protection Agency) (1986) Guidelines for carcinogen risk assessment. Fed. Reg. 51, 33992.
EPA (U.S. Environmental Protection Agency) (1987, 1990, 1993) Unfinished business: A comparative assessment of environmental problems. Washington, DC, 1987.
EPA (U.S. Environmental Protection Agency) (October 19, 1988) Regulation of pesticides in food: Addressing the Delaney paradox; policy statement. Fed. Reg. 53, 41104.
EPA (U.S. Environmental Protection Agency) (1990) Environmental Investments: The Cost of a Clean Environment. A Summary.
EPA (U.S. Environmental Protection Agency) (June 1990) Research to improve health risk assessments program. Office of Research and Development, Washington, DC, EPA 600/9-90/028.
EPA (U.S. Environmental Protection Agency) (November 1992) Working paper for Considering Draft Revisions to the U.S. EPA Guidelines for Cancer Risk Assessment. Washington, DC.
FDA (U.S. Food and Drug Administration) (1977) Fed. Reg. 43, 1042.
FDA (U.S. Food and Drug Administration) (March 20, 1979) Fed. Reg. 44, 17020.
FDA (U.S. Food and Drug Administration) (December 31, 1987) Fed. Reg. 52, 49572.
Farrar, D., Allen, B., Crump, K. and Shipp, A. (1989) Evaluation of uncertainty in input parameters to pharmacokinetic models and the resulting uncertainty in output. Toxicol. Lett. 49, 371-385.
Finkel, A.M. (1989) Is risk assessment really too 'conservative'? Revising the revisionists. Columbia J. Environ. Law 14, 427-467.
Finkel, A.M. (January 1990) Confronting Uncertainty in Risk Management: A Guide for Decision-Makers. Resources for the Future, Washington, DC.
Finkel, A.M. and Evans, J.S. (1987) Evaluating the benefits of uncertainty reduction in environmental health risk management. J. Air Pollut. Control Assoc. 37, 1164-1171.
Gehring, P.J., Watanabe, P.G. and Park, C.N. (1978) Resolution of dose-response toxicity data for chemicals requiring metabolic activation: Example, vinyl chloride. Toxicol. Appl. Pharmacol. 44, 581-591.
General Accounting Office (March 7, 1991) Pesticides: EPA's use of benefit assessments in regulating pesticides. GAO/RCED-91-52.
Gold, L.S., Bernstein, L., Magaw, R. and Slone, T.H. (1989) Interspecies extrapolation in carcinogenesis: Prediction between rats and mice. Environ. Health Perspect. 81, 211-219.
Goldstein, B.D. (1990) The problem with the margin of safety: Toward the concept of protection. Risk Analysis 10, 7-10.
Goldsworthy, T.L., Morgan, K.T., Popp, J.A. and Butterworth, B. (April 1990) Chemically-induced cell proliferation. CIIT Activities 10(4), 1-7.
Graham, J.D. (September 21, 1989) Clean Air Act Amendments of 1989. Hearing before the Senate Committee on Environment and Public Works, 101st Congress, First Session, p. 43.
Graham, J.D. (1991) Harnessing Science for Environmental Regulation. Praeger, Westport, CT.
Graham, J.D. (April 1993) Synopsis of the BELLE Conference on Chemicals and Radiation. Arlington, VA (in press).
Graham, J.D., Green, L. and Roberts, M.J. (1988) In Search of Safety: Chemicals and Cancer Risk. Harvard University Press, Cambridge, MA.
Gray, G.M. and Graham, J.D. (1991) Risk assessment and clean air policy. J. Pol. Anal. Mgmt. 10, 286-295.
Gulf South Insulation Co. vs. CPSC (5th Cir. 1983) 701 F.2d 1137.
Hammitt, J.K. and Cave, J.A.K. (1991) Research Planning for Food Safety. Rand Corporation, Santa Monica, CA.
Harris, R.H. and Burmaster, D.E. (March 25, 1992) Restoring science to Superfund risk assessment. Toxics Law Reporter, pp. 1318-1323.
Hattis, D., White, P., Marmorstein, L. and Koch, P. (1990) Uncertainties in pharmacokinetic modeling for perchloroethylene. Risk Analysis 10, 449-458.
Hawkins, N.C. (1991) Conservatism in maximally exposed individual predictive exposure assessments: A first-cut analysis. Regul. Toxicol. Pharmacol. 14, 107-117.
Hoel, D.G., Haseman, J.K., Hogan, M.D., Huff, J. and McConnell, E. (1988) The impact of toxicity on carcinogenicity studies: Implications for risk assessment. Carcinogenesis 9, 2045-2052; Rall, D. (1991) Letter to the Editor. Carcinogens and human health: Part II. Science 251, 1011.
Hoel, D.G., Kaplan, N.L. and Anderson, M.W. (1983) Implication of nonlinear pharmacokinetics on risk estimation in carcinogenesis. Science 219, 1032-1037.
Huff, J. and Haseman, J. (1991) Long-term chemical carcinogenesis experiments for identifying potential human cancer hazards: Collective database of the National Cancer Institute and National Toxicology Program (1976-1991). Environ. Health Perspect. 96, 23-31.
Huff, J., Haseman, J. and Rall, D. (1991) Scientific concepts, value and significance of chemical carcinogenesis studies. Annu. Rev. Pharmacol. Toxicol. 31, 621-652.


Huff, J. and Melnick, R. (1993) Identifying carcinogens. Letter to the Editor. Issues in Sci. Tech. 9, 14-15.
Industrial Union Department, AFL-CIO vs. American Petroleum Institute (1980) 448 U.S. 607.
Interagency Regulatory Liaison Group (1979) Scientific basis for the identification of potential carcinogens and estimation of risks. J. Natl. Cancer Inst. 63, 243-268; also see Landy, M., Thomas, S. and Roberts, M. (1990) Asking the Wrong Questions. Oxford University Press, NY, pp. 172-203.


Interagency Staff Group on Chemical Carcinogenesis, White House Office of Science and Technology Policy (March 14, 1985) Chemical Carcinogens: A Review of the Science and Its Associated Principles. Fed. Reg. 50, 10371.
Kelly, K.E. (June 1991) The Myth of One in a Million as a Definition of Acceptable Risk. Presentation to the 84th Annual Meeting of the Air and Waste Management Association, Vancouver, British Columbia, Canada.
Kimmel, C. and Gaylor, D.W. (1988) Issues in qualitative and quantitative risk analysis for developmental toxicology. Risk Analysis 8, 15-20.
Klaassen, C.D. and Eaton, D.L. (1990) Principles of Toxicology. In: M.O. Amdur, J. Doull and C.D. Klaassen (Eds), Casarett and Doull's Toxicology, 4th edn., Pergamon Press.
Klaassen, C.D. and Doull, J. (1980) Evaluation of Safety: Toxicological Evaluation. In: C.D. Klaassen and M.O. Amdur (Eds), Toxicology, MacMillan, NY, p. 26.
Kodell, R.L., Howe, R.B., Chen, J.J. and Gaylor, D.W. (1991) Mathematical modeling of reproductive and developmental toxic effects for quantitative risk assessment. Risk Analysis 11, 583-590.
Krewski, D., Murdoch, D.J. and Withey, J.R. (1987) The Application of Pharmacokinetic Data in Carcinogenic Risk Assessment. In: Pharmacokinetics in Risk Assessment: Drinking Water and Health, Vol. 8, National Academy Press, Washington, DC, pp. 441-468.
Laden, F. and Gray, G.M. (1993) Toxics Use Reduction: Pro and Con. Risk: Issues in Health and Safety 4, 213-234.
Lave, L.B. and Males, E.H. (1989) At risk: The framework for regulating toxic substances. Environ. Sci. Tech. 23, 386-391.
Lehman, A.J. and Fitzhugh, O.G. (1954) 100-fold margin of safety. Assoc. Food Drug Off. U.S. Q. Bull. 18, 33-35.
Les vs. Reilly (9th Cir. 1992) 968 F.2d 985, 987-988, cert. denied, U.S., 112 L.Ed.2d 740, 61 U.S.L.W. 3584.
Mantel, N. and Bryan, W.R. (1961) Safety testing of carcinogenic agents. J. Natl. Cancer Inst. 27, 455-470.
Maugh, T.H. (1978) Chemical carcinogens: How dangerous are low doses? Science 202, 37-41.
McBrayer, H.E. (1989) Chemical Manufacturers Association. Clean Air Act Amendments of 1989. Hearings before the Senate Committee on Environment and Public Works, 101st Congress, First Session, September 21, 1989, pp. 12-13.
McClellan, R. (1991) Personal communication. President, Chemical Industry Institute of Toxicology, Research Triangle Park, NC.
Memorandum from F. Henry Habicht II, Deputy Administrator, EPA, on behalf of the Agency's Risk Assessment Council, to Assistant and Regional Administrators, EPA (February 26, 1992) Guidance on Risk Characterization for Risk Managers and Risk Assessors. Risk Assessment Council, U.S. EPA, Washington, DC.
Merrill, R.A. (1988) Implementation of the Delaney Clause: Repudiation of Congressional Choice or Reasoned Adaptation to Scientific Progress. Yale J. on Regul. 5, 1-88.
Moolenaar, R.J. (Summer 1992) Overhauling carcinogen classification. Issues in Sci. Tech. 8, 70-75.
Monsanto Co. vs. Kennedy (1979) 613 F.2d 947.
Monticello, T.M., Miller, F.J., Swenberg, J.A., Starr, T.B. and Gibson, J.A. (1992) Association of enhanced cell proliferation and nasal cancer in rats exposed to formaldehyde. The Toxicologist 12(1), 1017.
Morgan, M.G., Fischhoff, B., Bostrom, A., Lave, L. and Atman, C.J. (1992) Communicating risk to the public. Environ. Sci. Tech. 26, 2049-2056.
Morgan, M.G. and Henrion, M. (1990) Uncertainty: A Guide to Dealing With Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, NY.
NAS (National Academy of Sciences) (1977) Drinking Water and Health. Washington, DC.
NAS (National Academy of Sciences) (1983) Risk Assessment in the Federal Government: Managing the Process. National Academy Press, Washington, DC, p. 3.
National Research Council (1987) Regulating Pesticides in Food: The Delaney Paradox. National Academy Press, Washington, DC.
National Research Council (1989) Improving Risk Communication. National Academy Press, Washington, DC, pp. 94, 181.
National Research Council (1990) Health Effects of Exposure to Low Levels of Ionizing Radiation. Committee on the Biological Effects of Ionizing Radiation, Washington, DC.
National Research Council (1991a) Issues in Risk Assessment. National Academy Press, Washington, DC.
National Research Council (1991b) Human Exposure Assessment for Airborne Pollutants: Advances and Opportunities. National Academy Press, Washington, DC.
Natural Resources Defense Council vs. EPA (D.C. Cir. 1987) 838 F.2d 1224.
Natural Resources Defense Council (1989) Intolerable Risk: Pesticides in Our Children's Food. NRDC, New York.
Neal, R.A. (1991) The Chemical Industry Institute of Toxicology. In: J.D. Graham (Ed), Harnessing Science for Environmental Regulation. Praeger Press, Westport, CT, pp. 27-37.
Nichols, A.L. and Zeckhauser, R.J. (1988) The perils of prudence: How conservative risk assessments distort regulation. Regul. Toxicol. Pharmacol. 8, 61-74.
Nuclear Regulatory Commission (1975) Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants. NUREG 75/014, Washington, DC.
Office of Technology Assessment (November 1987) Identifying and Regulating Carcinogens. U.S. Congress, Washington, DC, 54.


OSHA (Occupational Safety and Health Administration) (1977) Generic Cancer Policy. Fed. Reg. 42, 54148.
OSHA (Occupational Safety and Health Administration) (1988) On the history of OSHA's benzene rule. In: John D. Graham, Laura Green and Marc J. Roberts, In Search of Safety: Chemicals and Cancer Risk, Harvard University Press, Cambridge, MA, pp. 80-114.


Paracelsus (Theophrastus ex Hohenheim Eremita): Von der Besucht, Dillingen, 1567, as quoted in: M.O. Amdur, J. Doull and C. Klaassen (Eds), Casarett and Doull's Toxicology: The Basic Science of Poisons, Fourth Edition, Pergamon Press, NY, p. 5.
Paustenbach, D.J., Meyer, D.M., Sheehan, P.J. and Lau, V. (1991) An assessment and quantitative uncertainty analysis of the health risks to workers exposed to chromium contaminated soils. Toxicol. Ind. Health 7, 159-196.
Pease, W.S., Zeise, L. and Kelter, A. (1990) Risk assessment for carcinogens under California's Proposition 65. Risk Analysis 10, 255-271.
Personal communication from Sandra Baird (August 1993). For example, the RfD (EPA's estimate of a 'safe' exposure for noncancer health effects) for acrylamide is 900-fold higher than the exposure associated with a one-in-a-million cancer risk (U.S. EPA, Integrated Risk Information System).
Peto, R., Gray, R., Brantom, P. and Grasso, P. (1991) Effects on 4080 rats of chronic ingestion of N-nitrosodiethylamine or N-nitrosodimethylamine: A detailed dose-response study. Cancer Res. 51, 6415-6451.
Portier, C.J. and Kaplan, N.L. (1989) Variability of safe dose estimates when using complicated models of the carcinogenic process. Fundam. Appl. Toxicol. 13, 533-544.
Public Citizen vs. Young (D.C. Cir. 1987) 831 F.2d 1108.
Ramsey, J.C. and Anderson, M.E. (1984) A physiologically based description of inhalation pharmacokinetics of styrene in rats and humans. Toxicol. Appl. Pharmacol. 73, 159-175.
Remick, Commissioner F.J. (1992) Regulatory Risk Coherence. U.S. Nuclear Regulatory Commission, Plenary Session, American Nuclear Society, Boston, MA.
Risk Assessment Forum (September 1991) Alpha-2u-Globulin: Association with Chemically-Induced Renal Toxicity and Neoplasia in the Male Rat. U.S. Environmental Protection Agency, Washington, DC.
Risk Focus, Inc. (June 1989) A Critical Review of the NRDC's Report, Intolerable Risk: Pesticides in Our Children's Food. National Agricultural Chemicals Association, Washington, DC.
Rinsky, R.A. et al. (1987) Benzene and leukemia: An epidemiologic risk assessment. N. Engl. J. Med. 316, 1044-1050.
Rodricks, J.V. (1988) Origins of risk assessment in food safety decision making. J. Am. Coll. Toxicol. 7, 539-542.
Rodricks, J.V., Brett, S.M. and Wrenn, G.C. (1987) Significant risk decisions in Federal regulatory agencies. Regul. Toxicol. Pharmacol. 7, 307-320.
Rodricks, J. (1992) Calculated Risks. Cambridge University Press, NY.
Roe, D. (August 1989) An incentive-conscious approach to toxic chemical controls. Econ. Dev. Q. 3(3), 179-187.
Rosenthal, A., Gray, G.M. and Graham, J.D. (1992) Legislating acceptable cancer risk from exposure to toxic chemicals. Ecol. Law Q. 19, 269-362.
Roth, E., Morgan, M.G., Fischhoff, B., Lave, L. and Bostrom, A. (1990) What do we know about making risk comparisons? Risk Analysis 10, 375-387.
Ruckelshaus, W. (1983) Science, risk and public policy. Science 221, 1026-1028.
Ruckelshaus, W. (1985) Risk, science and democracy. Issues in Sci. Tech. 1(3), 19-38.
Ruttenberg, R. and Bingham, E. (1981) A Comprehensive Carcinogen Policy as a Framework for Regulatory Activity. In: W.J. Nicholson (Ed), Management of Assessed Risk for Carcinogens. Ann. NY Acad. Sci. 17.
Schach von Wittenau, M. (1987) Strengths and weaknesses of long-term bioassays. Regul. Toxicol. Pharmacol. 7, 113-119.
Scheuplein, R.J. (1986) SOM and Food Safety Policy. In: W. Eugene Lloyd (Ed), Safety Evaluation of Drugs and Chemicals, Hemisphere Publishing Company, Washington, DC, pp. 405-410.
Scheuplein, R.J. (1987) Risk assessment and food safety: A scientist's and regulator's view. Food Drug Cosmet. J. 42, 237-250.
Science Advisory Board (September 1990) Reducing Risk: Setting Priorities and Strategies for Environmental Protection. Washington, DC.
Shabecoff, P. (December 16, 1987) Citing 'Reasonable' Health Risk, EPA to Allow Use of Herbicide. New York Times, p. 1.
Sielken, R.L. (1989) Useful tools for evaluating and presenting more science in quantitative cancer risk assessments. Toxics Subst. J. 9, 353-404.
Silbergeld, E.K. (June 1993) Risk assessment: The perspective and experience of U.S. environmentalists. Environ. Health Perspect. 101, 100-104.
Smith, K. (1992) Alar Three Years Later: Science Unmasks a Hypothetical Health Scare. American Council on Science and Health, NY.
Spangler, M.B. (1987) A summary perspective on the Nuclear Regulatory Commission's implicit and explicit use of de minimis risk concepts in regulating for radiological protection in the nuclear fuel cycle. In: C. Whipple (Ed), De Minimis Risk, Plenum Press, NY, pp. 111-143.
Spear, R.C., Bois, F.Y., Woodruff, T., Auslander, D., Parker, J. and Selvin, S. (1991) Modeling benzene pharmacokinetics across three sets of animal data: Parametric sensitivity and risk implications. Risk Analysis 11, 641-654.
Starr, T.B. and Buck, R.B. (1984) The importance of delivered dose in estimating low-dose cancer risk from inhalation exposure to formaldehyde. Fundam. Appl. Toxicol. 4, 740-753.
Statement of Senator Edward M. Kennedy (July 10, 1991) The Safety of Pesticides and Food Act of 1991. Press statement for immediate release.
Summary of Workshop to Review an OMB Report on Risk Assessment and Management (Winter 1992) Risk: Issues in Health and Safety 3, 71-84.
Tennant, R.W. and Ashby, J. (1991) Classification according to chemical structure, mutagenicity to salmonella and level of carcinogenicity by the U.S. National Toxicology Program. Mutat. Res. 257, 209-227.


Thompson, K.M., Burmaster, D.E. and Crouch, E.A.C. (1992) Monte Carlo techniques for quantitative uncertainty analysis in public health risk assessments. Risk Analysis 12, 53-63.
Toxicity Testing: Strategies to Determine Needs and Priorities (1984) National Academy of Sciences, Washington, DC, 29.
Travis, C.C., Quillen, J.L. and Arms, A.D. (1990) Pharmacokinetics of benzene. Toxicol. Appl. Pharmacol. 102, 400-420.
Upton, A.C. (1988) Are there thresholds for carcinogenesis? The thorny problem of low-level exposure. Ann. NY Acad. Sci. 534, 863-884.


U.S. Department of Health and Human Services (1990) Fifth Annual Report on Carcinogens. Washington, DC.
U.S. Public Health Service (1992) National Toxicology Program (NTP): Final report of the advisory review by the NTP Board of Scientific Counselors; request for comments. Fed. Reg. 57, 31721-31730.
Vettorazzi, G. (1980) Handbook of International Food Regulatory Toxicology, Vol. 1, Spectrum, NY, pp. 66-68.
Weil, C.S. (1972) Statistics versus safety factors and scientific judgement in the evaluation of safety for man. Toxicol. Appl. Pharmacol. 21, 454-463.
White, M.C., Infante, P.F. and Chu, K.C. (1982) A quantitative estimate of leukemia mortality associated with occupational exposure to benzene. Risk Analysis 2, 195-204.
Whitfield, R.G. and Wallsten, T.S. (1989) A risk assessment for selected lead-induced health effects: An example of a general methodology. Risk Analysis 9, 197-208.
WHO (World Health Organization) (1992) Report of IPCS Discussions on Deriving Guidance Values for Health-Based Exposure Limits. International Programme on Chemical Safety, United Nations Environment Programme, Geneva, Switzerland.
Wilson, R. and Clark, W. (1991) Risk assessment and risk management: Their separation should not mean divorce. In: C. Zervos (Ed), Risk Analysis, Plenum Press, New York, pp. 187-196.
Woodruff, T.J., Bois, F.Y., Auslander, D. and Spear, R.C. (1992) Structure and parameterization of pharmacokinetic models: Their impact on model predictions. Risk Analysis 12, 189-201.
Zapp, J.A. (1977) An acceptable level of exposure. Am. Ind. Hyg. Assoc. J. 38, 425-431.
Zeise, L., Painter, P., Berteau, P.E., Fan, A.M. and Jackson, R.J. (1991) Alar in fruit: Limited regulatory action in the face of uncertain risks. In: B.J. Garrick and W.C. Gekler (Eds), The Analysis, Communication and Perception of Risk, Plenum Press, New York, pp. 275-284.
Zeise, L., Wilson, R. and Crouch, E.A.C. (1987) Dose-response relationships for carcinogens: A review. Environ. Health Perspect. 73, 259-308.
Zilberman, D., Schmitz, A., Casterline, G., Lichtenberg, E. and Seibert, J.B. (1993) The economics of pesticide use and regulation. Science 253, 518-521.
Zimmerman, R. (1993) Social equity and environmental risk. Risk Analysis 13, 649-666.