
International Journal of Law and Psychiatry 29 (2006) 75–85

Misconceived bioethics?: The misconception of the

"therapeutic misconception"

Gary S. Belkin *

Bellevue Hospital Center, 462 First Ave. Room C-D 252, New York, NY 10016, United States

Received 1 March 2002; received in revised form 6 June 2005; accepted 17 September 2005

Abstract

Bioethics needs to include study of the social and historical context in which ethical meanings in medical encounters make

sense. It needs to do this in order to remain relevant, vibrant, and aware of how it might unwittingly facilitate the agendas of others.

As an illustration, this paper critiques some of the accepted meanings and purposes of the idea of the Therapeutic Misconception

(TM), which has been an increasingly attractive concept with which to organize thinking about experimentation ethics. By

considering the history of alternative viewpoints against which TM was offered as a replacement, this paper suggests that TM, and

bioethics more generally, may contribute to increasingly technocratic and standardized practices in medicine.

© 2005 Elsevier Inc. All rights reserved.

Historians, anthropologists, and sociologists are beginning to direct more concerted criticism and concern toward the

practice of "bioethics". (Evans, 2002; Hoffmaster, 2001; Martenson, 2001; Belkin, 2001, 2004; McCullough, 2000;

Chambers, 1998). The concern is that the increasingly formalized and institutionalized set of writings, methods, and

organizations that fall under the bioethics umbrella may be missing salient aspects of what makes a moral dilemma, a

dilemma. Work in bioethics is often reduced in practice to spinning analytic normative constructions or building

consistent networks of theory or principle isolated from how commitments to the rightness or wrongness of a given

medical intervention actually form and cohere in the real world. Personally specific, emotionally and psychologically

shaped, and culturally and historically situated, knowledge and practice often do, and should, resolve contentious

medical decision making. But what the "bioethicist" supposedly knows does not generally include serious familiarity

with or exploration of such psychological, cultural, and social factors. This critique echoes challenges from some

prominent thinkers working within moral philosophy itself, such as Charles Taylor’s insistence that moral beliefs are

things one has rather than proves, challenges within bioethics to focus on more context-based, casuistical, care, and

narrative ethics approaches, and prominent skepticism in the popular press as to what bioethicists really "know". (Taylor, 1984; Stolberg, 2001; Satel & Stolba, 2001; Smith, 2000; Shalit, 1997).

We need more efforts to understand ethical claims as social, cultural, psychodynamic, and historical events that are

incompletely understood and debated via the methods and questions bioethics has generally brought to the table. As an

example of how such alternative perspectives, in this case historical ones, might change and expand the ethical

0160-2527/$ - see front matter © 2005 Elsevier Inc. All rights reserved.
doi:10.1016/j.ijlp.2005.09.001

A version of this paper was presented at the annual meeting of the American Psychiatric Association, May 22, 2002.
* Tel.: +1 212 263 6220; fax: +1 212 263 8097.
E-mail address: [email protected].


discussion, I will focus here on one prominent idea in the bioethical toolkit, particularly in American discussions, that

of the Therapeutic Misconception (TM). Coined by Appelbaum and colleagues in the pages of this journal over two

decades ago, TM is a central part of the lexicon of experimentation ethics. What might be gained by using historical

research to think about TM? Revisiting the conditions that spawned questions and conclusions different from those TM

reflects, alters how we should regard TM, and perhaps bioethics more generally. Rather than being the fruit of ethical

maturation, TM may instead conceal and further larger, concerning, historical changes: specifically the standardization

and commodification of medical practice. These are changes especially prominent in the American context. This in turn

highlights the possible technocratic, rather than democratic or introspective, consequences of bioethics.

1. The therapeutic misconception

It is almost impossible to discuss the ethics of experimentation without some explicit or implicit reflection on TM.

This is because it neatly captures and orders a variety of ethical concerns, such as how much an "experiment" should benefit its subject, deviate from usual treatments, address important scientific questions, or pose risk.

TM organizes thinking about these issues by focusing on the degree that research is different from usual clinical

practice. Briefly put, TM describes the misconception subjects may have that research is treatment. The TM

conception of research practice is that what makes research ethically unique is that it involves interventions not

primarily intended to be of optimal value to subjects. It is never ethically equivalent to treatment. Treatment choices

made outside of a research context can be tailored to meet the needs of individuals in ways that research, even so-

called "therapeutic" research, is restrained from doing. Research is an artifice created primarily to pose questions in

rigorous ways, not to meet the needs of subjects. TM seems, to many, to resolve perceived limitations encountered

when relying on other ethical frameworks or emphases—such as distinguishing between research that involves

treatment and that done "just" for gaining knowledge, or focusing on the comparison of the level of risk of a

research project with its scientific value.

An ethical emphasis on TM reflects skepticism about the therapeutic–nontherapeutic distinction that guided, and

often still guides, discussions of experimentation ethics. But that distinction faced criticism as inadequate conceptual

scaffolding upon which to rest a commitment to the primacy of subject autonomy that increasingly characterized

discussions of human subjects ethics through the 1970’s and beyond. TM instead focuses on the belief that human

subjects research, whatever the beneficial byproduct to individual patients or to society, however similar to accepted

treatment, and regardless of how low the risk or great the value, still remains a use of subjects for the purposes of

others. At a minimum, then, research can only be justified when research subjects can fully recognize that they are

being "used". Participation by subjects should not be merely consented to, but willed. No matter how much a patient

may benefit from, or appropriately receive treatment within, a research protocol, a protocol inherently describes a way

of being treated that is distinct from real treatment. The integrity of research is fundamentally tested in its ability to

convey this stark fact. This logically follows as an extension of the precept to treat others only as ends in themselves,

and to enhance the agency of the willing subject.

This set of commitments summarized under the thumbnail phrase of "TM" plays an important role in increasingly

effective efforts to move away from the previously dominant risk/value or therapeutic/non-therapeutic distinctions

which shaped discussions of research ethics and deliberations of Institutional Review Boards (IRBs). For example,

attempts to question the false therapeutic promise of clinical trials (Miller, 2000; Miller & Brody, 2003) as well as calls

for required independent review of competency to consent for the mentally ill participating in studies posing more than

minimal risk irrespective of therapeutic benefit, reflect such efforts (National Bioethics Advisory Commission, 1998).

Indeed, the widely (and heatedly) discussed effort by President Clinton's National Bioethics Advisory Commission in the United States to require such competency review (a subsequently growing practice) explicitly connected itself to a

history of evolving efforts to question received rules governing IRB guidelines which relied upon therapeutic/non-

therapeutic distinctions. (National Bioethics Advisory Commission, Appendix I, 1998; Capron, 1999). While the Commission

disbanded with the election of George W. Bush, such heightened scrutiny has been widely pursued and advocated

since, and the degree to which such a prominent effort explicitly invoked an apparent historical evolution away from reliance upon therapeutic/non-therapeutic distinctions deserves attention.

That particular use of history was a superficial one. As I will discuss here, a more careful history leaves us with far

more ambiguous conclusions about what should count as welcome ethical "progress". Bioethical writings frequently make historical claims to celebrate and justify the appearance of the field, rather than to examine and perhaps


critically question how the field works, or how ethical ideas and dilemmas appear (Belkin, 2001). Without enlarging

the use of history and allowing it to roam more freely and with greater curiosity, the use of TM may take place without

appreciation of its larger impact. In particular, as will be argued here, the bioethics critique of which TM is a part may

reflect and perpetuate forces shaping medical practice that should alarm the same bioethics community. Proponents of

TM frequently root the concept in the work of Hans Jonas, who was primarily concerned with the oppressiveness of

the dominant value of progress. (Jonas, 1969). The logical extension of his ideas by writers on TM may make perfect

sense, but that "sensibility" reflects changes in what increasingly began to count as relevant medical knowledge in the

latter 20th century. Such changes, made in the name of progressive use of aggregate and "evidence-based" data, may

ironically carry new ethical risks, risks Jonas would have been the first to point out.

2. From therapeutic optimism to skepticism

So what set of assumptions and practices was TM out to correct? What happens when we try to see that prior world

on its own terms and in its own historical context? Before research was thought to risk being dangerously misconceived

as treatment, its legitimacy was anchored precisely as part of a moral vision of treatment. When sociologist Renee Fox

turned her eye on the everyday experience of doctors and patients on a specialized research ward at the Peter Bent

Brigham Hospital during the 1950’s, she described everything the TM-perspective fears. Here was a setting where very

ill individuals were treated by daring specialists, pushing the envelope of known therapies, trying new approaches in

their treatments with the clear purpose of testing them in a conscious "experiment". We can hear, through her,

discussions about whether to pursue a procedure tested and compared against more standard treatments despite unclear

benefit. These decisions were at times influenced by a need for more subjects in order to adequately understand the

value of the procedure. Fox introduces us to physicians and patients grappling over when to stop a procedure that had

yet to show beneficial results. We can see patients who became familiar friends, and often very well-read and informed

partners, with investigators in the research enterprise. This research partnership was indistinguishable from, and indeed

relied upon, the loyalty and dependency of these very ill patients towards their physicians, grateful for success and

perseverance in the constant and changing efforts to maintain their life in the face of relentless decline. The faith

patient’s had in their doctors, and the unique access they had to these physicians, was a pool of good will drawn upon

for voluntary and obligatory participation in departures from usual treatment. But, Fox noted:

"This is not to imply that the patients of Ward F-Second were improperly persuaded or forced to participate in

research. We already know that the physicians of the Metabolic Group obtained their voluntary consent for any

experimental measures they tried. However, as we shall see, the serious, chronic nature of their diseases, along

with certain characteristics of the ward community to which they belonged, and the nature of their relationship

to the physicians of the Metabolic Group, made many patients feel that they 'ought' to consent to experimen-

tation, and others that they 'very much wanted to.'" (Fox, 1959, 136).

Fox’s account at the time betrayed little alarm at finding that there were those who felt they bought toQ, or those, onthis ward where a summary of the Nuremberg Code was posted on the wall, who felt they bvery much wanted toQ,consent to experimentation. Those sentiments reflected a spectrum of motivation and intent that was sensibly the

result of the larger experience of shared management of peril her ethnography set out to describe. This community of

people, Ward F-Second, was described as a social system formed to identify and cope with the circumstances of

serious illness. Community identity was precisely shaped through taking on new risks without which any hope of

triumph against such circumstances was considered unrealistic. Describing a similar metabolic research ward at

Massachusetts General Hospital (MGH), Ward 4, James Howard Means wrote:

"We have hardly ever had any difficulty in inducing patients we wished to study to enter the ward, or in having

them wanting to leave before our work was... Certainly it can be said that the patients in Ward 4 have had closer

and more understanding relations with their doctors than most. Their devotion to a common cause and the study

of disease for the good of man draws them together." (Means, 1958, 18).

"We assume a most definite responsibility to the patient for the proper treatment of his disease, and see that this has

precedence over any investigation", commented an earlier account describing the Ward. (Means, 1958, p. 18).


But a later sociological critique did sound an alarm and deep discomfort about this state of affairs when in 1975

Bradford Gray published his thesis work which involved interviews with subjects in order to gauge their under-

standing of the research in which they participated. He found that reliance upon physicians' assurances obscured, for

many, accurate understanding of the content of the informed consent form they signed. The belief that one was being

treated and under a physician's "care" was blinding. Gray's study was cited and used with illustrative effect by

Appelbaum, Roth and Lidz in their first account of TM published in 1982.

The distance traveled between the reactions of Fox in the 1950's and Gray in 1975 (and likely by this point Fox as well)

captured the emergence of contemporary bioethics. It captured the eventual rejection of physician prerogative in favor

of viewing the medical relationship as properly analyzed and governed through a particular moral analysis. This

analysis required unique expertise that physicians were not, at least not due to their position as physicians, able to

provide. But the assumption that such rejection of one expertise and its attempted replacement with another was a good

thing has eclipsed interest in perhaps learning from, and about, the expertise which was replaced. Asking how that

prior set of beliefs was sustained may more clearly point to those factors, or historical forces, that those who advocate

the new way of doing things may or may not wish to inherit. Without at least being curious about the possibility of the

historical contingency of moral commitments, bioethics fails to exercise the kind of analysis that should be expected

of a mature scholarly discipline. (McCullough, 2000).

So, what changed for the appearance of TM to seem "obvious" when it was not before? What other historical

changes and interests might this vision of the experimental setting, wittingly or not, possibly reflect and further? What

might we learn about the assumptions present in TM, and about bioethics more generally, from asking such

questions?

3. The therapeutic conception

The post-WWII expansion of clinical research funding and opportunities also expanded discussion of the ethics of

research within medicine. There was clearly widespread interest in the issue of human experimentation ethics in the

post-War decades. Irving Ladimer’s 1963 anthology of papers on experimentation ethics included extensive excerpts

of 73 papers written in the prior decade or so, with an accompanying bibliography exceeding 500 citations in the

English literature. (Ladimer & Newman, 1963).

The contemporary debate around human experimentation is also usually timed by ethicists and historians to follow

W.W. II, the discovery of Nazi atrocities, and the subsequent Nuremberg Code. This Code appeared as a section in the

final judgement of the Nuremberg medical trials as an enumeration of 10 characteristics of "Permissible Medical

Experiments" culled from an unspecified consensus of "protagonists of the practice of human experimentation". (Trials of War Criminals Before the Nuernberg Military Tribunals, 1950, 181–182). Establishment of these principles

was a parry against the thrust of the German defense that the exigencies of war negated the possibility of a

universalizable ethical standard with which to judge the defendants. To argue otherwise, the defense claimed, was

merely to impose the preference of victors. The Code was the Court’s statement of authority and objectivity against

such claims, and thus required seeing medical conduct within standardized moral rules that were arguably already

established, and self-evident.

The American Medical Association (AMA) sent physician and physiologist Andrew C. Ivy as a medical expert to

the team prosecuting Nazi physicians. Ivy, in his report to the AMA, along with a draft of principles penned by a

similar expert witness, neurologist Leo Alexander, apparently generated much of the language used by the Nuremberg

judges in writing the Code (Weindling, 2001; Schmidt, 2004). For Ivy, these rules were, as the Court’s opinion

implied them to be, "well established by custom, social usage and the ethics of medical conduct" (Jonsen, p. 135), and reflected a "common understanding". (Ivy, 1948, p. 3).

But what understanding was that? Throughout the published transcripts of this trial hangs the question of

justification. What was the "wrong" being prosecuted here? Defendants often argued that they simply were associated

with these experiments by mistake. More challenging, however, was their insistence as well that technically they in

fact met moral expectations. After all, their subjects gave "consent". Subjects were argued to be consenting subjects

because, for example, they were given the chance for a reprieve from a death sentence in exchange for research

participation. The Code elaborated, in exhaustive detail, what was owed the subject in terms of informed consent. But

this elaboration of relevant, specific details to "well established...custom" appeared tailored to close off such

creative loopholes offered at trial by the defense.


The custom eventually articulated reflected values of professionalism, rather than commitments to some worked

out philosophical principle like autonomy. When reading the published judgment itself, as well as the discussions that

shaped Ivy’s and Alexander’s contributions, the sheer horror at the brutality of the experiments comes across as what

primarily offended and informed those who sat in judgment. (Weindling, 2001). At root, the experiments struck those

who judged them as simple abandonment of the kind of relationship and virtue doctors were expected to owe patients,

be it as objects of treatment or research, all in order to service the needs of a racist state. Some notion of Hippocratic

fidelity and integrity was more offended than a developed theory of informed consent.

What primarily offended and concerned those who pondered and prosecuted the Nazis in the medical cases was the

successful use of state power over subjects of experimentation in the name of a deranged notion of societal benefit.

That this could occur shaped discussions of experimentation ethics in subsequent decades. The ideal physician was

posited as a safeguard in that context. The differences between the persona of researcher and physician, the issue

central to TM, were relevant in those discussions as well, but because in the former persona the physician risked

becoming a servant of society's interests. In most writings on human experimentation ethics into the 1970's, this concern was to be addressed via a commitment to bringing the researcher more tightly within, rather than distinctly without, the orbit of its physician "better-half".

Since at least the outset of the twentieth century, the distinction between therapeutic and nontherapeutic research

was crucial to how organized medicine justified its increasing commitment to incorporating and expanding research

work as part of physician authority and identity (Lederer, 1995). It continued to be regularly invoked to negotiate the

appropriate level of scrutiny and oversight of research practices. Ivy, for example, understood the Nuremberg

provisions to apply to nontherapeutic research (Ivy, 1948). Importantly, this therapeutic/nontherapeutic distinction

seemed to carry a somewhat different set of meanings for Ivy’s audience after, as opposed to before, the War. It was

adapted to make sense of this enhanced concern about the exploitation of persons not just by a physician, but by the

state, a concern that motivated the Code itself, a concern arguably more about political belief and offended virtue,

than moral theory. Both therapeutic and nontherapeutic research had to remain answerable to expectations of the

"usual doctor–patient relationship" as that was perceived as the core safeguard whose abandonment opened the flood

gates of Nazi atrocities. Understanding research as a kind of questioning normally occurring between doctor and

patient placed it safely within this sphere for protection from abuse. Such an understanding seemed necessary and

logical in the context of these post-WW II purposes to avoid possible rejection by the public of research work. It

served to reintegrate research into the whole of the medical enterprise in a way that redeemed both.

Minutes of the MGH Committee on Human Studies during the Chairmanship of MGH Anesthesiology Chief,

Henry Beecher, reflected this understanding in action. At the time, Beecher was also Chairman of the Human Studies

Committee of Harvard Medical School and a well-known commentator on ethics and human subjects research.

Original army reports on the Nazi concentration camps remain in his archived files. His leadership of this early "IRB" offers a window on the prevailing logic, and tensions, of thinking about experimental ethics. If there was no

discernible risk and a placebo might reasonably be expected to have equivalent value to a tested intervention, then

ethically, testing that intervention was equivalent to using new but "unproven remedies in individual patients in a

doctor–patient relationship. In such cases informed consent would seem unnecessary and quite likely to distort the

reactions of patients." By this sentence Beecher wrote in his own hand—"consent having been given in the coming of

the patient to the physician". (MGH Committee on Human Studies, 1966). Yet, soon before he scrawled this

endorsement of implied consent, he published a widely quoted and seminal critique in the New England Journal

of Medicine that publicly criticized and shamed certain published experiments precisely because of their disregard for

informed consent. (Beecher, 1966). These writings demonstrate less a contradiction than a reflection of the kind of

attempts to parse the role of informed consent that were typical for this period.

Informed consent was considered fundamental. Yet it was fragile. Caution was frequently advised against

positioning informed consent as the core safeguard for ethical research—after all, how much and how well could anyone

truly, fully understand? This tension was stabilized, and thus able to exist as a tension and not a problem, by a more

primary focus on issues of professional identity and purpose—specifically the notion of a mutually agreed upon larger

relationship with a physician. The MGH minutes repeatedly contained the phrase "usual doctor patient relationship", which provided the substantive branching point of a decision tree of ethical decision making and scrutiny. So, simple

blood tests in one study needed formal, reviewed consent since taking blood needed for research was not otherwise a

practice found for the condition at issue in the "usual doctor patient relationship". On the other hand, a study that used

biopsy materials taken in the otherwise usual course of diagnosis of breast lesions did not need specific forms or


review of informed consent. The practice of informed consent would be up to the physician in each case (MGH

Committee on Human Studies, 1966).

Beecher’s famed MGH colleague, endocrinologist and Ward 4 investigator Fuller Albright, left a trove of patient

records that bring to life a "usual" practice familiar to those writing and thinking about experimentation ethics at the

time wherein treatment melded with asking questions, medication treatment choices with investigation, all in the

context of a familiar personal history and relationship between Albright and these patients. Albright’s files are filled

with correspondence with patients throughout the world, instructing them to try various hormonal preparations often

sent with the same mail. Patient descriptions of outcomes, and various requested tissue or fluid samples, would return

to him. Correspondents shared with him disappointment, criticism, embarrassment, despair facing death. Albright's profuse correspondence with patients (Albright, 1939) portrays a practice of trying, re-trying, various approaches through mail with his patients, at times openly sharing the hypothesis-testing rather than specifically therapeutic purposes of some of his instructions, which he described as 'experiment'. Fluctuations in the course of the chronic illnesses Albright treated prompted arranged stays at MGH for monitoring, treatment and research, with Albright often soliciting from favored donors funds to

subsidize these admissions, one at a time. With such practices, Albright helped shape the field of endocrinology for

the remainder of the 20th century.

It is hardly surprising that physicians would, self-servingly, argue that their idealized norms of practice in general,

by including research, protected individuals from the possible dangers of research. But others argued this as well.

Philosopher Samuel Stumpf, for example, in a paper described as the "first philosophical contribution to medical

ethics" (Jonsen, 1999, p. 88), found consent both morally indecisive (one could not be permitted to consent to just

anything) and often simply not feasible. The roots of the propriety of experimentation thus often had to lie elsewhere.

"The agony of the present situation is, of course, that despite the uncertainties of moral insight and conviction

on the part of modern man, physicians and medical scientists do have to make decisions and they have to make

them frequently in the gray areas surrounding some therapies and some experiments." (Stumpf, 1966, p. 468).

Professor of Philosophy at Yale, John E. Smith, among other non-physician scholars, concurred with a medical

colleague at a Yale Medical School multidisciplinary public panel discussion of experimentation ethics in 1964. "I would want to defend you on the point that the sharp distinction between experiments and treatment is probably a

mistake". In this as well as other public or published exchanges, the claim that "experiment" was a part of the usual tasks of medicine seemed to address the challenge of avoiding the slide to Nazi abuses, protecting the safety and

dignity of subjects, and advancing knowledge while attentive to the limits of consent. Throughout these discussions is

an understanding, explicit or implicit, that the exercise of a certain degree of unilateral physician decision making was

part of normal, responsible doctoring, but was a practice less relied upon when direct patient benefit was not at stake

in an experiment, that is, the less the situation resembled "normal" professional practice.

While open to critique as outrageous paternalism, within its context, such behavior was understood by many

commentators as a reflection of the view that the avoidance of abuse was best secured by the guiding goal of benefit.

And discerning that goal was ultimately the responsibility of experts, not of society with its own interests, nor of

persuadable, gullible, coercible patients. Consistent with this, some physicians voiced concern about Nuremberg’s

permissiveness in sacrificing the individual for public gains. Similarly, pervasive skepticism prevailed regarding the

adequacy of informed consent to protect subjects due to inherent difficulties in knowing and truly communicating

information, and the threats it posed to some experimental designs (especially in maximizing placebo effects). This

opening was used by some to argue for favoring assessment of social need over individual risks or benefits to justify a

study against consent. That kind of an argument was rejected by most commentators. Indeed combating such

worrisome arguments further reinforced the need for mirroring a beneficial doctor–patient relationship as the solution

to the problem of the inherent inadequacy of informed consent (Dietrich, 1960).

What would later be regarded as self-promoting paternalism was at the time experienced and explained quite

differently by certain prominent physicians as well as others interested in these issues. That doesn’t necessarily excuse

such attitudes, nor make them models for our practice, but it does challenge how we think historically about what led

to changes since, and the impact of such changes. To these commentators and researchers, protection of the individual

from state and social interest in generating new knowledge was a central problem. It was a problem illusorily solved

by informed consent, and best ameliorated by responsibly striving to mirror the idealized therapeutic situation

wherein expert-derived patient benefit drove behavior. They responded to the specific challenge of discerning what


safeguards protected the vulnerability of patients from society’s curiosity while continuing to practice a medicine

experienced as fundamentally and inherently hypothesis-testing and knowledge generating. Medical practice, in this

view, belied the ease with which patient vulnerability could be unambiguously identified. All the more reason to post

physicians as supposedly vigilant arbiters of the mix of hopefulness, security, real benefits, and real risks, that was

found and sought in research and treatment.

4. Benefit and peril

Legal literature reinforced this "therapeutic-conception" framework used by physicians and others in the decades

prior to the flowering of bioethics. It did so because thinking about how much "normal" practice overlapped with

normal research mattered as to when and whether research was even a legal activity. One of the more prominent

writers on the legal aspects of experimentation in the 1950’s and 1960’s was Irving Ladimer, a member of the Law-

Medicine Research Institute at Boston University, founded by William Curran in 1958. Both he and Curran

acknowledged repeatedly what the clinical literature anxiously observed: that there was no clear statutory

or jurisprudential understanding of experiment as a legal activity. A substantial legal literature and body of case law

dealt with precisely the question of what an experiment was. In several articles Ladimer, for example, described in

great detail the legal tradition that understood physician experimentation and research within a traditional malpractice

framework. bExperimentQ generally referred, since 18th century English common law cases, to the deviation by

physicians from accepted treatments. The judiciary linked experimentation with professional disregard or negligence.

Physicians thus experimented "at their peril".

Instead, in what I will call the "ubiquity view", many writers on human experimentation, as we have seen, argued

that experiment was usually not a deviation, but an inherent, ubiquitous, part of medical practice. While starting with

Fortner v. Koch in 1935 courts increasingly acknowledged that experiment was a part of medicine, the legal

significance of this trend remained unclear and commentators considered the "experiment at peril" admonition a

real threat to research. If the "peril" view prevailed, then claiming ubiquity only poured salt into the potential wound

of legal exposure. Thus Ladimer, seeing shaky legal ground for research, compounded by flaunting the "ubiquity" position, tried to sell what I alternatively label a "research-is-different-in-kind" position.

Ladimer attempted to describe experimentation as institutionally and practically different from the rest of medical

work—precisely the point at which TM begins. He tried to offer this strategy as preferable to the "ubiquity" approach. Judges, this "difference-in-kind" view suggested, used the word "experiment" too broadly. They should use its unique, specific meaning, which clearly differentiated it from the rest of medicine. He, and others, attempted to critique

"experiment as peril" as quaint, outdated, needing to be updated to a new appreciation of experiment as distinct from

medicine, as "a sequence resulting from an active determination to pursue a certain course and to record and interpret

the ensuing observations". (Shimkin, 1953).

But this project faltered when it came to actually specifying what these unique distinctions really were. Such attempts at this time, including Ladimer's, often could not avoid at the same time acknowledging the shared character-

istics of experiment and general medical practice, quoting, among others, Ivy. Ladimer thus oscillated between

characterizing research as a distinct practice, such as in an organized clinical trial, and research as a nontherapeutic

part of medicine but thus still tied to, and so judged within, broader norms of medical conduct. (Ladimer, 1955, 1957).

Architects of the new, and at the time still quite controversial, method of the randomized clinical trial similarly set

out to define experiment as a distinctly new practice. But they too often ended up describing the ethical obligations

and safeguards of such experimental work by casting the trial within familiar descriptions of the challenges of

managing uncertainty in usual medical practice. The ethics of a trial was thus validated to the extent that it resembled

such "real world" clinical dilemmas and ambiguities of treatment choice. Deviation from such usual circumstances

signaled the presence of risks that required the most scrupulous of review, justification, informed patient participation

and ease of exit. (Hill, 1963; Fox, 1960). The therapeutic–nontherapeutic divide helped gauge different levels of

vigilance and justification, but did not replace, and was instead sustained and shaped by, the idea that experiment was

an inherent part of the practice of medicine.

Attempts to draw a difference-in-kind picture that distinguished medicine from research during this period thus

tended to fall back on the ubiquity view as details, such as rules for informed consent, and the way to gauge the

necessity and safety of research, remained comprehensible within their connections to the ideals of everyday medical

practice. The amount of disclosure necessary was tied to the relevance of treatment and likely benefit to the


individual, one paper instructively citing a case in which a physician was successfully sued for disclosing information about

procedures because doing so upset a patient (McCoid, 1959). That was a breach of a larger duty.

So it made more sense to many of Ladimer's late 1950's and early 1960's contemporaries to counter the "experiment

at peril" admonition by instead characterizing it as a misunderstanding about the nature of medicine. Research was not

so unique. Responsible, probabilistic detours from a frequently nonexistent formula of standard treatment were the "experiment" essential to all medicine. Research should mimic medical practice traditions aimed at safeguarding

patient benefit. When it could not, extra protections were needed to keep society at bay, but these were protections

that were tied to the integrity and thoughtful decision making of the physician-investigator.

The anchoring function of this commitment reflects synergism between the idealization of medical practice at the

time and the perceived need that any ethics had to adequately and consistently offer an acceptable definition of what

experimentation was. It also reflected a medical practice that was geared around the autonomy and viability of

individual physician practitioners and their decisions. By the mid-1960’s prominent legal commentators felt confident

that a consensus had moved far enough away from the peril idea. But there was still concern with the general absence

of statute or judicial traditions explicitly endorsing nontherapeutic study. Instead of suggesting research was some

new practice, prominent medical–legal commentators William Curran and Paul Freund continued to find it advisable

to take the ubiquity view. The basis for ethical, and legal, experimentation was found in good medicine. Freund and

Curran declared in the pages of the New England Journal of Medicine that the status of the law indicated that research

be understood within a broader understanding and expectation of the ethical physician. Wrote Curran:

"In my own writings and those of Marcus Plante, of Michigan Law School, the position is taken that 'informed

consent’ is not a clear concept so developed by the courts that it can now be followed with security by the medical

profession in patient-treatment decisions... Even less, then, in clinical investigation". (Curran, 1966, p. 324).

The solution, wrote Freund, is the "great traditional safeguard in the field of medical experimentation... the

disciplined fidelity of the physician to his patient... First of all, do not do injury". (Freund, 1965, 689). The physician ideal was able to meet the need, perceived by many, to support both investigation that offered therapeutic benefit and

that which generated knowledge as they blurred within the contingent notion of medical practice that found reliance

upon an expert’s care.

Paternalism and self-righteousness loom as a part of the claims that physician-driven ideals and beneficent

judgement provided the needed safety net of protection for research subjects, especially when reading the human

experimentation literature with bioethics-era eyes. But leaving it at that misses asking historical questions that are still

relevant to us as to what sustained that set of beliefs. The mixed nature of medicine as innovating, hypothesis-testing,

and information generating was an understanding and experiencing of medical practice that "worked" to provide

moral and legal coherence to experimentation in the face of reaction to perversions done in the name of research, and

uncertainty as to its legal status. Such an understanding of medicine was also resonant with the degree of decisional

and economic stewardship and independence that characterized much of medical practice.

5. Changing medicine, changing research, what role for TM?

In the 1960’s charges were made by many concerned with the safety of patients in research that new FDA

requirements for consent undermined themselves, that Nuremberg created an impossible ideal that could erode the

emphasis made by Beecher and others on the ethical design of experimentation "at the outset" and actually left

subjects uneasily vulnerable to societal imperatives, and that NIH and Public Health Service requirements

for review committees would dangerously privilege the investigator. These perceptions of ethical vulnerability in

those events gave way to quite opposite impressions and routines: normalization of IRB review and authority, the

dominance of informed consent, a progressively widening range of activity subject to IRB scrutiny, and an

assumption that FDA practices represented a scientific gold standard.

These shifts and the shift to TM resulted in large part from the rise of the commitments of bioethics, but I would

argue that both reflected and furthered other changes in the institutional arrangements and practices that shaped

research and medicine from the 1970’s forward. These were changes that stood in sharp contrast to the social

conditions of medical and research practice in the prior era. More than a change in moral insight per se, it was

research and medical practices that changed, allowing new moral visions to be compelling and possible. Research


would be increasingly considered an activity distinct from the tasks of the physician not simply because bioethicists

were uniquely perceptive, but because of how research and medical practice became more capitalized, industrialized,

standardized, and thus more easily and necessarily organized in predictable and generalizable routines. No more

cottage industries like Albright’s. An ethics of research that rested upon the distinctiveness of research vs. clinical care

was compelling when such separateness was more concretely experienced. A new set of medical and research

practices that heightened institutional, theoretical, and practical differences between them emerged as part of a larger

shift transforming medicine itself: a shift towards embracing standardized, aggregate forms of organization, practice

and knowledge typical of industries transformed by growth and capitalization. (Healy, 1997; Belkin, 1997).

What my historical sketch of the therapeutic conception suggests is that the way people talk about experimental ethics

is tightly connected to the way they talk about the nature of medical knowledge and physician effectiveness. And what

counts as valid knowledge in medicine is itself constrained by, and constrains, the economic organization and

management of health care delivery. Increased interest in so-called "evidence-based medicine" towards the end of

the 20th century brought important rigor and critical self-reflection to the medical enterprise. But it was (and is) but one

set of tools, one that relied upon aggregate measures to generate standardized, if not literally algorithmic, practices.

These methodological commitments, which are often illusorily believed to exhaust what can or should count as

relevant knowledge, emerged more importantly as part and parcel of changes in the political economy of health care

within which both managed care and the medicine-research division of labor implicit in TM made sense. They both

legitimated and eased the move to a new transparency of medicine to non-medical managers. (Timmermans and Berg,

2003). This was especially true and reflective of the American context. Ironically, agitation for a more aggregate data-

based culture of scientific medicine by voices within academic medicine in the United States undermined claims by

physicians to unique expertise. It undermined the idea that medical knowledge is generated by individual physician

experts, an idea and associated set of practices that needed to be undermined in order to think of research as

"different-in-kind". To adopt an ethics of research that separated treatment from the realm of uncertain, hypothesis-

testing, knowledge-generating activity, meant, inevitably perhaps, adopting and facilitating mutually reinforcing

economic and epistemological changes in medical practice. These were changes that risked, and risk, leaving both

medicine and research limited in their abilities to legitimately hear the messy wishes for hope, insight, reassurance,

and expertise that have often brought people to doctors.

The emerging medical marketplace and the purified separation of research and treatment portray treatment as applied, proven knowledge, as opposed to research, which is cast as probabilistic, uncertain, hypothesis-testing,

questioning. However, in many ways the opposite is true. TM grew and persists out of the frustration that despite

increased expectations of informed consent, patients still wished for things in the research setting. But how sure are we

that those things are not, nor should be, found there? How clear are we that a newly found ease in separating medical care

and research did not reflect a parallel erosion of meanings available in the receipt of health care more generally?

I am not advocating a return to folding much of what is now considered appropriately reviewed research back

within physician discretion. I share the discomfort of those who find patients relying on a sense of hope or "doing something" in a Phase I therapeutic cancer trial where little hope actually exists. It seems, though, that the stubborn sense, among research participants, of being cared for is a slippery and difficult, but perhaps essential, residue of

expectations of medical care without which research loses legitimacy and plausibility. We should pause before

breaking those connections.

A medicine of greater latitude, interpersonal responsiveness, and beneficent agency of practitioners requires

practices that also see medical knowledge as more diverse, conditional, qualitative, and closely tied to relationships of

therapeutic hope and interest. Changes in the political economy of health care and a related emphasis on making

practice more uniform through a selective version of what counts as evidence-based knowledge, idealize and isolate

medical practice as applied knowledge. While that eased the way to see research and treatment as very different things

within much of the general Western discourse on experimentation ethics, this isolation at the same time removed from

both research and treatment qualities perhaps still valued and yearned for. When TM advocates note that research

should not serve expectations of patients to give them meaning or hope or comfort or security, to what degree do they

only reflect the changes that have made it hard for medical practice in general to provide those things as well? Perhaps

a medicine of individuals, and not aggregates, cannot do research without the Therapeutic Misconception. It is not

immediately clear that such a state of affairs is the worse scenario.

Certainly a medicine of doing best for individuals has often meant doing what is best for physicians. But the

bioethics movement, in attempts at a corrective, asserted claims as it grew, especially again in the American context,


through a critique of what medicine was capable of knowing, asserting generalizable knowledge over medicine. The time has come to consider the degree to which bioethics appeared as part of this larger move towards making medicine

increasingly transparent to managers, health services researchers, accountants and managed care clerks through

standardized formulations. Bioethics itself needs to face the degree to which it is a technocratic enterprise. (Evans, 2002).

6. Misconceiving, and reconceiving, bioethics

However ultimately unsuccessful or undesirable, what I have been calling the therapeutic conception reflected a set

of coherent commitments shaped around new scrutiny of research practices that was closely tied to specifying and

advancing a physician identity that emphasized individual integrity, expertise, and judgment. A rights-based, consent-

driven culture of health care knowledge and practice responded, refreshingly, to the growing mismatch between a

world of a therapeutic conception and the many turbulent and often productive social changes in the 1960’s and

1970's (Belkin and Brandt, 2001). But the world is different yet again. The concern Beecher and his colleagues shared over the degree to which reliance on informed consent could protect subjects is making a comeback, as the degree to which reliance on the ritual of informed consent has descended into legalistic, technocratic safeguards undermining the value of IRBs attracts increased attention and criticism. In an era of managed care and increasingly algorithmic, corporative forms of medicine and research, a beneficence-driven individualized expertise as the social ideal shaping clinical care and

medical knowledge production begins to look somewhat rosier than it did as America emerged out of 1960’s and

1970’s reforms, and spurred a worldwide bioethics movement. That ideal should be revisited, not as an alternative

ideological commitment, but as a prompt to a set of questions that need to be studied, and taken seriously. Questions

such as—Just how, in the real world, do conditions of consent and care impact well-being, outcomes, continuity of

care, etc.? How, actually, does hopefulness operate, or for that matter the experience of being cared for, and how does

each get valued and exchanged in specific, vulnerable, terminal patients entertaining Phase-I clinical trials? How does

the feared multiple-agency of the doctor–investigator actually affect practice outcomes? How does that multiple-

agency fare compared to countless other competing interests facing physicians in everyday work? How do different

interventions shape the impact of these conflicts?

Accumulating work in consent for psychiatric treatment and research offers a nascent foundation for reframing

"ethical" questions in psychiatry and psychiatric research into more empirically informed conversations about the

impact and goals of systems of care (Gardner and Lidz, 2001; Candilis, 2001; Carpenter et al., 2000; Roberts et al.,

2000; Lidz et al., 1995). No doubt such conversations need robust attention to ethical purposes. But bioethics should

be a label that also describes curiosity about how ethical practices and commitments appear and function, rather than

just what they may or should be. It should reflect an exploration of the ways practices occur and change due to other

forces understandable through historical, sociological, anthropological, psychological, political, etc. analysis. It

should take stock of the question aired here: Have escalating efforts to isolate and distinguish from the rest of

medicine the vocabularies, practices, personnel, expectations, etc. of research only left each more vulnerable to

depersonalizing commodification and standardization?

That question is an empirical, not a conventionally "bioethical", question to which I have here been inclined to

answer "yes". It is a question on which I think history arguably weighs in on my side. It seems important to know if I

am right. To answer it will mean reconstituting the ways we are predominantly interested in studying and arguing over

disputes regarding the moral dimensions of certain medical procedures. The current approach predominantly spins

abstracted stories that offer moral imprimatur and ease the way for those historically changing forces, which bioethics

poorly studies or acknowledges, that transform ways of engaging the person/patient.

References

Albright, F. (1939). Letter to patient, December 26, 1939, Box #, folder #, Papers of Fuller Albright [H MS c72], Harvard Medical Library in the

Francis A. Countway Library of Medicine.

Appelbaum, P. S., Roth, L. H., & Lidz, C. (1982). The therapeutic misconception: Informed consent in psychiatric research. International Journal of

Law and Psychiatry, 5, 319–329.

Beecher, H. (1966). Ethics and clinical research. New England Journal of Medicine, 274(24), 1354–1360.

Belkin, G. S. (1997). The technocratic wish: Making sense and finding power in the medical marketplace. Journal of Health Politics Policy and

Law, 22(2), 509–532.

Belkin, G. S. (2001). Toward a historical ethics. Cambridge Quarterly of Healthcare Ethics, 10, 345–350.


Belkin, G. S., & Brandt, A. (2001). Bioethics: Using its historical and social context. International Anesthesiology Clinics, Summer, 39(3), 1–11.

Belkin, G. (2004). Moving beyond bioethics: History and the search for medical humanism. Perspectives in Biology and Medicine, Summer,

47(3), 373–385.

Candilis, P. J. (2001). Advancing the ethics of research. Psychiatric Annals, 31(2), 119–124.

Capron, A. (1999). Ethical and human rights issues in research on mental disorders that may affect decision-making capacity. New England Journal

of Medicine, 340(18), 1430–1434.

Carpenter, W. T., et al. (2000). Decisional capacity for informed consent in schizophrenia research. Archives of General Psychiatry, 57, 533–538.

Chambers, T. (1998). Retrodiction and the histories of bioethics. Medical Humanities Review, 12(1), 9–22.

Curran, W. J. (1966). The law and human experimentation. New England Journal of Medicine, 275(6), 323–325, p. 324.

Dietrich, D. (1960). Legal implications of psychological research with human subjects. Duke Law Journal, 265–274.

Evans, J. H. (2002). Playing God? Human genetic engineering and the rationalization of the bioethical debate. Chicago: University of Chicago

Press.

Fox, R. (1959). Experiment perilous: Physicians and patients facing the unknown. Glencoe, Ill.: Free Press.

Fox, T. F. (1960). The ethics of clinical trials. Medico-Legal Journal, 28, 132–141.

Freund, P. (1965). Problems in human experimentation. New England Journal of Medicine, 273(13), 687–692, p. 689.

Gardner, W., & Lidz, C. (2001). Gratitude and coercion between physicians and patients. Psychiatric Annals, 31(2), 125–129.

Healy, D. (1997). The antidepressant era. Cambridge, MA: Harvard University Press.

Hill, A. B. (1963). Medical ethics and controlled trials. British Medical Journal, 5337, 1043–1049, April 20.

Hoffmaster, B. (Ed.). (2001). Bioethics in social context (pp. 219–247). Philadelphia: Temple University Press.

Ivy, A. C. (1948). The history and ethics of the use of human subjects in medical experiments. Science, 108(1), 1–5, p. 3.

Jonas, H. (1969). Philosophical reflections on experimenting with human subjects. Daedalus, Spring, 219–247.

Ladimer, I. (1955). Ethical and legal aspects of medical research on human beings. Journal of Public Health Law, 3, 467–511.

Ladimer, I. (1957). Human experimentation—medicolegal aspects. New England Journal of Medicine, 257(1), 18–24.

Ladimer, I., & Newman, R. W. (Eds.). (1963). Clinical investigation in medicine: Legal, ethical and moral aspects. Boston: Law-Medicine

Research Institute, Boston University.

Lederer, S. (1995). Subjected to science: Human experimentation in America before the Second World War. Baltimore, MD: Johns Hopkins

University Press.

Lidz, C. W., et al. (1995). Perceived coercion in mental hospital admission: Pressures and process. Archives of General Psychiatry, 52, 1034–1039.

Martenson, R. (2001). The history of bioethics: An essay review. Bulletin of the History of Medicine and Allied Sciences, 56(2), 168–175.

McCoid, A. H. (1959). Some particular duties of medical practitioners. Vanderbilt Law Review, 12, 549–632. In Ladimer and Newman (Eds.), Clinical investigation in medicine, pp. 210–223, p. 213.

McCullough, L. (2000). Holding the present and future accountable to the past: History and the maturation of clinical ethics as a field of the

humanities. Journal of Medicine and Philosophy, 25(1), 5–11.

Means, J. H. (1958). Ward 4: The Mallinckrodt research ward of the Massachusetts General Hospital. Cambridge, MA: Harvard University Press.

MGH Committee on Human Studies (1966). Box 8, folder 63, Papers of Henry K. Beecher [H MS c64], Harvard Medical Library in the Francis A.

Countway Library of Medicine, July 21, p. 2, Minutes.

Miller, F. G., & Brody, H. (2003). A critique of clinical equipoise—therapeutic misconception in the ethics of clinical trials. Hastings Center Report,

33(3), 19–28.

Miller, M. (2000). Phase I cancer trials: A collusion of misunderstanding. Hastings Center Report, 30(4), 34–42.

National Bioethics Advisory Commission (1998). Research involving persons with mental disorders that may affect decisionmaking capacity.

Washington, D.C.: U.S. Government Printing Office.

Roberts, L. W., et al. (2000). Perspectives of patients with schizophrenia and psychiatrists regarding ethically important aspects of research

participation. American Journal of Psychiatry, 157, 67–74.

Satel, S., & Stolba, C. (2001). Who needs medical ethics? Commentary.

Schmidt, U. (2004). Justice at Nuremberg: Leo Alexander and the Nazi doctors' trial. New York: Palgrave Macmillan.

Shalit, R. (1997). When we were philosopher kings. New Republic (April 28), 24–28.

Shimkin, M. B. (1953). The problem of experimentation in human beings. Science, 117, 205–207, p. 205.

Smith, W. J. (2000). Culture of death: The assault on medical ethics in America. San Francisco: Encounter Books.

Stolberg, S. (2001). Bioethicists find themselves the ones being scrutinized. The New York Times, p. A1.

Taylor, C. (1984). Philosophy and its history. In R. Rorty, J. B. Schneewind, & Q. Skinner (Eds.), Philosophy in history (pp. 17–30). Cambridge: Cambridge University Press.

Timmermans, S., & Berg, M. (2003). The gold standard: The challenge of evidence-based medicine and standardization in health care.

Philadelphia: Temple University Press.

Trials of War Criminals Before the Nuernberg Military Tribunals (1950). Vol. 2 (pp. 181–182). Washington, D.C.: U.S. Government Printing Office.

Weindling, P. (2001). The origins of informed consent: The International Scientific Commission on Medical War Crimes and the Nuremberg Code.

Bulletin of the History of Medicine, 75(1), 37–71.