articole eficienta

80
Tolin, David F, Is cognitive-behavioral therapy more effective than other therapies?: A meta-analytic review. Clinical Psychology Review, 08 2010, vol./is. 30/6(710-720), 0272-7358 (Aug 2010) Abstract: Cognitive-behavioral therapy (CBT) is effective for a range of psychiatric disorders. However, it remains unclear whether CBT is superior to other forms of psychotherapy, and previous quantitative reviews on this topic are difficult to interpret. The aim of the present quantitative review was to determine whether CBT yields superior outcomes to alternative forms of psychotherapy, and to examine the relationship between differential outcome and study-specific variables. From a computerized literature search through September 2007 and references from previous reviews, English-language articles were selected that described randomized controlled trials of CBT vs. another form of psychotherapy. Of these, only those in which the CBT and alternative therapy condition were judged to be bona fide treatments, rather than "intent-to-fail" conditions, were retained for analysis (28 articles representing 26 studies, N =1981). Four raters identified post-treatment and follow- up effect size estimates, as well as study-specific variables including (but not limited to) type of CBT and other psychotherapy, sample diagnosis, type of outcome measure used, and age group. Studies were rated for methodological adequacy including (but not limited to) the use of reliable and valid measures and independent evaluators. Researcher allegiance was determined by contacting the principal investigators of the source articles. CBT was superior to psychodynamic therapy, although not interpersonal or supportive therapies, at post- treatment and at follow-up. Methodological strength of studies was not associated with larger or smaller differences between CBT and other therapies. Researchers' self-reported allegiance was positively correlated with the strength of CBT's superiority; however, when controlling for allegiance ratings, CBT was still associated with a

Upload: oierusonia

Post on 24-Nov-2014

141 views

Category:

Documents


4 download

TRANSCRIPT

Page 1: articole eficienta

Tolin, David F, Is cognitive-behavioral therapy more effective than other therapies?: A meta-analytic review. Clinical Psychology Review, 08 2010, vol./is. 30/6(710-720), 0272-7358 (Aug 2010)

Abstract:Cognitive-behavioral therapy (CBT) is effective for a range of psychiatric disorders. However, it remains unclear whether CBT is superior to other forms of psychotherapy, and previous quantitative reviews on this topic are difficult to interpret. The aim of the present quantitative review was to determine whether CBT yields superior outcomes to alternative forms of psychotherapy, and to examine the relationship between differential outcome and study-specific variables. From a computerized literature search through September 2007 and references from previous reviews, English-language articles were selected that described randomized controlled trials of CBT vs. another form of psychotherapy. Of these, only those in which the CBT and alternative therapy condition were judged to be bona fide treatments, rather than "intent-to-fail" conditions, were retained for analysis (28 articles representing 26 studies, N =1981). Four raters identified post-treatment and follow-up effect size estimates, as well as study-specific variables including (but not limited to) type of CBT and other psychotherapy, sample diagnosis, type of outcome measure used, and age group. Studies were rated for methodological adequacy including (but not limited to) the use of reliable and valid measures and independent evaluators. Researcher allegiance was determined by contacting the principal investigators of the source articles. CBT was superior to psychodynamic therapy, although not interpersonal or supportive therapies, at post-treatment and at follow-up. Methodological strength of studies was not associated with larger or smaller differences between CBT and other therapies. Researchers' self-reported allegiance was positively correlated with the strength of CBT's superiority; however, when controlling for allegiance ratings, CBT was still associated with a significant advantage. The superiority of CBT over alternative therapies was evident only among patients with anxiety or depressive disorders. These results argue against previous claims of treatment equivalence and suggest that CBT should be considered a first-line psychosocial treatment of choice, at least for patients with anxiety and depressive disorders. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)

Mansell W, Core processes of psychopathology and recovery: "Does the Dodo bird effect have wings?", Clinical psychology review, 25 June 2010, 1873-7811

This editorial proposes that the task of identifying common processes across disorders and across psychotherapies will be the most fruitful way to develop efficient, easily trainable and coherent psychological interventions. The article adapts the concept of the 'Dodo Bird Effect' to argue for a mechanistic, testable account of functioning, akin to other unified approaches in science. The articles in the special issue complement this perspective in several ways: (1) three articles identify common processes across disorders within the domains of anger dysregulation, sleep disruption and perfectionism; (2) one article emphasises a case conceptualisation approach that is applied across different

Page 2: articole eficienta

disorders and integrates theoretical approaches; (3) three articles focus on the utility of a control theory approach to understand the core processes of maintenance and change. Critically, there is a consensus that change involves facilitating the integration within the client's awareness of higher level, self-determined goals (e.g. insight; cognitive reappraisal) with their lower level regulation of present-moment experience (e.g. emotional openness; exposure). Taken together, these articles illustrate the benefits of a convergent rather than divergent approach to the science and practice of psychological therapy, and they strive to identify common ground across psychotherapies and across widely different presentations of psychopathology.

Berman, Jeffrey S,Reich, Catherine M, Investigator allegiance and the evaluation of psychotherapy outcome research. European Journal of Psychotherapy and Counselling, 03 2010, vol./is. 12/1(11-21), 1364-2537;1469-5901 (Mar 2010)Author(s):Abstract:Considerable evidence has demonstrated that the beliefs of researchers can inadvertently influence research findings. The possibility of this type of bias is of special concern in studies evaluating the outcome of psychotherapy, where investigators frequently have marked allegiances to particular therapies and these allegiances have been found to correlate substantially with the pattern of results. In this article we discuss the evidence concerning investigator allegiance in psychotherapy research, emphasize the need to distinguish between this factor as a potential confound and a proved causal effect, and outline strategies that have been suggested for researchers to minimize the potential for bias both when designing future research and drawing conclusions from existing evidence. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)

Budge, Stephanie,Baardseth, Timothy P,Wampold, Bruce E,Fluckiger, Christoph, Researcher allegiance and supportive therapy: Pernicious affects on results of randomized clinical trials. European Journal of Psychotherapy and Counselling, 03 2010, vol./is. 12/1(23-39), 1364-2537;1469-5901 (Mar 2010)Author(s):Abstract:Allegiance effects have been discussed, debated, and tested over the past several decades. The evidence clearly shows that allegiance affects the findings and representation of therapies that are considered efficacious. We argue that allegiance effects are evident in randomized controlled trials of supportive therapy. Supportive therapy is an established and bona-fide therapy but its implementation as a control group for non-specific elements is a treatment that does not resemble supportive therapy as would be used therapeutically. Allegiance effects in the use of supportive therapy are caused by the design of supportive therapy controls, the therapists who deliver supportive therapy, and the patients who are enrolled in the trials. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)

Page 3: articole eficienta

Botella, Luis,Beriain, Diana. Allegiance effects in psychotherapy research: A constructivist approach. European Journal of Psychotherapy and Counselling, 03 2010, vol./is. 12/1(55-64), 1364-2537;1469-5901 (Mar 2010)Author(s):Abstract:This paper examines the concept of allegiance effects in psychotherapy research from a constructivist approach. After considering their role in outcome and process research, a constructivist explanation of them is proposed. It is also suggested that traditional ways to control them, while necessary and sound, may not be enough. Alternatively, a call for methodological pluralism in psychotherapy research is made, especially regarding the inclusion of qualitative, hermeneutic, phenomenological and discovery oriented case studies that privilege the voice of clients and not only the researchers favoured constructs. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)

Voracek, Martin,Tran, Ulrich S,Fisher, Maryanne L, Evolutionary psychology's notion of differential grandparental investment and the Dodo Bird Phenomenon: Not everyone can be right. Behavioral and Brain Sciences, 02 2010, vol./is. 33/1(39-40), 0140-525X;1469-1825 (Feb 2010)Abstract:Presents open peer commentary on an article in the current issue by Coall and Hertwig (see record 2010-08500-001), who addressed the question of whether the help that grandparents provide, which may have benefited grandchildren in traditional and historical populations, still yields benefits for grandchildren in industrialized societies. The current authors note that integration of different lines of research concerning grandparental investment appears to be both promising and necessary. However, it must stop short when confronted with incommensurate arguments and hypotheses, either within or between disciplines. Further, some hypotheses have less plausibility and veridicality than others. This point is illustrated with results that conflict previous conclusions from evolutionary psychology about differential grandparental investment. (PsycINFO Database Record (c) 2010 APA, all rights reserv

Pence, Steven L Jr.,Sulkowski, Michael L,Jordan, Cary,Storch, Eric A, When exposures go wrong: Trouble-shooting guidelines for managing difficult scenarios that arise in exposure-based treatment for obsessive-compulsive disorder. American Journal of Psychotherapy, 2010, vol./is. 64/1(39-53), 0002-9564 (2010)Author(s):Abstract:Cognitive-behavioral therapy (CBT) with exposure and ritual prevention (ERP) is widely accepted as the most effective psychological treatment for obsessive compulsive disorder (OCD). However, the extant literature and treatment manuals cannot fully address all the variations in client presentation, the diversity of ERP tasks, and how to negotiate the inevitable therapeutic challenges that may occur. Within this article, we attempt to address common difficulties encountered by therapists employing exposure-based therapy in areas related to: 1) when clients fail to habituate to their anxiety, 2) when clients misjudge how much anxiety an exposure will actually cause, 3) when incidental exposures happen in session, 4) when mental or covert rituals interfere with treatment,

Page 4: articole eficienta

and 5) when clients demonstrate exceptionally high sensitivities to anxiety. The goal of this paper is to bridge the gap between treatment theory and practical implementation issues encountered by therapists providing CBT for OCD. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)

Duncan, Barry L, Some therapies are more equal than others?PsycCRITIQUES, 2010, vol./is. 55/37, 1554-0138 (2010)Author(s):Abstract:Comments on Thomas L. Rodebaugh's review (see record 2010-12182-001) of Barry L. Duncan, Scott D. Miller, Bruce E. Wampold, and Mark A. Hubble's edited book, The heart and soul of change: Delivering what works in therapy (2nd ed.) (see record 2009-10638-000). In his review, Rodebaugh candidly admits his allegiance to empirically supported treatments, which perhaps explains the myopic lens used to examine the book. The dodo verdict ("Everybody has won and all must have prizes") still perfectly describes the state of affairs in psychotherapy--all bona fide approaches, in spite of vociferously argued differences, appear to work equally well. Rodebaugh's assertion that one must examine specific treatments for specific disorders to uncover differences between treatments ignores the many direct comparisons that have not yielded any differences for specific disorders, such as the Treatment of Depression Collaborative Research Program, Project Match, and the Cannabis Youth Treatment Project, to mention a few (see these program descriptions in The Heart and Soul of Change). No wear in the book is there any suggestion that the dodo verdict implies that we should "leave well enough alone" regarding research, that (perhaps the most egregious comment) anything goes in the consulting room, or that there is little point to training. Quite the contrary. The book advocates for a shift toward research and training about what works and how to deliver it, and away from a sole reliance on comparative, "battle of the brands" clinical trials. Dismissing the book on the basis that some therapies are more equal than others is reminiscent of another set of animals in another classic story. It's time to transcend the polemics and instead focus on what works with the client in my office now. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

Rodebaugh, Thomas L, The heart and soul of the dodo. PsycCRITIQUES, 2010, vol./is. 55/28, 1554-0138 (2010)Abstract:Reviews the book, The Heart and Soul of Change: Delivering What Works in Therapy (2nd ed.) edited by Barry L. Duncan, Scott D. Miller, Bruce E. Wampold, and Mark A. Hubble (see record 2009-10638-000). In this book, considerable attention is paid to establishing that Saul Rosenzweig was the original articulator of the dodo bird hypothesis: All psychotherapies work about equally effectively. The dodo bird's statement is not meant to be a hypothesis: It is meant to quiet the animals. Taken literally, the declaration regarding winners and prizes is clearly intended as nonsensical. The dodo, otherwise best known as a dead bird, is thereby made immortal as a purveyor of nonsense. The dodo is a strong force in The Heart and Soul of Change. The book is a series of chapters by different authors but maintains a structure largely focused on the dodo bird hypothesis, its historical context, the research that can be taken to support it,

Page 5: articole eficienta

and its implications for practice. Much of the rest of the book consists of further demonstrations that the dodo bird hypothesis is the most sensible interpretation of the data, set alongside critiques of empirically supported therapies (ESTs) and policies that support their adoption. Some later chapters focus primarily on what should be the next steps given that the dodo bird's viewpoint is better supported than is a viewpoint that emphasizes ESTs. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

Johansson H, The effectiveness of psychologically/psychodynamically-oriented, pharmacological and combination treatment in a routine psychiatric outpatient setting. Internet Journal of Mental Health, 01 January 2010, vol./is. 6/2(0-9), 15312941Author(s):Abstract:Background: This was an outcome study in a routine psychiatric outpatient unit in Sweden where the treatment the patients received was determined by normal routines at the unit and performed by members of the staff. The aim of the present study was to study the effectiveness of psychologically/psychodynamically-oriented-, pharmacological treatment and its combination. Method: Newly admitted patients were diagnosed according to the ICD-10 and completed questionnaires regarding symptoms and interpersonal problems at the beginning and termination of their treatment (n = 76). Follow-up assessments were conducted 18 months after treatment began. An ANCOVA was used to calculate differences between groups. Effect sizes and clinical significance were also calculated.Results: The results showed that there was a significant treatment effect for all treatment conditions in many outcome variables. Further, all three treatment groups showed equal effectiveness. However, the combination treatment used significantly more treatment sessions than the other two groups. The results also showed that many of the patients had considerable problems after the treatment.Conclusion: The results indicate that the patients were offered treatment, and achieved what they required in order to reach a positive outcome. It appears that the treatment was determined by responsiveness and regulatory processes of both the staff and patients, and that the patients acquired what they needed to accomplish with a sufficient outcome.

Budd R,Hughes I, The Dodo Bird Verdict--controversial, inevitable and important: a commentary on 30 years of meta-analyses. Clinical psychology & psychotherapy, November 2009, vol./is. 16/6(510-22), 1099-0879Author(s):Abstract:In this article, the assertion that different psychological therapies are of broadly similar efficacy-often called the 'Dodo Bird Verdict'-is contrasted with the alternative view that there are specific therapies that are more effective than others for particular diagnoses. We note that, despite thirty years of meta-analytic reviews tending to support the finding of therapy equivalence, this view is still controversial and has not been accepted by many within the psychological therapy community; we explore this from a theory of science perspective. It is further argued that the equivalence of ostensibly different therapies is an inevitable consequence of the methodology that has dominated this field of investigation; namely, randomised controlled trials [RCTs]. The implicit assumptions of RCTs are analysed and it is argued that what we know about psychological therapy indicates that it

Page 6: articole eficienta

is not appropriate to treat 'type of therapy' and 'diagnosis' as if they were independent variables in an experimental design. It is noted that one logical consequence of this is that we would not expect RCTs to be capable of isolating effects that are specific to 'type of therapy' and 'diagnosis'. Rather, RCTs would only be expected to be capable of identifying the non-specific effects of covariates, such as those of therapist allegiance. It is further suggested that those non-specific effects that have been identified via meta-analysis are not trivial findings, but rather characterise important features of psychological therapy.Language:ENG

The Dodo Bird Verdict--Controversial, inevitable and important: A commentary on 30 years of meta-analyses.Citation:Clinical Psychology & Psychotherapy, 11-12 2009, vol./is. 16/6(510-522), 1063-3995;1099-0879 (Nov-Dec 2009)Author(s):Budd, Rick,Hughes, IanAbstract:In this article, the assertion that different psychological therapies are of broadly similar efficacy--often called the 'Dodo Bird Verdict'--is contrasted with the alternative view that there are specific therapies that are more effective than others for particular diagnoses. We note that, despite thirty years of meta-analytic reviews tending to support the finding of therapy equivalence, this view is still controversial and has not been accepted by many within the psychological therapy community; we explore this from a theory of science perspective. It is further argued that the equivalence of ostensibly different therapies is an inevitable consequence of the methodology that has dominated this field of investigation; namely, randomised controlled trials [RCTs]. The implicit assumptions of RCTs are analysed and it is argued that what we know about psychological therapy indicates that it is not appropriate to treat 'type of therapy' and 'diagnosis' as if they were independent variables in an experimental design. It is noted that one logical consequence of this is that we would not expect RCTs to be capable of isolating effects that are specific to 'type of therapy' and 'diagnosis'. Rather, RCTs would only be expected to be capable of identifying the non-specific effects of covariates, such as those of therapist allegiance. It is further suggested that those nonspecific effects that have been identified via meta-analysis are not trivial findings, but rather characterise important features of psychological therapy. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)

Siev, Jedidiah,Huppert, Jonathan D,Chambless, Dianne L, The Dodo Bird, treatment technique, and disseminating empirically supported treatments. the Behavior Therapist, 04 2009, vol./is. 32/4(69, 71-76), 0278-8403 (Apr 2009)Author(s):Abstract:The aim of this article is to provide some historical context in terms of previous attempts to respond to these contentions and to present an update on recent research bearing

Page 7: articole eficienta

directly on the Dodo Bird verdict and the assertions regarding variance accounted for by active ingredients (e.g., technique). Evidence for the claim that all psychotherapies are equally efficacious derives from meta-analyses that combine various treatments for various disorders. Therapist effects have been discussed on and off for over 30 years. More recently, some have shown that differences between therapists in treatment outcome may be decreased with manualized treatments. However, the question of what makes therapists different from each other remains, and one answer may be technique. Some therapists are likely more adept than others at using some techniques, formulating treatment plans, encouraging their patients to do difficult exposures, etc., even within CBT. (PsycINFO Database Record (c) 2010 APA, all rights reserved)Language:English

Overholser, James C,Braden, Abby,Fisher, Lauren You've got to believe: Core beliefs that underlie effective psychotherapy. Journal of Contemporary Psychotherapy, 12 2010, vol./is. 40/4(185-194), 0022-0116;1573-3564 (Dec 2010)Author(s):

Abstract:A mixture of core beliefs may lay the foundation for effective psychotherapy. Sincere trust in these beliefs may help to promote therapeutic change. The therapist must have faith in the power of words to promote change. Clients usually change in a gradual manner, and the initial plan for therapy can be simplified by focusing on strategies for changing actions and attitudes. Also, therapy can help to improve various aspects of clients' intimate relationships. However, before attempting to promote therapeutic change, it is important for the therapist to begin by understanding the client's life situation, current distress, and natural tendencies. Clients benefit from emotional tolerance of stressors by recognizing that many negative life events turn out better than initially expected. A tendency to dwell on past events can perpetuate problems, while it can be more helpful to accept and grow from negative events. Therapists are encouraged to view a client's emotions as natural reactions, not deviant dysfunctions that need to be blocked or suppressed through medications. In a similar manner, most labels, including many psychiatric diagnoses, pose a danger through societal discrimination and self-stigma. When therapists adopt these core beliefs, they can more effectively help clients move forward, making adaptive psychological changes. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)Language:English

Cooper, Mick, The challenge of counselling and psychotherapy research.Counselling & Psychotherapy Research, 09 2010, vol./is. 10/3(183-191), 1473-3145 (Sep 2010)Author(s):Abstract:Aims: The purpose of this commentary is to argue that the value of counselling and psychotherapy research lies, not only in what it teaches us as therapists, but also in its

Page 8: articole eficienta

ability to challenge us and our assumptions. Method: The paper identifies eight beliefs that may be prevalent in sections of the counselling and psychotherapy community, and presents evidence that challenges them. Findings: While many of our beliefs may hold true for some clients some of the time, the research evidence suggests that they are unlikely to be true for all clients all of the time. Discussion: By questioning and challenging therapists' a priori assumptions, research findings can help counsellors and psychotherapists to be less set in their beliefs; and more open to the unique experiences, characteristics and wants of each individual client. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)The challenge of counselling and psychotherapy research.Citation:Counselling & Psychotherapy Research, 09 2010, vol./is. 10/3(183-191), 1473-3145 (Sep 2010)Author(s):Cooper, MickAbstract:Aims: The purpose of this commentary is to argue that the value of counselling and psychotherapy research lies, not only in what it teaches us as therapists, but also in its ability to challenge us and our assumptions. Method: The paper identifies eight beliefs that may be prevalent in sections of the counselling and psychotherapy community, and presents evidence that challenges them. Findings: While many of our beliefs may hold true for some clients some of the time, the research evidence suggests that they are unlikely to be true for all clients all of the time. Discussion: By questioning and challenging therapists' a priori assumptions, research findings can help counsellors and psychotherapists to be less set in their beliefs; and more open to the unique experiences, characteristics and wants of each individual client. (PsycINFO Database Record (c) 2010 APA, all rights reserved) (journal abstract)

Copyright 1987 by the American Psychological Association, Inc.0022-006X/87/J00.75

John R. Weisz , Bahr Weiss, Mark D. Alicke, M. L. Klotz, Effectiveness of Psychotherapy With Children and Adolescents:A Meta-Analysis for Clinicians, Journal of Consulting and Clinical Psychology, !987,Vol. 55, No.4. 542-549

University of North Carolina at Chapel HillUniversity of Florida at GainesvilleUniversity of North Carolina at Chapel HillHow effective is psychotherapy with children and adolescents? The question was addressed by metaanalysisof 108 well-designed outcome studies with 4-18-year-old participants. Across various outcomemeasures, the average treated youngster was better adjusted after treatment than 79% of thosenot treated. Therapy proved rnore effective for children than for adolescents, particularly when thetherapists were paraprofessionals (e.g., parents, teachers) or graduate students. Professionals (withdoctor's or master's degrees) were especially effective in treating overcontrolled problems (e.g., phobias,shyness) but were not more effective than other therapists in treating undercontrolled problems(e.g., aggression, impulsivity). Behavioral treatments proved more effective than nonbehavioral treatmentsregardless of client age, therapist experience, or treated problem. Overall, the findings revealedsignificant, durable effects of treatment that differed somewhat with client age and treatment methodbut were reliably greater than zero for most groups, most problems, and most methods.In the late 1970s and early 1980s, experts raised diverse concernsabout child and adolescent psychotherapy research. Somecomplained of insufficient information about therapy outcomes(e.g., Achenbach, 1982). Others suggested that outcome studiesrevealed few or no effects of therapy (e.g., Gelfand, Jenson, &Drew, 1982). Still others (e.g., Barrett, Hampe, & Miller, 1978)argued that researchers were too preoccupied with the globalquestion of psychotherapy effects per se and should insteadstudy outcomes as a function of treatment approach, type of

Page 9: articole eficienta

child, and therapist characteristics. In recent years, the prospectsfor addressing these concerns have improved considerably.With the development of meta-analytic techniques (Smith,Glass, & Miller, 1980), it is now possible to aggregate findingsacross multiple studies and to systematically compare findingsacross dimensions such as treatment approach and client characteristics.The basis for analysis is the effect size, which is computedseparately for the treatment group versus control groupcomparisons of interest. The effect size is an estimate of themagnitude of the treatment effect (treatment group versus controlgroup scores on measures of psychological functioning) adjustedfor sample variability.Meta-analyses of adult outcome studies (Shapiro & Shapiro,This project was facilitated by support from the North Carolina Divisionof Mental Health, Mental Retardation, and Substance Abuse Services(Department of Human Resources) and by National Institute ofMental Health Grant R01 MH 38240-01. We are grateful to DavidLangmeyer for his support and suggestions, to Gary Bornstein, LarryCrum, and Lynn Fisher for their assistance in data collection and computation,and to Thomas Achenbach for his thoughtful review of anearlier draft of this article.Correspondence concerning this article should be addressed to JohnR. Weisz, Department of Psychology, Davie Hall 013A, University ofNorth Carolina, Chapel Hill, North Carolina 27514.1982; Smith & Glass, 1977) were recently complemented byCasey and Herman's (1985) meta-analysis of studies with childrenaged 12 years and younger. In the 64 studies that includedtreatment-control comparisons, the average effect size was ,71,indicating a reliable advantage for treatment over no treatment.Although the percentage of boys in the samples was negativelycorrelated with outcome, Casey and Herman found no substantialdifferences as a function of age or of group versus individualtherapy. Even initial findings showing the superiority of behavioralover nonbehavioral treatments were judged to be artifactual,the result of confounding type of treatmeit with outcomemeasure characteristics. The findings suggested that child therapyis demonstrably effective (and about equally so) across agegroups and types of therapy. However, before we can concludethat age level is unrelated to therapy effects, we must sampleadolescents as well as children. Moreover, as Parloff (1984)noted, findings suggesting that different therapies work equallywell deserve close scrutiny because their implications are so significant.Might therapy effects be different for adolescents than forchildren? Various cognitive and social developments (see Rice,1984), including the advent of formal operations (see Piaget,1970), make adolescents more cognitively complex than children,less likely to rely on adult authority, and seemingly lesslikely to adjust their behavior to fit societal expectations (seeKendall, Lerner, & Craighead, 1984). Thus, it is possible thatadolescents may be more resistant than children to therapeuticintervention. On the other hand, adolescents are more likelythan children to comprehend the purpose of therapy and to understandcomplex, interactive psychological determinants of behavior,which could make them better candidates for therapythan children. Because most child clinicians actually treat bothchildren and adolescents, a comparison of therapy effects in542EFFECTS OF CHILD AND ADOLESCENT PSYCHOTHERAPY 543these two age groups could have implications for their work. Toeffect this comparison, we reviewed studies of youngsters aged4-18 years.We also reexamined the question of whether therapy methodsdiffer in effectiveness. Casey and Herman (1985) found that anapparent superiority of behavioral over nonbehavioral methodslargely evaporated when they excluded cases in which outcomemeasures "were very similar to activities occurring duringtreatment" (p. 391). Our own review suggested that many ofthe comparisons that were dropped may actually have involvedsound measurement of therapy effects. Consider, for example,interventions in which phobic children are exposed to modelswho bravely approach animals that the phobic children fear.The most appropriate outcome assessment may well be one thatis similar to therapy activities-behavioral tests of the children'sability to bravely approach the animals. Certainly, such testsmay be less than ideal; for example, they may involve contrivedcircumstances that differ from the clinical situation of primaryinterest. However, in many such tests, children who performthe target behavior may well be displaying not a measurementartifact but real adaptive behavior. On the other hand, some ofthe Casey and Herman (1985) exclusions seem quite appropriate.

Page 10: articole eficienta

For example, researchers who use the Matching FamiliarFigures Test (MFFT) to teach reflectivity should not use theMFFT to assess outcomes because the ultimate goal is broaderthan improved MFFT performance. In this case, the similaritybetween treatment activity and outcome measure is inappropriatebecause it is unnecessary.To further examine the issue of behavioral-nonbehavioraldifferences, we distinguished between two kinds of comparisonsin which outcome measures were similar to training activities:(a) comparisons in which such similarity was necessary for afair and valid assessment and (b) comparisons in which suchsimilarity was unnecessary and posed a risk of artifactual findings.In a key behavioral-nonbehavioral contrast, we includedcomparisons in the first group but not those in thesecond.Our third objective was to examine treatment effects as afunction of treated problem. However, we focused not only onthe specific problem categories used in other meta-analyses(e.g., phobias, impulsivity) but also on the two overarching,broadband categories most often identified in factor analyticresearch (Achenbach, 1982; Achenbach & Edelbrock, 1978):overcontrolled (e.g., phobias, shyness) and undercontrolled(e.g., aggression, impulsivity).Finally, we questioned whether therapy effects differ withlevel of therapist training. The evidence thus far is mixed but isoften discouraging for trained therapists in general (see Auerbach& Johnson, 1977; Parloff, Waskow, & Wolfe, 1978). However,there is little summary evidence on therapists who workwith young clients in particular, which is an unfortunate gapgiven the substantial time and resources invested in the professionaltraining of child clinicians. We attempted to fill this gap.Recognizing that the effects of training might differ dependingon client age, on the treatment method used, or on the problembeing treated, we also explored interactions of training witheach of these factors.MethodDefining PsychotherapyWe defined psychotherapy as any intervention designed to alleviatepsychological distress, reduce maladaptive behavior, or enhance adaptivebehavior through counseling, structured or unstructured interaction,a training program, or a predetermined treatment plan. We excludedapproaches involving drug administration, reading materialonly (bibliotherapy), teaching or tutoring only to increase knowledge ofa specific subject, moving youngsters to a new living situation (e.g., afoster home), and efforts solely to prevent problems to which youngsterswere deemed at risk.We did not require that psychotherapy be conducted by fully trainedprofessionals. Some schools of thought hold that extensive professionaltraining is not required for effective interventions but that parents,teachers, or siblings may function as effective change agents. This suggestedto us that, rather than prejudge the issue, we should treat theimpact of therapist training as an empirical question in the analysis.Literature SearchSeveral approaches were combined to identify relevant publishedstudies. A computer search was carried out using 21 psychotherapyrelatedkey words and synonyms, and the resulting data were crossedwith appropriate age group and topic constraints.1 This helped us torule out the myriad of articles that merely described or advocated certaintherapy methods. The initial pool of 1,324 articles was reduced instepwise fashion by using title, abstract, and method-section informationto select only those studies that met our inclusion requirements.We used three other procedures to enhance the comprehensiveness ofour survey. First, all articles cited in the meta-analyses by Smith, Glass,and Miller (1980) and by Casey and Herman (1985) were surveyed andincluded if they met our selection criteria. Second, Psychological Abstractsentries from January 1970 to September 1985 were searched byhand. Third, four journals accounting for the majority of appropriatearticles in the preceding steps were searched, issue by issue, for the sametime period. These were Behavior Therapy, the Journal of AbnormalPsychology, the Journal of Consulting and Clinical Psychology, and theJournal of Counseling Psychology. The result of the search was a poolof 108 controlled studies of psychotherapy outcomes among childrenand adolescents that met our criteria.2 The psychological literature accountedfor more of the studies than the psychiatry, social work, or nursingliterature. Of the 108 studies, 24 (22%) were also included in theSmith et al. (1980) analysisand 32 (30%) were also included in the Caseyand Herman (1985) analysis.Subject PopulationThe studies focused on the prekindergarten through secondary school

Page 11: articole eficienta

age range (i.e., 4-18 years). Across the 108 studies, mean age was 10.23(SD = 4.00). Approximately 66% of the youngsters sampled were male.1 The 21 psychotherapy key words and synonyms were client-centered,contract- (ing, systems, etc.), counseling, cotherapy, dream analysis,insight-, intervention-, model-, modinca-, operant-, paradox-, psychoanaly-,psychodrama-, psychothera-, reinforce-, respondent, roleplaying,therap-. training, transactional, and treatment. The age groupconstraints were adolescen-, child-, juvenile-, pre-adolescen-, andyouth-. The evaluation-oriented topic constraints were assess-, comparison,effect-, efficacy, evaluat-, influence, impact, and outcome.2 A list of the studies included in this meta-analysis is available fromthe first author for a $5 fee to cover printing, postage, and handling.544 WEISZ, WEISS, ALICKE, AND KLOTZTo be consistent with other reviews (e.g., Casey & Berman, 1985; Smith& Glass, 1977), we included studies focusing on a broad variety of psychologicalor adjustment problems. However, we excluded mental retardation;underdeveloped reading, writing, or knowledge of specificschool subjects; problems involving seizures; and physically disablinghandicaps. Problems that have been attributed to physiological causesbut for which etiology has not been well-established (e.g., attentiondeficit, hyperactivity, and learning disability) were included providedthat a behavioral or psychological problem (e.g., impulsivity) was actuallyaddressed in treatment.Design and Reporting RequirementsWe required that a study compare a treated group with an untreatedor minimally treated control group and that the control condition providelittle more than attention to the youngsters. We classified controlgroups that had provided alternate treatment or one element of a fulltreatment package as treatment groups. If such treatment constitutedthe only control condition included, the study was dropped. We alsoexcluded studies that used subjects as their own controls in single-subjector within-subject designs. Such studies generate an unusual formof effect size (e.g., based on intrasubject variance, which is not comparableto conventional variance statistics) and do not appear to warrantequal weighting with studies that include independent treatment andcontrol samplesClassification and Coding SystemsStudies were coded for sample, treatment, and design characteristics,with some coding systems patterned after those of Casey and Berman(1985).3 Coding and effect size calculation were carried out independentlyto avoid bias. One fourth of the studies were randomly selectedfor independent coding by two judges.Treatment approaches. Treatment methods were classified using thethree-tiered system shown in Table 1. Tier 1 included the broad categoriesof behavioral and nonbehavioral. Tier 2 included the subcategories(e.g, respondent procedures) grouped within each Tier 1 category. Tier 3included fine-grained descriptors (e.g., extinction); only the behavioralstudies could be classified this finely. Despite what were often quite limiteddescriptions of treatment methods, the two raters achieved kappasof .74, .71, and .78 on Tiers 1, 2, and 3, respectively. Two of the 163comparisons (1%) were described too vaguely to be coded. We alsocoded treatment approaches as group-administered or individually-administered(« = .92).Target problem. Treated problems were coded using the two-tieredsystem shown in Table 3. At the most general level, problems weregrouped into the two broadband categories most often identified in factoranalyses of child and adolescent behavior problems: undercontrolled(e.g., aggressive, acting out, or externalizing behavior) and overcontrolled(e.g., shy or withdrawn, phobic, or internalizing behavior; seeAchenbach & Edelbrock, 1978). Problems not fitting either categorywere coded as other. The second tier consisted of descriptive subcategories(e.g., shy, withdrawn; phobias, specific fears). The two ratersachieved kappas of .94 and .86 for Tiers 1 and 2, respectively. In 9 ofthe 163 group comparisons (6%), the problems were described toovaguely to be coded.Outcome measures We used Casey and Herman's (1985) system tocode whether outcome measures were similar to treatment activities(our K = .82). As noted previously, we also carried out one further codingof outcome measures that were rated similar to treatment activities:We coded for whether the similarity was necessary for a fair assessment(given the treatment goals) or unnecessary (K = .81). We also used Caseyand Herman's (1985) category systems for coding outcome measuresTable 1Mean Effect Size for Each Therapy TypeNo.treatmentTherapy type groupsBehavioralb

Operant

Page 12: articole eficienta

Physical reinforcersConsultation in operant methodsSocial/verbal reinforcementSelf-reinforcementCombined physical and verbalreinforcementMultiple operant methodsRespondentSystematic desensitizationRelaxation (no hierarchy)Extinction (no relaxation)Combined respondentModelingO'Connor film0

Live peer modelLive nonpeer modelNonlive peer modelNonlive nonpeer modelSocial skills trainingCognitive/cognitive behavioralMultiple behavioralNonbehavioral "Client-centered/nondirectiveInsight-oriented/psychodynamicDiscussion group3981662321783322546392 510102034Effectsize.78.92.77.78.33.75.72.75.65.431.46.431.192.901.25.62.79.29.90.681.04.56.01.18P".0001.07.0001.002.40.01

Page 13: articole eficienta

.07

.0002

.01

.05

.14

.39

.003

.27

.02

.26

.0001

.21

.04

.0004

.0002

.0001

.98

.18Note. Because some descriptions of treatment methods were too vagueto be classified, not all Ns sum as expected.a The probability that a particular effect size is greater than zero reflectsthe variability of effect sizes across the category of comparisons beingsampled in addition to the number of studies and the mean effect size.Thus, where effect sizes are quite variable across a category, the p valuemay be low despite a high mean effect size and a substantial pool ofstudies. A Bonferroni correction applied to the tests in this table indicatedthat probability values ^ .002 should not be regarded as statisticallysignificant.bN= 126; effect size = . 88; p<. 0001.c The O'Connor film shows a child engaging in social entry behaviorOne study using this film reported an unusually high effect size (9.5);with this study dropped, effect size = .77.d N = 28; effect size = .42; p < .0001.into type and source. Seven types were included, and five types occurredwith sufficient frequency to warrant kappa calculations. The categorieswere (a) fear/anxiety (K - .98); (b) cognitive skills (« - .87); (c) globaladjustment (K = .83); (d) social adjustment (K = .91); (e) achievement(including school grades and achievement test scores; * = .98) (f) personality(including scales measuring attitudes and beliefs); and (g) self-3 We used Casey and Berman's (1985) coding schemes for (a) whetheroutcome measures were similar to activities occurring during treatment,(b) type of outcome measure, and (c) source of outcome measure.In addition, we distinguished between behavioral and nonbehavioraltherapies, as did Casey and Berman, but we also distinguished amongsubtypes within each broad category.EFFECTS OF CHILD AND ADOLESCENT PSYCHOTHERAPY 54STable 2Mean Effect Size for Behavioral andNonbehavioral TreatmentsNonbe-BehavioralAnalysis MAll comparisons .88Omitting therapy-likeoutcomes .6 1Omitting unnecessarytherapy-like outcomes .93N1263494havioralM.44.51.45N t27 2.1422 .6424 2.09P<.05.52<.05Note. All six effect size means were significantly different from zero (allps < .0006).concept. The kappa calculation for the full system was .90. Casey andBerman coded eight sources of outcome measures in their system: (a)observers (* = .90); (b) therapists; (c) parents (« = .96); (d) subject's own

Page 14: articole eficienta

performance (K = .9 l);(e) expert judges; (f) peers (a = . 90); (g) teachers(>: = .97); and (h) self-report by subjects (* = .96). The kappa calculationfor the full system was .92.Therapist training. We classified therapists according to level of training.Levels included (a) professionals who held a doctor's or master'sdegree in psychology, education, or social work; (b) graduate studentswho were working toward advanced degrees in psychology, education,or social work; and (c) paraprofessionals who were parents, teachers, orothers lacking mental-health-related graduate training but trained toadminister the therapy.Calculation of Effect SizesEffect sizes were estimated using the procedures of Smith et al. (1980,Appendix 7). In each calculation, the mean posttherapy treatmentgroup-control group difference was divided by the control group standarddeviation.* Most studies in our pool included multiple outcomemeasures, and a number of studies included more than one treatmentcondition. Thus, in most cases, a study initially produced numerouspossible effect sizes. To retain all effect sizes in our analysis would haveresulted in disproportionate weighting of those studies with the mostmeasures and groups. Several solutions to this problem are available(e.g., Glass, McGaw, & Smith, 1981). We chose to collapse effect sizesacross outcome measures except in analyses comparing such measures.However, the comparison of treatment procedures was central to theoverall analysis, and separate treatments within studies appeared sufficientlyindependent to warrant separate attention. Consequently, foreach study we computed one effect size estimate for each of the treatmentconditions included. The 108 studies included an average of 1.54therapy conditions for a total of 163 effect sizes over the entire pool.Some studies included both follow-up and posttreatment assessmentsof therapy effects. We describe follow-up findings separately in a finalsection.ResultsOverview of Procedures for AnalysisTwo problems often present in meta-analyses are that mostgroup comparisons involve tests of main effects alone and thatkey variables (e.g., treatment method and treated problem) areconfounded (see Glass & Kliegl, 1983; Mintz, 1983). The fewcases that have attempted more controlled comparisons haveoften dropped substantial portions of available data. Here, weselected an approach intended to use all the data available whileavoiding undue risk of either Type I or Type II error. MinimizingType II error is particularly important in meta-analysesgiven its potential heuristic, hypothesis-generating value.The first wave of analysis focused planned comparisons onour four factors of primary interest; age group, therapy type,target problem type, and therapist training. For these analyses,we tested the four main effects. We then tested each main effectfor its robustness, using (a) general linear models (GLM) proceduresthat eliminated (i.e., controlled for) the effects of each ofthe other three factors (see Appelbaum & Cramer, 1974) and(b) tests of whether any of the main effects were qualified byinteractions with any of the other three factors.5 We also testedthe robustness of the therapy type effect using the Casey andBerman (1985) procedure followed by our own revised procedure.For all other group comparisons, we applied a Bonferronicorrection (Neter & Wasserman, 1974), which set the alpha at.01. Bonferroni corrections were also applied to each family oftests to compare obtained effect sizes to the null hypothesis ofzero (see Tables 1, 3, and 4).Overall Effect SizeAcross the 163 treatment-control comparisons, the meaneffect size was 0.79 (significantly different from zero, p < .0001).The average treated youngster was placed at the 79th percentileof those not treated. Of the 163 effect sizes, only 10 (6%) werenegative, indicating an adverse effect of treatment. The meaneffect size of 0.79 was comparable to those reported in earliermeta-analyses of therapy effects among children (0.71 in Casey& Berman, 1985), adults (0.93 in Shapiro & Shapiro, 1982),and mixed age groups (0.68 in Smith & Glass, 1977).Preliminary Check for Sex EffectsBefore proceeding to the group comparisons of primary interest,we checked the relation between effect size and gendercomposition of treated groups. For 99 of the treatment-controlcomparisons, information was sufficient to reveal whether amajority of the treatment group comprised male (A' = 72) orfemale (A' = 27) participants. Effect sizes averaged 0.80 for themale majority groups and 1.11 for the female majority groups(p value for difference = .33). Studies that did not report gendercomposition averaged an effect size of 0.55.

Page 15: articole eficienta

4 Some researchers (e.g., Casey & Berman, 1985; Hedges, 1982) favorthe use of a pooled treatment and control group standard deviation asdenominator. If one consequence of therapy is an increase in behavioralvariability, as some researchers have suggested (e.g., Bergin & Lambert,1978), such pooling can cause interpretational and statistical problems(see Smith et al., 1980), which we sought to avoid.* We considered using full factorial model ANOVAS, but cell samplesizes were too unbalanced to yield interpretable results and some wereso small that tests of 3-way and 4-way interactions were impossible.546 WEISZ, WEISS, ALICKE, AND KLOTZAge LevelA question of primary interest was whether the impact oftherapy differed for children and adolescents. The mean effectsize for the 98 treatment-control comparisons involving children(ages 4-12 years) was 0.92 (82nd percentile), which wassignificantly larger than the mean of 0.58 (72nd percentile) forthe 61 comparisons involving adolescents (ages 13-18 years),((157) = 2.17, p < .05. The correlation between age and effectsize was -0.21 (p < .01) over the entire sample; the coefficientwas —0.17 (p < .10) for effect sizes involving children and 0.15(ns) for those involving adolescents.We next tested the robustness of the age group differences byusing eliminating tests. The age effect was reduced slightly whentherapy type (behavioral vs. nonbehavioral) was controlled(p = .084) and when problem type (overcontrolled vs. undercontrolled)was controlled (p = .086). Both reductions werecaused partly by reduced sample size because not all treatmentsor target problems could be coded. However, the age groupdifference grew more reliable statistically when therapist trainingwas controlled (p = .013).A series of 2 X 2 analyses of variance (ANOVAS) testing interactionsof age with therapy type, problem type, and therapisttraining, respectively, revealed no significant effects (all Fs <1.8; all ps > . \ 5). To provide the most thorough assessment, wecarried out GLM ANOVAS on the same interactions, with ageentered as a continuous variable. Neither therapy type norproblem type interactions were significant (both ps > .30), butthe Age X Therapist Training interaction was significant, F(2,102) = 3.49, p < .05. Age and effect size were uncorrelatedamong professionals (N = 39, r = 0.11, p < .50) but were negativelycorrelated among graduate students (N = 43, r = —0.31,p < .05) and paraprofessionals (N = 26, r = —0.43, p < .05).Trained professionals were about equally effective with all ages,but graduate students and paraprofessionals were more effectivewith younger than with older clients.Therapy TypeBehavioral versus nonbehavioral. Table I shows that meaneffect size was higher for behavioral than for nonbehavioraltreatments, r(152) = 2.14, p < .05. The difference remainedsignificant after eliminating tests controlled for age (p < .05),problem type (p < .05), and therapist training (p < .05). Interactionsof therapy type with problem type and with therapisttraining were not significant (both h's < 0.50, both ps > .65).Similarity of therapy procedures and outcome measures.Next, we examined the main effect for therapy type, followingupon Casey and Herman's (1985) analysis. We first excludedcomparisons involving an outcome measure similar to treatmentprocedures. Consistent with Casey and Herman's (1985)finding, the procedure reduced the behavioral-nonbehavioraldifference to nonsignificance, ;(55) = 0.64, p = .52. As we notedearlier, the Casey and Herman procedure may rule out somecarefully designed studies in which measures similar to thetraining procedures are appropriate and necessary for a fair testof treatment success. To correct for this apparent limitation,we again compared behavioral and nonbehavioral methods andTable 3Mean Effect Size for Each Target ProblemTarget problemUndercontrolledDelinquencyNoncomplianceSelf-control (hyperactivity,impulsivity)Aggressive/undisciplinedOvercontrolledPhobias/anxietySocial withdrawal/isolationOtherAdjustment/emotional

Page 16: articole eficienta

disturbanceUnderachievementNo. treatmentgroups7619931176739281899Effectsize.79.661.33.75.74.88.741.07.56.69.43If.0001.0004.005.0001.0002.0001.0001.002.0002.001.046Note. Because some problem descriptions were too vague to be classified,not all Ns sum as expected.' A Bonferroni correction applied to this table indicated that probabilityvalues s^ .005 should not be regarded as statistically significant.restored to the sample all treatment-control comparisons inwhich similarity of training and assessment methods wasjudged by our raters to be necessary for a fair test. In this analysis,as Table 2 shows, behavioral-method comparisons showed asignificantly larger effect size than nonbehavioral comparisons,r(117) = 2.09,p<.05.Specific therapy types. The ANOVAS focused on Tier 2 of thetherapy-type coding system (see Table 1) revealed no significantdifference between the behavioral subtypes (e.g., operant, modeling)or the nonbehavioral subtypes (e.g., psychodynamic, clientcentered), all Fs < 1.9; all ps > . 15. The comparison of nonbehavioralsubtypes should be interpreted with caution: Themajority of these studies used client-centered therapy and onlythree used insight-oriented psychodynamic therapy. TheANOVAS focused on Tier 3 revealed no significant differencesbetween the therapies within each subtype. As Table 1 shows,effect sizes for most of the categories within each tier were significantlygreater than zero.Target ProblemOvercontrolled versus Undercontrolled problems. Next, we focusedon the problems for which the youngsters were treated.There was no significant difference between the broad categoriesof overcontrolled and undercontrolled problems (p = .46;see Table 3). This continued to be true when eliminating testswere used to control for age level (p = .59), therapy type (p =.67), and therapist training (p = .67).However, problem type and therapist training did interact,F(2, 90) = 2.93, p = .059. Tests of simple effects revealed nosignificant differences in effect size between overcontrolled andundercontrolled problems at any of the three levels of therapisttraining. Nor did the three therapist groups differ in their successwith undercontrolled youngsters. The groups did differ,however, in their effectiveness with overcontrolled youngsters:As amount of formal training increased, so did effectiveness,EFFECTS OF CHILD AND ADOLESCENT PSYCHOTHERAPY 547Table 4

Page 17: articole eficienta

Effect Size as a Function of Source of Outcome MeasureSourceObserversParentsTeachersSubject performanceSubject reportPeersN6219481286118Effectsize1.08.660.680.650.490.33If.0001.0001.0001

.0001

.0001

.01" A Bonferroni correction applied to this table indicated that probabilityvalues > .008 should not be regarded as statistically significant.linear trend F\ 1, 35) = 4.68, p < .05. Professionals achieved amean effect size of 1.03, graduate students of 0.71, and paraprofessionalsofO.53.Specific problem types. The categories in Tier 2 of the problemcoding scheme (e.g., delinquency, specific fears) did notdiffer significantly in effect size (p = .41). However, as Table 3shows, effect sizes were reliably greater than zero for most problemcategories.Therapist TrainingAlthough therapist training entered into two interactions, itsmain effect was not significant (p = .43). This continued to betrue when eliminating tests were used to control for age (p =.56), therapy type (p = .65), and problem type (p = .51).Other FindingsIndividual versus group therapy. Does it matter whetheryoungsters are treated individually or in groups? Our data revealedsomewhat larger effect sizes for therapy that was individuallyadministered rather than group administered (M = 1.04and M = 0.62, respectively), but the difference did not attainsignificance under our Bonferonni correction procedure(/- = .03).Source and content of outcome measure. Casey and Herman(1985) found significant differences as a function of the sourceof outcome measures (e.g., observers, teachers). Measures derivedfrom observers revealed the largest difference between thetreated and control groups. Using their category system (withtwo low-frequency categories dropped), we also found a maineffect for source, ^5, 330) = 4.00, p < .005. Newman-Keulstests indicated that observers reported more change than any ofthe other sources (all ps <.05), none of which differed significantlyfrom one another (see Table 4).Casey and Berman also reported significant differences as afunction of the content of outcome measures (e.g., fear and anxiety,cognitive skills). We used their category system but failedto find a significant main effect (p = .61). We dropped two lowfrequency categories, but the main effect remained nonsignificant(p = .42).Clinic-referred versus analog samples. Outcome research,and meta-analyses of such research, have been criticized for excessivereliance on analog samples, that is, samples that havebeen recruited by researchers specifically for treatment studiesrather than samples that have been spontaneously referred byclinics (e.g., Parloff, 1984; Shapiro & Shapiro, 1982). Combiningoutcome results from analog and clinic samples makes itdifficult to gauge the relevance of findings to actual clinic practice.For this reason, we separated the 126 comparisons involvinganalog samples from the 37 comparisons involving true

Page 18: articole eficienta

clinical samples. Mean effect sizes were 0.76 for analog samplesand 0.89 for clinical samples: The difference was not significant(F< 1).Follow-up findings: Do therapy effects last? The precedingfindings suggest that therapy does have positive effects that aremeasurable at the end of therapy. To have real practical value,however, the effects must be durable and must persist beyondtreatment termination. To assess the durability of therapyeffects, we analyzed the follow-up treatment-control comparisonscontained in our sample; these follow-ups averaged 168days subsequent to termination of treatment. Average effect sizeat follow-up (0.93) was actually larger than effect size immediatelyafter treatment (0.79), although the difference was not significant(p = .45). When we included in the posttreatmentgroup only those studies that also had a follow-up assessment,the means for posttreatment and follow-up were identical(0.93). Thus, the effects of therapy in this pool of studies appearto be durable.DiscussionIs psychotherapy effective with children and adolescents? Thefindings reviewed here suggest that it is. After treatment, acrossmultiple measures of adjustment, the average treated youngsterwas functioning better than 79% of those not treated. However,the benefits of therapy depended to some extent on the age levelof the youngsters treated, with children profiting more than adolescents.This is consistent with the idea, suggested earlier, thatcognitive changes such as the advent of formal operations (Piaget,1970), and other quasicognitive changes (Perlmutter, 1986;Rice, 1984), may make adolescents less responsive than childrento therapeutic influence; enhanced powers of reasoningmay strengthen adolescents' convictions regarding their ownbehavior and may also make them adept at circumventing orsabotaging a therapist's efforts. Of course, it is also possible thatoutcome measures used with adolescents are less sensitive tochange than most child measures or that adolescents' problemsare more entrenched (see Kendall et al., 1984, for further ideasabout age group differences and their sources).Only paraprofessionals and graduate student therapists weremore effective with younger than older children; trained professionalswere about equally effective with younger and older clients.Taken together, the age main effect and the Age X Traininginteraction suggest an intriguing possibility. It may be that adolescents,for cognitive or other reasons, are generally moredifficult than children to treat successfully, but formal trainingmay provide professionals with sufficient therapeutic acumento override age differences in initial treatability.Our findings on therapy type do not support the often notedsummary of adult psychotherapy findings that different forms548 WEISZ, WEISS, ALICKE, AND KLOTZof therapy work about equally well (for a critical discussion, seeParloff, 1984; see also Frank's, 1973, "nonspecificity hypothesis").Instead, we found that behavioral methods yielded significantlylarger effects than nonbehavioral methods. This findingheld up across differences in age level, treated problem, andtherapist experience, and it was not qualified by interactionswith any of these factors. The behavioral-nonbehavioraldifference was reduced to nonsignificance when we excludedall therapy-like outcome measures (following Casey & Herman,1985) but was readily revived when we excluded only the unnecessarytherapy-like measures that seemed likely to produce artifactualfindings. Overall, the findings make a case for the superiorityof behavioral over nonbehavioral approaches.On the other hand, we found comparatively few controlledstudies assessing nonbehavioral methods; it might be arguedthat these studies do not represent the best of nonbehavioralapproaches. In fact, many would argue that nonbehavioral approachesare best suited to the only partly scrutable process ofunraveling complex causal dynamics and of stimulating insightover months or even years of treatment. By contrast, most controlled-outcome research focuses on relatively specific targetproblems, directly observable outcome measures, and brieftreatment. Do such studies miss the point of nonbehavioral intervention?Perhaps, but the present findings seem to place theburden of proof on those who make that argument.Those familiar with the evidence on the long-term stability ofundercontrolled behavior and the relative instability of overcontrolledbehavior (reviewed in Robins, 1979) may have been surprisedto find that therapy effects were no more pronouncedwith the latter than the former. Note, though, that the evidenceon long-term stability concerns the tendency of problems to

Page 19: articole eficienta

persist or dissipate over time, independent of therapy. Our review,by contrast, focuses on the persistence of various problemsin treated groups relative to control groups. When the naturaldissipation of problems over time is thus held constant, ourfindings suggest that undercontrolled problems may be no moreintractable than overcontrolled problems.Our failure to find an overall difference in effectiveness betweenprofessionals, graduate students, and paraprofessionalsmight be disquieting to those involved in clinical training programs(see also Auerbach & Johnson, 1977; Parloff, Waskow,& Wolfe, 1978). A simplistic interpretation might suggest thattraining does not enhance therapeutic effectiveness. A morethoughtful evaluation, though, suggests otherwise. First, itshould be noted that the therapeutic work of the graduate studentsand paraprofessionals did not take place in a vacuum: Innearly every instance, these therapists were selected, trained,and supervised by professionals in techniques that professionalshad designed. Thus, the success enjoyed by the two less clinicallytrained groups might actually have reflected the judgmentand skill of professionals working behind the scenes.Moreover, the finding of no overall difference was qualifiedby two important interactions, both suggesting possible benefitsof training. A Training X Age interaction suggested that professionaltraining may enhance therapist effectiveness with older,more difficult-to-treat children. And a Training X ProblemType interaction suggested that training may enhance therapisteffectiveness in treating overcontrolled problems. On a morenegative note, this interaction suggested that training may havelittle impact on therapists' effectiveness with undercontrolledproblems. Why? One possibility is that youngsters with undercontrolledproblems are responsive to interventions that are relativelyeasy to learn; some of these interventions may be similarto natural, naive, parent-like responses that arise in situationsrequiring discipline and rule enforcement.Two findings were particularly encouraging. The first revealedthat therapy studies with analog samples yielded resultsquite similar to studies with true clinic-referred samples. Thus,we found no evidence that the positive findings generated byanalog therapy studies presented an inflated or otherwise distortedpicture of therapy effects among children and adolescents.A second source of encouragement was the finding thatthe impact of therapy assessed at immediate posttest was notreliably different from the impact assessed at follow-up, whichoccurred an average of 6 months later. This is consistent withother findings across a broader age range (reviewed by Nicholson& Herman, 1983) indicating that therapy effects may be relativelyenduring.Here, and in most of the other findings, there is reason foroptimism about therapy effects with children and adolescents.On the other hand, the number of available outcome studies isstill much too modest to permit a definitive analysis. A welldevelopedunderstanding of therapy outcomes within the broadrange sampled here will require the continued efforts of our bestresearchers.ReferencesAchenbach, T. M. (1982). Developmental psychopathology (2nd ed.).New York: Wiley.Achenbach, T. M., & Edelbrock, C. S. (1978). The classification of childpsychopathology: A review and analysis of empirical efforts. PsychologicalBulletin, 85, 1275-1301.Appelbaum, M. I., & Cramer, E. M. (1974). Some problems in the nonorthogonalanalysis of variance. Psychological Bulletin, SI, 335-343.Auerbach, A. H., & Johnson, M. (1977). Research on the therapist'slevel of experience. In A. S. Gurman & A. M, Razin (Eds.), Effectivepsychotherapy: A handbook of research (pp. 84-99). New York: PergamonPress.Barrett, C. L., Hampe, I. E., & Miller, L. (1978). Research on psychotherapywith children. In S. L. Garfield & A. E. Bergin (Eds.), Handbookof psychotherapy and behavior change: An empirical analysis(pp. 411^135). New York: Wiley.Bergin, A. E., & Lambert, M. J. (1978). The evaluation of therapeuticoutcomes. In S. L. Garfield & A. E. Bergin (Eds.), Handbook of psychotherapyand behavior change: An empirical analysis (pp. 139-189). New York: Wiley.Casey, R. J., & Berman, J. S. (1985). The outcome of psychotherapywith children. Psychological Bulletin, 98, 388-400.Frank, J. D. (1973). Persuasion and healing. Baltimore, MD: JohnsHopkins University Press.Gelfand, D. M., Jenson, W. R., & Drew, C. J. (1982). Understandingchild behavior disorders. New "York: Holt, Rinehart, & Winston.

Page 20: articole eficienta

Glass, G. V., & Kliegl, R. M. (1983) An apology for research integrationin the study of psychotherapy. Journal of Consulting and Clinical Psychology,51. 28-41.Glass, G. V., McGaw, B., & Smith, M. L. (1981). Mela-analysis in socialresearch. Beverly Hills, CA: Sage.EFFECTS OF CHILD AND ADOLESCENT PSYCHOTHERAPY 549Hedges, L. V. (1982). Estimation of effect size from a series of independentexperiments. Psychological Bulletin, 92, 490-499.Kendall, P. C., Lerner, R. M., & Craighead, W. E. (1984). Human developmentand intervention in childhood psychopathology. Child Development,55, 71-82.Mintz, J. (1983). Integrating research evidence: A commentary on metaanalysis.Journal of Consulting and Clinical Psychology, 51, 71-75.Neter, J., & Wasserman, W. (1974). Applied linear statistical models.Homewood, IL: Irwin.Nicholson, R. A., & Herman, J. S. (1983). Is follow-up necessary inevaluating psychotherapy? Psychological Bulletin, 93, 261-278.Parloff, M. B. (1984). Psychotherapy research and its incredible credibilitycrisis. Clinical Psychology Review, 4, 95-109.Parloff, M. B., Waskow, I. E., & Wolfe, B. E. (1978). Research on therapistvariables in relation to process and outcome. In S. L. Garfield &A. E. Bergin (Eds.), Handbook of psychotherapy and behavior change:An empirical analysis (pp. 233-274). New \brk: Wiley.Perlmutter, M. (Ed.). (1986). Cognitive perspectives on children's socialand behavioral development: The Minnesota symposia on child psychology(Vol. 18). Hillsdale, NJ: Erlbaum.Piaget, J. (1970). Piaget's theory. In P. E. Mussen (Ed.), Carrmchael'smanual of child psychology'(3rd ed., Vol. 1, pp. 703-732). New York:Wiley.Rice, F. P. (1984). The adolescent: Development, relationships, and culture.Boston: Allyn and Bacon.Robins, L. N. (1979). Follow-up studies. In H. C. Quay & J. S. Werry(Eds.), Psychopalhological disorders of childhood (2nd ed., pp. 483-513). New York: Wiley.Shapiro, D. A., & Shapiro, D. (1982). Meta-analysis of comparativetherapy outcome studies: A replication and refinement. PsychologicalBulletin, 92,581-604.Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapyoutcome studies. American Psychologist, 32, 752-760.Smith, M. L., Glass, G. V., & Miller, T. I. (1980). Benefits of psychotherapy.Baltimore, MD: Johns Hopkins University PressReceived July 31, 1986Revision received December 1, 1986Accepted December 1, 1986 •

Low Publication Prices for APA Members and AffiliatesKeeping You Up-to-DateAll APA members (Fellows, Members, and Associates) receive — as part of their annual dues — subscriptions tothe American Psychologist, the APA Monitor, and Psychology Today.High School Teacher and Student Affiliates receive subscriptions to the APA Monitor and Psychology Today, andthey can subscribe to the American Psychologist at a significantly reduced rate.In addition, all members and affiliates are eligible for savings of up to 50% on other APA journals, as well assignificant discounts on subscriptions from cooperating societies and publishers (e.g., the British PsychologicalSociety, the American Sociological Association, and Human Sciences Press)Essential ResourcesAPA members and affiliates receive special rates for purchases of APA books, including the PublicationManual of the APA. the Master Lectures, and APA's Guide to Research Support.Other Benefits of MembershipMembership in APA also provides eligibility for low-cost insurance plans covering life: medical and incomeprotection: hospital indemnity: accident and travel; Keogh retirement; office overhead: and student/school.professional, and liability.For more information, write to American Psychological Association.Membership Records. 1200 Seventeenth Street NW. Washington. DC 20036

Kazdin, Alan E., Evidence-based treatment and practice: New opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care. American Psychologist, Vol 63(3), Apr 2008, 146-159.AbstractThe long-standing divide between research and practice in clinical psychology has received increased attention in view of the development of evidence-based interventions and practice and public interest, oversight, and management of psychological services. The gap has been reflected in concerns from those in practice about the applicability of findings from psychotherapy research as a guide to clinical work and concerns from those

Page 21: articole eficienta

in research about how clinical work is conducted. Research and practice are united in their commitment to providing the best of psychological knowledge and methods to improve the quality of patient care. This article highlights issues in the research- practice debate as a backdrop for rapprochement. Suggestions are made for changes and shifts in emphases in psychotherapy research and clinical practice. The changes are designed to ensure that both research and practice contribute to our knowledge base and provide information that can be used more readily to improve patient care and, in the process, reduce the perceived and real hiatus between research and practice. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

Albert J. Bellg, Belinda Borrelli, Barbara Resnick, Jacki Hecht, Daryl Sharp Minicucci, Marcia Ory, Gbenga Ogedegbe, Denise Orwig, Denise Ernst, Susan CzajkowskiTreatment Fidelity in Health Behavior Change Studies: BestPractices and Recommendations From the NIH Behavior Change ConsortiumAppleton Heart InstituteBrown Medical SchoolUniversity of MarylandBrown Medical SchoolUniversity of RochesterNational Institutes of HealthCornell UniversityUniversity of MarylandUniversity of New MexicoNational Institutes of Health(For the Treatment Fidelity Workgroup of the NIH Behavior Change Consortium)Treatment fidelity refers to the methodological strategies used to monitor and enhance the reliability andvalidity of behavioral interventions. This article describes a multisite effort by the Treatment FidelityWorkgroup of the National Institutes of Health Behavior Change Consortium (BCC) to identify treatmentfidelity concepts and strategies in health behavior intervention research. The work group reviewedtreatment fidelity practices in the research literature, identified techniques used within the BCC, anddeveloped recommendations for incorporating these practices more consistently. The recommendationscover study design, provider training, treatment delivery, treatment receipt, and enactment of treatmentskills. Funding agencies, reviewers, and journal editors are encouraged to make treatment fidelity astandard part of the conduct and evaluation of health behavior intervention research.Key words: treatment fidelity, health behavior, translational research, reliability, validityTreatment fidelity refers to the methodological strategies used tomonitor and enhance the reliability and validity of behavioral interventions.It also refers to the methodological practices used to ensurethat a research study reliably and validly tests a clinical intervention.Although some strategies to enhance treatment fidelity in researchmay be quite familiar (e.g., the use of treatment manuals, videotapemonitoring of therapist adherence to research protocols, and testingsubject acquisition of treatment skills), there is inconsistency in theiruse, particularly in health behavior intervention research. Methodologicalprocedures for preserving internal validity and enhancingexternal validity in studies, though critical to the interpretation offindings, are not emphasized in research-training curricula, and theirrelative lack of perceived importance is also evidenced by the scantreporting of treatment fidelity practices in journal articles. By comparison,procedures for evaluating the reliability and validity of questionnairesand other measurement instruments are well understood.Our purpose in this article is to provide a useful conceptualization oftreatment fidelity, describe specific treatment fidelity strategies, andoffer recommendations for incorporating treatment fidelity practicesin health behavior intervention research. We believe that adoptingthese practices will contribute to the continued development of innovative,credible, and clinically applicable health behavior interventions

Page 22: articole eficienta

and programs.The concept of treatment fidelity has evolved over time. Althoughtreatment fidelity was mentioned in a few social andbehavioral studies in the late 1970s and early 1980s (e.g., Peterson,Homer, & Wonderlich, 1982; Quay, 1977), Moncher and Prinz’sAlbert J. Bellg, Appleton Cardiology Associates, Appleton Heart Institute,Appleton, Wisconsin; Belinda Borrelli and Jacki Hecht, Center forBehavioral and Preventive Medicine, Brown Medical School; BarbaraResnick, School of Nursing, University of Maryland; Daryl Sharp Minicucci,School of Nursing, University of Rochester; Marcia Ory, NationalInstitute on Aging, National Institutes of Health (NIH); Gbenga Ogedegbe,Weill Medical College, Cornell University; Denise Orwig, School ofMedicine, University of Maryland; Denise Ernst, Department of FamilyPractice, University of New Mexico; Susan Czajkowski, National Heart,Lung, and Blood Institute (NHLBI), NIH.Marcia Ory is now at the Department of Social and Behavioral Health,School of Rural Public Health, Texas A&M University. Gbenga Ogedegbeis now at the Department of Medicine, College of Physicians and Surgeons,Columbia University.Senior authorship is shared equally between Albert J. Bellg and BelindaBorrelli. Funding for this multisite project was provided by NIH/NHLBIGrant R01 HL62165 to Belinda Borrelli (principal investigator). We wouldlike to thank all the principal investigators and staff members of theBehavior Change Consortium who contributed to this article by identifyingtreatment fidelity practices used in their studies.Correspondence concerning this article should be addressed to Albert J.Bellg, Appleton Heart Institute, 1818 North Meade Street, Appleton, WI54911. E-mail: [email protected] Psychology Copyright 2004 by the American Psychological Association2004, Vol. 23, No. 5, 443–451 0278-6133/04/$12.00 DOI: 10.1037/0278-6133.23.5.443443(1991) article was the first to formally introduce a definition andpropose guidelines for the enhancement of treatment fidelity. Priorto Moncher and Prinz’s article, treatment fidelity was generallyconsidered as treatment integrity—that is, whether the treatmentwas delivered as intended. Moncher and Prinz added the conceptof treatment differentiation, or whether the treatment conditionsdiffered from one another in the intended manner (Kazdin, 1986).Subsequently, Lichstein, Riedel, and Grieve (1994) argued thattwo additional processes needed to be assessed in order to properlyinterpret the results of studies: (a) treatment receipt, which involvesboth assessing and optimizing the degree to which theparticipant understands and demonstrates knowledge of and abilityto use treatment skills, and (b) treatment enactment, which involvesassessing and optimizing the degree to which the participantapplies the skills learned in treatment in his or her daily life.They considered treatment delivery, receipt, and enactment toconstitute a full treatment implementation model (Burgio et al.,2001; Lichstein et al., 1994).Lichstein and colleagues (1994) used a medical example toillustrate these different components. Assessment of whether aphysician wrote a prescription (delivery) is inadequate for ensuringthat the treatment has been implemented as intended. To receive anactive dose of the treatment, the patient must then fill the prescription(receipt) and take the medicine as prescribed (enactment).Although enactment is identical to treatment adherence in theirexample, there are numerous situations in health behavior researchin which enactment is distinguished from adherence. For instance,in a Behavior Change Consortium (BCC) study on smoking cessationfor parents of children with asthma, smokers motivated toquit were given strategies that would help them do so (delivery),and the strategies were discussed with them to verify that theyunderstood and could use them (receipt). However, the strategiesmay or may not have actually been used (enactment), and if theywere used, they may or may not have led to smoking cessation

Page 23: articole eficienta

(adherence to the treatment recommendation to stop smoking). Inboth examples, assessment and potential intervention with therapistbehavior (in relation to treatment delivery) and with patientbehavior (in relation to treatment receipt and enactment) are integralto maintenance of a study’s reliability and validity.Rationale for Considering Treatment FidelityTreatment fidelity influences a variety of study issues. Questionableinternal and external validity may make it impossible todraw accurate conclusions about treatment efficacy or to replicatea study. For example, in evaluating a new intervention, if significantresults were found but fidelity was not monitored and optimized,one does not know whether the outcome was due to aneffective treatment or to unknown factors that may have beenunintentionally added to or omitted from the treatment (Cook &Campbell, 1979). If, however, nonsignificant results were foundand the level of treatment fidelity is unknown, one does not knowwhether the outcome was due to an ineffective treatment or to lackof treatment fidelity (Moncher & Prinz, 1991), because internalvalidity and effect size are highly correlated (Smith, Glass, &Miller, 1980). In the latter case, new, potentially effective treatmentsmay be prematurely discarded, whereas in the former case,unsuccessful treatments may be implemented and disseminated inclinical and public health settings at a high cost to patients,providers, and organizations.By assessing treatment fidelity, however, investigators can havegreater confidence in their results. If they go a step further and usequantitative methods for assessment, they can use treatment fidelitymeasures in data analyses to determine the extent to which theirresults are actually due to the study intervention. For instance, onemight use a measure of nonspecific treatment effects associated withdifferent therapists (a treatment delivery variable) as a covariate tobetter define the effects of the intervention apart from the effects ofthe therapists. Treatment fidelity may also be assessed with the goalof improving the design of a study (Kazdin, 1994). For example, in astudy with poor treatment adherence among participants, if measuresof treatment receipt are found to be associated with adherence, thestudy procedures may be redesigned to improve receipt and therebyprovide a better test of the intervention.By reducing random and unintended variability in a study, improvingtreatment fidelity can also improve statistical power. Monitoringand optimizing treatment fidelity over a series of studies may increaseeffect sizes and reduce the number of subjects required in later studies,thereby decreasing costs and improving the efficacy of an interventionresearch program. Even during a single study, optimizing treatmentfidelity increases the chance that investigators will find significantresults. For instance, evaluation of treatment delivery over time mightreveal a drift in interventionist adherence to a smoking cessationtreatment protocol, perhaps warranting retraining of those providingthe intervention to minimize the problem’s impact on the internalvalidity of the intervention.Procedures to maximize treatment fidelity also have implicationsfor research focusing on theory development, comparison,and application (Nigg, Allegrante, & Ory, 2002b). Only whenthere is a high degree of awareness and control over factorsassociated with a study’s internal validity, such as the impact ofnonspecific treatment effects and unintended clinical processes onan intervention (e.g., a treatment provider’s inadvertent use of acognitive procedure in a behavioral protocol) is it possible toevaluate the efficacy of a theory-based intervention, test a theoreticalquestion, or compare the impact of two or more theoreticalprocesses on an outcome. Unless treatment fidelity is explicitlymaintained, the extent to which the theory-based intervention

Page 24: articole eficienta

being tested is the primary mechanism for the observed changes inthe dependent variables of interest will remain unclear.Finally, treatment fidelity is also a potentially important componentof successful research dissemination. Behavioral medicine practitionersare often in the position of attempting to implement new proceduresin medical settings where medical and nursing staff have clinicalexpertise but limited familiarity with behavioral change research.Translating effective behavioral change interventions from researchsettings to clinical practice can be facilitated when investigators employand describe treatment fidelity strategies that can be used asguidelines for implementing the new interventions in the clinic.Addressing Treatment Fidelity in the BCCIn July 1999, the National Institutes of Health (NIH), along withthe American Heart Association, established the BCC to providean infrastructure to foster collaboration among 15 projects that hadbeen funded under a request for applications calling for proposalsto test innovative approaches to health behavior change in diversepopulations. The studies either test two theories of health behaviorchange or the effectiveness of one theory across multiple healthbehaviors such as diet, exercise, or smoking (Ory, Jordan, &444 BELLG ET AL.Bazzarre, 2002); details of the studies are available in a specialissue of Health Education Research (Nigg et al., 2002a). The BCCconsists of principal investigators and coinvestigators on theseprojects, key staff, program representatives from the NIH whowere involved in the projects, and representatives from the AmericanHeart Association and foundations such as the Robert WoodJohnson Foundation that provided additional support.Because of the complexity of research designs, the diversity ofpopulations, and the greater than usual need to maintain credibilitywhen testing innovative interventions, issues of design and implementationin the BCC studies were particularly challenging. This resultedin formation of a set of BCC work groups to address these issuesacross studies. As part of this effort, the Treatment Fidelity Workgroupwas formed and charged with advancing the definition, methodology,and measurement of treatment fidelity both within the BCCand, more generally, for the field of health behavior change (BelindaBorrelli, Albert J. Bellg, and Susan Czajkowski were the cochairs). Inpursuing that mission, the Treatment Fidelity Workgroup developednew recommendations for treatment fidelity that expand upon theLichstein et al. (1994) model and increase the relevance of treatmentfidelity for health behavior change studies. A detailed survey of all 15BCC studies was also conducted to identify the strategies the studiesused to address their particular treatment fidelity issues. From theresponses, a list of “best practices” in treatment fidelity was created toprovide examples of how the BCC recommendations may be used inhealth behavior intervention research.BCC Treatment Fidelity RecommendationsThe BCC treatment fidelity recommendations intend to linktheory and application in five areas: study design, training providers,delivery of treatment, receipt of treatment, and enactment oftreatment skills. The five areas (with examples from BCC studies)are intended to provide behavioral health investigators with acomprehensive way to conceptualize and address treatment fidelityissues in their studies.Design of StudyPractices. Treatment fidelity practices related to study design areintended to ensure that a study can adequately test its hypotheses inrelation to its underlying theory and clinical processes. Ensuring thatinterventions are congruent with relevant theory and clinical experienceinvolves operationalizing treatments to optimally reflect theirtheoretical and pragmatic roots and precisely defining independent

Page 25: articole eficienta

and dependent variables most relevant to the “active ingredient” of thetreatment (Moncher & Prinz, 1991). The active ingredient of a treatmentmay vary substantially depending on whether an intervention isdesigned to influence cognitions, behavior, or a subjective motivationalstate. In addition, the effect of an intervention can only beadequately assessed when the research design does not confoundtreatment effects with extraneous differences between treatmentgroups or treatment and control groups. Therefore, treatment fidelitygoals in this category (see Table 1) include establishing procedures tomonitor and decrease the potential for contamination between activetreatments or treatment and control, procedures to measure dose andintensity (e.g., length of intervention contact, number of contacts, andfrequency of contacts), and procedures to address foreseeable setbacksin implementation (e.g., therapist dropout over the course of amultiyear study).For example, a BCC study looking at dietary change controlledtreatment dose by using group sessions of the same length for bothTable 1Treatment Fidelity Strategies for Design of StudyGoal Description StrategiesEnsure same treatment dose within conditions. Ensure that treatment “dose” (measured bynumber, frequency, and length ofcontact) is adequately described and isthe same for each subject within aparticular treatment condition.Use computer prompts for contacts; ensure fixedlength, number, and frequency of contactsessions; ensure fixed duration of interventionprotocol; record deviations from protocolregarding number, length, and frequency ofcontacts; ensure fixed amount of informationfor each treatment/control group; use scriptedcurriculum or treatment manual; externallymonitor sessions and provide feedback toproviders; have provider self-monitor or keeplog of encounter; monitor homeworkcompletion; give specialized training toproviders to deal with different types ofpatients equally.Ensure equivalent dose across conditions. Ensure that treatment dose is the sameacross conditions, particularly whenconditions include multiple behavioraltargets (e.g., exercise, smoking).Have equal number of contacts for eachintervention; use equal length of time for eachintervention; use same level of informationalcontent for each intervention. When dose isnot the same, stipulate the minimum andmaximum amount of treatment provided andtrack number, frequency, and duration ofcontacts.Plan for implementation setbacks. Address possible setbacks inimplementation (e.g., treatmentproviders dropping out).Have pool of potential providers so that newproviders need not be trained in a hurry; trainextra providers beyond those needed; havehuman backup for computer-deliveredintervention; track provider attrition.SPECIAL NIH REPORT: TREATMENT FIDELITY IN RESEARCH 445treatment and control conditions, with attendance at all sessionsencouraged by a reward at the end of the study. A study providinga smoking cessation intervention to individuals, however, couldnot reasonably control the length of contact with subjects asclosely and so encouraged treatment providers to stay within acertain range of time and had them record the exact amount of time

Page 26: articole eficienta

spent delivering the intervention so that the possible effect of thisvariable could be examined.Addressing possible setbacks in implementation at the outset isimportant to ensure consistency throughout the course of the study.For example, unanticipated provider dropout may result in hurriedattempts to recruit and train new providers, which may lead toperformance differences between the new and existing providers.The majority of the BCC sites reported that they were takingmeasures to prevent setbacks in implementation, such as trainingextra providers or, when the intervention is delivered by computerand the study design permitted it, training humans as a backup forthe computerized intervention.Recommendations. Strategies for enhancing treatment fidelityrelated to study design should be well defined and thoroughlydescribed prior to study implementation. We recommend thatresearchers consider the following questions during the designphase of their study: How well does the intervention itself reflectits theoretical foundations, and in what specific ways does it do so?What are the areas where it might not do so? How does the studyensure that each participant receives the same “dose” of the treatmentor treatments? How does the study ensure that treatment doseis the same across multiple interventions or multiple behavioraltargets? How does the study anticipate and address possible implementationsetbacks?Training ProvidersPractices. An important area of treatment fidelity is assessingand improving the training of treatment providers to ensure thatthey have been satisfactorily trained to deliver the intervention tostudy participants. Training in a specific intervention often requiresthe acquisition of new skills, which may interact significantlywith a clinician’s existing clinical training and experience.The adequacy of training to implement the intervention needs to beevaluated and monitored on an individual basis both during andafter the training process. General strategies in this category includestandardizing training, measuring skill acquisition in providers,and having procedures in place to prevent drift in skills overtime (see Table 2).The first strategy in Table 2, standardization of training, involvesensuring that all providers are trained in the same mannerin order to increase the likelihood that the intervention will bedelivered systematically across providers, decrease the likelihoodthat there will be Provider _ Treatment interactions, and preventdifferential outcomes by provider. Standardization, however, doesnot preclude individualization of training, which includes accountingfor different levels of education, experience, and implementationstyles. Some methods of standardizing training include usingstandardized training materials, conducting role-playing, and observingactual intervention and evaluating adherence to protocol.Standardized training of providers to criteria also needs to beviewed as an ongoing effort rather than as a one-time event. Thisis especially important when it is likely that there will be turnoverof staff throughout the intervention period. When multiple trainingsessions are required, it is helpful to have the same trainersconducting training workshops in order to maintain and reinforcestandards across providers and throughout the study period. Certificationor recertification of providers is another way to enhanceand document adequacy of provider training and standardization oftraining procedures. Using standardized and pretested trainingmaterials and manuals can also increase the likelihood that allproviders are receiving similar training. Setting performance criteriaand documenting that all providers meet those standardsbefore delivering interventions also help to ensure the required

Page 27: articole eficienta

skill level of all providers.Measuring provider skill acquisition both during and after trainingis necessary to ensure that training has been successful. Nearlyall BCC sites measured skill acquisition either by direct observation,written pre- and posttests, or some combination of the twomethods. However, although initial skill acquisition may be adequate,such skills may be vulnerable to deterioration over time.Intervention components may be unintentionally omitted or extraneouscomponents unintentionally added, thus contaminating deliveryof the intervention. It is essential that procedures be put inplace to address provider deficiencies throughout the study. “Drift”from the original protocol can be minimized in a variety of ways,such as by scheduling periodic training “booster” sessions withproviders or having regular supervision with providers. All but oneBCC site systematically evaluated provider skills and implementedmeasures to prevent skills drift over time. The sites that reportedusing layperson providers used many of the same training strategiesoutlined previously but also made training more intensive andtook professional experience into account when evaluating theintervention’s effectiveness.Recommendations. Most researchers make sure that providertraining is addressed at the beginning of studies. There is lessfocus, however, on monitoring and maintaining provider skills asthe study progresses. We recommend that researchers be able toanswer the following questions: How will training be standardizedacross providers? How will skill acquisition in providers be measured?How will decay or change in provider skills be minimized?How will providers of differing professional training or skill levelsbe trained to deliver the intervention in a similar way?Delivery of TreatmentPractices. Treatment fidelity processes that monitor and improvedelivery of the intervention so that it is delivered as intendedare essential. Even well-trained interventionists may not alwaysdeliver an intervention protocol effectively when clinical circumstancesor their training or involvement in other types of interventionsinterfere with their doing so. General goals in this categoryinclude using procedures to standardize delivery and checking forprotocol adherence (see Table 3).The gold standard to ensure satisfactory delivery is to evaluateor code intervention sessions (observed in vivo or video- or audiotaped)according to a priori criteria. Requiring providers tocomplete process evaluation forms or behavior checklists aftereach intervention session may remind them to include the requisiteskills and content appropriate for each intervention and minimizecontamination from comparison interventions. Checklists, however,are less reliable correlates of what actually happens in a446 BELLG ET AL.session (W. Miller, personal communication, March 22, 2002).Alternatively, creating forums or case conferences where providerscan discuss intervention cases and review skills required for eachintervention can help ensure that interventions are standardizedacross providers and are being conducted according to protocol.Whether the treatment is being delivered in the way in which theintervention was conceived may be affected by providers nothaving enough time to implement the intervention, by havingunforeseen obstacles to intervention delivery, or by nonspecifictreatment effects such as the warmth and credibility of the provider.Behavior Change Consortium sites reported using audiotapes,videotapes, in vivo observation, or behavioral checklists toensure that providers adhered to the treatment protocol. Mostresearch sites used more than one method. All but one site reportedthat their providers used a treatment manual or an intervention

Page 28: articole eficienta

protocol or script to aid in standardization of delivery. Severalstudies reported that they were using the same provider to deliverboth treatment and control interventions. These sites reported thatthey were taking steps to reduce cross-contamination betweentreatments by using direct observation, audiotape monitoring, orsubject exit interviews to ensure that control participants did notreceive any of the intervention components.To control for subtle expectations on the part of interventionists,one BCC study emphasized to treatment providers that it wasimportant to give both treatment and control interventions thesame emphasis because a primary outcome was long-term dietaryadherence, and the posttreatment baselines needed to be similar forboth groups. Behavior Change Consortium studies reported measuringother nonspecific treatment effects by self-report questionnairescompleted by study participants, or in some cases, ratingaudiotaped intervention sessions for therapist–provider nonspecificeffects.Recommendations. Verifying the extent to which treatmentwas delivered as intended (and having a mechanism to improvedelivery as needed) is crucial to preserve both internal and externalstudy validity. We recommend that researchers be able to answerthe following questions: How will the study measure and controlfor nonspecific treatment effects? How can you ensure that pro-Table 2Treatment Fidelity Strategies for Monitoring and Improving Provider TrainingGoal Description StrategiesStandardize training. Ensure that training is conducted similarlyfor different providers.Ensure that providers meet a priori performance criteria;have providers train together; use standardizedtraining manuals/materials/provider resources/fieldguides; have training take into account the differentexperience levels of providers; use structured practiceand role-playing; use standardized patients; observeintervention implementation with pilot participants;use same instructors for all providers; videotapetraining in case there needs to be future training forother providers; design training to allow for diverseimplementation styles.Ensure provider skill acquisition. Train providers to well-defined performancecriteria.Observe intervention implementation with standardizedpatients and/or pilot participants (role-playing); scoreprovider adherence according to an a priori checklist;conduct provider-identified problem solving anddebriefing; provide written exam pre- andposttraining; certify interventionists initially (beforethe intervention) and periodically (during interventionimplementation).Minimize “drift” in provider skills. Ensure that provider skills do not decayover time (e.g., show that providerskills demonstrated halfway through theintervention period are not significantlydifferent than skills immediately afterinitial training).Conduct regular booster sessions; conduct in vivoobservation or recorded (audio- or videotaped)encounters and review (score providers on theiradherence using a priori checklist); provide multipletraining sessions; conduct weekly supervision orperiodic meetings with providers; allow providerseasy access to project staff for questions about theintervention; have providers complete self-reportquestionnaire; conduct patient exit interviews toassess whether certain treatment components weredelivered.Accommodate provider differences. Ensure adequate level of training in

Page 29: articole eficienta

layperson providers or providers ofdiffering skill level, experience orprofessional background.Have professional leaders supervise lay group leaders/paraprofessionals; monitor differential drop-out rates;evaluate differential effectiveness by professionalexperience; give all providers intensive training; useregular debriefing meetings; use provider-centeredtraining according to needs, background, or clinicalexperience; have inexperienced providers add totraining by attending workshops or training programs.SPECIAL NIH REPORT: TREATMENT FIDELITY IN RESEARCH 447viders deliver the intended intervention? How will you ensure thatproviders adhere to the treatment protocol? How will you minimize“contamination” across treatments when they are implementedby the same provider?Receipt of TreatmentPractices. The last two treatment fidelity categories shift thefocus from the provider to the patient. Receipt of treatment involvesprocesses that monitor and improve the ability of patients tounderstand and perform treatment-related behavioral skills andcognitive strategies during treatment delivery. If the interventionseeks to increase motivation for change or alter other subjectivestates conceptually related to motivation (e.g., readiness to change,self-determination, self-efficacy), receipt refers to the extent towhich the patient’s speech or behavior endorses the increased levelof motivation. Note that treatment receipt specifically relates to theability of patients to demonstrate during the intervention that theyunderstand and can perform the behavioral skills (e.g., relaxationtechniques, completing food diaries) or cognitive strategies (e.g.,reframing, problem solving) that have been presented to them orthat they are able to experience the desired change in subjectivestate induced by the intervention. If a patient does not understandor is not able to implement the new skills, then an otherwiseeffective intervention may be incorrectly deemed as ineffective.For receipt of treatment (see Table 4), most BCC sites reportedthat they verified that participants understood the interventionduring treatment sessions. Methods of measurement included administeringpre- and posttests, structuring the intervention aroundachievement-based objectives, and reviewing homework assignments.A majority of sites also reported employing strategies toverify that participants were able to use the cognitive, behavioral,and subjective skills provided in the intervention. For instance, aBCC study that focused on changing exercise behavior usedmonthly review-of-goal forms and activity calendars to confirmthat participants were able to perform the treatment activitiesduring training sessions.Recommendations. It is important to choose measures of receiptthat take into account the specific types of information andskills that are part of the intervention. We recommend that researchersbe able to answer the following questions: How will youverify that subjects understand the information you provide themwith? How will you verify that subjects can use the cognitive andTable 3Treatment Fidelity Strategies for Monitoring and Improving Delivery of TreatmentGoal Description StrategiesControl for provider differences. Monitor and control for subject perceptions ofnonspecific treatment effects (e.g.,perceived warmth, credibility, etc., oftherapist/provider) across intervention andcontrol conditions.Assess participants’ perceptions of provider warmthand credibility via self-report questionnaire andprovide feedback to interventionist and include in

Page 30: articole eficienta

analyses; select providers for specificcharacteristics; monitor participant complaints;have providers work with all treatment groups;conduct a qualitative interview at end of study;audiotape sessions and have different supervisorsevaluate them and rate therapist factors.Reduce differences within treatment. Ensure that providers in the same conditionare delivering the same intervention.Use scripted intervention protocol; provide atreatment manual; have supervisors rate audioandvideotapes.Ensure adherence to treatment protocol. Ensure that the treatments are being deliveredin the way in which they were conceivedwith regard to content and treatment dose.Provide computerized prompts to providers duringsessions about intervention content; audio- orvideotape encounter and review with provider;review tapes without knowing treatmentcondition and guess condition; randomly monitoraudiotapes for both protocol adherence andnonspecific treatment effects; check for errors ofomission and commission in interventiondelivery; after each encounter, have providercomplete a behavioral checklist of interventioncomponents delivered; ensure provider comfortin reporting deviations from treatment manualcontent.Minimize contamination between conditions. Minimize contamination across treatment/control conditions, especially whenimplemented by same provider.Randomize sites rather than individuals; usetreatment-specific handouts, presentationmaterials, manuals; train providers to criterionwith role-playing; give specific training toproviders regarding the rationale for keepingconditions separate; supervise providersfrequently; audiotape or observe sessions withreview and feedback; conduct patient exitinterviews to ensure that control subjects did notreceive treatment.448 BELLG ET AL.behavioral skills you teach them or evoke the subjective state youtrain them to use? How will you address issues that interfere withreceipt?Enactment of Treatment SkillsPractices. Enactment of treatment skills consists of processesto monitor and improve the ability of patients to performtreatment-related behavioral skills and cognitive strategies in relevantreal-life settings. In the case of an induced motivational orsubjective state, enactment is the degree to which the state can beadopted in the appropriate life setting. This treatment fidelityprocess is the final stage in implementing an intervention in that itinvolves patients’ actual performance of treatment skills in theintended situations and at the appropriate time.Enactment of treatment skills may seem to be confounded withtreatment adherence or treatment efficacy, and making clear distinctionsbetween these three concepts is useful. Enactment specificallyrelates to the extent to which a patient actually implementsa specific behavioral skill, cognitive strategy, ormotivational state at the appropriate time and setting in his or herdaily life (e.g., fills a pill organizer at the beginning of the week,uses a cognitive strategy to deal with a craving for cigarettes, ortries out new recipes to identify healthy and appealing dinners). Incontrast, treatment adherence relates to whether the patient performsthe tasks definitive of medical treatment or a healthy lifestylechange (e.g., actually takes medications, avoids smoking, or

Page 31: articole eficienta

eats a healthy diet). Treatment efficacy relates primarily to whetherthe intervention influences the research or clinical endpoint ofinterest (e.g., whether a cholesterol-lowering medication lowerscholesterol or reduces acute medical events or hospitalization,whether stopping smoking reduces asthma severity, or whethereating a low-salt diet results in lower blood pressure).It is therefore possible to have a study with adequate or excellentenactment of treatment skills that has poor treatment adherence ortreatment efficacy (e.g., someone who fills a pill organizer butnever takes his or her medications or gets the health benefit oftaking them, deals with cravings for cigarettes but does not stopsmoking or have fewer asthma symptoms, or tries out healthylow-salt recipes but does not keep eating them or achieve areduction in blood pressure). Such a study would provide a goodTable 4Treatment Fidelity Strategies for Monitoring and Improving Receipt of TreatmentGoal Description StrategiesEnsure participant comprehension. Ensure that participants understand theinformation provided inintervention, especially whenparticipants may be cognitivelycompromised, have a low level ofliteracy/education, or not beproficient in English.Use pre- and posttest process and knowledgemeasures; have providers reviewhomework or self-monitoring logs; haveproviders ask questions/discuss materialwith subjects; use scripts that promptproviders to paraphrase/summarizecontent; complete activity logs; structureintervention around achievement-basedobjectives; conduct structured interviewwith participants; have providers workwith subjects until they can demonstratethe skills; have providers monitor and givefeedback on practice sessions.Ensure participant ability to use cognitiveskills.Make sure that participants are able touse the cognitive skills taught in theintervention (e.g., reframing,problem solving, preparing for highrisksituations, etc.).Conduct structured interviews withparticipants; have providers reviewhomework; have providers work withparticipants until they can demonstrateskills; use measures of mediatingvariables; have providers monitor and givefeedback on practice sessions; measureparticipant performance and completion oftraining assignments; have providersassess cognitive skills; have participantsprovide feedback on ability; usequestionnaires; use problem-solvingstructured interview that sets uphypothetical situations and asksparticipants to provide strategies forovercoming obstacles to changing theirbehaviors.Ensure participant ability to perform behavioralskills.Make sure that participants are able touse the behavioral skills taught inthe intervention (e.g., relaxationtechniques, food diaries, cigaretterefusal skills, etc.).

Page 32: articole eficienta

Collect self-monitoring/self-report data(participants verbally confirmcompetence); observe subjects; usebehavioral outcome measures; completetraining assignments; monitor(electronically/objectively) behavioraladherence; follow-up telephone contacts/counseling.SPECIAL NIH REPORT: TREATMENT FIDELITY IN RESEARCH 449test of the intervention, because treatment skills are being used bypatients but are not effective at changing their health behavior ortheir health outcomes. In a study with poor enactment, however,neither treatment adherence nor efficacy is likely to be high, butthe researcher will be unable to state whether this is due to poorenactment or to an ineffective intervention.It should be noted that in psychological intervention studies inwhich the outcome is incorporation of a set of psychological,social, or behavioral skills into daily life (e.g., mental health orpsychotherapy outcome studies) and in biomedical studies thatinvolve routine use of medication or medical devices, treatmentgoals may be defined in such a way that treatment enactment maybe the same as adherence to treatment. For example, in a studyexamining ways in which to train patients with heart failure to carefor a ventricular assist device, the patient’s proper response towarning alarms may be defined as both enacting the skill thepatient is trained in and adhering to the health behavior outcome ofinterest. However, for behavioral change studies in which behavioral,psychological, or social treatments are used to alter behavioralrisk factors such as diet, physical activity, or smoking behavior,enactment is appropriately distinguished from adherence, as inthe previous examples.As for enactment of treatment skills in real-life settings (seeTable 5), most BCC sites reported assessing whether participantsactually used the cognitive skills that are part of their intervention.Enactment assessments and interventions included questionnairesand self-reports, structured follow-up interviews, and telephonecalls. All but one study site also reported assessing whether subjectsactually used the behavioral skills in the intervention. Alongwith the above strategies, enactment of behavioral skills wasmonitored with activity logs, participation in social-learning gamesthat provided a record of the desired activity, electronic monitoringof behavior (engaging in exercise or pill taking), and measurementof biological markers associated with the desired behaviors. Forexample, a BCC smoking cessation study measured enactment bytracking the use of nicotine patches, and a study intervening withdiet and exercise tracked participants’ reports of using problemsolvingand emotional expressiveness skills taught during treatmentwith their spouse or partner.Recommendations. Enactment is one of the most challengingaspects of treatment fidelity, both conceptually and pragmatically.Even so, we believe that an important distinction needs to be madebetween what is taught (treatment delivery), what is learned (treatmentreceipt), and what is actually used (enactment). We recommendthat researchers be able to answer the following questions:How will you verify that subjects actually use the cognitive,behavioral, and motivational skills and strategies you provide themwith in the appropriate life situations? How will you address issuesthat interfere with enactment?Discussion and General RecommendationsThe following are our general recommendations to the researchcommunity for improving the current state of the art in treatmentfidelity and making it a practical and useful part of health behaviorresearch.

Page 33: articole eficienta

We recommend that plans for enhancing and monitoring treatmentfidelity be conceptualized as an integral part of the initialplanning and design of health behavior intervention studies. This isparticularly important for studies venturing into less wellunderstoodareas. The needs of each study are different, andideally the components of the treatment fidelity plan are selectedTable 5Treatment Fidelity Strategies for Monitoring and Improving Enactment of Treatment SkillsGoal Description StrategiesEnsure participant use of cognitive skills. Ensure that participants actually use thecognitive skills provided in theintervention in appropriate lifesettings.Use process measure; assess with questionnaire;use self-report regarding achievement ofgoals; provide contact form to monitorparticipant interaction with staff; usestructured interview with participants; useexercises, goal sheets, and other printedmaterial to foster adherence; assess mediatingprocesses periodically; record telephonecontacts; discuss ongoing use of skills withsubjects; conduct follow-up discussions withparticipants.Ensure participant use of behavioral skills. Ensure that participants actually use thebehavioral skills provided in theintervention in appropriate lifesettings.Assess with questionnaires; observe participants’in vivo interactions; assess during providerencounter; use social-learning game,providing record of behaviors; conduct selfreportor self-monitoring and maintain activitylog; measure objective biological orphysiological markers; maintain longitudinalcontact (telephone, mailed information, etc.)to encourage adherence; record time spent atfacility; monitor frequency of sessions; usespecific behavioral skill use measures;electronically monitor behavior; follow updiscussions with participants; conduct followupdiscussions/telephone calls/counseling withparticipants.450 BELLG ET AL.on the basis of the theoretical and clinical framework for eachintervention. For example, a participant’s demonstration of certainbehavior- and knowledge-based skills in his or her life might be anappropriate indication that the participant is enacting an educationallybased intervention but may not accurately reflect enactment ofa motivational intervention. Enactment of a motivational interventionmay be better indicated by a participant’s self-statementsreflecting confidence in being able to make changes to improve hisor her health. With multilevel interventions, it is also important toassess treatment fidelity issues at both the micro and the macrolevels, examining, for instance, whether interventions both incorporatespecific behaviors and achieve broader behavioralobjectives.We also recommend that investigators not only institute treatmentfidelity plans at the outset of the study but also maintainconsistent efforts to adhere to a comprehensive treatment fidelityplan throughout the study period. We recognize, however, thatsuch plans may need to be modified to accommodate practicalneeds and other study demands. In studies where interventionproviders work exclusively for the study, for example, it may bepossible to use numerous strategies to maintain high standards oftreatment fidelity. However, in situations where intervention providers

Page 34: articole eficienta

are integrating the intervention into their current clinicalpractice, it may not be feasible to use all desirable treatmentfidelity strategies. In these situations, a more pragmatic and limitedplan may be necessary and should be documented. Therefore, it isimportant to consider the setting, other study demands, and providerand participant burden in order to design a plan that ispractical, achievable, and effective for monitoring and improvingtreatment fidelity.Overall, we believe that having a specific plan to enhance andmonitor treatment fidelity concerns addressed in all five areascovered by the BCC treatment fidelity recommendations will helpcounter threats to the study’s internal and external validity andtherefore enable investigators to draw more accurate conclusionsabout the validity and effectiveness of study interventions. It alsowill help guide future researchers and program developers intesting and selecting intervention components that have the mostpositive impact on behavioral and treatment outcomes. For clinicians,it will make it possible to identify interventions appropriateto the available resources and implement them with the reasonableexpectation that the results will be similar to those achieved inclinical trials.It is particularly important that funding agencies, reviewers, andjournal editors who publish behavioral change research considertreatment fidelity issues. It is our hope that funding initiatives (e.g.,Requests for Applications and Requests for Proposals), reviewerguidelines, and publishing requirements will include an explicitfocus on the methods used by researchers to monitor and enhancetreatment fidelity in health behavior intervention studies. As is thecase with current efforts to ensure adequate representation ofwomen and minorities in clinical research studies, those chargedwith oversight of the funding, review, conduct, and reporting ofbehavioral change research need to take the lead in encouragingresearchers to address treatment fidelity issues. By asking researchersto address this issue in funding applications and bymaking report of treatment fidelity methods a part of journaleditorial policy, methods to enhance and measure treatment fidelityare more likely to become standard features in health behaviorintervention studies. Ultimately, this will lead to increased credibilityfor the field of behavioral medicine research.Some researchers may be concerned that such efforts will betime-consuming and costly. It is not our intention to add to thework and cost of health behavior intervention studies but to makethem more efficient and effective in identifying useful interventions.Each study deals with unique circumstances, and there is nofixed set of treatment fidelity practices that must be added to thebudgets of research projects and the burdens of researchers. Indeed,our list of “best practices” compiled from BCC studiesrepresents existing assumptions and strategies for the use of treatmentfidelity practices in research. However, it is our hope that theBCC treatment fidelity recommendations will play a role in identifyingand organizing treatment fidelity practices so that they maybe more easily and regularly applied by the research community.Our contention is that not devoting resources to treatment fidelityis ultimately more costly in time, financial resources, andcredibility than doing so. Moreover, with the current focus ontranslation of research findings into real-world settings, treatmentfidelity issues become all the more important. Funding agenciesand researchers clearly have an interest in minimizing the chancethat the studies they are involved in produce equivocal results orcannot be replicated in the laboratory or the clinic. Health behaviorintervention research and behavioral medicine as a whole can onlybenefit from studies that are more reliable, valid, and clinically

Page 35: articole eficienta

applicable. Our final recommendation is that treatment fidelityshould become an integral part of the conduct and evaluation of allhealth behavior intervention research.ReferencesBurgio, L., Corcoran, M., Lichstein, K. L., Nichols, L., Czaja, S.,Gallagher-Thompson, D., et al. (2001). Judging outcomes in psychosocialinterventions for dementia caregivers: The problem of treatmentimplementation. The Gerontologist, 4, 481–489.Cook, T. D., & Campbell, D. J. (1979). Quasi-experimentation: Designand analysis issues for field settings. Geneva, IL: Houghton Mifflin.Kazdin, A. E. (1986). Improving the quality of research: Reflections andcommentary. Counseling Psychologist, 14, 127–131.Kazdin, A. E. (1994). Methodology, design, and evaluation in psychotherapyresearch. In A. E. Bergin & S. L. Garfield (Eds.), Handbook ofpsychotherapy and behavior change (4th ed., pp. 19–71). New York:Wiley.Lichstein, K. L., Riedel, B. W., & Grieve, R. (1994). Fair tests of clinicaltrials: A treatment implementation model. Advances in Behavior Researchand Therapy, 16, 1–29.Moncher, F. J., & Prinz, F. J. (1991). Treatment fidelity in outcome studies.Clinical Psychology Review, 11, 247–266.Nigg, C. R., Allegrante, J. P., & Ory, M. (Eds.). (2002a). Behavior ChangeConsortium [Special issue]. Health Education Research, 17(5).Nigg, C. R., Allegrante, J. P., & Ory, M. (2002b). Theory-comparison andmultiple-behavior research: Common themes advancing health behaviorresearch. Health Education Research, 17, 670–679.Ory, M., Jordan, P. J., & Bazzarre, T. (2002). The Behavior ChangeConsortium: Setting the stage for a new century of health behaviorchange research. Health Education Research, 17, 500–511.Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity ofindependent variables in behavior analysis. Journal of Applied BehaviorAnalysis, 15, 477–492.Quay, H. C. (1977). The three faces of evaluation. Criminal Justice andBehavior, 4, 341–354.Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits ofpsychotherapy. Baltimore: Johns Hopkins University Press.SPECIAL NIH REPORT: TREATMENT FIDELITY IN RESEARCH 451

Journal of Counseling Psychology2001, Vol. 48, No. 3, 251-257Copyright 2001 by the American Psychological Association, Inc.0022-0167/01/S5.00 DOI: 10.1037//O022-OI67.48.3.251

Hyunnie Ahn, Bruce E. Wampold, Where Oh Where Are the Specific Ingredients? A Meta-Analysis of Component Studies in Counseling and Psychotherapy, Journal of Counseling Psychology, 2001, Vol. 48, No. 3, 251-257

University of Wisconsin—MadisonComponent studies, which involve comparisons between a treatment package and the treatment packagewithout a theoretically important component or the treatment package with an added component, useexperimental designs to test whether the component is necessary to produce therapeutic benefit. Ameta-analysis was conducted on 27 component studies culled from the literature. It was found that theeffect size for the difference between a package with and without the critical components was notsignificantly different from zero, indicating that theoretically purported important components are notresponsible for therapeutic benefits. Moreover, the effect sizes were homogeneous, which suggests thatthere were no important variables moderating effect sizes. The results cast doubt on the specificity ofpsychological treatments.It was established in the 1980s that counseling and psychotherapyare remarkably efficacious (Lambert & Bergin, 1994; Wampold,2000); now on center stage is the controversy about whetherthe beneficial effects of counseling and psychotherapy are due tothe specific ingredients of the treatments or to the factors commonin all therapies (Wampold, 2000). On one side are the advocates ofempirically supported treatments who claim that treatments areanalogues of medical treatments in that efficacy is attributed totheir respective specific ingredients, which are usually presented in

Page 36: articole eficienta

treatment manuals (see, e.g., Chambless & Hollon, 1998; Chamblesset al., 1996; Crits-Christoph, 1997; DeRubeis & Crits-Christoph, 1998; DeRubeis et al., 1990; DeRubeis & Feeley, 1990;Task Force on Promotion and Dissemination of PsychologicalProcedures, 1995; Waltz, Addis, Koerner, & Jacobson, 1993;Wilson, 1996). Specificity (i.e., attributing outcome to specificingredients) is one of the hallmarks of the medical model. On theother side are the advocates of models that stipulate that thecommon factors, such as the healing context, the working alliance,and belief in the rationale for treatment and in the treatment itself,are the important therapeutic aspects of counseling and psychotherapy(see, e.g., Frank & Frank, 1991; Garfield, 1992; Luborsky,Singer, & Luborsky, 1975; Parloff, 1986; Rosenzweig, 1936;Strupp, 1986; Wampold, 1997, 2000, 2001; Wampold et al., 1997).From a scientific perspective, the specific ingredient versus commonfactor polemic should be settled empirically rather thanrhetorically.Demonstrating that the specific ingredients of a treatment areresponsible for the benefits of counseling and psychotherapy isHyun-nie Ahn and Bruce E. Wampold, Department of CounselingPsychology, University of Wisconsin—Madison.We thank Nancy Picard and Dongmin Kim for volunteering to ratearticles for this study. This meta-analysis was conducted as part of thedissertation of Hyun-nie Ahn under the supervision of Bruce E. Wampold.Correspondence concerning this article should be addressed to Bruce E.Wampold, Department of Counseling Psychology, 321 Education Building—1000 Bascom Mall, University of Wisconsin, Madison, Wisconsin53706. Electronic mail may be sent to [email protected] (see Wampold, 2001, for a discussion of research strategiesfor establishing specificity). There are many research strategiesthat can be used to demonstrate the specificity of psychologicaltreatments. Of such designs, component studies comeclosest to the "gold standard" of experimental designs and, as such,should show evidence for specificity, should specificity exist.Component studies attempt to isolate the effects of ingredients bycomparing treatments with and without those ingredients. Componentstudies contain two similar designs, dismantling designs andadditive designs.The dismantling design involves a comparison between theentire treatment and the treatment without a given specific ingredientthat is hypothesized to be critical to the success of thetreatment, as shown in Figure 1. Provided the treatment has beenshown to be efficacious, the logic of the design is to "dismantle"the treatment to identify those ingredients that are responsible forthe benefits that accrue from administration of the treatment. In adismantling study, if removing the specific ingredients results inpoorer outcomes vis-a-vis the complete treatment, evidence accruesfor the specificity of those ingredients. Borkovec (1990)described the advantages of the dismantling study:One crucial feature of the [dismantling] design is that more factors areordinarily common among the various comparison conditions. Inaddition to representing equally the potential impact of history, maturation,and so on and the impact of nonspecific factors, a proceduralcomponent is held constant between the total package and the controlcondition containing only that particular element. Such a designapproximates more closely the experimental ideal of holding everythingbut one element constant. . . . Therapists will usually havegreater confidence in, and less hesitancy to administer, a componentcondition than a pure nonspecific condition. They will also be equivalentlytrained and have equal experience in the elements relative tothe combination of elements in the total package.... At the theoreticallevel, such outcomes tell what elements of procedure are mostactively involved in the change process. . . . At the applied level,determination of elements that do not contribute to outcome allowstherapists to dispense with their use in therapy, (pp. 56-57)

Page 37: articole eficienta

251252 AHN AND WAMPOLDGroup IComplete Treatment• All specificingredients,including criticalspecific ingredients• All incidental aspectsGroup IITreatment withoutCritical SpecificIngredient• All otherspecific ingredients• All incidental aspectsGroups for Dismantling StudyI ~\ Effect due to critical" specific ingredientsComplete TX withoutTX IngredientsEffect for Specific IngredientFigure 1. Dismantling study illustrated. Tx = treatment.In the additive design, a specific ingredient is added to anexisting treatment (Borkovec, 1990). Typically, there is a theoreticalreason to believe that the ingredient added to the treatment willaugment the benefits derived from the treatment:The goal is ordinarily to develop an even more potent therapy basedon empirical or theoretical information that suggests that each therapy[or component] has reason to be partially effective, so that theircombination may be superior to either procedure by itself. In terms ofdesign, the [dismantling] and additive approaches are similar. It ispartly the direction of reasoning of the investigator and the history ofliterature associated with the techniques and the diagnostic problemthat determine which design strategy seems to be taking place. (Borkovec,1990, p. 57)A prototypic component study was used by Jacobson et al.(1996) to determine what components of cognitive-behavioraltreatment of depression were responsible for its established efficacy.Jacobson et al. separated cognitive-behavioral therapy intothree components: behavioral activation, coping strategies fordealing with depressing events and the automatic thoughts thatoccur concurrently, and modification of core depressogenic cognitiveschemas. Participants were randomly assigned to a behavioralactivation group, a treatment involving behavioral activationcombined with coping skills related to automatic thoughts, or thecomplete cognitive treatment, which included behavioral activation,coping skills, and identification and modification of coredysfunctional schemas. Generally, the results showed equivalencein outcomes across the groups at termination and at follow-up.This study illustrates the logic of the component design. As well,the results failed to produce evidence of the specificity of ingredientsof cognitive-behavioral therapy.If specific ingredients are indeed responsible for the benefits ofcounseling and psychotherapy, then component studies shouldconsistently demonstrate an effect when a treatment condition iscompared with a condition not involving a theoretically stipulatedcomponent. Bearing in mind that a few component studies coulddemonstrate such differences by chance (i.e., Type I errors), it isimportant to determine whether the corpus of component studiesproduces specificity effects. Meta-analysis has been shown to be apowerful method to review literature and bring clarity to disputes

Page 38: articole eficienta

in education, medicine, psychology, and public policy (Hunt,1997; Mann, 1994). The purpose of this study was to metaanalyticallyexamine component studies to determine the degree towhich these studies produce evidence that supports the specificityof psychological treatments.MethodProcedureBecause this meta-analysis involved a methodological feature (viz.,component studies), determining a keyword for an electronic literaturesearch was not possible. Therefore, a comprehensive search of journals thatpublish outcome research was undertaken. Wampold et al. (1997) reviewedthe research included in Shapiro and Shapiro's (1982) meta-analysis ofcomparative studies and found that the preponderance of such studies werepublished in four journals: Behaviour Research and Therapy, BehaviorTherapy, Journal of Consulting and Clinical Psychology, and Journal ofCounseling Psychology. Stiles, Shapiro, and Elliott (1986) noted thatdetecting the relative efficacy of treatments depended on sophisticatedresearch methods and that more recent studies, involving improved methods,would be more likely to reveal differences between treatments, shouldthey be present. Accordingly, we searched for component studies publishedin the most recent decade (i.e., 1990 to 1999) in the four identified journals.This strategy eliminated dissertations, presentations, and other unpublishedstudies. However, given that studies with statistically significant results aremore likely to be published (Atkinson, Furlong, & Wampold, 1982),omitting unpublished studies would tend to overestimate the effect ofspecific ingredients; consequently, the present analysis yields a liberal testof specificity.In identifying the studies for this meta-analysis, Hyun-nie Ann examinedevery study published in the four journals just identified from 1990 to1999. To be included in this meta-analysis, a study had to (a) involve apsychological treatment intended to be therapeutic for a particular disorder,problem, or complaint and (b) contain the necessary statistics to conductthe meta-analysis. To determine that a treatment was intended to betherapeutic, we used the criteria developed by Wampold et al. (1997);specifically, a treatment had to involve a therapist who had at least amaster's degree and who met face to face with the client and developed arelationship with the client. Moreover, the treatment had to contain at leasttwo of the following four elements: (a) The treatment was based on anestablished treatment that was cited, (b) a description of the treatment wascontained in the article, (c) a manual was used to guide administration ofthe treatment, and (d) active ingredients of the treatment were identifiedand cited. Finally, the study's research design had to involve a comparisonof one group with another group, and one of the following two conditionshad to be satisfied: (a) One, two, or three ingredients of the treatment wereremoved, leaving a treatment that would be considered logically viable(i.e., coherent and credible), or (b) one, two, or three ingredients that werecompatible with the whole treatment and were theoretically or empiricallyhypothesized to be active were added to the treatment, providing a "supertreatment." A study was excluded when treatment A was compared withWHERE ARE THE SPECIFIC INGREDIENTS? 253treatment B, where B was a subset of A but both A and B were establishedtreatments in their own rights.Initially, all studies were gathered that compared one treatment groupwith another group that had components added or removed, although thestudy may not have met the inclusion and exclusion criteria. Two raters(both doctoral students in counseling psychology) were then asked todetermine the suitability of each study for this meta-analysis using a ratingsheet listing the inclusion and exclusion criteria. A study was retained ifboth raters agreed on its inclusion in the study. When the raters disagreedon a study, Bruce E. Wampold rated the study, and the study was includedif he determined that it met the criteria. The resulting meta-analytic sampleincluded 27 treatment comparisons derived from 20 studies (see Table 1).Analytic StrategyFor each study i, an estimate of the effect size dt for study i that reflectedthe effect of a given component or components, as well as an estimate ofthe variance of this estimate—that is, SP-(,d,)—was calculated in the followingway. First, for each dependent variable, a sample effect size wasobtained by calculating the difference in the means of the two conditionsand standardizing by dividing by the pooled standard deviation: (morecomponent-

Page 39: articole eficienta

group M - fewer-component-group M)/SD. This value wasadjusted to yield an unbiased estimate of the population effect size; as well,the standard error of estimate was calculated (Hedges & Olkin, 1985). Todetermine a single estimate of the effect size for each study, we combinedthe effect sizes for each dependent variable under the assumption that thecorrelation among the dependent variables was .50, a reasonable value forthis correlation in psychotherapy studies (see Hedges & Olkin, 1985, pp.212-213, for the method and Wampold et al., 1997, for a justification andapplication in the psychotherapy context). This procedure yielded, forstudy i, the desired estimates dt and a2^,); it also provided a more preciseestimate of d{ (i.e., reduced the standard error of estimate) than would theestimate for any single dependent variable (Wampold et al., 1997).To aggregate the effect sizes over the 27 comparisons, we weighted eachdj by the inverse of the variance, in the standard fashion, to yield theaggregated effect size estimate d+ (Hedges & Olkin, 1985). As well, theTable 1Component Studies of PsychotherapyStudyAppelbaum et al. (1990)Barlow et al. (1992)Baucom et al. (1990)Blanchard et al. (1990)Borkovec & Costello (1993)Dadds & McHugh (1992)Deffenbacher & Stark (1992)Feske & Goldstein (1997)Halford et al. (1993)Hope et al. (1995)Jacobson et al. (1996)Nicholas et al. (1991)Ost et al. (1991)Porzelius et al. (1995)Propst et al. (1992)Radojevic et al. (1992)Rosen et al. (1990)Thackwray et al. (1993)Webster-Stratton (1994)Williams & Falbo (1996)DisorderTension headacheGeneralized anxiety disorderMarital discordTension headacheGeneralized anxiety disorderChild conduct problemGeneral angerPanic disorderMarital discordSocial phobiaDepressionChronic low back painBlood phobiaEating disorderDepressionRheumatoid arthritisBody imageBulimia nervosaParenting effectivenessPanic attack with agoraphobiaMorecomponents groupCT + PMRCT + PMRCT + PMRCR + BMTEET + BMTEET + CR + BMTCT + PMRCBTCMT + Ally

Page 40: articole eficienta

CRCSEMDREnhanced BMTCBTBA + ATBA + ATCT + PMRBT + PMRApplied tension package (BT)Applied tension package (BT)OBETCBT-ReligiousBT + social supportCBT + size perceptiontrainingCBTGDVM + ADVANCECBTCBTFewercomponents groupPMRCTPMRBMTBMTBMTPMRARCMTRCSEFERBMTExposure onlyATBACTBTTension techniqueonlyExposure in vivoonlyCBTCBTBTCBTBTGDVMBTCTComponent(s) testedCognitive componentRelaxation skillsCRCREETEET + CRCognitive componentCognitive component +self-control desensitizationSocial supportCognitive componentEye movementCR + generalized training+ affective explorationCognitive componentBAModification of automaticthoughtsRelaxation skills

Page 41: articole eficienta

Behavioral componentExposure in vivoTension techniquesAdvanced CBT with a focuson coping skills andcognitive interventionsReligious content modifiedto fit CBTFamily supportSize perception trainingCognitive componentCognitive social learning +group discussionCognitive componentBehavioral componentNote. CT = cognitive therapy; PMR = progressive muscle relaxation; CR = cognitive restructuring; BMT = behavioral marital therapy; EET =emotional expressiveness training; CBT = cognitive-behavioral therapy; AR = applied relaxation; CMT = child management training; CRCS = cognitiveand relaxation coping skills; RCS = relaxation coping skills; EMDR = eye movement desensitization and reprocessing; EFER = eye fixation exposureand reprocessing; BA = behavioral activation; AT = automatic thoughts; BT = behavioral therapy; OBET = obese binge eating treatment; GDVM =videotaped parent skills training program; ADVANCE = cognitive training social learning program.254 AHN AND WAMPOLDstandard error of this estimate (d+), which is used to calculate the confidenceinterval of d+ and to test the null hypothesis that the populationeffect size is zero, was calculated according to the methods developed byHedges and Olkin. Finally, a homogeneity test was conducted to determinewhether the 20 effect sizes were drawn from the same population.ResultsUsing the aggregation strategy just described, we obtained thefollowing estimates: d+ = -0.20 and o2^,) = 0.176. The negativevalue for d+ indicates that the treatment conditions with fewercomponents outperformed the treatment conditions with morecomponents, a result in the opposite direction from that anticipated.In any event, an effect size of magnitude 0.20 is consideredsmall (Cohen, 1988).The 95% confidence interval for the population effect size,given a normal effect size distribution, was as follows: lowerbound, d+ - 1.96 d<<f,.) = -0.541, and upper bound, d+ + 1.96d(dj) = 0.149. Because this confidence interval contained zero, thenull hypothesis that the population effect size is zero was notrejected.To determine whether the effect sizes for the 20 comparisonswere drawn from a single population, we conducted a test ofhomogeneity using the methods described by Hedges and Olkin(1985). The Q statistic is a goodness-of-fit statistic, as follows:where k is the number of studies aggregated. The Q statistic hasapproximately a chi-square distribution with k - 1 degrees offreedom. If Q is sufficiently large, the homogeneity hypothesis isrejected. In the present case, Q was 33.34, which, when comparedwith a chi-square distribution with 26 degrees of freedom, wasinsufficiently large to reject the null; therefore, it was concludedthat the effect sizes were homogeneous. Thus, it appears that therewere no variables that would moderate the overall effect size,which was not different from zero. However, this conclusion mustbe tempered by the fact that the power of the homogeneity test canbe low when various assumptions are violated and the sample sizesof the studies are small in comparison with the number of studies(see Harwell, 1997).DiscussionThe present meta-analysis of component studies produced noevidence that the specific ingredients of psychological treatmentsare responsible for the beneficial outcomes of counseling and

Page 42: articole eficienta

psychotherapy. For example, the aggregate effect size for comparisonswas not significantly different from zero. Moreover, theeffect sizes from the 27 comparisons were homogeneous, rulingout rival hypotheses that a missing variable would moderate therelationship between components and outcome.It should be recognized that the studies reviewed in this metaanalysisexamined treatments that have been found to be efficacious.Moreover, the component removed or added was hypothesizedby the researchers to be efficacious according to thetheoretical tenets of the respective treatments. For example, in thecomponent study described in the introduction, Jacobson et al.(1996) clearly described the theoretical basis of the study:Beck and his associates are quite specific about the hypothesizedactive ingredients of CT [cognitive-behavioral treatment], statingthroughout their treatment manual (Beck et al., 1979) that interventionsaimed at cognitive structures or core schema are the activechange mechanisms [for treating depression]. Despite this conceptualclarity, the treatment is so multifaceted that a number of alternativeaccounts for its efficacy are possible. We label two primary competinghypotheses the "activation hypothesis" and the "coping skills" hypothesis.. . . If an entire treatment based on activation interventionsproved to be as effective as CT, the cognitive model of change in CT(stipulating the necessary interventions for the efficacy of CT) wouldbe called into question, (pp. 295-296)In the Jacobson et al. (1996) study, the authors were examining themost validated psychotherapeutic treatment in existence, namelycognitive-behavioral treatment for depression, and testingwhether the cognitive ingredients were indeed necessary to producebenefits.A criticism could be raised that included in the corpus of studiesexamined were some ingredients that are important and others thatare not and that aggregating across diverse studies yields spuriousconclusions. This is a familiar criticism of meta-analysis. First, thehomogeneity finding suggests that there are not two classes ofcomparisons, those with efficacious specific ingredients and thosewithout. Second, an occasional study demonstrating that a componentwas related to the outcome must be considered, in light ofthe present results, a Type I error. The argument that a givenspecific ingredient is efficacious would need to be supported byreplications, a situation not evident in the studies reviewed. Third,it is important to note that Jacobson et al.'s dismantling of theempirically supported cognitive-behavioral treatment of depression,probably the most established psychological treatment inexistence, failed to demonstrate that the components of the treatmentwere responsible for the benefits.The evidence produced by this meta-analysis casts suspicion onthe specificity of psychological treatments. Although some of thetreatments contained in the studies reviewed were designed fordisorders that are not prevalent (e.g., blood phobia), all of thetreatments contained discrete components that lend themselves todetecting the efficacy of specific ingredients. That is, if the specificingredients of treatments are responsible for the benefits of psychotherapy,then the expected effects should appear in the studiesreviewed. As well, it would not be expected that specific ingredientsof treatments with less well-defined components would beresponsible for the benefits of such treatments.Other research evidence tends not to support the benefits ofspecific ingredients of psychological treatments. If specific ingredientswere remedial for a problem, then it would be expected thatsome treatments (viz., those containing potent specific ingredients)would be superior to other treatments. However, the outcomeresearch conclusively has shown that all treatments produce approximatelyequal benefits generally (Wampold, 2000; 2001;

Page 43: articole eficienta

Wampold et al., 1997) as well as in particular areas, such asdepression (e.g., Elkin et al., 1989; Robinson, Berman, & Neimeyer,1990; Wampold, Minami, Baskin, & Tierney, in press) andanxiety (see Wampold, 2001). Attempts to demonstrate specificityby examining mediating effects have failed to show that specificWHERE ARE THE SPECIFIC INGREDIENTS? 255treatments work through the theoretically hypothesized mechanisms(Wampold, 2001). For example, in the National Institute ofMental Health Treatment of Depression Collaborative ResearchProgram, cognitive-behavioral treatment and interpersonal treatmentsdid not operate uniquely through the intended respectivecognitive and interpersonal mechanisms, as hypothesized (Imber etal., 1990). Finally, specificity predicts that certain treatments willbe particularly effective with clients with certain deficits, forexample, cognitive treatments for clients with irrational thoughtsand interpersonal treatments for clients with maladaptive socialrelations. However, theoretically predicted interactions betweentreatments and client characteristics of this type have never beendemonstrated (for laudable attempts, see McKnight, Nelson-Gray& Barnhill, 1992; Project MATCH Research Group, 1997; Simons,Garfield, & Murphy, 1984).The results of the present meta-analytic study are not an anomalyin an otherwise uniform field of research results supportingspecificity; rather, the preponderance of the research evidence isnot supportive of the benefits of specific ingredients. This suggeststhat the benefits of treatments are probably due to the pathwayscommon to all bona fide psychological treatments, such as thehealing context, the belief in the rationale for and the efficacy oftherapy by the client and by the therapist, the therapeutic alliance,therapeutic procedures consistent with the client's understandingof his or her problems, the development of increased self-efficacyto solve one's problems, and remoralization (Frank & Frank, 1991;Garfield, 1992; Wampold, 2001). The research evidence supportsthe notion that the benefits of counseling and psychotherapy arederived from the common factors. For example, it has been shownthat the therapeutic alliance, measured at an early stage, accountsfor a significant portion of the variability in treatment outcomes(Horvath & Symonds, 1991; Martin, Garske, & Davis, 2000).Moreover, the variance due to therapists within treatments isgreater than the variance between treatments, lending primacy tothe person of the therapist rather than to the particular treatment(Crits-Christoph et al., 1991; Wampold & Serlin, 2000). Indeed,the common factors account for about 9 times more variability inoutcomes than do the specific ingredients (Wampold, 2001).Rejecting the specificity of counseling and psychotherapy hasimplications for training, practice, and research. Training modelsshould focus on the common factors as the bedrock of skillsnecessary to become an effective practitioner. The importance ofinterviewing skills, establishment of a therapeutic relationship, andthe core facilitative conditions in the training of counselors andpsychologists is supported by the empirical evidence. Omittingthese vital components and training students to conduct solelyvarious empirically supported treatments is contraindicated. Nevertheless,counselors and therapists need to learn techniques, aposition well stated by common factor advocate Jerome Frank:My position is not that technique is irrelevant to outcome. Rather, Imaintain that, as developed in the text, the success of all techniquesdepends on the patient's sense of alliance with an actual or symbolichealer. This position implies that ideally therapists should select foreach patient the therapy that accords, or can be brought to accord, withthe patient's personal characteristics and view of the problem. Alsoimplied is that therapists should seek to learn as many approaches as

Page 44: articole eficienta

they find congenial and convincing. Creating a good therapeuticmatch may involve both educating the patient about the therapist'sconceptual scheme and, if necessary, modifying the scheme to takeinto account the concepts the patient brings to therapy. (Frank &Frank, 1991, p. xv)The use of treatment manuals in practice is not supported by theresearch evidence. Although standardization of treatment appearsscientific and may be required for experimental control in theresearch context, there is no evidence that adherence to a treatmentprotocol results in superior outcomes; in fact, slavish adherence toa manual can cause ruptures in the alliance and, consequently,poorer outcomes (Wampold, 2001). As well, use of manuals restrictsadaptation of treatments to the attitudes, values, and cultureof the client, a necessary aspect of multicultural counseling.A common factor perspective places emphasis on the skill of thetherapist. There is compelling evidence that a large proportion ofvariability in outcomes is due to therapists, even when therapistsare "experts" in a particular approach and are supervised andmonitored (Wampold, 2001, chap. 8). Thus, emphasis should beplaced on the therapist or counselor rather than on the particulartherapy. Consequently, those who control access to therapy (e.g.,health maintenance organizations) should refer clients to counselorswho have demonstrated efficacy rather than mandate particularservices. Indeed, it would be in the best interest of agencies to havetherapists of various orientations so that clients could receive thetype of therapy that best accords with their worldview.Combined with the evidence that all bona fide treatments areequally efficacious (see Wampold, 2001, chap. 4), the results ofthis meta-analysis suggest that comparative outcome studies willyield nonsignificant differences and therefore are costly experimentsin futility. It is safe to say that hundreds of millions ofdollars have been spent on outcome research that has shown thatbona fide psychological treatments are efficacious but that all suchtreatments produce about the same benefits. Continued outcomeresearch will only support that general pattern of results and yieldlittle informative evidence about counseling and psychotherapy.Rather, the focus of counseling research should be on the processof counseling and on the common factors that have historicallyinterested humanistic and dynamic researchers and clinicians.ReferencesReferences marked with an asterisk indicate studies included inthe meta-analysis.*Appelbaum, K. A., Blanchard, E. B., Nicholson, N. L., Radnitz, C,Kirsch, C, Michultka, D., Attanasio, V., Andrasik, F., & Dentinger,M. P. (1990). Controlled evaluation of the addition of cognitive strategiesto a home-based relaxation protocol for tension headache. BehaviorTherapy, 21, 293-303.Atkinson, D. R., Furlong, M. J., & Wampold, B. E. (1982). Statisticalsignificance, reviewer evaluations, and the scientific process: Is there a(statistically) significant relationship? Journal of Counseling Psychology,29, 189-194.*Barlow, D. H., Rapee, R. M., & Brown, T. A. (1992). Behavioraltreatment of generalized anxiety disorder. Behavior Therapy, 23, 551—570.*Baucom, D. H., Sayers, S. L., & Sher, T. G. (1990). Supplementingbehavioral marital therapy with cognitive restructuring and emotionalexpressiveness training: An outcome investigation. Journal of Consultingand Clinical Psychology, 58, 636-645.*Blanchard, E. B., Appelbaum, K. A., Radnitz, C. L., Michultka, D.,Morrill, B., Kirsch, C, Hillhouse, J., Evans, D. D., Guamieri, P.,Attanasio, V., Andrasik, F., Jaccard, J., & Dentinger, M. P. (1990).256 AHN AND WAMPOLDPlacebo-controlled evaluation of abbreviated progressive muscle relaxationand of relaxation combined with cognitive therapy in the treatmentof tension headache. Journal of Consulting and Clinical Psychology, 58,

Page 45: articole eficienta

210-215.Borkovec, T. D. (1990). Control groups and comparison groups in psychotherapyoutcome research. National Institute on Drug Abuse ResearchMonograph, 104, 50-65.•Borkovec, T. D., & Costello, E. (1993). Efficacy of applied relaxation andcognitive-behavioral therapy in the treatment of generalized anxietydisorder. Journal of Consulting and Clinical Psychology, 61, 611-619.Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supportedtherapies. Journal of Consulting and Clinical Psychology, 66, 7-18.Chambless, D. L., Sanderson, W. C, Shoham, V., Johnson, S. B., Pope,K. S., Crits-Christoph, P., Baker, M., Johnson, B., Woody, S. R., Sue, S.,Beutler, L., Williams, D. A., & McCurry, S. (1996). An update onempirically validated therapies. The Clinical Psychologist, 49(2), 5-18.Cohen, J. (1988). Statistical power analysis for the behavioral sciences(2nd ed.). Hillsdale, NJ: Erlbaum.Crits-Christoph, P. (1997). Limitations of the dodo bird verdict and the roleof clinical trials in psychotherapy research: Comment on Wampold et al.(1997). Psychological Bulletin, 122, 216-220.Crits-Christoph, P., Baranackie, K., Kurcias, J. S., Carroll, K., Luborsky,L., McLellan, T., Woody, G., Thompson, L., Gallagier, D., & Zitrin, C.(1991). Meta-analysis of therapist effects in psychotherapy outcomestudies. Psychotherapy Research, 1, 81-91.*Dadds, M. R., & McHugh, T. A. (1992). Social support and treatmentoutcome in behavioral family therapy for child conduct problems. Journalof Consulting and Clinical Psychology, 60, 252-259.*Deffenbacher, J. L., & Stark, R. S. (1992). Relaxation and cognitiverelaxationtreatments of general anger. Journal of Counseling Psychology,39, 158-167.DeRubeis, R. J., & Crits-Christoph, P. (1998). Empirically supportedindividual and group psychological treatments for mental disorders.Journal of Consulting and Clinical Psychology, 66, 37-52.DeRubeis, R. J., Evans, M. D., Hollon, S. D., Garvey, M. J., Grove, W. M.,& Tuason, V. B. (1990). How does cognitive therapy work? Cognitivechange and symptom change in cognitive therapy and pharmacotherapyfor depression. Journal of Consulting and Clinical Psychology, 58,862-869.DeRubeis, R. J., & Feeley, M. (1990). Determinants of change in cognitivetherapy for depression. Cognitive Therapy and Research, 14, 469-482.Elkin, I., Shea, T., Watkins, J. T., Imber, S. D., Sotsky, S. M., Collins, J. F.,Glass, D. R., Pilkonis, P. A., Leber, W. R., Docherty, J. P., Fiester, S. J.,& Parloff, M. B. (1989). National Institute of Mental Health Treatmentof Depression Collaborative Research Program: General effectiveness oftreatments. Archives of General Psychiatry, 46, 971-982.*Feske, U., & Goldstein, A. J. (1997). Eye movement desensitization andreprocessing treatment for panic disorder: A controlled outcome andpartial dismantling study. Journal of Consulting and Clinical Psychology,65, 1026-1035.Frank, J. D., & Frank, J. B. (1991). Persuasion and healing: A comparativestudy of psychotherapy (3rd ed.). Baltimore: Johns Hopkins UniversityPress.Garfield, S. L. (1992). Eclectic psychotherapy: A common factors approach.In J. C. Norcross & M. R. Goldfried (Eds.), Handbook ofpsychotherapy integration (pp. 169-201). New York: Basic Books.•Halford, W. K., Sanders, M. R., & Behrens, B. C. (1993). A comparisonof the generalization of behavioral marital therapy and enhanced behavioralmarital therapy. Journal of Consulting and Clinical Psychology,61, 51-60.Harwell, M. (1997). An empirical study of Hedge's homogeneity tests.Psychological Methods, 2, 219-231.Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis.San Diego, CA: Academic Press.*Hope, D. A., Heimberg, R. G., & Bruch, M. A. (1995). Dismantlingcognitive-behavioural group therapy for social phobia. Behaviour Researchand Therapy, 33, 637-650.Horvath, A. O., & Symonds, B. D. (1991). Relation between workingalliance and outcome in psychotherapy: A meta-analysis. Journal ofCounseling Psychology, 38, 139-149.Hunt, M. (1997). How science takes stock: The story of meta-analysis. NewYork: Russell Sage Foundation.Imber, S. D., Pilkonis, P. A., Sotsky, S. M., Elkin, I., Watkins, i. T.,

Page 46: articole eficienta

Collins, J. F., Shea, M. T., Leber, W. R., & Glass, D. R. (1990).Mode-specific effects among three treatments for depression. Journal ofConsulting and Clinical Psychology, 58, 352-359.*Jacobson, N. S., Dobson, K. S., Truax, P. A., Addis, M. E., Koerner, K.,Gollan, J. K., Gortner, E., & Price, S. E. (1996). A component analysisof cognitive-behavioral treatment for depression. Journal of Consultingand Clinical Psychology, 64, 295-304.Lambert, M. J., & Bergin, A. E. (1994). The effectiveness of psychotherapy.In A. E. Bergin & S. L. Garfield (Eds.), Handbook of psychotherapyand behavior change (4th ed., pp. 143-189). New York: Wiley.Luborsky, L., Singer, B., & Luborsky, L. (1975). Comparative studies ofpsychotherapies: Is it true that "everyone has won and all must haveprizes?" Archives of General Psychiatry, 32, 995-1008.Mann, C. C. (1994). Can meta-analysis make policy? Science, 266, 960-962.Martin, D. J., Garske, J. P., & Davis, M. K. (2000). Relation of thetherapeutic alliance with outcome and other variables: A meta-analyticreview. Journal of Consulting and Clinical Psychology, 68, 438-450.McKnight, D. L., Nelson-Gray, R. O., & Barnhill, J. (1992). Dexamethasonesuppression test and response to cognitive therapy and antidepressantmedication. Behavior Therapy, 23, 99-111.•Nicholas, M. K., Wilson, P. H., & Goyen, J. (1991). Operant-behaviouraland cognitive-behavioural treatment for chronic low back pain. BehaviourResearch and Therapy, 29, 225-238.*Ost, L.-G., Fellenius, J., & Sterner, U. (1991). Applied tension, exposurein vivo, and tension-only in the treatment of blood phobia. BehaviourResearch and Therapy, 29, 561-574.Parloff, M. B. (1986). Frank's "common elements" in psychotherapy:Nonspecific factors and placebos. American Journal of Orthopsychiatry,56, 521-529.*Porzelius, L. K., Houston, C, Smith, M., Arfken, C, & Fisher, E. Jr.(1995). Comparison of a standard behavioral weight loss treatment anda binge eating weight loss treatment. Behavior Therapy, 26, 119-134.Project MATCH Research Group. (1997). Matching alcoholism treatmentsto client heterogeneity: Project MATCH posttreatment drinking outcomes.Journal of Studies on Alcohol, 58, 7-29.*Propst, L. R., Ostrom, R., Watkins, P., Dean, T., & Mashburn, D. (1992).Comparative efficacy of religious and nonreligious cognitive-behavioraltherapy for the treatment of clinical depression in religious individuals.Journal of Consulting and Clinical Psychology, 60, 94—103.*Radojevic, V., Nicassion, P. M., & Weisman, M. H. (1992). Behavioralintervention with and without family support for rheumatoid arthritis.Behavior Therapy, 23, 13-30.Robinson, L. A., Berman, J. S., & Neimeyer, R. A. (1990). Psychotherapyfor the treatment of depression: A comprehensive review of controlledoutcome research. Psychological Bulletin, 108, 30-49.*Rosen, J. C, Cado, S., Silberg, N. T., Srebnik, D., & Wendt, S. (1990).Cognitive behavior therapy with and without size perception training forwomen with body image disturbance. Behavior Therapy, 21, 481—498.Rosenzweig, S. (1936). Some implicit common factors in diverse methodsof psychotherapy: "At last the Dodo said, 'Everybody has won and allmust have prizes.' " American Journal of Orthopsychiatry, 6, 412-415.Shapiro, D. A., & Shapiro, D. (1982). Meta-analysis of comparativetherapy outcome studies: A replication and refinement. PsychologicalBulletin, 92, 581-604.WHERE ARE THE SPECIFIC INGREDIENTS? 257Simons, A. D., Garfield, S. L., & Murphy, G. E. (1984). The process ofchange in cognitive therapy and pharmacotherapy for depression. Archivesof General Psychiatry, 41, 45-51.Stiles, W. B., Shapiro, D. A., & Elliott, R. (1986). "Are all psychotherapiesequivalent?" American Psychologist, 41, 165—180.Strupp, H. H. (1986). The nonspecific hypothesis of therapeutic effectiveness:A current assessment. American Journal of Orthopsychiatry, 56,513-519.Task Force on Promotion and Dissemination of Psychological Procedures.(1995). Training in and dissemination of empirically-validated psychologicaltreatments: Report and recommendations. The Clinical Psychologist,48(1), 2-23.*Thackwray, D. E., Smith, M. C, Bodfish, J. W., & Meyers, A. W. (1993).A comparison of behavioral and cognitive-behavioral interventions forbulimia nervosa. Journal of Consulting and Clinical Psychology, 61,

Page 47: articole eficienta

639-645.Waltz, J., Addis, M. E., Koerner, K., & Jacobson, N. S. (1993). Testing theintegrity of a psychotherapy protocol: Assessment of adherence andcompetence. Journal of Consulting and Clinical Psychology, 61, 620-630.Wampold, B. E. (1997). Methodological problems in identifying efficaciouspsychotherapies. Psychotherapy Research, 7, 21-43.Wampold, B. E. (2000). Outcomes of individual counseling and psychotherapy:Empirical evidence addressing two fundamental questions. InS. D. Brown & R. W. Lent (Eds.), Handbook of counseling psychology(4th ed., pp. 711-739). New York: Wiley.Wampold, B. E. (2001). The great psychotherapy debate: Models, methods,and findings. Mahwah, NJ: Erlbaum.Wampold, B. E., Minami, T., Baskin, T. W., & Tierney, S. C. (in press).A meta-(re)analysis of the effects of cognitive therapy versus "othertherapies" for depression. Journal of Affective Disorders.Wampold, B. E., Mondin, G. W., Moody, M., Stich, F., Benson, K., &Ahn, H. (1997). A meta-analysis of outcome studies comparing bonafide psychotherapies: Empirically, "all must have prizes." PsychologicalBulletin, 122, 203-215.Wampold, B. E., & Serlin, R. C. (2000). The consequences of ignoring anested factor on measures of effect size in analysis of variance designs.Psychological Methods, 5, 425-433.•Webster-Stratton, C. (1994). Advancing videotape parent training: Acomparison study. Journal of Consulting and Clinical Psychology, 62,583-593.•Williams, S. L., & Falbo, J. (1996). Cognitive and performance-basedtreatments for panic attacks in people with varying degrees of agoraphobicdisability. Behaviour Research and Therapy, 34, 253-264.Wilson, G. T. (1996). Manual-based treatments: The clinical application ofresearch findings. Behaviour Research and Therapy, 34, 295-314.Received July 14, 2000Revision received September 26, 2000Accepted October 24, 2000 •ORDER FORMStart my 2001 subscription to Journal ofCounseling Psychology! ISSN: 0022-0167$38.00, APA Member/Affiliate$76.00, Individual Non-Member$170.00, InstitutionIn DC add 5.75% sales taxTOTAL AMOUNT ENCLOSED $.Subscription orders must be prepaid. (Subscriptions are ona calendar basis only.) Allow 4-6 weeks for delivery of thefirst issue. Call for international subscription rates.SENDTWSORDER FORM TO:AmericanPsychological AssociationSubscriptions750 First Street, NEWashington, DC 20002-4242Or call (800) 374-2721, fax (202) 336-5568.TDD/TTY (202)336-6123. Email: [email protected]

Send me a Free Sample Issue •• Check Enclosed (make payable to APA)Chargemy: Q VISA QMastaCard QAmerieanExpressCardholder NameCard No. Exp. dateSignature (Required for Charge)Credit CardBilling AddressCity State Zip _Daytime PhoneSHIPTO:Name

Page 48: articole eficienta

AddressCity State. Zip.APA Customer #GAD01

PLEASE DO NOT REMOVE - A PHOTOCOPY MAY BE USED

Abstract

The number of sessions required to produce meaningful change has not been assessed adequately, in spite of its relevance to current clinical practice. Seventy-five clients attending outpatient therapy at a university-affiliated clinic were tracked on a weekly basis using the Outcome Questionnaire (Lambert et al., 1996) in order to determine the number of sessions required to attain clinically significant change (CS). Survival analysis indicated that the median time required to attain CS was 11 sessions. When current data were combined with those from an earlier investigation (Kadera, Lambert, and Andrews, 1996), it was found that clients with higher levels of distress took 8 more sessions to reach a 50% CS recovery level than clients entering with lower levels of distress. At a six-month follow-up, CS gains appeared to have been maintained. Other indices of change also were examined (reliable change, average change per session). The implications of these results for allocating mental-health benefits, such as the number of sessions provided through insurance, are discussed. © 2001 John Wiley & Sons, Inc. J Clin Psychol 57: 875–888, 2001.

Wampold, Bruce E.; Mondin, Gregory W.; Moody, Marcia; Stich, Frederick; Benson, Kurt; Ahn, Hyun-nie, A meta-analysis of outcome studies comparing bona fide psychotherapies: Empiricially, "all must have prizes." Psychological Bulletin, Vol 122(3), Nov 1997, 203-215.This meta-analysis tested the Dodo bird conjecture, which states that when psychotherapies intended to be therapeutic are compared, the true differences among all such treatments are 0. Based on comparisons between treatments culled from 6 journals, it was found that the effect sizes were homogeneously distributed about 0, as was expected under the Dodo bird conjecture, and that under the most liberal assumptions, the upper bound of the true effect was about .20. Moreover, the effect sizes (a) were not related positively to publication date, indicating that improving research methods were not detecting effects, and (b) were not related to the similarity of the treatments, indicating that more dissimilar treatments did not produce larger effects, as would be expected if the Dodo bird conjecture was false. The evidence from these analyses supports the conjecture that the efficacy of bona fide treatments are roughly equivalent. (PsycINFO Database Record (c) 2010 APA, all rights reserved)

Sol L. Garfield Washington University, USA, 2005 - Electicism and integration in psychotherapy

Page 49: articole eficienta

Several of the important developments in the field of psychotherapy and behavior change are discussed, including the relative decline in popularity of psychodynamic orientations and the increase in electic preferences. The variation in operational meanings of an electic approach is described, and possible commonalities among diverse forms of psychotherapy are suggested. In terms of patients' views of the important factors in psychotherapy, characteristics of therapists and some common aspects of therapy appear to be emphasized over differences in techniques. Finally, some of the recent emphases on convergence and integration in psychotherapy are discussed.

Comments on the State of Psychotherapy Research (As I See It)

David Orlinsky

University of Chicago

Note: This essay was written in response to an invitation by Chris Muran, North American SPR regional chapter president, to contribute my views on the current state of psychotherapy research for the past-president’s column of the NASPR Newsletter. It appeared, sans references, in the January 2006 issue. Comments on the essay are welcome at <[email protected]>.

I must start by confessing that I don’t really read psychotherapy research when I

can help it. Why? The language is dull, the story lines are repetitive, the characters lack

depth, and the authors generally have no sense of humor. It is not amusing, or at least not

intentionally so. What I do instead of reading is scan or study. I do routinely scan the

abstracts of articles as issues of journals arrive to assure myself there is nothing I need or

want to know in it, and if the abstract holds my interest then I scan tables of results. Also,

at intervals of years, I have agreed to study the research on psychotherapy systematically,

usually with a specific focus on studies that related process and outcome (Howard &

Orlinsky, 1972; Orlinsky & Howard, 1978, 1986; Orlinsky, Grawe & Parks, 1994;

Orlinsky, Rønnestad & Willutzki, 2004). I have been doing this for 40 years more or

Page 50: articole eficienta

less, and on that basis (for what it is worth) here is what I think about the state of

psychotherapy research.

I think in recent years that psychotherapy research has taken on many of the

trappings of what Thomas Kuhn (1970) described as “normal science”—meaning that

research by and large has become devoted to incrementally and systematically working

out the details of a general “paradigm” that is widely accepted and largely unquestioned.

The research paradigm or standard model involves the study of (a) manualized

therapeutic procedures (b) for specific types of disorder (c) in particular treatment

settings and conditions. This is very different from the field that I described three decades

ago (Orlinsky & Howard, 1978) as “pre-paradigmatic,” and in some ways it represents a

considerable advance. However, I refer above to the “trappings of normal science” as a

double entendre to suggest that the appearance (trappings) of normal science with its

implicit paradigmatic consensus may also represent entrapment (trapping) in a

constricted and unrealistic model.

The paradigm is familiar. It holds that psychotherapy is basically a set of specific

and specifiable procedures (“interventions” or “techniques”) that can be taught, learned,

and applied; and that the comparative potency or efficacy of these procedures in treating

specific and specifiable psychological and behavioral disorders defines more or less

effective forms of psychotherapy—if patients are willing and able to comply with the

treatment provided by a competently trained therapist.

In this process, therapists are assumed to be active subjects (agents, providers)

and patients are assumed to be reactive objects (targets, recipients). Researchers may well

believe theoretically that patients as well as therapists are active subjects, and that what

Page 51: articole eficienta

transpires between them in therapy should be viewed as interaction, but in practice the

paradigm or standard research model that they typically follow implicitly defines

treatment as a unidirectional process.

Evidence of these implicit conceptions of the patient, therapist, and treatment

process is to be found in experimental designs that randomly assign patients to alternative

treatment conditions, just as if they were ‘objects’ (rarely bothering to inquire about their

preferences) whereas they never assign therapists to alternative treatment conditions,

randomly or systematically (because it seems essential to consider their subjective

treatment preferences). The consequence is that comparisons between treatment

conditions reflect treatment-x-therapist interaction effects rather than treatment main

effects—as Elkin (1999) and others have made clear—but it is an embarrassment that is

conveniently ignored by all (as in the tale of the emperor’s new clothes).

In addition, the dominant research paradigm constricts our view of the phenomena

that psychotherapy researchers presume they are studying by focusing on certain

abstracted qualities or characteristics of patients and therapists. The target of treatment is

not actually the patient as an individual but rather a specifically diagnosed disorder.

Other personal characteristics of patients are presumed to be “controlled” either through

random assignment (another embarrassing myth, since the effectiveness of random

assignment depends on the law of large numbers, and the number of subjects in a sample

or of replicated samples is rarely large enough to sustain this), or controlled statistically

by using the few characteristics of patients that are routinely assessed in studies as

covariates. The covariates most typically are atheoretically selected demographic

variables assessed for the purpose of describing the sample—age, gender, marital status,

Page 52: articole eficienta

race/ethnicity, and the like—since there are no widely accepted theories to guide the

selection of patient variables. (More recently, “alliance” measures have been routinely

collected from patients, reflecting the massive accumulation of empirical findings on the

impact of therapeutic relationship.)

Psychotherapists are likewise viewed in terms of certain abstracted qualities or

characteristics. The agent of treatment studied is not actually the therapist as an

individual but rather a specific set of manualized treatment skills in which the therapist is

expected to have been trained to competence and to which the therapist is expected to

show adherence in practice. The few other therapist characteristics that are routinely

assessed—professional background, career level, theoretical orientation, and perhaps

gender and race/ethnicity—are used largely to describe the sample or, occasionally, as

covariates. Again, this is because there are no widely accepted theories, or extensively

replicated empirical findings, to guide the selection of therapist variables.

The constricted and highly abstracted view of patients, therapists, and the

therapeutic process in the dominant research paradigm is supported by cognitive biases in

modern culture that all of us share. One of these was well-described by the sociologist

Peter Berger and his colleagues as componentiality. This is a basic assumption that “the

components of reality are self-contained units which can be brought into relation with

other such units—that is, reality is not conceived as an ongoing flux of juncture and

disjuncture of unique entities. This apprehension in terms of components is essential to

the reproducibility of the [industrial] production process as well as to the correlation of

men and machines. … Reality is ordered in terms of such units, which are apprehended

and manipulated as atomistic units. Thus, everything is analyzable into constituent

Page 53: articole eficienta

components, and everything can be taken apart and put together again in terms of these

components” (Berger, Berger & Kellner, 1974, p. 27).

This componentiality is reflected in the highly individual and decontextualized

way that we think about persons. We tend to think of individuals as essentially separate,

independent and basically interchangeable units of ‘personality’ that in turn are

constituted by other internal, more or less mechanistically interacting components—

whether those are conceptualized as traits that may be assessed quantitatively as

individual difference variables, or more holistically but less precisely as clinical

components of personality (e.g., ego, id, and superego). Thus when researchers seek to

assess the (hopefully positive but sometimes negative) impact of psychotherapy on

patients, they routinely focus their observations on componential individuals abstracted

from life-contexts, and on the constituent components of individuals toward which

therapeutic treatments are targeted—symptomatic disorders and pathological character

traits. They do not generally assess individuals as essentially embedded in sociocultural,

economic-political and developmental life-contexts. A componential view of

psychotherapy and of the individuals who engage in it is implicit in the dominant

research paradigm, and produces a comforting sense of cognitive control for researchers

—but does it do justice to the realities we seek to study or does it distort them?

Another widely shared bias of modern culture that complicates and distorts the

work of researchers on psychotherapy and psychopharmacology (and medicine more

broadly) is the implicit assumption of an essential distinction or dichotomy between soma

and psyche (or matter and mind), notwithstanding the efforts of modern philosophers like

Ryle (1949) to undo this Cartesian myth. Because of this, findings that psychological

Page 54: articole eficienta

phenomena have neurological or other bodily correlates (e.g., using MRI or CT scans to

detect changes in emotional response) are viewed as somehow amazing and worthy of

note even in the daily press. The materialist bias of modern culture also fosters a

tendency to view this correlation in reductionist terms, so that the physiological aspects

of the phenomena studied are assumed to be more basic, and to cause the psychological

aspect.

Thanks to a conversation at the recent SPR conference in Montreal among

colleagues from different cultural traditions (Bae et al., 2005), I became aware of how

unnatural the body-mind dichotomy (with its consequent distinction between ‘physical

health’ and ‘mental health’) appears from other cultural perspectives, and of how grossly

it distorts the evident psychosomatic continuity of the living human person. When this

basic continuity is conceptually split into ‘psyche’ and ‘soma’, a mysterious quality is

created as the byproduct (much as energy is released when atoms are split)—a mysterious

quality that is labeled (and as much as possible viewed dismissively) as “the placebo

effect.” This effect, mysteriously labeled in Latin, is viewed as a “contaminant” in

research designs—but, struggle as researchers do to “control” it (rather than understand

it), they typically fail in the attempt because the ‘effect’ reflects an aspect of our reality as

human beings that cannot be eliminated.

The reality, as I see it, is that a person (a) is a psychosomatic unity, (b) evolving

over time along a specific life-course trajectory, and (c) is a subjective self that is

objectively connected with other subjective selves, (d) each of them being

active/responsive nodes in an intersubjective web of community relationships and

Page 55: articole eficienta

cultural patterns, a web in which those same patterns and relationships (e) exert a

formative influence on the psychosomatic development of persons.

The reality of psychotherapy, as I see it, is that it involves (a) an intentionally-

formed, culturally-defined social relationship through which a potentially healing

intersubjective connection is established (b) between persons who interact with one

another in the roles of client and therapist (c) for a delimited time during which their life-

course trajectories intersect, (d) with the therapist acting on behalf of the community that

certified her (e) to engage with the patient in ways that aim to influence the patient’s life-

course in directions that should be beneficial for the patient.

Neither of these realities seems to me to be adequately addressed by the dominant

paradigm or standard research model followed in most studies of psychotherapeutic

process and outcome. Instead, the dominant research paradigm seriously distorts the real

nature of persons and of psychotherapy (as I see them). Why then does this paradigm

dominate the field of psychotherapy research, and why do researchers persist in using it if

it is as uncomfortably ill-fitting a Procrustean bed as I have claimed?

The answer is partly cultural, as the paradigm neatly reflects the componential,

psycho/somatically split, materialist cognitive biases of Western culture. It is also partly

psychological, with supporters of the paradigm becoming more militant as a result of

cognitive dissonance generated by the incipient failure of the paradigm’s utopian

scientific promise (see, e.g., Festinger, Riecken & Schachter, 1956). It is partly historical

too, as the field of psychotherapy originated and initially evolved largely as a medical

subspecialty in the field of psychiatry—as well as the field of clinical psychology that

overlapped with, imitated, and set out to rival psychiatry. Again, the answer is partly

Page 56: articole eficienta

economic, since it is necessary to please research funding agencies (the real ‘placebo’

effect) in order to gain funding for research and advance one’s career by contributing

publications to one’s field and reimbursement for “indirect costs” to the institution where

one is employed.

It may be ironic that the paradigm adheres so closely to the medical model of

illness and treatment at a time when the psychiatric profession which historically

represented medicine’s presence in the field has largely (and regrettably) withdrawn from

the practice of psychotherapy (Luhrmann, 2000). The apparent solidity of the paradigm

that survives is based (a) on the fact that psychotherapeutic services still are largely

funded through health insurance which had been politically expanded (after much

lobbying) to include non-medical practitioners, and (b) on the fact that psychotherapy

research still is largely funded through grants from biomedical research agencies.

Although there is no for-profit industry promoting psychotherapy and supporting research

on it as Big Pharma does with the psychopharmacologic treatments of biological

psychiatry, most of the money that can be had in psychotherapeutic practice and

psychotherapy research comes from sources that implicitly support a medical model of

mental health. As ever “they who pay the piper call the tune,” though perhaps it is more

subtle and accurate to say that pipers who need and seek financial support (therapists and

researchers) play their tunes in ways that they hope will be pleasing to potential sponsors.

Necessity drives us (always), but we (all) have an uncanny ability to persuade ourselves

that advantage and merit coincide.

A sociology-of-knowledge confession: I know full well that I can say these things

mainly because I am privileged by having an old-fashioned, tenured, hard-(but small)-

Page 57: articole eficienta

money position in an arts and sciences faculty, and because I am not really in the

competition for funds. As a producer of psychotherapy research, I am free to go my own

way through my work as participant in the SPR Collaborative Research Network; but as a

consumer of psychotherapy research, I have serious misgivings about the state of the filed

stem from a perception that the prevailing paradigm which permits research to pursue

their studies in the manner of “normal science” represents a risky premature closure in

understanding the actual nature of psychotherapy and the people who engage in it. If it is

not overtly corrupting (as may be true of some research on psychopharmacological

treatments funded by pharmaceutical firms), it is nevertheless constricting in ways that

seem to me highly problematic.

If we are indeed to have evidence-based psychotherapies grounded in systematic,

well-replicated research (e.g., Goodheart, Kazdin & Sternberg, 2006), and evidence-

based training for psychotherapists (e.g., Orlinsky & Rønnestad, 2005), then it would be

very nice—in fact, I would think essential—for that research to be based on a standard

model or paradigm which more adequately matches the actual experience and lived

reality of what it presumes to study. I don’t know what that new paradigm or model for

research will turn out to be. Constructing it is the task of the next generation—but from it

will come the sort of psychotherapy research I think I would like to read.

Page 58: articole eficienta

References

Bae, S. H., Smith, D. P., Gone, J., & Kassem, L. (2005). Culture and

psychotherapy research-II: Western psychotherapies and indigenous/non-western

cultures. Open discussion session, international meeting of the Society for Psychotherapy

Research, Montreal Canada, June 22-25, 2005.

Berger, P., Berger, B., & Kellner, H. (1974). The homeless mind: Modernization

and consciousness. New York: Vintage Books.

Elkin, I. E. (1999). A major dilemma in psychotherapy outcome research:

Disentangling therapists from therapies. Clinical Psychology: Science and Practice, 6,

10-32.

Festinger, L., Riecken, H. H., & Schachter, S. (1956). When prophecy fails: A

social and psychological study of a modern group that predicted the destruction of the

world. New York: Harper.

Goodheart, C. D., Kazdin, A. E., & Sternberg, R. J., Eds. (2006). Evidence-based

psychotherapy: Where practice and research meet. Washington, DC: American

Psychological Association.

Kuhn, T. S. (1970). The structure of scientific revolutions (2nd edition). Chicago:

University of Chicago Press.

Howard, K. I., & Orlinsky, D. E. (1972). Psychotherapeutic processes. In Annual

review of psychology, vol. 23. Palo Alto, Cal.: Annual Reviews.

Luhrmann, T. M. (2000). Of two minds: The growing disorder in American

psychiatry. New York: Knopf.

Page 59: articole eficienta

Orlinsky, D. E., Grawe, K., & Parks, B. K. (1994). Process and outcome in

psychotherapy—noch einmal. In A. Bergin S. & Garfield, Eds., Handbook of

psychotherapy and behavior change, 4th ed. New York: Wiley.

Orlinsky, D. E., & Howard, K. I. (1978). The relation of process to outcome in

psychotherapy. In S. Garfield and A. Bergin, Eds., Handbook of psychotherapy and

behavior change, 2nd ed. New York: Wiley.

Orlinsky, D. E., & Howard, K. I. (1986). Process and outcome in psychotherapy.

In S. Garfield and A. Bergin, Eds., Handbook of psychotherapy and behavior change, 3rd

ed. New York: Wiley.

Orlinsky, D. E., Rønnestad, M. H. (2005). How psychotherapists develop: A study

of therapeutic work and professional growth. Washington, DC: American Psychological

Association.

Orlinsky, D. E., Rønnestad, M. H., & Willutzki, U. (2004). Fifty years of

psychotherapy process-outcome research: Continuity and change. In M. Lambert, Ed.,

Bergin and Garfield’s Handbook of Psychotherapy and Behavior Change, 5th ed. (pp. ).

New York: Wiley.

Ryle, G. (1949). The concept of mind. New York: Barnes & Noble.