

Quantifying Quality: Findings from Clinical Reviews

Dale A. Masi
Jodi M. Jacobson
Allen R. Cooper

ABSTRACT. This article presents findings from 42 clinical reviews of employee assistance programs across the country. As companies become increasingly concerned about the quality of their services, EAPs and managed behavioral health care organizations (MBHCs) are being required to present quantifiable data indicating that their programs meet an acceptable level of quality. Clinical review is one method to objectively measure the quality of documentation and clinical work of EAPs and MBHCs. The method is outlined in this article, and its relevance to a total quality management framework using "Six Sigma" criteria is illustrated. The findings support the conclusion that quality of care in the employee assistance field is often lacking. Implications of below-standard services are discussed, and recommendations for improving the field are suggested.

KEYWORDS. Clinical review, EAP, managed behavioral health care, total quality management, Six Sigma, outcome indicators

Dale A. Masi, PhD, LICSW, CEAP, is Professor, University of Maryland School of Social Work, Chair of the EAP Specialization, and President and CEO of Masi Research Consultants, Inc., a corporation specializing in the evaluation of EAPs. Jodi M. Jacobson, MSW, LGSW, is a PhD student at the University of Maryland School of Social Work and is currently employed as a mental health therapist and substance abuse specialist. Allen R. Cooper is President of Dallen, Inc., former Executive Vice-President of Evaluation and Planning for the Motion Picture Association of America, and former Corporate Vice-President of Planning for the National Broadcasting Corporation.

Employee Assistance Quarterly, Vol. 15(4) 2000
© 2000 by The Haworth Press, Inc. All rights reserved.


INTRODUCTION TO CLINICAL REVIEWS AND TOTAL QUALITY MANAGEMENT

The Institute of Medicine defines quality as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge" (Mechanic, 1998, p. 67). The need to ascertain "quality of care" has always been present in the medical field, where it is often addressed through methods such as mortality conferences, case conferences, and grand rounds. The practices currently used to measure or assess "quality" in the mental health field, by contrast, have been both sporadic and loosely structured. In the area of quality assessment and outcomes measurement, the mental health field has long been thought to be "lagging behind other areas of medical care, in part, because mental health practice lacks readily available objective outcome indicators" (Rosenheck & Cicchetti, 1998, p. 86).

One reason for the current lack of outcome indicators in the mental health field is the field's long tradition of functioning through private practice, where it was generally assumed that clinical services were already being offered at high quality and any attempt to measure performance would be seen as an insult by physicians and unsettling to the general public (Eddy, 1998). Another reason is the licensing process: licenses for private practitioners are renewed annually through the mail, so clinicians are free to practice for years without any monitoring. These factors seriously limited the demand for outcome measures prior to the onset of managed behavioral health care (MBHC) and employee assistance programs (EAPs). Private practitioners are frequently used as affiliates and sub-contractors in EAP cases, and always in MBHC programs.

Managed care represents the first effort to monitor the performance, professionalism, and licensing of private practitioners. However, its early focus was on "cost-effectiveness" as the principal outcome measure of the value of mental health services. In the past, most human services managers responsible for health care benefits relied on monthly or quarterly reports as the primary method for determining how their programs were functioning. "The marketing of managed mental health care has emphasized the bottom line and has, too often, ignored careful examination of provider behaviors and review approaches designed to monitor the activities and quality control standards of the managed care vendor" (Astrachan, Essock, Kahn, Masi, McLean, and Visotsky, 1995, p. 581). Sole reliance on cost-effectiveness indicators did not furnish managers with information on the overall "quality of care" being provided. Managers soon realized the critical importance of quality, however, and began to demand more explicit outcome measurements.


There are numerous reasons why those responsible for an MBHC, EAP, or other mental health care program, as well as individual providers, should evaluate the quality of the services they provide. Perhaps the most important is to ensure that the services or care being provided to clients is both beneficial and "consistent with established professional, scientific, and ethical standards of practice" (Sharma, 1998, p. 167). In addition, just as purchasers of services are properly concerned with the cost of services, they also "want to know that what they pay for is of value" and "demand that quality be demonstrated in an objective and clear manner" (Minden, Campbell, Dumont, Fisher, Flynn, Henderson, Johnson, Kramer, Manderscheid, Nelson, Panzarino, Weaver, & Zieman, 1998, p. 125). It has been predicted "that service cost, currently the most important factor on which managed care companies compete, will eventually level off and that companies will begin to compete on service quality" (Salzer, Nixon, Schut, Karver, & Bickman, 1997, p. 292). This raises the question of whether vendors of mental health services can quantitatively demonstrate their ability and commitment to provide increasingly high-quality services. If mental health programs are unable to show that the services they provide are of high quality, they may be denied contracts and eventually cease to exist.

The major assumption underlying the use of outcome measures is "that care providers know which processes and structures of care . . . to produce better outcomes, an assumption for which the proof is limited" (Hammermeister, Shroyer, Sethi, & Grover, 1995, p. OS10). The method utilized by MASI Research Consultants, Inc. (MASI) for assessing quality of care therefore includes indicators that measure the structure, process, and outcome of services, to gain a more complete picture of the level of care being provided and of the links between aspects of care and specific outcomes.


LITERATURE REVIEW

A literature review was completed using the University of Maryland System Libraries, the National Library of Medicine, the National Institute of Mental Health Library, and the Library of Congress. A variety of databases that focus on health, psychology, and sociology, as well as collections of business and health-related publications, were searched, including Medline, PsycLit, SWAB, Cinhal, and HealthStar, using terms such as: health and mental health, evaluation and clinical practice, quantitative and outcome measures, six sigma, total quality, quality assurance, and clinical review. The search produced approximately one hundred articles, the majority of which either were not specifically related to mental health or contained content not appropriate for scholarly research.

The literature search did identify a number of initiatives in the mental health field that are currently involved in quality assessment and outcomes management. In 1994, the National Alliance for the Mentally Ill (NAMI), the Center for Mental Health Services (CMHS), the National Institute of Mental Health (NIMH), the National Institute on Alcohol Abuse and Alcoholism (NIAAA), the National Institute on Drug Abuse (NIDA), the Washington Business Group on Health, the Johns Hopkins University, the American Psychiatric Association, Eli Lilly and Company, and the National Depressive and Manic-Depressive Association formed the Outcomes Roundtable, a multidisciplinary and multistakeholder group with three task forces. The task forces have been actively working on outcome measurement development, standards, recommendations, and dissemination of findings to help the mental health field (consumers, families, providers, payers, managed care organizations, policy makers, and the general public) better evaluate the quality of services received by people with mental and addictive disorders (Shern & Flynn, 1996). A second initiative, developed and used by health and mental health programs to evaluate quality, is the Health Employer Data and Information Set (HEDIS) designed by the National Committee for Quality Assurance (NCQA). The HEDIS system was originally designed to "provide comparable data on the quality of care in health plans to facilitate purchaser decisions about which health plans to offer employees, to help consumers choose among health plans, and to provide health plans with information for quality improvement" (McGlynn, 1998, p. 470).


The NCQA has recently formed a Behavioral Health/Substance Abuse panel to "address issues in evaluation of access to, appropriateness of, and outcomes for mental health care" (Friedman, Minden, Bartlett, Ganju, Gettys, Henderson, Kaufman, Manderscheid, O'Kane, Pandiani, Romero, Ross, & Teplow, 1998, p. 146). The recommendations from this panel will be incorporated into the revised HEDIS, which was due out in late 1999. Another method for obtaining information about quality of service is the client satisfaction survey, which is often the only measure companies use to evaluate clinical practice. Although important for gathering information from the customer, client satisfaction measures are subjective and, on their own, do not present a clear and accurate picture of the overall quality of care.

TOTAL QUALITY MANAGEMENT AND SIX SIGMA

The evolution of total quality management (TQM) in the United States has recently brought an innovative way to view and evaluate quality into the health and mental health fields. The traditional Japanese model of TQM views quality in terms of "design, quality of conformance, quality of sales, and service functions" (Dale, Cooper, & Wilkinson, 1997, p. 41). The ultimate goal of TQM is to continuously improve quality and to prevent defects or problems by involving all levels of employees in decisions relating to their work, thereby increasing individual responsibility and accountability. American business has adopted this thinking. The United States government, through the prestigious Malcolm Baldrige Award, established in 1988, recognizes "U.S. companies who practice and promote quality and sets out clear criteria for quality improvement. This Award also publicizes successful management and improvement strategies" (Dale, Cooper, & Wilkinson, 1997, p. 59). Companies that apply for the Malcolm Baldrige Award are scored on seven categories: "leadership, strategic planning, customer and market focus, information and analysis, HR development and management, process management, and business results" (Dale, Cooper, & Wilkinson, 1997, p. 59). Award winners are required to disseminate their knowledge to other companies and the public for at least one year after receiving the Award.


The Motorola Company has been recognized as taking the Malcolm Baldrige Award criteria one step further by developing and implementing "Six Sigma" criteria. The goal of Six Sigma is "setting tolerance limits for defective products at such high levels that fewer than 3.4 defects occur per million units (or opportunities) within six standard deviations of the mean" (Chassin, 1998, p. 567); a brief numerical sketch of this threshold appears after the list of steps below. This TQM approach is intended to prevent defects before they occur by "anticipating customer needs" (Schmidt & Finnigan, 1992, p. 206). General Electric recently extended the application of Six Sigma criteria to its direct customer services, one being GE Lighting. Incorporating Six Sigma not only improved the quality of their services but also saved money: "Clearly, Six Sigma--which added nearly $50 million to our 1998 operating income--is renewing virtually every aspect of GE Lighting" (General Electric Company, 1998). Motorola has defined six steps to reach Six Sigma, which can be used by companies offering services or products of all types to consumers, including mental health providers. The six steps are:

1. "Identify the product or service you provide;
2. Identify the customers for your product or service, and determine what they consider important;
3. Identify your needs to provide a product or service that satisfies the customer;
4. Define the process for doing the work;
5. Mistake-proof the process and eliminate wasted effort;
6. Ensure continuous improvement by measuring, analyzing, and controlling the improved process" (Schmidt & Finnigan, 1992, p. 206).
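To make the quoted defects-per-million threshold concrete, the short sketch below converts an observed defect count into defects per million opportunities (DPMO) and an approximate sigma level. It is an illustration, not part of the original article; the audit numbers, the function names, and the use of the conventional 1.5-sigma shift are our own assumptions.

```python
# Illustrative sketch (not from the article): expressing an observed error
# rate as defects per million opportunities (DPMO) and an approximate
# "sigma level", using the conventional 1.5-sigma shift. The example
# numbers below are hypothetical.
from statistics import NormalDist

SIX_SIGMA_DPMO = 3.4  # the threshold quoted from Chassin (1998)

def dpmo(defects: int, units: int, opportunities_per_unit: int = 1) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Approximate sigma level under the customary 1.5-sigma shift."""
    p_defect = dpmo_value / 1_000_000
    return NormalDist().inv_cdf(1 - p_defect) + 1.5

if __name__ == "__main__":
    # Hypothetical chart audit: 500 case records, 70 missing a required form.
    rate = dpmo(defects=70, units=500)
    print(f"{rate:,.0f} DPMO, roughly {sigma_level(rate):.1f} sigma "
          f"(Six Sigma target: {SIX_SIGMA_DPMO} DPMO)")
```

In this hypothetical audit, a 14% error rate corresponds to 140,000 DPMO, orders of magnitude above the 3.4 DPMO standard.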

With respect to the mental health field, the ideas and approaches of TQM and Six Sigma can be readily transferred to and used by MBHCs, EAPs, and other providers who deliver mental health services. Statistical control processes can be used to "reduce variation and increase the reliability in priority areas of patient care which are considered routine and well accepted for the standard of care as well as for a few critical outcomes" (Graham, 1995, p. 294). For example, a defect in mental health care may be "the number of patients with clinical depression who are not diagnosed or well treated per million patients with depression" (Chassin, 1998, p. 567). TQM focuses on total systems of care, rather than on individuals, and the outcomes or results of a TQM system will "cut across professional and functional boundaries and improve the overall quality process for the entire program" (Dale, Cooper, & Wilkinson, 1997, p. 27).


To work effectively, TQM requires that the various professions and departments work in conjunction with one another to identify flaws in the system and to eliminate or prevent problems and defects. Mental health programs such as MBHCs and EAPs already operate in business settings, so they are exposed to TQM and Six Sigma on a daily basis. Because EAPs and MBHCs take a proactive approach, with prevention as a primary goal, both can treat client problems before more serious ones develop, which is in direct accordance with the principles of TQM and Six Sigma.

This paper presents findings from 42 clinical reviews of large and small organizations, including public and private, for-profit and non-profit employers, evaluated by MASI through 1998. The clinical review process used by MASI represents one method for assessing quality of care in mental health programs using quantifiable quality measurements, including Six Sigma. The reviews were undertaken by expert panelists using an objective process described in the Methods section below.

METHODS

MASI has been evaluating the quality of companies' services for over 15 years. As a consultant to EAPs, MASI was invited and funded by a major corporation to develop an approach that measures quality of care from an objective standpoint, utilizing quantifiable measurements as much as possible in clinical evaluations. Since then, numerous EAPs and MBHCs have been evaluated using this clinical review process. The sample in this study consists of client cases randomly selected, over a given period of time, from each company that contracted for evaluation services. An expert psychometrician determined the necessary number of client cases for each sample, and selected the actual cases to be reviewed, to achieve statistical significance according to the size of each company. The majority of programs included in this review were external EAPs, that is, contracted out rather than run by internal staff.

The summary of results presents findings from 42 clinical reviews conducted by MASI between 1984 and 1998. The sizes of the companies varied from small (fewer than 300 employees) to large corporations (over 500,000 employees).


Multiplying the number of employees by 2.5 gives the number of covered lives (750 and 1,250,000 covered lives, respectively). This analysis represents more than 4,000 separate cases, of which 3,200 involved employees; the remaining cases involved other eligible participants. The number of employees in the firms that participated in the clinical reviews is approximately 7,000,000, for a total of 17,500,000 covered lives. The names of the companies included in the results have been withheld to maintain confidentiality.

The case records were reviewed using the copyrighted protocol explained in the Design section below. The major areas addressed by the protocol include demographics, clinical services, referrals, and documentation. The experimental units for this analysis were the actual reviews, and the measures corresponded to selected items in the protocol. Inter-rater reliability was established prior to beginning the clinical review process: before the cases were distributed to the panel members, a smaller number of cases, selected at random, were used as a standard; these cases were reviewed and rated by all panel members, the results were discussed, and any differences in rating style were reconciled. The use of experts (i.e., persons of national professional recognition) in evaluating programs has been found to "enhance credibility and increase objectivity" (Klarreich, Francek, & Moore, 1985, p. 215).

The validity of the protocol used in these clinical reviews has been established through years of experience. All of the questions included in the protocol have been developed and refined by over forty different experts in the mental health field, along with the client companies that have been reviewed. Expert indicators based on professional norms and consensus are noted for having both normative and consensual validity (Salzer, Nixon, Schut, Karver, & Bickman, 1997, p. 299). The numerical value of each measure was the percentage of cases meeting the established criteria.
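The "percentage of cases meeting the established criteria" lends itself to a simple tabulation. The sketch below is an illustration of that computation, not MASI's actual software; the item names and case data are hypothetical.

```python
# Illustrative sketch: per-item "percentage of cases meeting criteria",
# the numerical measure described in the Methods section. Item names and
# case data are hypothetical, not drawn from the MASI protocol itself.
from collections import defaultdict

# Each reviewed case is a dict of protocol items -> True (criterion met) / False.
reviewed_cases = [
    {"statement_of_understanding": True,  "release_of_information": False},
    {"statement_of_understanding": True,  "release_of_information": True},
    {"statement_of_understanding": False, "release_of_information": False},
]

def percent_meeting_criteria(cases):
    met = defaultdict(int)
    seen = defaultdict(int)
    for case in cases:
        for item, ok in case.items():
            seen[item] += 1
            met[item] += int(ok)
    return {item: 100.0 * met[item] / seen[item] for item in seen}

for item, pct in percent_meeting_criteria(reviewed_cases).items():
    print(f"{item}: {pct:.1f}% of cases met the criterion")
```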

DESIGN

MASI utilizes a seven-step process in conducting clinical reviews of EAPs and MBHCs. This process can be easily adapted for use with any type of health or mental health program. Each step is part of a standardized process to evaluate clinical practice and recommend improvements or changes. The seven essential steps of a successful clinical review are briefly described below; a more detailed description can be found in Evaluating Your Employee Assistance and Managed Behavioral Care Program (Masi, 1994).



1. Selection of Reviewers and the Role of the Chair and the Psychometrician

The most critical element of this first step is to guarantee that panel members are qualified experts who excel in their areas of practice and possess superior, professional, multi-disciplinary judgment. Members should represent various professional disciplines (i.e., psychiatry, psychology, and social work) so that cases can be reviewed both comprehensively and accurately. A multi-disciplinary team of experts enhances the quality of care and improves the effectiveness of all clinicians. All panel members utilized by MASI are at a senior professional level, and many are nationally recognized for their clinical achievements. For example, past reviewers have included a former drug czar, the Chief of Psychology for the United States Navy, and the Commissioner of Addictions for New York State.

The ideal clinical review panel for a mental health program consists of a psychiatrist, a psychologist, and a social worker, with at least one member having expertise in the addiction field. The number of panel members is usually related to the size of the program being evaluated, and other professional authorities may be included as needed.

The Chair of the clinical review coordinates the entire review process with the client company; interviews, selects, and monitors the peer panelists; works with the psychometrician to analyze the data; and prepares and presents the final oral and written reports. After completion of the clinical review, the Chair continues contact with the client company to ensure compliance with the Action Plan (see Step 7).

2. Selection of Sample

The number of cases to be reviewed is directly related to the number of clients served by the program being evaluated. The psychometrician computes the number of cases that must be reviewed to assure statistical significance and approves the method of case selection to ensure a random sample. MASI staff draw the cases on-site at the corporate headquarters or at the external EAP's office.
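As an illustration of how a reviewer might size such a random sample, the sketch below computes the number of cases needed to estimate a compliance proportion within a chosen margin of error, with a finite-population correction for smaller programs. This is a generic textbook calculation offered under our own assumptions, not the psychometrician's actual method.

```python
# Illustrative sketch: sample size for estimating a proportion with a given
# margin of error, with finite-population correction. This is a generic
# calculation, not MASI's actual procedure; the numbers below are hypothetical.
import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Cases to review for a +/- `margin` estimate at ~95% confidence (z=1.96)."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

# Hypothetical programs: a small employer's EAP and a very large one.
for total_cases in (300, 20_000):
    print(f"{total_cases} cases on file -> review {sample_size(total_cases)}")
```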


3. Development of the Instrument

The case review protocol is used to ascertain the profile of each client, determine the appropriateness of treatment, and analyze the counseling process. All questions in the protocol are answered either "yes/no" or on a five-point Likert scale ranging from "5 = superior, exceeds expectations" to "1 = unacceptable." The protocol is divided into two major sections, documentation and clinical, and consists of over 50 questions covering general areas such as: client demographic information; primary assessed problem; primary presenting problem; initial contact information; clinical documentation; type, quantity, and quality of services provided; assessment information; whether it is (or was) a high-risk case with a possibility of violence; efficacy of short-term counseling; level of clinical supervision; follow-up/referral information; overall panelist rating; and a case summary with strengths, weaknesses, and the rationale for re-opening a case, if indicated. A documentation question might ask reviewers whether appropriate documentation (e.g., a Statement of Understanding) was easily found in the case record; a clinical question might ask reviewers to rate the appropriateness of an assessment of depression or to rate a clinical service numerically. By forcing reviewers to answer yes or no, or to assign a number, the answer to each question is quantified. Questions may be added or deleted depending on the nature of the services provided by the client company. The use of this type of protocol, which has been reviewed and refined many times by expert panelists, is essential to objectively quantifying and evaluating clinical practice.

Each case in the sample is read by one panelist but rated jointly by all panelists, and write-ups are produced for cases that need to be re-opened. Re-opening of specific cases has occurred in virtually all clinical reviews conducted by MASI, often because of a threat of violence that required further examination to ensure the safety and well-being of the client. The fact that the reviewers have found "closed" cases that needed to be re-opened underscores how serious the consequences of poor-quality services can be and supports the need for ongoing, regular quality assessment and evaluation.
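As a rough illustration of how such yes/no and Likert-scale items can be turned into quantified scores, the sketch below models a few protocol items and computes an overall rating for a case. The items, answers, and scoring rules are assumptions made for illustration; the actual MASI protocol is copyrighted and far more extensive.

```python
# Illustrative sketch of quantifying a mixed yes/no and five-point Likert
# protocol. The items, answers, and scoring rules are hypothetical; the real
# MASI protocol is copyrighted and more extensive (50+ questions).
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    kind: str           # "yes_no" or "likert"
    answer: object      # bool for yes/no, int 1-5 for Likert

def item_score(item: Item) -> float:
    """Map any answer onto the 1-5 scale used for the overall ratings."""
    if item.kind == "yes_no":
        return 5.0 if item.answer else 1.0
    return float(item.answer)

case_items = [
    Item("Statement of Understanding present in record", "yes_no", True),
    Item("Release of Information signed", "yes_no", False),
    Item("Appropriateness of depression assessment", "likert", 3),
    Item("Quality of clinical documentation", "likert", 2),
]

overall = sum(item_score(i) for i in case_items) / len(case_items)
print(f"Overall case rating: {overall:.2f} out of 5.00")
```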

4. Review of Case Records

Cases are read and rated by panel members at a neutral site under secure and confidential conditions.


To begin the actual review, cases are randomly dealt to panel members. The members are required to sign a Chain of Custody form stating that the peer reviewer has been given access to specific, identified case records and will return them immediately upon completion of the clinical review. After the panelists have finished reviewing and rating the cases in their possession, they come together as a group to present and discuss their findings. When reviewing the clinical records, certain measurable items are selected for inclusion in the Six Sigma analysis. The following items are considered from the clinical record when applying the Six Sigma criteria: demographics, initial contact (initial inquiry by the client), documentation, services provided (e.g., assessment, brief counseling), high risk of violence (e.g., such cases should be kept in red folders), short-term counseling, follow-up contacts, referrals, and volunteers (when applicable). All of these categories should be available on the client intake form and should be answered completely by the clinician; "not specified" is an unacceptable answer according to Six Sigma criteria.
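To show how "not specified" intake fields can be counted as Six Sigma defects, the brief sketch below scans hypothetical intake records and tallies missing or unspecified entries. The field names and records are invented for illustration.

```python
# Illustrative sketch: counting "not specified" (or missing) intake fields as
# defects, per the Six Sigma rule described above. Field names and records
# are hypothetical.
REQUIRED_FIELDS = ["demographics", "initial_contact", "documentation",
                   "services_provided", "follow_up", "referrals"]

intake_records = [
    {"demographics": "complete", "initial_contact": "phone",
     "documentation": "complete", "services_provided": "assessment",
     "follow_up": "not specified", "referrals": "outpatient"},
    {"demographics": "not specified", "initial_contact": "walk-in",
     "documentation": "complete", "services_provided": "brief counseling",
     "follow_up": "2 contacts"},   # 'referrals' missing entirely
]

defects = sum(
    1
    for record in intake_records
    for field in REQUIRED_FIELDS
    if record.get(field, "not specified").lower() == "not specified"
)
opportunities = len(intake_records) * len(REQUIRED_FIELDS)
print(f"{defects} defects in {opportunities} opportunities "
      f"({defects / opportunities * 1_000_000:,.0f} DPMO)")
```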

5. Analysis of Data and Written Report

Data are analyzed using a variety of descriptive statistics. Basic counts, percentages, and frequencies are used to summarize categorical data, and means and standard deviations are used to describe quantitative data. Occasionally, when sample sizes are large enough, inferential statistics (e.g., t-tests, analysis of variance) are used to examine differences between sub-groups of employees. Once the data have been analyzed, the Chair produces a written report based on the findings from the review. The final report includes the following sections: Introduction, Methodology, Demographics, Procedural/Documentation Findings, Clinical Findings, Counselor Credentials, Cases for Special Attention, and Quantitative Tables. Items found to be out of contract compliance are specially noted. The Chair also prepares an oral report, and both reports are then presented to the client company.
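A minimal sketch of this kind of analysis appears below, using invented panel ratings: descriptive summaries for two hypothetical employee sub-groups, plus a t-test between them (the inferential step assumes SciPy is available). It illustrates the general approach rather than MASI's actual analysis code.

```python
# Illustrative sketch: descriptive statistics on panel ratings plus an
# optional t-test between two employee sub-groups. Ratings are invented;
# requires SciPy for the inferential step.
from statistics import mean, stdev
from scipy.stats import ttest_ind

ratings_site_a = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]   # hypothetical overall ratings (1-5)
ratings_site_b = [2, 3, 2, 2, 3, 4, 2, 3, 2, 3]

for name, ratings in (("Site A", ratings_site_a), ("Site B", ratings_site_b)):
    print(f"{name}: n={len(ratings)}, mean={mean(ratings):.2f}, sd={stdev(ratings):.2f}")

# Welch's t-test for a difference in mean rating between the sub-groups.
result = ttest_ind(ratings_site_a, ratings_site_b, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```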

6. Company Debriefing

About one month after the clinical review report has been issued, a meeting is arranged to discuss the panel's findings. The participants include the client company and, sometimes, the vendor of services.


7. Action Plan

The action plan, prepared by the vendor based on the recommendations in the written report, is an essential element of the clinical review. It describes the changes the vendor agrees to make to improve the overall level of quality and to correct any problem areas identified by the peer panelists. Because the idea behind these clinical reviews is total quality management, the action plan and the panel's recommendations are incorporated into future contracts and used as the standard in the following year's review of the company.

RESULTS

Documentation Analysis (Table 1)

The employee cases figure refers to the percentage of clients seen who were themselves employees rather than other eligible participants; 80.6% of all cases seen by the EAPs were employees. It is interesting to note that 53.1% of these clients were women and 46.9% were men, a finding that is important in establishing that men use EAP services almost as much as women do. Another important demographic statistic is that 27.4% of all clients seen by the EAPs were of minority status. This is a fairly high percentage and should be taken into consideration when examining staff make-up and diversity in programs and services. The three types of referrals were all voluntary in nature: 63.6% of clients went to the EAP on their own, and 18.4% were referred at the suggestion of someone else (i.e., a peer, co-worker, or family member), which represents fair to good participation by family members and co-workers. Another 17.6% of the clients seen in the EAP came at a management suggestion. This percentage is also quite high and suggests that managers may be starting to feel more comfortable talking with employees about the need to seek counseling and may be better trained to assist employees in seeking help.

The remainder of Table 1 shows the percentage of client cases that had the appropriate documentation in the file.


TABLE 1. Clinical Review Findings: Documentation

Title                                          Number or Percent

Demographics
  Client Cases Analyzed                        4,000+
  Number of Employees Included in Analysis     7,000,000
  Number of Covered Lives                      17,500,000
  Employee Cases                               80.6%
  Females                                      53.1%
  Minorities                                   27.4%

Referral Source
  Self-Referral                                63.6%
  Referred or Suggested by Others              18.4%
    (i.e., family, union, co-worker)
  Management Suggested                         17.6%

Inclusion of Specific Documents
  Statement of Understanding                   86.0%
  Release of Information                       49.7%

Overall Rating
  Overall Documentation                        2.98 (out of a highest possible 5.00)

The Statement of Understanding is an essential component of the client's file, and according to Six Sigma criteria nothing less than 100% completion should be tolerated. Only 86.0% of the cases reviewed contained this document, leaving roughly one-seventh of clients possibly unclear about what the EAP services would mean for them and opening the door to legal liability problems for the employer. The Release of Information form is another vital element of the client's file and likewise should be completed 100% of the time to be acceptable under Six Sigma criteria. The findings show that the form was completed only 49.7% of the time, which prevented EAP counselors from contacting referral sources, family members, and employers in over 50% of the cases. Not having a signed Release of Information form may also create serious legal liability problems.

The average Overall Documentation score was only 2.98 out of a possible 5.00. In academic terms this would be considered a below-average grade, and it is a clear indicator that overall documentation is not meeting the required standards.


Specific areas noted as particularly low in this section involved the charts themselves. For example, in many charts reviewed the demographic section was not completed by the clinician, or cases were intermingled by family; in the latter situation, information was often completed only for the employee and not for the relatives who were also seeking clinical services. This can lead to serious liability issues, because clients have a legal right to read their own charts and could, in this case, discover information about other family members. Programs need to keep separate case records for all clients. A second factor contributing to the low overall documentation score was that a high number of case notes were missing from charts, sometimes including the notes for an entire session. Finally, referrals were not tracked well overall: the case record often did not clearly indicate who referred the client to which service or whether he or she followed through with the referral.
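Framed in the Six Sigma terms introduced earlier, these documentation rates translate into very large defect counts. The sketch below reuses the article's reported percentages to convert each compliance rate into defects per million cases for comparison with the 3.4 DPMO standard; the conversion helper is our own illustration, not the article's calculation.

```python
# Illustrative sketch: converting the reported documentation compliance rates
# (Table 1) into defects per million cases, against the 3.4 DPMO Six Sigma
# target quoted earlier. The conversion helper is ours, not the article's.
SIX_SIGMA_TARGET_DPMO = 3.4

compliance_rates = {
    "Statement of Understanding": 0.860,   # 86.0% of cases had the document
    "Release of Information":     0.497,   # 49.7% of cases had the form
}

def defects_per_million(compliance: float) -> float:
    return (1.0 - compliance) * 1_000_000

for doc, rate in compliance_rates.items():
    dpm = defects_per_million(rate)
    print(f"{doc}: {dpm:,.0f} defects per million cases "
          f"(Six Sigma target: {SIX_SIGMA_TARGET_DPMO})")
```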

ANALYSIS AND COMMENTS ABOUT CLINICAL SERVICES

The results for clinical services are, again, surprisingly low (Table 2). The average number of face-to-face sessions was 3.32, based on an eight-session model; this is too low for such a model. According to clinical research, 65% to 70% of all client cases should be seen for 4 to 4.5 sessions under an eight-session model.

TABLE 2. Clinical Review Findings: Clinical Service

Title                                                  Number or Percent

Average Number of Face-to-Face Sessions                3.32
  (from an eight-session model)
Alcohol or Drug Related                                18.9%
Additional Medical, Psychological, or
  Psychiatric Workups Conducted if Needed              37.8%
Short-Term Face-to-Face                                56.8%
Self-Help Discussed                                    47.9%
Cases Referred Out                                     40.6%
Referral In-Patient                                    11.5%
Referral Out-Patient                                   88.4%
Referral Used                                          52.5%
Average Overall Quality                                2.81 (out of a highest possible 5.00)


The percentage of cases referred to other resources, 40.6%, is too high for a short-term model. The number of alcohol- or drug-related cases was low; only 18.9% of all clients seen by the EAPs were recognized as having such a problem. Based on general statistics for the workforce, this percentage should be higher, at least 25%, which leads the reviewers to believe that counselors were not accurately screening for substance abuse problems in clients or in family members who may have had a problem. When additional medical or psychiatric attention was required, only 37.8% of clients received those services. This is startlingly low and is a classic example of the lack of quality in the services clients are currently receiving. Self-help was discussed in 47.9% of the cases reviewed, a fairly good rate. The referrals to various settings were appropriate, with expected results. However, a large percentage of clients (88.4%) were referred for out-patient care, and because a Release of Information form was frequently not obtained, it is impossible to determine whether over 50% of these clients actually followed through on the referral. The in-patient referral rate of 11.5% is considerably higher than one would expect, but it may be skewed by a few extreme cases. Among the clients who did sign a Release of Information form (49.7%) and could therefore be followed after being referred, only 52.5% used their referral. This is an extremely low percentage and raises concern about what the other 47.5% of clients in need of treatment did or did not do.

Overall, the average quality score for clinical services corresponded to the poor level of documentation, receiving a rating of only 2.81 out of a possible 5.00. In many cases the assessed problem was the same as the presenting problem, which effectively allowed the client to self-diagnose. One of the main concerns here is that the clinicians under review consistently misidentified, or failed to notice at all, addiction problems. Clinicians also tended to follow up only on those clients seen to be in more serious crises who needed referral to outside programs.

Of the professionals employed by the EAPs reviewed in this study, 85% were clinical psychologists (PhD level) and licensed social workers with a Master's in Social Work. These professionals, often employed as affiliates or sub-contractors in EAP cases and frequently used in MBHC programs, are the same people who work in private practice; therefore, the results of these clinical reviews can be generalized to much of the mental health field. The lack of quality services and evaluation methods is evident.


CONCLUSION

It is apparent from the results of this study that quality of care is often overlooked or neglected by mental health companies. The findings reported here reflect the limitations evident in today's clinical mental health practice. Employees, clients, and client companies need to be aware of the lack of quality in the services they may be receiving and should not automatically assume that the services being provided meet high quality standards. Too many companies rely only on customer-satisfaction forms or "report cards" to evaluate their overall level of service quality. These forms are only one way to measure quality, and they certainly do not evaluate the full spectrum of quality of care. For liability reasons, and in keeping with professional ethical values, it is essential that mental health professionals, clients, and client companies demand more from their mental health services and insist on quality assurance.

Clinical review is only one system for assessing quality of care. It is an important process, and it should be used alongside other quality improvement methods to implement a total quality management system that continuously evaluates and raises the level of quality in clinical mental health practice. The individual companies in this study that abided by their Action Plans and followed up with yearly clinical reviews showed significant improvement in their overall quality over time. Quality improvement is a continuous process that always requires assessment, evaluation, and modification. "All elements of the mental health field--consumers, family members, providers, managed care organizations, and payers--will need to engage in a joint process to develop needed practice guidelines, outcome measures, and report cards" (Manderscheid, 1998, p. 233).

The mental health field has been lackadaisical about quality for far too long. The demand for total quality improvement and management, and for the assurance of quality services, has arrived and will remain a force to be contended with throughout the new millennium. Companies and private practitioners need to ask themselves whether they are really providing the highest quality of care, because the demand for quality services is steadily increasing and clients and client companies alike will no longer settle for anything less than Six Sigma performance.


REFERENCES

Astrachan, B. M., Essock, S., Kahn, R., Masi, D., McLean, A. A., & Visotsky, H. (1995). "The role of a payer advisory board in managed mental health care: The IBM approach," Administration and Policy in Mental Health, 22(6), 581-595.

Chassin, M. R. (1998). "Is health care ready for Six Sigma quality?" The Milbank Quarterly, 76(4), 565-591.

Dale, B., Cooper, C., & Wilkinson, A. (1997). Managing Quality and Human Resources: A Guide to Continuous Improvement. Oxford, UK: Blackwell Publishers.

Eddy, D. M. (1998). "Performance measurement: Problems and solutions," Health Affairs, 17(4), 7-25.

Friedman, M., Minden, S., Bartlett, J., Ganju, V., Gettys, W. D., Henderson, M. J., Kaufman, C., Manderscheid, R. W., O'Kane, M., Pandiani, J., Romero, L. G., Ross, E. C., & Teplow, D. (1998). "Mental health report cards," Journal of the Washington Academy of Sciences, 85(1), 144-153.

General Electric Company. (1998). 1998 Annual Report. Fairfield, Connecticut: General Electric Company.

Graham, N. O. (1995). Quality in Health Care: Theory, Application, and Evolution. Maryland: Aspen Publishers.

Hammermeister, K. E., Shroyer, A. L., Sethi, G. K., & Grover, F. L. (1995). "Why is it important to demonstrate linkages between outcomes of care and processes and structures of care," Medical Care, 33(10, Supplement), OS5-OS16.

Klarreich, S. H., Francek, J. L., & Moore, C. E. (Eds.) (1985). The Human Resources Management Handbook. New York: Praeger Publishers.

McGlynn, E. A. (1998). "Choosing and evaluating clinical performance measures," Journal on Quality Improvement, 24(9), 470-479.

Manderscheid, R. W. (1998). "From many into one: Addressing the crisis of quality in managed behavioral health care at the millennium," The Journal of Behavioral Health Services and Research, 25(2), 233-237.

Masi, D. A. (1994). Evaluating Your Employee Assistance and Managed Behavioral Care Program. Michigan: Performance Resource Press.

Mechanic, D. (1998). "Managed behavioral health care: Current realities and future potential," New Directions for Mental Health Services, 78, 67-76.

Minden, S., Campbell, J., Dumont, J., Fisher, B., Flynn, L., Henderson, M. J., Johnson, J. R., Kramer, T., Manderscheid, R. W., Nelson, D., Panzarino, P., Weaver, P., & Zieman, G. (1998). "Measuring outcomes of mental health care services," Journal of the Washington Academy of Sciences, 85(1), 125-143.

Rosenheck, R., & Cicchetti, D. (1998). "A mental health program report card: A multidimensional approach to performance monitoring in public sector programs," Community Mental Health Journal, 34(1), 85-106.

Salzer, M. S., Nixon, C. T., Schut, L. J. A., Karver, M. S., & Bickman, L. (1997). "Validating quality indicators: Quality as relationship between structure, process, and outcome," Evaluation Review, 21(1), 292-309.

Schmidt, W. H., & Finnigan, J. P. (1992). The Race Without a Finish Line: America's Quest for Total Quality. California: Jossey-Bass.

Sharma, S. (1998). "Quality assurance in mental health service in India," International Medical Journal, 5(3), 167-173.

Shern, D. L., & Flynn, L. M. (1996). "The outcomes roundtable," Behavioral Healthcare Tomorrow, 5, 25-30.
