An Odd Couple with Promise: Researchers and Practitioners in Evaluation Settings*

Judith A. Myers-Walls**

Evaluation of programs for families continues to grow in importance. The best evaluation studies involve collaborations between evaluation researchers and practitioners, but the two groups represent different cultures. Cultural differences are seen in temporal orientation, cognitive resources, values and definitions of excellence, patterns of communication, daily life styles, and use of tools. The author provides eight suggested steps to improve collaboration through the determination of shared goals, clarification of boundaries, and improved communication.

*Earlier versions of this paper were presented at the National Council on Family Relations and to the Healthy Families America Research Network. Funding for the evaluation study described in this manuscript came from the Indiana Family and Social Services Administration. Appreciation is expressed to Phyllis Kikendall and Brenda Walsh, who assisted with earlier versions of this paper.

**Address correspondence to: Judith A. Myers-Walls, Department of Child Development and Family Studies, 1269 Fowler House, Purdue University, West Lafayette, IN 47907-1269.

Key Words: evaluation, researchers, practitioners.

(Family Relations, 2000, 49, 341-347)

Researchers and practitioners make an odd couple. As a professional whose job it is to communicate research-based information about children and families to the public and to community professionals, I have heard complaints from both members of this couple. It seems to be a never-ending cycle. Practitioners create programs. Researchers ask practitioners to identify the empirical base to support the program's design. Practitioners complain that the relevant research is not available. Researchers reply that the research is being conducted, but it is too soon to provide answers. Practitioners remind researchers that they are working with families now and need answers immediately. Researchers add that they do not understand enough about practice to be able to identify program implications of their findings. Practitioners go away complaining that researchers do not care about what is ultimately most important: the well-being of real families in need. Researchers go away complaining that practitioners do not care about what is ultimately most important: seeking truth about families, children, and their interactions.

In many situations this couple's differences may remain unresolved, and the result is not necessarily seriously detrimental to either one. However, these stresses and conflicts seem to reach a crisis level when the two attempt to collaborate. That collaboration may be most intense when they work together to evaluate a program. As a professional who has experience with both research and practice, I was asked to evaluate a statewide early-intervention home-visiting program. In that role, I have experienced many of those crises.

Researchers and practitioners live in different worlds with different traditions, rules, and expectations—different cultures. The goal of this paper is to help all family specialists lower the frequency and intensity of conflicts that arise from the cultural differences between those who take a research/data-based approach to understanding families and those who take a client-centered, hands-on approach aimed at changing outcomes for individual families. I hope that researchers will begin to understand the perspective of the practitioners and anticipate misunderstandings and concerns. I hope that practitioners will begin to understand the needs and perspective of the researchers and learn ways to negotiate evaluation strategies that benefit both the clientele and the knowledge base. The issues and examples may not be new for many readers—in fact, I expect that readers will often nod in recognition. Hopefully, what is new is the larger picture that makes the issues and examples understandable and manageable in new ways.

Although technically researchers and evaluators are engaged in different pursuits that involve different motivations and goals, this paper focuses instead on the differences between two groups: those who are primarily concerned with knowledge generation and data manipulation, a group that includes both researchers and evaluators, and those who are concerned with helping and making a change in individuals and families. Although researchers and evaluators are not identical, both groups tend to think in very similar ways and use very similar tools. I leave it to others to further explore how the roles of researchers and evaluators differ in relationship to practitioners.

The call for high-quality evaluations is sounding loud and clear and has for some time. This progression of citations shows the increasing intensity of these demands. In a 1988 paper written as part of a national evaluation committee for family life Extension programs, Paula Dail (1988) stated that "strong evaluation data [are] now needed in order to justify program development, funding, expansion (or avoiding elimination) of programs, staffing, and future directions of staff efforts" (p. 2). In 1993 Diana Leslie, Vice President of the Indiana Youth Institute, commented that "many nonprofits have felt that formal evaluations were a luxury they couldn't afford . . . [but] program evaluations may no longer be a choice but a necessity" (p. 1). Anne Brady and Julia Coffman (1997) of the Harvard Family Research Project recently called this an "era of increased accountability" and said that "family support and parenting programs are under pressure to demonstrate their results" (p. 11). The demand for quality in these evaluations also has increased. In January of 1998 the National Academy of Sciences distributed a news release that included this statement: "After identifying and studying 114 evaluations of treatment and prevention programs and soliciting input from service providers, the committee concluded that local officials often are too quick to adopt into policy and practice the results from small-scale studies that have not been replicated and whose limitations have not been adequately considered." Thus the call has been issued to both practitioners and researchers: produce new, strong evaluation research findings to support programs for children, youth, and families.

Researchers and Practitioners in Evaluations

Evaluations may link evaluation researchers and practitioners in a number of different ways. For example, the evaluation and the intervention/educational program may be intimately connected, as they often are in model demonstration projects. (See studies by David Olds as an example of this model: e.g., Olds, Henderson, & Kitzman, 1994.) In those cases, the researcher often conceives of the intervention and guides the operation of the program. Brian Stecher and Alan Davis (1987) placed such studies in historical perspective when they pointed out that the earliest modern evaluations were conducted with academic research principles in mind. In situations of model demonstration projects or when academic research principles dominate, the research/practice dichotomy may be minimized, because the practitioners are almost employees of the research team. This model often allows the researcher to wield a high amount of control and leads to replicable, reliable, and generalizable conclusions regarding the program's outcomes and effectiveness. However, it also raises the question of how the program may operate in real-life situations and with normal human-service operating budgets, demands, and personnel. As Stecher and Davis (1987) have pointed out, "much of the evaluation performed according to this model was not very useful for the local program staff or the clients themselves" (p. 23). In Martin Seligman's words (1996): "experiments resemble real [practice] only slightly" (p. 1072).

At the other end of the continuum, evaluations conducted by practitioners themselves without the consultation of researchers also minimize the clash of approaches between researchers and practitioners. Researchers simply aren't brought into the picture, so that conflicts of approaches are not likely to occur. The programs in the evaluation are realistic and honest, and the staff-directed evaluation efforts may produce findings that are immediately useful and of interest to the program and sometimes to its funding sources, but the validity and reliability of such evaluations may come under scrutiny. As Sarah Conning and Judith Erickson (1993) have warned, many evaluations require the expertise of a skilled, external evaluator. Bias is very common when the people conducting the evaluation are the same people whose jobs and well-being depend on the evaluation to demonstrate particular outcomes.

Between these two extremes are evaluation approaches that involve cooperation between evaluation researchers and practitioners. "All evaluators should strive to establish good rapport with clients and all would be wise to attend to organizational factors and political influences that affect clients" (Stecher & Davis, 1987, p. 23). One version of this collaborative model has been called participatory evaluation (Cousins & Earl, 1982), and others have called this relationship "collaborators in inquiry" (Starke, 1983). "Participatory evaluation may establish evaluation as an organizational learning system that facilitates the development of shared understanding of organizational operations and of cause and effect relationships" (Cousins & Earl, 1995, p. 2). Such approaches are said to establish a balance between the needs for technical rigor and responsiveness in evaluation. "Data are more likely to be used if those in a position to do something with them help to inform the evaluation" (Cousins & Earl, p. 2). Betty Cooke (1998) has described the use of this model with the Early Childhood Family Education program in Minnesota. She has said that the staff have reported "personal change, in-depth understanding of the families they serve, ideas for program change, better understanding of the evaluation process and information, and the superior advantage staff have over outsiders in engaging families in the evaluation" (pp. 8-9). So how do evaluation researchers and practitioners become a team and achieve these outcomes? Some authors have suggested key elements to include when establishing that partnership (e.g., Greene, 1988), but evaluation researchers and practitioners still encounter many roadblocks. How do they overcome the gaps in understanding to reach that collaborative stance?

Understanding the Cultures of Researchers and Practitioners

Before trying to bridge the gap between researchers and practitioners, it helps to examine the nature of the difference between them. Researchers and practitioners represent what can be seen as two different cultures. As such, one can examine their "cultural characteristics," such as temporal orientation, primary cognitive resources or "ways of knowing," values and definitions of excellence, patterns of communication, daily life styles, and use of tools.

Temporal orientation. The time orientation of a practitioner tends to be short-term and immediate, while for a researcher it tends to be long-term. Practitioners tend to deal with clientele who have immediate needs and a desire for quick results. Clientele might include people whose food stamps are running out today, who must complete a parenting class in order to regain custody of their children, or whose teen was arrested last night. Family practitioners themselves tend to live in a short-term professional environment with many external constraints on the use of time. As Michael Carrera (1996) has noted, "a counseling session lasts only so long; a school period is 45 minutes or less; our case-management meetings or individual check-ins are often interrupted by other staff members, important phone calls, or paper work we need to get done" (p. 147). These time constraints often are embedded in jobs with a one-year contract, or no contract at all. Low levels of job security and high job stress mean that many practitioners move often from one job to another. The bottom line is that practitioners deal with people and environments that present deadlines of hours, days, or weeks, so they need answers right away.

Many evaluation researchers, on the other hand, are associated with universities or research institutes and work in tenure-track positions. They look at life in terms of academic semesters or quarters or perhaps in terms of years rather than hours, days, or weeks. Research grants may be funded for two or three years (or more) at a time, and individual research projects are often seen as part of a much larger body of inquiry—a life's work. If researchers, especially academics, choose to change jobs, they often give a six-month to one-year notice. The bottom line for a researcher's temporal orientation is that they know that finding research answers takes a long time, but they are patient because they are building a large body of evidence, often with the protection of a significant amount of job security after tenure has been granted.

Temporal orientation therefore becomes a point of tension when practitioners clamor for immediate answers and solutions as evaluation researchers demand time and patience to build support for findings and fully explore their meaning and nuances. Researchers often have the luxury of time to ponder and interpret phenomena, while practitioners feel someone's hot breath—from clientele, funders, or supervisors—down the back of their necks.

Cognitive resources and "ways of knowing." Researchers and practitioners operate with different ways of knowing. While researchers tend to base their conclusions on numbers and scientifically generated evidence, practitioners often rely on their hearts and their own experiences. Researchers use logic, statistics, systematic gathering of information, prediction, a focus on detail, and empiricism. Their goal is to "seek truth and communicate it" (Morris & Cohn, 1993). When a new idea is proposed, they want to identify the source of the idea and the empirical support behind it. They trust experts with a publication track record who are associated with credible institutions. They are trained to examine studies and research reports to judge adequacy before they choose to trust the results, and even then they are most likely to be convinced if they have the opportunity to replicate the study themselves.

Practitioners, on the other hand, tend to rely on intuition, instinct, direct experience, clinical evidence, diagnosis, and interpersonal sensitivity. They know something if they have seen it happen, or they rely on faith, and they trust that the outcome will be achieved later. If practitioners are presented with a new idea, they might want a chance to try it out first and see how it works with their clients before they believe it. Leader magazine, published by the Active Parenting program, is full of "testimonials" and case reports of individual families who have benefited from programs—a reporting style meant to "sell" practitioners on several program packages. Practitioners will know a program works if it feels right to them, if it makes their life or job better, or if it appears to make a difference with one or more of their clientele families now or sometime in the future. "The practitioner of care knows about . . . moments [when opportunities for the most caring and healing arise] and is willing to offer his or her services, no matter what the outcome" (Carrera, 1996, p. 123).

These contrasts in "ways of knowing" become evident in some phases of planning for an evaluation. Researchers may explain that they are designing a study to find out what role a particular factor plays in a program's outcome at the same time that practitioners say that they already know what the role of that factor is, because they have seen it operate. For example, an evaluation researcher may plan a method to look for empirical proof that a parenting education program is as effective with first-time parents as it is with parents having additional children. Practitioners may have no interest in supporting the evaluation study, because they can describe several cases that prove for them clinically that it has been equally effective with the two groups. In many cases, practitioners and researchers might end up believing the same things, but they come to those beliefs most often by very different channels.

Values and definitions of excellence. Success for a quantitative evaluation researcher—a good day—means collecting large amounts of data and finding statistically significant results. Studies are excellent if they include a controlled design with treatment and comparison or control groups, large samples, and homogeneous samples (Guterman, 1997). The content or direction of the results often matters less to researchers than the strength of the finding.

Success for a practitioner—a good day—means having an impact on one or more individuals or families, especially if an individual or family has struggled, been resistant, or is at special risk. Carol Klass (1996) gives the bottom line for a practitioner: "she knows that she is making a difference" (p. 2). Delivering a program to which the practitioner has a commitment is more important than formally measuring its outcome. In spite of concentrating on process rather than empirically verified outcomes, most practitioners do have clear ideas of functionality and know what they expect from their clientele. Colleen Corron (1998) describes several success stories using those assumptions of functionality. There is a sense of discouragement or personal failure if families or individuals are arrested, abuse their children, divorce, or lose their jobs.

Because they define success so differently, researchers and practitioners may become very frustrated with each other. The researcher is looking for information and patterns, and the practitioner is looking for change in the lives of individuals and families. It is easy to hear each side saying, "I just don't understand what they want! I thought it was a great outcome!" This gap in understanding also may mean that each side can end up sabotaging the other's goals unintentionally.

Patterns of communication. Researchers tend to use written communication to share information with others, and they often focus on technical topics and scientific language. Practitioners, on the other hand, communicate with others in person or by phone, increasing the likelihood of an emotional connection and the possibility of building a relationship. Relationships are crucial to practitioners. "Adults who work with youth have long been aware of the awesome power of relationships" (Brendtro, Brokenleg, & Van Bockern, 1990, p. 58). In order to help home visitors build relationships and make a difference using effective communication skills, Klass (1996) spends almost 20 pages on skills to encourage, maintain, and promote the parent-home visitor relationship. In a similar way, Diane DePanfilis and Marsha Salus (1992) list good communication as a critical skill for caseworkers seeking to build a helping relationship with client children and their parents.

Of the nine volumes in Sage Publications' "Program Evaluation Kit," a set of books to support evaluators, only one addresses effective communication. In that volume, How to Communicate Evaluation Findings, the emphasis is on the format and outline of evaluation reports. It also covers instructions for preparing both written and oral reports and preparing tables and graphs. The goal of these guidelines is to help evaluators to convey information accurately and efficiently. Relationship is not an issue.

It is easy to visualize how communication between researchers and practitioners can be misunderstood and misinterpreted. If one group is concerned with relationship issues and the other interested only in information, facts, and figures, those mismatched goals could lead to frustration and confusion.

Daily life styles. Academics enjoy a large amount of freedom in the classroom and in much of their research, while practitioners operate under fairly close scrutiny. To complicate matters, the groups seem to reverse roles in other settings. Evaluation researchers work to devise study designs with very strict guidelines and consistency, while practitioners are told that responsiveness and flexibility are key attributes of quality service providers. "Staff members and program structures are fundamentally flexible" (Schorr, 1988, p. 257).

When reviewing the professional work environment of researchers, one sees that supervision and oversight in the area of research consist primarily of peer review related to grant funding, institutional review boards that oversee the rights of human subjects who participate in research, and manuscript publication. Researchers may have annual reviews for promotion or raises, but their day-to-day actions are not monitored often. Most researchers and academics have flexible work schedules, allowing for variable work hours—but often demanding a large number of evening and weekend commitments.

Flexibility for evaluation researchers ends when a study is designed, however. Procedures are detailed and usually intractable. The goal is making sure that data are gathered in a specified and predictable way, so that changes in measured outcomes can be assumed to be due to real changes in subjects, not to changes in study methods or procedures.

Practitioners, on the other hand, have limited flexibility or freedom in their work settings. They often experience weekly supervision/case review or need to keep detailed records of hours spent doing particular tasks, number of individuals reached, and client satisfaction. Practitioners also must notify someone of their location at all times when they are on the job or on call. They produce service plans and regular case reports regarding individual families, or they participate in regular case conferences. All of this work is usually reviewed and approved or modified by someone else. Practitioners are expected to adhere to agency policies and guidelines, and many operate under certification by professional groups that stipulate specific codes of ethics. If these practitioners work for the government, the problem could be "operations manuals with page upon page of rules covering every conceivable situation that the bureaucrat may have to deal with in his [or her] working life . . . county workers [are] required to do everything by an established set of procedures" (RAND, 1997-98, p. 4). RAND goes on to provide the contrast with the research world. "On the other [hand] are researchers who need flexibility in putting the results of their analyses to work. As you can imagine, the cultures are often at odds" (RAND, p. 4).

At the same time that practitioners function in such a structured working environment, they also have competing pressures. As Lisbeth Schorr (1988) stated it, "the process of applying a wide array of services on a flexible and individualized basis runs counter to the traditions of many of the helping professions and the requirements of most bureaucracies" (p. 178). She also points out that "effective programs require competent, caring, and flexible professionals" (p. 273). In fact, she suggests that the best professionals will break the rules as needed to benefit families. It is unclear whether these competing pressures lead to professionals who have difficulty abandoning rules and regulations to adapt to new situations and the needs of clientele, or if the encouragement to operate "free of the normal external restraints" (Schorr, p. 266) leads professionals to reject all restrictions in favor of the individual client. Either approach could be challenging in an evaluation situation.

As the above statements suggest, daily life styles may become conflictual when evaluation teams attempt to schedule meetings or establish or modify work or evaluation procedures. Researchers may assume that practitioners have the same leeway that the researchers enjoy in setting work hours and in making decisions about how they perform their tasks. At the same time, they may assume that the practitioners share their commitment to the standardized procedures as outlined. Practitioners may envy, resent, or be confused by the way that researchers do their jobs. They may be fiercely supportive of their clientele and be vigilant of any research practice that may seem tedious or intrusive, and they may apply their bent toward flexibility at that point. The differences in job flexibility and independence are in some ways also related to class or social stratum, so that conflicts arising from these contrasts may be especially delicate. As the class with greater power and flexibility, researchers may be less aware of the contrast in work conditions, while practitioners may see the comparison as obvious and glaring.

Use of tools. A researcher's preferred tools center on sophisticated technology and large-scale data management, while a practitioner's may focus on communications and information management regarding individuals. Researchers begin their work in the library or with on-line computer services. They look for studies conducted by others and review the findings of the strong studies. They look especially for studies that included large numbers of subjects, that were conducted by well-known professionals or at well-respected institutions, and that others have quoted. Computers, whether large mainframe models, personal desktop versions, or laptops, are critical devices that must be powerful enough to complete the analyses for the size and type of data set being studied. Researchers also may use tools like videotape and audiotape recorders or more sophisticated devices to collect data. They keep forms and data for years after they have been collected.

Practitioners depend on tools that help them keep in touch with clients, supervisors, and referral contacts. Because they deal with a number of immediate crises, they often have pagers or cellular phones so that they can be reached immediately when needed. Telephones are critical for practitioners also when dealing with less crisis-oriented needs, such as negotiating appointments with clients or exploring resources for families. Keeping records is a very important task for practitioners just as it is for researchers, but practitioners' records are likely to focus on individual families rather than on background information or on large data sets. Like researchers, they also may keep data in locked cabinets, but have little use for a family's data once that family has terminated.

To practitioners, the researcher's tools may seem expensive, overwhelming, and intimidating. To researchers, the practitioner's tools may seem annoying and intrusive. It is clear that the differing tasks, needs, and orientations of the two groups require different tools, but the players are not likely to understand why the tools are so different unless they understand their differing tasks, needs, and orientations.

In summary, researchers tend to operate with a long time frame; believe things if they can be proven empirically; respect numbers, logic, and science; have high amounts of professional freedom and flexibility but narrow and restricted parameters in research designs; and use complex technological tools. Practitioners, on the other hand, operate on immediate time frames; respect intuition, experience, and personal testimonials; have relatively inflexible jobs with close supervision but strive to be responsive and flexible with clientele; and use tools that facilitate communication and personal connections. Researchers are observers who try to understand and predict behavior, and practitioners are hands-on interventionists who try to mold and change behavior.

Conflicts of Cultures

These differences in cultural contexts set the stage for conflict. Hacker and Wilmot (1985; cited in Folger, Poole, & Stutman, 1993) have defined conflict as "the interaction of interdependent people who perceive incompatible goals and interference from each other in achieving those goals." I have experienced a number of such conflicts in an evaluation study I have conducted with colleagues. The study involved a home-visiting program that targeted pregnant or new parents who were at risk for abusing and neglecting their children. My job was to complete a statewide outcome evaluation of this program, which was coordinated at the state level and included a handful of program sites initially. Because the sites were located across the state, our plan was to have program staff collect some questionnaire information from all consenting participants, and the evaluation team would complete interviews with a smaller number of participants located within driving distance.

The conflicts we encountered could be categorized in five primary groupings: staff concerns that evaluation procedures would negatively impact families; demands on staff time; misunderstandings of data collection procedures; very rapid program growth; and data management procedures designed for case management rather than research and evaluation. Some examples of each of these types of problems follow.

● Concerns about the impact of the evaluation procedures on families grow out of contrasts in values and views of success. The evaluation team wanted to collect initial data on families as soon as possible. Program staff were concerned that collecting data at the beginning of the family's involvement—right after the program had asked them to participate in a detailed clinical interview to determine program eligibility—would lead people to drop out of the program early. In addition, the evaluation team wanted to videotape mothers and babies interacting, but the program staff felt it would be intimidating and intrusive.

● Demands on staff time became a conflict because of differences in daily life style and again because of contrasting values. The evaluation plan stipulated that the program staff would complete questionnaires with participating families every six months. Because the staff expressed concerns about being able to complete the forms at the proper time and in order to adapt to their needs, the evaluators set a very large data collection window, allowing a two-and-a-half-month window for each time slot. (Still, 15% of data were collected out of the data collection window.) In order for data collection to be considered complete for a family, staff needed to work with families to respond to all four questionnaires and needed to complete one observation scale. The time commitment seemed realistic to the evaluators, but the practitioners perceived it as taking away from more important duties. (This is consistent with Cooke's [1998] report that "staff members found it difficult to complete evaluation duties while fulfilling their regular work with families" [p. 9].)

● Misunderstandings of evaluation procedures led to some interesting dilemmas. These seemed to stem from assumptions made in communication. Families needed to sign consent forms to indicate willingness to participate in the evaluation, and the staff needed to send the completed forms to the evaluators. Staff, knowing that the evaluators did not want identifying information on the data forms, sent consent forms with the names obliterated, making it impossible to verify that consent had been given by specific families.

● Program growth was a joy for program staff, but caused headaches for evaluators. This was a reminder of temporal orientation contrasts in addition to value differences. Evaluators felt that more outcome results should be gathered before expanding, but practitioners saw families with immediate needs and felt the program could help. The number of program sites more than tripled in the first year. The evaluation plan stipulated that families who were identified when the program caseloads were full would serve as a no-treatment comparison group of families. Because of growth in funding and staffing, the caseloads rarely became full. In addition, when caseloads came close to being full, program staff, in keeping with their values and priorities, worked to find a way to ensure that every willing family could receive services.

● Data management conflicts arose from different use of tools and differences in values. Evaluators did not collect background demographic data on families, because the program was collecting extensive data to fulfill both the need for monitoring of families and the need for data for evaluation. Sites were allowed to create their own ID numbers—with any number of digits and including any data they wished. The data were computerized, but they were organized to support case management and program monitoring rather than evaluation. As the computer system was put into place, it automatically assigned identification numbers, so that ID numbers for many families changed—breaking a cardinal research rule. After families left the program, their identification numbers were sometimes reassigned to a new family. That worked fine for program staff, but created many nightmares for the evaluation team.

While these conflicts may be understandable and even predictable based on the earlier analysis, solutions may not be obvious. It is helpful to consult conflict management literature to find such solutions.

Managing Conflict

First, researchers and practitioners would benefit from developing a constructive view of conflict. While it may seem to represent a failure when individuals or groups find themselves in conflict with each other, it is important to recognize that conflict is a natural part of life. "The more differences and disagreements are repressed in order to prevent conflict, the more likely it is that there will be conflict, and that it will be painful and destructive" (Youssy Albright, 1995). In order to work together in an evaluation effort, researchers and practitioners must learn to understand and manage conflict.

A helpful perspective may be looking at the stance of each party in a situation. "Stance" refers to what will satisfy the parties as they attempt to resolve a conflict. Stance can be represented by positions, interests, and needs (Girard & Koch, 1996). Positions are the solutions that people think they want. They are specific, concrete outcomes and usually leave little room for exploring options or solving problems. Interests represent the context of a person's stance and are broader than positions. They often refer to realms to which parties feel a commitment or an allegiance. Needs are broader yet. Referring to a need is generally personal and problem-focused. In many cases, needs must be satisfied in order to fully resolve a conflict, even if those needs are unstated.

Authors have suggested that certain stances will make a satisfying solution to conflict more likely (Girard & Koch, 1996). Others have defined supporting climates that facilitate finding positive solutions to conflicts (Folger et al., 1993). Applying their recommendations to the interactions of evaluation researchers and practitioners, several guidelines emerge.

Guidelines for Managing Researcher-Practitioner Conflict

1. Identify your own needs, interests, and purposes connected with the evaluation effort. Decide why you are participating and what benefits you personally or your agency or organization expects to gain from the effort. Practitioners may need to produce evaluation results for a funding source or may need to make a decision about a change in the program and want evaluation information to help with that decision (Greene, 1988). Other practitioners may be involved only because someone told them they had to do this, or they may not be "sincerely committed to obtaining accurate information about the program in question but only [want] research done to buttress decisions already made" (Morris & Cohn, 1993, p. 630). An evaluation researcher may be interested in comparing the results of this intervention to previous evaluation literature or may be trying to build a publication record to help with personal credibility for future funding requests. Other researchers may be just curious and think the study is interesting. Communicate your needs, interests, and purposes to the collaborating partners honestly and in a nonjudgmental way. Needs and interests are not right or wrong, good or bad; understanding where the others are coming from is critical, however.

2. Find the intersection of interests that are shared by both evaluation researchers and practitioners. Once all the parties have shared their goals and needs, outline shared concerns. Be specific about issues of interest. Both groups may want to learn more about the optimal timing of the intervention or what the outcome of the program is for families in poverty. Both groups may believe in the program and want to see it survive and expand. In some cases, the only shared goal may be meeting a particular deadline, but further communication and networking may reveal additional areas of shared interest. Embellish and emphasize shared goals as much as possible as the evaluation progresses, but also identify areas of potential conflict. If one of the areas of interest for the researcher is the examination of the ideal qualifications of the practitioners, be aware of the threatening nature of that question for the practitioners in the program. If the only area of interest for the practitioners is to prove the program is good, be prepared for conflict with the evaluator who feels a need to look for both positive and negative outcomes. Whenever differences occur, try to return to (or find) the shared goals in finding a solution.

3. Define power relationships related to the evaluation. Outline what each party may or may not decide or how decisions will be made jointly. These procedures should be based on the assumption that each side has expertise from which the other side may benefit. The evaluation researchers may state that the evaluation team makes all decisions about the structure and management of all data collection forms (Greene, 1988). That means that practitioners may not change ID numbers, must use a certain form to collect data, and must ask all questions on the form in the order that they appear. The practitioners may state that all decisions about the content of individual or group sessions with clientele rest with the program staff. That means that researchers may not dictate the use of any procedures or curriculum without collaboration with the program staff. Both groups together may want to decide how families will be assigned to no-treatment control groups, and what circumstances would lead to providing treatment to individuals originally assigned to that group. Greene (1988) has suggested that the groups should work together in defining the initial evaluation questions and in the interpretation of results. In any case, each group should identify an area in which it is in control, and power relationships should be as balanced as possible.

4. Each group should describe what is negotiable and what is non-negotiable in the way that they will perform their duties as members of the evaluation team. The list of non-negotiable positions should be very short, but the list of needs and interests may be more extensive. Non-negotiable guidelines will be most closely related to the ethical guidelines for each group. Some non-negotiables for the evaluators may include requiring program staff to always get informed consent before gathering any data, never allowing program staff to review any individual's evaluation responses, or requiring program staff to use only assigned ID numbers on evaluation forms. However, if the evaluators focus on their needs and interests, they may find that there is some room to negotiate with program staff in the determination of ID numbers and that methods of obtaining consent can be adapted to the program. The non-negotiables for practitioners may include never allowing any names or other identifying information to be given to the evaluators, not allowing any evaluators to contact clientele until initial rapport between the family and the program has been established, or requiring that all reports or papers from the evaluation team be provided to the program staff for review before presentation or publication to outside groups. Again, negotiation with the evaluators may uncover some techniques for meeting the needs of both the practitioners and evaluators without violating the non-negotiable guidelines.

5. Establish many opportunities and channels of communication. Get to know each other. Do some of this before the hard decisions need to be made. If possible, shadow each other on the respective jobs. Alternate meeting places between research and practitioner settings. In respect for the different methods of communication used by the two groups, include both relationship-based communication for the benefit of the practitioners and technical communication for the evaluation researchers. Make sure that some face-to-face communication occurs regularly. Each side should ask questions and listen carefully to the other.

6. Each side should find opportunities to educate the other about its approaches. Researchers can teach practitioners some basic research concepts, and practitioners can teach researchers about the "best practices" in their field. This will help each to understand the reasons for procedures and anticipate needs. If practitioners understand the reason for needing to keep records even though families have terminated with the program, they will be more likely to do so than if they are simply given a directive. If evaluators understand the process of training a new staff person, they will be able to formulate their orientation of a recent trainee to the evaluation process in a way that is clear and minimally disruptive. At the same time, each side needs to accept that the other is not likely to change its view of the world as a result of this education.

7. Incorporate multiple "ways of knowing" into the evaluation design. Statistics and charts are one way of demonstrating impact, and professionals with a research background know and appreciate this approach. Another way of knowing is through testimonials and case studies. Evaluation designs that include open-ended qualitative methods along with objective questionnaires and observation may help to meet the needs of researchers, practitioners, and the other stakeholders who will use the evaluation results. Quotes from qualitative interviews also may help to meet the immediate needs of the practitioners for some kind of reportable results before the study is completed.

8. Return regularly to the shared interests and needs in order to check direction. Remind each other of shared objectives. At predetermined time periods, meet as a group and review the goals of the evaluation and the goals of the individuals. Celebrate the achievement of shared goals and the reaching of individual goals. If the course seems to be shifting away from the original shared purpose, consider ways to balance the direction to meet shared goals. To accommodate the different time orientations, researchers can break long-term processes into small steps and show progress along the way. This will improve relevance and eventual application of the results of the evaluation study (Greene, 1988).

The evaluation team in which I have been involved attempted some of these approaches and might have benefited from using more of them. Betty Cooke (1998) listed many similar methods in her collaborative evaluation. In her case, researchers prepared detailed evaluation guides, the practitioners had access to evaluation consultants for technical assistance, and evaluation workshops were offered to the practitioners. In our case, establishing joint evaluation goals from the beginning may have increased the commitment of program staff to the collection of timely and complete data. Such sharing also may have increased the commitment of staff to the recruitment of a comparison group by helping them to understand its importance.

Communication was helpful for us in a number of ways. Newsletters from the evaluation team for the program staff introduced new evaluation team members and notified program staff of changes in evaluation procedures. By presenting participating client families with a copy of the videotape of their parent-child interaction, we allayed concerns about the intrusiveness of the taping. In fact, the videos proved to be the best recruiting tools for families to participate in the evaluation procedures. Face-to-face meetings between practitioners and evaluators were scheduled regularly, allowing evaluators to develop personal relationships with program staff. Meals and informal discussion time were very helpful in building collaboration, but were difficult to schedule. Many other challenges would have been more easily managed if relationships and boundaries had been more clearly specified than they were. Finally, a more collaborative relationship would have allowed the researchers to be a resource rather than an impediment to program growth as program staff sought to make decisions about changes in procedures and scope. Staff could have suggested research questions, and evaluators could have provided information about changes that would have been most likely to achieve the program's goals.

All of the suggestions are likely to be most effective when the program being evaluated is small, when there is sufficient money for both the program and the evaluation, and when relationships are collegial from the beginning of the evaluation. In situations that inherit a significant amount of conflict or when there has been great difficulty in finding shared ground or resolving differences, it may be helpful to bring in an outside mediator to help all parties focus on the production of a high-quality assessment of the program.

Researchers and practitioners do make an odd couple in many ways, but their differences also can complement each other in critical domains. Evaluations of family-serving programs are crucial in building an understanding of how families operate and in providing feedback for the further development and refinement of intervention programs. As public officials, funding agencies, and the general public become more sophisticated in understanding research and evaluation, their expectations of quality are rising. If they work together, researchers and practitioners can respond to those rising expectations and produce powerful reports that support the worlds of evaluation research and practice and improve services to and the well-being of families.

References

Brady, A., & Coffman, J. (1997). Achieving and measuring results: Lessons from HFRP's parenting study. The Evaluation Exchange, III(1), 11.
Brendtro, L. K., Brokenleg, M., & Van Bockern, S. (1990). Reclaiming youth at risk: Our hope for the future. Bloomington, IN: National Educational Service.
Carrera, M. A. (1996). Lessons for lifeguards: Working with teens when the topic is hope. New York: Donkey Press.
Conning, S., & Erickson, J. B. (1993). Some fundamentals of program evaluation. From The Youth Institute (Indiana Youth Institute), 5(2), 4.
Cooke, B. (1998). Use of staff in family program evaluations. The Evaluation Exchange, IV(2), 8-9.
Corron, C. (1998/Spring). Active Parenting enters the workplace. Leader, 3-5.
Cousins, J. B., & Earl, L. M. (1982). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14, 397-418.
Cousins, J. B., & Earl, L. M. (1995/Fall). Participatory evaluation: Enhancing evaluation use and organizational learning capacity. The Evaluation Exchange, I(3/4), 2-3.
Dail, P. W. (1988). Evaluating family life education: Defining the philosophy which drives the process. Paper presented at the National Council on Family Relations Annual Meeting, Philadelphia, November.
DePanfilis, D., & Salus, M. K. (1992). Child protective services: A guide for caseworkers. Washington, DC: U.S. Department of Health and Human Services.
Folger, J. P., Poole, M. S., & Stutman, R. K. (1993). Working through conflict: Strategies for relationships, groups, and organizations. New York: HarperCollins.
Girard, K., & Koch, S. J. (1996). Conflict resolution in the schools: A manual for educators. San Francisco: Jossey-Bass.
Greene, J. G. (1988). Stakeholder participation and utilization in program evaluation. Evaluation Review, 12(2), 61-116.
Guterman, N. B. (1997). Early prevention of physical child abuse and neglect: Existing evidence and future directions. Child Maltreatment, 2(1), 12-34.
Klass, C. S. (1996). Home visiting: Promoting healthy parent and child development. Baltimore, MD: Paul Brookes.
Leslie, D. (1993). Evaluation: How do you know you're making a difference? From The Youth Institute (Indiana Youth Institute), 5(2), 1.
Morris, M., & Cohn, R. (1993). Program evaluators and ethical challenges: A national survey. Evaluation Review, 17, 621-642.
Olds, D. L., Henderson, C. R., & Kitzman, H. (1994). Does prenatal and infancy nurse home visitation have enduring effects on qualities of parental caregiving and child health at 25 and 50 months of life? Pediatrics, 93(1), 89-98.
RAND. (1997-98/Winter). Reinventing local government: How can research help? RAND Research Review, XXI(3), 4-11.
Schorr, L. B. (1988). Within our reach. New York: Doubleday.
Seligman, M. E. P. (1996). Science as an ally of practice. American Psychologist, 51, 1072-1079.
Starke, R. E. (1983). Stakeholder influence in the evaluation of Cities-in-Schools. In A. S. Bryk (Ed.), Stakeholder-based evaluation. San Francisco: Jossey-Bass.
Stecher, B. M., & Davis, W. A. (1987). How to focus an evaluation. Newbury Park, CA: Sage.
Youssy Albright, J. (1995). Intervening in a conflict: Comprehensive model. In Ministry of Reconciliation, Discipleship and reconciliation committee handbook. New Windsor, MD: Church of the Brethren Ministry of Reconciliation.

Judith A. Myers-Walls is an associate professor and Extension Specialist focusing on child development and parenting topics. She has developed and evaluated small and large family life education programs.

Received 2-8-99
Revised & Resubmitted 10-14-99
Accepted 12-20-99