
EVALUATION IN SCHOOL-BASED HEALTH CENTERS

LAURA A. NABORS

University of Cincinnati

School-Based Health Centers (SBHCs) are ideal primary care settings for improving children’s access to and utilization of health care services. In this era of shrinking funding for social service programs, SBHCs may provide services to youth from low-income families, who otherwise might lack access to health care services. However, the growth of SBHCs has outpaced evaluation efforts. More information is needed about what services are being provided, and for whom services are effective. This article reviews information that will assist in the development of evaluation efforts for SBHCs. A review of evaluation theory, ideas for evaluation in SBHCs, challenges to implementing research in schools, and future directions for evaluation efforts are presented. © 2003 Wiley Periodicals, Inc.

Growth of school-based health centers (SBHCs), which are based in schools or linked to nearby agencies that have a formal relationship with the school, has outpaced evaluation efforts (Santelli, Morreale, Wigton, & Grason, 1996). Concomitantly, state and federal funding for children’s services is decreasing, thereby increasing the role of community programs such as SBHCs in caring for our nation’s children. Improving knowledge about the impact of SBHCs on health and academic outcomes for youth is important to ensure the sustainability of these programs. Moreover, evaluating the services provided in SBHCs will provide accountability and quality assurance data that can provide directions for improving these programs (Center for Mental Health in Schools, 2000). Also, results from evaluation research can be used to apply for further funding, in the form of grants and contracts, thereby improving the financial status of SBHCs.

Despite this convincing argument for increasing evaluation efforts, research documenting the effectiveness of the various health services provided by SBHCs is lacking (Dryfoos, 1995; Santelli et al., 1996). This article is meant to serve as a resource for individuals who are interested in evaluating SBHC services. Evaluation theory, potential outcome indicators for SBHCs, challenges to conducting evaluation projects in SBHCs, and ideas for future evaluation efforts are reviewed.

Beginning the Evaluation

Developing research for evaluation of a service or services provided in SBHCs can be daunting. There are many good resources, however, such as those listed in the Introduction to this special issue, for both novice and more experienced evaluators. To begin, a research team should be selected, as the evaluation typically requires input from several researchers. The team should be composed of key stakeholders (e.g., clinicians, parents, students) as well as experienced researchers. It also is important to ensure that extra funding and time (e.g., time for data collection, entry, and analyses) is set aside for the evaluation. Organizations deciding to engage in this process must be aware that it takes commitment, time, effort, and expertise; most SBHC programs are comprehensive and multifaceted, and therefore, typically represent a challenge in terms of defining and measuring key outcomes.

Correspondence to: Laura Nabors, Department of Psychology, Mail Location 376, Room 429 Dyer Hall, University of Cincinnati, Cincinnati, OH 45221–0376. E-mail: [email protected]

Psychology in the Schools, Vol. 40(3), 2003 © 2003 Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/pits.10090

At the beginning of the evaluation process it is important to determine key research questions. Research questions may vary widely between SBHCs, as significant variability exists among the process of care and services delivered in different centers. It may be useful to develop some general questions for evaluating SBHC services so that data from outcome studies may be compared across sites. Examples of areas for general questions include: (1) service utilization, (2) service costs, (3) consumer satisfaction, and (4) outcomes of care for youth receiving services (e.g., change in health status or knowledge, improved functioning in school). Specific research questions also can be designed to examine services provided by individual programs within or across SBHCs. For instance, if prevention services are offered, key evaluation questions may include whether these services improve health care knowledge and reduce risk-taking behaviors for youth receiving services. If the goal is to examine the effectiveness of school-based mental health services, evaluation questions may focus on examining changes in behavioral or emotional functioning or changes in family functioning for children who have been participating in treatment.
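The first of these general question areas, service utilization, can be made concrete with simple record counts. The following sketch summarizes visits by service area and computes the share of enrollees who used any service; the visit records, service labels, and enrollment count are invented for illustration and are not drawn from the article.

```python
from collections import Counter

# Hypothetical visit records: (student_id, service_type) pairs.
visits = [
    (1, "primary care"), (1, "mental health"), (2, "primary care"),
    (3, "prevention"), (3, "prevention"), (3, "mental health"),
    (4, "primary care"),
]
enrolled_students = 10  # students enrolled in the SBHC (made up)

def utilization_summary(visits, enrolled):
    """Visits per service area and share of enrollees with >= 1 visit."""
    by_service = Counter(service for _, service in visits)
    users = {student for student, _ in visits}
    return by_service, len(users) / enrolled

by_service, reach = utilization_summary(visits, enrolled_students)
```

Summaries of this kind answer the descriptive half of the utilization question (who uses what, and how often) before any effectiveness question is posed.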

Table 1 presents several types of questions that can be used to guide evaluation efforts in SBHCs. The variety of questions that can be asked and the many service areas (e.g., health care, prevention, mental health) that need to be evaluated often make conducting evaluations difficult for busy centers, especially where a primary goal is increasing access to services for a large number of underserved youth. To reduce the overwhelming nature of the evaluation process it may be advisable for evaluation teams to set short-term, intermediate, and long-term research goals. Another way to reduce the scope of the evaluation process is to examine different aspects of services in a series of studies.

Table 1
Questions to Consider When Conducting Research in Schools

What factors should be considered as the research project is developed?
• Negotiating with schools and stakeholders to gain access to the schools.
• Different outcomes are of interest to different stakeholders—how should key outcome indicators be selected?
• What Human Subjects Review Boards should the project be submitted to (school, university, health department)?
• What is an optimal way to communicate the purpose of the research and steps in the research design to stakeholders?
• Are there ethical concerns related to using a control or comparison group—should children wait for needed services?

If the evaluation design is changed, and becomes less rigorous, will the results continue to be of use for the field?
• What measures can be used?
• Is it possible to conduct behavior observations?

What issues are involved in obtaining parental consent?
• What is the best method for distributing informed consent forms? Will the teachers or center staff be of assistance in this process? Should teachers and staff be compensated for helping with the recruitment process?
• Should incentives be used to encourage youth to bring back consents? If an incentive will be used, what type should be used (money, candy)?

When are the children free to participate in the research (e.g., during recess, study hall)?

Where will outcome assessments occur (at the SBHC, in the classroom, in a separate office)?

If SBHC staff or teachers need to complete measures, how does one motivate them to complete the measures and how should they be compensated for their efforts?

Which types of biases may threaten the internal or external validity of the project?
• Will there be differences between students who participate in the evaluation and those who do not (selection bias)?
• Will filling out surveys or questionnaires at different outcome points (intake, 6 months later) impact student responses (e.g., what about practice effects)?
• Will student responses be influenced in a positive direction, simply because they realize they are participating in a research project (Hawthorne Effect)?
• If multiple measures will be used, will positive responses on one measure influence the student to also respond positively to other measures?
• What are the best processes to guard against selective subject loss and attrition?
• What types of confounding factors might increase key outcomes (e.g., family socioeconomic status, level of neighborhood violence)?

How will the findings impact key components for the SBHC (e.g., funding, stakeholder interest groups, public policy)?

Selection of outcome indicators also should be considered at the beginning of the evaluation process. One way to conceptualize these indicators is to consider them in terms of quantitative (e.g., improved nutrition resulting from free lunch programs; changes in health status as a result of treatment) and qualitative indicators (e.g., student perceptions of SBHC services). Functional indicators, which focus on academic achievement or school success, provide real-world data for school administrators and legislators. Examples of functional indicators include school attendance, grades, retention rates, graduation and dropout rates, legal records, and discipline records (Rosenblatt & Attkisson, 1993). Changes in children’s social, emotional, or behavioral functioning, and change in family functioning are other potential outcome indicators.
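Functional indicators such as attendance lend themselves to straightforward pre/post comparisons. The sketch below computes each student’s change in attendance rate before versus after receiving services; the records and field names are hypothetical, and a real analysis would also need a comparison condition to rule out maturation or cohort effects.

```python
# Illustrative pre/post functional-indicator comparison (invented data).
# Each record: (days attended, days enrolled) before and after services.
records = [
    {"pre": (80, 100), "post": (90, 100)},
    {"pre": (70, 100), "post": (85, 100)},
    {"pre": (95, 100), "post": (93, 100)},
]

def attendance_change(rec):
    """Change in attendance rate from the pre period to the post period."""
    pre_days, pre_total = rec["pre"]
    post_days, post_total = rec["post"]
    return post_days / post_total - pre_days / pre_total

changes = [attendance_change(r) for r in records]
mean_change = sum(changes) / len(changes)
```

The same pattern applies to grades, discipline records, or any other functional indicator that can be expressed as a rate or score per observation period.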

Avedis Donabedian (1980) provided a framework for evaluating the myriad of outcome indicators related to the variety of services provided by different SBHCs. He divided the evaluation process into three parts—research examining program structure, treatment process, and treatment outcomes. The following section reviews information about evaluating these three types of indicators in SBHC settings.

Using Donabedian’s Quality Indicators for Evaluations in SBHCs

In this era of managed care and shrinking funding for health services, evaluating services to prove their worth is becoming popular. Finding a theory that can guide evaluation efforts can anchor the evaluation process. Donabedian (1980) suggested that three interrelated quality indicators should be assessed during the evaluation process: program structure, treatment process, and outcomes. His theory may be useful in guiding evaluation efforts for SBHCs because of its simplicity and practical division of evaluation topics.

Program Structure

The structure of an organization may be conceptualized as factors contributing to physical presentation of the program and characteristics of the service providers (Donabedian, 1980). Indicators of program structure include adequacy and quality of physical space, service utilization rates, experience and qualifications of providers, and the match between services and consumer needs (Kisker & Brown, 1996; Santelli et al., 1996; St. Leger, 2000). When examining the structure of SBHCs it may be beneficial to begin by conducting a needs assessment to collect information on key stakeholders’ (e.g., students, parents, teachers) views of services that should be provided by the SBHC. Reviewing results from surveys or focus groups designed to gather data about consumer needs can shape the direction of the program; administrators can then decide whether the evaluation team will assess the effectiveness of existing services or will examine the process involved in developing new services to address unmet needs.

Other information gained from a structural assessment can provide feedback that will make the SBHC more “user-friendly” for students. For example, student input can help improve the design of office space or the paperwork process, which may in turn improve students’ use of services and show-rates for appointments. Documenting the training and experiences of staff (e.g., nurses, counselors) can provide information about their qualifications, which can be used in proposals for additional funding and for applications to join managed care panels.


Treatment Process

This outcome area refers to the different factors that influence how treatment is delivered (Donabedian, 1980). Examples of process indicators are students’ perceptions of staff expertise and abilities, records of interactions between SBHC staff and students during clinic visits, or documentation of the steps students follow when complying with treatment recommendations made by SBHC staff. Assessing the process of care can be time-consuming for the evaluation team. It may involve interviewing staff and students or recording interactions by audiotape or videotape.

If audiotaped or videotaped records are made, research assistants must typically be hired to transcribe and/or code information. This can cause the costs associated with the evaluation to increase, often making it prohibitive for organizations to conduct evaluation activities without additional funding. To begin to gather evidence on process indicators, SBHC staff may need to seek additional funding, possibly through grants, and may need to form teams with other SBHCs on a regional or national level to begin to conduct more evaluation studies in this area. Besides involving extra costs and time, assessment of process variables can be a challenging area in terms of ethical and methodological issues (Gomby & Larson, 1992; Green, 2001). For example, it may be difficult to study relationships between health care providers and children due to ethical and privacy concerns.

Complex research or data analysis methods may be needed to understand the relationships between process indicators. To understand interactions between SBHC staff and students, many variables need to be considered. The evaluator must also consider the influence of possible mediator and moderator variables, such as the personality traits of SBHC staff and of the students receiving services. “Action” or “participatory” research, which involves key stakeholders in designing procedures and interpreting findings, and investigates for whom, how, and why services work, may be one approach for examining relations among process and outcome indicators (Hall, 1992).
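One simple way to probe a candidate moderator, short of a full interaction-term model, is to estimate the process–outcome slope separately within moderator subgroups and compare the two slopes. The sketch below does this with made-up data; the variable names, and the idea of “rapport” as a process indicator, are illustrative assumptions rather than measures named in the article.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs (covariance / variance)."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic (process, outcome) pairs: the process-outcome relation is
# steeper in the "high" moderator subgroup than in the "low" subgroup.
low = [(x, 1.0 + 0.2 * x) for x in range(10)]   # moderator "low" group
high = [(x, 1.0 + 0.6 * x) for x in range(10)]  # moderator "high" group

slope_low = slope(*zip(*low))
slope_high = slope(*zip(*high))
```

A clearly larger slope in one subgroup is the pattern a moderation hypothesis predicts; confirming it formally would require testing the interaction term in a regression model.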

Treatment Outcome

Studies focusing on how treatment affects outcome indicators (e.g., change in academic performance) are more common in comparison with research on program structure or treatment process. Table 2 presents a brief overview of outcome studies in different areas. The research methods, procedures, and outcome indicators selected for these studies may help to guide the development of future projects for SBHCs.

One area that may be of particular interest to school psychologists affiliated with SBHCs is outcomes for programs for children with chronic health conditions (see Table 2). In a special issue of Children’s Services: Social Policy, Research, and Practice, Drotar (2001) recommended that school-based programs may be optimal settings for meeting the academic, mental health, and health care needs of youth with chronic health conditions. Psychologists in schools may be optimally situated to examine the ways in which school-based services improve school reintegration, academic achievement, and social relationships for children with chronic health conditions. Additionally, studies that examine possible “medical cost offset” or savings related to providing school-based health care services are important areas for future research (Kubiszyn, 1999).

Usually “outcomes” research addresses a myriad of factors that are related to program goals, requirements for accountability data, types of services offered by the SBHC, and the interests of the evaluation team. This makes it difficult to develop general guidelines in terms of standard procedures and measures to use across different outcome studies. Therefore, careful consideration of research methods and procedures is important. Literature summarizing the results of several studies by outcome area, such as Kirby and Coyle’s (1997) review of programs to reduce sexual risk-taking behaviors or St. Leger’s (1999) review of the effectiveness of health promotion programs in elementary schools, can provide guidance for designing research projects. These reviews are important because they summarize groups of studies in a given area and present a review of important outcome indicators and suggestions for evaluating the effectiveness of services or interventions. When considering evaluation methods, it is advisable to identify several sources of data (e.g., school records, teacher report) and to identify several methods for collecting and/or analyzing data (Dryfoos, Brindis, & Kaplan, 1996). It is important for the evaluation team to consult with experts in the field and to be familiar with resources to improve research methods and designs (e.g., Drotar, 2000; Neuman, 2000) to maximize assistance in designing evaluations and analyzing and interpreting findings.

Table 2
Examples of Evaluation Research in Schools

Service Utilization
• Kisker & Brown (1996). “Do school-based health centers improve adolescents’ access to health care, health status, and risk-taking behavior?” Evaluated outcomes for students receiving care in 19 different SBHCs. Information on utilization, knowledge of health information, health status, and risk-taking behaviors was presented.
• Crespo & Shaler (2000). “Assessment of school-based health centers in a rural state: The West Virginia experience.” Evaluated outcomes for 10 different SBHCs. Data on rates of enrollment and service utilization were assessed for youth with various diagnoses, and study results were compared with data on national norms.

Consumer Satisfaction
• Weiler & Pigg (2000). “An evaluation of client satisfaction with training programs and technical assistance provided by Florida’s Coordinated School Health Program Office (CSHPO).” Results of a satisfaction survey for individuals attending training sessions or requesting technical assistance from Florida’s CSHPO. Over two-thirds of the respondents were satisfied with services. Findings provided direction for program development.
• Nabors, Weist, Reynolds, Tashman, & Jackson (1999). “Adolescent satisfaction with school-based mental health services.” Description of adolescents’ reasons for and level of satisfaction with services provided in a school mental health program. Adolescents also identified areas for improving service delivery.

Prevention
• Manios, Kafatos, & Mumalakis (1998). “The effects of a health education intervention initiated at first grade over a 3-year period: Physical activity and fitness indices.” Reviews a 3-year prevention program for improving the physical health of elementary school age children. Measured change in children’s physical skills, health knowledge of children and parents, and time children spent in physical activity.
• Schonfeld, Bases, Quackenbush, Mayne, Marion, & Cicchetti (2001). “Pilot-testing a cancer education curriculum for grades K–6.” Focus of program was to educate children about the link between risk behaviors and the development of cancer in adulthood. Children’s conceptual and factual understanding of educational material was assessed using semistructured interviews.

Mental Health
• Catron & Weiss (1994). “The Vanderbilt school-based therapy program: An interagency, primary-care model of mental health services.” Evaluation of a school-based therapy program. Collaboration with other providers and preventive interventions were important activities. Service utilization and access to care were increased with on-site services.
• Nabors & Reynolds (2000). “Program evaluation activities: Outcomes related to treatment for adolescents receiving school-based mental health services.” Information about change in behavioral and emotional functioning, satisfaction with services, and study retention for middle and high school students.

Support Services
• Williams & Sadler (2001). “Effects of an urban high school-based child care center on self-selected adolescent parents and their children.” Program to reduce problems associated with adolescent pregnancy and parenting. Outcome indicators included grades, promotion, high school graduation rates, repeat births, and child health outcome.
• Eber & Nelson (1997). “School-based wraparound planning: Integrating services for students with emotional and behavioral needs.” Examined outcomes related to Project WRAP, which was designed to be a resource coordinating system for students, involving school-based services.

Children with Chronic Health Problems
• Rynard, Chambers, Klinck, & Gray (1998). “School support programs for chronically ill children: Evaluating the adjustment of children with cancer at school.” Outcome data on child adjustment, absenteeism, and achievement, as well as teacher perceptions of the usefulness of the program, are presented.
• Worchel-Prevatt et al. (1998). “A school reentry program for chronically ill children.” Description of program implementation and outcomes (e.g., improved achievement) for a program for children with chronic illnesses.

Despite careful planning and significant brainstorming with experts, evaluators may face significant challenges that make it difficult to conduct research in schools. In the following section of this article, other barriers to conducting evaluation research in SBHCs, and ideas for overcoming these challenges, are discussed.

Table 3
Challenges Related to Conducting Evaluation Research in SBHCs

Involving Stakeholders
• Price et al. (1999). Survey used to examine nurses’ perceptions of research. Improving knowledge or health care efforts were reasons to become involved in research.
• Olds & Symons (1990). Strategies for gaining access to schools to conduct research, and for maintaining support and interest in research.

Recruitment
• Harrington et al. (1997). Multilevel research design for enhancing recruitment efforts was implemented. Procedure was in place for consents that were not returned quickly. Discussion of involving teachers and use of incentives to improve recruitment efforts.

Obtaining Consent
• Ross, Sandberg, & Flint (1999). Reviewed different types of consents used in schools and presented 20 tips for increasing study participation.
• Belzer, McIntyre, Simpson, Officer, & Stadey (1993). Reviewed methods for increasing consent for a health promotion study conducted in elementary schools.

Retention
• Mills, Pederson, Koval, Gushue, & Aubut (2000). Discussed methods of data collection that might improve student participation in the study and retention. Described procedures for obtaining data, getting missing data, and tracking subjects.

Selecting Measures
• Lamp, Price, & Desmond (1989). Reviewed criteria to be aware of when selecting measures. Discussed the importance of ensuring that measures used in research have adequate psychometric properties.

Clinical Significance
• Cook & Walberg (1985). Discussed the major findings and implications of the School Health Education Evaluation Project. Presented ideas for successfully conducting studies and for obtaining results that are clinically important.
• Torabi (1986). Presented information on determining the clinical significance of outcomes in health research for children.


Overcoming Barriers to Evaluation Efforts

Many barriers exist to conducting successful research in schools, and evaluators often are challenged to find innovative ways to conquer these obstacles. Table 3 presents resources that address resolving common problems encountered when evaluating SBHC services. These include involving key stakeholders in the evaluation, recruiting participants, obtaining consent, and selecting appropriate measures (Robinson, Ruch-Ross, Watkins-Ferrell, & Lightfoot, 1993). In a noteworthy article, Dryfoos and colleagues (1996) provided a good overview of challenges faced by evaluators who conduct research in schools. Strategies for overcoming these challenges are presented in the following section.

Access and Recruitment

It is often difficult to gain access to students and families as research participants. This may occur because many SBHCs are operated by community-based or university-based organizations and therefore are considered “outsiders” by key personnel involved in approving research. Hence, permission from many stakeholders, such as university, health department, and school-based human subjects committees, may be needed before the evaluation process can begin (Dryfoos et al., 1996; Olds & Symons, 1990). Perseverance and a positive attitude are attributes that help the researcher move through the steps for gaining permission to conduct research in schools.

Parental Consent

Many parents may be suspicious of researchers’ intentions or may be too busy to read long consent forms. Hence, another major hurdle may be whether enough consent forms are returned to conduct a meaningful study (Belzer et al., 1993; Dryfoos et al., 1996; Ross et al., 1999). Recruitment problems can occur for other reasons, including resistance from students, parents, or SBHC staff and teachers. Any of these groups may feel that the research project is unnecessary or that the measures require the students to provide information that is “too personal.” SBHC staff may be especially susceptible to the belief that evaluation measures will reveal too much private information about students, especially in sensitive areas like reproductive or mental health services.
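Tracking consent-form returns by classroom makes it easy to see where follow-up reminders or teacher incentives might help most. A minimal sketch, using an invented form log (classroom names and outcomes are illustrative only):

```python
from collections import defaultdict

# Hypothetical consent-form log: (classroom, was the form returned?).
forms = [
    ("Room 101", True), ("Room 101", False), ("Room 101", True),
    ("Room 102", False), ("Room 102", False), ("Room 102", True),
]

def return_rates(forms):
    """Fraction of distributed consent forms returned, per classroom."""
    sent = defaultdict(int)
    returned = defaultdict(int)
    for room, came_back in forms:
        sent[room] += 1
        if came_back:
            returned[room] += 1
    return {room: returned[room] / sent[room] for room in sent}

rates = return_rates(forms)
```

Classrooms with low return rates are natural targets for the follow-up procedures discussed by Harrington et al. (1997).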

One way to overcome this problem may be to form an advisory board for the research—composed of researchers, students, parents, teachers, and SBHC providers. This group can then select the measures that will be used in evaluation studies. This allows opinions from several key groups to influence the selection of outcome measures, and the collective opinions of the advisory board may result in the selection of more acceptable measures. After selecting outcome measures, members of the advisory board may act as “lobbyists” and explain why measures were selected and how they will be useful in gathering information about SBHC services. Price and colleagues offer suggestions for involving key stakeholders in evaluation efforts (Price, Telljohann, & King, 1999).

Attrition

Loss of study participants, or attrition, often plagues school-based research projects. Youth receiving SBHC services are often from low-income families residing in urban areas. Although these youth have the most need for health care services, this group may be difficult to track in longitudinal research projects as they may be likely to drop out of the research project, treatment, and/or school (Nabors & Reynolds, 2000). If study attrition is significant, evaluation findings may only demonstrate how services work for high-functioning youth because the students who experienced the greatest need for services may have discontinued treatment. When plans for coping with attrition are included in research methods, evaluators may greatly improve their chances for successful completion of the project. Several researchers have presented helpful reviews of strategies for reducing study attrition (Kazdin, Mazurick, & Bass, 1993; Mowbray, Bybee, Collins, & Levine, 1998; Ribisl, Walton, Mowbray, Luke, Davidson, & Boots-Miller, 1996; Robinson et al., 1993).

A related issue is finding methods for learning about the impact of services for children who drop out of school prematurely or use services on an infrequent basis (Dryfoos et al., 1996). Most studies provide information about students who frequently seek services, but more information is needed about outcomes for youth who visit the SBHC only a few times per year. Developing strategies to track and include these youth who are infrequent consumers of SBHC services is an important direction for future research efforts (Ribisl et al., 1996; Werthamer-Larsson, 1994).

Comparison Groups

Quasi-experimental research, using a comparison group or groups, is a common practice for evaluations conducted in community-based settings. Typically it is challenging to find a comparison group, and this method is not as sound as the selection of a randomized control group (Dryfoos et al., 1996). Thus, whenever possible, random assignment to treatment and control groups is preferred. However, comparison groups are commonly used when research questions focus on comparing the progress of students who receive services to those who do not. As mentioned, SBHC staff, parents, and others may consider it immoral or unethical to withhold treatment from students, which may necessitate formation of a wait-list control group for research purposes. If this occurs, the evaluator should consider using a comparison group if the effectiveness of services remains an important research question. It may also be difficult to recruit a suitable comparison group for quasi-experimental designs. One approach is to compare students who use services to those who do not (Nabors & Reynolds, 2000; Weist, Proescher, Friedman, Paskewitz, & Flaherty, 1995). Unfortunately, students who use services may be different (e.g., exposed to more risk factors) than those who do not seek services, making the two groups different from the outset of the study (i.e., before treatment is administered). Consequently, when designing studies, “. . . evaluation efforts should consider longitudinal cohort designs . . . and randomized designs where possible and appropriate” (Santelli et al., 1996, p. 363). If this is not possible, then evaluators should use qualitative methods to document stakeholder perceptions (e.g., perceptions of students, parents, teachers) about the effectiveness of services.
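Before interpreting a quasi-experimental contrast, the baseline difference between service users and the comparison group can be quantified, for instance with a standardized mean difference (Cohen’s d). The scores below are invented; a |d| as large as the one produced here would be a red flag that the groups were not equivalent before treatment.

```python
import math

def standardized_mean_difference(a, b):
    """Cohen's d with a pooled standard deviation; a large |d| at
    baseline signals the groups differed before treatment began."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled = math.sqrt(
        ((len(a) - 1) * va + (len(b) - 1) * vb) / (len(a) + len(b) - 2)
    )
    return (ma - mb) / pooled

# Hypothetical baseline risk scores: service users vs. non-users.
users = [4, 5, 6, 5, 7]
nonusers = [3, 4, 3, 2, 4]
d = standardized_mean_difference(users, nonusers)
```

When baseline differences of this size appear, unadjusted outcome comparisons will confound service effects with pre-existing group differences.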

Small Samples

If sample sizes are small, insufficient power to detect significant results may influence findings (Dryfoos et al., 1996). If this occurs, the findings may be misleading and not representative of how interventions influence outcomes. Large-scale outcome studies across several sites, using multilevel modeling and data analysis techniques, may improve the quality of research and yield more valid findings (see Bock, 1989; Raudenbush & Willms, 1991). Moreover, multisite evaluation projects will allow for comparison of different programs in different regions of a state, country, or countries. Having opportunities to compare and contrast the benefits of different programs in different areas will help guide the formulation of best practice guidelines and standards of care in the field.
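As a rough planning aid, the power of a simple two-group comparison can be approximated with the normal approximation to the two-sample t-test. The effect size and per-group sample sizes below are hypothetical planning values, not figures from any actual SBHC evaluation.

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of standardized
    effect size d, via the normal approximation to the t-test."""
    norm = NormalDist()
    se = (2.0 / n_per_group) ** 0.5        # SE of the standardized mean difference
    z_crit = norm.inv_cdf(1 - alpha / 2)   # two-sided critical value
    return 1 - norm.cdf(z_crit - d / se)   # P(statistic exceeds the critical value)

# A "medium" effect (d = 0.5) with only 20 students per group,
# versus the same effect with 100 students per group:
power_small = approx_power(0.5, 20)
power_large = approx_power(0.5, 100)
print(f"n=20 per group: power ~ {power_small:.2f}; "
      f"n=100 per group: power ~ {power_large:.2f}")
```

With 20 students per group, the chance of detecting a medium effect is only about one in three, so a null finding says little about program effectiveness; pooling data across several sites raises power substantially, which is one practical argument for the multisite designs discussed above.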

Multiple Interventions

It may be difficult to determine what interventions influenced outcomes for students participating in SBHC services (Dryfoos et al., 1996). To illustrate, it is important to consider that at-risk youth receiving SBHC services may be participating in several other concurrent interventions (e.g., several SBHC programs, free lunch programs, social service, and mentoring programs). If the students receiving SBHC services also participate in several social service programs, then a host of factors related to the different interventions could be influencing outcomes that are mistakenly attributed to SBHC services (Mercier, Fournier, & Peladeau, 1992). Therefore, when children participate in many service programs it becomes ". . . difficult to sort out what intervention is affecting which students" (Dryfoos et al., 1996, p. 217). Evaluators must consider this issue when designing studies and analyzing outcome data (e.g., consider using correlational techniques like path analysis). On a positive note, if children are participating in several programs, evaluators from these programs may wish to form evaluation teams that can improve research questions and increase resources for evaluators.
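One simple way to see the attribution problem is to stratify outcomes by participation in a co-occurring program. The programs, participation rates, and effect sizes in this sketch are entirely hypothetical, chosen only to illustrate the confound.

```python
import random

random.seed(1)

# Hypothetical simulation: students may receive SBHC services and/or a
# mentoring program, the two tend to co-occur, and each has its own
# effect on an outcome score.  All effect sizes are invented.
def simulate(n=4000):
    rows = []
    for _ in range(n):
        sbhc = random.random() < 0.5
        mentoring = random.random() < (0.7 if sbhc else 0.3)  # programs co-occur
        outcome = 50 + 3 * sbhc + 5 * mentoring + random.gauss(0, 4)
        rows.append((sbhc, mentoring, outcome))
    return rows

def mean_outcome(rows, sbhc, mentoring=None):
    vals = [y for s, m, y in rows
            if s == sbhc and (mentoring is None or m == mentoring)]
    return sum(vals) / len(vals)

rows = simulate()
# Naive contrast ignores the mentoring program; the stratified contrast
# compares SBHC users to nonusers among mentored students only.
naive = mean_outcome(rows, True) - mean_outcome(rows, False)
adjusted = mean_outcome(rows, True, True) - mean_outcome(rows, False, True)
print(f"naive SBHC effect: {naive:.1f}; "
      f"SBHC effect among mentored students: {adjusted:.1f}")
```

The naive users-versus-nonusers contrast credits the SBHC with part of the mentoring program's effect; stratifying (or, more generally, the correlational and path-analytic techniques mentioned above) is one way to separate the overlapping programs.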

Finding Measures

Another barrier to collecting data for evaluation research on children's health services is that only a relatively limited pool of measures is available. Psychologists working in schools who have experience with measurement development and with the special issues related to school-based practice are well suited for the task of selecting appropriate measures for evaluation projects or for developing improved measures for use in SBHCs. Using a measure that examines student perceptions of their functioning in several domains (e.g., self-esteem, health status) may provide a more comprehensive picture of change in student functioning over time. One good example of a general measure for adolescents is the Child Health and Illness Profile-Adolescent Edition (CHIP-AE; Starfield et al., 1993, 1995). The CHIP-AE assesses student functioning in many areas (e.g., satisfaction with health, perceptions of health behaviors and resilience factors).

Unfortunately, many measures in this area are relatively "underdeveloped," and more information on their psychometric properties and their applicability for use with students receiving health care at SBHCs is needed. Rodrigue, Geffken, and Streisand's comprehensive text Child Health Assessment: A Handbook of Measurement Techniques (2000) has many useful measures for researchers interested in examining outcomes related to health status indicators for children and families. Weist and colleagues (Weist, Nabors, Myers, & Armbruster, 2000) reviewed several measures for examining changes in children's behavioral, emotional, and social functioning that may be useful for psychologists interested in undertaking outcomes research in schools.

Because of the variety of activities conducted by different SBHCs (e.g., various prevention and intervention programs), it may always be challenging to develop core methods or common measures that could be used to examine outcomes across several SBHCs. It is important to consider the special aspects of school-based care, such as ethical considerations or the importance of privacy for students seeking treatment, when designing evaluation projects. If more SBHCs conduct evaluations, it will become possible to develop a body of knowledge from which to draw ideas for "best practice." This will allow SBHCs to acquire outcome data and develop standards of care to apply for managed care panels and provide important accountability data to sustain or expand existing programs as well as promote the growth of new ones.

Summary

An increasing body of research demonstrates that community-based health care programs such as School-Based Health Centers (SBHCs) are an effective mode of service delivery. SBHCs may be instrumental in providing health care services for adolescents and youth from very low-income families who may otherwise lack access to basic health care services (Dryfoos, 1996; Federal Interagency Forum on Child and Family Statistics, 2001; Kisker & Brown, 1996). Evaluation of these services is needed to provide documentation that children's health care needs can be met in schools, enabling more children to have access to health care services that promote their development (Kubisyn, 1999).

Conceptualizing the SBHC as a primary care center will allow SBHCs to apply for reimbursement for services in ways similar to those of other community- and hospital-based health programs for children. Furthermore, when SBHCs are considered primary care services, it will become possible to compare health services research conducted across SBHCs and other primary care settings, which will add to the literature and knowledge base in the field. Consequently, some key outcome indicators for SBHCs may include: (1) service utilization and accessibility; (2) effectiveness of case management or case coordination activities; (3) cost-effectiveness of services; (4) change in functional outcomes (e.g., grades, absenteeism, drop-out rate); (5) change in behavioral, social, or emotional functioning; (6) change in health status or risk-taking behaviors; (7) consumer (e.g., student, parent) satisfaction; (8) consumer perceptions of the cultural appropriateness of care; (9) change in children's health care knowledge as a result of participation in prevention activities; and (10) relationships between treatment process and outcomes for youth (Gomby & Larson, 1992; Kisker & Brown, 1996; Kubisyn, 1999; Santelli et al., 1996; St. Leger, 2000). As more SBHCs are developed, evaluation teams should continue to be organized to examine the effectiveness of different services and interventions, to document the process and outcomes of care, and to improve the quality of care in this unique health care setting.

References

Belzer, E.G., Jr., McIntyre, L., Simpson, C., Officer, S., & Stadey, N. (1993). A method to increase informed consent in school health research. Journal of School Health, 63, 316–317.

Bock, R.D. (1989). Multilevel analysis of educational data. San Diego, CA: Academic Press.

Catron, T., & Weiss, B. (1994). The Vanderbilt school-based therapy program: An interagency, primary-care model of mental health services. Journal of Emotional and Behavioral Disorders, 2, 247–253.

Center for Mental Health in Schools at UCLA. (2000). An introductory packet on evaluation and accountability: Getting credit for all you do. Los Angeles, CA: Author.

Cook, T.D., & Walberg, H.J. (1985). Methodological and substantive significance. Journal of School Health, 55, 340–342.

Crespo, R.D., & Shaler, G.A. (2000). Assessment of school-based health centers in a rural state: The West Virginia experience. Journal of Adolescent Health, 26, 187–193.

Donabedian, A. (1980). The definitions of quality and approaches to its assessment. Ann Arbor, MI: Health Administration Press.

Drotar, D. (2000). Handbook of research in pediatric and clinical child psychology: Practical strategies and methods. New York: Kluwer Academic/Plenum Publishers.

Drotar, D. (2001). Promoting comprehensive care for children with chronic health conditions and their families: Introduction to the special issue. Children’s Services: Social Policy, Research, and Practice, 4, 157–163.

Dryfoos, J.G. (1995). Full service schools: Revolution or fad? Journal of Research on Adolescence, 5, 147–172.

Dryfoos, J.G. (1996). Adolescents at risk: Shaping programs to fit the need. The Journal of Negro Education, 65, 5–18.

Dryfoos, J.G., Brindis, C., & Kaplan, D.W. (1996). Research and evaluation in school-based health care. In L. Juszczak & M. Fisher (Eds.), Adolescent medicine: State of the art reviews (pp. 207–220). Philadelphia, PA: Hanley & Belfus.

Eber, L., & Nelson, M. (1997). School-based wraparound planning: Integrating services for students with emotional and behavioral needs. American Journal of Orthopsychiatry, 67, 385–395.

Federal Interagency Forum on Child and Family Statistics. (2001). America’s children: Key national indicators of well-being. Washington, DC: U.S. Government Printing Office.

Gomby, D.S., & Larson, C.S. (1992). Evaluation of school-linked services. The future of children (pp. 68–84). Los Altos, CA: Center for the Future of Children, David and Lucile Packard Foundation.

Green, R.S. (2001). Improving service quality by linking processes to outcomes. In M. Hernandez & S. Hodges (Eds.), Developing outcome strategies in children’s mental health. Baltimore, MD: Paul H. Brookes Publishing.

Hall, B. (1992). From margins to center? The development and purpose of participatory research. American Sociologist, 23, 15–29.

Harrington, K.F., Binkley, D., Reynolds, K.D., Duvall, R.C., Copeland, J.R., Franklin, F., & Raczynski, J. (1997). Recruitment issues in school-based research: Lessons learned from the High 5 Alabama Project. Journal of School Health, 67, 415–421.


Kazdin, A.E., Mazurick, J.L., & Bass, D. (1993). Risk for attrition in treatment of antisocial children and families. Journal of Clinical Child Psychology, 22, 2–16.

Kirby, D., & Coyle, K. (1997). School-based programs to reduce sexual risk-taking behavior. Children and Youth Services Review, 19, 415–436.

Kisker, E.E., & Brown, R.S. (1996). Do school-based health centers improve adolescents’ access to health care, health status, and risk-taking behavior? Journal of Adolescent Health, 18, 335–343.

Kubisyn, T. (1999). Integrating health and mental health services in schools: Psychologists collaborating with primary care providers. Clinical Psychology Review, 19, 179–198.

Lamp, E., Price, J.H., & Desmond, S.M. (1989). Instrument validity and reliability in three health education journals, 1980–1987. Journal of School Health, 59, 105–108.

Manios, Y., Kafatos, A., & Mumalakis, G. (1998). The effects of a health education intervention initiated at first grade over a 3-year period: Physical activity and fitness indices. Health Education Research, Theory, & Practice, 13, 593–606.

Mercier, C., Fournier, L., & Peladeau, N. (1992). Program evaluation of services for the homeless: Challenges and strategies. Evaluation and Program Planning, 15, 417–426.

Mills, C.A., Pederson, L.L., Koval, J.J., Gushue, S.M., & Aubut, J.L. (2000). Longitudinal tracking and retention in a school-based study on adolescent smoking: Costs, variables, and smoking status. Journal of School Health, 70, 107–112.

Mowbray, C.T., Bybee, D., Collins, M.E., & Levine, P. (1998). Optimizing evaluation quality and utility under resource constraints. Evaluation and Program Planning, 21, 59–71.

Nabors, L.A., & Reynolds, M.W. (2000). Program evaluation activities: Outcomes related to treatment for adolescents receiving school-based mental health services. Children’s Services: Social Policy, Research, and Practice, 3, 175–189.

Nabors, L., Weist, M., Reynolds, M., Tashman, N., & Jackson, C. (1999). Adolescent satisfaction with school-based mental health services. Journal of Child and Family Studies, 8, 229–236.

Neuman, L. (2000). Social research methods: Quantitative and qualitative approaches. Boston: Allyn & Bacon.

Olds, R.S., & Symons, C.W. (1990). Recommendations for obtaining cooperation to conduct school-based research. Journal of School Health, 60, 96–98.

Price, J.H., Telljohann, S.K., & King, K.A. (1999). School nurse’s perceptions of and experience with school health research. Journal of School Health, 69, 58–62.

Raudenbush, S.W., & Willms, J.D. (1991). Schools, classrooms, and pupils: International studies of schooling from a multilevel perspective. San Diego, CA: Academic Press.

Ribisl, K.M., Walton, M.A., Mowbray, C.T., Luke, D.A., Davidson, W.S., & Boots-Miller, B.J. (1996). Minimizing participant attrition in panel studies through the use of effective retention and tracking strategies: Review and recommendations. Evaluation and Program Planning, 19, 1–25.

Robinson, W.L., Ruch-Ross, H.S., Watkins-Ferrell, P., & Lightfoot, S. (1993). Risk behavior in adolescence: Methodological challenges and school-based research. School Psychology Quarterly, 8, 241–254.

Rodrigue, J.R., Geffken, G.R., & Streisand, R.M. (2000). Child health assessment: A handbook of measurement techniques. Boston, MA: Allyn & Bacon.

Rosenblatt, A., & Attkisson, C. (1993). Assessing outcomes for sufferers of severe mental disorder: A conceptual framework and review. Evaluation and Program Planning, 16, 347–363.

Ross, J.G., Sundberg, E.C., & Flint, K.H. (1999). Informed consent in school health research: Why, how, and making it easy. Journal of School Health, 69, 171–176.

Rynard, D.W., Chambers, A., Klinck, A.M., & Gray, J.D. (1998). School support programs for chronically ill children: Evaluating the adjustment of children with cancer at school. Children’s Health Care, 27, 31–46.

Santelli, J., Morreale, M., Wigton, A., & Grason, H. (1996). School health centers and primary care for adolescents: A review of the literature. Journal of Adolescent Health, 18, 357–366.

Schonfeld, D.J., Bases, H., Quackenbush, M., Mayne, S., Marion, M., & Cicchetti, D. (2001). Pilot-testing a cancer education curriculum for grades K-6. Journal of School Health, 71, 61–72.

Starfield, B., Bergner, M., Ensminger, M., Riley, A., Ryan, S., Green, B., & Kim, S. (1993). Adolescent health status measurement: The development of the CHIP. Pediatrics, 91, 430–435.

Starfield, B., Riley, A.W., Green, B.F., Ensminger, M.E., Ryan, S.A., Kelleher, K., Kim-Harris, S., Johnston, D., & Vogel, K. (1995). The adolescent health and illness profile: A population-based measure of health. Medical Care, 33, 553–566.

St. Leger, L. (1999). The opportunities and effectiveness of the health promoting primary school in improving child health—A review of the claims and evidence. Health Education Research, Theory, & Practice, 14, 51–69.

St. Leger, L. (2000). Developing indicators to enhance school health. Health Education Research, Theory, & Practice, 15, 719–728.

Torabi, M.R. (1986). How to estimate practical significance in health education research. Journal of School Health, 56, 232–234.


Weiler, R.M., & Pigg, R.M. (2000). An evaluation of client satisfaction with training programs and technical assistance provided by Florida’s Coordinated School Health Program Office. Journal of School Health, 70, 361–367.

Weist, M.D., Nabors, L.A., Myers, C.P., & Armbruster, P. (2000). Evaluation of expanded school mental health programs. Community Mental Health Journal, 36, 395–411.

Weist, M.D., Proescher, E.L., Friedman, A.H., Paskewitz, D., & Flaherty, L. (1995). School-based health services for urban adolescents: Psychosocial characteristics of clinic users and nonusers. Journal of Youth and Adolescence, 24, 251–265.

Werthamer-Larrson, L. (1994). Methodological issues in school-based services research. Journal of Clinical Child Psychology, 23, 121–132.

Williams, E.G., & Sadler, L.S. (2001). Effects of an urban high school-based child care center on self-selected adolescent parents and their children. Journal of School Health, 71, 47–52.

Worchel-Prevatt, F.F., Heffer, R.W., Prevatt, B.C., Miner, J., Young-Saleme, T., Horgan, D., Lopez, M., Rae, W.A., & Frankel, L. (1998). A school reentry program for chronically ill children. Journal of School Psychology, 36, 261–279.
