Evaluation of Math-Science Partnership Projects
(or how to find out if you’re really getting your money’s worth)
Why Should States Require Good Evaluations of MSP Projects?
To determine if the project’s objectives contribute to State education goals
To find out how activities are implemented during the year
To monitor the project’s progress toward achieving its objectives
To determine if the project ultimately reaches its objectives (and if not, why not)
A good local evaluation can ...
Provide evidence that is directly relevant to the district’s students, teachers, and schools
Provide immediate feedback to improve on-going projects
Provide information for making informed decisions about allocating local resources
Developing the State RFP
(Or, how to ask for something so that you get what you want)
What do you want to see in a good evaluation?
Clear objectives with measures that directly assess the targets of each objective
Documentation of program implementation and progress
An evaluation design that can clearly show whether program activities themselves are the cause of any changes in target outcomes
Teacher-Focused Objectives
Increase the number of mathematics and science teachers who participate in content-based professional development activities
Increase teachers’ content knowledge in mathematics or science
Student-Focused Objectives
Improve student academic achievement on the state mathematics and science assessments
Measuring Progress
For each objective, there should be at least one measure (or indicator) that directly assesses the objective’s target outcome
Measuring Progress: Example
Course-specific content test – YES
Teacher certification math content test – YES (but not a math pedagogy test)
Teacher self-report of learning or course satisfaction – NO
To measure an increase in teachers’ math content knowledge, there must be a direct measure of teachers’ math content knowledge.
Measuring Progress: Example
State mathematics achievement test – YES
Student self-report of learning or interest in mathematics – NO
To measure improvement in students’ mathematics achievement, there must be a direct measure of students’ mathematics achievement.
Documenting the Program’s Implementation and Progress
Who are the participants?
Were activities carried out as planned and on what timeline?
If problems were noted, how were they corrected?
Do early data show progress toward the expected outcomes?
How do you determine whether the project activities themselves actually produce changes in the target outcomes?
(Where’s the beef?)
Evaluation Design
Baseline data are essential
A comparison group is important
Random assignment is the only sure method for determining program effectiveness
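The interplay of these three elements can be seen in a toy difference-in-differences calculation. All of the scores below are hypothetical and purely illustrative: baseline data reveal each group's starting point, and the comparison group reveals the trend that would have occurred anyway.

```python
# Why baseline data and a comparison group matter: a toy
# difference-in-differences calculation with HYPOTHETICAL scores.

program_pre, program_post = 55.0, 63.0        # program group means
comparison_pre, comparison_post = 54.0, 59.0  # comparison group means

naive_gain = program_post - program_pre           # 8.0 points
secular_trend = comparison_post - comparison_pre  # 5.0 points
program_effect = naive_gain - secular_trend       # 3.0 points

# Without baseline data, a post-only comparison (63 vs 59) overstates
# the effect; without a comparison group, the pre/post gain (8 points)
# mixes the program effect with the general trend.
print(f"Estimated program effect: {program_effect:.1f} points")
```
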
What is random assignment?
Intervention and comparison groups are constructed by randomly assigning some teachers, schools or districts to participate in the program activities and others to not participate
Random assignment is not the same as random selection (e.g., randomly choosing 5 schools that use Curriculum X out of schools that already use Curriculum X to compare with 5 randomly chosen schools that use Curriculum Y out of schools that already use Y)
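The distinction can be sketched in a few lines of Python. The school names and the fixed seed are illustrative, not drawn from any real program:

```python
import random

# Hypothetical pool of 10 schools -- purely illustrative names.
schools = [f"School {chr(65 + i)}" for i in range(10)]

# Random ASSIGNMENT: randomly split the SAME pool into intervention
# and control groups BEFORE the program begins.
rng = random.Random(42)  # fixed seed so the sketch is reproducible
shuffled = rng.sample(schools, k=len(schools))
intervention, control = shuffled[:5], shuffled[5:]

# Random SELECTION (not the same thing) would mean sampling 5 schools
# that ALREADY use Curriculum X and 5 that already use Curriculum Y.
# Those schools chose their curricula themselves, so any difference in
# outcomes may reflect the schools, not the curriculum.

print("Intervention:", intervention)
print("Control:     ", control)
```

Because the split happens before the intervention, the two groups differ only by chance, which is what licenses a causal comparison.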
The Random Assignment Difference: The Career Academy Study
In a recent study, 73% of students voluntarily enrolling in a high school technical education program called Career Academy graduated on time.
Completion rates for students from the National Education Longitudinal Survey who followed a career technical curriculum or a general curriculum in high school were 64% and 54%, respectively.
BUT students in the Career Academy study who had been randomly assigned to the control condition graduated at the rate of 72%, not significantly different from the students in the Career Academy intervention
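As a rough illustration of how "not significantly different" is checked, here is a two-proportion z-test sketch. The 73% and 72% rates are the ones reported above, but the group sizes are hypothetical placeholders (the study's actual sample sizes are not given here), so the computed z is illustrative only:

```python
from math import sqrt

# Two-proportion z-test sketch. The graduation rates come from the
# slide (73% intervention vs 72% control); the group sizes below are
# HYPOTHETICAL placeholders, not figures from the study.
n1, n2 = 700, 700          # assumed group sizes (illustrative only)
p1, p2 = 0.73, 0.72        # reported timely-graduation rates

# Pooled proportion under the null hypothesis of no difference
p = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"z = {z:.2f}")  # |z| < 1.96: not significant at the 5% level
```
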
[Figure: bar chart of timely high school completion rates (%, axis 40–75) for the Career Academy, NELS technical curriculum, NELS general curriculum, and random-assignment control groups]
Career Academies
If not random assignment, then what?
Use a comparison group of students, schools, or districts that, prior to the implementation of the intervention, are carefully matched to the targeted population on academic achievement levels, demographics, and other characteristics thought to be relevant to the intervention (e.g., teachers’ years of classroom experience)
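Baseline matching of this kind can be sketched as a greedy one-to-one nearest-neighbor match. All school names and baseline scores below are hypothetical:

```python
# Minimal sketch of matching comparison schools to intervention schools
# on one baseline characteristic (e.g., mean achievement score).
# School names and scores are HYPOTHETICAL.

intervention = {"School A": 61.0, "School B": 74.5, "School C": 68.0}
candidates = {"School P": 59.5, "School Q": 75.0, "School R": 67.0,
              "School S": 82.0, "School T": 62.5}

matches = {}
available = dict(candidates)
for school, score in intervention.items():
    # Greedy nearest-neighbor match on baseline score (one-to-one).
    best = min(available, key=lambda c: abs(available[c] - score))
    matches[school] = best
    del available[best]  # each comparison school is used at most once

for treated, comparison in matches.items():
    print(f"{treated} (baseline {intervention[treated]}) "
          f"-> {comparison} (baseline {candidates[comparison]})")
```

In practice, matching is done on several characteristics at once (achievement, demographics, teacher experience), but the principle is the same: comparison units should look like the intervention units before the program starts.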
If not random assignment, then what?
Be sure to identify both the intervention and comparison groups and the outcome measures before the intervention is administered
Finally, be sure that the comparison group is not made up of students or schools that had the opportunity to participate in the intervention but declined.
Writing the Evaluation Component: Measures and Data Collection
Require objectives with measures (indicators) that directly relate to the objectives
Require baseline data (existing or a project administered pre-test)
Require data that document what was implemented and how the program was implemented
Writing the Evaluation Component: Evaluation Design
Require an evaluation design that can determine whether the project activities themselves produce changes in the target outcomes
Encourage use of random assignment designs
Encourage applicants to seek assistance from consultants who have experience in conducting impact evaluations of programs
Review of Plans: Are Outcomes Linked to Objectives?
Are objectives stated in measurable terms?
Is progress toward each objective measured by a specific indicator or indicators that directly relate to the objective?
Do the identified indicators cover all of the key outcomes?
Review of Plans: Will Data Be Used to Improve the Program?
Will evaluation data be collected throughout the project?
Will evaluation data be used to inform project activities?
Is the timeline for collection of evaluation data integrated with the overall project timeline?
Will the data they plan to collect provide information about various components of the project?
Review of Plans: Will the Evaluation Assess the Impact of the Program?
Does the design allow the applicant to determine that observed changes in outcomes are due to the program?
– Do they collect or use baseline data?
– Do they include a control or comparison group in their evaluation design?
– Do they use random assignment?
Review of Plans: Do Project Personnel Have Expertise in Impact Evaluations?
Do they involve an experienced evaluator, or does someone on their staff have sufficient experience in quantitative program evaluation?
Does the evaluator have a sufficient time investment to carry out the evaluation over the life of the program?
Who can help review the evaluation component of the MSP proposals?
University faculty with expertise in quantitative program evaluation
– Public policy
– Public health
– Prevention science
– Psychology
Evaluators with expertise and experience with random assignment evaluations