
STRATEGIES FOR COURSE REDESIGN EVALUATION

Laura M. Stapleton
Human Development and Quantitative Methodology
University of Maryland, College Park
Lstaplet@umd.edu

Presentation Outline

• Goal of evaluation

• A proposed framework for evaluation

• Experimental design considerations

• Examples of what (and what not) to do

• Summary and recommendations


Goal of Evaluation

• “…to provide useful information for judging decision alternatives, assisting an audience to judge and improve the worth of some educational program, and assisting the improvement of policies and programs”

(Stufflebeam, 1983)


Frameworks

• Summative-judgment orientation (Scriven, 1983)

OR

• Improvement orientation (Stufflebeam, 1983)


“The most important purpose of program evaluation is not to prove but to improve”

Proposed Framework

• Stufflebeam’s CIPP framework for program evaluation
  • Context
  • Inputs
  • Process
  • Product

• Evaluation can encompass any or every one of these aspects

Context Evaluation

• What needs are addressed, how pervasive and important are they, and to what extent are the project’s objectives reflective of assessed needs?

• What course is undergoing redesign?
• Why is it targeted for redesign?
• Are these reasons sufficient for the resource/time expenditure that redesign would require?
• Are there components of the traditional course that are good/important to keep?
• Does redesign represent potential benefits?

Input Evaluation

• What procedural plan was adopted to address the needs and to what extent was it a reasonable, potentially successful, and cost effective response to the assessed needs?

• Why were the specific redesign components selected?
• What else might have worked just as well?
• What are the costs of the chosen approach versus the costs of others (to all stakeholders)?

Process Evaluation

• To what extent was the project plan implemented, and how, and for what reasons did it have to be modified?

• Was each part of the plan in place?
• Did the components operate as expected?
• Did the expected behavioral change occur?
• How can implementation efforts be improved?

Product Evaluation

• What results were observed, how did the various stakeholders judge the worth and merit of the outcomes, and to what extent were the needs of the target population met?
• How did outcomes compare to past/traditional delivery?
• What were stakeholders’ opinions of the change?
• Were the cost/benefit advantages realized?
• Were there unintended consequences?

Proposed Framework

• Stufflebeam’s CIPP framework for program evaluation
  • Context
  • Inputs
  • Process
  • Product

Process Evaluation Strategies

• Observations

• Review of extant process data

• Focus groups

• Informal or formal feedback


Product Evaluation Strategies

• Qualitative review of judgments of stakeholders

• Quantitative comparison of measured outcomes

• Causal conclusions regarding quantitative outcomes depend on design (Campbell & Stanley, 1963)


Study Design

Pre-test/post-test control group design:

sample → Group A: pre-test → Re-designed Instruction → post-test
sample → Group B: pre-test → Traditional Instruction → post-test

With this design, you have strong support for a causal statement.
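The logic of analyzing this design can be sketched in a few lines. The scores below are hypothetical, made up purely for illustration; the analysis compares mean gain (post-test minus pre-test) between the two groups with a Welch two-sample t statistic:

```python
from statistics import mean, variance

# Hypothetical exam scores for a pre-test/post-test control group design.
# Because students were randomly assigned, a difference in gains can be
# attributed to the instructional change.
redesign_pre  = [62, 58, 71, 66, 60, 69, 64, 57]
redesign_post = [78, 74, 85, 80, 75, 84, 79, 70]
tradit_pre    = [63, 59, 70, 65, 61, 68, 62, 58]
tradit_post   = [70, 66, 77, 72, 68, 75, 69, 64]

# Gain (post minus pre) for each student.
gain_a = [post - pre for pre, post in zip(redesign_pre, redesign_post)]
gain_b = [post - pre for pre, post in zip(tradit_pre, tradit_post)]

# Welch's two-sample t statistic on the gains (unequal variances allowed).
se = (variance(gain_a) / len(gain_a) + variance(gain_b) / len(gain_b)) ** 0.5
t = (mean(gain_a) - mean(gain_b)) / se
print(f"mean gain (redesign):    {mean(gain_a):.2f}")
print(f"mean gain (traditional): {mean(gain_b):.2f}")
print(f"Welch t statistic:       {t:.2f}")
```

In a real evaluation the t statistic would be referred to a t distribution (e.g., via a statistics package) to obtain a p-value; the point here is only that the randomized pre/post design licenses this direct comparison of gains.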

Study Design

Post-test only control group design:

sample → Group A: Re-designed Instruction → post-test
sample → Group B: Traditional Instruction → post-test

With this design, you have support for a causal statement, assuming students do not drop out differentially.

Study Design

Non-equivalent control group design (intact groups, no random assignment):

Group A: pre-test → Re-designed Instruction → post-test
Group B: pre-test → Traditional Instruction → post-test

With this design, initial differences in the groups may explain differences (or lack of differences) in the outcomes.
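The pre-test in this design at least lets an evaluator adjust for initial group differences. A minimal sketch, using hypothetical data in which the redesign section happened to enroll stronger students, computes the covariate-adjusted group difference from the pooled within-group regression slope of post-test on pre-test (an ANCOVA-style adjustment):

```python
from statistics import mean

def within_cov(pre, post):
    """Within-group cross-product sum of (pre, post) deviations."""
    mp, mq = mean(pre), mean(post)
    return sum((x - mp) * (y - mq) for x, y in zip(pre, post))

# Hypothetical data: Group A (redesign) starts out 10 points ahead on the
# pre-test, and every student in both groups gains exactly 12 points.
a_pre,  a_post = [70, 74, 68, 72, 76, 71], [82, 86, 80, 84, 88, 83]
b_pre,  b_post = [60, 64, 58, 62, 66, 61], [72, 76, 70, 74, 78, 73]

# Pooled within-group regression slope of post-test on pre-test.
slope = (within_cov(a_pre, a_post) + within_cov(b_pre, b_post)) / (
    sum((x - mean(a_pre)) ** 2 for x in a_pre)
    + sum((x - mean(b_pre)) ** 2 for x in b_pre))

raw_diff = mean(a_post) - mean(b_post)
adj_diff = raw_diff - slope * (mean(a_pre) - mean(b_pre))
print(f"raw post-test difference:     {raw_diff:.1f}")
print(f"pre-test-adjusted difference: {adj_diff:.1f}")
```

Here the raw post-test gap is entirely explained by the pre-existing difference, so the adjusted difference is zero, which is exactly the trap the slide warns about when groups are not randomly assigned.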

Study Design

Static group comparison / post-test only with non-equivalent groups:

Group A: Re-designed Instruction → post-test
Group B: Traditional Instruction → post-test

With this design, initial differences in the groups may explain differences (or lack of differences) in the outcomes.

Examples of what to do (and not do)

• UMBC Psychology Course Redesign

• Context: Low pass rates in PSYC100 course; student course evaluations were “brutal”

• Input: Delivery of PSYC100 course content altered
  • Material on web; self-paced “labs” and quizzes
  • Dyads within lecture hall with peer facilitators
  • Lectures were more discussion-based, including video and clicker questions
  • Less time in lecture, more on self-paced on-line work

Examples of what to do (and not do)

Process Evaluation (of redesign pilot)

• Lab utilization statistics
  • Time lab completed (relative to exam and speed)
  • Number of times quiz attempted
• Qualitative reaction from redesign section instructors
• Focus group comments from students
• Reactions from small groups in lectures
  • What was working
  • What was not working
  • What changes would be helpful

Examples of what to do (and not do)

Product Evaluation (of redesign pilot)

• Post-test only static group comparison
• Grade distribution
  • Redesign
  • Traditional (same semester)
  • Traditional (historical)
• Common exam
  • Redesign
  • Traditional (same semester)
• Student course evaluations
  • Redesign
  • Traditional (same semester)

Summary Suggestions

• Identify a fairly independent evaluator now
• Determine what type of evaluation you need to undertake [which CIPP stage(s)?]
• Identify components of each (remember unintended consequences)
• Make it happen when it needs to happen!
• Be creative in considering sources of “data”
• Be flexible to change your evaluation plan mid-stream
• Think long-term as well as short-term

References

• Campbell, D. T. & Stanley, J. C. (1963). Experimental and Quasi-Experimental Designs for Research. Chicago, IL: Rand McNally & Company.

• Scriven, M. (1983). Evaluation ideologies. In Madaus, G. F., Scriven, M. & Stufflebeam, D. L. (Eds.) Evaluation Models, pp. 229-260. Hingham, MA: Kluwer Academic Publishers.

• Stufflebeam, D. L. (1983). The CIPP model for program evaluation. In Madaus, G. F., Scriven, M. & Stufflebeam, D. L. (Eds.) Evaluation Models, pp. 117-142. Hingham, MA: Kluwer Academic Publishers.


Thank you!

Contact for info: Lstaplet@umd.edu
