
Assessment For Learning Using Rubrics

Judy Arter Independent Consultant

Loren Ford Clackamas Community College

January 27-28, 2011 Anderson Conference

Shifting From a Grading Culture to a Learning Culture: ASSESSMENT THEORY AND PRACTICE

Judy Arter PO Box 470, Beavercreek, OR 97004

[email protected]

Adapted from: Stiggins, R., Arter, J., Chappuis, J., and Chappuis, S. (2006). Classroom Assessment for Student Learning (CASL), Portland, OR: Pearson Assessment Training Institute. Arter, J. and Chappuis, J. (2006). Creating and Recognizing Quality Rubrics (CAR), Portland, OR: Pearson Assessment Training Institute.


Seminar Goals
1. Understand the relationship between assessment and student motivation
2. Understand the difference between assessment of and for learning
3. Deepen your understanding of keys to quality student assessment
4. Deepen your understanding of assessment for learning strategies

Goals For This Session
1. Deepen your understanding of keys to quality student assessment―Keys 1 (Clear Purpose), 2 (Clear Targets), and 5 (Student Involvement)
2. Deepen your understanding of assessment for learning strategies; in this case, a concrete example of meaningfully involving students in their own assessment using rubrics
3. Understand how rubrics can make learning targets clearer to students

Rubric: Written criteria that define various levels of quality in student reasoning, performances, or products.

Assessment FOR Learning (formative assessment): Assessments given during the learning process and used to modify the teaching and learning activities in which students are engaged, in order to maximize learning before the grading event. Rubrics are great as AFL tools because:

• Rubrics help make complex learning targets clearer to students by describing what a good quality performance or product looks like. Students can hit any target that is clear.

• Good rubrics, because of their descriptive detail, help instructors diagnose student strengths and weaknesses, plan instruction, and provide descriptive feedback to students on complex learning targets.

Assessment OF Learning (summative assessment): Assessments used to determine how much students have learned as of a particular point in time in order to report achievement status to others. For example, grading.

Rubrics can be used as assessment OF learning tools because they help instructors make consistent and justifiable judgments of quality for grading or documenting levels of mastery.

Rubrics are a means to an end―more effective student learning. Rubrics are not needed for everything. They are best used for those learning targets that require a subjective judgment of quality, for example, lab reports, mathematical problem solving, writing, artistic performance, and critical thinking. The role of the rubric is to make subjective judgments as objective as possible. This clarifying of learning targets is also useful to students. Rubrics are not needed for assessing knowledge when answers are right/wrong.

What AFL Doesn’t Look Like with Rubrics
• Handing the rubric to students with little explanation
• No practice with the rubric before the “grading” use
• Asking students to self- or peer-assess using the rubric without teaching them how
• Using any old rubric without making sure it adheres to standards of quality


Assessment for Learning Using Rubrics: Strategy 1: Provide a clear and understandable vision of the learning target using rubrics

Part A: By the end of this course, I want you to have additional coping skills for dealing with life problems. So, in this course I will ask you to apply psychological information and concepts by analyzing your own behavior and describing possible courses of action that will make you more effective in your dealings with others. To practice these skills you will write weekly journal entries. To help you understand what it looks like when you are applying ideas to yourself, compare the two samples of reaction and response papers below. Which is better? Why? Make a list of characteristics of a good reaction and response paper.

Sample 1: People could learn a lot from the chapter about how to have better human relations. You could use this information in so many ways that would be helpful. I didn't learn much because a lot of this stuff I already knew. This was some really cool stuff. I liked the stories, especially the one about eating broccoli. Here's what's in the chapter: ten general principles, devil's advocate, catch-22, diversity, paradox, forensics (first day morgue syndrome), shyness, self-talk, listening, commitments, communication, fears, double standards, risk taking, choice and balance, developing skills, and critical thinking. It ended with personal learning. I'm going to come to all the classes, read all the chapters, and do all the homework that is required to get an A. The section on paradox was great to read. I'm going to show it to my brother. I think he would enjoy the information in this chapter, too. Wow the Johari window really opened my eyes to some stuff about other people.

Sample 2: When I was young my parents told me that I was shy. I think that I started to believe it, whether it was true or not. There are times when I would like to speak up more and let my opinion be known, but I'm afraid that others might not think my contribution would be good enough. I'm also nervous that other people might think I'm too pushy. I'm not sure how to speak up. The book talked about double standards. I think that applies to me because I don't mind other people making comments even when they are a little off the mark (as long as they are friendly), but, I'm very afraid of what people might think if I make a comment that is a little off the mark.

I'm going to use the information to start taking more risks in starting conversations. I'm especially going to pay more attention to my self-talk and try telling myself that my opinion is worthwhile. (I think this was in the section on affirmations.)


Strategy 1: Provide a Clear and Understandable Vision of the Learning Target

What You Need:
• Analytical trait scoring guide (complex skill or product broken down into teachable parts—called traits)
• Bulleted list describing the components of quality for each part of the scoring guide, in language students can understand (base this on the descriptors under the strong level of each trait)
• Several anonymous strong and weak performances

What You Do: Anything you do to help students answer the question, "What are the elements of quality in the performance or product I am to create?" applies to Strategy 1. Many instructors use a version of the following steps to introduce the concepts of the rubric they will be using. In this example, we will introduce students to the language of a rubric for reaction and response papers―learning to apply psychological concepts to their own context.

1. Ask students, “What does a good reaction and response paper look like?” Record their responses on chart paper. Write down exactly what they say. (This represents what they already know about quality and the language they use to describe it.)

Alternative: The above strategy only works if students already know something about quality. If students know very little about quality in your context, give them two examples of student work―one strong and one very weak―and ask them to decide which is better and list the reasons why they think so.

2. Show one or two examples of reaction and response papers. Ask students to add to their list of features of a good one. Record additional responses. Showing students examples serves to remind them of features of a sound performance that didn't come to mind immediately.

Alternative: If you showed them examples to begin with, then you can either show them two additional samples―one strong and one weak―and ask them to add to their list, or you can skip this step entirely.

3. Tell students their list includes many of the same characteristics that experts (instructors) look for.

4. Show students a list of the criteria represented in your rubric. For reaction papers this might be: Understands Concepts and Application to Self. Then show a bulleted list of the main features of each trait―use the descriptors that define "strong" performance. For example, for reaction and response papers you would have:

Understands Concepts
• Information is accurate
• Terms are used correctly
• Examples (or counterexamples) are relevant

Application to Self
• In the first person ("me," "I")
• Specific description of what was learned
• Analysis of one's own behavior in light of the concepts in the chapter―or―examples of how various concepts reflect one's own experiences
• Reactions and opinions regarding the chapter content: likes, dislikes, and reasons why
• Sincere

Ask students to see if any of the ideas from the bulleted list show up on their list. If there is a match, write the trait name next to the word or phrase on their list. If there are no matches, tell students they will be learning more about this. In going through this process, students identify what they already know, link their descriptions of quality to the language of the rubric, and realize that the concepts on the rubric are not totally foreign to them.

5. Hand out a version of the rubric written in language your students will understand.

From: Arter, J. and Chappuis, J. (2006). Creating and Recognizing Quality Rubrics, Portland, OR: Pearson Assessment Training Institute, pages 137-138.


AFL Strategy 2 with Rubrics: Use Models to Practice Scoring

Directions: You will be evaluating sample student reaction and response papers on the trait of Application to Self by doing the following:

1. Read the description for strong Application to Self. Then read the description for weak Application to Self.
2. Read the sample journal entry.
3. Compare the journal entry to the Application to Self rubric. Which level best describes the student's performance?

You can assign intermediate points: "Green Light minus" and "Red Flag plus." "Green Light minus" means that the paper has many features of a Green Light, but some features of a Red Flag. "Red Flag plus" means that the paper has many features of a Red Flag, but some of a Green Light. Discuss your scores in small groups using the language of the scoring guide to explain your decision.

Trait Evaluated: Application to Self Sample #3 Score: _____________________ Words from the rubric trait of Application to Self that describe this sample:

Trait Evaluated: Application to Self Sample #4 Score: _____________________ Words from the rubric trait of Application to Self that describe this sample:


Strategy 2: Use Examples and Models of Strong and Weak Work

Teaching Students to Use the Rubric to Evaluate Examples

Anything you do to help students answer the questions, "What does quality look like? What are some problems to avoid?" applies to Strategy 2. Even though you have introduced the language of the rubric to students, they still need to practice with it to understand what the descriptors mean and to be able to differentiate among different levels of quality. We recommend that, if the performance or product is complex, you focus on one trait at a time. First, gather models of strong and weak work—anonymous strong and weak student work, published strong (and weak, if available) work, and your own work. Share anonymous student samples that model both good work and problems students commonly experience, especially the perennial, pervasive problems that drive you nuts.

1. Choose one trait to focus on at a time. Have students read the rubric for that trait, beginning with the descriptors at the strong end of the scale, then reading the descriptors at the weak end of the scale, and finally reading through the middle descriptors.

2. Show students an anonymous strong example of the performance or product. Don't tell them that it is strong. (Note that in some contexts, such as with mathematics problem solving, this process works better if students are asked to solve the problem in the example on their own before looking at sample solutions.)

3. Ask students to put the performance or product mentally into one of two piles―strong or weak―for the trait specified. Tell them this is silent, independent work. Have them begin reading the rubric at the appropriate end of the scale―if it is in the strong pile, they begin reading at the strong end of the scale; if it is in the weak pile, they begin reading at the weak end of the scale. If they believe it is strong, but the highest level doesn't describe it exactly, they are to read down through the levels until they find descriptors that match their judgment. Conversely, if they believe it is toward the weak end, but the lowest level doesn't describe it exactly, they are to read up through the levels until they find descriptors that most closely match their judgment. This is again independent work.

4. Students are now ready to share their judgments and reasons in small groups. Ask them to be sure to refer to the language of the rubric when explaining why they gave the example a particular rating. They do not need to come to consensus on a rating, but they should all share their judgments and reasons for them. Students can change their ratings if the discussion causes them to rethink their judgment and reasons.

5. Next, ask students to share their ratings by voting. In the case of a 5-point scale, you would ask them, "How many of you gave this example a 5?" and tally the votes. Do the same for each score or rating point in descending order. Then ask, "What did you give it and why?" The previous steps have been leading to this discussion―it is the most important part. Let students share their ratings and reasons. Remind them to refer to the language of the rubric in justifying their judgments. The point of this step is not to make sure students center on the rating you have in mind. Rather, the purpose is to give them practice at matching examples to levels of quality as defined in the rubric, which is a necessary precursor to self-assessment.

6. You can share the rating you would give the example, if you like, but don't spend too much time explaining your own analysis at this point, even if it is widely discrepant from some or all of their judgments. Rather, select another example, this time one that is fairly weak, and follow the same procedure. Select subsequent samples to reflect problems your students typically have. Students will come to a closer convergence with your vision of quality through engaging in this activity several times.

From: Arter, J. and Chappuis, J. (2006). Creating and Recognizing Quality Rubrics, Portland, OR: Pearson Assessment Training Institute, pages 138-140.


Oral Communication Rubric

Human Relations, Spring Term 2010
Loren Ford, Clackamas Community College

Judy Arter, Assessment Consultant

Understands the Content Under Discussion

Green Light:
1. Information is accurate
2. Terms are used correctly
3. Examples and counterexamples are relevant

Red Flag:
1. Struggles to provide ideas or support for ideas, or ideas are hard to understand
2. Terms are used incorrectly
3. It is hard to understand how examples and counterexamples are relevant to the topic being discussed

Interaction With Others

Green Light:
1. Listens to others
   • Nonverbal behaviors are positive: leaning forward, looking at the person speaking, nodding, smiling
   • Doesn’t interrupt
   • Asks clarifying questions
   • Paraphrases
2. Shares the spotlight
   • Pauses to let others talk
   • Draws others into the conversation
   • Paces one’s own contribution to not overwhelm
3. Makes others feel comfortable expressing their insights
   • Mirrors--matches others’ styles
   • Invites other points of view
4. Uses language others will understand

Red Flag:
1. Doesn’t listen to others
   • Nonverbal behaviors are negative: sitting back, crossing arms, scowling, shaking head, rolling eyes
   • Interrupts
   • Makes irrelevant or distracting statements
   • Cannot summarize what another person said
2. Either monopolizes the conversation or is totally uninvolved
   • Overwhelms with irrelevant detail
   • Doesn’t stop talking
   • Makes a comment on what everyone else said; doesn’t let anyone else respond
   • Uninvolved in the conversation even when directly asked for a contribution
3. Interactions with others are negative
   • Ridicules another’s experiences
   • Non-courteous/rude
4. Tries to impress others by being a thesaurus

Open to Being Influenced By Others

Green Light:
1. Relates others’ points of view to oneself
   • States how one’s own experience is akin to that of another
   • Agrees
   • Indicates that another’s statement is interesting and might be relevant to oneself
   • Acknowledges that having a different point of view doesn’t make a person wrong

Red Flag:
1. Disregards or ridicules others’ points of view
   • Never changes one’s opinion
   • Attacks another person’s point of view
   • Is insincere in agreeing with another person or relating someone else’s experience to one’s own
   • Implies that another person is wrong for having their point of view
   • Uses a mocking tone of voice

©2011, Judy Arter and Loren Ford, [email protected]


Reaction and Response Rubric¹
Human Relations, Spring Term 2010
Loren Ford, Clackamas Community College
Judy Arter, Assessment Consultant

Understands Concepts

Green Light:
1. Information is accurate
2. Terms are used correctly
3. Examples and counterexamples are relevant

Red Flag:
1. Struggles to provide ideas or support for ideas, or ideas are hard to understand
2. Terms are used incorrectly
3. It is hard to understand how examples and counterexamples are relevant to the topic being discussed

Application to Self

Green Light:
1. In the first person (“me,” “I”)
2. Specific description of what was learned
3. Analysis of one’s own behavior in light of the concepts in the chapter and/or examples of how various concepts reflect one’s own experiences: “Here’s how this chapter was useful to me” and/or “Here’s an example from my life …”
4. Reactions and opinions regarding the chapter content: likes, dislikes, and reasons why
5. Sincere

Red Flag:
1. Not in the first person
2. General, vague, or abstract description of content
3. Little analysis of one’s own behavior in light of the concepts in the chapter and/or little description of how the ideas might be useful in one’s own life; high school book report: “Here are the points made in the book, but I won’t discuss how they relate to myself or how useful they are to me”
4. No conclusion or irrelevant conclusion
5. Insincere; “I need to do this to get the grade, but I won’t really think about how the information might apply to myself”

©2011, Judy Arter and Loren Ford, [email protected]

¹ Draft based on examination of many student self-reflections.


Analytic Rubric for Oral Presentation

Trait 1: Content/Critical Thinking—Focus/theme, support; relevancy, accuracy, tailored to listener needs.

Strong: Ideas are focused and supported with relevant details and examples. The speaker has chosen the most significant information. Information is accurate. The speaker anticipates the information needs of the audience, adapts content to the listeners’ background, and/or refers to listeners’ experience.

Middle: The topic is fairly broad, but ideas are reasonably clear and focused on relevant content. Support is attempted, but there seem to be holes. Most information is accurate, but there are some confusions. Information is mostly relevant to listener needs, but the listener is either left with some questions or wishes some information was left out.

Weak: The content may be repetitious or sound like a collection of disconnected thoughts; the speaker is still in search of a topic; the length is not adequate for development; or much information, although accurate, is not relevant. Information is limited, unclear, or incorrect. Much information, even if accurate, is either confusing to listeners or repeats what is already obvious to listeners. Audience information needs don't seem to be considered.

Trait 2: Organization—Putting ideas together, sequencing, opening, closing.

Strong: Ideas that go together are put together. Details seem to fit where they're placed. The speaker helps the listener understand the sequence of ideas through organizational aids such as previewing the organization, using transitions, and summarizing. Listeners can easily put the ideas in an outline. The opening draws the listener in; the closing leaves a sense of closure and resolution.

Middle: Most ideas that go together are put together. Some details don’t seem to fit where they’re placed. The sequence and relationships are fairly easy to follow, but sometimes the listener has to make assumptions to connect the ideas. Creating an outline of the ideas requires inferences. The presentation has a recognizable and relevant opening and closing, but there is little sense of anticipation and/or closure.

Weak: Many ideas that go together are not put together. Many details don't seem to fit where they're placed. Listeners have trouble putting the ideas into an outline. Sequencing is confusing. There is no opening or closing; or the opening or closing do not fit the topic or leave the listener confused.


Trait 3: Delivery—Volume, visual aids, speaking fluency, pacing.

Strong: Volume is loud enough to be heard and understood. Volume is intentionally used to keep the listener’s attention and/or enhance the points being made. Visual aids are used effectively to support and enhance meaning. Pronunciation and enunciation are clear enough to be understood and are used to emphasize important points. The speaker exhibits very few disfluencies, such as “ah,” “um,” and “you know.” There is little in the presenter’s demeanor, dress, or mannerisms that distracts the listener from the message. Pacing is right for the audience. The speaker knows when to slow down and when to speed up.

Middle: The speaker can be heard and volume doesn’t distract the listener, but neither does volume draw attention to important points. Visual aids, while understandable, don’t add much to the presentation. Pronunciation and/or enunciation are generally clear enough to be understood, but are not used effectively to underscore important points. While the speaker exhibits disfluencies, they don’t detract from the presentation enough to interfere with meaning. The presenter’s demeanor, dress, or mannerisms sometimes distract the listener, but meaning is not disrupted. Pacing is fairly good, but at times the speaker goes too slow or too fast for the listeners to keep up.

Weak: The speaker can’t be heard and/or changes in volume distract the listener from understanding the points being made. Visual aids are confusing, do not relate to the point being made, or distract the listener. Pronunciation and/or enunciation detract from being able to understand the speaker. Disfluencies, such as “um,” “ah,” and “you know,” detract from understanding what is being said. The presenter’s demeanor, dress, or mannerisms distract the listener to the extent that meaning is disrupted. Pacing is awkward. The listener wants the speaker to either get on with it or not go so quickly.

Trait 4: Language Use—Choice of words, language techniques, sentence fluency.

Strong: Words and phrases are accurate, to the point, create pictures in the listener’s head, and/or result in emphasizing the intended points. The speaker consciously uses language techniques such as vivid language, emotional language, humor, imagery, metaphor, and simile to make intended points. Sentences are varied and easy to listen to and understand. They attract and hold attention.

Middle: The speaker uses bland language that, while not detracting from the message, does little to enhance it. Words and grammar are accurate and communicate, but don’t capture the listener’s attention. Sentences are usually correct and can be understood, but generally lack the flair that maintains attention.

Weak: Words are used incorrectly or are insulting; or the images created from vocabulary are distracting or irritating. Words and phrases either sound like a thesaurus on the loose or are so nondescript, such as “thing” and “stuff,” that the listener loses attention. The speaker might use jargon or clichés. Sentences either ramble, are choppy, or are awkward. Sentence structure might all be the same and so become boring.


Rubric for Lab Reports—Themes Across Sections

Conceptual Understanding

In Control:
1. Correct terminology was used when needed
2. Biological concepts were correctly and succinctly explained
3. All portions of the lab report “flowed”: the most appropriate biological concepts underpinning the experiment were identified and built upon throughout, from support for the hypothesis and design, through explanation of results, conclusion, and into directions for further research

Developing:
1. Terminology use was inconsistent—sometimes correctly used and sometimes either not used when needed or used incorrectly
2. Biological concepts were essentially correct although explanations may have been slightly incomplete, off base, too wordy, or too sketchy
3. The reader could follow the “flow” of the biological concepts underpinning the experiment from section to section, but with some effort: although the reader could see where the student was headed with the ideas, some relevant concepts were inappropriate, missing, hard to find, or not given appropriate emphasis

Beginning:
1. Terminology was frequently used incorrectly or was missing when needed
2. Biological concepts were incorrectly explained
3. The report appeared to be a loosely linked series of independent sections, relevant biological concepts were rarely identified, concepts shifted between sections, it was not apparent how concepts were linked to the experiment and results; or all biological concepts learned so far were referred to regardless of relevance

Reasoning—Pre-experiment

In Control:
1. Provided relevant support for the hypothesis
2. The prediction was stated so that it was measurable
3. Experimental and control variables were identified and explained correctly
4. The design was sufficient to control for extraneous variables

Developing:
1. Discussed some concepts that underpin the hypothesis, but not all that were important
2. The prediction was stated so that it was measurable
3. Experimental and control variables were identified correctly, but may not have been explained fully
4. The design controlled for some extraneous variables, but there were holes

Beginning:
1. The hypothesis seemed to be a guess, to be based on personal opinion, or to cite irrelevant concepts
2. The prediction would be difficult to measure
3. Experimental and control variables were either incorrect or missing
4. The design was not adequate to control variables or the variables measured were not those of importance

Reasoning—During the Experiment

In Control:
1. There were enough data collected, and they were of sufficient quality in terms of accuracy and relevance to be able to draw an accurate conclusion about the hypothesis

Developing:
1. There were enough data collected, but the accuracy of the data was noticeably below what would be required to be able to draw an accurate conclusion about the hypothesis

Beginning:
1. Data were insufficient in quantity and/or quality to reach an accurate conclusion about the hypothesis


Reasoning—Post-experiment

In Control:
1. Identified the relevant trends, patterns, and anomalies in the data and explained their importance
2. When needed, identified relevant sources of error and suggested changes to the experiment that could result in better information
3. Correctly recognized whether or not the data supported the hypothesis or prediction
4. Supported conclusions with references to the data collected
5. Suggested relevant avenues for future research that were clearly explained and expanded thinking about the relevant biological concepts

Developing:
1. Incompletely identified the relevant trends, patterns, and anomalies, and/or could explain their importance more clearly
2. When needed, identified some of the relevant sources of error, but didn’t discuss their relevance to the results, or suggested changes to the experiment were not the most relevant
3. Correctly recognized whether the data supported the hypothesis, but gave little explanation
4. Stated a clear conclusion, but lacked a certain linkage to the data analysis
5. Suggested directions for further research were appropriate but insufficient to expand thinking about relevant biological concepts

Beginning:
1. Restated the data without analysis of trends, patterns, and anomalies, or such analysis was incorrect or missing
2. Relevant sources of error were either incorrect or missing, or provided a “laundry list” of everything that might have happened; suggestions for improvement were either irrelevant or missing
3/4. Incorrectly recognized whether or not data supported the hypothesis, or discussion of the data was not related to the hypothesis, or did not link the discussion to the experiment, or the discussion just repeated the hypothesis
5. There was little in the way of relevant suggestions for further research, or the suggestions given were inappropriate

Communication

In Control:
1. Standard lab report format was used and information was correctly placed into the appropriate sections
2. Standard procedures used were clearly identified; other procedures were described so clearly that they could be replicated exactly by a competent practitioner: step-by-step, with pictures and diagrams of equipment set-up
3. Tables and graphs were labeled and titled appropriately; data representation enhanced understanding—it clarified the relationship between the variables involved
4. Even though writing conventions and neatness may not have been perfect, errors did not interfere with understandability

Developing:
1. Most of the information that goes into a lab report was in evidence, even though some elements may have been included in the incorrect section
2. Standard procedures were incompletely identified, and/or other procedures were described but there would be questions when trying to replicate: some of the steps were not clearly described, were in the wrong order, or would be helped by a picture
3. Tables and graphs explained what the data are, but could have clearer labels and titles, or another data representation would be more useful to illustrate the relationship between the variables involved
4. Writing conventions and messiness interfered with understandability, but the reader is fairly confident that inferences about what was done are correct

Beginning:
1. Standard lab report format was not used, and/or a lot of work was required to find relevant information
2. Procedures were not described or were described in a way that would make it difficult to replicate the experiment: the steps were not clearly described or were missing, or the order had to be inferred
3. Tables and graphs were missing, labeled inaccurately, or not labeled at all; or the choice of representation was inappropriate for the data and did little to clarify the relationship between the variables involved
4. Writing convention errors or messiness were so severe that the reader is not sure that what was conveyed was what the writer intended, or is not sure that inferences about what was done are correct

Adapted from drafts prepared by Chris Strictland, Clackamas Community College, Oregon City, Oregon, 2006. Used with permission.


Research Paper Rubrics, Clackamas Community College (OR)

Communication

High: An inviting introduction draws the reader in, a satisfying conclusion leaves the reader with a sense of closure and resolution; there is a clear thesis; transitions are thoughtful and clearly show how ideas connect; uses an appropriate variety of valid sources which are well integrated and support the author’s points; quotations, paraphrases and summaries are used and cited appropriately; uses the proper format (APA, MLA, etc.); sequencing is logical and effective; spelling is generally correct, even on more difficult words; punctuation is reasonably accurate, consistent, and guides the reader effectively through the text; grammar and usage contribute to the clarity; conventions, if manipulated for stylistic effect, work; voice and style are appropriate for the type of paper assigned; paragraphs are well-focused and coherent.

Average: The paper has a recognizable introduction and conclusion, but the introduction may not create a strong sense of anticipation and/or the conclusion may not tie the paper into a coherent whole; there is a thesis but it is ambiguous or unfocused; transitions often work well, but some leave connections between ideas fuzzy; valid sources generally support the author’s points but a greater variety or more detail is needed; quotations, paraphrases and summaries generally work but occasionally interfere with the flow of the writing, seem irrelevant or are incorrectly cited; uses the proper format but there are occasional errors; sequencing shows some logic but it is not under complete control and may be so predictable that the reader finds it distracting; spelling is generally correct but more difficult words may be misspelled; punctuation is acceptable but occasional errors interrupt the flow or confuse the reader; there are problems with grammar or usage but not serious enough to distort meaning; voice and style don’t quite fit with the type of paper assigned; paragraphs occasionally lack focus or coherence.

Low: There is no real lead-in to set up what follows, no real conclusion to wrap things up; there is no clear thesis; connections between ideas are often confusing or missing; citations are infrequent, lack credibility, or often fail to support the author’s points; quotations, paraphrases and summaries tend to break the flow of the piece, become monotonous, don’t seem to fit, and/or are not cited; frequent errors in format or incorrect format used; sequencing seems illogical, disjointed, or forced; there are frequent spelling errors even on common words; punctuation is often missing or incorrect and makes the paper noticeably more difficult to interpret; errors in grammar or usage are frequent enough to become distracting and interfere with meaning; voice and style are not appropriate for the type of paper assigned; paragraphs generally lack focus or coherence.

Critical Thinking

High: The paper displays insight and originality of thought; there is sound and logical analysis that reveals clear understanding of the relevant issues; there is an appropriate balance of factual reporting, interpretation and analysis, and personal opinion; the author goes beyond the obvious in constructing interpretation of the facts; telling and accurate details are used to reinforce the author’s arguments; the paper is convincing and satisfying.

Average: There are some original ideas but some seem obvious or elementary; analysis is generally sound but there are lapses in logic or understanding; the balance between factual reporting, interpretation and analysis, and personal opinion seems skewed; the paper shows understanding of relevant issues but lacks depth; generally accurate details are included but the reader is left with questions – more information is needed to ‘fill in the blanks’; the paper leaves the reader vaguely skeptical and unsatisfied.

Low: There are few original ideas, most seem obvious or elementary; analysis is superficial or illogical; the author seems to struggle to understand the relevant issues; there is a clear imbalance between factual reporting, interpretation and analysis, and personal opinion; author appears to misunderstand or omit key issues; there are few details or most details seem irrelevant; the paper leaves the reader unconvinced.

Content

High: The paper addresses a topic within the context of examining biological, personal, social/cultural/political, or paradigmatic change; the paper is complete and leaves no important aspect of the topic unaddressed; the author has a good grasp of what is known, what is generally accepted, and what is yet to be discovered; appropriate significance is assigned to the information presented and irrelevant information is rarely included; connections between the topic of the paper and related topics are made that enhance understanding; specialized terminology, if used, is used correctly and precisely; the author seems to be writing from personal and/or professional knowledge or experience.

Average: The paper addresses a topic within the appropriate context but the connections are somewhat tenuous or there are diversions to less relevant points; the paper is substantially complete but one or more important aspects of the topic are not addressed; the author has a good grasp of the relevant information, but fails to distinguish between what is known, what is generally accepted, and what is yet to be discovered; the paper often uses information in a way inappropriate to its significance or includes much irrelevant information; there are few connections made to related topics; specialized terminology is sometimes incorrectly or imprecisely used; the author seems to be writing from knowledge or experience but has difficulty going from general observations to specifics.

Low: The paper addresses a topic only vaguely related to examining change; the paper is clearly incomplete with many important aspects of the topic left out; the author has a poor grasp of the relevant information; the paper frequently uses information inappropriately or uses irrelevant information; no connections are made to related topics to help clarify the information presented; specialized terminology is frequently misused; the work seems to be a simple restatement of the assignment or a simple, overly broad answer to a question with little evidence of expertise on the part of the author.

©2002, Dave Arter, Clackamas Community College, Oregon City, Oregon. Used with permission.