
Identifying Affective Responses of

Students to Creative Assessments

by

Steve Strieker

A Research Paper Submitted in Partial Fulfillment of the

Requirements for the Master of Science Degree

in

Education

Approved: 2 Semester Credits

James Lehmann

The Graduate School

University of Wisconsin-Stout

August 6, 2010


The Graduate School

University of Wisconsin-Stout

Menomonie, WI

Author: Strieker, Stephen J.

Title: Identifying Affective Responses of Students to Creative Assessments

Graduate Degree/Major: MS Environmental Education

Research Adviser: James Lehmann, Ph.D.

Month/Year: August, 2010

Number of Pages: 52

Style Manual Used: American Psychological Association, 6th edition

Abstract

While recent studies indicate alternative, creative assessments lead to greater student

achievement, the research lacks clear disclosure of how students react and respond to creative

assessments. This qualitative study evaluates how four middle school students—traditionally

exposed to analytical tests—affectively respond to a creative assessment. For a comparative

analysis, the researcher selected two students for showing aptitude on previous analytical tests

and two students for showing ingenuity on previous creative assessments. The study’s creative

assessment involved each subject displaying acquired historical knowledge in an imaginative

letter written from the perspective of a pioneer child traveling on the Oregon Trail.

A written questionnaire and interview gathered student feedback to the creative

assessment. The overall analysis of the subjects’ qualitative feedback revealed positive attitudes

and reactions to the study’s creative assessment and negative attitudes toward multiple-choice

tests. A comparative analysis of the feedback found a stronger enthusiasm for the creative assessment among the distinctly creative subjects than among the distinctly analytical subjects. The

positive affective responses of the subjects to the creative assessment necessitate the

development of more creative assessments in the researcher’s evaluation practices and further

research on the role of creative assessments in student learning.


Table of Contents

Abstract

Chapter I: Introduction
    Statement of the Problem
    Purpose of the Study
    Assumptions of the Study
    Definition of Terms
    Limitations of the Study
    Methodology

Chapter II: Literature Review
    Views of Intelligence
    Successful Intelligence
    Affective Response to Assessments
    Creative Assessments

Chapter III: Methodology
    Subject Selection and Description
    Instrumentation
    Data Collection Procedures
    Data Analysis
    Limitations

Chapter IV: Results
    Overall Analysis
    Comparative Analysis
    Summary

Chapter V: Discussion
    Limitations
    Conclusions
    Recommendations

References

Appendix A: Written Questionnaire
Appendix B: Consent Form
Appendix C: Research and Assessment Instructions
Appendix D: Notes Template
Appendix E: Scoring Rubric


Chapter I: Introduction

Educators have long struggled with the question of how to effectively assess student

learning and intelligence. The Intelligence Quotient (IQ) was developed over a century ago to

classify human intelligence and was adopted by many educators to assess student intelligence (Dent,

1995). Opponents of IQ testing have challenged this general intelligence theory throughout the

last century. Most prominent in this debate is Howard Gardner’s multiple intelligence (MI)

theory, which finds human intelligence to be much too complex to be summed up in a single IQ

test (Shearer, 2004). Sternberg (1996) has expanded the discussion to include his Successful

Intelligence theory, which sees intelligence as the ability to utilize analytical, creative, and

practical abilities to succeed in one’s environment.

Despite challenges from Gardner, Sternberg, and other researchers in the area of multiple

intelligences, the 2001 No Child Left Behind (NCLB) legislation reinvigorated the importance of

analytical testing by requiring public schools to raise 100% of American students’ testing scores

to proficient or advanced levels (Darling-Hammond, 2007). Most recently, the current administration's 2009 Race to the Top initiative looks to judge not only schools but also teachers by standardized testing scores (McNeil, 2010). It has been over a century since the first IQ tests were developed, yet our ability to accurately assess human intelligence seems more complicated and controversial than ever.

Compounding the complicated matter is the apparent overreliance placed on assessing

general intelligence and its effect on student motivation. The lack of student motivation is a

common complaint among educators. One apparent reason for this student apathy is that traditional testing assesses a relatively narrow range of intelligence. The overreliance on traditional testing appears to have left students with weak analytical skills struggling to find recognition for their

innate talents.

Statement of the Problem

A problem exists in that research related to student assessment lacks clear disclosure of how students react and respond to creative assessments. Students with strong analytical skills often have ample opportunity to showcase their abilities on commonly used standardized analytical assessments. However, students are seldom given assessments that take creative skills into account. While recent literature suggests the overreliance on standardized testing has adversely affected student motivation (Nichols & Berliner, 2008), the general research on

testing is lacking clear disclosure of how students react and respond to creative assessments.

Thus, the point of this qualitative study will be to evaluate how students with distinctive creative

or analytical intelligence affectively respond to a creative assessment challenging them to display

acquired knowledge in a meaningful and creative way.

Purpose of the Study

In an attempt to build up the limited research related to creative assessments, the purpose

of this study is to identify and analyze the attitudes and reactions of students following the

completion of a creative assessment. The researcher also hopes to compare and contrast the

feedback from students traditionally strong in analytical intelligence and students traditionally

strong in creative intelligence. In particular, through an analysis of student feedback following a

creative assessment, the study will seek to answer the following questions:

1. How do students affectively respond to a creative assessment asking them to display

acquired knowledge in a meaningful and inventive way?


2. How do students traditionally strong in reasoning, logic, and comprehension (with

distinct analytical intelligences) affectively respond to a creative assessment

challenging them to display acquired knowledge in a meaningful and inventive way?

3. How do students traditionally adept at thinking, adapting, and creating in novel ways

(with distinct creative intelligences) affectively respond to a creative assessment

aligned with their innovative talents?

Assumptions of the Study

This study is conducted with several assumptions. The researcher assumes the subjects in

this study will participate fully and truthfully in the research, creative assessment, and the study’s

reflection. It is presumed the research content provided will be adequate for students to complete

the creative assessment. Furthermore, it is assumed the study’s post-assessment questionnaire

and interview are properly designed to draw out sufficient feedback from the subjects for

meaningful analysis.

Definition of Terms

Affective response. The affective response referenced in this study refers to the

emotional reaction of students following a creative assessment.

Analytical intelligence. A student’s aptitude to reason and analyze multifaceted

problems is commonly called analytical intelligence in this research paper. Students strong in

mathematical/spatial reasoning, logic, and reading/language comprehension are often considered

to be strong in analytical intelligence. Students with strong analytical skills tend to do well on

standardized tests that focus on reading, reasoning, and math skills.

Creative Assessment. A creative assessment is a tool that allows teachers to evaluate

student learning while giving students a chance to respond in meaningful, imaginative ways.


Creative assessments, like the one used in this study, often expect students to display acquired

knowledge in a personal and inventive manner.

Creative intelligence. The ability of a student to innovatively think, adapt, and create in

novel ways is identified as creative intelligence in this study. Students who can take acquired

knowledge and create something unique out of this knowledge are often considered to be

creatively intelligent. People who are good at art, drama, presentation, writing, dance, and

other activities requiring ingenuity are often thought to be strong in creative intelligence.

Distinct intelligence. A distinctive intelligence is the particular ability of the study’s

subjects to either effectively analyze or think creatively. Some students are distinguished by

their innate abilities. Some students are strong in analytical skills, while others are strong in

creative skills. It should be noted, however, some students are strong in both analytical and

creative skills. Furthermore, some students are weak in both analytical and creative skills.

Successful Intelligence. Robert Sternberg’s (1996) successful intelligence theory

proposes that most often achievement in life requires an individual to successfully adapt to their

particular environment by balancing analytical, practical, and creative abilities.

Traditional testing. Traditional testing, the most common form of evaluations used in

America’s school system, tends to focus on analytical skills like comprehension, logic, critical

thinking, and reasoning. Often missing from traditional testing are evaluations of creative

thinking.

Limitations of the Study

Like most qualitative studies, this study has several limitations. The duration of the

study, its limited number of subjects, and the qualitative approach prevent this study from being

reasonably generalized to any larger samples. The study was limited to a week-long creative


project and the affective responses of the study’s four subjects. Of course, the qualitative

methodology used in this study steers the researcher from producing quantitative data. The

findings and conclusions are limited to the researcher’s analysis of student feedback to a creative

assessment. The findings will be used to improve instruction and assessments implemented in

the researcher’s classroom.

Methodology

In the subsequent chapter, the researcher will review the research related to intelligence and

assessments. This research will set the groundwork for the researcher’s own study and later

chapters. The overall design of this qualitative study will focus on thick descriptions of student

responses to open-ended questions. The study will petition students for their reactions following

a creative project. The researcher will analyze the student responses to the post-creative project

questionnaire and the subsequent responses in the interviews. The researcher will identify

common attitudes and reactions of the students. The researcher will also analyze the responses

of the traditionally creative subjects and the consistently analytical subjects. The research will

also contrast each grouping’s (analytical students’ and creative students’) thoughts and attitudes.

The research will be used to improve instruction and assessments implemented in the

researcher’s classroom.


Chapter II: Literature Review

The intention of this qualitative study is to gain an understanding of how students affectively

respond to creative assessments. According to Nichols and Berliner (2008), students find

motivation from succeeding on assessments. However, the assessment world is dominated by

Intelligence Quotient (IQ)-style testing, which leaves students with weak analytical skills

struggling for success stories on such assessments. Complicating this discussion is research

indicating standardized testing is overrated as a predictor of success in life. In order to

understand the context of this study, this review will outline the latest insights on human

intelligence and, in particular, Sternberg’s (1997) theory of successful intelligence. The

literature related to the affective response of students to assessments will also be examined.

Lastly, the research review will uncover what is known about the use of creative assessments in

education.

Views of Intelligence

In order to understand the literature related to creative assessments, a look at the

opposing views of human intelligence is necessary. In a modern context, educational researchers

are often divided over the view of student abilities and intelligence. Multiple intelligence

thinking—a broader view of human intelligence that sees people as having a wide range of

thinking abilities—is a relatively new view of human intelligence (Shearer, 2004a). Historically,

however, a narrower view of intelligence has dominated educational thinking about student intelligence for over 100 years. Mainstream intelligence theorists tend to view intelligence as a

general mental capability involving reasoning, abstract thinking, and an ability to learn

effectively and efficiently. According to these general intelligence theorists, individuals with

strong general intelligence tend to do well on standardized tests focused on reading, reasoning,


and math skills. General intelligence theorists generally find human intelligence to be innate and

inherited. They see intelligence as something one is largely born with and only marginally acquires through experience. This innate view of intelligence explains general intelligence theorists’ belief in the consistent measurability of general intelligence through IQ testing. Intelligence Quotient

assessments tend to focus on analytical intelligence such as mathematical/spatial reasoning,

logic, and reading/language comprehension. These standardized tests are designed to score

human general intelligence with 100 as the average score. Ninety-five percent of people tested

for IQ fall within the range of 70 to 130. Individuals who score below 70 are often classified

as being mentally retarded. Students who score above 130 are often classified as gifted. Among

both general intelligence and multiple intelligence theorists is broad support for IQ testing

accurately measuring analytical skills (Gottfredson, 1997).
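For context, the 70-to-130 range cited above follows from the normal-curve scaling conventionally applied to IQ scores. A brief worked sketch in LaTeX, assuming the common standard deviation of 15 (an assumption; the passage does not state which scaling is meant):

% IQ scores are conventionally normed to a mean of 100; a standard
% deviation of 15 is assumed here (Wechsler-style scaling).
\[ \text{IQ} \sim \mathcal{N}(\mu = 100,\ \sigma = 15) \]
% The cutoffs 70 and 130 lie two assumed standard deviations from the
% mean, so by the empirical (68-95-99.7) rule:
\[ P(70 \le \text{IQ} \le 130) = P\left(|\text{IQ} - \mu| \le 2\sigma\right) \approx 0.95 \]

Under this assumed scaling, scores below 70 and above 130 each correspond to roughly the most extreme 2.5% of test takers, consistent with the classifications described above.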

The support for the general intelligence view runs deep. Binet created the first known IQ

assessment in the early 20th century. The IQ research compiled since Binet’s work is now over

100 years old (Cuban, 2004). Out of this general intelligence research, Gottfredson (2004) has

accumulated numerous studies believed to show how IQ testing is useful for schools and

students. Standardized tests related to IQ are routinely used by schools to rank (and track)

students. According to Nettelbeck and Wilson (2005), IQ testing is useful in schools, where

methodology and assessments tend to focus on analytical skills. IQ-like tests are also commonly

used by school psychologists and special education teachers to identify learning disabilities and

talented and gifted students. Gottfredson (2004) also referenced several sociological and

psychological studies showing the worth of general intelligence outside schools. Specifically,

Gottfredson highlighted general intelligence research showing a correlation between IQ scores


and income, occupational level, life expectancy, and job performance. For many of these

supporters of general intelligence, IQ is the best measurement of success in life.

While general intelligence thinking is historic, its application is not stuck in the past.

Standardized testing is alive and well in 21st century American public schools. The 2001 No

Child Left Behind (NCLB) legislation sparked growth in standardized testing by requiring public

schools to raise 100% of American students’ testing scores to proficient or advanced levels

(Darling-Hammond, 2007). Most recently, President Obama’s Race to the Top initiative looks to

continue the reliance on standardized testing as the premier assessment tool of America’s schools

(McNeil, 2010). More than ever America’s school communities rely on data from standardized

assessments to “determine an ever-widening range of education decisions: promotion and

graduation requirements for students, bonuses for principals, tenure and salaries for teachers,

ratings for schools, and school closings” (Tashlik, 2010, p. 55).

Even though standardized testing of general intelligence is a popular tool for America’s

schools, the educational literature is full of opponents who deem standardized testing is

overused. Leading this opposition to standardized testing are Gardner (2004) and other

proponents of his multiple intelligence (MI) theory. The IQ-like testing push contradicts the

research of Gardner. According to Cuban (2004), no theory has had more of an impact on

educational practices than Gardner’s MI theory in the last few decades. Gardner’s (1983)

Frames of Mind formally introduced multiple intelligence thinking to America’s educational

community. Gardner proposed that many different types of abilities make up human

intelligence. Eight intelligences—linguistic, logical-mathematical, visual-spatial, kinesthetic,

musical, naturalist, interpersonal, and intrapersonal—are presently included in Gardner’s MI

research. Unlike the general intelligence theory, the MI theory found some individuals are


stronger in different areas of intelligence than others. As noted earlier, many multiple

intelligence supporters do not dispute the accuracy of IQ tests in measuring analytical

intelligence. However, according to Gardner and his supporters, human intelligence is much too

complex to be summed up in an IQ test as proposed by supporters of general intelligence

(Shearer, 2004a).

While respected, the MI theory is not universally accepted. Multiple intelligence critics

find Gardner’s theory lacking empirical evidence and reliable assessments. The empirical

evidence MI critics seek requires scientific testing of Gardner’s theory. Scientific studies

attempt to control variables in hopes of isolating results. This has been problematic for Gardner

as he finds the MI theory is best understood in a social context. Testing the MI theory in a

broader, social setting as opposed to a controlled setting does not meet the requirements for

empirical study set by many MI critics (Shearer, 2004a). Gottfredson (1997), with the support of

52 intelligence theory experts, indirectly dismissed the multiple intelligence theory through an

editorial promoting general intelligence research and validating IQ as one of the most important

measurements of success. Multiple intelligence supporters, of course, think IQ measurement is

too narrow to indicate broad success in life (Sternberg, 1996).

Shearer (2004a) notes, however, MI critics often support the general intelligence theory

because it can be supported by problem-solving assessments that search for a single, reliable

answer. Multiple intelligence thinking challenges educators to view intelligence and student

abilities more broadly. The broader view of intelligence, however, is more difficult to research

than general intelligence. Although not as widely accepted and utilized as the IQ test used to

measure general intelligence, the Multiple Intelligences Developmental Assessment Scales

(MIDAS) was created in support of Gardner’s MI theory. The assessment asked respondents a


series of questions about level of participation (not at all, fairly good, good, very good, excellent,

and I don’t know) in a variety of life activities. Each activity in the questionnaire is thought to relate to one of Gardner’s intelligences. The responses to the MIDAS questionnaire are

translated into a profile that aids in identifying multiple intelligences in the respondents. While

its validity and reliability are still being studied, it is considered useful when supported by

qualitative feedback from individuals who take the test. Unlike the IQ test—which is often

viewed by its supporters as an accurate measurement of one’s general intelligence, the MIDAS is

not viewed as a definitive multiple intelligence profile. In fact, multiple intelligence theorists are

slow to utilize the MIDAS or any other quantitative measurement to analyze their broader view

of human intelligence. Once again, the MI theory is to be understood in a social context.

According to Shearer (2004a), the MI theory cannot be precisely quantified, like general

intelligence. The MIDAS serves primarily as an educational tool for educators to develop

effective learning strategies geared toward individuals’ strengths (Shearer, 2004b).

Ironically, MI critics note the lack of universally accepted measurements for multiple

intelligences, yet the validity of standardized tests has also come into question. Most prominent

in the literature is the criticism of Herrnstein and Murray’s (1994) Bell Curve, which argued IQ test scores are the best predictor of success in life. Critics, like Sternberg (1996), find IQ to account for only about 10% of the variation in real-world performance. Put another way, “more than 90% of the variation

we see in performance is not accounted for by conventional ability tests” (p. 18).

Foote (2007) also noted new studies indicating many of our states’ standardized tests are

not accurately measuring student learning. Simply put, it appears the scores of some

standardized tests are inflated. In particular, numerous states have seen a drastic rise in

scores on standardized tests implemented in conjunction with NCLB legislation, yet a separate


national test (called the National Assessment of Educational Progress) of reading and math

scores has remained largely stagnant. Furthermore, if students are doing better on tests designed

to accurately gauge learning standards, logically, they should be performing better at post-secondary institutions utilizing those academic standards. However, reports of rising student

struggles beyond high school showed the states’ improvements in formal assessments may have

indicated an inaccurate account of student learning. The criticisms of testing point to a need for

further assessment research.

Successful Intelligence

Most important in this review is Sternberg’s (1997a) successful intelligence theory.

Sternberg is at the forefront of the opposition to America’s overreliance on standardized,

analytical tests in evaluating student abilities and public school teaching effectiveness. Sternberg

has made a career studying human intelligence and has theorized that analytical intelligence (the

focus of standardized assessments and Intelligence Quotient tests) makes up only one component

of successful intelligence. Sternberg’s studies sought out successfully intelligent people, as

determined by their peers, in a variety of educational and occupational environments. Sternberg

(1996) analyzed data collected about successfully intelligent individuals and concluded that they

tended to manage their strengths and weaknesses well, focused effectively on goals, creatively

adapted to changes in their environment, and followed through on ideas.

Similar to Gardner (1983), Sternberg (1997a) developed a broader view of intelligence than

proponents of general intelligence. Instead of following Gardner’s multiple intelligence model,

however, Sternberg categorized intelligence into three categories: analytical, creative, and

practical intelligences. Analytical intelligence involves academic skills tested most commonly

on IQ tests (reasoning, logic, and comprehension). Creative intelligence involves adapting prior


knowledge and skills to new circumstances or situations. Practical intelligence involves utilizing

strengths and compensating for weaknesses effectively in everyday life. Balancing all three

categories of intelligence in Sternberg’s triarchic theory is considered important to success in

life.

Translating his research to education, Sternberg (1996) contended America’s schools over-rely on testing and teaching to analytical thinking. Schools, according to Sternberg, should be

most concerned about assisting students in becoming “highly competent…on the job, in personal

relationships, and in other aspects of our daily lives” (p. 19). Intelligence Quotient and analytical assessments matter in predicting success in life, according to Sternberg, but not nearly as much as

other skills he associates with successful intelligence. If an individual has an IQ score of less

than 70—which is considered mentally retarded—Sternberg found such individuals were less apt to be successful in everyday living. Beyond looking at the extreme of someone who is classified

mentally retarded, Sternberg found very little correlation between academic, analytical

intelligence and being successful in life. More simply put, just because an individual is

successful on standardized, analytical tests, this does not necessarily mean he or she will be

successful as an employee, in personal relationships, and in everyday living. For Sternberg, it is the practical and creative skills not tested on IQ-like tests that make individuals most successful in

life.

Recognizing a need for analytical skills—like comprehension, logic, critical thinking, and

reasoning—educators cannot discard analytical assessments as useless. However, Sternberg’s

work pushes America’s schools to look beyond exclusively using analytical assessments.

Successful intelligence thinking challenges educators to examine student abilities in a larger

“sociocultural context” (Sternberg & Grigorenko, 2004, p. 274). One component of teaching,


according to the successful intelligence model, involves diversifying instruction and assessment

to accommodate each student’s strengths. One of Sternberg’s (1997b) most important studies

involved matching students’ innate abilities with instruction. The high school students in his

successful intelligence study were divided according to their analytical, creative, or practical

thinking abilities and taught in ways matching their abilities. To evaluate the divergent

instruction, all three groups of students were assessed in the same way. They were given a

multiple-choice test, followed by three different essay questions asking them to use their

acquired knowledge in an analytical, a creative, and a practical way. As anticipated, all three

distinct groups of students performed better overall on the assessment than the study’s control

group. Sternberg’s study indicated students learn better when they are taught according to their

intellectual strengths.

Noticeably similar to Sternberg’s thinking is the work being done through the New York

Performance Standards Consortium (Foote, 2007). This coalition of schools has abandoned the

traditional graduation testing used in New York’s public high schools and utilized inquiry-based

assessments. The students were required to complete project-based assessments involving

reading, writing, critical thinking, research, discussion, argument development, and formal

presentation of their knowledge. Isolating and evaluating the learning in these project-based

assessments is problematic. However, what is most interesting and similar to Sternberg’s

thinking is the real-outcome evaluation of the consortium’s efforts. Sternberg (1996) identified

successful intelligence “as the kind of intelligence you need to succeed in the real world” (p. 19).

The consortium’s success is calculated by how students perform in the “real world” after high

school. The post-graduation study tracks how many students enroll in academic settings and

how those students perform in their new schools. The success of the consortium in the “real


world” is difficult to analyze, with virtually no comparable schools offering similar, “real-world”

type data. However, the schools enrolled in the coalition noted a significant overall rise in

student achievement beyond high school since the consortium was created (Foote, 2007). The

success of the New York Performance Standards Consortium and its near abandonment of

standardized testing give credence to Sternberg’s and other multiple intelligence supporters’ call

for alternative student achievement measurements in America’s schools.

Affective Response to Assessments

While Sternberg’s recent research shows how diversified teaching and assessments can

contribute to higher achievement for a wider variety of students, his work is lacking an

examination of how students affectively respond to alternative assessments. Ironically,

motivating students is a common concern among educators (Motivating students, 1996), yet the

literature involving affective responses to assessments is incomplete. Certainly, measuring

attitudes, motivation, and emotions is complicated and difficult to quantify. A scan of the

educational literature uncovers too few studies revealing student reactions to assessments.

Nevertheless, some recent research indicates the emphasis put on standardized testing is

adversely affecting student motivation. Nichols and Berliner (2008) have examined numerous

case studies related to standardized testing. They propose that student motivation is waning

because of the focus on standardized testing. A number of other researchers—such as Merrow

(2001), Rothstein (2008), Foote (2007), Sternberg (2006), and Gardner (2009)—reported

standardized testing adversely affecting student motivation. However, Nichols and Berliner

(2008) noted the “research has not fully examined the impact of this test-dominated school

environment on students' attitudes and dispositions toward learning” (p. 14).


While lacking the direct research revealing student reactions to standardized testing, the

literature does reveal some studies about teacher perception of student reactions to formal

testing. One survey study found 41.4% of teachers viewed standardized testing as negatively

affecting student learning, compared to only 6% of teachers who saw it as positive (Jones,

Jones, Hardin, Chapman, Yarbrough, & Davis, 1999). Another indirect study by Hoffman,

Assaf, & Paris (2001) looked at reports of student illness during standardized testing, which could indicate test anxiety. In this elementary school study, teachers reported that over a third of their students often had upset stomachs during standardized testing. Admittedly, these

studies—based on teacher feedback—are hardly adequate for accurately gauging student

motivation toward assessments. The void in the literature regarding students’ attitudes in relation to

assessments shows the need for further research.

Creative Assessments

As previously indicated, the affective response of students to assessments has not been

extensively documented. However, the development of creative assessments as a tool and

predictor of achievement is underway. These new assessments give merit to the push for

alternative views of student achievement. These assessments move beyond general intelligence

testing to identify talents of individuals.

In the Aurora Project (Sternberg, Grigorenko, & Jarvin, 2006), researchers are having

success identifying gifted and talented children (grades 9-12) in ways beyond general analytical

intelligence. Students in this study are given nine different tests asking them to respond

figuratively, verbally, and quantitatively in creative, practical, and analytical assessments. Some

test examples from the Aurora battery include matching various images, writing a story plot to

go with an illustration, explaining a relationship between two dissimilar nouns, categorizing information into pro/con lists to make a choice about a tough decision, and comparing different

routes on a map. Under this assessment, students do not necessarily have to score high in all the

categories to be classified gifted. However, the application of the Aurora battery will depend on

the definition of “gifted” in various high schools. While still in the works, the Aurora Project

hopes the assessment tool will lead to the identification of a larger number of talented and gifted

students whose talents are often overlooked in the general intelligence assessments.

Similarly, Sternberg, Grigorenko, & Jarvin (2006), through the Rainbow Project, used a

battery of entrance tests, including a creative assessment, to better predict student success in

college. The triarchic assessment—measuring creative, analytical, and practical intelligences—

used in the Rainbow Project is considered a better predictor of college grade point average

(GPA) than the commonly used Scholastic Aptitude Test (SAT). The SAT is advertised as

accurately measuring literacy, mathematical, and writing skills needed for college success.

Sternberg, Grigorenko, & Jarvin (2006) acknowledge the validity of the SAT as a predictor, but

the Rainbow Project’s additional assessments better predicted student GPAs in college. Central to the Rainbow battery are its tests related to creativity. The Rainbow battery included writing two short stories with titles chosen from a list of unique options. The creative essays were graded by

multiple assessors on novelty, quality, and task-appropriateness. The researchers showed that

the creative assessment, while not perfect and in need of further adjustment, helped better

predict success among college students than the traditional SAT.

Further promising research in the area of creative assessment can be found at Glendale

(AZ) Community College (GCC), where the entire curriculum focuses on the multiple

intelligence theory. The Multiple Intelligences/Learning for Understanding (MI/LfU) study

allowed students to creatively present their acquired understandings through drama, dance, art,


writing, music, and presentations. All work is assessed according to creativity, performance,

organization, reflection, and understanding. Glendale Community College sought qualitative

feedback from students following each MI/LfU course. An analysis of this research showed a

high percentage of MI/LfU students felt the use of imagination and creativity added excitement

to their learning. Furthermore, many students felt class participation was higher and teachers

seemed more energized in MI/LfU courses (Díaz-Lefebvre, 2004). A follow-up comprehensive

survey of graduates confirmed earlier findings with graduates indicating increased student

motivation, longer retention of academic material, and higher satisfaction with learning compared

to traditional methods (Díaz-Lefebvre, 2006).

Related to this discussion is the movement toward 21st century skills. Twenty-first

century skills promote lifelong learning and informational literacy using modern resources. As the title implies, these skills are deemed necessary for success in the modern era. These higher-order skills require students to create with the knowledge they acquire (Stripling, 2008). In a

push to promote and measure 21st century skills, Silva (2009) cites several studies showing the

growing movement toward teaching and measuring higher-order skills along with the acquired

knowledge often measured in standardized tests. For instance, the College Work Readiness

Assessment (CWRA) has shown effectiveness in testing students’ abilities to problem-solve

using modern resources and higher-order skills. The CWRA has students use online newspaper

editorials and research reports to solve real-world dilemmas, like urban pollution and inadequate

healthcare. Students’ written solutions are assessed according to their understanding of

economic, social, and environmental impacts of the real-world problems.

Silva (2009) also noted studies and programs currently underway by the Department of

Education attempting to measure multiple skills through several short-term assessments.


Programs, like River City, allow online students to test hypotheses and study in a virtual world.

Using the River City software, middle school science students role-play as urban scientists given

the task of exploring the cause of diseases that strike the imaginary, online city. Using scientific

steps, the online students learn how to investigate problems, develop hypotheses, and identify

causes of the problem. What is helpful to teachers is the ability to monitor students’ activity in

the program and assess students’ level of scientific inquiry and study. Without ever giving a

formal assessment, the instructor can observe and assess learning as students progress through

the simulation.

Wilhelm & Wilhelm (2010) add to the alternative assessment research with favorable

results of their inquiry instructional methods. Comparable to other creative assessments,

Wilhelm & Wilhelm had students engage in inquiry learning centered on answering essential

questions through creative projects. The instructional methods used by these researchers relied

less on fact recall and teacher-driven lessons and more on student inquiry and conceptual and

procedural learning. For instance, when studying the explorations of Lewis and Clark, the unit

was centered on a broader question like “Why do we explore?” This type of instruction allowed students to guide their own learning and study beyond the historical facts related to Lewis

and Clark’s expedition. The inquiry study found “inquiry to be highly motivating for students,

particularly reluctant ones” (p. 44). Wilhelm & Wilhelm documented favorable, affective

responses from students in their inquiry-oriented study. The study found students were more

motivated when they determined the direction of their studies.

Also relevant to this review is an alternative, instructional model of historical inquiry

promoted by Steeves (2005) in Integrating Inquiry across the Curriculum. Steeves (2005)

pushed for instructional methods promoting historical understanding. Steeves drew on research


from John Dewey and other theorists in outlining support of learning involving activity.

Creative assessments often expect students to display acquired knowledge in a personal and

inventive manner. This act of inquiry and engagement is thought to lead to deeper learning.

Steeves (2005) encouraged the use of assessments that “…enable students to demonstrate how

they made sense of multiple, discrete pieces of information” (p. 75). Incidentally, Steeves

pointed out how modern standardized testing trends in education contradict the inquiry learning

promoted in creative assessments. Traditional testing too often, as Steeves explained, tries to measure knowledge through multiple-choice questions seeking a single answer unconnected to other knowledge. Historical inquiry methods challenge students to look beyond solitary answers.

The review of the literature related to human intelligence and assessments indicates a need

for further study in those areas. The research community is still divided over the narrower view

of intelligence (general intelligence) and a broader view of intelligence (multiple intelligence).

The validity of assessments related to both general intelligence and multiple intelligence is still

questioned. Furthermore, the literature is not thorough regarding how students affectively react

to various assessments and instruction. While the studies related to the use of alternative,

creative assessments and instruction show promise for improving student motivation and learning, more research needs to follow.


Chapter III: Methodology

A review of educational literature (Chapter 2) uncovered opposing views on human

intelligence, controversy over the tests that measure human abilities, and too little research

tracking student feedback on tests. Multiple intelligence theorists (Gardner, 1983; Sternberg,

1996), in recent decades, challenged the traditional general intelligence theory and its

Intelligence Quotient (IQ) test as being too narrow a measurement of human abilities. Despite

the challenge put forth by multiple intelligence theorists, the support for general intelligence is

still going strong (Gottfredson, 2004). The use of IQ-like testing is more prevalent than ever

with most of America’s public schools required to raise a higher percentage of their students’

standardized testing scores to proficient or advanced levels (Rothstein, 2009).

Recent multiple intelligence research (Díaz-Lefebvre, 2004; Silva, 2009; Sternberg,

Grigorenko, & Jarvin 2006; Wilhelm & Wilhelm, 2010), however, has shown some promising

findings regarding the use of alternative, creative assessments to identify student learning. While

these studies indicated alternative, creative assessments led to greater student achievement, the

research is still lacking clear disclosure of how students react and respond to creative

assessments. The purpose of this researcher’s qualitative study was to evaluate how middle

school students with distinctive creative or analytical intelligences affectively respond to a

creative assessment. This study sought to evaluate how creative assessments affect students

traditionally exposed to analytical testing. The study’s design and methodology are outlined in

detail in this chapter.

Subject Selection and Description

The study took place in the researcher’s 2010 summer social studies class at a middle

school located in a midsize, southern Wisconsin community. The site of the study was one of


three middle schools in an urban community with a population of about 60,000. The middle

school used for the study consisted of grades six through eight and served about 700 of the

school district’s 10,000 students during the 2009-2010 school year. The middle school site of

this study was composed of students mostly from middle-class families. However, over 40% of

the student population at the middle school came from low-income homes (as determined by the

federal lunch program). The middle school is largely homogeneous with over 80% of its

population classified as white, non-Hispanic. The remainder of the school’s demographics

consisted of 8% Hispanic, 6% black, 2% Asian, and 1% American Indian (School District of

Janesville, 2009).

The subjects of the study were a convenience sample that consisted of four summer

school students enrolled in the researcher’s seventh grade social studies class. A total of 51 students were enrolled in the middle school’s summer program and

divided by grade among the researcher’s three social studies classes. While all the summer

school students participated in the creative assessment, this study focused on four of the 15

students enrolled in the researcher’s seventh grade class. The subjects were screened through an

informal analysis of student performance on analytical and creative assessments completed in the

first three weeks of the four-week summer program. Two students, one male (Subject 1) and one

female (Subject 2), were selected for their advanced performances (consistent scores above 90%)

on three previously taken analytical tests. The other two subjects, one male (Subject 3) and one

female (Subject 4), were selected for their advanced performance (displaying acquired

knowledge in novel ways) on two previously completed alternative, creative assessments (Table

1). The two sets of subjects were chosen for their contrasting perspectives (analytical and creative) on the study’s creative assessment.


Table 1

Subject Profiles

Interviewee    Gender    Informally Identified Aptitude    Grade Level
Subject 1      Male      Analytical                        7
Subject 2      Female    Analytical                        7
Subject 3      Male      Creative                          7
Subject 4      Female    Creative                          7

Instrumentation

The study’s focus and instrumentation involved petitioning the subjects—through an open-ended questionnaire and a follow-up interview—for their reactions to the creative writing project.

The qualitative study was designed to record and analyze the thoughts and attitudes of middle school students with distinctive creative or analytical intelligences toward the creative assessment.

The researcher utilized an electronic, written questionnaire (Appendix A) to seek student

responses after a creative writing project with the intent of gathering attitudes and reactions to

the creative assessment. The written questionnaire asked the subjects the following open-ended

questions:

1. What do you feel you have learned through this creative project?

2. What is your overall attitude toward creative projects, like the one you just completed?

3. What, if anything, did you find difficult about completing this creative project?

4. How do you feel you best learn?

Qualitative feedback was also collected from the four subjects through individual interviews

after the completion of the written questionnaire. The follow-up questions used in the interviews

were dictated by each student’s responses to the written questionnaire. In general, the researcher

asked the interviewees to expand on their comments in the written questionnaire. The

interviewees were also asked to provide specific examples to some of their general written


impressions of the creative project. The oral responses of the interviewees often prompted

further inquiry and clarification of interviewees’ comments.

Data Collection Procedures

The overall design of the qualitative study involved collecting and analyzing the thoughts and reactions of students enrolled in the researcher’s social studies class to a creative assessment. Before data collection, the researcher informally screened and selected (by the

criteria outlined earlier in this chapter) four of the 15 students enrolled in the researcher’s

seventh grade class. Data collection then began with the four subjects and their parents

completing the study’s consent form (Appendix B). However, before implementing the

alternative assessment and the collection of student feedback to the assessment, the subjects

spent two 50-minute class periods researching the history of the Oregon Trail. The research

phase (outlined in Appendix C) allowed the students to acquire historical background knowledge

necessary for completing the alternative assessment. The research process involved viewing an

introductory video (HowStuffWorks, n.d.) and reading an historical synopsis (“Just for kids,”

n.d.) of what the Oregon Trail experience was like for pioneer children. Students also reviewed

online primary sources, including a letter (“Much esteemed,” 1850) and a diary (Siedler, 1895-

1896) written by pioneers on the Oregon Trail. Students added to their research by exploring an

online glossary of slang words used by 19th century pioneers (“Old west legends,” 2003).

Students recorded pertinent notes about the Oregon Trail experience in a Word document

template (Appendix D) created by the researcher.

Following the research and note taking phase, the subjects engaged in an alternative

assessment of their acquired knowledge about the Oregon Trail experience. The study’s creative

assessment involved each subject displaying his or her acquired historical knowledge in an


imaginative letter written from the perspective of a pioneer child traveling on the Oregon Trail.

In 200- to 400-word letters, students were instructed to incorporate seven to ten facts about the

Oregon Trail experience, use historically accurate dates, and integrate slang from the pioneer

days. Students were given creative liberty to construct the letters in descriptive ways, spell

words incorrectly, doctor the letters to look aged, and add imagery and items to the letter. The

students completed the creative assessment over two 50-minute class periods. The creative

letters were assessed through the use of a simple scoring rubric (Appendix E) that evaluated the

use of historical knowledge and creativity by the students. The evaluation process, however, was

not part of the study’s focus.

The day following the conclusion of the creative assessment, the data collection most

relevant to this qualitative study began with the subjects responding to a written questionnaire

(Appendix A). The questionnaire was administered electronically so students typed their

responses to the open-ended questions into an electronic document and saved their work to an

electronic folder set up by the researcher. The subjects were instructed to take as long as they

needed to complete the questionnaire. All the subjects completed their typed responses to the

questionnaire in less than 20 minutes.

Further qualitative feedback was gathered in interviews the next school day after the

administration of the written questionnaire. While the other students engaged in an independent

learning activity, each of the four subjects was interviewed in a private, one-on-one conversation

with the researcher for approximately 12 minutes. The interviewees were asked to expand on

their thoughts about the creative assessment and their written comments in the questionnaire.

The researcher took typed notes of the interviewees’ oral responses. The notes from the oral

feedback were added to the subjects’ written feedback for data analysis by the researcher.


Data Analysis

The data analysis involved the researcher analyzing the subjects’ responses to the post-creative project questionnaire and the notes gathered in the subsequent interviews. Student

responses to the open-ended questionnaire were in the form of sentences and paragraphs written

by the students in their own words. Using the typed notes of the oral interviews and the subjects’

typed replies to the questionnaire, the researcher highlighted common attitudes and reactions of

the students to the creative assessment. The researcher then conducted a comparative analysis of

the subjects (Subjects 1 and 2) selected for their proficiency on previous analytical assessments

and the subjects (Subjects 3 and 4) selected for their proficiency on previous creative

assessments. The comparative analysis concentrated on identifying contrasting reactions and

attitudes of the two sets of subjects to the creative assessment. Lastly, the researcher identified

the reactions distinctive to each subject as part of the data analysis.

Limitations

This study had several procedural limitations. The duration of the study, the small pool

of potential subjects for the study, and the qualitative approach prevented this study from being

reasonably generalized to any larger samples. The summer school program’s timeline of four

weeks only allowed the researcher seven 50-minute class periods for the study’s procedures and

data collection. The study’s potential pool of only 51 students enrolled in the summer program

also limited the screening and selection of the study’s subjects. Of course, the qualitative

methodology used in this study prevented the researcher from collecting quantitative data for

analysis. The researcher’s data analysis was restricted to the student feedback following the

study’s creative assessment. These limitations restricted the use of the study’s results, which are discussed in the subsequent chapter.


Chapter IV: Results

Recognizing a need for more educational research related to the use of creative

assessments, the purpose of this study was to identify and analyze the attitudes and reactions of

middle school students following the completion of a creative assessment. The overall

methodology involved soliciting student feedback to a creative assessment through a written

questionnaire and a follow-up interview. The researcher also designed this study to compare and contrast feedback following a creative assessment from students traditionally strong in analytical assessments and students traditionally strong in creative assessments. As a result, the

study involved two contrasting sets of subjects. Two students (Subjects 1 and 2) were selected

for the study after showing aptitude on previous analytical tests. The other two students (Subjects

3 and 4) were selected for the study after showing ingenuity on previous creative assessments.

Detailed in this chapter is the qualitative feedback provided by the four subjects after the

completion of a creative assessment. An overall analysis of the attitudes and reactions of the

subjects to the study’s creative assessment is provided as well as a comparative analysis of the

analytical and creative-minded students’ reactions.

Overall Analysis

This qualitative study sought the affective responses of students to a creative assessment.

The middle school students involved in the study researched and took notes on the history of the

Oregon Trail. They then, as part of the creative assessment, used their newly acquired

knowledge in writing a creative letter in the character of a pioneer child on the Oregon Trail.

Following the creative assessment, the study’s subjects were asked in a written questionnaire and

a follow-up interview about the learning involved in the creative assessment, about their attitude


toward creative projects, and about their opinions of the creative project. The subjects’ feedback was

documented and analyzed by the researcher.

The data analysis revealed some overall patterns in the student feedback to the creative

assessment. All four subjects’ initial reactions (documented on the post-creative-assessment

written questionnaire) to the creative project were positive. In answering “What is your overall

attitude toward creative projects, like the one you just completed?” on the written questionnaire,

Subject 1 wrote “it was neat.” Subject 2 shared she “liked it alright.” Subject 3 wrote “it was

awesome!!!” Subject 4 wrote she “really enjoyed it because we got to learn new things.” While

the level of enthusiasm varied, the initial reactions were generally upbeat about the

creative assessment.

The subjects’ opening feedback on the questionnaire also indicated an apparent positive

reaction to the learning involved in the creative project. The subjects reported substantial learning when

asked in the questionnaire “What do you feel you have learned through this creative project?” and

“How do you feel you best learn?” Each of the subjects shared multiple things learned through

the project about the Oregon Trail experience—including food eaten, games pioneers played,

encounters with Indians, weather hardships, daily chores, use of covered wagons, hunting for

food, and more. Three of the four subjects explicitly wrote they learned “a lot.” Supporting this

positive learning feedback was the advanced scoring achieved by all four of the study’s subjects

on the project evaluation (Appendix E).

More impressive than the students’ initial reactions to the assessment was the enthusiastic

feedback provided by all four subjects in the follow-up interviews. After the completion of the

written questionnaire, each of the four subjects was interviewed in a private, one-on-one

conversation with the researcher for approximately 12 minutes. In general, the interviewees


were asked to expand on their thoughts about the creative assessment and their written comments

in the questionnaire. The researcher noted that all four subjects spoke enthusiastically in the

interview about the creative assessment. When asked what was “neat” about the historical letter

project, Subject 1 said he “loved being able to write with” his own words. When asked what she

“liked” about the creative project—Subject 2 said, “It was great to have the freedom to make the

letter however I wanted.” Subject 3 said, “Everything about it was awesome. I liked looking at

the old letters and making mine look like the old ones.” When asked what she most enjoyed

about the project, Subject 4 said she thought “it was fun learning about the slang” used during

the pioneer days. The researcher also observed that all four subjects smiled and spoke

extemporaneously for minutes at a time during the interviews—which may indicate further

positive affective reactions to the creative assessment by the subjects. Regardless, the researcher

noticed no overtly negative attitudes or reactions to the creative assessment in the written or oral

feedback of the students.

Comparative Analysis

This creative assessment study also focused on qualitative feedback from two distinct sets

of subjects. The researcher compared and contrasted feedback following a creative assessment

from students traditionally strong in analytical assessments and students traditionally strong in

creative assessments. The subjects were screened through an informal analysis of student

performance on analytical and creative assessments completed in the researcher’s class before

the study’s project was implemented. Two students, one male (Subject 1) and one female

(Subject 2), were selected for their advanced performances (consistent scores above 90%) on

three previously taken analytical tests. The other two subjects, one male (Subject 3) and one

female (Subject 4), were selected for their advanced performance (displaying acquired


knowledge in novel ways) on two previously completed alternative, creative assessments (see

Table 1, p. 31). The two sets of subjects were chosen for their contrasting viewpoints (analytical

and creative) of the study’s creative assessment.

While both sets of subjects shared enthusiasm in their responses to the study’s creative

assessment, the study’s investigator observed a stronger fervor in the distinctly creative students’

(Subjects 3 and 4) responses. For instance, Subject 3 went so far as to say in the follow-up

interview that this was the “best project” that he had ever done. Subject 4 added in her interview

that she was “proud of” her letter and wished she “could do things like this more often in

school.” Adding to this observation of greater enthusiasm by Subjects 3 and 4, the researcher

noted more novel ideas in their assessments than the distinctly analytical students (Subjects 1 and

2). Impressively, Subject 3 cleverly constructed his letter with several drawings related to the

letter in the margins. Equally ingenious, Subject 4 brought in a pressed flower to include with

her letter. These extraordinary efforts on the creative assessment by the creative-minded subjects

indicated greater enthusiasm for the project than that shown by the distinctively analytical subjects.

The distinctly analytical students (Subjects 1 and 2) were positive about the assessment

(as noted earlier in this analysis); however, they did not speak with as much intensity in the

interviews about the learning experience as the distinctly creative students. For example, when

Subject 1 was asked what he liked about the project beyond the freedom to write with his own

words, he indifferently said “getting it done.” Furthermore, when asked in the interview if she

would like to do more projects similar to the historical letter project, Subject 2 said, “Maybe, I

don’t know. It depends if it is hard or not.” Subjects 1 and 2’s comments indicate less

enthusiasm for the creative assessment than the comments of the distinctly creative students.


The researcher’s cross analysis of the feedback of the distinctively analytical subjects and

the distinctively creative subjects uncovered one notable commonality. Both sets of subjects,

through the interviews, expressed a preference for the creative assessment over traditional,

analytical tests. In the post-assessment interview, the researcher asked each of the study’s

subjects “If you had the choice of taking a multiple-choice test or writing a creative historical

letter after doing your research—which would you rather do?” All four subjects said they would

rather write a creative letter. As expected, Subject 3 and Subject 4 (of the distinctly creative set)

found the letter project to be more motivating than taking a multiple-choice test. Subject 3 said

he liked creative assessments better because he does “better on projects—like this one—than

regular tests.” Similarly, Subject 4 bluntly stated, “I hate multiple-choice tests.”

Unexpectedly, Subject 1 and Subject 2 (both of whom scored well [over 90%] on all three

multiple-choice tests administered in the researcher’s class) articulated a preference for the

creative assessment over multiple-choice tests. Subject 1 stated that even though he did well on

multiple-choice tests he found them “boring.” Subject 2 (of the analytical set) explained, “At

least in the writing project I get to do something besides just circle letters” (which was the

method of answering on the previous multiple-choice tests). These comments reveal both sets of

subjects preferred the creative assessment over traditional multiple-choice tests.

Summary

The analysis of the subjects’ qualitative feedback revealed generally positive attitudes

and reactions to the study’s creative assessment. The subjects’ comments gathered through the

study’s written questionnaire and student interviews indicated favorable opinions of the creative

assessment. A comparative analysis of the subject feedback found a stronger enthusiasm for the

creative assessment among the distinctively creative subjects than the distinctly analytical


subjects. However, further cross-examination of the feedback exposed a consistent preference for

creative assessment work over traditional multiple-choice tests among all the subjects. The

overall positive feedback to the study’s creative assessment warrants further discussion in the

following chapter.


Chapter V: Discussion

Motivating students to learn is often listed by educators as their greatest challenge in

teaching (Motivating students, 1996). The premise for this study began with the researcher

observing a significant number of his students with apathetic and negative attitudes toward

traditional multiple-choice tests. Similarly, recent literature suggests the overreliance on

standardized, analytical testing has adversely affected student motivation (Nichols & Berliner,

2008). This new literature challenged those theorists (Gottfredson, 2004; Herrnstein & Murray,

1994) who traditionally supported the educational use of standardized, analytical tests to measure

general intelligence.

Gardner (1983) first argued that traditional testing assessed a relatively narrow range

of intelligence and campaigned for a broader, multiple intelligence view of human abilities.

Furthering the discussion on testing, Sternberg (1996) theorized that our schools’ overreliance on

traditional, analytical testing had done a disservice to many students with weak analytical skills.

Sternberg’s (1997a) research found students with only basic analytical intelligence could be highly

successful in schools and life by using broader abilities, like practical and creative intelligences.

According to Sternberg and Grigorenko (2004), schools’ overreliance on traditional testing,

however, appeared to leave students with weak analytical skills struggling to find recognition for

their innate talents.

Recent multiple intelligence research (Díaz-Lefebvre, 2004; Silva, 2009; Sternberg,

Grigorenko, & Jarvin, 2006; Wilhelm & Wilhelm, 2010), however, has shown some promising

findings regarding the use of alternative, creative assessments. While these studies indicated

alternative, creative assessments led to greater student achievement, the research is still lacking

clear disclosure of how students react and respond to creative assessments. This researcher


attempted through this qualitative study to gain a better understanding of how his middle school

students affectively respond to creative assessments. For closer examination, the researcher

compared how his students with distinctly analytical intelligence and his students with distinctly

creative intelligence responded affectively to non-traditional tests.

Limitations

This study was not without its limitations. The short duration of the study and its

qualitative methodology limited the range and scope of this study. The 2010 summer school

program, where the researcher conducted the study, was limited to four weeks. As a result, the

researcher managed only seven 50-minute class periods for the study’s procedures and data

collection. By design, the qualitative methodology used in this study prevented the researcher

from collecting quantitative data for analysis. As a result, the researcher’s data analysis was

restricted to the student feedback following the study’s creative assessment.

The study’s potential pool of only 51 students enrolled in the summer program also

limited the screening and selection of the study’s subjects. The researcher’s small pool of

available students made it difficult to select a large number of subjects with the distinctive

abilities (creative and analytical) required by the study’s purpose and still maintain grade-level

consistency and gender equity in the subject selection. Also, to enable time for students to

provide thick descriptions of their reactions to the creative assessment through written questionnaires and

interviews, the study’s qualitative data collection was restricted to four subjects.

Conclusions

Despite its limitations, this study revealed important student reactions and attitudes to the

creative assessment created by the researcher. The subjects’ comments gathered through the

study’s written questionnaire and student interviews indicated favorable opinions of the creative


assessment. A comparative analysis of the subject feedback found a stronger enthusiasm for the

creative assessment among the distinctively creative subjects than the distinctly analytical

subjects. Notably, further cross-examination of the student feedback exposed a consistent

preference for creative assessment work over traditional, multiple-choice tests among all the

subjects. The researcher expected the subjects with a propensity to do well on previous creative

assessments to find the study’s creative assessment more motivating than traditional, multiple-

choice tests. However, the researcher was surprised that the distinctively analytical subjects

spoke more favorably of participating in creative assessments than taking multiple-choice tests.

The distinctively analytical students were highly successful on previous multiple-choice tests, yet

they surprisingly did not find multiple-choice assessments to be as motivating as the study’s

creative assessment.

Many of the study’s findings were consistent with the educational literature. For instance,

consistent with some of the literature, all four subjects in the study had negative attitudes about

multiple-choice tests. Nichols and Berliner (2008) examined numerous case studies related to

standardized testing and proposed that student motivation is weakened because of the focus on

standardized testing. A number of other researchers—such as Jones, Jones, Hardin, Chapman,

Yarbrough, and Davis (1999), Merrow (2001), Sternberg (2006), Foote (2007), Rothstein (2008),

and Gardner (2008)—reported standardized testing adversely affecting student motivation. The

negative feedback about traditional testing from this study’s subjects concurs with the

aforementioned researchers’ reports.

The study’s results also seem to support Gardner’s (1983) and Sternberg’s (1996)

promotion of a broader view of intelligence. Both these researchers were critical of the

overreliance on Intelligence Quotient (IQ)-like, standardized testing in America’s public schools.


Both of the multiple-intelligence researchers also theorized that the general intelligence,

promoted through standardized testing, identifies only a portion of human intelligence and

abilities. In this study, two of the subjects (Subject 3 and Subject 4) had struggled on previous

analytical assessments, yet were advanced at displaying their acquired historical knowledge in

the study’s creative assessment. Also through the study’s qualitative data collection, the

distinctly creative students responded very negatively about their experiences with multiple-

choice tests. Conversely, these distinctly creative subjects responded more enthusiastically to the

creative assessment than the study’s distinctly analytical subjects. This study’s findings seem to

support Sternberg’s (1997b) and Gardner’s (2009) premise that students weak in analytical skills

but strong in other abilities often go unrecognized as intelligent in school settings focused on

analytical testing.

This study’s overall positive results regarding creative assessments fit with the growing

body of research supporting the use of alternative assessments in schools. As noted earlier, recent

multiple intelligence researchers (Díaz-Lefebvre, 2004; Silva, 2009; Sternberg, Grigorenko, &

Jarvin, 2006; Wilhelm & Wilhelm, 2010) found promising results in their studies involving

alternative, creative assessments. These studies unveiled a range of alternative, creative

assessments—including the use of inquiry learning, drama, role playing, project-based learning,

creative writing, and online simulations—that showed positive results in the areas of student

learning and motivation. Similarly, this study broke from the traditional multiple-choice

assessment in assessing students’ acquired historical knowledge. For the study’s creative

assessment, students had to display their acquired knowledge in an imaginative letter written

from the perspective of a pioneer child on the Oregon Trail. Like the abovementioned

researchers, this researcher found that using a creative assessment resulted in positive feedback from his


subjects regarding motivation and learning. This study’s positive findings translate into a

number of recommendations to be discussed in the next section.

Recommendations

In a broad context, this study’s positive findings point to a need for further educational

research in the area of creative assessments. As referenced in the literature review, supporters of

general intelligence and IQ testing (Gottfredson, 2004; Herrnstein & Murray, 1994) criticized the

lack of empirical evidence supporting the broader view of human intelligence endorsed in many

creative assessment studies. Admittedly, the educational research promoting diversified

assessments is not yet comprehensive. However, the positive results of peer-

reviewed creative assessment studies (Díaz-Lefebvre, 2004; Silva, 2009; Sternberg, Grigorenko,

& Jarvin 2006; Wilhelm & Wilhelm, 2010) and this small study challenge the focus on IQ-like

testing in America’s public schools and warrants further study of alternative, creative

assessments.

In a narrower context and by design, this study’s findings most directly relate to the

researcher’s current and future instructional practices. Coupled with the encouraging literature

related to the use of creative assessments (Díaz-Lefebvre, 2004; Silva, 2009; Sternberg,

Grigorenko, & Jarvin, 2006; Wilhelm & Wilhelm, 2010), the positive student feedback

about this study’s creative assessment supports a continued diversification of assessment

practices in the researcher’s social studies classes. The premise for this study began with the

researcher observing a significant number of his students demonstrating apathetic and negative

attitudes toward traditional multiple-choice tests. The consistent positive feedback to the study’s

creative assessment contradicts the student apathy often observed by the researcher on

traditional, analytical tests. Admittedly, many variables factor into student motivation for


learning and the staging of successful creative assessments. However, the passionate responses

of the distinctly creative students, in particular, to the study’s creative assessment necessitate the

development of more creative assessments in the researcher’s evaluation practices.

The final recommendation relates to the methodology used in evaluating assessments. The

study’s successful procedures point the researcher toward more consistent use of the study’s data

collection process in the researcher’s classes. The use of the written questionnaire and the

follow-up interview proved to be an efficient and effective method for collecting the subjects’

reactions to the creative assessment. The qualitative student feedback was gathered in a relatively

short time, and its analysis proved valuable in validating the use of the study’s creative assessment.

A more frequent collection of student reactions to a wider variety and number of class

assessments could prove to be even more valuable in diversifying and, ultimately, improving the

researcher’s instructional practices.



References

Cuban, L. (2004). Assessing the 20-year impact of multiple intelligences on schooling. Teachers

College Record, 106(1), 140-146.

Darling-Hammond, L. (2007). Evaluating No Child Left Behind. Nation, 284(20), 11-18.

Dent, H. (1995). Everything you thought was true about IQ testing, but isn't: A reaction to The

Bell Curve. ERIC Document Reproduction Service, No. ED394096.

Díaz-Lefebvre, R. (2004). Multiple intelligences, learning for understanding, and creative

assessment: Some pieces to the puzzle of learning. The Teacher College Record, 106(1),

49-57.

Díaz-Lefebvre, R. (2006). Learning for understanding: A faculty-driven paradigm shift in

learning, imaginative teaching, and creative assessment. Community College Journal of

Research & Practice, 30(2), 135-137. doi:10.1080/10668920500433082

Foote, M. (2007). Keeping accountability systems accountable. Phi Delta Kappan, 88(5),

359-363.

Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic

Books.

Gardner, H. (2009). The five minds for the future. School Administrator, 66(2), 16-20.

Gottfredson, L. (1997). Mainstream science on intelligence: An editorial with 52 signatories,

history, and bibliography. Intelligence, 24(1), 13-23.

Gottfredson, L. (2004). Schools and the g factor. Wilson Quarterly, 28(3), 35-45.

Herrnstein, R. J., & Murray, C. (1994). The bell curve. New York: Free Press.

Hoffman, J., Assaf, L., & Paris, S. (2001). High-stakes testing in reading: Today in Texas,

tomorrow? Reading Teacher, 54(5), 482.


HowStuffWorks. (Producer). (n.d.). America from 1837 to 1844: The Oregon Trail [Video file].

Retrieved from

http://videos.howstuffworks.com/hsw/18259-america-from-1837-to-1844-the-oregon-trail-video.htm

Jones M., Jones, B., & Hargrove, T. (2003). The unintended consequences of high-stakes testing.

Lanham, MD: Rowman & Littlefield.

Jones, M., Jones, B., Hardin, B., Chapman, L., Yarbrough, T., & Davis, M. (1999). The impact

of high-stakes testing on teachers and students in North Carolina. Phi Delta Kappan,

81(3), 199.

Just for kids: Trail kids. (n.d.). Retrieved from

http://www.blm.gov/or/oregontrail/education-kids-trail.php

McNeil, M. (2010). Race to top round two heating up. Education Week, 29(30), 1-23.

Merrow, J. (2001). Choosing excellence: Good schools are not good enough. Lanham, MD:

Scarecrow Press.

Motivating students tops list of issues for principals. (1996). Reading Today, 13(6), 20.

Much Esteemed Friends. (1850, May 19). [Letter from unidentified author on Oregon Trail to

“esteemed friends”]. Overland Trails Diaries (MSS SC 114). Harold B. Lee Library,

Brigham Young University, Provo, Utah. Retrieved from

http://contentdm.lib.byu.edu/u?/Diaries,3551

Nettelbeck, T., & Wilson, C. (2005). Intelligence and IQ: What teachers should know.

Educational Psychology, 25(6), 609-630.

Nichols, S., & Berliner, D. (2008). Testing the joy out of learning. Educational Leadership,

65(6), 14-18.


Old west legends: Old slang, lingo, & phrases. (2003). Retrieved from

http://www.legendsofamerica.com/we-slang.html

Rothstein, R. (2009). Taking aim at testing. American School Board Journal, 196(3), 32-35.

School District of Janesville. (2009). Demographic and student membership report. Retrieved

from http://www.janesville.k12.wi.us/AboutUs/DistrictDemographics/tabid/277/Default.aspx

Shearer, B. (2004a). Multiple intelligences theory after 20 years. Teachers College Record,

106(1), 2-16.

Shearer, B. (2004b). Using a multiple intelligences assessment to promote teacher development

and student achievement. Teachers College Record, 106(1), 147-162.

Siedler, Amelia (1895-1896). Diary. Emilie and Marie Stapp Collection (DG0933). University of

Southern Mississippi Libraries (electronic version). Retrieved from

http://digilib.usm.edu/u?/degrum,625

Silva, E. (2009). Measuring skills for 21st-century learning. Phi Delta Kappan, 90(9), 630-634.

Steeves, K. (2005). History: Uncovering the past through inquiry. In Audet, R. & Jordan, L.

(Eds.), Integrating inquiry across the curriculum (pp. 65-84). Thousand Oaks: Corwin

Press.

Sternberg, R. (1996). IQ counts, but what really counts is successful intelligence. NASSP

Bulletin, 80(583), 18-23.

Sternberg, R. (1997a). Successful intelligence: How practical and creative intelligence determine

success in life. New York: Plume.

Sternberg, R. (1997b). What does it mean to be smart? Educational Leadership, 54(6), 20.

Sternberg, R. (2006). Successful intelligence: Toward a broader model for teaching and

accountability. Edge: The Latest Information for the Education Practitioner, 1(5), 3-18.


Sternberg, R., & Grigorenko, E. (2004). Successful intelligence in the classroom. Theory Into

Practice, 43(4), 274-280.

Sternberg, R., Grigorenko, E., & Jarvin, L. (2006). Identification of the gifted in the new

millennium: Two assessments for ability testing and for the broad identification of gifted

students. KEDI Journal of Educational Policy, 3(2), 7-27.

Stripling, B. (2008). Inquiry: Inquiring minds want to know. School Library Media Activities

Monthly, 25(1), 50-52.

Tashlik, P. (2010). Changing the national conversation on assessment. Phi Delta Kappan, 91(6),

55-59.

Wilhelm, J., & Wilhelm, P. (2010). Inquiring minds learn to read, write, and think: Reaching

all learners through inquiry. Middle School Journal, 41(5), 39-46.


Appendix A: Written Questionnaire

Reflection: United States History Pioneer Letters

1. What do you feel you have learned through this creative project?

2. What is your overall attitude toward creative projects, like the one you just completed?

3. What, if anything, did you find difficult about completing this creative project?

4. How do you feel you best learn?


Appendix B: Consent Form

Consent to Participate in UW-Stout Approved Research

Title: Identifying Affective Responses of Students to Creative Assessments

Investigator: Steve Strieker, Social Studies Teacher, School District of Janesville, 3125 Mineral Point, Janesville, WI 53548, 608-743-5805, [email protected]

Research Sponsor: Dr. Jim Lehmann, Program Adviser / Faculty, MSED Program--Online, School of Education, University of Wisconsin-Stout, Wisconsin's Polytechnic University, [email protected], Cell (509) 240-5029, www.uwstout.edu/programs/mse/online/

Description: Steve Strieker is conducting a study that will focus on student responses to open-ended questions following a creative social studies project. The study will ask students for their reactions following the assessment. The researcher will utilize a written questionnaire to seek student responses after the creative project with the intent of gathering attitudes and reactions to the creative project. The researcher will also interview the subjects to seek and record further reactions to the creative project. The researcher will identify common attitudes and reactions of the students to the project with the intent of improving future assessments utilized by the investigator. The creative project will be preceded by student engagement in historical research. First, students will complete some reading and note taking of an historical event. Students will then take some of the historical concepts and terminologies learned and design and present their findings in a creative manner.

Risks and Benefits: There are no foreseeable risks, physical and/or psychological, immediate, or long-range for the students involved in this study. The students will benefit from the opportunity to engage in alternate forms of assessment that might provide for better motivation to learn. Future students in the researcher's social studies classes will benefit from better-designed and differentiated assessments based on the feedback the researcher receives from the students in this study.

Special Populations: Students enrolled in the researcher’s social studies classes at Franklin Middle School (Janesville, WI) for the summer 2010 semester will participate in this study. Written consent of participants’ parents/guardians will be required.

Time Commitment: Students involved in this project can expect to participate in the research and creative assessment during three of their regularly scheduled social studies classes (50 minutes each). Completion of the questionnaire and interview will involve about one class period (50 minutes).


Confidentiality: All information obtained through the course of the study will be recorded in a confidential manner, and the findings will not be released in any way that could identify the children participating in this project. To protect confidentiality, students’ names will not be used. A number will be used to identify students (Student 1, Student 2, etc.) in all reports. This informed consent will not be kept with any of the other documents completed with this project.

Right to Withdraw: Participation in this project is voluntary and only allowed with parents’/guardians’ permission. If parents/guardians wish to withdraw their child from the study at any time, they may do so without prejudice or penalty, and the information collected up to that point will be destroyed upon request.

IRB Approval: This study has been reviewed and approved by The University of Wisconsin-Stout's Institutional Review Board (IRB). The IRB has determined that this study meets the ethical obligations required by federal law and University policies. If you have questions or concerns regarding this study, please contact the Investigator or Advisor. If you have any questions, concerns, or reports regarding your rights as a research subject, please contact the IRB Administrator.

Investigator: Steve Strieker, Social Studies Teacher, School District of Janesville, 3125 Mineral Point, Janesville, WI 53548, 608-743-5805, [email protected]

Research Advisor: Dr. Jim Lehmann, Program Adviser / Faculty, MSED Program--Online, School of Education, University of Wisconsin-Stout, Wisconsin's Polytechnic University, [email protected], Cell (509) 240-5029, www.uwstout.edu/programs/mse/online/

IRB Administrator: Sue Foxwell, Director, Research Services, 152 Vocational Rehabilitation Bldg., UW-Stout, Menomonie, WI 54751, 715-232-2477, [email protected]

Statement of Consent: By signing this consent form you agree to participate in the project entitled, Identifying Affective Responses of Students to Creative Assessments.

Participant’s Signature: ___________________________________ Date: ___________

Signature of Parent or Guardian: ___________________________________ Date: ___________


Appendix C: Research and Assessment Instructions

Historical Letters: United States History Pioneer Children

Before the 20th century, people communicated over long distances mostly by writing letters. Historical letters provide historians with facts about history and people’s feelings about events. In this project, you will research the pioneer experience on the Oregon Trail. You will then write a letter from the perspective of a pioneer child traveling on the Oregon Trail.

Instructions:

A. Research

1. Watch the video clip about the Oregon Trail: videos.howstuffworks.com/hsw/18259-america-from-1837-to-1844-the-oregon-trail-video.htm

2. Read the article about pioneer children: http://www.blm.gov/or/oregontrail/education-kids-trail.php

3. View the website about words commonly used in the pioneer days: http://www.legendsofamerica.com/we-slang.html

B. Notes

1. Complete the notes outline based on what you learn from the reading, video clip, and websites

2. Eventually, highlight the notes (7-10 facts) you use in your pioneer letter

C. Pioneer Letter

1. Write an imaginative letter in the character of a pioneer child

2. Requirements:

- Provide specific historical information (7-10 facts) from your pioneer notes
  o Highlight on your notes the information you use in your letter

- Write the letter in pioneer character
  o Use slang/phrases from pioneer days
  o Misspell some words
  o Use a date from the pioneer era
  o Show feelings in the letter

- Letter must be between 200 and 400 words

- Letter must be handwritten in dark pen

- Make the letter look aged

- Add things commonly sent with letters

- Letter is to be creative, but believable

Link to see a pioneer child’s (Amelia Siedler) diary: http://digilib.usm.edu/cdm4/document.php?CISOROOT=/degrum&CISOPTR=655&REC=2

Link to see a letter from a pioneer on the Oregon Trail: http://memory.loc.gov/cgi-bin/ampage?collId=upbover&fileName=dia5578/upboverdia5578.db&itemLink=D?upboverbib:39:./temp/~ammem_Z9dM


Appendix D: Notes Template

Pioneer History Notes

1. When & why did pioneers travel west on the Oregon Trail?

Click here to enter text.

Click here to enter text.

Click here to enter text.

Click here to enter text.

2. What was not fun for pioneer children traveling on the Oregon Trail?

Click here to enter text.

Click here to enter text.

Click here to enter text.

Click here to enter text.

Click here to enter text.

Click here to enter text.

3. What did pioneer children do for fun on the Oregon Trail?

Click here to enter text.

Click here to enter text.

Click here to enter text.

Click here to enter text.

Click here to enter text.

Click here to enter text.

4. List some pioneer-day slang/phrases and their modern meanings

Click here to enter text.


Appendix E: Scoring Rubric

Scoring Rubric: United States History Pioneer Children Letters

Name:

History: Oregon Trail Facts
  Advanced (3 points): 11-15 historical facts
  Proficient (2 points): 7-10 historical facts
  Basic (1 point): 0-6 historical facts

History: Pioneer Era Slang
  Advanced (3 points): 6-10 slang terms
  Proficient (2 points): 3-5 slang terms
  Basic (1 point): 0-2 slang terms

History: Pioneer Era Dating
  Advanced (3 points): 1 accurate date and 0 inaccurate dates
  Basic (1 point): 0 dates or inaccurate dates

Creativity: Ideas
  Advanced (3 points): Utilized lots of original ideas
  Proficient (2 points): Has some original content in letter
  Basic (1 point): Reads like a list of historical facts

Creativity: Presentation
  Advanced (3 points): Made letter look aged & added novel things to letter
  Proficient (2 points): Made letter look aged
  Basic (1 point): Looks like a modern letter

Creativity: Voice
  Advanced (3 points): Some misspelled words and lots of descriptive wording
  Proficient (2 points): Some descriptive & misspelled wording
  Basic (1 point): Ordinary & correctly spelled wording

Total Final Score: ____ / 18

Advanced: Over 12 points

Proficient: 10-12 points

Basic: 0-9 points
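
As a quick arithmetic check of the rubric (a minimal sketch in Python, not part of the study's materials; the sample scores below are hypothetical): six criteria scored at 1 to 3 points each yield an 18-point maximum, which is consistent with the Advanced band beginning above 12 points.

    def band(total: int) -> str:
        """Map a rubric total (0-18) to the performance band listed above."""
        if total > 12:
            return "Advanced"
        if total >= 10:
            return "Proficient"
        return "Basic"

    # Hypothetical letter: Advanced (3 points) on four criteria and
    # Proficient (2 points) on the remaining two.
    scores = [3, 3, 2, 3, 3, 2]
    print(sum(scores), band(sum(scores)))  # 16 Advanced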