
Science Education

LEARNING

Proving or Improving Science Learning? Understanding High School Students’ Conceptions of Science Assessment in Taiwan

MIN-HSIEN LEE,1 TZUNG-JIN LIN,2 CHIN-CHUNG TSAI3

1Center for Teacher Education, National Sun Yat-sen University, Kaohsiung 804, Taiwan; 2Graduate Institute of Applied Science and Technology and 3Graduate Institute of Digital Learning and Education, National Taiwan University of Science and Technology, Taipei 106, Taiwan

Received 25 November 2011; accepted 4 October 2012
DOI 10.1002/sce.21046
Published online 15 January 2013 in Wiley Online Library (wileyonlinelibrary.com).

ABSTRACT: Classroom assessment is a critical aspect of teaching and learning. In this paper, Taiwanese high school students’ conceptions of science assessment and the relationship between their conceptions of science assessment and of science learning were investigated. The study used both qualitative and quantitative methods. First, 60 students were interviewed regarding their conceptions of science assessment. Second, from these interviews, a survey was developed, validated, and used with 914 students to quantitatively examine the same question. Third, 224 students were surveyed to determine the relationship between their conceptions of both science assessment and science learning. Using the phenomenographic method, the first study revealed six categories of conceptions of science assessment: reproducing knowledge, rehearsing, accountability to learning, improving learning, problem solving, and critical judgment, which seem to reveal the surface, summative, and formative purposes of assessment, respectively. The second study, based on what the high school students reported about science assessment, resulted in the use of the Conceptions of Science Assessment questionnaire, which confirmed the second-order analysis (surface, summative, and formative) of the purposes of science assessment. In the third study, the relationships between high school students’ conceptions of science assessment and of learning science were revealed. © 2013 Wiley Periodicals, Inc. Sci Ed 97:244–270, 2013

Correspondence to: Min-Hsien Lee; e-mail: [email protected]
Contract grant sponsor: National Science Council, Taiwan.
Contract grant numbers: NSC 99-2511-S-008-009, NSC 100-2511-S-110-010-MY3, NSC 100-2511-S-011-004-MY3, and NSC 101-2631-S-011-002.
© 2013 Wiley Periodicals, Inc.

INTRODUCTION

Classroom assessment is a critical aspect of teaching and learning. As the perspective of learning has shifted from traditional, subject matter–centered learning to the currently advocated learner-centered learning paradigm, the role of assessment is following this shift as well in that various forms of assessment are being introduced, while still maintaining their crucial role in education (Shepard, 2000; van de Watering, Gijbels, Dochy, & van der Rijt, 2008; Wang, Kao, & Lin, 2010). That is, alternatives to traditional paper-and-pencil testing formats are drawing more attention in the current learning paradigm (Bell, 2007). Rather than assessing students’ rote memorization of concepts, which is based on the traditional, behaviorist view of teaching and learning, assessment that supports the process of inquiry and construction of meaningful understanding aligned with the constructivist view of teaching and learning has been proposed (Brooks & Brooks, 1993). Despite this, constructivist assessment is still rare in the classroom. With respect to the learner-centered perspective, the activities of assessment have moved away from assessment to simply demonstrate learning or achievement (e.g., summative) to include assessment to improve learning throughout the teaching process (e.g., formative) (Black & Wiliam, 1998; Gipps, 1994; Shepard, 2000).

Given the diverse roles of assessment, understanding what it means to students is of great importance. Students experience and perceive different kinds and purposes of assessment in the classroom. The ways in which they integrate this assessment into their learning depend on how they view or conceptualize it, which may, in turn, influence their learning (van de Watering et al., 2008).

The terms “assessment” and “evaluation” are both common in education (Taras, 2005). In general, “assessment” refers to collecting data, weighting and analyzing the data for its meaning, and providing feedback for improvement (Astin, 1993). In contrast, “evaluation” refers to assigning a value, merit, or worth to an object (Lawrenz, 2007; Scriven, 1991). Some researchers have differentiated between assessment and evaluation, arguing that the former could be regarded as process oriented, and so can also be referred to as formative assessment, whereas the latter could be viewed as product oriented, much like summative assessment (e.g., Lawrenz, 2007). However, other researchers seem to suggest that assessment and evaluation are intertwined (e.g., Bell, 2007) and propose blurring the distinction between the two constructs (e.g., Taras, 2005). In this study, we adhere to the perspective that there is a certain degree of overlap between assessment and evaluation. Similar to Sadler (1989) and Remesal (2011), this study supports the view that classroom assessment refers to a complex process of collection, analysis, evaluation, and judgment of the teaching and learning process and learning outcomes.

How individuals perceive and experience phenomena is expressed in their conceptions of those phenomena (Thompson, 1992). Referring to Entwistle and Peterson’s (2004) definition, a “conception” can be seen as “an individual’s personal and therefore variable response to a specific idea” (p. 408). Recently, there seems to have been a trend to investigate the conceptions of assessment held by teachers and/or students (e.g., Brown & Hirschfeld, 2007; Brown, Irving, Peterson, & Hirschfeld, 2009; Li & Hui, 2007), taking into account variables such as gender, age, ethnicity, or even students’ academic outcomes (e.g., Brown & Hirschfeld, 2008; Hirschfeld & Brown, 2009). Conceptions of assessment represent individuals’ views or understandings of assessment implemented in the learning environment and influence their views on learning (Peterson & Irving, 2008; Vermetten, Vermunt, & Lodewijks, 2002). Although assessment of science learning is crucial and is emphasized in the science education literature (Bell, 2007), empirical studies on students’ conceptions of science assessment are limited and those that have been carried out largely focus on the tertiary level (e.g., Wang et al., 2010). In contrast, students’ conceptions of science assessment at the secondary level have not been well explored.

In recent decades, the educational reforms of the Taiwanese high school system have encouraged multiple assessments of students’ science learning. Traditionally, Taiwan’s high school system was centralized and efficiency driven to enable students to succeed in the high-stakes college entrance examinations (Lee, Tsai, & Chai, 2012). A decade ago, the dominant types of classroom assessment in Taiwan were paper-and-pencil tests to prepare for the upcoming high-stakes examinations. A similar situation was found in the United States, where there is certainly still a large emphasis on standardized tests. However, assessment practices in Taiwan have recently undergone major changes through education reforms that have been proposed to provide various alternatives for gaining entrance into college besides passing the high-stakes examinations, making entry into college more easily available. There are now diverse channels for college entrance; for example, students can provide evidence of their academic experience of mini-scientific research to demonstrate their scientific inquiry or problem-solving abilities. In classroom assessment, an increasing amount of project-based or cooperative work is being employed. However, in our previous studies (Lee, Johanson, & Tsai, 2008; Tsai, 2004) of conceptions of learning science, we found that the conception “preparing for the paper-and-pencil testing” was still held by many high school students in Taiwan. Therefore, investigations conducted in this educational climate may lead to potential insights regarding classroom assessment for science learning.

Moreover, as suggested by several researchers (Bell, 2007; Wang et al., 2010), individuals’ views or conceptions of learning science might be related to their conceptions of science assessment. One might expect, for example, that students who regard science assessment as evaluating what they have remembered from lectures would conceptualize learning science as memorizing the content they are taught. However, the interrelations between students’ conceptions of science learning and those of science assessment have not been well explored. Examining the interrelations between students’ conceptions of science assessment and of learning science could also provide further validity evidence for the conceptions of science assessment explored in this study.

Accordingly, the purposes of the current study were threefold. First, we unraveled what students think about science assessment and identified their conceptions of science assessment qualitatively. Second, in follow-up work based on the qualitative data, we developed a questionnaire to assess high school students’ conceptions of science assessment quantitatively. Finally, we explored the possible links between conceptions of science assessment and of learning science, which could provide criteria-related validity for the conceptions of science assessment questionnaire.

LITERATURE REVIEW

Classroom Assessment for Science Learning

In general, assessment refers to “any act of interpreting information about students’ performance, collected through any of a multitude of means” (Brown, 2004, p. 304). This statement suggests that assessment types are diverse. One possible reason for these variations might be attributed to the purpose of assessment. Doran, Lawrenz, and Helgeson (1993) conducted a review of science assessment and concluded that what is assessed in the science classroom could be categorized as knowledge of facts and concepts, science process skills, higher order science thinking skills, problem-solving skills, skills for laboratory work, and attitudes toward science. Classroom assessment of science learning, as defined by Bell (2007), refers to the form of assessment adopted by science teachers in the classroom for formative or summative purposes and could be used by the teacher and/or the students.

Since the shift from subject matter– or teacher-centered instruction to the current learner-centered paradigm, the purposes and types of assessment have become more varied (Segers & Dochy, 2001). Researchers have outlined constructivist learning as that learning which occurs when learners construct their understanding as a result of active engagement in a meaningful learning experience (e.g., Brooks & Brooks, 1993). In the field of science education, Wang et al. (2010) also suggested that the movement of science assessment from a subject matter–centered perspective to a learner-centered perspective has changed the purpose of assessment from valid measurements of a student’s knowledge to providing information that empowers learning and informs teaching. The traditional form of assessment, perhaps mainly for summative purposes, is strongly influenced by behaviorist and associationist theories or learning paradigms, such as traditional psychometric testing and psychological measurement (Black, 2001). In contrast, constructivist-oriented assessment involves good teaching practice and clear expectations of desired competence and emphasizes assessing the learner’s in-depth understanding of the science content and the process of science learning through authentic measures, and it tends to align with the formative purpose of assessment (Shepard, 2000).

That is, the summative purpose of assessment is to prove learning, thus emphasizing the product of learning, whereas the formative purpose is to improve learning, thus emphasizing the process of learning. With regard to assessment practices, Cole (1990) suggested that the quantitative view refers to the accumulation of fragmented concepts, whereas the qualitative view refers to the mastery of higher order skills and advanced knowledge. The former is oriented toward summative assessment, whereas the latter is much more oriented toward formative assessment. Panizzon and Pegg (2008) also referred to quantitative assessment practices as involving the application of norm-referenced, collective approaches to determining a student’s ability and making it accountable, which is more oriented toward summative assessment. In sum, the summative purpose of assessment could be viewed as quantitatively assessing students at the conclusion of learning, whereas the formative purpose of assessment could be regarded as qualitatively assessing students during the process of learning.

In recent decades, science education researchers have increasingly emphasized assessment for formative purposes in the science classroom (e.g., Bell & Cowie, 2001; Coffey, Hammer, Levin, & Grant, 2011; Furtak & Ruiz-Primo, 2008; Ruiz-Primo & Furtak, 2007). In particular, most studies have focused on teachers’ assessment practices and their effects on students’ learning. For example, Furtak and Ruiz-Primo (2008) developed a series of formative assessment prompts to elicit students’ ideas of science concepts. Ruiz-Primo and Furtak (2007) proposed an informal formative assessment practice in which teachers gather information regarding students’ developing understanding during formal whole-class conversations. These two studies further suggest that, for formative assessment to be effective and useful, it is important to develop assessment which reveals students’ learning in progress and to provide appropriate feedback to students. In addition to the regular summative purpose of assessment, the contemporary classroom assessment of science learning is oriented toward embracing multiple purposes, with an increasing weight on the formative purpose of assessment (Bell & Cowie, 2001).


Research on Conceptions of Assessment

Because the goals of science learning (or learning in general) are continuing to evolve, several issues regarding educational assessment have arisen, such as how researchers and educators manage to assess what students have learned or how students conceptualize assessment based on their learning experiences (e.g., Bell, 2007; Brown et al., 2009; Doran et al., 1993).

Given the diverse roles of assessment, understanding what it means to teachers and students is of great importance. Conceptions of assessment refer to how people conceive of or experience assessment in educational contexts (Li & Hui, 2007; Peterson & Irving, 2008). In this strand of research, some recent studies have attempted to investigate the views of assessment held by teachers and/or students (e.g., Brown, 2004; Brown & Hirschfeld, 2007, 2008; Brown, Lake, & Matters, 2011; Li & Hui, 2007). Derived from the empirical literature on students’ conceptions of assessment (Brown & Hirschfeld, 2007, 2008; Brown et al., 2009; Hirschfeld & Brown, 2009), four major conceptions of the purposes, procedures, or outcomes of assessment are (a) improving achievement, (b) a means of making the students accountable for their own learning, (c) being irrelevant, and (d) enjoying the implementation of assessment in schools. In addition, these studies have found that students’ conceptions of assessment are related to their academic performance, such as their mathematics achievement (Brown & Hirschfeld, 2007) and reading achievement (Brown & Hirschfeld, 2008), or to background factors such as gender, age, or ethnicity (Hirschfeld & Brown, 2009). The possible relations between conceptions of assessment and academic performance, as suggested by Brown and Hirschfeld (2007), could be interpreted through self-regulation theory: students who view assessment as a constructive force for personal responsibility may gain higher grades, whereas those who seek to blame schools or teachers for poor assessment results and those who do not take assessment seriously, or who ignore it, may receive lower grades.

More recently, the notion of conceptions of assessment has attracted the attention of science education researchers. Focusing on the target of assessment (what needs to be assessed) and the methods of assessment (how science learning can best be assessed), Wang et al. (2010) probed Taiwanese preservice teachers’ conceptions of assessment of science learning and found that these were coherent with their conceptions of learning science. In sum, although researchers have initiated this line of research in science education, more work is needed to explore high school students’ conceptions of science assessment. Indeed, as assessment usually corresponds to teachers’ instruction, there is still a dearth of research on assessment from the students’ perspective. As mentioned above, what assessment means to students might influence their learning strategies and as such have an impact on their learning outcomes. Accordingly, one of the purposes of the current study was to explore Taiwanese high school students’ conceptions of science assessment.

Research on Conceptions of Learning

Conceptions of learning held by individuals refer to their experiences of learning (Fodor, 1998) and preferred ways of learning (Lee et al., 2008; Liang, Lee, & Tsai, 2010). Saljo (1979) was the first researcher to investigate college students’ conceptions of learning using the so-called “phenomenographic” method (which will be introduced later in the Methods section). Through interviewing 90 participants about their own learning experiences and personal opinions about learning, Saljo categorized five qualitatively different conceptions based on the interview data. Students saw learning as (1) an increase of knowledge, (2) memorizing, (3) an acquisition of facts or principles, (4) an abstraction of meaning, and (5) an interpretive process aimed at understanding reality. Since then, numerous researchers have explored individuals’ conceptions of learning in various groups of students and discovered relationships with other variables related to learning such as approaches to learning, beliefs about knowledge, or specific learning contexts (e.g., Duarte, 2007; Marshall, Summer, & Woolnough, 1999; Marton, Dall’Alba, & Beaty, 1993).

Recently, several studies have found that conceptions of learning could be dependent upon different contexts or subject domains such as science (Lee et al., 2008; Tsai, 2004) or even more specific contexts, for example, biology (Chiou, Liang, & Tsai, 2012). Tsai (2004) employed the interview method to explore the conceptions of 120 Taiwanese high school students and proposed a framework for conceptions of learning science consisting of seven categories: learning science as “memorizing,” “preparing for tests,” “calculating and practicing,” “increase of knowledge,” “applying,” “understanding,” and “seeing in a new way.” Based on the proposition made by Marton et al. (1993), Tsai (2004) contended that students’ conceptions of learning science appear to show a developmental and hierarchical trend (i.e., from lower to higher levels), whereby any former category might be hierarchically subsumed by the latter, consecutive category.

Later, Lee et al. (2008) developed a questionnaire to assess Taiwanese high school students’ conceptions of learning science, validating it through rigorous structural equation modeling analysis. The results indicated that students’ conceptions of learning are related to their approaches to learning science. Students holding mature conceptions of learning science, such as viewing learning science as applying and understanding, tend to employ deep approaches to learning (meaningful learning, linking course content to everyday experiences) rather than surface approaches (e.g., rote learning, memorization). In addition to conceptions of learning being domain dependent, some researchers have contended that learners’ cultural backgrounds also have a certain impact on shaping their conceptions of learning (Lee et al., 2008; Li, 2001, 2003; Liang et al., 2010; Marton, Watkins, & Tang, 1997).

Relationships Between Conceptions of Assessment and Conceptions of Learning

Numerous researchers have argued that individuals’ beliefs about school assessments might have an impact on what they think about learning (e.g., Entwistle & Entwistle, 1991; Hirschfeld & Brown, 2009; Ramsden, 1997). For instance, Samuelowicz and Bain (2002) interviewed 20 academics from various disciplines about their assessment practices in teaching undergraduate classes and found that they possessed different assessment orientations related to their teaching fields that were strongly related to the orientations of their teaching and learning beliefs. That is, their orientations of assessment practice ranged from favoring the reproduction of knowledge and procedures to favoring the construction and/or transformation of knowledge according to their conceptions of assessment. Similarly, in a non-Western culture, Lee et al. (2008) found that one of the high school students’ conceptions of learning science is “preparing for tests,” which might be the result of how students perceive science assessment. This result suggests that students’ conceptions of science assessment may be interrelated with their own conceptions of science learning and in turn influence their approaches to science learning.

As previously mentioned, students’ conceptions of assessment might be closely related to their own conceptions of learning. More specifically, their conceptions of assessment practices may play an influential role in their conceptions of learning (e.g., Brown et al., 2011; Gipps, 1999; Li & Hui, 2007). If students perceive assessment practices merely as traditional testing (close-ended multiple-choice questions), they might only feel pressure and anxiety, which may then have negative effects on their learning. In contrast, if they perceive diverse assessment practices that support self-reflection in their learning process, they would be aware of their responsibilities and take an active role in their learning. Therefore, one of the purposes of this study was to investigate and unravel the possible relationships between these two conceptions in the realm of science education.

TABLE 1
The Procedure and the Participants of This Study

Study                          Method                         School           Number of Participants
1: Identifying conceptions     Phenomenographic method        1 (6 classes)    60 (30 males)
2: Developing questionnaire    Exploratory factor analysis    5 (15 classes)   411 (230 males)
                               Confirmatory factor analysis   6 (18 classes)   503 (258 males)
3: Relational study            Correlation analysis           2 (9 classes)    224 (97 males)

RESEARCH PURPOSES AND QUESTIONS

The purposes of the current study were mainly as follows: First, we aimed to discover high school students’ conceptions of science assessment through qualitative data collection and analysis. Second, in follow-up work based on the aforementioned qualitative data, we developed a questionnaire to measure sampled students’ conceptions of science assessment quantitatively, and then, finally, we explored the possible links between their conceptions of science assessment and of learning science.

This led to three specific research questions:

1. What are the participants’ conceptions of science assessment as obtained via the phenomenographic method?

2. Does the questionnaire developed from the above qualitative findings have sufficient validity and reliability to measure Taiwanese high school students’ conceptions?

3. What are the relationships between students’ conceptions of science assessment and those of learning science?

Research Design

To address the three research purposes, three studies were designed, as shown in Table 1. The phenomenographic method is beneficial for understanding students’ conceptions, which could reflect their educational experience (Linder & Marshall, 2003). Accordingly, to explore Taiwanese high school students’ conceptions of science assessment, a phenomenographic approach was adopted in the first study.

In the second study, a questionnaire named the Conceptions of Science Assessment (COSA) was developed according to the qualitative results obtained in the first study, and two different sets of samples were used to conduct the exploratory and confirmatory factor analyses of the COSA. Finally, in the third study, the COSA was used with a further sample of students to explore the relationships between their conceptions of science assessment and those of learning science.


STUDY 1: IDENTIFYING STUDENTS’ CONCEPTIONS OF SCIENCE ASSESSMENT USING THE PHENOMENOGRAPHIC METHOD

Method

Participants and Data Collection. The research data for Study 1 were collected through individual interviews with 60 high school students (including 30 males and 30 females) from six different classes in a senior high school in Taiwan (see Table 1). In addition, as the curricular standards of science education in Taiwan encourage multiple forms of assessment (Ministry of Education, 2004), various types of science assessments are performed in high school classrooms. For example, besides the traditional paper-and-pencil academic tests, an increasing number of teachers are asking their students to complete a team-based project and then present and explain the results to the whole class or to conduct self-assessment and peer assessment. Thus, high school students in Taiwan may have diverse experiences regarding classroom assessment for learning science.

The research data were collected through a series of in-depth and open-ended semistructured interviews. Each selected student was interviewed individually by a trained researcher. The guiding interview questions were partially based on the literature (e.g., Peterson & Irving, 2008) and also developed by this study, and dealt with various aspects of science assessment including its nature, its role and practice in science learning, and its purpose, as follows:

• What does “science assessment” mean to you?
• Why do we need “science assessment”?
• What do you think about the purpose of “science assessment”?
• What do you think about the relationship between science assessment and learning science?

In addition, the content validity of the interview questions was verified by two experts in science education who had experience of phenomenographic studies, to ensure the use of appropriate interview questions; the review process was conducted at least twice. All of the individual interviews were tape-recorded. The interviews were conducted in Chinese and then fully transcribed.

Data Analysis. The verbatim transcripts of the student interviews were analyzed using the phenomenographic method (Marton, 1981). Similar to the method used by Tsai (2004), the data analysis had two objectives. One was to develop categories of descriptions originating from the students’ conceptions of science assessment, whereas the other was to identify individual students’ conceptions of science assessment. The first step was to read each student’s interview transcript and then mark those sentences that could represent their main ideas of science assessment. Second, this study elaborated on and summarized the students’ conceptions of science assessment by comparing the representative sentences marked in the first step. Subsequently, the “qualitatively different” categories of conceptions of science assessment were explored and summarized. According to the marked representative sentences and the categories identified in the previous step, individual students’ main conceptions of science assessment could be identified.

It should be noted that students may convey mixed views across different categories of conceptions of learning science (e.g., Lin & Tsai, 2008; Marton et al., 1993). To provide a more direct and clear analysis of each student’s conceptions of science assessment, this study, similar to the method utilized by Tsai (2004) and Koballa, Graber, Coleman, and Kemp (2000), used the most dominant conception as perceived by the researchers to represent the students’ views. For example, if a student expressed numerous ideas around the “A” conception but only a few notions about “B,” he or she would be classified into the “A” conception. For those interview data on which the researchers did not agree, they reviewed the interview transcripts, discussing them case by case, before deciding on a final classification.

In addition, to assess the reliability of this analysis, another researcher was asked to analyze the representative sentences coded from the verbatim transcripts of the students’ interviews, and the intercoder agreement for this analysis was also examined. The intercoder agreement for the categorization of students’ conceptions of science assessment was 0.98.
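The intercoder agreement reported here is a simple proportion of identical classifications. As an illustration only (the coder assignments below are hypothetical, not the study’s data), the following sketch shows how such a figure, together with Cohen’s kappa as a chance-corrected companion statistic, can be computed.

```python
# Illustrative only: two coders each assign one of the six conception categories
# to the same 60 transcripts. The category labels follow the paper; the
# assignments themselves are invented for this sketch.
from sklearn.metrics import cohen_kappa_score

coder_a = ["Acc"] * 28 + ["IL"] * 10 + ["PS"] * 9 + ["Reh"] * 5 + ["Rk"] * 4 + ["CJ"] * 4
coder_b = list(coder_a)
coder_b[0] = "IL"  # suppose the coders disagree on a single transcript

# Simple percent agreement, as reported in the paper (59/60 = 0.98).
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"percent agreement = {agreement:.2f}")

# Cohen's kappa corrects the same comparison for chance agreement.
print(f"Cohen's kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```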

Results of Study 1

The Categories of Students’ Conceptions of Science Assessment. As a result of the phenomenographic analysis of the students’ interview transcripts, six qualitatively different categories of conceptions of science assessment were identified in this study. That is, science assessment is conceptualized by the students as serving the following main purposes: reproducing knowledge to others, rehearsing different scenarios as practice for examinations, accountability to reveal their learning, improving learning, problem solving, and having others critically judge their learning. These six categories are presented below, with relevant interview quotations provided.

Science Assessment as Reproducing Knowledge (“Reproducing”) to Others. In this category, science assessment is characterized as a way of reproducing scientific knowledge. Science assessment is regarded as measuring the students’ ability to reproduce information presented in lectures and/or textbooks. For example, the students stated that “Some contents of the textbook are unclear for me to understand. Thus, the science assessment can be used to enhance my ability to memorize such contents” (student 40521); “The purpose of science assessment is to give us a strong impression of scientific concepts and to (help us) memorize those scientific concepts more easily” (student 40425).

The students in this category experienced science assessment as a way of reproducing knowledge. Science assessment, from this perspective, becomes a technique for amplifying the memorization of scientific knowledge.

Science Assessment as Rehearsing (“Rehearsing”) as Practice for Examinations. In this category, the students view science assessment as rehearsing school science texts. For these students, science assessment is seen as a way of making them familiar with the scientific knowledge. For example, the students stated that “The purpose of science assessment is to make the formulae clearer by continuous rehearsal” (student 40407); “Science assessment for me is rehearsal. Effective science assessment indicates more and more practice” (student 40211). Moreover, although the students viewed science assessment as rehearsing the science concepts or formulae, some of them further revealed that assessment that involves much more rehearsing and practicing can probably enhance their learning. For example, one student stated that “The science assessment is to rehearse the unfamiliar contents continuously so that my study can make progress” (student 40325).

The “rehearsing” conception may have been shaped by the educational climate in Taiwan, in which students’ performance in school and in the national-level examinations may influence their opportunities for advanced study (Lee, Chang, & Tsai, 2009; Lee et al., 2008; Tsai, 2004). In this climate, students may regard science assessment as making the scientific conceptions and formulae clearer by continuous rehearsal, or as rehearsing to obtain greater familiarity with the content tested in the upcoming high-stakes examinations. Moreover, for high school students in Taiwan, there are various “supplementary trade books” that offer additional after-school materials to reinforce students’ school knowledge acquisition and to enhance their achievement scores. These books are designed on the basis of the official school textbook to provide students with additional opportunities to review the textbook content after school. Furthermore, they include various tests to familiarize students with the content of the upcoming examinations. They are widely used in Taiwanese high schools, not only by the students but also by the teachers, who may use them to help the students to review what they have learned and as exercises for students to practice for the examinations. Notably, several supplementary trade books are referred to as “self-assessment” in Taiwan. Accordingly, students may refer to such books when considering assessment and thus equate the conception of assessment with rehearsing.

The distinction between this conception and the previous one (i.e., “reproducing”) may lie in whether students experience mere repetition or variation in the process of assessment. Repetition and variation in the learning process could relate to Marton, Wen, and Wong’s (2003) study of issues about the simultaneous presence of memorization and understanding in Chinese students’ learning conceptions. They suggested that, for Chinese students who think of learning as memorization, some may use rote memorization, whereas others may use “meaningful memorization” in their learning. They further suggested the dichotomy between “repetition” and “variation” to differentiate between rote and meaningful memorization. Repetition refers to when students read the same presentation of something several times and thus repeat the same thing over and over again, whereas variation means that they read different presentations of the same thing or read the same presentation in different ways. Repetition may enhance remembering, whereas variation may involve understanding (Marton et al., 2003). In this study, students with the “reproducing” conception, much like the repetition in Marton et al.’s (2003) study, tend to view science assessment as simply recalling the same presentation of science concepts or formulae in lectures and/or textbooks. On the other hand, students with the “rehearsing” conception seem to view science assessment as practicing the science concepts or formulae in several different ways, which may enhance their understanding, for example, practicing a variety of problems for the same topic provided by the supplementary trade books.

Science Assessment as Making the Students Accountable for Their Own Learning (“Accountability”). In this category, making students accountable for their own learning is seen as the main feature of science assessment. For example, the students stated that “The purpose of science assessment is to reveal what I have learned” (student 40113); “Science assessment is to check how much you learn. That is an important criterion for teachers to make decisions about whether you can get the credit or not” (student 40601); “Science assessment is to see my learning performance and check how well I have learned” (student 40409).

Based on these interview responses, for some of these students, science assessment reveals not only the quantitative outcome (how much is learned) but also the qualitative outcome (how well it is learned). Moreover, the “accountability” conception of this study is also found in Brown and Hirschfeld’s (2007) study, which revealed students’ views of assessment as something which made them accountable for learning.

Science Assessment as Improving Learning (“Improving”). In the fourth category, science assessment was viewed as improving learning activity, enhancing learning motivation, and facilitating understanding of science. For example, the students stated that “Science assessment can help me to further understand the scientific knowledge learned in the class” (student 40120); “Science assessment can improve my learning because I can know which science concepts I have not really understood and then seek help” (student 40214).

In this category, students believe that science assessment can enhance their science learning. As previously mentioned, improving learning is considered as the main feature of assessment in the constructivist view of learning (e.g., Gipps, 1994; Shepard, 2000). The conception of “assessment as improving learning” is also widely identified in this line of research (e.g., Brown & Hirschfeld, 2007).

Science Assessment as Problem Solving (“Problem Solving”). In the fifth category, the students conceptualize science assessment as problem solving, which extends science knowledge to open-ended situations. For these students, the purpose of science assessment is a transfer of scientific knowledge to problem solving. For example, the students stated that “Science assessment is to apply what you have learned to problem solving and to inspire you with new ideas” (student 40104); “Science assessment for me is not only to reflect on what the teacher has taught, but also to apply it to solve problems and to put it into practice” (student 40106).

Clearly, the students in this category emphasize the connection between scientific knowledge and practical situations. Through science assessment, these students can transfer what they have learned to authentic problems and thus become autonomous problem solvers.

Science Assessment as Having Others Critically Judge One’s Learning (“Critical Judgment”). In the final category, science assessment is viewed as a way of justifying knowledge. The main feature of science assessment is conceptualized as evaluating knowledge claims, challenging one’s understanding, and exploring the value of knowledge. For example, one student stated that “Science assessment for me is to evaluate knowledge claims, to explore what this knowledge can do, and to assess its value” (student 40319). Another student mentioned that “When learning has got to a certain stage, the assessment will be performed. That is, someone may justify and criticize your learning and your knowledge through the assessment. Then, you will learn more about how to respond to such critiques” (student 40240).

The first student seems to imply that it is s/he who is using the assessment to make a critical judgment about what value the knowledge holds, whereas the second seems to be saying it is another person who is making the critical judgment. In both cases, the students are referring ultimately to their own learning. These students believe that science assessment could be used to justify scientific knowledge and to evaluate the value of knowledge. Samuelowicz and Bain (2002) also found that academics hold the conception that the feedback from assessment can be used to challenge students’ understanding. Similarly, some students may recognize that science assessment could be used as a way to challenge their own or others’ knowledge. Challenging or justification of knowledge could be seen as higher level cognitive activities in the students’ learning process (Kitchener, 1983). Students with this conception seem to view science assessment not only as a way of improving their learning but also as further polishing their understanding.

The Distribution of Students’ Conceptions of Science Assessment. Based upon the six categories revealed, each student’s interview responses were classified into one individual category that mainly represented his/her conception of science assessment. The distribution of the students’ conceptions of science assessment among the six categories is presented in Table 2.

Accordingly, about half of the students viewed science assessment as “accountability” (46.7%). The results of this study seem to indicate that science assessment, for nearly half of the sample of 10th graders in Taiwan, is a simple way of making them accountable for their science learning.

TABLE 2
Students’ Conceptions of Science Assessment (n = 60)

Conceptions of Science Assessment    N     Percentage
Reproducing knowledge (Rk)           4     6.7
Rehearsing (Reh)                     5     8.3
Accountability (Acc)                 28    46.7
Improving learning (IL)              10    16.7
Problem solving (PS)                 9     15.0
Critical judgment (CJ)               4     6.7

STUDY 2: DEVELOPING THE CONCEPTIONS OF SCIENCE ASSESSMENT QUESTIONNAIRE

In the second study, a new questionnaire named the Conceptions of Science Assessment (COSA), based on the qualitative results obtained in the first study, was developed and validated through both exploratory and confirmatory factor analysis using two different sets of samples.

Method

Samples. Two different sets of samples were enrolled in this study. For the exploratory factor analysis (EFA), the sample comprised 411 high school students (around 16 years old) enrolled in 15 classes from 5 high schools in Taiwan, consisting of 230 males and 181 females, none of whom had taken part in the first study. For the confirmatory factor analysis, a total of 503 high school students (258 males) enrolled in 18 classes from 6 schools participated; these students had not taken part in the first study either.

Instrument

Questionnaire Assessing Students’ Conceptions of Science Assessment. The COSA questionnaire was developed based on the interview results of the first study and consisted of six categories: science assessment as “reproducing knowledge,” “rehearsing,” “accountability,” “improving learning,” “problem solving,” and “critical judgment.” These six categories served as the foundation for the development of the COSA questionnaire.

Using the oral responses given by the students to the interview questions regarding their conceptions of science assessment in the first study, and also referring to the items of the Students’ Conceptions of Assessment developed by Brown and Hirschfeld (2008), the second study constructed six to eight items for each of the six categories of science assessment conceptions. After completing the initial construction process, the authors consulted with two experts in the field of science education regarding its content validity. A detailed description of the six categories, with a sample item for each, is presented below:

1. Reproducing knowledge: Science assessment is characterized as a way of reproducing scientific knowledge, e.g., science assessment can be used to enhance my ability to memorize the content of the textbook.


2. Rehearsing: Science assessment is seen as a way of becoming more familiar with scientific knowledge, e.g., science assessment is to make the formulae clearer by continuous rehearsal.

3. Accountability: Science assessment is to make me accountable for my learning, e.g., the science assessment is to reveal what I have learned.

4. Improving learning: Science assessment is viewed as improving learning activity, enhancing learning motivation, and facilitating the understanding of science, e.g., science assessment can help me to better understand the scientific knowledge learned in the class.

5. Problem solving: Science assessment is characterized as problem solving that extends science knowledge to open-ended situations, e.g., science assessment is to apply what you have learned and to inspire you with new ideas.

6. Critical judgment: Science assessment is conceptualized as evaluating knowledge claims, challenging one’s understandings, and exploring the value of knowledge, e.g., science assessment for me is to evaluate knowledge claims, to explore what this knowledge can do, and to assess its value.

In this study, the COSA was presented on a 5-point Likert scale, ranging from 5, “strongly agree,” to 1, “strongly disagree.” Accordingly, students gaining higher scores in a certain category show stronger agreement with the statements in that category regarding science assessment.

Data Analysis. To establish the reliability and validity of the COSA, EFA was employed after data collection. Then, confirmatory factor analysis (CFA) was performed to analyze the construct validity, convergent validity, and the structure of the COSA. In this study, construct validity was established through a series of factor analyses. Moreover, through the CFA, the convergent validity could be assessed by examining the significant t values for the factor loadings of each item on its construct (Hatcher, 1994).

Results of Study 2

Exploratory Factor Analysis for the Conceptions of Science Assessment. To validate the COSA, an EFA with varimax rotation was performed to clarify the structure of the conceptions of science assessment. As a result, the 411 students’ conceptions were grouped into the following six factors: reproducing knowledge (Rk), rehearsing (Reh), accountability (Acc), improving learning (IL), problem solving (PS), and critical judgment (CJ). The eigenvalues of the six factors from the principal component analysis were all larger than 1, and items with a factor loading of less than 0.40 were omitted from the survey (Stevens, 1996). As a result, a total of 34 items were retained in the final version of the COSA (shown in Table 3), and the total variance explained was 68.18%.
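The paper does not state which software was used for this analysis. As a rough sketch of the procedure described above (varimax rotation, eigenvalues above 1, items with loadings below 0.40 dropped), the EFA could be run with the third-party Python package factor_analyzer; the function and variable names below are illustrative assumptions, not the authors’ analysis script.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

def run_efa(responses: pd.DataFrame, n_factors: int = 6, cutoff: float = 0.40):
    """Varimax-rotated EFA; returns the eigenvalues and the items retained at the cutoff."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
    fa.fit(responses)  # responses: one row per student, one column per COSA item

    loadings = pd.DataFrame(
        fa.loadings_,
        index=responses.columns,
        columns=[f"Factor{i + 1}" for i in range(n_factors)],
    )

    # Eigenvalues of the observed correlation matrix (the "larger than 1" criterion).
    eigenvalues, _ = fa.get_eigenvalues()

    # Items whose largest absolute loading falls below the cutoff are omitted.
    retained = loadings[loadings.abs().max(axis=1) >= cutoff]
    return eigenvalues, retained
```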

In addition, the reliability (Cronbach’s α) coefficients for these factors were .86, .88, .91, .90, .86, and .92, respectively, and the overall α was .93, suggesting that these factors were highly reliable in terms of assessing the students’ conceptions of science assessment.
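For reference, Cronbach’s α can be computed directly from the item responses of each factor using the standard formula α = k/(k − 1) × (1 − Σ item variances / variance of the total score). The short sketch below, which assumes a DataFrame holding the items of one factor, illustrates the calculation.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the item columns of one COSA factor."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```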

TABLE 3
The Exploratory Factor Analysis and Cronbach’s α Values for the COSA Factors (n = 411)

Factor 1: Reproducing knowledge, α = .86
  Rk1 .73; Rk2 .74; Rk3 .84; Rk4 .87; Rk5 .77
Factor 2: Rehearsing, α = .88
  Reh1 .67; Reh2 .77; Reh3 .76; Reh4 .65; Reh5 .76; Reh6 .66
Factor 3: Accountability, α = .91
  Acc1 .75; Acc2 .72; Acc3 .78; Acc4 .71; Acc5 .76; Acc6 .69
Factor 4: Improving learning, α = .90
  IL1 .69; IL2 .72; IL3 .80; IL4 .75; IL5 .59; IL6 .70
Factor 5: Problem solving, α = .86
  PS1 .58; PS2 .74; PS3 .66; PS4 .73; PS5 .70
Factor 6: Critical judgment, α = .92
  CJ1 .77; CJ2 .81; CJ3 .84; CJ4 .84; CJ5 .81; CJ6 .80

Total variance explained: 68.18%; overall α = .93. (Each loading is the item’s loading on its own factor.)

Confirmatory Factor Analysis for the COSA. Bell (2007) suggested that classroom assessment could be used by both teachers and students for formative or summative purposes. As previously mentioned, the summative purpose of assessment could be regarded as product oriented and as having the aim of arriving at an overall grade for concluding learning, whereas the formative purpose of assessment could be viewed as process oriented and as useful for identifying areas in which to improve learning. That is, students’ conceptions of science assessment may be further categorized into summative and formative purposes. The conceptions of science assessment as “improving learning,” “problem solving,” and “critical judgment” seem to relate to the formative purpose of science assessment, whereas the COSA categories of “rehearsing” and “accountability” tend to relate to summative assessment.


Figure 1. The second-order factor analysis model of the COSA questionnaire: reproducing knowledge loads on the surface COSA; rehearsing and accountability load on the summative COSA; and improving learning, problem solving, and critical judgment load on the formative COSA.

Moreover, the COSA category of “reproducing knowledge” indicates the simple recall of information in science assessment, and so is categorized as the surface purpose of science assessment. Similar to the definition of the surface approach to learning (Chin & Brown, 2000), “surface” assessment focuses on memorizing separate facts and reproducing terms, reflecting a rote-learning situation. Accordingly, to further understand students’ conceptions of science assessment and the structure of the COSA, this study framed the six factors in terms of surface, summative, and formative COSA, as displayed in Figure 1. Through conducting the CFA, a second-order factor analysis was used to examine whether these six factors could be replicated and fitted into the proposed model shown in the figure.

As illustrated in Figure 1, each factor of the students’ COSA was a first-order construct (i.e., reproducing knowledge, rehearsing, accountability, improving learning, problem solving, and critical judgment), whereas the surface, summative, and formative COSA were the second-order constructs. To verify the construct validity and the structure of the COSA, and to further examine whether the proposed model could be established, a second-order CFA was performed on the 503 students’ responses to the COSA.
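The authors do not report the SEM software used for this second-order CFA. Purely as a hedged sketch, the Figure 1 model could be specified in lavaan-style syntax, for example with the third-party Python package semopy; the item names follow Tables 3 and 4, and everything else (package choice, function names, data layout) is an assumption rather than the authors’ procedure.

```python
import pandas as pd
import semopy

# lavaan-style description of the Figure 1 model; item names follow Tables 3 and 4.
MODEL_DESC = """
Rk =~ Rk1 + Rk2 + Rk3 + Rk4 + Rk5
Reh =~ Reh1 + Reh2 + Reh3 + Reh4 + Reh5 + Reh6
Acc =~ Acc1 + Acc2 + Acc3 + Acc4 + Acc5 + Acc6
IL =~ IL1 + IL2 + IL3 + IL4 + IL5 + IL6
PS =~ PS1 + PS2 + PS3 + PS4 + PS5
CJ =~ CJ1 + CJ2 + CJ3 + CJ4 + CJ5 + CJ6
Surface =~ Rk
Summative =~ Reh + Acc
Formative =~ IL + PS + CJ
"""

def fit_second_order_cosa(responses: pd.DataFrame):
    """Fit the second-order COSA model to a DataFrame of the 34 item scores."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)
    estimates = model.inspect()            # loadings and their test statistics
    fit_stats = semopy.calc_stats(model)   # chi-square, CFI, RMSEA, etc.
    return estimates, fit_stats
```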

Hair, Black, Babin, Anderson, and Tatham (2006) suggested that, to evaluate the convergent validity of constructs, it is recommended to calculate the values of factor loading, t value, average variance extracted (AVE), and composite reliability (CR). Accordingly, Table 4 presents the factor loadings, t values, AVE, and CR for each first-order construct. As shown, the factor loadings and t values for the six factors are larger than 0.5 and significant, whereas the AVE and CR scores are higher than the cutoff values of 0.5 and 0.7, respectively. The ratio of chi square to degrees of freedom was 3.19, the comparative fit index (CFI) was 0.97, and the root mean squared error of approximation (RMSEA) was 0.066, revealing an acceptable model fit (Hair et al., 2006) and suggesting good convergent and construct validity for the COSA items.
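As a concrete check of the convergent-validity indices just mentioned, applying the standard formulas AVE = Σλ²/k and CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] to the standardized loadings reported in Table 4 for the reproducing knowledge factor reproduces the tabled values (AVE ≈ 0.53, CR ≈ 0.85):

```python
import numpy as np

# Standardized loadings for the reproducing knowledge items (Rk1-Rk5) in Table 4.
loadings = np.array([0.67, 0.63, 0.75, 0.81, 0.76])

# AVE: mean of the squared standardized loadings.
ave = np.mean(loadings ** 2)

# CR: (sum of loadings)^2 divided by itself plus the summed error variances.
cr = loadings.sum() ** 2 / (loadings.sum() ** 2 + np.sum(1 - loadings ** 2))

print(f"AVE = {ave:.2f}")  # 0.53, matching the value reported in Table 4
print(f"CR  = {cr:.2f}")   # 0.85, matching the value reported in Table 4
```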

The CFA loadings and t values from each first-order construct to the second-orderconstructs are presented in Table 4. As shown in the table, the results corresponding tothe individual factors (first-order construct) reveal that practically all the first-order factorsload cleanly onto three latent second-order factors in respect of surface, summative, andformative conceptions of science assessment. That is, the results indicate that the first-order

Science Education, Vol. 97, No. 2, pp. 244–270 (2013)

Page 16: Proving or Improving Science Learning? Understanding High School Students’ Conceptions of Science Assessment in Taiwan

PROVING OR IMPROVING SCIENCE LEARNING? 259

TABLE 4
The Second-Order CFA for the COSA Factors (n = 503)

First-order constructs (standardized CFA first-order loadings):

Factor and Items             Factor Loading   t Value    AVE    CR
Reproducing knowledge (Rk)                               0.53   0.85
  Rk1#                       0.67             –
  Rk2                        0.63             12.25*
  Rk3                        0.75             14.25*
  Rk4                        0.81             15.08*
  Rk5                        0.76             14.39*
Rehearsing (Reh)                                         0.57   0.88
  Reh1#                      0.82             –
  Reh2                       0.86             22.95*
  Reh3                       0.85             22.30*
  Reh4                       0.55             12.77*
  Reh5                       0.73             18.32*
  Reh6                       0.65             15.53*
Accountability (Acc)                                     0.67   0.92
  Acc1#                      0.85             –
  Acc2                       0.88             25.90*
  Acc3                       0.82             24.43*
  Acc4                       0.79             21.71*
  Acc5                       0.81             22.31*
  Acc6                       0.72             18.90*
Improving learning (IL)                                  0.57   0.89
  IL1#                       0.70             –
  IL2                        0.63             13.21*
  IL3                        0.83             17.31*
  IL4                        0.82             17.12*
  IL5                        0.77             16.18*
  IL6                        0.78             16.21*
Problem solving (PS)                                     0.56   0.87
  PS1#                       0.79             –
  PS2                        0.81             19.47*
  PS3                        0.73             17.16*
  PS4                        0.79             18.90*
  PS5                        0.62             14.09*
Critical judgment (CJ)                                   0.63   0.91
  CJ1#                       0.84             –
  CJ2                        0.85             23.12*
  CJ3                        0.75             19.27*
  CJ4                        0.78             20.32*
  CJ5                        0.80             21.15*
  CJ6                        0.75             19.32*

Second-order constructs (standardized CFA second-order loadings):

Second-Order Factor Model    Loading   t Value
Surface COSA
  Rk                         1.00      16.03*
Summative COSA
  Reh                        0.84      17.19*
  Acc                        0.91      19.10*
Formative COSA
  IL                         0.91      15.62*
  PS                         0.86      16.90*
  CJ                         0.67      13.87*

Notes: AVE, average variance extracted; CFA, confirmatory factor analysis; CR, composite reliability. Chi square = 1658.20 (p < .001); degrees of freedom = 519; chi square per degree of freedom = 3.19; RMSEA = 0.066; CFI = 0.97.
*p < .05; # indicates a fixed item.

That is, the results indicate that the first-order factors converge to the second-order constructs. Accordingly, through the CFA, the construct validity and the second-order structure of the COSA were confirmed in this study.

STUDY 3: THE RELATIONSHIP BETWEEN STUDENTS’ CONCEPTIONS OF SCIENCE ASSESSMENT AND OF LEARNING SCIENCE

In the third study, using the COSA developed in the second study and the Conceptions of Learning Science (COLS) questionnaire (Lee et al., 2008), the correlation between students’ COSA and COLS was examined, with the aim of providing partial criterion-related validity evidence for the COSA.

Method

Samples. The sample consisted of 224 tenth graders (around 16 years old) enrolled at two high schools in Taiwan. From each school, four to five classes were selected, yielding 97 males and 127 females, none of whom had taken part in either the first or the second study. Data were collected via two questionnaires: the COSA, developed in the second study, and the COLS, developed by Lee et al. (2008), as described below.

Instrument

Questionnaire Evaluating Students’ Conceptions of Learning Science. The COLS questionnaire (Lee et al., 2008) was implemented to assess students’ conceptions of learning science. The COLS had been validated by Lee et al. (2008) through both exploratory and confirmatory factor analysis, and it consists of six factors (i.e., memorizing, testing, calculating and practicing, increase of knowledge, applying, and understanding and seeing in a new way). The original COLS comprised six to nine items for each factor, each rated on a 5-point Likert scale anchored by strongly agree and strongly disagree. A description of each factor with a sample item is presented below:

1. Memorizing: Learning science is characterized as the memorization of definitions, formulae, laws, and special terms, e.g., learning science means memorizing the important concepts found in a science textbook.

2. Testing: Learning science is to pass the examinations or to achieve high scores in science tests, e.g., I learn science just because of the tests.


TABLE 5
The Cronbach’s α Values, Factor Means, and Standard Deviations for the COSA and COLS Factors (n = 224)

Factor                                    Number of Items   α     Mean (SD)
COSA
  Reproducing knowledge                   5                 .84   2.82 (0.81)
  Rehearsing                              6                 .89   3.68 (0.72)
  Accountability                          6                 .90   3.68 (0.72)
  Improving learning                      6                 .90   3.57 (0.75)
  Problem solving                         5                 .87   3.52 (0.75)
  Critical judgment                       6                 .88   3.35 (0.76)
COLS
  Memorizing                              6                 .87   2.77 (0.80)
  Testing                                 7                 .89   2.94 (0.83)
  Calculating and practicing              6                 .86   3.08 (0.75)
  Increase of knowledge                   5                 .89   3.73 (0.71)
  Applying                                5                 .85   3.56 (0.71)
  Understanding and seeing in a new way   7                 .91   3.76 (0.75)

The overall COSA α = .94; the overall COLS α = .89.

3. Calculating and practicing: Science learning is viewed as a series of calculating, practicing tutorial problems, and manipulating formulae and numbers, e.g., the way to learn science well is to constantly practice calculations and problem solving.

4. Increase of knowledge: Learning science is perceived as the acquisition and accumulation of scientific knowledge, e.g., I am learning science when I increase my knowledge of natural phenomena and topics related to nature.

5. Applying: Science learning is for the application of received scientific knowledge, e.g., the purpose of learning science is to learn how to apply methods I already know to unknown problems.

6. Understanding and seeing in a new way: A true understanding is characterized as the major feature of learning science, and the acquisition of scientific knowledge is to obtain a new way to interpret natural phenomena, e.g., learning science means changing my way of viewing natural phenomena and topics related to nature.

In this study, each factor was presented in a 5-point Likert mode, with items anchored from 5 = strongly agree to 1 = strongly disagree. Accordingly, students gaining higher scores on a certain factor show stronger agreement with the corresponding statements regarding learning science.
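To illustrate this scoring convention, the short sketch below averages the Likert-scale item responses within each factor to obtain the per-item factor scores of the kind summarized in Tables 5 and 6; the data file name and column prefixes are assumptions made for the example, not the authors' actual variable names.

```python
import pandas as pd

# Hypothetical response matrix: one row per student, columns such as
# "Memorizing_1", ..., "Testing_1", ... coded 1 (strongly disagree) to 5 (strongly agree).
responses = pd.read_csv("cols_responses.csv")

factors = ["Memorizing", "Testing", "Calculating", "Increase", "Applying", "Understanding"]

factor_scores = pd.DataFrame({
    name: responses.filter(like=name).mean(axis=1)   # average item score per student
    for name in factors
})

# Factor-level means and standard deviations across students (cf. Table 5).
print(factor_scores.agg(["mean", "std"]).round(2))
```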

Results of Study 3

The Reliability and the Descriptive Data for the COSA and COLS. With respect to the COSA questionnaire, the reliability (Cronbach’s α) coefficients for the six factors were .84, .89, .91, .93, .87, and .89, respectively, and the overall α was .95, suggesting sufficiently high reliability in assessing the students’ conceptions of science assessment (shown in Table 5). Table 5 also presents the 224 students’ average item scores (i.e., means) and standard deviations on the six factors of the COSA. Students’ mean scores represent their relative agreement with a certain conception; that is, a higher mean shows relatively stronger agreement with that conception of science assessment.


TABLE 6
The Paired Comparison for the Second-Order Factors of COSA (n = 224)

Second-Order Factor of COSA   Mean (SD)
Surface COSA                  2.82 (0.81)
Summative COSA                3.68 (0.66)
Formative COSA                3.48 (0.66)

Paired t Test
Summative vs. Surface         13.64***
Summative vs. Formative       5.31***
Formative vs. Surface         9.61***

***p < .001.

According to Table 5, students attained high scores on the “accountability” factor (an average of 3.68 per item) and the “rehearsing” factor (an average of 3.68 per item). Their scores on the “reproducing knowledge” factor (an average of 2.82 per item) were significantly lower than those on the other factors (p < .001).

Moreover, the reliability (Cronbach’s α) coefficients also supported the reliability of the COLS for evaluating the students’ conceptions of learning science. As shown in Table 5, the reliability coefficients for the six COLS factors (n = 224) were .87, .89, .86, .89, .85, and .91, respectively, with an overall α of .89. Table 5 also shows the 224 students’ average item scores and standard deviations on the six COLS factors: the students displayed the most agreement with the “understanding and seeing in a new way” factor (an average of 3.76 per item) and the “increase of knowledge” factor (an average of 3.73 per item).
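For readers who wish to reproduce reliability coefficients of this kind, the sketch below implements the standard Cronbach's α formula over an item-response matrix; the commented usage line refers to a hypothetical response table and item names, not to the study's actual data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students x n_items) matrix of Likert scores.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Usage with a hypothetical 224 x 5 matrix of responses to one factor's items:
# alpha = cronbach_alpha(responses[["Rk1", "Rk2", "Rk3", "Rk4", "Rk5"]].to_numpy())
```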

The Paired Comparison for the Second-Order Factors of COSA. As the second study had already confirmed the second-order structure of the COSA (i.e., surface, summative, and formative), this study further conducted paired comparisons of the students’ means on the second-order COSA factors. As shown in Table 6, the summative feature of science assessment was the most strongly recognized, whereas the surface feature was the least recognized. That is, summative assessment still dominates in the science classroom, but surface assessment, which merely highlights the recall of information, was less often conceptualized by the Taiwanese high school science students in this study.
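A minimal sketch of paired comparisons of this kind is given below, assuming each student's three second-order scores are already computed as the means of their constituent factor scores. The data frame here is a randomly generated placeholder standing in for the 224 students' scores, so the printed statistics will not reproduce Table 6; only the procedure is illustrated.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Placeholder data standing in for the 224 students' second-order scores
# (average item scores per student); real values would come from the COSA factor scores.
rng = np.random.default_rng(0)
second_order = pd.DataFrame({
    "surface":   rng.normal(2.82, 0.81, 224),
    "summative": rng.normal(3.68, 0.66, 224),
    "formative": rng.normal(3.48, 0.66, 224),
})

for a, b in [("summative", "surface"), ("summative", "formative"), ("formative", "surface")]:
    t, p = stats.ttest_rel(second_order[a], second_order[b])  # paired t test
    print(f"{a} vs. {b}: t = {t:.2f}, p = {p:.3g}")
```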

The Correlation Between Students’ Conceptions of Science Assessment and Learning Science. To initially understand the relationships between the COSA and COLS, Pearson correlation analyses of the students’ responses to the two questionnaires were conducted. As shown in Table 7, first, the surface COSA factor “reproducing knowledge” was significantly and positively correlated with the lower level COLS factors “memorizing,” “testing,” and “calculating and practicing” (r = .26, .19, and .24, respectively, p < .01). Second, the “rehearsing” and “accountability” factors of the summative COSA were positively correlated with the “calculating and practicing,” “increase of knowledge,” “applying,” and “understanding and seeing in a new way” factors of the COLS. Finally, the formative COSA factors (i.e., improving learning, problem solving, and critical judgment) and the higher level COLS factors (increase of knowledge, applying, and understanding and seeing in a new way) were positively related (r = .49–.55, p < .01). In addition, the formative COSA factors “problem solving” and “critical judgment” were negatively correlated with the “testing” factor of the COLS (r = –.18 and –.19, respectively).
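A correlation matrix of this kind can be produced in a few lines once per-student factor scores are available; the sketch below shows one way to do so, with r values and significance stars computed from scipy. The two data frames and their column names are assumptions for illustration, not the authors' analysis script.

```python
from scipy import stats

def correlation_table(cosa_scores, cols_scores):
    """Pearson correlations between every COSA factor and every COLS factor.

    cosa_scores and cols_scores are hypothetical pandas DataFrames with one row
    per student and one column per factor score.
    """
    rows = {}
    for cols_factor in cols_scores.columns:
        rows[cols_factor] = {}
        for cosa_factor in cosa_scores.columns:
            r, p = stats.pearsonr(cosa_scores[cosa_factor], cols_scores[cols_factor])
            stars = "***" if p < .001 else "**" if p < .01 else "*" if p < .05 else ""
            rows[cols_factor][cosa_factor] = f"{r:.2f}{stars}"
    return rows
```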


TABLE 7
The Correlations Between Students’ Responses to COSA and COLS (n = 224)

                                              COSA
                               Surface        Summative                      Formative
COLS                           Reproducing    Rehearsing   Accountability   Improving   Problem    Critical
                               Knowledge                                    Learning    Solving    Judgment
Memorizing                     .26***         .06          .07              .01         .09        .02
Testing                        .19**          –.09         .09              –.13        –.18**     –.19**
Calculating and practicing     .24***         .18**        .16*             .12         .11        .09
Increase of knowledge          .04            .47***       .48***           .53***      .52***     .49***
Applying                       –.02           .41***       .52***           .49***      .52***     .51***
Understanding and seeing
  in a new way                 –.10           .39***       .43***           .49***      .49***     .47***

*p < .05, **p < .01, ***p < .001.


Previous studies have suggested that individuals’ conceptions of assessment might relate to their conceptions of learning (e.g., Gijbels & Dochy, 2006; Struyven, Dochy, & Janssens, 2005). Moreover, some studies have implied that students’ conceptions of assessment have a potential impact on the quality of their learning (e.g., Peterson & Irving, 2008). Overall, the results of the correlation analysis suggest that the students’ conceptions of science assessment play a role in their conceptions of learning science, and they also provide some criterion-related validity evidence for the COSA. As shown in Table 7, with respect to the second-order aspect of the COSA, students with the surface conception of science assessment as reproducing knowledge tend to hold the lower level conceptions of learning science, such as memorizing, testing, and calculating and practicing. On the other hand, students with either a summative or a formative conception of science assessment tend to hold the higher level conceptions of learning science, such as increase of knowledge, applying, and understanding and seeing in a new way. It should be noted that the summative COSA was also positively related to the COLS factor “calculating and practicing,” whereas the formative COSA was further negatively related to the COLS factor “testing.” The formative COSA had high correlations with the higher level conceptions of learning science.

As Biggs (1998) and Black (2000) have stated, teachers usually carry out classroom assessments for both summative and formative purposes. For example, they may use the information gained from an assessment to inform learning and teaching (i.e., the formative purpose) and also use the same assessment at the end of teaching and learning for summative purposes. Bell and Cowie (2001) likewise suggested that both summative and formative assessment influence students’ learning progress. Formative assessment aims to improve students’ learning and emphasizes their learning process, which corresponds to learning for applying and understanding. On the other hand, since the summative purpose of assessment focuses on students’ accountability, and since summative and formative purposes may overlap in classroom assessment, students with summative conceptions of science assessment may also hold the higher level conceptions of learning science. Taras (2005) also suggested that formative assessment could be seen as summative assessment plus feedback that is used by the learner.

However, for the lower level COLS, a difference in correlation patterns between the summative and formative COSA could be identified. That is, students who conceptualize science assessment as having a summative purpose may also view learning science as calculating and practicing. In general, summative science assessment often uses multiple-choice tests, which involve a number of calculation tasks. On the other hand, as the purpose of formative assessment is not to ensure that students pass examinations or achieve high scores on science tests, students with formative COSA may tend not to view learning science as testing.

DISCUSSION

The major purposes of this study were to explore Taiwanese high school students’ conceptions of science assessment and to develop a conceptions of science assessment questionnaire. Subsequently, the quantitative relationships between conceptions of science assessment and conceptions of learning science were identified. Both qualitative and quantitative research methods were utilized in this study.

Students’ Conceptions of Science Assessment

This study is an initial attempt to explore high school students’ conceptions of science assessment through the phenomenographic method. The six conceptions of science


assessment identified in the first study may provide new insights that improve the present understanding of students’ conceptions of science assessment.

The six conceptions of science assessment identified in this study, namely “reproducing knowledge,” “rehearsing,” “accountability,” “improving learning,” “problem solving,” and “critical judgment,” describe the students’ conceptions of science assessment as a hierarchical system (i.e., from “reproducing knowledge” to “critical judgment”), which concurs with the purpose of using the phenomenographic method as asserted by Marton (1994).

In the first study, almost half of the students viewed science assessment as making them accountable for their own learning (accountability). “Accountability” could be seen as one of the most general purposes of assessment for proving learning; that is, students still view the summative purpose of science assessment as proving learning. Furthermore, the conception of science assessment as “rehearsing” identified in this study may have been shaped by the examination-oriented educational climate and traditional culture in Taiwan. That is, traditional Chinese values and beliefs regarding learning effort lead students to believe that they can improve their ability and achievement through effort and hard work (Chan, 2007; Li, 2003). Therefore, in the educational and cultural climate of Taiwan, some students may conceptualize science assessment as rehearsing.

In addition, the conception of science assessment as critical judgment was identified in this study. This result may imply that some students have experienced advanced assessment that can be used to justify or evaluate their own or others’ opinions or claims. For example, completing and presenting a scientific inquiry activity, individually or as group work, is used as an important form of science class assessment; students need to present their claims and reply to queries from their teacher and classmates. Accordingly, students who recognize this form of assessment may consider that assessment evaluates knowledge claims, challenges one’s understandings, and explores the value of knowledge, and they may therefore conceptualize science assessment as critical judgment.

As previously mentioned, classroom assessment for science learning can be categorized as summative or formative according to its purpose. In the first study, the conceptions of “reproducing knowledge,” “rehearsing,” and “accountability” could be categorized as the summative conceptions of science assessment. These three conceptions usually involve product-oriented assessment work, such as recalling information, practicing the science concepts or formulae in several ways, or presenting knowledge of facts and concepts. These conceptions are more closely related to a quantitative practice of science assessment, as students holding them probably gain or value the feedback of science assessment as proof of how much knowledge has been learned. As a result, students with this perspective would tend to view science assessment as a way of quantitatively calculating how much science knowledge they have learned.

Moreover, the conceptions of science assessment as “improving learning,” “problem solving,” and “critical judgment” could be categorized as formative conceptions of science assessment, which emphasize the process of assessment. Students holding these conceptions tend to view science assessment as a way of exercising higher order scientific thinking or problem-solving skills. Furthermore, students who conceptualize science assessment as “improving learning,” “problem solving,” and “critical judgment” may value the feedback as improving their learning and ability by identifying whether they have integrated and refined their scientific knowledge and then extended it to other situations. Hence, such a practice of gaining and valuing the feedback of science assessment reflects a qualitative view, which can be referred to as formative assessment.

In addition, it should be noted that the conceptions or experiences identified as formative COSA do not necessarily come from implicit formative assessment practices


(which would be the conceptions of formative assessment). For instance, when students are asked to complete a science project with an oral presentation and peer assessment at the end of the semester, on the one hand they realize that such activities are classroom assessments relating to their final grade, while on the other hand they may perceive such assessment as problem solving, which extends their science knowledge to open-ended situations (i.e., the formative purpose of assessment in this study). Further studies are recommended to implement implicit formative assessment in science classrooms to capture students’ conceptions of this particular form of assessment.

The Development of the COSA Questionnaire and the Correlations Between Students’ Conceptions of Science Assessment and Learning Science

In the second study, both the EFA and CFA results indicated that the COSA questionnaire developed in this study has good construct validity and high reliability. That is, the newly developed COSA not only provides science educators with a valid instrument for evaluating what assessment means to students but also reflects students’ assessment activity through the surface, summative, and formative purposes of assessment, which supports the COSA framework proposed in this study.

By using the COSA and the COLS questionnaire (Lee et al., 2008), this study further investigated the role of students’ perspectives on science assessment in their conceptions of learning science. The quantitative findings support the contention that students’ perceptions of science assessment are associated with their science learning conceptions. Three correlation patterns were revealed for the relationships between the COSA and the COLS.

First, the surface COSA was positively related to the lower level COLS. That is, students who conceptualize science assessment as assessing their ability to reproduce the information presented in lectures and/or textbooks tend to hold lower level conceptions of learning science such as memorizing, testing, and calculating and practicing.

Alderson and Wall (1993) and Biggs (2003) suggested that what students learn and how they learn may be influenced by what they perceive they will be assessed on. Accordingly, students who perceive an assessment activity as requiring only low-level cognitive activities such as factual recall may tend to rote learn specific facts or fragmented concepts and reproduce them at the time of assessment, which may result in lower level conceptions of learning. As the lower level conceptions of learning science may lead to rote approaches to learning science (Lee et al., 2008), the above results also provide some evidence that science assessment which pays excessive attention to reproducing knowledge might encourage students to use reproductive means of learning science and thereby discourage their use of in-depth, reflective learning strategies.

Second, the summative COSA was positively related to the higher level COLS and, notably, also to the COLS factor “calculating and practicing.” Tsai (2004) suggested that the COLS of “calculating and practicing tutorial problems” is a conception specific to the domain of science learning, which involves a series of calculating, practicing, and manipulating formulae and numbers. Moreover, summative assessment focuses on providing summative information regarding students’ science learning. Thus, the positive correlation with “calculating and practicing” seems to reveal the main feature of the summative COSA.


Third, the higher level COLS were positively related to the formative COSA with high correlation coefficients, whereas the COLS factor “testing” had a negative relationship with the formative COSA factors “problem solving” and “critical judgment.” That is, students who regard science assessment as enhancing their science learning (i.e., improving learning), as connecting scientific knowledge with practical situations (i.e., problem solving), or as evaluating knowledge claims, challenging one’s understanding, and exploring the value of knowledge (i.e., critical judgment) tend to hold higher level conceptions of learning science such as increase of knowledge, applying, or acquiring understanding and new perspectives.

Moreover, Lee et al. (2008) suggested that the COLS factor “testing” was the strongest predictor of surface approaches to learning science. That is, students with formative COSA tended not to view learning science as preparing for tests and tended not to use rote approaches to learning science. This finding further supports the framing of the formative COSA, since the formative purpose of science assessment does not aim to assess students’ rote learning. The present results also imply that, to encourage students to embrace sophisticated means of learning science, science assessment needs to place greater stress on the connections between scientific knowledge and practical situations and to provide more opportunities to evaluate and challenge knowledge claims. Indeed, when students perceive that assessment tasks require them to demonstrate a personal interpretation or application of the underlying principles, they may be more inclined to learn for understanding, reflecting higher level conceptions of learning.

In sum, researchers have suggested that students holding more sophisticated conceptions of learning science may develop meaningful learning approaches (e.g., Lee et al., 2008). Accordingly, to enhance students’ meaningful learning, it is suggested that science educators utilize varied assessment types to facilitate mature science assessment conceptions (i.e., problem solving or critical judgment); subsequently, students may come to embrace sophisticated learning conceptions and meaningful approaches to learning science. That is, to promote the maturation of students’ science assessment conceptions, science educators may, on the one hand, have to reduce the use of paper-and-pencil multiple-choice assessments, which may easily lead students to reproduce knowledge. On the other hand, teachers may need to construct science assessments that encourage students to become involved in problem-solving activities, to critically evaluate information obtained from others, and to transform and develop their scientific knowledge. For example, the use of authentic assessment and peer assessment may be appropriate. Peer assessment can be defined as the process whereby groups of students comment on and judge their peers’ work (Falchikov, 1995), and it may thus provide opportunities for process-oriented practice of science assessment.

Moreover, in Taiwan, teachers dominate most science assessment in the classroom. Accordingly, it is also important to investigate the relationship between teachers’ and students’ conceptions of science assessment, so that much more information can be gained to improve students’ science assessment and learning. Given that the use of various science assessments may affect learning either negatively or positively, there is a need to adopt science assessment practices that positively influence students’ science learning. In the field of science education, several assessment practices for formative purposes that could enhance students’ science learning have been developed and demonstrated (e.g., Furtak & Ruiz-Primo, 2008; Ruiz-Primo & Furtak, 2007). However, there are still very few empirical studies exploring students’ conceptions of learning and assessment through engagement in formative assessment practices. Further study may create a program whereby students are formatively assessed to identify whether they develop different ideas of learning and assessment. Moreover, future research may need to empirically examine


the interrelationships among students’ conceptions of science assessment, conceptions of learning science, and approaches to learning science. It is hoped that the present study contributes to the body of knowledge in science education by highlighting the relations between conceptions of science assessment and students’ science learning.

REFERENCES

Alderson, J. C., & Wall, D. (1993). Does washback exist? Applied Linguistics, 14, 115–129.
Astin, A. W. (1993). Assessment for excellence: The philosophy and practice of assessment and evaluation in higher education. New York: American Council on Education/Macmillan.
Bell, B. (2007). Classroom assessment of science learning. In S. K. Abell & N. G. Lederman (Eds.), Handbook of research on science education (pp. 965–1006). Mahwah, NJ: Erlbaum.
Bell, B., & Cowie, B. (2001). The characteristics of formative assessment in science education. Science Education, 85, 536–553.
Biggs, J. (1998). Assessment and classroom learning: A role for summative assessment? Assessment in Education, 5, 103–110.
Biggs, J. (2003). Teaching for quality learning at university (2nd ed.). Buckingham, England: The Society for Research into Higher Education and Open University Press.
Black, P. (2000). Research and the development of educational assessment. Oxford Review of Education, 26, 407–419.
Black, P. (2001). Dreams, strategies and systems: Portraits of assessment past, present, and future. Assessment in Education, 8, 65–85.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5, 7–74.
Brooks, J. G., & Brooks, M. G. (1993). In search of understanding: The case for constructivist classrooms. Alexandria, VA: Association for Supervision and Curriculum Development.
Brown, G. T. L. (2004). Teachers’ conceptions of assessment: Implications for policy and professional development. Assessment in Education: Principles, Policy & Practice, 11, 301–318.
Brown, G. T. L., & Hirschfeld, G. H. F. (2007). Students’ conceptions of assessment and mathematics: Self-regulation raises achievement. Australian Journal of Educational & Developmental Psychology, 7, 63–74.
Brown, G. T. L., & Hirschfeld, G. H. F. (2008). Students’ conceptions of assessment: Links to outcomes. Assessment in Education: Principles, Policy & Practice, 15, 3–17.
Brown, G. T. L., Irving, S. E., Peterson, E. R., & Hirschfeld, G. H. F. (2009). Use of interactive-informal assessment practices: New Zealand secondary students’ conceptions of assessment. Learning and Instruction, 19, 97–111.
Brown, G. T. L., Lake, R., & Matters, G. (2011). Queensland teachers’ conceptions of assessment: The impact of policy priorities on teacher attitudes. Teaching and Teacher Education, 27, 210–220.
Chan, K. W. (2007). Hong Kong teacher education students’ epistemological beliefs and their relations with conceptions of learning and learning strategies. The Asia-Pacific Education Researcher, 16, 199–214.
Chin, C., & Brown, D. E. (2000). Learning in science: A comparison of deep and surface approaches. Journal of Research in Science Teaching, 37, 109–138.
Chiou, G.-L., Liang, J.-C., & Tsai, C.-C. (2012). Undergraduate students’ conceptions of and approaches to learning in biology: A study of their structural models and gender differences. International Journal of Science Education, 34, 167–195.
Coffey, J. E., Hammer, D., Levin, D. M., & Grant, T. (2011). The missing disciplinary substance of formative assessment. Journal of Research in Science Teaching, 48, 1109–1136.
Cole, N. S. (1990). Conceptions of educational achievement. Educational Researcher, 19(3), 2–7.
Doran, R. L., Lawrenz, F., & Helgeson, S. (1993). Research on assessment in science. In D. Gabel (Ed.), Handbook of research in science teaching and learning (pp. 388–442). New York: Macmillan.
Duarte, A. M. (2007). Conceptions of learning and approaches to learning in Portuguese students. Higher Education, 54, 781–794.
Entwistle, N. J., & Entwistle, A. (1991). Contrasting forms of understanding for degree examinations: The student experience and its implications. Higher Education, 22, 205–227.
Entwistle, N. J., & Peterson, E. R. (2004). Conceptions of learning and knowledge in higher education: Relationships with study behavior and influences of learning environments. International Journal of Educational Research, 41, 407–428.
Falchikov, N. (1995). Peer feedback marking: Developing peer assessment. Innovations in Education and Teaching International, 32, 175–187.
Fodor, J. A. (1998). Concepts: Where cognitive science went wrong. Oxford, England: Clarendon Press.

Furtak, E. M., & Ruiz-Primo, M. A. (2008). Making students’ thinking explicit in writing and discussion: An analysis of formative assessment prompts. Science Education, 92, 799–824.
Gall, M. D., Gall, J. P., & Borg, W. R. (2003). Educational research: An introduction (7th ed.). New York: Allyn and Bacon.
Gijbels, D., & Dochy, F. (2006). Students’ assessment preferences and approaches to learning: Can formative assessment make a difference? Educational Studies, 32, 399–409.
Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: The Falmer Press.
Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24, 355–392.
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis (6th ed.). New York: Prentice Hall.
Hatcher, L. (1994). A step-by-step approach to using the SAS system for factor analysis and structural equation modeling. Cary, NC: SAS Institute.
Hirschfeld, G. H. F., & Brown, G. T. L. (2009). Students’ conceptions of assessment. European Journal of Psychological Assessment, 25, 30–38.
Kitchener, K. S. (1983). Cognition, metacognition, and epistemic cognition: A three-level model of cognitive processing. Human Development, 26, 222–232.
Koballa, T., Graber, W., Coleman, D. C., & Kemp, A. C. (2000). Prospective gymnasium teachers’ conceptions of chemistry learning and teaching. International Journal of Science Education, 22, 209–224.
Lawrenz, F. (2007). Review of science program evaluation. In S. K. Abell & N. G. Lederman (Eds.), Handbook of research on science education (pp. 943–963). Mahwah, NJ: Erlbaum.
Lee, M.-H., Chang, C.-Y., & Tsai, C.-C. (2009). Exploring Taiwanese high school students’ perceptions of and preferences for teacher authority in the earth science classroom with relation to their attitudes and achievement. International Journal of Science Education, 31, 1811–1830.
Lee, M.-H., Johanson, R. E., & Tsai, C.-C. (2008). Exploring Taiwanese high school students’ conceptions of and approaches to learning science through a structural equation modeling analysis. Science Education, 92, 191–220.
Lee, M.-H., Tsai, C.-C., & Chai, C. S. (2012). A comparative study of Taiwan, Singapore, and China preservice teachers’ epistemic beliefs. The Asia-Pacific Education Researcher, 21, 599–609.
Li, J. (2001). Chinese conceptualization of learning. Ethos, 29, 111–137.
Li, J. (2003). U.S. and Chinese cultural beliefs about learning. Journal of Educational Psychology, 95, 258–267.
Li, W., & Hui, S. (2007). Conceptions of assessment of Mainland China college lecturers: A technical paper analyzing the Chinese version of COA-III. The Asia-Pacific Education Researcher, 16, 185–198.
Liang, J.-C., Lee, M.-H., & Tsai, C.-C. (2010). The relations between scientific epistemological beliefs and approaches to learning science among science-major undergraduates in Taiwan. The Asia-Pacific Education Researcher, 19, 43–59.
Lin, H.-M., & Tsai, C.-C. (2008). Conceptions of learning management among undergraduate students in Taiwan. Management Learning, 39, 561–578.
Linder, C., & Marshall, D. (2003). Reflection and phenomenography: Towards theoretical and educational development possibilities. Learning and Instruction, 13(3), 271–284.
Marshall, D., Summer, M., & Woolnough, B. (1999). Students’ conceptions of learning in an engineering context. Higher Education, 38, 291–309.
Marton, F. (1981). Phenomenography—Describing conceptions of the world around us. Instructional Science, 10, 177–200.
Marton, F. (1994). Phenomenography. In T. Husén & T. N. Postlethwaite (Eds.), The international encyclopedia of education (2nd ed., pp. 4424–4429). Oxford, England: Pergamon Press.
Marton, F., Dall’Alba, G., & Beaty, E. (1993). Conceptions of learning. International Journal of Educational Research, 19, 277–299.
Marton, F., & Saljo, R. (1997). Approaches to learning. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning: Implications for teaching and studying in higher education (2nd ed., pp. 39–59). Edinburgh, Scotland: Scottish Academic Press.
Marton, F., Watkins, D., & Tang, C. (1997). Discontinuities and continuities in the experience of learning: An interview study of high-school students in Hong Kong. Learning and Instruction, 7, 21–48.
Marton, F., Wen, Q., & Wong, K. C. (2003). ‘Read a hundred times and the meaning will appear . . . ’ Changes in Chinese university students’ views of the temporal structure of learning. Higher Education, 49, 291–318.
Ministry of Education (2004). The 10–12 grades science and life technology curriculum standards. Taipei, Taiwan: Ministry of Education.
Panizzon, D., & Pegg, J. (2008). Assessment practices: Empowering mathematics and science teachers in rural secondary schools to enhance student learning. International Journal of Science and Mathematics Education, 6, 417–436.

Peterson, E. R., & Irving, S. E. (2008). Secondary school students’ conceptions of assessment and feedback. Learning and Instruction, 18, 238–250.
Ramsden, P. (1997). The context of learning in academic departments. In F. Marton, D. Hounsell, & N. Entwistle (Eds.), The experience of learning: Implications for teaching and studying in higher education (2nd ed., pp. 198–217). Edinburgh, Scotland: Scottish Academic Press.
Remesal, A. (2011). Primary and secondary teachers’ conceptions of assessment: A qualitative study. Teaching and Teacher Education, 27, 472–482.
Ruiz-Primo, M. A., & Furtak, E. M. (2007). Exploring teachers’ informal assessment practices and students’ understanding in the context of scientific inquiry. Journal of Research in Science Teaching, 44, 57–84.
Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.
Saljo, R. (1979). Learning in the learner’s perspective I: Some commonsense conceptions (Reports from the Institute of Education, University of Gothenburg, No. 76).
Samuelowicz, K., & Bain, J. (2002). Identifying academics’ orientations to assessment practice. Higher Education, 43(2), 173–201.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
Segers, M., & Dochy, F. (2001). New assessment forms in problem-based learning: The value-added of the students’ perspective. Studies in Higher Education, 26(3), 327–343.
Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14.
Stevens, J. (1996). Applied multivariate statistics for the social sciences (3rd ed.). Mahwah, NJ: Erlbaum.
Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: A review. Assessment & Evaluation in Higher Education, 30(4), 325–341.
Taras, M. (2005). Assessment—summative and formative—some theoretical reflections. British Journal of Educational Studies, 53, 466–478.
Thompson, A. G. (1992). Teachers’ beliefs and conceptions: A synthesis of the research. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 261–283). New York: Macmillan.
Tsai, C.-C. (2004). Conceptions of learning science among high school students in Taiwan: A phenomenographic analysis. International Journal of Science Education, 26(14), 1733–1750.
van de Watering, G., Gijbels, D., Dochy, F., & van der Rijt, J. (2008). Students’ assessment preferences, perceptions of assessment and their relationships to study results. Higher Education, 56, 645–658.
Vermetten, Y. J., Vermunt, J. D., & Lodewijks, H. G. (2002). Powerful learning environments? How university students differ in their response to instructional measures. Learning and Instruction, 12(3), 263–284.
Wang, J. R., Kao, H. L., & Lin, S. W. (2010). Preservice teachers’ initial conceptions about assessment of science learning: The coherence with their views of learning science. Teaching and Teacher Education, 26(3), 522–529.
