
Journal of Education Policy, Vol. 23, No. 4, July 2008, 379–395

ISSN 0268-0939 print/ISSN 1464-5106 online
© 2008 Taylor & Francis
DOI: 10.1080/02680930801923773
http://www.informaworld.com

Looking for a balance between internal and external evaluation of school quality: evaluation of the SVI model

Henk Blok*, Peter Sleegers and Sjoerd Karsten

Department of Education, University of Amsterdam, Amsterdam, The Netherlands

(Received 7 March 2007; final version received 21 November 2007)

This article describes the results of a study into the utility of the SVI model, a model in which internal and external evaluation are balanced. The model consists of three phases: school self-evaluation, visitation and inspection. Under the guidance of school consultants, 27 Dutch primary schools have built up two years of experience with the SVI model. The results show that the school leaders developed a positive attitude towards school self-evaluation and visitation. They found that both self-evaluation and visitation have improved their insight into the quality of the school. However, a content analysis of the school self-evaluation reports shows that the school self-evaluations are often of low quality. For example, it appeared that most of the self-evaluation reports do not provide answers to the questions the schools had formulated at the beginning of the self-evaluation. Moreover, the teams of critical friends and the inspectors concluded that the school self-evaluations have many shortcomings. Based on these results, we conclude that school self-evaluation is a very difficult task for most schools. It is therefore crucial that schools receive external support and that they build up experience with school self-evaluations over a period of years.

Keywords: school self-evaluation; visitation; school inspection; quality care; school reform; accountability

Introduction

‘Quality needs to be demonstrated’, argued Adriaan D. de Groot, the founding father of the empirical approach in Dutch psychology and educational science. The quotation is taken from an essay on assessing educational quality (de Groot 1984). De Groot concludes that such assessment should be based on what students learn from education. He argues against the view that the quality of education can be deduced from process indicators such as the number of hours of teaching, the social climate of the school or the involvement of parents in schools. Although nowadays such a view receives little support among researchers, it is still a topical issue in school practice, as we will show in our contribution.

Over the last 10–20 years, greater accountability has been advocated for schools and school systems as an integral part of much broader school reform initiatives (Leithwood, Edge, and Jantzi 1999). Although the calls for greater accountability have been remarkably similar across many countries, different strategies and tools for increasing accountability have been developed, ranging from increasing competition among schools, site-based management, setting professional standards and a central curriculum to national assessment of key areas of the curriculum and student achievement by school inspection. In many cases, these

*Corresponding author. Email: [email protected]


accountability strategies or tools are the central vehicles for reform, on the assumption that holding schools, parents and students accountable for attaining high standards will trigger schools to improve their quality. More recently, policy-makers and researchers have stressed the need for more responsive forms of accountability by using models that try to find a balance between internal and external evaluations of schools (Nevo 2001; Mulford 2005). The concept of ‘earned autonomy’ as recently developed in England and the Netherlands is a good example of such a more responsive accountability approach. Earned autonomy involves freedom to manoeuvre beyond prescribed accountability programmes for schools that have demonstrated that they are performing well according to inspection evidence and test results.

In the Netherlands, the concept of earned autonomy has been developed as part of the implementation of the Dutch Educational Supervision Act in 2003. Within this renewed inspection framework, schools are expected to be capable of developing their own quality care through self-evaluation (Ehren, Leeuw, and Scheerens 2005). The self-evaluation results of the schools should be valid and reliable and provide information about indicators included in the inspection framework. If these requirements are met, schools will be confronted with fewer and less thorough inspection visits.

In order to help Dutch schools produce valid and reliable self-evaluation results that will meet the criteria of the inspection, a combined development and research project, called Ziezo, was started in 2004. This project attempted to implement the SVI model, in which internal and external evaluation are balanced. The model consists of three phases: school self-evaluation, visitation and inspection. The school self-evaluation is organised according to the school’s own needs and ends with the school self-evaluation report. During the visitation phase, the school is visited by a team of representatives from other schools. The goal of the visitation is that the visiting team acts as a fellow teacher or critical friend (Mortimore 1983; Swaffield 2004) and evaluates the validity of the self-evaluation report. In the inspection phase, the Inspectorate of Education in the Netherlands performs its regular, legally determined inspection visits. The inspectors utilise the self-evaluation reports and the reports provided by the collegial visitations for their inspection research. As has already been explained, a broad and convincing self-evaluation can imply that the inspectors need to perform less independent research.

In order to learn as much as possible from Ziezo, the project and the SVI model have been evaluated. The research was oriented towards the participating schools’ experiences as well as the schools’ self-evaluation reports, the reports of the visitation teams and the inspectors’ judgements. The article below describes the results of this evaluation and uses all the sources mentioned. The main questions are whether the SVI model is feasible for schools, and whether it is a valuable interpretation of the principle of ‘earned autonomy’ and can therefore be considered a more responsive accountability approach.

School self-evaluation

Although Ralph W. Tyler advocated the importance of a periodic evaluation of the school as early as 1942 (Tyler, Smith, and the Evaluation Staff 1942), school self-evaluation did not become a major trend in several OECD countries until the late seventies or early eighties, strongly influenced by the growing calls for greater accountability (Simons 1987; Mortimore 1983). Under the notion of school self-evaluation, a variety of activities have emerged throughout the years, ranging from process evaluation of classroom practices (action research), school-focused in-service education for teachers and continuous organisational change to more mechanistic modes of accountability such as cost-benefit analysis


and performance-based education. Although all these activities broadly pursue the same aim of encouraging schools to evaluate themselves, the underlying assumptions, purposes, audiences and interests differ. One of the most controversial aspects is the issue of control: who has control of the process, who has access to the products and whose interests are served (Simons 1987; Ball 2003). As a consequence, key themes that are often discussed in the literature on school self-evaluation are: the aims of school self-evaluation (accountability vs. school improvement), the form (process evaluation vs. mechanistic outcome-oriented approaches), the procedure (management tool or shared and agreed ends) and the organisational conditions (sufficient own expertise and experience vs. external training and support); see MacDonald (1978), Adelman and Alexander (1982), Elliott (1983), Mortimore (1983), Shipman (1983), Simons (1987).

Despite the growth of the school self-evaluation movement, supported by different initiatives, projects, guidelines and workbooks, systematic empirical research into school self-evaluation is scarce and far from comprehensive. Most of the studies into self-evaluation by schools are primarily oriented towards the experiences and opinions of the schools and teachers involved. The utility of school self-evaluation for school improvement, for instance, has received little attention. To illustrate this, we will review existing research.

Clift, Nuttall, and McCormick (1987) describe the first experiences of schools in England with school self-evaluation. The researchers concluded that self-evaluation did not contribute to improvements at the school in any way. They were surprised by the naive approach of schools that paid too little attention to the reliability and validity of the self-evaluation. Many schools also lacked sufficient manpower or time. The researchers suggest connecting self-evaluation with school improvement, not with accountability purposes, from the very start of the programme.

Connecting self-evaluation with two different aims – accountability purposes and school improvement – can lead schools into difficulties, as was also found by Scheerens, Van Amelsfoort, and Donoughue (1999). These researchers studied school self-evaluation in four countries: the UK, the Netherlands, Spain and Italy. These countries show a wide diversity in the ways they approach evaluation of schools, as a result of different traditions and educational policies. Nonetheless, there were also important similarities. All four countries see self-evaluation not only as a means for school improvement, but also as an opportunity to meet the demand for schools to be accountable to the community. This double goal can cause tension, especially if there is significant pressure for the school to achieve good results. This pressure can lead schools to adopt strategic behaviour, for instance, in order to soften the consequences of undesirable outcomes. A further observation was that school staff received extensive training in order to be able to carry out the school self-evaluation. Scheerens, Van Amelsfoort, and Donoughue judge extensive external support to be an important precondition for successful self-evaluation:

Support may be required from the very first step in which the purpose of the evaluation is to be explained vis-à-vis internal and external requirements up to the final stage where advice may be needed in making practical use of evaluation results. (1999, 104)

Finally, in all four countries it seemed that all the participants were convinced of the value of self-evaluation by schools and were willing to continue the activities.

The results of the research by Davies and Rudd (2001) also show that participants experience the self-evaluation as positive. In Davies and Rudd’s survey, nine Local Education Authorities (LEAs) and 23 primary and secondary schools were involved. Their study methods involved a review of key documents, and in-depth interviews with LEA personnel


and school staff in schools that were in the process of doing self-evaluation. Many benefits from self-evaluation were noted, both for LEA officers and for schools. For instance, LEAs found self-evaluation a useful entry to their schools, and they appreciated the way school self-evaluation helped them to develop an overview of how their schools were performing. On the other hand, schools were happy with a change in the school culture – for example teachers becoming more open to classroom observation by colleagues – and the impetus self-evaluation gave to teacher development. Several issues or difficulties were also noted, most notably, workload implications for school staff; the need for LEAs to achieve a balance between management and support; and the requirement for schools to take an appropriate degree of ownership of self-evaluation. Ownership had mostly been confined to the management level, excluding teachers, students and parents. Overall, however, there can be little doubt that both LEAs and the schools had positive attitudes towards self-evaluation.

Comparable positive experiences with school self-evaluation have also been reported by Meuret and Morlaix (2003) in their European research project ‘Evaluating quality in school education’, in which 101 schools for secondary education in 18 countries were involved. The school self-evaluation was performed according to a standard procedure suggested by the project leaders. Some important parts of the procedure were (1) appointing a steering committee, (2) drawing up a general diagnosis based on the so-called ‘self-evaluation profile’, (3) selecting topics which qualified for further research, (4) performing further research, and (5) formulating and executing improvement actions. The general impression is that the steering committee was positive about the project. They were of the opinion that ‘the project has enhanced the school’s ability to improve’ (mean score 4.5 on a ‘1’ = low to ‘6’ = high scale). They were also positive about the statement that ‘the project has enhanced the effectiveness of the school’ (mean score 3.7 on the same scale). Further analysis showed that both of the first steps (appointing a steering committee and drawing up a general diagnosis) were related to the perceived success. Schools which had given these steps more attention were more positive about the results achieved.

That specific conditions can promote the success of the self-evaluation was also found in a study by Hendriks, Doolaard, and Bosker (2002) on the development and use of instruments for quality care. These researchers described the development of a set of instruments that schools could use for self-evaluation (the so-called ZEBO instrument). The set of instruments consisted of three parts: student achievement tests; questionnaires to determine academic content covered in lessons; and questionnaires to describe school process indicators (e.g. staff cohesion, school and class climate, models of instruction, time on task, test use). The set of instruments has been used with 123 primary schools. Participating schools received in return both their own averages (at the school and class level) as well as the averages of all schools combined. This enabled the schools to compare their own results with those of the 123 schools. The usefulness of this feedback was studied by questionnaires (at all participating schools) and in-depth interviews with school leaders, teachers and students (at eight schools). This enabled the researchers to identify several conditions for successful self-evaluation: (1) the evaluation process should be transparent to whoever is involved, (2) school personnel has to be open to feedback and possible criticism, (3) the instruments used should be of excellent quality, (4) the school should have ownership over the results and the conclusions attached to them, and (5) self-evaluation should be carried out on a regular basis every three to five years (Hendriks, Doolaard, and Bosker 2002, 135–6).

More recently, the results of another Dutch study into the use and effects of ZEBO have been reported by Schildkamp (2007). In her study, 79 primary schools participated and administered ZEBO three times, in 2003, 2004 and 2006. After each of the three administrations, teachers and principals were interviewed to obtain information on the use of ZEBO


and the factors influencing the use of ZEBO. To study the effects of ZEBO use, data on the performance of two cohorts of pupils on standardised spelling and mathematics tests over a period of five years were gathered. The results showed that the use of ZEBO was limited in 2003, 2004 and 2006. The results also showed that the use of ZEBO had no effect on students’ learning outcomes. Some small effects were found on teachers’ behaviour, achievement orientation and professional development. School organisational factors such as the innovative attitude and innovative capacity of schools, available time and resources, the degree to which an open school climate exists and the degree to which the principal takes responsibility for the ZEBO output were found to mediate the use of ZEBO. Although the findings show that the use of school self-evaluation instruments and the effects of this use on learning outcomes are limited, the results also suggest that several school organisational characteristics matter and that self-evaluation can affect professional practice and enhance professionalism.

Kyriakides and Campbell (2004) offer a review of research on school self-evaluation. From this review, one can add two more success conditions, not mentioned by Hendriks, Doolaard, and Bosker (2002). At the start of a school self-evaluation, it is important to establish clarity and consensus among stakeholders about the aims of the enterprise. Consensus helps to raise the involvement of teachers, students, parents and the wider community. Generating criteria or standards for assessing the different qualities of the school is a further success condition. This condition had been met in the project reported by Hendriks, Doolaard, and Bosker through offering schools norm-referenced feedback. However, Kyriakides and Campbell advise schools to create their own criteria. This requires participants to discuss criteria, exchange ideas and negotiate expectations. The criteria-generating process provides participants with a useful picture of how other groups of stakeholders view the school and what their expectations of the school are. Still, Kyriakides and Campbell consider school self-evaluation to be a difficult enterprise with many hidden pitfalls. They consider validity to be the most serious problem.

One way to improve the validity of the self-evaluation results of schools is to connect self-evaluation as a method for internal evaluation with external evaluation procedures, as proposed for instance by Nevo (2001). He describes how internal evaluation can benefit from external evaluation by legitimising the validity of the internal evaluation, but also acknowledges a benefit in the other direction. Learning from external evaluation will be supported if a school is engaged in self-evaluation, and if results from both enterprises point in the same direction. Nevo suggests nine conditions that have to be met to establish a constructive dialogue as a basis for coexistence. For instance, ‘the interaction between internal and external evaluators should be based on a two-way flow of information in a process of mutual learning’ (Nevo 2001, 102). The two parties are not necessarily equal in authority, but each has something to learn from the other, and something to teach the other.

Indeed, in England the school self-evaluation and the external quality review – in the form of investigations by the inspectors – are connected to one another (Office for Standards in Education [Ofsted] 1998). Still, there does not seem to be any degree of equality. It has been stated by Ofsted that:

Self-evaluation complements external inspection and it is clearly desirable to use shared criteria for both … to be efficient, effective and complementary, evaluation by the school should draw from the criteria and indicators used in inspection, and should employ similar techniques. (1998, 2)

Several authors strongly oppose this type of externally driven school self-evaluation that mainly focusses on technical information (e.g. student outcomes), and make a plea for


democratic forms of school self-evaluation that provide information on the quality of educational practice and foster professional learning in schools (Simons 1987; Saunders 1999). According to Simons, ‘the major justification for school self-evaluation is enhanced professionalism and it is best introduced as a continuing part of professional practice, not as a short response to political pressures’ (Simons 1987, 198). Externally initiated school self-evaluation practices may decrease the school’s commitment, as a result of which necessary conditions for successful school self-evaluation such as shared goals and ownership may not be realised (Mortimore 1983; Simons 1987; Saunders 1999). In addition, externally driven school self-evaluation that mainly focuses on measures of productivity or output, a mode of regulation called performativity by Ball, can have profound paradoxical effects on teachers and schools: it might lead to resistance, distrust, deprofessionalised teachers, capitulation, cynical compliance and strategic response behaviours (Ball 2003). Using nationally prescribed evaluation models can bring the threat of a monoculture: schools starting to look alike (DiMaggio and Powell 1983).

Looking for a balance: the Ziezo project

In some of the publications mentioned, the undertone is positive: school self-evaluation is important and feasible, and schools have experienced positive results with it. At the same time, the evidence for this positive image is relatively scarce. Despite the fact that most of the participants have a positive attitude towards self-evaluation and are convinced about its value both for school improvement and accountability, the validity of the self-evaluation is considered a key problem. Given the recent developments towards more responsive forms of accountability, this validity problem needs serious attention from policy-makers, school leaders, teachers and researchers. A valid self-evaluation seems to be a necessary condition for finding a new balance between internal and external evaluation and realising the concept of earned autonomy.

As mentioned earlier, to explore a new balance between internal and external evaluation, the project Ziezo started in 2004. Ziezo’s main goal was to guide the introduction and implementation of the SVI model, consisting of three components: school self-evaluation, visitation and inspection. The project lasted two-and-a-half years (August 2004–June 2006): one year for the self-evaluation, half a year for the visitation and a year for the inspection. Twenty-seven primary schools participated in Ziezo, including three schools for special education. The schools registered for the programme on their own initiative and therefore probably do not form a representative random sample. These are schools that are positive about self-evaluation and find it important to develop an effective system of quality care. In other areas, the schools may be fairly representative. They are located in different parts of the country, both in urban and rural areas, and all denominations are represented (public, Christian and Islamic schools).

An important principle of Ziezo was that the responsibility for the self-evaluation should remain with the school. This was intended to increase the school’s commitment to performing the self-evaluation. The school was expected to observe the agreed-upon framework for performing the school self-evaluation, and to draw up an integrated report. The schools were divided into three regionally operating pilot groups. Each group was supervised by one of the national centres for school consultancy. The supervision was largely adjusted to meet the wishes of the individual schools, but they were also offered a standard programme. All schools received a source book specially written for the project. This source book contained general information about the project’s main phases, combined with a number of more specific suggestions and examples. Each school received an extra budget of 2000 euros per


year to pay for assistance or consulting. All of the schools were visited in the initial phase by the school consultants to discuss school-specific conditions and wishes. The schools were also invited to a number of meetings during the course of the project. One important theme discussed during these meetings was the progress of the self-evaluations at the schools involved. Other meetings were dedicated to visitation. These meetings offered schools the opportunity to learn from each other through mutual dialogue. Schools were also encouraged to organise mutual school visits and informal exchanges. Most of the schools actually did so.

In the visitation phase, each school was visited during the course of a day by a visitation team consisting of fellow school leaders and teachers. This offered the advantage that the participating schools could experience visitation from two perspectives, that of the visitor as well as the visited. Ziezo attempted to stimulate a two-way flow of information in a process of mutual learning between the internal and external evaluators. The visitations were prepared both orally and in writing. The visitation teams became acquainted beforehand with the school self-evaluation report. The teams also often had access to other documents such as school prospectuses, timetables, lesson plans, etc. In a preliminary discussion, the members of the visitation teams determined which questions should be asked and which supplementary research activities (such as interviews or observations) would be undertaken. This preliminary discussion was occasionally held on an earlier day, but was also held at the start of the day of the visitation itself. The visitation teams then put their observations and conclusions on paper in the form of a visitation report as soon as possible after the visitation day. Afterwards, the report was offered to the visited school, along with a request for a written reaction to the visitation report, including the conclusions and suggestions made by the visitation team.

The inspection phase was conducted under the direct responsibility of the Inspectorate of Education in the Netherlands, using that organisation’s standard procedures. The schools were visited by varying teams of two inspectors, involving a total of 20–30 inspectors. These inspections consisted of two school visits: a preparatory visit and the regular inspection visit. In the preparatory visit, the inspectors provided a substantiated verbal conclusion on the submitted self-evaluation and visitation reports. This visit was also used to clarify the subjects that would be given attention during the regular visit, considering the self-evaluation report and the school’s specific wishes. During the regular visit, the inspectors provided a differentiated quality evaluation of the school based on a standard set of quality indicators. These indicators can be divided into three areas: quality care, teaching and learning processes, and learning outcomes. The inspectors had met once beforehand in order to be informed about the Ziezo project. During this meeting, they discussed the school self-evaluation and visitation reports. The discussion was conducted based on an internal note that offered further guidelines for evaluating the school self-evaluation. It may be assumed that the inspectors were sufficiently acquainted with Ziezo’s goals and methods. In order to ensure that the inspectors utilised a common measure of severity, a consultant inspector who was also acquainted with Ziezo reviewed each draft report.

The SVI model may seem complicated at first glance, but it also has important potential advantages. As the model begins with the school completing a school self-evaluation at its own initiative, it can be expected that the school’s commitment to the project is strong. Since the school knows that the school self-evaluation will be closely scrutinised – first by the visitation team and then by the inspectors – it will probably strive to provide a reliable and valid self-evaluation. The visitation and the inspection provide a double check on the validity of the school’s self-evaluation report.


Research questions

We studied the feasibility of the SVI model. More specifically, we wished to know whether the schools in the Ziezo project were actually capable of performing a reliable and valid self-evaluation. The answer to this question is important for the further development of the principle of earned autonomy. First, we will evaluate the school self-evaluation reports based on general principles of empirical research. Next, we will analyse how two external teams react to the school self-evaluations: the ad hoc groups of critical friends (the V phase of the SVI model) and the inspector teams on behalf of the Inspectorate of Education in the Netherlands (the I phase of the SVI model). Finally, we will discuss the experiences and beliefs of the school leaders regarding the SVI model. The research questions are as follows:

(1) How valid is the school self-evaluation?
(2) How do the visitation teams evaluate the validity of the school self-evaluations?
(3) How do the inspectors evaluate the validity of the school self-evaluations?
(4) What value do the school leaders place on the individual components of the SVI model, in particular the school self-evaluation and the visitation?

Validity is used here in the sense of construct validity, i.e. the extent to which a measure accurately reflects the concept that it is intended to measure. Or, in our context, the extent to which the outcomes and conclusions of the school self-evaluation reflect the school’s quality. Ideally, we expect a perfect relation between the perceived and the real qualities of schools.
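To make this notion concrete, the relation between perceived and real quality could in principle be expressed as an agreement statistic between internal and external judgements of the same schools. The sketch below is illustrative only and was not part of our study; the ordinal codes, the school ratings and the choice of Spearman’s rank correlation are assumptions made purely for the example.

# Illustrative sketch only; this statistic was not computed in the Ziezo study.
# Hypothetical ordinal codes per school (0 = weak, 1 = partly adequate, 2 = strong)
# for the school's own quality judgement and an external (inspection) judgement.
from scipy.stats import spearmanr

self_evaluation = [2, 1, 2, 0, 1, 2, 1, 0, 2, 1]   # hypothetical internal judgements
inspection      = [1, 1, 2, 0, 0, 1, 1, 0, 2, 1]   # hypothetical external judgements

# Under perfect construct validity the two orderings would coincide (rho = 1).
rho, p_value = spearmanr(self_evaluation, inspection)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")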

Research methods

In this study we used two research methods – content analysis of the reports written within the framework of Ziezo and questionnaires for the school leaders (Table 1).

The content analyses were performed based on specifically developed analysis frameworks for the three types of reports (self-evaluation, visitation and inspection). One member of the research team performed the content analyses. Another member of the team then verified the analyses for correctness and completeness. When their insights differed from one another, the evaluators reached a common solution via methodological discourse. This procedure prevents a statistical evaluation of reliability. However, in qualitative research, intersubjectivity is often more appropriate than reliability. The analysis procedure utilised here capitalises largely on the argumentative interpretation of intersubjectivity (Smaling 1992). An essential part of argumentative intersubjectivity is the methodological discourse in which participants have to strive for the so-called ideal speech situation.

We also used two questionnaires for school leaders, regarding the school self-evaluation phase and the visitation phase. These questionnaires were loosely based on the questionnaires developed by Meuret and Morlaix (2003) in their research into evaluating quality in school education of 101 secondary schools from 18 different countries. As the inspection phase falls outside the responsibility of Ziezo, no questionnaire was drawn up for this phase. Two issues were addressed in the questionnaires: a personal evaluation of the results and a preview in the form of attitude questions (what is your opinion of the usefulness in the context of quality care?). We formulated 8–12 statements for both issues (about 20 items per questionnaire). Some examples are as follows:

(1) the self-evaluation phase has contributed to a more pleasant work climate at school (scale ‘school self-evaluation, observed effect’);

(2) self-evaluation does not tell us anything new (scale ‘school self-evaluation, attitude’);
(3) the visitation has increased the teachers’ commitment to quality care (scale ‘visitation, observed effect’); and
(4) visitation leads to better management of the school (scale ‘visitation, attitude’).

The reliability (Cronbach’s alpha) of the four scales was satisfactory, ranging from 0.78 (school self-evaluation, observed effect) to 0.92 (visitation, attitude).
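For readers unfamiliar with the statistic, the sketch below shows how a Cronbach’s alpha of this kind is typically computed from item responses. It is a minimal illustration with invented data, not the Ziezo questionnaire data; the number of respondents, the scale length and the item scores are assumptions made for the example.

# Minimal sketch of Cronbach's alpha; the item responses below are invented,
# not the Ziezo questionnaire data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores on one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: 6 school leaders answering a 4-item attitude scale (1-5 agreement).
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 4, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")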

The cooperation of the schools remained strong. In the course of the Ziezo project, the number of participants decreased from 27 to 24. As the last column of Table 1 shows, the response percentages nevertheless remained above 80%. Generalisation of the results to the group of all Ziezo participants is therefore justified.

Results

The school self-evaluation reports

The first result is that four of the 27 schools did not conduct any empirical research. Instead, they simply described their resolutions or produced an ‘armchair analysis without any empirical evidence’. The reports of these four schools played no further role in the research, as they fall outside our analysis framework. From the remaining self-reports, we can discern that roughly half of the schools planned to evaluate the quality of their education in a broad sense (Table 2). The other half chose more specific subjects, including the provisions for students with special needs or the school’s image among the parents. The justifications of the choices made are disappointing. In only three of the 23 self-reports were the evaluation goals related to an analysis of the initial situation. One school determined the evaluation goals in a common decision with the team. Based on the actual concerns within the team, the school performed research on the quality of the student care and the school’s image among the parents.

Schools used diverse instruments to gather empirical data. They frequently used existing instruments for school self-evaluation, consisting of questionnaires to measure teacher, student and parent satisfaction. All of the schools gathered data on teacher beliefs and practices, usually by having them fill in a questionnaire, but also through interviews and class observations. Fourteen schools involved parents’ views in their self-evaluations, usually in the form of a questionnaire. Ten schools also involved the beliefs of students in their self-evaluation, also by using a questionnaire. Two schools considered student progress. Four schools paid sufficient attention to issues of generalisability and reliability, for instance, by taking measures to prevent sample bias.

Table 1. Overview of the research design.

Research question | Research method | Number of schools in the sample (total number of schools participating in the Ziezo phase concerned)
1 | Content analysis of the self-evaluation reports | 23 (27)
2 | Content analysis of the visitation reports | 24 (24)
3 | Content analysis of the inspection reports | 20 (a) (24)
4 | Questionnaires for school leaders about (1) school self-evaluation and (2) visitation | (1) School self-evaluation: 23 (27); (2) Visitation: 22 (24)

Note (a): In fact, we had 21 reports at our disposal, because one school with two locations drew up two separate reports.


Table 2. Evaluation ratings of the school self-evaluation, visitation, and inspection reports. For each type of report, the variables, value labels and number of reports per label are listed.

School self-evaluation reports (n = 24)
What is the goal of the self-evaluation?
  Education quality in a broad sense: 10
  Specific subject (e.g. image among parents): 9
  No goal indicated: 4
Which type of instrument did the school use to gather empirical data?
  Parent questionnaire: 14
  Student questionnaire: 10
  Teacher questionnaire: 16
  Internal audit (for example, class observations): 14
  External audit: 1
  Student achievement tests: 2
Do the chosen instruments fit the goal of the self-evaluation?
  Yes: 8
  Partly yes, partly no: 11
  No: 4
Are the conclusions related to the goal of the self-evaluation?
  Yes: 6
  Partly yes, partly no: 9
  No: 8
Were the consequences of the self-evaluation for school improvement discussed?
  Yes: 12
  Partly yes, partly no: 6
  No: 5

Visitation reports (n = 24)
What is the subject of the visitation?
  Exclusively the self-evaluation: 8
  The self-evaluation and other questions: 12
  Other questions only: 4
Which data gathering methods were used during visitation?
  Interviews with school staff: 24
  Interviews with parents: 11
  Interviews with students: 7
  Class observations: 17
What are the conclusions with regard to the self-evaluation report?
  The report was largely confirmed: 1
  The report was partially confirmed: 5
  The report was considered to be largely inadequate: 13
  Conclusions regarding the report were missing: 5

Inspection reports (n = 21)
What is the opinion regarding the validity of the self-evaluation?
  Valid: 2
  Partly valid: 3
  Invalid: 10
  No judgement given: 6
How did the Inspectorate evaluate the data provided by the school?
  The data were largely accepted or verified: 2
  The data were partially accepted: 10
  The data were ignored: 9


Ideally, the data collected should complement the formulated evaluation goals. Only eight schools sufficiently met this criterion. An almost systematic omission was that the learning outcomes were ignored, although most schools had chosen a broad approach to quality. The self-reports also showed shortcomings in other respects. For example, the correspondence between evaluation goals and final conclusions was often insufficient. Only six schools answered, in their conclusions, the questions they had formulated at the beginning of the self-evaluation. Only 12 reports included more or less detailed plans to deal with the problems and complaints encountered.

In conclusion, almost all of the self-reports show significant shortcomings. The main points are: they deal with partial aspects of quality, the validity of the reported results is not easy to evaluate and the consistency between the main elements (question, method, results, conclusion) is insufficient. The self-reports do not seem to be useful as a basis for accountability or for school improvement.

The visitation reports

Twenty-four of the 27 original Ziezo schools participated in the Ziezo visitation phase. We were able to analyse all of the visitation reports and in most cases (20 of the 24) we were able to include the school’s written reaction to the visitation report, in which the school described which actions would be undertaken based on the visitation team’s conclusions and recommendations.

The visitation team was usually composed of six or seven members, including two external professionals who performed the tasks of chairman and secretary. The members usually came from two or three different schools. The teams were usually independent of the school, with one exception being a team including two school leaders from different locations of the same school.

From the visitation reports, it seems that eight teams limited their examinations to the school’s own self-evaluation, as was the original intention (Table 2). In 12 other cases, other subjects besides the self-evaluation were also considered. It is our impression that various schools wished to expand the central theme of the visitation, and that the visitation teams agreed with their wishes. In four cases, the self-evaluation was not considered at all.

During all visitations, the visitors held discussions with school leaders, class teachers and often other staff members such as student counsellors or remedial teachers. Eleven visitation teams also held interviews with parents, and seven had interviews with students. The visitors observed lessons and visited classes in 17 of the schools. These numbers give the impression that the available sources were not always systematically consulted or observed. Considering the size of the visitation teams, it should have been possible to apply a broader set of instruments than was actually the case.

The conclusions reached about the schools’ self-evaluations were often very critical. In only one case were the conclusions and recommendations in the self-evaluation largely supported by the visitation team. Five other cases received partial support.

The schools’ reactions showed that they largely accepted the visitation team’s conclusions. In six cases, we were unable to determine the schools’ responses because the school simply did not send a reaction to the visitation team. What the schools actually did with the visitation team’s recommendations remains uncertain. Approximately half of the schools indicated that the visitation team’s recommendations would be followed up, but this was almost always a global declaration of intent, without concrete measures or a timeframe.


The inspection reports

The Inspectorate’s judgement regarding the self-evaluation and the visitation (including the school’s reaction to it) was negative in 10 of the 21 cases (Table 2). The judgement was positive or partially positive in five cases. It is interesting to note that the Inspectorate’s judgements were not included in six cases. When the inspectors stated a judgement, it was usually insufficiently supported by arguments. It remains unclear whether the negative judgements were based on the empirical data gathered by the school, or on the qualitative conclusion that the school reached based on that data. The inspectors also made no differentiation between the sources of their information (the school self-evaluation or the visitation report). They did, however, sometimes distinguish between utility in the context of monitoring and utility for school improvement. In five cases, the self-evaluation was emphatically declared to be suitable as a basis for school improvement, but to fall short as a measure of accountability.

The results were more positive regarding the use that the inspectors made of the data. We found indications that information from the school was accepted or verified in 12 of the 21 reports. This information was mostly accepted in two of these cases, while in the other 10 only part of the information was accepted. In most cases, it remained unclear what acceptance or verification implied precisely. The information accepted was usually not labelled as such.

Despite the Inspectorate’s relatively negative judgements regarding the utility and validity of the information provided by the school, we regularly found positive comments in the inspection reports. Eleven of the 21 reports indicated that participation in Ziezo had visibly stimulated the school to pay attention to quality care. Another positive aspect is that the Ziezo schools were scored more positively on three of the seven indicators than the schools that had not participated in Ziezo. It seems likely that this difference is the result of the activities the Ziezo schools had undertaken.

The perceptions and attitudes of school leaders

The school leaders were, in general, positive about their self-evaluation (Table 3). They found that it had improved their insight into the school (A.04, 91%) and had contributed positively to the school’s innovative capacity (A.05, 87%). Other positive effects seemed to be a greater involvement by the teachers (A.03, 61%) and a more pleasant work climate at school (A.01, 35%). Some of the school leaders also thought that the school’s effectiveness had improved as a result of the self-evaluation (55%). The school leaders’ attitude regarding self-evaluation was also positive in many respects. They felt that there is much to learn from school self-evaluation (B.01, 91%), that it can lead to better management (B.05, 91%) and that its implementation is feasible (B.08, 91%). The only problem was time: school self-evaluation is time-consuming (B.03, 65%).

The school leaders were also generally positive about the visitation phase. Approximately half of the schools showed an increased involvement by the teachers in quality care (A.03). The school leaders indicated that the visitations had contributed to the school’s capacity to improve (A.05, 91%) and had improved their knowledge of and insight into the school (A.04, 73%). The school leaders’ attitudes towards visitation as an instrument for quality care were also generally very positive. They found that although visitation cost a lot of time (B.03, 77%), it was still time well spent. They considered visitation to be feasible (B.08, 82%), reliable (B.10, 86%) and informative (B.12, 82%). In the school leaders’ opinion, the visitation phase of Ziezo was unmistakably successful.


Conclusions

The school self-evaluation reports were evaluated three times for their validity: by researchers (ourselves), by visitation teams and by the inspectors. These evaluations were conducted independently of one another, with the exception that the inspectors also had access to the visitation reports. This circumstance is a consequence of the SVI model, which specifies that the inspectors should consult the self-evaluations as well as the visitation reports. The most remarkable result is that all three of the ‘parties’ were very critical about the validity of the school self-evaluations concerned. The researchers determined that almost all of the self-reports exhibited important shortcomings. The visitation teams found only one school self-report to be valid. The inspectors were, in relation to the other parties, the most accommodating: they considered five of the school self-evaluation reports to be more or less valid. We as researchers, with our clearly defined notions about validity, found our negative conclusions confirmed by the visitation teams and the inspectors. This strengthens our belief that notions about validity were more or less shared by all participants, although these notions had never been explicitly discussed in the context of the Ziezo project.

The project staff of Ziezo consciously chose to guide the schools only to a limited degree. The schools had a reasonably large degree of freedom to organise their school self-evaluations. The intention was that this would increase the schools’ commitment and sense of ownership, but also that it would lead to a diversity of self-evaluations. In retrospect, we must conclude that this approach has worked in relation to the schools’ commitment towards the project, but that it was less successful in guaranteeing the quality of the self-evaluation reports.

Table 3. The perceptions and attitudes of school leaders on school self-evaluation (n = 23) and visitation (n = 22); selected items. Entries are the percentage of school leaders agreeing, for the self-evaluation and the visitation respectively (self-evaluation / visitation).

(A) Perceived effects: the self-evaluation/visitation ...
  A.01 ... has contributed to a more pleasant work climate at school: 35 / 23
  A.02 ... has improved the students' involvement: 22 / 9
  A.03 ... has improved the teachers' involvement: 61 / 55
  A.04 ... has improved our knowledge and insight into the school: 91 / 73
  A.05 ... has contributed to the school's capacity to improve: 87 / 91
  A.06 ... has improved the school's effectiveness: 55 / 36

(B) Attitudes: the self-evaluation/visitation ...
  B.01 ... teaches us a lot: 91 / 82
  B.02 ... leads to better teaching skills: 65 / 50
  B.03 ... costs a lot of extra time: 65 / 77
  B.04 ... concerns everyone: 74 / 77
  B.05 ... leads to better management: 91 / 77
  B.06 ... is easy to understand: 83 / 82
  B.07 ... is preferred by most team members: 65 / 73
  B.08 ... is feasible: 91 / 82
  B.09 ... is cost-effective: 70 / 73
  B.10 ... results in reliable outcomes: 91 / 86
  B.11 ... is objective: 83 / 86
  B.12 ... portrays our development correctly: 87 / 82


The root of the problem is that the data collected by the schools did not provide answers to the initial questions of the school self-evaluation. In addition, it seemed that the schools were not well informed about the requirements that a valid school self-evaluation should satisfy. This clarity was not present at the beginning of the Ziezo project, and did not increase during the course of the project. Consequently, there is still a serious need for successful examples.

Another result is that school leaders – despite the negative conclusions regarding the school self-evaluations completed – are largely positive about the project with respect to both the school self-evaluation and the visitation. They feel that both activities provided a useful contribution to quality care at their school. Their attitude towards self-evaluation and visitation is positive, and they also find that the activities are feasible and offer many learning opportunities. The only problem was the time investment: self-evaluation and visitation are time-consuming. We consider this second result to be as relevant as the first, as it shows that the school leaders do not become discouraged by the initial problems. They remain positive about the importance of quality care and the opportunities to contribute to it through school self-evaluations and visitations. At the same time, it seems that there is a certain contradiction between the first and the second result. Why do the experts have a negative opinion about the reports, while the school leaders state that they have learned a lot from the process that led to these reports?

Discussion

Perhaps the contradiction mentioned is less serious than a superficial observation would lead one to believe. After the completion of Ziezo, there was a round-table discussion between representatives of all the parties concerned (school leaders, school advisors, inspectors and researchers). In this discussion, the school leaders admitted that the quality of the school self-evaluation reports was insufficient and could be improved. At the same time, they stated that they had learned much about school self-evaluation, even though they never reached the Ziezo goal – a valid school self-evaluation report. The school leaders stated that they are now more conscious of the importance of feasible evaluation objectives or carefully stated initial questions. These questions determine to a significant degree how the school self-evaluation should be conducted. It seems that the school leaders and their consultants had primarily considered Ziezo to be a pilot, emphasising processes instead of end products. The school leaders appreciated the freedom the Ziezo project had offered from the start of the project. This freedom gave them the opportunity to organise the school self-evaluation while considering the school’s special circumstances. Some schools were still in the middle of a reform process and used the school self-evaluation to determine whether the reforms had led to noticeable changes. Other schools utilised the school self-evaluation to gain experience with using a quality handbook and the attached system of internal audits. The next cycle of the SVI model may show how effectively the lessons learned from the first cycle have been applied by the school leaders.

In their review of school self-evaluations, Kyriakides and Campbell (2004) come to the conclusion that ‘… the field of school self evaluation is at an early stage of development’ (32). We have shown here that this conclusion is valid for Dutch primary schools. Kyriakides and Campbell list some conditions for improvement that school self-evaluations must meet. There must be a consensus about the goals to be reached and the school must have a climate of ‘openness, collaboration, transparency and trust’. Southworth (2000) also emphasises the importance of a ‘collaborative school culture’. We would also like to add a condition to those mentioned by these scholars. We noticed that schools are poor at performing

Dow

nloa

ded

by [

UV

A U

nive

rsite

itsbi

blio

thee

k SZ

] at

00:

40 2

2 Se

ptem

ber

2014

Journal of Education Policy 393

adequate or rigorous forms of research. Some of the problematic elements are: formulatingappropriate research questions, operationalising core concepts, selecting or constructingvalid measurement instruments, analysing a large amount of data and formulating validconclusions. School leaders lack the technical training to perform rigorous research, andschool consultants – who should be offering the necessary knowledge and skills – fall shortin providing this assistance. An additional circumstance is that schools often use ready-made packages for quality care. The problem with such packages is that they focus on awide range of conditions for effective education and on parent, student and teacher satisfac-tion. They often distract school leaders from the true goal of education, namely, thatstudents learn as much as possible.

What has this study taught us about the utility of the SVI model? At first glance, our results do not portray the model in a positive light. Schools generally do not succeed in performing a valid self-evaluation, visitation teams have difficulty in concentrating on the question of the validity of the presented self-report, schools react vaguely or evasively to the visitation team’s recommendations, and the inspectors avoid clear conclusions regarding the school self-evaluation and insufficiently explain how they utilised the information provided by the school. Still, we consider it too soon to come to a negative conclusion about the SVI model. It is clear that the Ziezo project has been too optimistic about the feasibility of the separate phases. Schools must learn how to perform a self-evaluation, school leaders must learn how a visitation should be conducted, and inspectors still need to learn how to incorporate the results and conclusions of self-reports into their own evaluation systems. We do not wish to rule out the possibility that, under the influence of powerful external support and increased experience, the SVI model will become an attractive model for quality care. Based on our results and those of previous research on self-evaluation, we strongly suggest that the initial problems can be addressed by intensive training and guidance. To us, this guidance is crucial for finding a good balance between internal and external evaluation, and therefore for realising a responsive form of accountability.

Another crucial conclusion is the insight that an assessment of student progress should be an indispensable element of any system of quality care, and therefore also of school self-evaluation (de Groot 1984). As logical as this may seem to scholars, for school leaders this is an insight that they would rather not put into practice. Clift had already determined that schemes for school self-evaluation concentrate on the conditions or precedents for good education, and therefore circumvent the more difficult subjects:

The outcomes of learning (of whatever kind), perhaps the most important indicator of educational quality in the eyes of the general public, are given scant attention: there is virtually no reference to standards – the normative notions of what might be expected of schools. (1982, 264)

We can also formulate a similar conclusion: the large majority of the schools involved in Ziezo had paid almost no attention to learning outcomes. In this light, it may be useful to gather specific information about the reasons why school leaders are so hesitant to assess student progress. In the literature, we found no contributions that provided insight into this problem.

Notes on contributors

Henk Blok is a senior researcher in the SCO-Kohnstamm Institute, University of Amsterdam, The Netherlands. His research interests include evaluation research, school quality and policy analysis.


Peter Sleegers is a professor of Educational Studies, University of Amsterdam, The Netherlands. His current research themes centre on school organisation, educational leadership, and educational innovations.

Sjoerd Karsten is a professor of Educational Policy and Administration, University of Amsterdam, The Netherlands. He specialises in policy analysis, parental choice, educational decentralisation and vocational education.

References

Adelman, C., and R. Alexander. 1982. The self-evaluating institution. London: Methuen.
Ball, S. 2003. The teacher’s soul and the terrors of performativity. Journal of Education Policy 18: 215–28.
Clift, P. 1982. LEA schemes for school self-evaluation: A critique. Educational Research 24: 262–71.
Clift, P.S., D.L. Nuttall, and R. McCormick, eds. 1987. Studies in school self-evaluation. London: Falmer Press.
Davies, D., and P. Rudd. 2001. Evaluating school self-evaluation. Slough: National Foundation for Educational Research.
De Groot, A.D. 1984. Quality of education and the evaluation paradigm. Studies in Educational Evaluation 10: 235–50.
DiMaggio, P.J., and W.W. Powell. 1983. The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review 48: 147–60.
Ehren, C.M., F.L. Leeuw, and J. Scheerens. 2005. On the impact of the Dutch Educational Supervision Act: Analyzing assumptions concerning the inspection of primary education. American Journal of Evaluation 26: 60–76.
Elliott, J. 1983. Self-evaluation: Professional development and accountability. In Changing schools … changing curriculum, ed. M. Galton and B. Moon, 224–47. London: Harper Row.
Hendriks, M.A., S. Doolaard, and R.J. Bosker. 2002. Using school effectiveness as a knowledge base for self-evaluation in Dutch schools: The ZEBO-project. In School improvement through performance feedback, ed. A.J. Visscher and R. Coe, 114–42. Lisse: Swets & Zeitlinger.
Kyriakides, L., and R.J. Campbell. 2004. School self-evaluation and school improvement: A critique of values and procedures. Studies in Educational Evaluation 30: 23–36.
Leithwood, K., K. Edge, and D. Jantzi. 1999. Educational accountability: The state of the art. Gütersloh: Bertelsmann Foundation.
MacDonald, B. 1978. Accountability standards and the process of schooling. In Accountability in education, ed. T. Becher and S. Maclure, 127–51. Windsor: National Foundation for Educational Research.
Meuret, D., and S. Morlaix. 2003. Conditions of success of a school’s self-evaluation: Some lessons of a European experience. School Effectiveness and School Improvement 14: 53–71.
Mortimore, P. 1983. School self evaluation. In Changing schools … changing curriculum, ed. M. Galton and B. Moon, 215–23. London: Harper Row.
Mulford, B. 2005. Accountability policies and their effects. In International handbook of educational policy, ed. N. Bascia, A. Cumming, A. Datnow, K. Leithwood, and D. Livingstone, 281–95. Dordrecht: Springer.

Nevo, D. 2001. School evaluation: Internal or external? Studies in Educational Evaluation 27: 95–106.
Ofsted. 1998. School evaluation matters (Raising Standards Series). London: Author.
Saunders, L. 1999. Who or what is school self-evaluation for? School Effectiveness and School Improvement 10: 414–29.
Scheerens, J., H.W.C.G. Van Amelsfoort, and C. Donoughue. 1999. Aspects of the organizational and political context of school evaluation in four European countries. Studies in Educational Evaluation 25: 79–108.
Schildkamp, K. 2007. The utilisation of a self-evaluation instrument for primary education. Enschede: University of Twente.
Shipman, M. 1983. Styles of school-based evaluations. In Changing schools … changing curriculum, ed. M. Galton and B. Moon, 248–54. London: Harper Row.
Simons, H. 1987. Getting to know schools in a democracy – The politics and process of evaluation. London: Falmer Press.


Smaling, A. 1992. Varieties of methodological intersubjectivity – The relations with qualitative and quantitative research, and with objectivity. Quality & Quantity 26: 169–80.
Southworth, G. 2000. How primary schools learn. Research Papers in Education 15: 275–91.
Swaffield, S. 2004. Exploring critical friendship through leadership for learning. Paper presented at the 17th International Congress for School Effectiveness and Improvement, January 6–9, in Rotterdam.
Tyler, R.W., E.R. Smith, and the Evaluation Staff. 1942. Appraising and recording student progress, Vol. III, 3–34 (Adventure in American Education Series). New York: Harper & Brothers.
