
Searching for effective peer assessment models for improving online learning in HE – Do-It-Yourself (DIY) case

Irja Leppisaari, Centria University of Applied Sciences, Finland, [email protected]

Janne Peltoniemi, Centria University of Applied Sciences, Finland, [email protected]

Tuula Hohenthal, Centria University of Applied Sciences, Finland, [email protected]

Yeonwook Im, Hanyang Cyber University, South Korea, [email protected]

Abstract: Peer assessment brings new affordances to the implementation of meaningful assessment on online courses (e.g. MOOCs) by using technological solutions to automate the assessment process. For this reason, teachers need digital pedagogic skills for planning, implementing and developing effective peer assessment models. In this paper we apply the criteria of a good peer assessment task to peer review the Do-It-Yourself (DIY) automatic evaluation path (peer assessment case) designed by one of the authors. Our collegial review employs a set of criteria teachers can use to develop peer assessment tasks in their teaching. The peer review describes the strengths and development needs of the DIY case. The case indicates that a successful peer assessment task also requires teachers to recognize changes in their role in a learning activity. Based on the peer review, the potential for the DIY model to provide automated peer assessment practices in game-oriented learning processes is acknowledged.

Keywords: peer assessment, automated assessment models, DIY, peer review, online course, MOOC, personal learning paths, teacher’s role

Introduction

Open online distance learning in higher education (further HE) has quickly gained popularity, expanded, and evolved. Massive Open Online Courses (MOOCs) appear to be a significant force within HE (Admiraal, Huisman & van de Ven, 2014). At the same time, engaging and active learning approaches with personalized learning, adaptive learning and gaming are gaining increasingly more space in the delivery of learning solutions that support 21st century skills in HE (Johnson, Adams Becker, Cummins, Estrada, Freeman & Hall, 2016; Morales, Amado-Salvatierra, Hernández, Pirker & Gütl, 2016; Larsen McClarty, Orr, Frey, Dolan, Vassileva & McVay, 2012).

As with every other learning environment, the quality of MOOCs, and also of other (open) online courses, is very much the condition which determines the effectiveness and success of the delivered training (Creelman, Ehlers & Ossiannilsson, 2014). According to Creelman et al. (2014), MOOCs and open education, however, require different quality indicators than those traditionally used in HE. How a teacher is able to plan and deliver diverse pedagogic solutions is central to the quality of teaching, including teaching in new kinds of online education. In this article we especially examine the affordances of new learner-centered assessment in the online learning process. Large student masses, individualization of learning paths and a need for scaling demand new thinking about assessment and set new kinds of challenges for implementing meaningful assessment by using technological solutions to automate some parts of these processes. A further challenge is the need to automate peer assessment models based on qualitative assessment (cf. Boase-Jelinek, Parker & Herrington, 2013). At the same time it is obvious that the teacher's role in the learning context will change, but insufficient attention has been paid to this in connection with MOOCs (cf. Bayne & Ross, 2013).

The use of peer assessment has been recognized as one feature which affects the effectiveness of MOOC pedagogy. Peer assessment is defined as the process whereby students are involved in giving feedback on and grading the work of their peers. Learning-oriented assessment is about putting learning at the centre of assessment and reconfiguring assessment design so that the learning function is emphasized. However, we remain limited in our ability both to assess complex and open-ended student assignments (Admiraal et al., 2014) and to use student-centered assessment practices where students can learn from one another. It is obvious that there is a clear need to improve models for peer assessment in MOOCs and other kinds of online courses. Good examples regarding the effectiveness of well-developed peer assessment activities are needed for MOOCs and more widely for improving new and rich assessment models for student-centered HE. This requires an ability to recognize the features of a good assessment task. In this paper, we raise one practical peer assessment model from our teaching and learning development context and reflect on common factors of this automatic evaluation model/assignment based on the principles of good peer assessment tasks created by Rajaorko and Leppisaari (2017).

Peer assessment as an important part of evolving assessment methods

Reigeluth (1999) claims that constructivist learning environments require learners in a HE context to assume a primary role in assessing their own learning. Peer assessment helps students to take responsibility for their own learning and to become active participants in the learning process (Miller & Ng, 1996). Peer assessment motivates and encourages learners to work socially and to share expertise. Assinder (1991) reports increased motivation, participation, real communication, in-depth understanding, commitment, confidence, meaningful practice and accuracy when students prepare and deliver learning tasks for each other. Peer assessment can therefore be seen as an effective means of involving learners in formative assessment, with the presence of an audience in general having a positive influence on performance (Lynch, 1988). It makes students practitioners who take responsibility for their own and others' performance. As a self-directed learning method, peer assessment can be especially appropriate and useful in the e-learning environment, where learners may easily become passive, which causes high dropout rates (Im, 2007).

Many researchers (Falchikov & Goldfinch, 2000; Kim, 2012; Bachelet, Zongo & Bourelle, 2015) consider peer assessment mostly to be valid, and Matsuno (2009) claims peer assessment has been found to be more reliable than self-assessment. Sadler and Good's (2006) research found that students' self assessments and peer assessments are largely convergent, and moreover closely aligned with teachers' grading. According to Bachelet et al. (2015) the best "return for work" appears with 3-4 peer grades. Feedback from several peers (3-5) may in fact be more reliable than feedback from one expert.
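To make the several-peers point concrete, the following sketch (ours, not from any cited study) aggregates a handful of independent peer grades with a median, which damps the effect of a single outlying reviewer; the function name and the 0-5 scale are illustrative assumptions.

```python
# Illustrative sketch: combining several independent peer grades into one score.
# A robust aggregate such as the median absorbs a single outlying reviewer,
# which is one reason feedback from 3-5 peers can beat a single grader.
from statistics import median

def aggregate_peer_grades(grades: list[float]) -> float:
    """Combine independent peer grades (0-5 scale assumed) into one score."""
    if len(grades) < 3:
        raise ValueError("collect at least 3 peer grades for a stable estimate")
    return median(grades)

# Example: four peers grade the same assignment; one is a harsh outlier.
print(aggregate_peer_grades([4.0, 4.5, 4.0, 2.0]))  # -> 4.0
```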

Development of Peer Assessment Models for MOOCs

According to Creelman et al. (2014), peer-to-peer pedagogy has been identified as one key quality area of MOOCs. Bayne and Ross (2013) extracted three emerging issues for MOOC pedagogy: 1) the role of the teacher, 2) learner participation, and 3) assessment. cMOOCs (the connectivist type) focus especially on social interaction. Peer-to-peer practices, including peer review and peer assessment (subtly different), are an essential aspect of the "intelligent working" of a successful cMOOC (O'Toole, 2013). Boase-Jelinek et al. (2013) describe the difference between the above terms as follows: the term peer assessment is often used to describe the process of giving summative assessment, whereas peer review is generally used for giving and receiving non-summative formative feedback (Wood & Kurzel, 2008 in Boase-Jelinek et al., 2013).

On the basis of our earlier literature review we have recognized six essential factors for developing peer assessment models for MOOCs (Leppisaari & Im, 2016). Based on an examination of peer assessment models of MOOCs in HE contexts and on our own experiences (the New Open Energy MOOC project in Finland, and online studies development at Centria University of Applied Sciences (further UAS) and Hanyang Cyber University), we have found that the following practical peer assessment factors are essential in planning and developing MOOCs and open online education.

a. Staff competencies for developing and implementing peer assessment practices for the digital age
The role of the teacher changes (as our case in this paper will demonstrate) in modern online teaching and learning. Peer assessment challenges the expertise of instructors and technical teams. It is obvious that peer assessment practices demand new pedagogical skills from teachers for promoting effective Digital Age Learning. E.g. at Centria UAS we have produced peer assessment info cards for teachers to support them in developing peer assessment practices.

b. Peer assessment training or scaffolding
Scaffolding must be offered in peer assessment. To accustom students to peer assessment, it is necessary to provide a teacher's model or clear guidelines. Before the start of the course, teachers must clarify that peer assessment is one of the most important dimensions of the whole learning process. Scaffolding can also be provided by asking more experienced participants to be mentors, as well as by providing online support in the form of FAQ pages, how-to videos, an interactive grading rubric, discussion thread forums, technical guidelines for using the tools, etc. E.g. at Centria UAS we have produced peer assessment info cards for students to support the use of peer assessment in teaching and learning.

c. Clearly articulated criteria for peer assessment
Peer assessment involves learners reviewing each other's products and/or performance, as individuals or in groups, against assessment criteria set in advance. Peer assessment is based on learning outcomes, from which the set, publicly available peer assessment criteria are derived. Students need clearly articulated criteria for peer assessment to make the process sufficiently fair. Criteria-based peer assessment means clearly identified review assignments and guides students to focus on issues and points significant to learning and competence. At the same time, the criteria assist students to identify what they need to focus on in revising/developing their own product. As peer-reviewers students have an opportunity to become familiar with the products of other students and to learn from these. Assessing the products of others develops students' ability to evaluate their own performance.

d. Rich pedagogical models for peer assessment
Peer assessment methods must be clearly described and explained. The process should be transparent and predictable. Peer assessment is a social process, in which both students and teachers participate.

The common model used in MOOCs can be described as follows: students are required to submit an assignment each week, in relation to a topic covered that week. After that, each student is randomly allocated 3 or 4 assignments submitted by other students. Criteria for peer assessment have been drawn up and are available; at the same time the criteria guide students in completing their own learning task or product. Students have e.g. five days to complete the peer evaluation process, after which it is closed and the teaching team starts awarding final grades. Finally, students are asked to have a fresh look at their own work (self-assessment) and grade it after evaluating at least three other students' assignments (cf. Admiraal et al., 2014; Bachelet et al., 2015). Tillema (2014) classifies peer assessment practices on the basis of the different student activities they include (e.g. rating, giving feedback). O'Toole (2013) describes the following six models for pedagogically meaningful peer assessment: Peer grading, Mantle of expert, Micro feedback and rating of a student's contribution, Students assessing students in teaching threshold concepts, Multiple critical perspectives, and Peer assessment of applying shared knowledge in diverse contexts.
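As an illustration of the allocation step in the common MOOC model above, the sketch below randomly distributes three submissions to each student for review while guaranteeing that no one reviews their own work and every submission receives the same number of reviews. The rotation scheme and the names are our assumptions, not a description of any particular platform.

```python
# A minimal sketch of random reviewer allocation: shuffle the students, then
# have student i review the next k students in the shuffled ring. This never
# assigns a student's own work and gives every submission exactly k reviews.
import random

def allocate_reviews(students: list[str], k: int = 3) -> dict[str, list[str]]:
    order = students[:]
    random.shuffle(order)
    n = len(order)
    assert k < n, "need more students than reviews per student"
    return {order[i]: [order[(i + j) % n] for j in range(1, k + 1)]
            for i in range(n)}

# Example run with five (illustrative) students, three reviews each.
for reviewer, authors in allocate_reviews(["Ann", "Ben", "Cai", "Dee", "Eli"]).items():
    print(reviewer, "reviews the work of", authors)
```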

Peer assessment assignments may be single learning tasks, various products (e.g. posters, blogs, mind maps, videos) and presentations, or the learning process over a total course. Peer assessment may utilize discussion, or structured or open-ended written assessment. Additionally, forms with prepared qualitative and/or quantitative questions can also be utilized.


e. Technical platforms for peer assessment
As digital learning technologies must enable the integration of self and peer assessment across the learning organization for formative and summative assessments, the flexible development, testing and redevelopment of prototypes and technical platforms are required. Enthusiastic teachers are needed to undertake trials and also to develop technical platforms together with technical developers. Learning organizations must also utilize learning analytics to promote learning through different peer assessment practices.

f. Policies for verifying students' peer assessment skills
Peer assessment deepens learning and equips students for working life. It has an important role in engaging with course content and in developing life-long learning skills. Peer assessment and feedback skills are also an essential part of 21st century skills (Griffin et al., 2010). Peer assessment is a learner-centered way of evaluating and allows students to develop a range of transferable skills; it is in itself an empowering learning situation and an avenue to practice giving and receiving feedback, key 21st century skills. Students must receive feedback on their performance as peer reviewers from their peers and the instructor. Students can gather evidence of peer assessment competence in, for example, an ePortfolio, or receive an Open Badge to verify achievement in peer assessment as a meta-skill. Good examples of peer review might be rewarded by feedback from the assessed student, other peers who have reviewed the assessment, or tutors. Such students might become recognized as skilled assessors (cf. O'Toole, 2013).

Principles of a good peer assessment task

Above we have set the background for the significance and implementation of peer assessment on online courses in HE. Our particular area of interest in this paper is delivery of a meaningful peer assessment task. Today, a teacher’s digital pedagogic skills include an ability to create effective peer assessment tasks. Thus, a teacher’s role in assessment is also powerfully linked to designing learning tasks in accordance with a constructively aligned teaching and learning process (Biggs & Tang, 2011). What issues need to be taken into consideration in the design and delivery of peer assessment tasks for peer assessment to be successful and support learning (cf. Boase-Jelinek, Parker & Herrington, 2013)?

On the basis of the above theoretical examination and trials conducted in the ESF New Open Energy venture, Rajaorko and Leppisaari (2017) have condensed four principles of a good peer assessment task (see Table 1): 1. Objective, 2. Criteria, 3. Method, and 4. Instructions. The table below summarizes the essential points of each criterion which should be taken into consideration in peer assessment tasks.

Table 1. Principles of a good peer assessment task (Rajaorko & Leppisaari, 2017).

Objective: The objective of peer assessment is clearly described, and review assignments (what is being assessed) are derived from the learning outcomes.

Criteria: Peer assessment criteria are explicit and expressed in an easy-to-understand way. Peer assessment criteria are derived from learning outcomes. The criteria lead participants to focus on issues significant to learning and learning outcomes in their assessments.

Method: Methods used in peer assessment are appropriate for achieving the course objectives. Implementation is participatory and social. The peer assessment process includes self-assessment, which guides development of one's product. Participants receive feedback on their role as a peer reviewer.

Instructions: Instructions are explicit and guide participants to peer-review constructively. Instructions help participants understand the purpose, significance and benefits of peer assessment and its impact on overall assessment.
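One way a reviewing colleague could operationalize Table 1 is as a simple checklist structure. The sketch below is our illustration; the data layout and the boolean scoring are assumptions, and the criterion texts are condensed from Rajaorko and Leppisaari (2017).

```python
# Table 1 rendered as a review checklist a colleague could score case by case.
CHECKLIST: dict[str, list[str]] = {
    "Objective": [
        "objective of peer assessment clearly described",
        "review assignments derived from learning outcomes",
    ],
    "Criteria": [
        "criteria explicit and easy to understand",
        "criteria derived from learning outcomes",
        "criteria focus assessments on issues significant to learning",
    ],
    "Method": [
        "methods appropriate for the course objectives",
        "implementation participatory and social",
        "process includes self-assessment guiding one's own product",
        "participants receive feedback on their reviewer role",
    ],
    "Instructions": [
        "instructions guide constructive peer review",
        "instructions explain purpose, benefits and impact on overall assessment",
    ],
}

def summarize(scores: dict[str, list[bool]]) -> None:
    """Print how many criteria a reviewed case meets under each principle."""
    for principle, items in CHECKLIST.items():
        met = sum(scores.get(principle, []))
        print(f"{principle}: {met}/{len(items)} criteria met")

# Example scoring of a hypothetical case review:
summarize({"Objective": [True, True], "Criteria": [True, True, False],
           "Method": [True, True, True, False], "Instructions": [True, True]})
```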

Research

Boase-Jelinek et al. (2013) have noticed that whilst peer review is an appropriate activity for supporting critical thinking and reflective practice, it requires a number of decisions to be made in relation to student preparation and support, implementation strategy, and technological infrastructure to make it work in specific contexts. In order to support an uptake of effective peer assessment in our educational organizations and networks, we have, as described above, created theory-based instructions (Leppisaari & Im, 2016, and peer assessment info cards for teachers and students) and drawn up principles of a good peer assessment task (Rajaorko & Leppisaari, 2017). In this paper we employ the presented framework to examine one case (operational model), an example of a new kind of peer assessment implementation in online courses. We examine how the principles of a good peer assessment task are realized in the DIY case. Concurrently we are interested in what meaning the delivery of the peer assessment task in question has for the teacher's new role in a learning activity.

In the examination we employ the criteria of good peer assessment tasks (Rajaorko & Leppisaari, 2017) to peer review a colleague's teaching trial. Below, the second author first describes the case he has delivered. The peer assessment task is integrally linked to a wider learning path diversification implementation model in HE and is therefore described as one entity (an online path in the context of a multipath setting). Next, the other authors examine the case (model) against the principles of a good peer assessment task (Rajaorko & Leppisaari, 2017). The peer review is implemented based on the second author's own descriptions and DIY material that has been provided for the use of the peer reviewers. Understanding has been verified through joint discussion. In this context, the nature of peer assessment is an equal collegial reflection (cf. Jyrhämä, Hellström, Uusikylä & Kansanen, 2016). The peer assessment of an educational trial conducted here shares features with the virtual benchmarking model developed by Leppisaari, Herrington, Vainio and Im (2013), in which teachers assess each other's teaching delivery on the basis of authentic learning criteria (Herrington, Reeves & Oliver, 2010). We employ in our collegial review the same method that teachers can use to assess and develop peer assessment tasks in their teaching.

Our research methodology can be described as aligning with an educational design research approach (McKenney & Reeves, 2014). Teaching development is research-based and cyclical, and based on needs emerging from genuine teaching contexts. The perspectives of a teacher who develops his/her own work and those of researcher-colleagues are raised in our examination. The examination results of this pilot cannot be generalized, but through their design-based research type process they do provide opportunities to identify issues that can be further developed and applied to comparable educational transformation trials. At the same time we also evaluate together the viability of the principles identified by Rajaorko and Leppisaari (2017) and their ability to raise relevant issues from the perspective of developing a good peer assessment task.

Study Case – DIY (Do-It-Yourself) automatic evaluation path as an alternative study method

Background

Modern education emphasizes individualized learning methods (see e.g. McGee, 2008; Santally & Senteni, 2013; Sharples et al., 2015; Johnson et al., 2016). Individual learning methods are associated with time management, reading and study techniques, and ways to search for and adapt new information. While traditional education often leans on one pedagogical method – usually a very teacher-based lecturing model – in modern education various simultaneous methods are in use and learning is an ongoing interaction between the teacher and students. For the same learning content, a need for several alternative paths and materials exists. This may lead to increased teaching resources, as well as strategic resource-allocation decisions by school/university management to renew educational systems. In the long run, the unit (credit point) cost is lowered through more efficient education models, since larger student masses can be served via alternative study paths with partly automated course implementations and evaluations. Still, small-group interactive lectures can be maintained as one alternative path option. Every student has the possibility to choose the study path most suitable for his/her schedule and study style.

In this DIY case, a hybrid model is presented which includes three alternative study paths: traditional teacher-based lectures with a flipped classroom element, a book exam (completed at a distance), and a DIY (Do-It-Yourself) online path with automatic evaluation. DIY is generally linked to self-driven activities; see Wolf and McQuitty (2011) for client behavior as an application of DIY in a business context. We apply DIY in the following model, and also draw on MOOC evaluation methods by combining peer assessment, multiple choice questions and the teacher's collective evaluation (Bachelet et al., 2015; O'Toole, 2013). Regardless of the complex multi-path setting, only one learning environment platform is shared by students, and this platform acts as a collective portal through which students enter and then continue along unique paths. Thus, one learning environment may include several and various sources of online/offline materials and social media channels, depending on the chosen path.

This hybrid model is designed to be applied to an existing course implementation by adding two alternative paths to the current teaching method. For example, if the assumed current path is the traditional teaching method, the two other options can be added to that path: a book exam (completed at a distance) and the DIY online path. It is obvious that the latter path (DIY) requires more designing and planning hours in the implementation phase than the former (book exam). However, it is assumed that after a few implementations the teacher's work will become more efficient due to experience, better practices and routines.

Description of DIY

In the following, the DIY method is described both graphically and verbally. The description of the DIY method is not definitive regarding applied software and technical issues, but is one realistic option for putting the theoretical framework of an automatic evaluation process in online learning into practice. The DIY method was tested in Centria UAS's Personal Investment course during October-December 2016. Twenty-one students participated in the 2-credit pilot delivery. The applied learning environments were Kyvyt.fi (a Personal Learning Environment, PLE), the Wisemapping mind map, and Google Forms with an integrated FormEmailer script, which enabled the game-oriented gate progressions and served as an interaction and communication tool between students and teacher. The course started by sharing a Word online registration document with participants. The purpose of this document was to form peer assessment student pairs right in the beginning phase. Secondly, the document included the URL of the Kyvyt.fi learning environment, where students received instructions on descriptions and path progressions. The path comprised phases 0, 1, 2 and 3. At phase 0, students received basic knowledge of the course content, the peer assessment system and criteria, and an overall DIY method description through compact documentation. At phase 1, the game-oriented gates were presented, and phase 2 included the links to learning materials – mainly in the Wisemapping mind map database. Phase 3 was the concluding part of the course, where students were requested to submit an overall review, achieved result points and confirmation of the course (see Figure 1).


Figure 1. A learner’s path in automatic assessment gaming tasks.


A motivating and interesting game-oriented aspect was implemented through the gate idea in phase 1, where students were granted access to the next gate if they passed the assignment with at least the minimum required points. Before every assignment, students were instructed with the criteria and guidance on how to proceed and submit the assignment. For the peer evaluator, there were two alternative technical paths for the gate-specific evaluation process: one submission link for passed assignments and one submission link for rejected/to-be-updated assignments. Each of these links was equipped with instructions to the receiver on how to proceed with the task, either a gate step to the next assignment or instructions on how to update the current assignment. Moreover, the system was built so that the teacher was informed by email every time a student submitted an assignment and/or a peer evaluator submitted an evaluation report to the student. This was governed by the previously mentioned FormEmailer script, which was pre-programmed and integrated into Google Forms in advance. The notable issue is that the teacher's role was not active in a traditional way; instead s/he was an observer in the process and, as needed, mentored students with any challenges during the path steps, on technical and/or substance issues. After the final assignment, which also included the learning diary, the students submitted the final review of the whole entity of assignments via Google Forms. Finally, the teacher did an overall review of results and checked that there were no missing remarks. After that, the grading was recorded in the learning environment.
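The actual case used Google Forms with the pre-programmed FormEmailer script, which is not reproduced here. The following sketch, under stated assumptions, only approximates the described gate logic: pass/revise branching, an access code for the next gate, and a notification to the teacher on every step. The threshold, the gate codes and the addresses are invented placeholders.

```python
# A Python approximation of the gate logic described above. notify() merely
# prints where FormEmailer sent pre-programmed emails on form submission.
from dataclasses import dataclass

PASS_THRESHOLD = 4                               # assumed minimum points
GATE_CODES = {2: "GATE2-CODE", 3: "GATE3-CODE"}  # hypothetical access codes

@dataclass
class Review:
    student: str
    gate: int
    points: int
    comment: str

def notify(recipient: str, message: str) -> None:
    print(f"to {recipient}: {message}")          # stand-in for an email

def process_review(review: Review, teacher: str) -> None:
    if review.points >= PASS_THRESHOLD:
        # Passed: the student receives the code that opens the next gate.
        code = GATE_CODES.get(review.gate + 1, "COURSE-COMPLETE")
        notify(review.student, f"Gate {review.gate} passed "
               f"({review.points} pts). Code for the next gate: {code}.")
    else:
        # Rejected/to-be-updated: the student is asked to revise and resubmit.
        notify(review.student, f"Gate {review.gate} needs revision "
               f"({review.points} pts): {review.comment}. Please resubmit.")
    # Every submission also generated a message to the teacher, enabling the
    # observer/mentor role described in the text.
    notify(teacher, f"{review.student}, gate {review.gate}: {review.points} pts")

process_review(Review("student_a", 1, 5, "clear definitions"), "teacher@example")
```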

Some remarks on feedback and experience

The DIY path is exceptional in its uniqueness and newness compared to the two other learning paths presented above, and also in its approach to and usage of pedagogical solutions. The three major benefits are independence of time and place, the unique characteristics of the student path, and automated evaluation outcomes, in which the teacher's role as a mentor-coach instead of a traditional lecturer is identified. It is noted, however, that these same benefits can also catalyze some serious challenges, namely the lack of traditional teaching, possible motivation problems among students, the credibility and suitability of peer assessment, and finally the technical functionality of the applied automation (IT, software etc.). The DIY method requires very extensive planning from the teacher's point of view, including content planning, creating the combination of applied learning environments, and technical planning and testing of different applications and software. It is obvious that the teacher needs a well-built network of both IT support and educational and pedagogical teams to support the planning and execution phases (cf. Bayne & Ross, 2013).

According to the feedback, student experiences of the course were mostly positive. Student motivation was also higher than in a traditional course setting. One of the challenges was to structure a clear kick-off for the course. Some of the students were not activated right from the start, which caused some problems for the simultaneous peer-to-peer progressions. In future, the path steps will be progressed so that peer pairs are formed based on performance speed, i.e. the first two students to submit the first assignment become a pair, continuing similarly in subsequent assignments. This gives automatic efficiency to the path progressions for students.
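A minimal sketch of that proposed pairing rule follows, assuming submission timestamps are available; the names, timestamps and the handling of an odd number of students are our choices.

```python
# Pair students in order of first-assignment submission, so similarly paced
# students review each other. Timestamps and names are illustrative.
def pair_by_speed(submissions: list[tuple[str, float]]) -> list[tuple[str, ...]]:
    """submissions: (student, first_submission_time) -> peer pairs."""
    ordered = [s for s, _ in sorted(submissions, key=lambda x: x[1])]
    pairs = [tuple(ordered[i:i + 2]) for i in range(0, len(ordered) - 1, 2)]
    if len(ordered) % 2 and pairs:
        # Odd head-count: the last submitter joins the final pair as a trio.
        pairs[-1] = pairs[-1] + (ordered[-1],)
    return pairs

print(pair_by_speed([("Ann", 10.0), ("Ben", 12.5), ("Cai", 11.0),
                     ("Dee", 30.0), ("Eli", 45.0)]))
# -> [('Ann', 'Cai'), ('Ben', 'Dee', 'Eli')]
```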

From the teacher's point of view, an automatic evaluation process is extremely interesting and modifies traditional teaching substantially. Naturally, the process included some fine-tuning of technical aspects, but towards the end of the course all technical issues were solved. The FormEmailer script especially enabled the teacher's monitoring role, since all submission steps were provided for teacher review during the process. It was relatively easy to identify possible errors in the process, and in addition, students always had the possibility to contact the teacher via email if necessary. Email traffic was minimized on purpose, and used only as a last resort.

The fact that the process is well automated is a new experience for any teacher who is predominantly attached to traditional teaching methods. There are many possibilities to apply this type of teaching structure across the academic disciplines. It seems that one of the future models of education and teaching is to apply and integrate modern technologies to teaching and learning experiences and systematize them to serve more and more students globally and efficiently across universities and borders. The expansion potential is almost limitless.


Alignment to the principles of a good peer assessment task

Below, the peer assessment case described above is analyzed against the criteria of a good peer assessment task (see Table 1). The examination is thematized according to the four main elements of the criteria.

The objective of peer assessment

According to good peer assessment principles the objective of peer assessment is clearly described, and review assignments (what is being assessed) are derived from the learning outcomes (Rajaorko & Leppisaari, 2017).

In the case we are examining, the objectives and advantages of peer assessment are explained to learners, making clear that this approach relates to contemporary education. Peer assessment has been included to diversify assessment, increase its reliability, and to introduce the method to learners and through it to learning from each other. The course website includes a document which explains what peer assessment is and that it is part of a modern learning concept. The benefits of peer assessment are also justified by its "social nature and extensiveness". The benefits of peer assessment could be stressed more strongly from a working life skills perspective. Gaming has been successfully linked to peer assessment in the case, and it would be interesting to have further information on the motivational effect (cf. Morales et al., 2016) of gaming in the further development of the DIY model. Progress on the course occurs once peer assessment has been done and the student has received sufficient points through the automated feedback to progress to the next stage. The significance of gaming on the course and in peer assessment could be further explained in the course orientation.

A good peer assessment task also indicates how peer assessment affects overall assessment (cf. Rajaorko & Leppisaari, 2017). Learners are told in the examined case that at the end of the course the teacher will make an aggregate assessment of the course based on the peer and self assessments. The formation of the assessment could be explained in more detail (e.g. the percentage allocation), as illustrated below.
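As a purely illustrative example of such a percentage allocation (the 50/20/30 split below is our assumption, not taken from the DIY case), the overall grade could be composed as follows:

```python
# Hypothetical composition of the overall course grade; the weights are an
# invented example of the "percentage allocation" suggested above.
def overall_grade(peer: float, self_assessment: float, teacher: float) -> float:
    weights = {"peer": 0.5, "self": 0.2, "teacher": 0.3}
    return round(weights["peer"] * peer
                 + weights["self"] * self_assessment
                 + weights["teacher"] * teacher, 1)

print(overall_grade(peer=4.0, self_assessment=5.0, teacher=3.0))  # -> 3.9
```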

An objective of the case is the (conscious) changing of the teacher's role so that it is possible to deliver pedagogically meaningful, scalable education solutions more widely to student masses (cf. MOOCs). The teacher's workload in the course design stage is considerable, but smaller in the delivery stage. As reviewers of the DIY case we considered whether the changing role of the teacher should be explained briefly to learners, or whether it is enough in practice that it is an underpinning idea in the teacher's mind. The DIY model was a question of consciously constructing a course in which delivery was not teacher-led. The course is built in such a way that the learning environment, with its material, instructions and tools, contains all the information needed to complete the course. Tasks are submitted through blogs rather than sent to the teacher through email. The teacher's role is to function in the background as a mentor, be available as needed, and conduct a final course assessment.

Peer assessment criteria

The second principle of a good peer assessment task includes the following factors: Peer assessment criteria are explicit and expressed in an easy to understand way. Peer assessment criteria are derived from learning outcomes. The criteria lead participants to focus on issues significant to learning and learning outcomes in their assessments (Rajaorko & Leppisaari, 2017).

The principles of peer assessment in the examined case are expressed as follows: "Basically, learners assess each other's work with commensurate criteria and as transparently as possible." Students receive instructional assessment criteria, against which they assess their study-mates' tasks. Peer assessment criteria are given with each task: "Instructions for peer assessment criteria are given separately for each task, so that a peer reviewer receives the criteria with the task instructions." The peer reviewer gives both numeric and written feedback. Examples of task-specific assessment criteria are the pre-tasks, which are graded either 0 or 1 point and receive a passing grade if the total is 4-5 points. The peer reviewers are also expected to give a written comment in the blog, but no actual instructions are given as to how the written feedback should be presented and what issues it needs to focus on. However, the automatic message peer reviewers receive in their email inbox explains that they should utilize "the internet and web searches of various business/investment sites" in the concept definition exercise of the pre-tasks. How a single peer assessment task's criteria are connected to the learning objectives of the course remains unclear to the DIY case peer reviewers. In future, the connection of peer assessment criteria to the learning objectives of the entire course could be clarified and further explained to the students.

Peer assessment method

According to the third principle of a good peer assessment task the following elements have to be implemented: methods (implementation strategy) used in peer assessment are appropriate for achieving the course objectives. Implementation is participatory and social. The peer assessment process includes self-assessment, which guides development of one’s product (Rajaorko & Leppisaari, 2017).

In the DIY case students have five staged gaming tasks, and the accepted completion of one task opens the next. The tasks and instructions are given in the Kyvyt.fi environment. In the examined case, peer assessment is listed as the primary assessment method on the course. Additionally, self assessment in the form of a brief learning journal is also utilized. Peer assessment pairs are formed at the start of the course using the Word online registration document. Scheduling has been identified as a critical factor in peer assessment tasks, and even in the examined case there is cause to consider at what stage peer assessment occurs in this kind of non-stop delivery relative to one's own comparable task. In this model, the document explaining peer assessment provides instructions on where and within what timetable the learner should conduct peer assessment. A general rule stated during the course is that the peer reviewer should submit the review within a week of receiving the task. This ensures the course progresses on schedule and the learner can move on to the next stage.

In the examined case learners complete all tasks in the blog created in the pre-task, into which the peer reviewer also adds assessment outcomes (points) and comments. There are also separate forms used to confirm the completion and assessment of tasks. Sending these forms generates an automatic message to the peer reviewer with the information that there is something to assess. On completion of his/her work, the peer reviewer either sends a form informing that the task needs further work (an automatic message to the student requesting further work, and once completed another form is sent) or the task is accepted, allowing the student to progress to the next stage using a code given in the message. Based on their performance, students can progress in the game (or return to "start", that is, complete their performance as necessary). At the end, students check in the blog that all tasks have been assessed and commented on, and send the final summary to the teacher.

The examined peer assessment model activates and commits students to the learning process. Students have a clear responsibility in the realization of peer assessment, which supports careful completion of the task. A development area observed in the pilot was formation of assessment pairs from the perspective of personal scheduling and flexibility: In future the teacher intends to form peer assessment pairs in order of assignment completion.

Self assessment is part of a good peer assessment task (cf. Rajaorko & Leppisaari, 2017). In this model self assessment is performed at the end of the course, with an opportunity to comment freely on peer assessment and what it taught. In the self assessment drawn up in the blog, students reflect on their learning and on issues associated with completing the course and course content. Students are also asked to comment on whether the course delivery (the DIY method utilizing automatic evaluation and self and peer assessment) is also a good way to learn in the future. When mirrored against the good assessment task criteria, giving students feedback on their work as peer reviewers emerges as a development area in the DIY model. This could be developed by describing the peer assessment criteria and methods as clearly as possible, facilitating assessment of a peer reviewer's performance. The final task could include feedback for the peer reviewer, or be adapted so that the self assessment is also received by the peer reviewer, providing him/her feedback on his/her work through this avenue.

Peer assessment instruction

The fourth principle of a good peer assessment task is described as follows: instructions are explicit and guide participants to peer review constructively. Instructions help participants understand the purpose, significance and benefits of peer assessment and its impact on overall assessment (Rajaorko & Leppisaari, 2017). Clear instructions are of primary importance in the examined case, as study and the related peer assessment take place completely independently. Written instructions are given in the learning platform as students complete the course independently through staged game tasks. At the start of the course, students receive information about peer assessment criteria and a description of methods. Instructions are found stage by stage along the learning path and well express the "inbuilt instructions" required on online courses. This case is an excellent example of automated instructions implemented with modern technological solutions.

It would be useful to express the general peer assessment instructions at the start of the course in an even clearer learner-centered form (using the word "you" and telling the learner how to act), and they could also focus on giving constructive feedback and comments, skills required in working life. The teacher identifies as a challenge the offering of a clear kick-off for the course to support students in engaging with the process. The student info cards for peer assessment developed at Centria UAS to promote the use of peer assessment in teaching could be utilized for this.

In the model, students themselves make a summative overall assessment of their learning in a brief learning journal. Instructions in the final self assessment task could be more detailed in terms of providing students with an opportunity to utilize the peer assessment they have received and to mirror their own responses and learning against it. Students should be guided to include self assessment and final reflection at every step.

Discussion and conclusion

The transition from traditional teacher-centered education towards personal learning paths in a digital learning environment requires new ways of organizing teaching and pedagogical practices. This draws increasing attention to scalable teaching delivery solutions that enable automation, which at the same time changes the teacher's role in learning activities in a decisive way. The DIY model examined in this paper is an example of how digitalization can be employed meaningfully to achieve the pedagogic principles sought in education. In the DIY case these principles are recognized in the big picture as course completion methods and learner-centeredness, gaming, and peer assessment – and the transformation of the teacher's role that inherently follows such solutions. When online peer assessment is examined in greater detail, digitalization enables a partial automation of the assessment process. When digitalization enables multi-path settings, personalized learning, adaptive learning, and automated evaluation, procedures are executed more quickly and effectively, simultaneously accumulating the 21st century meta-skills required in working life.

In this paper a wider theoretical approach to online peer assessment is formed by applying the six factors for creating quality peer assessment conditions identified by Leppisaari and Im (2016). In the collegial peer review of the DIY case we have used the criteria for good peer assessment tasks created by Rajaorko and Leppisaari (2017). The results of the peer review of the DIY case indicate that the operational model instantiates many aspects of a good peer assessment task. In particular, it is very strong in providing good instructions for students step by step along the learning path, and the gaming integrated into the tasks supports this process excellently. The peer assessment method is described well and tutor scaffolding is available.

While most of the principles are substantially realized in the case, providing some orientation training in peer assessment (including its objectives), providing clear peer assessment criteria, and linking assessment criteria more evidently to course learning objectives could be strengthened to align more fully with the purpose and intent of a good peer assessment task. Based on the peer review, the potential of the DIY model in global contexts is identified as providing automated peer assessment practices in game-oriented processes. The DIY model presents an interesting method to transform assessment practices from traditional teacher-centered approaches to learner-centered approaches. Peer assessment in the examined case is part of a wider innovative game-based teaching and learning path and is realized in an automated, modern way, scalable to even larger groups.

In this DIY case examination we have also tested Rajaorko and Leppisaari's (2017) good peer assessment task criteria. In the main, they appear viable and have assisted the peer reviewers in the examination and analysis of the DIY peer review model. In our collegial review we have employed the same set of criteria teachers can employ for the self assessment and development of peer assessment tasks in their teaching. The inter-collegial peer assessment we have realized in our research facilitates the development of teaching quality and at the same time provides teachers with an experience of peer assessment in which the same principles that teachers use in their teaching experiments are employed. It is important to receive and analyze practical experiments in the development of peer assessment. New teaching methods are created through innovative experimentation. Developing peer assessment in one's work requires digital pedagogic skills of teachers. Additionally, teachers need basic coding skills – or the support of a technology team – for delivering the described peer assessment implementation. The DIY case shows how technology provides tools to help teachers develop assessment. New teaching/assessment solutions require teachers to internalize changes in their role in learning activities. In the case we examined, the teacher actively reflected on his changed role in automated assessment in a teaching process delivered in a new way. The teacher's role emerged as a significant issue in the description. This prompted reflection on whether a description of the teacher's role should be included as a sub-criterion (e.g. in the Method section) in the criteria for a good peer assessment task (cf. Rajaorko & Leppisaari, 2017). A successful peer assessment task requires that a teacher recognize and internalize changes in his/her role in the learning activity.

Peer assessment shows promising potential to improve and consolidate learning in MOOCs and comparable online deliveries. Further studies that focus on making peer assessment models more effective will need a larger sample with real data on student perceptions of the model's viability. It will also be very meaningful to collect international data and analyze it according to culture and teaching and learning methodology.

References

Admiraal, W., Huisman, B., & van de Ven, M. (2014). Self- and Peer Assessment in Massive Open Online Courses. International Journal of Higher Education, 3(3).

Assinder, W. (1991). Peer teaching, peer learning. ELT Journal, 45(3), 218–228.

Bachelet, R., Zongo, R., & Bourelle, A. (2015). Does peer grading work? How to implement and improve it? Comparing instructor and peer assessment in MOOC GdP. Proceedings of the European MOOC Stakeholder Summit 2015. Retrieved from https://halshs.archives-ouvertes.fr/halshs-01146710/document.

Bayne, S. & Ross, J. (2013). The pedagogy of the Massive Open Online Course: the UK view. Heslington, UK: The Higher Education Academy.

Biggs, J. & Tang, C. (2011). Teaching for Quality Learning at University: What the Student Does. Open University Press. Berkshire: McGraw-Hill Education.

Boase-Jelinek, D., Parker, J., & Herrington, J. (2013). Student reflection and learning through peer reviews. In Special issue: Teaching and learning in higher education: Western Australia's TL Forum. Issues In Educational Research, 23(2), 119–131.

Creelman, A., Ehlers, U–D., & Ossiannilsson, E. (2014). Perspectives on MOOC quality – An account of the EFQUEL MOOC Quality Project. The International Journal for Innovation and Quality in Learning, 2014, 78–87.

Falchikov, N. & Goldfinch, J. (2000). Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287–322.

Griffin, P., Murray, L., Care, E., Thomas, A., & Perri, P. (2010). Developmental Assessment: Lifting literacy through Professional Learning Teams. Assessment in Education: Principles, Policy and Practice, 17(4), 383–397.

Herrington, J., Reeves, T. C., & Oliver, R. (2010). A guide to authentic e-learning. New York: Routledge.

Im, Y. (2007). A Substantial Study on the Relationship between Students' Variables and Dropout in Cyber University. Journal of the Korean Association of Information Education, 11(2), 205–220.

Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A., & Hall, C. (2016). NMC Horizon Report: 2016. Higher Education Edition. Austin, Texas: The New Media Consortium.


Jyrhämä, R., Hellström, M., Uusikylä, K., & Kansanen, P. (2016). Opettajan didaktiikka (Teacher’s didactics). Jyväskylä: Ps-kustannus.

Kim, M. (2012). Developing and Validating a Multi-purpose Peer Assessment System for University Education. The Journal of Educational Information and Media, 18(4), 389–412. 

Larsen McClarty, K., Orr, A., Frey, P. M., Dolan, R. B., Vassileva, V., & McVay, A. (2012). A Literature Review of Gaming in Education. Research Report. Pearson.

Leppisaari, I., Herrington, J., Vainio, L., & Im, Y. (2013). Authentic e-Learning in a Multicultural Context: Virtual Benchmarking Cases from Five Countries. Journal of Interactive Learning Research, 24(1), 53–73. Chesapeake, VA: AACE.

Leppisaari, I. & Im, Y. (2016). Peer-assessment models in MOOCs for improving learning. Presentation at the e-Learning Korea 2016 conference, Seoul, 20.9.2016.

Lynch, T. (1988). Peer evaluation in practice. In A. Brooks & P. Grundy (Eds.) Individualism and autonomy in language learning. ELT Documents, 131 (pp. 119–125). London: British Council/MEP.

Matsuno, S. (2009). Self-, peer-, and teacher-assessments in Japanese university EFL writing classrooms. Language Testing, 26(1), 75–100.

McAuley, A., Stewart, B., Siemens, G., & Cormier, D. (2010). Massive Open Online Courses. Digital ways of knowing and learning.

McGee, P. (2008). Design with the Learning in Mind. In S. Carliner & P. Shank (Eds.), The e-Learning Handbook. Past Promises, Present Challenges (pp. 401–420). San Francisco: Pfeiffer.

McKenney, S. & Reeves, T. C. (2014). Educational Design Research. In J. M. Spector et al. (Eds.), Handbook of Research on Educational Communications and Technology (pp. 131–140). New York: Springer Science+Business Media.

Morales, M., Amado-Salvatierra, H. R., Hernández, R., Pirker, J., & Gütl, C. (2016). A Practical Experience on the Use of Gamification in MOOC Courses as a Strategy to Increase Motivation. International Workshop on Learning Technology for Education in Cloud. LTEC 2016: Learning Technology for Education in Cloud – The Changing Face of Education (pp. 139–149). Springer.

O'Toole, R. (2013). Pedagogical strategies and technologies for peer assessment in Massively Open Online Courses (MOOCs). Discussion Paper. Coventry, UK: University of Warwick. (Unpublished) Retrieved from http://wrap.warwick.ac.uk/54602/.

Rajaorko, P. & Leppisaari, I. (2017). Principles of a good peer assessment task. Presentation at peer assessment training, 22.3.2017.

Reigeluth, C. M. (1999). What is instructional design theory and how is it changing? In C. M. Reigeluth (Ed.), Instructional design theories and models, volume II: A new paradigm of instructional theory (pp. 5–29). Mahwah, NJ: Lawrence Erlbaum Associates.

Sadler, P. M. & Good, E. (2006). The Impact of Self- and Peer-Grading on Student Learning. Educational Assessment, 11(1), 1–31.


Santally, M. I. & Senteni, A. (2013). Effectiveness of Personalised Learning Paths on Students Learning Experiences in an e-Learning Environment. European Journal of Open, Distance and E-Learning, 16(1), 36–52.

Sharples, M., Adams, A., Alozie, N., et al. (2015). Innovative Pedagogy 2015. Open University Innovation Report 4. Milton Keynes: The Open University.

Tillema, H. (2014). Student involvement in Assessment of their Learning. In C. Wyatt-Smith, V. Klenowski & P. Colbert (Eds.), Designing Assessment for Quality Learning (pp. 39–54). Dordrecht: Springer.

Wolf, M. & McQuitty, S. (2011). Understanding the Do-It-Yourself Consumer: DIY Motivation and Outcomes. Academy of Marketing Science Review, 1(3), 154–170.