

Plagiarism and related issues in assessments not involving text

Final Report 2015

The University of Newcastle
Monash University
Australian National University

Simon, The University of Newcastle

www.newcastle.edu.au/prianit


Plagiarism and related issues in assessment not involving texts 2012-2013 i

Support for the project has been provided by the Australian Government Office for Learning and Teaching. The views expressed in this report do not necessarily reflect the views of the Australian Government Office for Learning and Teaching.

With the exception of the Commonwealth Coat of Arms, and where otherwise noted, all material presented in this document is provided under a Creative Commons Attribution-ShareAlike 4.0 International License (http://creativecommons.org/licenses/by-sa/4.0/).

The details of the relevant licence conditions are available on the Creative Commons website (accessible using the links provided), as is the full legal code for the Creative Commons Attribution-ShareAlike 4.0 International License (http://creativecommons.org/licenses/by-sa/4.0/legalcode).

Requests and inquiries concerning these rights should be addressed to:

Office for Learning and Teaching
Department of Education and Training
GPO Box 9880, Location code N255EL10
Sydney NSW 2001

<[email protected]>

2015

ISBN 978-1-76028-097-0 (PRINT)
ISBN 978-1-76028-098-7 (PDF)
ISBN 978-1-76028-099-4 (DOCX)


Plagiarism and related issues in assessments not involving text

Final Report 2015

Lead institution

The University of Newcastle

Partner institutions

Monash University
The Australian National University

Project leader

Simon, The University of Newcastle

Project manager

Ms Beth Cook, The University of Newcastle

Project team

Associate Professor Judy Sheard, Monash University

Associate Professor Chris Johnson, The Australian National University

Associate Professor Angela Carbone, Monash University

Chris Lawrence, The University of Newcastle

Professor Mario Minichiello, The University of Newcastle

Author

Simon, The University of Newcastle

Website

<www.newcastle.edu.au/prianit>


Acknowledgements

We would like to acknowledge numerous contributions to this research.

Project Reference Group

Margaret Wallace, University of Wollongong
Justin Zobel, University of Melbourne
Craig Zimitat, University of Tasmania
Jillian Hamilton, Queensland University of Technology
Greg Preston, University of Newcastle
Colin James, University of Newcastle
Suzi Hewlett, Office for Learning and Teaching, Department of Education and Training, ably aided by many OLT staff members, notably Siobhan Lenihan and Victoria Ross

Other contributors

Alexandra Rampano, University of Newcastle, who developed the initial Prianit website

Amy Robinson, University of Creative Arts, UK, who provided a copy of the questionnaire used in the Spot the Difference research into visual plagiarism in the UK

The focus group and interview participants and the anonymous survey respondents


Table of contents

Acknowledgements
Executive summary
Chapter 1: Project Aims
    Background
    Objectives
    Deliverables
Chapter 2: Research Approach
    Literature review and document analysis
    Interviews with academics
    Focus groups of academics and students
    Online survey of academics and students
Chapter 3: Sample Findings
    University policies and procedures
    Interviews with academics
    Focus groups
    Survey – policies and procedures
    Survey – broad findings
    Survey – design cohort
    Survey – computing cohort
Chapter 4: Dissemination and Impact
    Plagiarism and related issues in assessments not involving text
    Academic integrity: differences between computing assessments and essays
    How well do academic integrity policies and procedures apply to non-text assessments?
    Academic integrity: differences between design assessments and essays
    Student perceptions of the acceptability of various code-writing practices
    Academic integrity and professional integrity in computing education
    In their own words: students and academics write about academic integrity
    Academic integrity and computing assessments
Chapter 5: Conclusions
Appendix A: Certification
Further Appendices
    Appendix B: Bibliography
    Appendix C: Interview questions
    Appendix D: Focus group questions
    Appendix E: Survey – version for design students
    Appendix F: Survey – version for computing students
    Appendix G: Survey – version for design academics
    Appendix H: Survey – version for computing academics
    Appendix I: Example computing guideline from Australian National University
    Appendix J: Example computing guideline from The University of Newcastle
    Appendix K: Example online resource from Monash University
    Appendix L: Example design guideline from The University of Newcastle


Executive summary

Following an impression that the principles and practices of academic integrity might not apply uniformly across all forms of assessment, this project has investigated perceptions of academic integrity among students and academics in visual design and in computing.

Interviews were conducted with academics at most Australian universities to determine what resources they use to educate students about academic integrity within their particular disciplines.

Focus groups were conducted with students and with academics in computing and in visual design at three Australian universities and two computing education conferences.

An Australia-wide online survey drew nearly a thousand valid responses, with respondents from all of Australia’s universities.

In brief, the research found evidence that:

• students and academics in computing and visual design believe that their university’s academic integrity policies do not usefully apply to assessments in their disciplines;

• while there are standard ways in an essay of referencing material from external sources, there are no such standard ways in computing or in visual design;

• there is no universal agreement that referencing of external material is necessary in these disciplines as it is for essays;

• students and academics in computing have significantly different perceptions regarding certain practices in essay assessments and the parallel practices in computing assessments;

• students and academics in visual design have significantly different perceptions regarding certain practices in essay assessments and the parallel practices in design assessments;

• there are significant differences between the numbers who consider certain practices to be plagiarism or collusion and the numbers who consider those practices to be unacceptable;

• it is understood that designs are necessarily based on other designs, and there is no universally accepted requirement in design to identify and reference external sources of inspiration;


• it is generally expected that computing practitioners will reuse existing algorithms and code where possible, and there is no widely accepted requirement to reference material so used.

These, along with many other findings, lead to the conclusion that the practices of academic integrity, as defined for written assessment items such as essays, do not apply in the same way to computing or to visual design. Most of the project’s participants appeared willing to engage with the principles of academic integrity, but have difficulty doing so because their university’s policies and procedures are alien to their particular disciplines.

If academic integrity is to be upheld and respected across all disciplines in Australia’s universities, some substantial changes are required:

1. The definition of academic integrity needs to be broadened to include ensuring that one’s work is conducted in the manner expected by the discipline, and that the work is one’s own within the parameters understood to apply within that discipline.

2. The many disciplines that use non-textual assessment items must define the parameters of academic integrity as they apply within those disciplines. Giving full consideration to the professional practice of the discipline, the level of expectation regarding the use and referencing of external material, and the means of referencing external material, each discipline must indicate what it means to act with academic integrity within that discipline. This might begin at an institutional level, but must eventually become national, and ideally global.

3. The explanations from the disciplines must be accompanied by examples of referencing (if referencing is deemed appropriate), and by materials designed to educate students into the ways of academic integrity within those disciplines.

4. Universities’ policies need to be rewritten to acknowledge that there is no single way of implementing the principles of academic integrity, giving due deference to the definitions, practices, and procedures provided by each discipline.

While these major tasks are being undertaken, a number of smaller tasks should be implemented more rapidly, to help reduce some of the confusion that was uncovered by this project:

5. Current guidelines should be amended to make clear the distinction between questions of copyright and of academic integrity. Too many people believe that material that is free of copyright can be included in their own work without referencing.

6. In any discipline that uses non-textual assessment items, ideally the whole discipline will agree on the standards and requirements that will ensure academic integrity in its assessments. If such agreement cannot be reached, every assessment item should


include in its specification an explanation of the academic integrity requirements of that item.

7. Disciplines that consider academic recycling (self-plagiarism) to be a problem should make this completely clear to their academics and students and should explain why. If there are disciplines in which it is not seen as problematic – for example, in computing, where the value of code reuse is taught – the university guidelines should reflect this.

8. Within the discipline of computing, academics should acknowledge that being able to explain a computer program is not the same as being able to write it. Of course academics should continue to ask students to explain programs if they believe that this is important, but they should be wary of relying on the explanation to decide whether the students wrote the program.


Chapter 1: Project Aims

Background

Academic integrity is integral to the assessment process at university, and indeed to the academic publication process. Academic integrity is a way of behaving designed to ensure – or rather, to assure – that the work being submitted is indeed the work of the person who submits it.

There are many ways to breach academic integrity, of which plagiarism is the most commonly discussed. Plagiarism is use of the work of others without appropriate acknowledgement, which can lead readers to believe that it is one’s own work. Academic writing, almost by definition, builds upon the writing and thinking of others: it is a balance between explaining what others have written or said and providing one’s own interpretations and perhaps even entirely new ideas. The ideas of others are necessary to place one’s own work in a firm context. In most university essays there is no expectation that the student will include any completely new ideas; rather, it is expected that the student will agree with one or more of the positions already expressed, and will formulate that agreement in a combination of words that has not been used before. This relies upon the understanding that there are almost infinitely many ways to express the same idea, almost infinitely many ways of combining large numbers of words to say the same thing.

Many students feel unable to rise to this challenge. They see the same concepts expressed in many different ways, and know that they themselves cannot express the ideas nearly so well. Therefore they choose to recycle the words of others. If they enclose those words in quotation marks and provide in-text references to make it absolutely clear whose words they are and where the reader can find the originals, they are writing with academic integrity. Unfortunately, they are also running the risk of being awarded a low mark for the assessment because there is not enough of their own expression in the work. On the other hand, if they omit the quotation marks and the reference, the words will appear to be their own, and they are in breach of academic integrity: they have plagiarised the words of others.

Collusion is another way of breaching the guidelines of academic integrity. Collusion is unauthorised collaboration with others, typically with other students in the same class. Students collude when they work together on an assessment task that is required to be carried out individually, or when two or more groups work together on a task that is expected to be the work of individual groups. A student who is struggling to conduct the background research for an essay might acquire a draft of a friend’s assignment. The two might then develop the essays independently, but they are nevertheless the result of collusion.


The academic world has access to text-matching software that highlights passages encountered elsewhere. Properly used, this software can draw the assessor’s attention to similar passages that might represent plagiarism (if the similarity is with an existing source) or collusion (if the similarity is with the work of another student).
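The core idea behind such text matching can be sketched in a few lines. The fragment below is only an illustration of the principle – comparing the proportion of shared word sequences between two submissions – and not how any particular commercial tool works; the function names and sample sentences are ours.

```python
# Illustrative only: measure how much of two texts' word-trigram
# vocabulary is shared, the kind of signal a text-matching tool can
# surface for an assessor to examine.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a, b, n=3):
    """Jaccard similarity of the word n-gram sets of two texts."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "academic integrity is integral to the assessment process at university"
copied = "academic integrity is integral to the assessment process at any institution"
fresh = "students should always reference the sources they draw upon"

print(overlap(original, copied))  # high: worth a closer look
print(overlap(original, fresh))   # 0.0: no shared trigrams
```

A real tool works at a far larger scale, matching against the web and past submissions, and its output is a prompt for human judgement rather than a verdict.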

This takes place in a context in which students are aiming to be awarded the highest mark possible, and yet in which many students truly have nothing original to write about the topic in question. It can therefore take on the character of an arms race, where the students combine the words and ideas of previous writers in a way that they hope is novel, while the assessors try to assure themselves that the words submitted by their students are indeed their own, rather than being subtly or overtly copied from others.

Academic integrity has become an academic discipline in its own right. Universities have specialists in academic integrity. Academics write books and papers about academic integrity, covering such topics as how to practice it, how to detect breaches, how to punish breaches, how to write policies that explain it to students, and more. There is also an industry built upon helping students to breach academic integrity, for example by buying suitable essays from vast collections or by having work written to order (Clarke & Lancaster, 2013; Walker & Townley, 2012).

The foregoing description is deliberately couched in terms of essays and the written word, and indeed that is the context of much of the literature of academic integrity. Guides to academic integrity (Carroll, 2002; Harris, 2001; Neville, 2010) talk about using the words of others, about quotation marks, in-text references, reference lists, Harvard or APA style, paraphrasing, paper mills, and synthesising the words of others. However, these concepts refer to just one form of assessment item, and there are many other forms to which they simply do not apply.

This project focuses on assessment items that are not written in prose text, a phrase that merits explanation. An essay, which for some represents a typical university assessment, is not only textual in form, but is prose: it is made up of sentences and paragraphs, similar in form to this report. Many assessment items are not at all textual in form. Examples are paintings, musical compositions, and architectural drawings. However, it is not sufficient to classify items as text or non-text, because there are assessment items that are made up of text but are clearly not prose. Here is a small piece of computer program ‘code’:

    def average(a):
        sum = 0
        for k in range(0, len(a)):
            sum = sum + a[k]
        if len(a) > 0:
            return float(sum) / len(a)
        else:
            return 0


There are some recognisable English words here – for example average, sum, for, range, and float – but they are clearly not being used in a way that makes sense in English. These words are joined by mathematical symbols such as > and =, and by non-words such as len and k. The program code is definitely textual in form, but it is clearly not prose. It is in recognition of assessment items in forms such as this that we use the term prose text to refer to ‘normal’ assessment items such as essays. However, having made this distinction we will sometimes use the word ‘text’ to mean prose text, typically when contrasting it with ‘non-text’.

The principle of academic integrity, assuring that the work submitted by students is their own, applies to non-textual assessments just as it does to textual assessments. However, the issues and implementation differ greatly. Consider, for example, a calculus assignment in which students are asked to differentiate a number of algebraic expressions. The rules of differentiation are clear and unambiguous. Any student applying those rules in the expected manner will produce the same answer with the same intermediate steps. Students might plagiarise, if they can find the same task in a textbook or an online example. Students might collude, seeking or giving help in applying the rules to derive the answer. Neither of these activities will be detected by text-matching software, partly because a mathematical derivation is not text as we generally understand it, but more because all correct answers will be close to identical, unlike an essay question, which has infinitely many correct answers. A mathematics teacher checking on academic integrity will be looking not for similarities in the correct answers, but for dissimilarities that might indicate errors made in copying, or similarities in incorrect answers that might indicate collusion. The same goal, seeking assurance that the work is that of the student who submitted it, has a completely different expression.
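The determinism described above can be sketched concretely. The fragment below is our own illustration, not part of the project, and the function name is invented for this example: a polynomial is represented by its list of coefficients from the constant term upward, and the power rule maps every correct attempt onto exactly the same answer.

```python
# d/dx(a*x^n) = n*a*x^(n-1), applied term by term.
# Coefficients run from the constant term upward: [5, 2, 3] is 5 + 2x + 3x^2.

def differentiate(coeffs):
    """Differentiate a polynomial given as a coefficient list."""
    return [i * a for i, a in enumerate(coeffs)][1:]

# Every student who applies the rule correctly produces this same list,
# so similarity between correct answers tells an assessor nothing.
print(differentiate([5, 2, 3]))  # [2, 6], i.e. 2 + 6x
```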

This project investigates questions of academic integrity specifically in computing and visual design, two disciplines that use non-textual assessments. It explores the questions described above, and seeks to clarify and understand any differences in standard practices in different disciplines.

Objectives

Our aim was to explore the understandings of and attitudes to student academic integrity in areas of study that involve non-text-based assessment items, specifically in computing, art, and visual design. We aimed to find answers to a number of specific questions, including:

• What do academics and students in these areas think constitutes a breach of academic integrity?

• What do academics and students in these areas think might not be a breach of academic integrity, even though in a text-based area it might be?

• What do academics do to inform students as to the expected standard of integrity?


• What do academics do to detect similarities that might suggest academic misconduct? (For example, do they use automated tools or simply their own experience and awareness?)

• How do academics deal with academic misconduct when it is discovered?

• Are there areas (such as perhaps computer programming) in which academics and/or students think that there is only one correct answer, so copying cannot be detected?

• Are there areas (such as perhaps visual images) in which academics and/or students think that every answer is unique, so copying is acceptable so long as one personalises the copy?

• To what extent do academics in these areas believe that university policies for academic integrity based on text are adequate for non-text-based assessments?

In exploring these questions, the project aimed to determine whether academic integrity is a one-size-fits-all concept that can be adequately covered by policy statements little more than a paragraph in length. If it proved not to be such a uniform concept, the project would describe the variation in academic integrity and its understandings across the different disciplines being examined.

As part of the investigation the project aimed to gather exemplars in academic integrity for non-text-based assessments in computing, art, and visual design, illustrating good practice in policy, practice, and the education of students and academics.

Deliverables

The following deliverables were expected from the project:

• A detailed report presenting a rich picture of Australian academics’ and students’ understandings of and attitudes to academic integrity in some areas of study that involve non-text-based assessment items.

The detail and richness cannot be captured in this report, which is necessarily brief. Instead they are being expressed in peer-reviewed academic publications, of which five have so far been published.

• A high-level summary of the findings, presented using terminology that is suitable for a wide range of audiences.

This report constitutes that high-level summary.


• A collection of exemplars of good practice in helping students to understand what academic integrity is, and how to practice it, in some areas of study that involve non-text-based assessment items.

Very few exemplars were discovered in the course of the project. There are indeed some examples of good practice included as appendices to this report, but these constitute the full set as gathered, rather than a selection.

• A collection of strategies that academics have used successfully to address the problem of academic integrity with non-text-based assessment.

The strategies used by academics are discussed briefly in this report, and explained in more detail in the published papers.

• A brochure for students outlining the issues of plagiarism and collusion in some non-text-based assessment items, and how to avoid these breaches of academic integrity.

The research revealed a range of opinions that varied between academics and students as well as among academics and among students. These findings indicate that it is premature to produce a definitive brochure to guide students in navigating the academic requirements at Australian universities. Nevertheless, work is proceeding on some general brochures that should help to raise the awareness of students and academics concerning the variation in perceptions, and thus of the care that should be taken in specifying and undertaking assessments in the non-text-based areas.

• A recommendation for institutions outlining appropriate ways to deal with academic integrity in the context of assessment items that are not based in text.

The broad thrust of such a recommendation is embodied in this report. The detail must be arrived at in discussion between the universities and the disciplines in question.


Chapter 2: Research Approach

The research consisted of a literature review, interviews, focus groups, and a survey, as described below. The interviews, focus groups, and survey were all approved by the Human Research Ethics Committees at The University of Newcastle (approval number H-2012-0268), Monash University (2012001666), and The Australian National University (2012/669).

Literature review and document analysis

A literature review was conducted in the broad area of academic integrity to form a picture of the current state of the research in that area. The review then focused on academic integrity in computing and in visual design, the two educational disciplines from which the project’s team members are drawn.

Pertinent subsets of the literature are summarised in the publications arising from this work, and the full bibliography is provided as an appendix.

Every Australian university has a policy that addresses academic integrity, and it is generally possible to view those policies on the universities’ publicly accessible websites. We were able to examine the policy of each university to determine whether and how it addresses the question of different forms of assessment item. It was not always as easy to examine the procedures to be followed in cases of suspected breaches of the policy, but we were able to examine some of these procedures, forming a partial picture of the extent to which the policies and procedures deal with assessment items not based in prose text.

Interviews with academics

We set out to conduct telephone interviews with academics in computing and in visual design or fine art at every Australian university, with the intention of gathering examples of the resources used to educate students about academic integrity in these non-text disciplines. We were unable to meet this ambitious target, but we were able to interview 28 computing academics from 23 universities and 12 art or design academics from 10 universities.

In addition to the questions about educational resources, interviewees were asked whether there was anything that they wished to add in regard to academic integrity in their disciplines.

The indicative script for the interviews is provided as an appendix to this report.


Focus groups of academics and students

We conducted focus groups of academics and of students in computing and in visual design to form an initial impression of their thoughts about, and their understandings of, academic integrity and to help inform the design of the survey.

While the focus group facilitators generally permitted discussion to range widely within the overall scope of the project, there was an indicative script (Appendix D) asking what plagiarism and collusion mean, both in the context of essays and in the context of the non-textual assessments relevant to the group; how prevalent participants think these practices are; how they can be avoided; and a number of other questions.

Six focus groups were conducted at the authors’ universities and two at computing education conferences. Each group targeted people from a single profile, such as computing academics or visual design students. Together the focus groups involved 12 academics and eight students from visual design and 18 academics and 12 students from computing.

The responses from the focus groups, along with surveys reported in the literature, informed the design of the online survey.

Online survey of academics and students

On the basis of the literature review, the interviews, and the focus groups, we designed a comprehensive online survey. Administered as a single survey, it split at various points into up to four streams: one for design students, one for design academics, one for computing students, and one for computing academics.

A major component of the survey is a set of 14 scenarios describing practices that might or might not constitute academic misconduct; for example, Borrowing another student’s essay and rewriting it in one’s own words. Every participant was asked about the 14 scenarios as they pertain to essays. The computing participants were then asked about a parallel set of 14 scenarios dealing with computing assessments, and the design participants were asked about a parallel set of scenarios dealing with design assessments. For each scenario, participants were asked whether it constituted plagiarism/collusion (yes/unsure/no) and whether it was an acceptable practice. The responses would thus permit comparisons such as:

• essays vs computing assessments

• essays vs design assessments

• academics vs students

• plagiarism/collusion vs acceptability

The survey also included questions about


• the perceived prevalence of academic misconduct

• the steps taken by academics to discourage misconduct or by students to avoid misconduct

• confidence that students know how to reference externally sourced material in non-text assessments

• possible differences between academic and professional practice in the discipline

• the adequacy of university policies to address academic integrity in non-text assessments

All four versions of the survey are provided as appendices to this report.

The survey was conducted between July and September 2013, promulgated by email and word of mouth. Responses were received from all 39 universities in Australia, with 1315 responses in total. Responses that were clearly incomplete, or that failed to indicate association with either computing or visual design, were eliminated. The remaining 990 responses represented 317 design students, 486 computing students, 117 design academics, and 70 computing academics.


Chapter 3: Sample Findings

The survey has produced an extremely rich set of data. A number of targeted analyses have been performed on the data, but there remains scope for further work. In this chapter we present brief summaries and highlights of some of the specific findings to date.

University policies and procedures

The academic integrity policies of Australia’s universities generally define plagiarism and collusion in a way that does not restrict them to prose text. Some state this explicitly: for example, the policy of James Cook University defines plagiarism as “reproduction without acknowledgement of another person’s words, work or expressed thoughts from any source”, and adds that “the definition of words, works and thoughts includes such representations as diagrams, drawings, sketches, pictures, objects, text, lecture handouts, artistic works and other such expressions of ideas” (James Cook University 2016a). Not all policies are so explicit, but most are written in a way that admits a suitably broad interpretation.

The procedures and guidelines, on the other hand, are often couched in terms that apply only to prose text. Many, for example, express the expectation that references must be in specific forms such as APA, Harvard, Oxford, or IEEE – all of which are ways of referencing prose text. Again considering James Cook University as an example, its three easy steps for avoiding plagiarism include the headings “when you are thinking about your essay” and “when you are writing your essay” (James Cook University 2016b).

Where there are procedures or guidelines for other forms of work, they are generally produced by individual academics, and often used only by those individual academics. Generally speaking, the university-level procedures appear to be written for, and based upon, assessment items in prose text, with little acknowledgement of the many other forms that assessment items can take, and certainly with no assistance for students trying to find out how to reference the works of other people in non-textual assessment items.

Most universities appear to have short online courses or modules that walk students through definitions and examples of academic misconduct, but these are generally not publicly accessible. Most of those that we were able to access – principally those of the team members’ universities – deal exclusively with prose text.

Interviews with academics

The interviews with academics were intended primarily to discover what resources were being used to educate students about the requirements of academic integrity for non-text items, and to identify exemplars of such material.


The essential finding was that very few educational resources have been developed to instruct students about acceptable and unacceptable practices with regard to non-text-based assessment items. Those materials that were identified were generally created for specific courses (also known as subjects or units), with very few resources that were adopted across a whole discipline or department. Some of the materials shared with the project are provided in appendices I, J, K and L of this report.

Most of the academics who were interviewed expressed concerns that their university’s academic integrity policies and procedures failed to adequately address the non-text assessments that were used in their courses. In the visual area, academics generally felt that this was not a major problem, as they tend to observe the development of their students’ work from start to finish, so they can be reasonably confident that the work is the students’ own. The computing academics had no such assurance, and a number of them indicated that this is why a formal examination remains a major component of their assessment: it is the only item that is more or less assured to be that of the student.

Academics proposed a number of reasons why students might breach academic integrity. Some suggested that students from certain cultural backgrounds had been taught to work in a way that in Australia is considered academic misconduct; others, typically in computing, thought that students were unable to complete the work, or had left too little time to do so; still others felt that academic misconduct was sometimes chosen because it was easier than doing the required work. Computing academics were concerned that there is an increasing tendency for students to outsource their assigned work to the many internet sites that are set up for small-scale contract programming.

Interviewees were asked what they do to try to detect breaches of academic integrity. The computing academics were all aware of code-matching software, which detects similarity in program code the way text-matching software does in prose text. Packages being used included MOSS, JPlag, and in-house systems; however, most interviewees did not use code-matching software, relying instead on visual detection of similarities. There was a general feeling that computing students are more likely to collude, helping one another more than is appropriate, than to plagiarise. Some academics interview students about their assignments, the assumption being that if students can explain their code, there is a reasonable chance that they wrote it.
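The reliance on visual detection is notable because the core idea behind code-matching tools is simple. As a purely illustrative sketch (the fingerprinting algorithms in MOSS and JPlag are considerably more sophisticated, and all names here are our own invention rather than drawn from those tools), such software typically normalises programs into token streams, so that renaming variables or reformatting cannot disguise copied code, and then compares overlapping token sequences:

```python
import re

# Minimal illustration of token-based code similarity. This is NOT the
# MOSS/JPlag algorithm (those use robust fingerprinting); it only shows
# why renaming identifiers does not hide copied code.

KEYWORDS = {"def", "return", "if", "else", "for", "while", "in"}

def normalise(code):
    """Tokenise code, replacing every identifier with a placeholder."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    return ["ID" if t.isidentifier() and t not in KEYWORDS else t
            for t in tokens]

def ngrams(tokens, n=4):
    """Overlapping n-token windows: the unit of comparison."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def similarity(code_a, code_b, n=4):
    """Jaccard similarity of the two programs' token n-grams."""
    a, b = ngrams(normalise(code_a), n), ngrams(normalise(code_b), n)
    return len(a & b) / len(a | b) if a | b else 0.0

original = "def total(xs):\n    s = 0\n    for x in xs:\n        s = s + x\n    return s"
renamed = "def sum_up(vals):\n    acc = 0\n    for v in vals:\n        acc = acc + v\n    return acc"

print(similarity(original, renamed))  # 1.0: identical structure despite renaming
```

Renaming every variable leaves the token structure, and hence the similarity score, unchanged; this is precisely the kind of disguise that visual inspection of code can miss.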

In the visual design area, academics believe that they know when students have based their work on another image, in which case some use Google or TinEye to look for the source image. They believe that collusion is rare in their classes; they also believe that students basing their work on an existing image will source that image from the internet, not from a classmate.


This suggests one clear difference between the two disciplines that are the focus of this study: a belief that students in computing are more likely to collude than to plagiarise, and that students in visual design are more likely to plagiarise than to collude.

Focus groups

The students and academics participating in the focus groups overwhelmingly expressed the view that considerations of academic integrity are not the same for text-based and non-text-based assessments. Universities’ definitions, standards, policies, and practices were seen as having been created essentially for written prose assessments, and as not equally applicable to other forms of assessment.

In both computing and visual design there was more acceptance of unreferenced copying than there is for essays, but for different reasons. In the visual area it is understood that almost every design is based to some extent on other designs. However, there is no tradition of referencing the designs that might have been influential, and indeed no way of incorporating an explicit reference into a design. In computing there is an emphasis on reusing existing algorithms (approaches for solving standard problems), and where possible on reusing the actual program code. Computing educators do not want their students to re-invent the wheel. Students are sometimes encouraged to report the sources for any code that they copy and/or modify, but this is by no means a universal requirement, and there are no broadly accepted standards within the discipline for doing so.

There was broad agreement in the focus groups that university policies and guidelines fail to address the needs of non-textual assessment items. First, they are often couched in terms that apply just to the written word. Second, they impose strictures that cannot apply uniformly across all of the university’s assessments. For example, a policy that requires students to use the Harvard referencing system immediately renders itself inapplicable to almost any assessment that is not written in prose text. Third, they make no allowance for different practices in different disciplines.

There appeared to be consensus in all of the focus groups that there is a substantial difference between text-based and non-text-based assessments; that the boundaries of acceptability are harder to define for non-text assessments than for essays; and that there is a case for applying different standards to text and non-text assessments.

Survey – policies and procedures

One section of the survey asked about policies, procedures, and practices with regard to non-textual assessment items. Questions in this section had answer options of agree, unsure, and disagree, and generally about a third of the respondents selected unsure. This in itself is a cause for concern, suggesting that these respondents were not sufficiently familiar with their university’s policies to have an opinion.


Asked whether their university’s academic integrity policy adequately addressed non-text-based assessment items, about a third of the students thought that it did, but only about a quarter of the academics thought likewise. Responses were fairly similar for the question asking whether the university adequately educates students about academic integrity requirements for non-text-based assessment items. Likewise, about a third of the students indicated that they understand academic integrity in the context of essays but not in the context of non-textual assessment items. This suggests that many universities have scope to improve both their policies and their education with regard to such items.

While most universities now use text-matching software to detect similarities between essays, this software is not designed to detect similarities between non-text-based assessment items. There are software packages that detect similarity between images, or between different computer programs, but none of these is widely used. The survey responses showed strong agreement with the propositions that similarity is more difficult to detect in non-text-based assessment items; that there is a lack of effective detection tools for such items; and that detecting breaches of academic integrity in them is more time-consuming than it is for text-based assessments. Between 60% and 80% of the academics reported that they detect plagiarism or collusion by closely inspecting students’ work, by checking for similarities in the work of different students, by noticing sudden improvements in the work of students, or by noticing the use of content that had not been covered in the course.
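For context, image-similarity tools commonly rely on perceptual hashing: an image is reduced to a short fingerprint that survives resizing and small edits, and fingerprints are compared by counting differing bits. The following toy sketch is our own illustration of the “average hash” idea, operating on plain brightness grids rather than real image files:

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a grayscale grid: one bit per pixel,
    recording whether that pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests similar images."""
    return sum(a != b for a, b in zip(h1, h2))

img = [[10, 200], [220, 30]]        # original image (2x2 brightness grid)
edited = [[12, 190], [210, 35]]     # small brightness adjustments
different = [[200, 10], [30, 220]]  # inverted composition

print(hamming(average_hash(img), average_hash(edited)))     # 0: edit undetected
print(hamming(average_hash(img), average_hash(different)))  # 4: every bit differs
```

The small edits leave the fingerprint unchanged, while a genuinely different composition changes every bit; production systems apply the same principle to much larger hashes.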

There was also a positive response to the proposition that there is a grey area between plagiarism and the traditions or practices of the non-text-based disciplines, traditions such as homage and collage in visual design and teamwork and code reuse in computing.

The overall impression from the responses to this part of the survey, supporting the impression from the focus groups, is that the current policies and procedures of Australia’s universities do not adequately address academic integrity for assessment items not written in prose text.

These findings are described in more detail in a paper presented to the Sixth International Integrity and Plagiarism Conference (Simon et al 2014a).

Survey – broad findings

A core component of the survey was three sets of scenarios. All respondents were presented with a set of scenarios relating to essays; then, depending on which stream they had identified with, respondents were directed either to scenarios relating to computing assessments or to scenarios relating to visual assessments. For each scenario the respondents were asked ‘Is this plagiarism/collusion?’ (yes/unsure/no) and ‘Is this an acceptable practice?’ (yes/unsure/no). It might appear to some readers that we were effectively asking the same question twice, but we wished to explore whether there were practices considered acceptable although they were breaches of academic integrity guidelines, or practices considered unacceptable although they did not breach the guidelines.

The scenarios in each set were designed to be comparable, so that we could compare respondents’ answers to a given essay-based scenario with their answers to the corresponding non-text-based scenario. All four versions of the full survey are presented in appendices E, F, G, and H of this report, but here is one scenario from each set to illustrate the parallelism:

Essays: Borrowing another student’s essay and rewriting it in one’s own words

Visual design: Borrowing another student’s design and changing it so that it looks quite different

Computing: Borrowing another student’s code and changing it so that it looks quite different

All participants responded to the essay scenarios. They were then directed to either the computing or design scenarios, according to the discipline that they had nominated when starting the survey.

We found substantial differences between the respondents’ views on essays and non-text-based assessments, with the boundaries of acceptable and unacceptable practices more difficult to define for the latter.

A number of the computing scenarios were seen as plagiarism/collusion by significantly more respondents than the corresponding essay scenarios. These include using the work of others and fully referencing it. That is, there are respondents from computing who appear to believe that any use of the work of others in a computing assessment is plagiarism or collusion, regardless of whether it is referenced.

In the realm of non-text-based assessments, differences were observed between perceptions regarding assessments in visual design and assessments in computing. For example, purchasing code from the internet and including it in one’s assignment was seen as unacceptable by 86% of the computing respondents, while only 52% of design respondents considered it unacceptable to purchase an image from the internet and use it in one’s assignment.

The reuse of one’s own work, often known as self-plagiarism, was seen as unacceptable for essays by nearly 70% of the respondents and unacceptable for design assessments by 65% of the design cohort. In computing only 59% saw it as unacceptable, perhaps reflecting the positive regard accorded to code reuse. However, only about half of the respondents considered the practice to be plagiarism/collusion, and a further quarter were unsure. This is possibly due to the standard definitions: the reuse of one’s own work is neither plagiarism (inappropriate use of the work of others) nor collusion (inappropriate collaboration with others). Nevertheless, it is considered by many to be unacceptable.

All of these findings support the proposition that academic integrity is not the same for non-text-based assessments as for essays, and indeed that it is not the same for different forms of non-text-based assessments.

These findings are described in more detail in a paper presented to the Sixth Asia-Pacific Conference on Educational Integrity (Simon et al 2013a), and in press as a chapter in a book of selected papers from that conference.

Survey – design cohort

Among the design respondents, the survey found a number of differences between their perceptions of essays and design assessments, and between what they see as plagiarism/collusion and what they see as unacceptable.

Some previous surveys have found that academics generally have a stricter view than students on what is acceptable (Brimble and Stevenson-Clarke 2005, Foltýnek et al 2013, Gynnild & Gotschalk 2008, Park 2003). While many of our findings support this, we did find one area in which students took a stricter view than academics. More design students than academics felt that it was plagiarism or collusion to ask other students for advice on how to improve a design, or even to discuss the detail of a design while working on it. This suggests that students are being extra cautious not to cross what they see as the blurred boundary of academic misconduct.

The survey included two scenarios that are generally considered not to constitute plagiarism or collusion: using the work of others and fully referencing it, and discussing the work in general terms with another student before undertaking it alone. While the majority of respondents agreed that these are not plagiarism or collusion, significantly more found these behaviours problematic for design assessments than for essays. Similar differences were found with some other scenarios; for example, revising an essay to incorporate issues found when reading another student’s essay was considered less problematic than the equivalent practice with a visual design. Conversely, basing a design on the work of another student, and basing it on freely available material without referencing the source, were less likely to be seen as plagiarism/collusion than the same practices when writing essays.

There were some interesting differences between what is considered to be plagiarism/collusion and what is considered to be acceptable. Discussing the detail of the work while it is in progress, and seeking advice from another student after completing the work, were both considered by some respondents to be plagiarism/collusion and yet acceptable. By contrast, there were respondents who did not feel that resubmission of prior work was plagiarism/collusion, yet saw it as unacceptable. Interestingly, there were academics who did not see it as plagiarism/collusion to purchase work from the internet, to pay another person to undertake the work, or to borrow another student’s work and change it; but who nevertheless saw these practices as unacceptable. These distinctions suggest that plagiarism and collusion are not yet adequately defined for the visual design context in all universities.


Many respondents were unsure whether a number of the scenarios constituted plagiarism/collusion: for example, posting an essay to an online forum and asking for advice on it, or purchasing images to incorporate into one’s design.

These findings are described in more detail in a paper presented to the 2014 conference of the Design Research Society (Simon et al 2014b).

Survey – computing cohort

As assessed by our survey, there are two senses in which a practice can be considered wrong. It can be plagiarism or collusion, and thus a breach of academic integrity guidelines; or, somewhat independently of that, it can be unacceptable – known not to be right, even though it might not technically breach any rules. One focus of our analysis for the computing cohort was the extent to which certain practices are considered acceptable in the context of computing education.

According to discussion in the focus groups, many of the computing participants felt that unreferenced copying of code was legitimate so long as students subsequently did enough of their own work with the code. It was acknowledged that students are often discouraged from building their code from the ground up, and instead encouraged to make use of code that has already been written.

Three scenarios on the survey asked about using the work of other students. Doing so without the permission of the other student was considered unacceptable by 94% of the computing students; doing so with permission, and changing the code so that it looks different, was considered unacceptable by 81%; doing so with permission, and developing it into one’s own code, was considered unacceptable by only 67%. These figures support the notion expressed in the focus groups that using the code of others becomes more acceptable the more work one subsequently does with it.

An essay can have many errors of syntax and expression and still be readable. By contrast, a single syntactic error in a computer program will prevent it from running at all, and an error of expression will lead to incorrect output when it runs. Therefore identification and correction of errors (known as debugging) is an absolutely essential aspect of writing a programming assignment. Program development software provides some help with identifying errors, but students are often unable to make sense of the help or to know how to fix the problem. Help with debugging is therefore vital to many students when writing programs.
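The distinction can be sketched with a small, purely illustrative Python fragment (the function and its bug are our own invention): a syntax error prevents the program from running at all, while an error of expression lets the program run and quietly produce the wrong answer, which must then be found by debugging.

```python
# Syntax error: uncommenting these two lines raises a SyntaxError
# (missing colon) before the program can run at all.
# def average(numbers)
#     return sum(numbers) / len(numbers)

# Error of expression (logic error): the program runs, but the divisor
# is off by one, so every result is wrong.
def average(numbers):
    return sum(numbers) / (len(numbers) + 1)

print(average([2, 4, 6]))  # prints 3.0, though the correct average is 4.0
```

Because the second kind of error produces no message at all, students who cannot locate it themselves have a strong incentive to seek help from others.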

While only 15% of computing students consider it acceptable to give troublesome code to another student and ask them to fix it, 69% consider it acceptable to ask for another student’s advice on how to fix a program, and 52% consider it acceptable to post troublesome code to a message board and ask for help. For all of these scenarios there were also large numbers of respondents who were not sure. That is, most of the students consider it acceptable to seek the help of other students, or of professional programmers by way of the internet, when completing individual assignments.

While the opinions discussed here are those of the students, it is clear that many of the computing academics also see nothing wrong with collusion and plagiarism as they are generally defined. That is, they see it as acceptable for students to assist one another with individual assignments, and to make use of code written by others without acknowledging the fact or the source. These academics would therefore seem to hold positions that are at odds with the policies of their university, and something should change: either the academics should align their positions with the university’s policies, or the university’s policies should change to align with the understandings of the academics who teach in disciplines with different guidelines and expectations.

Moving from the question of acceptability to that of plagiarism/collusion, fewer than half of the participants thought that it was plagiarism or collusion to discuss the detail of work in progress, to show completed work to a friend and ask for advice on improving it, to post work to an online forum and ask for feedback, or to change completed work before submission to incorporate a feature from another student’s work. However, while these were judged to be plagiarism/collusion by a minority of participants, in each case the minority was larger for the computing scenario than for the equivalent essay scenario; that is, each activity was more likely to be considered plagiarism/collusion in the computing context than in the essay context.

Throughout all of the scenarios, uncertainty levels were high among both students and academics, indicating inadequate knowledge about the concepts of academic integrity.

Some interesting differences emerged between the perceptions of students and academics about what practices are considered acceptable. Significantly more students than academics thought it acceptable to base an assessment largely on one written for a previous course without acknowledging this, to base an assessment on freely available code without acknowledgement, and to base an assessment on freely available code and to reference the source.

On that same question of using the work of others and referencing it, significantly more academics found the practice unacceptable for computing assessments than for essays, supporting the impression that even referenced copying is not the accepted standard in programming assessments. With the scenarios of asking other students for advice and discussing the detail of work in progress, academics were more likely to classify them as plagiarism/collusion for computing assessments than for essays.

Another difference between essays and computing assessments emerged from the scenario of asking another student to improve one’s work when it was complete but prior to submission. Students were more likely to consider this plagiarism/collusion for computing than for essays, yet they were also more likely to regard it as acceptable for computing than for essays.

Differences were also found between acceptability and classification as plagiarism/collusion. The reuse of one’s own work, sometimes called recycling or self-plagiarism, was considered acceptable by significantly more participants than the number who classed it as plagiarism/collusion. This could well reflect that while the practice does not involve using the work of others, which is integral to most definitions of plagiarism and collusion, it is nevertheless seen as wrong.

Other practices seen as acceptable despite being regarded as plagiarism/collusion involve discussing the detail of code while working on an assignment and asking another student for help to fix troublesome code. These findings might reflect the absolute necessity, referred to earlier, of getting one’s code debugged and running.


These findings are described in more detail in papers presented to the 19th ACM Conference on Innovation and Technology in Computer Science Education (Simon et al 2014c) and the Tenth International Computing Education Research Conference (Simon et al 2014d).


Chapter 4: Dissemination and Impact

Dissemination has been principally through fully peer-reviewed conference papers. We have presented to academic integrity conferences, which tend to focus on academic integrity of the written word; to a conference of design researchers, many of whom are academics; and to several computing education conferences attended by computing academics.

Discussion on the topic has been lively whenever findings from the project have been presented, and it appears that many of those in attendance have acquired new thoughts about the issues of academic integrity as they apply to non-textual assessment items. It is too early to look for more formal evidence of impact, such as citations of these papers. It is hoped that the work will ultimately help to inform policy changes at the institutional level, but that is a slow process, and there is no evidence that it is yet happening.

In this chapter we list the titles and abstracts of the papers that have been published to date. Further analysis of the data and further publications are anticipated.

Plagiarism and related issues in assessments not involving text

Simon, Beth Cook, Judy Sheard, Angela Carbone, Chris Johnson, Chris Lawrence, and Mario Minichiello

Sixth Asia-Pacific Conference on Educational Integrity, Sydney, Australia, September 2013 (full version in press)

Abstract. Many disciplines of study use assessment items that are not written in prose text. Using focus groups and a nationwide survey, we investigate the perceptions of academics and students in computing and the visual arts regarding breaches of academic integrity both in essays and in non-text-based assessment items. The nationwide survey drew responses from all 39 universities in Australia. We find strong evidence that there are differences between non-text-based and text-based assessments, and that the boundaries of acceptable and unacceptable practices are more difficult to define for the former. In the realm of non-text-based assessments, we find differences between perceptions regarding assessments in visual design and assessments in computing. We find differences between what is regarded as plagiarism or collusion and what is regarded as unacceptable academic practice. Overall, we conclude that there is a case for different approaches to academic integrity for non-text-based assessments.

Academic integrity: differences between computing assessments and essays

Simon, Beth Cook, Judy Sheard, Angela Carbone, Chris Johnson

13th International Conference on Computing Education Research – Koli Calling 2013, Koli, Finland, November 2013

Abstract. There appears to be a reasonably common understanding about plagiarism and collusion in essays and other assessment items written in prose text. However, most assessment items in computing are not based in prose. There are computer programs, databases, spreadsheets, and web designs, to name but a few. It is far from clear that the same sort of consensus about plagiarism and collusion applies when dealing with such assessment items; and indeed it is not clear that computing academics have the same core beliefs about originality of authorship as apply in the world of prose. We have conducted focus groups at three Australian universities to investigate what academics and students in computing think constitute breaches of academic integrity in non-text-based assessment items; how they regard such breaches; and how academics discourage such breaches, detect them, and deal with those that are found. We find a general belief that non-text-based computing assessments differ in this regard from text-based assessments, that the boundaries between acceptable and unacceptable practice are harder to define than they are for text assessments, and that there is a case for applying different standards to these two different types of assessment. We conclude by discussing what we can learn from these findings.

How well do academic integrity policies and procedures apply to non-text assessments?

Simon, Beth Cook, Judy Sheard, Angela Carbone, Chris Johnson, Chris Lawrence, Mario Minichiello

Sixth International Integrity and Plagiarism Conference, Gateshead, UK, June 2014

Abstract. Concerns regarding plagiarism and collusion in higher education have generated an extensive literature on the causes, composition and consequences of these practices, and the development and implementation of academic integrity policies. Although many disciplines use non-text-based assessments, such as computer programs, databases, spreadsheets, images and visual designs, the literature deals primarily with prose text.

This paper reports the results of a national Australian survey of academics and students who use non-text-based assessments, principally in computing or visual design. The survey investigated perceptions of the adequacy of current academic integrity policies, how academics and students ensure adherence to these policies, and how academics detect and deal with breaches of academic integrity.

The results reveal that a combination of education and assessment design methods are used to ensure compliance with academic integrity policies, that academics rely heavily on manual methods to detect breaches, and that they use both educational and punitive measures when breaches are detected. A consensus emerged that the current suite of academic integrity policies and associated educational efforts are inadequate for addressing issues related to assessments of these types. Participants perceived that existing policies and procedures regarding academic integrity are not always rigorously implemented for non-text-based assessments. A number of differences were evident between the views of academics and students, and between the computing and visual design streams. Future research could explore these differences more extensively and guide the development of discipline-specific guidelines, policies and procedures for non-text-based assessments.

Academic integrity: differences between design assessments and essays

Simon, Beth Cook, Mario Minichiello, Chris Lawrence

2014 Conference of the Design Research Society, Umeå, Sweden, June 2014

Abstract. Perceptions of plagiarism and collusion in essays have occupied much research in academic integrity. This project explores such perceptions in relation to both text-based assessments such as essays and non-text-based assessment such as visual designs. The principal research instrument was an Australia-wide survey of academics and students who use non-text-based assessments.

We find substantial differences between perceptions in the text and non-text environments. With design assessments, participants are less likely to think that basing work on that of another student, or using freely available material without referencing it, is plagiarism or collusion; but they are more likely to think that discussing tasks with others or asking others to improve their work is plagiarism/collusion. Some participants deemed particular practices acceptable despite identifying them as plagiarism/collusion, and some regarded practices as unacceptable despite not considering them to be plagiarism/collusion.

As well as substantial differences in perceptions of plagiarism/collusion between text and non-text assessments, we find greater uncertainty regarding plagiarism and collusion in design assessments. This suggests a need for clear definitions of plagiarism and collusion for design assessments, and for universities to incorporate these definitions into their academic integrity policies and to implement appropriate educational strategies for academics and students.

Student perceptions of the acceptability of various code-writing practices

Simon, Beth Cook, Judy Sheard, Angela Carbone, Chris Johnson

19th ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE ’14), Uppsala, Sweden, June 2014

Abstract. This paper reports on research that used focus groups and a national online survey of computing students at Australian universities to investigate perceptions of acceptable academic practices in writing program code for assessment. The results indicate that computing students lack a comprehensive understanding of what constitutes acceptable academic practice with regard to writing program code. They are not clear on the need to reference code taken from other sources, or on how to do so. Where code from other sources is used, or inappropriate collaboration takes place between students, there appears to be a feeling that any academic misconduct is diminished or even nullified if the students subsequently work with the code to make it their own. These findings suggest a need for the development of standards that elucidate acceptable practices for computing, combined with ongoing education of computing students.

Academic integrity perceptions regarding computing assessments and essays

Simon, Beth Cook, Judy Sheard, Angela Carbone, Chris Johnson

Tenth International Computing Education Research Conference (ICER 2014), Glasgow, Scotland, August 2014

Abstract. Student perceptions of academic integrity have been extensively researched in relation to text-based assessments, but there is rather less research relating to non-text-based assessments such as computer programs, databases, and spreadsheets. This paper reports the findings from a survey of computing students and academics to investigate perceptions of particular academic practices with regard to both essays and computing assessments. For each practice the research sought to discover whether it was perceived to constitute plagiarism or collusion and whether it was considered to be acceptable in an academic environment. While there was general agreement between academics and students regarding some practices, both groups displayed high levels of uncertainty about other practices. There was considerable variation between their attitudes to similar practices in the text and non-text environments, and between what was seen as plagiarism/collusion and perceptions of unacceptability. That is, there were practices that were perceived to be plagiarism or collusion but were considered acceptable, and others that were considered not to be plagiarism or collusion but were nevertheless thought unacceptable. These findings suggest a need for academic integrity policies and procedures specific to computing, accompanied by discipline-specific student education.

Academic integrity and professional integrity in computing education

Simon, Judy Sheard

20th ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE ’15), Vilnius, Lithuania, June 2015

Abstract. Certain practices, such as unauthorised collaboration with other students and unreferenced copying from external sources, are generally considered in the educational context to be breaches of academic integrity. This paper explores whether there are differences between the perceptions of the acceptability of these practices in the academic context and in the professional context.

From focus groups of computing academics and students, and an online survey, we find that there are indeed differences in perceptions: that many practices considered unacceptable in the academic context are considered significantly more acceptable in the professional context.

This raises questions concerning the roles of summative assessment and the possibilities of authentic assessment. The paper concludes that in programming education there is an unbreachable rift between the goal of authentic assessment, which necessarily entails collaborative work, and the need for summative assessment of individual effort, which typically requires work in isolation.

The findings of our research have implications for computing education programs, particularly in regard to preparation of students for the workforce.

In their own words: students and academics write about academic integrity

Simon, Judy Sheard

15th International Conference on Computing Education Research – Koli Calling 2015, Koli, Finland, November 2015

Abstract. We report on a survey of Australian computing students and academics that was designed to explore their thoughts about academic integrity with regard to the assessments undertaken in computing degrees. A number of questions on the survey permitted free-text responses, and we have conducted a qualitative analysis of those responses to identify concerns that were not covered in the quantitative part of the survey and to uncover new perspectives on issues that were covered in the survey.

In response to specific questions, we identified a perception that copying program code without reference can be legitimate; a perception that copying of program code cannot be detected because all correct answers will effectively be the same; and some suggestions, possibly hitherto unreported, as to why students might engage in academic misconduct.

In response to a completely open question, we identified four themes, concerning the implementation and the applicability of academic integrity policy and procedure, possible reasons for breaching academic integrity, and justifications for breaching academic integrity.

We conclude by discussing what can be done, by universities and by individual academics, to bring the academic discipline of computing closer to consensus with regard to the meaning of academic integrity and its practice in computing assessments.

Academic integrity and computing assessments

Simon, Judy Sheard

18th Australasian Computing Education Conference (ACE 2016), Canberra, Australia, February 2016

Abstract. A recent Australian project has investigated academics’ and students’ understandings of and attitudes to academic integrity in computing assessments. We explain the project and summarise some of its findings, which have been presented in a number of prior papers. In an extended discussion section we then raise a number of questions that we believe must be addressed by the computing education community if it is to be seen to take academic integrity seriously. We question the value and the validity of a number of current educational practices, and urge the community to work towards resolutions of the unanswered questions.

Chapter 5: Conclusions

Much of the literature of academic integrity is expressed in terms that make sense only in the context of assessment items such as essays and other items written in prose text. Indeed, it can be argued that the general understanding of academic integrity is based on the assumption that assessment takes the form of essays or similar pieces of prose text. The findings of this project suggest that this assumption is an oversimplification that leads to confusion among both students and academics.

It appears to be a defining principle of academic integrity that whenever one uses the words or ideas of others, one must include references to acknowledge the source of those words or ideas. This project has produced evidence that this apparent defining principle does not apply when one moves away from prose text as a vehicle for assessment.

In visual design and related disciplines it is fully understood that every design is based to varying degrees on other designs; some cases might constitute clear homage, while in others the designer might not even be consciously aware of the sources from which inspiration was drawn.

In computer programming, students and professionals alike are taught to reuse algorithms and code rather than recreating them. Further, it is a standard practice to work together with others, and particularly to seek the help of others in identifying and fixing errors in program code.

In neither of these areas is there a way of including references akin to the in-text references of the written word. References can be provided in accompanying notes, or in comments in a computer program, but these will not be seen by the viewer of the design or the user of the computer program, whereas in-text references are integral to the text of an essay and are necessarily seen by people reading the essay. Furthermore, in visual design and in computer programming there are no standard approaches for writing such references.

But the lack of referencing standards is not the issue, because in both visual design and computer programming it is accepted that the influence of others does not need to be referenced in the same way that it does in an essay. Notwithstanding the policies of Australia’s universities, academics in these areas tend to expect their students to behave in a way that is more in accord with relevant professional practice.

This is not to suggest that there is no such thing as academic integrity in visual design or in computing. In both areas, behaving with academic integrity means ensuring that one’s work is conducted in the manner expected by the discipline, and that the work is one’s own within the parameters understood to apply within that discipline. But those parameters are not the same as those that apply in disciplines that assess through the medium of prose text.

If a discipline that uses non-textual assessment items agrees that some form of referencing is appropriate when using the work of others, that discipline should explain how sources should be acknowledged, and should provide exemplars to illustrate the methods.
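
By way of a sketch only – no such standard yet exists, and the function, source, and dates shown here are entirely hypothetical – an exemplar for acknowledging borrowed program code might look something like this:

```python
# Attribution sketch: the source details below are invented for illustration.
#
# Adapted from: J. Bloggs, "Reversing word order in Python",
# forum post, accessed 12 March 2015.
# Modifications: renamed variables and added handling of empty input.
def reverse_words(sentence):
    """Return the words of a sentence in reverse order."""
    if not sentence:
        return ""
    return " ".join(reversed(sentence.split()))
```

The point of such an exemplar is not the particular format but that it records the source, the date of access, and the extent of the modifications – broadly the information that an in-text reference would carry in an essay.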

If academic integrity in the true sense of the words is to be encouraged at Australia’s universities, the policies and guidelines need to be reconsidered, with full recognition of the practices and understandings that apply in each discipline and for each form of assessment item. This will be a huge undertaking, but without it, universities will continue to expect their many and varied forms of assessment to comply with policies that are designed and framed for one subset of those assessment forms.

A number of questions need to be considered as part of the reframing of academic integrity.

Both the focus groups and the free-text comments on the survey suggested that there is wide-scale confusion between copyright and licensing on the one hand and academic integrity on the other. Material can be free of copyright and not require licensing, but this does not mean that students and academics are free to use it in a way that suggests that they produced it.

With program code there is evidence of a belief that understanding is a reasonable surrogate for writing: some students and some academics suggest that if the student understands the work, or can explain it, the student can be considered to have written it. We have seen no evidence that such a connection would ever be considered with prose text (“please explain this essay to me to convince me that you wrote it”) or with the visual arts. It is equally unlikely to apply in other creative areas, such as music composition. Perhaps it might be valid in an area such as mathematics, in which the medium of expression is so formalised that there are very few ways to express a given concept such as a proof. In such circumstances, more emphasis would be placed on correctness of expression than on originality of expression. It is possible that to a lesser extent this also pertains to program code, but this would need to be determined by the discipline.

In both of the disciplines studied by this project there is clear evidence of student confusion: are students permitted to reuse material without referencing, may they reuse it with referencing, or must they create every new assessment item from the very start? There does not appear to be a single answer to this question: students mention that different academics have different expectations. For this reason it is essential that academics with any expectation regarding academic integrity make that expectation absolutely clear to their students: it is not feasible to rely on whatever understanding students have gleaned from an inappropriate university-level policy or from a fellow academic with different expectations.

Perhaps because the current policies and guidelines seem inappropriate for many forms of assessment item, there are very few exemplars of good practice in academic integrity as it pertains to various non-textual items. Suitable exemplars might emerge during discussions within the disciplines, and should be widely circulated as educational resources for students and for academics.

Although it is mentioned in the policies of many universities, the question of self-plagiarism is a vexed one. There is much uncertainty about the practice, among both students and academics, with regard to both textual and non-textual assessment items. If a university decides that the practice is unacceptable, it must make this abundantly clear – ideally with a reason, as many people see no problem with reusing their own work when it is pertinent to a fresh task. Moreover, the term ‘self-plagiarism’ should be dropped, as it appears to be a contradiction in terms, plagiarism being typically defined as using the work of others. Perhaps the practice could instead be called ‘academic recycling’.

Appendix A: certification

Certification by Deputy Vice-Chancellor (or equivalent)

I certify that all parts of the final report for this OLT grant provide an accurate representation of the implementation, impact and findings of the project, and that the report is of publishable quality.

Name: Professor Andrew Parfitt        Date: 13 May 2015

Further Appendices

Appendix B: Bibliography

Appendix C: Interview questions

Appendix D: Focus group questions

Appendix E: Survey – version for design students

Appendix F: Survey – version for computing students

Appendix G: Survey – version for design academics

Appendix H: Survey – version for computing academics

Appendix I: Example computing guideline from Australian National University

Appendix J: Example computing guideline from The University of Newcastle

Appendix K: Example online resource from Monash University

Appendix L: Example design guideline from The University of Newcastle