Page 1 of 61

THE BAR STANDARDS BOARD
CENTRAL EXAMINATIONS BOARD
CHAIR’S REPORT OCTOBER 2017

Spring 2017




EXECUTIVE SUMMARY

The Central Examinations Board (‘CEB’) has now completed its sixth cycle of overseeing Spring assessments in the three knowledge areas of the Bar Professional Training Course (‘BPTC’). The confirmed post-intervention outcomes of the Spring 2017 centralised assessments, following review of cohort performance by the CEB, are as follows:

                       2017 Spring Sit  2016 Spring Sit  2015 Spring Sit  2014 Spring Sit  Change Spring 2016 to 2017

Professional Ethics
Number of Candidates   1,589            1,570            1,572            1,649            +19
Passing MCQ            N/A              97.4%            91.5%            81.0%            N/A
Passing SAQ            57.6%            70.8%            58.0%            65.6%            -13.2%
Passing Overall        57.6%            70.2%            56.7%            59.6%            -12.6%

Civil Litigation, Evidence and Sentencing
Number of Candidates   1,597            1,499            1,595            1,663            +98
Passing MCQ            60.2%            74.1%            71.3%            68.6%            -13.9%
Passing SAQ            N/A              68.4%            65.0%            67.8%            N/A
Passing Overall        60.2%            62.2%            58.0%            57.4%            -2.0%

Criminal Litigation, Evidence and Sentencing
Number of Candidates   1,502            1,421            1,483            1,586            +81
Passing MCQ            78.2%            85.9%            83.3%            84.1%            -7.7%
Passing SAQ            N/A              72.1%            64.2%            78.2%            N/A
Passing Overall        78.2%            70.3%            62.5%            72.8%            +7.9%

1. BACKGROUND AND CONTEXT

1.1 Why the Central Examinations Board was established

The 2010/11 academic year saw the first round of assessments under the BPTC regime (replacing the BVC) in the wake of the Wood Report (July 2008). For 2010/11, all Providers were required to assess candidates in Professional Ethics, Civil Litigation, Remedies & Evidence (‘Civil Litigation’), and Criminal Litigation, Evidence & Sentencing (‘Criminal Litigation’) (often referred to as the ‘knowledge areas’) by means of multiple choice questions (MCQs) and short answer questions (SAQs). Together these three subjects represent 25% of the BPTC (i.e. 30 credits out of 120). For 2010/11 the knowledge area assessments were set and marked by the Providers. Centralising these assessments was a key recommendation of the Wood Report, and the Central Examinations Board was established to oversee this change on behalf of the Bar Standards Board (‘BSB’). 2011/12 was the first year of operation for the system of centralised examinations for the knowledge areas on the BPTC. No changes were made to the format of assessment, but the setting of the assessments was undertaken independently of the Providers by a team of CEB examiners appointed by the BSB.

1.2 The 2011/12 to 2015/16 assessment formats

From the 2011/12 academic year, up to and including the 2015/16 academic year, candidates in each of the three centrally assessed subjects were required to attempt an MCQ test and an SAQ test. The Civil and Criminal Litigation assessments comprised 40 MCQs and five SAQs, with candidates having three hours to complete each assessment. In Professional Ethics candidates were required to attempt 20 MCQs and three SAQs in two hours. All questions were compulsory and the pass mark in each paper was fixed at 60%. All MCQ papers were marked electronically using Speedwell scanning technology. All SAQ papers were marked by teaching staff at the relevant Provider institution, with marks being remitted to the CEB for processing. The marks for the MCQ and SAQ papers were aggregated to provide each candidate with a combined mark for each assessment. Candidates were required to achieve the pass mark of 60% in both elements of each assessment; there was no scope for aggregating marks below 60% between MCQ and SAQ scores to achieve the minimum 60% pass mark overall.

1.3 The assessment formats from Spring 2017 onwards

For Spring 2017, acting on the recommendations of the BSB’s Education and Training Committee, the CEB introduced significant changes to the format and marking processes for the centralised assessments on the BPTC. Both the Civil Litigation and Criminal Litigation assessments were modified to become three-hour papers comprising 75 MCQ and Single Best Answer (SBA) questions. This change meant that the answers for the entire paper in each subject could be marked electronically using Speedwell scanning technology. The assessment in Professional Ethics became a two-hour paper comprising six SAQs, the marking being undertaken by a team of independent markers appointed by the BSB.

1.3.1 This was also the first year in which Bar Transfer Test (BTT) candidates had to take centralised assessments in the three knowledge areas, rather than those delivered by the institution providing BTT training, BPP University. For the Spring 2017 sitting, BTT candidates thus sat the same Civil Litigation and Criminal Litigation papers as the BPTC cohort on the same dates, and (for logistical reasons relating to the Spring 2017 assessment) a separate Professional Ethics paper (see further section 6 for BTT results).


1.4 Table of Provider centres and active dates

Organisation                                Centre      11/12  12/13  13/14  14/15  15/16  16/17
BPP University                              London      Yes    Yes    Yes    Yes    Yes    Yes
BPP University                              Leeds       Yes    Yes    Yes    Yes    Yes    Yes
BPP University                              Manchester  No     No     Yes    Yes    Yes    Yes
BPP University                              Birmingham  No     No     No     No     Yes    Yes
Cardiff University                          Cardiff     Yes    Yes    Yes    Yes    Yes    Yes
City University                             London      Yes    Yes    Yes    Yes    Yes    Yes
University of Law (‘ULaw’)                  Birmingham  Yes    Yes    Yes    Yes    Yes    Yes
University of Law (‘ULaw’)                  London      Yes    Yes    Yes    Yes    Yes    Yes
University of Law (‘ULaw’)                  Leeds       No     No     No     No     No     Yes
University of the West of England (‘UWE’)   Bristol     Yes    Yes    Yes    Yes    Yes    Yes
University of Northumbria (‘UNN’)           Newcastle   Yes    Yes    Yes    Yes    Yes    Yes
Manchester Metropolitan University (‘MMU’)  Manchester  Yes    Yes    Yes    Yes    Yes    Yes
Nottingham Trent University (‘NTU’)         Nottingham  Yes    Yes    Yes    Yes    Yes    Yes
Kaplan Law School                           London      Yes    Yes    Yes    Referrals only  No  No

1.4.1 As indicated above, BPP started to deliver the BPTC in Manchester (in Spring 2014) and Birmingham (in Spring 2016). The University of Law Leeds branch had examination candidates for the first time in Spring 2017. Kaplan Law School recruited its last intake in the 2013/14 academic year (although it had a very small number of referred and deferred candidates in the Spring 2015 cohort and a handful of candidates finishing in 2015/16).

1.5 Terms used in this report

• “All-Provider” refers to the aggregated data bringing together cohort performance across all Provider centres

• “By Provider” refers to data comparing the performance of each of the Providers relative to each other

• “Spring sit” refers to the March/April/May exam cycle. Note that some candidates undertaking these examinations may be doing so on a referred or deferred basis


• “Summer sit” refers to the August exam cycle. Some candidates undertaking these examinations may be doing so on a deferred basis (i.e. as if for the first time)

• “Combined” refers to the pre-Spring 2017 assessment format where the result for a centrally assessed knowledge area was arrived at by aggregating a candidate’s MCQ and SAQ scores.

2. THE ASSESSMENT PROCESS FOR SPRING 2017

The assessment process is overseen by the CEB, whose members are appointed by the BSB. The CEB comprises a Chair, teams of examiners (a Chief Examiner and a number of Assistant Examiners for each subject), an independent observer, an independent psychometrician and senior staff from the BSB. The Chair and the examiners between them contribute a mix of both academic and practitioner experience.

2.1 How examination papers are devised and approved

2.1.1 The bank of material used for compiling the centralised assessments is derived from a number of sources, including questions submitted by Provider institutions at the request of the BSB, questions devised by specialist question writers commissioned by the BSB (some of whom are based at Provider institutions), and questions devised by members of the central examining teams.

2.1.2 Draft assessment papers are compiled by the relevant CEB examiner teams, under the guidance of the Chief Examiner for each centrally assessed knowledge area. A series of meetings is held, attended by the relevant examiner team, the Chair of the CEB, and key BSB support staff. These meetings consider the suitability of each question and the proposed answer, with particular emphasis on balance of coverage, syllabus coverage, currency of material, clarity and coherence of material, and level of challenge. In addition, the draft Litigation papers are reviewed by the BSB’s syllabus team to ensure that all questions comply with the current curriculum. Any recommendations made during this process by the BSB’s syllabus team are passed on to the Chief Examiner, who will agree any changes to be made to the draft paper. The draft paper is then stress tested under the equivalent of exam conditions, and the outcomes used to inform further review by the relevant Chief Examiner. For Professional Ethics, a Technical Reader checks the draft exam paper to assess whether the examination questions are, in legal terms, technically correct and the language sufficiently clear. The outcome of this process is fed back to the Chief Examiner, who makes the final decision on whether to alter any of the questions as a result. Finally, a proof reader checks each exam paper for compliance with house style, grammatical accuracy, typographical errors, and ease of reading.


2.2 Standard setting: Civil Litigation & Evidence, and Criminal Litigation, Evidence & Sentencing

2.2.1 Before candidates attempt the examinations for Civil Litigation and Criminal Litigation, the papers are subjected to a standard setting process to determine a pass mark which will be recommended to the Final Examination Board. The method used for these two subjects is known as the Angoff Method, and it helps ensure that the standard required to achieve a pass mark is consistent from one sitting of the assessment to the next. Using standard setting, the number of MCQs a candidate needs to answer correctly in order to pass the assessment may go up or down from one sitting to the next, depending on the level of challenge presented by the exam paper as determined by the standard setters. For a more detailed explanation of this process see: https://www.barstandardsBoard.org.uk/media/1854147/20171005_standard_setting_centralised_assessments.pdf
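The Angoff calculation itself can be illustrated with a short sketch. The ratings below are entirely hypothetical (the BSB's actual panel data are not reproduced in this report): each standard setter estimates, for each question, the probability that a borderline (just-passing) candidate would answer it correctly, and the recommended pass mark is the average across setters of the sum of those estimates.

```python
# Illustrative Angoff calculation (hypothetical judge ratings, not BSB data).
# Each judge estimates, per MCQ, the probability that a borderline candidate
# answers correctly; the recommended pass mark is the mean of per-judge totals.

ratings = {
    "judge_1": [0.8, 0.6, 0.9, 0.5, 0.7],
    "judge_2": [0.7, 0.7, 0.8, 0.4, 0.6],
    "judge_3": [0.9, 0.5, 0.9, 0.6, 0.7],
}

per_judge_totals = [sum(r) for r in ratings.values()]
pass_mark = sum(per_judge_totals) / len(per_judge_totals)
print(f"Recommended pass mark: {pass_mark:.1f} out of 5")
```

A real exercise would cover all 75 questions and a larger panel; the principle, however, is the same: the pass mark tracks the judged difficulty of the particular paper.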

2.2.2 Standard setting for the Professional Ethics paper takes place after the examination in that subject, as explained below at 2.5.

2.3 How the exams are conducted

2.3.1 Candidates at each of the Provider institutions attempted the assessments in each of the knowledge areas on the same dates, as follows:

BPTC Professional Ethics: Friday 24 March 2017 at 2pm
BPTC and BTT Civil Litigation: Tuesday 2 May 2017 at 2pm
BPTC and BTT Criminal Litigation: Thursday 4 May 2017 at 2pm
BTT Professional Ethics: Friday 21 April 2017 at 10am

2.3.2 In any case where a Provider identifies candidates as having special assessment arrangements necessitating a start time earlier than that of the main cohort, the relevant candidates are not allowed to leave their assessment area until the commencement of the main cohort assessment. Secure delivery and collection arrangements are put in place for all examination materials.

2.3.3 In exceptional circumstances candidates can be allowed to attempt the assessments at locations overseas. The onus is placed on the candidates’ Provider institution to ensure that a secure assessment centre is available, and the BSB normally requires the start time of the examination at the overseas centre to be the same as the UK start time (an earlier/later start time may be permitted as long as there is an overlap and candidates are quarantined). To ensure the complete security of the examination papers the BSB dispatches all examinations to the overseas contacts directly. See: https://www.barstandardsBoard.org.uk/qualifying-as-a-barrister/current-requirements/bar-professional-training-course/bptc-centralised-assessments/before-the-exam/

2.3.4 Provider institutions are given guidance on examination arrangements by the BSB. Exam invigilation reports are submitted by Providers, detailing any major issues they believe may have had a bearing on the conduct of the examination itself at their assessment centres (for example, transport difficulties, bomb hoaxes, fire alarms, building noise), and these reports will be considered at the CEB Subject and Final Exam Boards.

2.3.5 Each Provider oversees its own "fit to sit" policy. Some Providers require candidates to complete a "fit to sit" form at the time of an exam. Other Providers will complete this process at enrolment, candidates confirming that if they are present at the time of the exam, they are fit to sit the exam.

2.4 Marking

2.4.1 Candidates attempting the MCQ papers in Civil Litigation and Criminal Litigation record their answers on machine-readable answer sheets. Provider institutions return the original answer sheets to the BSB for machine marking. The MCQ answer sheet scanning is undertaken by specially trained BSB support staff, using Speedwell scanners and software. The scanner removes the risk of wrongly capturing marks which may occur with human input. This process enables accurate production of data statistics and results analysis.

2.4.2 For Professional Ethics, candidates write their answers to the SAQs in the answer booklets supplied by the BSB. These are scanned and uploaded by the Provider institutions, each candidate having a unique candidate number. Marking is undertaken by a large team of assessors appointed by the BSB (suitably qualified and trained individuals including academics working at Provider institutions, barristers and solicitors). To preserve candidate anonymity scripts are checked by CEB staff, prior to marking, to remove any identification of the candidate’s Provider institution. The candidate numbers uploaded are checked against the attendance lists sent by Providers to ensure all candidate scripts are accounted for.

2.4.3 The Chief Examiner for Professional Ethics reviews a random selection of scripts (approximately 10%) in order to select ten that he feels are likely to generate a lot of discussion about the marks that should be awarded (e.g. scripts which are illegible, contain mutually exclusive answers, or give a correct answer that was not on the indicative mark scheme). These ten scripts are then copied and sent to all members of the central marking team along with a draft marking scheme. The initial marks awarded by all markers are compiled into a spreadsheet to indicate where markers see consensus and where markers differ greatly. All markers then attend a markers’ meeting where, following consultation, the Chief Examiner proposes a range of refinements to the marking scheme.

2.4.4 The independent psychometrician compares the marks awarded by markers in this preliminary exercise with the marks awarded to the same scripts by the Chief Examiner. Markers are then allocated questions to mark on the basis of how closely aligned their marks are to those of the Chief Examiner. Markers are then divided into six teams, each team consisting of BPTC Provider staff and practitioners. Care is taken to ensure Provider-based markers are not marking their own candidates’ scripts. This arrangement means that one member in each of the six marking teams only marks SAQ1, another only marks SAQ2 and so on. The advantage of this approach is that a candidate’s script is marked by six different examiners, thus helping to even out the impact of markers who are “hawks” (harder markers) and “doves” (more generous markers). It also removes the ‘halo effect’ whereby a good (or poor) answer to a particular SAQ influences the marks awarded to other answers. As a further safeguard, markers were asked to supply their marks in tranches to the Chief Examiner during the marking process so that further checks could be conducted on a risk-based approach before the completion of the marking process.
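The alignment comparison described at 2.4.4 can be sketched as follows. The report does not state which statistic the psychometrician uses; mean absolute deviation from the Chief Examiner's marks is one plausible choice, and the marks below are hypothetical.

```python
# Hypothetical marks for ten preliminary scripts (one mark per script).
# "chief" is the Chief Examiner's benchmark; each marker is compared to it.
chief = [7, 5, 8, 6, 4, 9, 3, 7, 6, 5]
markers = {
    "marker_A": [7, 6, 8, 5, 4, 9, 4, 7, 6, 5],
    "marker_B": [9, 8, 9, 8, 6, 10, 5, 9, 8, 7],
}

def mean_abs_deviation(marks, reference):
    """Average absolute gap between a marker and the reference marks."""
    return sum(abs(m - r) for m, r in zip(marks, reference)) / len(reference)

for name, marks in markers.items():
    print(name, mean_abs_deviation(marks, chief))
```

In this sketch marker_A tracks the Chief Examiner closely while marker_B is consistently more generous (a "dove"); such a measure could inform which questions each marker is allocated.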

2.4.5 Markers are supplied with an Excel spreadsheet onto which they enter the relevant data. The Chief Examiner provides markers with a finalised marking scheme, and they are encouraged to raise queries with the Professional Ethics examiner team as their marking progresses. Markers record their marks on the spreadsheet and these are returned to the CEB for processing and further clerical checks for duplicate entries and missing marks. The spreadsheet permits analysis of the way in which all markers approached a particular question, and allows comparison of marker group performance and individual marker performance.

2.4.6 Markers are instructed that they may award a candidate a mark of 0 for a part of an answer if what the candidate has written is incoherent prose (bullet-point answers are acceptable). Similarly, where the salient points can only be identified by the marker making an extensive search for points throughout unconnected parts of the examination script, they are instructed that they may award a mark of 0 rather than joining together unconnected points from across the candidate’s script. Any decision by a marker that a script falls below these thresholds is subject to review and moderation to ensure fairness and consistency in the application of these threshold requirements. A similar approach is taken where an answer is deemed illegible.

2.4.7 For all three centrally assessed knowledge areas, once the marking is completed, statistical data is generated (based on candidates' marks) and presented at a series of examination Boards.

2.5 Standard setting for the Professional Ethics assessment

In Professional Ethics, standard setting uses the Contrasting Groups method. Candidate scripts are marked (as explained at 2.4.2 to 2.4.6 above) and a group of standard setters (who are not aware of the marks awarded) review a sample of scripts in order to allocate them to one of three groupings: “pass”, “fail” or “borderline”. Once this process is complete, the data is analysed to identify the correlation between the marks awarded and “borderline” performance, and in turn the recommended passing standard for the assessment. A more detailed explanation of this process can be found at: https://www.barstandardsBoard.org.uk/media/1854147/20171005_standard_setting_centralised_assessments.pdf
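One common way to derive a cut score from contrasting-groups data is to take the median mark of the scripts judged "borderline". The report does not specify the exact statistic the CEB uses, and the sample below is hypothetical; this is purely an illustration of the mechanics.

```python
import statistics

# Hypothetical sample: (mark awarded out of 60, standard setter's blind judgement).
sample = [
    (18, "fail"), (21, "fail"), (23, "borderline"), (24, "borderline"),
    (25, "borderline"), (26, "pass"), (30, "pass"), (35, "pass"),
]

# One illustrative choice of statistic: the median mark of the borderline group.
borderline_marks = [mark for mark, group in sample if group == "borderline"]
cut_score = statistics.median(borderline_marks)
print(f"Recommended pass standard: {cut_score}/60")
```

The key feature of the method is that the judgements are made blind to the marks, so the cut score reflects where qualified judges place the pass/fail boundary on the actual scripts.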


2.6 Examination Boards

2.6.1 The CEB operates a two-tier examination Board process. A first-tier Subject Board is convened for each of the knowledge areas, attended by all members of the examining team, the independent psychometrician and the independent observer. The recommendations from each of these first-tier Boards are then fed into an over-arching Final Examination Board, where the recommendations are considered and a final decision on cohort performance in each of the centralised assessment knowledge areas is arrived at.

2.6.2 Each Subject Board reviews the raw data on cohort performance in relation to the assessment as a whole and each component question (or part-question) comprising the assessment. The Subject Board is advised by the independent psychometrician in respect of the outcome of the standard setting process, and the impact of the recommended pass standard is reflected in the passing rate data presented to the Subject Board. The Subject Board has the option to review whether (exceptionally) there are any reasons not to adopt the recommended pass standard and agree on a different pass standard. The key data presented to the Subject Board will then reflect the pass standard appropriate for the assessment, and will also include:

• overall pass rates and Provider pass rates for the current and previous two cycles of assessment;

• data showing the pass rate for each MCQ (for Civil and Criminal Litigation) and each component of each Ethics SAQ, achieved at each of the Providers, cross-referenced to the representations made in the assessment pro-formas returned by the Providers – thus flagging up any correlation of Provider criticisms and concerns with systemic poor performance by candidates;

• ‘Manhattan’ diagrams (pentile histograms), which rank candidates into 20% bands based on their performance in an exam. For each exam question, the first bar of the Manhattan diagram shows the top 20% of candidates and the proportion who answered the question correctly. A decrease in correct answers going down through the bands indicates good discrimination between strong and weak candidates;

• the Chief Examiner’s commentary on the assessment process;

• invigilator reports detailing evidence of issues that may have impacted on the conduct of the examination itself at any Provider centre.
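The pentile calculation behind a Manhattan diagram can be sketched as follows. The scores are hypothetical and the CEB's own data pipeline is not described in detail in the report; the sketch simply shows the ranking-into-bands step for a single question.

```python
# Sketch of the pentile calculation behind a 'Manhattan' diagram.
# Candidates are ranked by overall exam score and split into five 20% bands;
# for each question we compute the proportion in each band answering correctly.

candidates = [  # (overall exam score, answered this question correctly?)
    (70, True), (68, True), (65, True), (60, True), (58, False),
    (55, True), (52, False), (50, True), (45, False), (40, False),
]
candidates.sort(key=lambda c: c[0], reverse=True)  # strongest candidates first

band_size = len(candidates) // 5
bands = [candidates[i * band_size:(i + 1) * band_size] for i in range(5)]
proportions = [sum(correct for _, correct in band) / len(band) for band in bands]
print(proportions)
```

A roughly decreasing sequence of proportions (top band highest, bottom band lowest) is the pattern the Boards look for: it indicates the question discriminates between strong and weak candidates.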

2.6.3 On the basis of the above evidence, and as advised by the independent psychometrician, the Subject Boards have the discretion to intervene where there is evidence that a particular element of an assessment has not operated effectively. Options typically include:

• crediting more than one answer to an MCQ as correct;

• disregarding an MCQ or part of an SAQ entirely if deemed defective or inappropriate (e.g. no correct answer) – no candidate is credited and the maximum score is recalculated;

• crediting all candidates with the correct answer if an MCQ or part of an SAQ is deemed defective or inappropriate;

• scaling overall marks for an assessment, or for a particular sub-cohort due to local assessment issues (provided the sub-cohort constitutes a statistically reliable sample for scaling purposes).


2.6.4 In confirming marks for cohorts of candidates the CEB is concerned to ensure that a consistent measure of achievement has been applied across all Providers, and that proper account has been taken of any relevant factors that may have had a bearing on the performance of a cohort of candidates. As a result, the CEB has the discretion to scale cohort marks (upwards or downwards) if it feels there are issues relating to all candidates, or a statistically relevant sub-cohort of candidates, that justify such intervention. The CEB will not use this discretion to intervene in respect of issues arising from the delivery of the course by a Provider, or matters related to the conduct of the assessment that can be dealt with through a Provider’s extenuation processes.

2.6.5 The Final Examination Board considers the recommendations of the Subject Boards in respect of the Provider cohort performances in the three knowledge areas. The meeting is attended by the CEB Chair, the relevant Chief Examiners, key BSB staff, an independent psychometrician and independent observer. The function of the Final Examination Board is to test the recommendations of the Subject Boards, and to confirm the MCQ/SAQ cohort marks subject to any outstanding quality assurance issues. Once cohort marks are confirmed by the CEB they cannot subsequently be altered by Provider institutions. The process for challenging marks confirmed by the CEB is outlined here.

2.7 Reporting results to Providers

2.7.1 Once the CEB has confirmed the centralised assessment marks for each cohort of candidates at each Provider, the marks are distributed to the Providers, where they feed into the individual BPTC or BTT candidate profiles considered at the Provider award and progression examination Boards. The actual scores achieved by candidates need to be aligned with the 60% passing mark in order to best fit with the Providers’ systems. Hence, if the passing standard for Criminal Litigation is 43/75 (in effect 57%), a candidate achieving 43/75 will be reported as having a score of 60% (the pass mark). All other candidate scores will be translated accordingly, depending on the pass standard adopted.
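The translation step above can be sketched in a few lines. The report states only that other scores are "translated accordingly"; a piecewise-linear mapping that sends the pass standard to 60% and full marks to 100% (and zero to zero) is consistent with the worked example, but the exact function is an assumption.

```python
def scale_to_reported(raw, pass_raw, max_raw):
    """Map a raw mark so the pass standard reports as 60% and full marks as 100%.

    The piecewise-linear form is an assumption: the report only gives the
    anchor points (pass standard -> 60%, maximum -> 100%).
    """
    if raw < pass_raw:
        return 60.0 * raw / pass_raw
    return 60.0 + 40.0 * (raw - pass_raw) / (max_raw - pass_raw)

# The report's example: a Criminal Litigation pass standard of 43/75.
print(scale_to_reported(43, 43, 75))  # 60.0 -- reported as the pass mark
print(scale_to_reported(75, 43, 75))  # 100.0
```

Under this mapping a candidate exactly at the standard-set pass mark always reports 60%, whatever the raw pass standard was for that sitting, which is the alignment the Providers' systems require.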

2.7.2 It is at the BPTC Provider examination Boards that issues relating to individual candidates, such as extenuating circumstances or academic misconduct, are considered.

2.8 Grade boundary allocations

2.8.1 In addition to receiving a % score for each of the centrally assessed subjects, BPTC candidates are also allocated to one of four grade groups (Outstanding, Very Competent, Competent and Not Competent) depending on their performance in each assessment. The CEB does not exercise any discretion in respect of these gradings – they are a product of the score achieved by the candidate. Prior to the introduction of standard setting to determine the pass standard for centralised assessments, the 60% to 100% range used for the awarding of passing grades was apportioned as follows:

• 10 percentage points (60–69%) for “Competent” (i.e. 25% of the available range from 60% to 100%);

• 15 percentage points (70–84%) for “Very Competent” (i.e. 37.5% of the available range from 60% to 100%); and

• 15 percentage points (85–100%) for “Outstanding” (i.e. 37.5% of the available range from 60% to 100%).

This was effectively a 2:3:3 allocation ratio across the three passing grades.

2.8.2 At its June 2017 meeting, the CEB Final Examination Board reviewed the options in respect of the approach to be adopted to the allocation of grade boundaries in the light of the introduction of standard setting (where the passing standard can vary from one assessment to the next). Two options were considered: the “2:3:3” ratio methodology and a norm-referencing approach. Norm-referencing takes data from previous cycles as an indication of what a typical cohort performance might be expected to look like.

2.8.3 On the basis of the four Spring assessment cycles from 2012/13 to 2015/16, the averages for each of the centrally assessed subjects would be:

Professional Ethics   Outstanding  Very Competent  Competent  Not Competent
2012/13               20.2         54.5            11.6       13.7
2013/14               8.2          34.9            18.6       40.3
2014/15               8.8          35.4            12.5       43.3
2015/16               16.3         47.0            6.9        29.8
Average (4 cycles)    13.1         43.0            12.2       31.8

Criminal Litigation   Outstanding  Very Competent  Competent  Not Competent
2012/13               14.0         42.8            11.3       31.8
2013/14               16.8         39.2            16.8       28.2
2014/15               18.5         33.6            11.5       38.5
2015/16               20.7         36.1            13.3       29.7
Average (4 cycles)    18.3         38.9            13.2       31.6

Civil Litigation      Outstanding  Very Competent  Competent  Not Competent
2012/13               8.4          31.8            18.0       43.8
2013/14               8.6          32.8            18.6       42.6
2014/15               13.0         31.6            13.4       42.0
2015/16               16.1         31.3            14.8       38.8
Average (4 cycles)    11.0         31.9            15.7       41.6


2.8.4 Taking Professional Ethics as the example, on average over the last four years 13% of candidates have achieved “Outstanding”, 43% “Very Competent” and 12% “Competent”. The remainder have been “Not Competent”. Taking those that have passed as a group, the ratio of the three passing grades is roughly 19:63:18. Using the same methodology, the ratios would be approximately 26:55:19 for Criminal Litigation and approximately 19:54:27 for Civil Litigation.

2.8.5 Applying the “2:3:3” ratio methodology, if the standard setting process produced pass standards of 45/75 (60%) for both the Civil and Criminal Litigation papers, the grade boundary points would be as follows (applying the 25%, 37.5% and 37.5% proportions above):

Mark thresholds   Raw  Scaled  Scale factor
Competent         45   60      1.33
Very Competent    53   70      1.32
Outstanding       64   85      1.33
Max mark          75   100     1.33

2.8.6 Similarly, for Professional Ethics (where a score of 36/60 would be 60%) the grade boundary points would be:

Mark thresholds   Raw  Scaled  Scale factor
Competent         36   60      1.67
Very Competent    42   70      1.67
Outstanding       51   85      1.67
Max mark          60   100     1.67

2.8.7 Where, however, the standard setting process recommends a pass standard that deviates from 45/75 or 36/60, the grade boundaries need to be recalibrated to maintain the 2:3:3 ratio (as explained at 2.8.1 above). For example, if the Civil Litigation pass standard was determined to be 50/75 (reflecting a view by the standard setters that the paper was less challenging), the grade boundaries (using the methodology outlined above) would be as follows:

Mark thresholds   Raw  Scaled  Scale factor
Competent         50   60      1.20
Very Competent    56   70      1.24
Outstanding       66   85      1.30
Max mark          75   100     1.33

Hence, with a pass standard of 50/75, a candidate would have to correctly answer at least 66/75 MCQs to be classified as “Outstanding”, instead of 64/75 if the pass standard had been 45/75.


2.8.8 Similarly if, for example, in Professional Ethics the standard setting process produced a pass standard of 24/60, the grade boundaries (using the methodology outlined above) would be as follows:

Mark thresholds   Raw  Scaled  Scale factor
Competent         24   60      2.50
Very Competent    33   70      2.12
Outstanding       47   85      1.83
Max mark          60   100     1.67

Hence, a candidate would only have to achieve 47/60 to be classified as “Outstanding”, instead of 51/60 if the pass standard had been 36/60.
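The recalibration in these worked examples can be expressed as a short calculation: the Competent threshold sits at the pass standard, with Very Competent 25% and Outstanding 62.5% of the way from the pass standard to full marks (i.e. scaled marks of 60%, 70% and 85%). Rounding to the nearest whole mark (halves rounding up) reproduces the report's tables, but that rounding convention is inferred from the figures rather than stated explicitly.

```python
import math

def grade_boundaries(pass_raw, max_raw):
    """Raw-mark thresholds preserving the 2:3:3 split of the 60-100% pass range.

    The half-up rounding is an inference from the report's worked tables,
    not an explicitly stated rule.
    """
    span = max_raw - pass_raw
    fractions = [("Competent", 0.0), ("Very Competent", 0.25), ("Outstanding", 0.625)]
    # floor(x + 0.5) rounds halves up; Python's round() would round 52.5 down to 52.
    return {grade: math.floor(pass_raw + f * span + 0.5) for grade, f in fractions}

print(grade_boundaries(45, 75))  # {'Competent': 45, 'Very Competent': 53, 'Outstanding': 64}
print(grade_boundaries(50, 75))  # {'Competent': 50, 'Very Competent': 56, 'Outstanding': 66}
print(grade_boundaries(24, 60))  # {'Competent': 24, 'Very Competent': 33, 'Outstanding': 47}
```

These three calls reproduce the thresholds in the tables at 2.8.5, 2.8.7 and 2.8.8 respectively, and `grade_boundaries(36, 60)` reproduces the Professional Ethics table at 2.8.6.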

2.8.9 The Final Examination Board was unanimous in its view that the “2:3:3” ratio methodology was to be preferred as the more objective approach to allocating candidates to the grade boundary framework, on the basis that it was neither transparent nor best practice to adopt a quota-based approach to grade boundaries, and such an approach was not reflected in any other aspect of the CEB’s work. The CEB has always taken the view that the percentage of candidates falling within any particular grade boundary is a product of the examination process, and not something that is in any way engineered by the CEB as a desirable or acceptable outcome.

2.8.10 Note that where a candidate’s standard-setting-adjusted % score falls between two whole numbers, a rounding-up methodology is applied: a candidate with a post-standard-setting score of 69.5% is treated, for the purposes of grade boundary allocation, as scoring 70% and is therefore reported as “Very Competent”.
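The rounding rule, together with the scaled boundaries of 60/70/85 used throughout this report, can be expressed as a short sketch (illustrative only):

```python
import math


def allocate_grade(scaled_percent):
    """Allocate a grade after rounding the scaled score half-up,
    so that 69.5 is treated as 70 for grade boundary purposes."""
    score = math.floor(scaled_percent + 0.5)  # half-up, not bankers' rounding
    if score >= 85:
        return "Outstanding"
    if score >= 70:
        return "Very Competent"
    if score >= 60:
        return "Competent"
    return "Not Competent"


print(allocate_grade(69.5))  # Very Competent
print(allocate_grade(59.4))  # Not Competent
```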


3. SPRING 2017 RESULTS IN PROFESSIONAL ETHICS

3.1 Professional Ethics pre-intervention pass rates – all Providers, Spring 2014 to Spring 2017

Professional Ethics, all Providers, pre-intervention:

                        Spring 2017   Spring 2016   Spring 2015   Spring 2014
SAQ comparison             57.6%         70.8%         58.0%         65.6%
Combined comparison         N/A          70.2%         56.7%         59.6%

The Spring 2017 passing rate is not inconsistent with the passing rates achieved by previous cohorts, both in terms of their combined assessment performance (MCQ and SAQ) and their SAQ-only performance in previous cycles (particularly 2014/15 and 2013/14). The Spring 2017 passing rate is a product of the Final Board endorsing a recommended passing standard for the Professional Ethics SAQ assessment of 24/60 (see 2.5 above for further explanation of standard setting).

3.1.1 Prior to the Spring 2017 examination, the Professional Ethics assessment comprised twenty MCQs and three SAQs. Both the MCQ and the SAQ elements had a fixed pass mark of 60%, and candidates had to achieve at least 60% in both elements to pass the assessment as a whole. The Professional Ethics assessment now comprises one paper containing six SAQs. It is not possible, therefore, to provide a ‘like-for-like’ comparison with passing rates across previous cycles of assessment. The table above does, however, provide two points of comparison with previous cycles: (i) the passing rates across the Spring assessments for the SAQ-only element of the Spring 2014 to Spring 2016 assessments; and (ii) the passing rates resulting from the SAQ and MCQ assessments being combined. Passing rates for the SAQ element of the assessment have consistently been lower than for the MCQ element, as the following table demonstrates:

Ethics Spring sit   Final MCQ passing rate   Final SAQ passing rate   Variance
2011/12                     92.7%                    88.5%             -4.2%
2012/13                     94.3%                    89.5%             -4.8%
2013/14                     81.0%                    65.6%            -15.4%
2014/15                     91.5%                    58.0%            -33.5%
2015/16                     98.4%                    70.8%            -26.6%
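One reason the combined passing rates tracked the weaker SAQ rates so closely is that the pre-2017 pass rule was conjunctive. A minimal sketch of that rule (illustrative only, not CEB code):

```python
def passed_old_format(mcq_percent, saq_percent):
    """Pre-Spring-2017 Professional Ethics rule: a fixed 60% pass mark
    applied to each element, and both elements had to be passed."""
    return mcq_percent >= 60 and saq_percent >= 60


# A strong MCQ score could not compensate for a failed SAQ element:
print(passed_old_format(92, 58))  # False
print(passed_old_format(61, 60))  # True
```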

3.1.2 The marking of Professional Ethics SAQs was centralised for the Spring 2017 examination to ensure greater objectivity in the process, and to address concerns that Provider-based markers may have tended to be more generous in marking candidates close to the pass mark. The distribution of marks for the Spring 2015 assessment in Professional Ethics (where 18/30 represented the pass mark) provides a typical example of the effect of this on the distribution curve for Provider-marked SAQs.

3.1.3 By contrast, the data for the Spring 2017 Professional Ethics assessment shows a more standard distribution curve around the pass standard (24/60 following standard setting).

[Figure: Distribution of total score on Ethics SAQ, Spring 2015 – number of students by total SAQ score (0–30)]

[Figure: Distribution of total score on Ethics SAQ, Spring 2017 – number of students by total SAQ score (0–46)]

As the independent psychometrician notes in relation to the Spring 2017 Professional Ethics SAQ marks distribution: “On the SAQ as a whole, the distribution of the total score is approximately normal, and this is a more normal finding than the previous distributions, which typically had a ‘spike’ caused by a high number of candidates scoring exactly the passing mark with a corresponding ‘trough’ of candidates one mark below the pass mark. It is possible to conclude from this that the [Provider-based] markers had made a particular effort to find an extra mark for those candidates who would otherwise have failed by a narrow margin. This cannot occur with a more objective marking system in which each question is assessed by a different marker. This in itself is likely to account for a lower overall pass rate.”

3.2 Details of Final Examination Board discussions

SAQ 1(a): No intervention. The Psychometrician stated that discrimination on SAQ 1(a) was fairly good, and that there was a negative correlation for the weaker candidates. He also stated that SAQ 1(a) had a normal distribution of scores, which peaked at a score of 3.2. The equivalent of the Manhattan chart had a good profile: it stepped down cleanly, indicating that the question discriminated well between stronger and weaker candidates and was performing well.

SAQ 1(b): No intervention. The Psychometrician stated that SAQ 1(b) had a normal distribution of scores, which peaked at a score of 3.1, and that the equivalent of the Manhattan chart likewise showed good discrimination between stronger and weaker candidates. The Chief Examiner referred to comments from MMU that candidates may have been confused as to the context of the question, as it was set in a trial context rather than the sentencing context to which they would have had more exposure. He stated that the principles apply across all contexts. It was noted that candidates who performed poorly on this question often used pre-prepared answers, which lost them marks as they were unable to apply these to the ethical issues raised in the scenario.

SAQ 2(a): No intervention. The Board noted SAQ 2(a) was a more challenging question. The Chief Examiner stated that the mark scheme had been more loosely drafted to allow for discussion, as this was not a black-and-white issue. He went on to state that there was nothing of any note in the Provider comments. The Psychometrician commented that there was nothing in the statistics to give any cause for concern.


SAQ 2(b): No intervention. The Chief Examiner expressed his surprise that MMU had thought that there were no ethical issues in SAQ 2(b). The examiners agreed that threats and harassment were clearly signposted in the question, which was clearly on syllabus. It was thought that MMU had conflated the harassment in the question with the concept of harassment in criminal law, which was a mistake on their part. The Assistant Chief Examiner stated that there were many opportunities for candidates to gain points. The Psychometrician commented that there was nothing in the statistics to give any cause for concern: it was a difficult question which worked well.

SAQ 3(a): No intervention. The Chief Examiner remarked that there was very little of note in the Provider comments regarding SAQ 3(a); however, candidates could sometimes be caught out by questions set in a mediation context. The Psychometrician commented that there was nothing in the statistics to give any cause for concern.

SAQ 3(b): No intervention. SAQ 3(b) had the lowest passing rate of all the questions. The Chief Examiner noted that he had no concerns about the question; all of the issues were clearly signposted. There was no feedback to suggest that the scenario was either misleading or not on the syllabus. The Board queried whether this topic was being covered properly by Providers. The Psychometrician observed that it was a difficult question which discriminated well (better than SAQ 3(a)), and that there was nothing in the statistics to give any cause for concern.

SAQ 4(a): No intervention. The Chief Examiner remarked that there was very little of note in the Provider comments regarding SAQ 4(a). The Psychometrician commented that there was nothing in the statistics to give any cause for concern.

SAQ 4(b): No intervention. The Chief Examiner explained that there was very little of note in the Provider comments regarding SAQ 4(b), other than that the question had been praised by UNN. The Psychometrician stated that there was nothing in the statistics to give any cause for concern; the question discriminated as expected, with the brighter candidates doing moderately well (gaining on average 2.2 out of 4 marks).


SAQ 5(a): No intervention. The Chief Examiner said that the mark scheme had been amended in the light of Provider feedback (specifically bullet point 7). The Psychometrician commented that there was nothing in the statistics to give any cause for concern, and the Manhattan graph showed that this question discriminated well.

SAQ 5(b): No intervention. No concerns were raised by the Providers, the examining team or the Psychometrician.

SAQ 6(a) and 6(b): No intervention. No concerns were raised by the Providers or the examining team, and there was nothing in the statistics for either subpart of SAQ 6 to give any cause for concern.

3.2.1 The Board confirmed that no interventions were required in respect of the Professional Ethics assessment; hence the passing rate remained at 57.6%. The comparison with the post-intervention passing rates across the Spring 2014 to Spring 2017 cycles is, therefore, as follows:

Professional Ethics, all Providers, post-intervention:

                        Spring 2017   Spring 2016   Spring 2015   Spring 2014
SAQ comparison             57.6%         70.8%         58.0%         65.6%
Combined comparison         N/A          70.2%         56.7%         59.6%

See comments above at 3.1.1 regarding changes in the nature of the assessment and the consequent caveats in comparing passing rates from previous cycles of assessment in Professional Ethics.

3.2.2 The recommended pass standard determined through the standard setting process was 24/60 (equivalent to 40%), and this was confirmed by the Final Board as the basis for determining candidate performance in the paper as a whole.

3.2.3 The Board noted that the reliability of the SAQ assessment, at 0.72, was acceptable, and higher than for previous Professional Ethics SAQ assessments, but not as high as had been predicted with the change to the new format.


3.2.4 The Board noted that a fire alarm had sounded at the beginning of the exam at MMU (not all examination rooms were affected) and that the Provider had sanctioned an adjustment to the start and finish of the examination, together with an extra five minutes on restarting to allow candidates to settle. The Board noted that this appeared to be an appropriate local adjustment and that no further intervention was warranted.

3.2.5 The Final Board also discussed the shortage of answer booklets experienced at some assessment centres during the Professional Ethics examination. The Board was concerned about the impact this might have had on candidates but concluded that it was not appropriate, on the available evidence from invigilator reports, to recommend any comprehensive intervention. Providers had responded to the issue locally in a variety of ways, but typically by allowing candidates more time to complete the examination. The impact of any disruption was also, to some extent, offset by the standard setting process. Providers were subsequently advised that they could, where appropriate, use their extenuating circumstances regulations to permit an additional attempt at the Summer 2017 examination (i.e. an uncapped deferral rather than a capped referral) for candidates who had failed the Spring 2017 assessment in Professional Ethics, if the view of the local exam board was that the shortage of answer booklets may have had a bearing on a candidate’s performance.

3.2.6 The Final Board noted that some concerns had been raised about the new format of the Professional Ethics examination, in so far as candidates were now required to attempt six SAQs in two hours, whereas previously they had been required to attempt three SAQs and twenty MCQs in two hours. The particular issue was the absence of any specific reading time in addition to the two hours permitted for the assessment. The Board considered the evidence relating to the word count of the Spring 2017 assessment compared to previous Professional Ethics assessments (3,500 words fewer than the combined Spring 2016 MCQ and SAQ assessment), and the fact that there was no evidence of a falling-off in candidate performance from SAQ 1 to SAQ 6 (assuming candidates attempted the SAQs in that sequence). SAQ 2 and SAQ 3 had the lowest passing rates of the six SAQs across all Provider cohorts taken together.

3.2.7 As the independent Psychometrician noted: “...the highest mean score was ...in SAQ 6… and SAQ 5 had the third-highest mean score. Even if SAQs 5 and 6 were inherently more straightforward than the others, this still points to the ability of many candidates to perform quite adequately on these questions. SAQ 6 showed the highest correlation with the remaining SAQs and this can be re-stated as its being the most discriminating question, with SAQ 5 being the third most discriminating; there is no evidence of quasi-random responses to these questions.”

3.2.8 The Final Board noted that, under the previous format, candidates may not have been dividing the two hours permitted evenly between the MCQ and SAQ elements of the paper but spending a disproportionate amount of time on the three SAQs. The new format of six SAQs does not permit that flexibility, and hence requires candidates to be more disciplined in allocating time to each question. See further 7.4 (below).

3.3 Professional Ethics Spring 2017 pass rates across all Providers

Providers are ranged left to right in order of their passing rates. BPP Manchester had the highest passing rate at 78.6% and ULaw Leeds the lowest at 28.6%, a range of 50 percentage points, greater than for either Criminal or Civil Litigation. The variation in Provider cohort performance is very marked: the top three Provider cohorts have an average passing rate of 74%, whilst the bottom three have an average passing rate of just 36%, suggesting the assessment discriminated effectively between weak and strong cohorts.

[Figure: Professional Ethics Spring 2017 pass rates (%) by Provider, ranked from highest (BPP Manchester) to lowest (ULaw Leeds)]


3.4 Professional Ethics Spring 2017 Provider pass rates vs. Spring 2014-2016 combined MCQ & SAQ post-intervention pass rates

Providers are ranged left to right in order of their Spring 2017 passing rates, and the data shows their passing rates across the four Spring assessment cycles from Spring 2014 to Spring 2017 (BPP Birmingham entered its first cohort in Spring 2016 and ULaw Leeds in Spring 2017). In comparing Provider performance in Spring 2017 with previous years, it must be borne in mind that this is not a like-for-like comparison. The Professional Ethics cohort performance data for 2017 is based on a SAQ paper with six questions. The Professional Ethics data for Spring 2014 to Spring 2016 used for this graph is based on the combined passing rate of candidates across both a MCQ paper (comprising twenty questions) and a SAQ paper (three questions). It should also be noted that prior to Spring 2017 candidates had to pass both the MCQ and SAQ elements (with a fixed pass mark of 60% on each paper) in Professional Ethics in order to achieve a passing grade for the paper as a whole.

3.4.1 With these caveats in mind, the following points emerge:

[Figure: Professional Ethics Spring 2017 Provider pass rates vs. Spring 2014-2016 combined MCQ & SAQ post-intervention pass rates, by Provider]


(a) BPP Manchester achieved the highest average cohort passing rate over the Spring 2014 to Spring 2017 period (73.1%), whilst the lowest average was recorded by NTU (49.4%).

(b) Looking at the change in Provider cohort performance from Spring 2016 to Spring 2017, BPP Birmingham shows the largest improvement, with a rise of over 5% in respect of the new form of assessment, whilst UWE reported a drop of over 31% (one of eight Providers to see a decline from Spring 2016 to Spring 2017). On average, Providers saw an 11% drop in passing rates compared to the average combined Professional Ethics passing rates of Spring 2016. (ULaw Leeds is excluded from this calculation as it had no cohort in Spring 2016.)

(c) Comparison of each Provider’s cohort passing rate for Spring 2017 with the average of their passing rates for the combined Professional Ethics assessment across the Spring 2014 to 2016 cycles shows that 5 of 11 record a higher passing rate for Spring 2017. BPP London recorded the biggest increase (14.1%) in respect of their Spring 2017 cohort compared with the average of their previous three cycles. By far the biggest decline was recorded at UWE, where the 37.1% passing rate for Spring 2017 is 31% below its average passing rate for the previous three cycles. On average, Providers saw a 3.5% decline in passing rates for Spring 2017 compared to the average combined Professional Ethics passing rates across the Spring 2014 to Spring 2016 period. (ULaw Leeds and BPP Birmingham are excluded from this calculation as they did not offer the BPTC across all four cycles.)


3.5 Professional Ethics Spring 2017 Provider pass rates vs. Spring 2014-2016 SAQ post-intervention pass rates

Providers are ranged left to right in order of their Spring 2017 passing rates, and the data shows their passing rates across the four assessment cycles from Spring 2014 to Spring 2017 (BPP Birmingham entered its first cohort in Spring 2016 and ULaw Leeds in Spring 2017). In comparing Provider performance in Spring 2017 with previous years, it must be borne in mind that this is not a like-for-like comparison. The Professional Ethics cohort performance data for Spring 2017 is based on a SAQ paper with six questions. The Professional Ethics data for Spring 2014 to Spring 2016 used for this graph is based on the passing rate of candidates in the three-SAQ paper used during that period. Historically, Provider cohorts achieved much higher passing rates in the MCQ section of the previous assessment format; hence the SAQ passing rates for Spring 2014 to Spring 2016 much more closely mirror the combined passing rates for the same period.

3.5.1 With these caveats in mind, the following points emerge:

(a) Of those Providers with results for cohorts in all four assessment cycles, BPP Manchester achieves the highest average Spring sit cohort passing rate (76.1%), whilst NTU has the weakest (52.3%).

[Figure: Professional Ethics Spring 2017 Provider pass rates vs. Spring 2014-2016 SAQ post-intervention pass rates, by Provider]


(b) Looking at the change in Provider cohort performance from Spring 2016 to Spring 2017, BPP Birmingham shows the largest improvement, with a rise of 5% in respect of the new form of assessment, whilst UWE reported a drop of over 33% (one of nine Providers to see a decline from Spring 2016 to Spring 2017). On average, Providers saw a 12% decline in passing rates compared to the average SAQ-only Professional Ethics passing rates of Spring 2016. (ULaw Leeds is excluded from this calculation as it had no cohort in Spring 2016.)

(c) Comparison of each Provider’s cohort passing rate for Spring 2017 with the average of their passing rates for the SAQ Professional Ethics assessment across the Spring 2014 to Spring 2016 cycles shows that three of the eleven record a higher passing rate for Spring 2017. BPP London recorded the biggest increase (6.5%) in respect of their Spring 2017 cohort compared with the average of their previous three cycles. The biggest decline was reported by UWE, where the 37% passing rate for Spring 2017 is 37% below its average passing rate for the previous three cycles of SAQ assessment in Professional Ethics. (ULaw Leeds and BPP Birmingham are excluded from this calculation as they did not offer the BPTC across all four cycles.)

3.6 Overall grade boundary distribution

All-Provider grade boundary distribution, Professional Ethics (%):

Not Competent   Competent   Very Competent   Outstanding
    44.8           46.7          14.2            0.1

The standard setting process determines where the “Not Competent”/“Competent” boundary lies, and grade boundaries are then calculated accordingly to ensure that the passing grades are allocated proportionately across the Competent / Very Competent / Outstanding classifications. As explained at 2.8 above, for an assessment comprising six SAQs, each carrying 10 marks, a passing standard of 36/60 equates to a passing score of 60%, thus mirroring the fixed pass mark used in centrally assessed exams prior to Spring 2017. In a system with a fixed pass mark of 60%, candidates awarded marks of 60% to 69% were graded Competent; those awarded marks of 70% to 84% were graded Very Competent; and those awarded marks between 85% and 100% were graded Outstanding. With the introduction of standard setting, the performance identified as equating to the pass standard can vary from one year to the next depending on the perceived level of difficulty of the examination. Where the passing standard is identified as being below 36/60, the range of Competent / Very Competent / Outstanding classifications is stretched to cover a broader range of scores; conversely, where the passing standard is identified as being above 36/60, that range becomes compressed. The Spring 2017 all-Provider cohort results for Professional Ethics show that, even with a passing standard set at 24/60, only one candidate achieved the Outstanding classification.
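The stretching and compression described above can be illustrated with a piecewise-linear sketch. Note one assumption made purely for illustration: the report does not say how marks below the pass standard are scaled, so this sketch maps them proportionally onto 0-60.

```python
def scale_raw(raw, pass_raw, max_raw):
    """Map a raw mark onto the 0-100 scaled range, pinning the pass
    standard to 60. A lower pass standard stretches the passing bands
    over more raw marks; a higher one compresses them."""
    if raw >= pass_raw:
        return 60 + 40 * (raw - pass_raw) / (max_raw - pass_raw)
    return 60 * raw / pass_raw  # assumed proportional mapping below the pass


# With the Spring 2017 Ethics pass standard of 24/60, a raw mark of 47
# scales above the Outstanding boundary of 85:
print(scale_raw(47, 24, 60))  # about 85.6, i.e. Outstanding
# With a 36/60 pass standard the same raw mark scales much lower:
print(scale_raw(47, 36, 60))  # about 78.3, i.e. Very Competent
```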


3.7 Spring 2017 post-intervention grade boundaries by Provider

Providers are ranged from left to right in order of the percentage of candidates in each cohort graded Very Competent; only one candidate across all Providers (at BPP London) achieved an Outstanding grade, so that classification is not used as the point of comparison. Predictably, given that it has the highest passing rate in Professional Ethics, BPP Manchester has the cohort with the highest percentage of candidates graded Very Competent (33%). Apart from that outcome, however, as the table below indicates, there is no strong correlation between a Provider cohort’s passing rate and the percentage of its candidates graded Very Competent. For example, Cardiff had a lower percentage of Very Competent candidates than the lowest-ranked Provider, ULaw Leeds.

[Figure: Professional Ethics Spring 2017 grade boundaries by Provider – percentage of each cohort graded Not Competent, Competent, Very Competent and Outstanding]


Provider           Passing rate (%)   "Very Competent" (%)
BPP London               78.6                33.9
Cardiff                  74.2                 9.1
BPP Birmingham           68.9                14.8
ULaw Birmingham          67.7                12.9
BPP Leeds                65.3                21.4
ULaw London              61.1                12.2
MMU                      56.1                 7.3
BPP Manchester           55.4                12.5
City                     55.0                11.3
NTU                      50.8                 6.6
Northumbria              41.9                 4.4
UWE                      37.1                 6.7
ULaw Leeds               28.6                 9.5

3.8 All-Provider Spring 2014 to Spring 2017 grade boundary trend analysis

Any comparison of grade boundary trends over the Spring 2014 to Spring 2017 cycles must be looked at in the context of the changes made to the Spring 2017 Professional Ethics assessment, which have already been set out at 3.1 above. Hence this is not a true like-for-like comparison, but it is of interest nonetheless in assessing the impact of the new form of assessment on the distribution of gradings.

[Figure: Professional Ethics grade boundaries trend analysis, Spring 2014 to Spring 2017 – percentage of students in each grade category by cycle]


What can be seen is the virtual disappearance of the Outstanding grade in Spring 2017 and a drop in the Very Competent grade from 47% in Spring 2016 to 14% in Spring 2017.


4. CRIMINAL LITIGATION RESULTS

4.1 Pre-intervention overall passing rate

Criminal Litigation, all Providers, pre-intervention:

                        Spring 2017   Spring 2016   Spring 2015   Spring 2014
MCQ comparison             77.0          85.2          80.4          74.6
Combined comparison         N/A          69.9          61.5          65.9

This shows the all-Provider Spring 2017 pre-intervention cohort passing rate as 77% for Criminal Litigation, based on a pass standard recommended to the Board (as a result of the standard setting process) of 45 out of 75. With the new assessment format, there is no precise point of comparison with previous years. The introduction of single-best-answer MCQs has made the assessment more challenging than the previous MCQ assessment, and this is reflected in the higher passing rates for the MCQ element in previous years. The passing rate for the new-style assessment is, however, higher than the combined (MCQ and SAQ) passing rate for any of the previous three Spring assessment cycles.

4.2 Pre-intervention histogram of MCQs

[Figure: Criminal Litigation Spring 2017 pre-scale MCQ, all-Provider question-by-question pass rate (%) for items 1-75]


The pre-intervention data shows ten MCQs with an all-Provider cohort passing rate below 40%. There is no evidence of candidate performance fall-off when comparing passing rates across the paper: the average passing rate for MCQs 1-25 is 68%, for MCQs 26-50 it is 67%, and for MCQs 51-75 it is 69%.

4.3 Details of Subject Board discussions and interventions

MCQ 9: No intervention. 51% of candidates opted for [B], but this option had negative discrimination. The examining team did not think there was anything misleading in the question; in the hierarchy of the answers, [D] remains at the top. Although over half the candidates chose [B], the correlation is reasonable enough to infer that the weaker candidates had chosen this option. [B] is incorrect because the need for a court-appointed legal representative has passed once the Prosecution case has finished. The examiners took the text directly from BCP. The Board decided there was no reason for intervention.

MCQ 10: No intervention. 67% chose [C]; however, there was strong negative correlation. The examiners noted that this question was a good example of candidates being asked to make a judgement call. The custody threshold had not been met, therefore [A], [C] and [D] were wrong answers; the brightest candidates chose [B]. The Provider comments were noted, but no change was made, this decision being based on the negative correlation. The Board decided there was no reason for intervention.

MCQ 13: No intervention. 47% of candidates chose [A], which had positive correlation. The Provider feedback was considered. Good candidates were unsure whether to answer [A] or [C]: [A] was correct but not the single best answer. It was a difficult question, as candidates are used to a set order of events in court. BCP backs up answer [C], and it remains the single best answer. The Board decided there was no reason for intervention.

MCQ 20: No intervention. BCP 2017 D22.71 sets out the two exceptions to the rule, with TWOC being one of them. The Psychometrician highlighted his concern regarding option [B], as it had positive correlation and was chosen by 24% of candidates. The examiners explained that the scenario ruled answer [B] out as being correct. The Provider comments related to option [A], which had been chosen by weaker candidates. The Board decided there was no reason for intervention.


MCQ Intervention Applied

CEB rationale

26

The Board decided to credit [A] and [B] due to a lack of sufficient differentiation between them.

[A] was the best answer. The Board considered whether [B] was also correct and concluded that there was very little difference between options [A] and [B]. In real life, they should be done together; however, if someone had to choose one option only, they would choose answer [A]. Answer [B] established only half of the requirements. It was noted that the Board was alerted to possible problems with this question by the Provider comments, rather than by the statistical analysis.

29 No intervention

Although the majority of candidates (55.8%) chose [B], there was a reasonable negative correlation, and positive correlation on the correct answer. Only City commented in full regarding this question; the other Providers made no comment. The strongest Provider cohort (BPP Leeds) only achieved 13.5%. [B] was a weak answer and accordingly the weaker candidates chose it. Option [C] was not wrong, but [D] went further and was therefore the better answer. This cohort did not perform well; the Board had no doubt that [D] was the best answer, and the next best answer was [C]. The Board decided there was no reason for intervention.

36 No intervention

40% of candidates chose [A], which had very slight negative correlation. There were no comments from Providers. There were two good distractors in options [A] and [C]; this was a difficult question which worked well. The Board decided there was no reason for intervention.

37

The Board decided to credit [A] and [C], as option [A] could also be correct: the fact pattern was not completely clear as to what the original evidence was being used for.

41% of candidates chose [B] which had negative correlation, 23% chose [A] which had a very small positive correlation. There were Provider comments from Cardiff, City and UWE as to the distinction between options [A] and [C]; the other Providers did not comment. The Board stated that ‘original evidence’ was clearly on syllabus (see BCP 2017 F16.20).

48 No intervention

The correct answer, [D], had positive correlation. 45% chose [C], which had strong negative correlation. In their comments, BPP had asked for the question to be discounted. The Board reflected that BPP had been confused by admissions at the police station. This question was flagged for consideration by the psychometrician, who stated that the good candidates were split 50:50 between options [B] and [D]. The Board concluded that if a student knows s.78 well, they will know that [D] overrides [B] and therefore no change to the question was required. It was thought that this was a hard but good question. The Board decided there was no reason for intervention.

57

The Board decided to remove this question, with the exam recalculated on the basis of a total of 74 questions.

25% of candidates chose [D], which had positive correlation. Only Cardiff and City provided any meaningful feedback; the examiners agreed that they had raised a valid point. There was some discussion by the Board regarding the fact that in 2008 the law was changed to take away the right to a community order. The Board concluded that the fourth line of the question was fatally flawed. Due to this technical issue regarding a prospective piece of legislation which is not yet in force, the question was thought to be unfair to candidates and was hence removed.

73 No intervention

22% of candidates chose option [C] which had weakly positive correlation. The Provider comments from UWE asked for [C] to be allowed. The Board considered this multiple-choice question: options [A] and [C] were definitely wrong, and option [B] was wrong because the defendant must remain in custody. The Board was not minded to make any changes. The Board decided there was no reason for intervention.
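The option-level "correlations" cited in the decisions above are, in conventional item analysis, point-biserial correlations between selecting a given option and total test score: a negative value indicates that the option attracted the weaker candidates. The report does not state the CEB's exact statistic, so the following is an illustrative sketch using hypothetical data.

```python
# Illustrative sketch of option-level discrimination as a point-biserial
# correlation between choosing an option and total score. The statistic and
# the cohort data below are assumptions for illustration; the report does not
# specify the CEB's exact method.
import statistics

def point_biserial(selected, totals):
    """Correlation between a 0/1 option-selection indicator and total scores."""
    n = len(totals)
    mean_1 = statistics.mean(t for t, s in zip(totals, selected) if s)
    mean_0 = statistics.mean(t for t, s in zip(totals, selected) if not s)
    p = sum(selected) / n
    sd = statistics.pstdev(totals)
    return (mean_1 - mean_0) / sd * (p * (1 - p)) ** 0.5

# Hypothetical cohort: totals out of 75, and whether each candidate chose option [B].
totals  = [68, 64, 61, 58, 55, 52, 49, 47, 44, 40]
chose_b = [0,  0,  0,  1,  0,  1,  1,  1,  1,  1]
r = point_biserial(chose_b, totals)
print(f"Point-biserial for option [B]: {r:.2f}")  # negative: weaker candidates chose [B]
```

A distractor with a clearly negative value, as here, supports the Board's inference that the option drew the weaker candidates rather than misleading the stronger ones.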

The psychometrician confirmed that the Criminal Litigation assessment exceeded the requirements of reliability. The majority of the Provider feedback acknowledged that the examination had been a fair assessment of candidate knowledge. The pass standard of 45 out of 74 (equivalent to 61%), recommended to the Board as a result of the standard setting process and the removal of MCQ 57, was endorsed by the Board.
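The arithmetic of removing a question, as with MCQ 57, is simple: each candidate's mark on the withdrawn item is disregarded, totals are taken out of 74, and the pass standard of 45/74 rounds to 61%. A minimal sketch follows (the candidate data is hypothetical; the CEB's actual results processing is not described in this report):

```python
# Sketch of rescoring after one MCQ is withdrawn (here item 57), with the
# standard-set pass mark applied to the reduced 74-question total.
# Candidate item scores below are hypothetical.
REMOVED_ITEM = 57    # question withdrawn by the Board (1-indexed)
PASS_STANDARD = 45   # out of the remaining 74 questions

def rescore(item_scores, removed=REMOVED_ITEM):
    """item_scores: 75 marks of 0 or 1 for one candidate; returns (total, passed)."""
    kept = [s for i, s in enumerate(item_scores, start=1) if i != removed]
    total = sum(kept)
    return total, total >= PASS_STANDARD

# A candidate who answered the first 45 questions correctly:
total, passed = rescore([1] * 45 + [0] * 30)
print(total, passed)                   # 45 True
print(f"pass standard = {45/74:.0%}")  # 61%
```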


4.4 Spring 2017 post-intervention overall passing rate

Criminal Litigation All Provider post-intervention

                      Spring 2017   Spring 2016   Spring 2015   Spring 2014
MCQ Comparison        78.2          85.9          83.3          84.1
Combined Comparison   N/A           70.3          62.5          72.8

The all-Provider BPTC passing rate after interventions was 78.2% (higher than any previous combined MCQ and SAQ passing rate for Criminal Litigation across the Spring 2014 to Spring 2016 assessments).

4.5 Spring 2017 pre- and post-intervention MCQ passing rates by Provider

Providers are ranged left to right in order of their post-intervention passing rates. Hence BPP Manchester had the highest Spring 2017 post-intervention passing rate at 96.1% and ULaw Birmingham the lowest at 62.5%, a range of over 33%. The interventions had very little impact on passing rates, with BPP Birmingham seeing the biggest change, a 3.7% uplift. As can happen where the Board intervenes to remove a question from the assessment, post-intervention passing rates can be lower than pre-intervention passing rates, as is the case here with BPP Leeds, where passing rates fell by 2%. There is very slight evidence that the bottom seven Provider cohorts on average benefitted more from the intervention (up 1.6%) than the top six Provider cohorts (up 0.9%).

[Figure: Criminal Litigation Spring 2017, pre- and post-intervention pass rates (%) by Provider (series: Pre-Scale, Post-Scale).]


4.6 Criminal Litigation Spring 2017 post-intervention Provider pass rates vs. Spring 2014-2016 combined MCQ & SAQ post-intervention pass rates

Providers are ranged left to right in order of their Spring 2017 post-intervention passing rates, and the data shows their passing rates across the four Spring assessment cycles from 2014 to 2017. (BPP Birmingham entered its first cohort in Spring 2016 and ULaw Leeds in Spring 2017.) In comparing Provider performance in Spring 2017 with previous years, it must be borne in mind that this is not a like-for-like comparison. The Criminal Litigation cohort performance data for Spring 2017 is based on an MCQ paper with 75 questions, some of which were of the single best answer variety. The Criminal Litigation data for Spring 2014 to Spring 2016 used for this graph is based on the combined passing rate of candidates across both the MCQ paper (comprising forty questions, all of the "one correct answer" variety) and the SAQ paper (five questions). It should also be noted that, prior to Spring 2017, candidates had to pass both MCQ and SAQ elements (fixed pass mark of 60% on each paper) in Criminal Litigation in order to achieve a passing grade for the paper as a whole.

4.6.1 With these caveats in mind, the following points emerge:

(a) Cardiff achieves the highest average cohort passing rate over the Spring 2014 to Spring 2017 period (85.4%), whilst the weakest is Northumbria (63.3% average).

[Figure: Criminal Litigation Spring 2017 Provider pass rates vs. Spring 2014-2016 combined MCQ & SAQ post-intervention pass rates; pass rate (%) by Provider (series: Spring 2017; Spring 2016, 2015 and 2014 overall).]


(b) Looking at the change in Provider cohort performance from Spring 2016 to Spring 2017, BPP Birmingham shows the largest improvement with a rise of over 24% in respect of the new form of assessment, whilst ULaw Birmingham reported a drop of over 27% (one of four Providers to see a decline from Spring 2016 to Spring 2017). On average Providers saw a 6% rise in average passing rates for Spring 2017 compared to the average combined Criminal Litigation passing rates for Spring 2016. (ULaw Leeds is excluded from this calculation as it had no cohort in Spring 2016.)

(c) Comparison of each Provider's cohort passing rate for Spring 2017 with the average of their passing rates for the combined Criminal Litigation assessment across the Spring 2014 to Spring 2016 cycles shows that ten out of eleven record a higher passing rate for Spring 2017. BPP Leeds recorded the biggest increase (23.1%) in respect of their Spring 2017 cohort compared with the average of their previous three cycles. The only decline was reported by ULaw Birmingham, where the 60% passing rate for Spring 2017 is 15% below its average passing rate for the previous three cycles. On average Providers saw an 8.6% rise in passing rates for Spring 2017 compared to the average combined Criminal Litigation passing rates across the Spring 2014 to Spring 2016 period. (ULaw Leeds and BPP Birmingham are excluded from this calculation as they did not offer the BPTC across all four cycles of Spring sit assessment.)

4.7 Criminal Litigation Spring 2017 post-intervention Provider pass rates vs. Spring 2014-2016 MCQ post-intervention pass rates

[Figure: Criminal Litigation Spring 2017 Provider pass rates vs. Spring 2014-2016 MCQ post-intervention pass rates; pass rate (%) by Provider (series: Spring 2017; Spring 2016, 2015 and 2014 MCQ).]


Providers are ranged left to right in order of their Spring 2017 post-intervention passing rates, and the data shows their passing rates across the four assessment cycles from Spring 2014 to Spring 2017. (BPP Birmingham entered its first cohort in Spring 2016 and ULaw Leeds in Spring 2017.) In comparing Provider performance in Spring 2017 with previous years, it must be borne in mind that this is not a like-for-like comparison. The Criminal Litigation cohort performance data for Spring 2017 is based on an MCQ paper with 75 questions, some of which were of the single best answer (SBA) variety. The Criminal Litigation data for Spring 2014 to Spring 2016 used for this graph is based on the MCQ element of the assessment used in those cycles (comprising forty questions, all of the "one correct answer" variety). This provides an alternative basis for trend analysis compared to 4.6 (above), where the combined passing rates include the SAQ section of the previous form of assessment. Candidates in previous years normally achieved higher passing rates in respect of the MCQ element of the assessment as against the SAQ element; hence one would expect the passing rates for the Spring 2017 MCQ assessment to be lower by comparison, given the introduction of SBA-type MCQs.

4.7.1 With these caveats in mind, the following points emerge:

(a) BPP Manchester enjoys the highest average cohort passing rate (91.5%) over the Spring 2014 to Spring 2017 period, with ULaw London the weakest (76.9% average).

(b) In respect of the change in Provider cohort performance from Spring 2016 to Spring 2017, BPP Birmingham shows the largest improvement with a rise of 24% in respect of the new form of assessment, whilst ULaw Birmingham reported a drop of over 33% (one of eight Providers to see a decline from Spring 2016 to Spring 2017). On average Providers saw a 5% decline in average passing rates compared to the average MCQ-only Criminal Litigation passing rates of Spring 2016. (ULaw Leeds is excluded from this calculation as it had no cohort in Spring 2016.)

(c) Comparison of each Provider's cohort passing rate for Spring 2017 with the average of their passing rates for the MCQ Criminal Litigation assessment across the Spring 2014 to Spring 2016 cycles shows that four of eleven record a higher passing rate for Spring 2017. BPP Leeds recorded the biggest increase (6.5%) in respect of their Spring 2017 cohort compared with the average of their previous three cycles. The biggest decline was reported by ULaw Birmingham, where the 60% passing rate for Spring 2017 is 28% below its average passing rate for the previous three cycles of MCQ assessment in Criminal Litigation. (ULaw Leeds and BPP Birmingham are excluded from this calculation as they did not offer the BPTC across all four cycles.)

4.8 Overall grade boundary distribution

All Provider Grade Boundary Distribution – Criminal Litigation (% of candidates)

Not Competent   Competent   Very Competent   Outstanding
21.8            28.0        43.5             6.7


The standard setting process determines where the "Not Competent"/"Competent" boundary lies, and grade boundaries are then calculated accordingly to ensure that the passing grades are allocated proportionately across the Competent / Very Competent / Outstanding classifications. As explained at 2.8 above, for an assessment comprising 75 MCQs a passing standard of 45/75 equates to a passing score of 60%, thus mirroring the fixed pass mark used in centrally assessed exams prior to Spring 2017. Under the 60% fixed pass mark regime, candidates awarded marks of 60% to 69% were graded Competent; those awarded marks of 70% to 84% were graded Very Competent; and those awarded marks between 85% and 100% were graded Outstanding. From Spring 2017 onwards, where the passing standard is identified as being below 45/75, the range of Competent / Very Competent / Outstanding classifications is stretched to cover a broader range of scores. Conversely, where the passing standard is identified as being above 45/75, the range of Competent / Very Competent / Outstanding classifications becomes compressed. The Spring 2017 all-Provider cohort results for Criminal Litigation show that even with a passing standard set at 45/75 there are relatively few candidates achieving the Outstanding classification.

4.9 Spring 2017 grade boundaries by Provider
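One plausible reading of the boundary stretching and compression described at 4.8 above is a linear rescaling that maps the standard-set pass mark onto the old 60% fixed pass mark before applying the historical 60/70/85 grade cut-offs. The report does not publish the CEB's exact formula, so the function below is an assumed interpretation for illustration only.

```python
# Assumed interpretation of the grade boundary "stretching": linearly map the
# passing range [pass standard %, 100%] onto [60%, 100%], then apply the old
# fixed cut-offs (60 Competent, 70 Very Competent, 85 Outstanding).
# Illustrative only; the CEB's actual calculation is not published.

def grade(raw_score, pass_standard, total=75):
    pct = 100.0 * raw_score / total
    pass_pct = 100.0 * pass_standard / total
    if pct < pass_pct:
        return "Not Competent"
    # Rescale so that pass_pct maps to 60 and a full score maps to 100.
    scaled = 60.0 + (pct - pass_pct) * 40.0 / (100.0 - pass_pct)
    if scaled >= 85.0:
        return "Outstanding"
    if scaled >= 70.0:
        return "Very Competent"
    return "Competent"

# With the standard at 45/75 (60%), the old fixed boundaries are reproduced:
print(grade(44, 45))   # Not Competent
print(grade(45, 45))   # Competent
print(grade(64, 45))   # Outstanding
```

Under this mapping, a standard above 45/75 (such as the 47 recommended for Civil Litigation) compresses the passing bands in raw-score terms, consistent with the description at 4.8.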

[Figure: Criminal Litigation Spring 2017 grade boundaries by Provider; % of students per grade category (Not Competent / Competent / Very Competent / Outstanding).]


Providers are ranged from left to right in order of the percentage of candidates in each cohort graded “Outstanding”. Predictably, given that it has the highest passing rate in Criminal Litigation, BPP Manchester has the cohort with the highest percentage of candidates graded Outstanding (16%). As the table below indicates, apart from that outcome, however, there is no strong correlation between a Provider cohort passing rate and the percentage of candidates at each Provider being graded as Outstanding. Cardiff had a lower percentage of Outstanding candidates than the lowest ranked Provider ULaw Birmingham. NTU was the only cohort not to have any candidates graded as Outstanding for Criminal Litigation.

Provider          Post-intervention passing rate   Outstanding
BPP Manchester    96.1                             15.7
BPP Leeds         94.2                             5.8
Cardiff           92.1                             3.2
ULaw Leeds        81.0                             9.5
ULaw London       80.1                             7.4
BPP Birmingham    79.2                             1.9
BPP London        78.6                             6.5
NTU               77.6                             0.0
City              75.7                             7.8
Northumbria       73.8                             6.2
UWE               72.4                             9.2
MMU               72.0                             2.7
ULaw Birmingham   60.0                             4.0

4.10 All-Provider Spring 2014 to Spring 2017 grade boundaries trend analysis

Any comparison of grade boundary trends over the Spring 2014 to Spring 2017 cycles must be looked at in the context of the changes made to the assessment in Criminal Litigation for Spring 2017, which have already been set out at 4.6 above. Hence this is not a true like-for-like comparison, but it is of interest nonetheless in assessing the impact of the new form of assessment on the distribution of gradings. What can be seen is a sharp decline in the proportion of candidates achieving an Outstanding grade in Spring 2017 compared to the average across Spring 2014 to Spring 2016 (down 11.6%). The decline from Spring 2016 to Spring 2017 is very marked (down 14%), with consequent year-on-year increases in candidates graded Very Competent (up 7.4%) and those graded Competent (up 14.9%).

[Figure: Criminal Litigation grade boundaries trend analysis, Spring 2014 to Spring 2017; % of students per grade category (Outstanding / Very Competent / Competent / Not Competent).]


5. CIVIL LITIGATION RESULTS

5.1 Pre-intervention Spring 2017 overall passing rates

Civil Litigation All Provider pre-intervention

                      Spring 2017   Spring 2016   Spring 2015   Spring 2014
MCQ Comparison        59.3%         74.1%         63.6%         60.9%
Combined Comparison   N/A           62.2%         44.3%         44.6%

This shows the all-Provider Spring 2017 pre-intervention cohort passing rate as being 59.3% for Civil Litigation (based on a pass standard of 47 out of 75, recommended to the Board as a result of the standard setting process). With the new assessment format there is no precise point of comparison with previous years. The introduction of single best answer type MCQs has made the assessment more challenging than the previous MCQ assessment, and this is reflected in the higher passing rates for the MCQ element in previous years. The passing rate for the new style assessment is, however, higher than the combined (MCQ and SAQ) passing rate for the Spring 2014 and 2015 cycles.

5.2 Pre-intervention histogram of 75 MCQs

The pre-intervention data shows seven MCQs with an all-Provider cohort passing rate below 40%. There is no evidence of candidate performance falling off across the paper; indeed, the data demonstrates the exact opposite. The pre-intervention average passing rate is 59.9% for MCQs 1-25, 64.5% for MCQs 26-50, and 69.5% for MCQs 51-75.

[Figure: Civil Litigation Spring 2017 pre-intervention all-Provider question-by-question pass rate (%) histogram, MCQ items 1-75.]


5.3 Details of Subject Board discussions and interventions

MCQ Intervention Applied

CEB rationale

Q43

The Board decided to intervene by disregarding the question and recalculating the pass mark.

Good discrimination. Correct answer [D]. 64% of candidates chose option [B], and 26% chose option [C], which had weakly positive correlation. As such, this was a question the psychometrician had flagged for review. The Chair noted that the low pass rates across all Providers suggested that this question was a candidate for some sort of intervention. The examiners stated that they would potentially change the wording of the question were it to be used again (the addition of 'at the last moment' would be beneficial); thus the drafting had been sub-optimal. They also agreed that the question was clearly on syllabus, that [D] was the correct answer, and were clear that [B] was not correct and credit should not be given to those who had selected it.

Q66 No intervention

Poor discrimination. The Board struggled to see why candidates chose [D] as the correct answer. The psychometrician suggested that options [A] and [C] were unattractive, leading candidates to option [D]. The Board was confident that the wording was not misleading candidates, and decided there was no reason for intervention.

Q71

The Board decided to intervene by crediting options [A] and [C].

Passing rate 59%. Weaker discrimination. Although 30% of candidates chose the incorrect option [C], there was negative discrimination, suggesting that they were the weaker candidates. Neither option [B] nor [D] had positive discrimination, and the examiners found no merit in those options. However, the Assistant Chief Examiners agreed that the wording of the question could be improved. The Chief Examiner agreed that [C] was strictly wrong but not happily worded, as 'will' could be read as 'likely to' and would have been better phrased as 'must'. The Board accepted that, given the wording of CPR 32.5 and the interplay between 32.5(3) and (4), candidates should be credited with a mark for answering either [A] or [C]. Noted: (i) the question should be edited before re-use; (ii) Provider feedback had been helpful in alerting the Board to a possible flaw in this question.

Page 41: THE BAR STANDARDS BOARD CENTRAL EXAMINATIONS BOARD · PDF fileTHE BAR STANDARDS BOARD CENTRAL EXAMINATIONS BOARD CHAIR’S REPORT OCTOBER 2017 ... Education and Training Committee,

Page 41 of 61

The independent psychometrician confirmed that the Civil Litigation assessment exceeded the required level of reliability. The Provider feedback received in respect of the assessment was largely positive, although the length of some of the examination questions was mentioned as a concern by some. The Final Board concluded that there was no statistical data to suggest the length of the questions was a factor affecting candidate performance. The pass standard of 47/74 (equivalent to 64%), recommended to the Board as a result of the standard setting process and the removal of MCQ 43, was endorsed by the Board.

5.4 Spring 2017 post-intervention passing rate

Civil Litigation All Provider post-intervention

                      Spring 2017   Spring 2016   Spring 2015   Spring 2014
MCQ Comparison        60.2%         74.1%         71.3%         68.6%
Combined Comparison   N/A           62.2%         58.0%         57.4%

As can be seen from the above table, the effect of the two interventions agreed by the Final Board was relatively marginal, increasing the passing rate by less than 1%. The post-intervention all-Provider passing rate of 60.2% is very much in the range achieved over the Spring 2014 to Spring 2016 cycles for the combined Civil Litigation assessments, and predictably lower than that achieved in respect of the MCQ assessment alone over the same period.

5.5 Pre- and post-intervention MCQ passing rates by Provider

Providers are ranged left to right in order of their post-intervention passing rates. Hence BPP Manchester had the highest post-intervention passing rate at 79.2% and Manchester Metropolitan University the lowest at 36.3%, a range of almost 43%. The interventions had very little impact on passing rates, with Cardiff University seeing the biggest change, a 3.1% uplift. No Provider cohort saw passing rates decline as a result of the interventions. Neither is there evidence to suggest that the weaker cohorts benefitted more from the interventions, the average uplift being the same across the top six Providers as the bottom seven.

[Figure: Civil Litigation Spring 2017, pre- and post-intervention pass rates (%) by Provider (series: Pre-Scale, Post-Scale).]

5.6 Civil Litigation Spring 2017 post-intervention Provider pass rates vs. Spring 2014-2016 combined MCQ & SAQ post-intervention pass rates

Providers are ranged left to right in order of their Spring 2017 post-intervention passing rates, and the data shows their passing rates across the four assessment cycles from Spring 2014 to Spring 2017. (BPP Birmingham entered its first cohort in Spring 2016 and ULaw Leeds in Spring 2017.) In comparing Provider performance in Spring 2017 with previous years, it must be borne in mind that this is not a like-for-like comparison. The Civil Litigation cohort performance data for Spring 2017 is based on an MCQ paper with 75 questions, some of which were of the single best answer variety. The Civil Litigation data for Spring 2014 to Spring 2016 used for this graph is based on the combined passing rate of candidates across both the MCQ paper (comprising forty questions, all of the "one correct answer" variety) and the SAQ paper (five questions). It should also be noted that, prior to Spring 2017, candidates had to pass both MCQ and SAQ elements (with a fixed pass mark of 60% on each paper) in Civil Litigation in order to achieve a passing grade for the paper as a whole.

5.6.1 With these caveats in mind, the following points emerge:

(a) BPP Manchester enjoys the highest average cohort passing rate over the Spring 2014 to Spring 2017 period (75.9%) and the weakest is MMU (44.3% average).

[Figure: Civil Litigation Spring 2017 Provider pass rates vs. Spring 2014-2016 combined MCQ & SAQ post-intervention pass rates; pass rate (%) by Provider (series: Spring 2017; Spring 2016, 2015 and 2014 overall).]


(b) Looking at the change in Provider cohort performance from Spring 2016 to Spring 2017, NTU shows the largest improvement with a rise of over 20% in respect of the new form of assessment, whilst ULaw London reported a drop of nearly 11%. Two-thirds of Providers with data for Spring 2016 saw a decline in Spring 2017 with the average reduction in passing rates being 2.5% as against the average combined Civil Litigation passing rates of Spring 2016. (ULaw Leeds is excluded from this calculation as it had no cohort in Spring 2016.)

(c) Comparison of each Provider’s cohort passing rate for Spring 2017 with the average of their passing rates for the combined Civil Litigation assessment across the Spring 2014 to Spring 2016 cycles shows that half of those with reportable data achieved a higher passing rate in Spring 2017. NTU recorded the biggest increase (over 15%) in respect of their Spring 2017 cohort compared with the average of their previous three cycles, and the largest drop was reported by MMU, where the 36% passing rate for Spring 2017 is over 10% below its average passing rate for the previous three cycles. (ULaw Leeds and BPP Birmingham are excluded from this calculation as they did not offer the BPTC across all four cycles.)

5.7 Civil Litigation Spring 2017 post-intervention Provider pass rates vs. Spring 2014-2016 MCQ post-intervention pass rates

[Figure: Civil Litigation Spring 2017 Provider pass rates vs. Spring 2014-2016 MCQ post-intervention pass rates; pass rate (%) by Provider (series: Spring 2017; Spring 2016, 2015 and 2014 MCQ).]


Providers are ranged left to right in order of their Spring 2017 post-intervention passing rates, and the graph shows their passing rates across the four Spring assessment cycles from Spring 2014 to Spring 2017. (BPP Birmingham entered its first cohort in Spring 2016 and ULaw Leeds in Spring 2017.) In comparing Provider performance in Spring 2017 with previous years, it must be borne in mind that this is not a like-for-like comparison. The Civil Litigation cohort performance data for Spring 2017 is based on an MCQ paper with 75 questions, some of which were of the single best answer variety. The Civil Litigation data for Spring 2014 to Spring 2016 used for this graph is based on the MCQ element of the assessment used in those cycles (comprising forty questions, all of the "one correct answer" variety). This provides an alternative basis for trend analysis compared to 5.6 (above), where the combined passing rates include the SAQ section of the previous form of assessment. As might be expected, candidates in previous years normally achieved higher passing rates in respect of the MCQ element of the assessment as against the SAQ element; hence one would expect the passing rates for the Spring 2017 MCQ assessment to be lower by comparison, given the introduction of SBA-type MCQs.

5.7.1 With these caveats in mind, the following points emerge:

(a) BPP Manchester enjoys the highest average cohort passing rate (83.8%) over the Spring 2014 to Spring 2017 period. Northumbria was the weakest across the four cycles (54.8% average).

(b) Looking at the change in Provider cohort performance from Spring 2016 to Spring 2017, NTU is the only Provider cohort to show an improvement (6.5%). The sharpest decline is at ULaw Birmingham (29.5%). The average decline in Spring 2017 passing rates compared to the Spring 2016 MCQ-only passing rates is over 13%. (ULaw Leeds is excluded from this calculation as it had no cohort in Spring 2016.)

(c) Comparison of each Provider's cohort passing rate for Spring 2017 with the average of their passing rates for the MCQ Civil Litigation assessment across the Spring 2014 to Spring 2016 cycles shows that only two of eleven Providers record a higher passing rate for Spring 2017. Cardiff recorded the biggest increase (5%) in respect of their Spring 2017 cohort compared with the average of their previous three cycles. The biggest fall was reported by MMU, where the 36.3% passing rate for Spring 2017 is over 28% below its average passing rate for the previous three Spring sit cycles of MCQ-only assessment in Civil Litigation. (ULaw Leeds and BPP Birmingham are excluded from this calculation as they did not offer the BPTC across all four cycles.)

5.8 Overall grade boundary distribution

All Provider Grade Boundary Distribution – Civil Litigation (% of candidates)

Not Competent   Competent   Very Competent   Outstanding
39.8            27.7        27.4             5.2


The standard setting process determines where the "Not Competent"/"Competent" boundary lies, and grade boundaries are then calculated accordingly to ensure that the passing grades are allocated proportionately across the Competent / Very Competent / Outstanding classifications. As explained above at 2.8, for an assessment comprising 75 MCQs a passing standard of 45/75 equates to a passing score of 60%, mirroring the fixed pass mark used in centrally assessed exams prior to Spring 2017. Under the 60% fixed pass mark regime, candidates awarded marks of 60% to 69% were graded Competent; those awarded marks of 70% to 84% were graded Very Competent; and those awarded marks between 85% and 100% were graded Outstanding. Where the passing standard is identified as being below 45/75, the range of Competent / Very Competent / Outstanding classifications is stretched to cover a broader range of scores. Conversely, where the passing standard is identified as being above 45/75, the range of classifications is compressed. The Spring 2017 all-Provider cohort results for Civil Litigation show that, with a passing standard set at 47/74 (equivalent to 64%), relatively few candidates achieve the Outstanding classification.

5.9 Spring 2017 Civil Litigation grade boundaries by Provider

Providers are ranged from left to right in order of the percentage of candidates in each cohort graded Outstanding. Predictably, given that it has the highest passing rate in Civil Litigation, BPP Manchester has the cohort with the highest percentage of candidates graded Outstanding (13.2%). As the table below indicates, apart from that outcome there is no strong correlation between a Provider cohort's passing rate and the percentage of its candidates graded Outstanding: Cardiff had a lower percentage of Outstanding candidates than the second-lowest ranked Provider, Northumbria. BPP Birmingham, MMU and NTU were the only cohorts with no candidates graded Outstanding in Civil Litigation.

[Figure: Civil Litigation Spring 2017 grade boundaries by Provider: % of students per grade category (Not Competent / Competent / Very Competent / Outstanding).]

Provider            Post-intervention passing rate (%)    Outstanding (%)

BPP Manchester      79.2    13.2
Cardiff             75.4     1.5
BPP Leeds           67.3     5.8
City                64.6     6.3
NTU                 64.3     0.0
BPP London          63.6     7.3
UWE                 61.2     1.0
ULaw London         58.9     7.6
ULaw Leeds          57.1     4.8
ULaw Birmingham     52.9     2.9
BPP Birmingham      43.9     0.0
Northumbria         43.6     2.9
MMU                 36.3     0.0

5.10 All-Provider Spring 2014 to Spring 2017 grade boundaries trend analysis

Any comparison of grade boundary trends over the Spring 2014 to Spring 2017 cycles must be seen in the context of the changes made to the Civil Litigation assessment for Spring 2017, which have already been set out at 5.6 above. Hence this is not a true like-for-like comparison, but it is of interest nonetheless in assessing the impact of the new form of assessment on the distribution of gradings.

5.10.1 As with the results for Criminal Litigation, there is a decline in the proportion of candidates achieving an Outstanding grade in Spring 2017 compared to the average across Spring 2014 to Spring 2016 (down 7%). The decline from Spring 2016 to Spring 2017 is quite marked (down 10.9%), with a smaller decline in the percentage of candidates graded Very Competent (down 3.9%). The 27.7% graded Competent in Spring 2017 is the highest across the four assessment cycles in the data set.

[Figure: Civil Litigation grade boundaries trend analysis Spring 2014 to Spring 2017: % of students per grade category (Outstanding / Very Competent / Competent / Not Competent).]


6. BAR TRANSFER TEST RESULTS

The results for Bar Transfer Test ('BTT') candidates attempting the Spring 2017 BTT assessments were considered by the Subject Exam Boards and the Final Board. In Criminal Litigation and Civil Litigation, BTT candidates attempted the same paper as the BPTC candidates. Candidates for Professional Ethics attempted a separate BTT Professional Ethics paper (due to the different timescale required for the BTT, for Spring 2017 only). For the BTT Professional Ethics assessment, the pass mark determined through the standard setting process was 21 out of 60 (equivalent to 35%). This recommendation was accepted by the Final Board, which also endorsed the recommendation that no interventions were required in respect of the BTT Professional Ethics assessment.

6.1 BTT Spring 2017 results

Subject               Number of BTT candidates    Spring 2017 post-intervention passing rate (%)

Professional Ethics   81     19.8
Civil Litigation      53     39.6
Criminal Litigation   62     61.3


7. COMPARING POST-INTERVENTION PASSING RATES ACROSS SUBJECT AREAS SPRING 2014 TO SPRING 2017

7.1 MCQ/SAQ passing rates Spring 2014 to Spring 2017

This table shows the all-Provider passing rates for the Spring sit assessments in Professional Ethics (SAQ paper only for Spring 2014 to Spring 2016), Criminal Litigation (MCQ paper only for Spring 2014 to Spring 2016) and Civil Litigation (MCQ paper only for Spring 2014 to Spring 2016). The data for Spring 2017 shown here is derived from the new format assessments detailed at 1.3 above; hence this is not a straight like-for-like comparison, but rather an indication of the impact of the new form of assessment introduced in Spring 2017.

A number of trends are noticeable. The passing rate in the Civil Litigation MCQ assessment rose year-on-year from Spring 2014 to Spring 2016 but fell sharply with the introduction of the new format 75 MCQ paper in Spring 2017. The passing rate for the Professional Ethics SAQ assessment has oscillated somewhat but also declines sharply following the introduction of the new format six SAQ paper in Spring 2017. Criminal Litigation was comparatively stable from Spring 2014 to Spring 2016 but likewise fell with the new 75 MCQ format paper in Spring 2017, although it retains the highest passing rate of the three subjects (but see comments at 9.1, below, regarding the effectiveness of distractors in the Criminal Litigation assessment). Over the four cycles of assessment, Criminal Litigation enjoys the highest average passing rate at 82.9%, followed by Civil Litigation at 68.6%, with Professional Ethics some way behind on 63%. There is a considerable range within each subject area between the highest and lowest passing rates over these four cycles: for Civil Litigation, for example, the range is 13.9%, and for Professional Ethics 13.2%.

[Figure (7.1): Post-intervention pass rate trend analysis Spring 2014 to Spring 2017 (Ethics SAQ, Civil MCQ, Criminal MCQ; pass rate %).]

7.2 Combined passing rates Spring 2014 to Spring 2017

As an alternative point of comparison to that provided at 7.1 (above), this table shows the all-Provider passing rates for the Spring sit assessments in Professional Ethics (MCQ and SAQ combined for Spring 2014 to Spring 2016), Criminal Litigation (MCQ and SAQ combined) and Civil Litigation (MCQ and SAQ combined). The data for Spring 2017 shown here is derived from the new format assessments detailed at 1.3 above; hence this is not a straight like-for-like comparison, but rather an indication of the impact of the new form of assessment introduced in Spring 2017.

On this basis of comparison, the impact of the SAQ passing rates on the combined passing rates for Criminal and Civil Litigation in the period Spring 2014 to Spring 2016 becomes more evident. Criminal Litigation continues a three-year trend of rising passing rates, but both Civil Litigation and Professional Ethics show a decline in passing rates with the introduction of the new formats for assessment. Over the four cycles of assessment, Criminal Litigation enjoys the highest average passing rate at 71%, followed by Professional Ethics at 62.5%, and Civil Litigation a close third on 59.5%. The range within each subject area between the highest and lowest passing rates over these four cycles is 15.7% for Criminal Litigation, 13.5% for Professional Ethics, and 4.8% for Civil Litigation.

7.3 Grade boundaries Spring 2017

Grade             Ethics Spring 2017    Civil Spring 2017    Criminal Spring 2017

Outstanding       0.1      5.2     6.7
Very Competent    14.2     27.4    43.5
Competent         46.7     27.7    28.0
Not Competent     44.8     39.8    21.8

As has been indicated above at 3.6, 4.9 and 5.9, the percentage of candidates securing an Outstanding grade in the Spring 2017 assessments is low or negligible across all three subject areas, with consequent increases in the grade bands below.

[Figure (7.2): Post-intervention pass rate trend analysis Spring 2014 to Spring 2017 (Ethics overall, Civil overall, Criminal overall; pass rate %).]


8. COMPARING SPRING 2017 RESULTS ACROSS PROVIDERS

8.1 Spring 2017 pre-intervention passing rates by Provider

Providers are ordered left to right according to the average pre-intervention passing rate achieved by their cohorts across all three subject areas in the Spring 2017 centralised assessments. BPP Manchester emerges as the highest performing Provider cohort, with an average passing rate across the three subject areas of 84.6%. This compares with Northumbria, which recorded the lowest average passing rate across the three subject areas at 52.1%, a range of 32.5%.

[Figure: Knowledge areas Spring 2017, pre-intervention pass rates by Provider (Ethics, Civil, Criminal; pass rate %).]

8.2 Spring 2017 post-intervention passing rates by Provider

8.2.1 Providers are ordered left to right according to the average post-intervention passing rate achieved by their cohorts across all three subject areas in the Spring 2017 centralised assessments. Given the modest range of interventions approved by the Final Board, there is little change from the data at 8.1, and the order of Providers remains unchanged. BPP Manchester therefore remains the highest performing Provider cohort, with an average passing rate across the three subject areas of 84.6%, and Northumbria the lowest at 53.1%. It is notable that BPP Manchester has the highest performing cohort in all three subject areas, whilst Cardiff is second in two and third in the other. The highest cohort passing rate in any of the centrally examined subject areas was achieved by BPP Manchester, with a 96.1% passing rate in Criminal Litigation. The weakest Provider cohort performance in any centrally examined subject area was recorded by ULaw Leeds, where only 28.6% passed Professional Ethics.

8.2.2 If the Provider cohort results are aggregated to show performance by Provider rather than by study centre (i.e. combining the passing rates across all branches operated by a Provider), Cardiff emerges as the strongest Provider in each subject area and overall; see table below. BPP emerges as the second strongest, followed by City.

[Figure: Knowledge areas Spring 2017, post-intervention pass rates by Provider (Ethics, Civil, Criminal; pass rate %).]

Provider       Ethics Spring 2017    Civil Spring 2017    Criminal Spring 2017    Average

Cardiff        74.2    75.4    92.1    80.6
BPP            67.0    63.5    87.0    72.5
City           55.0    64.6    75.7    65.1
NTU            50.8    64.3    77.6    64.2
ULaw           52.5    56.3    73.7    60.8
UWE            37.1    61.2    72.4    56.9
Manchester     56.1    36.3    72.0    54.8
Northumbria    41.9    43.6    73.8    53.1

8.2.3 If these passing rates are further aggregated by Mission Group (i.e. Private Providers, Pre ’92 Universities and Post ’92 Universities), the results are as follows:

Mission Group    Ethics Spring 2017    Civil Spring 2017    Criminal Spring 2017    Average

Pre '92          64.62    69.98    83.90    72.8
Private          59.74    59.90    80.36    66.7
Post '92         46.49    51.33    73.95    57.3


8.3 Changes in Provider passing rates Spring 2016 to Spring 2017

This table brings together data already considered at 3.4.1(b), 3.5.1(b), 4.7.1(b), 5.6.1(b) and 5.7.1(b), showing the extent to which Provider cohorts have been able to adapt to the new formats for centralised assessments introduced for Spring 2017, the impact of standard setting and, in respect of Professional Ethics, centralised marking. No Provider achieved an increase in passing rates across all three subject areas when compared to Spring 2016, whilst two Providers, ULaw Birmingham and MMU, record a decline across all three subjects year-on-year. The average change across the subject areas is: Professional Ethics down 12%; Civil Litigation down 1.8%; and Criminal Litigation up 6.34%. Looking at the change in passing rates across all three subjects at each Provider (i.e. taking the rises and falls in passing rates together) shows that BPP Birmingham achieved the biggest cumulative increase compared to Spring 2016, up a total of 41.7%. By contrast, ULaw Birmingham seems to have had the greatest challenge in adapting to the new formats, with passing rates across all three subjects falling by a total of 45%. At subject level, the highest year-on-year improvement was achieved by BPP Birmingham in respect of Criminal Litigation (up 24.4%), whilst the biggest reverse was experienced by UWE in respect of Professional Ethics (down 32.6%).

[Figure: Changes in Provider pass rates, Spring 2016 vs Spring 2017 (Ethics, Civil, Criminal; % change by Provider).]

8.4 Spring 2017 pass rates by mode of study

Spring 2017 mode of study pass rates (%) (PT = part-time, FT = full-time)

Provider          Ethics PT   Ethics FT   Civil PT   Civil FT   Criminal PT   Criminal FT

BPP Birmingham    N/A     55.4    N/A     43.9    N/A      79.2
BPP Leeds         47.1    77.3    75.0    65.9    75.0     97.7
BPP London        51.2    69.1    45.2    67.5    72.7     79.7
BPP Manchester    N/A     78.6    N/A     79.2    N/A      96.1
Cardiff           N/A     74.2    N/A     75.4    N/A      92.1
City              0.0     56.4    54.1    65.6    40.0     77.1
MMU               90.0    51.4    40.0    35.7    63.6     73.4
Northumbria       0.0     42.9    25.0    44.1    100.0    73.0
NTU               N/A     50.8    N/A     64.3    N/A      77.6
ULaw Birmingham   80.0    65.4    20.0    58.6    33.3     63.6
ULaw Leeds        N/A     28.6    N/A     57.1    N/A      81.0
ULaw London       52.0    63.2    45.9    62.8    75.0     81.7
UWE               60.0    31.8    84.6    57.6    71.4     72.7

This table disaggregates full-time and part-time candidates' passing rates by Provider for the Spring 2017 assessments. Where a Provider is shown as "N/A", no part-time mode is offered. A passing rate of "0.0" indicates that no part-time candidates passed.

Generally, part-time cohorts achieve lower passing rates than full-time cohorts. The biggest negative variance is in Criminal Litigation, where part-time passing rates average 16% below those for full-time candidates, followed by Professional Ethics (-9.6%) and Civil Litigation (-8.5%). Taking the average part-time cohort passing rate across the three centrally assessed subject areas, UWE has the strongest Spring 2017 part-time cohort, with an average passing rate of 72%, whilst City has the weakest at 31%. In terms of part-time cohort performance relative to the full-time cohort, UWE again records the biggest differential: its part-time cohort has an average passing rate across the three centrally assessed subject areas that is 18% higher than its full-time cohort average. By contrast, at City, part-time candidates achieved an average passing rate 35% below that achieved by full-time candidates. Taking the average passing rate across all three centrally assessed subject areas and across all Providers, part-time cohorts have passing rates that average 9% below those achieved by full-time cohorts.
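The UWE and City comparisons quoted above can be reproduced from the table's rows with simple averaging (figures transcribed from the table; rounded to whole percentages as in the text):

```python
# Part-time (PT) vs full-time (FT) pass rates across the three subjects,
# using the UWE and City rows from the mode-of-study table above.
uwe_pt, uwe_ft = [60.0, 84.6, 71.4], [31.8, 57.6, 72.7]
city_pt, city_ft = [0.0, 54.1, 40.0], [56.4, 65.6, 77.1]

def avg(xs):
    return sum(xs) / len(xs)

uwe_avg_pt = round(avg(uwe_pt))                 # 72: strongest part-time cohort
uwe_gap = round(avg(uwe_pt) - avg(uwe_ft))      # +18: PT above FT at UWE
city_avg_pt = round(avg(city_pt))               # 31: weakest part-time cohort
city_gap = round(avg(city_pt) - avg(city_ft))   # -35: PT below FT at City
```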


9. GENERAL POST EXAM BOARD ISSUES

9.1 MCQs with weak distractors

9.1.1 The CEB is concerned to ensure that the assessments for the centralised examinations operate effectively and present candidates with an appropriate level of challenge. For Criminal Litigation and Civil Litigation (the two areas assessed by means of MCQs), candidates are presented with a brief statement or fact scenario and must select the best or only correct answer from a range of four possible answers (the incorrect answers are known as 'distractors'). An effective MCQ therefore presents the candidate with the correct answer and three plausible wrong answers, i.e. three effective distractors. The data presented to the examination Boards indicates what percentage of candidates overall chose each possible answer across the MCQs in each assessment. It also shows whether the correct answer was chosen predominantly by stronger or by weaker candidates.

9.1.2 Where this data indicates that two of the distractors attract very few candidates, there is a danger that candidates are effectively left with the much less challenging task of opting for one of two plausible answers. The data available to the Board also indicates the extent to which each MCQ has been effective in discriminating between stronger and weaker candidates (referred to as the point biserial for each MCQ). Where this is satisfactory, there is some reassurance that the MCQ is operating as an effective assessment instrument, but MCQs with weak distractors may nevertheless be contributing to a higher passing rate overall, thus undermining the rigour of the assessment.
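The point biserial mentioned here is a standard item-analysis statistic: the correlation between answering one item correctly and the candidate's total score. A minimal sketch of the computation is below; the data layout (one 0/1 flag per candidate) is hypothetical, and this is not presented as the CEB's actual implementation.

```python
import statistics

def point_biserial(item_correct, total_scores):
    """Point-biserial correlation between a 0/1 item outcome and total
    test scores: (M_correct - M_all) / sd * sqrt(p / q).

    item_correct : list of 0/1 flags for one item, one per candidate
    total_scores : list of total test scores, one per candidate
    (Hypothetical data layout, for illustration only.)
    """
    n = len(item_correct)
    p = sum(item_correct) / n  # proportion answering the item correctly
    q = 1 - p
    mean_correct = statistics.mean(
        s for flag, s in zip(item_correct, total_scores) if flag
    )
    mean_all = statistics.mean(total_scores)
    sd = statistics.pstdev(total_scores)  # population standard deviation
    return (mean_correct - mean_all) / sd * (p / q) ** 0.5
```

A positive value indicates that stronger candidates tend to get the item right; values near zero suggest the item does not discriminate.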

9.1.3 The data for the Criminal Litigation examination shows that there were forty-two out of seventy-five MCQs where two distractors each attracted fewer than 6% of candidates. Of these, there were thirteen MCQs with two distractors each attracting fewer than 2% of candidates.

9.1.4 Comparable data for the Civil Litigation assessment shows that there were twenty-five out of seventy-five MCQs where two distractors each attracted fewer than 6% of candidates. Of these, there were only four MCQs with two distractors each attracting fewer than 2% of candidates.

9.1.5 Hence, whilst there is evidence of MCQs with at least two weak distractors in the Civil Litigation assessment, the issue is rather more pressing in Criminal Litigation. This is a matter for the CEB to reflect on, whilst acknowledging the view that Civil Litigation may, as a subject area, be better suited to questions on tactics, which lend themselves to challenging single best answer questions with a more plausible range of distractors; this may be part of the difficulty facing the question writers in Criminal Litigation.
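The 6% and 2% screens applied at 9.1.3 and 9.1.4 can be expressed as a simple filter. The data layout below is hypothetical; only the thresholds are taken from the report.

```python
def weak_distractor_items(distractor_counts, n_candidates, weak_pct=6.0):
    """Flag MCQs where two or more distractors each attract fewer than
    weak_pct per cent of candidates (cf. the 6% and 2% screens above).

    distractor_counts : {item_id: [count, count, count]} giving how many
        candidates chose each of the item's three distractors
        (hypothetical layout, for illustration only).
    n_candidates : total number of candidates sitting the paper.
    """
    threshold = n_candidates * weak_pct / 100
    return [item for item, counts in distractor_counts.items()
            if sum(c < threshold for c in counts) >= 2]
```

Running the same filter twice, with `weak_pct=6.0` and `weak_pct=2.0`, reproduces the two-level analysis reported for each subject.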


9.2 Eradicating clerical errors

9.2.1 On 5 July the BSB received communications from two Providers expressing concern over the marks awarded to certain candidates who had sat the Spring 2017 Professional Ethics examination. The incidents reported related to a small number of candidates' SAQ answers being incorrectly marked as not having been attempted ('DNA') when in fact the candidate had provided an answer.

9.2.2 A full check of every DNA script was undertaken by BSB support staff, indicating that 214 candidates had one or more answers marked as DNA, in respect of 349 answers in total. The checks revealed three instances of spreadsheet input errors by the markers (a marker had wrongly input marks into the cell designated for identifying a DNA, the formula then assuming that the question had not been attempted). None of the candidates' overall marks for Ethics were affected by this error.

9.2.3 Thirty-two scripts were identified as warranting further expert analysis by a CEB examiner. Of the thirty-two sent for analysis and marking, ten were raised as queries because answers were not clearly identifiable. Following review by a member of the Professional Ethics examiner team, twenty-two candidates had their marks increased, including six who moved from a fail to a pass. No candidate's marks were reduced as a result of this investigation. A full check of all scripts was conducted but did not reveal any further issues.

9.2.4 The CEB and BSB fully accept that there is no acceptable level of clerical error where candidates' results are concerned, and steps will be taken to help ensure the errors encountered in this sitting of the centralised assessments do not recur. These include:

• manual checking of any answers recorded on the system as DNA;

• manual checking of any mark recorded as anything other than a whole number or a 0.5;

• manual checking of every script to ensure that answers awarded ‘0’ have been assessed as such and reclassified as DNA if there is no evidence of any answer being provided;

• refining the spreadsheet by locking certain cells so that marks cannot be recorded where only text is required.
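The second check above (a mark must be a whole number or a 0.5) could be implemented along these lines; this is an illustrative sketch, not the BSB's actual spreadsheet logic.

```python
def validate_mark(cell):
    """Sketch of the mark-validation rule in 9.2.4: a recorded mark must
    be a non-negative whole number or half mark; anything else, including
    text such as 'DNA', is flagged for manual checking.
    (Illustrative only; not the BSB's actual spreadsheet logic.)"""
    try:
        value = float(cell)
    except (TypeError, ValueError):
        return False  # text, blanks etc.: flag for manual checking
    return value >= 0 and (value * 2).is_integer()
```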

9.3 Use of half marks in the Professional Ethics marking scheme

9.3.1 Representations were made to the BSB/CEB regarding the use of half marks in the marking scheme issued for the Professional Ethics SAQ assessment. These related to the fact that the rubric on the examination paper advised candidates that: "The number of individual points that you are expected to make in answer to a sub-part is indicated by the total number of marks available for that sub-part, bearing in mind that only full marks are available in this paper."

9.3.2 The argument was put forward that this instruction could have misled some candidates into making only one point for each available mark (for example, four points where a question sub-part was shown as attracting four marks), when in fact more points should have been made to ensure that all available half marks were awarded.

9.3.3 The issue was considered at a post-exam Board review meeting attended by the Professional Ethics examining team, the Chair of the CEB, the BSB Director and key BSB support staff. The evidence from those who had looked at a wide range of candidate scripts was that candidates had, by and large, attempted to make as many points as possible in answering the questions. The notion that candidates would deliberately refrain from making additional good points, believing that they had already secured all the marks available, was not regarded as plausible. For the avoidance of doubt, however, it was agreed that the rubric for the Professional Ethics SAQ examination paper would be clarified, for the Summer 2017 examination and thereafter, in these terms: "You should aim to provide a comprehensive answer to the question and not limit yourself to making the same number of points as the marks available."

9.4 Allowing candidates additional reading time in the Professional Ethics SAQ assessment

9.4.1 As noted above at 3.2.6 to 3.2.8, the Final Exam Board considered whether the lack of specific additional reading time might have been a factor affecting candidate performance in the Professional Ethics SAQ assessment.

9.4.2 The Spring 2017 Professional Ethics assessment had a word count of 2,164, which compares favourably (from a candidate's perspective) with the mock assessment (2,352 words) and the Spring 2016 Professional Ethics assessment (MCQ and SAQ combined: 5,600 words). Interestingly, the SAQ element of the Spring 2016 Professional Ethics assessment comprised 1,376 words for three SAQs; pro rata, it would therefore have been the equivalent of a six SAQ paper with a word count of 2,752, considerably more than the word count for the Spring 2017 assessment.
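The pro-rata comparison in 9.4.2 can be verified with simple arithmetic (figures taken directly from the paragraph above):

```python
# Pro-rata word-count comparison from 9.4.2 (figures as given in the report).
spring16_saq_words, spring16_saq_count = 1376, 3
equivalent_six_saq_words = spring16_saq_words / spring16_saq_count * 6
spring17_words = 2164

# The Spring 2017 six-SAQ paper was shorter than a pro-rata six-SAQ
# version of the Spring 2016 SAQ element.
assert equivalent_six_saq_words == 2752.0
assert spring17_words < equivalent_six_saq_words
```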

9.4.3 On this basis:

(a) it is hard to see any evidence that the mock assessment under-represented the amount of material a candidate would have been expected to deal with (indeed the mock assessment was longer than the Spring 2017 assessment);

(b) the Spring 2017 assessment was, pro rata, shorter than the Spring 2016 SAQ assessment. Furthermore, the evidence from the pilot testing of the Spring 2017 Professional Ethics assessment did not suggest that it was more onerous than the Spring 2016 assessment in terms of the time needed for completion.

9.4.4 The view of the CEB is that properly prepared candidates should have been able to complete the Spring 2017 Professional Ethics SAQ assessment in the time allowed. Providers need to make clear to candidates the need to be disciplined in allocating examination time to each question.


However, having met with Provider representatives on 4 September 2017, the CEB has agreed to allow 2.5 hours for the Professional Ethics examination for the 2017/18 academic year and will review the impact of this change accordingly.

9.5 Consideration of Provider feedback at the CEB Boards

9.5.1 With changes to the format of assessments and the centralisation of marking for the Professional Ethics SAQ assessment, the CEB has reviewed the role played by Provider feedback as part of the evidence base available to its Exam Boards.

9.5.2 For Civil and Criminal Litigation, Providers will be invited to provide feedback to the relevant Chief Examiner on the question paper, with reference to the proposed correct answers. These comments, and the response from the Chief Examiner, will be considered by the relevant Subject Boards and the Final Exam Board.

9.5.3 For Professional Ethics, Providers will be invited to comment on the version of the question paper supplied to candidates (not the marking guidance developed in conjunction with the central marking team). Feedback from Providers should relate to issues such as syllabus coverage, typographical errors, clarity of drafting or question flaws. Providers will not be privy to, or invited to comment on, the proposed marking guidance, as this is developed iteratively in dialogue between the Professional Ethics examining team and the central marking team.

9.6 Giving feedback to candidates on their Professional Ethics SAQ examination scripts

Following the release of the Spring 2017 Professional Ethics examination results, queries arose as to the extent to which staff at Provider institutions could assist candidates preparing to retake at the Summer 2017 sitting. Specifically, there were queries regarding the extent to which candidates could be permitted access to previous question papers, marking schemes and their own scripts.

9.6.1 Guidance was given to Providers on this issue in 2011. The key points are restated here: it is acceptable, in giving feedback to a student who has failed a summative assessment, to give an indication of the proportion of [MCQs or SAQs] answered correctly and to provide feedback on general areas of law where there were weaknesses and where further work is required. The solutions to examination questions from the main Question Bank must never be revealed to students.

9.6.2 The CEB has resisted requests to equip staff at Provider institutions with detailed marking schemes for the Professional Ethics assessment for a number of reasons:


(a) Perfectly adequate feedback can be provided without a forensic explanation to candidates of exactly where each mark or half mark may or may not have been awarded;

(b) The marking scheme is developed through a dialogue between the Professional Ethics examiner team and the central marking team, a dialogue to which staff at Provider institutions are not privy. It would be invidious for staff at Provider institutions to be drawn into discussions with candidates about whether they thought the marking scheme was fair and appropriate. There is a real prospect of this if those giving the feedback have the detailed mark scheme and candidates are aware that they have it.

(c) The CEB has genuine concerns that any risk of detailed marking schemes gaining wider circulation should be minimised, to preserve the integrity of the question bank from which the Professional Ethics SAQs are drawn. This is not just a matter of the resources that need to be invested in the production of new questions; the Professional Ethics syllabus is relatively narrow and there are a limited number of topics that can be examined within the medium of an SAQ. There is, inevitably and despite the best efforts of the question writers, a degree of predictability in the topics covered, and genuine concern amongst the Professional Ethics examiner team that providing more detailed marking schemes would undermine the extent to which questions could be recycled.

9.6.3 Recognising fully the need for candidates who have failed to receive useful feedback, the CEB met with Provider representatives on 4 September 2017 and agreed that, for the 2017/18 assessment cycle, the Professional Ethics examiner team would:

(a) issue Providers with the updated general marking guidance (not the agreed marking scheme) used by the central marking team;

(b) produce 'suggested solutions' for the updated mock assessment, with explanations as to why the answers were good;

(c) produce an examination Board report comprising information gathered from markers, with commentary such as 'the majority of candidates made a particular error' or 'candidates failed to mention X'.

Professor Mike Molan Chair of the Central Examination Board 25 October 2017