
Clinical competency exercises: some student perceptions

S. Rolland, R. Hobson and S. Hanwell
School of Dental Sciences, Newcastle upon Tyne, UK

Abstract: Clinical competency assessments are an important part of dental curricula—to satisfy national requirements, maintain professionalism and ensure graduates are prepared for independent clinical practice. It has been observed within Newcastle Dental School (UK) that students tend to undertake the majority of their competency assessments at a very late stage. A questionnaire was designed to investigate student perceptions of two different competency assessment processes (formative structured clinical operative tests vs. summative grading), assess why they chose to undertake competency exercises at a particular time, investigate how well prepared they felt, and finally to evaluate potential barriers that students perceived within the competency process. Data regarding the timing of competency assessments and grades achieved were analysed. Fifty-nine per cent of students reported preferring the summative grading system. Most students felt that they undertook their competency assessments at about the right time (54%: conservation department, 66%: paediatric department) and the majority felt adequately prepared to undertake each exercise (68–98%). The greatest barrier stated to undertaking competency assessments was a lack of suitable patients, both on whom to practise and on whom to undertake the exercise. No correlation was found between when students took summative assessments and the grades achieved. Therefore, we must encourage students to undertake their competency assessments once they have accrued sufficient clinical experience and reassure them that timing has little effect on the grade achieved. We should assist them to locate suitable patients wherever possible.

Key words: competency-based education; educational assessment; dental students.

© 2007 The Authors. Journal Compilation © 2007 Blackwell Munksgaard
Accepted for publication, 24 January 2007

Introduction

Clinical competency has been defined as the behaviour expected of newly qualified independent practitioners. This behaviour incorporates understanding, skills and values in an integrated response to the full range of circumstances encountered in general professional practice (1).

The General Dental Council of the United Kingdom document ‘The First Five Years’ (2) states the importance of ensuring that prior to graduation students can demonstrate that their clinical skills render them fit for independent practice. Mutual recognition of dental qualifications in Europe has necessitated the publication of European guidelines which should also be considered (3). Students’ clinical and professional skills must be assessed adequately prior to proceeding to the final examination. This assessment process can be formative or summative, but competency must be demonstrated within a number of disciplines prior to qualification and inclusion on the dental register. Clinical performance, however, is often hard to measure (4) and techniques used to assess competency must be transparent, robust and accountable (5, 6).

Within Newcastle Dental School, several assessment methods are employed to assess clinical competency in different disciplines. Within the Department of Conservative Dentistry (CONS), assessment of competency is a summative process and grades obtained from the three competency exercises used contribute equally towards the final examination grade. Competency assessments are criterion marked, with a number of marks assigned to each part of the procedure. If performance is deemed unsatisfactory, assessments can be retaken, but a maximum grade of only 50% can be achieved in the second attempt. In the Department of Child Dental Health (CDH), competency assessments are formative but must be completed prior to being entered for the finals examination. These are classified as structured clinical operative tests (SCOTs) (4) and are straightforward exercises designed to show basic clinical competency within core areas, incorporating assessment of operative and non-operative skills (7) (e.g. attitude, communication). SCOTs can be retaken if necessary without penalty and a simple competent/not competent grade is awarded.

Due to the need to identify suitable patients, both for gaining experience and for the final competency exercise, students must choose an appropriate time to undertake their assessments. However, it has been observed that within both restorative and paediatric departments, students often delay undertaking competency exercises until the last few available weeks, presumably because they assume they are more likely to pass and achieve a higher grade at a later stage of their studies. This clearly increases anxiety for staff and students, because it leaves insufficient time for the assessment to be repeated if required.

A shortage of appropriate patients for student dental treatment is clearly a barrier to the education and assessment process. Blinkhorn (8) identified three reasons why there may be a shortage of patients for student care: first, in the areas where dental schools are situated there may only be a small proportion of the resident population seeking primary dental care; second, dental schools are areas of clinical expertise, where referred cases are generally complex (and therefore may be unsuitable for undergraduate students); and third, the problem of car parking surrounding inner city locations. Self-confidence is known to be related to clinical experience (9) and therefore a lack of patients on whom to practise may affect confidence, which in turn may result in students delaying assessments. Students may also postpone their assessment to try and select particular staff members, because the variable nature of patient-based clinical assessments can result in significant individual variation between assessors (5).

The aim of this study was to investigate student perceptions of the assessment process, and what factors were important in influencing when they decided to undertake assessments. Four basic research questions were identified.
• Timing of competency assessments—when were assessments taken and did this affect the grade achieved?
• Preparation prior to competency assessments—did students feel adequately prepared prior to undertaking the assessment?
• Barriers to undertaking competency assessments—what factors affected when students undertook the assessment exercises?
• Grading systems (formative/summative)—which did students prefer and why?

Methods

A questionnaire (outlined in Table 1) was designed by staff and students within the conservation and paediatric departments in order to answer the four research questions and distinguish between the different assessment processes. Fourth year students were consulted regarding the design and content of the questionnaire. This took the form of a focus group type discussion and helped to identify the main perceived problems within the competency assessment process, particularly regarding barriers to undertaking competency assessments; it also highlighted the assumption that higher grades may be achieved if the competency exercise was undertaken at a later stage.

The Department of Child Dental Health SCOTs cover a range of clinical activities (fissure sealant, oral hygiene instruction, impressions, diet history, adjusting a removable appliance) and are designed to test the students’ competency in communication skills, team working, operative skills, cross infection control and knowledge. For example, in placing a fissure sealant, the student is assessed on preparation of the surgery, explanation of the procedure, placement of the fissure sealant, communication with the assistant and respecting cross infection control procedures. It should be possible for students to undertake the majority of these assessments during years 3 and 4, except impressions and adjusting a removable appliance, which may be completed in years 4 and 5. In the conservation department, competency exercises cover a range of clinical procedures (Class II cavity and restoration, molar endodontics and a posterior crown). The Class II exercise should be completed in years 3 or 4 and all other conservative dentistry assessments around the start of year 5. The emphasis in these exercises is on clinical skills, although failure to act professionally would result in the student failing the exercise. All aspects of the procedure are criterion marked, following which a grade is awarded. For example, within molar endodontics, the student is graded on their ability to achieve adequate isolation and access, working length determination, canal preparation and obturation. Within each category are subcategories and statements such as ‘Is the root filling correctly extended?’ to assist marking. Students have access to the marking sheets both before and after the exercise, so they know what they are aiming to achieve, and have both written and verbal feedback after completion.

The questionnaire was distributed to all final year students (n = 63) with a covering letter. It was administered between the end of final examinations and graduation, to all students who had passed. The questionnaire was anonymous, so no reminders or follow-up questionnaires were administered. All questions required a box to be ticked for response, with an option to add additional comments if considered appropriate. Responses were selected on scales which we hoped would promote an honest response, but also provoke further comment.

Categorical responses were collated and analysed using Minitab statistical software (basic descriptive statistics, chi-squared analysis, least squares regression). Responses to written open-ended comments made on the questionnaires were collated, read and analysed qualitatively. General themes were identified by the authors from the comments made and comments classified into those themes—for example, positive and negative comments regarding formative and summative assessment techniques.
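
The analysis itself was run in Minitab; as an illustration of the kind of chi-squared comparison reported in the Results, here is a minimal Python sketch using SciPy. The counts are reconstructed from the timing percentages in Table 1 (n = 56 per clinic), so they are approximate, and the exact cell grouping used in the original analysis is not stated; treat this as a sketch under those assumptions, not a reproduction of the published analysis.

```python
# Minimal sketch (not the authors' Minitab analysis) of a chi-squared
# comparison of assessment-timing responses between the two clinics.
# Counts are reconstructed from the percentages in Table 1 (n = 56
# respondents per clinic), so they are approximate.
from scipy.stats import chi2_contingency

# Rows: CONS, CDH. Columns: very early, early, about right, late, very late.
timing = [
    [1, 3, 30, 15, 7],   # CONS: 1.8, 5.4, 53.6, 26.8, 12.5% of 56
    [1, 1, 37, 12, 5],   # CDH:  1.8, 1.8, 66.1, 21.4, 8.9% of 56
]

chi2, p, dof, expected = chi2_contingency(timing)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.2f}")
# These reconstructed counts give P close to the reported P = 0.66.
# Several expected counts fall below 5, so in practice adjacent
# categories might be pooled before testing.
```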

Departmental assessment databases were consulted to obtain data regarding when individual students undertook assessments, and the grades that they obtained. Questionnaires were anonymous to encourage more honest responses; therefore, no direct correlations could be sought between database and questionnaire data, although the results of the two data sources could be compared.

Results

Results from questionnaire

The questionnaire was administered to 63 students and responses were collected from 56 (response rate 89%). This was considered adequate to give meaningful data. The numerical data will be presented by considering the proposed research questions:

Timing of assessments

The majority of students (54% CONS, 66% CDH) felt that the time at which they took their assessments was ‘about right’. There was a greater tendency towards feeling that they undertook their assessments later in the conservation department (CONS 39% late/very late), although chi-squared analysis of the frequency data shows no difference between clinics (P = 0.66). When asked whether they felt that the time they took assessments affected the grade they achieved, only 32% felt that this was the case in the paediatric department, in contrast to 63% feeling that timing had influenced the grade in CONS, although only 21% felt that this was definitely the case. Chi-squared analysis indicated that there was a difference between the two clinics regarding the effect of timing on grade (P < 0.05).

TABLE 1. Questionnaire used, with results

Timing
In general, do you feel that the time you sat your assessments was:
                           Very early   Early   About right   Late   Very late
CONS (% respondents)       1.8          5.4     53.6          26.8   12.5
CDH (% respondents)        1.8          1.8     66.1          21.4   8.9

Do you feel that the time at which you sat your assessments affected the grade you achieved?
                           Definitely   Slightly   No
CONS (% respondents)       21.4         41.1       37.5
CDH (% respondents)        7.1          25.0       67.9

Preparation
Did you feel adequately prepared for each of the following assessments? Results: see Fig. 1

Factors influencing timing
Did any of the following affect the time at which you sat your assessments?
                           Pts for practice   Pts for assessment   Self-confidence   Supervisors present   Other
CONS mean (% respondents)  45                 48                   29                7                     1
CDH mean (% respondents)   14                 33                   8                 2                     1

Grading system
Which grading system did you prefer? CDH: 41.1% / CONS: 58.9%

General
Do you feel that our competency system is a fair way to test your clinical skills? Yes: 78.6% / No: 21.4%

Preparation prior to assessments

The majority of students felt adequately prepared to undertake their assessments (CONS 68–84%, CDH 82–98%). Not surprisingly, there was a noticeable increase in those who did not feel adequately prepared to undertake their assessments in the more difficult procedures (Fig. 1), that is crown preparation (32%), molar endodontics (29%), adjusting a removable appliance (18%) and a Class II restoration (16%).

Factors affecting timing of assessments

When asked what factors affected the time that they took the assessments, the most commonly reported problem was finding suitable patients on whom to practise and undertake the assessments; 76% of all problems could be attributed to this cause. Many students reported that more than one factor had influenced when they took their assessments, hence the data are reported both according to the proportion of total problems reported and the number of respondents who reported problems (Table 1). In the conservation department the most commonly reported problem was finding appropriate patients (72% of problems reported in CONS, reported by 48% of respondents), and in addition self-confidence (23% of problems in CONS, reported by 29% of respondents) was identified as an issue. These trends were repeated in paediatric dentistry (80% of CDH reported problems were due to lack of patients, reported by 33% of respondents; 13% lack of self-confidence, reported by 8% of respondents), although the number of students reporting problems was reduced. Chi-squared analysis indicated a significant difference between the two departments (P < 0.05) in the number of respondents reporting problems.

Preferred grading system

Fifty-nine per cent of respondents preferred the summative grading system. Reasons cited for this preference included preferring a grade to a competent/not competent category because ‘having grading system encourages you to put extra effort in’. Students appreciated the contribution the competency assessment grade made to the final examination grade because being able to carry grades forward ‘made it worthwhile’ and ‘it’s good to have good marks to go into finals with’, and several suggested that SCOTs should count towards the final examination grade.

General issues

An overwhelming majority (79%) of students felt that the assessment system was a fair way of assessing competency of clinical skills. Positive comments included ‘because they test the things we’ve learnt & not too much’, ‘good to have a checklist of things that (we) need to do’ and ‘it’s good to get feedback’. The anxiety related to competency exercises was recognised, but as a positive feature in the comment ‘dentists work under pressure’. Students who felt the system was not a fair one gave reasons such as ‘doesn’t take bad day into account’ and the fact that it ‘is a one off thing’ where ‘much (is) dependant on patient for assessment’. The problem of perceived inconsistency between examiners was identified, with comments such as ‘a lot seemed to depend on clinicians marking assessment—some a lot harsher than others!’ and ‘all examiners have a different opinion on which grades are worth what’.

Results from student records

Collation of data from student records has enabled us to investigate the dates when students undertook these assessments (Fig. 2) and their grades (where appropriate). Grades (lower quartile, median, upper quartile) awarded in the conservation department (maximum grade = 20, pass ≥ 10) were crown preparation (14, 15, 17), molar endodontics (14, 15, 16.25) and Class II (14, 16, 17), indicating very little difference between the spread of grades for the three assessments.

There was a tendency for assessments to be undertaken earlier and over a greater spread of time within the child dental health department. Most of the assessments within the conservation department were undertaken at a very late stage. The assessments in which more students reported feeling less well prepared (Class II, crown preparation, molar endodontics, adjusting an appliance) tended to be sat later by the majority of students.

[Fig. 1. Percentage of students reporting feeling inadequately prepared for undertaking individual clinical assessments. Bar chart (y-axis: %, 0–40) covering the CONS exercises (Class II, crown prep, molar endo) and the CDH exercises (fissure sealant, oral hygiene instruction, impressions, diet history, adjust appliance).]

For the three conservative dentistry assessments where grades are awarded, no correlation (least squares regression) could be found between the dates when the assessments were taken and the final grade achieved. Surprisingly, grades obtained in the three conservation department exercises were very similar, despite a perceived difference in the difficulty of the exercises. In child dental health no grades were awarded, so no correlation could be sought between the dates the assessments were sat and grades achieved. However, it was observed that the four tests that were failed at the first attempt were undertaken between November and March of the final year.
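
The raw (date, grade) records are not published, so the following Python sketch only illustrates the form of the least squares regression described above; the (date, grade) pairs are hypothetical placeholders introduced purely for illustration.

```python
# Schematic sketch of the least squares regression described above:
# does the date an assessment was sat predict the grade achieved?
# The (date, grade) pairs below are hypothetical placeholders; the
# real analysis used the departmental assessment database records.
from datetime import date
from scipy.stats import linregress

records = [
    (date(2005, 11, 14), 15), (date(2006, 1, 23), 14),
    (date(2006, 2, 6), 17), (date(2006, 3, 13), 15),
    (date(2006, 3, 27), 16), (date(2006, 4, 24), 14),
]

start = min(d for d, _ in records)
days_since_start = [(d - start).days for d, _ in records]
grades = [g for _, g in records]

fit = linregress(days_since_start, grades)
print(f"slope = {fit.slope:+.4f} grade points per day, "
      f"r^2 = {fit.rvalue ** 2:.3f}, P = {fit.pvalue:.2f}")
# A slope near zero with a non-significant P value corresponds to the
# paper's finding of no correlation between timing and grade.
```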

Discussion

A questionnaire was utilised for this study because it allowed information to be gathered in an anonymous fashion, from a large cohort of students over a short period of time (10). The questionnaire was designed to be quick and easy to complete, but allowed open-ended input for each question, and the response rate of 89% showed that it achieved these aims. The questionnaire was not officially validated using a pilot study, but issues relating to design and content were discussed with a representative sample of fourth year students. The decision to make the questionnaire anonymous prevented opinions being matched with actual data regarding timing of assessments and grades obtained, but was felt to be the only way to obtain open and honest answers.

The time at which the questionnaire was administered was difficult to decide. It had to be after the end of the clinical term in final year, to ensure that all students had completed the competency exercises, but it was important that it did not interfere with their final examinations.

The results of this survey indicate that a large number of students were aware that they were not undertaking their assessments until too late, and that this was a particular problem in the conservation department. Therefore, clinical tutors must continue to encourage all students to undertake their assessments as soon as they have acquired the necessary skills through experience. Many students felt that the time that they took their assessments in conservative dentistry affected the grade that they then obtained, although the actual data do not support this view, showing no correlation between the time when the assessment was taken and the grade obtained. This is likely to be due to minimal variation in the grades achieved and a large number of the assessments being undertaken at a similar time in year 5. These data should be used to help persuade students that they are not likely to achieve a higher grade by leaving their assessments to the last possible moment.

Not surprisingly, students reported feeling less well prepared for the exercises that are perceived to be more difficult and also tended to undertake these assessments later. The problem of self-confidence is difficult to address, and can be reduced by a greater exposure to procedures and patients (9). However, clinical experience and confidence may not correlate with performance in simulation or written tests (9). Self-reported confidence amongst newly qualified medical students was found to be unrelated to clinical competence, and it is concerning that in some exercises there was a tendency to report a high level of confidence, whilst being assessed as clinically incompetent (11). A factor that was proposed to contribute to this discrepancy was the competitive nature of the medical field making graduates unwilling to expose fears and deficiencies. This desire to be seen to be right is less likely to influence undergraduates in a learning environment, but may have a role. In a qualitative study (12) of Newcastle medical students the authors observed that whilst possessing confidence related to a feeling of competence (actual clinical competence was not assessed), a perceived lack of confidence tended to be related to anxiety rather than to a lack of competence. Therefore, whilst a number of our students reported feeling poorly prepared for assessments, this is likely to be influenced by anxiety and may be reduced by increased exposure to clinical procedures. It is only natural that anxiety will be expressed during the assessment process, and the influence of anxiety on a reported lack of self-confidence is difficult to evaluate.

[Fig. 2. Box plot of dates on which students undertook assessments. Assessments (adjust appliance, impressions, diet history, fissure sealant, oral hygiene, Class 2, molar endodontics, crown preparation) plotted against date, from January year 3 to January year 5.]

The most significant problem perceived was a lack of appropriate patients, which overall accounted for 76% of all barriers to undertaking assessments. This problem within undergraduate dental training is not restricted to our school and has been related to: only a small proportion of the resident population seeking primary dental care; many cases referred from secondary care being complex and unsuitable for undergraduate students; and the problem of car parking surrounding inner city locations (8). One possible solution to this problem is the development of community-based ‘outreach’ schemes (8) where students undertake dental treatment within a secondary care setting. This helps to overcome a number of problems with care of patients in the dental school setting (13), provides a wide range of treatment relevant to primary care and shifts the emphasis from student education to patient care (14), therefore better preparing undergraduates for the ‘real world’. Newcastle Dental School is well advanced in the process of developing a community outreach scheme, which was initiated shortly after this questionnaire was administered, and SCOTs have been introduced to be undertaken in the outreach setting. It will be of interest to repeat this questionnaire to see whether there is a change in the number of students who report lack of practice as a barrier to undertaking assessment.

A further problem recognised by students is the fact that the assessment is dependent on the patient who attends for the assessment. Variability between patients is inevitable (15, 16) and impossible to avoid if the assessment is undertaken in an authentic (i.e. clinical, not laboratory) setting under realistic conditions. However, patient variables may introduce bias (where ‘two individuals with equal ability… do not have the same probability of success’: 17), which is clearly unfair. Students are encouraged to carefully select patients for competency exercises to reduce this, although the shortage of suitable patients may make this difficult. Equally, assessments should be carefully designed to allow for patient variability.

A well-recognised problem within clinical assessments is significant individual variation between assessors (5, 18) and this was recognised by our students, who reported that the assessors present influenced when they took their assessments. A questionnaire sent to UK restorative staff identified that only 56% of clinical teachers thought that staff in their institution were consistent and accurate in assessing students’ clinical work (18). In our study this issue was identified as a greater problem in the CONS department, possibly because a greater number of staff are involved in teaching (including a significant number of general dental practitioners) and also because all students have two sessions per week with different staff members and so possess an element of choice. The use of part-time external practitioners in CONS competency assessments is a necessity; however, it is important that they are trained in the competency process (19) to ensure consistency between examiners (4). In CDH a period of staff training was undertaken prior to the introduction of SCOTs, which hopefully reduced the students’ perception of examiner bias. Observer bias due to prior knowledge of the student’s reputation has been identified as a problem in observed clinical examinations (20) and therefore it is important that the student is assessed by a number of different examiners during their assessment period (21). Structured observation (4) and detailed checklists (5) (as employed in both CDH and CONS) help to improve reliability, validity and manageability of the assessment process and also assist feedback (19), provided criteria included in the checklist are carefully selected and can be reproducibly applied (4).

Although this information was not requested, another possible reason for students sitting their assessments late is a combination of poor self-organisation and putting off the inevitable. An example of this is the diet history exercise, which has to be undertaken over a number of visits on any CDH patient. This competency assessment presented a markedly skewed distribution of completion dates, with a high proportion of students completing it very late, although as no hands-on clinical skills are involved, it could have been completed early in year 3.

Surprisingly, 59% of students preferred the CONS grading system. This is a much more demanding grading system, with greater implications if performance is poor, as the results contribute to the final examination. Furthermore, a large number of students felt that the CDH grades should count towards finals. However, the CDH SCOTs examine important non-operative aspects of clinical care, such as communication skills and professional attitudes, which are important attributes of healthcare professionals and therefore should be part of the competency assessment process (3, 6), although it is often hard to grade these ‘soft skills’ in a summative fashion (21). Therefore, formative and summative assessment techniques should be used together to drive the assessment process (22). One student suggested that continuous assessment may provide an alternative to these assessment procedures, and this approach has been successfully applied at the School of Oral Health Sciences at the University of Western Australia (5). At Newcastle University a portfolio-based assessment system is utilised to formatively assess every patient contact, and this process is used to encourage feedback and personal reflection (23) and to monitor improvement (24), but a more formalised summative competency exercise is still required to satisfy the requirements of the examination process. It is a delicate balance between using assessment to drive learning (25) and assessing to satisfy external bodies (26), whilst avoiding over-assessing students to the point where it becomes ineffective. Assessment tasks must be coherent with teaching strategies and learning objectives and underpinned by the principles of constructive alignment (27).

The worldwide issue of developing comparable competency assessments to ensure teaching quality has been identified, and in 2002 a working party was set up to establish a framework and highlight important competency-related issues (28). They recognised that there could not be a single assessment technique that could be universally applied if different skills are to be tested, and by necessity, different assessment tools must be applied. However, we must be aware of the increasing need to be accountable, to the students, the professional bodies and to the general public (6). As higher education moves from being provided to being marketed at a cost to its consumers, the need for all processes to be transparent and of a consistently high quality is ever increasing. Within clinical competency assessments, therefore, we must understand what we need to assess, why we need to assess it and the most appropriate means available for undertaking the assessment (29).

Conclusion

This study has highlighted a number of positive areas regarding assessment of clinical competency. The majority of students felt that the assessment process was fair; they felt adequately prepared and undertook their assessments at about the ‘right’ time. It was interesting and surprising that the majority of students preferred the summative grading system, and CDH SCOTs have now been modified to contribute a grade towards finals. However, this study also highlights areas where we can strive to improve the assessment process. First, students must be encouraged to undertake their assessments as early as is realistically possible, with reassurance that sitting them early (or late) does not seem to have an impact on the grade that they achieve. Second, staff must endeavour to make the assessments accessible, by assisting with the identification of suitable patients for both practice and undertaking the competency exercises. All assessors must be adequately trained in the assessment process to reduce interexaminer variability. Finally, the importance of designing and maintaining a range of competence exercises which fulfil current guidelines must be clear to staff and students, so that they are seen more as a gateway to independent practice than a hurdle which must be overcome.

Acknowledgements

We would like to thank all the students who took part in this study, and Susan Johnstone and Maria Clarke for their assistance accessing the assessment database information.

References

1. Chambers DW, Gerrow JD. Manual for developing and formatting competency statements. J Dent Educ 1988: 58: 361–366.
2. General Dental Council. The first five years – a framework for undergraduate dental education, 2nd edn. London: General Dental Council, 2002.
3. Plasschaert AJM, Holbrook WP, Delap E, Martinez C, Walmsley AD. Profile and competences for the European dentist. Eur J Dent Educ 2005: 9: 98–107.
4. Scott BJJ, Evans DJP, Drummond JR, Mossey PA, Stirrups DR. An investigation into the use of a structured clinical operative test for the assessment of a clinical skill. Eur J Dent Educ 2001: 5: 31–37.
5. Tennant M, Scriva J. Clinical assessment in dental education: a new method. Aust Dent J 2000: 45: 125–130.
6. Murray E, Gruppen L, Catton P, Hays R, Woolliscroft JO. The accountability of clinical education: its definition and assessment. Med Educ 2000: 34: 871–879.
7. Mossey PA. Structured clinical operative tests. In: Mossey PA, Newton JP, Mason A, Stirrups DR, eds. Clinical competencies in dentistry conference. London: Medical and Dental Education Network, 1999: 40–42.
8. Blinkhorn F. Evaluation of an undergraduate community-based course in family dentistry. Eur J Dent Educ 2002: 6: 40–44.
9. Morgan PJ, Cleave Hogg D. Comparison between medical students’ experience, confidence and competence. Med Educ 2002: 36: 534–539.
10. Leung WC. How to design a questionnaire. Student Br Med J 2001: 9: 187–189.
11. Arnsley L, Lyon PM, Ralston SJ, et al. Clinical skills in junior medical officers: a comparison of self-reported confidence and observed competence. Med Educ 2004: 38: 358–367.
12. Stewart J, O’Halloran C, Barton JR, Singleton SJ, Harrigan P, Spencer J. Clarifying the concepts of confidence and competence to produce appropriate self-evaluation measurement scales. Med Educ 2000: 34: 903–909.
13. Elkind A. Outreach teaching: is this the future for dental education? Br Dent J 2002: 193: 111–112.
14. Harris RV, Dailey Y, Lennon MA. Recording and understanding social histories by dental undergraduates in a community-based clinical programme. Eur J Dent Educ 2003: 7: 34–40.
15. Buchanan RN. Problems related to the use of human subjects in clinical evaluation/responsibility for follow-up care. J Dent Educ 1991: 55: 797–800.
16. Gadbury-Amyot CC, Bray KK, Branson BS, et al. Predictive validity of dental hygiene competency assessment measures on one-shot clinical licensure examinations. J Dent Educ 2005: 69: 363–370.
17. Shepard L, Camilli G, Averill M. Comparison of procedures for detecting test item bias with both internal and external ability criteria. J Educ Stat 1981: 6: 317–375.
18. Manogue M, Brown G, Foster H. Clinical assessment of dental students: values and practices of teachers in restorative dentistry. Med Educ 2001: 35: 364–370.
19. Manogue M, Kelly M, Masaryk SB, et al. 2.1. Evolving methods of assessment. Eur J Dent Educ 2002: 6: 53–66.
20. Redfern S, Norman I, Calman L, Watson R, Murrells T. Assessing competence to practise in nursing: a review of the literature. Res Pap Educ 2002: 17: 51–77.
21. Knight P. LTSN generic centre assessment series 7. A briefing on key concepts. York: LTSN, 2001: 11–13.
22. Prescott LE, Norcini JJ, McKinlay D, Rennie JS. Facing the challenges of competency-based assessment of postgraduate dental training: Longitudinal Evaluation of Performance (LEP). Med Educ 2002: 36: 92–97.
23. Robinson PB, Davies BR. Reflective practice in the teaching of conservative dentistry to undergraduate dental students—perceptions derived from a pilot study using personal development diaries. Eur J Dent Educ 2004: 8: 67–71.
24. Lindemann RA, Jedrychowski J. Self-assessed clinical competence: a comparison between students in an advanced dental education elective and in the general clinic. Eur J Dent Educ 2002: 6: 16–21.
25. Spike N, Alexander H, Elliott S, et al. In-training assessment and its potential in enhancing clinical teaching. Med Educ 2000: 34: 858–861.
26. Boud D. Sustainable assessment: rethinking assessment for the learning society. Stud Contin Educ 2000: 22: 151–167.
27. Biggs J. Enhancing teaching through constructive alignment. Higher Educ 1996: 32: 347–364.
28. Plasschaert A, Boyd M, Andrieu S, et al. 1.3. Development of professional competences. Eur J Dent Educ 2002: 6: 33–44.
29. Newble DI. ASME medical education booklet no. 25. Assessing clinical competence at the undergraduate level. Med Educ 1992: 26: 504.

Address:
Sarah L. Rolland
School of Dental Sciences
Framlington Place
Newcastle upon Tyne NE2 4BW
UK
Tel: +44 (0)191 222 7471
Fax: +44 (0)191 222 8191
e-mail: [email protected]
