
Best evidence on medical school assessment: can it predict future performance?

Hossam Hamdy and Nahed Khalek, College of Medicine, University of Sharjah, United Arab Emirates

Student assessment in medical schools and the predictive value of its results in relation to future performance are fundamental issues in medical education that still need further study.

The following basic questions are important but difficult to answer:

• What to measure?

• How to measure?

• When to measure?

• How strong is the relationship between the predictors and the outcomes?

The multi-faceted and complex nature of what is required to be a doctor, together with the diversity and multi-dimensionality of the working environment, increases the difficulty of defining and interpreting measurable or observable outcomes of medical education and training programmes.

Medical school grades are widely accepted measures of performance quality. Without supportive evidence, it is assumed that grades provide a basis for predicting future performance in the workplace. The further we are from the exit of an educational programme, the more difficult it becomes to measure the relation between performance in the programme and performance in practice.1 Observation of performance for purposes of student appraisal in medical schools is done with a view to extrapolating and generalising the competence that extends beyond the tasks observed.

For the Best Evidence Medical Education (BEME) systematic review on ‘predictive values of measurements obtained in medical schools and future performances in medical practice’, Hamdy et al2 reviewed the literature from 1960 to 2004 in an attempt to answer the question of whether medical school assessment can predict future performance.

The strength of the relationships between medical school performance and performance after graduation varies depending on the conceptual relevance of the measures taken during and after medical school; for example, pre-clinical grade point average (GPA) shows more overlap with doctors’ medical knowledge than with their interpersonal skills. This expected pattern of relationships has been confirmed empirically.3 Prediction of practice performance years after formal training is difficult.

In the 1960s, studies showed a lack of any relationship between medical school academic performance (GPA) and practice performance.4–6 In the 1980s, studies used a broader range of assessment procedures, including clinically based measures, to investigate the association between academic performance and competence during the early postgraduate years. These studies revealed weak relationships between the grades achieved and performance in postgraduate training.7

In the 1990s, Taylor and Albo7 studied doctors’ performances and their relationships to two predictors: the performance of medical students in their pre-clinical years and their performance in the clinical years. There was poor correlation between doctors’ medical school grades and a large number of performance measures. The complex competencies needed for a doctor to perform effectively are poorly measured by academic scores, which are obtained through measurements that examine a narrow band of the extremely complex total spectrum of skills, abilities and performances.8

Other researchers have established a moderate relationship between academic performance and practice performance, with higher correlations when an attribute is evaluated before and after graduation using a similar assessment method.9

PREDICTING PERFORMANCE DURING POSTGRADUATE TRAINING

In a recent systematic review,2 the correlation between scores on the National Board of Medical Examiners (NBME) examinations or the United States Medical Licensing Examination (USMLE) Step 1 and supervisor ratings during internship or first-year residency was low (0.22) but statistically significant. However, the correlations of NBME Parts I and II with NBME Part III and with American specialty board examinations were moderately high (0.6–0.7) and statistically significant.
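To put such coefficients in perspective, the square of a correlation (the coefficient of determination, r²) gives the proportion of variance that two measures share. The figures below are our own arithmetic applied to the correlations quoted above, not values reported by the review:

\[
r^2 = 0.22^2 \approx 0.05, \qquad r^2 = 0.6^2 \text{ to } 0.7^2 \approx 0.36 \text{ to } 0.49,
\]

so licensing scores share only about 5 per cent of their variance with supervisor ratings, whereas the successive written examinations share 36–49 per cent.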

Although there has been significant improvement in student assessment during the clinical years, measuring doctors’ clinical competence during training remains a complex and daunting task; the complexity of professional competence necessitates the use of multiple assessment methods for evaluating performance. Despite the availability of several evaluation tools, it is still unclear how objective resident supervisors have been in evaluating their trainees’ clinical performance.


The predictive validity of instruments relatively recently incorporated into the assessment of medical students, such as the Objective Structured Clinical Examination (OSCE), needs further study. The correlation coefficient between OSCE scores and supervisor ratings yielded an estimate of 0.30 (95% CI 0.24–0.37), suggesting a low correlation. The weak but statistically significant correlations obtained from several studies looking into the predictive value of constructs assessed by the OSCE (e.g. interpersonal skills, data collection and physical examination skills) for residents’ supervisor ratings could be explained on the basis that the residents’ assessment does not give an objective evaluation of the same constructs assessed in the undergraduate programme.

Other methods of assessment of clinical competencies in medical schools, such as the post-clerkship clinical examination (PCX), have demonstrated a correlation with first-year residency supervisor ratings of 0.16–0.43 (mean 0.32).10

Studies on the prediction of scores showed the importance of measuring similar constructs before and after graduation in order to find a positive, strong association. The correlation between clerkship GPA as predictor and supervisor rating during residency as outcome (0.3) was higher than that for predictors in the pre-clinical phase (NBME Part I = 0.18). Studies in the 1960s and 1970s11 supported the same view, particularly in the clerkships related to the field of residency chosen by the student. Other predictors, such as evaluation by peers, were found to be better predictors of future internship success than estimates by pre-clinical and clinical members of staff.

When looking into the predictive value of assessment measures in medical schools, it is important to consider the timing of the measurement of predictors and outcomes along the continuum of a doctor’s education, training and practice. Measurement can take place during or at the end of undergraduate educational programmes, immediately after graduation (internship or licensure examination), during and at the end of residency training, and in practice. Studies looking into the importance of measuring similar constructs at two different levels of expected performance (medical students and final-year residents) indicated that scores obtained by students at the end of clerkship using a script concordance (SC) test (scenarios of patient problems testing students’ diagnostic reasoning and decision making) predicted their clinical reasoning performance at the end of residency, as measured by an OSCE, short-answer management problems and simulated office orals. The reported correlations were good: 0.7 for the OSCE, 0.8 for short-answer management problems and 0.5 for simulated office orals.

PREDICTING ON-THE-JOB PRACTICE PERFORMANCE

Few studies have looked into the relationship between medical school measurements and on-the-job performance beyond residency. Tamblyn et al12 investigated whether scores on the Canadian licensure examination, which is taken immediately at the end of medical school, predicted clinical behaviour 4–7 years later. In this study, indicators of practice performance were selected on the basis of unexplained practice variations or their association with the outcomes or costs of care: for example, mammography screening rate, used to assess preventive care; contra-indicated prescribing, which accounts for 20 per cent of drug-related adverse events; and consultation rate, used as an indicator of resource use. The study showed that scores on the Canadian licensure examination were a significant predictor of practice performance.

Measurement of performance should not be limited to technical aspects and knowledge, but should also consider attitudes. An interesting study13 looked into unprofessional behaviour by students in medical school and whether it is associated with subsequent disciplinary action by a state medical board. The prevalence of problematic behaviour in medical school was 38 per cent among the doctors who were later disciplined and 19 per cent in the control group (odds ratio 2.15).
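For readers less familiar with the metric, an odds ratio compares the odds of the behaviour in the two groups. A crude calculation from the prevalences quoted above illustrates the mechanics; note that the published figure of 2.15 is the study’s own estimate from its matched case-control analysis, not this unadjusted arithmetic:

\[
\text{OR} = \frac{p_1/(1-p_1)}{p_2/(1-p_2)} = \frac{0.38/0.62}{0.19/0.81} \approx \frac{0.61}{0.23} \approx 2.6 .
\]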


These findings indicate the importance of professionalism as an essential competency that students should demonstrate in order to graduate from medical school.

The available evidence demonstrates the need to define global core learning outcomes for undergraduate medical education, postgraduate residency training and continuing professional development. The six competencies of the ACGME (Accreditation Council for Graduate Medical Education) Outcome Project are:14 patient care, knowledge, ethics and professionalism, communication skills, practice-based learning and systems-based practice. The CanMEDS15 outcomes can likewise serve as models for defining constructs that can be measured at different levels of a doctor’s professional life. A doctor’s expected performance could also be examined in relation to a core of common health problems or presentations encountered in the specific domain of practice.

CONCLUSION AND KEY POINTS

• Systematic reviews on the predictive value of assessment scores obtained in medical schools for future performance provided statistically significant, mild-to-moderate correlations.

• Performances on similar instruments of measurement correlate better when similar constructs are assessed, such as NBME Part II and NBME Part III scores, medical school clerkship grades and supervisor ratings of residents, and OSCE scores and supervisor ratings of residents.

• Basic science grades and clinical grades in medical schools can predict internship and residency performance. The correlation is mild for basic sciences and moderate for assessments of clinical skills.

• Evidence on predictors of performance in practice beyond residency training is rare and weak. New measures of performance in practice, such as ‘patient outcomes’ and ‘process of care’, might be considered for future studies.

REFERENCES

1. Gonnella JS, Hojat M, Erdmann JB, Veloski JJ. A case of mistaken identity: signal and noise in connecting performance assessments before and after graduation from medical school. Acad Med 1993;68(2):S9–S16.

2. Hamdy H, Prasad K, Anderson MB, et al. BEME systematic review: predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach 2006;28(2):103–116.

3. Hojat M, Gonnella JS, Veloski JJ, Erdmann JB. Is the glass half full or half empty? A re-examination of the association between assessment measures during medical school and clinical competence after graduation. Acad Med 1993;68(2):S69–S76.

4. Price PB. Search for excellence. Am J Surg 1969;118:815–821.

5. Wingard JR, Williamson JW. Grades as predictors of physicians’ career performance: an evaluative literature review. Med Educ 1973;48:311–322.

6. Gonnella JS, Hojat M. Relationship between performance in medical school and postgraduate competence. Med Educ 1983;58:679–685.

7. Taylor CW, Albo D. Measuring and predicting the performances of practicing physicians: an overview of two decades of research at the University of Utah. Acad Med 1993;68(2):S65–S67.

8. Borowitz SM, Saulsbury FT, Wilson WG. Information collected during the residency match process does not predict clinical performance. Arch Pediatr Adolesc Med 2000;154:256–260.

9. Markert RJ. The relationship of academic measures in medical school to performance after graduation. Acad Med 1993;68(2):S31–S34.

10. Vu NV, Barrows HS, Marcy ML, Verhulst SJ, Colliver JA, Travis T. Six years of comprehensive, clinical, performance-based assessment using standardized patients at the Southern Illinois University School of Medicine. Acad Med 1992;67:42–50.

11. Richards JM, Taylor CW, Price PB. The prediction of medical intern performance. J Appl Psychol 1962;46:142–146.

12. Tamblyn R, Abrahamowicz M, Dauphinee WD, et al. Association between licensure examination scores and practice in primary care. JAMA 2002;288:3019–3026.

13. Papadakis MA, Hodgson CS, Teherani A, Kohatsu ND. Unprofessional behavior in medical school is associated with subsequent disciplinary action by a state medical board. Acad Med 2004;79:244–249.

14. Accreditation Council for Graduate Medical Education (ACGME). Outcome Project: general competencies, 2000. Available at: http://www.acgme.org/outcome.

15. CanMEDS 2005 Framework. In: Frank JR, Jabbour M, et al., eds. Report of the CanMEDS Phase IV Working Groups. Ottawa: The Royal College of Physicians and Surgeons of Canada, 2005.
