
Matthew Eberly, MD
APD, National Capital Consortium

Walter Reed National Military Medical Center

Uniformed Services University

Concordance among CCC Members on Emotional Intelligence Milestone Placement of Pediatric Residents

Background

Clinical Competency Committees (CCCs) will assess trainees using the new ACGME Milestones and advise the program director (PD) regarding resident progress: promotion, remediation, or dismissal

Concordance among CCC members for placement and progression of residents along the Milestones has not been previously established

We sought to determine the inter-rater reliability of trainee assessments by CCC members using a single Pediatric subcompetency

METHODS

The CCC of a single medium-sized pediatric program (10-13 residents/yr) assessed the ICS-2 subcompetency of 10 PGY-2 residents who were 4 months into the academic year: "Demonstrate the insight and understanding into emotion and human response to emotion that allows one to appropriately develop and manage human interactions."

Methods

22 members of our CCC (which replaced the Education Committee):
Department Chief, Service Chiefs, APDs, Advisors, Chief Resident

Training in the Milestones Project: a one-hour lecture in July following Morning Report, discussion at two monthly Education Meetings, and a mock assessment of residents using the PC-6 subcompetency

Methods

CCC members, now trained in Milestone assessment, were read 360-degree evaluations from the previous 3 months of rotations, and participated in the discussion of the resident’s progress

Each member then independently assessed the resident using the ICS-2 Milestones

The program director participated in the CCC deliberations, but did not assess trainees using the Milestones

Methods

22 faculty members of the CCC participated, for a total of 231 pairs of assessments.

Methods

All 231 pairwise combinations of the 22 raters, labeled A-V: 21 pairs involving A (AB, AC, ..., AV), 20 further pairs involving B (BC, BD, ..., BV), and so on down to the single pair UV (21 + 20 + ... + 1 = 231)

22 faculty, named A-V: 231 total pairings
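The pairing count follows from simple combinatorics; a quick check (Python is used here for illustration and is not part of the original deck):

```python
from math import comb

# 22 CCC raters, labeled A through V, compared two at a time
n_raters = 22
n_pairs = comb(n_raters, 2)  # 22 * 21 / 2
print(n_pairs)  # 231
```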

Methods

Weighted kappas of all faculty pairs were calculated:
kappa of 1: perfect agreement between 2 raters
kappa of 0: agreement equivalent to chance
Weighted kappa: assigns less weight to agreement as categories are further apart on the Milestone scale (1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5)

Stratified analysis by pairs of hospitalists, pairs of continuity clinic faculty, and pairs of GME leaders was performed

RESULTS

Weighted kappas ranged from 0.008 to 0.83 for the 231 pairings

Median of 0.37 [IQR 0.24 - 0.56]

Mean of 0.39 ± 0.20

Results

Excellent (k ≥ 0.81) agreement was seen in only 1.3% of observer pairs

Substantial agreement (0.61 ≤ k ≤ 0.80) was seen in 17.8% of observer pairs

The majority of pairs (54.6%) had poor agreement

Results

Breakdown of kappas for the 231 pairs: Poor 55%, Fair 26%, Substantial 18%, Excellent 1%
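Only two of the band cut-points appear on the slides (Excellent k >= 0.81, Substantial 0.61-0.80); the sketch below assumes a Fair/Poor boundary at 0.41, borrowed from the common Landis-Koch-style convention:

```python
def agreement_band(kappa):
    """Label a weighted kappa with the bands used on the slides.
    Only the Excellent (>= 0.81) and Substantial (0.61-0.80) cut-points
    appear on the slides; the 0.41 Fair/Poor boundary is assumed."""
    if kappa >= 0.81:
        return "Excellent"
    if kappa >= 0.61:
        return "Substantial"
    if kappa >= 0.41:
        return "Fair"
    return "Poor"

# The reported extremes of the 231 pairings
print(agreement_band(0.008), agreement_band(0.83))  # Poor Excellent
```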

Results

35 hospitalist faculty pairs: mean kappa = 0.23 ± 0.15

105 continuity clinic faculty pairs: mean kappa = 0.46 ± 0.19

6 GME leader pairs: mean kappa = 0.57 ± 0.15

Pairwise comparisons between groups: p < 0.001, p < 0.001, p = 0.17
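The slides report p-values without naming the statistical test; one plausible reconstruction from the summary statistics is a Welch two-sample t statistic (an assumption here, which also ignores that pairs sharing a rater are not independent):

```python
from math import sqrt

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch t statistic computed from summary statistics alone."""
    se = sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Hospitalist pairs (0.23 +/- 0.15, n=35) vs continuity clinic pairs (0.46 +/- 0.19, n=105)
t = welch_t(0.23, 0.15, 35, 0.46, 0.19, 105)
print(round(t, 1))  # -7.3, a magnitude consistent with p < 0.001
```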

Conclusions

ICS-2 Milestones assessment by members of key faculty using 360-degree observations and personal experience with trainees has poor inter-rater reliability

Faculty who interact with pediatric trainees in the primary care continuity clinic have better agreement on the ICS-2 Milestones assessment with each other than do hospitalists

Implications

How can the CCC decide on promotion or remediation of residents if members cannot agree on placement along the Milestones?

The level of agreement on Milestones placement of residents may be improved by smaller CCCs and by limiting membership to those most familiar with the Milestones project

Determination of the ideal composition and size of the CCC needs to maximize inter-rater reliability while maintaining a wide range of observer input

Changes in our Program

CCC now composed of 10 members

Education Committee consisting of 22 members continues to have monthly meetings to discuss resident performance

Members of the Education Committee now report on the progression of each set of Milestones to the CCC

Acknowledgments

Greg Gorman, MD Theo Stokes, MD Jenn Hepps, MD Danika Alexander, MD Theresa Kiefer

Questions?
