http://www.diva-portal.org
This is the published version of a paper published in International Journal of Nursing Studies.
Citation for the original published paper (version of record):
Ekman, N., Taft, C., Moons, P., Mäkitalo, Å., Boström, E. et al. (2020). A state-of-the-art review of direct observation tools for assessing competency in person-centred care. International Journal of Nursing Studies, 109: 103634. https://doi.org/10.1016/j.ijnurstu.2020.103634
Access to the published version may require subscription.
N.B. When citing this work, cite the original published paper.
Permanent link to this version: http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-174421
International Journal of Nursing Studies 109 (2020) 103634
A state-of-the-art review of direct observation tools for assessing
competency in person-centred care
Nina Ekman a,b,c,∗, Charles Taft a,b, Philip Moons a,b,c, Åsa Mäkitalo d, Eva Boström e, Andreas Fors a,b,f

a Institute of Health and Care Sciences, Sahlgrenska Academy, University of Gothenburg, Box 457, 405 30 Gothenburg, Sweden
b Centre for Person-Centred Care (GPCC), University of Gothenburg, Sweden
c Academic Centre for Nursing and Midwifery, Faculty of Medicine, KU Leuven, Leuven, Belgium
d Department of Education, Communication and Learning, University of Gothenburg, Gothenburg, Sweden
e Department of Nursing, University of Umeå, Umeå, Sweden
f Närhälsan Research and Development Primary Health Care, Region Västra Götaland, Sweden
ARTICLE INFO
Article history:
Received 19 September 2019
Received in revised form 21 January 2020
Accepted 22 April 2020
Keywords:
Observation-based methods
Patient-centred care
Person-centred care
State-of-the-art review
ABSTRACT
Background: Direct observation is a common assessment strategy in health education and training, in
which trainees are observed and assessed while undertaking authentic patient care and clinical activities.
A variety of direct observation tools have been developed for assessing competency in delivering person-
centred care (PCC), yet to our knowledge no review of such tools exists.
Objective: To review and evaluate direct observation tools developed to assess health professionals’ com-
petency in delivering PCC.
Design: State-of-the-art review
Data sources: Electronic literature searches were conducted in PubMed, ERIC, CINAHL, and Web of Sci-
ence for English-language articles describing the development and testing of direct observation tools for
assessing PCC published until March 2017.
Review methods: Three authors independently assessed the records for eligibility. Duplicates were re-
moved and articles were excluded that were irrelevant based on title and/or abstract. All remaining arti-
cles were read in full text. A data extraction form was developed to cover and extract information about
the tools. The articles were examined for any conceptual or theoretical frameworks underlying tool de-
velopment and coverage of recognized PCC dimensions was evaluated against a standard framework. The
psychometric performance of the tools was obtained directly from the original articles.
Result: 16 tools were identified: five assessed PCC holistically and 11 assessed PCC within specific skill
domains. Conceptual/theoretical underpinnings of the tools were generally unclear. Coverage of PCC do-
mains varied markedly between tools. Most tools reported assessments of inter-rater reliability, internal
consistency reliability and concurrent validity; however, intra-rater reliability, content and construct va-
lidity were rarely reported. Predictive and discriminant validity were not assessed.
Conclusion: Differences in scope, coverage and content of the tools likely reflect the complexity of PCC
and lack of consensus in defining this concept. Although all may serve formative purposes, evidence sup-
porting their use in summative evaluations is limited. Patients were not involved in the development of
any tool, which seems intrinsically paradoxical given the aims of PCC. The tools may be useful for pro-
viding trainee feedback; however, rigorously tested and patient-derived tools are needed for high-stakes
use.
© 2020 The Authors. Published by Elsevier Ltd.
This is an open access article under the CC BY-NC-ND license.
( http://creativecommons.org/licenses/by-nc-nd/4.0/ )
∗ Corresponding author at: Institute of Health and Care Sciences, Sahlgrenska Academy, University of Gothenburg, Box 457, 405 30 Gothenburg, Sweden.
E-mail address: [email protected] (N. Ekman).
https://doi.org/10.1016/j.ijnurstu.2020.103634
0020-7489/© 2020 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license. (http://creativecommons.org/licenses/by-nc-nd/4.0/)
What is already known about the topic?
• Person-centred care (PCC) has been designated and endorsed by professional bodies as a core competency needed for health professionals.
• Numerous direct observation tools have been developed for use specifically in assessing skills in delivering PCC.
• To our knowledge, no review has been conducted of direct observation tools for use in assessing PCC competency.
What this paper adds
• This state-of-the-art review identified 16 tools and found differences in their scope, coverage and content as well as a lack of consensus in defining PCC.
• Although all may serve formative purposes, evidence supporting their use in summative evaluations is limited.
• Paradoxically, given the aims of PCC, patients were not involved in the development of any of the tools.
1. Background
Widely acknowledged as an essential element of high quality care (Institute of Medicine, 2001; World Health Organization, 2006; World Health Organization, 2008; Australian Commission on Safety and Quality in Health Care (ACSQHC), 2011; Goodrich and Cornwell, 2008; Agency for Healthcare Research and Quality (AHRQ), 2003; International Alliance of Patients' Organizations, 2007; Socialstyrelsen, 2016), person-centred care¹ (PCC) has been designated and endorsed by professional bodies as one of a set of five core competencies needed for health professionals to meet the evolving challenges facing health care (Institute of Medicine (US), 2003; World Health Organization, 2005; World Health Organization, 2003; Accreditation Council for Pharmacy Education (ACPE), 2015). The particular importance of PCC competency was underscored when, in a follow-up report to their landmark Crossing the Quality Chasm, the US Institute of Medicine (IOM) positioned PCC as the central, overarching competency, recommending that "All health professionals should be educated to deliver patient-centred care as members of an interdisciplinary team, emphasizing evidence-based practice, quality improvement approaches, and informatics" (Institute of Medicine (US), 2003, chapter 3). Reflecting its importance for quality care, PCC is increasingly incorporated into education and training programs for health professionals (Dwamena et al., 2012).
Direct observation is a common assessment strategy in health
education and training, in which trainees are observed and as-
sessed while undertaking authentic patient care and clinical ac-
tivities. Historically, such assessments have been implicit, unstan-
dardized, and based on global, subjective judgments ( Van der
Vleuten, 1996 ); however, today a variety of checklists, rating scales
and coding systems are available to guide, delineate and struc-
ture assessments of clinical competencies ( Kogan et al., 2009 ).
Likewise, numerous direct observation tools have been developed
for use specifically in assessing skills in delivering PCC. As little
agreement exists within and across professions on PCC nomencla-
ture ( Håkansson Eklund et al., 2018 ), definitions ( Socialstyrelsen,
2016 ; Constand et al., 2014 ) and theoretical or conceptual frame-
works ( Mead and Bower, 2000 ; Lawrence and Kinn, 2012 ; Cronin,
2004 ; Scholl et al., 2014 ), it seems reasonable to assume that such
tools differ significantly in their definitions and coverage of be-
haviours and behavioural domains indicative of PCC competency.
To our knowledge no state-of-the-art review has been conducted
of direct observation tools for use in assessing PCC competency.
Given the increasing importance of PCC in healthcare education
and training there is a need for guidance in selecting among ex-
isting direct observation tools for assessing trainees’ competency
in delivering PCC. This review therefore aims to identify available
tools and evaluate them with respect to: assessed competency domain or domains; existence of underlying theoretical or conceptual frameworks; coverage of recognized components of PCC; types of behavioural indicators; psychometric performance; and format (checklist, rating scale, coding system).

¹ Although differences exist in the definitions of person-centred care and patient-centred care, the terms are frequently used interchangeably in the literature. For the sake of parsimony, the term person-centred care has systematically been used in this review.
2. Method

2.1. Search strategies for identification of studies

The search was performed in March 2017 using the databases PubMed, CINAHL and Scopus to identify relevant studies. Centred and centeredness (both UK and US spelling) and the following terms were identified and used in the search-string: (care OR healthcare) AND ("patient-centred" OR "patient centred" OR "person-centred" OR "person centred" OR "patient centeredness" OR "patient centredness" OR "person centeredness" OR "person centredness") AND (observ* OR video OR audio) AND (Humans[Mesh] AND (Danish[lang] OR Norwegian[lang] OR Swedish[lang] OR English[lang])).
2.2. Selection of studies

EndNote X7 software was used. Duplicates were removed and clearly irrelevant articles were excluded. Criteria for inclusion were: (i) direct observation tool; (ii) reports and/or descriptions of any development or evaluation of an instrument that measures patient-centred care, PCC or person centredness; (iii) clinical encounters. The selection process started with examining the titles. After inclusion of relevant titles, the same procedure was performed with the abstracts. Abstracts that clearly did not match the inclusion criteria were removed. The articles with potential to be included were read in full text in order to determine whether to include or exclude. Snowballing was used to identify other potentially relevant records. Three authors (NE, CT and AF) independently assessed the records for eligibility and recorded the reasons for either inclusion or exclusion, which was documented in a PRISMA flowchart.
2.3. Data extraction

A data extraction form was developed to cover, identify and extract information about the direct observation tool including name of tool, main concept assessed, characteristics of the development sample, assessment format, assessed domains of person-centred care, conceptual framework underpinning the tools, type of psychometric assessments performed (reliability and validity), and developer-identified limitations.
3. Analyses

Each tool was examined against a standard framework for coverage of PCC dimensions. For this purpose the framework endorsed by the IOM (Institute of Medicine, 2001) was used. The framework includes six dimensions: Respect for patients' values, preferences, and expressed needs; Coordination and integration of care; Information, communication, and education; Physical comfort; Emotional support—relieving fear and anxiety; and Involvement of family and friends (Institute of Medicine, 2001). Articles were examined for evidence of theoretical or conceptual frameworks underpinning the development of the tools. If the articles referred to one specific framework and used phrases such as: "we operationalized PCC…" or
person-centred care using the framework of…” or “building on pre-
ious conceptual basis we developed…”it was judged to have a the-
retical or conceptual framework; those referring to several differ-
nt frameworks were classified as unclear. Information about psy-
hometric performance of the tools was obtained directly from the
riginal articles and supplemented by a review of references from
rticle bibliographies.
4. Result

4.1. Search results

An initial search yielded 2371 non-duplicated records (Fig. 1). After screening by titles, 91 abstracts were read. After excluding articles based on abstracts, 42 articles were read in full text. Thirteen full-texts met inclusion criteria and six additional articles were identified by snowballing and found to be eligible. In total, 19 papers describing 16 different direct observation tools for assessing PCC or a specific aspect of PCC were identified (Fig. 1).
4.2. Characteristics of the direct observation tools

Eleven of the 16 direct observation tools were coding systems (Bertakis and Azari, 2011; Henbest and Stewart, 1989; Zandbelt et al., 2005; Paul-Savoie et al., 2015; Braddock et al., 1997; Clayman et al., 2012; Dong et al., 2014; Sabee et al., 2020; Mjaaland and Finset, 2009; Krupat et al., 2006; D'Agostino and Bylund, 2014), three were rating scales (Elwyn et al., 2003; Shields et al., 2005; Gallagher et al., 2001) and two were checklists (Gaugler et al., 2013; Chesser et al., 2013) (Table 1). Eleven tools focused on specific aspects of PCC, namely communication and shared decision-making, whereas five tools purported to assess the general concepts PCC or person-centredness. Various conceptual frameworks were presented as starting points for developing the tools. A framework was clearly identified in seven tools (Henbest and Stewart, 1989; Zandbelt et al., 2005; Clayman et al., 2012; Elwyn et al., 2003; Chesser et al., 2013; Sabee et al., 2020; D'Agostino and Bylund, 2014), while the others referred to different definitions and concepts and no specific framework could be distinguished. The tools varied in their coverage of the six IOM domains (Table 2) from a single domain (D'Agostino and Bylund, 2011 October; Gallagher et al., 2001) to all six domains (Bertakis and Azari, 2011; Chesser et al., 2013), with an average of three domains per tool (Gaugler et al., 2013; Henbest and Stewart, 1989; Braddock et al., 1997; Clayman et al., 2012; Elwyn et al., 2003; Mjaaland and Finset, 2009; Krupat et al., 2006).
Inter-rater reliability was reported for all tools except two (Bertakis and Azari, 2011; Clayman et al., 2012) (Table 2). Reliability was estimated using a variety of methods, where the intra-class correlation (ICC) and Cohen's kappa were most common. ICC coefficients were fair to excellent (Hallgren, 2012), ranging from 0.53 (PBCI subscale inhibiting behaviour) to 0.93 (PBCI subscale facilitating behaviour and SOS-PCC). Kappa coefficients reflected generally moderate to substantial agreement (Hallgren, 2012), ranging between 0.46 (PISCH subscale enabling self-management) and 0.72 (PISCH subscale fostering relationships). Intra-rater reliability was reported for three tools, where stability in codings was high to perfect for the NAAS subcategories (r = 0.82–1.0) and satisfactory for OPTION (r = 0.66).
Most of the tools (12 of 16) reported some evaluation of validity, where construct and content validity were most frequently assessed. About half of the tools were developed and evaluated in primary care settings (Bertakis and Azari, 2011; Henbest and Stewart, 1989; Braddock et al., 1997; Elwyn et al., 2003; Shields et al., 2005; Mjaaland and Finset, 2009; Krupat et al., 2006; Gallagher et al., 2001) and all but two (Zandbelt et al., 2005; Mjaaland and Finset, 2009) were developed in English speaking countries.
4.3. Description of the direct observation tools

The following section provides a brief overview of information about the included observation tools regarding their format, coverage, scoring, conceptual framework and psychometric evaluations/performance as reported in the original articles. The tools are organized in accordance with Table 1, i.e. tools for assessing global PCC/person centredness, shared decision making, person-centred communication and nonverbal person-centred communication, and alphabetically within each category.
4.3.1. Global person-centred care/person centredness

COT is a 16-item checklist for assessing global PCC. The checklist assesses if the staff performs behaviours indicative of PCC, for example "speaks to a resident at least a total of 15 s during care interaction". Items are scored as 1 if the behaviour is observed and 0 if not observed and scores are summed, where 16 represents maximum PCC. Inter-rater reliability, assessed across five raters (interdisciplinary reviewers), was satisfactory (ICC of all kappa coefficients = 0.77). Face validity included verbal and written feedback from scientific experts on earlier versions to refine and revise the tool. Content validity was assessed based on feedback from nine interdisciplinary scientific experts regarding 31 care worker-dementia patient interactions and open-ended feedback on items (Gaugler et al., 2013).
Modified version of DOC is a coding system for assessing global person-centred practice style and includes six different clusters (e.g., technical and health behaviour) of physician practice behaviours among the 20 DOC codes (e.g., structuring interaction, health education and health knowledge). For each DOC code, the number of intervals during which the associated behaviour is observed is recorded and is expressed as a percentage of the total of all DOC-coded behaviours noted during the visit. A total of 509 videotaped encounters between patients and family physicians or general internists were used for the development of the modified version of DOC. Reliability and validity are documented for the original DOC but have not been assessed for the modified version (Bertakis and Azari, 2011).
Henbest and Stewart instrument is a coding system assessing doctors' global person-centred behaviour in primary care consultations. The method involves identifying patients' offers, defined as any symptom, complaint, thought, feeling, expectation or observation expressed by the patient. The assessor then rates the doctors' responses to these offers on a four-point scale (0–3). The total score for a consultation divided by the number of offers assessed gives the person-centredness score for that consultation. This tool showed high inter-rater (18 tapes analyzed) and intra-rater reliability after two weeks (two tapes analyzed) (Spearman correlation = 0.91 and 0.88, respectively). After six weeks (12 tapes analyzed) the intra-rater coefficient decreased to 0.63. The tool was sensitive to differences among physicians and among physicians' responses to different patient offers. Analysis of 12 tapes with two raters comparing this tool with that of Brown and colleagues showed moderate to high criterion validity (Spearman correlation 0.51–0.89) (Henbest and Stewart, 1989).
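Restating the scoring rule above as a formula (the example numbers are hypothetical), the consultation-level score is the mean rating over all offers:

\[ \text{person-centredness score} = \frac{1}{n}\sum_{k=1}^{n} r_k, \qquad r_k \in \{0, 1, 2, 3\}, \]

where \(r_k\) is the rating of the doctor's response to the k-th offer and n is the number of offers; ratings of 3, 2, 1 and 3 over four offers, for example, give a score of 9/4 = 2.25.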
PBCI is a coding system assessing global person-centredness in two dimensions. The first dimension, Facilitating behaviours, has 11 categories (e.g., open and closed questions). The second dimension, Inhibiting behaviours, includes 8 categories (e.g., changing focus). These two dimensions are coded in relation to content (medical, psycho-social and other). Reliability and validity were evaluated in a sample of 323 videotaped appointments between residents
Table 1
Summary table of studies included in this state-of-the-art review.
Columns: Direct observation tool¹ | Competency | Format/content | Conceptual basis | IOM domains³ (Respect; Coordination; Information; Physical comfort; Emotional support; Involvement of family) | Development setting | Reliability | Validity
The CARES
Observational tool
(COT)
( Gaugler et al.,
2013 )
Person-
centred
Care
Checklist, 16 items
• Greet • Introduce • Use name • Smile/Eye • Physical Contact • Approach • Eye level • Calm
• Ask/Discuss/Assess • 15 s • Explain • Involve in care/Activity • Resident’s Life • Comfort • Share • Write
Unclear 1 X X X Dementia
home care,
USA,
Care workers-
patients
Inter-rater reliability:
Intraclass correlation
coefficient (ICC) = 0.77
N interactions: 5
N raters: 5
Face validity: PI
with input from
scientific advisors
reviewed
Content validity:
panel of several
interdisciplinary
experts
Modified version
of the Davis
Observation Code
(DOC)
( Bertakis and
Azari, 2011 )
Person-
centred
care
Coding system, 6
categories
• Technical cluster • Health behaviour
cluster • Addiction cluster • Patient activation
cluster • Preventive service
cluster • counselling cluster
Unclear X X X X X X Primary care,
USA,
Physicians-
patients
Not reported (NR) NR
Henbest and
Stewart instrument
( Henbest and
Stewart, 1989 )
Person-
centredness
Coding systemidentifying
patients’ offers defined as:
• Symptoms • Thoughts • Feelings • Expectations • Prompts • Non-specific cues
Yes X X X Primary care,:
UK,
Physicians-
patients
Inter-rater reliability:
Spearman correlation = 0.91
Intra-rater reliability:
Spearman correlation = 0.88
(after 2 weeks) and 0.63
(after 6 weeks).
N interactions:18
(inter-rater); 8 (intra-rater,
2 weeks); 12 (intra-rater,
12 weeks)
N raters: 2
Criterion validity:
Correlation with
other measure
(Patient-Centered
Clinical Method)
rs = 0.51 and 0.89
(2 raters, 12
interactions)
The
patient-centred
behaviour coding
instrument (PBCI)
( Zandbelt et al.,
2005 )
Person-
centredness
Coding system
Facilitating behaviours
Inhibiting behaviours
Yes X X X X Residents and
specialist in
general
internal
medicine,
rheumatology
and gastro-
enterology,
Netherlands
Physicians-
patients
Inter-rater reliability (ICC);
Relative agreement
facilitating = 0.93,
inhibiting = 0.53;
Absolute agreement
Facilitating = 0.92,
inhibiting = 0.53
Internal consistency reliability (Cronbach's alpha): Facilitating = 0.64, inhibiting = 0.50.
N interactions: 323
N raters: 4
Concurrent
validity:
Correlation with
other measure (Eu-
rocommunication):
facilitating
( r = 0.28 and
inhibiting
( r = −0.29)
The Sherbrooke
Observation Scale
of Patient-Centered
Care (SOS-PCC)
( Paul-Savoie et al.,
2015 )
Person-
centred
care
Coding system
• Considers biological
aspects • Considers life projects • Considers
psychological aspects • Considers the impact
of the current
conditions on the
patient’s life • Considers past
experiences • Wishes to establish a
therapeutic
relationship • Shows an open mind,
without prejudice • Provides a treatment
plan in collaboration
with the patient • Inquires about the
patient’s
understanding of
his/her current
medical condition
Unclear X X X X Chronic pain
consultations,
Canada
Physicians-
Nurses-
patients
Inter-rater reliability:
ICC = 0.93
Internal consistency
reliability:
Cronbach's alpha = 0.88
N interactions: 42
N raters: 3
Content validity:
7 interdisciplinary
experts in the
health care field
Informed Decision
Making instrument
(IDM)
( Braddock et al.,
1997 )
Shared
decision
making
Coding system
• Eliciting patient's
preference • Assessment of
understanding • Discussion of
uncertainty • Discussion of
pros/cons • Discussion of
alternatives • Discussion of
issue/decision
Unclear X X X Primary care,
USA
Physicians-
patients
Inter-rater reliability:
Agreement = 77%.
N interactions: 20
N raters: 3
NR
Detail of Essential
Elements and
Participants in
Shared Decision
Making
(DEEP-SDM)
( Clayman et al.,
2012 )
Shared
Decision
Making
Coding system, 10
categories
• Rationale for option • Definition of option • Process or procedure • Risk/cons • Benefits/pros • Patient self-efficacy • Patient preference and
values • Patients outcome
expectations • Patient understanding
confirmed • Plan for follow-up
Yes X X X Oncology
consultations,
USA
Physicians-
patients
Reliability: NR NR
The OPTION
(observing patient
involvement)
( Elwyn et al.,
2003 )
Shared
decision
making • Rating scale, 12 items • The clinician: • identifies a problem(s)
…• states that there is
more than one way…• lists “options” …• explains the pros and
cons…• checks the patient’s
preferred
information…• explores the patient’s
expectations…• explores the patient’s
concerns…• the patient has
understood…• provides
opportunities…• preferred level…• An opportunity …• Arrangements are
made…
Yes X X X Primary care,
United
Kingdom
Physicians-
patients
Inter-rater reliability:
ICC = 0.62; Cohen's
kappa = 0.71;
Generalisability
coefficient = 0.68
Intra-rater reliability:
Generalisability
coefficient = 0.66.
Internal consistency
reliability:
Cronbach's alpha = 0.79
N interactions: 186
N raters: 2
Content validity:
items formulated
from existing
literature
Known groups
validity: scores
influenced by
patient age
(negative); sex of
clinician (positive
in favour of
female);
qualification of
clinician (positive),
and clinical
equipoise
(positive).
The Rochester
Participatory
Decision-Making
Scale (RPAD)
( Shields et al.,
2005 )
Shared
decision
making
Rating scale, 9 items
• Explain the clinical
issue or nature of the
decision • Discussion of the
uncertainties
associated with the
situation • Clarification of
agreement • Examine barriers to
follow-through with
treatment plan • Physician gives patient
opportunity to ask
questions and checks
patients understanding
of the treatment plan • Physician gives patient
opportunity to ask
questions and checks
patients understanding
of the treatment plan • Physician asks, “Any
questions?”• Physician asks
open-ended questions • Physician checks
his/her understanding
of patient’s point of
view
Unclear X X Primary care,
USA
Physicians-
patients
Inter-rater reliability:
ICC = 0.72
N interactions: 193
N raters: NR
Concurrent
validity:
correlation with
other measure
(MPCC, dimension
finding common
ground) r = 0.19.
Correlation with
standardized
patient perceptions
( r = 0.32–0.36)
and patient survey
measures
( r = 0.06–0.07).
Modified version
of The Measure of
Patient-Centered
Communication
(MPCC)
( Dong et al., 2014 )
Person-
Centred
Communica-
tion
Coding system
• Subcomponent 1:
Coping with ideas, feelings
and expectations of
radiotheraphy
• Subcomponent 2:
Receiving clear
information regarding
treatment, side effects,
and effect on function.
Unclear X X X X Radiotherapy
context,
Australia
Physicians-
patients
Inter-coder reliability:
Krippendorff’s α for
process categories = 0.86.
Internal consistency
reliability: Cronbach's
alpha = 0.48
N interactions: 56
N raters: NR
Content validity:
Panel of radiation
therapists and PCC
researchers.
Concurrent
validity:
Comparison with
other measure
(Patient-perceived
patient-
centredness),
Pearson
correlation = 0.01
The
patient-Centered
Observation Form
(PCOF)
( Chesser et al.,
2013 ; Schirmer
et al., 2005 )
Person-
centred
communica-
tion
Checklist (form) 13
categories
• Establishes Rapport • Maintains a
Relationship
Throughout the Visit • Collaborative Upfront
Agenda Setting • Maintains Efficiency • Gathering Information • Assessing Patient’s
Perspective on Health • Electronic Medical
Record Use • Physical Exam
• Sharing Information • Behaviour Change
discussion • Informed Decision
Making • Shared Decision
Making • Closure and Follow-up
Yes X X X X X X Family
medicine
residency
centre, USA
Physicians-
patients
Inter-rater reliability:
Cronbach's alpha = 0.67
N interactions: 13
N raters: 4
Validity: NR
The Process of
Interactional
Sensitivity Coding
in Healthcare
(PISCH)
( Sabee et al., 2020 )
Person-
centred
communica-
tion
Coding system, 7
categories:
• Exchanging
information • Fostering relationships • Managing uncertainty • Meta-communication • Recognizing and
responding to
emotions • Making decisions • Enabling
self-management
Yes X X X X Type 2
diabetes
consultations,
USA
Physicians-
patients
Inter-rater reliability:
Cohen's kappa = 0.46–0.72;
Scott's pi = 0.44–0.72
N interactions: 50
N raters: NR
Face validity:
review by panel of
experts
Modified version
of The Roter
Interaction
Analysis System
(RIAS), ARCS
( Mjaaland and
Finset, 2009 )
Person-
centred
communica-
tion
Coding system
• Social talk • Biomedical questions • Biomedical
information • Questions about
lifestyle and
psychosocial issues • Information about
lifestyle and
psychosocial issues • Gives orientation • Facilitation • Empathy • Shows disapproval • Residual, Unintelligible • Attribution (ARCS) • Resources (ARCS) • Coping (ARCS) • Solution-focused
techniques (ARCS)
Unclear X X X Primary care,
Norway
Physicians-
patients
Inter-rater reliability
(Cohen's kappa): 0.52
N interactions: 145
N raters: 5
Concurrent
validity:
correlation with
other measure
(RIAS). No
misclassification
between RIAS
codes and ARCS
codes.
Four Habits Coding
Scheme (4HCS)
( Krupat et al.,
2006 ; Frankel and
Stein, 2001 )
Person-
centred
communica-
tion
Coding system
• Habit 1: Invest in the
Beginning • Habit 2: Patient’s
Perspective • Habit 3: demonstrate
Empathy • Habit 4: Invest in the
End
Unclear X X X Primary care,
USA
Physicians-
patients
Inter-rater reliability
(Pearson correlation):
Habit 1 = 0.70, Habit
2 = 0.80, Habit 3 = 0.71,
Habit 4 = 0.69, Overall
0.72
Internal consistency
reliability (Cronbach's
alpha): Habit 1 = 0.71,
Habit 2 = 0.51, Habit
3 = 0.81 and Habit
4 = 0.61
N interactions: 13
N raters: 2
Concurrent
validity:
correlation with
other measure
(RIAS).
Habit
1 = −0.07–0.28
Habit
2 = 0.08–0.37
Habit
3 = −0.01–0.37
Habit
4 = 0.01–0.21
The Nonverbal
Accommodation
Analysis System
(NAAS)
( D’Agostino and
Bylund, 2011
October ;
D’Agostino and
Bylund, 2014 )
Nonverbal
person-
centred
communica-
tion
Coding system, 10 codes
Paraverbal and Non-verbal
Yes X Oncology
consultations,
USA
Physicians-
patients
Inter-rater reliability
(Pearson correlation):
paraverbal = 0.81–0.96;
nonverbal = 0.85–0.93
Intra-rater reliability
(Pearson correlation):
paraverbal = 0.82–1.0;
non-verbal = 0.89–0.94
N interactions: 10
N raters: 2
Concurrent
validity:
correlation with
other measure
(MIPS): physician
eye contact = 0.45;
patient eye
contact = 0.62.
Adapted version of
the Burgoon and
Hale Relational
communication
scale for
observational
measurement
(RCS-O)
( Gallagher et al.,
2001 )
Nonverbal
person-
centred
communica-
tion
Rating scale, 34 items, 6
categories:
• Immediacy/affection • Similarity/depth • Receptivity/trust • Composure • Formality • Dominance
Unclear X Primary care,
USA
Physicians-
patients
Inter-rater-reliability
(Cronbach's alpha):
Immediacy/affection = 0.62;
Similarity/depth = 0.51;
Receptivity/trust = 0.72;
Composure = 0.69;
Formality = 0.02;
Dominance = 0.34
Internal consistency
(Cronbach's alpha):
Immediacy/affection = 0.95;
Similarity/depth = 0.84;
Receptivity/trust = 0.94;
Composure = 0.98;
Formality = 0.92;
Dominance = 0.60
Inter-rater-agreement
(within group agreement
coefficient):
Immediacy/affection =
0.65;
Similarity/depth = 0.72;
Receptivity/trust = 0.86;
Composure = 0.74;
Formality = 0.58
Dominance = 0.78
N interactions: 20
N raters: 3
Concurrent
validity:
correlation with
other measure
(Interview Rating
Scale):
Immediacy/affection = 0.65;
Similarity/depth = 0.50;
Receptivity/trust = 0.76;
Composure = 0.62;
Formality = −0.31;
Dominance = −0.26
1 Refers to several different frameworks.
Fig. 1. PRISMA 2009 flow diagram. ¹) (i) Not a direct observation tool; (ii) no reports and/or descriptions of any development or evaluation of an instrument that measures patient-centred care, PCC or person centredness; (iii) not clinical encounters.
and specialists in general medicine, internal medicine, rheuma-
tology and gastro-enterology. Four raters (social scientists) coded
the sessions. High inter-rater reliability was noted for the Facili-
tating behaviour dimension (relative agreement, ICC = 0.93 and ab-
solute agreement ICC = 0.92) while it was moderate for the In-
hibiting behaviour dimension (ICC = 0.53 and ICC = 0.53, respectively)
( D'Agostino and Bylund, 2014 ). Internal consistency was assessed by Cronbach's alpha (Facilitating behaviour dimension = 0.64 and Inhibiting behaviour dimension = 0.50). Convergent validity was tested against the Eurocommunication scale where all correlations were in the expected directions (positive for the facilitating dimension (r = 0.28) and negative for the inhibiting dimension (r = −0.29)) (Zandbelt et al., 2005).

SOS-PCC is a 9-item coding system assessing global person-centred care in four dimensions (i.e. biological aspects, establish a therapeutic relationship and provides a treatment plan in collaboration with the patient).
Table 2
Summary table of reliability and validity in the included studies. (Reliability columns: Inter-rater; Test-retest; Internal consistency. Validity columns: Face; Content; Concurrent; Known groups.)

Person-centred care/centredness
COT (Gaugler et al., 2013): Inter-rater; Face; Content
DOC (Bertakis and Azari, 2011): none reported
Henbest and Stewart (Henbest and Stewart, 1989): Inter-rater; Test-retest; Concurrent
PBCI (Zandbelt et al., 2005): Inter-rater; Internal consistency; Concurrent
SOS-PCC (Paul-Savoie et al., 2015): Inter-rater; Internal consistency; Content

Shared decision-making
IDM (Braddock et al., 1997): Inter-rater
DEEP-SDM (Clayman et al., 2012): none reported
OPTION (Elwyn et al., 2003): Inter-rater; Test-retest; Internal consistency; Content; Known groups
RPAD (Shields et al., 2005): Inter-rater; Concurrent

Person-centred communication
MPCC (Dong et al., 2014): Inter-rater; Internal consistency; Content; Concurrent
PCOF (Chesser et al., 2013; Schirmer et al., 2005): Inter-rater; Internal consistency
PISCH (Sabee et al., 2020): Inter-rater; Face
Modified RIAS (Mjaaland and Finset, 2009): Inter-rater; Concurrent
4HCS (Krupat et al., 2006; Frankel and Stein, 2001): Inter-rater; Internal consistency; Concurrent

Nonverbal person-centred communication
NAAS (D'Agostino and Bylund, 2011 October; D'Agostino and Bylund, 2014): Inter-rater; Test-retest; Content; Concurrent
RCS-O (Gallagher et al., 2001): Inter-rater; Internal consistency; Concurrent
Items are scored on a four-step Likert scale ranging from 1 (not demonstrated) to 4 (strongly demonstrated). An expert panel developed 5 videos and the SOS-PCC was tested in a sample of 21 registered nurses and 21 physicians working with chronic pain patients. The inter-rater reliability between three observers (one registered nurse, a resident in psychiatry and a PhD student in the health care field) was good (ICC = 0.93). Content validity was considered to be satisfactory by a panel of seven interdisciplinary experts in the health care field and internal consistency reliability was high (Cronbach's alpha = 0.88) (Paul-Savoie et al., 2015).
4.3.2. Shared decision-making
IDM is a coding system assessing shared decision making and informed consent. It has six key elements (e.g., discussion of risks and benefits) and scores are determined for each consultation decision. Each decision is rated on each element from 0 to 2 and then aggregated over the six elements to a total score. Inter-rater reliability was assessed in a sample of 20 audiotaped encounters rated by three coders (one physician and two graduate students in medical ethics) and the complete agreement percentage on a given decision was 77%. The developers have reported that although preliminary analyses indicate good validity, it needs further examination (Braddock et al., 1997).
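Stated as a formula (a restatement of the rule above; the example numbers are hypothetical), the IDM total for one decision is

\[ \text{IDM} = \sum_{e=1}^{6} s_e, \qquad s_e \in \{0, 1, 2\}, \]

giving a possible range of 0–12 per decision; element scores of 2, 1, 0, 2, 1 and 1, for instance, sum to 7.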
DEEP-SDM is a coding system with ten coding frames (e.g., definition of option, patient preferences and values, and plan for follow-up) for assessing essential elements of shared decision making. Decision making is coded from 1 (doctor led) to 9 (patient led) for each decision. Coding was conducted by two research assistants using a sample of 20 video-recordings of 20 women at visits with their medical oncologist. Validity and reliability were not presented (Clayman et al., 2012).
OPTION is a 12-item checklist assessing how well clinicians involve patients in decision making. An example item is "The clinician identifies a problem(s) needing a decision making process". Items are rated against a 5-point scale (strongly agree-disagree). Inter-rater reliability was acceptable in a random sample of 21 consultations rated by two non-clinical raters (inter-rater generalisability coefficient = 0.68; Cohen's kappa = 0.71; intra-class coefficient = 0.62), as was intra-rater reliability (0.66) (D'Agostino and Bylund, 2011 October). Content validity was based on literature reviews, qualitative studies and consultations with patients and clinicians. Construct validity was supported by showing that OPTION was sensitive to differences in clinician age, sex, qualification and clinical topic (Elwyn et al., 2003).
RPAD is a 9-item rating scale assessing patient-physician collaborative decision making. Example items include "Physician's medical language matches patient's level of understanding" and "Physician gives patient opportunity to ask questions and checks patient's understanding of the treatment plan". Items are rated against a 3-point Likert scale and item ratings are summed to a total score. Construct validity was tested against another measure (MPCC) resulting in a modest correlation (r = 0.19). RPAD correlated with standardized patient's perceptions of the physician-patient relationship (r = 0.32–0.36) but less with the patient survey measures (r = 0.06 to 0.07). Inter-rater reliability was shown to be good (ICC = 0.72) in comparisons of ratings of 193 recordings from physician-patient encounters (Shields et al., 2005).
4.3.3. Person-centred communication
The modified version of MPCC is a coding system assessing person-centred communication. The tool comprises components and subcomponents which categorise physicians' provision of information and responses to patients' verbal 'offers' regarding symptoms, ideas, expectations, feelings and side effects of treatment and effect on function. The first subcomponent consists of
six items which are rated on a 6-point scale for depth of dis-
cussion and the second subcomponent consists of nine items. Re-
liability and validity were examined in 56 recorded radiother-
apy consultations. Ten percent of the consultations were dual-
coded and inter-coder reliability was high (Krippendorff's α = 0.86). Internal consistency reliability was also good (Cronbach's alpha = 0.48). The MPCC correlated weakly with patient-perceived person-
centeredness (PPPC) ( r = 0.01) ( Dong et al., 2014 ).
PCOF is a 13-item checklist assessing person-centred commu-
nication in 13 different categories with three different options
reflecting person-centred behaviours. For example the category
Gathering Information has 3 options: uses open-ended questions, uses
reflecting statement, uses summary/clarifying statement). Scores rep-
resent the total number of options endorsed. Four raters (two
physicians and two PhD social science researchers) observed 13
encounters in a family medicine residency centre resulting in an
overall inter-rater reliability of 0.67 (Cronbach’s alpha). Reliabil-
ity estimates also differed by discipline. Validity was not reported
( Chesser et al., 2013 ; Schirmer et al., 2005 ).
PISCH is a coding system assessing interactional sensitivity in
person-centred communication. It comprises 7 interaction units in-
cluding multiple codes. Scores are expressed as a percentage of
the units. Inter-coder reliability was assessed using a sample of 50
transcripts of conversations between physicians and patients with
type 2 diabetes. Reliability for the PISCH units was estimated to
0.46–0.72 (Cohen's kappa) and 0.44–0.72 (Scott's pi). Four of the
categories (exchanging information, fostering relationships, man-
aging uncertainty and meta-communication) reached acceptable
inter-coder reliability. A panel of experts (health communication
and/or discourse analysis) reviewed the coding scheme to test the
face validity of the instrument ( Sabee et al., 2020 ).
A modified version of RIAS, ARCS is a coding system assess-
ing person-centred communication. The RIAS includes 10 cate-
gories (e.g. social talk, biomedical questions and empathy) and the
ARCS supplements these with four additional person-centred cat-
egories: Attribution, Resources, Coping and Solution-focused tech-
niques (ARCS). Verbal ‘utterances’ by both the doctor and patient
are coded into each category. Scores represent the frequencies of
utterances in the different 10 categories. The inter-rater reliabil-
ity was fair (Cohen's kappa = 0.52) between the five coders (re-
searchers experienced in the communication skills training) us-
ing 145 video-recorded consultations with 24 general practitioners
( Arora, 2003 ). Construct validity was assessed by a correlation with
the RIAS measure where no misclassifications across the general
RIAS and ARCS codes were identified ( Mjaaland and Finset, 2009 ).
4HCS is a coding system assessing clinicians’ communication
skills. The 4HCS identifies 23 behaviours, each associated with one
of the four habits: Invest in the beginning, (six items); Patient’s
perspective (three items); Demonstrate empathy (four items) and
Invest in the end (10 items). Items are rated on a 5-point scale.
Construct validity was tested against other available measures.
Inter-coder reliability was assessed in 13 videotaped visits in pri-
mary care that were rated independently by two coders (health
professions students). Reliability estimates were satisfactory (0.69–
0.80 for the four habits (overall 0.72)). Internal consistency reliabil-
ity assessed using Cronbach’s alpha was satisfactory for two habits
(Habit 1 = 0.71 and Habit 3 = 0.81) and unsatisfactory for the
other two (Habit 2 = 0.51 and Habit 4 = 0.61). Evidence for con-
struct validity was provided by correlations between 4HCS ratings,
RIAS, back channel responses, and non-verbal measures ( Krupat
et al., 2006 ; Frankel and Stein, 2001 ).
4.3.4. Nonverbal person-centred communication
NAAS is a coding system assessing nonverbal communication.
The tool consists of two codes (paraverbal and non-verbal) which
include ten behaviour categories (e.g., talk-time, smiling, eye con-
tact). NAAS categories are coded in one-minute segments. A baseline score is first calculated as (1) an average of the physician's NAAS behaviours during the first two segments in the consultation; and (2) an average of the patient's NAAS behaviours during the corresponding segments. Subsequent segments are compared with initial segments to reveal if the physician and patient changed within each NAAS behaviour and, if so, in what direction. Inter- and intra-rater reliability was assessed in ten randomly selected oncology consultations coded by two raters. Inter-rater reliability for the ten NAAS categories in the paraverbal and non-verbal codes was satisfactory (Pearson correlation = 0.81–0.96 and 0.85–0.93, respectively). Intra-rater reliability was 0.82–1.0 (paraverbal) and 0.89–0.94 (non-verbal). Construct validity was assessed with the MIPS measure showing correlations with physician eye contact (r = 0.447) and patient eye contact (r = 0.623) (D'Agostino and Bylund, 2011 October; D'Agostino and Bylund, 2014).
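A compact way to express the baseline-and-comparison logic described above (the notation is ours, not the developers'): with \(b_t\) denoting a physician's score on a given NAAS behaviour in segment t,

\[ B = \tfrac{1}{2}(b_1 + b_2), \qquad \Delta_t = b_t - B \quad (t \geq 3), \]

so the sign of \(\Delta_t\) indicates whether, and in which direction, the physician's behaviour changed relative to the opening of the consultation; the patient's behaviours are treated analogously.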
This adapted version of the RCS-O is a 34-item rating scale assessing six dimensions of nonverbal communication. Items are rated on a 7-point Likert scale ranging from "strongly disagree" to "strongly agree" and item ratings are summed to dimension scores. Inter-rater reliability was assessed using a random sample of 20 videotaped interactions of medical students interviewing patients. Interactions were rated by three trained observers (social and behavioural scientists) at two time points separated by eight weeks. At time one, four of the six dimensions (immediacy/affection, similarity/depth, receptivity/trust and composure) indicated fair to excellent internal consistency (0.84–0.98), inter-rater reliability (0.51–0.72), inter-rater agreement (0.65–0.86) and construct validity (correlation with the interview rating scale (IRS)) (0.50–0.76). The estimates were similar at time two (eight weeks later), except that the inter-rater reliability for receptivity/trust then was low (0.19). Internal consistency was also strong for the formality dimension (0.91–0.92) but moderate for the dominance dimension (0.60–0.66). Inter-rater reliability was low for both formality (−0.11–0.02) and dominance (0.34–0.50) and even if inter-rater agreement was good (0.58–0.86) these two dimensions were suggested to be dropped or revised. Formality and dominance both correlated negatively with the IRS (−0.26 to −0.31) (Gallagher et al., 2001).
5. Discussion and conclusion

This review identified a manifold of direct-observation methods for use in assessing PCC, varying in assessed main constructs; assessment formats; conceptual frameworks underpinning their development; coverage of core PCC dimensions; and psychometric evaluations and performance.

Given the multitude and variation in existing PCC frameworks and concepts, it is important that developers of assessment tools carefully articulate the conceptual underpinnings of their tool (Hallgren, 2012). Although clear descriptions were provided for some methods, in 9 of the 16 tools we were nevertheless unable to ascertain with any certainty what conceptual framework(s) was applied or developed to guide their construction. Furthermore, content analyses were seldom performed to affirm if the content of the methods adequately represents the construct(s) intended to be assessed. Similar shortcomings have previously been reported in quantitative measures for assessing patient-centred communication (Epstein et al., 2005). In the absence of information about the assumptions underlying a tool and theoretical and/or empirical support for those assumptions, its validity and utility as a measure for assessing PCC may be questioned.

Given that PCC assessment tools often embody different concepts, apply different nomenclature to designate often similar concepts (Arora, 2003; Mead and Bower, 2000 Jan) or seemingly
lar concepts that are operationalized differently ( Epstein et al.,
005 ), we examined the content of each tool in relation to a
tandard framework in order to identify conceptual commonalities
nd differences between the tools. For this purpose we chose the
ix IOM-endorsed dimensions covering aspects of PCC ( Institute of
edicine 2001 ). We found that only two tools, the DOC ( Bertakis
nd Azari, 2011 ) and PCOF ( Chesser et al., 2013 ), appeared to cover
ll six dimensions, whereas coverage by the remaining tools varied
rom as few as one to a maximum of four dimensions. Interestingly
he DOC was the only tool of the five purporting to measure the
olistic concept PCC or patient-centeredness to tap all IOM dimen-
ions. The other four holistic tools were judged to include between
hree and four dimensions, indicating both notable conceptual dif-
erences between methods and significant gaps in assessment do-
ains. This is in line with findings by Mead & Bower ( Mead and
ower, 20 0 0 Jan ) showing that PCC assessed by means of three
ifferent direct-observation tools (an adapted RIAS, Eurocommuni-
ation and Henbest & Stewart instrument) ( Mead and Bower, 20 0 0
an ) correlated poorly, which the authors suggested indicate that
hey tap distinct aspects of doctor-patient interactions.
Similarly, dimension coverage varied in the four tools (each tool
ncluded between two and three dimensions) designed specifically
o assess shared decision making. For example, IDM ( Braddock
t al., 1997 ) was judged to tap Involvement of family and friends ,
hereas OPTION ( Elwyn et al., 2003 ) instead covered Emotional
upport . This difference may in part account for weak correlations
eported between these measures ( Weiss and Peters, 2008 ). On a
imilar note, the five tools assessing patient-centred verbal com-
unication also appeared to differ somewhat in dimension cover-
ge, although commonalities in coverage were apparent. PCOF ap-
eared to tap all six IOM dimensions and the others between three
nd four dimensions. With content suggesting three common un-
erlying concepts, substantial agreement between these communi-
ation assessment tools may be expected; however, to our knowl-
dge the only head-on-head evaluation of these methods has been
ade between the MPCC and the 4HCS, where weak correlations
ere found ( Clayton et al., 2011 ). The two identified tools for as-
essing nonverbal communication (NAAS ( D’Agostino and Bylund,
014 ) and RSC
–O ( Gallagher et al., 2001 ) were unsurprisingly, given
heir focus on behavioural aspects doctor-patient interactions, both
udged to tap a single IOM dimension, namely Respect for patients’
alues, preferences, and expressed needs.
Few of the tools appeared to cover the IOM dimensions Physi-
al comfort ( n = 4) or Involvement of family and friends ( n = 3). It
s noteworthy that these two dimensions have also been shown to
e the least frequently covered IOM domains in patient-reported
CC measures ( Tzelepis et al., 2015 ). This suggests that their im-
ortance as independent core elements of PCC may either be over-
tated in the IOM framework or undervalued in assessment tools.
or example, providing patients with physical comfort may be seen
o lie within the realm of standard professional ethics and good
edical practice rather than as a component distinctly characteriz-
ng PCC. On the other hand, involving family and friends in patient
are may serve to further the aims of PCC in that they may facil-
tate patient-clinician communication, provide emotional support
o the patient during consultations and at home and help to ne-
otiate medical decision making ( Rosland et al., 2011 ). Despite its
elatively meagre representation in the methods included in this
eview, involvement of family and friends has also been identified
s one of the six most frequently recurring domains found in nine
rominent PCC frameworks ( Cronin, 2004 ).
The comprehensiveness of reported psychometric evaluations
aried substantially between tools. No evaluations were reported
or two tools (DEEP-SDM ( Clayman et al., 2012 ) and DOC ( Bertakis
nd Azari, 2011 ), whereas relatively rigorous reliability and va-
idity evaluations were performed for the OPTION, MPCC and
AAS. Reliability was evaluated in terms of inter-observer reliabil-
ty (IOR) for 12 of the 16 tools, all of which reported IOR estimates
eeting conventional cutoffs for moderate to high agreement be-
ween/among raters. Cases of moderate IOR associated with the
odified RIAS, PISCH and RSC
–O were attributed to shortfalls in
he tool related to ambiguities in concept definitions or inade-
uacies in defining target behaviours, or inclusion of behaviours
hat are not readily observable ( Sabee et al., 2020 ; Mjaaland and
inset, 2009 ; Gallagher et al., 2001 ). Although IOR is often un-
erstood to be a characteristic inherent to an assessment tool, it
ay be influenced by a number of methodological factors unre-
ated to the observation tool per se, such as sample homogeneity,
nit of analysis ( Hallgren, 2012 ), extent of rater training ( Mead and
ower, 20 0 0 Jan ), raters’ professional background ( Chesser et al.,
013 ) and other rater characteristics/ idiosyncrasies ( Kogan et al.,
017 ), etc. For example, although checklists are assumed to im-
rove IOR because they provide relatively unambiguous behavioral
efinitions ( Regehr et al., 1998 ) we noted that some of the cod-
ng systems, e.g., Henbest & Stewart and SOS-PCC actually outper-
ormed the two checklists PCOF and COT. It is conceivable that in
hese cases extensive rater training in using the coding schemes
ay have offset the effect of greater clarity in target behaviour
efinitions.
It is noteworthy that other important types of reliability were
ess frequently reported for the various tools. For example, test-
etest reliability, reflecting the stability of ratings over time, was
eported for only 3 of the 16 tools, and homogeneity, as measured
ith e.g. Cronbach’s alpha, was reported for 7 of them. In the ab-
ence of comprehensive evaluations of reliability it is not possi-
le to fully appraise the precision to be expected of codings. Also
oteworthy is that power calculations for estimating samples sizes
sed in testing reliability (and validity) were reported for only one
ool, the SOC-PCC, undermining confidence in the estimates.
Validity assessments were reported for all tools except the DOC, IDM, DEEP-SDM and PCOF (Bertakis and Azari, 2011; Braddock et al., 1997; Clayman et al., 2012; Chesser et al., 2013). Concurrent validity was most commonly evaluated, but notably many correlations were very low. Content validity was, by and large, evaluated by panels of experts; however, the experts' areas of expertise were often not specified. Paradoxically, we found no mention of patients taking part in the evaluation process. Moreover, it is noteworthy that other measurement characteristics that are critical for appraising and interpreting outcomes, such as responsiveness and sensitivity, were conspicuously absent in evaluations of the various methods. Likewise, predictive validity was not assessed for any method.
All of the identified methods have their own particular strengths and weaknesses, and decisions about which one to use will invariably involve some amount of tradeoff. Ultimately, the choice of tool will depend on the purpose for which it is to be used. For formative use to guide learning by providing benchmarks to orient and give feedback to reinforce learners, all of the tools may in some way be useful. However, at a minimum, evidence supporting content validity of the tool should be available to ensure that the sampled behaviours comprehensively and accurately represent the construct(s) of interest. On the other hand, for high-stakes, summative use, tools with a clear conceptual base and demonstrated strong validity and reliability are needed. We cannot recommend any of the tools without some reservations; however, of the tools assessing holistic PCC, the Henbest & Stewart instrument (Henbest and Stewart, 1989) appears promising given its well-defined conceptual core and relatively good psychometric performance. It has not, however, been evaluated for content validity and only three IOM dimensions were covered. Of the tools for assessing shared decision making, OPTION (Elwyn et al., 2003) is one of the two with a defined conceptual framework and stands out as
the most rigorously tested tool. Nevertheless, inter-rater reliability estimates were moderate and only three IOM dimensions were covered. Of the patient-centred communication tools, the MPCC (Dong et al., 2014) performed best in terms of inter-rater reliability, and content validity was evaluated; however, its conceptual framework was unclear and concurrent validity was not supported. Finally, of the two tools intended to measure nonverbal patient-centred communication, the NAAS (D'Agostino and Bylund, 2014) clearly outperformed the RCS-O (Gallagher et al., 2001) on every count.
5.1. Limitations
There are limitations to this review. The eligibility of tools for inclusion and their coverage of the IOM domains were subjectively judged, but they were thoroughly discussed by the co-authors until agreement was reached. Our main search was limited principally to articles identified through the search string; subsequent evaluations of the included tools were not sought, and no attempt was made to contact developers for additional information about their tools. On the other hand, after a tool was identified in the search, we searched the literature for earlier articles describing the development of the tools.
6. Conclusions
Differences in the scope, coverage and content of the identified direct observation tools likely reflect the complexity of PCC and the lack of consensus in defining this concept. Greater clarity and transparency are needed in describing the conceptualizations and assumptions underpinning the tools, to enable potential users to make informed decisions when selecting tools. Although all of the tools may serve formative purposes, evidence supporting their use in summative evaluations is scarce. Patients were not involved in the development of any of the assessment tools, which seems intrinsically paradoxical given the aims of PCC. All of the tools may be useful for providing trainee feedback; however, rigorously tested and patient-derived tools are needed for high-stakes use.
Funding sources
This work was supported by the Centre for Person-Centred Care at the University of Gothenburg (GPCC), Sweden. GPCC is funded by the Swedish Government's grant for Strategic Research Areas, Care Sciences (Application to Swedish Research Council no. 2009-1088) and co-funded by the University of Gothenburg, Sweden.
Declaration of Competing Interest
None.
References
Agency for Healthcare Research and Quality (AHRQ), 2003. National Healthcare Quality Report.
Accreditation Council for Pharmacy Education (ACPE), 2015. Accreditation standards and key elements for the professional program in pharmacy leading to the doctor of pharmacy degree. https://www.acpe-accredit.org/pdf/Standards2016FINAL.pdf
Arora, N.K., 2003. Interacting with cancer patients: the significance of physicians' communication behaviour. Soc. Sci. Med. 57, 791–806. doi:10.1016/S0277-9536(02)00449-5.
Australian Commission on Safety and Quality in Health Care (ACSQHC), 2011. Patient Centred Care: Improving Quality and Safety Through Partnership with Patients and Consumers. ACSQHC, Sydney, Australia.
Bertakis, K.D., Azari, R., 2011. Determinants and outcomes of patient-centred care. Patient Educ. Couns. 85, 46–52.
Braddock, C.H., Fihn, S.D., Levinson, W., Jonsen, A.R., Robert, A., Pearlman, R.A., 1997. How doctors and patients discuss routine clinical decisions: informed decision making in the outpatient setting. J. Gen. Intern. Med. 12, 339–345.
Chesser, A., Reyes, J., Woods, N.K., Williams, K., Kraft, R., 2013. Reliability in patient-centred observations of family physicians. Fam. Med. 45 (6), 428–432.
Clayman, M.L., Makoul, G., Harper, M.M., Koby, D.G., Williams, A.R., 2012. Development of a shared decision making coding system for analysis of patient-healthcare provider encounters. Patient Educ. Couns. 88, 367–372.
Clayton, M.F., Latimer, S., Dunn, T.W., Haas, L., 2011. Assessing patient-centred communication in a family practice setting: how do we measure it, and whose opinion matters? Patient Educ. Couns. 84, 294–302.
Constand, M.K., MacDermid, J.C., Bello-Haas, V.D., Law, M., 2014. Scoping review of patient-centred care approaches in healthcare. BMC Health Serv. Res. 14, 271.
Cronin, C., 2004. Patient-Centered Care: An Overview of Definitions and Concepts. Prepared for the National Health Council.
D'Agostino, T.A., Bylund, C.L., 2011. The Nonverbal Accommodation Analysis System (NAAS): initial application and evaluation. Patient Educ. Couns. 85 (1), 33–39.
D'Agostino, T.A., Bylund, C.L., 2014. Nonverbal accommodation in healthcare communication. Health Commun. 29 (6), 563–573.
Dong, S., Butow, P.N., Costa, D.S., Dhillon, H.M., Shields, C.G., 2014. The influence of patient-centred communication during radiotherapy education sessions on post-consultation patient outcomes. Patient Educ. Couns. 95, 305–312.
Dwamena, F., Holmes-Rovner, M., Gaulden, C.M., Jorgenson, S., Sadigh, G., Sikorskii, A., Lewin, S., Smith, R.C., Coffey, J., Olomu, A., Beasley, M., 2012. Interventions for providers to promote a patient-centred approach in clinical consultations. Cochrane Database Syst. Rev., Issue 12.
Elwyn, G., Edwards, A., Wensing, M., Hood, K., Atwell, C., Grol, R., 2003. Shared decision making: developing the OPTION scale for measuring patient involvement. Qual. Saf. Health Care 12, 93–99.
Epstein, R.M., Franks, P., Fiscella, K., Shields, C.G., Meldrum, S.C., Kravitz, R.L., Duberstein, P.R., 2005. Measuring patient-centred communication in patient-physician consultations: theoretical and practical issues. Soc. Sci. Med. 61, 1516–1528.
Frankel, R.M., Stein, T., 2001. Getting the most out of the clinical encounter: the four habits model. J. Med. Pract. Manag. 16 (4), 184–191.
Gallagher, T.J., Hartung, P.J., Gregory, S.W., 2001. Assessment of a measure of relational communication for doctor-patient interactions. Patient Educ. Couns. 45 (3), 211–218.
Gaugler, J.E., Hobday, J.V., Savik, K., 2013. The CARES(R) Observational Tool: a valid and reliable instrument to assess person-centred dementia care. Geriatr. Nurs. (Minneap.) 34, 194–198.
Goodrich, J., Cornwell, J., 2008. Seeing the Person in the Patient. The Point of Care Review Paper. King's Fund, London.
Håkansson Eklund, J., et al., 2018. "Same same or different?" A review of reviews of person-centred and patient-centred care. Patient Educ. Couns. Published online.
Hallgren, K., 2012. Computing inter-rater reliability for observational data: an overview and tutorial. Tutor. Quant. Methods Psychol. 8 (1), 23–34.
Henbest, R.J., Stewart, M.A., 1989. Patient-centredness in the consultation. 1: a method for measurement. Fam. Pract. 6 (4), 249–253.
ICN Framework of Competencies for the Generalist Nurse. International Council of Nurses, Geneva, 2003.
Institute of Medicine, 2001. Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. National Academy Press.
Institute of Medicine (US), 2003. Committee on the Health Professions Education Summit. In: Greiner, A.C., Knebel, E. (Eds.), Health Professions Education: A Bridge to Quality. National Academies Press (US), Washington, DC.
International Alliance of Patients' Organizations – A Global Voice for Patients, 2007. What is patient-centred healthcare? A review of definitions and principles. IAPO. http://iapo.org.uk/sites/default/files/files/IAPO%20Patient-Centred%20Healthcare%20Review%202nd%20edition.pdf (accessed September 2018).
Kogan, J.R., Hatala, R., Hauer, K.E., Holmboe, E., 2017. Guidelines: the do's, don'ts and don't knows of direct observation of clinical skills in medical education. Perspect. Med. Educ. 6, 286–305.
Kogan, J.R., Holmboe, E.S., Hauer, K.E., 2009. Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA 302, 1316–1326.
Krupat, E., Frankel, R., Stein, T., Irish, J., 2006. The Four Habits Coding Scheme: validation of an instrument to assess clinicians' communication behaviour. Patient Educ. Couns. 62, 38–45.
Lawrence, M., Kinn, S., 2012. Defining and measuring patient-centred care: an example from a mixed-methods systematic review of the stroke literature. Health Expect. 15, 295–326.
Mead, N., Bower, P., 2000. Measuring patient-centredness: a comparison of three observation-based instruments. Patient Educ. Couns. 39 (1), 71–80.
Mead, N., Bower, P., 2000. Patient-centredness: a conceptual framework and review of the empirical literature. Soc. Sci. Med. 51 (7), 1087–1110.
Mjaaland, T.A., Finset, A., 2009. Frequency of GP communication addressing the patient's resources and coping strategies in medical interviews: a video-based observation study. BMC Fam. Pract. 10 (49), 1–9.
Paul-Savoie, E., Bourgault, P., Gosselin, E., Potvin, S., Lafrenaye, S., 2015. Assessing patient-centred care for chronic pain: validation of a new research paradigm. Pain Res. Manag. 20 (4), 183–188.
Regehr, G., et al., 1998. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad. Med. 73 (9), 993–997.
Rosland, A.-M., Piette, J.D., Choi, H.J., Heisler, M., 2011. Family and friend participation in primary care visits of patients with diabetes or heart failure: patient and physician determinants and experiences. Med. Care 49 (1), 37–45.
Sabee, C.M., Koenig, C.J., Wingard, L., Foster, J., Chivers, N., Olsher, D., Vandergriff, I. The process of interactional sensitivity coding in health care: conceptually and operationally defining patient-centred communication. J. Health Commun. 20 (7), 773–782.
Schirmer, J.M., Mauksch, L., Lang, F., Marvel, K., Zoppi, K., Epstein, R.M., Brock, D., Pryzbylski, M., 2005. Assessing communication competence: a review of current tools. Fam. Med. 37 (3), 184–192.
Scholl, I., Zill, J.M., Harter, M., Dirmaier, J., 2014. An integrative model of patient-centeredness – a systematic review and concept analysis. PLoS ONE 9 (9), e107828.
Shields, C.G., Franks, P., Fiscella, K., Meldrum, S., Epstein, R.M., 2005. Rochester Participatory Decision-Making Scale (RPAD): reliability and validity. Ann. Fam. Med. 3, 436–442.
Socialstyrelsen (Swedish National Board of Health and Welfare), 2016. En mer tillgänglig och patientcentrerad vård: sammanfattning och analys av landstingens och regionernas handlingsplaner – delrapport [A more accessible and patient-centred care: summary and analysis of the county councils' and regions' action plans – interim report]. https://www.socialstyrelsen.se/Lists/Artikelkatalog/Attachments/20115/2016-3-22.pdf
Tzelepis, F., Sanson-Fisher, R.W., Zucca, A.C., Fradgley, E.A., 2015. Measuring the quality of patient-centred care: why patient-reported measures are critical to reliable assessment. Patient Prefer. Adherence 9, 831–835.
Van der Vleuten, C.P.M., 1996. The assessment of professional competence: development, research and practical implications. Adv. Health Sci. Educ. 1, 41–67.
Weiss, M.C., Peters, T.J., 2008. Measuring shared decision making in the consultation: a comparison of the OPTION and Informed Decision Making instruments. Patient Educ. Couns. 70, 79–86.
World Health Organization, 2005. The challenge of chronic conditions: preparing a health care workforce for the 21st century.
World Health Organization, 2006. Quality of care: a process for making strategic choices in health systems. http://apps.who.int/iris/bitstream/handle/10665/43470/9241563249_eng.pdf?sequence=1&isAllowed=y
World Health Organization, 2008. The world health report 2008: primary health care now more than ever: introduction and overview. www.who.int/whr/2008/en/index.html
Zandbelt, L.C., Smets, E.M., Oort, F.J., de Haes, H.C., 2005. Coding patient-centred behaviour in the medical encounter. Soc. Sci. Med. 61, 661–671.