
Page 1

What Does That Mean? Author Selection Of Virtual Patient Metrics

Rachel Ellaway, David Topps, Richard Witham
Northern Ontario School of Medicine

Page 2

Designs for learning

• PBL (problem-based learning)

• Simulation

• CAL (computer-assisted learning)

• OSCE (objective structured clinical examination)

• Games

• Virtual Patients

Design using patterns, methods, templates

Page 3

Virtual Patients

• “an interactive computer simulation of real-life clinical scenarios for the purpose of health professions training, education, or assessment. Users may be learners, teachers, or examiners” Ellaway, Candler et al. 2006

• Response to changing needs

• Technological possibilities

• Breadth of applications

• Current themes

Page 4

Problem Statement

“virtual patients can … be used in learner assessment, but scoring rubrics should emphasize non-analytical clinical reasoning rather than completeness of information or algorithmic approaches. Potential variations in VP design are practically limitless, yet few studies have rigorously explored design issues”

Cook, D. and Triola M. (2009) Virtual Patients: a critical literature review and proposed next steps. Medical Education (in press)

Page 5

Metrics, assessment and feedback

• Educational game = rules + experience + simulation + educational design

• Educational design: objectives/intervention/feedback/outcomes

• Formative: feedback at decision point, outcomes guided

• Summative: feedback at end of activity, outcomes unguided (see the sketch after this list)

• Game rules: agency, feedback, assessment
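
A minimal sketch of the formative/summative distinction above as it might apply to VP metrics; the class and field names are hypothetical and not drawn from any particular VP system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Decision:
    node_id: str
    choice: str
    correct: bool
    feedback: str

@dataclass
class Session:
    mode: str                                        # "formative" or "summative"
    decisions: List[Decision] = field(default_factory=list)

    def record(self, decision: Decision) -> Optional[str]:
        """Log a decision; in formative mode, return its feedback immediately."""
        self.decisions.append(decision)
        if self.mode == "formative":
            return decision.feedback                 # feedback at the decision point
        return None                                  # withheld until the end

    def summary(self) -> str:
        """Summative feedback: delivered only at the end of the activity."""
        score = sum(d.correct for d in self.decisions)
        return f"{score}/{len(self.decisions)} key decisions taken correctly"
```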

Page 6

OpenLabyrinth

• Pattern-based: medical model

• Narrative-based: timeline, character, motive, causality

• Game-based: branching, strategy, scores, counters and rules

OpenLabyrinth is an open-source VP authoring, delivery and feedback system. It supports all three forms, although without strong templating it is more useful for narrative- and game-based VPs.

Page 7

Different kinds of OL VP metrics

• Reaching end point(s)

• Time taken

• Number of steps taken

• Patient model – survival, pulse, BP

• Professional model – DDx (differential diagnosis), Rx (treatment)

• Other counters (keys, strength, chance factors)

• Steps/areas visited or avoided

• Sequence of steps

• Confidence of decision

• Aggregate/function of some or all of the above (a computation sketch follows)
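
A minimal sketch of how several of the metrics listed above could be computed from a recorded session trace; the trace format and names are assumptions for illustration, not OpenLabyrinth's actual data model.

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class Step:
    node_id: str
    timestamp: float               # seconds since session start
    counters: Dict[str, float]     # e.g. {"pulse": 82, "bp_sys": 120, "score": 3}

def session_metrics(steps: List[Step], end_nodes: Set[str], required: Set[str]) -> dict:
    """Derive several of the listed metrics from an ordered session trace."""
    visited = [s.node_id for s in steps]
    return {
        "reached_end": visited[-1] in end_nodes,         # reaching an end point
        "time_taken": steps[-1].timestamp - steps[0].timestamp,
        "steps_taken": len(steps),                       # number of steps taken
        "nodes_visited": set(visited),                   # steps/areas visited
        "required_missed": required - set(visited),      # steps/areas avoided
        "sequence": visited,                             # sequence of steps
        "final_counters": steps[-1].counters,            # e.g. patient-model values
    }
```

Most of these metrics reduce to simple functions over such a trace; the harder question, taken up in the later slides, is which of them validly reflect performance.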

Page 8

OL metrics (time and sequence)

[Screenshots: session report highlighting the sequence of key nodes and the time taken per decision]
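
Assuming per-node timestamps (reusing the Step record from the previous sketch), time per decision and key-node order could be derived roughly as follows; a hypothetical sketch, not OpenLabyrinth's actual report logic.

```python
from typing import Dict, List, Set

def time_per_decision(steps: List[Step]) -> Dict[str, float]:
    """Seconds spent at each node before the next choice was made
    (if a node is revisited, the last visit wins in this simple version)."""
    return {a.node_id: b.timestamp - a.timestamp for a, b in zip(steps, steps[1:])}

def key_node_sequence(steps: List[Step], key_nodes: Set[str]) -> List[str]:
    """The key nodes in the order the candidate reached them."""
    return [s.node_id for s in steps if s.node_id in key_nodes]
```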

Page 9

OL metrics (counters and sequence)

[Screenshots: counter value charts showing change over time (trend, max, min) and end values]
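
Under the same assumed trace format, counter-based summaries (end value, maximum, minimum, overall change) can be read straight off the counter series; again a sketch, not the actual reporting code.

```python
from typing import Dict, List

def counter_summary(steps: List[Step], name: str) -> Dict[str, float]:
    """Summarise one counter over the session: end value, extremes and overall change."""
    series = [s.counters[name] for s in steps if name in s.counters]
    return {
        "end": series[-1],                    # end value
        "max": max(series),
        "min": min(series),
        "change": series[-1] - series[0],     # crude trend; a fitted slope would also work
    }
```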

Page 10

OpenLabyrinth Design

OL three design dimensions: narrative, simulation, game

OL three implementation dimensions: topology, rules, content

Authoring process:

• Deductive – objectives > key points > narrative > CSP (critical story path) > branching > rules > media

• Inductive – narrative > CSP > branching > key points > objectives > rules > media

Recurring issues with best use of metrics

Page 11

Author selection of metrics

• Focus on critical story path, branching distractors, or multiple clues/resources

• Typically one successful endpoint + several failure endpoints – few mazes or phases

• Counters – time, patient health (typically not money or reputation)

• Conditionals to regulate flow rather than measure (see the sketch after this list)

• Largely formative – summative uses have tended to resort to tried-and-tested approaches, e.g. key-feature problem modes
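
The "conditionals to regulate flow rather than measure" point can be made concrete with a small sketch: a counter gates which branches are offered, but is never folded into a score. Names are hypothetical.

```python
from typing import Dict, List

def next_options(counters: Dict[str, float], default_options: List[str]) -> List[str]:
    """A flow-regulating conditional: the counter gates which branches are offered,
    but does not itself contribute to any score."""
    if counters.get("patient_stability", 100) < 40:
        return ["resuscitate"]          # unstable patient: only this branch is offered
    return default_options              # otherwise the normal branches remain open
```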

Page 12

Four dimensions of Validation

• Face Validity - presentation and interface – not a metrics issue

• Content Validity - relation to domain and context – not a metrics issue

• Predictive Validity - functions as predicted by practitioners/experts

• Convergent and Discriminant Validity - performance correlates with other measures as predicted by practitioners/experts

Page 13

Predictive Validity Techniques

• design: review overall options and design

• standard setting (modified Angoff – probabilistic expert estimate of the performance of minimally passing learners; see the sketch after this list)

• suitability: pilot with 3-5 representative candidates to evaluate understandability, accessibility, performability, usability and applicability

• runtime: review and validate different ways VP can be executed by a candidate

– Experts/authors (problems with non-linear extrapolation of expertise)

– Excellent candidates

– Minimally passing candidates
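
As a rough worked example of the modified Angoff step, the cut score can be taken as the sum of the experts' mean probability estimates per scored item; the numbers below are invented for illustration.

```python
# Hypothetical modified-Angoff data: each row is one expert's estimated
# probability that a minimally passing candidate handles each scored
# decision point correctly (values invented for illustration).
estimates = [
    [0.7, 0.5, 0.8, 0.6],   # expert A
    [0.6, 0.6, 0.9, 0.5],   # expert B
    [0.8, 0.4, 0.7, 0.6],   # expert C
]

n_experts = len(estimates)
# Per-item cut: mean estimate across experts; overall cut score: sum over items,
# i.e. the expected score of a minimally passing candidate.
item_cuts = [sum(col) / n_experts for col in zip(*estimates)]
cut_score = sum(item_cuts)
print([round(c, 2) for c in item_cuts], round(cut_score, 2))
```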

Page 14

Metrics and Standards

• SCORM 1 has basic tracking (pilot: Peter – MedBiq 2005), with the expectation of more in SCORM 2 – watch this space

• The MedBiquitous Virtual Patient (MVP) specification is an extension of SCORM – tracking is a base functional requirement: nodes visited and counter values in a session

• OL implements MVP (at least two variants)

• However, no standard currently exists that models objective measures of performance in VPs – tracking is purely for runtime

Page 15

R&D: telemetric research

VERSE (Virtual Educational Research Services Environment)

Remote telemetric tracking and database for Second Life, haptics (Omni), OpenLabyrinth, Mitsubishi light surfaces

Generic data tracking model (sketched below)

Requires major storage and parsing

Creates new opportunities for metrics development and modeling
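
A generic data tracking model of this kind could normalise each device's output into a common event record; the field names below are assumptions for illustration, not the actual VERSE schema.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class TelemetryEvent:
    """One observation from any edge device (VP node visit, haptic sample,
    Second Life movement, ...), normalised into a common record."""
    source: str               # e.g. "OpenLabyrinth", "Omni", "SecondLife"
    session_id: str
    timestamp: float          # UTC epoch seconds
    kind: str                 # device-specific event type, e.g. "node_visit"
    payload: Dict[str, Any]   # raw device data, parsed downstream
```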

Page 16

R&D: telemetric research

Network enabled platform

Edge services = device + wrapper (see the sketch below)

Heterogeneous devices: virtual patients (OpenLabyrinth), mannequins (Laerdal SimMan 3G), light fields (virtualised cameras), 3D visualization (RSV and Volseg), multiple data sources (CMA, Medline)

Integrated service model for connecting, controlling and intertwining heterogeneous devices (physical, online, endpoint, model, source, renderer, aggregator)
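
One way to read "edge services = device + wrapper" is as a thin common interface that each heterogeneous device is wrapped in before joining the platform; the class and method names below are hypothetical, not the HSVO API.

```python
from abc import ABC, abstractmethod
from typing import Iterator

class EdgeService(ABC):
    """Wrapper exposing one device to the network-enabled platform through a common contract."""

    @abstractmethod
    def connect(self) -> None:
        """Establish the device connection."""

    @abstractmethod
    def events(self) -> Iterator[dict]:
        """Yield normalised telemetry records from the underlying device."""

class OpenLabyrinthService(EdgeService):
    """Hypothetical wrapper around an OpenLabyrinth instance."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def connect(self) -> None:
        pass                    # e.g. authenticate against base_url

    def events(self) -> Iterator[dict]:
        return iter(())         # placeholder: would poll session activity
```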

Page 17

HSVO Service Architecture

Page 18

Alien VPs and metrics

• Physiological models – physiomes

• Ontology and AI

• Persistent avatars

• EHRs as VPs

• Data shadows

Increasing convergence – augmented by new mashups: geotagging, transponder feeds, Twitter streams

Page 19

Where next?

Current VP models provide rich but relatively unused metrics

New VP models produce metrics of rapidly increasing dimensionality and detail

Key issues:

• Testing of validation methods

• Testing of the correlation between VP and real-world performance

• Identifying and separating causal and coincident factors

• Developing predictive mappings between VP design, the selection and use of metrics, and the reliability of conclusions

Platform and design development still in flux – creativity and opportunity ahead of knowledge

Standards will follow once an evidence base is established and validated