
  • GIVE-ME: Gamification In Virtual Environments

    for Multimodal Evaluation

    A Framework

    WAI KHOO

    DEPARTMENT OF COMPUTER SCIENCE

    THE GRADUATE CENTER,

    CITY UNIVERSITY OF NEW YORK

  • Committee Members

    Prof. Zhigang Zhu City College, Computer Science

    Prof. Yingli Tian City College, Electrical Engineering

    Prof. Tony Ro The Graduate Center, Psychology

    Dr. Aries Arditi Visibility Metrics LLC


  • Outline

    Motivation

    Research questions

    Approach

    Proposed framework

    4 applications of the framework

    Conclusion & Future work

    Funding support:

    1. NSF Awards: Emerging Frontiers in Research & Innovation (EFRI) 1137172; Chemical, Bioengineering, Environmental, and Transport Systems (CBET) 1160046; Industrial Innovation & Partnerships (IIP) 1416396

    2. VentureWell (formerly NCIIA, through Award # 10087-12)

    3. CUNY Graduate Center Science Fellowship (2009-2014)

  • NYC Penn Station

    [Figure: panels A and B]

    Source: Jason Gibbs. Retrieved from http://jasongibbs.com/pennstation/ on March 19, 2016

  • Travel Aids

    Guide dog

    Talking GPS

    Miniguide

    White cane

    Argus II (retinal implant)

    BrainPort

  • Background

    [Chart: worldwide visually impaired population (in millions): 161 in 2002 vs. 285 in 2014, a 77% increase]
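The 77% figure is the relative growth of the worldwide visually impaired population from 2002 to 2014, computable directly from the two counts on the chart:

```python
# Worldwide visually impaired population (millions), from the slide
pop_2002 = 161
pop_2014 = 285

growth = (pop_2014 - pop_2002) / pop_2002   # relative increase
print(f"{growth:.0%}")                      # → 77%
```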

  • Outline

    Motivation

    Research questions

    Approach

    Proposed framework

    4 applications of the framework

    Conclusion & Future work


  • Research Questions

    1. How to establish a benchmark for heterogeneous systems?

    2. How to provide a well-controlled and safe testing environment?

    3. How to provide a robust evaluation and scientific comparison of the effectiveness and friendliness of multimodal assistive technologies?


  • Inspiration

    Jason Park, Helen MacRae, Laura J. Musselman, Peter Rossos, Stanley J. Hamstra, Stephen Wolman, Richard K. Reznick, Randomized controlled trial of virtual reality simulator training: transfer to live patients, The American Journal of Surgery, Volume 194, Issue 2, August 2007, Pages 205-211

  • Outline

    Motivation

    Research questions

    Approach

    Proposed framework

    4 applications of the framework

    Conclusion & Future work


  • Approach

    Virtual Reality

    Gamification

    Multimodality

    Unified formal evaluation and comparison approach.

    Differs from Degara et al. (2013) [1]: sounds only

    Differs from Lahav et al. (2012) [2] and Huang (2010) [3]: focused on cognitive mapping in unknown space.

  • Virtual Reality

    Use a game engine to design a virtual environment and simulate part of an assistive technology.

    Benefits:
    Rapid prototyping
    EARLY user involvement
    Psychophysics evaluation
    Safe & well-controlled environment for navigation tasks

  • Gamification

    Use game design elements for research & evaluation.

    Benefits:
    Fun/engaging experiment sessions
    Sustainable evaluation
    Crowd-sourcing data collection
    Package designed VE as a simulation/training tool

  • Multimodality

    Multimodal input and output of data

    Benefits:
    Enables alternative perception (sensory substitution)
    Allows a mixture of input and output devices

  • Who Benefits from My Research?

    Researchers/developers

    Assistive technology companies

    Visually impaired users

  • Outline

    Motivation

    Research questions

    Approach

    Proposed framework

    4 applications of the framework

    Conclusion & Future work


  • Proposed Framework: Gamification in Virtual Environments for Multimodal Evaluation (GIVE-ME)


  • Framework: User Interface


  • Framework: Foundation


  • GIVE-ME Software Impl.

    http://ccvcl.org/~khoo/GIVE_ME.unitypackage

    Minimal coding

    Click-&-drag

    Fully customizable

    Unity package

  • Outline

    Motivation

    Research questions

    Approach

    Proposed framework

    4 applications of the framework

    Conclusion & Future work


  • Application 1: VibrotactileNav


    Vista Wearable, Inc.

  • Application 1: VibrotactileNav


  • Application 1: VibrotactileNav

    Wai L. Khoo, Joey Knapp, Franklin Palmer, Tony Ro, and Zhigang Zhu. Designing and testing wearable range-vibrotactile devices. Journal of Assistive Technologies, 7(2):102-117, 2013.

    Controller: joystick/mouse

    Stimulators: vibrators, sounds

    Virtual sensor: infrared
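The virtual infrared sensor drives the vibrotactile stimulators; a minimal sketch of one plausible range-to-intensity mapping (the function name, linear mapping, and thresholds are illustrative assumptions, not the published device design):

```python
def range_to_intensity(distance_m, max_range_m=3.0):
    """Map a simulated infrared range reading to a vibration
    intensity in [0, 1]: closer obstacles vibrate harder.

    Assumed linear mapping; the actual device mapping may differ.
    """
    if distance_m >= max_range_m:
        return 0.0          # obstacle out of range: no vibration
    if distance_m <= 0.0:
        return 1.0          # touching the obstacle: full intensity
    return 1.0 - distance_m / max_range_m
```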

  • VibrotactileNav Results


  • VibrotactileNav Results

    Easy hallway: 9 out of 18 succeeded
    Succeeded: avg time 280.10 s, avg bumps 17.3
    Failed: avg time 288.65 s, avg bumps 22.1

    Complex hallway: 3 out of 18 succeeded
    Succeeded: avg time 120.25 s, avg bumps 12.7
    Failed: avg time 353.67 s, avg bumps 42.727

  • EEG Data Collection


  • Application 2: BrainportNav


    Wicab, Inc.

  • Application 2: BrainportNav

    Margaret Vincent, Hao Tang, Wai L. Khoo, Zhigang Zhu, and Tony Ro. Shape discrimination using the tongue: Implications for a visual-to-tactile sensory substitution device. Multisensory Research, 2016.

    Controller: joystick

    Stimulators: electrode array

    Virtual sensor: camera in VE

    Pathfinding
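Pathfinding in the virtual environment can be sketched with a breadth-first search on an occupancy grid (a generic illustration of the technique, not the dissertation's actual pathfinding code):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected grid; 0 = free, 1 = wall.
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}       # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:            # reconstruct path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                     # goal unreachable
```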

  • BrainportNav Results

    Run 1: time = 151 s, accuracy = 0.95

    Run 2: time = 91 s, accuracy = 0.95

  • BrainportNav Results

    Avg accuracy:

    83.33%

    82.66%

    93%

    69.66%

  • Application 3: CrowdSourceNav

  • Application 3: CrowdSourceNav

    TCP/IP connection

    Stream game view

    Can be used for testing algorithms

    Usability study

    Wai L. Khoo, Greg Olmschenk, Zhigang Zhu, and Tony Ro, "Evaluating Crowd Sourced Navigation for the Visually Impaired in a Virtual Environment," in Mobile Services (MS), 2015 IEEE International Conference on, pp. 431-437, June 27-July 2, 2015.

    Controller: joystick

    Stimulator: text-to-speech

    Virtual sensor: camera in VE

    Online crowd members
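The crowd's steering suggestions are combined before being spoken to the user; the experiments mention a simple average aggregation method, sketched here (the heading-in-degrees encoding is a hypothetical illustration, not the paper's actual protocol):

```python
def aggregate_commands(commands):
    """Simple-average aggregation of crowd steering commands.

    Each command is a heading suggestion in degrees, where 0 means
    straight ahead, negative is left, and positive is right (an
    assumed encoding for illustration). Returns the averaged heading.
    """
    if not commands:
        return 0.0          # no crowd input: keep going straight
    return sum(commands) / len(commands)
```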

  • CrowdSourceNav Results

    Maze 1: time = 514 s, bumps = 7

    Maze 2: time = 345 s, bumps = 0

  • CrowdSourceNav Results

    11 crowd members

    Crowd completion time for either aggregation method is not significantly different (two-sample t-test, p = 0.432 at the 5% significance level, df = 6)

  • CrowdSourceNav Results

    Sample size of 11

    Ratings of 1-7 on the following statements:
    1. It is useful
    2. It is easy to use
    3. It is user friendly
    4. I learned to use it quickly
    5. I am satisfied with it

  • CrowdSourceNav Real Exp.

    [Figure: floor plan of the real-world test course, with labeled dimensions from 0.5 m to 5.5 m]

    Similarities

    Simple average aggregation method

    Speech feedback

    Differences

    Random obstacles

    Stream from camera's view

    Pros

    Skill transfer: Preemptive instructions

    Provided minimal training to crowd volunteers

    Cons

    Varying camera angles and heights

    Varying walking speeds

  • Application 4: VistaNav


    Vista Wearable, Inc.

  • Experiments

    1. Configurations experiment: four configurations

    2. Training experiment: half of subjects with VE training

  • VR setup

    Controller: Xbox 360 gamepad

    Stimulators: vibrators, sounds

    Virtual sensor: infrared

  • VistaNav Results - Configurations

  • VistaNav Results - Configurations

    Two-way repeated measures ANOVA: F(3,51) = 10.54, p = 0.00

    Multiple comparisons with Bonferroni correction: C3 vs. C4, C6 (p = 0.00)

    C3 is the best!
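The Bonferroni correction simply divides the significance threshold by the number of pairwise comparisons made; a minimal sketch (the comparison count of 6, from choosing 2 of 4 configurations, is an illustrative assumption):

```python
def bonferroni_threshold(alpha, num_comparisons):
    """Per-comparison significance threshold under the Bonferroni
    correction: reject a comparison only if p < alpha / m."""
    return alpha / num_comparisons

# e.g. 6 pairwise comparisons among 4 configurations at alpha = 0.05
threshold = bonferroni_threshold(0.05, 6)
```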

  • VistaNav Results - Training

    Training session: virtual hallway, 10 minutes, free exploration, audio & haptic feedback, 3 VISTA devices (one on each wrist and one on the chest)

    Testing session: real U-shaped hallway (71 ft x 52 ft), sighted subjects blindfolded, 2 VISTA devices (one on each wrist), goal: reach the destination without bumping into obstacles

  • VistaNav Results - Training

    17 of 21 subjects were included in the analysis; 2 outliers were trimmed from each group (training vs. no training)

    VE training significantly improved performance in real hallway navigation: t(15) = -1.91, p = 0.04 (two-sample, one-tailed)

    Training | Mean | SD
    YES | 249.00 s | 92.45 s
    NO | 333.78 s | 90.76 s
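The reported training effect can be reproduced from the summary statistics with a pooled two-sample t statistic; a minimal sketch, assuming group sizes of 8 (training) and 9 (no training), which are not stated on the slide but are consistent with 17 subjects and df = 15:

```python
import math

def pooled_t(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic with pooled variance (equal-variance
    Student's t); degrees of freedom are n1 + n2 - 2."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Training (assumed n=8) vs. no training (assumed n=9)
t = pooled_t(249.00, 92.45, 8, 333.78, 90.76, 9)   # ≈ -1.91, df = 15
```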

  • VistaNav Results - Training

    A usability questionnaire was given at the end of the hallway: the System Usability Scale (SUS)

    10 questions with 5-point Likert-scale responses (strongly disagree to strongly agree)

    5 positive and 5 negative statements, which alternate

  • VistaNav Results - Training

    Overall mean = 80.48

    Overall SD = 13.62

  • Outline

    Motivation

    Research questions

    Approach

    Proposed framework

    4 applications of the framework

    Conclusion & Future work


  • GIVE-ME Contributions

    Unified