
Wearable Virtual Guide for Indoor Navigation

Introduction
• Assistance for indoor navigation using a wearable vision system
• A novel cognitive model for representing visual concepts as a hierarchical structure

Introduction
• Indoor localization and navigation help enable context-aware services
• Challenges of sensor-based localization
– Infrastructure cost
– Localization accuracy
– Signal reliability
• Recent advances in wearable vision devices (e.g., Google Glass)
• First-person view (FPV)

Introduction

• The camera alone provides visual information, but it is not sufficient; human intelligence needs to be incorporated
– Representation of navigation knowledge
– Design of the interactions between the user and the system
• Building a cognitive model is necessary, and it is absent in existing sensor-based systems

Introduction
• Mimic human wayfinding behaviour
• "Go through the glass door and turn left"
– Easier to follow and reduces the user's stress
– But requires cognitive knowledge of the building
• Contributions
– A model to represent cognitive knowledge for indoor navigation
– An interaction protocol for context-aware navigation

Methodology

• Cognitive Knowledge Representation
• Interaction Design with Context-awareness

The top level contains the Area nodes: (1) Shopping area, (2) Transition area, and (3) Office area. The bottom level contains sub-classes of locations in each area. For example, the conceptual locations in the Shopping area are Lobby, MainEntrance, Shop, GateToLiftLobby, etc.; the locations in the Transition area are LiftLobby, InLift, Entrance, etc.; the Office area has MeetingRoom, Junction, Corridor, Entrance/Exit, etc. Nodes within each level are connected if there is a direct path between them.

Cognitive Knowledge Representation

• Hierarchical context-model

Cognitive model

• Scenes are mapped to the location/area nodes
– Image classification algorithm
• Generate the cognitive route model for a given source and destination
– A chain of area and location nodes
– Area nodes connect to their child location nodes
• Define trip segments (a minimal sketch follows)
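The two-level model and the route chain lend themselves to a small graph sketch. The Python below is a minimal illustration, not the authors' implementation: the node names come from the slides, while the edges and the breadth-first search are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of the two-level cognitive model:
# area nodes at the top level, location nodes at the bottom level, and edges
# only where a direct path exists. Node names follow the slide examples.

AREA_OF = {  # location node -> parent area node
    "Lobby": "Shopping", "MainEntrance": "Shopping", "Shop": "Shopping",
    "GateToLiftLobby": "Shopping",
    "LiftLobby": "Transition", "InLift": "Transition", "Entrance": "Transition",
    "MeetingRoom": "Office", "Junction": "Office", "Corridor": "Office",
}

# Undirected adjacency between location nodes (illustrative edges only).
EDGES = {
    ("MainEntrance", "Lobby"), ("Lobby", "GateToLiftLobby"),
    ("GateToLiftLobby", "LiftLobby"), ("LiftLobby", "InLift"),
    ("InLift", "Entrance"), ("Entrance", "Corridor"),
    ("Corridor", "Junction"), ("Junction", "MeetingRoom"),
}

def neighbours(node):
    return {b for a, b in EDGES if a == node} | {a for a, b in EDGES if b == node}

def route_model(source, destination):
    """Breadth-first search over location nodes; each node in the resulting
    chain is annotated with its parent area, mirroring the chain of area and
    location nodes (trip segments) described in the slides."""
    from collections import deque
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return [(loc, AREA_OF[loc]) for loc in path]
        for nxt in neighbours(path[-1]) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return []

if __name__ == "__main__":
    for loc, area in route_model("MainEntrance", "MeetingRoom"):
        print(f"{area:>10} / {loc}")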

Working principle

• In an actual navigation task with a given origin and destination, scenes (i.e., image sequences) are captured continuously using the wearable camera
• The scenes are categorized into area nodes and location nodes, which are compared with the nodes in the cognitive route model
• Once a match is found between a recognized node and a node in the cognitive route model, specific navigation instructions are retrieved according to predefined rules (an illustrative sketch follows)
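As a rough illustration of this working principle, the sketch below matches classified scenes against the route-model chain and emits predefined instructions; the classifier, the instruction table, and the matching policy are assumptions for illustration, not the paper's actual rules.

```python
# Hedged sketch of the matching loop: scenes arrive continuously, each is
# classified into a location node, and a match against the next node in the
# cognitive route model triggers the predefined instruction for that node.

INSTRUCTIONS = {  # illustrative rule table: matched node -> narration
    "GateToLiftLobby": "Go through the glass door and turn left.",
    "LiftLobby": "Take the lift to the office floor.",
}

def navigate(frames, route, classify):
    """Yield a navigation instruction each time a classified scene matches
    the next expected node in the cognitive route model."""
    expected = [loc for loc, _area in route]
    idx = 0
    for frame in frames:
        location = classify(frame)
        if idx < len(expected) and location == expected[idx]:
            if location in INSTRUCTIONS:
                yield INSTRUCTIONS[location]
            idx += 1

if __name__ == "__main__":
    # Toy demo: "frames" are already labelled strings, so the classifier
    # is the identity function.
    route = [("MainEntrance", "Shopping"), ("GateToLiftLobby", "Shopping"),
             ("LiftLobby", "Transition")]
    frames = ["MainEntrance", "MainEntrance", "GateToLiftLobby", "LiftLobby"]
    for msg in navigate(frames, route, classify=lambda f: f):
        print(msg)
```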

Interaction Design with Context-awareness

• Three types of context
– Egocentric scene recognition: localization of the user in the environment
– Temporal context: the right time at which instructions should be provided
– User context: the user's cognitive status

Interaction Design with Context-awareness

• Data collection
– 6 participants, each visiting 3 destinations in sequence: M0 -> M1 -> M2 -> M3
– Destinations on different floors and in different towers
– Starting from the main entrance

Training Setup

• Self-reported ability of spatial cognition
– Santa Barbara Sense of Direction (SBSOD) questionnaire
– The score is used to adjust system behavior
• Vision system: webcam connected to a tablet PC
• Resolution of 640x480 at 8 frames/sec
• Scenes are sent to a PC for processing (a capture sketch follows)
• A human assistant records explicit requests for help and signs of confusion
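A minimal capture sketch under the stated setup (640x480 at 8 frames/sec), assuming an OpenCV-readable webcam; the send_to_pc() hook is hypothetical, since the slides do not say how scenes are transferred to the PC.

```python
# Minimal scene-capture sketch: grab frames at 640x480, throttle to ~8 fps,
# and hand each JPEG-encoded scene to a (hypothetical) transfer hook.
import time
import cv2

def send_to_pc(jpeg_bytes):
    """Hypothetical hook: forward the encoded scene to the processing PC."""
    pass

def stream_scenes(fps=8, width=640, height=480):
    cap = cv2.VideoCapture(0)                     # wearable webcam
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)      # 640x480 as in the setup
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    period = 1.0 / fps                            # ~8 frames per second
    try:
        while True:
            start = time.time()
            ok, frame = cap.read()
            if not ok:
                break
            ok, buf = cv2.imencode(".jpg", frame)
            if ok:
                send_to_pc(buf.tobytes())
            time.sleep(max(0.0, period - (time.time() - start)))
    finally:
        cap.release()
```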

SBSOD

Localization using cognitive visual scenes

• A task-specific route model is constructed
– Visual scenes are captured continuously and used to build the task-specific route model
• Once the route model is built, determine the user's location
• Two cues
– Image categorization: image-driven localization can achieve accuracy of up to 84% (an illustrative classifier sketch follows)
– Time
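The slides do not name the categorization model, only the accuracy figure, so the sketch below uses a simple colour-histogram feature with a nearest-neighbour classifier purely to illustrate the image cue; it is not the system's actual classifier.

```python
# Illustrative image-categorization cue: colour-histogram features plus a
# k-nearest-neighbour classifier over labelled scene images.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def histogram_feature(image, bins=8):
    """Flattened per-channel colour histogram of an HxWx3 uint8 image."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    feat = np.concatenate(hists).astype(float)
    return feat / feat.sum()

def train_scene_classifier(images, labels):
    """labels are location-node names such as 'Lobby' or 'Corridor'."""
    X = np.stack([histogram_feature(img) for img in images])
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X, labels)
    return clf

def categorize(clf, image):
    """Map a new scene to its most likely location node."""
    return clf.predict(histogram_feature(image)[None, :])[0]
```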

Image categorization

Improve localization using temporal inference

• Builds on localization using cognitive visual scenes
• Tackles the dynamics of wayfinding
– Varying walking times

Improve localization using temporal inference

• Localization using cognitive visual scenes
– Notation used on the slide: Li, ti0, and Rand(p), a random number between 0 and p
• First, compute the probability that a scene is associated with each trip segment (the formula appears on the original slide; an assumed stand-in follows)
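Since the slide's formula is not reproduced above, the following is only an assumed stand-in for the temporal cue: each trip segment gets an expected entry time and duration (names borrowed from ti0 and Li), and a Gaussian window scores how likely the scene captured at elapsed time t belongs to each segment. This is an illustrative assumption, not the paper's formula.

```python
# Assumed temporal-inference stand-in: score each trip segment i by how close
# the current elapsed time t is to the middle of that segment's expected
# traversal window, then normalise the scores into probabilities.
import math

def segment_probabilities(t, segments):
    """segments: list of (t_i0, L_i) pairs, i.e. expected entry time and
    expected duration in seconds. Returns one probability per segment."""
    weights = []
    for t_i0, L_i in segments:
        centre = t_i0 + L_i / 2.0
        sigma = max(L_i / 2.0, 1e-6)
        weights.append(math.exp(-((t - centre) ** 2) / (2.0 * sigma ** 2)))
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    # Three illustrative segments: (entry time, expected walking time).
    segs = [(0.0, 30.0), (30.0, 20.0), (50.0, 40.0)]
    print(segment_probabilities(42.0, segs))
```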

Interaction Design with Context-awareness

• Context-aware navigation instructions
– Provide effective navigation aids
– Determine decision points Dj and associate with each a probability value Pj related to the number of subjects n who requested help there (an assumed form is sketched below)
– What if the user does not comply with the aids?
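The slides only say that Pj is "related with" the number of subjects who requested help at Dj; a simple fraction over the training participants is one plausible form, shown here purely as an assumption.

```python
# Assumed form of the decision-point probability Pj: the fraction of training
# subjects who asked for help at decision point Dj.
def decision_point_probability(n_requested_help, n_subjects):
    """Pj = n / N (illustrative choice, not the paper's definition)."""
    return n_requested_help / float(n_subjects)

# Example with the six training participants from the data collection,
# four of whom (hypothetically) asked for help at a given decision point.
print(decision_point_probability(4, 6))   # -> 0.666...
```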

Interaction Design with Context-awareness

• Context-aware navigation instructions
– TTj-1: narration delivered at decision point Dj
– TTj-2: rephrased narration


Interaction Design with Context-awareness

• Context-aware navigation instructions
– The utility of the instructions is measured using the SBSOD score
– Cp denotes the user's spatial cognitive level
– At time tk, the navigation instruction is provided according to the rules on the slide (a hedged reconstruction follows)
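The rule table itself appears only on the original slide, so the policy below is a hedged reconstruction: it chooses between the primary narration TTj-1 and the rephrased TTj-2 at a decision point, using Pj and the SBSOD-derived level Cp. The thresholds and the timeout are illustrative assumptions, not the paper's rules.

```python
# Assumed rule-based instruction selection at a decision point Dj.
from dataclasses import dataclass

@dataclass
class DecisionPoint:
    name: str          # e.g. "D3: glass door before the lift lobby"
    p_help: float      # Pj: share of training subjects who needed help here
    tt1: str           # TTj-1: primary narration
    tt2: str           # TTj-2: rephrased narration

def choose_instruction(dp, c_p, seconds_since_tt1=None):
    """Return the narration to speak at time tk, or None to stay silent.
    c_p is the spatial-cognition level in [0, 1] derived from the SBSOD score."""
    if seconds_since_tt1 is None:
        # First visit: narrate if the point is historically confusing
        # (high Pj) or the user scores low on spatial cognition.
        if dp.p_help >= 0.5 or c_p < 0.5:
            return dp.tt1
        return None
    if seconds_since_tt1 > 10.0:
        # User has not progressed since TTj-1: rephrase the instruction.
        return dp.tt2
    return None
```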

Experimental Evaluation
• Participants
– 12 participants (6 male, 6 female)
• Experiment design
– SBSOD score collected as input for each participant
– Task 1: M0 -> M1; Task 2: M1 -> M2; Task 3: M2 -> M3
– Task order varied across participants
• Measures
• Hypotheses

Experimental Evaluation
• Measures
– Objective

Experimental Evaluation
• Measures
– Subjective

Experimental Evaluation
• Hypotheses

Results
• Task performance
– One-way ANOVA
– Post-hoc analysis with paired t-tests (an illustrative analysis sketch follows)
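A minimal sketch of the reported analysis using SciPy; the task-completion times and condition names below are made up purely to show the calls, and are not the authors' data.

```python
# One-way ANOVA across guidance conditions, followed by a post-hoc paired
# t-test between two of them (hypothetical per-subject completion times, s).
from scipy import stats

proposed = [118, 102, 125, 110, 97, 130]
baseline = [140, 128, 150, 133, 121, 158]
cng      = [135, 142, 160, 129, 138, 151]

f_stat, p_anova = stats.f_oneway(proposed, baseline, cng)   # one-way ANOVA
t_stat, p_post = stats.ttest_rel(proposed, baseline)        # post-hoc paired t-test

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(f"Paired t-test (proposed vs baseline): t={t_stat:.2f}, p={p_post:.4f}")
```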

Results
• Subjective evaluation
– Easy to use
– Reduced cognitive load
– Perceived as more intelligent
– CNG performs poorly in terms of enjoyment, stress, and trust
