
Peripheral Vision: A New Killer App for Smart Glasses

Isha Chaturvedi
The Hong Kong University of Science and Technology, Hong Kong
[email protected]

Farshid Hassani Bijarbooneh
The Hong Kong University of Science and Technology, Hong Kong
[email protected]

Tristan Braud
The Hong Kong University of Science and Technology, Hong Kong
[email protected]

Pan Hui
University of Helsinki, Helsinki, Finland
The Hong Kong University of Science and Technology, Hong Kong
[email protected]

ABSTRACT

Most smart glasses have a small and limited field of view. The head-mounted display often spreads between the human central and peripheral vision. In this paper, we exploit this characteristic to display information in the peripheral vision of the user. We introduce a mobile peripheral vision model, which can be used on any smart glasses with a head-mounted display without any additional hardware requirement. This model taps into the blocked peripheral vision of a user and simplifies multi-tasking when using smart glasses. To demonstrate the potential applications of this model, we implement an application for indoor and outdoor navigation. We conduct an experiment with 20 people on both a smartphone and smart glasses to evaluate our model in indoor and outdoor conditions. Users report spending at least 50% less time looking at the screen by exploiting their peripheral vision with smart glasses. 90% of the users agree that using the model for navigation is more practical than standard navigation applications.

CCS CONCEPTS

• Human-centered computing → User studies; Empirical studies in HCI; • Computing methodologies → Perception.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
IUI '19, March 17–20, 2019, Marina del Ray, CA, USA
© 2019 Association for Computing Machinery.
ACM ISBN 978-1-4503-6272-6/19/03 . . . $15.00
https://doi.org/10.1145/3301275.3302263

KEYWORDS

Human perception; field of view; peripheral vision; smart glasses head-mounted display; information input

ACM Reference Format:
Isha Chaturvedi, Farshid Hassani Bijarbooneh, Tristan Braud, and Pan Hui. 2019. Peripheral Vision: A New Killer App for Smart Glasses. In 24th International Conference on Intelligent User Interfaces (IUI '19), March 17–20, 2019, Marina del Ray, CA, USA. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3301275.3302263

1 INTRODUCTION

Smartglasses have become increasingly popular in recent years. They provide various applications in information visualization [49], education [16], gaming [41], medicine [36] and other commercial industries [2, 15]. Nowadays, most smartglasses embed a small head-mounted screen which spreads over the eye of the user. The angular field of view (AFOV or AOV) measures the angular extent of a 360-degree circle that is visible to the human eye [6]. Figure 1 shows the AFOV of the human eye. The foveal system, responsible for the foveal vision, lies within the central and para-central area. The area outside the foveal system is responsible for the peripheral vision [39, 42]. The term field of view (FOV) is often used interchangeably with AFOV. Most smartglasses have a small and limited FOV, which restricts their potential applications [34, 46]. The AFOV of Google Glass¹ is approximately 30 degrees (as represented in Figure 2), which is significantly smaller than the AFOV of the human eye. This is the case for most smartglasses, including MadGaze Glass². This limited FOV forces the user to direct his central eye gaze towards the small screen of the glass to extract meaningful information. Additionally, focusing the eyes on a display screen at close focal distances causes visual fatigue [37, 43], which immensely affects the usability of smartglasses.

¹ Google Inc., https://en.wikipedia.org/wiki/Google_Glass
² MadGaze Group, http://madgaze.com/x5/specs


Figure 1: Angular Field of View of the Human Eye (regions: central, paracentral, near-peripheral, mid-peripheral, far-peripheral; 30°, 60°, 90°).

Figure 2: Angular Field of View of Google Glass (AFOV ≈ 30°).

As the user focuses his central eye gaze on the screen of the smartglass at a close focal point, his multitasking ability is strongly affected. This temporary shift of focus may have deadly consequences. For instance, a user driving a car on the highway at 100 km/h who takes his eyes off the road for one second to look at a map screen is effectively blind for 28 meters.
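The 28-meter figure follows from a direct unit conversion:

```latex
% One second of inattention at highway speed:
\[
  100~\mathrm{km/h} \;=\; \frac{100\,000~\mathrm{m}}{3600~\mathrm{s}} \;\approx\; 27.8~\mathrm{m/s},
  \quad\text{i.e. roughly } 28~\mathrm{m} \text{ travelled in one second.}
\]
```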

Using mobile devices also limits cognitive ability and restricts peripheral vision [20]. There were about 5,984 pedestrian traffic fatalities in 2017, and one of the main causes of these accidents is the extensive use of mobile devices³.

Smartglasses with a head-mounted display, like Google Glass or even Microsoft HoloLens, partially cover the user's peripheral vision⁴. The peripheral visual field is an important part of human vision and is useful for daily locomotive activities such as walking, driving, and sports [40]. Visual cues from the periphery can help to detect obstacles, avoid accidents, and ensure proper foot placement while walking [21].

In this paper, we present a Mobile Peripheral Vision (MPV) model. Any smartglass with a head-mounted display overlapping with the peripheral vision can run this model, which does not require any additional hardware. Our model taps into the peripheral vision of the user by using the screen of the head-mounted display of the smartglass to present visual cues. The model simplifies multi-tasking for the mobile user by removing the need to focus on the screen of the smartglasses. This paper contributes to the state-of-the-art by developing a model that combines two theories, motion detection through peripheral vision [8] and color sensitivity of the human eye [26], and demonstrates its application for navigation on smartglasses with a head-mounted display. Existing works mainly focus on exploring peripheral vision by changing the hardware of the smartglasses, while we propose a pure software solution. Using our model, we develop a high-fidelity peripheral vision-based navigation application for both indoor and outdoor environment scenarios. To the best of our knowledge, this paper presents the first use of peripheral vision in a mobile context, using standard smartglasses in both indoor and outdoor environments without additional hardware.

³ Pedestrian Traffic Fatalities by State, https://www.ghsa.org/resources/spotlight-pedestrians18
⁴ Google Glass Blocks Peripheral Vision, https://www.livescience.com/48608-google-glass-blocks-peripheral-vision.html

This paper presents the following contributions:

• We present an MPV Model using color and motion to display visual cues in the peripheral vision of the user.

• We implement the MPV Model within a navigation application. This application is then compared to a standard navigation application on smartglasses, as well as to the same application on a smartphone. As such, we are able to isolate both the impact of peripheral vision and the impact of using smartglasses. Thanks to our model, users spend on average 50% less time looking at the screen of the smartglasses. Furthermore, 90% of the users agree that using the model for navigation is more practical than standard navigation applications.

• We further discuss two specific cases, namely strabismus and color-blindness, for which our MPV model does not apply. Indeed, color-blindness changes the color sensitivity of the eye, while strabismus impacts eye mobility. We propose modifications to our model to account for these specific cases.

The rest of this paper is organized as follows: we first discuss research studies related to ways of increasing the field of view, the use of peripheral vision for providing notifications to the user, and navigation using smartglasses. In Section 2, we explain our MPV model and its applications for mobile users. In Section 3, we discuss our demo application and the user study built around the application. Finally, we discuss the results of the experiments to evaluate the applicability of our model.

Related Work

In this section, we present the main related studies. These studies spread across three main fields: enhancing the FOV of smartglasses, displaying information in peripheral vision, and navigation on smartglasses.


Enhancing the FOV of smartglasses

Augmenting the field of view has previously been studied by changing the hardware of the smartglasses [7, 28]. SparseLightAR increases the field of view of head-mounted displays by adding an array of Light Emitting Diodes (LEDs) around the central display [47]. Similarly, AmbiGlasses illuminates the periphery of the human visual field by adding 12 LEDs in the frame of the glasses [31]. Matviienko et al. [22] discuss the possibility of employing ambient light in the car to keep the focus of the user on the road. Some studies present new optical designs for head-mounted displays, such as the pinlight display, which uses an LCD panel and an array of point light sources directly in front of the eye [19], or use curved screens and curved lenses to achieve a wider field of view [32]. Finally, Yamada et al. [48] propose to expand the field of view by filling peripheral vision with blurred images through the use of two different kinds of lenses with different magnification levels. Contrary to these studies, we aim at providing an off-the-shelf solution that requires no additional hardware and targets pedestrians, cyclists and car drivers alike.

Peripheral Vision

Some other studies explore peripheral vision to present information to the user. A few investigate adding a peripheral vision display to ordinary eyeglasses using LEDs [5, 9, 27]. Hahn et al. [10] use an attention-aware peripheral display on ambient displays, measuring the user's gaze as visual attention through an infrared camera to give notifications to the user. However, most of these studies involve hardware changes to the smartglasses. Bailey et al. [1] experiment with the possibility of directing the gaze of the user through subtle image-space modulation. Another study [38] builds on top of these results and significantly improves the performance of the system. However, both studies only propose to guide gaze, rather than using the full surface of the eye and peripheral vision. One study proposes to tap peripheral vision to present information without compromising the performance of the primary task [3]. However, it only introduces the concept of motion perception in the peripheral area without proposing any real system design or evaluation. This study also proposes a new global positioning system (GPS) navigation system design using a motion-based interface, requiring a web-cam based eye tracker. The authors in [13] focus on an information presentation mechanism for mobile AR systems using the user's gaze information. The study provides a hardware-based solution: a mobile AR display system using a combination of a mobile, spectacle-type, wearable retinal image display (RID) and an eye-tracker system. Finally, another study [17] also exploits visual cues to guide the user's gaze in a Virtual Reality scene. As shown in this non-exhaustive list of studies, gaze detection and guiding has been a very active field to direct the user's attention towards specific details of a scene and improve global recollection. However, none of these studies exploit peripheral vision to send subtle cues to the user without altering his focus on the main task.

A few studies have explored the possibility of using animations in peripheral vision displays to enhance visual interest without distracting the user [30]. The study in [18] explores a usable visual language, limiting the possible shapes and colors and using meaningful motion and orientation, for near-eye out-of-focus displays placed inside a pair of glasses at the far peripheral extremes of human view. This study formulates five guidelines for designing near-eye out-of-focus displays. It recommends using simple, single prominent shapes and avoiding composite shapes. It suggests avoiding secondary colors and limiting usage to the primary colors. Furthermore, the study suggests that motion detection is independent of shape recognition and can be used to convey complex information such as the path change of symbols moving on the screen. Apart from the required external hardware modifications, the visual language in this study is restricted to near-eye out-of-focus displays and is tested only on static users, whereas, as shown in Figure 2, the screen of smartglasses occupies part of the central and near-peripheral area of the human eye.

Navigation on smartglasses

The small field of view of smartglasses makes it difficult to use the existing Google Maps application. The 3D street view and the blue directional arrow in the Google Maps application on Google Glass⁵ are not easily visible unless the eye focus is completely directed towards the screen of the glass, which may cause accidents during navigation. Moreover, Google Maps works only in an outdoor environment. One of the navigation methods for smartglasses uses a navigation system in which a LED matrix is placed in the peripheral vision of the driver to signal the turns on the road, thus requiring additional hardware changes to the smartglasses [29]. This navigation system differs from our model, as our model provides a software-only solution to navigation on smartglasses. Rehman et al. [33] implement an augmented reality-based indoor navigation application for wearable head-mounted displays like Google Glass. The navigation application overlays information such as the location zone and directional instructions on the visual overlay of the environment map in the Google Glass. A drawback of this application is that it overlays information on the small visual overlay of the glass screen.

⁵ https://support.google.com/glass/answer/3086042?hl=en


The environment map and the overlaid information may not be easily visible, and the user still has to direct his central eye gaze to the screen.

Our MPV model differs from the existing works as it provides a software solution to exploit peripheral vision to present information to the user. The model uses simple shapes, such as single rectangular bars or big circular dots, and three primary colors to convey information at the periphery. The use of symbols is limited to basic shapes, and we display complex information using movement recognition [18] to achieve a high recognition rate. The novelty of our MPV model is that it is adaptable to any smartglasses with varying positions of the head-mounted display in the field of view, as long as the display covers the peripheral vision. Navigation applications can greatly benefit from this model. Indeed, the model is independent of the field of view of the glasses and uses the peripheral view of the user. Using our model, the user can also multi-task while navigating the path using the information at the periphery. Thus, our model provides a software solution to navigation on smartglasses with a head-mounted display.

2 SYSTEM DESIGN

In this section, we introduce the mobile peripheral vision model, discuss the applications of the model, and introduce the peripheral vision-based navigation application for smartglasses with a head-mounted display.

Mobile Peripheral Vision Model

The model we propose in this study uses the peripheral vision of the human eye to display information without requiring the user to actively look at the screen of the smartglasses. The user can thus pick up information on the smartglass without interrupting his main activity. This model uses the entire glass screen to present information to the user and requires no changes to the hardware. As such, any smartglasses with a head-mounted display overlapping with the peripheral vision can use it. This model articulates around two fundamental concepts:

• Color detection in the peripheral area of the eye
• Motion detection in the peripheral vision

Color detection

According to Gerald M. Murch [26], the center of the retina of a normal human eye has a high density of cones, the photoreceptors responsible for color vision. Their distribution is as follows: red: 64%, green: 32%, and blue: 4%. Figure 3 shows the asymmetrical distribution of the three types of color cones [26]. The center of the retina has mainly green and red cones. The blue cones mostly occupy the area outside the central retina. The periphery has a high percentage of rod cells, the photoreceptors responsible for motion and night detection [26, 35].

Figure 3: The color sensitivity zones for the normal human eye [26].

We assume that the glass screen is located between the near-peripheral and mid-peripheral regions of the human eye (Figure 1). As mentioned above, motion detection in the peripheral area of human vision can trigger human awareness. Though color sensitivity diminishes with the distance from the central vision area, the human eye can still detect some colors, such as blue, outside the central vision. Since blue color sensitivity is mostly located outside the central vision, blue makes a good peripheral background color.

The red and green colors are easily perceivable due to the abundance of red and green cones at the center of the retina. As the smartglass screen partially covers the center of the human FOV, we integrate them in our model to code extra bits of information. The yellow color, which is a mixture of red and green, is easily perceivable as well. However, it can create confusion when used together with red and green [26, 35].

The model, therefore, uses the following three colors to signal information to the user: (i) blue: highest number of cones in the periphery, (ii) red: high number of cones in the retina center, (iii) green: high contrast with the two other colors.

To achieve a high recognition rate, we combine basic shapes with movement recognition in the model. The model articulates around three fundamental concepts:

• motion detection by the peripheral vision,
• the presence of blue cones at the periphery,
• the abundance of red and green cones at the center of the retina (primary colors), which allows red and green to be easily recognized, even at the periphery.


Motion detection

The periphery has a high percentage of rod cells, the photoreceptors responsible for motion and night detection [26, 35]. Retinal eccentricity measures how far a given point in the visual field is from the central fixation point [24]. The rate threshold, i.e., the velocity required for visual motion perception, varies with retinal eccentricity [23, 25]: it increases with the distance from the central gaze [44, 45]. This implies that the velocity (in degrees per second) at which the notifications flicker or move depends on the field of view and on the location of the glass screen in the periphery. For simplicity, we use a constant velocity for motion detection in our model. The rate threshold for motion detection in the glass thus depends on the location of the head-mounted display in the field of view. Since most smartglasses have a small field of view and we assume the glass screen to be located between the near-peripheral and mid-peripheral regions, the threshold velocity for constant motion detection does not vary considerably. However, we can also extend the model to head-mounted displays in far peripheral regions by adjusting the rate threshold.

Main differences with existing models

Although building on top of the study by Luyten et al. [18], our MPV model presents significant differences. The authors consider a bulky setup composed of two wide screens (60x60 mm) parallel to the head direction. Our MPV model uses state-of-the-art smartglasses such as Google Glass, which provide a single, much smaller 10x10 mm screen, perpendicular to the head direction and lying between the central and peripheral areas of the eye. Moreover, our setup is meant to be used in a variety of indoor and outdoor conditions, with, among other things, variable backgrounds and luminosity. Such a setup limits the number of symbols and colors available to display information. Therefore, our model uses a much more limited graphical language than other studies in order to avoid confusion.

After preliminary experiments, we choose to distinguish actions solely through color and movement changes. Indeed, displaying overly detailed information leads the user to switch his eye focus to the screen. Furthermore, Luyten et al. performed their study on static users and arbitrarily avoided blinking symbols, as blinking may attract too much of the user's attention. However, smartglasses may be used in a wide variety of lighting conditions, with a multitude of possible backgrounds, and, more importantly, in motion, which is an issue not considered by former studies. When the user is moving, our preliminary experiments show that the movement of a symbol is perceptible within the peripheral vision. However, the motion of the user mixes with the motion of the symbol, leading our participants to direct their gaze to the screen to interpret the signal. Blinking, on the other hand, attracts the user's attention while keeping his eye focus on the activity.

Figure 4: Basic navigation application based on our MPV model. A blue dot blinks on the left side to indicate a point of interest on the left (a). Once the user turns left, the entire screen turns blue to indicate that the user is heading in the correct direction (b).

As the experimental conditions are much more diverse than in previous studies, we limit the visual language to three primary colors (blue, red, and green), simple symbols (circles, squares, bands), and blinking movement to activate the peripheral vision without directing the gaze of the user to the screen. This visual language forces us to carefully design applications, as the number of possible combinations considerably limits the amount of information (18 possible symbols). We also believe that, as the display of smartglasses such as Google Glass also overlaps the central vision, a more limited language allows the user to focus on his main task without being tempted to look at the screen to confirm the item displayed. Similarly, this model may cause problems for users suffering from certain forms of color-blindness and strabismus. We investigate these issues and provide solutions in Section 3.

Application of the Model

The MPV model prevents the user from having to focus on the screen of the smartglass by tapping into peripheral vision. This model can be used by a mobile user as well, as it does not obstruct visual awareness. The user can walk and pick up the visual cues at the periphery without looking at the screen. One of the main applications of the model is indoor and outdoor navigation. The model allows the user to be aware of his surroundings while walking or driving without being distracted by the navigation application. Indoors, the model can be used in museums or theme parks, where visitors can navigate without having to constantly stare at the guide map and lose the enjoyment of the place.

Similarly, it can also be used outdoors for sightseeing, where a tourist wants to find the direction of points of interest while walking in the street. Figure 4a shows a simple way of implementing this idea using our MPV model.


A blue dot blinks at a velocity above the rate threshold of detection on the left side of the glass screen, indicating that the user should turn left to see points of interest. Once the user turns left, the entire screen turns blue (Figure 4b), providing an easy-to-understand user interface for navigation. Since blue color cones are mostly found outside the central vision (Section 2), peripheral vision can easily detect the blue color.

MPV model applications are not restricted to navigation. For example, in video games, the MPV model can provide hints to the user: a blinking red square for immediate danger when an enemy shows up, a blue background in case of hidden objects to find, or even moving symbols (left to right or right to left) to indicate directions. Outdoor augmented reality gaming applications can also benefit from the model [11]. The player can move around looking at the physical world and pick up visual cues at the periphery without looking at them. In this case, the application can change the color of the background when the user is looking at a specific object useful for the game. Finally, important notification alerts, such as traffic alerts, incidents, and weather warnings, can also make use of the MPV model, although conveying this information using only simple symbols, colors, and movement patterns may require some adaptation so that the appearance of an event does not distract the user from his main task.

3 EVALUATION

In this section, we present our implementation of the MPV model, based on a peripheral vision-based navigation application. We first discuss the application specifications and then introduce our setup for the peripheral vision experiment, followed by two user studies conducted to evaluate the application. Finally, we present and discuss the results of the experiment.

Peripheral Vision-Based Navigation Demo Application

We evaluate our MPV model by developing a navigation application on both Google Glass and MadGaze Glass. This application, based on the MPV model described above (Section 2), guides the user in indoor and outdoor environments using peripheral vision. The application detects the location of the user and informs him to walk straight or to turn left or right.

Indoor application

The indoor application operates within the university campus and uses the university's public Wi-Fi networks to determine the location of the user. The application considers the Basic Service Set Identifier (BSSID) and Received Signal Strength Indicator (RSSI) of Wireless Access Points (WAP) to estimate the location of the user. We use Wi-Fi signals with an RSSI of less than 90 dB (i.e., stronger than -90 dBm) to have stability in location detection. The application only takes into account the top 5 Wi-Fi hotspots on the path with the highest RSSI.
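A minimal sketch of the kind of filtering the indoor application describes: keep only access points above the RSSI cut-off, then retain the five strongest. The paper does not publish its implementation, so the class, the record type, the -90 dBm reading of the cut-off, and the final fingerprint-matching step are all assumptions for illustration; on Android, the BSSID/RSSI pairs would come from Wi-Fi scan results.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the RSSI filtering described for the indoor application.
public class IndoorWifiFilter {
    record AccessPoint(String bssid, int rssiDbm) {}

    // Assumption: "RSSI less than 90 dB" is read as signals stronger than -90 dBm.
    static final int RSSI_CUTOFF_DBM = -90;
    static final int TOP_K = 5; // the paper keeps the 5 strongest hotspots

    static List<AccessPoint> strongestAccessPoints(List<AccessPoint> scan) {
        return scan.stream()
                .filter(ap -> ap.rssiDbm() > RSSI_CUTOFF_DBM)                     // drop weak, unstable signals
                .sorted(Comparator.comparingInt(AccessPoint::rssiDbm).reversed()) // strongest first
                .limit(TOP_K)                                                     // keep the top 5
                .collect(Collectors.toList());
    }
    // The resulting BSSID set would then be matched against per-location
    // fingerprints to estimate which of the path locations the user is at.
}
```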

To evaluate the capacity of users to navigate using peripheral vision compared to a handheld device, we select a fixed path on the campus between two end-points. The path has 9 locations from start to end, covering various activities. It starts with staircases with multiple turns and variable light conditions and ends with a straight walk with a single turn in bright light. This diversity of path and luminosity conditions helps confront the MPV model with different scenarios.

Outdoor application

The outdoor application runs on the MadGaze Glass X5, as it has a quad-core processor that is more suited for multi-threaded GPS applications in outdoor environments. The outdoor environment also allows using a traditional, GPS-based navigation application for comparison purposes. We compare our results to a smartglass-based application to precisely evaluate the difference between central and peripheral vision on this platform. The existing navigation app on both Google Glass and MadGaze Glass has a long delay of at least 3 seconds to detect a change in user position, which makes it impossible to use the default navigation app for our experiments. Therefore, we design our own responsive navigation app with more frequent GPS position polling to compare against our MPV app. Our navigation app simply implements a map and a route with 9 points of interest in a city park. The user follows the route and sees the attractions in the city park. The GPS in the city park has a precision of at most 5 meters. We chose a route of approximately 200 meters that takes around 5 minutes to complete. The route alternates between parts under tree shade and sunshine and features a variety of decor and colors that may impact the user's reaction to the colors in the MPV model.
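The outdoor application polls the GPS and, as described later in this section, fires a cue within 3 meters of each turn. The sketch below shows one way such a proximity check could look; the haversine computation, the class, and all identifiers are illustrative assumptions rather than the authors' code.

```java
// Sketch of a waypoint-proximity check an outdoor MPV navigation app could use.
public class WaypointTrigger {
    static final double ACTIVATION_RADIUS_M = 3.0;       // cue activation radius from the paper
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Great-circle (haversine) distance between two GPS fixes, in meters.
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // True when the user is close enough to the next turn for the peripheral cue to fire.
    static boolean shouldActivateCue(double userLat, double userLon,
                                     double turnLat, double turnLon) {
        return distanceMeters(userLat, userLon, turnLat, turnLon) <= ACTIVATION_RADIUS_M;
    }
}
```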

Implementation of the MPV model

Figure 5: The user view of our MPV navigation application, showing the changes on the glass screen during navigation. (a) When the entire screen is blue, the user walks straight ahead. (b) When a red bar on a black screen blinks towards the right (left to right of the screen), the user turns right. (c) When a green bar on a black screen blinks towards the left (right to left of the screen), the user turns left. When the entire screen turns black, the user has reached the desired destination and thus stops.

Figure 5 shows the user view of our MPV application, laying out the changes that happen on the glass screen during navigation.


When the entire screen is blue, the user keeps walking straight (Figure 5a). When a red bar on a black background blinks from the left end to the right end of the screen, the user turns right (Figure 5b). Similarly, when a green bar on a black background blinks from the right end to the left end of the screen, the user turns left (Figure 5c). We choose the colors according to our preliminary observations: when the user goes straight ahead, a blue background provides an indication without distracting the user's attention from his main task. Blue cones are indeed the most present in the peripheral area of the eye, and blue is the easiest color to detect in the peripheral vision. We also notice that the best way to signal a punctual event is through a combination of color change, movement, and blinking. As such, even though the peripheral area of the eye does not present as many red and green cones as blue cones, changing from blue to red or green provides enough contrast to signal an event. Moreover, as red and green are primary colors, they remain easy to detect when the user is in motion. We avoid using a single color to show different direction changes, to prevent confusion if the user fails to notice the direction of the bar movement on the small glass screen. These colors, although not optimal, provide a visual reminder of the instruction to follow. They were also chosen by participants in the study by Luyten et al. [18] investigating the use of peripheral vision, to provide contrast in the display of symbols in their peripheral vision. Furthermore, we combine these colors with the movement of the hint bar to compensate for the lack of color cones at the periphery and make peripheral detection easier. The yellow color (Section 2) is not used, as it can cause confusion with red and green. The black background provides darkness contrast to the eye, which helps the peripheral vision detect the red and green bar movements even in daylight (Section 2).
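The mapping from navigation instruction to on-screen cue can be summarized as a small lookup. The sketch below is a hypothetical rendering of it (enum values, record, and method names are ours); only the color and motion assignments are taken from the paper.

```java
// Sketch of the instruction-to-cue mapping shown in Figure 5:
// blue screen = straight, red bar L->R = right, green bar R->L = left,
// black screen = destination reached.
public class MpvCueMapper {
    enum Instruction { GO_STRAIGHT, TURN_RIGHT, TURN_LEFT, ARRIVED }

    record ScreenCue(String background, String bar, String barMotion) {}

    static ScreenCue cueFor(Instruction instruction) {
        return switch (instruction) {
            case GO_STRAIGHT -> new ScreenCue("blue", null, null);                   // solid blue screen
            case TURN_RIGHT  -> new ScreenCue("black", "red", "blink left-to-right");
            case TURN_LEFT   -> new ScreenCue("black", "green", "blink right-to-left");
            case ARRIVED     -> new ScreenCue("black", null, null);                  // solid black screen
        };
    }
}
```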

As previously stated, the navigation hint bars stimulate peripheral vision as much as possible through motion. The bar movement covers the entire 30 degrees of angular field of view of Google Glass (Figure 2). Note that many online reports of the angular field of view of Google Glass are incorrect; therefore, for this study, we model an exact 2D copy of Google Glass, as shown in Figure 2, to measure the precise value of the angular field of view. The velocity for motion detection of the bar in the application is kept at a constant rate of 15 deg/sec to provide the hint for right and left turns. This ensures that the velocity is far above the peripheral vision motion detection threshold of 2.15 deg/sec at 90 degrees eccentricity reported in [25]. Since 90 degrees of retinal eccentricity already falls in the far peripheral visual field (see Figure 1), a velocity of 15 degrees per second stimulates the peripheral vision, as the screen is located between the near-peripheral and mid-peripheral regions.

The bars on the head-mounted display occupy 25% of the screen width. Each time the bar appears, it blinks 4 times over 2 seconds from the left to the right of the screen at a velocity of 15 degrees per second. Considering that the AFOV of Google Glass is 30 degrees and that the bars move at a rate of 15 degrees per second, the total periphery stimulation time is 30/15 = 2 seconds. This ensures that the user receives the hint in a reasonable time to react, as the visual reaction time to rapid movements is at least 250 ms [14]. The bar blinks 4 times, covering 7.5 degrees every half second. The cue is activated within a radius of 3 meters of each turn. As such, we assume the blinking speed of the bar to be above the threshold velocity for motion detection at every retinal eccentricity in the field of view of the glass screen. Apart from blinking, we also considered sliding the bar from one end of the glass screen to the other in the initial prototype.
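A small sketch reproducing the timing arithmetic above; the constants are the values reported in the paper, while the class and field names are illustrative.

```java
// Sketch of the hint-bar timing computation described in the text.
public class HintBarTiming {
    static final double GLASS_AFOV_DEG = 30.0;              // Google Glass angular field of view
    static final double BAR_VELOCITY_DEG_PER_S = 15.0;      // constant bar velocity
    static final double MOTION_THRESHOLD_DEG_PER_S = 2.15;  // detection threshold at 90 deg eccentricity [25]
    static final int BLINK_COUNT = 4;

    public static void main(String[] args) {
        double sweepSeconds = GLASS_AFOV_DEG / BAR_VELOCITY_DEG_PER_S;     // 30 / 15 = 2 s of stimulation
        double secondsPerBlink = sweepSeconds / BLINK_COUNT;               // 0.5 s per blink
        double degreesPerBlink = BAR_VELOCITY_DEG_PER_S * secondsPerBlink; // 7.5 deg per half second
        boolean aboveThreshold = BAR_VELOCITY_DEG_PER_S > MOTION_THRESHOLD_DEG_PER_S;
        System.out.printf("sweep=%.1fs, perBlink=%.1fs (%.1f deg), aboveThreshold=%b%n",
                sweepSeconds, secondsPerBlink, degreesPerBlink, aboveThreshold);
    }
}
```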

Initial Experiment: maximizing peripheral activation through movement

We conduct an initial experiment session with 20 users, in which we evaluate both blinking and sliding in the indoor university environment. The users walk a fixed path consisting of a 2-minute walk from the ground floor to a classroom on the first floor. The path covers light to dark areas and consists of staircases, a straight walkway, and 7 turns (3 left and 4 right). The experiment takes place during the busy evening hours, to make sure the path is crowded and the participants have enough diversions not to stare at the screen. This experiment shows that 70% of users prefer blinking over sliding for direction detection. This is mainly because the users' peripheral vision is not activated quickly enough with sliding bars, leading them to miss the directions on the screen. To avoid missing the directions, the users have to focus more on the screen, and thus have trouble navigating the crowded path. By blinking the bar, we add just the amount of peripheral vision activation necessary for the user not to focus on the screen while in motion. Based on this initial experiment, we select the blinking option in our high-fidelity MPV navigation application.

The main advantage of using a peripheral vision-based navigation application compared to other navigation applications on mobile devices and wearables is that it simplifies multi-tasking and does not obstruct the user's visual awareness. Existing navigation applications on mobile devices and wearables require the user to look at the screen constantly. With our MPV application, the user does not have to stare at the glass screen and can perform another task without obstruction.

Experiment Setup

We set up our experiment for both indoor and outdoor environments using the applications described in Section 3. The indoor experiment takes place inside the university campus, while the outdoor experiment takes place in a public park.


We define a primary task and a secondary task for the users. The users are asked to perform both tasks simultaneously.

Experiment 1: Indoor navigation

The primary task consists of looking at 7 points of interest while navigating the environment. This simulates many real-life scenarios, such as walking in a museum, a national park, or a shopping mall. The pointers consist of 14 large, colorful pictures of common animals and fruits placed at each point of interest. We place 7 pictures on the right-hand side of the user and 7 pictures on the left-hand side along the indoor corridors. The pictures are placed within the cue activation area. The secondary task is to navigate to a specific classroom at the end of the path, using the demo application without looking directly at the screen. The path is unknown to the participants and presents several intersections indicated only through our demo application. We make sure that the path covers different lighting areas and walkways, similar to the initial experiment. The primary task is set up to make sure that the users do not look directly at the screen. The pictures of animals and fruits are chosen such that they appear quirky and humorous to the user. Further, the experiment is conducted in the presence of an observer who converses with the user while walking along the path. These distractions ensure that the users do not shift their focus towards the secondary task of navigation. This setup helps evaluate whether a user can perform a primary task of looking at pictures while performing a secondary navigation task. The observer does not help the user in any way during the experiment.

We also run the demo application on a Xiaomi Mi 3 smartphone, to compare the user experience of the peripheral vision-based navigation application on smartglasses against mobile devices. The interface of the mobile phone application is exactly the same as the one on the Google Glass, and it provides the same functionality. We perform the experiment with both the Google Glass and the Xiaomi mobile phone for each user. The users alternate the starting device between test runs to avoid the bias of one device always being used first in the experiment.

Experiment 2: Outdoor navigation

For our outdoor experiment, we choose a relatively crowded city park with many attractions. We select a route with 9 attractions as our primary task. The demo application directs the users with the MadGaze Glass X5 to each attraction location. We perform the experiment with both our MPV application and the navigation application presented in the previous section. The MPV application allows users to focus on the environment and the attractions, whereas the navigation application requires users to look at the glass screen to see the map and follow the route. We evaluate the impact of both cases on the same smartglasses. The goal of this experiment is twofold: extend our evaluation to a wider range of light conditions and evaluate the impact of our MPV model compared to a more traditional navigation application on smartglasses.

Experimental methodology

We conduct our study on a diverse group of 20 participants for both the indoor and the outdoor environment. The two experiments were performed with entirely different sets of participants to ensure that no one had prior experience with our MPV model. The participants' ages range from 18 to 25. For the indoor experiment, 90% of the users are using Google Glass for the first time, whereas, for the outdoor experiment, all the users are using the MadGaze Glass for the first time. 70% of the participants have prescription glasses and thus wear Google Glass on top of their eyeglasses. Two participants in the indoor experiment have specific eye conditions, respectively color blindness and strabismus.

After conducting the experiment on both devices, the users fill in a computer-based assessment survey. The survey is based on the NASA Task Load Index (NASA TLX) assessment tool [12]. This tool measures the perceived workload of a task by asking users to rate the effort and frustration experienced during the experiment. Mental demand refers to the amount of mental and perceptual activity required to complete the experiment. Physical demand measures the amount of physical activity and whether the experiment is slack or strenuous for the user. The users report the perceived mental demand and physical demand while performing the experiment on all devices, on a scale of 1 to 10, 10 being the highest. Users also report their frustration level, which measures the level of irritation, stress, and annoyance that the user feels while performing the experiment, in a similar fashion. We asked the users the following questions, on a scale from 1 to 10, 1 being "very low" and 10 being "very high":

(1) How mentally demanding was the task?
(2) How physically demanding was the task?
(3) How insecure, discouraged, irritated, stressed, and annoyed were you?

These measures are recorded for the Google Glass, the MadGaze Glass, and the Xiaomi mobile phone. Additionally, we record which device is used to start the experiment and ask the users for additional comments and opinions about the usability of our MPV model in real life.

The experiment is conducted in the presence of an observer, who records additional data for our assessment. The experiment is considered successful if the user reaches the destination of the path. The observer records whether the user completes the experiment successfully and the total time taken by the user to trace the path to the destination. We ask the users to rate the time spent looking at the glass on the following scale: 1 – rarely, 2 – moderate, and 3 – often.


Figure 6: Mental Demand of 20 users (indoors). [Bar chart: Mental Demand (1–10) vs. Number of Users; Google Glass - Indoor and Mobile Phone - Indoor.]

Figure 7: Mental Demand of 20 users (outdoors). [Bar chart: Mental Demand (1–10) vs. Number of Users; MadGaze Glass MPV App - Outdoor and MadGaze Glass Nav App - Outdoor.]

Figure 8: The boxplot of mental demand for 20 users while carrying out the experiment indoors and outdoors, indicating the first quartile (Q1), second quartile (Q2, or median), and the third quartile (Q3) of the data. [Conditions: Google Glass - Indoor, Mobile Phone - Indoor, MadGaze Glass MPV App - Outdoor, MadGaze Glass Nav App - Outdoor.]

This scale simplifies the measurement, as we notice that measurement by observation may not be precise, while the more interesting metric in this case is the users' perception of how often they have to look at the screen and the amount of time spent looking at the screen of the Google Glass and the Xiaomi mobile phone during the experiment. We do not record the reaction time of the user after movement detection. The amount of time spent looking at the screen is normalized to compare the individual time spent by different users looking at each device's screen. The normalization is done by computing the percentage of the time the user looked at the screen of the device given the total time of the experiment.
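That is, the reported looking time is expressed as a fraction of the total trial duration:

```latex
\[
  \text{normalized screen time} \;=\; \frac{t_{\text{screen}}}{t_{\text{total}}} \times 100\%
\]
```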

Experiment Results

As mentioned in Section 3, the user study involves 20 participants for both the indoor and outdoor environments.

Figure 9: Physical Demand of 20 users (indoors). [Bar chart: Physical Demand (1–10) vs. Number of Users; Google Glass - Indoor and Mobile Phone - Indoor.]

Figure 10: Physical Demand of 20 users (outdoors). [Bar chart: Physical Demand (1–10) vs. Number of Users; MadGaze Glass MPV App - Outdoor and MadGaze Glass Nav App - Outdoor.]

Figure 11: The boxplot of physical demand for 20 users while carrying out the experiment indoors and outdoors, indicating the first quartile (Q1), second quartile (Q2, or median), and the third quartile (Q3) of the data. [Conditions: Google Glass - Indoor, Mobile Phone - Indoor, MadGaze Glass MPV App - Outdoor, MadGaze Glass Nav App - Outdoor.]

This sample size is a typically accepted baseline in many previous works [4]. We repeat each experiment twice with each user to compare the MPV model with either a similar experience on the phone (indoor conditions) or an ordinary navigation-based application (outdoor conditions). We limit the experiment to two applications per user and per experiment in order not to overload them. For the indoor experiments, each user uses our MPV-based app and the mobile phone app. The experiments are performed on the Google Glass and the Xiaomi Mi 3, respectively. All the users were able to reach the destination using the devices.

We record mental demand, physical demand, and frustration level during the experiment, for all the devices, on a scale from 1 to 10, 10 being the highest level. Let Q1, Q2, and Q3 be the first, second, and third quartiles, respectively.


Figure 12: Frustration level of 20 users (indoors). [Bar chart: Frustration Level (1–10) vs. Number of Users; Google Glass - Indoor and Mobile Phone - Indoor.]

Figure 13: Frustration level of 20 users (outdoors). [Bar chart: Frustration Level (1–10) vs. Number of Users; MadGaze Glass MPV App - Outdoor and MadGaze Glass Nav App - Outdoor.]

Figure 14: The boxplot of frustration level for 20 users while carrying out the experiment indoors and outdoors, indicating the first quartile (Q1), second quartile (Q2, or median), and the third quartile (Q3) of the data. [Conditions: Google Glass - Indoor, Mobile Phone - Indoor, MadGaze Glass MPV App - Outdoor, MadGaze Glass Nav App - Outdoor.]

Mental demand

Figures 6 and 7 show the mental demand reported by users while performing the experiment indoors and outdoors, respectively. The results in Figure 8 show that, for the indoor experiment, 50% of users experience low mental demand (Q2 = 4). The curve is skewed towards the left and the upper quartile (Q3) is 5.00, showing that 75% of users experience low mental demand (below 5). The average mental demand reported by the 20 users when performing the experiment using Google Glass is 3.8. When using the smartphone, participants report a much higher mental demand, with much more variance in the results. While the first and third quartiles are relatively close (5 and 7, respectively), we observe disparities, with 25% of users experiencing a mental demand between 2 and 5, and 25% between 7 and 9. Although the curve clearly tends towards the right, a non-negligible number of users report a low mental demand. When looking at the individual results, most users report a lower mental demand for the smartglass application, confirming the superiority of navigating using peripheral vision compared to a similar application using central vision.

Regarding the outdoor experiment, the results are even more noticeable. The mental demand required for performing the outdoor tasks is even lower using the MPV app on the MadGaze Glass. Interestingly, when looking at Figure 8, we can see that using the MPV app on a smartphone or a navigation app on smartglasses results in a similar distribution of mental demand among participants. We can thus conclude that the hardware is not the cause of this higher mental demand, and that activating the peripheral vision noticeably requires less focus from the user, focus that could be directed to the road in the case of a car driver.

Physical demand

Figures 9 and 10 show the physical demand required by the users while performing the experiment indoors and outdoors. The curve for the physical demand using Google Glass (Figure 9) is highly skewed towards the left: 50% of the users report a physical demand of 2.00 or lower, and 75% report a physical demand under 3.00. This indicates that the peripheral vision approach significantly reduces the physical demand on the users. The average physical demand when conducting the experiment using Google Glass is 2.65. On the other hand, the curve for the physical demand using the mobile phone is spread out, with the median (Q2) lying at 4.50 (Figure 11), which is 125% higher than that of the Google Glass. The upper quartile (Q3) for the physical demand using the mobile phone is 6.00, which is significantly higher than that of the glass (2.00). The average physical demand reported by the 20 users when conducting the experiment using the mobile phone is 4.65, which is 76% higher than the physical demand using Google Glass.
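These relative differences follow from the reported medians and means:

```latex
\[
  \frac{4.50 - 2.00}{2.00} = 125\%,
  \qquad
  \frac{4.65 - 2.65}{2.65} \approx 75.5\% \approx 76\%.
\]
```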

glass and the phone can easily be explained by the fact thatsmartglasses do require little to no muscle activation forholding. The user can look at the screen through a short eyemovement compared to a handheld smartphone. Comparingour MPV application to a traditional navigation applicationon smartglasses also shows significant differences in physi-cal demand. Indeed, the data for using a regular navigationapp on MadGaze Glass is strongly skewed towards havinga higher physical demand, whereas the data for the sameexperiment with anMVP app shows significantly lower phys-ical demands. It is however notable that the physical demandrequired outdoor is higher in comparison to indoor envi-ronment as shown by Q1 in Figure 11. As both applicationsare running on smartglasses in the outdoor experiment, wecan conclude that actively looking at the screen can becomestrenuous for the user, whereas using peripheral vision al-lows to keep their eye gaze towards their path.Frustration levels


Figures 12 and 13 show the frustration level of the users during the experiment indoors and outdoors. The curve is more skewed towards the low-frustration region for the Google Glass than for the Xiaomi mobile phone. As shown in Figure 14, Q1, Q2, and Q3 for the frustration level in the indoor environment are 1, 2.5, and 4.2 respectively for the Google Glass, whereas for the mobile phone they are 1.75, 3.5, and 5.2. This shows that even though 90% of the users were first-time users of the Google Glass, they experience less frustration performing the experiment with Google Glass than with a mobile phone. The outdoor results are strongly similar to the indoor results for the frustration level, with the notable point that, based on the interquartile range (IQR = Q3 − Q1 = 6 − 2 = 4), the frustration for the regular navigation app on the MadGaze Glass is overall higher than in all the other experiments.
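For readers less familiar with the boxplot statistics used above, the following minimal Python sketch shows how the quartiles and the interquartile range can be computed; the rating vector is purely illustrative and is not the data collected in the study.

import numpy as np

# Hypothetical 1-10 frustration ratings for 20 users (illustrative only,
# not the values collected in the study).
ratings = np.array([1, 2, 2, 3, 1, 4, 2, 5, 3, 2, 1, 6, 4, 3, 2, 5, 2, 3, 4, 1])

# Quartiles as reported in the boxplots, and the interquartile range (spread).
q1, q2, q3 = np.percentile(ratings, [25, 50, 75])
iqr = q3 - q1
print(f"Q1={q1}, median={q2}, Q3={q3}, IQR={iqr}")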

We also perform a statistical t-test analysis to further examine the differences in the metrics. We calculate the paired-sample t-test for all data to evaluate the null hypothesis, that is, that the mean difference between the paired samples is 0. The t-test values for all the indoor data are as follows:

$t_{\text{Mental Demand}} = -1.021,\quad \Pr(T_{19} \ge -1.021) = 1 - 0.16 = 0.84 \quad (1)$

$t_{\text{Physical Demand}} = -1.409,\quad \Pr(T_{19} \ge -1.409) = 1 - 0.0875 = 0.9125 \quad (2)$

$t_{\text{Frustration Level}} = -0.682,\quad \Pr(T_{19} \ge -0.682) = 1 - 0.2517 = 0.7483 \quad (3)$

Given the probability results in equations (1), (2), and (3), based on the table of t-distribution values, the probability that the means of the two paired samples differ is at least 84%, 91%, and 74% for the mental demand, physical demand, and frustration level respectively. This shows that, statistically, the sample data for the MPV model deviates strongly from the same data collected in the experiment using a mobile phone. Similarly, using the data samples of the outdoor experiments, the probability that the means of the two paired samples differ is at least 86%, 92%, and 71% respectively, indicating that the MPV model performs even better in an outdoor environment.
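As an illustration of how such a paired-sample t-test can be computed, here is a minimal Python sketch using scipy.stats.ttest_rel. The rating vectors are placeholders rather than the measurements collected in our study, and ttest_rel returns a two-sided p-value, whereas the probabilities quoted above are one-sided quantities of the form Pr(T19 ≥ t).

import numpy as np
from scipy import stats

# Placeholder paired ratings (one score per participant per condition);
# these are NOT the values collected in the study.
glass_scores = np.array([4, 3, 5, 2, 4, 3, 4, 5, 3, 4, 2, 4, 5, 3, 4, 3, 4, 5, 4, 3])
phone_scores = np.array([6, 5, 7, 4, 6, 5, 7, 8, 5, 6, 4, 7, 8, 5, 6, 5, 7, 8, 6, 5])

# Paired-sample t-test: the null hypothesis is that the mean of the
# per-participant differences is zero (19 degrees of freedom for 20 users).
t_stat, p_two_sided = stats.ttest_rel(glass_scores, phone_scores)
print(f"t = {t_stat:.3f}, two-sided p = {p_two_sided:.4f}")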

User focus and central eye gaze

The user feedback shows that when using the MPV app, users spend on average very little time looking into the head-mounted display of the smartglass, whereas with the regular navigation app users look into the screen very often. In our questionnaire, we ask the users to report the amount of time spent looking into the head-mounted screen on the following discrete scale: 25%, 50%, and 75%.

The users report having spent on average 50% less time looking into the screen of their devices when using our MPV model. In other words, users save approximately 50% more time for their main activity by using their peripheral vision with smartglasses instead of looking into the screen of a mobile phone or the display of the glass. They report their gaze to be directed most of the time towards their path and/or their main activity instead of the screen. Users can therefore focus on the main task without being distracted by the MPV application.

The results from the above analysis show that users experience less mental and physical demand when performing the experiment with the MPV application on Google Glass or MadGaze Glass than with a regular navigation app on a smartglass or a mobile phone. Further, most users experience lower frustration when performing the experiment with a smartglass, even though 90% of them were unfamiliar with smartglasses at the beginning of the experiment. This allows the users to easily carry out both the primary and the secondary task simultaneously. Users also find it easier to walk on staircases while navigating the path through the demo application on Google Glass than on a mobile phone, as the MPV model allows them to focus on their path rather than on the screen. Users find it more difficult to perform the experiment using a mobile phone in low light. Except for the color-blind user, participants found it easy to navigate the path using the peripheral vision-based navigation demo application on the Google Glass. 90% of the users Agree that using their peripheral vision for the navigation application is more beneficial and efficient than looking into the screen.

In both indoor and outdoor conditions, our MPV model performs according to its primary goals:

• The model properly exploits the peripheral vision: the users do not need to look at the screen to be informed of changes in direction.

• The users spend little time looking at the screen, 50% less than with a traditional application, keeping their central eye gaze on the main task.

• The users experience low mental and physical demand when navigation is a secondary task. Our model allows the users to navigate while focusing on their main task.

Special cases: color blindness and strabismus

Several conditions can affect the detection of movement, color, or even the basic activation of peripheral vision. Among the participants in our study, two users were affected by color blindness and strabismus. This was a good opportunity to study the impact of such conditions on our model.

Color blindness


Color blindness is characterized by a decreased ability to distinguish colors, caused by problems within the color cone system. The most common form is red-green color blindness: people living with this condition experience difficulties differentiating red and green. As our application is based on red and green visual cues, we expect lower performance for color-blind people.

We tested the indoor application on one color-blind person. The user did not inform us that he was color-blind, and we ran the experiment in exactly the same conditions as for the other participants. The participant was unable to complete the task. His mental and physical demand levels with the MPV model are extremely high (respectively 9 and 7), and his frustration level reaches 10. As the participant cannot distinguish between the red and green cues, he has to rely on constantly looking at the screen for movement. The user also reports headaches and eye strain while performing the experiment. However, when using the same application on a smartphone, the user reports demand and frustration in the first quartile of our result distribution. This is probably due to the user starting with the application on the smartglass: even though the application on the phone was still not optimal, it provided intense relief after the smartglass application.

This experiment confirms that although a combination of color and movement can activate the peripheral vision, movement alone is not sufficient for our application. We use red and green as they are two primary colors providing strong contrast for most people. For the most common cases of color blindness, we can adapt the color patterns to the condition. However, in the case of Achromatopsia (no color detection), our model loses its validity.
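As an illustration of how the cue colors could be adapted to a user's condition, here is a minimal Python sketch. The palette table and the cue_colors helper are hypothetical, not part of the implementation described in this paper, and the alternative color pairs are illustrative choices only.

# Hypothetical cue palettes keyed by color-vision condition. The red/green
# pair is the default used by our application; the alternatives are
# illustrative substitutions, not the authors' validated design.
CUE_PALETTES = {
    "typical":      {"left": "red",  "right": "green"},
    "protanopia":   {"left": "blue", "right": "yellow"},  # reduced red sensitivity
    "deuteranopia": {"left": "blue", "right": "yellow"},  # reduced green sensitivity
    "tritanopia":   {"left": "red",  "right": "cyan"},    # reduced blue sensitivity
}

def cue_colors(condition: str = "typical") -> dict:
    # Fall back to the default red/green pair for unknown conditions.
    return CUE_PALETTES.get(condition, CUE_PALETTES["typical"])

print(cue_colors("deuteranopia"))  # {'left': 'blue', 'right': 'yellow'}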

Strabismus

We also encountered a user with noticeable strabismus, characterized by a misalignment of the eyes when looking at an object. This user reports mental and physical demand levels in the third quartile when using our application (respectively 6 and 7), with a low frustration level of 2. Interestingly, the mental demand when using the smartphone is significantly higher (9), but the physical demand is slightly lower (6): the short distance to the screen of the smartglasses caused slight strain when engaging the eye. Similarly, the frustration level when using the smartphone was slightly lower (1).

Although not significant, these results suggest that either our MPV model or smartglasses in general are not well adapted to people with this condition. However, to precisely evaluate the impact of each of these components, we would need smartglasses with the display on the left side. This setup would eliminate questions about one eye being less sensitive than the other. Similarly, smartglasses with a screen over both eyes may mitigate the effect of strabismus on peripheral vision by introducing redundancy.

4 CONCLUSION AND FUTURE WORK

In this paper, we have introduced an MPV model that combines motion detection with the theories of peripheral vision and the color sensitivity of the normal human eye. The model taps into the peripheral vision of the user to convey information, without requiring the user to look at the small screen of the smartglasses. The MPV model relies on motion detection by the peripheral vision and on the color perception of the human eye, and incorporates a rate threshold for motion detection to make the model adaptable to the varying positions of head-mounted displays (Section 2).

This model can be used on any smartglasses with a head-mounted display overlapping with the peripheral vision, without any additional hardware requirement. It functions well in both indoor and outdoor navigation scenarios. The model addresses the constricted peripheral awareness problem of mobile devices and provides a software solution to the small field of view of smartglasses (Section 1).

Our experiments on peripheral vision navigation, conducted on 20 users, show that the model is less demanding mentally and physically, and less frustrating for multi-tasking, than staring into the smartphone screen. On average, users saved 50% more time for other activities by using their peripheral vision with Google Glass instead of looking into the screen of the Xiaomi mobile phone. The user assessment survey also shows that 90% of the users find using their peripheral vision more beneficial and efficient. The model also worked in both bright and low-light conditions. Compared to a regular navigation application, users were able to focus more on their primary task and found it much less demanding (40 to 50% less) and frustrating. Peripheral vision enabled participants to focus on their main task, with their central eye gaze directed at the screen 50% less than when using the MPV model on a smartphone or a navigation application on smartglasses.

In the future, we would like to extend our work and experiment with different scenarios such as augmented reality and virtual reality games. We are also considering the possibility of enhancing our model with eye-tracking features. Finally, we should expand our panel of users to people suffering from strabismus and color blindness to refine the evaluation of such conditions on our model.

5 ACKNOWLEDGEMENTS

The authors thank the anonymous reviewers for their insightful comments. This research has been supported, in part, by projects 26211515, 16214817, and G-HKUST604/16 from the Research Grants Council of Hong Kong, as well as the 5GEAR project from the Academy of Finland ICT 2023 programme.


REFERENCES

[1] Reynold Bailey, Ann McNamara, Aaron Costello, Srinivas Sridharan, and Cindy Grimm. 2012. Impact of Subtle Gaze Direction on Short-term Spatial Information Recall. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '12). ACM, New York, NY, USA, 67–74. https://doi.org/10.1145/2168556.2168567

[2] Mark Billinghurst, Adrian Clark, Gun Lee, et al. 2015. A survey of augmented reality. Foundations and Trends® in Human–Computer Interaction 8, 2-3 (2015), 73–272.

[3] Mon-Chu Chen and Roberta L Klatzky. 2007. Displays attentive to unattended regions: presenting information in a peripheral-vision-friendly way. In International Conference on Human-Computer Interaction. Springer, 23–31.

[4] Sunny Consolvo, Larry Arnstein, and B. Robert Franza. 2002. User Study Techniques in the Design and Evaluation of a Ubicomp Environment. Springer Berlin Heidelberg, Berlin, Heidelberg, 73–90. https://doi.org/10.1007/3-540-45809-3_6

[5] Enrico Costanza, Samuel A Inverso, Elan Pavlov, Rebecca Allen, and Pattie Maes. 2006. Eye-q: Eyeglass peripheral display for subtle intimate notifications. In Proceedings of the 8th conference on Human-computer interaction with mobile devices and services. ACM, 211–218.

[6] Tim Dobbert. 2006. Matchmoving: the invisible art of camera tracking. John Wiley & Sons.

[7] Kevin Fan, Jochen Huber, Suranga Nanayakkara, and Masahiko Inami. 2014. SpiderVision: extending the human field of view for augmented awareness. In Proceedings of the 5th Augmented Human International Conference. ACM, 49.

[8] David Finlay. 1982. Motion perception in the peripheral visual field. Perception (1982).

[9] Uwe Gruenefeld, Tim Claudius Stratmann, Jinki Jung, Hyeopwoo Lee, Jeehye Choi, Abhilasha Nanda, and Wilko Heuten. [n. d.]. Guiding Smombies: Augmenting Peripheral Vision with Low-Cost Glasses to Shift the Attention of Smartphone Users. ([n. d.]).

[10] M Hahn and Y Kim. 2009. Designing Attention-Aware Peripheral Displays with Gaze-based Notification Control. In International Universal Communication Symposium 2009. 241–244.

[11] Michael Haller. 2006. Emerging Technologies of Augmented Reality: Interfaces and Design. IGI Global.

[12] SG Hart and L Staveland. 1986. NASA Task Load Index (TLX) V1.0 Users Manual.

[13] Yoshio Ishiguro and Jun Rekimoto. 2011. Peripheral vision annotation: noninterference information presentation method for mobile augmented reality. In Proceedings of the 2nd Augmented Human International Conference. ACM, 8.

[14] Steven W Keele and Michael I Posner. 1968. Processing of visual feedback in rapid movements. Journal of Experimental Psychology 77, 1 (1968), 155.

[15] Greg Kipper and Joseph Rampolla. 2012. Augmented Reality: an emerging technologies guide to AR. Elsevier.

[16] Kangdon Lee. 2012. Augmented reality in education and training. TechTrends 56, 2 (2012), 13–21.

[17] W. Lu, B. L. H. Duh, and S. Feiner. 2012. Subtle cueing for visual search in augmented reality. In 2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). 161–166. https://doi.org/10.1109/ISMAR.2012.6402553

[18] Kris Luyten, Donald Degraen, Gustavo Rovelo Ruiz, Sven Coppers, and Davy Vanacken. 2016. Hidden in Plain Sight: an Exploration of a Visual Language for Near-Eye Out-of-Focus Displays in the Peripheral View. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 487–497.

[19] Andrew Maimone, Douglas Lanman, Kishore Rathinavel, Kurtis Keller, David Luebke, and Henry Fuchs. 2016. Building Wide Field of View Augmented Reality Eyewear with Pinlight Displays. (2016).

[20] WC Maples, Wes DeRosier, Richard Hoenes, Rodney Bendure, and Sherl Moore. 2008. The effects of cell phone use on peripheral vision. Optometry - Journal of the American Optometric Association 79, 1 (2008), 36–42.

[21] Daniel S Marigold. 2008. Role of peripheral visual cues in online visual guidance of locomotion. Exercise and Sport Sciences Reviews 36, 3 (2008), 145–151.

[22] Andrii Matviienko, Andreas Löcken, Abdallah El Ali, Wilko Heuten, and Susanne Boll. 2016. NaviLight: Investigating Ambient Light Displays for Turn-by-turn Navigation in Cars. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '16). ACM, New York, NY, USA, 283–294. https://doi.org/10.1145/2935334.2935359

[23] Suzanne P McKee and Ken Nakayama. 1984. The detection of motion in the peripheral visual field. Vision Research 24, 1 (1984), 25–32.

[24] Michel Millodot. 2014. Dictionary of optometry and visual science. Elsevier Health Sciences.

[25] William A Monaco, Joel T Kalb, and Chris A Johnson. 2007. Motion detection in the far peripheral visual field. Army Research Laboratory Report ARL-MR-06 (2007).

[26] Gerald M Murch. 1984. Physiological principles for the effective use of color. IEEE Computer Graphics and Applications 4, 11 (1984), 48–55.

[27] Takuro Nakuo and Kai Kunze. 2016. Smart glasses with a peripheral vision display. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. ACM, 341–344.

[28] Jason Orlosky, Qifan Wu, Kiyoshi Kiyokawa, Haruo Takemura, and Christian Nitschke. 2014. Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays. In Proceedings of the 2nd ACM Symposium on Spatial User Interaction. ACM, 54–61.

[29] Oskar Palinko, Andrew L Kun, Zachary Cook, Adam Downey, Aaron Lecomte, Meredith Swanson, and Tina Tomaszewski. 2013. Towards augmented reality navigation using affordable technology. In Proceedings of the 5th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. ACM, 238–241.

[30] Christopher Plaue and John Stasko. 2007. Animation in a peripheral display: distraction, appeal, and information conveyance in varying display configurations. In Proceedings of Graphics Interface 2007. ACM, 135–142.

[31] Benjamin Poppinga, Niels Henze, Jutta Fortmann, Wilko Heuten, and Susanne Boll. 2012. AmbiGlasses - Information in the Periphery of the Visual Field. In Mensch & Computer. 153–162.

[32] I Rakkolainen, M Turk, and T Höllerer. 2016. A Superwide-FOV Optical Design for Head-Mounted Displays. (2016).

[33] Umair Rehman and Shi Cao. 2015. Augmented Reality-Based Indoor Navigation Using Google Glass as a Wearable Head-Mounted Display. In Systems, Man, and Cybernetics (SMC), 2015 IEEE International Conference on. IEEE, 1452–1457.

[34] Donghao Ren, Tibor Goldschwendt, YunSuk Chang, and Tobias Höllerer. 2016. Evaluating wide-field-of-view augmented reality with mixed reality simulation. In Virtual Reality (VR), 2016 IEEE. IEEE, 93–102.

[35] Kara Rogers. 2010. The eye: the physiology of human perception. The Rosen Publishing Group.

[36] Tobias Sielhorst, Marco Feuerstein, and Nassir Navab. 2008. Advanced medical displays: A literature review of augmented reality. Journal of Display Technology 4, 4 (2008), 451–467.


[37] Wanda J Smith. 1979. A review of literature relating to visual fatigue. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 23. Sage Publications, 362–366.

[38] Srinivas Sridharan and Reynold Bailey. 2015. Automatic Target Prediction and Subtle Gaze Guidance for Improved Spatial Information Recall. In Proceedings of the ACM SIGGRAPH Symposium on Applied Perception (SAP '15). ACM, New York, NY, USA, 99–106. https://doi.org/10.1145/2804408.2804415

[39] Hans Strasburger, Ingo Rentschler, and Martin Jüttner. 2011. Peripheral vision and pattern recognition: A review. Journal of Vision 11, 5 (2011), 13–13.

[40] Ianchulev T, Minckler DS, Hoskins H, et al. 2014. Wearable technology with head-mounted displays and visual function. JAMA 312, 17 (2014), 1799–1801. https://doi.org/10.1001/jama.2014.13754

[41] Chek Tien Tan and Donny Soh. 2010. Augmented reality games: A review. Proceedings of Gameon-Arabia, Eurosis (2010).

[42] Harry Moss Traquair, George Ian Scott, and Norman McOmish Dott. 1957. Clinical perimetry. Kimpton.

[43] Kazuhiko Ukai and Peter A Howarth. 2008. Visual fatigue caused by viewing stereoscopic motion images: Background, theories, and observations. Displays 29, 2 (2008), 106–116.

[44] Alexander H Wertheim. 2012. Tutorials on motion perception. Vol. 20. Springer Science & Business Media.

[45] Bruce J West. 1993. Patterns, information and chaos in neuronal systems. Vol. 2. World Scientific. 65 pages.

[46] Robert Xiao and Hrvoje Benko. 2016. Augmenting the Field-of-View of Head-Mounted Displays with Sparse Peripheral Displays. https://www.microsoft.com/en-us/research/publication/augmenting-field-view-head-mounted-displays-sparse-peripheral-displays/

[47] Robert Xiao and Hrvoje Benko. 2016. Augmenting the Field-of-View of Head-Mounted Displays with Sparse Peripheral Displays. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM, 1221–1232.

[48] Wataru Yamada and Hiroyuki Manabe. 2016. Expanding the Field-of-View of Head-Mounted Displays with Peripheral Blurred Images. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, 141–142.

[49] Donggang Yu, Jesse Sheng Jin, Suhuai Luo, Wei Lai, and Qingming Huang. 2009. A useful visualization technique: a literature review for augmented reality and its application, limitation & future direction. In Visual Information Communication. Springer, 311–337.