Copyright © 2010 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions Dept, ACM Inc., fax +1 (212) 869-0481 or e-mail [email protected]. ETRA 2010, Austin, TX, March 22 – 24, 2010. © 2010 ACM 978-1-60558-994-7/10/0003 $10.00

Measuring Vergence Over Stereoscopic Video with a Remote Eye Tracker

Brian C. Daugherty† Andrew T. Duchowski†∗ Donald H. House†∗ Celambarasan Ramasamy‡

†School of Computing & ‡Digital Production Arts, Clemson University

Abstract

A remote eye tracker is used to explore its utility for ocular vergence measurement. Subsequently, vergence measurements are compared in response to anaglyphic stereographic stimuli as well as in response to monoscopic stimulus presentation on a standard display. Results indicate a highly significant effect of anaglyphic stereoscopic display on ocular vergence when viewing a stereoscopic calibration video. Significant convergence measurements were obtained for stimuli fused in the anterior image plane.

CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—Display algorithms; I.3.6 [Computer Graphics]: Methodology and Techniques—Ergonomics;

Keywords: eye tracking, stereoscopic rendering

1 Introduction & Background

Since their 1838 introduction by Sir Charles Wheatstone [Lipton 1982], stereoscopic images have appeared in a variety of forms, including dichoptic stereo pairs (a different image to each eye), random-dot stereograms [Julesz 1964], autostereograms (e.g., the popular Magic Eye and Magic Eye II images), and anaglyphic images and movies, with the latter currently resurging in popularity in American cinema (e.g., Monsters vs. Aliens, DreamWorks Animation and Paramount Pictures). An anaglyphic stereogram is a composite image consisting of two colors and two slightly different perspectives that produces a stereoscopic image when viewed through two corresponding colored filters. Although the elicited perception of depth appears to be effective, relatively little is known about how consistently this effect is reflected in viewers' eye movements. Autostereograms, for example, are easily fused by some, but not by others.

When the eyes move through equal angles in opposite directions, disjunctive movement, or vergence, is produced [Howard 2002]. In horizontal vergence, each visual axis moves within a plane containing the interocular axis. When the visual axes move inwards, the eyes converge; when the axes move outwards, they diverge. The convergent movement of the eyes (binocular convergence), i.e., their simultaneous inward rotation toward each other (cf. divergence denotes the outward rotation), ensures that the projections of images on the retinas of both eyes are in registration with each other, allowing the brain to fuse the images into a single percept. This fused percept provides stereoscopic vision of three-dimensional space. Normal binocular vision is primarily characterized by this type of fusional vergence of the disparate retinal images [Shakhnovich 1977]. Vergence driven by retinal blur is distinguished as accommodative vergence [Büttner-Ennever 1988].

∗Email: {andrewd | dhouse}@cs.clemson.edu

The angle between the visual axes is the vergence angle. When a person fixates a point at infinity, the visual axes are parallel and the vergence angle is zero. The angle increases when the eyes converge. For symmetrical convergence, the angle of horizontal vergence φ is related to the interocular distance a and the distance D of the point of fixation from a point midway between the eyes by the expression tan(φ/2) = a/(2D). Thus, the change in vergence per unit change in distance is greater at near than at far viewing distances.
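For concreteness, here is a quick worked example using the average interocular distance (a = 6.3 cm) and the nominal 50 cm viewing distance assumed later in this paper:

$$\tan\frac{\varphi}{2} = \frac{6.3\ \text{cm}}{2 \times 50\ \text{cm}} = 0.063 \quad\Longrightarrow\quad \varphi \approx 2\arctan(0.063) \approx 7.2^{\circ}.$$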

About 70% of a person's normal range of vergence is used within one meter from the eyes. The angle of vergence changes about 14° when gaze is moved from infinity to the nearest distance for comfortable convergence, at about 25 cm. Vergence changes about 36° when the gaze moves to the nearest point to which the eyes can converge. About 90% of this total change occurs when the eyes converge from 1 m.

In this paper, vergence measurements are made over anaglyphic stereo imagery. Although depth perception has been studied on desktop 3D displays [Holliman et al. 2007], eye movements were not used to verify vergence. Holliman et al. conclude that depth judgment cannot always be predicted from display geometry alone. The only other similar effort we are aware of is a measurement of interocular distance on a stereo display during rendering of a stereo image at five different depths [Kwon and Shul 2006]. Interocular distance was seen to vary by about 10 pixels across three participants. In this paper, we report observations on how binocular angular disparity (of twelve participants) is affected when viewing an anaglyphic stereo video clip.

How can vergence be measured when viewing a dichoptic computer animation presented anaglyphically? To gauge the effect of stereo display on ocular vergence, it is sufficient to measure the disparity between the left and right horizontal gaze coordinates, e.g., xr − xl, given the left and right gaze points (xl, yl) and (xr, yr) as delivered by current binocular eye trackers. Thus far, to our knowledge, the only such vergence measurements to have been carried out have been performed over random dot stereograms [Essig et al. 2004].

The question that we are attempting to address is whether the vergence angle can be measured from gaze point data captured by a (binocular) eye tracker. Of particular interest is the measure of relative vergence, that is, the change in vergence from fixating a point P placed some distance Δd behind (or in front of) point F, the point at which the visual axes converge at viewing distance D. The visual angle between P and F at the nodal point of the left eye is φl, signed positive if P is to the right of the fixation point. The same angle for the right eye is φr, signed in the same way. The binocular disparity of the images of F is zero, since each image is centered on each eye's visual axis. The angular disparity η of the images of P is φl − φr. If θF is the binocular subtense of point F and θP is the binocular subtense of point P, then η = φl − φr = θP − θF. Thus, the angular disparity between the images of a pair of objects is the binocular subtense of one object minus the binocular subtense of the other (see Figure 1).

Figure 1: Binocular disparity of point P with respect to fixation point F, at viewing distance D with (assumed) interocular distance a [Howard and Rogers 2002]. Given the binocular gaze point coordinates on the image plane, (xl, yl) and (xr, yr), the distance Δd between F and P is obtained via triangle similarity. Assuming symmetrical vergence and small disparities, angular disparity η is derived (see text).

Given the binocular gaze point coordinates reported by the eye tracker, (xl, yl) and (xr, yr), an estimate of η can be derived following calculation of the distance Δd between F and P, obtained via triangle similarity:

$$\frac{a}{D + \Delta d} = \frac{x_r - x_l}{\Delta d} \quad\Longrightarrow\quad \Delta d = \frac{(x_r - x_l)\,D}{a - (x_r - x_l)}.$$

For objects in the median plane of the head, φl = −φr, so the total disparity η = φl − φr is twice either angle in magnitude and, by elementary geometry, equals θP − θF [Howard and Rogers 2002]. If the interocular distance is a,

$$\tan\frac{\theta_P}{2} = \frac{a}{2(D + \Delta d)} \qquad\text{and}\qquad \tan\frac{\theta_F}{2} = \frac{a}{2D}.$$

For small angles, the tangent of an angle is approximately equal to the angle in radians, so θP ≈ a/(D + Δd) and θF ≈ a/D. Therefore,

$$\eta = \theta_P - \theta_F \approx \frac{a}{D + \Delta d} - \frac{a}{D} = \frac{-a\,\Delta d}{D^2 + D\,\Delta d}. \tag{1}$$

Since, for objects within Panum's fusional area, Δd is usually small by comparison with D, we can write

$$\eta \approx \frac{-a\,\Delta d}{D^2}. \tag{2}$$

Thus, for symmetrical vergence and small disparities, the disparity between the images of a small object is approximately proportional to the distance in depth of the object from the fixation point.

In the current analysis, the following assumptions are made for simplicity. Interocular distance is assumed to be the average separation between the eyes (a = 6.3 cm), i.e., the average for all people regardless of gender, although this can vary considerably (ranging from about 5.3 to 7.3 cm) [Smith and Atchison 1997]. Vergence is assumed to be symmetrical (although in our experiments no chin rest was used and so the head was free to rotate, violating this assumption; for a derivation of angular disparity under this condition see Howard and Rogers [2002]). Viewing distance is assumed to be D = 50 cm, the operating range of the eye tracker, although the tracker allows head movement within a 30 × 15 × 20 cm volume (see below). It is also important to note that the above derivation of angular disparity (vergence) assumes zero binocular disparity when viewing point F at the screen surface. Empirical measurements during calibration show that this assumption does not always hold, or it may be obscured by the noise inherent in the eye tracker's position signal.
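To make the computation concrete, the sketch below evaluates equation (1) from a pair of gaze x-coordinates under these assumptions (a = 6.3 cm, D = 50 cm). It is illustrative only: the function names are ours, not the authors', and it presumes the gaze coordinates have already been converted from the tracker's normalized units to centimeters on the screen plane.

```python
# Illustrative sketch (not the authors' code): angular disparity from a
# binocular gaze sample, following equation (1). Assumes gaze x-coordinates
# are expressed in centimeters on the screen plane.
import math

A_CM = 6.3   # assumed average interocular distance (cm)
D_CM = 50.0  # assumed viewing distance to the display (cm)

def depth_offset_cm(x_left: float, x_right: float,
                    a: float = A_CM, D: float = D_CM) -> float:
    """Delta-d, the depth of the fused point P relative to the screen plane,
    from on-screen horizontal disparity (xr - xl) via triangle similarity."""
    disparity = x_right - x_left
    return disparity * D / (a - disparity)

def angular_disparity_deg(x_left: float, x_right: float,
                          a: float = A_CM, D: float = D_CM) -> float:
    """Angular disparity eta in degrees, equation (1). With these sign
    conventions, positive values indicate convergence in front of the
    screen and negative values indicate divergence behind it."""
    dd = depth_offset_cm(x_left, x_right, a, D)
    eta_rad = -a * dd / (D * D + D * dd)
    return math.degrees(eta_rad)

if __name__ == "__main__":
    # Crossed disparity of 0.5 cm (right-eye gaze point left of the left-eye one):
    print(round(angular_disparity_deg(x_left=0.25, x_right=-0.25), 3))  # ~0.573
```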

2 Methodology

A within-subjects experiment was conducted to test vergence measurement. The display format of one of the nine video clips shown served as the independent variable in the analysis, which is a subset of a larger study. Two versions of the clip were used to test vergence response; the only difference between the two versions was that one was rendered in a standard two-dimensional monoscopic format while the other was rendered in a red-cyan anaglyphic stereoscopic format. Ocular angular vergence response served as the dependent variable. The operational hypothesis was simply that a significant difference in vergence response would be observed between watching the monoscopic and stereoscopic versions of the video.

2.1 Apparatus

A Tobii ET-1750 video-based corneal reflection (binocular) eye tracker was used for real-time gaze coordinate measurement (and recording). The eye tracker operates at a sampling rate of 50 Hz with an accuracy typically better than 0.3° over a ±20° horizontal and vertical range using the pupil/corneal reflection difference [Tobii Technology AB 2003] (in practice, measurement error ranges roughly ±10 pixels). The eye tracker's 17″ LCD monitor was set to 1280 × 1024 resolution and the stimulus display was maximized to cover the entire screen (save for its title bar at the top of the screen). The eye tracking server ran on a dual 2.0 GHz AMD Opteron 246 PC (2 GB RAM) running Windows XP. The client display application ran on a 2.2 GHz AMD Opteron 148 Sun Ultra 20 running the CentOS operating system. The client and server PCs were connected via 1 Gb Ethernet through a switch on the same subnet. Participants initially sat at a viewing distance of about 50 cm from the monitor, the tracker video camera's focal length.

2.2 Participants

Twelve college students (9 M, 3 F; ages 22–27) participated in the study, recruited verbally on a volunteer basis. Only three participants had previously seen a stereoscopic film. Participants were not screened for color blindness or impaired depth perception.

2.3 Stimulus

The anaglyphic stereogram used in this study was created with a red image for the left eye and a cyan image for the right eye. Likewise, viewers of these images wore glasses with a red lens in front of the left eye and a cyan lens in front of the right. The distance between corresponding pixels in the red and cyan images creates an illusion of depth when the composite image is fused by the viewer.

(a) back visual plane   (b) middle visual plane   (c) front visual plane

Figure 2: Calibration video, showing a white disk visiting each of the four corners and the center of each visual plane, along with a viewer's gaze point (represented by a small rectangle) during visualization: (a) the disk appears to sink into the screen; (b) the disk appears at the monocular, middle image plane; (c) the disk appears to "pop out" of the screen. The size of the gaze point rectangle is scaled to visually depict horizontal disparity. A smaller rectangle, as in (a), represents divergence, while a larger rectangle, as in (c), represents convergence.

Eight anaglyphic video clips were shown to participants, with a ninth rendered traditionally (monoscopically). All nine videos were computer-generated. The first of the anaglyphic videos was of a roving disk in three-dimensional space, as shown in Figure 2. The purpose of this video was calibration of vergence normalization, as the stereoscopic depth of the roving disk matched the depth of the other video clips. The goal was to elicit divergent eye movements as the disk sank into and beyond the monocular image plane, and to elicit convergent eye movements as the disk passed through and in front of the monocular image plane. The roving disk moves within a cube, texture-mapped with a checkerboard pattern to provide additional depth cues. The disk starts moving in the back plane. After stopping at all four corners and the center, the disk moves closer to the viewer, to the middle plane. The disk again visits each of the four corners and the center, before translating to the front plane, where again each of the four corners and the center is visited. Only the 40 s calibration video clip is relevant to the analysis given in this paper. This video was always the first viewed by each participant.

2.4 Procedure

Demographic information consisting of the age and gender of each participant was collected. Each participant filled out a short pre-test questionnaire regarding his or her familiarity with anaglyphic stereographs. A quick (two-dimensional) calibration of the eye tracker was performed by having participants visually follow a roving dot between nine different locations on the screen. After 2D calibration, participants were presented with a second, this time 3D (stereoscopic), calibration video of the roving disk, translating in 2D as well as in depth. Next, participants were shown three short videos, one of the long videos (stereo or mono), three more short videos, and once again the long video (stereo or mono). The order of presentation of the short videos followed a Latin square rotation. The order of the long videos was toggled between viewers such that all odd-numbered viewers saw the stereoscopic version first and all even-numbered viewers saw the monoscopic version first.

Participants were instructed to keep looking at the center of the roving calibration disk as it moved on the screen, during each of the 2D and 3D calibrations. No other instructions were given to participants as they viewed the other nine stimulus video clips (a "free viewing" task was implied).

2.5 Data Stream Synchronization

A major issue concerning gaze data analysis over dynamic media such as video is synchronization. It is imperative that gaze data be properly aligned with the video frames over which eye movements were recorded (this is needed for subsequent visualization). Neither data stream necessarily begins at the same time, nor are the two streamed at the same data rate. The video display library used (xine-lib) provides media-player style functionality (versus video processing), and as such is liable to drop frames following video stream decompression in order to maintain the desired playback speed. All videos used in the experiment were created to run at 25 frames per second. If no video frames are dropped, synchronization is straightforward, since the eye tracker records data at 50 Hz; it relies mainly on identification of a common start point (the eye tracker provides a timestamp that can be used for this purpose, assuming both streams are initiated at about the same time, e.g., as controlled by the application).
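The sketch below illustrates this ideal, no-dropped-frames alignment. The constants and function names are ours, not taken from the original playback application, and timestamps are assumed to be in milliseconds.

```python
# Illustrative sketch of gaze-to-frame alignment in the ideal case where no
# video frames are dropped and both streams share a common start timestamp.
FPS = 25.0        # video playback rate (frames per second)
GAZE_RATE = 50.0  # eye tracker sampling rate (Hz)

def frame_for_sample(sample_ts_ms: float, start_ts_ms: float) -> int:
    """Index of the video frame on screen when a gaze sample was taken."""
    elapsed_s = (sample_ts_ms - start_ts_ms) / 1000.0
    return max(0, int(elapsed_s * FPS))

if __name__ == "__main__":
    start = 0.0
    # At 50 Hz, successive samples are 20 ms apart, so roughly two gaze
    # samples map onto each 40 ms video frame: prints 0, 0, 1, 1, 2, 2.
    for i in range(6):
        print(frame_for_sample(i * 1000.0 / GAZE_RATE, start))
```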

For binocular vergence analysis, eye movement analysis is required both to smooth the data, reducing the inherent noise due to eye movement jitter, and to identify fixations within the eye movement data stream. A simple and popular approach to denoising is the use of a smoothing (averaging) filter. For visualization playback, the coordinate used is the linear average of the filter (of width 10 frames, in the present case).

The use of a smoothing filter can introduce lag, depending on the filter's temporal position within the data stream. If the filter is aligned to compute the average of the last ten gaze data points, and the difference in timestamps between successive gaze points is 20 ms, then the average gaze coordinate from the filter summation will be 100 ms behind. To alleviate this lag, the filter is temporally shifted forward by half its length.

Care is taken in the filtering summation to ignore invalid gaze data points. The eye tracker will, on occasion, e.g., due to blinks or other reasons for loss of the eye image in the tracker's cameras, flag gaze data as invalid (a validity code is provided by the eye tracking server). In addition to the validity code, such gaze data is set to (−1, −1), which, if folded into the smoothing filter's summation, would inappropriately skew the gaze centroid. Invalid data is therefore ignored in the filter's summation, resulting in potentially fewer gaze points considered for the average calculation. To avoid the problem of the filter being given only a few or no valid points with which to compute the average, a threshold of 80% is used: if more than 80% of the filter's data is invalid, the filter's output is flagged as invalid and is not drawn.
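A minimal sketch of such a filter follows. It combines the centered (lag-compensated) window, the (−1, −1) invalid-sample convention, and the 80% threshold described above, but the function and constant names are illustrative rather than the authors' implementation.

```python
# Illustrative smoothing filter: a 10-sample window centered on the current
# sample (i.e., shifted forward by half its length to offset lag), skipping
# samples marked invalid as (-1, -1), and returning no output when more than
# 80% of the window is invalid.
from typing import List, Optional, Tuple

Point = Tuple[float, float]
WINDOW = 10                  # filter width, in gaze samples
MAX_INVALID_FRACTION = 0.8   # above this, the smoothed point is not drawn

def smooth(samples: List[Point], i: int) -> Optional[Point]:
    """Centered moving average of the gaze samples around index i."""
    half = WINDOW // 2
    window = samples[max(0, i - half): i + half]
    valid = [p for p in window if p != (-1.0, -1.0)]
    if not window or (1.0 - len(valid) / len(window)) > MAX_INVALID_FRACTION:
        return None  # too many invalid samples: flag output as invalid
    return (sum(x for x, _ in valid) / len(valid),
            sum(y for _, y in valid) / len(valid))
```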


3 Results

Following recording of the raw eye movement data, the collected gaze points (xl, yl), (xr, yr) and timestamps t were analyzed to detect fixations in the data stream. The angular disparity between the left and right gaze coordinates, given by equation (1), was calculated for every gaze point that occurred during a fixation, as identified by an implementation of the position-variance approach, with a spatial deviation threshold of 0.029 and the number of samples set to 10. Note that the gaze data used in this analysis is normalized, hence the deviation threshold specified is in dimensionless units, although it is typically expressed in pixels or degrees of visual angle. The fixation analysis code is freely available on the web.1 The angular disparity serves as the dependent variable in the analysis of the experiment.
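For readers unfamiliar with the position-variance criterion, the following is a minimal sketch of the idea, not the LC Technologies implementation cited in footnote 1; the names and the use of a simple root-mean-square spread are our own illustrative choices.

```python
# Illustrative position-variance fixation test: the most recent MIN_SAMPLES
# gaze points are treated as part of a fixation while their spread about the
# centroid stays within the (dimensionless, normalized-coordinate) threshold.
import math
from typing import List, Tuple

DEVIATION_THRESHOLD = 0.029  # normalized screen units, as in the text
MIN_SAMPLES = 10

def is_fixating(points: List[Tuple[float, float]]) -> bool:
    """True if the last MIN_SAMPLES points satisfy the variance criterion."""
    if len(points) < MIN_SAMPLES:
        return False
    recent = points[-MIN_SAMPLES:]
    cx = sum(x for x, _ in recent) / MIN_SAMPLES
    cy = sum(y for _, y in recent) / MIN_SAMPLES
    spread = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                           for x, y in recent) / MIN_SAMPLES)
    return spread <= DEVIATION_THRESHOLD
```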

Averaging across each of the three calibration segments, when the calibration stimulus was shown in each of the back, mid, and front stereo planes, with image plane and subject acting as fixed factors, a repeated-measures (within-subjects) one-way ANOVA shows a highly significant effect of stereo plane on vergence response (F(2,22) = 8.15, p < 0.01).2 Pairwise comparisons using t-tests with pooled SD indicate highly significant differences between disparities measured when viewing the front plane and each of the back and mid planes (p < 0.01), but not between the back and mid planes, as shown in Figure 3.
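Footnote 2 indicates the analysis was run in R; purely as orientation, the sketch below sets up the same repeated-measures layout in Python with statsmodels. The data values are synthetic placeholders, not the study's measurements, and the pooled-SD pairwise t-tests of the original analysis are not reproduced here.

```python
# Illustrative repeated-measures ANOVA layout (Python analogue of the R
# analysis): one mean disparity per subject per stereo plane. The numbers
# below are synthetic placeholders, not the experiment's data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(1, 13):                       # twelve participants
    for plane, level in [("back", 0.30), ("mid", 0.35), ("front", 0.80)]:
        rows.append({"subject": subject, "plane": plane,
                     "disparity": level + rng.normal(0.0, 0.1)})
df = pd.DataFrame(rows)

# Within-subjects one-way ANOVA: effect of stereo plane on mean disparity.
print(AnovaRM(df, depvar="disparity", subject="subject", within=["plane"]).fit())
```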

Figure 3: Mean disparity during calibration (degrees of visual angle, with SE) by stereo plane (Back, Mid, Front): binocular disparity when viewing the stereoscopic calibration video, averaged over all viewers, with the ∼40 s viewing time split into thirds, i.e., when the calibration dot was seen in the back plane, the mid plane, and then in the front plane.

4 Conclusion

Results suggest that vergence is more active when viewing stereoscopic imagery than when no stereo disparity is present. Moreover, a commercially available binocular eye tracker can be used to measure vergence via estimation of horizontal disparity between the left and right gaze points recorded when fixating.

1 The position-variance fixation analysis code was originally made available by LC Technologies. The C++ interface and implementation ported from C by Mike Ashmore are available at: <http://andrewd.ces.clemson.edu/courses/cpsc412/fall08>.

2 With sphericity assumed by R, the statistical package used throughout.

Acknowledgments

This work was supported in part by IIS grant #0915085 from the National Science Foundation (HCC: Small: Eye Movement in Stereoscopic Displays, Implications for Visualization).

References

BÜTTNER-ENNEVER, J. A., Ed. 1988. Neuroanatomy of the Oculomotor System. Reviews of Oculomotor Research, vol. II. Elsevier Press, Amsterdam, Holland.

ESSIG, K., POMPLUN, M., AND RITTER, H. 2004. Application of a Novel Neural Approach to 3D Gaze Tracking: Vergence Eye-Movements in Autostereograms. In Proceedings of the Twenty-Sixth Annual Meeting of the Cognitive Science Society, K. Forbus, D. Gentner, and T. Regier, Eds. Cognitive Science Society, 357–362.

HOLLIMAN, N., FRONER, B., AND LIVERSEDGE, S. 2007. An Application Driven Comparison of Depth Perception on Desktop 3D Displays. In Stereoscopic Displays and Applications XVIII. SPIE.

HOWARD, I. P. 2002. Seeing in Depth. Vol. I: Basic Mechanisms. I Porteous, University of Toronto Press, Thornhill, ON, Canada.

HOWARD, I. P. AND ROGERS, B. J. 2002. Seeing in Depth. Vol. II: Depth Perception. I Porteous, University of Toronto Press, Thornhill, ON, Canada.

JULESZ, B. 1964. Binocular Depth Perception without Familiarity Cues. Science 145, 3630 (Jul), 356–362.

KWON, Y.-M. AND SHUL, J. K. 2006. Experimental Researches on Gaze-Based 3D Interaction to Stereo Image Display. In Edutainment, Z. Pan et al., Ed. Springer-Verlag, Berlin, 1112–1120. LNCS 3942.

LIPTON, L. 1982. Foundations of the Stereoscopic Cinema: A Study in Depth. Van Nostrand Reinhold Company Inc., New York, NY. ISBN 0-442-24724-9, URL: <http://www.stereoscopic.org>.

SHAKHNOVICH, A. R. 1977. The Brain and Regulation of Eye Movement. Plenum Press, New York, NY.

SMITH, G. AND ATCHISON, D. A. 1997. The Eye and Visual Optical Instruments. Cambridge Univ. Press, Cambridge, UK.

TOBII TECHNOLOGY AB. 2003. Tobii ET-17 Eye-tracker Product Description. (Version 1.1).
