

*Corresponding author. Tel.: +1-310-317-5151; fax: +1-310-317-5695.

E-mail address: azuma@hrl.com (R. Azuma)

Computers & Graphics 23 (1999) 787-793

Augmented Reality

Tracking in unprepared environments for augmented reality systems

Ronald Azuma^a,*, Jong Weon Lee^b, Bolan Jiang^b, Jun Park^b, Suya You^b, Ulrich Neumann^b

^a HRL Laboratories, 3011 Malibu Canyon Road, MS RL96, Malibu, CA 90265-4799, USA
^b Integrated Media Systems Center, University of Southern California, Los Angeles, CA 90089-0781, USA

Abstract

Many augmented reality applications require accurate tracking. Existing tracking techniques require prepared environments to ensure accurate results. This paper motivates the need to pursue augmented reality tracking techniques that work in unprepared environments, where users are not allowed to modify the real environment, such as in outdoor applications. Accurate tracking in such situations is difficult, requiring hybrid approaches. This paper summarizes two 3DOF results: a real-time system with a compass-inertial hybrid, and a non-real-time system fusing optical and inertial inputs. We then describe the preliminary results of 5- and 6-DOF tracking methods run in simulation. Future work and limitations are described. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Augmented reality; Registration; Outdoor tracking; See-through head-mounted displays; Sensor fusion; Video tracking

1. Motivation

An augmented reality (AR) display system superimposes or composites virtual 3-D objects upon the user's view of the real world, in real time. Ideally, it appears to the user as if the virtual 3-D objects actually exist in the real environment [1]. One of the key requirements for accomplishing this illusion is a tracking system that accurately measures the position and orientation of the observer in space. Without accurate tracking, the virtual objects will not be drawn in the correct location at the correct time, ruining the illusion that they coexist with the real objects. The problem of accurately aligning real and virtual objects is called the registration problem.

The best existing AR systems are able to achieve pixel-accurate registration in real time; see [2] for an example. However, such systems only work indoors, in prepared environments. These are environments where the system designer has complete control over what exists in the environment and can modify it as needed. For traditional AR applications, such as medical visualization and displaying manufacturing instructions, this assumption is reasonable. But many other potential AR applications would become feasible if accurate tracking were possible in unprepared environments. Potential users include hikers navigating in the woods, soldiers out in the field, and drivers operating vehicles. Users operating outdoors, away from carefully prepared rooms, could use AR displays for improved situational awareness, navigation, targeting, and information selection and retrieval. AR interfaces might be a more natural means of controlling and interacting with wearable computers than the current WIMP standard.

Besides opening new application areas, tracking in unprepared environments is an important research direction because it will reduce the need to prepare environments, making AR systems easier to set up and operate. Compared to virtual environment systems, AR systems are rarely found outside research laboratories. Preparing an environment for an AR system is hard work, requiring a significant amount of measurement and calibration.



If AR systems are to become more commonplace, they must become easier for end users to set up and operate. Today's systems require expert users to set up and calibrate the environment and the system. If accurate tracking could be achieved without the need to carefully prepare the environment in advance, that would be a major step in reducing the difficulty of operating an AR system. The ultimate goal is for the AR system to support accurate tracking in arbitrary environments and conditions: indoors, outdoors, anywhere the user wants to go. We are far from that goal, but by moving AR systems into unprepared, outdoor environments, we take an important step in that direction.

Tracking in unprepared environments is difficult for three reasons. First, if the user operates outdoors and traverses long distances, the resources available to the system may be limited due to mobility constraints. In a prepared environment where the user stays within one room, bandwidth and CPU resources are limited more by the budget than by any physical factors. But if the user moves around outdoors, especially if he wears all the equipment himself, then size, weight, and power constraints all become concerns. Second, the range of operating conditions is greater than in prepared environments. Lighting conditions, weather, and temperature are all factors to consider in unprepared environments. For example, the display may not be bright enough to see on a sunny day. Visual landmarks that a video tracking system relies upon may vary in appearance under different lighting conditions or may not be visible at all at night. Third and most importantly, the system designer cannot control the environment. It may not be possible to modify the environment. For example, many AR tracking systems rely upon placing special fiducial markers at known locations in the environment; an example is [3]. However, this approach is not practical in most outdoor applications. We cannot assume we can cover the landscape with billboard-sized colored markers. Also, we may not be able to accurately measure all objects of interest in the environment beforehand. The inability to control the environment also restricts the choice of tracking technologies, since many trackers require placing active emitters in the environment. These three differences between prepared and unprepared environments illustrate the challenge of accurate tracking in unprepared environments.

If we survey tracking technologies for how well they operate in unprepared environments, we find that no single technology will offer the required performance in the near future [4]. The global positioning system (GPS) can measure the position of any point on the Earth from which enough satellites can be seen. Ordinary GPS measurements have typical errors around 30 m; differential GPS systems can reduce the typical error to around 3 m. Carrier-phase systems can achieve errors measured in centimeters under certain conditions.

However, GPS does not directly measure orientation and does not work when the user cannot see enough of the sky (indoors, near buildings, in canyons, etc.). In military circumstances, GPS is relatively easy to jam. Inertial and dead-reckoning sensors are self-contained, sourceless technologies. Their main problem is drift. Cost and size restrictions also limit the performance of units suitable for man-portable applications, although MEMS technologies may change that in the future. Because recovering position requires doubly integrating acceleration, getting accurate position estimates from accelerometers is extremely difficult (due to the confounding effects of gravity). Many commonly used trackers (optical, magnetic, and ultrasonic) rely on active sources, which may not be appropriate for unprepared environments. Passive optical sensing (video tracking) is an applicable approach and can generate a full 6-D solution, but it needs a clear line of sight, and computer vision algorithms can be brittle and computationally intensive. Electromagnetic compasses combined with tilt sensors are trackers commonly used with inexpensive head-mounted displays (HMDs). They do not measure position and are vulnerable to distortions in the Earth's magnetic field, requiring extensive calibration efforts. Even in magnetically clean environments, we have measured 2-4° peak-to-peak distortions in a high-quality electronic compass. From this analysis, we see that no single tracking technology by itself appears to offer a complete solution.
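To make the double-integration problem concrete (our illustrative numbers, not a measurement from the paper): a constant accelerometer bias b produces a position error e(t) = (1/2) b t^2 after double integration, so even a tiny bias of 0.01 m/s^2 grows to roughly 18 m of drift after only 60 s. This quadratic error growth is why accelerometers alone cannot supply position in the field.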

Therefore, our approach has been to develop hybrid tracking technologies that combine multiple sensors in ways that compensate for the weaknesses of each individual component. In particular, we have been pursuing hybrids that combine optical and inertial sensors. The complementary nature of these two sensors makes them good candidates for research [5]. Our colleagues Gary Bishop, Leandra Vicci and Greg Welch at the University of North Carolina at Chapel Hill are also developing hybrid sensors of this type for this problem area.

2. Previous work

Virtually all existing AR systems work indoors, in prepared environments. Few AR systems operate in unprepared environments where the user cannot modify or control the real world. The first known system of this type is Columbia's Touring Machine [6]. It uses commercially available sourceless orientation sensors, typically a compass and a tilt sensor, combined with a differential GPS. The most recent version uses an orientation sensor from InterSense. Developing accurate trackers has not been the focus of the Touring Machine project, and the system can exhibit large registration errors when the user moves quickly. Concurrent with our research, a group at the Rockwell Science Center has been developing a method for registration in outdoor environments based on detecting the silhouette of the horizon line [7].



Fig. 1. Base system dataflow diagram.

Fig. 2. Sequence of heading data comparing the compass input vs. the filter output.

By comparing the silhouette against a model of the local geography, the system can determine the user's current location and orientation.

Our research in developing new trackers for unprepared environments is directed by the philosophy that hybrid approaches are the only ones that offer a reasonable chance of success. Some previous AR tracking systems have used hybrid approaches: [8] added rate gyroscopes to an optical tracker to aid motion prediction, and [9] uses a set of sensors (rate gyroscopes plus a compass and tilt sensor) that is similar to our initial base system. The differences in our results are the different mathematics used to combine the sensor inputs, our distortion compensation methods, and our actual demonstration of accurate tracking in an AR system.

3. Contribution

The rest of this paper summarizes results from two systems that we have built for tracking in unprepared environments and describes recent new results that tackle the position problem. The first two systems focus on the orientation component of tracking, so we call them 3 degree-of-freedom (3DOF) results. The initial base system combines a compass and tilt sensor with three rate gyroscopes to stabilize the apparent motion of the virtual objects. This system runs in real time and has been demonstrated in the field. While the registration is not perfect, typical errors are reduced significantly from using the compass by itself. Next, the inertial-optical hybrid adds input from a video tracker that detects natural 2-D features in the video sequence to reduce errors to a few pixels. We tested this tracker on real data, but because of its computational requirements it does not yet run in real time. Finally, the 5- and 6-DOF simulations are our first steps toward actively tracking position as well as orientation (beyond solely relying upon GPS). The results demonstrate video tracking algorithms that detect features at initially unknown locations and incorporate them into the position estimate as the user moves around a wide area in an unprepared environment.

4. 3DOF base system

For our "rst attempt at a hybrid tracker in an un-prepared environment, we focused on a subset of theproblem. We wanted a system that addresses problemswithin that subset and provides a base to build upon.First, we assume that all objects are distant (severalhundred meters away). This allows us to rely solely ondi!erential GPS for position tracking and focus ourresearch e!orts on the orientation problem. The largesterrors come from distortions in the orientation sensor

and dynamic errors caused by system latency. Therefore,the main contributions of the base system are in calibra-ting the electronic compass and other sensors andstabilizing the displayed output with respect to usermotion. To do this, we built a hybrid tracker that com-bines a compass and tilt sensor with three rate gyro-scopes (Fig. 1).

Effectively fusing the compass and gyroscope inputs required careful calibration and the development of sensor fusion algorithms. We measured significant distortions in the electronic compass, using a custom-built nonmagnetic turntable. Furthermore, there is a 90 ms delay between the measurements of the two sensors, due to inherent sensor properties and communication latencies. The compass is read at 16 Hz while the gyroscopes are sampled at 1 kHz. The filter fuses the two at the gyroscope update rate. It is not a true Kalman filter, which makes the tuning easier and reduces the computational load. However, it provides the desired properties, as seen in Fig. 2. First, the filter output leads the raw compass measurement, showing that the gyros compensate for the slow update rate and long latency of the compass. Second, the filter output is much smoother than the compass, another benefit of the gyroscope inputs. Third, the filter output settles to the compass when the motion stops. Since the gyros accumulate drift, the filter uses the absolute heading provided by the compass to compensate for the inertial drift. A simple motion predictor compensates for delays in the rendering and display subsystems.
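The paper does not spell out the filter equations, but the behavior described (fast gyro integration with a slow pull toward the absolute compass heading) is characteristic of a complementary filter. The following sketch is our illustration of that idea only, not the authors' actual filter; the gain k and the latency handling are assumptions.

    import math

    def fuse_heading(heading, gyro_rate, dt, compass=None, k=0.05):
        # One complementary-filter-style heading update (illustrative).
        #   heading:   current fused heading estimate (radians)
        #   gyro_rate: angular rate about the vertical axis (rad/s)
        #   dt:        gyro sample period (s), e.g. 0.001 for 1 kHz
        #   compass:   latest absolute compass heading (radians), or None
        #              on steps where no new 16 Hz compass sample arrived
        #   k:         small blend gain: trust the gyro short-term while
        #              the compass slowly removes accumulated drift
        # Dead-reckon with the gyro at its full update rate.
        heading += gyro_rate * dt
        if compass is not None:
            # Pull the estimate toward the compass heading, wrapping the
            # innovation into (-pi, pi] to avoid 360-degree jumps.
            err = math.atan2(math.sin(compass - heading),
                             math.cos(compass - heading))
            heading += k * err
        return heading

A real implementation would also have to account for the 90 ms compass latency mentioned above, for example by differencing each compass reading against a buffered estimate from 90 ms in the past.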



Fig. 3. Virtual labels over outdoor landmarks at Pepperdine University, as seen from HRL Laboratories.

Fig. 4. Example of 2-D video features automatically selected and tracked.

Fig. 5. Graph comparing registration errors from the base system (gray) vs. the inertial-optical hybrid running the second method (red).


The base system operates in real time, with a 60 Hz update rate. It runs on a PC under Windows NT 4, using a combination of OpenGL and DirectDraw 3 for rendering. We have run the system in four different geographical locations. Fig. 3 shows a sample image of some virtual labels identifying landmarks at Pepperdine University, as seen from HRL Laboratories.

The base system is the first motion-stabilized outdoor AR system, providing the smallest registration errors of current outdoor real-time systems. Compared to using the compass by itself, the base system is an enormous improvement in both registration accuracy and smoothness. Without the benefits provided by the hybrid tracker, the display is virtually unreadable when the user moves around. However, the registration is far from perfect. Peak errors are typically around 2°, with average errors under 1°. The compass distortion can change with time, requiring system recalibration. For more details about this system, please read [10].

5. 3DOF inertial-optical hybrid

The next system improves the registration even further by adding video tracking, forming our first inertial-optical hybrid. Fusing these two types of sensors offers significant benefits. Integrating the gyroscopes yields a reasonable estimate of the orientation, providing a good initial guess that reduces the search space of the vision-processing algorithms. Furthermore, during fast motions the visual tracking may fail due to blur and large changes in the images, so the system relies upon the gyroscopes then. However, when the user stops moving, the video tracking locks on to recognizable features and corrects for the accumulated drift in the inertial tracker.

The inertial-optical hybrid performs some calibration to map the relative orientations between the compass-inertial tracker and the video camera's coordinate system. 2-D differences between adjacent images are mapped into orientation differences and used to provide corrections. The 2-D vision tracking does not rely on fiducials at known locations. Instead, it searches the scene for a set of 10-40 features that it can robustly track. These features may be points or other 2-D features (Fig. 4). The selection is automatic. Thus, the 2-D vision tracking is suitable for use in unprepared environments.
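The paper does not name the detector behind this automatic selection. Purely as a modern stand-in for illustration (OpenCV's Shi-Tomasi corner detector, not the authors' implementation), selecting a budget of well-spread, trackable 2-D features might look like:

    import cv2

    def select_features(gray_frame, max_features=40):
        # Automatically pick up to max_features well-conditioned corners,
        # with a minimum spacing so the features spread across the scene.
        corners = cv2.goodFeaturesToTrack(
            gray_frame,
            maxCorners=max_features,
            qualityLevel=0.01,  # reject weak, ambiguous corners
            minDistance=15)     # enforce spatial spread, in pixels
        return [] if corners is None else corners.reshape(-1, 2)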

There are two methods for fusing the inputs from the vision and inertial trackers. The first method uses the integrated gyro orientation as the vision estimate. This means that the inertial estimate will drift with time, but it has the advantage that any errors in the vision tracking will not propagate to corrupt the inertial estimate, so this method may be more robust. In the second method, the incremental gyroscope motion becomes the visual estimate. This corrects the gyroscope drift at every video frame, but now visual errors can affect the orientation estimate, so the visual feature tracking must be robust. In practice, the second method yields much smaller errors than the first, so that is what we use.
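In outline, the two strategies differ only in whether the vision correction feeds back into the running estimate. The sketch below is our paraphrase, reduced to a single heading angle for clarity; vision_refine stands in for the 2-D feature-based correction and is a hypothetical callable, not an interface from the paper.

    def fuse_method_1(theta_inertial, gyro_increment, vision_refine):
        # Method 1: dead-reckon the gyros on their own; vision refines
        # only what is displayed, so vision errors never corrupt the
        # inertial path, but inertial drift grows unchecked.
        theta_inertial += gyro_increment
        theta_display = vision_refine(theta_inertial)
        return theta_inertial, theta_display

    def fuse_method_2(theta_fused, gyro_increment, vision_refine):
        # Method 2: only the incremental gyro rotation since the last
        # frame predicts the pose, and the vision correction is folded
        # back into the running estimate, cancelling drift every frame
        # at the cost of sensitivity to vision errors.
        prediction = theta_fused + gyro_increment
        return vision_refine(prediction)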

Overall, the inertial-optical hybrid greatly reduces the errors seen in the base system (Fig. 5).



Fig. 6. Virtual labels annotated over landmarks for video sequences, showing vision-corrected (red labels) and inertial-only (blue labels) tracking results.

Fig. 7. A comparison of 5DOF errors as a function of image-feature tracking pixel noise.

During fast motions, the errors in the base system can become significant. Fig. 6 shows an example of the correction provided by incorporating the video input. The inertial-video output stays within a few pixels of the true location. In actual operation the labels appear to stick almost perfectly.

Because the computer vision processing is computationally intensive, it does not run in real time. However, we emphasize that our result is not a simulation. We recorded several motion sequences of real video, compass, and inertial data. The method then ran, offline, on this real unprocessed data to achieve the results. We are investigating ways of incorporating this video-based correction into the real-time system. For more details about this system, see [11].

6. 5DOF simulation

The two previous results focused on orientation tracking, relying upon GPS for acquiring position. However, there are many situations where GPS will not be available, and when objects of interest get close to the user, errors in GPS may appear as significant registration errors. Therefore, we need to explore methods of doing position tracking in unprepared environments. This section describes an approach to tracking the relative motion direction, in addition to rotation, based on observed 2-D motions.

The inertial-optical hybrid AR system described above uses a traditional planar-projection perspective camera. This optical system has several weaknesses when used to measure linear camera motion (translation). First, there is a well-known ambiguity in discriminating between the image motions caused by small pure translations and small pure rotations [12]. Second, a planar-projection camera is very sensitive to noise when the direction of translation lies outside of the field of view. Panoramic or panospheric projections reduce or eliminate these problems. Their large field of view makes the uncertainty of motion estimation relatively independent of the direction of motion.

We compared planar and panoramic projections using the 8-point algorithm, which uses the essential matrix and co-planarity conditions among the image points and the observer's position [13,14]. Fig. 7 shows a result from our simulations. We plot the camera rotation and translation-direction errors against pixel noise to show that for moderate tracking noise (<0.4 pixel), the panoramic projection gives more accurate results. We show one of several general motion paths tested. In this case, yaw = 8°, pitch = 6°, and the translation (up) is 2 cm. Below 0.4-pixel noise levels, the panoramic projection shows superior accuracy over a planar perspective projection, in terms of both lower error and lower standard deviation of the error.
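For reference, the linear core of the 8-point algorithm used in these simulations is compact. The sketch below is our illustration, assuming calibrated cameras and normalized image coordinates (which is what makes the recovered matrix an essential rather than a fundamental matrix):

    import numpy as np

    def essential_from_correspondences(x1, x2):
        # Linear 8-point estimate of the essential matrix E satisfying
        # x2^T E x1 = 0, from N >= 8 correspondences given as (u, v)
        # pairs in normalized (calibrated) image coordinates.
        A = np.array([
            [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
            for (u1, v1), (u2, v2) in zip(x1, x2)
        ])
        # Least-squares null vector of A: the right singular vector with
        # the smallest singular value.
        _, _, Vt = np.linalg.svd(A)
        E = Vt[-1].reshape(3, 3)
        # Project onto the essential-matrix manifold: two equal singular
        # values and one zero.
        U, s, Vt = np.linalg.svd(E)
        sigma = (s[0] + s[1]) / 2.0
        return U @ np.diag([sigma, sigma, 0.0]) @ Vt

The camera rotation and the translation direction can then be factored out of E by a further decomposition; translation is recovered only up to scale, which is exactly why this is a 5DOF rather than a 6DOF result.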



Fig. 8. Propagated camera pose errors in the 6DOF autocalibration experiment.

7. 6DOF simulation

The range of most vision-based tracking systems is limited to areas where a minimum number of calibrated features (landmarks or fiducials) are in view. Even partial occlusion of these features can cause failures or errors in tracking. More robust and dynamically extendible tracking can be achieved by dynamically calibrating the 3-D positions of uncalibrated fiducials or natural features [15,16]; however, the effectiveness of this approach depends on the behavior of the pose calculations.

Experiments show that tracking errors propagate rapidly for extendible tracking when the pose calculation is sensitive to noise or otherwise unstable. Fig. 8 shows how errors in camera position increase as dynamic pose and calibration errors propagate to new scene features. In this simulated experiment, the system starts tracking with six calibrated features. The camera is then panned and rotated while the system estimates the positions of 94 initially uncalibrated features placed in a 100″ × 30″ × 20″ volume.

The green line in Fig. 8 shows the errors in the camera positions computed from the estimated features, using the 3-point pose estimation method described in [3]. After about 500 frames (~16 s), the 5-inch accumulated error exceeds 5% of the largest operating-volume dimension. This performance may be adequate to compensate for several frames of fiducial occlusion, but it does not allow significant camera motion or extension of the tracking area. More accurate pose estimates are needed to reduce the error growth rate.

To address the pose problem we developed two new pose computation methods that significantly improve the performance of dynamic calibration and therefore increase the possibility of achieving 6DOF tracking in unprepared environments. One method is based on robust averages of 3-point solutions. The other is based on an iterative extended Kalman filter (IEKF) and a single-constraint-at-a-time (SCAAT) filter [17].

Both methods are designed specifically for the low frame rates and over-constrained measurements per frame that characterize video vision systems. The pink and blue lines in Fig. 8 show the results obtained by these two methods. These initial tests show significant improvements, leading us to believe that autocalibration methods will be an important approach to tracking in unprepared environments. For more details on the two pose estimation methods see [18].
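The details of both estimators appear in [18]; the paper itself gives only the one-line descriptions above. Purely as a sketch of the "robust averages of 3-point solutions" idea, with the rotation averaging omitted for brevity, one might aggregate the camera positions recovered from every feature triple with a median, which rejects outlier solutions far better than a mean. Here solve_p3p is a hypothetical perspective-3-point solver, not an interface from the paper:

    import numpy as np
    from itertools import combinations

    def robust_camera_position(points_3d, points_2d, solve_p3p):
        # points_3d: (N, 3) array of estimated feature positions
        # points_2d: (N, 2) array of their current image observations
        # solve_p3p: callable returning candidate camera positions for a
        #            triple of 3-D/2-D correspondences (P3P can yield up
        #            to four solutions, hence the inner loop)
        candidates = []
        for i, j, k in combinations(range(len(points_3d)), 3):
            idx = [i, j, k]
            for cam_pos in solve_p3p(points_3d[idx], points_2d[idx]):
                candidates.append(cam_pos)
        # The component-wise median across all triples suppresses the
        # outlier solutions produced by noisy or miscalibrated features.
        return np.median(np.array(candidates), axis=0)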

8. Future work

Much remains to be done to continue developing trackers that work accurately in arbitrary, unprepared environments. The results we described in this paper are a first step but have significant limitations. For example, the visual tracking algorithms assume a static scene, and we must add compass calibration routines that compensate for the changing magnetic field distortion as the user walks around. We currently assume that viewed objects are distant to minimize the effect of position errors; as we progress down the 6DOF route we will need to include real objects at a variety of ranges.

Future AR systems that work in unprepared environments must also address the size, weight, power, and other issues that are particular concerns for systems that operate outdoors.

Acknowledgements

Most of this paper is based on an invited presentation given by Ron Azuma at the 5th Eurographics Workshop on Virtual Environments (with a special focus on Augmented Reality) in June 1999. This work was mostly funded by DARPA ETO Warfighter Visualization, contract N00019-97-C-2013. We thank Axel Hildebrand for his invitation to submit this paper.

References

[1] Azuma R. A survey of augmented reality. Presence: Teleoperators and Virtual Environments 1997;6(4):355-85.

[2] State A, Hirota G, Chen D, Garrett B, Livingston M. Superior augmented reality registration by integrating landmark tracking and magnetic tracking. Proceedings of SIGGRAPH '96, August 1996. p. 429-38.

[3] Neumann U, Cho Y. A self-tracking augmented reality system. Proceedings of ACM Virtual Reality Software and Technology, July 1996. p. 109-15.

[4] Azuma R. The challenge of making augmented reality work outdoors. In: Ohta Y, Tamura H, editors. Mixed reality: merging real and virtual worlds. Berlin: Springer, 1999. p. 379-90.



[5] Welch G. Hybrid self-tracker: an inertial/optical hybrid three-dimensional tracking system. UNC Chapel Hill Department of Computer Science Technical Report TR95-048, 1995.

[6] Feiner S, MacIntyre B, Höllerer T. A touring machine: prototyping 3D mobile augmented reality systems for exploring the urban environment. Proceedings of the First International Symposium on Wearable Computers, October 1997. p. 74-81.

[7] Behringer R. Registration for outdoor augmented reality applications using computer vision techniques and hybrid sensors. Proceedings of IEEE Virtual Reality '99, March 1999. p. 244-51.

[8] Azuma R, Bishop G. Improving static and dynamic registration in an optical see-through HMD. Proceedings of SIGGRAPH '94, July 1994. p. 197-204.

[9] Foxlin E, Harrington M, Pfeiffer G. Constellation: a wide-range wireless motion-tracking system for augmented reality and virtual set applications. Proceedings of SIGGRAPH '98, July 1998. p. 371-8.

[10] Azuma R, Hoff B, Neely H III, Sarfaty R. A motion-stabilized outdoor augmented reality system. Proceedings of IEEE Virtual Reality '99, March 1999. p. 252-9.

[11] You S, Neumann U, Azuma R. Hybrid inertial and vision tracking for augmented reality registration. Proceedings of IEEE Virtual Reality '99, March 1999. p. 260-7.

[12] Daniilidis K, Nagel H-H. The coupling of rotation and translation in motion estimation of planar surfaces. IEEE Conference on Computer Vision and Pattern Recognition, June 1993. p. 188-93.

[13] Hartley RI. In defence of the 8-point algorithm. Fifth International Conference on Computer Vision, 1995. p. 1064-70.

[14] Huang TS, Netravali AN. Motion and structure from feature correspondences: a review. Proceedings of the IEEE 1994;82(2):251-68.

[15] Neumann U, Park J. Extendible object-centric tracking for augmented reality. Proceedings of IEEE Virtual Reality Annual International Symposium 1998, March 1998. p. 148-55.

[16] Park J, Neumann U. Natural feature tracking for extendible robust augmented realities. International Workshop on Augmented Reality (IWAR) '98, November 1998.

[17] Welch G, Bishop G. SCAAT: incremental tracking with incomplete information. Proceedings of SIGGRAPH '97, August 1997. p. 333-44.

[18] Park J, Jiang B, Neumann U. Vision-based pose computation: robust and accurate augmented reality tracking. Proceedings of the International Workshop on Augmented Reality (IWAR) '99, October 1999, to appear.
