
Mixed reality applications in urban environments

J Bulman, B Crabtree, A Gower, A Oldroyd, M Lawson and J Sutton

Mixed reality applications are concerned with blurring the divide between the real and virtual, where the user's perception of the real world is contextually enhanced with additional information. This paper provides an overview of three related projects that explore how virtual reality and mixed reality technologies can be used effectively across consumer, industrial and military domains. A common theme is dealing with the amount of data that can potentially be displayed in the applications.

1. Introduction

Pervasive computing covers the management of many distributed sensor and information networks. If these need to interact with the user at the point of information collection, then there will be a need for some kind of interface between the information and the user of that information. We have been exploring the use of virtual and augmented reality techniques in a number of applications to enhance the real world with additional, contextually relevant information. In this respect we have been looking more at the issues concerned with representing information in this mixed reality world, rather than the particular source of the content. In this way we are neutral with respect to pervasive computing, where the information producers are distributed in the world and therefore local to their point of use. In our applications we gather all the information centrally to be processed. This allows the information to be analysed and relevant material shown at the point of use. In most cases this will be locally relevant, but there may be more general information presented. Figure 1 shows the kind of information filtering that might occur to reduce the amount of information that is presented in a mixed reality environment.

The effect is a funnel — all the information is ‘poured’ in at the top, and only the relevant information is left at the end. We could take as an example giving information about London buses — pre-operational filtering would give us all information about every London bus. Operation-specific filtering might restrict this to buses that are routed to the particular destination.

Fig 1 Information filtering. [Figure: a funnel of filtering stages through which the number of GIS-referenced media objects decreases: pre-operation (all data available, ~100 000 000 media objects), operation-specific data selection (~100 000), spatialised information filtering (~1000), context filtering (~100) and visibility filtering, leaving the information presented to the user (~10). The figure also distinguishes mandatory from optional data.]


Spatial filtering would then limit the buses to those in the vicinity. Context filtering would further limit the selection to those buses going directly to the destination. So from starting off with information about all the London buses, we would only display information about the nearest bus that is heading in the correct direction. Typically in this kind of environment it is important to reduce the amount of information displayed to that needed just for the task in hand, as the information is likely to be displayed on a low-resolution device, or possibly by audio or touch, depending on the application.
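The funnel amounts to a chain of predicates applied in order of decreasing generality. The following is a minimal sketch of the bus example, with hypothetical names and data throughout (the paper gives no implementation):

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Bus:
    """Hypothetical media object for the London-bus example."""
    route: str    # destination of the route
    x: float      # position east of the user, metres
    y: float      # position north of the user, metres
    direct: bool  # does it run directly to the destination?

def funnel(buses, destination, user_xy, radius_m):
    """Apply the filtering stages of Fig 1 in order."""
    # operation-specific: only buses routed to our destination
    pool = [b for b in buses if b.route == destination]
    # spatial: only buses in the vicinity of the user
    pool = [b for b in pool
            if hypot(b.x - user_xy[0], b.y - user_xy[1]) <= radius_m]
    # context: only buses going directly to the destination
    pool = [b for b in pool if b.direct]
    # present only the nearest remaining bus, if any
    pool.sort(key=lambda b: hypot(b.x - user_xy[0], b.y - user_xy[1]))
    return pool[:1]

buses = [Bus("Victoria", 120, 40, True),
         Bus("Victoria", 900, 20, False),
         Bus("Camden", 50, 10, True)]
print(funnel(buses, "Victoria", (0, 0), 500))  # the one nearby, direct Victoria bus
```

Each stage only ever narrows the pool, which is what keeps the final display tractable on a small device.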

2. 3-D virtual reality and mixed reality scene rendering

Unlike virtual reality (VR), which provides the user with a completely computer-generated environment, mixed reality systems (sometimes referred to as augmented reality) enhance a user’s perception of the real world by seamlessly merging computer-generated text, 2-D images, 3-D models, and audio with the user’s real-world view. The motivation behind developing a mixed reality system is that existing visual and spatial skills can be utilised, thereby enhancing interaction with, and comprehension of, spatialised information.

The rest of this paper describes three very different applications that use virtual and mixed reality views of information, and discusses some of the issues involved with presenting that material. The first, and most mature, application¹ is a mixed reality location-based game involving co-operative game play in urban environments. Typically there is a mix between mobile players with hand-held devices and desk- or home-based players who have access to a much richer environment. The second application covers an industrial workforce management application where we are exploring providing contextually relevant material to network repair engineers who need to visit customer premises for their job. We provide information about the job, routing of cabling and nearby faults to assist them in their work. The final application is based on a current project for the defence sector. The project is focused on the provision of enhanced situational awareness to ground forces and command-and-control personnel using multiple fused information and data sources. A key issue within this work is the use of a 3-D geographical information system (GIS) that enables a common spatialised reference model to be used by mobile users and command-and-control operatives for the generation and provision of information.

¹ This application underwent trials in BT during December 2003.

All of these applications have been developed using an enhanced version of BT’s real-time rendering architecture, TARA (Total Abstract Rendering Architecture), to underpin all the 3-D visualisations in this work. TARA requires the following pieces of information to generate mixed reality visualisations:

• a live camera feed of the real-world scene,

• the position, orientation and field of view of the real-world camera,

• a 3-D model of the real-world scene,

• a 3-D model of the virtual-world scene.

Given these pieces of information, the mixed reality visualisation is created using the following steps (a software sketch of the compositing logic follows the list):

• an image is captured from the live camera feed and used to fill the colour buffer,

• the 3-D model of the real-world scene is rendered into the depth buffer using the position, orientation and field of view of the real-world camera — this provides occlusion information to items subsequently rendered but has no visual impact on the final image,

• the 3-D model of the virtual-world scene is rendered into the frame buffer using the position, orientation and field of view of the real-world camera.
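The depth-only pass in the second step is what lets real buildings hide virtual objects without themselves being drawn. As a minimal sketch of the same compositing logic, here is a toy software renderer using numpy arrays in place of GPU buffers (the names and fragment data are hypothetical, not part of TARA):

```python
import numpy as np

H, W = 4, 6
# step 1: an image from the live camera feed fills the colour buffer
camera_image = np.random.randint(0, 255, (H, W, 3), dtype=np.uint8)
colour = camera_image.copy()
depth = np.full((H, W), np.inf)  # cleared depth buffer

def render_depth_only(depth, fragments):
    """Step 2: the real-world model writes depth but never colour."""
    for (r, c), z in fragments:
        depth[r, c] = min(depth[r, c], z)

def render_with_depth_test(colour, depth, fragments):
    """Step 3: the virtual model is drawn only where it passes the depth test."""
    for (r, c), z, rgb in fragments:
        if z < depth[r, c]:  # not occluded by the real scene
            depth[r, c] = z
            colour[r, c] = rgb

# hypothetical fragments: a real wall at depth 5 covering column 2,
# and a virtual red cube at depth 8 covering columns 1-3
wall = [((r, 2), 5.0) for r in range(H)]
cube = [((r, c), 8.0, (255, 0, 0)) for r in range(H) for c in (1, 2, 3)]

render_depth_only(depth, wall)
render_with_depth_test(colour, depth, cube)
# column 2 keeps the camera pixels: the cube is behind the invisible wall
print((colour[:, 2] == camera_image[:, 2]).all())  # True
```

On real hardware the same effect is typically achieved by drawing the real-world model with colour writes disabled, so that only the depth buffer is updated.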

3. Pervasive gaming — gaming in urban environments

In pervasive gaming², the majority of current games enable players to interact with each other and game content using hand-held and fixed devices, in different locations and in unusual social contexts.

² Cross-platform gaming on fixed and mobile networks with multiple real-time players. Games may be embedded into the physical environment and contain added TV, movie and music hooks.

The games differ greatly in terms of format and use of the physical environment. It’s Alive!’s ‘Bot Fighters’ [1] is a location-based ‘shoot-em-up’ game using cellular location to identify players in the same zone, and text messaging as the primary interface for fighting. Newt Games’s ‘Mogi’ [2] provides map interfaces available on both mobile telephones and fixed PCs to enable players to find, collect and trade virtual creatures at specific locations. These two games are designed to enable casual game-play over a long period of time.

Other games rely on more co-operative play and draw the local environment into the game as a mental or physical challenge. In Blast Theory’s ‘Can You See Me Now’ [3], players on the streets of a city hunt virtual players who navigate a 3-D replica of the game space. The street players are equipped with a map interface on a PDA which displays the virtual location of the on-line players.



GPS tracks the location of the street players and sends that location information to the virtual player’s PC interface. This kind of game format requires players to have a mutual understanding of each other’s situation, which means sharing information about location and context, and players being able to communicate with each other at appropriate times. At the same time, to make the game more challenging, information needs to be managed and sometimes deliberately contain uncertainty to keep players engaged [4].

Systems also need to be able to adapt to unpredictable situations, which can include network failure, unusual player behaviour or missed cues in the game environment. In Blast Theory’s ‘Uncle Roy All Around You’, developed in collaboration with Nottingham University’s Mixed Reality Lab, BT and UCL, the pervasive game is mixed with theatre and involves actors interacting with players at specific staged locations (see Fig 2). These critical interactions depend on players and actors being in the right place, and on schedule [5].

In order to support a range of game formats in pervasive gaming we have developed an infrastructure which can support multiple game modules over fixed and mobile networks, including broadband, GPRS and WiFi. To add more functionality to the server, a game management system has been added, which enables the management of complex media streams and real-time communications not only between fixed and mobile players, but also between game managers and players. These two systems have undergone trials in a team-building event, which we call ‘Encounter 1.0’.

3.1 Encounter 1.0

Encounter 1.0 was held in November 2003 in and around BT Centre, St Paul’s, London (see Fig 3). Within the event four teams of three were invited to compete in an hour-long challenge. The challenge was for each team to collect intelligence about the area, identify and locate key targets, and recreate photographs received from ‘control’. Finally they were required to return to base with the correct data from the correct location. The winning team was the one with the most accurate information and the most intelligence collected.

To enable players to have a specific role targeted to each device, location and network connection, the infrastructure has been designed to marshal information between four classes of client — spectator, administration, street and virtual.

The spectator client is limited in functionality. Conceived as a secondary overview interface, this can be used to track the progress of each team in the game by anyone with access to the viewer. Typically displaying a rich 3-D environment with avatars, the scene is updated with the current location of all the players in the streets. Cameras can be attached to the avatars and to key areas, and spectators can navigate through the space themselves. Spectators are able to replay games as logged by the server. In the future it is planned for spectators to take on a role in the game, adding obstacles or bonus challenges.

The virtual client is a media-rich broadband client which is used by the team leader to track player location, uncover location-specific media on a map, and receive streaming video, audio and images. The client enables asynchronous communication with the street players using text messaging, and can also sustain real-time audio connections. This player commands the team and gives them the background to achieve the goals (see Fig 4). Messages are also received from ‘control’ and from non-player characters (NPCs).

Fig 2 Uncle Roy All Around You — a street player, in an office, and meeting an actor.

Fig 3 Encounter 1.0 — the game area mapped into a 3-D environment.

Fig 4 Encounter 1.0 — team leaders guiding their teams through the challenges.

The street client is a very thin client which to date runs on a PDA, with GPRS/WiFi and GPS. Varying amounts of information are available on the PDA, including maps, video, and images. The more limited this information, the more communication is required between the team-members in the game to achieve the goals. Communication can happen in two ways — text input (which is difficult on the move), or audio (using upload of segments via GPRS or streaming via WiFi) (Fig 5).

Fig 5 Encounter 1.0 — teams on the streets competing to get to target locations.

The most complex group of clients belongs to the administration class. These consist of a media mark-up interface, a player administration interface and an agent interface. The mark-up interface enables a game manager to create location targets on a map and associate media clips with them before and during the game. As street players reach locations the associated media clips are triggered. The player administration interface is used by the managers during the game to communicate with all players, to change target locations on the fly, and to change the game status. The agent interface clones street-player functionality, creating a series of NPCs in the game.
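The paper does not describe the trigger mechanism itself, but a minimal sketch of how location targets might release their associated media clips could look like this (all names, coordinates and radii hypothetical):

```python
from math import hypot

# hypothetical location targets authored in the mark-up interface:
# (name, x, y, trigger radius in metres, media clip to release)
targets = [
    ("st_pauls_steps", 10.0, 20.0, 15.0, "clip_017.mp4"),
    ("courtyard",      80.0,  5.0, 10.0, "clip_018.mp4"),
]
released = set()

def on_position_update(player_xy):
    """Release each target's media clip the first time a street player enters it."""
    clips = []
    for name, tx, ty, radius, clip in targets:
        if name in released:
            continue
        if hypot(player_xy[0] - tx, player_xy[1] - ty) <= radius:
            released.add(name)
            clips.append(clip)
    return clips

print(on_position_update((12.0, 18.0)))  # ['clip_017.mp4']
print(on_position_update((12.0, 18.0)))  # [] (already released)
```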

The following methods were used to gather user data from Encounter 1.0:

• audio and data logs,

• an observer dedicated to each team on the street,

• an observer for the team leader game zone,

• a group feedback session at the end of the game.

3.2 Playing the game

To run the event, only a subset of the system features was employed. A breakdown of GPRS on the event day required us to adapt the system within a few hours. Street players were given a PDA with all the data for the game held locally rather than remotely — this consisted of maps with target locations and images of the targets. They were also equipped with a mobile telephone which connected them to their team leader via audio conference, and a digital camera to capture data. GPS data was downloaded locally to the PDA.

Each street team observer tracked the teams in the field and reported location directly to the management team. Position data was updated on the server using the administration tools. In this way street player positions were updated on the control player’s interface continually throughout the game. Neither team leaders nor street players were aware the process was manual.

Team leaders received instructions via text messages on their game interface. On a map they were given a target location, and could see their team’s current location and images of the target photograph they needed the team to recreate. A pin number would be sent to them to relay to their team colleagues. Entering the pin into the PDA released another media clue for the street players. Team leaders would guide their teams to the location, where the team recreated the photograph clue. Observers would verify the goal was achieved, and the next clue was sent to the team leader.

There were over 15 locations to achieve within the hour-long game. As street players went from location to location, their movements triggered media clips on the team-leader’s interface — classed as intelligence. Points were awarded for each clip released. The more of the game zone covered, the more intelligence was gathered. Teams able to find locations quickly were handicapped on the fly by the management team, who could change the order of targets to those farthest apart. Weaker teams could be led to more media-rich areas to enhance their score.

3.3 Feedback from game participants and administrators

The results of the game were very positive from an experience point of view. The teams were highly competitive and the nature of the game gave the players different levels of competition. On return to base all images were downloaded and numerical targets compared — accuracy of image reproduction could be rewarded.


Players responded very well to the always-on audio contact within the team. As well as the competitive nature of the game leading to a sense of speed and urgency, the audio allowed constant feedback between team-members. In contrast to ‘Uncle Roy All Around You’, in which communication is asynchronous and somewhat ambiguous, the audio conference allowed for clear direction and spontaneous chat.

Street players were frustrated by too much information on the PDA. Having a map removed some of the challenge in finding the target locations and removed some of the dependency on the team leader, who also had a map. For these players the more ambiguous the data, the more exciting the game became.

Team leaders were not immediately aware of the significance of the media clips being released, and were less interested in where the players had been than in where they were going. They requested that clips appear near target locations as additional clues for guiding their players to the location. In a sense, the concept that this could be media streamed from the street players was lost in this particular game format.

Verification of achieving a target or mission was another issue. Team leaders requested that they could verify that the image had been taken and thus receive the next clue automatically. Having the observers relay the information to the management team was an overhead, and caused some time delays. Team leaders also requested that they could see the image taken by their team and judge its accuracy in real time, perhaps directing the team more closely.

Game administrators had many of the tools they needed, except for the ability to set up a sequence of targets in advance for the system to work through automatically. All game decisions were made in real time, which required constant communication between observers and administrators. This is not a scalable model, GPRS issues aside, and the results have been fed back into the tool-set.

3.4 Discussion and further work

One of the main results from our trial concerns dealing with unpredictability in the network. Our aim is to move over completely to WiFi, in which users can play stand-alone elements (capture data, look up local data on the client machine) and then move to interactive hotspots. The hotspots would allow reliable streaming and communication at specific points. Users could drop data in these locations for others to pick up, without the need to deploy extended mesh networks.

Secondly, more research is needed to define the feature sets for managing content on each client interface. Content is specific to the kind of experience you wish each user to have, and is not necessarily the same data repackaged for each player or device. It is possible to imagine that street players receive information about an area that is vastly different from that which the team leaders have.

In contrast, the administration class has common features across game formats. It is possible to conceive that these features could be used as a decentralised game control interface, allowing users or groups of users to define their own media-rich experiences.

New features required will include adapting elements for the military project discussed in section 5, and accommodating large-scale media events beyond the current context of pervasive gaming. Mark-up features are in development for Encounter 2.0 which will enable games that are not tied to a specific geography, allowing players to collaborate and compete across cities and towns rather than in one specific game zone.

4. Workforce management application

The primary aim of the workforce management application is to develop a 3-D visualisation of a geographic area overlaid with information on BT’s network and selected faults. The 3-D visualisations have been built up from a combination of aerial photography, OS mapping, building outlines, BT’s network and current fault information. This type of visualisation is beneficial to a number of end-user groups. Call centre operators are an obvious user group, as they are likely to have little knowledge of the area where a fault is reported, so being able to see the local geography will provide background information which may prompt appropriate questions to better estimate the time taken to get to the premises, or arrange for any access issues to be resolved. Another group is fault analysts, who will use the 3-D model to look for fault patterns based on the physical geography.

The area we will focus on in this paper is using this visualisation to provide an overlay on the real world from an engineer’s viewpoint. In our application the repair engineers can look at fault and network information overlaid on the road network (see Fig 6). The benefit of this is to allow the engineers a view of the routing of the network leading up to a fault location, which may aid in diagnosing network-based faults due to road works, flooding and other localised problems. The system will:

• give an overview of the routing of cabling to a customer’s premises,


• overlay fault and network information on a view from the engineer’s van,

• provide a ‘helicopter view’ to aid the discovery of related faults and connectivity,

• let the engineers preview the locality in case there are access issues.

The image in Fig 6 shows an artist’s impression of how a 3-D visualisation tool might be seen by a repair engineer. The display shows the street overlaid with additional information that would aid the engineer. Specifically, underground cabling is shown (in brown), along with the locations of faults and summary information showing their relative importance. It is also envisaged that this would be a collaborative tool used in conjunction with a central controller who can look at other virtual views of the same scene. The compass on the lower right of the display links the view of the controller with the view of the engineer, and can be used to align the two views if necessary.

The conceptualisation might be the ideal scenario, but in practice there are many issues that need to be addressed to make it a reality. One of the issues that we have been exploring is the problem of registration³ of the real and virtual worlds using readily available equipment. To recreate a virtual overlay in exactly the correct place for it to be useful, there are a number of things that need to be aligned — the exact position of the viewpoint, the focal length of the lens and the orientation of the camera. Of these we have been looking at how we can achieve accurate enough orientation and position without the need for very expensive components. GPS can be used to give an approximate position in outdoor environments to the order of a few metres, while differential GPS can improve this to the order of a metre. In addition, GPS can also give direction information while the receiver is moving. An electronic compass can be used to provide orientation with some small lag, which is accurate to a few degrees.

³ Aligning information to overlay on to the correct place on the real world.
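To put those error figures in perspective, a back-of-envelope calculation (ours, not the paper's) shows how an orientation error translates into displacement of the overlay at typical street distances:

```python
from math import tan, radians

# displacement of an overlay caused by an orientation error,
# at a given distance from the camera
for distance_m in (10, 50, 100):
    for error_deg in (1, 3):
        offset_m = distance_m * tan(radians(error_deg))
        print(f"{error_deg} deg at {distance_m} m -> off by {offset_m:.1f} m")
# 3 deg at 50 m already displaces the overlay by ~2.6 m,
# roughly the width of a traffic lane
```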

The images in Fig 7 show how the 3-D models are used to aid overlaying information on the live scene. Figure 7(a) shows a perspective view of part of a residential area. The road network is shown in grey and house models in dark pink, taken from the Ordnance Survey MasterMap datasets [6]. The routing between the exchanges and local distribution points is shown in blue. The image in Fig 7(b) shows a different area from a lower viewpoint, with a fault flag over a building giving a rough indication of the severity of the fault. The lower viewpoint shows that height information is used to place the buildings at the correct heights on the ground. With the information in the 3-D model we can then choose to show as much or as little data overlaid as we wish.

The image in Fig 6 shows the conceptualisation of how a view might look for a field engineer. The view in Fig 8 shows more of the reality. Here we have overlaid on a view of a residential area the routing of cabling and the positions of distribution points. The cabling is clearly shown as a blue overlay on the road⁴, and the distribution point positions are shown as columns in yellow. The view does not do any occlusion based on the positions of buildings in the real world, although this would be possible as the building data is available. The heights of the DPs give an indication of their relative proximity and 3-D position.

⁴ The cabling is shown to take up much of the road so the engineer can clearly see that there is a cable on the road. The exact position of the cable can be determined by the engineer.

Fig 6 Artist’s impression of 3-D visualisation tool used by fault repair engineers. [Figure callouts: fault indicator, underground cable route, view summary, compass.]

Fig 7 Perspective views of a residential area, panels (a) and (b).

Fig 8 Mixed reality view of residential area.

The approach we have taken is to move away from a hand-held unit, due to the inherent movement involved in holding the unit, and instead base the location and orientation data for the overlays on a vehicle-based unit. The main driver for this is to be able to use additional data to aid with positioning and orientation. The Ordnance Survey provides accurate positions of the roads and road network, and we also use ground height data to place virtual roads in a 3-D space.

As we have made the assumption that the display unit will be vehicle based, we can make a number of assumptions about the position of the vehicle. Firstly, we assume that the vehicle is on the road, and in the UK on the left-hand side of the road. We can also assume that the viewpoint is a fixed height above the road. With these assumptions we can use a relatively approximate location from a GPS sensor together with road network information to more accurately position the location of the observer on the road. Figure 9 shows the improvements.

As we also have ground height information, we can use this to give the camera an accurate orientation not only in the X-Z plane but also in its inclination.

Fig 9 Refined vehicle positioning using GPS and road data. [Figure annotations: the raw GPS position; the refined position, placed to the left of the road assuming the direction of travel; and the view direction based on the road segment, or an average of road segments and the approximate direction.]
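A sketch of the refinement in Fig 9, assuming the road is represented as a 2-D segment and the vehicle sits a fixed lane offset to the left of it (the function name, offset value and coordinates are hypothetical):

```python
import numpy as np

def refine_position(gps_xy, seg_a, seg_b, lane_offset_m=2.0):
    """Snap a rough GPS fix to a road segment and offset it to the left lane.

    Returns the refined position and the heading (a unit vector)
    implied by the segment's direction of travel.
    """
    p, a, b = map(np.asarray, (gps_xy, seg_a, seg_b))
    d = b - a
    # project the GPS fix onto the segment, clamped to its endpoints
    t = np.clip(np.dot(p - a, d) / np.dot(d, d), 0.0, 1.0)
    on_road = a + t * d
    heading = d / np.linalg.norm(d)
    # left of the direction of travel (UK driving side):
    # the heading rotated 90 degrees anticlockwise
    left = np.array([-heading[1], heading[0]])
    return on_road + lane_offset_m * left, heading

pos, hdg = refine_position(gps_xy=(12.0, 3.5),
                           seg_a=(0.0, 0.0), seg_b=(100.0, 0.0))
print(pos)  # ~[12. 2.]: snapped to the road, then offset to its left side
```

With ground height sampled along the same segment, the camera pitch could be refined in the same way, from the height difference between successive segment points.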

In summary, we have been keen to explore the use of mixed reality environments for network fault-engineer applications. We have addressed some of the issues of registering the real world, so as to overlay information accurately in position over real-world objects, by using a combination of GPS and road network information to obtain more accurate position and orientation. We could always improve registration using differential GPS and an electronic compass to get a better overlay, but there is the question of whether this is necessary. We are interested in comparing the use of overlays such as we have generated in Fig 8 with a virtual reality view such as that in Fig 7(b). The virtual reality view does not suffer from the need for very accurate position and direction — a standard GPS device would suffice for this. The engineer can make the link between an object in the virtual world and its real-world equivalent.

5. Military operations in urban environments

Increasingly, the success of military operations in urban environments is dependent on possessing an enhanced awareness of the current situation. Ground forces in unfamiliar urban terrains require an enhanced appreciation of their local situation, where pertinent information and knowledge effortlessly augment their perception of the environment. Command and control requires a complete understanding of the remote situation in order to help manage uncertainty, ignorance and surprise across the battle space.

To provide these capabilities a system is required which supports mutual understanding through the sharing and provision of information and knowledge directly related to the physical space.

5.1 Benefits and scenarios

Such a system will enable ground forces to be aware of others in their proximity, thereby helping to prevent friendly fire incidents. It will also facilitate the creation of spatialised information and knowledge by ground forces, and its further dissemination and presentation to others. When operations are held in a previously visited location, pertinent information such as a secured entrance to a building or a hot spot of recurring social unrest could be highlighted to ground forces unfamiliar with the environment (see Fig 10).



Similarly, in rehearsed tactical manoeuvres, probable sniper locations and hazards previously identified by reconnaissance could all be highlighted. Furthermore, generic information such as street names and buildings could be provided, and important military resources such as telecommunications and power lines, broadcast aerials, etc, could be made apparent.

The use of a virtual replica of the battle space (see Fig 11) will enable command and control to better understand the intricacies of the terrain and the location and status of friendly and enemy monitored ground forces and resources. Location-specific commands such as anticipatory orders and requests for intelligence can be easily created and delivered to ground forces in an organised and managed way. In operational planning and review, the virtual battle space could also be used to rehearse and replay operation tactics.

Fig 10 Enhanced perception of the real world for ground forces. [Figure labels: field of view, buildings (e.g. a hospital), roads, services, hazards, manoeuvres, grid points, and resources such as comms, command and power.]

Fig 11 Virtual replica of the real-world environment for command and control. [Figure labels: tracked vehicles and units with identifiers, and the command-and-control viewpoint.]

5.2 Supporting technologies

The key technologies required to support the development of the mixed reality system for this defence application are outlined below.

5.2.1 3-D GIS generation

The 3-D GIS tool-kit enables an accurate virtual duplicate of a real-world space to be quickly created by fusing data and information from a variety of diverse sources, including existing vector and raster maps, LIDAR (light detection and ranging) data, satellite imagery, and aerial photography.

The use of a 3-D GIS enables a common spatialised reference model to be used by ground forces and command and control for the generation and provision of GIS-referenced information.

5.2.2 GIS-referenced media

Both command and control and the ground forces have the capability to create GIS-referenced media. In order for a media object to be GIS-referenced it must be attributable to a point location, a defined area or an addressable 3-D object within the virtual scene or real-world environment. Metadata associated with all GIS-referenced media enables information filtering to be used effectively. At a minimum, the metadata should include a description of what it is, who created it, where it was created, and when it was created.
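A minimal sketch of such a record, with hypothetical field names and values (the paper specifies only the what/who/where/when minimum):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class GISMedia:
    """Minimum metadata for a GIS-referenced media object."""
    description: str                      # what it is
    author: str                           # who created it
    location: tuple[float, float, float]  # where (x, y, z in scene coordinates)
    created: datetime                     # when

note = GISMedia(
    description="secured entrance, north face of building",
    author="patrol_B2",
    location=(531200.0, 181400.0, 4.5),
    created=datetime.now(timezone.utc),
)
```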

5.2.3 Virtual and real-world tracking and registration software

Mixed reality systems require computer media (2-D and 3-D graphics, audio and haptic/tactile representations) to be accurately registered to the user’s viewpoint in real time, so that the virtual media is convincingly placed within the real-world scene. Past research [7] has demonstrated that this can be achieved using hybrid systems that combine magnetic tracking (digital compass, inertial sensors and differential GPS) with computer-vision-based tracking techniques.

GPS and magnetic tracking systems are used to provide an initial estimate of the position and orientation of the viewer. Computer-vision real-time object-tracking algorithms are then used to register the real-world video image with the virtual-world image. This is achieved using static real-world objects as markers that have a virtual equivalent within an accurate virtual representation of the real-world scene.
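One simple form such a hybrid correction could take (our sketch under strong simplifying assumptions, not the scheme of [7]): take the compass yaw as the initial estimate, then shift it by the discrepancy between the bearing a tracked marker should have, given the known geometry, and the bearing implied by where the marker was detected in the image:

```python
from math import atan2, degrees

def refine_yaw(yaw_compass_deg, user_xy, marker_xy,
               marker_px, image_width_px, hfov_deg):
    """Correct a compass yaw estimate using one vision-tracked marker.

    Angles are measured anticlockwise from the +x axis; the pixel-to-angle
    mapping is a crude linear approximation of a pinhole camera.
    """
    # bearing the marker must have, from the known user and marker positions
    true_bearing = degrees(atan2(marker_xy[1] - user_xy[1],
                                 marker_xy[0] - user_xy[0]))
    # bearing implied by the compass plus the marker's offset
    # from the image centre
    deg_per_px = hfov_deg / image_width_px
    observed_bearing = (yaw_compass_deg
                        + (marker_px - image_width_px / 2) * deg_per_px)
    # shift the compass yaw by the measured discrepancy
    return yaw_compass_deg + (true_bearing - observed_bearing)

# the compass reads 42 deg, but a marker detected dead centre of a
# 640-pixel-wide, 60-deg-FOV image has a true bearing of ~40 deg
print(refine_yaw(42.0, (0, 0), (100.0, 83.9), 320, 640, 60.0))  # ~40.0
```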

5.2.4 Information provisioning system

Urban environments are extremely complicated. They are populated by large numbers of people, areas, buildings and vehicles, all of which can have vast amounts of data and information stored about them. It is therefore very easy for a user to become overwhelmed by the sheer volume of information available. To overcome the problem of information overload, an information provisioning system (IPS) will be developed which is able to filter out superfluous information and provide only useful, relevant and pertinent information to the user. Recent research in this area has demonstrated positive results using hybrid information filtering methods [8].

The IPS provides a framework for utilising spatial and visible filtering, combined with an understanding of the user’s context, to refine the relevant GIS-referenced information that is presented to the user (see Fig 1).

Spatial filtering can be used to manage the display of information based on the proximity of the user to a GIS-referenced point. A user-centred bounding sphere delineates an area of space within which a GIS-referenced point can be detected. If a point falls within the user’s bounding sphere, the user can see and interact with the media associated with that point. Expanding or contracting the user’s bounding sphere provides farther or nearer distance-related information.
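The bounding-sphere test itself is a one-line distance check; a sketch with hypothetical point records:

```python
from math import dist

def points_in_sphere(user_pos, radius_m, gis_points):
    """Return the GIS-referenced points inside the user's bounding sphere."""
    return [p for p in gis_points if dist(user_pos, p["pos"]) <= radius_m]

points = [{"id": "hazard_7",  "pos": (5.0, 2.0, 0.0)},
          {"id": "hospital", "pos": (400.0, 120.0, 0.0)}]
print(points_in_sphere((0.0, 0.0, 0.0), 50.0, points))  # only hazard_7
# growing radius_m admits farther information; shrinking it, nearer only
```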

Visible filtering can be used to manage the display of information based on the user’s visibility of an object in the real world. This can enable potentially redundant information associated with a physical object to be filtered out.

Context filtering utilises any available information about a user and interprets that information (which may include complex analysis, through feature extraction and modelling, to infer a context) to refine the relevant GIS information that is presented to the user.

5.2.5 Information displays and interfaces

The multimodal interface system enables the user to interact with complex media via a conventional chest-mounted LCD display. Information is also displayed using alternative body-worn displays that do not fully dominate the user’s attention. These include a peripheral vision display mounted in goggles, spatialised audio via a stereo headset, and a body-worn tactile display. Figure 12 shows all the types of display and other hardware carried by potential users.

Fig 12 Various ‘displays’ used by mobile users. [Figure: the mobile client carries a tactile display, stereo headset and microphone, peripheral vision display, sunlight-readable LCD display, Firewire camera, digital compass and accelerometers, differential GPS, WiFi comms, a battery pack and a 10 GB hard drive; the command-and-control client comprises the command-and-control interface and WiFi comms.]


6. Future

Over the next 30 months the DTC DIF Mixed Reality System for Urban Environments Project will develop a proof-of-concept mixed reality system demonstrator, which can be evaluated in a number of simulated military and civilian situations. The results of the evaluation will then be used to inform the development of future situational awareness systems for field operatives and for command-and-control personnel within the military domain.

Although the exploitation of this research work is predominantly focused within the military domain, many opportunities exist for commercial exploitation within the wider civilian arena. Generated IP and selected technologies are likely to be transferable to the emergency services, mobile engineering workforces, civil engineering, and business and consumer mobile broadband markets.

7. Conclusions

This paper has discussed three applications involving virtual/mixed reality environments as a means of presenting information related to the local environment. In the gaming application area it will be interesting to see how the more collaborative aspects being developed as part of Encounter 1.1 can be introduced into future mixed reality systems. Unfortunately, in the workforce management and military applications it is still too early to measure the effectiveness of using mixed reality techniques such as content filtering and real-world/virtual registration as an aid to providing information to a user. It is clear that a move to pervasive computing environments will help with information filtering, where the information might be held locally to its point of use, but in turn this may introduce problems of content access if the receiver is not in the immediate vicinity of the information.

What will be fundamental to future spatialised information provisioning systems is the relevant use and appropriate display of media to users, regardless of its origin. This paper has gone some way to showing what types of information can be used, and how they can be presented to the end user.

References

1 It’s Alive! — http://www.itsalive.com/

2 Newt Games — http://www.newtgames.com/

3 Blast Theory — http://www.blasttheory.co.uk

4 Benford S, Anastasi R, Flintham M, Drozd A, Crabtree A, Greenhalgh C, Tandavanitj N, Adams M and Row-Farr J: ‘Coping with Uncertainty in a Location-Based Game’, IEEE Pervasive Computing (September 2003).

5 Flintham M, Anastasi R, Benford S, Drozd A, Mathrick J, Rowland D, Tandavanitj N, Adams M, Row-Farr J, Oldroyd A and Sutton J: ‘Uncle Roy All Around You: mixing games and theatre on the city streets’, in Proceedings of Level Up: The First International Conference of the Digital Games Research Association (DIGRA), Utrecht, The Netherlands (November 2003).

6 Ordnance Survey — http://www.ordnancesurvey.co.uk/oswebsite/products/osmastermap/



7 You S, Neumann U and Azuma R: ‘Orientation tracking for outdoor augmented reality registration’, IEEE Virtual Reality, pp 36—42 (1999).

8 Julier S, Baillot Y and Brown D: ‘Information filtering for mobile augmented reality’, IEEE Computer Graphics and Applications, pp 12—15 (September/October 2002).

James Bulman completed a Computer Studies degree at Nottingham Trent University. He subsequently joined BT in 2001 in the role of graphics programmer for the Radical Multimedia Lab.

During his time with the RML he worked on various projects in the areas of virtual reality, mixed reality and pervasive gaming.

He is currently with the Media Interfaces Group, working in the fields of computer graphics and networking.

Barry Crabtree joined BT in 1980 after completing a Physics degree at Bath University. After a number of years in software development he moved into the field of artificial intelligence (AI), gaining a PhD, and helped begin BT’s systems in automated diagnosis and workforce management. He moved on to work in distributed AI, optimisation problems and adaptive systems, setting up the first international conference on Intelligent Agents (PAAM), and was seconded to BT’s North America office to help build their agents research team. He spent some years working on developing personalised systems and worked closely with MIT’s software agents group. After a spell as the CTO of a start-up company in BT’s incubator, focusing on delivering personalised information on mobile devices, he moved back into BT. There he concentrated on developing 3-D applications before moving to work on digital rights management.

Andrew Gower is a research group leader in BT’s Media Interfaces Group, within the Broadband Applications Research Centre at Adastral Park. He received a BA (Hons) degree in Three-Dimensional Design from Leicester Polytechnic in 1990. Since then, he has worked on the design of consumer and capital plant products and a variety of vision-setting research projects. His current focus is leading a team researching mixed reality systems for use within military and civilian commercial domains. Recent work also includes ambient and pervasive user interfaces to information and communications systems. His main research interests are mixed reality, ambient interfaces and disruptive design.

Amanda Oldroyd received a BA (Hons) degree in Design from Edinburgh College of Art in 1993, where she studied animated film-making. In 1994 she completed an MA in Computer Graphics and Animation at the National Centre for Computer Animation, Bournemouth University. She joined BT in the same year, to research the design and implementation of on-line collaborative applications, specialising in real-time 3-D character and world design. In the years 1996-2000, she worked almost exclusively on the development of inhabited TV applications, including the award-winning ‘The Mirror’ with the BBC, ‘Heaven & Hell Live’ for Channel 4 and ‘Ages of Avatar’ with BSkyB. These applications enabled individuals with an Internet connection to participate in television programming via on-line virtual worlds.

She now manages the Gaming Futures research project. Recent work includes ‘Uncle Roy All Around You’, in collaboration with Blast Theory and Nottingham University, and Encounter — a heterogeneous gaming platform to connect players over fixed and mobile networks in multi-user games.

Matt Lawson joined BT eight years ago and has master’s degrees in Physics and Telecommunications from Durham and UCL respectively, and is also a Chartered Engineer.

He has had a career in research that has covered technical development, project management, programme management and managing a multimedia research lab covering 3-D graphics, interactive multimedia and computer games technology. He is currently setting up the new Applied Technology Centre, which will provide BT with an advanced prototyping and alpha trial capability. His team recently won ‘The Banker’ magazine award for on-line services and the BT Award for sustainability, for work in advanced multimedia CRM systems using virtual characters.

Jon Sutton is a new media designer in BT’s Broadband Applications Research Centre at Adastral Park. He received a BA (Hons) degree in Communication Design from the University of Portsmouth in 2001. Since joining, he has developed interface components for new broadband service concepts. These include a relationship minder service and a digital media album prototype, in which physical objects and displays are connected to on-line digital content.

His current work includes the Gaming Futures project, in which his role has been to design and implement multi-user on-line gaming applications running on fixed and wireless devices.