
FUTURE TECH >>> SENSOR NATION

MIKE VILLAS’S WORLD

The augmented-reality wonderland of Pyramid Hill and Fairmont High School is taking shape today

BY HARRY GOLDSTEIN

ALL ILLUSTRATIONS: PETER BOLLENGER

July 2004 | IEEE Spectrum | NA 45

ABOUT THE BIG PICTURE, there isn’t much doubt. Sensing, monitoring, networking, and computing technologies of incredible variety and profusion will converge over the next 10 to 20 years to give us—and those who would keep tabs on us—incredible powers of observation. But exactly how it will change our lives, we can only imagine.

Some fear a simple eruption of technology-based Orwellian repression. Others anticipate the emergence of a hyperengaging form of existence based on really cool toys that (caveat emptor) spy on us every now and then. We’ll drift casually in and out of augmented reality and have dizzying access to an unceasing torrent of information.

In the cool toys category, some of the most compelling and detailed scenarios have come from Vernor Vinge, a science fiction author and former computer science professor at San Diego State University. In the preceding story, “Synthetic Serendipity,” which Vinge adapted for IEEE Spectrum from his upcoming novel, Rainbows End, he introduces us to Mike Villas and his friends. Through their eyes, we see how we might integrate the coming technologies into our lives.

Vinge’s sensor planet of 2020 teems with billions of wireless ultrawideband communications nodes connected to countless pinhead-size cameras, microphones, motion detectors, and biometric and other sensors to form a fine-grained mesh of networks that cover every square millimeter of the globe. Equipped with full-color, see-through displays that cover each pupil like a contact lens and clothing that senses muscle twitches, people will exploit an immensely sophisticated successor of today’s Internet. They’ll be able to immerse themselves in gripping gaming environments, silently communicate with friends just by tensing their muscles, and hunt down information about other people.

In the next 30 years, Vinge believes, we will reach a point where the combination of powerful processors, limitless data-storage capacity, ubiquitous sensor networks, and deeply embedded user interfaces will create a bond between human and machine “so intimate that users



may reasonably be considered superhumanly intelligent.”

Vinge (pronounced VIN-jee) closely tracks emerging technologies by staying in touch with such influential computer scientists as Robert Fleming and Cherie Kushner and with research engineers like Georgia Institute of Technology’s Thad Starner. Spectrum talked about Vinge’s story with his friends, as well as with such techno-gurus as Jaron Lanier, a pioneer of virtual reality, and Will Wright, creator of the hit computer game The Sims. Though they squabble about how the technologies described in “Synthetic Serendipity” will come together in the end, they all agree that the result will make fact out of fiction.

+ + +

“YEARS AGO, GAMES AND MOVIES were for indoors…. Now they were on the outside. They were the world.”

Vinge’s brand of immersive reality in “Synthetic Serendipity” can be thought of as an electronically created, shared hallucination. It combines virtual reality (where your ears, eyes, and skin are fed computer-generated sounds, images, and sensations) and augmented reality (where computer-generated images are laid over your view of the physical terrain). In San Diego, circa 2020, you can see what you choose to see: suburban homes can become castles; your friends, velociraptors. The key challenge to mapping the virtual onto the real is access to streams of sufficiently specific and precise location information. In Vinge’s conception, the task is performed by localizer nodes, which are transceivers that determine their position in the network by communicating with other nodes located in a 10- to 20-meter radius in every direction.

In a network of thousands of nodes, each individual node talks only to another dozen or so nodes in its local cluster. To fix its position in space, an individual node measures both the time it takes to transmit a train of pulses to a neighboring node and how long it takes to receive an answering pulse from that node. Like old friends trading gossip, one node will tell another node about other neighbors it has contact with, so that every node in a cluster knows its position relative to all the others.
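A toy version of that two-way timing exchange is easy to sketch. The turnaround delay and distance below are illustrative assumptions, not Aether Wire specifics:

```python
# Illustrative two-way time-of-flight ranging between two localizer nodes.
# A node timestamps an outgoing pulse train; the neighbor replies after a
# known turnaround delay; half the remaining round-trip time gives range.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def estimate_range(t_sent: float, t_received: float, turnaround: float) -> float:
    """Distance to a neighbor from a two-way pulse exchange (times in seconds)."""
    round_trip = (t_received - t_sent) - turnaround
    return SPEED_OF_LIGHT * round_trip / 2.0

# A neighbor 15 m away: one-way flight time is about 50 ns.
one_way = 15.0 / SPEED_OF_LIGHT
turnaround = 1e-6                      # assumed 1-microsecond responder delay
t_rx = 0.0 + 2 * one_way + turnaround  # reply arrives after two flights plus delay
print(estimate_range(0.0, t_rx, turnaround))  # ~15.0
```

Sharing such pairwise ranges with neighbors, gossip-style, is what lets a whole cluster agree on relative positions.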

Mated to the story’s game server in Vinge’s Pyramid Hill Amusement Park, these localizers keep track of players to within a fraction of a millimeter, to overlay full-color graphics on top of players’ real-world views.

But here at the turn of the century, we’re still fumbling with the Global Positioning System and location accuracies measured in meters or, at best, centimeters. Nevertheless, an early approximation of Pyramid Hill exists on the campus of the University of South Australia in Adelaide. Researchers lugging backpacks stuffed with notebook computers, cables, batteries, and GPS receivers sport bulky electronic halos—head-mounted displays and head-tracking devices. As they stalk around campus, they wield haptic guns that vibrate when fired at monsters from a modified version of Quake, the popular desktop shooting-gallery game. Bruce Thomas, Wayne Piekarski, and their colleagues at the university’s Wearable Computer Laboratory took Quake’s open source code and adapted the game to the campus environment. They kept the guns and monsters but removed the texturing for the ground, structures, and sky so that the real world shines through.

The game, which they call ARQuake, works pretty well when a player is more than 10 meters away from, say, an approaching monster, which appears to the gamers amid real objects. A combination of magnetometers, gyroscopes, accelerometers, and GPS readers tracks each player and matches the position of physical objects like trees and buildings to the blank spaces in the computer game where the graphical images of these objects have been erased.
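The article doesn’t show ARQuake’s tracking code, but the textbook way to fuse a fast-drifting gyroscope with a slow absolute reference (an accelerometer tilt or compass heading, say) is a complementary filter; the blend factor here is an invented illustration:

```python
# Generic complementary filter: blend a fast but drifting estimate (gyro
# integration) with a slow but absolute one (e.g. accelerometer/compass).
# Not ARQuake's actual code; just the standard pattern such trackers use.

def complementary_filter(angle: float, gyro_rate: float,
                         absolute_angle: float, dt: float,
                         alpha: float = 0.98) -> float:
    """Return a new orientation estimate in degrees."""
    integrated = angle + gyro_rate * dt  # fast path: integrate angular rate
    return alpha * integrated + (1 - alpha) * absolute_angle  # slow correction

angle = 0.0
for _ in range(100):  # 1 s of 100 Hz samples, true heading 10 degrees
    angle = complementary_filter(angle, gyro_rate=0.0, absolute_angle=10.0, dt=0.01)
print(round(angle, 2))  # converges toward 10
```

The high weight on the gyro path keeps the view responsive; the small absolute weight slowly pulls the estimate back when the gyro drifts.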

But the closer the virtual character is in relation to the player, the more the GPS tracking error erodes the gaming experience and the virtual monsters do things they shouldn’t, like pass through walls. When a player is within a meter or two of a three-story building, which blocks the GPS signal, the game can no longer track the player at all, and the virtual world stops moving with him or her.

“The trick is designing the games so they force the user to stay within the desirable operating area,” says Piekarski. “So if the GPS doesn’t work well next to the walls, encourage the user to stay out in the open by making all the interesting stuff happen there.”

Getting the kind of seamlessness that Vinge envisions will require an area bristling with localizers. It’s also going to demand some cleverness in matching virtual props and characters to real ones—for example, a virtual dinosaur to the robotic mechanism built solely to be the framework for the graphics-created beast.

“The robot’s going to have to be pretty much the same size and shape as the dinosaur—otherwise it’s not going to bite players at the right time,” says Blair MacIntyre, an assistant professor at Georgia Tech’s Augmented Environments Lab in Atlanta. “When you hide the physical world behind these virtual overlays but then expect people to be able to touch things in the physical world, that means that the virtual world must precisely correspond to the physical.”

The stumbling block for immersive games so far is a vicious little problem called simulator sickness, according to Wright, cofounder of the computer game company Maxis Software Inc., in Walnut Creek, Calif., and creator of such blockbuster games as SimCity and The Sims. “It’s hard to get a really accurate read on which way you’re pointed and to have it update fast enough,” says Wright. “With today’s systems, you turn your head, and all of a sudden reality moves, but the virtual image lags behind.” For most people, nausea ensues.

Overcoming the problem comes down to minimizing latency—the lag that occurs when radio signals travel among players and the computers that compute the frame-by-frame updates for each player. In “Synthetic Serendipity,” localizers spread all over Pyramid Hill deliver this information at a rate fast enough to keep the experience smooth and immersive. But



what about players who are not located on the Hill, such as the mysterious stranger who projects Big Lizard onto that robot? For players in remote locations to share a virtual reality experience, the basic strategy is to pick a prime location for the computers that run the software that predicts, for each player, what frame needs to appear next in his or her display.

Jaron Lanier, the dreadlocked hipster credited with coining the term virtual reality, wants to embed VR displays into enclosed environments such as offices, classrooms, and laboratories so that people across the globe or across the street can share the same virtual environment—a VR teleconference of sorts. For the room-sized tele-immersive displays he and others have been developing, real-world latencies are in the 40- or 50-millisecond range, more than enough to turn the experience into a herky-jerky nightmare. To achieve the bone-crunching realism Vinge describes, engineers will have to get latencies down to 10 ms or less, says Lanier, who is now a visiting scientist at Silicon Graphics Inc., in Mountain View, Calif.
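To see how tight that 10-ms target is, it helps to sum a rough motion-to-photon budget. Every component figure below is an illustrative assumption, not a measured value:

```python
# Rough motion-to-photon latency budget for a shared immersive display.
# All component figures are illustrative assumptions.
budget_ms = {
    "head-tracker sample": 2.0,
    "network round trip":  3.0,
    "frame prediction":    2.0,
    "render and scanout":  2.5,
}
total = sum(budget_ms.values())
print(f"total {total:.1f} ms of a 10 ms budget")  # total 9.5 ms of a 10 ms budget
```

With only a few milliseconds to spend on the network, every other stage has to be squeezed, which is why tracker rates and display refresh matter as much as bandwidth.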

When it comes to chipping away at latencies, different virtual interactions call for different strategies. For the “haptic carnage” Vinge writes about, a local cache of simulation data related to dinosaur teeth and jaws stored on Fred’s wearable could feed actuators in his shirt to create the sensation of being munched. But for real-time interactions between people on opposite sides of a continent, where there is an unpredictable two-way flow of data, engineers will have to put the computers midway between them in what Lanier calls “virtual-world prediction megacenters.”
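The midpoint strategy is simple geometry: a server at either endpoint doubles the worst one-way delay compared with one placed halfway between the players. A sketch, assuming signals travel at roughly two-thirds of light speed in optical fiber (an illustrative figure):

```python
# Why "put the computer in the middle": for two players separated by d km,
# a server at distance x from player A has worst-case one-way delay
# proportional to max(x, d - x), which is minimized at the midpoint x = d/2.

FIBER_KM_PER_MS = 200.0  # assumed signal speed, ~2/3 of c in vacuum

def worst_one_way_ms(d_km: float, server_at_km: float) -> float:
    """Worst one-way delay to either player, in milliseconds."""
    return max(server_at_km, d_km - server_at_km) / FIBER_KM_PER_MS

D = 4000.0  # roughly coast to coast, in km
print(worst_one_way_ms(D, 0.0))     # server at one player: 20.0 ms
print(worst_one_way_ms(D, D / 2))   # server midway: 10.0 ms
```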

In 2000, as chief scientist of the National Tele-immersion Initiative, a coalition of research universities studying advanced applications for the next generation of the Internet, Lanier helped demonstrate that placing computation in the middle of a network dramatically improved the quality of prototypical immersive applications. His team used the Pittsburgh Supercomputing Center to link researchers at the University of North Carolina, Chapel Hill, with staff from Advanced Network & Services in Armonk, N.Y. With virtual laser pointers, the participants moved computer-generated furniture around in a three-dimensional space projected in real time onto large screens at each venue. Though more like a low-fi holodeck from Star Trek than Pyramid Hill, the demo showed that we’re on our way to sharing virtual experiences in real spaces.

+ + +

“THE TWINS LOOKED AT EACH OTHER. Mike could tell they were silent messaging.”

The idea of communicating without uttering a word or typing on a keypad seems like so much parapsychological mumbo jumbo. But there’s solid technology behind it. In Vinge’s story, Mike and his friends, the Radner twins, play games, surf the Web, and “silent message” each other by shrugging and twitching to control the electronics embedded in their clothing.

And such gesturing isn’t the only alternative to speaking or typing. Researchers today are already investigating several others, including eye blinks and actions known as subvocal utterances.

Vinge’s twitching communication scheme seems the most fantastical, but it could capitalize on medical technology that has been around for years. In a typical setup for monitoring muscle activity, surface-electromyography electrodes placed in contact with the skin detect the minute electrical signals associated with the contraction of a muscle. In the wearable application Vinge imagines, analog signals picked up by such electrodes would be amplified and then sent along conductive threads to chips that would digitize the signals and route them to a processor running gesture-recognition software. The program would first determine whether the series of muscle contractions is intentional and, if so, match them to a library of gesture cues and associated meanings. Finally, the textualized message would be wirelessly transmitted to the intended party.
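The pipeline just described (reject stray twitches, then look the contraction pattern up in a gesture library) can be sketched as follows; the threshold, channel count, and library entries are all invented for illustration:

```python
# Sketch of the gesture-recognition step above: one frame of digitized,
# per-muscle sEMG amplitudes is thresholded into an on/off pattern, stray
# twitches are rejected, and the pattern is looked up in a gesture library.
# Threshold and library entries are invented, not from any real system.

ACTIVATION = 0.5  # normalized amplitude treated as an intentional contraction

GESTURE_LIBRARY = {
    (1, 0, 1): "send: 'on my way'",
    (1, 1, 0): "send: 'call me'",
}

def classify(channels: list) -> "str | None":
    """Map one frame of per-muscle amplitudes to a message, or None."""
    pattern = tuple(int(a >= ACTIVATION) for a in channels)
    if not any(pattern):  # no deliberate contraction detected
        return None
    return GESTURE_LIBRARY.get(pattern)  # None for unknown patterns

print(classify([0.9, 0.1, 0.8]))  # send: 'on my way'
print(classify([0.1, 0.2, 0.1]))  # None
```

A real recognizer would classify sequences of frames rather than single ones, but the shape of the computation is the same.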

“In theory, we could train ourselves to twitch different muscles so we could effectively type,” says Georgia Tech’s MacIntyre. “All that you need to learn how to control these muscles is feedback. And if the input device reacted predictably and consistently, then we could learn to control things that were peripheral to our bodies just by having the computer monitor signals to and from the brain.”

For now, researchers are working on different silent messaging techniques. Elsewhere at Georgia Tech, Thad Starner’s “blinkprint” technique measures eye blinks to identify blinkers and allow them a measure of rudimentary control.

But for many, using vocal cords is still the most natural way of communicating—even if you don’t make a sound. Researchers at NASA’s Ames Research Center, in Moffett Field, Calif., led by Chuck Jorgensen, recently proved that the tremors in nerves controlling the vocal cords can be detected when someone is speaking very quietly or even just reading silently. His team is experimenting with button-size surface-contact electrodes, placed beneath and on either side of the larynx, which detect the nerve signals that would otherwise become speech. The sensors relay those subvocal nerve signals to a digital signal processor and then to a software package trained to recognize certain signals as simple words, such as “stop” and “go” and the digits “0” through “9.” Researchers used the system for hands-free Web browsing.

+ + +

“MIKE GAVE A SHRUG and a twitch just so. That was enough cue for his Epiphany wearable.”

In “Synthetic Serendipity,” clothing is the interface for everything from personal communications to gaming. Today, the


primitive forerunners of these garments are being tested by e-textile pioneers like Sundaresan Jayaraman [see “Ready to Ware,” Spectrum, October 2003]. His prototype machine-washable SmartShirt contains sensors, actuators, processors, and communications circuitry all embedded directly in the fabric and connected by a flexible data bus.

The SmartShirt controller, now the size of a pager and running on a 3-volt battery, processes the signals from sensors woven into the fabric to compute vital signs like heart rate. It wirelessly transmits the data to a display device, such as a wristwatch, a PDA, or a PC. By 2020 such a controller will be smaller than a dime and powered by microfuel cells or solar cells woven into the fabric, like those demonstrated recently by photovoltaics maker Konarka Technologies Inc., in Lowell, Mass.
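The article doesn’t show the controller’s firmware, but the simplest form of such a vital-sign computation, heart rate from the timestamps of detected beats, looks like this:

```python
# Minimal vital-sign computation of the kind a garment controller performs:
# average heart rate from the timestamps of detected heartbeats.
# (Illustrative sketch; not the actual SmartShirt firmware.)

def heart_rate_bpm(beat_times_s: list) -> float:
    """Average beats per minute from successive beat timestamps in seconds."""
    intervals = [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Beats detected 0.8 s apart correspond to 75 beats per minute.
print(heart_rate_bpm([0.0, 0.8, 1.6, 2.4]))
```

In practice the hard part is the upstream step, picking clean beat events out of noisy fabric-sensor signals; the arithmetic itself is trivial.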

The garments of Vinge’s imagination, which can faultlessly sense even subtle messages amid random twitches and shrugs, are years away. Nevertheless, the raw computational power and storage capacities in today’s experimental wearables are already impressive. As he strolls the campus of the College of Computing at Georgia Tech, where he is an assistant professor, Starner views e-mail documents and surfs the Web by peering through a head-up liquid-crystal display clipped onto his eyeglasses. He’s also “typing” single-handedly on a palm-size keyboard-and-mouse combo called a Twiddler2, from Handkey Corp. in Denver. Obviously, he’s never far from his computer, a shoulder-bag unit from Charmed Technology in Los Angeles. It’s based on a low-power Transmeta Crusoe processor, 256 megabytes of RAM, and an 80-gigabyte hard disk.

Starner predicts that within five years, the shoulder-bag unit will shrink to fit in his pocket and carry 1 terabyte of disk space. That’s more than enough to store a year’s worth of your e-mail, music, video, medical records, and every word you utter.
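A back-of-the-envelope check of that claim, using rough per-medium sizes that are purely illustrative assumptions:

```python
# Sanity check of the 1-terabyte-per-year claim above.
# All per-item sizes are rough illustrative assumptions.
GB = 1.0
year_gb = {
    "e-mail":            5 * GB,
    "music":            50 * GB,
    "recorded speech":  16 * 365 * 3600 * 2_000 / 1e9,  # 16 h/day at ~2 KB/s
    "medical records":   1 * GB,
    "video clips":     300 * GB,
}
total_gb = sum(year_gb.values())
print(f"{total_gb:.0f} GB of a 1000 GB disk")  # comfortably under 1 TB
```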

Tiny, light, and bright displays are clearly central to Vinge’s future world; few people would gladly wear the geeky clip-ons that Starner habitually sports. In Vinge’s story, the display of choice is a retinal-scanning system embedded in contact lenses. The basic technology is already here, albeit in much bulkier form, in displays from Microvision Inc., in Bothell, Wash. The company’s scanned-beam display uses extremely small semiconductor lasers to scan images directly onto the retina [see “In the Eye of the Beholder,” Spectrum, May].

The display allows augmented-reality software to superimpose graphs and text over your view of real objects—schematics of the underground utility infrastructure of Pyramid Hill, say, or navigation arrows that guide you to the right classroom. When green diode lasers become available to combine with the red and blue ones already here, full-color, 3-D displays will give gamers gut-wrenchingly vivid scenes that will make the images Starner now sees on his display look like grainy old snapshots.

The scanned-beam display, including the lasers and the 2.5-mm-diameter microelectromechanical scanner that paints the light onto the retina, needs to shrink to fit comfortably and unobtrusively on a contact lens. John R. Lewis, a research fellow at Microvision, insists he could build such a prototype today for US $5 million to $10 million.

Of course, we might skip the contact lens altogether and pump images directly into the brain’s visual cortex. Research scientist Lanier, who counts himself an expert in user interfaces, among other things, believes that prosthetic sensory implants are almost inevitable; it’s just a matter of how far into the nervous system the embedding takes place.

“The machinery of the retina and its connectivity to the visual cortex and the visual cortex’s integration with the brain are all so exquisitely good that probably the best engineering decision will be to implant a display inside the eye but on the outside of the retina,” Lanier says. “But engineering culture has to articulate a more spiritual and more beautiful vision for the future in order for any of these things to be accepted.”

+ + +

“WITHOUT A COMPLETE localizer mesh, nodes could not know precisely where they and their neighbors were. High-rate laser comm could not be established....”

In Mike Villas’s world, every object is instrumented and networked: its description, current states, capabilities, and relationship to its context are known to anyone who gains access to the network.

Vinge bases his fictional sensor networks on the localizer nodes being developed by Robert Fleming and Cherie Kushner, cofounders of Aether Wire & Location Inc., in Nicasio, Calif. Over the last decade, under contracts from the U.S. Defense Advanced Research Projects Agency and other government sponsors, the company has been making a device, currently the size of a pager, capable of providing location information accurate to within a centimeter.

As described earlier, localizer nodes are transceivers that determine their position in the network by communicating with other localizers placed in a 10- to 20-meter radius in every direction. By forming a mesh network, the devices talk to each other instead of a central base station, sending packets for one another along routes that are recalculated every few milliseconds to find the fastest path.
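Recalculating the fastest path through a mesh as per-hop delays change is, at its core, a shortest-path computation. A minimal sketch over an invented four-node topology:

```python
# Mesh forwarding as described above: each node relays packets for its
# neighbors, and routes are recomputed whenever measured link delays change.
# Plain Dijkstra over per-hop delays; the topology here is invented.
import heapq

def fastest_path(links: dict, src: str, dst: str) -> list:
    """Lowest-total-delay route from src to dst through the mesh."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        delay, node, path = heapq.heappop(queue)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            return path
        for nbr, hop in links.get(node, {}).items():
            heapq.heappush(queue, (delay + hop, nbr, path + [nbr]))
    return []  # no route

links = {  # per-hop delays in milliseconds
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.0, "D": 5.0},
    "C": {"D": 1.0},
}
print(fastest_path(links, "A", "D"))  # ['A', 'B', 'C', 'D']
```

Rerunning this every few milliseconds with fresh delay measurements is what lets the mesh route around congestion or dead nodes.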

Unlike the sensor networks based on various flavors of IEEE 802.11 (Wi-Fi) and IEEE 802.15.4 (ZigBee) that are now becoming ubiquitous in developed countries, Aether Wire’s networks rely on low-frequency ultrawideband. While traditional narrowband radio communications transmit data by modulating sinusoidal waves and emitting a great deal of power in a narrow band of frequencies, ultrawideband transmitters blast low-power streams of pulses at a rate of 40 million to a billion times per second over a broad swath—at least 500 megahertz—of spectrum. Information is impressed onto the pulse train by varying the amplitude, spacing, polarity, or duration of the individual pulses in the train.
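Of the modulation options listed, polarity keying is the simplest to illustrate. The sketch below is generic, not Aether Wire’s actual scheme:

```python
# Simplest of the pulse-train modulations named above: impress bits on the
# train by flipping each pulse's polarity (1 -> +pulse, 0 -> -pulse).
# Generic illustration; real UWB systems combine several such schemes.

def encode_polarity(bits: list) -> list:
    """One pulse per bit: positive polarity for 1, negative for 0."""
    return [1 if b else -1 for b in bits]

def decode_polarity(pulses: list) -> list:
    """Recover bits from received pulse polarities."""
    return [1 if p > 0 else 0 for p in pulses]

msg = [1, 0, 1, 1, 0]
print(encode_polarity(msg))                       # [1, -1, 1, 1, -1]
print(decode_polarity(encode_polarity(msg)))      # round-trips to the message
```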

In Vinge’s world, localizers serve a variety of functions. They precisely steer lasers that send and receive data to and from people and devices at tremendously high rates over the Internet. Localizer nodes can also be integrated with just about any kind of sensor imaginable, including the hundreds of cameras embedded at Fairmont High School, which Mike and his friends tap into to peer over a classmate’s shoulder or to locate one another across campus.