1536-1268/02/$17.00 © 2002 IEEE PERVASIVE Computing 59

REACHING WEISER’S VISION

Connecting the Physical World with Pervasive Networks

Mark Weiser envisioned a world in which computing is so pervasive that everyday devices can sense their relationship to us and to each other. They could, thereby, respond so appropriately to our actions that the computing aspects would fade into the background. Underlying this vision is the assumption that sensing a broad set of physical phenomena, rather than just data input, will become a common aspect of small, embedded computers and that these devices will communicate with each other (as well as with some more powerful infrastructure) to organize and coordinate their actions.

Recall the story of Sal in Weiser’s article; Sal looked out her window and saw “tracks” as evidence of her neighbors’ morning strolls. What sort of system did this seemingly simple functionality imply? Certainly Weiser did not envision ubiquitous cameras placed throughout the neighborhood. Such a solution would be far too heavy for the application’s relatively casual nature, as well as quite invasive with respect to personal privacy. Instead, Weiser posited the existence of far less intrusive instrumentation in neighborhood spaces: perhaps smart paving stones that could detect local activity and indicate the walker’s direction based on exchanges between neighboring nodes. As we have marched technology forward, we are now in a position to translate this aspect of Weiser’s vision to reality and apply it to a wide range of important applications, both computing and social.

Other articles in this issue address the user interface-, application-, software-, and device-level design challenges associated with realizing Weiser’s vision. Here, we address the challenges and opportunities of instrumenting the physical world with pervasive networks of sensor-rich, embedded computation. Such systems fulfill two of Weiser’s key objectives: ubiquity, by injecting computation into the physical world with high spatial density, and invisibility, by having the nodes and collectives of nodes operate autonomously. Of particular importance to the technical community is making such pervasive computing itself pervasive. We need reusable building blocks that can help us move away from the specialized instrumentation of each particular environment and toward reusable techniques for sensing, computing, and manipulating the physical world.

REACHING FOR WEISER’S VISION

This article addresses the challenges and opportunities of instrumenting the physical world with pervasive networks of sensor-rich, embedded computation. The authors present a taxonomy of emerging systems and outline the enabling technological developments.

Deborah Estrin, University of California, Los Angeles
David Culler and Kris Pister, University of California, Berkeley
Gaurav Sukhatme, University of Southern California

The physical world presents an incredibly rich set of input modalities, including acoustics, image, motion, vibration, heat, light, moisture, pressure, ultrasound, radio, magnetic, and many more exotic modes. Traditionally, sensing and manipulating the physical world meant deploying and placing a highly engineered collection of instruments to obtain particular inputs and reporting the data over specialized wired control protocols to data acquisition computers. Ubiquitous computing testbeds have retained much of this engineered data acquisition style, although we use them to observe a variety of unstructured phenomena (such as human gestures and interaction). The opportunity ahead lies in the ability to easily deploy flexible sensing, computation, and actuation capabilities into our physical environments such that the devices themselves are general-purpose and can organize and adapt to support several application types.1

In this article, we describe the challenges on the road ahead, present a taxonomy of system types that we expect to emerge during this next decade of research and development, and summarize technological developments.

Challenges

The most serious impediments to pervasive computing’s advances are systems challenges. The immense number of distributed system elements, limited physical access to them, and this regime’s extreme environmental dynamics, when considered together, imply that we must fundamentally reexamine familiar layers of abstraction and the kinds of hardware acceleration employed, and even our algorithmic techniques.

Immense scale

A vast number of small devices will make up these systems. To achieve dense instrumentation of complex physical systems, these devices must scale down to extremely small volume, with applications formulated in terms of immense numbers of them. In five to 10 years, complete systems with computing, storage, communication, sensing, and energy storage could be as small as a cubic millimeter, but each aspect’s capacity will be limited. Fidelity and availability will come from the quantity of partially redundant measurements and their correlation, not the individual components’ quality and precision.

Limited access

Many devices will be embedded in the environment in places that are inaccessible or expensive to connect with wires, making the individual system elements largely untethered, unattended, and resource constrained. Much communication will be wireless, and nodes will have to rely on on-board and harvested energy (such as from batteries and solar cells). Inaccessibility, as well as sheer scale, implies that they must operate without human attendance; each piece is such a small part of the whole that nobody can reasonably lay hands on all of them. At sufficient levels of efficiency, energy harvested from the environment can potentially allow arbitrary lifetimes, but the available energy bounds the amount of activity permitted per unit time. Energy constraints also limit the application space considerably; if solar power is used, nodes must be outdoors, and if batteries cannot be recharged, they will seriously affect maintenance, pollution, and replacement costs.

Extreme dynamics

By virtue of nodes and the system as a whole being closely tied to the ever-changing physical world, these systems will experience extreme dynamics. By design, they can sense their environment to provide inputs to higher-level tasks, and environmental changes directly affect their performance. In particular, environmental factors dramatically influence the propagation characteristics of low-power radio frequency (RF) links and can effectively create mobility even in stationary configurations. These devices also experience extreme variation in demand: Most of the time, they observe that no relevant change has occurred and no relevant information needs to be communicated. Thus, they must maintain vigilance while consuming almost no power.

However, when important events do occur, a great deal happens at once. Flows of high- and low-level data among sensors and actuators must be efficiently interleaved while meeting real-time demands, and redundant flows must be managed effectively. Consequently, passive vigilance is punctuated by bursts of concurrency-intensive operation. To cope with resource limitations in the presence of such dynamics, these systems will be governed by internal control loops in which components continuously adapt their individual and joint behavior to resource and stimulus availability.

A taxonomy of systems

Meeting these challenges requires new frameworks for system design of the sort that only come from direct, hands-on experience with the emerging technological regime. We must apply many small, power-constrained, connected devices to real problems where programming, orchestration, and management of individual devices are impractical.

The applications of physically embedded networks are as varied as the physical environments in which we live and work. Yet, even with this heterogeneity, many opportunities and resources exist for exploiting commonality across them. Most important is the hope of achieving pervasiveness by enabling system reuse and evolution. Within any one environment, more than one type of system or application might be active, whether from the outset or as the system evolves. One step toward identifying common building blocks is to define a taxonomy of systems and applications so that we can identify and foster reusable and parameterizable features.

We divide the space of physically embedded systems along the critical dimensions of space and time. Within these dimensions, we are interested in scale, variability, and autonomy. We address these dimensions with respect to both the environmental stimuli being captured and the system elements.
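The taxonomy above can be encoded as a small data model, which makes the design space concrete. This is our illustrative sketch, not a structure from the article; all names and the two example systems are invented for illustration.

```python
# Hypothetical encoding of the article's taxonomy: scale (sampling, extent,
# density), variability, and autonomy, applied to a physically embedded system.
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class SystemProfile:
    """Characterizes one system along the taxonomy's dimensions."""
    spatial_extent_m: float   # overall coverage, in meters
    sampling_hz: float        # temporal sampling rate
    density: float            # sensor nodes per stimulus footprint
    variability: Level        # ad hoc (HIGH) vs. engineered (LOW) structure
    autonomy: Level           # raw data delivery (LOW) vs. in-network processing (HIGH)

# Two contrasting, made-up points in the design space:
habitat_monitor = SystemProfile(10_000, 0.01, 5.0, Level.HIGH, Level.HIGH)
bridge_monitor = SystemProfile(500, 100.0, 1.0, Level.LOW, Level.MEDIUM)
```

A remote habitat monitor sits at the ad hoc, autonomous extreme; an engineered structural monitor samples fast over a small, fixed extent.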

Scale

Spatial and temporal scale concerns the sampling interval, the extent of overall system coverage, and the relative number of sensor nodes to input stimuli. The scales of spatial and temporal sampling and extent are important determinants of system emphasis. The finer-grained the sampling, the more important are innovative collaborative signal processing techniques such as those described elsewhere.2–5 However, systems intended to operate over extended periods of time and regions of space must emphasize techniques for self-organization, because as the system extent grows, configuring and controlling the environment becomes infeasible. High density provides many opportunities and challenges for self-configuration that low-density systems don’t encounter.

60 PERVASIVE Computing http://computer.org/pervasive

Sampling. The physical phenomena measured ultimately dictate the spatial and temporal sampling scale. High-frequency waves require higher temporal and spatial sampling than phenomena such as temperature or light in a room, where spatial and temporal fluctuations are coarser-grained. The sampling scale is a function of both the phenomena and the application. If you need the system for event detection, the requirement could be more lax than if the goal is event or signal reconstruction. For example, if a structure is being monitored to detect structural faults, signatures of seismic response might be compared to “healthy” signatures at a relatively coarse grain. However, if data is being collected to generate profiles of structure response, fine-grain data is needed.
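The detection-versus-reconstruction trade-off can be sketched numerically. This is our back-of-envelope illustration, not from the article: reconstruction is bounded below by the Nyquist rate, while detection only needs a few samples per event; the margin and samples-per-event values are assumptions.

```python
# Illustrative sampling-rate budgets for a band-limited phenomenon.

def reconstruction_rate_hz(max_signal_hz, margin=2.5):
    """Faithful reconstruction requires sampling above twice the highest
    signal frequency (Nyquist); a practical margin > 2 is assumed here."""
    return margin * max_signal_hz

def detection_rate_hz(min_event_duration_s, samples_per_event=3):
    """Merely detecting an event only needs a few samples to land within
    the event's duration, so the rate can be far lower."""
    return samples_per_event / min_event_duration_s

# A hypothetical 50 Hz structural vibration mode:
print(reconstruction_rate_hz(50))   # 125.0 samples/s to profile the signal
print(detection_rate_hz(10.0))      # 0.3 samples/s to notice a 10 s event
```

The two orders of magnitude between the rates are why the same hardware can serve both tasks at very different energy costs.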

Extent. The spatial and temporal extent of systems also varies widely. At the high end of this continuum are environmental monitoring systems, which can span on the order of tens of thousands of meters. Most existing and planned pervasive computing systems are at least an order of magnitude smaller, such as the extent needed to cover a building or room. Elements of pervasive computing systems also extend to the smaller end of the continuum, such as reconfigurable fabric, which a user can wear or which we can deploy to monitor structure or machinery surfaces.

Density. System density is a measure of sensor nodes per footprint of input stimuli. Higher-density systems provide greater opportunities for exploiting redundancy to eliminate noise and extend system lifetime. The higher the density of nodes to stimuli, the greater the number of independent measurements possible, and thus the opportunities to combine measurements to eliminate channel or coupling noise. Similarly, where density is high enough to allow oversampling, nodes can go to sleep for long periods of time and thereby extend coverage over time.
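Both density benefits have simple first-order models, sketched below under our own assumptions (not the article's): averaging n independent, equal-variance measurements shrinks zero-mean noise by the square root of n, and k-fold oversampling lets nodes rotate duty so each is awake roughly 1/k of the time.

```python
# Back-of-envelope models for the two payoffs of high node density.
import math

def noise_reduction_factor(n_measurements):
    """Std. dev. of the mean of n independent, equal-variance measurements,
    relative to a single measurement (assumes uncorrelated zero-mean noise)."""
    return 1.0 / math.sqrt(n_measurements)

def lifetime_extension(k_fold_oversampling):
    """If k nodes redundantly cover one stimulus and rotate duty, each sleeps
    (k-1)/k of the time, so battery-limited coverage lasts roughly k times longer."""
    return k_fold_oversampling

print(noise_reduction_factor(16))   # 0.25: 16 nodes give 4x less noise
print(lifetime_extension(4))        # 4: 4-fold redundancy, ~4x the lifetime
```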

Variability

Variability is a second differentiating characteristic of many systems and associated designs. As with spatial and temporal scale, it takes on many forms and can apply to system elements or to the phenomena being sensed. Relatively static systems emphasize design-time optimization, whereas more variable systems must use runtime self-organization and might be fundamentally limited in the extent to which they are both variable and long-lived.

Structure. Ad hoc versus engineered system structure refers to the variability in system composition.6 At one extreme is a system that monitors a structure such as a building, bridge, or even an individual airplane. At the other end are sensor networks deployed in remote regions to study biocomplexity. More traditional pervasive computing systems embody aspects of both: elements of the system, such as instrumentation in an aware room or house, might be relatively static, whereas the larger system that includes humans and subsystems carried in and out of the space is clearly ad hoc.

Task. Variability in system task determines the extent to which we can optimize the system for a single mode of operation. Even a structurally static system might perform different tasks over time; for example, a structural system that periodically generates a structure profile could also act as an event monitoring system. Similarly, a system used regularly to measure an air conditioning system’s effectiveness might occasionally have to integrate inputs from new sensors to detect traces of newly characterized toxins.

Space. Variability in space, meaning mobility, applies to both system nodes and phenomena. In many systems of interest, most or all the nodes remain fixed in space once placed. However, many interesting, if longer-term, systems include elements that move themselves or that are tied to objects that move them (such as vehicles or people). Similarly, the phenomena these systems monitor differ in their extent of mobility. Many systems of interest are intended for phenomena that move quickly in time. A system designed to manage the physical environment for a stationary human user faces different challenges than one designed to track humans moving quickly through that same space.

Autonomy

The degree of autonomy has some of the most significant and varied long-term consequences for system design: the higher the overall system’s autonomy, the less the human involvement and the greater the need for extensive and sophisticated processing inside the system. Such autonomy increases the need for multiple sensory modalities, for translation between external requests and internal processing, and for complexity in the internal computational model. Detailed characterization systems are relatively low-autonomy systems, because their intent is simply to deliver sensory information to a human user or external program. Event detection requires far more autonomy, because the definition of interesting events must be programmed into the system, and the system must execute more complex queries and computations internally to process detailed measurements and identify events. Perhaps more than any other dimension, autonomy is most significant in moving us from embedding instruments to embedding computation in our physical world.
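The characterization-versus-detection contrast can be made concrete with a minimal sketch (ours, not the article's): a low-autonomy node ships every sample over the radio, while a higher-autonomy node evaluates a programmed event predicate locally and transmits only detections. The threshold and readings below are invented for illustration.

```python
# Two points on the autonomy spectrum, measured in radio messages sent.

def raw_delivery(samples):
    """Low autonomy: every measurement crosses the (energy-expensive) radio
    so that a human or central program does all the interpretation."""
    return list(samples)

def event_detection(samples, threshold):
    """Higher autonomy: only samples satisfying the programmed event
    predicate are reported, trading cheap local computation for radio traffic."""
    return [s for s in samples if s > threshold]

readings = [0.1, 0.2, 0.1, 3.9, 0.2, 4.2, 0.1]
print(len(raw_delivery(readings)))      # 7 messages sent
print(event_detection(readings, 1.0))   # [3.9, 4.2]: only 2 messages sent
```

Pushing the predicate into the network is exactly what raises the demands on the internal computational model.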

Modalities. Truly autonomous systems depend on multiple sensory modalities. Different modalities provide noise resilience to one another and can combine to eliminate noise and identify anomalous measurements.

Complexity. Greater system autonomy also entails greater complexity in the computational model. A system that delivers data for human consumption leaves most of the computation to the human consumer or to a centralized program that can operate based on global information. A system that executes contingent on system state and inputs over time must execute a general programming language that refers to spatially and temporally variable events.

JANUARY–MARCH 2002 PERVASIVE Computing 61

Where are we now?

Weiser often described the need for a range of different-sized devices (not just PCs and laptops, but devices the size of windows or scraps of paper). Surely he imagined them scaling down to the size of a pin or up to the size of an entire building. This section describes developments and trends that are key to realizing his vision of increasingly pervasive and invisible computing systems. In each of the following areas, we see a consistent trend from highly engineered deployments of modest scale using application-specific devices to ad hoc deployments of immense scale based on reusable components intended for system evolution. We will emphasize the role that these developments play in supporting system scale, variability, and autonomy.

Small packages in the physical world

A major theme in Weiser’s work was the creation of small devices used in a larger intelligent environment. The Xerox ParcTab provided limited display, user input, and infrared communication in a palm-sized unit. Small, attachable RF identification (RFID) tags helped determine the approximate position and identity of the tagged object or individual in a space equipped with specialized readers.7 Today, we see many consumer devices in the palm form factor possessing roughly the processing and storage capabilities of early-to-mid 1990s PCs. Personal digital assistants and pocket PCs have gained wireless connectivity either as variations of pager networks or 802.11 wireless LANs. However, both approaches have significant drawbacks: the former is expensive, has low bandwidth, and is void of proximity information, and the latter consumes too much energy and requires a large battery pack. There is now a strong push to incorporate Bluetooth short-range wireless networks into handheld devices, and numerous attractive radio technologies are on the horizon.

Recently, these devices have gained a rich set of input modes besides user buttons and knobs. Most have acoustic input and output, and some incorporate accelerometers to detect gestures or orientation, or accept video input. Simultaneously, many cell phones have gained Internet browsing capability, and PDAs have gained cell phone capabilities for data and voice access. Soon all cell phones will know where they are thanks to GPS, which will bring a degree of location awareness into various personal computing devices.

In most existing ubiquitous computing environments, the sensing capabilities of these devices are used to detect human actions, such as recognizing gestures or the relationship between objects.8,9 The network communicates the occurrence of these actions to the computers in the infrastructure that can process them. We can connect rich sensor arrays and sophisticated motor controllers to these devices as we would to a PC in a traditional process control environment (as on a manufacturing line) or to an embedded controller (as in an automobile). However, increased miniaturization raises the possibility of every tiny sensor and controller having its own processing and communication capabilities, such that the aggregate performs sophisticated functions. A sequence of University of California, Los Angeles wireless integrated networked sensor nodes demonstrates early examples of such wireless integrated sensors.10 As these become small and numerous, we can place them close to the physical phenomenon of interest to provide tremendous detail and accuracy. This ability provides richness to ubiquitous environments through new modalities (for example, detecting user frustration by change in body temperature), but also enables the instrumentation of physical sites that humans can’t access, such as the inside of structures, equipment, or aqueous solutions.

In addition to packing ever more computing and storage capacity onto a chip, decreasing lithographic feature size squeezes the same capacity into a progressively smaller area with an even greater reduction in power consumption. Moreover, the ability to implement the radio or optical transceivers in the same module as the processor has provided a qualitative advance in size and power. Sensors and actuators have undergone a revolution with the emergence of microelectromechanical systems (MEMS) technology, in which mechanical devices such as accelerometers, barometers, and movable mirrors are constructed at minute size on a silicon chip using lithographic processes similar to those for integrated circuits. Improved equipment and technology result in smaller MEMS devices, lower power consumption, and improved device performance. Although these improvements are not as predictable or as clearly tied to feature size as processing and storage are, a strong correlation exists. The node subsystems’ size and performance improvements reduce power consumption and allow a corresponding decrease in the size and cost of the power supply as well. There have also been substantial improvements in battery technology, with improved storage density, form factor, and recharging, as well as the emergence of alternative storage devices, such as fuel cells and energy-harvesting mechanisms (see the “Energy” sidebar).

Several research groups are exploring how to build intelligent, information-rich physical environments by deploying many small, untethered, deeply integrated nodes. Core challenges involve designing systems to operate at low power consumption, storing and obtaining that power, orchestrating nodes to form larger networks, and deploying applications over fine-grain networks. At the far extreme, researchers have conducted design studies on the feasibility of building an entire system, including power storage, processing, sensing, and communication, in a cubic millimeter.11–13 This scale should be achievable in practice five to 10 years out. Researchers have also made substantial progress in low-power CMOS radios at several performance points. The research community now widely uses a one-inch scale platform (see the “University Research” sidebar), which provides an opportunity to explore the node system architecture. It must be extremely energy efficient, especially in the low duty-cycle vigilance mode, and it must be extremely facile with event bursts. It must meet hard real-time constraints, such as sampling the radio signal within bit windows, while also handling asynchronous sensor events and supporting localized data processing algorithms. It also must be robust and reprogrammable in the field.

During the growth in capability and complexity of these devices, several distinct operating systems approaches have emerged to make application design more manageable. Real-time operating systems such as VxWorks (www.windriver.com), GeoWorks (www.geoworks.com), and Chorus (www.sun.com/chorusos) have scaled down their footprints and added TCP/IP capabilities, whereas Windows CE has sought to provide a subset of the familiar PC environment. PalmOS successfully focused on data element exchange with infrastructure machines but provided little support for the concurrency associated with interactive communication. As the devices reach a processing and storage capability beyond the early workstations, compact Unix variants, especially Linux, have gained substantial popularity while providing real-time support in a multitasking environment with well-developed networking.

To make the networked embedded node an effective vehicle for developing algorithms and applications, a modular, structured runtime environment should provide the scheduling, device interface, networking, and resource management primitives on which the programming environments rest. It must support several concurrent flows of data from sensors to the network to controllers. Moreover, microsensor devices and low-power networks operate bit by bit (or, in a few cases, byte by byte), so software must do much of the low-level processing of these flows and events. Often, operations must be performed within narrow jitter windows, such as when sampling the RF signal.

The traditional approach to controller design has been to hand-code scheduling loops to service the collection of concurrent flow events, but this yields brittle, single-use firmware that has poor adaptability. A more general-purpose solution is to provide fine-grain multithreading. Although


Energy

Energy constraints dominate algorithm and system design trade-offs for small devices. Energy storage has advanced substantially, but not at the pace we associate with silicon-based processing, storage, and sensing. Batteries remain the primary energy storage devices, although fuel-based alternatives with high energy density are being actively developed. As a general rule of thumb, batteries store approximately a joule per mm³, but many factors influence the choice of technology for a particular application. Over the past 20 years, an AA nickel-based (NiCd and NiMH) battery’s capacity has risen from 0.4 to 1.2 amp-hours with fast recharging (see http://books.nap.edu/books/0309059348/html/index.html). Lithium batteries offer higher energy density with fewer memory effects but longer recharge times. The zinc-based batteries used in hearing aids have high energy density but high leakage, so they are best for high usage over short durations. Recent polymer-based batteries have excellent energy density, can be manufactured in a range of form factors, and are flexible, but they are also expensive. Numerous investigations have focused on thin- and thick-film batteries, and researchers have fabricated tiny, 1-mm³ lead-acid batteries, so we can expect to package energy storage directly with logic.

Fuel cells potentially have 10 times the energy density of batteries, considering just the fuel, but the additional volume of the membrane, storage, and housing lowers this by a factor of two to five. MEMS approaches are exploring micro heat engines and storing energy in rotating micromachinery. Solar panels remain the most common form of energy harvesting, but numerous investigations are exploring avenues for harvesting the mechanical energy associated with specific applications, such as flexing shoes, pushing buttons, window vibration, or airflow in ducts (see www.media.mit.edu/context, www.media.mit.edu/physics, or www.media.mit.edu/resenv). With existing technology, a cubic millimeter of battery space has enough energy to perform roughly 1 billion 32-bit computations, take 100 million sensor samples, or send and receive 10 million bits of data. As all the system’s layers become optimized for energy consumed per operation, these numbers will increase by at least an order of magnitude, and some by several orders.1

Sample battery energy ratings include

• Nonrechargeable lithium: 2,880 J/cm³
• Zinc-air: 3,780 J/cm³ (has very high leakage)
• Alkaline: 1,190 J/cm³
• Rechargeable lithium: 1,080 J/cm³
• Nickel metal hydride (NiMH): 864 J/cm³
• Fuel cells (based on methanol): 8,900 J/cm³
• Hydrocarbon fuels (for use in micro heat engines): 10,500 J/cm³

Sample scavenging energy ratings include

• Solar (outdoors, midday): 15 mW/cm²
• Solar (indoor office lighting): 10 µW/cm²
• Vibrations (from microwave oven casing): 200 µW/cm³
• Temperature gradient: 15 µW/cm³ (from a 10°C temperature gradient)

REFERENCE

1. L. Doherty et al., “Energy and Performance Considerations for Smart Dust,” Int’l J. Parallel Distributed Systems and Networks, vol. 4, no. 3, 2001, pp. 121–133.
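The sidebar's operation counts follow directly from its rule of thumb of about one joule per cubic millimeter of battery. The per-operation energy costs below are inferred from the quoted totals (roughly 1 nJ per computation, 10 nJ per sample, 100 nJ per radio bit), not independently measured values.

```python
# Back-of-envelope check of the sidebar's energy-budget figures.
BATTERY_J_PER_MM3 = 1.0   # rule of thumb from the sidebar

costs_nj = {  # implied energy cost per operation, in nanojoules (inferred)
    "32-bit computation": 1.0,          # 1 J / 1e9 operations
    "sensor sample": 10.0,              # 1 J / 1e8 samples
    "radio bit (send + receive)": 100.0,  # 1 J / 1e7 bits
}

for op, nj in costs_nj.items():
    count = BATTERY_J_PER_MM3 / (nj * 1e-9)   # operations per mm^3 of battery
    print(f"{op}: {count:.0e} per mm^3")
```

The hundredfold gap between a local computation and a transmitted bit is the arithmetic behind pushing processing into the network rather than shipping raw data out.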

researchers have studied this approach extensively for general-purpose computation, we can attack it even more effectively in the tiny, networked sensor regime, because the execution threads that must be interleaved are simple. These requirements have led to a component-based tiny operating system environment,14 which provides a framework for dealing with extensive concurrency and fine-grain power management while providing substantial modularity for robustness and application-specific optimization. The TinyOS framework establishes the rules for constructing reusable components that can support extensive concurrency on limited processing resources.
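The execution style this implies can be sketched in a few lines. The following is our drastically simplified model of a TinyOS-style scheduler, not the actual TinyOS API (which is written in nesC): components post short run-to-completion tasks to a FIFO queue, and when the queue drains, a real node would drop into its low-power vigilance state. All names here are invented.

```python
# Minimal model of run-to-completion task scheduling on a sensor node.
from collections import deque

class TinyScheduler:
    def __init__(self):
        self.tasks = deque()   # FIFO of pending tasks
        self.log = []          # trace of work performed, for illustration

    def post(self, task):
        """Posting is cheap and can be done from an event handler."""
        self.tasks.append(task)

    def run_until_idle(self):
        """Each task runs to completion in turn: no preemption, no thread
        stacks, which is what keeps concurrency affordable on tiny nodes."""
        while self.tasks:
            self.tasks.popleft()(self)
        # an actual node would now sleep until the next hardware event

def sample_sensor(sched):
    sched.log.append("sample")
    sched.post(filter_reading)   # hand the data flow to the next stage

def filter_reading(sched):
    sched.log.append("filter")

sched = TinyScheduler()
sched.post(sample_sensor)
sched.run_until_idle()
print(sched.log)   # ['sample', 'filter']
```

Because tasks never block, a single stack suffices, and the scheduler itself is small enough to fit the memory budget of a one-inch node.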

Sensing and actuation

Interfacing to the physical world involves exchanging energy between embedded nodes and their environments. This takes two forms: sensing and actuation. Whatever the sensed quantity (temperature, light intensity), the sensor transduces a particular form of energy (heat, light) into information. Actuation lets a node convert information into action, but its main role is to enable better sensing. An actuator moves part of itself, relocates spatially, or moves other items in the environment. Sensing and actuation together are the means of physical interaction between the nodes and the world around them.

The 1990s saw MEMS technology transformed from a laboratory curiosity into a source of widespread commercial products. Millimeter-scale silicon accelerometers joined their cousins, the silicon pressure sensors, under the hoods of most automobiles,


Research conducted through the University of California, Berkeley's DARPA project on SenseIT and ubiquitous computing produced a widely used microsensor node, which Crossbow (www.xbow.com) currently produces in volume. The core building block is a 1-inch × 1.5-inch motherboard comprising a low-power microcontroller, low-power 900-MHz radio, nonvolatile memory, LEDs, network programming support, and a vertical expansion bus connector. The microcontroller contains Flash program and SRAM data storage, an analog-to-digital converter, and external I/O (standard and direct ports). A second small microcontroller lets the node reprogram itself from network data. The sensors and actuators on the motherboard support its own operation: battery voltage sensing, radio signal strength sensing and control, and LED display. The microcontroller's external interface is exposed in a standardized form on the expansion connector, providing analog, digital, direct I/O, and serial bus interconnections. Sensor packs for specific applications, including thermistors, photodetectors, accelerometers, magnetometers, humidity and pressure sensors, or actuator connections, are stacked like tiny PC104 boards (see Figure A). The processor dissipates several nanojoules per 8-bit instruction.

The sensor board for the Berkeley platform consists of five different microsensor modules to support several potential applications. The types of sensors it supports include light, temperature, acceleration, magnetic field, and acoustic, each of which is available off the shelf. All modules on the sensor board power cycle independently and are power isolated from the MICA's processor through an analog switch. Finally, the gain of the magnetometer and the microphone amplification is adjustable by tuning two digital potentiometers over the I2C bus.

We recently stacked a motor control board on a microsensor node and mounted the resulting assembly on a motorized chassis with two wheels.1 The motor control board regulates wheel speeds and provides range information from two forward- and one rear-looking infrared emitters. The resulting node (see Figure B) is essentially a small mobile robot platform with the same network interface as the microsensor nodes.

REFERENCE

1. G.T. Sibley et al., "Robomote: A Tiny Mobile Robot Platform for Large-Scale Sensor Networks," to be published in Proc. IEEE Int'l Conf. Robotics and Automation, 2002; http://robotics.usc.edu/projects/robomote.

University Research

Figure A. The University of California, Berkeley, mote.

Figure B. The Robomote: A mobile sensor node.

and gyros and flow sensors are now also becoming common. Projection display systems with a million moving parts on a single chip are commonplace. Internet packets bounce off submillimeter mirrors that switch photons between different optical fibers. MEMS technologies are successful in the optical applications space because they provide far superior performance or the same performance at lower prices; they are successful in Detroit because they are reliable and dirt cheap.

A common problem in both sensing and actuation is uncertainty. The physical world is a partially observable, dynamic system, and the sensors and actuators are physical devices with inherent accuracy and precision limitations. Thus, sensor-measured data are necessarily approximations to actual values. In a large system of distributed nodes, this implies that we need some form of filtering at each node before we can meaningfully use the data. We can also achieve increased accuracy and fault tolerance through redundancy, using sensors with overlapping fields of view. This raises interesting challenges of sensor placement and fusion, especially in the context of very large networks. In addition to uncertainty, there is the further problem of latency in actuation. For closed-loop control, stochastic latency can cause instability and unreliable behavior.
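The combination of per-node filtering and cross-node redundancy described above can be sketched simply. The smoothing constant, fault value, and fusion rule here are illustrative assumptions, not an algorithm from the article:

```python
# Per-node filtering plus redundancy across sensors with overlapping
# fields of view (illustrative sketch).
from statistics import median

class SmoothedSensor:
    """Exponential smoothing filters one noisy transducer at its node."""
    def __init__(self, alpha=0.3):
        self.alpha, self.estimate = alpha, None

    def update(self, raw):
        if self.estimate is None:
            self.estimate = raw
        else:
            self.estimate = self.alpha * raw + (1 - self.alpha) * self.estimate
        return self.estimate

def fuse(estimates):
    """Median across overlapping sensors tolerates a minority of
    faulty or miscalibrated nodes."""
    return median(estimates)

sensors = [SmoothedSensor() for _ in range(3)]
readings = [21.0, 20.8, 35.0]          # third sensor is faulty
print(fuse(s.update(r) for s, r in zip(sensors, readings)))  # -> 21.0
```

The median is one of the simplest robust fusion rules; more sophisticated systems weight sensors by their estimated variance instead.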

Although not traditionally associated with ubiquitous computing, robotics and MEMS developments play a critical role when it comes to sensing, actuation, and control. A particularly significant development in robotics within the past decade has been the move away from disembodied, traditional AI to real-time embedded decision making in physical environments. For nearly two decades, the dominant paradigm in mobile robotics research involved the offline design of control algorithms based on deliberation. These planner-based algorithms relied on logic and models of the robots and their environments, but the systems were unresponsive and slow to adapt to dynamic environments. Their dysfunctionality in physical domains spurred the development of control techniques that closely coupled perception and action with remarkable success. The earliest examples of these stateless, reactive systems were responsive but lacked generality and the ability to store representation. Modern behavior-based control15 generalizes reactive control by introducing the notion of a behavior as an encapsulated, time-extended sequence of actions. Perception and action are still coupled tightly as in reactive systems, with the added benefits of representation and adaptation without any centralized control. An alternative modern approach is to hybridize control,16 where a planner and a reactive system communicate through a third software layer designed explicitly for that purpose.
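A priority-ordered arbitration loop captures the flavor of behavior-based control. The behaviors, thresholds, and actions below are invented for illustration, in the spirit of reference 15 rather than any specific robot's controller:

```python
# Sketch of behavior-based arbitration: the highest-priority applicable
# behavior wins, keeping perception tightly coupled to action with no
# central planner.
def avoid_obstacle(state):
    if state["range_cm"] < 20:
        return ("turn", 90)          # time-extended evasive action
    return None                      # behavior not applicable

def seek_light(state):
    if state["light_left"] > state["light_right"]:
        return ("turn", -10)
    return ("forward", 5)

BEHAVIORS = [avoid_obstacle, seek_light]   # ordered by priority

def arbitrate(state):
    for behavior in BEHAVIORS:
        action = behavior(state)
        if action is not None:
            return action

print(arbitrate({"range_cm": 12, "light_left": 1, "light_right": 2}))
# -> ('turn', 90)
print(arbitrate({"range_cm": 80, "light_left": 5, "light_right": 1}))
# -> ('turn', -10)
```

A hybrid controller in the sense of reference 16 would add a deliberative planner above this loop, mediated by a third layer.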

Localization

For a system to operate on input from the physical world, nodes must know their location in space to provide context-aware services to other system elements. A static or mobile node could answer the question "Where am I?" in several ways: the answer might be relative to a map, relative to other nodes, or in a global coordinate system. For a sensor network, this is particularly relevant, because we must often tag with location information the queries to which it will provide answers based on sensor measurements.

Localization is a main part of registration between the virtual and physical worlds. In a sensor network, this can take a variety of forms. Nodes in a network might report data tagged with relative location information, but from time to time, some nodes in the network might have to reference their data to an external anchor frame. Another example involves aggregation. Imagine two cameras with overlapping fields of view. A node aggregating data from these two cameras might perform the equivalent of stereo processing, but to do so, it must know the baseline between the two nodes: the location of one relative to the other.

Scale and autonomy play an important role in location computation. Several early large-scale systems relied on careful offline calibration and surveying before node deployment to ensure reliable localization. Examples include the Active Badge and Active Bat systems (see the "Sensing Location and Movement of People and Devices" sidebar). In robotics, however, the focus was on techniques for localizing small numbers of robots autonomously, without relying on presurveyed maps, beacons, or receivers.17 A recent trend has been to investigate algorithms that successfully localize large-scale networks of nodes autonomously.

We can taxonomize localization techniques for embedded devices in several ways: by the sensors used, whether the localization is with reference to a map, and whether the localization is done using external beacons and the like.18 Broadly speaking, we can view localization as a sensor-fusion problem. Given disparate sources of information about different aspects of node location and position, the problem is to design algorithms that can rationally combine the data from these sources to maintain an estimate of node location. In many interesting applications (with large numbers of small nodes), nodes cannot be placed with great care. In some systems, nodes might be mobile because they are robotic or because other entities in the environment can move them. Thus, in-network, autonomous localization is a must for many real-world applications. This is a departure from most existing and planned ubiquitous computing systems for traditional office environments, in which significant effort was expended upfront to instrument the environment for localization.



Several algorithms exist for localizing both fixed and mobile network nodes. One example recently demonstrated coarse localization of networks of mobile nodes by letting them build a map of their environment.19 Significantly, the system localizes the nodes adaptively by updating an internal sparse representation of a changing environment. The algorithm uses the nodes themselves as landmarks and could thereby allow the robot to determine each node's location without reference to environmental features or to a map.20 The algorithm constructs a mesh in which point masses represent nodes and springs represent observed relationships between nodes.20 A relaxation algorithm determines the lowest-energy state of the mesh and thereby the most probable location of the nodes.
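The spring-mesh idea can be sketched concretely. This is an illustrative gradient-descent relaxation in the spirit of reference 20, not that paper's implementation; the step size, iteration count, and node layout are assumptions:

```python
# Spring-mesh relaxation: nodes are point masses, observed inter-node
# ranges are spring rest lengths, and iterative relaxation finds a
# low-energy (most probable) layout.
import math

def relax(positions, ranges, anchors, step=0.1, iters=2000):
    """positions: {node: [x, y]}; ranges: {(a, b): measured distance};
    anchors: nodes whose positions are known and held fixed."""
    for _ in range(iters):
        for (a, b), rest in ranges.items():
            ax, ay = positions[a]; bx, by = positions[b]
            dx, dy = bx - ax, by - ay
            dist = math.hypot(dx, dy) or 1e-9
            f = step * (dist - rest) / dist   # Hooke-like restoring force
            if a not in anchors:
                positions[a][0] += f * dx; positions[a][1] += f * dy
            if b not in anchors:
                positions[b][0] -= f * dx; positions[b][1] -= f * dy
    return positions

pos = {"A": [0.0, 0.0], "B": [4.0, 0.0], "C": [1.0, 1.0]}
measured = {("A", "C"): 5.0, ("B", "C"): 3.0}   # a 3-4-5 triangle
relax(pos, measured, anchors={"A", "B"})
print([round(v, 1) for v in pos["C"]])  # -> [4.0, 3.0]
```

With noisy, redundant range measurements, the same relaxation settles on a least-squares compromise rather than an exact intersection, which is precisely the "most probable location" interpretation.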

A distributed system architecture

Looking forward, the distributed system architecture will make or break our ability to effectively instrument the physical world. Moreover, the same characteristics that enable these systems (miniaturization and wireless communication) impose constraints that require significant changes in the overall architecture to achieve the desired functionalities and properties.

First and foremost are the constraints imposed by having to operate within the limits of finite or slowly recharging batteries. Remember that this energy's primary consumer, in the context of low-power communication in physically complex settings, is wireless communication.10 Con-



Many of the earliest ubiquitous computing systems focused on providing localization services using physically embedded, networked systems. These systems exhibited scale (several nodes) but otherwise differed significantly from current trends in that they had a relatively fixed, not ad hoc or variable, system structure. The application emphasis was on providing context and location to human applications, with simple forms of autonomy relative to those anticipated in future systems.

Researchers took the first steps toward interacting with the physical world as a general computing concept at Xerox PARC and the Olivetti (now AT&T) Research Lab in Cambridge. That work's emphasis was on building a range of different-sized devices that could occupy distinct niches in the ecosystem of interactions with information. Much of the work focused on understanding how people might interact with such devices and how user interfaces could use contextual information about the collection of devices. Thus, the primary connection with the physical world involved determining each device's location and thereby its geometric relationship to others.

The Active Badge system, developed between 1989 and 1992, used an IR beacon that carried identity information from small mobile devices to IR sensors in the environment. Given a map of the physical space with sensors as landmarks, IR's short range and inability to pass through solid objects provided a convenient means of determining approximate location by proximity. A low-bandwidth wired network connected the sensors to a centralized processing capability, which could maintain a representation of all the tagged objects and their interrelationships to direct actions of various sorts. Typical examples were telephone connections and computing environments tracking the movement of individuals through a space. Passive RFID tags provide a similar capability in small, low-cost packages.1

More recently, the Active Bat system used trilateration of ultrasonic beacons to provide accurate position determination in a physical space.2 Receivers are placed in a regular grid in the ceiling tiles. An RF beacon informs a particular mobile device to emit an ultrasonic beacon and start a time-of-flight measurement at the receivers. Arrival times of the pulse's edge are conveyed over a wired network from the receivers to a central PC, which can calculate the specific device's position. The largest system deployed uses 720 receivers to cover an area of 1,000 m² on three floors and can determine the positions of up to 75 objects each second to within a few centimeters. Thus, we see in the localization technology's advancement the use of a richer set of sensor modes and the integration of communication with the sensing and control process.
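The time-of-flight computation the Active Bat performs can be sketched as follows. The receiver layout and timings are invented, and the linearized least-squares method is a standard textbook choice, not necessarily what the deployed system uses:

```python
# Sketch of 2D time-of-flight trilateration: linearize the circle
# equations ||p - r_i||^2 = d_i^2 against receiver 0 and solve the
# resulting 2x2 linear system for the mobile device's position.
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def trilaterate(receivers, arrival_times_s):
    d = [t * SPEED_OF_SOUND for t in arrival_times_s]
    (x0, y0), d0 = receivers[0], d[0]
    # Subtract receiver-0's equation from receivers 1 and 2.
    a = [[2 * (x - x0), 2 * (y - y0)] for x, y in receivers[1:3]]
    b = [d0**2 - d[i]**2 - x0**2 + x**2 - y0**2 + y**2
         for i, (x, y) in enumerate(receivers[1:3], start=1)]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    y = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return x, y

# Three ceiling receivers; the device is actually at (1, 2).
rx = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
times = [math.hypot(1 - x, 2 - y) / SPEED_OF_SOUND for x, y in rx]
x, y = trilaterate(rx, times)
print(round(x, 2), round(y, 2))  # -> 1.0 2.0
```

A real deployment has many more receivers and noisy edge-of-pulse timings, so it would solve the overdetermined system by least squares and reject outlier arrivals.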

The Massachusetts Institute of Technology Media Lab and the Georgia Institute of Technology Aware Home projects have developed more direct means of sensing the interaction of people with their environment. These efforts include floor sensors to determine individuals' positions and movement (with the hope of determining identity from step patterns), weight and acoustic sensors in doorways, tag readers embedded in tables to determine the pattern of tagged objects, and numerous physical objects with various sensors, display capabilities, and tags. A representation of all this information is projected into a higher tier, which deals with proxies of the physical objects as widgets. Hewlett-Packard's Cooltown seeks to provide an infrastructure for building applications using many such instrumented devices interfacing to various PDAs and mobile computers.

REFERENCES

1. R. Want et al., "Bridging Physical and Virtual Worlds with Electronic Tags," Proc. Conf. Computer-Human Interaction, ACM Press, New York, 1999.

2. A. Harter et al., "The Anatomy of a Context-Aware Application," Proc. 5th Ann. ACM/IEEE Int'l Conf. Mobile Computing and Networking (MobiCom), ACM Press, New York, 1999, pp. 59–68.

Sensing Location and Movement of People and Devices

sequently, we cannot realize long-lived autonomous systems by simply streaming all the sensory data out of the nodes for processing by traditional computing elements. Rather, computation must reside alongside the sensors, so that we can process time-series data locally. Instead of building a network of sensors that all output high-bandwidth bit streams, we must construct distributed systems whose outputs are at a higher semantic level: compact detection, identification, tracking, pattern matching, and so forth.

To support such an architecture, important systems-oriented trends are emerging. Two important examples are self-configuring networks and data-centric systems. One objective of self-configuration is to let systems exploit node redundancy (density) to achieve longer unattended lifetimes. Techniques for coordinating and adapting node sleep schedules to support a range of trade-offs between fidelity, latency, and efficiency are also emerging.21–24 Ad hoc routing techniques represent an even earlier example of self-configuration in the presence of wireless links and node mobility,25 but they still support the traditional IP model of shipping data from one edge of the network to another. Small-form-factor wireless sensor nodes cannot afford to ship all data to the edges for outside processing and, fortunately, do not need to operate according to the same layering restrictions as IP networks when it comes to application-layer data processing at intermediate hops. Directed diffusion promotes in-network processing by building on a data-centric instead of address-centric architecture for the distributed system or network. Using data naming as the lowest level of system organization supports flexible and efficient in-network processing (see the "Directed Diffusion" sidebar).26
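Density-adaptive sleep scheduling can be sketched with a simple local rule, in the spirit of references 21–24 but with the rule and threshold invented for illustration:

```python
# Sketch of density-adaptive sleep scheduling: a node sleeps for an
# epoch when enough of its neighbors have already committed to staying
# awake, trading redundancy (density) for longer unattended lifetime.
import random

def schedule(nodes, neighbors, k=2, seed=1):
    """nodes: iterable of ids; neighbors: {id: set of neighbor ids}.
    Returns the set of nodes that stay awake this epoch."""
    random.seed(seed)
    awake = set()
    # Random priority order stands in for randomized local timers.
    for n in sorted(nodes, key=lambda _: random.random()):
        if sum(1 for m in neighbors[n] if m in awake) < k:
            awake.add(n)
    return awake

# Five mutually neighboring (fully redundant) nodes: only k stay awake.
full = {i: {j for j in range(5) if j != i} for i in range(5)}
print(len(schedule(range(5), full)))  # -> 2
```

A distributed implementation would replace the central loop with randomized back-off timers and local beacon counting, but the energy argument is the same: in dense regions, most nodes can sleep.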

A second trend is the increased reliance on tiered architectures, in which fewer higher-end elements complement the more limited capabilities of widely and densely dispersed nodes. Very small devices will inevitably possess limited storage and computing resources as well as limited bandwidth with which to interact with the outside world. Introducing tiered architectures, where some system elements have greater capacity, is desirable. In a tiered architecture, the smallest system elements help achieve spatial diversity and short-range sensing, whereas the computationally powerful elements implement more sophisticated and performance-intensive processing functions, such as digital signal processing, localization, and long-term storage. These higher-tier resources could include robotic elements that traverse the sensor field, delivering energy to depleted batteries, or that compute localization coordinates for ad hoc collections of smaller nodes.

Where are we headed?

As we embark on Weiser's vision, we are discovering that the ability to form networks of devices interacting with the physical world opens broad avenues for information technology beyond the highly connected, responsive home or workspace. We foresee thousands of devices embedded in the civil infrastructure (buildings, bridges, waterways, highways, and protected regions) to monitor structural health and detect crucial events. Eventually, such devices might be tiny enough to pass through bodily systems or be usable in large enough numbers to instrument major air or water flows. In the nearer term, embedded sensor networks can fundamentally change the practice of numerous scientific endeavors, such as studies of complex ecosystems, by providing in situ monitoring and measurement at unprecedented levels of temporal and spatial density without disturbing the complex systems under study.27

The tremendous progress toward miniaturization means that we can put instruments into the experiment, rather than conducting the experiment within an instrument. For the laboratory, this means wireless sensors for measurement and log-


Directed Diffusion was designed to promote adaptive, in-network processing in wireless sensor networks. In Directed Diffusion, sensors publish and clients subscribe to data. Both identify data by attributes such as "the southwest" or "acoustic sensors." Diffusion divorces data identity from node identity. To do this, it uses a simple typing mechanism and an encoding of attribute-based naming with simple matching rules. Names are sets of attributes, each a tuple including keys, values, and operations.

As an example, the query "sensor EQ seismic, latitude GT 100, latitude LT 101" would trigger a sensor with data tagged "sensor IS seismic, latitude IS 100.5." Matching is not meant to be a general-purpose language. User-provided code or filters can be distributed to the sensor network to perform application-specific, in-network processing tasks such as data aggregation, caching, and collaborative signal processing.1

Researchers at the University of Southern California's Information Sciences Institute designed and implemented Directed Diffusion, and multiple research projects now use it to support collaborative signal processing applications as part of the DARPA SenseIT program. Directed Diffusion is supported under Linux and runs on off-the-shelf embedded PC devices. A constrained subset of Diffusion runs under TinyOS on the University of California, Berkeley's motes described earlier in this article. These implementations support a common API, which Cornell, BAE Systems, Xerox PARC, and Pennsylvania State University have used for collaborative processing applications run in experimental field trials. Diffusion is also supported in the widely used ns-2 network simulator, with APIs identical to the Linux implementation's.

REFERENCE

1. J. Heidemann et al., "Building Efficient Wireless Sensor Networks with Low-Level Naming," Proc. ACM Symp. Operating Systems Principles, ACM Press, New York, 2001.

Directed Diffusion

ging every test tube and beaker. For the ubiquitous computing environment, it means sprinkling sensing capabilities through a space appropriate to the activities of interest, rather than conducting studies in a specially designed testbed environment. Realizing this dimension of Weiser's long-term vision will require more than dramatic developments in hardware miniaturization. It will require system-level advances, including the programming model, closed-loop control, predictability, and environmental compatibility.

We first need a system-wide architecture that supports interrogating, programming, and manipulating the physical world. Some of these systems will require extensive in-network compression and collaborative processing to exploit the spatial and temporal density of emerging systems. One emerging characteristic of such an architecture is the shift to naming data in terms of the relevant properties of the instrumented physical system: naming data instead of nodes.26,28 Many systems will be organized around spatial and temporal coordinates. Given a tuple space for the instrumented environment, we'll need a programming model for the computations distributed in time and space. Various models are under consideration; alternatives view the system as a distributed database or as a loosely coupled parallel computing structure.29

Increasingly, physically embedded systems will need to self-organize. Self-organization, particularly spatial reconfiguration, is needed to address variability at multiple scales. An interesting (and unique) aspect of the interaction with the physical world is the ability to manipulate it. Although today's systems are built with self-configuration in mind, in the longer term, these systems must integrate manipulation and reconfigure themselves and the world in which they are embedded. Closed-loop control will be contained in the distributed system and is a formidable challenge due to the inherent stochastic communication delays in such systems.

As the systems become increasingly autonomous and grow to include actuation, the need for predictability and diagnosability will be a critical stumbling block. Weiser's vision requires these systems ultimately to disappear into the environment, but they must do so while operating correctly. Users will not rely on them if they do not degrade with predictable behaviors or if they do not lend themselves to intuitive diagnosis, which comes with the ability to develop mental models.6

There is also the potential for damage to, or intrusion on, the spaces instrumented. Two examples whose risks are evident today are the violation of personal privacy through lack of anonymity in instrumented spaces and the generation of "space garbage" by leaving depleted nodes behind in the environment. Techniques for anonymity-preserving systems and for harvesting energy from the environment are enabling technologies, but such issues ultimately require social and legal policy.

Moving into a ubiquitous computing world calls for the computing community to embrace interdisciplinary approaches and to transform the educational process to enable deep interdisciplinary advances. Interfacing to the physical world is arguably the single most important challenge in computer science today. The frontier of almost any traditional CS subdiscipline is in this area. Examples of the disciplines involved include networking, operating systems, databases, artificial intelligence, virtual reality, and signal processing.3 An implication of this area's interdisciplinary nature is the need to upgrade the training of our students at both the undergraduate and graduate levels. Students need skills that extend beyond constructing complex programs to manipulate virtual objects. They need the ability to understand and model the physical world processes that they will have to measure and with which they will have to cope.6

ACKNOWLEDGMENTS

David Culler's work is supported in part by the Defense Advanced Research Projects Agency (NEST F33615-01C-1895), National Science Foundation (RI EIA-9802069), California MICRO, Compaq, and Intel. Deborah Estrin's research is supported in part by grants from the Defense Advanced Research Projects Agency (GALORE F33615-01-C-1906 and SenseIT DABT63-99-1-0011), NSF (SCOWR EIA-0082498), and Intel. Kris Pister's research is supported by grants from DARPA MTO and ITO. Gaurav Sukhatme's research is supported in part by DARPA (MARS DABT63-99-1-0015), the National Science Foundation (SCOWR ANI-0082498 and ITR EIA-0121141), NASA (1231521), the Department of Energy (RIM DE-FG03-01ER45905), and Intel.

REFERENCES

1. D. Estrin et al., "Next Century Challenges: Scalable Coordination in Sensor Networks," Proc. ACM/IEEE Int'l Conf. Mobile Computing and Networking (MobiCom), ACM Press, New York, 1999.

2. M. Chu, H. Haussecker, and F. Zhao, "Scalable Information-Driven Sensor Querying and Routing for Ad Hoc Heterogeneous Sensor Networks," to be published in Int'l J. High Performance Computing Applications, 2002.

3. F. Zhao, J. Shin, and J. Reich, "Information-Driven Dynamic Sensor Collaboration for Target Tracking," to be published in IEEE Signal Processing, 2002; www.parc.xerox.com/spl/projects/cosense/pub/ieee_spm.pdf.

4. S. Pradhan, J. Kusuma, and K. Ramchandran, "Distributed Compression in a Dense Sensor Network," to be published in IEEE Signal Processing, 2002.

5. D. Li et al., "Detection, Classification and Tracking of Targets in Distributed Sensor Networks," to be published in IEEE Signal Processing, 2002.

6. Computer Science and Telecommunications




Board, Embedded Everywhere: A Research Agenda for Networked Systems of Embedded Computers, National Research Council, Washington, D.C., 2001.

7. R. Want et al., "The Active Badge Location System," ACM Trans. Information Systems, vol. 10, no. 1, Jan. 1992, pp. 91–102.

8. C.D. Kidd et al., "The Aware Home: A Living Laboratory for Ubiquitous Computing Research," Proc. 2nd Int'l Workshop Cooperative Buildings, Lecture Notes in Computer Science, vol. 1670, Springer-Verlag, Berlin, 1999, pp. 190–197.

9. T. Kindberg and J. Barton, "A Web-Based Nomadic Computing System," Computer Networks, vol. 35, no. 4, Mar. 2001, pp. 443–456.

10. G. Pottie and W. Kaiser, "Wireless Integrated Network Sensors," Comm. ACM, vol. 43, no. 5, May 2000, pp. 51–58.

11. J.M. Kahn, R.H. Katz, and K.S.J. Pister, "Emerging Challenges: Mobile Networking for Smart Dust," J. Comm. and Networks, vol. 2, no. 3, Sept. 2000, pp. 188–196; www.eecs.berkeley.edu/~jmk/pubs/jcn.00.pdf.

12. K.S.J. Pister and B.E. Boser, Smart Dust: Wireless Networks of Millimeter-Scale Sensor Nodes, Electronics Research Laboratory Research Summary, Univ. of California, Berkeley, 1999.

13. B. Warneke et al., "Smart Dust: Communicating with a Cubic-Millimeter Computer," Computer, vol. 34, no. 1, Jan. 2001, pp. 44–51.

14. J. Hill et al., "System Architecture Directions for Networked Sensors," Proc. ACM Conf. Architectural Support for Programming Languages and Operating Systems, ACM Press, New York, 2000.

15. M.J. Mataric, "Behavior-Based Control: Examples from Navigation, Learning, and Group Behavior," J. Experimental and Theoretical Artificial Intelligence, vol. 9, nos. 2–3, 1997, pp. 323–336; www-robotics.usc.edu/~maja/publications/jetai-arch.ps.gz.

16. R. Arkin, "Toward the Unification of Navigational Planning and Reactive Control," Proc. AAAI Spring Symp., AAAI, Menlo Park, Calif., 1989.

17. S. Thrun et al., "Robust Monte Carlo Localization for Mobile Robots," Artificial Intelligence, vol. 101, 2000, pp. 99–141; www-2.cs.cmu.edu/~thrun/papers/thrun.robust-mcl.html.

18. J. Hightower and G. Borriello, "Location Systems for Ubiquitous Computing," Computer, vol. 34, no. 8, Aug. 2001, pp. 57–66.

19. G. Dedeoglu and G.S. Sukhatme, "Landmark-Based Matching Algorithm for Cooperative Mapping by Autonomous Robots," Proc. 5th Int'l Symp. Distributed Autonomous Robotic Systems, Springer-Verlag, Berlin, 2000, pp. 251–260; www-robotics.usc.edu/~gaurav/Papers/goksel-dars.ps.gz.

20. A. Howard, M.J. Mataric, and G.S. Sukhatme, "Relaxation on a Mesh: A Formalism for Generalized Localization," Proc. IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS), IEEE CS Press, Los Alamitos, Calif., 2001, pp. 1055–1060.

21. B. Chen et al., "Span: An Energy-Efficient Coordination Algorithm for Topology Maintenance in Ad Hoc Wireless Networks," Proc. 7th Ann. ACM/IEEE Int'l Conf. Mobile Computing and Networking (MobiCom), ACM Press, New York, 2001.

22. A. Cerpa and D. Estrin, "ASCENT: Adaptive Self-Configuring Sensor Network Topologies," to be published in Proc. IEEE Infocom, IEEE Press, Piscataway, N.J., 2002.

23. C. Schurgers, V. Tsiatsis, and M. Srivastava, "STEM: Topology Management for Efficient Sensor Networks," to be published in Proc. IEEE Aerospace Conf., IEEE CS Press, Los Alamitos, Calif., 2002.

24. Y. Xu, J. Heidemann, and D. Estrin, "Geography-Informed Energy Conservation for Ad Hoc Routing," Proc. 7th Ann. ACM/IEEE Int'l Conf. Mobile Computing and Networking (MobiCom), ACM Press, New York, 2001.

25. D. Johnson and D. Maltz, "Protocols for Adaptive Wireless and Mobile Networking," IEEE Personal Comm., vol. 3, no. 1, Feb. 1996.

26. J. Heidemann et al., "Building Efficient Wireless Sensor Networks with Low-Level Naming," Proc. ACM Symp. Operating Systems Principles, ACM Press, New York, 2001.

27. A. Cerpa et al., "Habitat Monitoring: Application Driver for Wireless Communications Technology," Proc. ACM SIGCOMM Workshop on Data Communications in Latin America and the Caribbean, ACM Press, New York, 2001.

28. W. Adjie-Winoto et al., "The Design and Implementation of an Intentional Naming System," Proc. ACM Symp. Operating Systems Principles, ACM Press, New York, 1999.

29. P. Bonnet, J.E. Gehrke, and P. Seshadri, "Querying the Physical World," IEEE Personal Comm., vol. 7, no. 5, Oct. 2000, pp. 10–15.



the AUTHORS

Deborah Estrin is a professor of computer science at the University of California, Los Angeles. Her research interests include the design of network and routing protocols for large global networks, sensor networks, and applications for environmental monitoring. She has a PhD in computer science from the Massachusetts Institute of Technology. She has served on several program committees and editorial boards. Contact her at the Computer Science Dept., UCLA, 3713 Boelter Hall, 420 Westwood Plaza, Los Angeles, CA 90095; [email protected].

David Culler is a professor of computer science at the University of California, Berkeley, and director of Intel Research at UCB. His research interests include embedded networks of small wireless devices, parallel computer architecture, parallel programming languages, and high-performance communication. He has a PhD from MIT. Contact him at Computer Science Division #1776, 627 Soda Hall, UCB, Berkeley, CA 94720-1776; [email protected].

Kris Pister is an associate professor in the Electrical Engineering and Computer Sciences Department at UCB. His research interests include the development and use of standard MEMS fabrication technologies, microrobotics, and CAD for MEMS. He has a BA in applied physics from the University of California, San Diego, and an MS and PhD in electrical engineering from UCB. Contact him at the Electrical Eng. and Computer Science Dept., UCB, 497 Cory Hall, Berkeley, CA 94720-1770; [email protected].

Gaurav Sukhatme is an assistant professor in the Computer Science Department at the University of Southern California and the associate director of its Robotics Research Laboratory. His research interests include embedded systems, mobile robot coordination, sensor fusion for robot fault tolerance, and human–robot interfaces. He has an MS and PhD in computer science from USC. He is a member of the IEEE, AAAI, and ACM and has served on several conference program committees. Contact him at the Computer Science Dept., MC 0781, USC, 941 West 37th Place, Los Angeles, CA 90089-0781; [email protected].