
Energy Efficient Sampling for Event Detection in Wireless Sensor Networks

Zainul Charbiwala, Younghun Kim, Sadaf Zahedi, Jonathan Friedman, Mani B. Srivastava

University of California, Los Angeles
[zainul,kimyh,szahedi,jf,mbs]@ee.ucla.edu

ABSTRACT
Compressive Sensing (CS) is a recently developed mechanism that allows signal acquisition and compression to be performed in one inexpensive step, so that the sampling process itself produces a compressed version of the signal. This significantly improves systemic energy efficiency because the average sampling rate can be considerably reduced and explicit compression eliminated.

In this paper, we introduce a modification to the canonical CS recovery technique that enables even higher gains for event detection applications. We show a practical implementation of this compressive detection with energy-constrained wireless sensor nodes and quantify the gains accrued through simulation and experimentation.

Categories and Subject Descriptors
E.4 [Coding and Information Theory]: Data compaction and compression

General Terms
Theory, Design, Experimentation

Keywords
Compressive Sensing, Detection, Wireless Sensor Networks

1. INTRODUCTION
Wireless sensor networks (WSN) are now routinely being used in experiments that further our understanding of the natural world and for the estimation and detection of various events within it. Since their primary use has been in hard-to-reach, infrastructure-less environments, many wireless sensor network platforms are battery operated, leading to extreme energy constraints. Achieving high system lifetime therefore requires a concerted effort to reduce sensor sampling, processing, and radio communication costs while maintaining application-level objectives.

For detection applications, WSNs to date bifurcate into those which can only pursue simple detection schemes and those which are not really low-power. In the latter case, substantial computing and communication elements supported by large energy buffers and harvesting means replace tiny low-cost nodes. These systems are too costly to provide broad coverage and often last merely on the order of weeks. The former either push the detection problem into analog hardware (sleeping until woken by an analog trigger signal) or use a secondary digital processor to manage sampling and initial detection. The result is either a large number of false alarms or an inability to detect more sophisticated trigger conditions (such as a specific acoustic signature).

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
ISLPED'09, August 19-21, 2009, San Francisco, California, USA.
Copyright 2009 ACM 978-1-60558-684-7/09/08 ...$5.00.

In this work we present a novel approach, Weighted Basis Pursuit (WBP), which eliminates this dilemma by providing continual coverage, sophisticated signal detection, and high accuracy (low false positive/negative rates) even at low signal-to-noise ratios (SNR), all without an additional power penalty. This is achieved by tailoring work in the emerging field of Compressive Sensing (CS) to the challenges of WSN. For validation, a WBP-CS TinyOS module was developed and deployed on a testbed of MicaZ nodes. Demonstrable findings in this work clearly illustrate WBP-CS' utility. With a 30× sampling reduction, a deployment otherwise obtaining an impractical one-month lifetime might need a battery change only once a year.

1.1 Compressive Sensing Overview
Many natural signals are compressible by transforming them to some domain; e.g., sounds are compactly represented in the frequency domain and images in the wavelet domain. But compression is typically performed after the signal is completely acquired. Advances in compressive sensing [5, 9] suggest that if the signal is sparse or compressible, the sampling process can itself be designed so as to acquire only essential information. CS enables signal acquisition with average sampling rates far below the Shannon-Nyquist requirement and eliminates the explicit compression step altogether. This not only saves energy in the ADC subsystem through reduced sampling, the processing subsystem through reduced complexity (no explicit compression), and the communication subsystem through reduced transmission, but also enables the capture of substantially more complex signals where it would not otherwise be possible. For example, in applications interested in high-frequency acoustic signals, low-power sensor network platforms, including MicaZ motes, cannot sample at Nyquist rates [1].

Compressive sensing involves taking sample measurements in an 'incoherent domain' through a linear transformation [2]. This step may be viewed as computationally equivalent to compression if this transformation sparsifies the signal.
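As a concrete toy illustration (our own sketch, not from the paper; the 1024-sample window and 450 Hz tone mirror the setup used later in Sec. 4), randomized sampling of a frequency-sparse signal yields the compressed measurement vector directly, with no explicit compression step:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1024, 30                              # signal length, measurement count
t = np.arange(n) / 1024.0                    # one second at a 1024 Hz base rate
x = np.sin(2 * np.pi * 450 * t)              # 450 Hz tone: sparse in frequency
keep = np.sort(rng.choice(n, size=k, replace=False))
z = x[keep]                                  # z = Phi @ x, computed implicitly
```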


However, the key insight underpinning CS mechanisms is that, though the incoherent domain does not sparsify the signal directly, it describes the signal uniquely enough for perfect recovery to succeed from a fraction of the measurements. The computational advantage comes from the fact that some incoherent transformations can be applied implicitly and cheaply at the source. To achieve this, however, the designer needs to (a) fabricate a domain that is incoherent with the sparsifying one and (b) transform the signal to it through sampling. Researchers have shown [2], quite remarkably, that taking appropriate random projections of the signal before sampling satisfies both these requirements adequately for a large class of compressible signals.

The issue with applying CS in embedded systems is that while acquisition is cheap, reconstruction algorithms are computationally demanding. Interestingly, it is this asymmetric architecture that makes CS an excellent choice for low-power distributed sensing with wireless sensor networks. This is because WSN deployments usually include a back-end data collection and fusion center (where the event is ultimately reported) that is endowed with a considerable amount of computing and storage ability. This means that, if the sensor nodes are able to take random projections of the sampled signal and communicate them to the fusion center, it is possible to reconstruct the signal with high probability using a fraction of what the Nyquist rate would have required.

1.2 Compressive Event Detection
While many CS mechanisms have focused on signal reconstruction, some researchers [14, 11, 10] have found that the number of samples needed to reliably detect features in the signal, even in a noisy and interference-prone environment, can be considerably lower if full CS recovery is not required. A recent algorithm, IDEA [12], demonstrates this by utilizing knowledge of where the event may be present in the sparse domain. For example, if the event of interest is an acoustic signature of known frequencies, IDEA would look for the signal at those frequencies first. IDEA employs a greedy search procedure called Orthogonal Matching Pursuit (OMP [21]) to look for the best-fitting frequency coefficients and derives its gains from being able to terminate OMP as soon as the desired signature is found in the signal.

An alternative to OMP often used for full CS recovery is Basis Pursuit (BP), which poses the search for the sparse solution as a linear programming problem. Interestingly, it has been demonstrated that BP performs better than OMP in practice [9]. Intuitively, this is because BP attempts to find the global minimum while OMP might get caught in a local dip. There are two drawbacks to using BP directly for detection, however. First, though BP completes in polynomial time, its computational requirement is far higher than OMP's [21]. Second, since BP attempts to reconstruct the signal completely, the number of measurements, and hence the energy, required for comparable detection performance may actually be higher than for IDEA. Assuming that the first drawback can be overlooked in a deployment with a capable back-end fusion center, this paper focuses on overcoming the second.

Our solution tailors BP's linear programming problem to include prior knowledge of the event signature, similar in concept to IDEA. This is done by biasing components of the solution through a weighting matrix (details in Sec. 2) that prioritizes the search to prefer solutions with the known frequency indices. The effect of this weighting procedure is that the biased components 'stand out' because they are artificially enhanced against background noise. This idea was inspired by the recent work of Candes et al. [7], which applies an iterative re-weighting procedure around BP to improve the quality of the compressive decoding solution.

Our proposed Weighted Basis Pursuit (WBP) is visually depicted in Fig. 1 for the detection of a sinusoidal tone at 450 Hz in the presence of white noise. The reconstruction is performed in the Fourier (frequency) domain from randomly collected samples at different rates. When no weighting is applied, the average sampling rate needs to be as high as 300 Hz to detect the tone; the red dot in Fig. 1b is just above the noise floor. While this is below the Nyquist rate of 900 Hz, the gains are not impressive. If the sampling rate is lowered to 30 Hz, no detection is possible (1c). However, if weighting is applied, the frequency tones immediately stand out (1d), implying a near 30× benefit over the Nyquist rate. A detailed evaluation of both simulated and experimental performance for different sampling rates in various noisy environments is deferred until Sec. 4.

1.3 Implementing Compressive Detection
Perhaps the most important aspect of implementing CS is the random linear transformation for sampling. Note that this transformation must not only be incoherent with the domain in which the signal is sparse, but must also be substantially cheaper to implement than explicit compression. Much research has been undertaken in the CS community to search for suitable pseudo-random transforms, but most require some form of additional front-end hardware before the ADC or rely on software-oriented techniques that assume Nyquist sampling once more. A key contribution of this paper is a demonstration of compressed detection mechanisms on commercial MicaZ sensor nodes without additional hardware. To achieve this, we use a uniform random sampling procedure that is known to be incoherent with any orthogonal basis [19], such as the Fourier basis. However, this random sampling is inherently non-causal and may also violate ADC hold times. In Sec. 3, we show how both these limitations can be overcome effectively and inexpensively.

2. WEIGHTED BASIS PURSUIT
Before we describe our detection procedure, we outline the BP estimation problem briefly. Assume that the signal of interest x is of length n and that a set of measurements z of length k, where $k \ll n$, is available to us, such that $z = \Phi x$, where Φ is the k × n measurement matrix (a random, non-invertible transformation).

Figure 1: Frequency (FFT) coefficients for the CS reconstruction of a 450 Hz tone at −10 dB SNR with different sampling rates and recovery strategies: (a) FFT of the original 450 Hz tone at −10 dB SNR; (b) BP recovery with 300 Hz sampling; (c) BP recovery with 30 Hz sampling; (d) weighted BP recovery with 30 Hz sampling. Each panel plots FFT magnitude against frequency (Hz).


Also, assume a separate invertible linear transformation Ψ of size n × n which compresses the signal using $x = \Psi y$, where y is also of length n but has very few non-zero coefficients. For example, if the signal x were a set of sinusoidal tones, it would not be sparse in the time domain, but with Ψ as the inverse Fourier transform, y is sparse.

Then, under the condition that x is sufficiently compressible and that ΦΨ satisfies the so-called restricted isometry property (or, equivalently, that Φ and Ψ are mutually incoherent), the reconstruction of x from the following optimization problem is exact with high probability [6, 5]:

$$\hat{y} = \arg\min_{y} \|y\|_{\ell_1} \quad \text{s.t.} \quad z = \Phi \Psi y \qquad (1)$$

$$\hat{x} = \Psi \hat{y} \qquad (2)$$

where $\|y\|_{\ell_1} \triangleq \sum_{i=1}^{n} |y_i|$ is the sum of magnitudes ($\ell_1$ norm) of the sparse coefficients. The above problem is termed Basis Pursuit. The intuition behind BP is that, of the infinitely many solutions of y that satisfy $z = \Phi\Psi y$, it is the simplest one, the one with the minimum sum of magnitudes, that is most likely the right one. One may think of this as applying Occam's Razor, albeit rather uniquely. The mathematical theory behind BP is well developed and we refer the interested reader to [6], which offers an excellent introductory treatise on the subject.
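For concreteness, here is a minimal sketch of BP as a linear program (our illustration, assuming a real-valued A = ΦΨ; the paper's complex Fourier case would first be split into real and imaginary parts):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, z):
    """Solve min ||y||_1 s.t. A @ y = z via the standard LP reformulation.

    Write y = u - v with u, v >= 0; then ||y||_1 = sum(u + v) at the
    optimum and the equality constraint stays linear in (u, v).
    """
    k, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # A @ (u - v) = z
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v                             # recovered sparse coefficients
```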

While it is understood that the $\ell_1$ regularization above performs quite well when the sparsity condition is satisfied, the question we wish to investigate here is whether a known event signature can be identified from fewer measurements in the presence of noise and interference. Along these lines, we propose to modify BP to include a weighting matrix W within the minimization objective of Eq. 1 as follows:

$$\hat{y} = \arg\min_{y} \|Wy\|_{\ell_1} \quad \text{s.t.} \quad z = \Phi \Psi y \qquad (3)$$

This weighting matrix serves to bias components of the event signature so that they are preferentially picked by the $\ell_1$ minimization routine. This is done by defining W as a diagonal matrix as follows:

$$W = \mathrm{diag}(w_1, \ldots, w_n), \qquad w_j < 1 \;\; \forall j \in \Omega, \qquad w_j = 1 \;\; \forall j \notin \Omega \qquad (4)$$

where Ω is the set of indices in the sparse domain where the event of interest may be present. For the detection of an acoustic event, this would correspond to the major frequency components of that signature. Note that W is constructed such that coefficients of interest have a smaller weight attached to them. The effect is an (artificial) reduction in the $\ell_1$ norm of any solution y that contains these coefficients, leading those solutions to be chosen over others. Because the solver must still meet the constraint $z = \Phi\Psi y$, the solution is not arbitrary and is bounded in energy.
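In LP form, the weighting of Eqs. 3-4 only changes the objective coefficients; a sketch under the same assumptions as the BP snippet above:

```python
import numpy as np
from scipy.optimize import linprog

def weighted_basis_pursuit(A, z, w):
    """Solve min ||W y||_1 s.t. A @ y = z, with W = diag(w) as in Eq. 4."""
    k, n = A.shape
    c = np.concatenate([w, w])               # w_j scales both u_j and v_j
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=z,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

# Signature weighting: a smaller weight on the indices in Omega, e.g.
# w = np.ones(n); w[list(omega)] = 0.1   # values in 1e-1..1e-3 behave alike
```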

A geometric interpretation of the above weighting technique can be borrowed from Candes et al. [7], which inspired the current work. They show (Fig. 1 in [7]) how the weighting factor skews the previously symmetric $\ell_1$ norm ball to direct it towards a preferred solution. They also show that as the weighting value $w_{j\in\Omega}$ decreases, the solutions tend to stabilize, which means that any weighting below a certain value gives an almost identical result. This is partly because the norm ball $\|Wy\|_{\ell_1}$ hits the same point on the polyhedron $z = \Phi\Psi y$ beyond a point.

In a detection scenario, it is also interesting to see what happens when the event is not present, because then the weighting violates the assumption that the signature is present at those entries. If the event is absent, noise or interference at indices corresponding to Ω will be erroneously enhanced. This means that while $w_{j\in\Omega} \to 0$ is a valid selection, when the weights are very small the solver will enhance even small amounts of noise, resulting in false alarms (false positives). In our empirical evaluations (details in Sec. 4), we found that detection performance was fairly insensitive to the precise value of $w_{j\in\Omega}$ and that values between $10^{-1}$ and $10^{-3}$ gave equivalent results.¹

¹ While preparing the final manuscript we discovered [15], which optimizes weighting for a specific detection scenario.

2.1 Detection Functions
In an event detection application, it is important to consider how the event hypothesis will be decided. For example, if we assume that the hypothesis of an event being present is $H_1$ and absent is $H_0$, one may declare a hypothesis by computing a detection function $D(\hat{y}, \Omega)$, where $\hat{y}$ is the solution to Eq. 3 with weighting as in Eq. 4 and Ω is a non-empty set of coefficient indices we care about. While many forms of D are possible, in this paper we consider a proxy of the classical Likelihood Ratio Test (LRT) [18], which is defined as follows:

$$D_{LRT}(\hat{y}, \Omega) = \begin{cases} H_1 & \text{if } \hat{y}_j > \theta_j \;\; \forall j \in \Omega \\ H_0 & \text{otherwise} \end{cases} \qquad (5)$$

where $\theta_{j\in\Omega}$ represents a threshold for each component of $\hat{y}$ that the event signature is composed of. In classical detection theory, the thresholds are computed based on the noise power level, but we adopt a more general training-based strategy to handle non-Gaussian noise sources as well as narrow-band interference. Detection performance is measured in terms of the probability of missed detections, $P_{MD}$, and the probability of false alarms, $P_{FA}$, which are defined as:

$$P_{MD} = \Pr[D(\hat{y}, \Omega) = H_0 \mid H_1] \qquad (6)$$

$$P_{FA} = \Pr[D(\hat{y}, \Omega) = H_1 \mid H_0] \qquad (7)$$
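A direct transcription of Eq. 5 (our sketch; taking magnitudes of the recovered coefficients is our assumption for complex Fourier output):

```python
import numpy as np

def d_lrt(y_hat, omega, theta):
    """Eq. 5: declare H1 only if every signature coefficient clears its threshold."""
    return all(np.abs(y_hat[j]) > theta[j] for j in omega)  # True -> H1, False -> H0
```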

We extensively evaluate the performance of event detection both in simulation and through experiments in Section 4, but first describe the implementation of compressive sensing on low-end sensor network platforms.

3. LOW-POWER CS IMPLEMENTATION
A critical aspect of implementing CS is the random projection matrix Φ (in Eq. 3) through which the sensor node collects sample measurements. Since the matrix is constructed pseudo-randomly, the node need not communicate the complete matrix to the fusion center. Instead, if the random number generator being used and the initial seed are known, the fusion center can regenerate the matrix locally.
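A sketch of this seed-sharing idea (hypothetical values; the mote actually uses an MLCG, but any deterministic PRNG with an agreed seed works the same way):

```python
import numpy as np

seed, n, k = 12345, 1024, 30                 # hypothetical shared parameters
node = np.random.default_rng(seed)           # runs on the sensor node
center = np.random.default_rng(seed)         # runs at the fusion center

# Both sides derive the identical sampling pattern, so Phi never travels.
assert np.array_equal(node.choice(n, size=k, replace=False),
                      center.choice(n, size=k, replace=False))
```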

From [6], we learn that a number of random distributions may be used to develop Φ, though not all lend themselves to low-power implementations easily. The two most popular ones that have been shown to satisfy the restricted isometry property [5] are (a) when the elements of Φ are independent realizations of a Gaussian random variable, $\Phi_{ij} \sim \mathcal{N}(0, \frac{1}{n})$, and (b) when they are independent realizations of an equiprobable $\pm\frac{1}{\sqrt{n}}$ Bernoulli random variable.

Random projections may be computed in software by generating Φ and performing the matrix multiplication $z = \Phi x$. This step, however, requires the sensor node to possess x a priori, which means that it needs to sample above the Nyquist rate and store n samples in memory. Further, using the Gaussian distribution requires O(kn) (ideally, floating-point) multiply and add operations to compute z. Though this computational burden is relaxed when using the Bernoulli distribution, which only needs additions (the $\frac{1}{\sqrt{n}}$ scale factor can be applied post facto at the fusion center), the promised ADC rate reductions have been lost.

The device described in [16] is a hardware-based approach that consists of a bank of k analog front-ends, each of which performs signal multiplication with a Bernoulli random stream generated at the Nyquist rate. The result from each multiplier is integrated and sampled simultaneously at a much lower rate. While this is an attractive general-purpose technique, it has two drawbacks for low-power implementation. First, the extra power consumed by the continuous analog operation is non-trivial, especially because of the linearized low-noise multiplier blocks, and second, strict time synchronization is required with the fusion center so that the regenerated Bernoulli stream matches that at the node.

3.1 Causal Randomized Sampling
A technique that avoids both these issues is randomized sampling. Sampling at uniformly distributed random instants was shown to satisfy the restricted isometry property when the sparse basis Ψ is orthogonal [6, 19], and has been employed successfully in [4, 11, 10]. The Φ matrix is constructed by randomly selecting one column index in each of the k rows to be set to unity, but in practice all that is required is the ability to sample at arbitrary times and store the k samples for subsequent communication. This means that the node no longer samples above the Nyquist rate, nor does it perform any arithmetic operation to compute z.
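The structure of Φ for randomized sampling is simple enough to write down explicitly (a sketch; in practice only the chosen indices are needed, never the dense matrix):

```python
import numpy as np

def random_sampling_phi(n, k, rng):
    """Phi with a single 1 per row: measuring is just keeping k of n samples."""
    cols = np.sort(rng.choice(n, size=k, replace=False))
    phi = np.zeros((k, n))
    phi[np.arange(k), cols] = 1.0
    return phi, cols
```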

This form of uniform random sampling, however, is non-causal if the random numbers are generated on-the-fly. To ensure causality, one would have to generate, sort, and store the k numbers in memory. Further, this technique has the disadvantage that two sample times may be closer together than the hardware can handle. Dang et al. [10] circumvented this problem by applying a scaling factor before and after generating the random sample indices. To avoid the quantization effects introduced by this scaling, they apply a normally distributed jitter to the resulting sampling instants, which works acceptably well.

A simpler technique that solves these issues and is a good approximation to the uniform distribution is mentioned in Bilinskis and Mikelsons [3]. Let us define the k sampling instants as $t_i$, $i \in \{1, \ldots, k\}$. Then, the sampling instants are generated using the additive random sampling process, that is:

$$t_i = t_{i-1} + \tau_i \qquad (8)$$

where $t_0 = 0$ and the $\tau_i$ are independent realizations of a Gaussian random variable $\sim \mathcal{N}\!\left(\frac{n}{k}, \frac{r^2 n^2}{k^2}\right)$. It turns out that the PDF of $t_i$ converges to a uniform distribution rather quickly. Here, $\frac{n}{k}$ represents the desired average sampling interval and r determines the width of the bell and the resulting speed of convergence. We use this procedure in our implementation to generate random sampling times on-the-fly, with r fixed at 0.25. For causality and feasibility reasons, we ensure that $\tau_i > T_{ADC}$, the ADC sampling latency.
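A sketch of this generator (our illustration; times are in units of the Nyquist sample period, and the clipping constant stands in for $T_{ADC}$):

```python
import numpy as np

def additive_random_instants(n, k, r=0.25, t_adc=1.0, rng=None):
    """Eq. 8: t_i = t_{i-1} + tau_i with tau_i ~ N(n/k, (r*n/k)^2).

    Each increment is clipped below at the ADC latency t_adc so that
    successive conversions remain feasible (causality by construction).
    """
    rng = np.random.default_rng() if rng is None else rng
    taus = rng.normal(loc=n / k, scale=r * n / k, size=k)
    taus = np.maximum(taus, t_adc)           # enforce tau_i > T_ADC
    return np.cumsum(taus)                   # instants relative to t_0 = 0
```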

It is noteworthy that this causal randomized sampling procedure is as general-purpose as the other techniques mentioned while reducing hardware, sampling, storage, and computation requirements substantially. The only downside to using randomized sampling is that its domain basis is not incoherent with signals that are sparse in the time domain, such as EKG signals, precluding its use for this particular sub-class of signals.

Figure 2: Schematic representation of the detection process with MicaZ motes and in simulation.

3.2 Quantifying Power and Duty Cycle Gains
To test our proposition in practice and quantify the gains and performance it could deliver, we implemented the solution using MicaZ sensor motes running the TinyOS operating system. We motivate the application of a CS-based approach through acoustic signature detection similar to [13], using the Fourier basis for reconstruction. Since the Fourier basis is incoherent with the time-spike basis, our randomized sampling procedure is well suited to this application.

MicaZ sensor motes contain an 8 MHz 8-bit ATMEGA128 processor with a built-in 10-bit ADC and an IEEE 802.15.4 compliant radio. They have been reported to sustain sampling rates of only a few hundred Hz, limited mainly by the absence of a DMA unit. Since detecting the signature via Fourier-domain analysis on the mote itself, or sampling and collecting data wirelessly at the Nyquist rate, would have been infeasible, we used a combination of empirical modeling and simulation to quantify the gains from our CS approach. In particular, we modeled the four blocks that are significant for the comparison: the random number generator (for CS), the ADC, FFT processing, and radio transmission.

We model the energy consumption and running time of each block with simple first-order linear functions that depend on the data rate flowing through them. Model parameters were extracted using a cycle- and energy-accurate instruction-level simulator available for the MicaZ [20]. The Gaussian random variable for causal randomized sampling is computed by approximating it with an order-12 Irwin-Hall distribution using a 16-bit MLCG-based [17] uniform random number generator. FFT processing was performed using a 1024-point implementation optimized for 16-bit operation. The FFT library routines occupy 2KB of the 4KB RAM available on the ATMEGA128.
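The Irwin-Hall trick reduces Gaussian generation to twelve additions; a sketch in floating point (the mote version works on scaled 16-bit integers, and rand16 is a hypothetical stand-in for the MLCG):

```python
def irwin_hall_gaussian(rand16, mean, std):
    """Order-12 Irwin-Hall approximation of a Gaussian draw.

    The sum of 12 independent U(0,1) variables has mean 6 and variance 1,
    so (sum - 6) approximates N(0,1) using additions only.
    """
    s = sum(rand16() / 65536.0 for _ in range(12))  # 12 uniform draws in [0,1)
    return mean + std * (s - 6.0)
```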

4. RESULTS
Figure 2 depicts a schematic representation of the compressive detection process used for evaluation. We chose to generate and detect a single frequency tone at 450 Hz for our experiments. While identifying the presence of a single frequency tone may be a trivial detection problem, we chose it as a case study for three reasons. First, it provides a baseline for comparison using a well-known basis. Second, the solution is easily extended to event signatures sparse in other bases, with no change to the node's implementation. And finally, single frequency tones have little structure to be exploited by the detection function, and thus the false alarm rates reported in this section may be considered worst case. Results for a multi-tone case are reported in [8].

A host machine generates the 450 Hz signal and white noise at a specific SNR at a high sampling rate. This audio stream is played out over a speaker and recorded through the microphone of a sensing MicaZ mote using random projections as described in Section 3.1. One-second-long segments of recorded samples are then wirelessly transmitted to a base-station mote connected to the fusion center, which performs weighted basis pursuit to recover the signal in the frequency domain using a 1024-point FFT. The FFT coefficients are fed into the detection function along with the indices Ω to produce the hypothesis decision. We also run a simulation version of the process, which emulates the recording and collection process by applying the same random projection matrix as would have been computed on the sensing mote.

Figure 3 reports the resource costs incurred by the sensing MicaZ node. While the numbers are specific to this platform, the insight from these results can be applied more generally. The top plot shows the power consumed by each block at different sampling rates. Also included are the simulated power consumption numbers when periodic sampling is applied above the Nyquist rate and when the FFT detection procedure is performed on the node itself. Performing local detection, while computationally expensive, reduces the radio transmission burden, which is especially beneficial in a multi-hop network scenario. For example, in this case, the relatively favorable results for radio transmission power would overshadow the FFT computation cost if the base station were further than two hops away.

For the compressive sensing cases shown left-most, the biggest power consumer is the random number generator and, in particular, the MLCG implementation, which uses software-emulated 32-bit arithmetic extensively. If a lower-cost LFSR-based implementation were used instead, the consumption would be substantially reduced at the cost of fewer unique random numbers [17]. In terms of energy, using 30 Hz CS is over 10× more efficient than sampling and communicating at 1024 Hz over a one-hop wireless link. For comparison purposes, a 250 Hz CS implementation was also simulated and found to yield a 30% reduction in power.

The bottom plot of Figure 3 illustrates the running time of each block for every one-second window. This equates to the achievable duty cycle of the node, lower values of which further improve overall energy efficiency. The ADC latency is clearly visible here as the dominant component for high-rate sampling. This stems from the lack of a DMA unit, which causes the CPU to be interrupted constantly.

While Figure 3 emphatically demonstrates that using low-rate compressive sensing can achieve long node lifetimes, Figure 4 shows that detection performance is also exceptionally good. We show results from 250-run Monte Carlo simulations and experiments at five different average sampling rates (10, 20, 30, 50 and 100 Hz) and at three different SNRs (-10, 0 and 10 dB). The set of plots on the left illustrates experimental performance; the set on the right, simulation. The y-axis denotes the overall error probability, $P_e$, an equally weighted sum of $P_{MD}$ and $P_{FA}$ (Eqs. 6-7). All results reported use $w_{j\in\Omega} = 0.1$. To select the thresholds $\theta_\Omega$ (in Eq. 5), we use a 10-fold cross-validation training approach with Neyman-Pearson detection [18], setting the maximum false alarm rate to 10%.
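One simple way to realize the training step (our sketch, not necessarily the paper's exact procedure) is to pick each threshold as an empirical quantile of noise-only coefficient magnitudes so the false alarm constraint is met:

```python
import numpy as np

def train_threshold(noise_mags, max_fa=0.10):
    """Neyman-Pearson-style threshold from H0 training data.

    noise_mags: magnitudes of one signature coefficient over noise-only runs.
    Returns the smallest theta whose empirical false alarm rate <= max_fa.
    """
    return np.quantile(noise_mags, 1.0 - max_fa)
```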

Instead of using training, we could have used a fixed SNR-dependent threshold value, as is commonly done in likelihood ratio testing, and this has an interesting effect as the parameter $w_{j\in\Omega}$ is varied. It can be shown that $P_{MD}$ is a monotonically non-decreasing function of w for fixed SNR and sampling rate, and that $P_{FA}$ is a monotonically non-increasing function of w. This conclusion is intuitive, owing to the fact that as w decreases, so does the $\ell_1$ norm, promoting those indices in the solution (even if the signal was not present). Details are included in [8].

Figure 3: Power and duty cycle costs for compressive sensing versus Nyquist sampling with local FFT. (Top: power in mW; bottom: running time in ms per one-second window. Configurations: 10, 20, 30 and 250 Hz CS, 1024 Hz Nyquist sampling, and 1024 Hz Nyquist sampling with on-node FFT; cost components: random number generation, ADC, FFT, and radio TX.)

We evaluate our biased weighting approach (WBP) against conventional basis pursuit (BP) and the iterative re-weighting technique (IRBP) described in [7]. We observe some general trends right away: increasing SNR or sampling rate reduces $P_e$ for all three techniques. This is expected, since a higher-quality signal (or the lack of one) as well as additional samples improve both the detection and rejection performance of the system. Comparing first BP and IRBP, we observe that the latter is always worse (or no better) at all SNRs and sampling rates. This seems counter-intuitive at first, because IRBP has been shown to perform especially well when the signal is non-sparse [7]. The reason for this poor detection performance at low sampling rates lies in the way the algorithm applies iterative weighting. Fundamentally, IRBP assumes that the solution from a previous iteration is a good one and strengthens that solution in subsequent iterations. Thus, if the solution in the first iteration (which is unbiased) is a bad one, IRBP gets caught in a local trap, never attempting other possibilities. Our implementation of IRBP assumes a zero-valued initial condition as suggested in [7]. This approach turns out worse than conventional BP because the iterative strengthening leads to a large number of false alarms. With WBP, only those indices that form part of the event signature are biased. Thus, false alarms occur only in the unlikely scenario that significant noise or interference energy is present at those indices.

Figure 4 illustrates results for our weighted BP approach, which consistently outperforms the other two recovery techniques for detection. The performance improvement is more pronounced at lower sampling rates and higher SNRs. There are slight discrepancies in the experimental results that, we believe, are an artifact of inevitable environmental differences between runs. Notice that at 0 dB and 10 dB SNR, detection performance is near-perfect at 30 Hz and 10 Hz respectively. For the poor -10 dB SNR environment, the sampling rate has to be elevated considerably to extract the same level of performance. A summary of the relative power gains achieved by compressive sensing with weighted basis pursuit, for detection performance comparable to Nyquist sampling, is listed in Table 1.


Figure 4: Comparing the detection performance of IRBP, BP, and WBP. (a) Experimental results at -10, 0, and 10 dB SNR over 10-100 Hz sampling rates; (b) simulation results at the same SNRs over 10-250 Hz sampling rates. Each panel plots $P_e$ (0 to 0.5) against average sampling rate (Hz).

Figure 5: Detection performance of BP and WBP in narrow-band interference with 0 dB noise power. Left: conventional BP at -30 dB and -20 dB SINR; right: weighted BP at -20 dB and -10 dB SINR. Each panel plots $P_e$ (0 to 0.5) against sampling rate (10-100 Hz).

Relative Power Gains                  SNR: -10dB    0dB    10dB
Over 1024Hz sample-and-send                 30%     90%    96%
Over 1024Hz sample-FFT-detect               67%     95%    98%

Table 1: Relative power consumption gains using WBP CS with comparable detection performance.

A final set of results, in Figure 5, shows that WBP remains comparable to conventional BP in the presence of narrow-band interference. To emulate interference, we generate a high-amplitude tone at a randomly selected frequency such that the signal-to-interference-plus-noise ratio (SINR) is between -10 dB and -30 dB. The noise power was maintained at 0 dB.
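A sketch of how such a test signal can be synthesized (our illustration; the interferer amplitude is solved from the target SINR, with noise power fixed at the signal power, i.e. 0 dB SNR):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n) / 1024.0
sig = np.sin(2 * np.pi * 450 * t)                      # target 450 Hz tone
p_sig = np.mean(sig ** 2)
noise = rng.normal(0.0, np.sqrt(p_sig), n)             # noise power at 0 dB
sinr_db = -20.0                                        # desired SINR
p_int = p_sig / 10 ** (sinr_db / 10) - np.mean(noise ** 2)
f_int = int(rng.integers(1, n // 2))                   # random interferer bin
interferer = np.sqrt(2 * p_int) * np.sin(2 * np.pi * f_int * t)
observed = sig + noise + interferer                    # input to the detector
```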

5. CONCLUSION
We have presented a novel modification to the basis pursuit reconstruction procedure for known-signature event detection from sparse incoherent measurements. We show, through simulations and an implementation on MicaZ sensor nodes, that this strategy is not only feasible at rates 30× below the Nyquist requirement but also delivers comparable detection performance with up to 10× greater energy efficiency. Our empirical study also shows that the computational cost of good random number generation is non-trivial for these low-power embedded devices.

6. ACKNOWLEDGMENTS
This material is supported in part by the U.S. ARL and the U.K. MOD under Agreement Number W911NF-06-3-0001, the U.S. Office of Naval Research under MURI-VA Tech Award CR-19097-430345, the National Science Foundation under grant CCF-0820061, and the UCLA Center for Embedded Networked Sensing. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the listed funding agencies. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

7. REFERENCES
[1] Allen, M., Girod, L., Newton, R., Madden, S., Blumstein, D. T., and Estrin, D. VoxNet: An interactive, rapidly-deployable acoustic monitoring platform. In IPSN (2008).
[2] Baraniuk, R., Davenport, M., DeVore, R., and Wakin, M. A simple proof of the restricted isometry property for random matrices. Constructive Approximation (2008).
[3] Bilinskis, I., and Mikelsons, A. Randomized Signal Processing. Prentice-Hall, NJ, USA, 1992.
[4] Boyle, F., Haupt, J., Fudge, G., and Yeh, C. Detecting signal structure from randomly-sampled data. In Statistical Signal Processing (2007).
[5] Candes, E., Romberg, J., and Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory 52, 2 (2006), 489-509.
[6] Candes, E., and Wakin, M. People hearing without listening: An introduction to compressive sampling. IEEE Signal Processing Magazine.
[7] Candes, E., Wakin, M., and Boyd, S. Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications 14, 5-6 (2008), 877-905.
[8] Charbiwala, Z., Kim, Y., Zahedi, S., Balani, R., and Srivastava, M. B. Weighted ℓ1 minimization for event detection in sensor networks. NESL Tech Report, http://nesl.ee.ucla.edu/document/show/299 (2009).
[9] Chen, S., Donoho, D., and Saunders, M. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing 20, 1 (1998), 33-61.
[10] Dang, T., Bulusu, N., and Hu, W. Lightweight acoustic classification for cane-toad monitoring. In Asilomar Conference on Signals, Systems and Computers (2008).
[11] Davenport, M., Duarte, M., Wakin, M., Laska, J., Takhar, D., Kelly, K., and Baraniuk, R. The smashed filter for compressive classification and target recognition. In SPIE (2007).
[12] Duarte, M., Davenport, M., Wakin, M., and Baraniuk, R. Sparse signal detection from incoherent projections. In ICASSP (2006).
[13] Griffin, A., and Tsakalides, P. Compressed sensing of audio signals using multiple sensors. In EUSIPCO (2008).
[14] Haupt, J., and Nowak, R. Compressive sampling for signal detection. In ICASSP (2007).
[15] Khajehnejad, M. A., Xu, W., Avestimehr, A. S., and Hassibi, B. Weighted ℓ1 minimization for sparse recovery with prior information.
[16] Kirolos, S., Laska, J., Wakin, M., Duarte, M., Baron, D., Ragheb, T., Massoud, Y., and Baraniuk, R. Analog-to-information conversion via random demodulation. In DCAS (2006).
[17] L'Ecuyer, P. Efficient and portable combined random number generators.
[18] McDonough, R. N., and Whalen, A. Detection of Signals in Noise, 2nd edition. Academic Press, 1995.
[19] Rudelson, M., and Vershynin, R. Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements. In Information Sciences and Systems (2006).
[20] Titzer, B., Lee, D., and Palsberg, J. Avrora: Scalable sensor network simulation with precise timing. In IPSN (2005).
[21] Tropp, J., and Gilbert, A. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on Information Theory (2007).
