
Page 1: Distributed Multi-Scale Data Processing for Sensor Networks

Distributed Multi-Scale Data Processing for Sensor Networks

Raymond S. Wagner

Ph.D. Thesis Defense, April 9, 2007

Page 2: Distributed Multi-Scale Data Processing for Sensor Networks

Collaborators

Marco Duarte

Véronique Delouille

Richard Baraniuk

David B. Johnson

J. Ryan Stinnett

T.S. Eugene Ng

Albert Cohen

Shu Du

Shriram Sarvotham

Page 3: Distributed Multi-Scale Data Processing for Sensor Networks

Sensor Network Overview

Collections of small, battery-powered devices, called sensor nodes, that can:

• sense data

• process data

• share data

Nodes form ad-hoc networks to exchange data:

Page 4: Distributed Multi-Scale Data Processing for Sensor Networks

Data Collection Problem

PROBLEM: centralized collection is very costly (power, bandwidth), especially near the sink.

[diagram: network bottleneck region near the sink]

Page 5: Distributed Multi-Scale Data Processing for Sensor Networks

Distributed Processing Solution

SOLUTION: nodes locally exchange data with neighbors, finding answers to questions in-network.

Page 6: Distributed Multi-Scale Data Processing for Sensor Networks

Distributed Data Representations

[chart comparing distributed data representations (Dist. Source Coding, Dist. Compressed Sensing, Dist. Multi-Scale Analysis, Dist. Regression) along three axes: little/no collaboration, wide measurement-field support, quick decode]

Page 7: Distributed Multi-Scale Data Processing for Sensor Networks

Novel Contributions

• development of multi-scale transform

• survey of application communication requirements

• analysis of numerical stability

• development of protocols

• analysis of energy cost

• development of API

Contributions grouped as new algorithms and support for algorithms.

Page 8: Distributed Multi-Scale Data Processing for Sensor Networks

Multi-scale Wavelet Analysis

Unconditional basis for a wide range of signal classes – a good choice for sparse representation when little is known about the signal.

[diagram: wavelet transform (WT) of a sample signal]

Page 9: Distributed Multi-Scale Data Processing for Sensor Networks

Multi-Resolution Analysis (MRA)

Fix V0 with scaling function basis set , with

Project onto to find scaling coefficient , or find from previous-scale SCs as

[diagram: nested approximation spaces Vj-1 ⊂ Vj ⊂ Vj+1]

Page 10: Distributed Multi-Scale Data Processing for Sensor Networks

Wavelet Space Analysis

Define difference spaces Wj s.t.

[diagram: nested spaces Vj-1 ⊂ Vj ⊂ Vj+1 with difference spaces Wj-1, Wj]

Give W0 a wavelet function basis set

Project onto to find wavelet coefficient , or find from previous-scale SCs as
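The projection and refinement relations above appeared as equation images in the original slides. As a reference, a standard second-generation MRA formulation (notation chosen here for illustration, not taken verbatim from the thesis) is:

s_{j,k} = \langle f, \tilde{\phi}_{j,k} \rangle = \sum_{l} h_{j,k,l} \, s_{j+1,l} , \qquad d_{j,m} = \langle f, \tilde{\psi}_{j,m} \rangle = \sum_{l} g_{j,m,l} \, s_{j+1,l} , \qquad V_{j+1} = V_j \oplus W_j .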

Page 11: Distributed Multi-Scale Data Processing for Sensor Networks

Wavelet Analysis for Sensor Networks

MRA assumes regular sample-point spacing and a power-of-two sample size – not likely in sensor networks.

Page 12: Distributed Multi-Scale Data Processing for Sensor Networks

Wavelet Lifting

In-place formulation of the WT – distributable, tolerates irregular sampling grids [Sweldens, 1995]

Starts with all nodes ( ), scalar meas. ( )

At each scale j = {J-1, …, j0}, transform into: wavelet coefficients and scaling coefficients

Iterate on SCs down to j = j0 so that meas. are replaced by:

Page 13: Distributed Multi-Scale Data Processing for Sensor Networks

Lifting Stages

Each transform scale decomposes into three stages: split, predict, update…

[diagram: two cascaded lifting stages, each consisting of split, predict (P, subtracted), and update (U, added)]

SPLIT into ,

Page 14: Distributed Multi-Scale Data Processing for Sensor Networks

Lifting Stages

Each transform scale decomposes into three stages: split, predict, update…


PREDICT wavelet coeffs.

Page 15: Distributed Multi-Scale Data Processing for Sensor Networks

Lifting Stages

Each transform scale decomposes into three stages: split, predict, update…


UPDATE scaling coeffs.

Page 16: Distributed Multi-Scale Data Processing for Sensor Networks

Lifting Stages

Each transform scale decomposes into three stages: split, predict, update…


SPLIT into ,

Page 17: Distributed Multi-Scale Data Processing for Sensor Networks

Lifting Stages

Each transform scale decomposes into three stages: split, predict, update…


GOAL: design split, P, U to distribute easily, tolerate grid irregularity, and provide a sparse representation (a code sketch of one lifting scale follows).
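To make the stage structure concrete, here is a minimal centralized Python sketch of one lifting scale. The split, predict, and update callables stand in for the designs on the following slides; all names and the toy stand-ins are illustrative, not the thesis code.

import numpy as np

def lift_one_scale(coords, values, split, predict, update):
    # SPLIT node indices into "even" (kept as SCs) and "odd" (become WCs)
    even, odd = split(coords)
    # PREDICT each odd value from even neighbors; the residual is the wavelet coeff.
    d = values[odd] - predict(coords, values, even, odd)
    # UPDATE even values with weighted residuals to form the next-scale scaling coeffs.
    s = values[even] + update(coords, d, even, odd)
    return even, s, odd, d

# Toy stand-ins (every-other split, mean predict, zero update), just to make it run:
rng = np.random.default_rng(0)
coords = rng.uniform(size=(16, 2))
values = np.sin(4 * coords[:, 0]) + 0.1 * rng.normal(size=16)
split = lambda c: (np.arange(0, len(c), 2), np.arange(1, len(c), 2))
predict = lambda c, v, e, o: np.full(len(o), v[e].mean())
update = lambda c, d, e, o: np.zeros(len(e))
even, s, odd, d = lift_one_scale(coords, values, split, predict, update)
print(len(s), "scaling coefficients,", len(d), "wavelet coefficients")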

Page 18: Distributed Multi-Scale Data Processing for Sensor Networks

Split Design

[diagram: node grids at scale j+1, scale j, and scale j-1]

Goal: mimic the regular-grid split, s.t. ,

Page 19: Distributed Multi-Scale Data Processing for Sensor Networks

Split Design

Goal: mimic the regular-grid split, s.t. ,

Approach (see the sketch after this list):

1. Pick a , put it in ( )

2. Put all with ( ) into ( )

3. Repeat until all elements of visited
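A minimal centralized sketch of this greedy split, assuming each chosen scaling node claims the nodes within a radius rho as wavelet nodes; rho, the coordinates, and the function name are illustrative assumptions rather than the thesis protocol.

import numpy as np

def greedy_split(coords, idx, rho):
    # Partition the scale-(j+1) index set: each picked "even" (scaling) node
    # claims its rho-neighbors as "odd" (wavelet) nodes.
    remaining = list(idx)
    even, odd = [], []
    while remaining:
        k = remaining.pop(0)                      # 1. pick a node, put it in the even set
        even.append(k)
        dists = np.linalg.norm(coords[remaining] - coords[k], axis=1)
        close = [remaining[i] for i in np.flatnonzero(dists <= rho)]
        odd.extend(close)                         # 2. its neighbors go into the odd set
        remaining = [r for r in remaining if r not in close]
    return np.array(even), np.array(odd)          # 3. repeat until every index is visited

coords = np.random.default_rng(1).uniform(size=(30, 2))
even, odd = greedy_split(coords, range(30), rho=0.2)
print(len(even), "scaling nodes,", len(odd), "wavelet nodes")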

Page 20: Distributed Multi-Scale Data Processing for Sensor Networks

Split Example

[figure: original node grid and the scale-5 through scale-1 grids produced by successive splits]

Page 21: Distributed Multi-Scale Data Processing for Sensor Networks

Predict Design

Goal: encode WC at each ( ) as difference from summary of local neighborhood behavior

Approach: fit an order-m polynomial to the scale-(j+1) SCs at neighboring ( ), evaluate at :

The WC for is the difference between the scale-(j+1) SC and this estimate (a code sketch follows):
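A minimal 1-D sketch of such a polynomial predict stage (order-m least-squares fit at the even-node locations, evaluated at an odd node); names and the least-squares construction are illustrative, and the thesis's actual weight derivation may differ.

import numpy as np

def predict_poly(x_even, s_even, x_odd, m):
    # Fit an order-m polynomial to the even-node scaling coefficients and
    # evaluate it at the odd-node locations.
    coeffs = np.polyfit(x_even, s_even, deg=m)
    return np.polyval(coeffs, x_odd)

# Wavelet coefficients = measured SCs minus the polynomial estimate:
x_even = np.array([0.0, 0.2, 0.5, 0.8, 1.0])
x_odd = np.array([0.35, 0.65])
s_even = np.cos(2 * x_even)
d_odd = np.cos(2 * x_odd) - predict_poly(x_even, s_even, x_odd, m=2)
print(d_odd)   # small residuals: a local quadratic tracks the smooth field well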

Page 22: Distributed Multi-Scale Data Processing for Sensor Networks

Predict Design

Given predict order m, must only specify to find weights

The weights , depend only on m, d (dim. of )

1. Consider points s.t.

2. Pick as smallest (cost) subset s.t.

3. If can't satisfy, reduce to , repeat Step 1

Page 23: Distributed Multi-Scale Data Processing for Sensor Networks

Update Design

Goal: enhance transform stability by preserving the average value encoded by SCs across scales

Approach: choose update weights so that , weighted by integrals of constant :

with

Use the min-norm solution [Jansen et al., 2001] (a code sketch follows)
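A minimal sketch of a minimum-norm update in the spirit of [Jansen et al., 2001]: the update weights are the least-norm solution that keeps the integral-weighted sum of the scaling coefficients unchanged. The integral values and the single-odd-node setup are illustrative assumptions.

import numpy as np

def min_norm_update(pred_weights, integrals_even, integral_odd):
    # Refresh the even-node integrals to absorb the odd node's integral, then
    # pick update weights b with minimum norm such that dot(I_new, b) = I_odd,
    # which preserves the integral of the scaling coefficients.
    I_new = integrals_even + pred_weights * integral_odd
    b = integral_odd * I_new / np.dot(I_new, I_new)
    return I_new, b

p = np.array([0.2, 0.5, 0.3])      # prediction weights for one odd node (illustrative)
I_e = np.array([1.0, 1.0, 1.0])    # even-node integrals (e.g., areas of influence)
I_o = 1.0                          # odd-node integral
I_new, b = min_norm_update(p, I_e, I_o)
print(b, np.dot(I_new, b))         # the dot product equals I_o, so the average is preserved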

Page 24: Distributed Multi-Scale Data Processing for Sensor Networks

Transform Network Traffic

[diagram: predict and update message exchanges between a node and its neighbors for one transform scale]

Example: , with ,

Page 25: Distributed Multi-Scale Data Processing for Sensor Networks

Coefficient Decay

A function is ( ) at a point if there exist a polynomial of degree and some constant such that

We show that, if is at for , then

depends only on constants
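The Hölder condition referenced above appeared as an equation image; its standard form, with symbols chosen here purely for illustration, is:

f \in C^{\alpha}(x_0) \;\Longleftrightarrow\; \exists \, p \text{ of degree } \lfloor \alpha \rfloor \text{ and } C > 0 \text{ such that } |f(x) - p(x - x_0)| \le C \, |x - x_0|^{\alpha} .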

Page 26: Distributed Multi-Scale Data Processing for Sensor Networks

WT Application: Distributed Compression

IDEA: compress measurements by only allowing sensors with large-magnitude WCs to transmit to the sink.

Page 27: Distributed Multi-Scale Data Processing for Sensor Networks

Compression Evaluation

Sample field classes:

Piecewise smooth across a discontinuity

Globally smooth

Page 28: Distributed Multi-Scale Data Processing for Sensor Networks

Compressing Smooth Fields

[plot: average MSE vs. number of coefficients (250 nodes, 100 trials); curves: P only, P + U]

Page 29: Distributed Multi-Scale Data Processing for Sensor Networks

Compressing Piecewise-Smooth Fields

[plot: average MSE vs. number of coefficients (250 nodes, 100 trials); curves: P only, P + U]

Page 30: Distributed Multi-Scale Data Processing for Sensor Networks

Energy vs. Distortion (Smooth Field)

[plot: MSE vs. bottleneck energy in Joules (1000 nodes)]

Page 31: Distributed Multi-Scale Data Processing for Sensor Networks

Energy vs. Distortion (Smooth Field)

Energy to compute WT


Page 32: Distributed Multi-Scale Data Processing for Sensor Networks

Energy vs. Distortion (Smooth Field)

Energy to dump all measurements to sink


Page 33: Distributed Multi-Scale Data Processing for Sensor Networks

Energy vs. Distortion (Smooth Field)

Beneficial operating regime


Page 34: Distributed Multi-Scale Data Processing for Sensor Networks

Energy vs. Distortion (Piecewise-smooth Field)

[plot: MSE vs. bottleneck energy in Joules (1000 nodes)]

Page 35: Distributed Multi-Scale Data Processing for Sensor Networks

WT Application: Distributed De-noising

[plot: PSNR vs. coefficient count; noise dominates at high coefficient counts]

1. in-network de-noising (requires inverse dist. WT)

2. compression with de-noising (guides threshold choice)

Page 36: Distributed Multi-Scale Data Processing for Sensor Networks

Implementation Lessons

Implemented the WT in a Duncan Hall sensor network

Need to support common patterns with abstraction to ease algorithm prototyping

Surveyed IPSN 2003-2006

Distilled common comm. patterns into network application programming interface (API) calls

Page 37: Distributed Multi-Scale Data Processing for Sensor Networks

Address-Based Sending

Send to single address – source node ( ) sends message to a single destination ( ), drawn from the node ID space

Page 38: Distributed Multi-Scale Data Processing for Sensor Networks

Address-Based Sending

Send to list of addresses – source node sends message to multiple destinations, drawn from node ID space

Page 39: Distributed Multi-Scale Data Processing for Sensor Networks

Address-Based Sending

Send to multicast address – source node sends message to a single group address, drawn from the multicast address space

Page 40: Distributed Multi-Scale Data Processing for Sensor Networks

Address-Based API Calls

sendSingle (data, address, effort, hopLimit)

sendList (data, addList, effort, hopLimit)

sendMulti (data, address, effort, hopLimit)

Page 41: Distributed Multi-Scale Data Processing for Sensor Networks

Address-Based API Calls

sendSingle (data, address, effort, hopLimit)

sendList (data, addList, effort, hopLimit)

sendMulti (data, address, effort, hopLimit)

Provide packet fragmentation

Page 42: Distributed Multi-Scale Data Processing for Sensor Networks

Address-Based API Calls

sendSingle (data, address, effort, hopLimit)

sendList (data, addList, effort, hopLimit)

sendMulti (data, address, effort, hopLimit)

Drawn from node-ID address space

Drawn from multicast group address space

Page 43: Distributed Multi-Scale Data Processing for Sensor Networks

Address-Based API Calls

sendSingle (data, address, effort, hopLimit)

sendList (data, addList, effort, hopLimit)

sendMulti (data, address, effort, hopLimit)

Energy-based transmission effort abstraction (per-packet basis)

Page 44: Distributed Multi-Scale Data Processing for Sensor Networks

Address-Based API Calls

sendSingle (data, address, effort, hopLimit)

sendList (data, addList, effort, hopLimit)

sendMulti (data, address, effort, hopLimit)

Limit on number of forwarding hops to destination
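A hypothetical usage sketch of the address-based calls, written as Python-style stubs; the binding, node IDs, payload, and parameter values are illustrative assumptions, not the thesis implementation.

# Hypothetical Python stubs mirroring the address-based API signatures:
def sendSingle(data, address, effort, hopLimit): ...
def sendList(data, addList, effort, hopLimit): ...
def sendMulti(data, address, effort, hopLimit): ...

# Example: a node shares a coefficient with chosen destinations.
payload = b"\x00\x2a"                                           # application-defined bytes
sendSingle(payload, address=17, effort=2, hopLimit=4)           # one node ID
sendList(payload, addList=[3, 9, 12], effort=2, hopLimit=4)     # several node IDs
sendMulti(payload, address=0xF001, effort=1, hopLimit=6)        # multicast group address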

Page 45: Distributed Multi-Scale Data Processing for Sensor Networks

Region-Based Sending

Send to hop radius – source node sends message to all nodes within specified number of radio hops

Page 46: Distributed Multi-Scale Data Processing for Sensor Networks

Region-Based Sending

Send to geographic radius – source node sends message to all nodes within specified geographic distance from its location

Page 47: Distributed Multi-Scale Data Processing for Sensor Networks

Region-Based Sending

Send to circle – source node sends message to nodes (single or many) within a specified geographic distance of specified center

Page 48: Distributed Multi-Scale Data Processing for Sensor Networks

Region-Based Sending

Send to polygon – source node sends message to nodes (single or many) within convex hull of specified list of vertex locations

Page 49: Distributed Multi-Scale Data Processing for Sensor Networks

Region-Based API Calls

sendHopRad (data, hopRad, effort, hopLimit)

sendGeoRad (data, geoRad, outHops, effort, hopLimit)

sendCircle (data, centerX, centerY, radius, single, outHops, effort, hopLimit)

sendPolygon (data, vertCount, vertices, single, outHops, effort, hopLimit)

Page 50: Distributed Multi-Scale Data Processing for Sensor Networks

Region-Based API Calls

sendHopRad (data, hopRad, effort, hopLimit)

sendGeoRad (data, geoRad, outHops, effort, hopLimit)

sendCircle (data, centerX, centerY, radius, single, outHops, effort, hopLimit)

sendPolygon (data, vertCount, vertices, single, outHops, effort, hopLimit)

Region specification

Page 51: Distributed Multi-Scale Data Processing for Sensor Networks

Region-Based API Calls

sendHopRad (data, hopRad, effort, hopLimit)

sendGeoRad (data, geoRad, outHops, effort, hopLimit)

sendCircle (data, centerX, centerY, radius, single, outHops, effort, hopLimit)

sendPolygon (data, vertCount, vertices, single, outHops, effort, hopLimit)

Limits the number of hops a message may be routed outside the region to find a path around voids

Page 52: Distributed Multi-Scale Data Processing for Sensor Networks

Region-Based API Calls

sendHopRad (data, hopRad, effort, hopLimit)

sendGeoRad (data, geoRad, outHops, effort, hopLimit)

sendCircle (data, centerX, centerY, radius, single, outHops, effort, hopLimit)

sendPolygon (data, vertCount, vertices, single, outHops, effort, hopLimit)

Send to single node or multiple nodes in region
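A similar hypothetical sketch for the region-based calls; the coordinates, radii, and stub bodies are illustrative assumptions.

# Hypothetical Python stubs mirroring the region-based API signatures:
def sendHopRad(data, hopRad, effort, hopLimit): ...
def sendGeoRad(data, geoRad, outHops, effort, hopLimit): ...
def sendCircle(data, centerX, centerY, radius, single, outHops, effort, hopLimit): ...
def sendPolygon(data, vertCount, vertices, single, outHops, effort, hopLimit): ...

# Example: flood a query 2 hops out, then reach one node inside a circle.
payload = b"\x01"
sendHopRad(payload, hopRad=2, effort=1, hopLimit=2)
sendCircle(payload, centerX=10.0, centerY=5.0, radius=3.0,
           single=True, outHops=2, effort=1, hopLimit=8)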

Page 53: Distributed Multi-Scale Data Processing for Sensor Networks

Device Hierarchy Sending

Send to sink – source node sends message to sensor network’s data sink

Page 54: Distributed Multi-Scale Data Processing for Sensor Networks

Device Hierarchy Sending

Send to parent – source node sends message to its parent in hierarchy of device classes

Page 55: Distributed Multi-Scale Data Processing for Sensor Networks

Device Hierarchy Sending

Send to children – source node sends message to its children in hierarchy of device classes


Page 57: Distributed Multi-Scale Data Processing for Sensor Networks

Device Hierarchy API Calls

setLevel (level)

sendParent (data, effort, hopLimit)

sendChildren (data, effort, hopLimit)

sendSink (data, effort, hopLimit)

Page 58: Distributed Multi-Scale Data Processing for Sensor Networks

setLevel (level)

sendParent (data, effort, hopLimit)

sendChildren (data, effort, hopLimit)

sendSink (data, effort, hopLimit)

Device Hierarchy API Calls

Allow application to assign hierarchy levels suited to device capabilities

Page 59: Distributed Multi-Scale Data Processing for Sensor Networks

Device Hierarchy API Calls

Each level-L device associated with level-(L-1) parent, level-(L+1) children

setLevel (level)

sendParent (data, effort, hopLimit)

sendChildren (data, effort, hopLimit)

sendSink (data, effort, hopLimit)

Page 60: Distributed Multi-Scale Data Processing for Sensor Networks

Device Hierarchy API Calls

setLevel (level)

sendParent (data, effort, hopLimit)

sendChildren (data, effort, hopLimit)

childList = getChildren()

sendSink (data, effort, hopLimit)

Page 61: Distributed Multi-Scale Data Processing for Sensor Networks

Device Hierarchy API Calls

setLevel (level)

sendParent (data, effort, hopLimit)

sendChildren (data, effort, hopLimit)

childList = getChildren()

sendSink (data, effort, hopLimit)

Each node can send directly to sink (level 1), regardless of level
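A hypothetical sketch of the device-hierarchy calls in the same style; levels, payloads, and the empty getChildren stub are illustrative assumptions.

# Hypothetical Python stubs mirroring the device-hierarchy API signatures:
def setLevel(level): ...
def sendParent(data, effort, hopLimit): ...
def sendChildren(data, effort, hopLimit): ...
def sendSink(data, effort, hopLimit): ...
def getChildren(): return []

# Example: a level-3 node reports to its parent and to the sink (level 1).
setLevel(3)
report = b"\x07"
sendParent(report, effort=1, hopLimit=4)
sendSink(report, effort=2, hopLimit=16)
childList = getChildren()      # e.g., to aggregate reports from level-4 children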

Page 62: Distributed Multi-Scale Data Processing for Sensor Networks

Receive Modes

Finally, three receive modes are supported:

1. receiveTarget – node can examine messages for which it is the destination

2. receiveOverhear – node can passively examine any message transmitted within radio range

3. receiveForward – node can examine and modify any message it forwards

Page 63: Distributed Multi-Scale Data Processing for Sensor Networks

Receive Modes

Finally, three receive modes are supported:

1. receiveTarget – node can examine messages for which it is the destination

2. receiveOverhear – node can passively examine any message transmitted within radio range

3. receiveForward – node can examine and modify any message it forwards

Requires hop-by-hop reassembly of packet fragments
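A hypothetical sketch of how an application might attach handlers to the three receive modes; the registration function and handler signatures are illustrative assumptions only.

# Hypothetical handler registration for the three receive modes:
HANDLERS = {}

def registerReceive(mode, handler):
    HANDLERS[mode] = handler            # associate a callback with a receive mode

def on_target(msg): print("delivered to this node:", msg)
def on_overhear(msg): print("overheard in radio range:", msg)
def on_forward(msg): return msg         # may inspect/modify before forwarding

registerReceive("receiveTarget", on_target)
registerReceive("receiveOverhear", on_overhear)
registerReceive("receiveForward", on_forward)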

Page 64: Distributed Multi-Scale Data Processing for Sensor Networks

Conclusions, Extensions

Developed distributed WT suited to sensor network deployment

Developed network API to support easy algorithm prototyping

• Investigate applicability to other tasks (e.g., query-routing, data recovery)

• Further study tradeoffs between temporal, spatio-temporal processing

• Implement API in resource-efficient manner

Page 65: Distributed Multi-Scale Data Processing for Sensor Networks

This slide left intentionally blank.

Page 66: Distributed Multi-Scale Data Processing for Sensor Networks

Communication Reliability Requirements

For a proper inverse WT at the sink, each , must receive coeffs. from all ,

Options to ensure this include:

1. Require high reliability from routing

2. Repair P, U on link failure

3. Repair P only on link failure

Page 67: Distributed Multi-Scale Data Processing for Sensor Networks

Distributed Compression Protocol

Starting with threshold , iterate (a code sketch follows the list):

1. sink broadcasts

2. each node with sends WC to sink (if not sent already)

3. sink collects WCs, computes new estimate

4. while estimate residual exceeds some tolerance, repeat Step 1 for
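A minimal centralized simulation of this sink-driven protocol, assuming the sink halves the threshold each round and judges convergence by the residual between the received coefficients and the full set; the threshold schedule and stopping rule are illustrative assumptions.

import numpy as np

def sink_collect(wcs, tol, shrink=0.5):
    # Sink broadcasts a threshold; nodes whose |WC| exceeds it (and have not yet
    # sent) respond; the sink lowers the threshold until the residual is small.
    t = np.abs(wcs).max()
    sent = np.zeros(len(wcs), dtype=bool)
    estimate = np.zeros_like(wcs)
    while True:
        newly = (np.abs(wcs) >= t) & ~sent
        estimate[newly] = wcs[newly]
        sent |= newly
        residual = np.linalg.norm(wcs - estimate)   # stand-in for a field-domain residual
        if residual <= tol or sent.all():
            return estimate, t
        t *= shrink

wcs = np.random.default_rng(2).laplace(scale=1.0, size=200)   # sparse-ish coefficients
est, final_t = sink_collect(wcs, tol=1.0)
print(np.count_nonzero(est), "of", len(wcs), "WCs sent; final threshold", round(final_t, 3))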

Page 68: Distributed Multi-Scale Data Processing for Sensor Networks

WT Application: Distributed De-noising

IDEA: exploit signal sparsity in the WCs to remove noise energy from the measurements in the wavelet domain

Denote the original noisy measurements as , where IID

Can estimate each using modified WCs

Page 69: Distributed Multi-Scale Data Processing for Sensor Networks

De-noising Modes

[plot: PSNR vs. coefficient count; noise dominates at high coefficient counts]

Two forms of distributed de-noising:

1. in-network de-noising (requires inverse dist. WT)

2. compression with de-noising (guides threshold choice)

Page 70: Distributed Multi-Scale Data Processing for Sensor Networks

Universal Thresholding

Donoho/Johnstone’s universal threshold for univariate Gaussian noise

Must scale each coefficient to account for the non-orthonormal transform (W) and noise variance :

Each scaling coefficient is modified as:
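A minimal sketch of universal hard thresholding in the centralized, orthonormal case; the per-coefficient scaling for the non-orthonormal transform W is thesis-specific and is not reproduced here, so this is a simplified assumption.

import numpy as np

def universal_hard_threshold(wcs, sigma):
    # Donoho/Johnstone universal threshold t = sigma * sqrt(2 ln N),
    # applied as hard thresholding to the wavelet coefficients.
    t = sigma * np.sqrt(2.0 * np.log(len(wcs)))
    out = wcs.copy()
    out[np.abs(out) < t] = 0.0          # zero out coefficients dominated by noise
    return out, t

rng = np.random.default_rng(3)
clean = np.zeros(512); clean[:8] = rng.uniform(3.0, 6.0, size=8)   # sparse signal
noisy = clean + rng.normal(scale=0.5, size=512)
denoised, t = universal_hard_threshold(noisy, sigma=0.5)
print("threshold", round(t, 2), "- nonzero coefficients kept:", np.count_nonzero(denoised))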

Page 71: Distributed Multi-Scale Data Processing for Sensor Networks

Distributing Universal Hard Thresholding

To estimate , use median absolute deviation of fine-scale wavelet coefficients :

Use a decentralized median protocol: gossiping with local time-series estimates, or single-node collection/dissemination…

NOTE: must only estimate as often as is appropriate to the stationarity of the noise process
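A minimal centralized sketch of the MAD estimate of the noise standard deviation from finest-scale wavelet coefficients; the distributed median computation itself (gossip or collection) is not shown, and the 0.6745 Gaussian-consistency constant is the standard choice rather than a value quoted from the slides.

import numpy as np

def mad_sigma(fine_scale_wcs):
    # Median absolute deviation of the finest-scale WCs, rescaled so the
    # estimator is consistent for Gaussian noise.
    return np.median(np.abs(fine_scale_wcs)) / 0.6745

wcs_fine = np.random.default_rng(4).normal(scale=0.5, size=400)   # mostly noise
print("estimated sigma:", round(mad_sigma(wcs_fine), 3))          # close to 0.5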

Page 72: Distributed Multi-Scale Data Processing for Sensor Networks

De-noising Evaluation

[figure: sample noisy fields for four cases: low noise / smooth, high noise / smooth, low noise / discontinuous, high noise / discontinuous]

Page 73: Distributed Multi-Scale Data Processing for Sensor Networks

De-noising Evaluation (In-Network)

Results averaged over 50 trials of N = 1000 nodes (randomly, uniformly placed)

[table of PSNR results (in dB) for the four cases: low/smooth, high/smooth, low/disc., high/disc.]

Page 74: Distributed Multi-Scale Data Processing for Sensor Networks

De-noising Evaluation (Compression with De-Noising)

[four plots (low/smooth, high/smooth, low/disc., high/disc. cases): PSNR vs. coefficient count; curves: Bayes, universal, original]


Page 76: Distributed Multi-Scale Data Processing for Sensor Networks

De-noising Evaluation (Compression with De-Noising)

(500 nodes, averaged over 50 trials)

[plot (low noise, discontinuous field): PSNR vs. coefficient count; curves: Bayes, universal, original; noise dominates at high coefficient counts]