Distributed network signal processing


Page 1: Distributed network signal processing

s e n s o r w e b s
http://basics.eecs.berkeley.edu/sensorwebs

Sensorwebs group

Kannan Ramchandran
(Pister, Sastry, Anantharam, Jordan, Malik)

Electrical Engineering and Computer Science
University of California at Berkeley

[email protected]
http://www.eecs.berkeley.edu/~kannanr

Distributed network signal processing

Page 2: Distributed network signal processing


DARPA Sensorwebs: creation of a fundamental unifying framework for real-time distributed/decentralized information processing with applications to Sensor Webs, consisting of:

– MEMS (Pister)

– Distributed SP (Ramchandran)

– Distributed Control (Sastry)

– “Real-time” Information Theory (Anantharam)

– Distributed Learning Theory (Jordan)

Page 3: Distributed network signal processing


Dense low-power sensor network attributes

Disaster Management

• Dense clustering of sensors/embedded devices
• Highly correlated but spatially distributed data
• Limited system resources: energy, bandwidth
• Unreliable system components
• Wireless medium: dynamic SNR/interference
• End-goal is key: tracking, detection, inference

Page 4: Distributed network signal processing


Signal processing & comm. system challenges:

Distributed & scalable multi-terminal architectures for:

– coding, clustering, tracking, estimation, detection;

Distributed sensor fusion based on statistical sensor data models;

Integration of layers in network stack:

– joint source-network coding

– joint coding/routing

Reliability through diversity in representation & transmission

Energy optimization: computation vs. transmission cost:

– ~100 nJ/bit transmitted vs. ~1 pJ/instruction (HW) and ~1 nJ/instruction (SW); see the sketch below.
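To make the trade-off concrete, the following Python sketch compares the energy spent computing a compressed representation with the radio energy it saves, using the per-unit costs quoted above. The constants and the function name worth_compressing are illustrative assumptions, not part of the original material.

# Rough energy trade-off: spend instructions to save transmitted bits.
# Per-unit costs taken from the slide (assumed representative values).
E_TX_PER_BIT = 100e-9   # ~100 nJ per transmitted bit
E_INST_HW = 1e-12       # ~1 pJ per instruction in dedicated hardware
E_INST_SW = 1e-9        # ~1 nJ per instruction in software

def worth_compressing(bits_saved, instructions_needed, e_inst=E_INST_SW):
    """True if the radio energy saved exceeds the energy spent computing."""
    return bits_saved * E_TX_PER_BIT > instructions_needed * e_inst

# Saving one transmitted bit pays for up to ~100 software instructions,
# or ~100,000 instructions in hardware.
print(worth_compressing(bits_saved=1, instructions_needed=50))    # True
print(worth_compressing(bits_saved=1, instructions_needed=200))   # False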

Page 5: Distributed network signal processing


Roadmap

Distributed Compression: basics, new results

Networking aspects: packet aggregation

Reliability through diversity: multiple descriptions

Distributed multimedia streaming: robust, scalable architecture

Page 6: Distributed network signal processing


Real-world scenario: Blackouts project

Near real-time room condition monitoring using sensor motes in Cory Hall (Berkeley campus): data goes online.

All sensors periodically route readings to central (yellow) node.

The routing topology strongly multiplies the redundancy in the collected data.

http://blackouts.eecs.berkeley.edu

Page 7: Distributed network signal processing


Distributed compression: basic ideas

Suppose X, Y correlated

Y available at decoder but not at encoder

How to compress X close to H(X|Y)?

Key idea: discount I(X;Y).

H(X|Y) = H(X) − I(X;Y)


Page 8: Distributed network signal processing


Information-theory: binning argument

Make a main codebook of all typical sequences: 2^{nH(X)} elements for X and 2^{nH(Y)} for Y.

Partition the X codebook into 2^{nH(X|Y)} bins.

On observing X^n, transmit the index of the bin it belongs to.

The decoder finds the member of the bin that is jointly typical with Y^n.

Can extend to “symmetric cases”

Slepian-Wolf (’73)

Page 9: Distributed network signal processing


Symmetric case: joint binning

Rates limited by: Rx ≥ H(X|Y), Ry ≥ H(Y|X), Rx + Ry ≥ H(X,Y)

Figure: the Slepian-Wolf rate region in the (Rx, Ry) plane, bounded by Rx = H(X|Y), Ry = H(Y|X) and the sum-rate line Rx + Ry = H(X,Y); both X and Y are binned.

Constraint: the product of the bin sizes must not exceed 2^{nI(X;Y)}.

Page 10: Distributed network signal processing


Simple binary example

X and Y are length-3 binary words (all values equally likely). Correlation: the Hamming distance between X and Y is at most 1.

Example: when X = [0 1 0], Y ∈ {[0 1 0], [0 1 1], [0 0 0], [1 1 0]}.

Figure: X → Encoder → R = H(X|Y) bits → Decoder → X̂, with Y available at the encoder and the decoder.

• X and Y correlated
• Y at encoder and decoder

System 1

X+Y can only take the values 000, 001, 010, 100.

Need 2 bits to index this.

Page 11: Distributed network signal processing


What is the best that one can do?

Figure: X → Encoder → R = H(X|Y) bits → Decoder → X̂, with Y available only at the decoder.

• X and Y correlated
• Y at decoder only

System 2

The answer is still 2 bits!

How? Pair up the 8 values into 4 cosets, each consisting of two words at Hamming distance 3:

{000, 111} (Coset-1), {001, 110}, {010, 101}, {100, 011}

Page 12: Distributed network signal processing


• Encoder -> index of the coset containing X.
• Decoder reconstructs X in the given coset.

Note:
• Coset-1 -> repetition code.
• Each coset -> unique “syndrome”
• DIstributed Source Coding Using Syndromes (DISCUS)

Coset-1: {000, 111}
Coset-2: {100, 011}
Coset-3: {010, 101}
Coset-4: {001, 110}
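The 3-bit example above can be written out in a few lines of Python. This is a minimal sketch of the coset/syndrome idea only; the helper names (syndrome, discus_decode) and the particular parity-check matrix are illustrative assumptions, not code from the project.

from itertools import product

# Parity-check matrix of the (3,1) repetition code {000, 111}.
H = [(1, 1, 0),
     (1, 0, 1)]

def syndrome(x):
    """2-bit syndrome of a length-3 word: identifies its coset."""
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

def discus_decode(s, y):
    """Given the transmitted syndrome s and side information y
    (Hamming distance to the true x is at most 1), return x."""
    candidates = [x for x in product((0, 1), repeat=3)
                  if syndrome(x) == s
                  and sum(a != b for a, b in zip(x, y)) <= 1]
    assert len(candidates) == 1   # coset members are distance 3 apart
    return candidates[0]

x = (0, 1, 0)                     # source reading
y = (0, 1, 1)                     # side information, differs in <= 1 bit
s = syndrome(x)                   # encoder sends only these 2 bits
print(s, discus_decode(s, y))     # -> (1, 0) and the recovered (0, 1, 0)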

Page 13: Distributed network signal processing


General block diagram of DISCUS

Encoder: (1) find the quantization index I of the source X using the source codebook; (2) compute the syndrome U of the quantized codeword and transmit it.

Decoder: (3) find the codeword closest to the correlated source Y in coset U; (4) optimally estimate the source as X̂.

DISCUS: a constructive approach to distributed compression

• Intricate interplay between source coding, channel coding and estimation theory: can leverage the latest advances in all areas.
• 7-15 dB gains in reconstruction SNR over theoretically optimal strategies that ignore correlation, for typical correlated sources.
• Applications to digital upgrade of analog radio/television.

Page 14: Distributed network signal processing


Continuous case: practical quantizer design issues

• Consider the following coset example: an 8-level scalar quantizer with cells 0-7 labeled A B C D A B C D.
• The source X and the side information Y differ by at most 1 cell.
• Send only the index of the “coset”: A, B, C, D.
• The decoder decides which member of the coset is the correct answer.
• We have compressed from 3 bits to 2 bits.
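A small numerical sketch of the 8-level example (Python). The encoder sends the quantizer cell index modulo 4 (the coset label) and the decoder picks the coset member closest to the side information; the unit cell width and the function names are assumptions made for illustration.

# 8-level uniform scalar quantizer on [0, 8); cells 0..7, labels A B C D A B C D.
NUM_CELLS, NUM_COSETS = 8, 4

def encode(x):
    """Quantize x and send only the 2-bit coset label (cell index mod 4)."""
    cell = min(int(x), NUM_CELLS - 1)
    return cell % NUM_COSETS

def decode(coset, y):
    """Side information y falls at most 1 cell away from x's cell:
    pick the coset member whose cell is closest to y's cell."""
    y_cell = min(int(y), NUM_CELLS - 1)
    members = range(coset, NUM_CELLS, NUM_COSETS)   # e.g. coset 1 -> cells 1, 5
    best = min(members, key=lambda c: abs(c - y_cell))
    return best + 0.5                               # reconstruct at the cell midpoint

x, y = 5.3, 4.7                    # source and side information, one cell apart
label = encode(x)                  # 2 bits instead of 3
print(label, decode(label, y))     # -> 1, 5.5 (cell 5 recovered correctly)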

Page 15: Distributed network signal processing


Optimizing quantizer and rate

Figure: the source pdf f_X(x) and its periodized version f*_X(x), with bin boundaries at ..., -2d*, -d*, 0, d*, 2d*, ... and the coset labels ABCD repeating with period d*.

f*_X(x) = Σ_i f_X(x + i·d*)

Important note: the decoder cannot differentiate between x and x + d* (the repeating ABCD labels).

Therefore: must combine the statistics of the members of each bin.

Use pdf periodization: repeat the pdf with period d*, and design the quantizer using f*_X(x).

Page 16: Distributed network signal processing


Caveats: choice of d*

If too small: high coset error

If too large: high quantization error

Figure: the pdf f(x) periodized with a small d* (copies at 0, d*, 2d*, 3d*, 4d*) versus a large d* (copies at 0, d*, 2d*).

Kusuma & Ramchandran ‘01

Page 17: Distributed network signal processing


Dynamic bit allocation

Consider iterative method: assign one bit at a time

Can either:
• improve quantization
• improve code performance

Iteratively assign using rules of thumb.

Multiple levels of protection

Figure: protection levels across the quantization index bit planes, from the most significant index to the least significant: some planes are not transmitted, some send only a syndrome, and some send the full index (protection needed).
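One possible reading of the "assign one bit at a time" rule is a greedy loop like the Python sketch below: at each step, spend the next bit wherever the estimated distortion drops most. The cost model and the estimate_distortion callback are hypothetical placeholders, not the authors' actual rule of thumb.

def allocate_bits(total_bits, estimate_distortion):
    """Greedy allocation: spend one bit at a time where the estimated
    end-to-end distortion (quantization error plus coset decoding error)
    drops the most.  estimate_distortion(q_bits, code_bits) is assumed
    to be supplied by the designer, e.g. from training data."""
    q_bits, code_bits = 0, 0
    for _ in range(total_bits):
        d_quant = estimate_distortion(q_bits + 1, code_bits)  # refine quantizer
        d_code = estimate_distortion(q_bits, code_bits + 1)   # strengthen code
        if d_quant <= d_code:
            q_bits += 1      # finer cells (may also need more protection later)
        else:
            code_bits += 1   # better protection against coset errors
    return q_bits, code_bits

# Example with a toy distortion model: quantization error ~ 4**(-q),
# residual coset error ~ 0.5**c.
print(allocate_bits(8, lambda q, c: 4.0 ** (-q) + 0.5 ** c))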

Page 18: Distributed network signal processing


For example: increase quantization resolution (which needs more protection too!) OR increase code performance.


Page 19: Distributed network signal processing


New results: distributed lossy compression (Pradhan & Ramchandran ’01)

Suppose X, Y are correlated as X = Y + N.

Wyner-Ziv (’78): no theoretical performance loss due to distributed processing (no X-Y communication) if X and Y are jointly Gaussian.

New results (Pradhan, Chou & Ramchandran ’01):
• No performance loss due to distributed processing for arbitrary X, Y if N is Gaussian.
• Fundamental duality between distributed coding and data hiding (encoder/decoder functions can be swapped!)

Page 20: Distributed network signal processing


Distributed sensor fusion under bandwidth constraints:

It is suboptimal for each sensor to form E(X|Yi) as in the single-sensor case.

Optimal distributed strategy for the Gaussian case:
• Compress the Yi's without estimating X individually.
• Exploit the correlation structure to reduce the transmitted bit rate.
• DISCUS: multi-sensor fusion under bandwidth constraints.

Figure: sensors observe Yi = X + Ni of the scene X and send their observations at rates R1 and R2 to the fusion center, which forms the estimate X̂.
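For intuition about the fusion step, here is a small linear-MMSE sketch for the Gaussian model Yi = X + Ni (Python/NumPy). It ignores the rate constraints (the DISCUS encoding of the Yi's) and uses made-up variances; it only illustrates what the fusion center computes.

import numpy as np

# Gaussian model: X ~ N(0, var_x), Y_i = X + N_i with N_i ~ N(0, var_n[i]).
var_x = 1.0
var_n = np.array([0.2, 0.5])

def fuse(y, var_x, var_n):
    """Linear MMSE estimate E[X | Y_1, ..., Y_k] for the additive model."""
    w = 1.0 / var_n                    # precision of each observation
    post_prec = 1.0 / var_x + w.sum()  # posterior precision of X
    return (w * y).sum() / post_prec

rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(var_x))
y = x + rng.normal(0.0, np.sqrt(var_n))
print(x, fuse(y, var_x, var_n))        # true X and its fused estimate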

Page 21: Distributed network signal processing


Enabling DISCUS for sensor networks

Use clustering to enable network deployment (hierarchies)

Learn the correlation structure (training-based or dynamically) and optimize the quantizer and code. Good news: motes need not be aware of the clustering.

Elect a “cluster leader” – can swap periodically.

Localization increases robustness to changing correlation structure (everything is relative to leader).

Page 22: Distributed network signal processing


Cosets: A = {000, 111}, B = {001, 110}, C = {010, 101}, D = {100, 011}.

Figure: a 4-node network with gateway node 1; nodes 3 and 4 reach node 1 through node 2. The nodes' 3-bit readings are 011, 010, 000 and 110, and each node reports only the coset index of its reading.

• Gateway node 1 first decodes node 2.
• It then recursively decodes nodes 3, 4.

If each link is ~1 m, the network does 15 bit-meters of work without DISCUS.

With DISCUS, the network does only 10 bit-meters of work.

Page 23: Distributed network signal processing


• Gateway node 2 decodes nodes 3, 4.
• Node 2 sends the deltas w.r.t. nodes 3, 4 as well as its own syndrome.

If each link is ~1 m, the network does 15 bit-meters of work without DISCUS.

With DISCUS, the network does only 10 bit-meters of work.

Load balancing in computation.

Figure: node 2 forwards Δ1, Δ2 (the deltas for nodes 3 and 4) plus its own syndrome toward gateway node 1.

Page 24: Distributed network signal processing


Where should aggregation be done?

Figure: node 2 as the aggregation point, one hop from nodes 1, 3 and 4.

Node 2 collects all the data: nodes 1, 3, 4 send 2-bit syndromes. Total work done by the network drops to 6 bit-meters!
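The bit-meter figures on these three slides can be reproduced with a tiny accounting script (Python). The topology and packet sizes are inferred from the slides (3-bit readings, 2-bit syndromes, 1 m links, nodes 3 and 4 two hops from gateway node 1 via node 2), so treat this as an illustrative reconstruction rather than the authors' exact model.

def bit_meters(transmissions):
    """Sum of (bits sent) x (metres travelled) over all transmissions."""
    return sum(bits * metres for bits, metres in transmissions)

# No DISCUS: every node forwards its full 3-bit reading to gateway node 1.
plain = [(3, 1),           # node 2, one hop
         (3, 2), (3, 2)]   # nodes 3 and 4, two hops via node 2

# DISCUS at the gateway: node 1 uses its own reading as side information,
# so every node sends only a 2-bit syndrome.
discus_gateway = [(2, 1), (2, 2), (2, 2)]

# Aggregation at node 2: nodes 1, 3, 4 each send a 2-bit syndrome one hop.
discus_aggregate = [(2, 1), (2, 1), (2, 1)]

print(bit_meters(plain),             # -> 15
      bit_meters(discus_gateway),    # -> 10
      bit_meters(discus_aggregate))  # -> 6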


Page 25: Distributed network signal processing


Network deployment: aggregation (Picoradio project BWRC)

Sensor nodes from the region of interest (ROI) reply to the query.

The ROI sends out a single aggregate packet.

Figure: the controller queries the sensors; a border node of the ROI returns the aggregate packet.

Page 26: Distributed network signal processing


Integration into routing protocols

• Traditional way: find the best route and use it always.
• Probabilistic path selection is superior.
• Can incorporate the data correlation structure into the path weights.

Figure: two example paths from source to destination (costs of 1 J/bit and 1.1 J/bit; 10 nJ and 30 nJ per packet), selected with probabilities p1 = 0.75 and p2 = 0.25 for an expected cost of (0.75*10) + (0.25*30) = 15 nJ; a local rule assigns per-hop forwarding probabilities (e.g. 0.6/0.4 and 0.7/0.3) as the data propagates.
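A minimal sketch of probabilistic path selection (Python), matching the expected-cost arithmetic in the figure. The costs and probabilities are the figure's example numbers; the function names are assumptions.

import random

# Two candidate paths with per-packet energy cost (nJ) and selection probability.
paths = [("path-1", 10.0, 0.75),
         ("path-2", 30.0, 0.25)]

def expected_cost(paths):
    return sum(cost * p for _, cost, p in paths)

def pick_path(paths):
    """Choose a path at random according to its weight, instead of always
    using the single cheapest route (spreads load and energy drain)."""
    names, costs, probs = zip(*paths)
    return random.choices(names, weights=probs, k=1)[0]

print(expected_cost(paths))   # -> 15.0 nJ, as on the slide
print(pick_path(paths))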

Page 27: Distributed network signal processing


Network Coding: the case for “smart motes”

• The “store and forward” philosophy of current packet routers can be inefficient.
• “Smart motes” can significantly decrease system energy requirements:

Figure: Information A and Information B traverse the network; instead of forwarding A and B in separate transmissions, an intermediate mote sends the combination A+B, and each receiver recovers the packet it is missing.
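The A+B combination is simply a bitwise XOR, as in this short Python sketch; the packet contents are made up for illustration.

# Network coding on a shared link: send A XOR B once instead of
# forwarding A and B separately.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

packet_a = b"temp=21C"
packet_b = b"hum=40 %"       # same length as packet_a for the toy example

coded = xor(packet_a, packet_b)   # the single transmitted packet "A+B"

# A receiver that already holds A recovers B, and vice versa.
print(xor(coded, packet_a))       # -> b"hum=40 %"
print(xor(coded, packet_b))       # -> b"temp=21C"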

Page 28: Distributed network signal processing


Distributed Media Streaming from Multiple Servers: new paradigm

• Client is served by multiple servers
• Advantages:

• Robust to link/server failure (useful in battlefield!)

• Load balancing

Figure: a scalable media source feeds Server 1, Server 2 and Server 3, which jointly stream to Client 1 and Client 2.

Page 29: Distributed network signal processing


Robust transmission: the Multiple Descriptions Problem

- Multiple levels of quality delivered to the destination. (N+1 levels for the N-channel case)

Figure: the MD encoder maps X into Description 1 and Description 2; the central decoder (both descriptions) outputs X0, side decoder 1 outputs X1, side decoder 2 outputs X2. Distortion: X1 = X2, and X0 < X1, X2.

Page 30: Distributed network signal processing


Emerging Multimedia Compression Standards

• Multi-resolution (MR) source coding, e.g., JPEG-2000 (wavelets), MPEG-4.

• Bit stream arranged in importance layers (progressive)

Page 31: Distributed network signal processing


A Review of Erasure Codes

Erasure codes (n, k, d): recovery from reception of partial data.

n = block length, k = log(# of codewords); corrects (d − 1) erasures.

(n, k) Maximum Distance Separable (MDS) codes: d = n − k + 1. MDS => any k received channel symbols suffice to recover the k source symbols.

Figure: source → channel encoding → packets → transmission → a subset of the packets is received → channel decoding → source.
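As a concrete toy instance of an MDS code, a single-parity (n = 3, k = 2) code has d = 2 = n − k + 1, so any 2 of the 3 packets recover the data. A minimal Python sketch, with made-up packet contents and helper names:

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(d1: bytes, d2: bytes):
    """(3,2) single-parity MDS code: two data packets plus their XOR."""
    return [d1, d2, xor(d1, d2)]

def decode(received):
    """received maps packet index (0,1,2) -> payload; any 2 of 3 suffice."""
    if 0 in received and 1 in received:
        return received[0], received[1]
    if 0 in received:                                     # lost d2: rebuild from parity
        return received[0], xor(received[2], received[0])
    return xor(received[2], received[1]), received[1]     # lost d1

d1, d2 = b"blockA", b"blockB"
pkts = encode(d1, d2)
print(decode({0: pkts[0], 2: pkts[2]}))   # d2 lost -> (b'blockA', b'blockB')
print(decode({1: pkts[1], 2: pkts[2]}))   # d1 lost -> (b'blockA', b'blockB')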

Page 32: Distributed network signal processing


Robust Source Coding

Figure: MD-FEC packetization: the progressive bit stream is cut at rate markers R1, R2, ..., RN and spread across N packets (1, 2, 3, ..., N).

• MD-FEC (Multiple Descriptions through Forward Error Correction codes): the packet stream is insensitive to the ‘position’ of the loss.
• MD-FEC rate markers can be optimized to dynamically adapt to both the instantaneous channel conditions and the source content. (Puri & Ramchandran ’00)
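A sketch of the MD-FEC layout idea in Python: the bytes between rate markers R_{k-1} and R_k form layer k, which would be split into k source symbols and expanded to N symbols with an (N, k) MDS erasure code, one symbol per packet. Only the layout arithmetic is shown; the erasure coding itself and the optimization of the rate markers are omitted, and the marker values below are made up.

def mdfec_layout(rate_markers):
    """For rate markers R1 <= R2 <= ... <= RN (in bytes), return for each
    layer k the (N, k) code that would protect it and the share of the
    layer carried by each packet."""
    n = len(rate_markers)
    layout, prev = [], 0
    for k, r in enumerate(rate_markers, start=1):
        layer_bytes = r - prev
        layout.append({"layer": k,
                       "code": (n, k),                    # (N, k) MDS code
                       "symbol_bytes": layer_bytes / k})  # per-packet share
        prev = r
    return layout

# Example: N = 3 packets, markers at 300, 800 and 1400 bytes of the stream.
for row in mdfec_layout([300, 800, 1400]):
    print(row)

# Receiving any k of the 3 packets decodes layers 1..k, i.e. the stream up
# to rate marker R_k: the position of the loss does not matter.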

Page 33: Distributed network signal processing


Outline of Solution (Single Receiver Case)

Use a progressive bit stream to ensure graceful degradation.

Find the loss rates and total bandwidth from each server to the client and calculate the “net” loss rate and bandwidth to the client.

Apply the “MD-FEC” framework now that the problem is reduced to a point-to-point problem.

Page 34: Distributed network signal processing


Distributed streaming paradigm: end-to-end system architecture

Figure: the camera's raw video stream enters the MR source encoder, which produces a progressively coded video stream; MD-FEC transcoders 1..m, each driven by channel-state feedback (channel state 1..m), produce MD video streams 1..m that travel over the network to the receiver.

Page 35: Distributed network signal processing


“Robustified” distributed compression

Figure: encoders F1, F2, F3 each send one packet (R bits/sample) describing X1, X2, X3 over the network; decoders G12, G13, G23 and G123 reconstruct (X1,X2), (X1,X3), (X2,X3) or (X1,X2,X3) from whichever packets arrive.

Consider the symmetric case: H(Xi) = H1, H(Xi,Xj) = H2, H(Xi,Xj,Xk) = H3.
• R = H3/3: fully distributed, maximally compressed, not robust to link loss.
• R = H2/2: fully distributed, minimally redundant, robust to any one link loss.

Page 36: Distributed network signal processing


Future challenges:

Integrate “distributed learning” aspect into framework

Extend to arbitrary correlation structures

Incorporate accurate statistical sensor models:

Wavelet mixture models for audio-visual data

Retain end-goal while optimizing system components:

– e.g.: estimation, detection, tracking, routing, transmission;

– impose bandwidth/energy/computational constraints

Progress on network information theory & constructive algorithms

Extend theory/algorithms for incorporating robustness/reliability

Target specific application scenarios of interest.

Page 37: Distributed network signal processing


1-D Vehicle tracking

For each vehicle there are two parameters:
1. t0 – the time the vehicle passes through the point p = 0
2. v – the speed of the vehicle (assume constant velocity)

Node i at position pi sees the vehicle at time ti: ti = t0 + (1/v)·pi

Combining all nodes, Ax = b with:

\[
A = \begin{bmatrix} p_1 & 1 \\ p_2 & 1 \\ p_3 & 1 \\ \vdots & \vdots \\ p_n & 1 \end{bmatrix},
\qquad
x = \begin{bmatrix} 1/v \\ t_0 \end{bmatrix},
\qquad
b = \begin{bmatrix} t_1 \\ t_2 \\ t_3 \\ \vdots \\ t_n \end{bmatrix}
\]

x = (A^T A)^{-1} A^T b. Matrix inversion is only a 2x2!
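The least-squares fit is a two-line NumPy computation. A minimal sketch with made-up node positions and a simulated vehicle (the values of t0 and v are assumptions for the example):

import numpy as np

# Node positions (m) and the times (s) at which each node saw the vehicle.
p = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
t0_true, v_true = 2.0, 4.0          # ground truth used only to simulate data
t = t0_true + p / v_true + np.random.default_rng(1).normal(0, 0.05, p.size)

# t_i = t0 + (1/v) p_i  =>  A x = b with x = [1/v, t0]
A = np.column_stack([p, np.ones_like(p)])
x, *_ = np.linalg.lstsq(A, t, rcond=None)   # same as (A^T A)^{-1} A^T b
inv_v, t0 = x
print(1.0 / inv_v, t0)                      # approximately 4.0 m/s and 2.0 s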

Page 38: Distributed network signal processing


Update Node Positions

Once we calculate v, go back and make a new guess at each pi:

ti = (1/v)·pi_new + t0   =>   pi_new = (ti − t0)·v

Update according to some non-catastrophic weighted rule like:

pi_next = (1/num_events)·pi_new + (1 − 1/num_events)·pi_old

Iterate: make an initial guess for the pi's → detect the vehicle (fix the pi's) → update the positions (fix t0, v) → better results as time progresses.

Page 39: Distributed network signal processing


Dynamic Data Fusion

Use a node-pixel analogy to exploit algorithms from computer vision: each sensor reading is akin to a pixel intensity at some (x, y) location.

By interpolating the node positions to regular grid points, standard differentiation techniques are used to determine the direction of flow. This can be done in a distributed fashion.

Left: Chemical plume is tracked through network
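A small sketch of the node-pixel idea (Python with NumPy/SciPy): scattered readings are interpolated onto a regular grid and differentiated to obtain the local gradient of the field. The reading model and grid are made up; griddata and np.gradient are standard library calls, but this is only an illustration of the approach, not the project's code.

import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
nodes = rng.uniform(-6, 6, size=(50, 2))      # sensor (x, y) positions
readings = np.exp(-((nodes[:, 0] - 1) ** 2 + nodes[:, 1] ** 2) / 4)  # e.g. plume concentration

# Interpolate the scattered "pixels" onto a regular grid.
gx, gy = np.meshgrid(np.linspace(-6, 6, 25), np.linspace(-6, 6, 25))
field = griddata(nodes, readings, (gx, gy), method="linear")

# Standard differentiation gives the concentration gradient, i.e. the
# direction along which the plume front is moving.
dy, dx = np.gradient(field)
print(np.nanmean(dx), np.nanmean(dy))         # average flow direction estimate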
