
Page 1: Lecture 4:  Routing

Lecture 4: Routing

Anish Arora

CIS788.11J

Introduction to Wireless Sensor Networks

Material uses slides from Wattenhofer, Gouda, Estrin

Page 2: Lecture 4:  Routing

2

Routing Overview

• Patterns:

Convergecast

one shot subscription or persistent subscription

subscriber in-network or from base station

if in-network and one-shot subscriber, then subscriber could be moving

Broadcast

potentially directed/local

potentially with feedback (PIF)

potentially scoped (e.g. data centric routing)

Page 3: Lecture 4:  Routing

3

Routing Overview

• Model assumptions:

Availability of locations

Density/planarity

Node/link heterogeneity

• Requirements:

Latency

Reliability

Energy

Scalability

Convergence

Page 4: Lecture 4:  Routing

5

Convergecast Protocol Classification

• Distance vector protocols. Key issues:

Link selection

Route metric (see the ETX sketch after this list):

o Expected number of transmissions on path

o Expected transmission time

o Distance advanced towards destination

• Greedy protocols: issue of dealing with holes

• Geometric protocols

• Randomized protocols

• Gradient-descent protocols

• Multi-path protocols, even flooding

• Hierarchical protocols (potentially exploiting clusters)
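
For the first of those route metrics, a minimal sketch of the usual ETX computation (the formula ETX = 1/(d_fwd * d_rev) follows De Couto et al.'s definition and is not given on the slide; delivery ratios are assumed to come from link-level probing):

```python
def link_etx(d_fwd, d_rev):
    """Expected transmissions on one link: the data frame must get across
    (probability d_fwd) and the ACK must come back (probability d_rev)."""
    if d_fwd <= 0.0 or d_rev <= 0.0:
        return float("inf")              # unusable link
    return 1.0 / (d_fwd * d_rev)

def path_etx(links):
    """Route metric: sum of per-link ETX values along the candidate path."""
    return sum(link_etx(d_fwd, d_rev) for d_fwd, d_rev in links)

# Example: a 3-hop path with measured (forward, reverse) delivery ratios
print(path_etx([(0.9, 0.95), (0.8, 0.8), (1.0, 0.9)]))   # ~3.84
```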

Page 5: Lecture 4:  Routing

6

Location-based/Geometric/Geographic Convergecast

• Sensor nodes addressed according to their locations

• No routing tables stored in nodes!

Kleinrock et al. (MFR et al.): geometric routing proposed

Kranakis, Singh, Urrutia (Face Routing): first correct algorithm

Bose, Morin, Stojmenovic, Urrutia (GFG): first average-case efficient algorithm (simulation, but no proof)

Karp, Kung (GPSR): a new name for GFG

Kuhn, Wattenhofer, Zollinger (GOAFR): worst-case optimal and average-case efficient; uses percolation theory

Page 6: Lecture 4:  Routing

7

Correct Geometric Routing: Face Routing

• [Kranakis, Singh, Urrutia CCCG 1999]

Page 7: Lecture 4:  Routing

8

Face Routing

• Remark: Planar graph can easily (and locally!) be computed with the Gabriel Graph, for example
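
To make the remark concrete, here is a minimal sketch of the local Gabriel Graph test (my own illustration, not from the slides): node u keeps its edge to neighbor v only if no other neighbor lies strictly inside the disk whose diameter is the segment uv, which u can check from one-hop position information alone.

```python
import math

def gabriel_neighbors(u, neighbors):
    """Locally planarize: return the neighbors of u kept in the Gabriel Graph.

    u -- (x, y) position of this node
    neighbors -- dict mapping neighbor id to its (x, y) position
    Edge (u, v) survives iff no other neighbor w satisfies
    dist(w, midpoint(u, v)) < |uv| / 2.
    """
    kept = []
    for v_id, v in neighbors.items():
        mid = ((u[0] + v[0]) / 2.0, (u[1] + v[1]) / 2.0)
        radius = math.dist(u, v) / 2.0
        blocked = any(w_id != v_id and math.dist(w, mid) < radius
                      for w_id, w in neighbors.items())
        if not blocked:
            kept.append(v_id)
    return kept
```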

Pages 8-14: Face Routing (figure-only slides stepping an example packet along successive faces from source s towards destination t)

Page 15: Lecture 4:  Routing

16

Face Routing Properties

• All necessary information is stored in the message: source and destination positions, and the point of transition to the next face

• Completely local: knowledge of direct neighbors’ positions is sufficient; faces are implicit, traversed with the “right hand rule” (sketched below)

• Planarity of the graph is computed locally (not an assumption), for instance with the Gabriel Graph
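
A minimal sketch of that right-hand-rule step (my own illustration; orientation conventions differ between papers): arriving at node cur from node prev, the packet leaves on the first edge encountered when sweeping counterclockwise from the incoming edge, which walks the boundary of the current face.

```python
import math

def next_hop_rhr(cur, prev, neighbor_positions):
    """Right-hand rule: at position cur, having arrived from prev, pick the
    neighbor whose edge comes first counterclockwise after edge (cur, prev)."""
    base = math.atan2(prev[1] - cur[1], prev[0] - cur[0])

    def ccw_from_incoming(p):
        a = (math.atan2(p[1] - cur[1], p[0] - cur[0]) - base) % (2 * math.pi)
        return a if a > 0.0 else 2 * math.pi   # never pick the incoming direction first

    candidates = [p for p in neighbor_positions if p != prev]
    return min(candidates, key=ccw_from_incoming) if candidates else prev
```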

Page 16: Lecture 4:  Routing

17

Efficiency in Face Routing

• Theorem: Face Routing reaches the destination in O(n) steps

• But: can perform poorly compared to the optimal route

• Need to bound search area adaptively

Page 17: Lecture 4:  Routing

19

Grid Routing

Key ideas:

Embeds tree in logical grid

Well suited for bursty convergecast traffic

Avoids fast link reliability estimation

o Preselects innerband links

o Focuses only on up/down link detection

Attempts to spread load uniformly

o Parent chosen randomly and rotated periodically

Deals with holes randomly

Cycles avoided by limiting number of diversions

Base station snoops

Page 18: Lecture 4:  Routing

20

The Logical Grid

• The motes are named as if they form an M*N logical grid

• Each mote is named by a pair (i, j) where

i = 0 .. M-1 and j = 0 .. N-1

• The base station is mote (0,0)

• Physical connectivity between motes is a superset of their connectivity in the logical grid:

(Figure: a 3x2 logical grid of motes (0,0), (0,1), (1,0), (1,1), (2,0), (2,1) next to the corresponding physical topology)

Page 19: Lecture 4:  Routing

21

Potential Parents

• A mote (i, j) dominates another mote (x, y) iff i≥x and j≥y

• If (i, j) dominates (x, y), then distance from (i, j) to (x, y) is (i-x)+(j-y)

• Let H be a “small” positive integer, called the hop size

A potential parent of a mote (i, j) is a mote (x, y) such that (i, j) dominates (x, y) and distance from (i, j) to (x, y) = H

(except in special cases where (i,j) is close to some edge of the grid)
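
A minimal sketch of this definition in code (ignoring the special cases near the grid edges mentioned above):

```python
def potential_parents(i, j, H, M, N):
    """Potential parents of mote (i, j) in an M x N logical grid: motes (x, y)
    that (i, j) dominates (x <= i, y <= j) at grid distance (i-x) + (j-y) == H."""
    parents = []
    for x in range(max(0, i - H), i + 1):
        y = j - (H - (i - x))            # forces (i - x) + (j - y) == H
        if 0 <= y <= j and x < M and y < N:
            parents.append((x, y))
    return parents

# Example: with hop size H = 2 in a 10 x 10 grid, mote (3, 3) has
# potential parents (1, 3), (2, 2) and (3, 1).
print(potential_parents(3, 3, 2, 10, 10))
```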

Page 20: Lecture 4:  Routing

22

Communication Pattern

• Each mote (i, j) can send msgs whose ultimate destination is mote

(0, 0)

• The motes need to maintain an incoming spanning tree whose root

is (0, 0): each mote maintains a pointer to its parent

• When a mote (i, j) has a msg, it forwards the msg to its parent. This

continues until the msg reaches mote (0, 0)

(H = 2)

Page 21: Lecture 4:  Routing

23

Protocol Message

• When a mote (i, j) has a parent, then every random period, whose average is 20 seconds, mote (i, j) sends the msg: connected(i, j)

Otherwise, mote (i, j) does nothing

• Every random period, whose average is 20 seconds, mote (0, 0) sends the msg: connected(0, 0)

Page 22: Lecture 4:  Routing

24

Maintaining a Parent

• Initially, no mote has a parent

• When a mote (i, j) receives a connected(x, y) msg, where (x, y) is a potential parent of (i, j), (i, j) makes (x, y) its (new) parent

• Thus, the parent of a mote is changed, in a round robin fashion, among the active potential parents of that mote – load balancing and fast fault recovery

Page 23: Lecture 4:  Routing

25

Losing the Parent

• If a mote (i, j) does not receive any connected(x, y) msg from any of its potential parents for 120 seconds, then (i, j) loses its parent

• If a mote (i, j) has no parent and receives a connected(x, y) msg, where (x, y) is not a potential parent of (i, j), then (i, j) makes (x, y) its “foster parent”; but (i, j) will not send connected(i, j) msgs as long as (i, j) has no parent
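
Putting the timers and rules of the last three slides together, a minimal event-driven sketch (class and method names are mine; send stands in for the actual radio primitive, and on_timer is assumed to be called once per second):

```python
import random

AVG_BEACON_PERIOD = 20    # seconds, average period between connected() msgs
PARENT_TIMEOUT = 120      # seconds of silence before the parent is lost

class GridMote:
    def __init__(self, i, j, potential_parents):
        self.id = (i, j)
        self.potential_parents = set(potential_parents)
        self.parent = None          # (x, y) of current parent, or None
        self.foster_parent = None
        self.last_heard = 0.0       # when we last heard a potential parent

    def on_connected(self, sender, now):
        """Handle a connected(x, y) msg received from `sender` at time `now`."""
        if sender in self.potential_parents:
            self.parent = sender            # adopt the sender as (new) parent
            self.last_heard = now
        elif self.parent is None:
            self.foster_parent = sender     # fall back to a foster parent

    def on_timer(self, now):
        """Called once per second: enforce the 120 s timeout, and beacon
        connected(i, j) roughly every 20 s while we have a real parent."""
        if self.parent is not None and now - self.last_heard > PARENT_TIMEOUT:
            self.parent = None
        if self.parent is not None and random.random() < 1.0 / AVG_BEACON_PERIOD:
            self.send(("connected", self.id))

    def send(self, msg):
        pass   # placeholder for the radio send
```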

Page 24: Lecture 4:  Routing

26

Using the Routing Protocol

• When a mote (i, j) has a data msg to forward, it checks whether (i, j) has a parent or a foster parent

if (i, j) has a parent or a foster parent (x, y), (i, j) sends a data(x, y) msg, intended for (x, y)

otherwise, (i, j) discards the data msg

• A mote (i, j) has a data msg to forward iff either the mote itself has generated the msg or it has received the data(i, j) msg
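
Continuing the hypothetical GridMote sketch above, the forwarding rule on this slide might read:

```python
def forward_data(mote, payload):
    """Forward a data msg towards (0, 0): use the parent if there is one,
    else the foster parent; with neither, the msg is discarded."""
    next_hop = mote.parent if mote.parent is not None else mote.foster_parent
    if next_hop is not None:
        mote.send(("data", next_hop, payload))   # data(x, y) intended for next_hop
```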

Page 25: Lecture 4:  Routing

27

Using the Routing Protocol by the Root

• When mote (0, 0), the base station, receives any data(x, y) msg, it forwards the msg text to its resident application (base station snooping)

Page 26: Lecture 4:  Routing

28

Grid Routing in Exscal

• Each mote is assigned three potential parents per base station, based on the mote’s location in the logical grid

A mote reads its potential-parent information from internal flash

The “Potential Parents” session will cover how to compute potential parents for each mote in the demo topology

• Primary and secondary base stations are provided for each mote, to overcome a base station failure

A sensor connects to the secondary base station only when its primary base station fails

• Connected message format: connected(myID, currentBaseStationID)

Page 27: Lecture 4:  Routing

29

Data-centric routing

• Sensor networks can be considered as a virtual database

• Implement query-processing operators in the sensor network

• Queries are flooded through the network (or sent to a representative set of nodes)

• In response, nodes generate tuples and send matching tuples towards the origin of the query

• Intermediate nodes may merge responses or aggregate

Page 28: Lecture 4:  Routing

30

Directed Diffusion

• Protocol initiated by destination (through query)

• Data has attributes; sink broadcasts interests

• Nodes diffuse the interest towards producers via a sequence of local interactions

• Nodes receiving the broadcast set up a gradient (leading towards the sink)

• Intermediate nodes opportunistically fuse interests, aggregate, correlate or cache data

• Reinforcement and negative reinforcement used to converge to an efficient distribution

Page 29: Lecture 4:  Routing

31

Illustrating Directed Diffusion

(Figure, four panels, each showing a source and a sink in a sensor field: setting up gradients; sending data; recovering from node failure; reinforcing a stable path)

Page 30: Lecture 4:  Routing

32

Data Naming

• Expressing an interest using attribute-value pairs, e.g.:

Type = Wheeled vehicle // detect vehicle location

Interval = 20 ms // send events every 20 ms

Duration = 10 s // send for next 10 s

Field = [x1, y1, x2, y2] // from sensors in this area
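
One way to picture such an interest and its matching test in code (a sketch; the field names simply mirror the attribute-value pairs above):

```python
interest = {
    "type": "Wheeled vehicle",           # detect vehicle location
    "interval": 0.020,                   # report events every 20 ms
    "duration": 10.0,                    # keep reporting for 10 s
    "field": (0.0, 0.0, 100.0, 100.0),   # [x1, y1, x2, y2] region of interest
}

def matches(event, interest):
    """A detection matches the interest if the type agrees and the event's
    position lies inside the interest's field rectangle."""
    x1, y1, x2, y2 = interest["field"]
    return (event["type"] == interest["type"]
            and x1 <= event["x"] <= x2
            and y1 <= event["y"] <= y2)
```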

Page 31: Lecture 4:  Routing

33

Gradient Set Up

• Inquirer (sink) broadcasts an exploratory interest, i1

Intended to discover routes between source and sink

• Neighbors update their interest cache and forward i1

• Gradient for i1 set up towards the upstream neighbor

No source routes

Gradient: a weighted reverse link; a low gradient means few packets per unit time are needed
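
A minimal sketch of the per-node state this sets up (the field and helper names are mine): each cached interest holds one gradient per neighbor it was heard from, i.e. a reverse link weighted by a data rate.

```python
from collections import defaultdict

# interest_cache[interest_id] -> {upstream_neighbor_id: requested data rate}
interest_cache = defaultdict(dict)

def on_interest(interest_id, from_neighbor, rate, rebroadcast):
    """Remember a gradient towards the neighbor the interest came from,
    then keep flooding the interest if it is new to this node."""
    new_to_us = interest_id not in interest_cache
    gradients = interest_cache[interest_id]
    # exploratory interests carry a low rate: few packets per unit time
    gradients[from_neighbor] = max(gradients.get(from_neighbor, 0.0), rate)
    if new_to_us:
        rebroadcast(interest_id, rate)   # diffuse to our own neighbors
```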

Page 32: Lecture 4:  Routing

34

(Figure: event region and sink with low-rate exploratory gradients on every link)

Bidirectional gradients established on all links through flooding

Page 33: Lecture 4:  Routing

35

Event-data propagation

• Event e1 occurs and matches i1 in the sensor’s cache

e1 identified based on waveform pattern matching

• Interest reply diffused down the gradient (unicast)

Diffusion initially exploratory (low packet rate)

• Cache filters suppress previously seen data

Problem of bidirectional gradients avoided

Page 34: Lecture 4:  Routing

36

Reinforcement

• From the exploratory gradients, reinforce the optimal path for high-rate data download (unicast), as sketched below

By requesting a higher-rate i1 on the optimal path

Exploratory gradients still exist – useful for faults

(Figure: a sensor field with an event region and a sink; the reinforced gradients form the path from the event to the sink)
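
A minimal sketch of that reinforcement step (hedged: the data structures continue the interest-cache sketch above, and picking the neighbor that delivered new data first is the rule described in the Directed Diffusion papers, not spelled out on this slide):

```python
REINFORCED_RATE = 10.0    # events per second requested on the chosen path

def reinforce(interest_id, gradients, first_delivery, send_interest):
    """Positively reinforce one upstream neighbor: pick the neighbor that
    delivered new exploratory data first and re-send the interest to it
    at the full rate. The other gradients stay exploratory, keeping
    alternate paths warm in case the reinforced one fails."""
    best = min(first_delivery, key=first_delivery.get)   # earliest new data
    gradients[best] = REINFORCED_RATE
    send_interest(interest_id, to=best, rate=REINFORCED_RATE)
```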

Page 35: Lecture 4:  Routing

37

Path Failure / Recovery

• Link failure detected by reduced rate or data loss

Choose the next best link (i.e., compare links based on infrequent exploratory downloads)

• Negatively reinforce the lossy link

Either send i1 with the base (exploratory) data rate

Or allow the neighbor’s cache to expire over time

(Figure: source Src, sink, and intermediate nodes A, B, C, D, M. Link A-M is lossy: A reinforces B, B reinforces C, …, D need not be reinforced; A negatively reinforces M, and M negatively reinforces D)

Page 36: Lecture 4:  Routing

38

Loop Elimination

• M gets the same data from both D and P, but P always delivers late due to looping

M negatively reinforces (nr) P, P nr Q, Q nr M; the loop {M, Q, P} is eliminated

• Conservative nr is useful for fault resilience

(Figure: nodes A, D, M, P, Q, with the loop M-Q-P)

Page 37: Lecture 4:  Routing

39

Local Behavior Choices

1. For propagating interests

In our example, flood

More sophisticated behaviors possible: e.g. based on cached information, GPS

2. For setting up gradients

Highest gradient towards the neighbor from whom we first heard the interest

Others possible: towards the neighbor with the highest energy

3. For data transmission

Different local rules can result in single-path delivery, striped multi-path delivery, single source to multiple sinks, …

4. For reinforcement

Reinforce one path, or part thereof, based on observed losses, delay variances, etc.

Other variants: inhibit certain paths because resource levels are low

Page 38: Lecture 4:  Routing

40

Simulation studies

• Compare diffusion to a) flooding, and b) centrally computed tree (“ideal”)

• Key metrics:

Total energy consumed per packet delivered (an indication of network lifetime)

Average packet delay

(Plots: average energy per delivered packet and average delay for diffusion, flooding, and the centralized tree)

Page 39: Lecture 4:  Routing

41

Rumor Routing

• Designed for query/event ratios that fall between those best served by query flooding and by event flooding

• Motivation: sometimes a non-optimal route is satisfactory

• Advantages: tunable best-effort delivery; tunable for a range of query/event ratios

• Disadvantages: optimal parameters depend heavily on topology (but can be adaptively tuned); does not guarantee delivery

Page 40: Lecture 4:  Routing

42

Rumor Routing

Page 41: Lecture 4:  Routing

43

Basis for Algorithm

• Observation: Two lines in a bounded rectangle have a 69% chance of intersecting

• Create a set of straight line gradients from event, then send query along a random straight line from source

• Thought: Can this bound be proved for a random walk? What is this bound if the line is not really straight?

(Figure: an event path and a query path from the source intersecting inside the bounded rectangle)

Page 42: Lecture 4:  Routing

44

Creating Paths

• Nodes that observe an event send out agents which leave routing info to the event as state in nodes

• Agents attempt to travel in a straight line

• If an agent crosses a path to another event, it begins to build the path to both

• Agent also optimizes paths if they find shorter ones

Page 43: Lecture 4:  Routing

45

Algorithm Basics

• All nodes maintain a neighbor list

• Nodes also maintain an event table

When a node observes an event, the event is added with distance 0

• Agents: packets that carry local event info across the network, aggregating events as they go

Page 44: Lecture 4:  Routing

46

Agents

Page 45: Lecture 4:  Routing

47

Agent Path

• Agent tries to travel in a “somewhat” straight path (see the sketch below)

Maintains a list of recently seen nodes

When it arrives at a node, it adds the node’s neighbors to the list

For the next hop, it tries to find a node not in the recently-seen list

Avoids loops

- important to find a path regardless of “quality”
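
A minimal sketch of that straightening rule (helper names, the list-length cap, and the ordering of "choose hop, then update list" are my reading of the slide): prefer a neighbor not in the recently-seen list, and fall back to a random neighbor when every candidate has already been seen.

```python
import random

def next_agent_hop(neighbors, recently_seen):
    """Prefer a neighbor that is not in the recently-seen list; if every
    neighbor has been seen, fall back to a random one so the agent keeps
    moving (a path of any 'quality' is better than none)."""
    fresh = [n for n in neighbors if n not in recently_seen]
    return random.choice(fresh) if fresh else random.choice(neighbors)

def agent_step(node_neighbors, recently_seen, max_len=20):
    """One hop of the agent: choose the next node while avoiding nodes
    already on the list, then add this node's neighbors to the list
    (bounded to max_len entries) for future hops."""
    nxt = next_agent_hop(node_neighbors, set(recently_seen))
    recently_seen.extend(node_neighbors)
    del recently_seen[:-max_len]            # keep only the newest entries
    return nxt
```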

Page 46: Lecture 4:  Routing

48

Following Paths

• A query originates from the source and is forwarded along until it reaches its TTL

• Forwarding rules (see the sketch below):

If a node has seen the query before, it is sent to a random neighbor

If a node has a route to the event, forward to the neighbor along the route

Otherwise, forward to a random neighbor using the straightening algorithm
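
The forwarding rules above as a minimal sketch (reusing next_agent_hop from the agent-path sketch; the node fields seen_queries, event_table, neighbors and recently_seen are my own names):

```python
import random

def forward_query(query, node):
    """Decide the next hop for a rumor-routing query at one node, or
    return None once the query's TTL is exhausted."""
    if query["ttl"] <= 0:
        return None
    query["ttl"] -= 1
    if query["id"] in node.seen_queries:            # seen before: random neighbor
        return random.choice(node.neighbors)
    node.seen_queries.add(query["id"])
    route = node.event_table.get(query["event"])    # (next_hop, distance) or None
    if route is not None:
        return route[0]                             # follow the stored event path
    return next_agent_hop(node.neighbors, node.recently_seen)   # straightening
```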

Page 47: Lecture 4:  Routing

49

Fault Tolerance

• After agents propagated paths to events, some nodes were disabled

• Delivery probability degraded linearly up to 20% node failure, then dropped sharply

• Both random and clustered failure were simulated with similar results

Page 48: Lecture 4:  Routing

52

Reliable Data Transport

• Transport layer design is difficult because of application-specific nature of sensor networks

• Networking layers tend to become fused (particularly transport and application)

• Goal: design customizable transport layer

• Provide the primitives for reliable transport