Lecture 9 WSNs: Data Dissemination (University of Rochester)
Reading:
• “Wireless Sensor Networks,” Chapter 12, Sections 12.2-12.4.
• W. Heinzelman, A. Chandrakasan, and H. Balakrishnan, “An Application-Specific Protocol Architecture for Wireless Microsensor Networks,” IEEE Trans. on Wireless Communications, Vol. 1, No. 4, Oct. 2002, pp. 660-670.
• C. Intanagonwiwat et al., “Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks,” Proc. MobiCom ’00.
• W. Heinzelman, J. Kulik, and H. Balakrishnan, “Adaptive Protocols for Information Dissemination in Wireless Sensor Networks,” Proc. MobiCom ’99.
• D. Braginsky and D. Estrin, “Rumor Routing Algorithm for Sensor Networks,” Proc. WSNA, Sept. 2002, pp. 22-31.
2
Sensor Network Routing
Energy efficiency even more important than in MANETs; “resource”-aware, data-centric routing needed:
• Reduce power consumption
• Distribute energy load (maximize network lifetime)
• Take into account sensors’ importance to the application
May be tightly coupled with protocols from different layers:
• Take advantage of data fusion opportunities
• Cross-layer architectures
Types of routing needed:
• One-to-one: data to sink
• Many-to-one: all sensors’ data to sink
• One-to-many: sink commands to sensors
• Many-to-many: data dissemination, flooding, gossiping
3
Resource-aware Routing
Consider each sensor’s resources for routing decisions:
• Energy resources
• Sensing resources
• Others?
Energy-aware routing:
• Balance power consumption so nodes fail uniformly
• Node costs and link costs considered with different protocols
Fidelity-aware routing:
• Consider importance of node to the sensing application
• Route around important “sensors”
• Takes into account redundant neighbors
• Requires local communication so nodes learn relative positioning information
4
Data-Centric Routing
• Aggregate data, or information derived from the data, is what is important; individual data items are not
• Sensor nodes themselves less important than the data
• Queries posed for specific data rather than data from a particular sensor
• Routing exploits the requirement for aggregate data rather than individual data
Example protocols:
• SPIN
• Directed Diffusion
• Rumor Routing
5
Network-wide Data Dissemination
[Figure: eight nodes A-H; each node’s data set grows hop by hop (e.g., A,G,H then A,B,F-H then A-H) until every node holds data from all nodes A-H]
Problem: information dissemination
6
Conventional Approach
[Figure: seven-node topology A-G]
Flooding:
• Send to all neighbors
• E.g., routing table updates
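The cost of the conventional approach is easy to see in code. A minimal sketch on a hypothetical 4-node diamond topology: every node rebroadcasts a new data item to all of its neighbors, and we count how many receptions are redundant.

```python
from collections import deque

def flood(adj, source):
    """Flood one data item from `source`; every node rebroadcasts once.
    Returns (nodes_reached, total_receptions). Receptions beyond the
    first at a node are redundant copies."""
    received = {source}
    receptions = 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            receptions += 1          # every broadcast costs each neighbor a reception
            if nbr not in received:  # only the first copy is useful
                received.add(nbr)
                queue.append(nbr)
    return received, receptions

# Hypothetical diamond topology: A-B, A-C, B-D, C-D
adj = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'], 'D': ['B', 'C']}
reached, receptions = flood(adj, 'A')  # 8 receptions for 3 useful deliveries
```

Even in this tiny network, more than half the receptions carry data the node already has.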
7
Resource Inefficiencies
• Implosion: a node receives duplicate copies of the same data from several neighbors
  [Figure: A broadcasts item (a) to B and C; both forward it, so D receives (a) twice]
• Data overlap: neighboring sensors cover overlapping regions, so their reports are partly redundant
  [Figure: A covers regions (q, r) and B covers (r, s); C receives data about region r from both]
• Resource blindness: nodes flood without adapting their activity to remaining energy
8
What is the optimum protocol?
[Figure: seven-node topology A-G with shortest-path routes from the source]
“Ideal”:
• Shortest-path routes
• Avoids overlap
• Minimum energy
• Needs global topology information
9
SPIN Family
Sensor Protocol for Information via Negotiation
Data negotiation:
• Meta-data (data naming)
• Application-level control
• Model “ideal” data paths
SPIN messages:
• ADV: advertise data
• REQ: request specific data
• DATA: requested data
Resource management
[Figure: three-step handshake between neighbors A and B: A sends ADV, B replies with REQ, A sends DATA]
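The three-message exchange can be sketched in a few lines; the meta-data name and payload below are hypothetical placeholders. The point of the protocol is that the full payload only moves after an explicit REQ.

```python
def spin_handshake(sender_data, receiver_cache):
    """One SPIN exchange between two neighbors, with meta-data modeled
    as plain string names (a sketch, not the paper's wire format)."""
    messages = []
    for name, payload in sender_data.items():
        messages.append(('ADV', name))       # advertise meta-data only
        if name not in receiver_cache:       # receiver lacks this item
            messages.append(('REQ', name))   # request it by name
            messages.append(('DATA', name))  # full payload transferred
            receiver_cache[name] = payload
    return messages, receiver_cache

msgs, cache = spin_handshake({'temp-reading-17': 23.5}, {})
```

If the receiver already holds the item, the exchange stops after the ADV, which is what suppresses implosion and overlap.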
10
SPIN Example
[Figure: the ADV/REQ/DATA handshake repeats hop by hop, spreading the data outward from the source node]
11
SPIN Example 2
[Figure: second SPIN example in which nodes D and E hold the data. Legend: nodes with data, nodes without data, nodes waiting to transmit REQ]
12
Simulation Results
[Figure: data acquired over time for SPIN, ideal, and flooding]
SPIN:
• Converges quicker than flooding
• Reduces energy by 50% compared with flooding
• Meta-data negotiation successful in broadcast environments
13
Directed Diffusion
• Abstraction that tries to describe communication patterns underlying many localized algorithms
• Data named with attribute-value pairs
• Nodes that want data express interests based on the predefined attributes
• Interests disseminated throughout the network; interests diffuse to the correct area
• Intermediate nodes propagate interests based on the contents of the interest
  E.g., if interest is for data from location (x, y), the interest will be propagated towards (x, y)
14
Directed Diffusion
• Interests may be propagated to multiple neighbors for robustness
• Gradients set up that draw events of interest back to the originating node
  Strength of a gradient depends on quality of the routing path; a gradient’s meaning is application-specific
• Interests/data propagated along routes with strong gradients
  Good routes inherently reinforced; creates low-energy routing of data
• Data aggregation and caching performed within the network
  Further reduces node energy dissipation
15
Example
• Sensor nodes monitoring an area for animals
• Interested in receiving data about all four-legged creatures
• Specify desired data rate
Query/interest:
1. Type = four-legged animal
2. Interval = 20 ms (event data rate)
3. Duration = 10 seconds (time to cache)
4. Rect = [-100, 100, 200, 400]
Reply:
1. Type = four-legged animal
2. Instance = elephant
3. Location = [125, 220]
4. Intensity = 0.6
5. Confidence = 0.85
6. Timestamp = 01:20:40
Attribute-value pairs, no advanced naming scheme
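Matching an event against an interest is plain attribute-value comparison. A sketch, assuming the rect is ordered [x1, y1, x2, y2] (the slide does not spell out the ordering):

```python
def matches(interest, event):
    """Attribute-value matching for directed diffusion: the type must
    match exactly and the event location must fall inside the
    interest's rectangle (assumed [x1, y1, x2, y2])."""
    if event['type'] != interest['type']:
        return False
    x1, y1, x2, y2 = interest['rect']
    x, y = event['location']
    return x1 <= x <= x2 and y1 <= y <= y2

interest = {'type': 'four-legged animal', 'interval_ms': 20,
            'duration_s': 10, 'rect': [-100, 100, 200, 400]}
event = {'type': 'four-legged animal', 'instance': 'elephant',
         'location': [125, 220], 'intensity': 0.6,
         'confidence': 0.85, 'timestamp': '01:20:40'}
```

With the slide's values, the elephant at [125, 220] matches the interest; a reading of a different type, or outside the rectangle, would be dropped.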
16
Directed Diffusion Protocol
• Sinks broadcast interest to neighbors
• Interests are cached by neighbors
• Gradients are set up pointing back to where interests came from, at low data rate
• Once a source receives an interest, it routes measurements along gradients
• Gradients from source to sink initially small; increased during reinforcement
17
Directed Diffusion (cont.)
18
Design Considerations
19
Simulation Results
20
Rumor Routing
In query-based networks, different techniques for routing data and queries:
• Query flooding
  Expensive for large query/event ratio
  Allows for optimal reverse-path setup
  Gossiping scheme can be used to reduce overhead
• Event flooding
  Expensive for low query/event ratio
Note: both provide shortest-delay paths
21
Rumor Routing
Alternative: rumor routing, designed for query/event ratios between query and event flooding
Motivation:
• Sometimes a non-optimal route is satisfactory
Advantages:
• Tunable best-effort delivery
• Tunable for a range of query/event ratios
Disadvantages:
• Optimal parameters depend heavily on topology (but can be adaptively tuned)
• Does not guarantee delivery
22
Rumor Routing (cont.)
23
Basis for Algorithm
Observation: two lines in a bounded rectangle have a 69% chance of intersecting
Idea:
• Create a set of straight-line gradients from the event
• Send the query along a random straight line from the sink
• What if this line is not really straight?
[Figure: gradient lines radiating from the event; a query line from the sink crosses one of them]
24
Creating Paths
• Nodes with data send out agents
• Agents leave routing info to the event as state in intermediate nodes
• Agents attempt to travel in a straight line
• If an agent crosses a path to another event, it begins to build the path to both
• Agents also optimize paths if they find shorter ones
25
Algorithm Basics
• All nodes maintain a neighbor list
• Nodes also maintain an event table
  When a node observes an event, the event is added with distance 0
• Agents
  Packets that carry local event information across the network
  Aggregate events as they go
26
Agent Path
Agent tries to travel in a “somewhat” straight path:
• Maintains a list of recently seen nodes
• When it arrives at a node, adds the node’s neighbors to the list
• For the next hop, tries to find a node not in the recently seen list
• Avoids loops
• Important to find a path regardless of “quality”
27
Following Paths
A query originates from the sink and is forwarded along until it reaches its TTL
Forwarding rules:
• If a node has seen the query before, it is sent to a random neighbor
• If a node has a route to the event, forward to the neighbor along the route
• Otherwise, forward to a random neighbor using the straightening algorithm
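The last two rules can be sketched as a single next-hop function; the data structures are hypothetical (event_table[node][event] holds the stored next hop toward that event), and a seen-this-query-before check would precede the call.

```python
import random

def next_hop(node, query, event_table, adj, recently_seen):
    """One forwarding step for a rumor-routing query (a sketch)."""
    route = event_table.get(node, {}).get(query['event'])
    if route is not None:
        return route  # a stored path to the event exists: follow it
    # Straightening heuristic: prefer neighbors not seen recently,
    # falling back to any neighbor if all have been seen.
    fresh = [n for n in adj[node] if n not in recently_seen]
    return random.choice(fresh if fresh else adj[node])

# Node n1 already has a stored route for event 'fire' via n2.
hop = next_hop('n1', {'event': 'fire'}, {'n1': {'fire': 'n2'}},
               {'n1': ['n2', 'n3']}, set())  # -> 'n2'
```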
28
Agents
29
Simulation Results: 10 Events, 4000 Nodes
[Figure: number of transmissions (thousands) vs. number of queries, comparing query flooding, event flooding, and rumor routing with parameters A=28, La=500, Lq=1000 and A=52, La=100, Lq=2000]
30
Observations
• Wide range of parameters allow for energy saving over the simple alternatives
• Optimal parameters depend on
  Network topology
  Query/event distribution and frequency
• Algorithm very sensitive to event distribution
Fault tolerance:
• After agents propagated paths to events, some nodes were disabled
• Delivery probability degraded linearly up to 20% node failure, then dropped sharply
• Both random and clustered failure were simulated, with similar results
31
Geographic Routing
• Sensors often know their location
• Use location information to get data/queries to a particular part of the network
• Geographic forwarding reduces
  Routing overhead
  Memory utilization
Example protocols: GPSR, TBF
32
Greedy Perimeter Stateless Routing (GPSR)
Greedy geographic forwarding algorithm:
• Forward data to the node’s neighbor that makes the most progress towards the destination
• Must keep track of neighbors’ locations
• Obstacles can cause problems
• Use “right-hand rule” routing around holes
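Greedy forwarding itself is a one-line minimization. A sketch with a hypothetical position table; it returns None when no neighbor improves on the current node's distance, which is exactly the local-minimum ("hole") case where GPSR switches to right-hand-rule perimeter routing.

```python
import math

def greedy_next_hop(pos, node, neighbors, dest):
    """Pick the neighbor geographically closest to dest, but only if it
    makes progress relative to the current node (a sketch of greedy
    forwarding; perimeter mode is not implemented here)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbors, key=lambda n: dist(pos[n], dest))
    if dist(pos[best], dest) < dist(pos[node], dest):
        return best
    return None  # local minimum: no neighbor makes progress toward dest

pos = {'a': (0, 0), 'b': (1, 0), 'c': (0, 1)}
hop = greedy_next_hop(pos, 'a', ['b', 'c'], (2, 0))  # -> 'b'
```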
33
Trajectory Based Forwarding (TBF)
Similar to GPSR, but allows source-specified trajectories for routes
Enables:
• Multipath routing for added resilience
• Spoke broadcasting
• Broadcast to a remote subregion
34
Clustering
To scale, a hierarchical approach is beneficial: form local clusters managed by a cluster head
• Fixed or adaptive cluster maintenance
Clustering provides:
• Framework for resource management
• Support for intra-cluster channel access and power control
• Support for inter-cluster routing and channel separation
• Distributes management responsibility from sink to cluster heads
• Provides framework for data fusion, local decision making, local control, and energy savings
Example protocols: LEACH, HEED
35
µAMPS Networks
Relevant parameters:
• System lifetime (energy efficiency)
• Quality
• Latency
Assumptions:
• Base station away from nodes
• All nodes energy-constrained
• Locally, data correlated
[Figure: nodes A-G; a cluster-head sends the aggregate f(A-G) to the base station]
36
LEACH Protocol Architecture
[Figure: sensor clusters, each with a cluster-head relaying to the base station]
Low-Energy Adaptive Clustering Hierarchy:
• Adaptive, self-configuring cluster formation
• Localized control for data transfers
• Low-energy medium access
• Application-specific data aggregation
37
Dynamic Clusters
• Cluster-head rotation to evenly distribute energy load
• Adaptive clusters
• Clusters formed during set-up; scheduled data transfers during steady-state
[Figure: timeline of rounds; each round starts with a set-up phase followed by steady-state frames, and the cluster-heads (•) change from round to round]
38
Distributed Cluster Formation
[Flowchart: each round, node i uses Pi(t) to decide whether it is a cluster-head.
• If yes: announce cluster-head status, then wait for Join-Request messages
• If no: wait for cluster-head announcements, choose the CH with the “loudest” announcement, and send a Join-Request message to the chosen cluster-head
Both branches then run steady-state operation for t = Tround seconds]
Autonomous decisions lead to global behavior:
• No global control
• Flexible, fault-tolerant
39
Distributed Cluster Formation
• Assume nodes begin with equal energy
• Design for k clusters per round (k = system parameter; analytical optimum)
• Want to evenly distribute energy load
Choose each node’s cluster-head probability Pi(t) so that the expected number of cluster-heads per round is k:

$E[\#\mathrm{CH}] = \sum_{i=1}^{N} P_i(t) \cdot 1 = k$

$P_i(t) = \begin{cases} \dfrac{k}{N - k\,(r \bmod N/k)} & C_i(t) = 1 \\ 0 & C_i(t) = 0 \end{cases}$

where $C_i(t) = 0$ if node i has been a CH in the last $r \bmod (N/k)$ rounds
• Each node is CH once in N/k rounds
• Can determine Pi(t) with unequal node energy
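The election probability Pi(t) = k / (N − k·(r mod N/k)) is simple to evaluate. A sketch for the equal-energy case: nodes that have already served as CH in the current N/k-round cycle sit out, and the remaining nodes compensate with a higher probability so the expected number of cluster-heads stays k.

```python
def p_ch(r, N, k, was_ch_this_cycle):
    """Cluster-head probability for a node in round r, following the
    slide's formula (a sketch; assumes N divisible by k)."""
    if was_ch_this_cycle:  # C_i(t) = 0: already served this cycle
        return 0.0
    return k / (N - k * (r % (N // k)))

# 100 nodes, k = 5: at the start of a cycle every node volunteers with
# probability 0.05; by round 19 only 5 eligible nodes remain and each
# becomes CH with probability 1, so every node is CH once per 20 rounds.
p_start = p_ch(0, 100, 5, False)   # -> 0.05
p_end = p_ch(19, 100, 5, False)    # -> 1.0
```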
40
LEACH Steady-State
[Figure: timeline of a round: set-up (clusters formed), then steady-state frames; node i transmits in its slot once per frame]
• Cluster-head coordinates transmissions with a Time Division Multiple Access (TDMA) schedule
• Node i transmits once per frame
• Cluster-head broadcasts the TDMA schedule
Low-energy approach:
• No collisions
• Maximum sleep time
• Power control
41
Inter-Cluster Interference
[Figure: adjacent clusters B and C; a transmission inside one cluster can be heard in the other]
• Transmissions in different clusters can collide
• Nodes minimize transmission power
• Each cluster has a unique spreading code
• Distributed solution to minimize interference
42
LEACH Medium Access
[Timeline: set-up phase (ADV, Join-REQ, SCH), then steady-state data transfer]
• ADV (from CH): CSMA; large power, small messages
• Join-REQ (from non-CH): CSMA; large power, small messages
• SCH (from CH): DS-SS code; power to reach all members
• Data transfer to cluster-head: TDMA slot (with DS-SS); power to reach CH, large messages
43
Data Aggregation
[Figure: total energy (J) vs. energy for aggregation (J/bit/signal), with and without local data aggregation]
• Clusters exhibit spatial locality: local data aggregation
• Computation vs. communication tradeoff
  Depends on cost of computation and communication
  Signal processing within the network
44
Base Station Cluster Formation
Get optimal clusters for comparison: LEACH-C
• Requires communication with base station
• Nodes send base station their current position
• Base station runs optimization algorithm to determine best clusters
• Needs GPS or other location-tracking method
45
Simulation Framework
• Extensions to ns network simulator
• Computation and communication energy models
  StrongARM processor data aggregation
  Radio energy
• LEACH, LEACH-C, MTE routing, static clustering
Radio energy model for a k-bit packet over distance d (transmit electronics plus amplifier; receive electronics only):

$E_{Tx}(k, d) = E_{elec}\,k + \varepsilon_{amp}\,k\,d^{n} \qquad E_{Rx}(k) = E_{elec}\,k$
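The radio model is simple enough to evaluate directly. A sketch using the constants from the simulation-parameters slide (50 nJ/bit electronics, 100 pJ/bit/m² amplifier, path-loss exponent n = 2):

```python
def e_tx(k_bits, d, E_elec=50e-9, eps_amp=100e-12, n=2):
    """Transmit energy: electronics plus amplifier, E_elec*k + eps_amp*k*d^n."""
    return E_elec * k_bits + eps_amp * k_bits * d ** n

def e_rx(k_bits, E_elec=50e-9):
    """Receive energy: electronics only, E_elec*k."""
    return E_elec * k_bits

# A 500-byte packet sent 75 m (the base-station distance in the simulations)
k = 500 * 8
tx = e_tx(k, 75)   # 0.2 mJ electronics + 2.25 mJ amplifier = 2.45 mJ
rx = e_rx(k)       # 0.2 mJ
```

At 75 m the amplifier term dominates, which is why LEACH prefers many short intra-cluster hops and a single long CH-to-base-station hop per cluster.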
46
Simulation Parameters
[Figure: 100 nodes in a 100 m × 100 m area; base station 75 meters away]
• Data size: 500 bytes
• Aggregation cost: 5 nJ/bit/signal
• Transmit amplifier: 100 pJ/bit/m²
• Radio electronics: 50 nJ/bit
• Bit rate: 100 kbps
• Processing delay: 50 µs
47
Optimum Number of Clusters
[Figure: average energy per round (J) vs. number of clusters k, for k = 1, 3, 5, 7, 9, 11]
• Too few clusters: cluster-head nodes far from sensors
• Too many clusters: not enough local signal processing
48
Analytical Optimum
k clusters, N/k nodes/cluster; $d_{toBS}$ fixed, $d_{toCH} \propto 1/k$:

$E_{CH} = \frac{N}{k}\,\alpha + \beta \qquad E_{non\text{-}CH} = \gamma\,\frac{1}{k} + \delta$

$k_{opt} = \sqrt{N}\,\rho\,\frac{M}{d_{toBS}^{2}}$ (ρ a radio-model constant)

With N = 100, M = 100, and 75 < d_toBS < 185: 2 < k_opt < 6
• Simulation agrees with theory
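The shape of the total-energy curve can be checked numerically from the two per-node costs. The constants alpha..delta below are hypothetical lumped values, picked only so the minimum lands in the 2 < k_opt < 6 range the slide quotes for this network; the real optimum depends on the radio model and d_toBS.

```python
def total_energy(k, N, alpha, beta, gamma, delta):
    """Per-round network energy with k clusters, using the slide's
    per-node costs E_CH = (N/k)*alpha + beta and
    E_non-CH = gamma*(1/k) + delta (a sketch with made-up constants)."""
    e_ch = (N / k) * alpha + beta        # k cluster-heads
    e_non = gamma * (1.0 / k) + delta    # N - k ordinary nodes
    return k * e_ch + (N - k) * e_non

N = 100
alpha, beta, gamma, delta = 1.0, 51.0, 8.0, 1.0  # hypothetical
k_opt = min(range(1, N + 1),
            key=lambda k: total_energy(k, N, alpha, beta, gamma, delta))
# With these constants the curve is U-shaped with its minimum at k = 4:
# too few clusters inflates the gamma/k (node-to-CH) term, too many
# multiplies the beta (CH-to-base-station) term.
```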
49
Data per Unit Energy
[Figure: amount of data received at BS vs. energy dissipation (J) for LEACH, LEACH-C, MTE, and static clustering; nodes begin with limited energy]
• LEACH achieves an order of magnitude more data per unit energy
• 2 hops vs. 10 hops average
• Data aggregation successful
50
Network Lifetime
[Figure: number of nodes alive vs. amount of data received at BS for LEACH, LEACH-C, MTE, and static clustering]
• LEACH delivers over 10 times the amount of data for any number of node deaths
• Rotating cluster-heads effective
51
Discussion