Distributed Visualization
Distributed Visualization
• Our use of this term extends its traditional meaning:
– Distributed: we still aim to support geographically distributed users and collaborative communities
– Decentralized infrastructure: the infrastructure does not need to be centralized in "compute" centers
– Shared infrastructure: the compute/storage nodes can be independent Internet computers
Data Replication: exNodes
[Figure: exNode-based replication across depots D1–D4, with file fragments F1–F3; the stack, from bottom to top: Physical Layer, IBP, L-Bone / exNode, LoRS runtime system, Applications.]
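The exNode idea above can be sketched as a small data structure: a logical file described as a list of byte extents, each mapped to one or more depot allocations (the replicas). This is a minimal illustration only; the field names, `Allocation`/`Extent` classes, and depot addresses are assumptions, not the real exNode XML schema or the IBP capability format.

```python
# Minimal sketch of an exNode-like structure: logical file extents mapped to
# replicated IBP allocations. All names and fields are illustrative, not the
# actual exNode schema.
from dataclasses import dataclass, field

@dataclass
class Allocation:
    depot: str        # host:port of the depot holding the bytes (hypothetical)
    capability: str   # opaque read capability returned by the depot (hypothetical)

@dataclass
class Extent:
    offset: int                                    # logical offset within the file
    length: int                                    # number of bytes in this extent
    replicas: list = field(default_factory=list)   # one Allocation per copy

@dataclass
class ExNode:
    filename: str
    extents: list = field(default_factory=list)

    def replicas_for(self, offset):
        """Return the replica list covering a given logical byte offset."""
        for e in self.extents:
            if e.offset <= offset < e.offset + e.length:
                return e.replicas
        return []

# A file with one 1 MB extent replicated on two depots:
ex = ExNode("timestep_00.raw", [
    Extent(0, 1 << 20, [Allocation("depotA:6714", "cap-1"),
                        Allocation("depotB:6714", "cap-2")]),
])
print([a.depot for a in ex.replicas_for(4096)])
```

A download client would pick any live depot from the replica list for each extent, which is what makes the k-way replication useful under varying server load.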
Network Functional Unit
[Figure: an NFU inside an IBP depot — input allocations and an output allocation are memory-mapped (MEM) to a processing element (PE), which reads the inputs (RD) and reads/writes the output (RD/WR).]
The NFU is novel due to:
1. weakened semantics, and
2. control of security-sensitive operations.
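The two NFU properties above can be illustrated with a toy depot-side execution routine: only whitelisted operations may run (control of security-sensitive operations), and execution is best-effort under a time limit, so the caller must be prepared to retry elsewhere (weakened semantics). The operation name, the in-memory store, and the time-limit policy are all illustrative assumptions, not the IBP/NFU protocol.

```python
# Toy sketch of NFU-style execution: whitelisted op over input allocations,
# best-effort within a time limit. Names and policy are illustrative.
import time

# Only pre-approved operations may run at the depot (hypothetical whitelist).
WHITELIST = {
    "sum_bytes": lambda bufs: bytes(sum(col) % 256 for col in zip(*bufs)),
}

def nfu_execute(store, op, inputs, output, time_limit=1.0):
    """Apply `op` to the named input allocations, filling `output`.
    Returns True on success, False if the time budget was exceeded."""
    if op not in WHITELIST:                     # security-sensitive ops are refused
        raise PermissionError(f"operation {op!r} not allowed")
    start = time.monotonic()
    result = WHITELIST[op]([store[name] for name in inputs])
    if time.monotonic() - start > time_limit:   # weakened semantics: no guarantee,
        return False                            # caller retries on another depot
    store[output] = result
    return True

store = {"in1": bytes([1, 2, 3]), "in2": bytes([10, 20, 30])}
ok = nfu_execute(store, "sum_bytes", ["in1", "in2"], "out")
print(ok, list(store["out"]))
```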
Scheduling
[Figure: six depot servers P1–P6, each holding a subset of the replicated data partitions d1–d12 in per-server task queues; workstations A and B drive the visualization tasks.]
• Depots {P1, P2, …, Pm}: each Pi is described by its bandwidth bi and computational power ci
• Partitioned dataset {d1, d2, …, dn} with k-way replication
• Visualization needs only one copy of each dj
• (Optional) data-movement tasks: Mij replicates dj on Pi
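The scheduling model above can be sketched as a greedy assignment: each partition dj, replicated on k depots, goes to the replica holder with the earliest estimated finish time, where the estimate combines bandwidth bi and compute power ci. This is a minimal illustration under an assumed cost model (size/bandwidth + work/compute), not the scheduler actually used in the system.

```python
# Greedy sketch: assign each k-way-replicated partition to the replica-holding
# depot with the earliest estimated finish time. Cost model is an assumption.

def assign_partitions(depots, replicas, part_size, part_work):
    """depots: {Pi: (bandwidth bi, compute ci)};
    replicas: {dj: [depots holding a copy]};
    part_size / part_work: per-partition transfer size and compute work."""
    load = {p: 0.0 for p in depots}   # accumulated estimated busy time per depot
    schedule = {}
    for dj, holders in replicas.items():
        def finish_time(p):
            b, c = depots[p]
            return load[p] + part_size[dj] / b + part_work[dj] / c
        best = min(holders, key=finish_time)   # vis needs only one copy of dj
        load[best] = finish_time(best)
        schedule[dj] = best
    return schedule

depots = {"P1": (10.0, 5.0), "P2": (2.0, 1.0)}            # P1 fast, P2 slow
replicas = {"d1": ["P1", "P2"], "d2": ["P1", "P2"], "d3": ["P2"]}
size = {"d1": 10.0, "d2": 10.0, "d3": 10.0}
work = {"d1": 5.0, "d2": 5.0, "d3": 5.0}
print(assign_partitions(depots, replicas, size, work))
```

Note the static estimate is exactly what breaks in practice: the next slides address the key challenge that actual depot performance varies over time.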
Key Challenge:
Resource performance varies over time!
Dynamic Data Movement
• Some data partitions are just "unlucky" to be on slow or heavily loaded servers
• After fast depots are done with their local tasks, they can dynamically "steal" some slow partitions
[Figure: flowchart of the stealing protocol — slow task selection yields a task <Tj>, fast server selection yields a depot <Pi>, and deadline calculation produces a deadline for the pair <Tj, Pi>; a partial multi-stream download with that deadline either succeeds (yes: terminate the steal) or fails (no: terminate the attempt), while the main multi-stream data transfer and information updates continue.]
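The stealing step can be sketched as follows: once a fast depot drains its own queue, it selects a task from the slowest depot's queue and steals it only if it can fetch and process the task before the victim would have reached it anyway (the deadline test). The rate model and tail-stealing policy here are illustrative assumptions, not the exact protocol from the flowchart.

```python
# Sketch of deadline-checked work stealing between depot task queues.
# Rates and the deadline test are illustrative assumptions.
from collections import deque

def steal_once(queues, rates, thief, task_cost=1.0):
    """queues: {depot: deque of pending tasks}; rates: {depot: tasks/sec}.
    Returns the stolen task, or None if stealing is not worthwhile."""
    if queues[thief]:
        return None                                   # thief still has local work
    # Slow task selection: victim with the longest estimated remaining time.
    victim = max(queues, key=lambda p: len(queues[p]) / rates[p])
    if victim == thief or not queues[victim]:
        return None
    # Deadline calculation: stealing pays off only if the thief can transfer
    # and process the task before the victim would have reached it anyway.
    deadline = len(queues[victim]) / rates[victim]
    transfer_and_work = task_cost / rates[thief]
    if transfer_and_work >= deadline:
        return None                                   # "no" branch: give up
    return queues[victim].pop()                       # "yes" branch: steal from tail

queues = {"fast": deque(), "slow": deque(["d7", "d8", "d9"])}
rates = {"fast": 4.0, "slow": 0.5}
print(steal_once(queues, rates, "fast"))
```

Stealing from the tail of the victim's queue means the stolen partition is the one the slow depot would have reached last, which minimizes the chance of duplicated work.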
Results: the depots
• Most of the depots used were running on PlanetLab
• Workloads varied across servers and over time
• A realistic test of feasibility on shared, decentralized infrastructure
[Figure: two plots of one-minute load on the depots — left, load (0–100) across roughly 400 servers by Server ID; right, load (0–50) over time (0–1350 minutes).]
Results: the data
• Test data: 30 timesteps of the Terascale Supernova Initiative simulation, 75 GB in total
– Provided by Tony Mezzacappa (ORNL) and John Blondin (ORNL) under the auspices of the DOE SciDAC TSI project
Results: the performance
• 800×800 image resolution, 0.5 step size in ray casting, per-fragment classification and Phong shading
• With 100 depots, the average rendering time was 237 seconds