Cost-Effective Video Streaming Techniques
Kien A. Hua
School of EE & Computer Science
University of Central Florida
Orlando, FL 32816-2362, U.S.A.
Server Channels
• Videos are delivered to clients as a continuous stream.
• Server bandwidth determines the number of video streams that can be supported simultaneously.
• Server bandwidth can be organized and managed as a collection of logical channels.
• These channels can be scheduled to deliver various videos.
Using Dedicated Channel

[Figure: a video server delivering a separate dedicated stream to each client. Too expensive!]
Batching
• FCFS (First Come, First Served)
• MQL (Maximum Queue Length First)
• MFQ (Maximum Factored Queue Length)
[Figure: the server keeps one waiting queue per video; each new request joins the queue of its video. FCFS serves the oldest request, MQL serves the longest queue, and MFQ serves the queue with the largest factored length, which discounts the queue length of video i by its access frequency f_i so that requests for cold videos are not starved.]
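The three policies can be sketched in a few lines of Python. This is an illustrative sketch: the factored queue length used for MFQ here (queue length divided by the square root of the video's access frequency) is a common formulation, not necessarily the slide's exact definition.

```python
import math

def pick_fcfs(queues):
    """FCFS: serve the video whose head-of-line request is oldest.
    `queues` maps video id -> list of request arrival times (ascending)."""
    waiting = {v: q for v, q in queues.items() if q}
    return min(waiting, key=lambda v: waiting[v][0])

def pick_mql(queues):
    """MQL: serve the video with the longest waiting queue."""
    waiting = {v: q for v, q in queues.items() if q}
    return max(waiting, key=lambda v: len(waiting[v]))

def pick_mfq(queues, freq):
    """MFQ: serve the video with the largest factored queue length,
    discounting long queues of very popular videos so that requests
    for cold videos are not starved."""
    waiting = {v: q for v, q in queues.items() if q}
    return max(waiting, key=lambda v: len(waiting[v]) / math.sqrt(freq[v]))
```

For example, with a popular video 1 holding four requests and a cold video 2 holding three, MQL picks video 1 while MFQ picks video 2.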
Can multicast provide true VoD?

Challenges – conflicting goals:
• Low Latency: requests must be served immediately
• Highly Efficient: each multicast must still be able to serve a large number of clients
Some Solutions
• Patching [Hua98]
• Range Multicast [Hua02]
Proposed Technique: Patching

[Figure: client A receives video A via a regular multicast. A later client B joins the same multicast at time t, buffering the ongoing data in its video player buffer while a short patching stream delivers the prefix it missed; the skew point marks the offset between the two clients. By time 2t the patching stream has finished and the skew point has been absorbed by the client buffer, so B plays the rest of the video entirely from the regular multicast.]
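The bookkeeping a patching client performs can be summarized in a small sketch (the function name and return shape are mine): a client joining `skew` time units after the regular multicast started fetches only the missed prefix on a patching stream, while the ongoing multicast is buffered until its play time.

```python
def patching_plan(skew, video_len):
    """Return (patch_span, buffered_span, peak_buffer) for a client that
    joins a regular multicast `skew` time units after it started."""
    patch_span = (0.0, skew)            # missed prefix, via patching stream
    buffered_span = (skew, video_len)   # arrives on the regular multicast
    # While the patch plays (duration `skew`), the regular multicast keeps
    # arriving, so the buffer peaks at `skew` time units of data.
    peak_buffer = skew
    return patch_span, buffered_span, peak_buffer
```

The patching stream thus carries only `skew` time units of data instead of the whole video, which is where the bandwidth savings come from.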
Client Design

[Figure: each client's Data Loader opens up to two connections to the video server: Lr to the regular multicast and Lp to the patching multicast. Client A listens only to the regular stream; clients B and C listen to both, with client C buffering the regular stream while the patching stream plays.]
Server Design

Server must decide when to schedule a regular stream or a patching stream.

[Figure: a timeline of requests A through G. A receives a regular stream (r) and B, C, D receive patching streams (p), forming one multicast group; E then receives a new regular stream and F, G patch onto it, forming a second multicast group.]
Two Simple Approaches
• If no regular stream for the same video exists, a new regular stream is scheduled.
• Otherwise, two policies can be used to make the decision: Greedy Patching and Grace Patching.
Greedy Patching

A patching stream is always scheduled.

[Figure: clients A, B, C, D over time. A receives a regular stream of the full video length; B, C, and D always patch, so the data they can share with A's stream is limited by the client buffer size, and late arrivals receive most of the video on their patching streams.]
Grace Patching

If the client buffer is large enough to absorb the skew, a patching stream is scheduled; otherwise, a new regular stream is scheduled.

[Figure: client A starts a regular stream of the full video length; B arrives within the buffer size, patches, and shares the rest of A's stream; a later client C starts a new regular stream.]
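The two policies differ only in one branch, which a short sketch makes explicit (an illustration of the rules above, not the authors' code):

```python
def schedule_stream(policy, skew, buffer_size, regular_exists):
    """Decide whether a new request gets a regular or a patching stream.
    `skew` is the time since the latest regular stream for this video
    started; `buffer_size` is the client buffer in time units of video."""
    if not regular_exists:
        return "regular"                # nothing to patch onto
    if policy == "greedy":
        return "patching"               # always patch
    if policy == "grace":
        # patch only if the client buffer can absorb the skew
        return "patching" if skew <= buffer_size else "regular"
    raise ValueError("unknown policy: " + policy)
```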
Performance Study
• Compared with conventional batching
• Maximum Factored Queue (MFQ) is used
• Performance metric is average service latency
Simulation Parameters

Parameter                      Default    Range
Request rate (requests/min)    50         10–90
Client buffer (min of data)    5          0–10
Server bandwidth (streams)     1,200      400–1,800
Video length (minutes)         90         N/A
Number of videos               100        N/A
Video access skew factor       0.7        N/A
Number of requests             200,000    N/A
Effect of Server Bandwidth

(Client buffer: 5 minutes; request rate: 50 arrivals/minute; no defection)

[Plot: average latency (seconds) vs. server communication bandwidth (400–1,800 streams) for Conventional Batching, Greedy Patching, and Grace Patching.]
Effect of Client Buffer

(Server bandwidth: 1,200 streams; request rate: 50 arrivals/minute; no defection)

[Plot: average latency (seconds) vs. client buffer size (0–10 minutes of data) for Conventional Batching, Greedy Patching, and Grace Patching.]
Effect of Request Rate

(Server bandwidth: 1,200 streams; client buffer: 5 minutes; no defection)

[Plot: average latency (seconds) vs. request rate (10–110 requests/minute) for Conventional Batching, Greedy Patching, and Grace Patching.]
Optimal Patching

[Figure: a timeline of requests A through G. A regular stream (request A) opens a multicast group, and patching streams serve B, C, D, which arrive within the patching window; the first request after the window (E) opens a new regular stream and a new multicast group for F and G.]

What is the optimal patching window?
Optimal Patching Window

• D is the mean total amount of data transmitted by a multicast group
• Minimize the server bandwidth requirement, D/W, under various W values

[Figure: a multicast group whose regular stream spans the full video length; patching streams scheduled within the window W each carry at most a buffer size of data.]
Optimal Patching Window

• Compute D, the mean amount of data transmitted for each multicast group
• Determine the average time duration of a multicast group
• The server bandwidth requirement is D divided by this average duration, which is a function of the patching period
• Find the patching period that minimizes the bandwidth requirement
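The procedure above can be made concrete under an extra modeling assumption that the slides do not fix: Poisson request arrivals at rate λ. Then a group serves on average λW patches of mean length W/2, so D = L + λW²/2, and the group lasts on average W + 1/λ (the window plus the wait for the request that opens the next group). Minimizing D/(W + 1/λ) gives a closed-form window; treat this as one plausible instantiation, not the slides' exact model.

```python
import math

def bandwidth_requirement(w, lam, video_len):
    """Mean bandwidth per multicast group: D divided by the mean group
    duration, assuming Poisson arrivals at rate `lam`."""
    d = video_len + lam * w * w / 2.0    # regular stream + expected patch data
    duration = w + 1.0 / lam             # window + wait for the next group
    return d / duration

def optimal_window(lam, video_len):
    """Setting the derivative of D/duration to zero gives
    lam*w^2/2 + w - L = 0, whose positive root is the optimum."""
    return (math.sqrt(1.0 + 2.0 * lam * video_len) - 1.0) / lam
```

For example, with λ = 1 request per minute and a 90-minute video, the optimal window under this model is about 12.5 minutes.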
Candidates for Optimal Patching Window

[Figure: candidate patching-window values compared by their resulting bandwidth requirement.]
Piggybacking [Golubchik96]

[Figure: streams C, B, A with new arrivals and departures; an earlier stream is slowed by 5% while a later one is sped up by 5% until they merge.]

• Slow down an earlier service and speed up the new one to merge them into one stream
• Limited stream sharing due to long catch-up delay
• Implementation is complicated
Concluding Remarks
• Unlike conventional multicast, requests can be served immediately under patching
• Patching makes multicast more efficient by dynamically expanding the multicast tree
• Patching streams usually deliver only the first few minutes of video data
• Patching is very simple and requires no specialized hardware
Patching on the Internet
• Problem: the current Internet does not support multicast
• A solution:
– Deploy an overlay of software routers on the Internet
– Multicast is implemented on this overlay using only IP unicast
Content Routing

[Figure: an overlay of software routers (Root, A through E) between the server and its clients. A client's Find message is forwarded from router to router (Find(1), Find(2)), each router checking whether it already carries the requested stream; when the message reaches such a router, the video stream is delivered to the client through it.]

Each router forwards its Find messages to other routers in a round-robin manner.
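A minimal sketch of this lookup (class and method names are mine, and the `ttl` safeguard is an assumption the slide does not mention): a router that already carries the requested stream answers the Find itself; otherwise it forwards the Find to its peers in round-robin order.

```python
import itertools

class OverlayRouter:
    def __init__(self, name, peers=()):
        self.name = name
        self.streams = set()                      # videos flowing through
        self._rr = itertools.cycle(peers) if peers else None

    def find(self, video, ttl=8):
        """Resolve a Find message for `video`."""
        if video in self.streams:
            return self                           # join the multicast here
        if ttl == 0 or self._rr is None:
            return None                           # fall back to the server
        # forward the Find to the next peer, round-robin
        return next(self._rr).find(video, ttl - 1)
```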
Removal of An Overlay Node

[Figure: overlay trees before and after adjustment; node F is removed, and its children reconnect to their grandparent.]

Inform the child nodes to reconnect to the grandparent.
Failure of Parent Node

[Figure: overlay trees before and after adjustment; when node F fails, its children detect the failure and reconnect.]

• Data stop coming from the parent
• Reconnect to the server
Slow Incoming Stream

[Figure: overlay trees before and after adjustment; a node with a slow incoming stream reconnects upward.]

Reconnect upward to the grandparent.
Downward Reconnection

[Figure: overlay trees before and after adjustment; a node on a slow link reconnects downward through a sibling.]

• When reconnection reaches the server, future reconnection of this link goes downward.
• Downward reconnection is done through a sibling node selected in a round-robin manner.
• When downward reconnection reaches a leaf node, future reconnection of this link goes upward again.
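The alternating rule can be captured as a small state machine per link (an illustrative sketch; the slides describe the rule but not a data structure):

```python
class ReconnectPolicy:
    """Tracks the reconnection direction of one overlay link: upward to
    the grandparent until the server is reached, then downward through
    siblings chosen round-robin until a leaf is reached, then upward
    again."""
    def __init__(self):
        self.direction = "up"
        self.rr = 0

    def choose(self, at_server, at_leaf, n_siblings):
        """Return 'grandparent' or ('sibling', index) for the next move."""
        if at_server:
            self.direction = "down"    # cannot go higher than the server
        elif at_leaf:
            self.direction = "up"      # cannot go lower than a leaf
        if self.direction == "up":
            return "grandparent"
        i = self.rr % max(n_siblings, 1)
        self.rr += 1                   # round-robin over siblings
        return ("sibling", i)
```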
Limitation of Patching
• The performance of Patching is limited by the server bandwidth.
• Can we scale the application beyond the physical limitation of the server ?
Chaining [Hua97]

• Using a hierarchy of multicasts
• Clients multicast data to other clients in the downstream
• Demand on server bandwidth is substantially reduced
[Figure: comparison of delivery schemes. With dedicated channels the video server sends 7 video streams; with multicast it sends 3; with chaining it sends only one, because batches of clients form virtual batches and each client serves as a network cache for clients downstream.]
Chaining

• Highly scalable and efficient
• Implementation is complex

[Figure: the video server streams to client A, which caches the data on disk while displaying it on screen and forwards the stream to client B; B does the same for client C.]
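The core of chaining fits in a few lines (a sketch; real chaining must also handle the playback offset between neighbors, which is omitted here): each client caches the blocks it receives and forwards them to the next client, so the server transmits a single stream per chain.

```python
class ChainClient:
    def __init__(self, name):
        self.name = name
        self.cache = []        # blocks held on the client's local disk
        self.next = None       # downstream client in the chain

    def receive(self, block):
        self.cache.append(block)       # cache locally (and display)
        if self.next is not None:
            self.next.receive(block)   # forward downstream

# The server sends one stream to the head of the chain only.
a, b, c = ChainClient("A"), ChainClient("B"), ChainClient("C")
a.next, b.next = b, c
for block in range(3):
    a.receive(block)
```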
Range Multicast [Hua02]
• Deploying an overlay of software routers on the Internet
• Video data are transmitted to clients through these software routers
• Each router caches a prefix of the video streams passing through
• This buffer may be used to provide the entire video content to subsequent clients arriving within a buffer-size period
Range Multicast Group

[Figure: an overlay of software routers R1 through R8 between the video server (root) and clients C1 through C4, which join the same server stream at times 0, 7, 8, and 11.]

• Four clients join the same server stream at different times without delay
• Each client sees the entire video

Buffer size: each router can cache 10 time units of video data.
Assumption: no transmission delay.
Multicast Range
• All members of a conventional multicast group share the same play point at all times
– They must join at the multicast time
• Members of a range multicast group can have a range of different play points
– They can join at their own time
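Two small helpers illustrate the difference (names are mine): a router caching the first `cache_units` time units of a passing stream can serve any client whose arrival lag fits in the cache, and the group's multicast range is simply the spread of its members' play points.

```python
def can_serve(arrival, stream_start, cache_units):
    """A router that caches the first `cache_units` time units of a
    stream can serve a client whose arrival lag fits in the cache."""
    return 0 <= arrival - stream_start <= cache_units

def multicast_range(join_times, now):
    """Play points of a range multicast group's members at time `now`."""
    return [now - t for t in join_times]
```

With clients joining at times 0, 7, 8, and 11, the play points at time 11 are 11, 4, 3, and 0, i.e. the multicast range [0, 11] from the slide.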
Multicast Range at time 11: [0, 11]

[Figure: the same overlay of routers R1 through R8; at time 11 the play points of clients C1 through C4, who joined at times 0, 7, 8, and 11, span the range [0, 11].]
Network Cache Management
• Initially, a cache chunk is free.
• When a free chunk is dispatched for a new stream, the chunk becomes busy.
• A busy chunk becomes hot if its content matches a new service request.
[Figure: chunk state diagram — a free chunk becomes busy when a new stream arrives; a busy chunk becomes hot when a service request arrives before the chunk is full, and further such requests keep it hot; a chunk becomes free again when its last service ends, and a busy chunk can be replaced by a new stream.]
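The chunk life cycle can be coded as a small state machine (a sketch of the diagram; the reclaim rule on the last transition is my reading of the slide):

```python
class CacheChunk:
    """free -> busy -> hot life cycle of a router cache chunk."""
    def __init__(self):
        self.state = "free"
        self.services = 0

    def new_stream(self):
        """A free (or replaced busy) chunk is dispatched for a new stream."""
        self.state = "busy"
        self.services = 1

    def service_request(self):
        """A request matching the chunk's content arrives before the
        chunk is full: the chunk becomes (or stays) hot."""
        if self.state in ("busy", "hot"):
            self.state = "hot"
            self.services += 1

    def end_service(self):
        """When the last service ends, the chunk becomes free again."""
        self.services -= 1
        if self.services == 0:
            self.state = "free"
```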
RM vs. Proxy Servers

Proxy Servers:
• Popular data are heavily duplicated if we cache long videos.
• Caching long videos is not advisable; many data must still be obtained from the server.

Range Multicast:
• RM routers cache only a small leading portion of the video passing through.
• The majority of the data are obtained from the network.
2-Phase Service Model (2PSM) [Hua99]

Browsing videos in a low-bandwidth environment
Search Model
• Use similarity matching or keyword search to look for the candidate videos.
• Preview some of the candidates to identify the desired video.
• Apply VCR-style functions to search for the video segments.
Conventional Approach

1. Download S0
2. Download S1 while playing S0
3. Download S2 while playing S1
...

Advantage: reduces wait time
Disadvantage: unsuitable for searching video

[Figure: timeline of segments S0, S1, S2, S3 streamed from server to client; each segment is displayed while the next one downloads.]
Search Techniques

• Use extra preview files to support the preview function
– Requires more storage space
– Downloading the preview file adds delay
• Use separate fast-forward and fast-reverse files to provide the VCR-style operations
– Requires more storage space
– Server can become a bottleneck
Challenges

How to download the preview frames for FREE?
• No additional delay
• No additional storage requirement

How to support VCR operations without VCR files?
• No overhead for the server
• No additional storage requirement
2PSM – Preview Phase

[Figure: the video's GOFs, numbered 0 through 191, with the subsets downloaded during Steps 1 through 4 highlighted; L and R mark the bounds of the preview range. After 3 steps, GOFs are available for previewing at coarse, evenly spaced positions; each further step downloads GOFs halfway between those already fetched, so the preview quality improves gradually.]
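The download order behind the figure is a binary refinement, which the following sketch reproduces in spirit (the slide's exact GOF indices depend on its grouping, so this illustrates the pattern rather than matching them exactly): step 1 fetches the middle GOF, and each later step fetches the midpoints of the intervals between GOFs already fetched, doubling the preview density per step.

```python
def preview_schedule(n_gofs, steps):
    """Return, for each step, the list of GOF indices to download."""
    per_step = []
    intervals = [(0, n_gofs)]
    for _ in range(steps):
        mids, nxt = [], []
        for lo, hi in intervals:
            mid = (lo + hi) // 2
            mids.append(mid)                 # fetch the interval midpoint
            nxt += [(lo, mid), (mid, hi)]    # refine both halves next step
        per_step.append(mids)
        intervals = nxt
    return per_step
```

For a 192-GOF video this yields 1, 2, 4, and 8 downloads in steps 1 through 4, each step halving the gap between previewable GOFs.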
2PSM – Playback Phase

[Figure: the server stores the video as playback units PU0, PU1, and so on; each PUi is split into a leading part Li, downloaded during the initialization phase, and a remaining part Ri, downloaded during the playback phase. The client displays PU0, PU1, … in order, playing each Li from its local copy while the matching Ri streams in.]
Remarks
1. It requires no extra files to provide the preview feature.
2. Downloading the preview frames is free.
3. It requires no extra files to support the VCR functionality.
4. Each client manages its own VCR-style interaction; the server is not involved.
2PSM Video Browser

[Figure: screenshot of the 2PSM video browser.]