Spring 2020: Venue: Haag 315, Time: Tu 5:30-8:30pm
ECE 5578 Multimedia Communication
Lec 12a: Multimedia Transport System I - Congestion Model
Zhu Li, Dept of CSEE, UMKC
Office: FH560E, Email: [email protected], Ph: x2346, http://l.web.umkc.edu/lizhu
Z. Li: ECE 5578 Multimedia Comm, 2020 p.1
slides created with WPS Office Linux and EqualX LaTeX equation editor
Outline
• Congestion Models and Control
• TCP and TCP Friendly Congestion Model
• New Congestion Work at RMCAT
• Summary
Media Transport over the IP Networks
RTP, RTCP, RTSP
[Figure: RTSP/RTP session between a media server (RTSP server + data source) and a media player (RTSP client + AV subsystem). The client sends RTSP SETUP (UDP ports are chosen on both ends), RTSP PLAY, and finally RTSP TEARDOWN, each answered by RTSP OK; RTP audio and RTP video flow over UDP together with RTCP, while RTSP itself runs over TCP.]
RTP Header
• Sequence number: incremented by one for each RTP PDU; used for PDU loss detection and to restore PDU sequence
• Payload type: identifies the media payload format
• SSRC: identifies the synchronization source (used by mixers)
• CSRC: identifies contributing sources
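As an aside (not from the slides), the 12-byte fixed RTP header described above can be parsed in a few lines of Python; the field layout follows RFC 3550:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("packet shorter than the fixed RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # should be 2
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,     # number of contributing sources
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,   # identifies the media payload format
        "sequence_number": seq,      # +1 per PDU: loss detection, reordering
        "timestamp": ts,
        "ssrc": ssrc,                # synchronization source
    }
```

The CSRC list (if `csrc_count` > 0) follows the fixed header; it is omitted here for brevity.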
HTTP History
Evolution of Web Content Transport:
• 1996: HTTP 1.0 (rise of the Internet as a platform)
• 1999: HTTP 1.1 (Web 2.0)
  o Persistent connections • Virtual host support • Conditional caching • Digest authentication • Chunked transfer encoding • Enhanced compression
• 2009: SPDY 1.0
• 2015: HTTP 2.0 (cloud, mobility)
  o Header compression • Security requirements • Interleaving requests and responses • Push operations • Binary instead of textual
SPDY / HTTP 2.0 Work
• Still works on top of a TCP connection
• Slow start (mitigated by raising the initial cwnd size to 16)
• Head of Line (HOL) blocking:
  o A disadvantage of SPDY is that an out-of-order packet delivery on TCP induces head-of-line blocking for all the SPDY streams multiplexed on that TCP connection.
• Connection latency: 3 RTTs to establish a secure link
• Under-utilization of link capacity by TCP rate control; no-loss, in-order delivery is not that big a deal for media data (we have CTS/DTS)
CDN and Web Cache
• Forward and Reverse Proxy (Fwd Proxy / Rev Proxy)
  o Fwd: intercepts the client request, serves it locally if it can
  o Rev: intercepts the request on behalf of the server, serves it transparently to the client if it can
• Research issues: rate-agnostic content identification, fragmentation
• IRTF ICN/CCN work: https://trac.tools.ietf.org/group/irtf/trac/wiki/icnrg
WebRTC
• WebRTC is a browser-embedded native audio/visual real-time streaming solution
• Built on top of RTP
• Has firewall traversal support
• Widely deployed in Chrome and Firefox
• Main utilities:
  o MediaStreams – access to the user's camera and mic
  o PeerConnection – audio/video calls
  o DataChannels – p2p application data transfer
More to come in RMCAT coverage!
QUIC – Quick UDP Internet Connection
Main QUIC features/design goals:
• Connection establishment latency
• Improved congestion control – more suited for media QoE
• Multiplexing without head-of-line blocking
• Forward Error Correction (FEC) – reduce delay
• Connection migration: native support for multipath via CID (Connection ID)
Outline
• ReCap Lecture 17
• Congestion Models and Control
• TCP and TCP Friendly Congestion Model
• New Congestion Work at RMCAT
• Summary
TCP Design
To provide a reliable byte stream service: error-free, in-order delivery
Encapsulation (all fields in network byte order / big-endian):
Ethernet Hdr (14 bytes) | IP Header (20 bytes) | TCP Header (20 bytes) | App. Hdr & Data
TCP Connection
3-Way Handshake: flag bits get set
Client → Server: SYN (only)
Server → Client: SYN + ACK
Client → Server: ACK
(then data exchange: ACK segments, possibly with Push/Urgent flags, in both directions)
TCP Disconnect / Tear Down
(either Host A or Host B can be the server)
Host A ↔ Host B: data exchange (ACK, possibly with Push/Urgent flags)
Host A → Host B: FIN + ACK
Host B → Host A: ACK
Host B → Host A: FIN + ACK (or RESET + ACK)
Host A → Host B: ACK
TCP Transmission – Windowed Control
Transmit and then wait for ACK:
• Only one TCP segment is "in flight" at a time
• Especially bad when the delay-bandwidth product is high
Numerical example: 1.5 Mbps link with a 45 msec round-trip time (RTT)
o Delay-bandwidth product is 67.5 Kbits (or about 8 KBytes)
o But the sender can send at most one packet per RTT
o Assuming a segment size of 1 KB (8 Kbits), this leads to 8 Kbits/segment / 45 msec/segment ≈ 182 Kbps
o That's just about one-eighth of the 1.5 Mbps link capacity
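The stop-and-wait numbers above can be checked directly; a small sketch using the slide's example values:

```python
def stop_and_wait_throughput(segment_bits: float, rtt_s: float) -> float:
    """One segment per RTT: achievable throughput in bits/s."""
    return segment_bits / rtt_s

link_bps = 1.5e6            # 1.5 Mbps link
rtt = 0.045                 # 45 ms round-trip time
seg = 1024 * 8              # 1 KB segment = 8192 bits

tput = stop_and_wait_throughput(seg, rtt)  # about 182 Kbps
bdp_bits = link_bps * rtt                  # delay-bandwidth product: 67.5 Kbits
utilization = tput / link_bps              # roughly one-eighth of capacity
```

The gap between `tput` and `link_bps` is exactly the motivation for the sliding window mechanism that follows.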
Sliding Window
• Allow a larger amount of data "in flight"
• Allow the sender to get ahead of the receiver … though not too far ahead
[Figure: sending-process buffer (last byte ACKed ≤ last byte sent ≤ last byte written) and receiving-process buffer (last byte read ≤ next byte expected ≤ last byte received), with TCP in between.]
Receiver Buffering
• Window size: the amount that can be sent without acknowledgment; the receiver needs to be able to store this amount of data
• The receiver advertises the window to the sender: it tells the sender the amount of free space left … and the sender agrees not to exceed this amount
[Figure: sender's byte stream split into data ACK'd | outstanding un-ACK'd data | data OK to send | data not OK to send yet; the window size spans the middle two regions.]
The Congestion Window
• In order to deal with congestion, a new state variable called "CongestionWindow" (CWND) is maintained by the source; it limits the amount of data that the source has in transit at a given time.
• MaxWindow = Min(AdvertisedWindow, CongestionWindow)
• EffectiveWindow = MaxWindow - (LastByteSent - LastByteAcked)
• TCP sends no faster than what the slowest component -- the network or the destination host -- can accommodate.
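A minimal sketch of the window arithmetic above (names mirror the slide's variables):

```python
def effective_window(advertised_window: int, congestion_window: int,
                     last_byte_sent: int, last_byte_acked: int) -> int:
    """Bytes the sender may still put in flight right now."""
    max_window = min(advertised_window, congestion_window)
    in_flight = last_byte_sent - last_byte_acked
    return max(0, max_window - in_flight)

# Receiver advertises 64 KB but the congestion window is only 16 KB,
# with 8 KB already unacknowledged -> 8 KB may still be sent:
effective_window(64000, 16000, 12000, 4000)
```

Whichever of the two windows is smaller governs the sender, which is exactly the "slowest component" statement above.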
Managing the Congestion Window
• Decrease the window when TCP perceives high congestion; increase it when TCP knows that there is not much congestion.
• How? Since increased congestion is more catastrophic, reduce the window more aggressively.
• Increase is additive, decrease is multiplicative -- the Additive Increase/Multiplicative Decrease (AIMD) behavior of TCP.
AIMD
• Each time congestion occurs, the congestion window is halved. For example, if the current window is 16 segments and a time-out occurs (implying packet loss), reduce the window to 8. Eventually the window may be reduced to 1 segment; it is not allowed to fall below 1 segment (MSS).
• For each congestion window's worth of packets that has been sent out successfully (an ACK is received), increase the congestion window by the size of one segment.
AIMD
• Remember that TCP is byte oriented: it does not wait for an entire window's worth of ACKs to add one segment's worth to the congestion window.
• In reality, the TCP source increments the congestion window by a little for each ACK that arrives:
  o Increment = MSS * (MSS / CongestionWindow), applied for each MSS-sized segment ACKed
  o CongestionWindow += Increment
• Thus, TCP demonstrates a sawtooth behavior!
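The per-ACK increment rule can be sketched as follows (the 1460-byte MSS is an assumed typical value, not from the slides):

```python
MSS = 1460  # bytes, an assumed typical maximum segment size

def on_ack(cwnd: float) -> float:
    """Congestion avoidance: per-ACK additive increase.
    Adding MSS*(MSS/cwnd) per ACK adds about one MSS per RTT,
    since roughly cwnd/MSS ACKs arrive each RTT."""
    return cwnd + MSS * (MSS / cwnd)

def on_loss(cwnd: float) -> float:
    """Multiplicative decrease: halve the window, floor at one MSS."""
    return max(MSS, cwnd / 2)

# One RTT's worth of ACKs (~10 here) grows a 10-MSS window by about one MSS:
cwnd = 10 * MSS
for _ in range(10):
    cwnd = on_ack(cwnd)
```

Repeating additive growth until a loss halves the window is what produces the sawtooth.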
[Figure: TCP sawtooth -- congestion window (KB, 10-70) vs. time (1.0-10.0 seconds): linear additive growth repeatedly halved on loss.]
TCP Slow Start
• Additive increase is good when the source is operating near the capacity of the network, but it takes too long to ramp up when starting from scratch.
• Slow start → increase the congestion window rapidly at cold start; it allows exponential growth in the beginning.
• E.g., initially CW = 1; if an ACK is received, CW = 2. If 2 ACKs are now received, CW = 4. If 4 ACKs are now received, CW = 8, and so on.
• Note that upon experiencing packet loss, multiplicative decrease takes over.
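The doubling-per-RTT behavior can be sketched as:

```python
def slow_start_windows(initial_cwnd: int, rtts: int) -> list[int]:
    """Slow start: each ACK adds one segment, so a full window of
    ACKs doubles the window -- i.e., the window doubles every RTT."""
    windows = [initial_cwnd]
    for _ in range(rtts):
        windows.append(windows[-1] * 2)
    return windows

slow_start_windows(1, 4)   # [1, 2, 4, 8, 16]
```

Exponential growth continues until the window crosses the slow-start threshold or a loss occurs.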
[Figure: slow start timeline between Host A and Host B -- one segment in the first RTT, two segments in the second, four in the third.]
Why Call it Slow Start?
• The original version of TCP suggested that the sender transmit as much as the Advertised Window permitted; routers may not be able to cope with this "burst" of transmissions.
• Slow start is slower than that original version -- it ensures that the transmission burst does not happen all at once.
TCP Tahoe
Loss based:
• When CW is below the threshold, CW grows exponentially
• When it is above the threshold, CW grows linearly
• Upon time-out, set the "new" threshold to half of the current CW, and reset CW to 1
This version of TCP is called "TCP Tahoe".
TCP Reno
• Fast retransmit: after receiving 3 duplicate ACKs, resend the first unacked packet in the window
  o Try to avoid waiting for a timeout
• Fast recovery: after the retransmission, do not enter slow start
  o Threshold = CongWin/2
  o CongWin = 3 + CongWin/2
  o For each further duplicate ACK received: CongWin++
  o After a new ACK: CongWin = Threshold, and return to congestion avoidance
• Single packet drop: great!
[Figure: sender transmits packets 1-6; the receiver returns ACK 1, ACK 2, then duplicate ACK 2s for each packet following the lost packet 3; after 3 duplicate ACKs the sender retransmits packet 3 and then receives ACK 6.]
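A simplified, illustrative sketch of the fast retransmit / fast recovery transitions described above (window in segments; many details of real TCP Reno are omitted):

```python
class RenoSketch:
    """Toy Reno state: slow start, congestion avoidance, fast recovery."""

    def __init__(self):
        self.cwnd = 10.0        # congestion window, in segments
        self.ssthresh = 64.0    # slow-start threshold
        self.dup_acks = 0
        self.in_recovery = False

    def on_new_ack(self):
        if self.in_recovery:              # recovery ends: deflate to threshold
            self.cwnd = self.ssthresh
            self.in_recovery = False
        elif self.cwnd < self.ssthresh:   # slow start: +1 per ACK
            self.cwnd += 1
        else:                             # congestion avoidance: ~+1 per RTT
            self.cwnd += 1.0 / self.cwnd
        self.dup_acks = 0

    def on_dup_ack(self):
        self.dup_acks += 1
        if self.in_recovery:
            self.cwnd += 1                # inflate for each extra dup ACK
        elif self.dup_acks == 3:          # fast retransmit trigger
            self.ssthresh = self.cwnd / 2
            self.cwnd = 3 + self.cwnd / 2 # enter fast recovery
            self.in_recovery = True
```

The retransmission of the lost segment itself would happen at the `dup_acks == 3` transition.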
TCP Reno Features
• Fast Retransmit: triggered after receiving 3 duplicate ACKs on the same packet
• Fast Recovery: CWnd and Threshold adjustment
[Figure: CWnd vs. time -- initial slow start, congestion avoidance (AIMD), fast retransmit and recovery, and slow start again to pace packets after a time-out (timeouts may still occur).]
TCP Vegas
• Idea: delay-based control -- track the RTT and try to avoid packet loss
  o latency increases: lower the rate
  o latency very low: increase the rate
• Implementation:
  o sample_RTT: current RTT
  o Base_RTT: minimum over sample_RTT
  o Expected rate = CWnd / Base_RTT
  o Actual rate = number of packets sent / sample_RTT
  o Diff = Expected - Actual
TCP Vegas Congestion Control
Diff = Expected - Actual
• Congestion avoidance: introduce two thresholds α and β, with α < β. Once per RTT:
  o If (Diff < α): CongWin = CongWin + 1
  o If (Diff > β): CongWin = CongWin - 1
  o Otherwise: no change
• Slow start parameter γ: if (Diff > γ), move to congestion avoidance
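A per-RTT Vegas update can be sketched as follows; Diff is kept in rate units (packets/s) as on the slide, and the α/β values passed in are assumptions, not prescribed by the slides:

```python
def vegas_update(cwnd: float, base_rtt: float, sample_rtt: float,
                 alpha: float, beta: float) -> float:
    """One per-RTT TCP Vegas window update.
    Diff = Expected - Actual, in rate units; alpha < beta are the
    thresholds (equivalently, Diff*Base_RTT estimates queued packets)."""
    expected = cwnd / base_rtt    # rate if there were no queuing delay
    actual = cwnd / sample_rtt    # rate actually achieved this RTT
    diff = expected - actual
    if diff < alpha:
        return cwnd + 1           # path under-used: additive increase
    if diff > beta:
        return cwnd - 1           # queue building up: additive decrease
    return cwnd                   # in the sweet spot: hold

# base RTT 100 ms; a 200 ms sample RTT means a large Diff -> back off:
vegas_update(10, 0.1, 0.2, alpha=10, beta=30)
```

Note the contrast with Reno: Vegas reacts to rising delay before losses ever occur.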
TCP Throughput
• TCP rate at steady state depends on:
  o Segment size: MSS
  o Round-trip delay: RTT
  o Probability of packet loss: p
• Observation:
  o Reducing RTT is the key! Indeed, AKAMAI, Netflix, etc. use RTT as the KPI for deploying and provisioning CDN edge servers.
  o The probability of loss is mostly due to congestion. For wireless networks, loss due to the PHY layer gets the wrong interpretation in TCP control!
Rate = MSS / [ RTT · ( √(2p/3) + 12·√(3p/8) · p · (1 + 32p²) ) ]
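Plugging numbers into the steady-state rate formula gives a feel for the RTT sensitivity; a sketch (the 12·√(3p/8) term corresponds to assuming t_RTO = 4·RTT folded into the timeout part of the model):

```python
from math import sqrt

def tcp_throughput(mss_bytes: float, rtt_s: float, p: float) -> float:
    """Steady-state TCP rate in bytes/s (Padhye-style model,
    with the retransmission timeout taken as 4*RTT)."""
    denom = rtt_s * (sqrt(2 * p / 3)
                     + 12 * sqrt(3 * p / 8) * p * (1 + 32 * p * p))
    return mss_bytes / denom

# 1460-byte MSS, 100 ms RTT, 1% loss -> roughly 1.3 Mbps:
rate = tcp_throughput(1460, 0.100, 0.01)
```

Halving the RTT doubles the rate at a given loss probability, which is why CDNs treat RTT as the key provisioning metric.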
TCP Summary
TCP Features
• Widely deployed transport solution over the current Internet
• Reliable byte stream service
• Loss-based congestion control: TCP Tahoe, TCP Reno
• Delay-based congestion control: TCP Vegas
TCP as media transport
• Byte stream vs. packet service: overkill
• Connection delays: 3 RTTs for secure TCP
• Slow start: under-utilization of the link capacity
• Leads to new, non-TCP-based media transport work: QUIC, WebRTC
Outline
• ReCap Lecture 17
• Congestion Models and Control
• TCP and TCP Friendly Congestion Model
• New Congestion Work at RMCAT
• Summary
WebRTC
• Motivation: native browser support for real-time communication for a variety of applications
  > Javascript API for HTML (W3C)
  > Signalling & NAT traversal (IETF RTCWEB)
  > Security (IETF RTCWEB)
  > Congestion control (IETF RMCAT)
RMCAT
• RMCAT = RTP Media Congestion Avoidance Techniques
• IETF working group resources: https://datatracker.ietf.org/wg/rmcat/documents/
• Main RMCAT technologies:
  o Google's congestion control (GCC):
    - L. De Cicco et al.: Experimental Investigation of the Google Congestion Control for Real-Time Flows
    - V. Singh et al.: Performance Analysis of Receive-Side Real-Time Congestion Control for WebRTC
    - L. De Cicco et al.: Understanding the Dynamic Behaviour of the Google Congestion Control
  o NADA (Cisco):
    - X. Zhu, R. Pan: NADA: A Unified Congestion Control Scheme for Low-Latency Interactive Video
  o DFlow:
    - P. O'Hanlon, K. Carlberg: DFlow: Low latency congestion control
  o Coupled Congestion Control:
    - S. Islam et al.: One Control to Rule Them All - Coupled Congestion Control for RTP Media (Poster)
  o Congestion Control and FEC:
    - M. Nagy et al.: Congestion Control using FEC for Conversational Multimedia Communication (Nokia may have IPR)
GCC
• Google Congestion Control (GCC) is implemented in Chrome and Firefox to support WebRTC
• Utilizes RTP and RTCP for media data transport and control
• Sender-side control is loss based: it probes the available bandwidth as the sending rate As
• Receiver-side control is delay based: it computes REMB, the "Receiver Estimated Maximum Bitrate" Ar, to limit the sending rate As
GCC Sender Side Logic
• Measure of loss: fl(tk), the fraction of packets sent that were lost, at time tk when the k-th RTCP message is received
• TCP Friendly Rate Control (TFRC): the sending rate is given by
• High loss rate: send at the TFRC rate; unlike TCP, do not halve the CWnd
• Small loss rate: AIMD-like behavior
• Mid loss rate: maintain the current rate
X(tk) = MSS / [ RTT · ( √(2·fl(tk)/3) + 12·√(3·fl(tk)/8) · fl(tk) · (1 + 32·fl(tk)²) ) ]

As(tk) =
  max{ X(tk), As(tk-1)·(1 - 0.5·fl(tk)) },  if fl(tk) > 0.1
  1.05·(As(tk-1) + 1 kbps),                 if fl(tk) < 0.02
  As(tk-1),                                 if 0.02 ≤ fl(tk) ≤ 0.1
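An illustrative sketch of the loss-based sender update (rates in bps and the 1 kbps additive term in those units are assumptions for the example, not fixed by the slides):

```python
from math import sqrt

def tfrc_rate(mss_bytes: float, rtt_s: float, p: float) -> float:
    """TCP-friendly rate in bits/s (same model as the TCP throughput slide)."""
    denom = rtt_s * (sqrt(2 * p / 3)
                     + 12 * sqrt(3 * p / 8) * p * (1 + 32 * p * p))
    return 8 * mss_bytes / denom

def gcc_sender_rate(prev_as: float, loss: float,
                    mss_bytes: float, rtt_s: float) -> float:
    """One loss-based GCC sender update for sending rate As (bps)."""
    if loss > 0.10:          # high loss: back off, but no lower than TFRC
        return max(tfrc_rate(mss_bytes, rtt_s, loss),
                   prev_as * (1 - 0.5 * loss))
    if loss < 0.02:          # low loss: gentle multiplicative probe
        return 1.05 * (prev_as + 1000)
    return prev_as           # mid loss: hold the current rate
```

Note the contrast with TCP: even at high loss, GCC only scales the rate by (1 - 0.5·fl) rather than halving the window.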
GCC Receiver Side Logic
• It is delay based: at time ti, when the i-th group of RTP packets is received, the desired receiving rate Ar(ti) is η·Ar(ti-1) in the Increase state, α·R(ti) in the Decrease state, and Ar(ti-1) in the Hold state,
  where R(ti) is the average actual receiving rate in the last 500 ms, η ∈ [1.005, 1.3], and α ∈ [0.8, 0.95]
• The desired receiving rate Ar(ti) is fed back to the sender as a REMB message over RTCP (currently one every 1000 ms); the sender then caps its sending rate: As ← min(As, Ar)
Receiver Side State Machine
• The receiver updates Ar(ti) according to the congestion state estimate (Increase / Decrease / Hold)
• Packet-arrival-statistics based link usage estimation. The packet (one-way) delay variation is
    d(ti) = (ti - ti-1) - (Ti - Ti-1) = ΔL(ti)/C + m(ti) + n(ti)
  where {Ti} and {ti} are the sending and receiving time stamps of the i-th video packet group, C is the link capacity, ΔL(ti) is the variation of the video packet size L, m(ti) is the queuing delay variation, and n(ti) is the network jitter noise
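The delay-variation signal d(ti) can be computed directly from the group timestamps; a small sketch with made-up timestamps:

```python
def delay_variations(send_ts: list[float], recv_ts: list[float]) -> list[float]:
    """d(ti) = (ti - ti-1) - (Ti - Ti-1), in seconds.
    Positive values suggest a growing queue (overuse); negative
    values suggest the queue is draining."""
    return [
        (recv_ts[i] - recv_ts[i - 1]) - (send_ts[i] - send_ts[i - 1])
        for i in range(1, len(send_ts))
    ]

# Groups sent every 33 ms but arriving 35 ms apart -> +2 ms per group,
# the signature of a queue building up at the bottleneck:
send = [0.000, 0.033, 0.066, 0.099]
recv = [0.050, 0.085, 0.120, 0.155]
ds = delay_variations(send, recv)
```

In GCC this raw signal is then filtered (a Kalman filter estimates m(ti)) before the overuse detector acts on it.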
Link Overuse Detection
• Observe the arrival filter signal m(t) and compare it against a threshold to drive the Increase / Decrease / Hold state machine
Loss Control
• Use a mix of FEC and ARQ to control losses; AL-FEC for erasure/loss control is an active topic area
• The goodput rate, i.e., the sending rate minus the FEC and ARQ cost, is the true media rate
• GCC forces rFEC(t) < 0.5·As(t)
• GCC ARQ: at most re-transmit As(t)·RTT bytes of data (not a good option for live and low-delay applications!)
[Figure: media rate with FEC and ARQ vs. with no FEC and ARQ.]
GCC Simulation – Setup
• Two scenarios
• L. De Cicco et al., Packet Video Workshop, 2013
GCC Link Capacity Utilization – Single Flow
• Fairly good utilization; the throughput follows the link capacity change
Single GCC Flow Case
• Effects of setting different parameter values
GCC Sharing with a TCP Flow
• GCC does not get its fair share of throughput at the bottleneck
[Figure: GCC vs. TCP throughput over time; markers show when REMB messages are received.]
2 GCC Flows Sharing a Bottleneck
• Lack of cross-traffic coordination results in unpredictable behavior
Summary
TCP-Type Congestion Control
• Congestion window based
• Slow start at the beginning
• Congestion avoidance – AIMD
• Under-utilization of the link
• Slow connection setup
New RMCAT Congestion Control
• Mix of delay-based and loss-based control
• Sender rate is based on loss
• Receiver rate is based on delay (variation of the packet arrival signal)
• RTP for data, RTCP for signalling