Finishing Flows Quickly with Preemptive Scheduling
Presenter: Gong Lu
Authors
Chi-Yao Hong
Ph.D., Computer Science, UIUC, 09-14
Co-advised by Matthew Caesar and Brighten Godfrey
Research interests: protocol design, network measurement, security
Authors (cont.)
Matthew Caesar
Assistant Professor @ UIUC
Ph.D., Computer Science, U.C. Berkeley
Philip Brighten Godfrey
Assistant Professor @ UIUC
Ph.D., Computer Science, U.C. Berkeley
Introduction
Datacenter applications need to minimize flow completion time and meet soft real-time deadlines
Existing works (TCP, RCP, ICTCP, DCTCP, ...) approximate fair sharing, which is far from optimal for these goals
Example
Centralized Algorithm
For each flow i, the algorithm takes as input the maximal sending rate of flow i and the expected flow transmission time of flow i
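The greedy intuition behind the centralized scheduler can be sketched as follows. This is an illustrative simplification on a single bottleneck link, not the paper's exact algorithm: flows are ranked by criticality (earliest deadline first here), each flow in turn is granted up to its maximal sending rate until the link capacity is exhausted, and the remaining flows are paused. All names are assumptions for illustration.

```python
def allocate(flows, capacity):
    """flows: list of (flow_id, max_rate, deadline); returns {flow_id: rate}.

    Most critical first: the flow with the earliest deadline preempts
    the others, instead of everyone fair-sharing the link.
    """
    rates = {}
    remaining = capacity
    for fid, max_rate, deadline in sorted(flows, key=lambda f: f[2]):
        rate = min(max_rate, remaining)
        rates[fid] = rate          # flows that get 0 are effectively paused
        remaining -= rate
    return rates
```

For example, three flows each wanting rate 5 on a link of capacity 8 yield `allocate([("a", 5, 1.0), ("b", 5, 2.0), ("c", 5, 3.0)], 8)` → `{"a": 5, "b": 3, "c": 0}`: the most critical flow finishes at full speed while the least critical waits, rather than all three crawling at rate 8/3.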
Problem
The centralized algorithm is unrealistic:
It assumes complete visibility of the network
It assumes the ability to communicate with devices with zero delay
It introduces a single point of failure and significant overhead for senders to interact with the centralized coordinator
The Solution
Fully distributed implementation across sender, receiver, and switch
Flow information is propagated via explicit feedback in packet headers
When the feedback reaches the receiver, it is returned to the sender in an ACK packet
PDQ Sender
Maintains several state variables: its current sending rate; the switches (if any) that have paused the flow; the flow deadline (optional); the expected flow transmission time; the inter-probing time; and the measured round-trip time
PDQ Sender (cont.)
Sends packets at its current sending rate
If the rate is zero (the flow is paused), instead sends a probe packet once per inter-probing interval
Attaches a scheduling header; the remaining fields are set from its currently maintained variables
When an ACK packet arrives: updates the sending rate from the feedback, the expected transmission time from the remaining flow size, and the RTT from the packet arrival time; the remaining fields are copied from the header
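The sender's per-ACK bookkeeping described above can be sketched as below. This is a hedged sketch, not the paper's code: the scheduling header is modeled as a plain dict, and the field names (`rate`, `pause_by`, `deadline`, `expected_time`, `rtt`) paraphrase the state variables listed on the previous slide rather than reproduce the paper's exact identifiers.

```python
class PdqSender:
    def __init__(self, flow_size, deadline=None):
        self.remaining = flow_size   # bytes left to send
        self.rate = 0.0              # current sending rate (bytes/s)
        self.pause_by = None         # switch (if any) that paused this flow
        self.deadline = deadline     # optional flow deadline
        self.expected_time = None    # expected flow transmission time
        self.rtt = None              # measured round-trip time

    def on_ack(self, header, send_time, arrival_time):
        # Sending rate comes from the switches' explicit feedback.
        self.rate = header["rate"]
        self.pause_by = header["pause_by"]
        # Expected transmission time follows from remaining size and rate.
        if self.rate > 0:
            self.expected_time = self.remaining / self.rate
        # RTT is measured from the packet's send and arrival times.
        self.rtt = arrival_time - send_time
```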
PDQ Receiver
Copies the scheduling header from each data packet into the corresponding ACK
Reduces the rate feedback if it exceeds the receiver's processing capacity, to avoid buffer overrun at the receiver
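The receiver's role is small enough to sketch in a few lines, assuming the same dict-shaped header as elsewhere in these notes (the field names are illustrative, not the paper's):

```python
def receiver_feedback(header, processing_capacity):
    """Build the ACK's scheduling header from a data packet's header.

    Every field is copied unchanged, except that the rate is capped at
    the receiver's own processing capacity to avoid buffer overrun.
    """
    ack = dict(header)
    ack["rate"] = min(ack["rate"], processing_capacity)
    return ack
```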
PDQ Switch
Maintains per-flow state on each link (the fields of the scheduling header)
Stores only the most critical flows; less critical flows are handled by RCP using the leftover bandwidth
RCP does not require per-flow state
This is a partial shift away from optimizing completion time and toward traditional fair sharing
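The "store only the most critical flows" idea can be illustrated with a toy selection routine (names and the deadline-based criticality metric are assumptions for illustration): the switch keeps per-flow state for at most k flows, and any flow that does not make the cut falls back to RCP-style fair sharing of the leftover bandwidth.

```python
import heapq

def most_critical(flows, k):
    """flows: list of (flow_id, deadline); earlier deadline = more critical.

    Returns the ids of the k most critical flows, i.e. the only flows
    for which this switch would keep per-flow state.
    """
    return [fid for fid, _ in heapq.nsmallest(k, flows, key=lambda f: f[1])]
```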
PDQ Switch (cont.)
Decides whether to accept or pause each flow
A flow is accepted only if all switches along its path accept it
A flow is paused if any switch pauses it
Flow acceptance: on the forward path, the switch computes the available bandwidth based on the flow's criticality and updates the rate and pause fields in the header
On the reverse path, if a switch sees an empty pauseby field in the header, it records the global acceptance decision in its state
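The forward-path decision above can be sketched as follows. This is an illustrative simplification, not the paper's pseudocode: `available_bw` stands in for the bandwidth the switch computes as left over after more critical flows, and the header field names are assumptions consistent with the sketches above.

```python
def forward_path(header, switch_id, available_bw):
    """Process one data packet's scheduling header at a switch.

    Accept: cap the flow's rate by what this link can spare.
    Pause: zero the rate and record this switch in the pauseby field
    (if no upstream switch already paused the flow), so the sender
    knows which switch to probe later.
    """
    if header["pause_by"] is None and available_bw > 0:
        header["rate"] = min(header["rate"], available_bw)
    else:
        header["rate"] = 0.0
        if header["pause_by"] is None:
            header["pause_by"] = switch_id
    return header
```

Because every switch on the path applies this in sequence, the rate that reaches the receiver is the minimum over all links, and a single pausing switch is enough to pause the flow end to end.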
Several Optimizations
Early start: provides seamless flow switching
Early termination: terminates flows that are unable to meet their deadlines
Dampening: avoids frequent flow switching
Suppressed probing: avoids large bandwidth usage from paused senders
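Of the optimizations above, early termination reduces to a one-line feasibility test, sketched here with illustrative parameter names: a flow is hopeless once even sending at its maximal rate cannot deliver the remaining bytes before the deadline.

```python
def should_terminate(remaining_bytes, max_rate, deadline, now):
    """True if the flow can no longer meet its deadline.

    best_finish is the earliest possible completion time, assuming the
    flow were granted its maximal sending rate from now on.
    """
    best_finish = now + remaining_bytes / max_rate
    return best_finish > deadline
```

Terminating such flows early frees their bandwidth for flows whose deadlines are still reachable.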
Evaluation
Conclusion
PDQ can complete flows quickly and meet flow deadlines
PDQ provides a distributed algorithm to approximate a range of scheduling disciplines
PDQ provides significant advantages over existing schemes in extensive packet-level and flow-level simulations