TCP and Congestion Control
Nick Feamster, Computer Networking I, Spring 2013

TRANSCRIPT

Page 1:

TCP and Congestion Control
Nick Feamster
Computer Networking I
Spring 2013

Page 2:

Sequence Numbers

• How large do sequence numbers need to be?
  – Must be able to detect wrap-around
  – Depends on sender/receiver window size
• Example
  – Max seq = 7 (a sequence space of 8), send window = recv window = 7
  – Packets 0..6 are sent successfully and all ACKs are lost
  – The receiver now expects 7, 0..5; the sender retransmits the old 0..6, which the receiver wrongly accepts as new data!
• So the sequence space must be at least send window + recv window (see the sketch below)
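
A minimal sketch of this example, assuming simple sliding windows over a modular sequence space; the window sizes and sequence space come straight from the slide, while the set-based bookkeeping is purely illustrative:

```python
# Slide's example: max seq = 7 (sequence space of 8), send window = recv window = 7.
SEQ_SPACE = 8
SEND_WIN = RECV_WIN = 7

# Sender transmitted 0..6; every ACK was lost, so it retransmits the same 0..6.
retransmitted = {i % SEQ_SPACE for i in range(7)}

# Receiver accepted 0..6, so its window now starts at 7 and spans RECV_WIN numbers.
receiver_accepts = {(7 + i) % SEQ_SPACE for i in range(RECV_WIN)}   # {7, 0, 1, ..., 5}

print(sorted(retransmitted & receiver_accepts))   # [0, 1, 2, 3, 4, 5]: old data looks new!

# The fix: the two sets can only be disjoint if the windows fit in the space.
if SEND_WIN + RECV_WIN > SEQ_SPACE:
    print("unsafe: send window + recv window exceeds the sequence space")
```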

Page 3:

Sequence Numbers

• 32 bits, unsigned, and they count bytes, not packets!
  – Comparison is circular (modulo 2^32)
• Why so big?
  – For the sliding window, we must have |sequence space| ≥ |sending window| + |receiving window|
  – No problem for 32 bits
  – Also want to guard against stray packets
    • With IP, packets have a maximum lifetime of 120 s
    • The sequence space would wrap around within that lifetime at roughly 36 MB/s, i.e., about 286 Mbit/s (hmm!)

[Figure: circular comparison on the sequence-number ring from 0 to Max, illustrating when a < b and when b < a.]
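
A small sketch of circular, wrap-around-aware comparison for 32-bit sequence numbers; this mirrors the usual serial-number test, but treat it as illustrative rather than any particular stack's macro:

```python
# Circular comparison: a is "before" b if going forward from a to b crosses
# less than half of the 32-bit ring.
MOD = 2 ** 32

def seq_lt(a: int, b: int) -> bool:
    return a != b and ((b - a) % MOD) < MOD // 2

print(seq_lt(100, 200))         # True: 200 is ahead of 100
print(seq_lt(2**32 - 10, 5))    # True: 5 is just past the wrap-around
print(seq_lt(5, 2**32 - 10))    # False
```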

Page 4:

TCP Flow Control

• TCP is a sliding window protocol
  – For window size n, the sender can have up to n bytes outstanding without receiving an acknowledgement
  – When the data is acknowledged, the window slides forward
• Each packet advertises a window size
  – Indicates the number of bytes the receiver has space for
• Original TCP always sent the entire advertised window
  – But receiver buffer space != available network capacity!
• Congestion control now limits this
  – window = min(receiver window, congestion window) (see the sketch below)
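
A minimal sketch of the send-side rule in the last bullet; the names (rwnd, cwnd, bytes_in_flight) are illustrative and not taken from any particular implementation:

```python
# Illustrative send-side check: the usable window is the smaller of what the
# receiver advertises (rwnd) and what congestion control allows (cwnd), minus
# the bytes already in flight (sent but not yet acknowledged).
def bytes_sendable(rwnd: int, cwnd: int, bytes_in_flight: int) -> int:
    window = min(rwnd, cwnd)                  # effective window
    return max(window - bytes_in_flight, 0)   # room left before we must wait for ACKs

print(bytes_sendable(rwnd=65_535, cwnd=14_600, bytes_in_flight=10_000))  # 4600
print(bytes_sendable(rwnd=8_192,  cwnd=64_000, bytes_in_flight=8_192))   # 0: wait for ACKs
```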

Page 5:

Window Flow Control: Send Side

[Figure: the sender's byte stream divided into "sent and acked", "sent but not acked" (covered by the window), and "not yet sent", with a pointer to the next byte to be sent.]

Page 6:

Window Flow Control: Send Side

[Figure: the send buffer (acknowledged / sent / to be sent / outside window) advancing as the application writes, alongside the TCP header fields carried in each packet sent and received: source port, destination port, sequence number, acknowledgment, header length/flags, window, checksum, urgent pointer, options.]

Page 7:

Window Flow Control: Receive Side

[Figure: the receive buffer holds data that is acked but not yet delivered to the user, plus data not yet acked; the advertised window is the remaining buffer space. When new data arrives, what should the receiver do?]

Page 8:

Internet Pipes?

• How should you control the faucet?
  – Too fast: the sink overflows
  – Too slow: what happens?
• Goals
  – Fill the bucket as quickly as possible
  – Avoid overflowing the sink
• Solution: watch the sink

Page 9:

The Problem of Congestion

• What is congestion?
  – Load is higher than capacity
• What do IP routers do?
  – Drop the excess packets
• Why is this bad?
  – Wasted bandwidth for retransmissions

[Figure: goodput vs. load. Past the knee, goodput falls as load keeps rising: "congestion collapse", an increase in load that results in a decrease in useful work done.]

Page 10:

Congestion

• Different sources compete for resources inside the network
• Why is it a problem?
  – Sources are unaware of the current state of the resource
  – Sources are unaware of each other
• Manifestations:
  – Lost packets (buffer overflow at routers)
  – Long delays (queuing in router buffers)
  – Can result in throughput less than the bottleneck link (1.5 Mbps for the topology below), a.k.a. congestion collapse

[Figure: example topology with 10 Mbps and 100 Mbps access links feeding a shared 1.5 Mbps bottleneck link.]

Page 11:

No Problem Under Circuit Switching

• Source establishes a connection to the destination
  – Nodes reserve resources for the connection
  – The circuit is rejected if the resources aren't available
  – Cannot have more than the network can handle

Page 12:

Congestion is Unavoidable

• Two packets arrive at the same time
  – The node can only transmit one
  – … and must either buffer or drop the other
• If many packets arrive in a short period of time
  – The node cannot keep up with the arriving traffic
  – … and the buffer may eventually overflow

Page 13:

The Problem of Congestion

• What is congestion?
  – Load is higher than capacity
• What do IP routers do?
  – Drop the excess packets
• Why is this bad?
  – Wasted bandwidth for retransmissions

[Figure: goodput vs. load. Past the knee, goodput falls as load keeps rising: "congestion collapse", an increase in load that results in a decrease in useful work done.]

Page 14:

Congestion Collapse

• Definition: an increase in network load results in a decrease of useful work done
• Many possible causes
  – Spurious retransmissions of packets still in flight
    • Classical congestion collapse
    • How can this happen with packet conservation? The RTT increases!
    • Solution: better timers and TCP congestion control
  – Undelivered packets
    • Packets consume resources and are then dropped elsewhere in the network
    • Solution: congestion control for ALL traffic

Page 15:

Congestion Control and Avoidance

• A mechanism that:
  – Uses network resources efficiently
  – Preserves fair network resource allocation
  – Prevents or avoids collapse
• Congestion collapse is not just a theory
  – It has been frequently observed in many networks

Page 16:

Congestion Control Approaches

Two broad approaches:

• End-to-end congestion control:
  – No explicit feedback from the network
  – Congestion is inferred from end-system-observed loss and delay
  – Approach taken by TCP
• Network-assisted congestion control:
  – Routers provide feedback to end systems
    • A single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
    • Or an explicit rate the sender should send at
  – Problem: makes routers complicated

Page 17:

Example: TCP Congestion Control

• Very simple mechanisms in the network
  – FIFO scheduling with a shared buffer pool
  – Feedback through packet drops
• TCP interprets packet drops as signs of congestion and slows down
  – This is an assumption: packet drops are not a sign of congestion in all networks
    • E.g., wireless networks
• TCP periodically probes the network to check whether more bandwidth has become available

Page 18:

Inferring From Implicit Feedback

[Figure: the network shown as an opaque cloud ("?") between the end hosts.]

• What does the end host see?
• What can the end host change?

Page 19:

Where Congestion Happens: Links

• Simple resource allocation: FIFO queue & drop-tail
• Access to the bandwidth: first-in, first-out queue
  – Packets are transmitted in the order they arrive
• Access to the buffer space: drop-tail queuing
  – If the queue is full, drop the incoming packet (see the sketch below)
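
A small sketch of such a FIFO, drop-tail queue; the capacity and the packet representation are illustrative assumptions:

```python
from collections import deque

# Minimal sketch of a router output queue with FIFO service and drop-tail admission.
class DropTailQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()

    def enqueue(self, packet) -> bool:
        """Admit the packet unless the buffer is full (drop-tail)."""
        if len(self.queue) >= self.capacity:
            return False          # buffer full: the arriving packet is dropped
        self.queue.append(packet)
        return True

    def dequeue(self):
        """Transmit packets in the order they arrived (FIFO)."""
        return self.queue.popleft() if self.queue else None

q = DropTailQueue(capacity=3)
drops = [p for p in range(5) if not q.enqueue(p)]
print(drops)   # [3, 4]: arrivals beyond the buffer size are dropped
```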

Page 20:

How it Looks to the End Host

• Packet delay
  – The packet experiences high delay
• Packet loss
  – The packet gets dropped along the way
• How does the TCP sender learn this?
  – Delay
    • Via its round-trip time estimate (see the sketch below)
  – Loss
    • Timeout
    • Duplicate acknowledgments
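
A sketch of how a sender can maintain a round-trip-time estimate and derive a retransmission timeout, along the lines of the standard EWMA estimator (RFC 6298); the constants are the conventional ones, but treat this as an illustration rather than a copy of any particular stack:

```python
# Smoothed RTT plus a variance term; the timeout grows with both RTT and jitter.
class RttEstimator:
    def __init__(self):
        self.srtt = None      # smoothed RTT
        self.rttvar = None    # RTT variation

    def update(self, sample: float) -> float:
        if self.srtt is None:
            self.srtt, self.rttvar = sample, sample / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - sample)
            self.srtt = 0.875 * self.srtt + 0.125 * sample
        return self.rto()

    def rto(self) -> float:
        return self.srtt + 4 * self.rttvar

est = RttEstimator()
for rtt in [0.100, 0.110, 0.240, 0.105]:    # seconds; a delay spike inflates the RTO
    print(round(est.update(rtt), 3))
```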

Page 21:

What Can the End Host Do?

• Upon detecting congestion
  – Decrease the sending rate (e.g., divide it in half)
  – The end host does its part to alleviate the congestion
• But what if conditions change?
  – Suppose more bandwidth becomes available
  – It would be a shame to stay at a low sending rate
• Upon not detecting congestion
  – Increase the sending rate, a little at a time
  – And see if the packets are successfully delivered

Page 22:

TCP Congestion Window

• Each TCP sender maintains a congestion window
  – The maximum number of bytes to have in transit
  – I.e., the number of bytes still awaiting acknowledgments
• Adapting the congestion window
  – Decrease upon losing a packet: backing off
  – Increase upon success: optimistically exploring
  – Always struggling to find the right transfer rate
• Both good and bad
  – Pro: avoids explicit feedback from the network
  – Con: under-shooting and over-shooting the rate

Page 23:

Additive Increase, Multiplicative Decrease

• How much to increase and decrease?
  – Increase linearly, decrease multiplicatively
  – A necessary condition for the stability of TCP
  – The consequences of an over-sized window are much worse than those of an under-sized window
    • Over-sized window: packets dropped and retransmitted
    • Under-sized window: somewhat lower throughput
• Multiplicative decrease
  – On loss of a packet, divide the congestion window in half
• Additive increase
  – On success for the last window of data, increase linearly (see the sketch below)
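
A minimal AIMD sketch, counted in MSS-sized segments for readability (real TCP keeps the window in bytes, as the next slide notes); the loss pattern is made up:

```python
# One illustrative adjustment per window (round trip).
def aimd_step(cwnd: float, loss: bool) -> float:
    if loss:
        return max(cwnd / 2.0, 1.0)   # multiplicative decrease: halve, floor at 1 MSS
    return cwnd + 1.0                  # additive increase: one MSS per RTT

cwnd = 10.0
for loss in [False, False, False, True, False, False]:
    cwnd = aimd_step(cwnd, loss)
    print(cwnd)   # 11, 12, 13, 6.5, 7.5, 8.5 -> the familiar sawtooth
```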

Page 24:

Leads to the TCP “Sawtooth”

[Figure: congestion window vs. time. The window grows linearly, is halved at each loss, then grows linearly again: the TCP "sawtooth".]

Page 25:

Practical Details

• Congestion window
  – Represented in bytes, not in packets (why?)
  – Packets have MSS (Maximum Segment Size) bytes
• Increasing the congestion window
  – Increase by MSS on success for the last window of data (see the sketch below)
• Decreasing the congestion window
  – Never drop the congestion window below 1 MSS
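
A sketch of the byte-based bookkeeping these bullets describe; the MSS value is illustrative, and growing by MSS*MSS/cwnd per ACK is a common way to approximate "one MSS per window of data":

```python
MSS = 1460                      # bytes per segment (illustrative)

def on_ack(cwnd_bytes: int) -> int:
    # Adding MSS*MSS/cwnd on each ACK sums to roughly one MSS per window of data.
    return cwnd_bytes + max(MSS * MSS // cwnd_bytes, 1)

def on_loss(cwnd_bytes: int) -> int:
    return max(cwnd_bytes // 2, MSS)   # never drop below 1 MSS

cwnd = 10 * MSS
for _ in range(10):             # roughly one window's worth of ACKs
    cwnd = on_ack(cwnd)
print(round(cwnd / MSS, 2))     # ~10.96: about one MSS of growth per window of ACKs
```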

Page 26:

Receiver Window vs. Congestion Window

• Flow control
  – Keeps a fast sender from overwhelming a slow receiver
• Congestion control
  – Keeps a set of senders from overloading the network
• Different concepts, but similar mechanisms
  – TCP flow control: receiver window
  – TCP congestion control: congestion window
  – TCP window: min{congestion window, receiver window}

Page 27:

How Should a New Flow Start?

Need to start with a small CWND to avoid overloading the network.

[Figure: congestion window vs. time for a new flow using only additive increase; it could take a long time to get started!]

Page 28:

“Slow Start” Phase

• Start with a small congestion window
  – Initially, CWND is 1 Maximum Segment Size (MSS)
  – So the initial sending rate is MSS/RTT
• That could be pretty wasteful
  – Might be much less than the actual bandwidth
  – Linear increase takes a long time to accelerate
• Slow-start phase (really "fast start")
  – The sender starts at a slow rate (hence the name)
  – … but increases the rate exponentially
  – … until the first loss event (see the sketch below)
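
A minimal sketch of slow start in MSS units under assumed parameters; ssthresh, the point at which slow start would hand over to additive increase, is named here purely for illustration:

```python
# The window doubles every round trip until the first loss or until ssthresh.
def slow_start(ssthresh: int, max_rtts: int = 10):
    cwnd = 1                         # start at 1 MSS -> initial rate of MSS/RTT
    history = []
    for _ in range(max_rtts):
        history.append(cwnd)
        if cwnd >= ssthresh:         # leave slow start
            break
        cwnd *= 2                    # one MSS added per ACK ~ doubling per RTT

    return history

print(slow_start(ssthresh=64))       # [1, 2, 4, 8, 16, 32, 64]
```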

Page 29:

Slow Start in Action

Double CWND per round-trip time.

[Figure: source and destination exchanging data (D) and ACK (A) packets; the number of segments in flight grows 1, 2, 4, 8 over successive round trips.]

Page 30:

Slow Start and the TCP Sawtooth

[Figure: congestion window vs. time. Exponential "slow start" until the first loss, then the additive-increase/multiplicative-decrease sawtooth.]

Why is it called slow start? Because TCP originally had no congestion control mechanism; the source would just start by sending a whole receiver window's worth of data.

Page 31:

Two Kinds of Loss in TCP

• Timeout
  – Packet n is lost and detected via a timeout
  – E.g., because all packets in flight were lost
  – After the timeout, blasting away for the entire CWND …
  – … would trigger a very large burst of traffic
  – So it is better to start over with a low CWND
• Triple duplicate ACK
  – Packet n is lost, but packets n+1, n+2, etc. arrive
  – The receiver sends duplicate acknowledgments
  – … and the sender retransmits packet n quickly
  – Do a multiplicative decrease and keep going (see the sketch below)
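
A sketch of the two reactions, in MSS units with assumed names (ssthresh follows common usage); fast-recovery details are omitted, so treat this as a simplified illustration:

```python
# Timeout: remember half of the old window, then restart from a low CWND.
def on_timeout(cwnd: float, ssthresh: float):
    ssthresh = max(cwnd / 2, 2.0)
    cwnd = 1.0                        # start over with a low CWND (slow start)
    return cwnd, ssthresh

# Triple duplicate ACK: multiplicative decrease, then keep going.
def on_triple_dup_ack(cwnd: float, ssthresh: float):
    ssthresh = max(cwnd / 2, 2.0)
    cwnd = ssthresh
    return cwnd, ssthresh

print(on_timeout(cwnd=32.0, ssthresh=64.0))         # (1.0, 16.0)
print(on_triple_dup_ack(cwnd=32.0, ssthresh=64.0))  # (16.0, 16.0)
```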

Page 32:

Repeating Slow Start After Timeout

[Figure: congestion window vs. time around a timeout.]

Slow-start restart: go back to a CWND of 1, but take advantage of knowing the previous value of CWND. Slow start operates until the window reaches half of the previous CWND.

Page 33:

Repeating Slow Start After Idle Period

• Suppose a TCP connection goes idle for a while
  – E.g., a Telnet session where you don't type for an hour
• Eventually, the network conditions change
  – Maybe many more flows are traversing the link
  – E.g., maybe everybody has come back from lunch!
• Dangerous to start transmitting at the old rate
  – The previously idle TCP sender might blast the network
  – … causing excessive congestion and packet loss
• So some TCP implementations repeat slow start
  – Slow-start restart after an idle period (see the sketch below)
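
A minimal sketch of slow-start restart after idle, with assumed names and an assumed restart window; a common rule (e.g., as recommended in RFC 5681) is to shrink the window back toward its initial value when the connection has been idle for longer than one retransmission timeout:

```python
RESTART_WINDOW = 4.0   # MSS; illustrative initial window used after an idle period

def before_sending(cwnd: float, idle_time: float, rto: float) -> float:
    if idle_time > rto:
        return min(cwnd, RESTART_WINDOW)   # don't blast at the stale old rate
    return cwnd

print(before_sending(cwnd=80.0, idle_time=30.0, rto=1.0))   # 4.0: restart small
print(before_sending(cwnd=80.0, idle_time=0.2,  rto=1.0))   # 80.0: keep the old window
```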

Page 34:

TCP Achieves Some Notion of Fairness

• Effective utilization is not the only goal
  – We also want to be fair to the various flows
  – … but what the heck does that mean?
• Simple definition: equal shares of the bandwidth
  – N flows that each get 1/N of the bandwidth?
  – But what if the flows traverse different paths?
  – E.g., in practice bandwidth ends up shared in inverse proportion to RTT: flows with shorter RTTs grow their windows faster and grab a larger share

Page 35:

What About Cheating?

• Some folks are more "fair" than others
  – Running multiple TCP connections in parallel
  – Modifying the TCP implementation in the OS
  – Using the User Datagram Protocol (UDP) instead
• What is the impact?
  – The good guys slow down to make room for you
  – You get an unfair share of the bandwidth
• Possible solutions?
  – Routers detect cheating and drop excess packets?
  – Peer pressure?
  – ???