
CHAPTER 5-TRANSPORT LAYER

By: Er. Anku Jaiswal

THE TRANSPORT SERVICES

The transport layer offers peer-to-peer, end-to-end connections between two processes on remote hosts. It takes data from the upper layer (the application layer), breaks it into smaller units called segments, numbers each byte, and hands the segments over to the lower layer (the network layer) for delivery.

Functions

• This layer is the first one to break the data supplied by the application layer into smaller units called segments. It numbers every byte in the segments and keeps track of them.

• This layer ensures that data is received in the same sequence in which it was sent.

• This layer provides end-to-end delivery of data between hosts which may or may not belong to the same subnet.

• All server processes that intend to communicate over the network are equipped with well-known Transport Service Access Points (TSAPs), also known as port numbers.

End-to-End Communication

A process on one host identifies its peer on a remote host by means of TSAPs, also known as port numbers. TSAPs are well defined, and a process trying to communicate with its peer knows them in advance.

For example, when a DHCP client wants to communicate with a remote DHCP server, it always sends its request to port number 67. When a DNS client wants to communicate with a remote DNS server, it always sends its request to port number 53 (UDP).
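The same idea can be seen at the socket API level. The minimal sketch below (Python) shows a client addressing a server by its well-known TSAP while its own port is chosen by the operating system; the destination address and payload are placeholders, not a real DNS server or a well-formed DNS query.

```python
import socket

# Hedged sketch: the client addresses the server by its well-known TSAP
# (UDP port 53 for DNS); the client's own port is picked by the OS.
# 192.0.2.53 is a placeholder address and the payload is not a real query.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.sendto(b"<dns query bytes>", ("192.0.2.53", 53))   # server TSAP = 53
    print("client TSAP chosen by the OS:", s.getsockname()[1])
```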


The two main Transport layer protocols are:

• Transmission Control Protocol

It provides reliable communication between two hosts.

• User Datagram Protocol

It provides unreliable communication between two hosts.

TRANSPORT PROTOCOLS: TCP AND UDP

TRANSMISSION CONTROL PROTOCOL (TCP)

The Transmission Control Protocol (TCP) is one of the most important protocols of the Internet protocol suite. It is the most widely used protocol for data transmission in communication networks such as the Internet.

Features

• TCP is a reliable protocol. The receiver always sends a positive or negative acknowledgement for each data packet to the sender, so the sender always knows whether the data packet reached the destination or needs to be resent.

• TCP ensures that the data reaches the intended destination in the same order it was sent.

• TCP is connection oriented. It requires that a connection between the two remote endpoints be established before the actual data is sent.

• TCP provides error-checking and recovery mechanisms.

• TCP provides end-to-end communication.

• TCP provides flow control and quality of service.

• TCP operates in client/server point-to-point mode.

• TCP provides a full-duplex service, i.e. each endpoint can act as both sender and receiver.

Header

The TCP header is a minimum of 20 bytes and a maximum of 60 bytes long.


• Source Port (16 bits) - It identifies the source port of the application process on the sending device.

• Destination Port (16 bits) - It identifies the destination port of the application process on the receiving device.

• Sequence Number (32 bits) - The sequence number of the data bytes of a segment in a session.

• Acknowledgement Number (32 bits) - When the ACK flag is set, this number contains the sequence number of the next data byte expected and works as an acknowledgement of the previous data received.

• Data Offset (4 bits) - This field gives the size of the TCP header in 32-bit words and therefore the offset at which the data begins in the current segment.

• Reserved (3 bits) - Reserved for future use; set to zero by default.

• Flags (1 bit each)

o NS - The Nonce Sum bit is used by the Explicit Congestion Notification signaling process.

o CWR - When a host receives a packet with the ECE bit set, it sets Congestion Window Reduced to acknowledge that the ECE was received.

o ECE - It has two meanings:

▪ If the SYN bit is cleared to 0, ECE means that the IP packet has its CE (congestion experienced) bit set.

▪ If the SYN bit is set to 1, ECE means that the device is ECT capable.

o URG - It indicates that the Urgent Pointer field has significant data and should be processed.


o ACK - It indicates that the Acknowledgement Number field is significant. If ACK is cleared to 0, the packet does not contain any acknowledgement.

o PSH - When set, it is a request to the receiving station to push the data (as soon as it arrives) to the receiving application without buffering it.

o RST - The Reset flag has the following uses:

▪ It is used to refuse an incoming connection.

▪ It is used to reject a segment.

▪ It is used to restart a connection.

o SYN - This flag is used to set up a connection between hosts.

o FIN - This flag is used to release a connection, after which no more data is exchanged. Because segments with SYN and FIN flags have sequence numbers, they are processed in the correct order.

• Window Size - This field is used for flow control between the two stations and indicates the amount of buffer (in bytes) the receiver has allocated for a segment, i.e. how much data the receiver is expecting.

• Checksum - This field contains the checksum of the header, the data, and the pseudo header.

• Urgent Pointer - It points to the urgent data byte if the URG flag is set to 1.

• Options - It facilitates additional options which are not covered by the regular header. The Options field is always described in 32-bit words. If this field contains less than a multiple of 32 bits of data, padding is used to reach the next 32-bit boundary.
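To make the field layout above concrete, here is a minimal, hedged sketch in Python that unpacks the fixed 20-byte part of a TCP header into the fields just described; parse_tcp_header is an illustrative helper, not part of any standard library.

```python
import struct

# Sketch only: unpack the fixed 20-byte TCP header described above.
def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) & 0xF        # header length in 32-bit words
    flags = offset_flags & 0x01FF                   # NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "header_len_bytes": data_offset * 4,        # 20..60 bytes
        "flags": {
            "URG": bool(flags & 0x020),
            "ACK": bool(flags & 0x010),
            "PSH": bool(flags & 0x008),
            "RST": bool(flags & 0x004),
            "SYN": bool(flags & 0x002),
            "FIN": bool(flags & 0x001),
        },
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }
```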

USER DATAGRAM PROTOCOL (UDP)

The User Datagram Protocol (UDP) is the simplest transport layer communication protocol in the TCP/IP protocol suite. It involves a minimal amount of communication mechanism. UDP is said to be an unreliable transport protocol, built on IP services that provide only a best-effort delivery mechanism.

In UDP, the receiver does not generate an acknowledgement for a received packet, and in turn the sender does not wait for an acknowledgement of a sent packet. This shortcoming makes the protocol unreliable, but also lighter to process.


Requirement of UDP

A question may arise: why do we need an unreliable protocol to transport data? We deploy UDP where acknowledgement packets would share a significant amount of bandwidth with the actual data. For example, in video streaming, thousands of packets are forwarded towards the users. Acknowledging all the packets is troublesome and may waste a significant amount of bandwidth. The best-effort delivery mechanism of the underlying IP protocol tries its best to deliver the packets, and even if some packets in a video stream are lost, the impact is not calamitous and can easily be ignored. The loss of a few packets in video and voice traffic sometimes goes unnoticed.

Features

• UDP is used when acknowledgement of data does not hold any significance.

• UDP is a good protocol for data flowing in one direction.

• UDP is simple and suitable for query-based communication.

• UDP is not connection oriented.

• UDP does not provide a congestion control mechanism.

• UDP does not guarantee ordered delivery of data.

• UDP is stateless.

• UDP is a suitable protocol for streaming applications such as VoIP and multimedia streaming.

UDP Header

UDP header is as simple as its function.

UDP header contains four main parameters:

• Source Port - This 16-bit field identifies the source port of the packet.

• Destination Port - This 16-bit field identifies the application-level service on the destination machine.


• Length - The Length field specifies the entire length of the UDP packet (including the header). It is a 16-bit field, and its minimum value is 8 bytes, i.e. the size of the UDP header itself.

• Checksum - This field stores the checksum value generated by the sender before sending. In IPv4 this field is optional; when it is not used, it is set to all zeros.
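As a sketch of the four fields above, the following Python snippet builds an 8-byte UDP header in front of a payload; the checksum is left as zero, which IPv4 treats as "checksum not used". The helper name build_udp_datagram is illustrative only.

```python
import struct

# Sketch only: source port, destination port, length, checksum (8 bytes total).
def build_udp_datagram(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)          # header (8 bytes) + data
    checksum = 0                       # optional in IPv4: all zeros means unused
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + payload

print(build_udp_datagram(52000, 53, b"query").hex())
```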

UDP application

Here are a few applications where UDP is used to transmit data:

• Domain Name Services

• Simple Network Management Protocol

• Trivial File Transfer Protocol

• Routing Information Protocol

• Kerberos

PORT AND SOCKET

• At the transport layer, we need a transport layer address, called a port number, to choose

among multiple processes running on the destination host.

• The destination port number is needed for delivery; the source port number is needed for

the reply.

• In the Internet model, the port numbers are 16-bit integers between 0 and 65,535.

• The client program defines itself with a port number, chosen randomly by the transport

layer software running on the client host. This is the ephemeral port number.

• The server process must also define itself with a port number.

• This port number, however, cannot be chosen randomly.

• The Internet has decided to use universal port numbers for servers; these are called well-

known port numbers.

• For example, while a Daytime client process can use an ephemeral (temporary) port number such as 52,000 to identify itself, the Daytime server process must use the well-known (permanent) port number 13, as illustrated in the sketch below.
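A hedged sketch of the Daytime example above: the client connects to the well-known server port 13, while its own ephemeral port is assigned by the operating system. The host name below is only a placeholder for some Daytime server.

```python
import socket

# Illustrative sketch only: "daytime.example.com" is a placeholder name.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("daytime.example.com", 13))
    print("client (ephemeral) port:", s.getsockname()[1])   # e.g. 52000, picked by the OS
    print("server (well-known) port:", s.getpeername()[1])  # always 13
    print(s.recv(1024).decode(errors="replace"))            # the date and time string
```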

IANA Ranges

The IANA (Internet Assigned Numbers Authority) has divided the port numbers into three ranges: well known, registered, and dynamic (or private).

• Well-known ports. The ports ranging from 0 to 1023 are assigned and controlled by IANA. These are the well-known ports.

• Registered ports. The ports ranging from 1024 to 49,151 are not assigned or controlled by IANA. They can only be registered with IANA to prevent duplication.

• Dynamic ports. The ports ranging from 49,152 to 65,535 are neither controlled nor

registered. They can be used by any process. These are the ephemeral ports.

Socket Address

• Process-to-process delivery needs two identifiers, IP address and the port number, at each

end to make a connection.

• The combination of an IP address and a port number is called a socket address.

• The client socket address defines the client process uniquely, just as the server socket address defines the server process uniquely.

• A transport layer protocol needs a pair of socket addresses: the client socket address and

the server socket address.
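In code, a socket address is simply an (IP address, port) pair, and a connection is identified by the pair of socket addresses; the values below are made up purely for illustration.

```python
# Illustration only: all four values below are invented.
client_socket_address = ("192.168.1.10", 52000)   # ephemeral client port
server_socket_address = ("203.0.113.5", 13)       # well-known Daytime port

# A transport layer connection is identified by the pair of socket addresses.
connection = (client_socket_address, server_socket_address)
print(connection)
```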

MULTIPLEXING AND DEMULTIPLEXING

Multiplexing

• At the sender site, there may be several processes that need to send packets.

• However, there is only one transport layer protocol at any time. This is a many-to-one

relationship and requires multiplexing.

• The protocol accepts messages from different processes, differentiated by their assigned

port numbers.

• After adding the header, the transport layer passes the packet to the network layer.

Demultiplexing

• At the receiver site, the relationship is one-to-many and requires demultiplexing.

• The transport layer receives datagrams from the network layer.

• After error checking and dropping of the header, the transport layer delivers each

message to the appropriate process based on the port number.
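A minimal sketch of demultiplexing, assuming segments have already been error-checked and stripped of their headers: each message is handed to the process registered for its destination port. The dictionary-based dispatch below is purely illustrative.

```python
from collections import defaultdict

# Illustrative only: each port maps to the receive queue of the process
# that registered it.
delivery_queues = defaultdict(list)

def register_process(port: int) -> list:
    return delivery_queues[port]          # the process's receive queue

def demultiplex(dst_port: int, data: bytes) -> None:
    # Called after the transport layer has checked and removed the header.
    delivery_queues[dst_port].append(data)

dns_queue = register_process(53)
demultiplex(53, b"dns reply")
print(dns_queue)                          # [b'dns reply']
```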


• Internet Protocol (IP) provides a packet delivery service across an internet.

• However, IP cannot distinguish between multiple processes (applications) running on the same computer.

• Fields in the IP datagram header identify only computers.

• A protocol that allows an application to serve as an end point of communication is known as a transport protocol or an end-to-end protocol.

• The TCP/IP protocol suite provides two transport protocols:

o the User Datagram Protocol (UDP)

o the Transmission Control Protocol (TCP)

CONNECTIONLESS VERSUS CONNECTION-ORIENTED SERVICE

Connectionless Service

• In a connectionless service, the packets are sent from one party to another with no need

for connection establishment or connection release.

• The packets are not numbered; they may be delayed or lost or may arrive out of

sequence.

• There is no acknowledgment either.

• UDP is connectionless (see the sketch below).
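At the API level this looks like the hedged sketch below: a UDP sender simply sends datagrams to an address, with no connection establishment, no release, and no acknowledgements. The destination address is a placeholder.

```python
import socket

# Sketch only: 198.51.100.7 is a placeholder address.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    for i in range(3):
        s.sendto(f"packet {i}".encode(), ("198.51.100.7", 9999))
    # No acknowledgement is expected; the packets may be lost, delayed,
    # or delivered out of sequence.
```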

Connection Oriented Service

• In a connection-oriented service, a connection is first established between the sender and

the receiver.

• Data are transferred. At the end, the connection is released.

• A prior connection setup is needed in connection-oriented service but not in

connectionless service.

• Connection-oriented service guarantees reliability; connectionless service does not.


• Congestion is less likely in a connection-oriented service than in a connectionless one.

• Retransmission of lost data is possible in connection-oriented service but not in connectionless service.

• Connection-oriented service is suitable for long-lived connections, while connectionless service is suitable for bursty traffic.

• Packets reach the destination following the same route in connection-oriented service, whereas in connectionless service the packets can take different paths.

• Resource allocation is needed in connection-oriented service but not in connectionless service.

• Transfer is slower in connection-oriented service because of connection setup time and acknowledgements, and faster in connectionless service because both are absent.

CONNECTION ESTABLISHMENT, CONNECTION RELEASE

Connection Management

TCP communication works in the server/client model. The client initiates the connection and the server either accepts or rejects it. Three-way handshaking is used for connection management.

Establishment

The client initiates the connection and sends a segment with its sequence number. The server acknowledges it with its own sequence number and an ACK of the client's segment, which is one more than the client's sequence number. The client, after receiving the ACK of its segment, sends an acknowledgement of the server's response.

Release

Either the server or the client can send a TCP segment with the FIN flag set to 1. When the receiving end responds by acknowledging the FIN, that direction of the TCP communication is closed and the connection is released.


TCP Connection: Connection establishment

• TCP transmits data in full-duplex mode.

• When two TCPs in two machines are connected, they are able to send segments to each

other simultaneously.

• This implies that each party must initialize communication and get approval from the other party before any data are transferred.

THREE-WAY HANDSHAKING

The connection establishment in TCP is called three-way handshaking. In our example,

an application program, called the client, wants to make a connection with another

application program, called the server, using TCP as the transport layer protocol.

The three steps in this phase are as follows.

1. The client sends the first segment, a SYN segment, in which only the SYN flag is

set. This segment is for synchronization of sequence numbers. It consumes one

sequence number. When the data transfer starts, the sequence number is

incremented by 1. We can say that the SYN segment carries no real data, but we

can think of it as containing 1 imaginary byte.

2. The server sends the second segment, a SYN+ACK segment, with two flag bits set: SYN and ACK. This segment has a dual purpose. It is a SYN segment for


communication in the other direction and serves as the acknowledgment for the

SYN segment. It consumes one sequence number.

3. The client sends the third segment. This is just an ACK segment. It acknowledges

the receipt of the second segment with the ACK flag and acknowledgment

number field. Note that the sequence number in this segment is the same as the

one in the SYN segment; the ACK segment does not consume any sequence

numbers.
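The sequence-number bookkeeping of these three steps can be traced with the toy sketch below; the initial sequence numbers 8000 and 15000 are assumed only for the example (real TCP chooses them randomly).

```python
# Toy trace of the three-way handshake described above.
client_isn, server_isn = 8000, 15000          # assumed initial sequence numbers

# 1. Client -> Server: SYN, seq = client ISN (consumes one sequence number)
syn = {"flags": {"SYN"}, "seq": client_isn}

# 2. Server -> Client: SYN+ACK, seq = server ISN, ack = client ISN + 1
syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn, "ack": syn["seq"] + 1}

# 3. Client -> Server: pure ACK; the sequence number is the same as in the
#    SYN segment, and no sequence number is consumed.
ack = {"flags": {"ACK"}, "seq": client_isn, "ack": syn_ack["seq"] + 1}

for name, seg in (("SYN", syn), ("SYN+ACK", syn_ack), ("ACK", ack)):
    print(name, seg)
```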

Data Transfer

After the connection is established, bidirectional data transfer can take place. The client and server

can both send data and acknowledgments. We will study the rules of acknowledgment later in

the chapter; for the moment, it is enough to know that data traveling in the same direction as an

acknowledgment are carried on the same segment. The acknowledgment is piggybacked with the

data. Figure 23.19 shows an example. In this example, after the connection is established, the client

sends 2000 bytes of data in two segments. The server then sends 2000 bytes in one segment. The

client sends one more segment. The first three segments carry both data and acknowledgment,

but the last segment carries only an acknowledgment because there are no more data to be sent.

Note the values of the sequence and acknowledgment numbers. The data segments sent by the

client have the PSH (push) flag set so that the server TCP knows to deliver data to the server

process as soon as they are received. The segment from the server, on the other hand, does not

set the push flag. Most TCP implementations have the option to set or not set this flag.
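The exchange described above can be tabulated as in the short sketch below; the initial byte numbers (8001 for the client, 15001 for the server) are assumed only for illustration.

```python
# Toy trace of the data-transfer example above (piggybacked acknowledgments).
client_seq, server_seq = 8001, 15001      # assumed numbers of the first data bytes

segments = [
    # sender,  first seq,            bytes, ack (piggybacked)
    ("client", client_seq,           1000, server_seq),          # bytes 8001-9000
    ("client", client_seq + 1000,    1000, server_seq),          # bytes 9001-10000
    ("server", server_seq,           2000, client_seq + 2000),   # acks all client data
    ("client", client_seq + 2000,       0, server_seq + 2000),   # pure ACK, no data
]
for sender, seq, nbytes, ack in segments:
    print(f"{sender}: seq={seq} len={nbytes} ack={ack}")
```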


Connection Termination

Three-Way Handshaking

Most implementations today allow three-way handshaking for connection termination.

1. In a normal situation, the client TCP, after receiving a close command from the client process,

sends the first segment, a FIN segment in which the FIN flag is set. Note that a FIN segment can

include the last chunk of data sent by the client, or it can be just a control segment. If it is only a

control segment, it consumes only one sequence number.

2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends

the second segment, a FIN +ACK segment, to confirm the receipt of the FIN segment from the

client and at the same time to announce the closing of the connection in the other direction. This

segment can also contain the last chunk of data from the server. If it does not carry data, it

consumes only one sequence number.

3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN

segment from the TCP server. This segment contains the acknowledgment number, which is 1

plus the sequence number received in the FIN segment from the server. This segment cannot

carry data and consumes no sequence numbers.
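At the socket API level, the FIN exchange is driven by shutdown() and observed through recv(); the sketch below is illustrative only, and the server address is a placeholder.

```python
import socket

# Sketch only: server.example.com:7 is a placeholder for some TCP service.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("server.example.com", 7))
    s.sendall(b"last chunk of data")
    s.shutdown(socket.SHUT_WR)      # our side sends FIN; we can still receive
    while True:
        data = s.recv(1024)
        if not data:                # an empty read means the peer's FIN arrived
            break
```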

FLOW CONTROL AND BUFFERING

• TCP uses a sliding window, to handle flow control.

• The sliding window protocol used by TCP, however, is something between the Go-Back-

N and Selective Repeat sliding window.

• [Figure: the TCP sliding window]


• A sliding window is used to make transmission more efficient as well as to control the flow of data so that the destination does not become overwhelmed with data.

• Some points about TCP sliding windows:

o The size of the window is the lesser of rwnd and cwnd (see the sketch after this list).

o The source does not have to send a full window's worth of data.

o The window can be opened or closed by the receiver, but should not be shrunk.

o The destination can send an acknowledgment at any time as long as it does not result in a shrinking window.

o The receiver can temporarily shut down the window; the sender, however, can always send a segment of 1 byte after the window is shut down.
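The first point in the list above can be written directly as the small sketch below; usable_window is an illustrative helper, not a standard API.

```python
# Illustrative helper: the sender's usable window is the smaller of the
# receiver-advertised window (rwnd) and the congestion window (cwnd),
# minus what is already in flight.
def usable_window(rwnd: int, cwnd: int, bytes_in_flight: int) -> int:
    window = min(rwnd, cwnd)
    return max(0, window - bytes_in_flight)

print(usable_window(rwnd=16384, cwnd=8000, bytes_in_flight=3000))   # 5000
```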

Stream Delivery Service: Buffer

• TCP, on the other hand, allows the sending process to deliver data as a stream of bytes

and allows the receiving process to obtain data as a stream of bytes.

• TCP creates an environment in which the two processes seem to be connected by an imaginary "tube" that carries their data across the Internet.


Sending and Receiving Buffers

• Because the sending and the receiving processes may not write or read data at the same

speed, TCP needs buffers for storage.

• There are two buffers, the sending buffer and the receiving buffer, one for each direction (see the sketch below).
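A toy sketch of the stream-and-buffer idea, assuming a made-up 4-byte "segment" size for readability: the sending process writes bytes at its own pace, and TCP drains the send buffer into segments that end up in the receive buffer.

```python
from collections import deque

# Toy model only: deques stand in for the send and receive buffers.
send_buffer = deque()
receive_buffer = deque()

def app_write(data: bytes) -> None:            # the sending process
    send_buffer.extend(data)

def tcp_transmit(mss: int = 4) -> bytes:       # TCP takes up to "mss" bytes
    n = min(mss, len(send_buffer))
    segment = bytes(send_buffer.popleft() for _ in range(n))
    receive_buffer.extend(segment)             # the "network" delivers the segment
    return segment

app_write(b"hello world")
while send_buffer:
    print(tcp_transmit())
print(bytes(receive_buffer))                   # b'hello world'
```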

CONGESTION

An important issue in a packet-switched network is congestion. Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle). Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity.

CONGESTION CONTROL TECHNIQUES

Congestion control refers to techniques and mechanisms that can either prevent congestion,

before it happens, or remove congestion, after it has happened. In general, we can divide

congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).


Open-Loop Congestion Control

In open-loop congestion control, policies are applied to prevent congestion before it happens. In

these mechanisms, congestion control is handled by either the source or the destination.

Retransmission Policy

Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or

corrupted, the packet needs to be retransmitted. Retransmission in general may increase

congestion in the network. However, a good retransmission policy can prevent congestion. The

retransmission policy and the retransmission timers must be designed to optimize efficiency and

at the same time prevent congestion. For example, the retransmission policy used by TCP

(explained later) is designed to prevent or alleviate congestion.

Window Policy

The type of window at the sender may also affect congestion. The Selective Repeat window is

better than the Go-Back-N window for congestion control. In the Go-Back-N window, when the

timer for a packet times out, several packets may be resent, although some may have arrived safe

and sound at the receiver. This duplication may make the congestion worse. The Selective

Repeat window, on the other hand, tries to send the specific packets that have been lost or

corrupted.

Acknowledgment Policy

The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver

does not acknowledge every packet it receives, it may slow down the sender and help prevent

congestion. Several approaches are used in this case. A receiver may send an acknowledgment


only if it has a packet to be sent or a special timer expires. A receiver may decide to

acknowledge only N packets at a time. We need to know that the acknowledgments are also part

of the load in a network. Sending fewer acknowledgments means imposing less load on the

network.

Discarding Policy

A good discarding policy by the routers may prevent congestion and at the same time may not

harm the integrity of the transmission. For example, in audio transmission, if the policy is to

discard less sensitive packets when congestion is likely to happen, the quality of sound is still

preserved and congestion is prevented or alleviated.

Admission Policy

An admission policy, which is a quality-of-service mechanism, can also prevent congestion in

virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before

admitting it to the network. A router can deny establishing a virtual circuit connection if there is

congestion in the network or if there is a possibility of future congestion.

Closed-Loop Congestion Control

Closed-loop congestion control mechanisms try to alleviate congestion after it happens.

Several mechanisms have been used by different protocols.

Backpressure

The technique of backpressure refers to a congestion control mechanism in which a congested

node stops receiving data from the immediate upstream node or nodes. This may cause the

upstream node or nodes to become congested, and they, in turn, reject data from their own upstream node or nodes, and so on. Backpressure is a node-to-node congestion control that starts with a

node and propagates, in the opposite direction of data flow, to the source. The backpressure

technique can be applied only to virtual circuit networks, in which each node knows the

upstream node from which a flow of data is coming.

Choke Packet

A choke packet is a packet sent by a node to the source to inform it of congestion. Note the

difference between the backpressure and choke packet methods. In backpressure, the warning is

from one node to its upstream node, although the warning may eventually reach the source

station. In the choke packet method, the warning is from the router, which has encountered


congestion, to the source station directly. The intermediate nodes through which the packet has

traveled are not warned. We have seen an example of this type of control in ICMP.

Implicit Signaling

In implicit signaling, there is no communication between the congested node or nodes and the

source. The source guesses that there is congestion somewhere in the network from other

symptoms. For example, when a source sends several packets and there is no acknowledgment

for a while, one assumption is that the network is congested. The delay in receiving an

acknowledgment is interpreted as congestion in the network; the source should slow down. We

will see this type of signaling when we discuss TCP congestion control later in the chapter.

Explicit Signaling

The node that experiences congestion can explicitly send a signal to the source or destination.

The explicit signaling method, however, is different from the choke packet method. In the choke

packet method, a separate packet is used for this purpose; in the explicit signaling method, the

signal is included in the packets that carry data. Explicit signaling, as we will see in Frame Relay

congestion control, can occur in either the forward or the backward direction.

Backward Signaling

A bit can be set in a packet moving in the direction opposite to the congestion. This bit can warn the source that there is congestion and that it needs to slow down to avoid the discarding of packets.

Forward Signaling

A bit can be set in a packet moving in the direction of the congestion. This bit can warn the destination that there is congestion. The receiver in this case can use policies, such as slowing down the acknowledgments, to alleviate the congestion.

TRAFFIC SHAPING (Congestion Control Algorithms)

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the

network. Two algorithms can shape traffic: leaky bucket and token bucket.

Leaky Bucket

If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as

long as there is water in the bucket. The rate at which the water leaks does not depend on the rate

at which the water is input to the bucket unless the bucket is empty. The input rate can vary, but

the output rate remains constant. Similarly, in networking, a technique called leaky bucket can

smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an average rate.


In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host. The

use of the leaky bucket shapes the input traffic to make it conform to this commitment. In Figure

24.19 the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data.

The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of

data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by

sending out data at a rate of 3 Mbps during the same 10 s. Without the leaky bucket, the

beginning burst may have hurt the network by consuming more bandwidth than is set aside for

this host. We can also see that the leaky bucket may prevent congestion. As an analogy, consider

the freeway during rush hour (bursty traffic). If, instead, commuters could stagger their working

hours, congestion on our freeways could be avoided.

A simple leaky bucket implementation is shown in Figure 24.20. A FIFO queue holds the

packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process

removes a fixed number of packets from the queue at each tick of the clock. If the traffic consists

of variable-length packets, the fixed output rate must be based on the number of bytes or bits.


The following is an algorithm for variable-length packets:

1. Initialize a counter to n at the tick of the clock.

2. If n is greater than the size of the packet, send the packet and decrement the counter by the

packet size.

3. Repeat this step until n is smaller than the packet size.

4. Reset the counter and go to step 1.
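A direct, hedged translation of this algorithm into Python; the helper name leaky_bucket_tick and the packet sizes are illustrative.

```python
from collections import deque

# One clock tick of the variable-length leaky bucket described above.
def leaky_bucket_tick(queue: deque, n: int) -> list:
    sent = []
    counter = n                                   # step 1: counter starts at n
    while queue and counter > len(queue[0]):      # step 2: send while the packet fits
        packet = queue.popleft()
        counter -= len(packet)                    # decrement by the packet size
        sent.append(packet)
    return sent                                   # steps 3-4: stop; reset on the next tick

q = deque([b"x" * 400, b"y" * 300, b"z" * 500])
print([len(p) for p in leaky_bucket_tick(q, n=1000)])   # [400, 300]; 500 must wait
```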

Token Bucket

The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is not

sending for a while, its bucket becomes empty. Now if the host has bursty data, the leaky bucket

allows only an average rate. The time when the host was idle is not taken into account. On the

other hand, the token bucket algorithm allows idle hosts to accumulate credit for the future in the

form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The system

removes one token for

every cell (or byte) of data sent. For example, if n is 100 and the host is idle for 100 ticks, the

bucket collects 10,000 tokens. Now the host can consume all these tokens in one tick with

10,000 cells, or the host takes 1000 ticks with 10 cells per tick. In other words, the host can send

bursty data as long as the bucket is not empty. Figure 24.21 shows the idea. The token bucket

can easily be implemented with a counter. The counter is initialized to zero. Each time a token is

added, the counter is incremented by 1. Each time a unit of data is sent, the counter is

decremented by 1. When the counter is zero, the host

cannot send data.
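The counter-based token bucket just described can be sketched as below; the class name and the tick granularity are illustrative.

```python
# Illustrative sketch of the token-bucket counter described above.
class TokenBucket:
    def __init__(self, tokens_per_tick: int):
        self.tokens = 0
        self.rate = tokens_per_tick

    def tick(self) -> None:            # each clock tick adds n tokens
        self.tokens += self.rate

    def send(self, units: int) -> int: # send as many cells/bytes as tokens allow
        sent = min(units, self.tokens)
        self.tokens -= sent
        return sent

tb = TokenBucket(tokens_per_tick=100)
for _ in range(100):                   # host idle for 100 ticks
    tb.tick()
print(tb.tokens)                       # 10000 tokens accumulated
print(tb.send(12000))                  # only 10000 units can be sent now
```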
