
Page 1: BCS Diploma / Network

Radiant Info School

NETWORK CONGESTION

In data networking and queuing theory, network congestion occurs when a link or node carries so much data that its quality of service deteriorates. Typical effects include queueing delay, packet loss and the blocking of new connections. A consequence of the latter two is that incremental increases in offered load lead either to only a small increase in network throughput, or to an actual reduction in network throughput.

CONGESTION CONTROL

Congestion control concerns controlling traffic entry into a telecommunications network so as to avoid congestive collapse. It attempts to avoid oversubscription of the processing or link capacity of the intermediate nodes and networks, and takes resource-reducing steps such as lowering the rate at which packets are sent. It should not be confused with flow control, which prevents the sender from overwhelming the receiver.

If congestion occurs within a network then data packets will be delayed and the loss rate will increase as packets are discarded. From an end-to-end performance point of view, the result is an increase in delay and a reduction in throughput.

SLOW START ALGORITHM

Slow-start is part of the congestion control strategy used by TCP. It is used in conjunction with other algorithms to avoid sending more data than the network is capable of transmitting, that is, to avoid causing network congestion.

End stations maintain a parameter called a congestion window. Initially the congestion window is set to a low value, and data is sent up to the congestion window size. If these bytes are acknowledged within a given time then the congestion window is doubled and twice the number of bytes are transmitted. This process repeats until the advertised window size is reached; this phase is known as exponential growth.

In addition to the congestion window, an end station also maintains a threshold variable, initially set to the advertised window size. If there is congestion within the network then transmitted bytes will not be acknowledged within the specified time. When this happens, the transmitter sets its threshold to half of the current congestion window value and resets its congestion window to the initial value. The exponential process then begins again: if each block of transmitted bytes is acknowledged within the specified time, the congestion window is doubled, and so on. This continues until the threshold value is reached; thereafter the congestion window is increased by the initial value only. This second phase is known as the linear phase.

In this way, when congestion is detected, the volume of data being transmitted is immediately reduced. It builds up again, but in a more controlled manner: with large increments initially, until the threshold value is reached, and thereafter at a much slower rate. Should any block of bytes not be acknowledged within the specified time, the process is repeated, with the threshold being set to half of the current congestion window size and the congestion window being reset to the initial value.
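The window behaviour described above can be sketched as a small simulation. The segment units, initial window and threshold values below are illustrative assumptions for the sketch, not parameters of any real TCP implementation.

```python
# Illustrative slow-start simulation: exponential doubling of the
# congestion window up to a threshold, then linear growth; on timeout,
# the threshold is halved and the window resets to its initial value.

def next_cwnd(cwnd, threshold, initial, acked):
    """Return (new_cwnd, new_threshold) after one transmission round."""
    if not acked:                       # timeout: congestion detected
        return initial, max(cwnd // 2, initial)
    if cwnd < threshold:                # exponential growth phase
        return min(cwnd * 2, threshold), threshold
    return cwnd + initial, threshold    # linear growth phase

cwnd, threshold, initial = 1, 16, 1
history = []
for rnd in range(8):
    history.append(cwnd)
    cwnd, threshold = next_cwnd(cwnd, threshold, initial, acked=True)
print(history)  # [1, 2, 4, 8, 16, 17, 18, 19]: doubling, then linear
```

A timeout at this point, `next_cwnd(19, 16, 1, acked=False)`, would reset the window to 1 with a new threshold of 9, after which the two-phase growth starts over.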

Network Bandwidth and Latency

Latency is a synonym for delay.

Latency is another element that contributes to network speed: it is an expression of how much time it takes for a packet of data to get from one point to another. The term refers to any of several kinds of delays typically incurred in the processing of network data. A so-called low-latency network connection is one that generally experiences small delay times, while a high-latency connection generally suffers from long delays.

Latency can be measured by sending a packet that is returned to the sender, e.g. a 'ping'; the round-trip time of the packet is considered to be the latency.

Contributors to latency within a data network:

• Propagation. The time it takes for a packet to travel between one place and another.

• Transmission. The medium itself (e.g. optical, wireless) introduces some delay.

• Packet size. The size of the packet introduces delay in a round trip, as a larger packet will take longer to receive and return than a short one.

• Router and other processing. Each node takes time to examine, and possibly change, the header in a packet; an example of such a change is decrementing the hop count or time-to-live field.

• Other computer and storage delays. A packet may be subject to storage and hard-disk access delays. These delays could occur at switches or routers.
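The contributors above can be added up in a rough back-of-the-envelope estimate. The link length, data rate, hop count and per-hop processing time below are illustrative assumptions.

```python
# Rough one-way latency estimate as the sum of the contributors above.
# All parameter values are illustrative assumptions.

SPEED_IN_FIBRE = 2e8          # metres/second, roughly 2/3 the speed of light

def one_way_latency(distance_m, packet_bits, rate_bps, hops, per_hop_s):
    propagation = distance_m / SPEED_IN_FIBRE   # time travelling the medium
    transmission = packet_bits / rate_bps       # time to clock the bits out
    processing = hops * per_hop_s               # router/header handling
    return propagation + transmission + processing

# 1000 km link, 1500-byte packet, 100 Mbit/s, 5 hops at 0.1 ms each:
t = one_way_latency(1_000_000, 1500 * 8, 100e6, 5, 1e-4)
print(round(t * 1000, 3), "ms")  # 5.62 ms, dominated by propagation
```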

Bandwidth in computer networking refers to the data rate supported by a network connection or interface. Bandwidth is most commonly expressed in bits per second (bps). The term comes from the field of electrical engineering, where bandwidth represents the total distance, or range, between the highest and lowest signals on the communication channel (band).

Bandwidth represents the capacity of the connection. The greater the capacity, the more likely it is that greater performance will follow, though overall performance also depends on other factors, such as latency. Bandwidth is also known as throughput.
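One place where bandwidth and latency interact is the bandwidth-delay product: the amount of data that can be "in flight" on a path at once. The figures below are illustrative assumptions.

```python
# Bandwidth-delay product: the amount of unacknowledged data a sender
# must keep in flight to fill the path. Illustrative figures only.

def bandwidth_delay_product_bytes(rate_bps, rtt_s):
    return int(rate_bps * rtt_s / 8)   # bits in flight, converted to bytes

# A 100 Mbit/s path with a 50 ms round-trip time:
bdp = bandwidth_delay_product_bytes(100e6, 0.050)
print(bdp)  # 625000 bytes: a send window smaller than this cannot fill the link
```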


Packet Transmission

The packet forwarding task is concerned with examining individual packets, comparing their header information with the routing tables that already exist, trying to find the best match for the destination address in the header, and then moving the packet onto the outgoing queue for the appropriate interface. The packet is consumed (or discarded) by the router if the destination is the router itself. The packet is also discarded if the destination is not present in the routing tables and no default route exists. This happens for every packet, so an important objective is that forwarding must happen very quickly. The packet forwarding subtask itself takes place in a single router, using information that the router already has. Of course, the router may indeed pass a packet on to a further router if it cannot deliver it directly to the final destination.

The routing table determination task is concerned with creating the routing tables. These tables hold a set of destinations and the interfaces that lead towards them. There is also information to note whether the destination is a network directly connected to the present router or whether it is beyond another intermediate router. This activity happens much less often than packet forwarding; it is not conducted for every packet. Routing table determination will probably only take place when the router has reason to believe that the connectivity of the network has changed.

The routing determination task is really an activity involving multiple routers. A router will receive information from neighbours detailing connectivity changes of which they are aware and, similarly, will send them details of changes of which it is aware.

The server has only one IP address, so it is important to be able to differentiate which IP datagram is intended for which application. UDP/TCP port numbers allow for multiplexing at the transport layer. These port numbers are 16-bit numbers, and each PDU contains both a source port and a destination port number. In this way, port numbers are the equivalent of a source and destination address at the transport layer.

Hence each application is assigned a unique port number within the server – call these P1 and P2. When a PDU is received, its destination port number is checked: those with destination port P1 are directed to application 1, those with destination port P2 to application 2, and so on.
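This demultiplexing can be sketched as a lookup from destination port to application handler. The port numbers and handler behaviour below are hypothetical, chosen only to illustrate the mechanism.

```python
# Sketch of transport-layer demultiplexing: the destination port in each
# received PDU selects which application gets the payload. The ports and
# handlers here are hypothetical.

handlers = {
    8080: lambda data: f"app1 got {data!r}",   # application 1 on port P1
    9090: lambda data: f"app2 got {data!r}",   # application 2 on port P2
}

def demultiplex(dest_port, payload):
    handler = handlers.get(dest_port)
    if handler is None:
        return None                # no application bound: PDU is dropped
    return handler(payload)

print(demultiplex(8080, "hello"))  # app1 got 'hello'
print(demultiplex(9999, "lost"))   # None: no listener on that port
```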

The key fields used to manage reliable data transfer within a TCP PDU header are: sequence number; acknowledgement number; ACK bit. Each octet transmitted within a TCP data stream is uniquely numbered. The sequence number is a 32-bit integer that increments by 1 for each octet within the stream, wrapping to zero when its maximum value is reached.

A positive acknowledgement is indicated by the ACK bit being set; the acknowledgement number then indicates the number of the first unacknowledged octet. In other words, all octets up to and including acknowledgement number - 1 have been received. TCP does not contain a message for negative acknowledgement or retransmission request. If a TCP PDU is lost or received in error then the receiver ignores it and consequently does not issue an acknowledgement.


The transmitter maintains a timer, and if an octet has not been acknowledged within a given time then it is simply retransmitted. It is up to the receiver to ignore any duplicates which result from this process.
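The cumulative-acknowledgement rule above can be sketched as follows: the acknowledgement number is the first octet not yet received in order, and sequence numbers wrap modulo 2^32. The specific sequence values below are illustrative.

```python
# Cumulative acknowledgement sketch: given the octet sequence numbers
# received so far, the ACK number is the first in-order octet still
# missing. Sequence numbers wrap modulo 2**32. Values are illustrative.

SEQ_MOD = 2 ** 32

def ack_number(start_seq, received):
    """First unacknowledged octet, scanning forward from start_seq."""
    seq = start_seq
    while seq in received:
        seq = (seq + 1) % SEQ_MOD      # 32-bit wrap-around
    return seq

# Octets 100..104 arrived, 105 was lost, 106..107 arrived out of order:
got = {100, 101, 102, 103, 104, 106, 107}
print(ack_number(100, got))  # 105: everything up to ACK - 1 = 104 was received
```

Note that octets 106 and 107 cannot be acknowledged yet: the acknowledgement is cumulative, so it stops at the first gap.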

DiffServ

DiffServ operates by applying different levels of QoS to classes of applications rather than to individual instances of an application. It is thus much more scalable than earlier approaches (such as RSVP). Improved QoS for some traffic can only be provided if a lower QoS is provided to other traffic. Traffic leaving a network is marked to show which class it belongs to; this is normally done based on the nature of the application generating the traffic. The normal classifications (per-hop behaviours) are expedited forwarding, assured forwarding, best effort and lower-than-best-effort.
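In IP, the class mark is carried as a DSCP codepoint in the former Type-of-Service byte. The sketch below shows common codepoints for the per-hop behaviours named above and how a codepoint maps onto the ToS byte; it is a minimal illustration, not a full DiffServ implementation.

```python
# Common DSCP codepoints for the per-hop behaviours named above. The
# DSCP occupies the top six bits of the old IPv4 Type-of-Service byte,
# so the value written to the header is dscp << 2.

DSCP = {
    "expedited_forwarding": 46,   # EF
    "assured_forwarding_11": 10,  # AF11 (one of the assured classes)
    "best_effort": 0,             # default PHB
    "lower_than_best_effort": 1,  # LE PHB
}

def tos_byte(phb):
    return DSCP[phb] << 2         # shift the DSCP into the top six bits

print(tos_byte("expedited_forwarding"))  # 184
```

On many systems this byte can then be applied to outgoing traffic with the `IP_TOS` socket option.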

TCP builds a reliable delivery service on top of an unreliable service (IP) by:

i) Setting up a virtual connection between the two TCP endpoints – three-way handshake etc.

ii) Numbering of TCP segments so that the receiving station can know if segments are arriving out of order (and re-order them before passing them to the application) or if segments have been lost.

Real-time applications such as video-conferencing work better using UDP, because it is better for some datagrams to be lost than to lose time in retransmission (which would impair the quality of the transmission more than the loss of some datagrams).
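The reordering described in (ii) can be sketched as buffering segments by number and releasing them to the application in order. The per-segment numbering below is illustrative (one number per segment rather than per octet).

```python
# Receiver-side reordering sketch: segments may arrive out of order, but
# are buffered and delivered to the application in sequence order. One
# number per segment here, for simplicity.

def deliver_in_order(next_expected, buffer, arrivals):
    """Feed arriving (seq, payload) pairs through a reorder buffer and
    return the payloads released to the application, in order."""
    released = []
    for seq, payload in arrivals:
        buffer[seq] = payload
        while next_expected in buffer:        # release any run now in order
            released.append(buffer.pop(next_expected))
            next_expected += 1
    return released, next_expected

released, nxt = deliver_in_order(1, {}, [(2, "b"), (1, "a"), (4, "d"), (3, "c")])
print(released)  # ['a', 'b', 'c', 'd'] despite out-of-order arrival
```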


LOCAL AREA NETWORK

IP Addressing and Subnetting

4 bits – a nibble. 8 bits – a byte or octet. The bit place values in a byte are 128, 64, 32, 16, 8, 4, 2, 1.


IP addresses are normally written in dotted-decimal format, e.g. 192.168.0.100 or 172.16.10.10.

An IP address can be divided into two portions:

• Network address, which is used to identify the network.

• Host address (or node address), which is used to identify the end system on the network.
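The split can be sketched with a subnet mask: a bitwise AND extracts the network portion, and the remaining bits identify the host. The address and mask below are illustrative; Python's standard `ipaddress` module is used only for parsing and printing.

```python
# Splitting an IP address into its network and host portions with a
# subnet mask. The address and mask values here are illustrative.
import ipaddress

addr = int(ipaddress.IPv4Address("192.168.0.100"))
mask = int(ipaddress.IPv4Address("255.255.255.0"))

network = addr & mask              # network portion: the mask bits kept
host = addr & ~mask & 0xFFFFFFFF   # host portion: the remaining bits

print(ipaddress.IPv4Address(network))  # 192.168.0.0
print(host)                            # 100
```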

Five different classes of IP address were designed for efficient routing, by defining different leading-bit patterns for addresses of different classes. A router is able to identify the class of an IP address quickly by reading only the first few bits of the address, e.g. if the address starts with 0, it is a Class A address, and so on. Classful routing protocols also use this first-octet rule to determine the class of an address when assigning subnet masks for their routing operation.
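The leading-bit rule can be sketched directly: examine the top bits of the first octet. The classification below follows the classic classful scheme (0xxxxxxx for A, 10xxxxxx for B, 110xxxxx for C, 1110xxxx for D, 1111xxxx for E).

```python
# Classful address identification from the leading bits of the first
# octet, exactly as a classful router would read them.

def address_class(first_octet):
    if first_octet & 0b10000000 == 0:   # leading bit 0
        return "A"
    if first_octet & 0b01000000 == 0:   # leading bits 10
        return "B"
    if first_octet & 0b00100000 == 0:   # leading bits 110
        return "C"
    if first_octet & 0b00010000 == 0:   # leading bits 1110
        return "D"
    return "E"                          # leading bits 1111

print(address_class(10))   # A: 10 is 00001010 in binary
print(address_class(192))  # C: 192 is 11000000 in binary
```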
