
Data Communication & Networking IV Sem BCA

K. Adisesha, Presidency College COPY: Jan 2009

1

Networks The idea of networking is an old one. A network can be defined as "A collection of two or more devices which are interconnected using common protocols to exchange data."

Networks are large distributed systems designed to send information from one location to another. An end point is a place in a network where data transmission either originates or terminates. A node is a point in the network that data travels through without stopping. Nodes are connected by channels, the paths that data flows along. A channel can be a physical linear object, such as a wire or a fiber-optic cable, or something less tangible, such as a wireless connection at a particular frequency. The cellular concept of space-divided networks was first developed at AT&T in the 1940s and 1950s. AMPS, an analog frequency-division multiplexing network, was first implemented in Chicago in 1983 and was completely saturated with users the next year. The FCC, in response to overwhelming user demand, increased the available cellular bandwidth from 40 MHz to 50 MHz.

Wireless Generations

It is often instructive to break the history of wireless networking into several specific generations.

First Generation (1G) - The 1G wireless generation consisted mainly of analog signals carrying voice and music. These were one-directional broadcast systems such as television broadcast, AM/FM radio, and similar communications.

Second Generation (2G) - 2G introduced concepts such as TDMA and CDMA, allowing bi-directional communications among nodes in large networks. 2G is when some of the first cellular phones became available, although communications were restricted to very low bitrates. The second generation is frequently divided into sub-sets as well. "2.5G" represented a significant increase in throughput capacity as digital communications techniques became more refined. "2.75G" is another common pseudo-generation that saw an additional increase in speed and capacity among digital wireless networks.

Third Generation (3G) - 3G is the current generation and represents the combination of voice traffic with data traffic, and the advent of high-bandwidth mobile devices such as PDAs and smartphones.

Fourth Generation (4G) - The 4G generation, a theoretical future generation, will see the ubiquity of broadband data connections and universal internet access. Many of these networks are being designed around the WiMAX (IEEE 802.16) specification.


Bi-directional Communications

Bi-directional communication means that data flows both to and from an end point. An end point can be both a client and a server.

Point-to-Point Communication

Some channels are point-to-point: they have only a single producer (at one end) and a single consumer (at the far end). Many networks have "full duplex" communication between nodes, meaning there are 2 separate point-to-point channels (one in each direction) between the nodes, on separate wires or allocated to separate frequencies. Some "mesh" networks are built from point-to-point channels. Since wiring every node to every other node is prohibitively expensive, when one node needs to communicate with a distant node, the "intermediate" nodes must pass the information through.

Multiple Access

Multiple access networks are networks where multiple clients, multiple servers, or both attempt to access the network simultaneously. Networks with one server and multiple clients are called "broadcast networks", "multicast networks", or "SIMO networks". "SIMO" stands for "Single Input Multiple Output". Networks with multiple clients and servers are known as "MIMO" or "Multiple Input Multiple Output" networks.

Network Topologies

The shape of a network and the relationship between the nodes in that network is known as the network topology. The network topology determines, in large part, what kinds of functions the network can perform and what the quality of communication between nodes will be.

Common Topologies

'Star topology' - A star topology creates a network by arranging 2 or more host machines around a central hub. A variation of this topology, the 'star ring' topology, is in common use today. The star topology is still regarded as one of the major network topologies of the networking world. A star topology is typically used in a broadcast or SIMO network, where a single information source communicates directly with multiple clients. An example of this is a radio station, where a single antenna transmits data directly to many radios.

'Tree topology' - A tree topology is so named because it resembles a binary tree structure from computer science. The tree has a "root" node, which forms the base of the network. The root node then communicates


with a number of smaller nodes, and those in turn communicate with an even greater number of smaller nodes. An example of a tree topology network is the DNS system: DNS root servers connect to DNS regional servers, which connect to local DNS servers, which then connect to individual networks and computers. For your personal computer to talk to the root DNS server, it needs to send a request through the local DNS server, through the regional DNS server, and then to the root server.

'Ring topology' - A ring topology (more commonly known as a token ring topology) creates a network by arranging 2 or more hosts in a circle. Data is passed between hosts through a 'token.' This token moves rapidly around the ring at all times, in one direction. If a host wants to send data to another host, it attaches that data, along with the address of the intended recipient, to the token as it passes by. Each host scans passing tokens for a destination MAC address that matches its own; when the addresses match, the host takes the data and the message is delivered. A variation of this topology, the 'star ring' topology, is in common use today. The ring topology is still regarded as one of the major network topologies of the networking world.

'Mesh topology' - A mesh topology creates a network by ensuring that every host machine is connected to more than one other host machine on the local area network. This topology's main purpose is fault tolerance - as opposed to a bus topology, where the entire LAN will go down if one host fails. In a mesh topology, as long as 2 machines with a working connection are still functioning, a LAN will still exist. The mesh topology is still regarded as one of the major network topologies of the networking world.

'Line topology' - This rare topology works by connecting every host to the host located to the right of it.
Most networking professionals do not even regard this as an actual topology: it is very expensive (due to its cabling requirements), and it is much more practical to connect the hosts on either end to form a ring topology, which is cheaper and more efficient.

'Tree topology' - A tree topology, similar to a line topology in that it is extremely rare and generally not regarded as one of the main network topologies, forms a network by arranging hosts in a hierarchical fashion. A host that branches off from the main tree is called a 'leaf.' In this respect the topology becomes very similar to a partial mesh topology: if a 'leaf' fails, its connection is isolated and the rest of the LAN can continue onwards.

'Bus topology' - A bus topology creates a network by connecting 2 or more hosts to a length of coaxial backbone cabling. In this topology, a terminator must be placed on each end of the backbone coaxial cabling. In Mike Meyers' Network+ textbook, a network is commonly compared to a series of pipes that water travels through: think of the data as water; the terminator must be placed to prevent the water from flowing out of the network. The bus topology is still regarded as one of the major network topologies of the networking world.

'Hybrid topology' - A hybrid topology, which is what most networks implement today, uses a combination of multiple basic network topologies, usually by functioning as one topology logically while appearing as another physically. The most common hybrid topologies are Star Bus and Star Ring.

Network Size Designations

Personal Area Network (PAN)

Extremely small networks, often referred to as "piconets", that encompass the area around a single person. These networks, such as Bluetooth, have a range of only 1-5 meters and tend to have very low power requirements, but also very low data rates.

Local Area Network (LAN)

A LAN can encompass a building such as a house or an office, or a single floor in a multi-level building. Common LANs are IEEE 802.11x networks, such as 802.11a, 802.11g, and 802.11n.

Metropolitan Area Network (MAN)

These networks are designed to cover large municipal areas. Protocols such as WiMAX (802.16) and cellular 3G networks are MAN technologies.


Wide Area Network (WAN)

Wide-area networks are very similar to MANs, and the two terms are often used interchangeably. WiMAX is also considered a WAN protocol. Television and radio broadcasts are frequently also considered MAN and WAN systems.

Regional Area Network (RAN)

Large regional area networks are used to communicate with nodes over very large areas. Examples of RANs include satellite broadcast media and IEEE 802.22.

Sensor Area Networks

These are low-data-rate networks primarily used for embedded computer systems and wireless sensor systems. Protocols such as Zigbee (IEEE 802.15.4) and RFID fall into this category.

Network Architecture

Network Types

Analog Networks
• Circuit switching networks
• Cable television network
• Radio communications

Digital Networks
• Internet
• Ethernet
• Wireless Internet

Hybrid Networks
• Analog and digital TV
• Analog and digital telephony
• Analog and digital radio


Protocols

Protocols are the rules by which computers communicate. Generally, a "network protocol" defines how communications should begin and end properly, and the sequence of events that should occur during data transmission. At the transmitting computer, the protocol is responsible for:

• Breaking the data down into packets
• Adding the address of the intended receiving computer
• Preparing the data for transmission through the NIC and data-transmission media

At the receiving computer, the protocol is responsible for:

• Collecting the packets off the data-transmission media through the NIC
• Stripping off transmitting information from the packets
• Copying only the data portion of the packet to a memory buffer
• Reassembling the data portions of the packets in the correct order
• Checking the data for errors
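The transmit- and receive-side responsibilities above can be sketched in a few lines (a simplified illustration; the 4-byte header layout is invented, not any real protocol's):

```python
import struct

def packetize(data: bytes, dest: int, size: int = 4):
    """Transmit side: split data into packets, each with a destination and sequence number."""
    return [struct.pack("!HH", dest, seq) + data[i:i + size]
            for seq, i in enumerate(range(0, len(data), size))]

def reassemble(packets):
    """Receive side: strip headers, reorder by sequence number, rebuild the data."""
    parts = {}
    for p in packets:
        dest, seq = struct.unpack("!HH", p[:4])
        parts[seq] = p[4:]                      # keep only the data portion
    return b"".join(parts[s] for s in sorted(parts))

pkts = packetize(b"hello world", dest=42)
pkts.reverse()                                  # simulate out-of-order arrival
print(reassemble(pkts))                         # b'hello world'
```

Even reversed in transit, the sequence numbers let the receiver restore the original order.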

Protocol Architecture

• The task of communication is broken up into modules
• For example, file transfer could use three modules:

—File transfer application —Communication service module —Network access module

Standardized Protocol Architectures

• Required for devices to communicate
• Vendors have more marketable products
• Customers can insist on standards-based equipment
• Two standards:

—OSI Reference Model
•Never lived up to early promises

—TCP/IP protocol suite
•Most widely used

• Also: IBM Systems Network Architecture (SNA), FTP

The OSI Reference Model (Open Systems Interconnection)

Developed by the ISO (International Organization for Standardization) in the late 1970s as a standard architecture for the development of computer networks. It provides a structured and consistent approach for describing, understanding, and implementing networks. The OSI Model:

• Provides general design guidelines for data-communications systems
• Provides a standard way to describe how portions (layers) of data-communications systems interact
• Divides communication problems into standard layers, facilitating the development of network products and encouraging "mix and match" interchangeability of network components
• Promotes the development of a global internetwork in which disparate systems can freely share network data and resources
• Is a tool for learning how networks function


OSI Reference Model

The OSI model allows different developers to make products and software that interface with other products, without having to worry about how the layers below are implemented. Each layer has a specified interface with the layers above and below it, so everybody can work on different areas without worrying about compatibility.
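This layered hand-off can be pictured as nested headers: each layer on the way down wraps the data it receives from the layer above, and the receiver unwraps them in reverse order. A minimal sketch (the bracketed header strings are invented placeholders, not real protocol headers):

```python
# Schematic layered encapsulation: each layer prepends its own header.
layers = ["transport", "network", "data-link"]   # headers added on the way down

def encapsulate(payload: str) -> str:
    for layer in layers:                          # application data goes in first
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    for layer in reversed(layers):                # receiver strips in reverse order
        prefix = f"[{layer}]"
        assert frame.startswith(prefix), "header mismatch"
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("hello")
print(frame)                 # [data-link][network][transport]hello
print(decapsulate(frame))    # hello
```

Each layer only inspects its own header, which is why layers can be developed and replaced independently.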

The Layers and their Responsibilities (listed from the top of the stack down)

7. Application – Provides services that directly support user applications, such as the user interface, e-mail, file transfer, terminal emulation, database access, etc. Communicates through: gateways and application interfaces.

6. Presentation – Translates data between the formats the network requires and the computer expects. Handles character encoding, bit order, and byte order issues. Encodes and decodes data. Determines the format and structure of data. Compresses and decompresses, encrypts and decrypts data. Communicates through: gateways and application interfaces.

5. Session – Allows applications on separate computers to share a connection (called a session). Establishes and maintains the connection. Manages upper-layer errors. Handles remote procedure calls. Synchronizes communicating nodes. Communicates through: gateways and application interfaces.

4. Transport – Ensures that packets are delivered error free, in sequence, and without loss or duplication. Takes action to correct faulty transmissions. Controls the flow of data. Acknowledges successful receipt of data. Fragments and reassembles data. Communicates through: gateway services, routers, and brouters.

3. Network – Makes routing decisions and forwards packets (a.k.a. datagrams) for devices that could be farther away than a single link. Moves information to the correct address. Assembles and disassembles packets. Addresses and routes data packets. Determines the best path for moving data through the network. Communicates through: gateway services, routers, and brouters.

2. Data Link – Provides for the flow of data over a single link from one device to another. Controls access to the communication channel. Controls the flow of data. Organizes data into logical frames (logical units of information). Identifies the specific computer on the network. Detects errors. Communicates through: switches, bridges, and intelligent hubs.

The Data Link Layer contains 2 sub-layers:

A. The LLC (Logical Link Control) – The upper sub-layer, which establishes and maintains links between communicating devices and is responsible for frame error checking and flow control.

B. The MAC (Media Access Control) – The lower sub-layer, which controls how devices share a media channel (either through contention or token passing) and handles hardware (MAC) addresses.

1. Physical – Handles the sending and receiving of bits. Provides the electrical and mechanical interfaces for a network. Specifies the type of medium used to connect network devices and how signals are transmitted on the network. Communicates through: repeaters, hubs, switches, cables, connectors, transmitters, receivers, and multiplexers.

Layers request the services of the layers below them and provide services to the layers above them. The point of communication between adjacent layers is called the SAP (Service Access Point).

TCP/IP Protocol Architecture

• Developed by the US Defense Advanced Research Projects Agency (DARPA) for its packet-switched network (ARPANET)

• Used by the global Internet
• No official model, but a working one
• This model has five layers:

—Application layer —Host to host or transport layer —Internet layer —Network access layer —Physical layer

OSI vs. TCP/IP

TCP

• The usual transport layer protocol is the Transmission Control Protocol (TCP)
—Reliable connection
• Connection
—A temporary logical association between entities in different systems
• TCP PDU
—Called a TCP segment


—Includes source and destination ports (cf. SAP), which identify the respective users (applications)
• A connection refers to a pair of ports
• TCP tracks segments between entities on each connection

TCP/IP Concepts

UDP

• The alternative to TCP is the User Datagram Protocol (UDP)
• Delivery is not guaranteed
• No preservation of sequence
• No protection against duplication
• Minimum overhead
• Adds port addressing to IP
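UDP's minimal, connectionless service can be seen with a loopback socket pair (a sketch using Python's standard socket module; port 0 asks the OS for any free port):

```python
import socket

# UDP: no connection setup; each sendto() is an independent datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # OS assigns a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram 1", addr)       # fire and forget: no ACK, no sequencing

data, src = receiver.recvfrom(1024)      # UDP adds only port addressing
print(data)                              # b'datagram 1'
sender.close(); receiver.close()
```

On the loopback interface the datagram arrives reliably, but UDP itself makes no such promise on a real network: delivery, ordering, and duplicate suppression are left to the application.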

Some Protocols in TCP/IP Suite


Data Transmission In a communications system, data are propagated from one point to another by means of electromagnetic signals. Both analog and digital signals may be transmitted on suitable transmission media. An analog signal is a continuously varying electromagnetic wave that may be propagated over a variety of media, depending on spectrum; examples are wire media, such as twisted pair and coaxial cable; fiber optic cable; and unguided media, such as atmosphere or space propagation.

Figure 1

Figure 2

As Figure 1 illustrates, analog signals can be used to transmit both analog data, represented by an electromagnetic signal occupying the same spectrum, and digital data, using a modem (modulator/demodulator) to modulate the digital data onto some carrier frequency. However, an analog signal will become weaker (attenuate) after a certain distance. To achieve longer distances, the analog transmission system includes amplifiers that boost the energy in the signal. Unfortunately, the amplifier also boosts the noise components, and with amplifiers cascaded to achieve long distances, the signal becomes more and more distorted. For analog data, such as voice, quite a bit of distortion can be tolerated and the data remain intelligible. For digital data, however, cascaded amplifiers will introduce errors.

A digital signal is a sequence of voltage pulses that may be transmitted over a wire medium; e.g., a constant positive voltage level may represent binary 0 and a constant negative voltage level may represent binary 1. As Figure 2 illustrates, digital signals can be used to transmit both analog data and digital data. Analog data can be converted to digital form using a codec (coder-decoder), which takes an analog signal that directly represents the voice data and approximates that signal by a bit stream; at the receiving end, the bit stream is used to reconstruct the analog data. Digital data can be directly represented by digital signals. A digital signal can be transmitted only a limited distance before attenuation, noise, and other impairments endanger the integrity of the data. To achieve greater distances, repeaters are used. A repeater receives the digital signal, recovers the pattern of 1s and 0s, and retransmits a new signal; thus the attenuation is overcome. The principal advantages of digital signaling are that it is generally cheaper than analog signaling and is less susceptible to noise interference.
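The codec's approximation of an analog signal by a bit stream can be sketched with uniform quantization (a simplified illustration; real voice codecs such as PCM use finer, logarithmic quantization):

```python
import math

LEVELS = 16                      # 4-bit uniform quantizer (illustrative)

def encode(samples):
    """Coder half: map each sample in [-1, 1] to one of LEVELS bins."""
    return [min(LEVELS - 1, int((s + 1) / 2 * LEVELS)) for s in samples]

def decode(codes):
    """Decoder half: reconstruct the midpoint of each bin."""
    return [(c + 0.5) / LEVELS * 2 - 1 for c in codes]

analog = [math.sin(2 * math.pi * t / 8) for t in range(8)]   # sampled waveform
restored = decode(encode(analog))

# Reconstruction is close but not exact: quantization error, at most
# half a bin width (bins are 2/LEVELS wide, so the bound is 1/LEVELS).
worst = max(abs(a - r) for a, r in zip(analog, restored))
print(worst <= 1 / LEVELS)       # True
```

More bits per sample shrink the bins and hence the reconstruction error, at the cost of a higher bit rate.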
The principal disadvantage is that digital signals suffer more from attenuation than analog signals do. Consider a sequence of voltage pulses, generated by a source using two voltage levels, and the received voltage some distance down a conducting medium: because of the attenuation, or reduction, of signal strength at higher frequencies, the pulses become rounded and smaller.

Which is the preferred method of transmission? The answer being supplied by the telecommunications industry and its customers is digital. Both long-haul telecommunications facilities and intra-building services have moved to digital transmission and, where possible, digital signaling techniques, for a range of reasons.

The maximum rate at which data can be transmitted over a given communication channel, under given conditions, is referred to as the channel capacity. There are four concepts here that we are trying to relate to one another:

• Data rate, in bits per second (bps), at which data can be communicated


• Bandwidth, as constrained by the transmitter and the nature of the transmission medium, expressed in cycles per second, or hertz
• Noise, the average level of noise over the communications path
• Error rate, the rate at which errors occur, where an error is the reception of a 1 when a 0 was transmitted, or vice versa

All transmission channels of any practical interest are of limited bandwidth, which arises from the physical properties of the transmission medium or from deliberate limitations at the transmitter to prevent interference from other sources. We want to make as efficient use as possible of a given bandwidth. For digital data, this means getting as high a data rate as possible at a particular limit of error rate for a given bandwidth. The main constraint on achieving this efficiency is noise.

Nyquist Signaling Rate: Consider a noise-free channel where the limitation on data rate is simply the bandwidth of the signal. Nyquist states that if the rate of signal transmission is 2B, then a signal with frequencies no greater than B is sufficient to carry that signaling rate. Conversely, given a bandwidth of B, the highest signaling rate that can be carried is 2B. This limitation is due to the effect of intersymbol interference, such as is produced by delay distortion. If the signals to be transmitted are binary (two voltage levels), then the data rate that can be supported by B Hz is 2B bps. However, signals with more than two levels can be used; that is, each signal element can represent more than one bit. For example, if four possible voltage levels are used as signals, then each signal element can represent two bits. With multilevel signaling, the Nyquist formulation becomes:

C = 2B log2 M

where M is the number of discrete signal or voltage levels. So, for a given bandwidth, the data rate can be increased by increasing the number of different signal elements. However, this places an increased burden on the receiver, as it must distinguish one of M possible signal elements. Noise and other impairments on the transmission line will limit the practical value of M.

Shannon Channel Capacity: Consider the relationship among data rate, noise, and error rate. The presence of noise can corrupt one or more bits. If the data rate is increased, the bits become "shorter", so that more bits are affected by a given pattern of noise. The mathematician Claude Shannon developed a formula relating these quantities. For a given level of noise, we expect that a greater signal strength would improve the ability to receive data correctly in the presence of noise. The key parameter involved is the signal-to-noise ratio (SNR, or S/N), the ratio of the power in the signal to the power contained in the noise present at a particular point in the transmission. Typically, this ratio is measured at the receiver, because it is at this point that an attempt is made to process the signal and recover the data. For convenience, this ratio is often reported in decibels: the amount, in decibels, by which the intended signal exceeds the noise level. A high SNR means a high-quality signal and a low number of required intermediate repeaters.

SNRdB = 10 log10 (signal power / noise power)

Capacity: C = B log2 (1 + SNR)
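Both formulas are easy to evaluate numerically. A small sketch, assuming an illustrative 3.1 kHz voice-grade channel and a 30 dB SNR (values chosen for the example, not taken from the text):

```python
import math

B = 3100            # channel bandwidth in Hz (illustrative voice-grade channel)

# Nyquist: noise-free capacity with M signal levels, C = 2B log2 M
M = 4               # four voltage levels -> 2 bits per signal element
nyquist = 2 * B * math.log2(M)

# Shannon: noisy-channel capacity, C = B log2(1 + SNR)
snr_db = 30                          # SNR reported in decibels
snr = 10 ** (snr_db / 10)            # invert SNRdB = 10 log10(S/N) -> S/N = 1000
shannon = B * math.log2(1 + snr)

print(round(nyquist))   # 12400 bps with 4-level signaling
print(round(shannon))   # 30898 bps theoretical maximum at 30 dB
```

Note how raising M lifts the Nyquist figure without bound, while Shannon's limit caps what any signaling scheme can achieve once noise is accounted for.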

The signal-to-noise ratio is important in the transmission of digital data because it sets an upper bound on the achievable data rate. Shannon's result is that the maximum channel capacity, in bits per second, obeys the equation shown, where C is the capacity of the channel in bits per second and B is the bandwidth of the channel in hertz. The Shannon formula represents the theoretical maximum that can be achieved. In practice, however, only much lower rates are achieved, in part because the formula assumes only white noise (thermal noise).

The successful transmission of data depends principally on two factors: the quality of the signal being transmitted and the characteristics of the transmission medium. Data transmission occurs between transmitter and receiver over some transmission medium. Transmission media may be classified as guided


or unguided. In both cases, communication is in the form of electromagnetic waves. With guided media, the waves are guided along a physical path; examples of guided media are twisted pair, coaxial cable, and optical fiber. Unguided media, also called wireless, provide a means for transmitting electromagnetic waves but do not guide them; examples are propagation through air, vacuum, and seawater.

In the case of guided media, the medium itself is more important in determining the limitations of transmission. For unguided media, the bandwidth of the signal produced by the transmitting antenna is more important than the medium in determining transmission characteristics. One key property of signals transmitted by antenna is directionality. In general, signals at lower frequencies are omnidirectional; that is, the signal propagates in all directions from the antenna. At higher frequencies, it is possible to focus the signal into a directional beam. In considering the design of data transmission systems, the key concerns are data rate and distance: the greater the data rate and distance, the better.

Transmission Characteristics of Guided Media

Twisted Pair

By far the most common guided transmission medium for both analog and digital signals is twisted pair. It is the most commonly used medium in the telephone network (linking residential telephones to the local telephone exchange, or office phones to a PBX) and for communications within buildings (for LANs running at 10-100 Mbps). Twisted pair is much less expensive than the other commonly used guided transmission media (coaxial cable, optical fiber) and is easier to work with.

A twisted pair consists of two insulated copper wires arranged in a regular spiral pattern. A wire pair acts as a single communication link. Typically, a number of these pairs are bundled together into a cable by wrapping them in a tough protective sheath. The twisting tends to decrease the crosstalk interference between adjacent pairs in a cable. Neighboring pairs in a bundle typically have somewhat different twist lengths to reduce crosstalk; on long-distance links, the twist length typically varies from 5 to 15 cm. The wires in a pair have thicknesses from 0.4 to 0.9 mm.

Medium                              Frequency Range    Typical Attenuation    Typical Delay    Repeater Spacing
Twisted pair (with loading)         0 to 3.5 kHz       0.2 dB/km @ 1 kHz      50 µs/km         2 km
Twisted pairs (multi-pair cables)   0 to 1 MHz         0.7 dB/km @ 1 kHz      5 µs/km          2 km
Coaxial cable                       0 to 500 MHz       7 dB/km @ 10 MHz       4 µs/km          1 to 9 km
Optical fiber                       186 to 370 THz     0.2 to 0.5 dB/km       5 µs/km          40 km
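The attenuation column above translates directly into a rough link budget. A sketch, assuming a 30 dB allowable loss before regeneration (an invented design margin, not a figure from the table):

```python
# How far can a signal travel before it needs regeneration?
# total loss (dB) = attenuation (dB/km) * distance (km)
MAX_LOSS_DB = 30            # assumed allowable loss before a repeater is needed

media = {
    "twisted pair (loaded)": 0.2,   # dB/km @ 1 kHz, from the table above
    "coaxial cable":         7.0,   # dB/km @ 10 MHz
    "optical fiber":         0.35,  # midpoint of the 0.2-0.5 dB/km range
}

for name, db_per_km in media.items():
    reach_km = MAX_LOSS_DB / db_per_km      # loss grows linearly with distance
    print(f"{name}: about {reach_km:.0f} km per {MAX_LOSS_DB} dB budget")
```

Note that the real repeater spacings in the final column of the table are much tighter than a pure attenuation budget suggests, because noise, distortion, and usable bandwidth also degrade with distance.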


Coaxial Cable

Coaxial cable, like twisted pair, consists of two conductors, but it is constructed differently to permit operation over a wider range of frequencies. It consists of a hollow outer cylindrical conductor that surrounds a single inner wire conductor (see figure). The inner conductor is held in place by either regularly spaced insulating rings or a solid dielectric material. The outer conductor is covered with a jacket or shield. A single coaxial cable has a diameter of 1 to 2.5 cm. Coaxial cable can be used over longer distances and supports more stations on a shared line than twisted pair.

Coaxial cable is a versatile transmission medium, used in a wide variety of applications, including:

• Television distribution - aerial to TV, and CATV systems
• Long-distance telephone transmission - traditionally used for inter-exchange links, now being replaced by optical fiber, microwave, and satellite
• Short-run computer system links
• Local area networks

Coaxial cable is used to transmit both analog and digital signals. It has frequency characteristics superior to those of twisted pair and can hence be used effectively at higher frequencies and data rates. Because of its shielded, concentric construction, coaxial cable is much less susceptible to interference and crosstalk than twisted pair. The principal constraints on performance are attenuation, thermal noise, and intermodulation noise; the latter is present only when several channels (FDM) or frequency bands are in use on the cable. For long-distance transmission of analog signals, amplifiers are needed every few kilometers, with closer spacing required if higher frequencies are used. The usable spectrum for analog signaling extends to about 500 MHz. For digital signaling, repeaters are needed every kilometer or so, with closer spacing needed for higher data rates.

Optical Fiber

An optical fiber is a thin (2 to 125 µm), flexible medium capable of guiding an optical ray. Various glasses and plastics can be used to make optical fibers. An optical fiber cable has a cylindrical shape and consists of three concentric sections: the core, the cladding, and the jacket. The core is the innermost section and consists of one or more very thin strands, or fibers, made of glass or plastic; the core has a diameter in the range of 8 to 50 µm. Each fiber is surrounded by its own cladding, a glass or plastic coating that has optical properties different from those of the core and a diameter of 125 µm. The interface between the core and cladding acts as a reflector to confine light that would otherwise escape the core. The outermost layer, surrounding one or a bundle of cladded fibers, is the jacket. The jacket is composed of plastic and other material layered to protect against moisture, abrasion, crushing, and other environmental dangers.

Optical fiber already enjoys considerable use in long-distance telecommunications, and its use in military applications is growing. The continuing improvements in performance and decline in prices, together with the inherent advantages of optical fiber, have made it increasingly attractive for local area networking. Five basic categories of application have become important for optical fiber: long-haul trunks, metropolitan trunks, rural exchange trunks, subscriber loops, and local area networks.

The following characteristics distinguish optical fiber from twisted pair or coaxial cable:
• Greater capacity: The potential bandwidth, and hence data rate, of optical fiber is immense; data rates of hundreds of Gbps over tens of kilometers have been demonstrated. Compare this to the practical maximum of hundreds of Mbps over about 1 km for coaxial cable and, for twisted pair, a few Mbps over 1 km or 100 Mbps to 10 Gbps over a few tens of meters.
• Smaller size and lighter weight: Optical fibers are considerably thinner than coaxial cable or bundled twisted-pair cable. For cramped conduits in buildings and underground along public rights-of-way, the advantage of small size is considerable. The corresponding reduction in weight reduces structural support requirements.
• Lower attenuation: Attenuation is significantly lower for optical fiber than for coaxial cable or twisted pair, and is constant over a wide range.
• Electromagnetic isolation: Optical fiber systems are not affected by external electromagnetic fields. Thus the system is not vulnerable to interference, impulse noise, or crosstalk. By the same token, fibers do not radiate energy, so there is little interference with other equipment and there is a high degree of security from eavesdropping. In addition, fiber is inherently difficult to tap.
• Greater repeater spacing: Fewer repeaters mean lower cost and fewer sources of error. The performance of optical fiber systems from this point of view has been steadily improving. Repeater spacing in the tens of kilometers for optical fiber is common, and repeater spacings of hundreds of kilometers have been demonstrated.
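As a rough illustration of why low attenuation permits wide repeater spacing, the loss over a span is simply the per-kilometer attenuation times the distance. A minimal link-budget sketch (the launch power and the 0.3 dB/km figure are illustrative assumptions, not values from the text):

```python
def received_dbm(tx_dbm: float, atten_db_per_km: float, km: float) -> float:
    """Received power (dBm) after a fiber span: launch power minus total span loss."""
    return tx_dbm - atten_db_per_km * km

# A 0 dBm source over 100 km of 0.3 dB/km fiber still arrives at about -30 dBm,
# comfortably above typical receiver sensitivities -- hence repeater spacings in
# the tens of kilometers. The same span of coaxial cable would lose far more.
print(received_dbm(0.0, 0.3, 100))
```

The same arithmetic with coaxial-cable attenuation figures (orders of magnitude higher per kilometer) shows why its repeaters must sit only a kilometer or so apart.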

Figure shows the principle of optical fiber transmission. Light from a source enters the cylindrical glass or plastic core. Rays at shallow angles are reflected and propagated along the fiber; other rays are absorbed by the surrounding material. This form of propagation is called step-index multimode, referring to the variety of angles that will reflect. With multimode transmission, multiple propagation paths exist, each with a different path length and hence time to traverse the fiber. This causes signal elements (light pulses) to spread out in time, which limits the rate at which data can be accurately received. This type of fiber is best suited for transmission over very short distances. When the fiber core radius is reduced, fewer angles will reflect. By reducing the radius of the core to the order of a wavelength, only a single angle or mode can pass: the axial ray. Because there is a single transmission path with single-mode transmission, the distortion found in multimode cannot occur; this is why single-mode propagation provides superior performance. Single-mode fiber is typically used for long-distance applications, including telephone and cable television. Finally, by varying the index of refraction of the core, a third type of transmission, known as graded-index multimode, is possible. The higher refractive index (discussed subsequently) at the center makes the light rays moving down the axis advance more slowly than those near the cladding. Rather than zig-zagging off the cladding, light in the core curves helically because of the graded index, reducing its travel distance. The shortened path and higher speed allow light at the periphery to arrive at a receiver at about the same time as the straight rays in the core axis. Graded-index fibers are often used in local area networks.

Unguided Transmission

Unguided transmission techniques commonly used for information communications include broadcast radio, terrestrial microwave, and satellite. Infrared transmission is used in some LAN applications. Three general ranges of frequencies are of interest in our discussion of wireless transmission. Frequencies in the range of about 1 to 40 GHz are referred to as microwave frequencies. At these frequencies, highly directional beams are possible, and microwave is quite suitable for point-to-point transmission. Microwave is also used for satellite communications. Frequencies in the range of 30 MHz to 1 GHz are suitable for omnidirectional applications; we refer to this range as the radio range. Another important frequency range is the infrared portion of the spectrum, roughly from 3 × 10^11 to 2 × 10^14 Hz. Infrared is useful for local point-to-point and multipoint applications within confined areas, such as a single room. For unguided media, transmission and reception are achieved by means of an antenna.
An antenna can be defined as an electrical conductor or system of conductors used either for radiating electromagnetic energy or for collecting electromagnetic energy. For transmission of a signal, radio-frequency electrical energy from the transmitter is converted into electromagnetic energy by the antenna and radiated into the surrounding environment. For reception of a signal, electromagnetic energy impinging on the antenna is converted into radio-frequency electrical energy and fed into the receiver. In two-way communication, the same antenna can be and often is used for both transmission and reception. This is possible because antenna characteristics are essentially the same whether an antenna is sending or receiving electromagnetic energy.

An antenna will radiate power in all directions but, typically, does not perform equally well in all directions. A common way to characterize the performance of an antenna is the radiation pattern, which is a graphical representation of the radiation properties of an antenna as a function of space coordinates. The simplest pattern is produced by an idealized antenna known as the isotropic antenna. An isotropic antenna is a point in space that radiates power in all directions equally. The actual radiation pattern for the isotropic antenna is a sphere with the antenna at the center.

An important type of antenna is the parabolic reflective antenna, which is used in terrestrial microwave and satellite applications. A parabola is the locus of all points equidistant from a fixed line (the directrix) and a fixed point (the focus) not on the line, as shown in Figure above. If a parabola is revolved about its axis, the surface generated is called a paraboloid.
Paraboloid surfaces are used in headlights, optical and radio telescopes, and microwave antennas because, if a source of electromagnetic energy (or sound) is placed at the focus of the paraboloid, and if the paraboloid is a reflecting surface, then the wave will bounce back in lines parallel to the axis of the paraboloid, as shown in Figure b above. In theory, this effect creates a parallel beam without dispersion. In practice, there will be some dispersion, because the source of energy must occupy more than one point. The larger the diameter of the antenna, the more tightly directional is the beam. On reception, if incoming waves are parallel to the axis of the reflecting paraboloid, the resulting signal will be concentrated at the focus.


Parabolic Reflective Antenna

Antenna gain is a measure of the directionality of an antenna. Antenna gain is defined as the power output, in a particular direction, compared to that produced in any direction by a perfect omnidirectional antenna (isotropic antenna). For example, if an antenna has a gain of 3 dB, that antenna improves upon the isotropic antenna in that direction by 3 dB, or a factor of 2. The increased power radiated in a given direction is at the expense of other directions: in effect, increased power is radiated in one direction by reducing the power radiated in other directions. It is important to note that antenna gain does not refer to obtaining more output power than input power, but rather to directionality.

The primary use for terrestrial microwave systems is in long-haul telecommunications service, as an alternative to coaxial cable or optical fiber. The microwave facility requires far fewer amplifiers or repeaters than coaxial cable over the same distance (typically every 10-100 km), but requires line-of-sight transmission. Microwave is commonly used for both voice and television transmission. Another increasingly common use of microwave is for short point-to-point links between buildings, for closed-circuit TV or as a data link between local area networks. The most common type of microwave antenna is the parabolic "dish", fixed rigidly to focus a narrow beam on a receiving antenna. A typical size is about 3 m in diameter. Microwave antennas are usually located at substantial heights above ground level to extend the range between antennas and to be able to transmit over intervening obstacles. To achieve long-distance transmission, a series of microwave relay towers is used, and point-to-point microwave links are strung together over the desired distance. Microwave transmission covers a substantial portion of the electromagnetic spectrum, typically in the range 1 to 40 GHz, with the 4-6 GHz and now 11 GHz bands the most common.
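The 3 dB = factor-of-2 relationship above is an instance of the standard conversion between decibels and linear power ratios, factor = 10^(dB/10). A quick sketch to check it:

```python
import math

def db_to_factor(db: float) -> float:
    """Convert a power gain in dB to a linear power ratio."""
    return 10 ** (db / 10)

def factor_to_db(factor: float) -> float:
    """Convert a linear power ratio to dB."""
    return 10 * math.log10(factor)

print(round(db_to_factor(3), 2))  # ~2.0: a 3 dB gain roughly doubles power
print(factor_to_db(100))          # 20.0: a hundredfold ratio is 20 dB
```

Note the directionality caveat from the text still applies: the "gain" is redistribution of radiated power, not amplification.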
The higher the frequency used, the higher the potential bandwidth and therefore the higher the potential data rate. As with any transmission system, a main source of loss is attenuation, which grows with the square of distance. The effects of rainfall become especially noticeable above 10 GHz. Another source of impairment is interference.

A communication satellite is, in effect, a microwave relay station. It is used to link two or more ground-based microwave transmitter/receivers, known as earth stations, or ground stations. The satellite receives transmissions on one frequency band (uplink), amplifies or repeats the signal, and transmits it on another frequency (downlink). A single orbiting satellite will operate on a number of frequency bands, called transponder channels, or simply transponders. The optimum frequency range for satellite transmission is in the range 1 to 10 GHz. Most satellites providing point-to-point service today use a frequency bandwidth in the range 5.925 to 6.425 GHz for transmission from earth to satellite (uplink) and a bandwidth in the range 3.7 to 4.2 GHz for transmission from satellite to earth (downlink). This combination is referred to as the 4/6-GHz band, but it has become saturated, so the 12/14-GHz band has been developed (uplink: 14 - 14.5 GHz; downlink: 11.7 - 12.2 GHz).


For a communication satellite to function effectively, it is generally required that it remain stationary with respect to its position over the earth to be within the line of sight of its earth stations at all times. To remain stationary, the satellite must have a period of rotation equal to the earth's period of rotation, which occurs at a height of 35,863 km at the equator. Two satellites using the same frequency band, if close enough together, will interfere with each other. To avoid this, current standards require a 4° spacing in the 4/6-GHz band and a 3° spacing at 12/14 GHz. Thus the number of possible satellites is quite limited. Among the most important applications for satellites are: Television distribution, Long-distance telephone transmission, Private business networks, and Global positioning.
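The slot limit mentioned above follows from simple arithmetic over the 360° geostationary arc. A quick sketch (slot counts only; real systems ease the limit with frequency reuse and polarization, which this ignores):

```python
def max_satellites(spacing_deg: float) -> int:
    """Number of geostationary orbital slots available on the 360-degree arc
    when co-frequency satellites must be separated by spacing_deg degrees."""
    return int(360 // spacing_deg)

print(max_satellites(4))  # 90 slots at the 4-degree spacing of the 4/6-GHz band
print(max_satellites(3))  # 120 slots at the 3-degree spacing at 12/14 GHz
```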

Satellite Point to Point Link

Figure a depicts, in a general way, the first of two common configurations for satellite communication: the satellite is being used to provide a point-to-point link between two distant ground-based antennas.

Satellite Broadcast Link

Figure b depicts, in a general way, the second common configuration for satellite communication: the satellite provides communications between one ground-based transmitter and a number of ground-based receivers.

Radio is a general term used to encompass frequencies in the range of 3 kHz to 300 GHz. We are using the informal term broadcast radio to cover the VHF and part of the UHF band: 30 MHz to 1 GHz. This range covers FM radio and UHF and VHF television, and is also used for a number of data networking applications. The principal difference between broadcast radio and microwave is that the former is omnidirectional and the latter is directional. Thus broadcast radio does not require dish-shaped antennas, and the antennas need not be rigidly mounted to a precise alignment. The range 30 MHz to 1 GHz is an effective one for broadcast communications. Unlike the case for lower-frequency electromagnetic waves, the ionosphere is transparent to radio waves above 30 MHz. Thus transmission is limited to the line of sight, and distant transmitters will not interfere with each other due to reflection from the atmosphere. Unlike the higher frequencies of the microwave region, broadcast radio waves are less sensitive to attenuation from rainfall. A prime source of impairment for broadcast radio waves is multipath interference. Reflection from land, water, and natural or human-made objects can create multiple paths between antennas, e.g., ghosting on TV pictures.

Infrared communications is achieved using transmitters/receivers (transceivers) that modulate noncoherent infrared light. Transceivers must be within the line of sight of each other, either directly or via reflection from a light-colored surface such as the ceiling of a room.

Wireless Propagation

A signal radiated from an antenna travels along one of three routes: ground wave, sky wave, or line of sight (LOS), as shown in Figure.

Ground Wave Propagation

Ground wave propagation more or less follows the contour of the earth and can propagate considerable distances, well over the visual horizon. This effect is found at frequencies up to about 2 MHz. Several factors account for the tendency of electromagnetic waves in this frequency band to follow the earth's curvature. One factor is that the electromagnetic wave induces a current in the earth's surface, the result of which is to slow the wavefront near the earth, causing the wavefront to tilt downward and hence follow the earth's curvature. Another factor is diffraction, which is a phenomenon having to do with the behavior of electromagnetic waves in the presence of obstacles. Electromagnetic waves in this frequency range are scattered by the atmosphere in such a way that they do not penetrate the upper atmosphere. The best-known example of ground wave communication is AM radio.

Sky Wave Propagation

Sky wave propagation is used for amateur radio, CB radio, and international broadcasts such as the BBC and Voice of America. With sky wave propagation, a signal from an earth-based antenna is reflected from the ionized layer of the upper atmosphere (ionosphere) back down to earth. Although it appears the wave is reflected from the ionosphere as if the ionosphere were a hard reflecting surface, the effect is in fact caused by refraction. Refraction is described subsequently. A sky wave signal can travel through a number of hops, bouncing back and forth between the ionosphere and the earth's surface, as shown in figure b. With this propagation mode, a signal can be picked up thousands of kilometers from the transmitter.

Line of Sight Propagation

Above 30 MHz, neither ground wave nor sky wave propagation modes operate, and communication must be by line of sight. For satellite communication, a signal above 30 MHz is not reflected by the ionosphere, and therefore a signal can be transmitted between an earth station and a satellite overhead that is not beyond the horizon. For ground-based communication, the transmitting and receiving antennas must be within an effective line of sight of each other. The term effective is used because microwaves are bent or refracted by the atmosphere. The amount and even the direction of the bend depend on conditions, but generally microwaves are bent with the curvature of the earth and will therefore propagate farther than the optical line of sight. In what follows, we are almost exclusively concerned with LOS communications.
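The effective line of sight is often quantified with the engineering approximation d ≈ 3.57·√(K·h) km, where h is the antenna height in meters and K ≈ 4/3 accounts for atmospheric refraction (this formula is a standard approximation, not given in the text). A short sketch:

```python
import math

def los_distance_km(antenna_height_m: float, k: float = 4 / 3) -> float:
    """Distance in km to the radio horizon for an antenna of height h meters.
    k = 1 gives the optical horizon; k = 4/3 is the usual adjustment for
    refraction bending microwaves along the earth's curvature."""
    return 3.57 * math.sqrt(k * antenna_height_m)

# A 100 m tower: optical horizon vs. effective (refracted) radio horizon.
print(round(los_distance_km(100, k=1), 1))  # ~35.7 km optical
print(round(los_distance_km(100), 1))       # ~41.2 km effective
```

This is why microwave relay towers are tall and spaced tens of kilometers apart.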


Digital Data Communications

Data Communications

The distance over which data moves within a computer may vary from a few thousandths of an inch, as is the case within a single IC chip, to as much as several feet along the backplane of the main circuit board. Over such small distances, digital data may be transmitted as direct, two-level electrical signals over simple copper conductors. Except for the fastest computers, circuit designers are not very concerned about the shape of the conductor or the analog characteristics of signal transmission.

Data Communications concerns the transmission of digital messages to devices external to the message source. "External" devices are generally thought of as being independently powered circuitry that exists beyond the chassis of a computer or other digital message source. As a rule, the maximum permissible transmission rate of a message is directly proportional to signal power, and inversely proportional to channel noise. It is the aim of any communications system to provide the highest possible transmission rate at the lowest possible power and with the least possible noise.

Communications Channels

A communications channel is a pathway over which information can be conveyed. It may be defined by a physical wire that connects communicating devices, or by a radio, laser, or other radiated energy source that has no obvious physical presence. Information sent through a communications channel has a source from which the information originates, and a destination to which the information is delivered. Although information originates from a single source, there may be more than one destination, depending upon how many receive stations are linked to the channel and how much energy the transmitted signal possesses. In a digital communications channel, the information is represented by individual data bits, which may be encapsulated into multibit message units. A byte, which consists of eight bits, is an example of a message unit that may be conveyed through a digital communications channel. A collection of bytes may itself be grouped into a frame or other higher-level message unit. Such multiple levels of encapsulation facilitate the handling of messages in a complex data communications network.

Any communications channel has a direction associated with it:


The message source is the transmitter, and the destination is the receiver. A channel whose direction of transmission is unchanging is referred to as a simplex channel. For example, a radio station is a simplex channel because it always transmits the signal to its listeners and never allows them to transmit back.

A half-duplex channel is a single physical channel in which the direction may be reversed. Messages may flow in two directions, but never at the same time, in a half-duplex system. In a telephone call, one party speaks while the other listens. After a pause, the other party speaks and the first party listens. Speaking simultaneously results in garbled sound that cannot be understood.

A full-duplex channel allows simultaneous message exchange in both directions. It really consists of two simplex channels, a forward channel and a reverse channel, linking the same points. The transmission rate of the reverse channel may be slower if it is used only for flow control of the forward channel.

Serial Communications

Most digital messages are vastly longer than just a few bits. Because it is neither practical nor economic to transfer all bits of a long message simultaneously, the message is broken into smaller parts and transmitted sequentially. Bit-serial transmission conveys a message one bit at a time through a channel. Each bit represents a part of the message. The individual bits are then reassembled at the destination to compose the message. In general, one channel will pass only one bit at a time. Thus, bit-serial transmission is necessary in data communications if only a single channel is available. Bit-serial transmission is normally just called serial transmission and is the chosen communications method in many computer peripherals. Byte-serial transmission conveys eight bits at a time through eight parallel channels.
Although the raw transfer rate is eight times faster than in bit-serial transmission, eight channels are needed, and the cost may be as much as eight times higher to transmit the message. When distances are short, it may nonetheless be both feasible and economic to use parallel channels in return for high data rates. This figure illustrates these ideas:

The baud rate refers to the signalling rate at which data is sent through a channel and is measured in electrical transitions per second. In the EIA232 serial interface standard, one signal transition, at most, occurs per bit, and the baud rate and bit rate are identical. In this case, a rate of 9600 baud corresponds to a transfer of 9,600 data bits per second with a bit period of 104 microseconds (1/9600 sec.). If two electrical transitions were required for each bit, as is the case in Manchester coding, then at a rate of 9600 baud, only 4800 bits per second could be conveyed. The channel efficiency is the fraction of bits passed through the channel per second that carry useful information. It does not include framing, formatting, and error-detecting bits that may be added to the information bits before a message is transmitted, and will always be less than one.
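The baud/bit-rate arithmetic above can be sketched directly, generic over the number of signal transitions needed per bit:

```python
def bit_period_us(baud: int, transitions_per_bit: int = 1) -> float:
    """Bit period in microseconds for a given baud rate, when each bit
    consumes transitions_per_bit signalling transitions."""
    bits_per_second = baud / transitions_per_bit
    return 1_000_000 / bits_per_second

print(round(bit_period_us(9600)))     # 104 us per bit at 9600 baud, 1 transition/bit
print(round(bit_period_us(9600, 2)))  # 208 us per bit if 2 transitions are needed,
                                      # i.e. only 4800 bit/s through the same channel
```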


The data rate of a channel is often specified by its bit rate (often thought erroneously to be the same as baud rate). However, an equivalent measure of channel capacity is bandwidth. In general, the maximum data rate a channel can support is directly proportional to the channel's bandwidth and inversely proportional to the channel's noise level.

A communications protocol is an agreed-upon convention that defines the order and meaning of bits in a serial transmission. It may also specify a procedure for exchanging messages. A protocol will define how many data bits compose a message unit, the framing and formatting bits, any error-detecting bits that may be added, and other information that governs control of the communications hardware. Channel efficiency is determined by the protocol design rather than by digital hardware considerations. Note that there is a tradeoff between channel efficiency and reliability - protocols that provide greater immunity to noise by adding error-detecting and -correcting codes must necessarily become less efficient.

Digital Data Transmission

The transmission of a stream of bits from one device to another across a transmission link involves a great deal of cooperation and agreement between the two sides. One of the most fundamental requirements is synchronization. The receiver must know the rate at which bits are being received so that it can sample the line at appropriate intervals to determine the value of each received bit. Two techniques in common use for this purpose are:
• Asynchronous transmission
• Synchronous transmission
The reception of digital data involves sampling the incoming signal once per bit time to determine the binary value. This is compounded by a timing difficulty: in order for the receiver to sample the incoming bits properly, it must know the arrival time and duration of each bit that it receives.
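The relationship between data rate, bandwidth, and noise stated above is made precise by Shannon's capacity formula, C = B·log2(1 + S/N) (the text does not name the formula; it is the standard formalization of that proportionality). A small sketch:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity C = B * log2(1 + S/N), with S/N as a linear ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3100 Hz voice-grade channel with a 30 dB signal-to-noise ratio (S/N = 1000)
# has a theoretical capacity of roughly 30.9 kbit/s.
print(round(shannon_capacity_bps(3100, 1000)))
```

Capacity rises linearly with bandwidth but only logarithmically with signal-to-noise ratio, which is why wider channels matter more than louder transmitters.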
Typically, the receiver will attempt to sample the medium at the center of each bit time, at intervals of one bit time. If the receiver times its samples based on its own clock, then there will be a problem if the transmitter's and receiver's clocks are not precisely aligned. If there is a drift in the receiver's clock, then after enough samples the receiver may be in error because it is sampling in the wrong bit time. For smaller timing differences, the error would occur later, but eventually the receiver will be out of step with the transmitter if the transmitter sends a sufficiently long stream of bits and if no steps are taken to synchronize the transmitter and receiver.

Asynchronous vs. Synchronous Transmission

Serialized data is not generally sent at a uniform rate through a channel. Instead, there is usually a burst of regularly spaced binary data bits followed by a pause, after which the data flow resumes. Packets of binary data are sent in this manner, possibly with variable-length pauses between packets, until the message has been fully transmitted. In order for the receiving end to know the proper moment to read individual binary bits from the channel, it must know exactly when a packet begins and how much time elapses between bits. When this timing information is known, the receiver is said to be synchronized with the transmitter, and accurate data transfer becomes possible. Failure to remain synchronized throughout a transmission will cause data to be corrupted or lost.

In synchronous systems, separate channels are used to transmit data and timing information. The timing channel transmits clock pulses to the receiver. Upon receipt of a clock pulse, the receiver reads the data channel and latches the bit value found on the channel at that moment. The data channel is not read again until the next clock pulse arrives.
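How quickly a clock mismatch breaks sampling can be estimated: if the receiver samples bit centers, it falls out of the intended bit cell once accumulated drift reaches half a bit period (the half-bit threshold is a simplifying assumption; real receivers fail somewhat sooner because of edge jitter):

```python
def bits_until_misalignment(drift_ppm: float) -> int:
    """Number of bit times before accumulated clock drift reaches half a bit
    period, given the relative transmitter/receiver rate mismatch in parts
    per million. Each bit time adds drift_ppm/1e6 bit periods of error."""
    return round(0.5 * 1_000_000 / drift_ppm)

print(bits_until_misalignment(10_000))  # 50 bits survive a 1% mismatch
print(bits_until_misalignment(100))     # 5000 bits at 0.01% (100 ppm)
```

This is why the short (10- or 11-bit) packets of asynchronous transmission, each resynchronized by its start bit, tolerate ordinary crystal-oscillator tolerances.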
Because the transmitter originates both the data and the timing pulses, the receiver will read the data channel only when told to do so by the transmitter (via the clock pulse), and synchronization is guaranteed. Techniques exist to merge the timing signal with the data so that only a single channel is required. This is especially useful when synchronous transmissions are to be sent through a modem. Nonreturn-to-zero and biphase Manchester coding both refer to methods for encoding a data stream into an electrical waveform for transmission; of the two, Manchester coding is self-timed, because it guarantees a signal transition in the middle of every bit period, whereas plain nonreturn-to-zero coding carries no timing information of its own.


Synchronous transmission

In asynchronous systems, a separate timing channel is not used. The transmitter and receiver must be preset in advance to an agreed-upon baud rate. A very accurate local oscillator within the receiver will then generate an internal clock signal that is equal to the transmitter's within a fraction of a percent. For the most common serial protocol, data is sent in small packets of 10 or 11 bits, eight of which constitute message information. When the channel is idle, the signal voltage corresponds to a continuous logic '1'. A data packet always begins with a logic '0' (the start bit) to signal the receiver that a transmission is starting. The start bit triggers an internal timer in the receiver that generates the needed clock pulses. Following the start bit, eight bits of message data are sent bit by bit at the agreed upon baud rate. The packet is concluded with a parity bit and stop bit. One complete packet is illustrated below:
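The 11-bit packet just described can be sketched as a list of line levels (LSB-first bit order and the even-parity convention are assumptions here; the text does not fix either choice):

```python
def frame_byte(data: int) -> list[int]:
    """Build an 11-bit asynchronous frame: start bit (0), eight data bits
    LSB first, even parity bit, stop bit (1). The idle line sits at logic 1,
    so the 0 start bit is what triggers the receiver's internal timer."""
    bits = [(data >> i) & 1 for i in range(8)]  # data bits, LSB first
    parity = sum(bits) % 2                      # makes total number of 1s even
    return [0] + bits + [parity, 1]

frame = frame_byte(ord("A"))  # 0x41 = 0100 0001
print(frame)  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
```

Only 8 of the 11 bits carry message data, so the channel efficiency of this framing is 8/11 ≈ 0.73 before any higher-level overhead.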

The packet length is short in asynchronous systems to minimize the risk that the local oscillators in the receiver and transmitter will drift apart. When high-quality crystal oscillators are used, synchronization can be guaranteed over an 11-bit period. Every time a new packet is sent, the start bit resets the synchronization, so the pause between packets can be arbitrarily long.

Parity and Checksums

Noise and momentary electrical disturbances may cause data to be changed as it passes through a communications channel. If the receiver fails to detect this, the received message will be incorrect, resulting in possibly serious consequences. As a first line of defense against data errors, they must be detected. If an error can be flagged, it might be possible to request that the faulty packet be resent, or to at least prevent the flawed data from being taken as correct. If sufficient redundant information is sent, one- or two-bit errors may be corrected by hardware within the receiver before the corrupted data ever reaches its destination.

A parity bit is added to a data packet for the purpose of error detection. In the even-parity convention, the value of the parity bit is chosen so that the total number of '1' digits in the combined data plus parity packet is an even number. Upon receipt of the packet, the parity needed for the data is recomputed by local hardware and compared to the parity bit received with the data. If any bit has changed state, the parity will not match, and an error will have been detected. In fact, if an odd number of bits (not just one) have been altered, the parity will not match. If an even number of bits have been reversed, the parity will match even though an error has occurred. However, a statistical analysis of data communication errors has shown that a single-bit error is much more probable than a multibit error in the presence of random noise. Thus, parity is a reliable method of error detection.
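A minimal sketch of the even-parity convention, including the even-number-of-errors blind spot noted above:

```python
def even_parity_bit(data: int) -> int:
    """Parity bit that makes the total count of 1s (data plus parity) even."""
    return bin(data).count("1") % 2

def parity_ok(data: int, parity: int) -> bool:
    """Receiver-side check: accept only if the combined 1s count is even."""
    return (bin(data).count("1") + parity) % 2 == 0

p = even_parity_bit(0b1011001)              # four 1s, so the parity bit is 0
print(parity_ok(0b1011001, p))              # True: clean packet accepted
print(parity_ok(0b1011001 ^ 0b0000100, p))  # False: one flipped bit is caught
print(parity_ok(0b1011001 ^ 0b0000110, p))  # True: a two-bit error slips through
```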


Another approach to error detection involves the computation of a checksum. In this case, the packets that constitute a message are added arithmetically. A checksum number is appended to the packet sequence so that the sum of data plus checksum is zero. When received, the packet sequence may be added, along with the checksum, by a local microprocessor. If the sum is nonzero, an error has occurred. As long as the sum is zero, it is highly unlikely (but not impossible) that any data has been corrupted during transmission.
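A sketch of this checksum scheme, assuming modulo-256 (one-byte) arithmetic, which the text does not specify:

```python
def make_checksum(packets: list[int]) -> int:
    """Checksum byte chosen so that (sum of packets + checksum) % 256 == 0."""
    return (-sum(packets)) % 256

def verify(packets: list[int], checksum: int) -> bool:
    """Receiver-side check: the data plus checksum must sum to zero mod 256."""
    return (sum(packets) + checksum) % 256 == 0

msg = [0x12, 0x34, 0x56]
c = make_checksum(msg)
print(c, verify(msg, c))          # checksum makes the total vanish mod 256
corrupted = [0x12, 0x35, 0x56]    # one byte altered in transit
print(verify(corrupted, c))       # False: the error is detected
```

As the text notes, a zero sum makes corruption unlikely but not impossible: two compensating byte changes would still sum to zero.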

Errors may not only be detected, but also corrected if additional code is added to a packet sequence. If the error probability is high or if it is not possible to request retransmission, this may be worth doing. However, including error-correcting code in a transmission lowers channel efficiency, and results in a noticeable drop in channel throughput.

Data Compression

If a typical message were statistically analyzed, it would be found that certain characters are used much more frequently than others. By analyzing a message before it is transmitted, short binary codes may be assigned to frequently used characters and longer codes to rarely used characters. In doing so, it is possible to reduce the total number of characters sent without altering the information in the message. Appropriate decoding at the receiver will restore the message to its original form. This procedure, known as data compression, may result in a 50 percent or greater savings in the amount of data transmitted. Even though time is necessary to analyze the message before it is transmitted, the savings may be great enough so that the total time for compression, transmission, and decompression will still be lower than it would be when sending an uncompressed message.

A compression method called Huffman coding is frequently used in data communications, and particularly in fax transmission. Clearly, most of the image data for a typical business letter represents white paper, and only about 5 percent of the surface represents black ink. It is possible to send a single code that, for example, represents a consecutive string of 1000 white pixels rather than a separate code for each white pixel. Consequently, data compression will significantly reduce the total message length for a faxed business letter. Were the letter made up of randomly distributed black ink covering 50 percent of the white paper surface, data compression would hold no advantages.
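The 1000-white-pixel example lends itself to a small sketch. This shows run-length encoding alone, a simplification: real Group 3 fax combines run lengths with Huffman codes (Modified Huffman coding), assigning short codewords to common run lengths.

```python
def rle_encode(pixels: str) -> list[tuple[str, int]]:
    """Run-length encode a string of 'W'/'B' pixels: one (color, length)
    pair per run, instead of one symbol per pixel."""
    runs: list[tuple[str, int]] = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1] = (p, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((p, 1))              # start a new run
    return runs

line = "W" * 1000 + "B" * 5 + "W" * 200
print(rle_encode(line))  # 3 runs stand in for 1205 individual pixels
```

On the pathological 50% random-ink page from the text, the runs would average length 2 and the encoding would save nothing, matching the closing observation above.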
Data Encryption

Privacy is a great concern in data communications. Faxed business letters can be intercepted at will through tapped phone lines or intercepted microwave transmissions, without the knowledge of the sender or receiver. To increase the security of this and other data communications, including digitized telephone conversations, the binary codes representing data may be scrambled in such a way that unauthorized interception will produce an indecipherable sequence of characters. Authorized receive stations are equipped with a decoder that enables the message to be restored. The process of scrambling, transmitting, and descrambling is known as encryption.

Data Storage Technology

Normally, we think of communications science as dealing with the contemporaneous exchange of information between distant parties. However, many of the same techniques employed in data communications are also applied to data storage to ensure that the retrieval of information from a storage medium is accurate. We find, for example, that error-correcting codes similar to those used to protect digital telephone transmissions from noise are also used to guarantee correct readback of digital data from compact audio disks, CD-ROMs, and tape backup systems.

Data Transfer in Digital Circuits

Data is typically grouped into packets that are either 8, 16, or 32 bits long and passed between temporary holding units called registers. Data within a register is available in parallel, because each bit exits the register on a separate conductor. To transfer data from one register to another, the output conductors of one register are switched onto a channel of parallel wires referred to as a bus. The input conductors of another register, which is also connected to the bus, capture the information:

Following a data transaction, the content of the source register is reproduced in the destination register. It is important to note that after any digital data transfer, the source and destination registers are equal; the source register is not erased when the data is sent. The transmit and receive switches shown above are electronic and operate in response to commands from a central control unit. It is possible that two or more destination registers will be switched on to receive data from a single source. However, only one source may transmit data onto the bus at any time. If multiple sources were to attempt transmission simultaneously, an electrical conflict would occur when bits of opposite value are driven onto a single bus conductor. Such a condition is referred to as bus contention. Not only will bus contention result in the loss of information, but it may also damage the electronic circuitry. As long as all registers in a system are linked to one central control unit, bus contentions should never occur if the circuit has been designed properly. Note that the data buses within a typical microprocessor are fundamentally half-duplex channels.

Transmission over Short Distances (< 2 feet)

When the source and destination registers are part of an integrated circuit (within a microprocessor chip, for example), they are extremely close (thousandths of an inch). Consequently, the bus signals are at very low power levels, may traverse a distance in very little time, and are not very susceptible to external noise and distortion. This is the ideal environment for digital communications. However, it is not yet possible to integrate all the necessary circuitry for a computer (i.e., CPU, memory, disk control, video and display drivers, etc.) on a single chip. When data is sent off-chip to another integrated circuit, the bus signals must be amplified and the conductors extended out of the chip through external pins. Amplifiers may be added to the source register:
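The register-to-bus-to-register transfer and the bus-contention rule described above can be modeled as a toy software sketch; the class and method names are invented for illustration, not real hardware behavior:

```python
class Bus:
    """A shared parallel channel: one source drives it, many destinations read it."""
    def __init__(self):
        self.driver = None            # register currently switched onto the bus

    def drive(self, register):
        if self.driver is not None:
            # Two sources driving opposite bit values would conflict electrically.
            raise RuntimeError("bus contention: another source is already driving")
        self.driver = register

    def release(self):
        self.driver = None

class Register:
    def __init__(self, value=0):
        self.value = value

    def transmit(self, bus):
        bus.drive(self)

    def capture(self, bus):
        self.value = bus.driver.value # copy the bits; the source is not erased

bus = Bus()
src, dst = Register(0b10110101), Register(0)
src.transmit(bus)
dst.capture(bus)                      # several destinations could capture here
bus.release()
assert dst.value == src.value == 0b10110101   # source and destination now equal
```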

Bus signals that exit microprocessor chips and other VLSI circuitry are electrically capable of traversing about one foot of conductor on a printed circuit board, or less if many devices are connected to it. Special buffer circuits may be added to boost the bus signals sufficiently for transmission over several additional feet of conductor length, or for distribution to many other chips (such as memory chips).

Noise and Electrical Distortion

Because of the very high switching rate and relatively low signal strength found on data, address, and other buses within a computer, direct extension of the buses beyond the confines of the main circuit board or plug-in boards would pose serious problems. First, long runs of electrical conductors, either on printed circuit boards or through cables, act like receiving antennas for electrical noise radiated by motors, switches, and electronic circuits:

Such noise becomes progressively worse as the length increases, and may eventually impose an unacceptable error rate on the bus signals. Just a single bit error in transferring an instruction code from memory to a microprocessor chip may cause an invalid instruction to be introduced into the instruction stream, in turn causing the computer to totally cease operation. A second problem involves the distortion of electrical signals as they pass through metallic conductors. Signals that start at the source as clean, rectangular pulses may be received as rounded pulses with ringing at the rising and falling edges:

These effects are properties of transmission through metallic conductors, and become more pronounced as the conductor length increases. To compensate for distortion, signal power must be increased or the transmission rate decreased.

Transmission over Medium Distances (< 20 feet)

Computer peripherals such as a printer or scanner generally include mechanisms that cannot be situated within the computer itself. Our first thought might be just to extend the computer's internal buses with a cable of sufficient length to reach the peripheral. Doing so, however, would expose all bus transactions to external noise and distortion, even though only a very small percentage of these transactions concern the distant peripheral to which the bus is connected. If a peripheral can be located within 20 feet of the computer, however, relatively simple electronics may be added to make data transfer through a cable efficient and reliable. To accomplish this, a bus interface circuit is installed in the computer:

It consists of a holding register for peripheral data, timing and formatting circuitry for external data transmission, and signal amplifiers to boost the signal sufficiently for transmission through a cable. When communication with the peripheral is necessary, data is first deposited in the holding register by the microprocessor. This data will then be reformatted, sent with error-detecting codes, and transmitted at a relatively slow rate by digital hardware in the bus interface circuit. In addition, the signal power is greatly boosted before transmission through the cable. These steps ensure that the data will not be corrupted by noise or distortion during its passage through the cable. In addition, because only data destined for the peripheral is sent, the party-line transactions taking place on the computer's buses are not unnecessarily exposed to noise. Data sent in this manner may be transmitted in byte-serial format if the cable has eight parallel channels (at least 10 conductors for half-duplex operation), or in bit-serial format if only a single channel is available.

Transmission over Long Distances (< 4000 feet)

When relatively long distances are involved in reaching a peripheral device, driver circuits must be inserted after the bus interface unit to compensate for the electrical effects of long cables:

This is the only change needed if a single peripheral is used. However, if many peripherals are connected, or if other computer stations are to be linked, a local area network (LAN) is required, and it becomes necessary to drastically change both the electrical drivers and the protocol used to send messages through the cable. Because multiconductor cable is expensive, bit-serial transmission is almost always used when the distance exceeds 20 feet. A great deal of technology has been developed for LAN systems to minimize the amount of cable required and maximize the throughput. The costs of a LAN are concentrated in the electrical interface card installed in PCs or peripherals to drive the cable, and in the communications software, not in the cable itself (whose cost has been minimized). Thus, the cost and complexity of a LAN are not particularly affected by the distance between stations.

Transmission over Very Long Distances (greater than 4000 feet)

Data communications through the telephone network can reach any point in the world. The volume of overseas fax transmissions is increasing constantly, and computer networks that link thousands of businesses, governments, and universities are pervasive. Transmissions over such distances are not generally accomplished with a direct-wire digital link, but rather with digitally modulated analog carrier signals. This technique makes it possible to use existing analog telephone voice channels for digital data, although at considerably reduced data rates compared to a direct digital link. Transmission of data from your personal computer to a timesharing service over phone lines requires that data signals be converted to audible tones by a modem. An audio sine wave carrier is used and, depending on the baud rate and protocol, data is encoded by varying the frequency, phase, or amplitude of the carrier. The receiver's modem accepts the modulated sine wave and extracts the digital data from it.
Signal Encoding Techniques

Analog and digital information can be encoded as either analog or digital signals:

♦ Digital data, digital signal: the simplest form of digital encoding of digital data

♦ Digital data, analog signal: a modem converts digital data to an analog signal so that it can be transmitted over an analog line

♦ Analog data, digital signal: analog data, such as voice and video, are often digitized to be able to use digital transmission facilities

♦ Analog data, analog signal: analog data are modulated by a carrier frequency to produce an analog signal in a different frequency band, which can be utilized on an analog transmission system

For digital signaling, a data source g(t), which may be either digital or analog, is encoded into a digital signal x(t). The basis for analog signaling is a continuous constant-frequency signal known as the carrier signal, at frequency fc. Data may be transmitted using a carrier signal by modulation, which is the process of encoding source data onto the carrier signal. All modulation techniques involve operation on one or more of the three fundamental frequency domain parameters: amplitude, frequency, and phase. The input signal m(t), which may be analog or digital, is called the modulating signal, and the result of modulating the carrier signal is called the modulated signal s(t).

Digital Data, Digital Signals

A digital signal is a sequence of discrete, discontinuous voltage pulses. Each pulse is a signal element. Binary data are transmitted by encoding each data bit into signal elements. In the simplest case, there is a one-to-one correspondence between bits and signal elements. More complex encoding schemes are used to improve performance by altering the spectrum of the signal and providing synchronization capability. In general, the equipment for encoding digital data into a digital signal is less complex and less expensive than digital-to-analog modulation equipment. The various encoding techniques include:

• Nonreturn to Zero-Level (NRZ-L)
• Nonreturn to Zero Inverted (NRZI)
• Bipolar-AMI
• Pseudoternary
• Manchester
• Differential Manchester
• B8ZS
• HDB3

Encoding techniques

The most common, and easiest, way to transmit digital signals is to use two different voltage levels for the two binary digits. Codes that follow this strategy share the property that the voltage level is constant during a bit interval; there is no transition (no return to a zero voltage level). The absence of voltage can be used to represent binary 0, with a constant positive voltage representing binary 1. More commonly, a negative voltage represents one binary value and a positive voltage represents the other. This is known as Nonreturn to Zero-Level (NRZ-L). NRZ-L is typically the code used by terminals and other devices to generate or interpret digital data.

A variation of NRZ is known as NRZI (Nonreturn to Zero, Invert on ones). As with NRZ-L, NRZI maintains a constant voltage pulse for the duration of a bit time. The data bits are encoded as the presence or absence of a signal transition at the beginning of the bit time: a transition (low to high or high to low) at the beginning of a bit time denotes a binary 1 for that bit time; no transition indicates a binary 0.

NRZI is an example of differential encoding. In differential encoding, the information to be transmitted is represented in terms of the changes between successive signal elements rather than the signal elements themselves. The encoding of the current bit is determined as follows: if the current bit is a binary 0, it is encoded with the same signal as the preceding bit; if it is a binary 1, it is encoded with a different signal than the preceding bit. One benefit of differential encoding is that it may be more reliable to detect a transition in the presence of noise than to compare a value to a threshold. Another benefit is that with a complex transmission layout, it is easy to lose the sense of the polarity of the signal, which differential encoding does not depend on.

A category of encoding techniques known as multilevel binary addresses some of the deficiencies of the NRZ codes. These codes use more than two signal levels. In the bipolar-AMI scheme, a binary 0 is represented by no line signal, and a binary 1 is represented by a positive or negative pulse; the binary 1 pulses must alternate in polarity. There are several advantages to this approach.
First, there will be no loss of synchronization if a long string of 1s occurs: each 1 introduces a transition, and the receiver can resynchronize on that transition. A long string of 0s would still be a problem. Second, because the 1 signals alternate in voltage from positive to negative, there is no net dc component. Also, the bandwidth of the resulting signal is considerably less than the bandwidth for NRZ. Finally, the pulse alternation property provides a simple means of error detection: any isolated error, whether it deletes a pulse or adds a pulse, causes a violation of this property.

The comments on bipolar-AMI also apply to pseudoternary. In this case, it is the binary 1 that is represented by the absence of a line signal, and the binary 0 by alternating positive and negative pulses. There is no particular advantage of one technique versus the other, and each is the basis of some applications.

There is another set of coding techniques, grouped under the term biphase, that overcomes the limitations of NRZ codes. Two of these techniques, Manchester and differential Manchester, are in common use. In the Manchester code, there is a transition at the middle of each bit period. The midbit transition serves as a clocking mechanism and also as data: a low-to-high transition represents a 1, and a high-to-low transition represents a 0. Biphase codes are popular techniques for data transmission. The more common Manchester code has been specified for the IEEE 802.3 (Ethernet) standard for baseband coaxial cable and twisted-pair bus LANs.

In differential Manchester, the midbit transition is used only to provide clocking. The encoding of a 0 is represented by the presence of a transition at the beginning of a bit period, and a 1 is represented by the absence of a transition at the beginning of a bit period. Differential Manchester has the added advantage of employing differential encoding.
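The encoding rules above can be sketched directly; each function below maps a bit sequence to a list of signal levels (+1, −1, 0), one entry per signal element (two per bit for Manchester). This is a simplified model with illustrative names, ignoring real pulse shapes:

```python
def nrz_l(bits):
    """NRZ-L: one constant voltage per bit value, no transitions within a bit."""
    return [+1 if b else -1 for b in bits]

def nrzi(bits, start=-1):
    """NRZI: invert the level on a 1, hold it on a 0 (differential encoding)."""
    levels, level = [], start
    for b in bits:
        if b:
            level = -level        # transition at start of bit time = binary 1
        levels.append(level)
    return levels

def bipolar_ami(bits):
    """Bipolar-AMI: 0 -> no line signal; 1 -> pulses of alternating polarity."""
    levels, polarity = [], +1
    for b in bits:
        if b:
            levels.append(polarity)
            polarity = -polarity  # alternation removes any net dc component
        else:
            levels.append(0)
    return levels

def manchester(bits):
    """Manchester: a mid-bit transition carries the data (low-to-high = 1)."""
    return [half for b in bits for half in ((-1, +1) if b else (+1, -1))]

data = [1, 0, 1, 1, 0]
print(nrz_l(data))         # [1, -1, 1, 1, -1]
print(bipolar_ami(data))   # [1, 0, -1, 1, 0]  (note the alternating 1 pulses)
```

Observe that every 1 in the bipolar-AMI output produces a level change for the receiver to resynchronize on, while a long run of 0s in `nrz_l` or `bipolar_ami` produces no transitions at all, exactly the synchronization problem the text describes.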
Differential Manchester has been specified for the IEEE 802.5 token ring LAN, using shielded twisted pair.

Digital Data, Analog Signal

The most familiar use of the transformation of digital data into analog signals is the transmission of digital data through the public telephone network. The telephone network was designed to receive, switch, and transmit analog signals in the voice-frequency range of about 300 to 3400 Hz. It is not at present suitable for handling digital signals from the subscriber locations (although this is beginning to change).

Thus digital devices are attached to the network via a modem (modulator-demodulator), which converts digital data to analog signals, and vice versa. As stated earlier, modulation involves operation on one or more of the three characteristics of a carrier signal: amplitude, frequency, and phase. Accordingly, there are three basic encoding or modulation techniques for transforming digital data into analog signals, as illustrated in Figure: amplitude shift keying (ASK), frequency shift keying (FSK), and phase shift keying (PSK). In all these cases, the resulting signal occupies a bandwidth centered on the carrier frequency.

In ASK, the two binary values are represented by two different amplitudes of the carrier frequency. Commonly, one of the amplitudes is zero; that is, one binary digit is represented by the presence, at constant amplitude, of the carrier, the other by the absence of the carrier. ASK is susceptible to sudden gain changes and is a rather inefficient modulation technique. On voice-grade lines, it is typically used only up to 1200 bps. The ASK technique is also used to transmit digital data over optical fiber, where one signal element is represented by a light pulse and the other by the absence of light.

The most common form of FSK is binary FSK (BFSK), in which the two binary values are represented by two different frequencies near the carrier frequency, as shown in Figure. BFSK is less susceptible to error than ASK. On voice-grade lines, it is typically used up to 1200 bps. It is also commonly used for high-frequency (3 to 30 MHz) radio transmission, and can be used at even higher frequencies on local area networks that use coaxial cable.

In PSK, the phase of the carrier signal is shifted to represent data. The simplest scheme uses two phases to represent the two binary digits (Figure) and is known as binary phase shift keying (BPSK). An alternative form of two-level PSK is differential PSK (DPSK). In this scheme, a binary 0 is represented by sending a signal burst of the same phase as the previous signal burst sent, and a binary 1 by sending a signal burst of opposite phase to the preceding one. The term differential refers to the fact that the phase shift is with reference to the previous bit transmitted rather than to some constant reference signal. In differential encoding, the information to be transmitted is represented in terms of the changes between successive data symbols rather than the signal elements themselves.
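The DPSK rule just described (0 = same phase as the previous burst, 1 = opposite phase) amounts to differential encoding of the phase. A minimal sketch, using +1 and −1 to stand for the two carrier phases (0˚ and 180˚); the function names are illustrative:

```python
def dpsk_encode(bits, reference=+1):
    """0 -> repeat the previous phase; 1 -> flip it. Phase is relative, not absolute."""
    phases, phase = [], reference
    for b in bits:
        if b:
            phase = -phase
        phases.append(phase)
    return phases

def dpsk_decode(phases, reference=+1):
    """Recover bits by comparing each burst with the one before it."""
    bits, prev = [], reference
    for p in phases:
        bits.append(0 if p == prev else 1)
        prev = p
    return bits

bits = [1, 1, 0, 1, 0, 0]
tx = dpsk_encode(bits)
assert dpsk_decode(tx) == bits
# If the entire transmission, reference burst included, arrives inverted
# (an unknown 180-degree channel offset), decoding is unaffected:
assert dpsk_decode([-p for p in tx], reference=-1) == bits
```

The final assertion illustrates why DPSK needs no accurate local oscillator phase at the receiver: only phase *changes* carry information.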
DPSK avoids the requirement for an accurate local oscillator phase at the receiver that is matched with the transmitter. As long as the preceding phase is received correctly, the phase reference is accurate.

More efficient use of bandwidth can be achieved if each signaling element represents more than one bit. For example, instead of a phase shift of 180˚, as allowed in BPSK, a common encoding technique known as quadrature phase shift keying (QPSK) uses phase shifts separated by multiples of π/2 (90˚). Thus each signal element represents two bits rather than one. The input is a stream of binary digits with a data rate of R = 1/Tb, where Tb is the width of each bit. This stream is converted into two separate bit streams of R/2 bps each by taking alternate bits for the two streams. The two data streams are referred to as the I (in-phase) and Q (quadrature-phase) streams. The streams are modulated on a carrier of frequency fc by multiplying the I stream by the carrier and the Q stream by the carrier shifted by 90˚. The two modulated signals are then added together and transmitted. Thus the combined signal has a symbol rate that is half the input bit rate.

The use of multiple levels can be extended beyond taking bits two at a time. It is possible to transmit bits three at a time using eight different phase angles. Further, each angle can have more than one amplitude. For example, a standard 9600 bps modem uses 12 phase angles, four of which have two amplitude values, for a total of 16 different signal elements.

Analog Data, Digital Signals

In this section we examine the process of transforming analog data into digital signals. Analog data, such as voice and video, are often digitized to be able to use digital transmission facilities. Strictly speaking, it might be more correct to refer to this as a process of converting analog data into digital data; this process is known as digitization. Once analog data have been converted into digital data, a number of things can happen. The three most common are:

1. The digital data can be transmitted using NRZ-L. In this case, we have in fact gone directly from analog data to a digital signal.
2. The digital data can be encoded as a digital signal using a code other than NRZ-L. Thus an extra step is required.
3. The digital data can be converted into an analog signal, using one of the modulation techniques.

The device used for converting analog data into digital form for transmission, and subsequently recovering the original analog data from the digital, is known as a codec (coder-decoder).
In this section we examine the two principal techniques used in codecs, pulse code modulation and delta modulation.

Digitizing Analog Data

The simplest technique for transforming analog data into digital signals is pulse code modulation (PCM), which involves sampling the analog data periodically and quantizing the samples. PCM is based on the sampling theorem: if a signal is sampled at regular intervals at a rate higher than twice the highest signal frequency, the samples contain all the information of the original signal. Hence, if voice data are limited to frequencies below 4000 Hz (a conservative procedure for intelligibility), 8000 samples per second would be sufficient to characterize the voice signal completely. Note, however, that these are analog samples, called pulse amplitude modulation (PAM) samples. To convert to digital, each of these analog samples must be assigned a binary code.

Figure shows an example in which the original signal is assumed to be bandlimited with a bandwidth of B. PAM samples are taken at a rate of 2B, or once every Ts = 1/2B seconds. Each PAM sample is approximated by being quantized into one of 16 different levels. Each sample can then be represented by 4 bits. But because the quantized values are only approximations, it is impossible to recover the original signal exactly. By using an 8-bit sample, which allows 256 quantizing levels, the quality of the recovered voice signal is comparable with that achieved via analog transmission. Note that this implies that a data rate of 8000 samples per second × 8 bits per sample = 64 kbps is needed for a single voice signal.

Thus, PCM starts with a continuous-time, continuous-amplitude (analog) signal, from which a digital signal is produced, as shown in Figure. The digital signal consists of blocks of n bits, where each n-bit number is the amplitude of a PCM pulse. On reception, the process is reversed to reproduce the analog signal. Notice, however, that this process violates the terms of the sampling theorem. By quantizing the PAM pulse, the original signal is now only approximated and cannot be recovered exactly. This effect is known as quantizing error or quantizing noise. Each additional bit used for quantizing increases SNR by about 6 dB, which is a factor of 4.
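A sketch of the PCM chain described above: sample, quantize to n bits, reconstruct, and observe the quantizing error. The 4-bit/16-level case matches the example in the text; the function names and the [−1, 1] signal range are illustrative choices:

```python
import math

def pcm_encode(samples, n_bits, lo=-1.0, hi=1.0):
    """Quantize each analog (PAM) sample into one of 2**n_bits levels."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    return [min(int((s - lo) / step), levels - 1) for s in samples]

def pcm_decode(codes, n_bits, lo=-1.0, hi=1.0):
    """Reconstruct using the midpoint of each quantization interval."""
    step = (hi - lo) / 2 ** n_bits
    return [lo + (c + 0.5) * step for c in codes]

# One cycle of a sine wave, sampled 8 times (the PAM samples):
pam = [math.sin(2 * math.pi * i / 8) for i in range(8)]
codes = pcm_encode(pam, n_bits=4)      # 16 levels -> 4 bits per sample
recovered = pcm_decode(codes, n_bits=4)
max_err = max(abs(a - b) for a, b in zip(pam, recovered))
assert max_err <= 2.0 / 16             # error bounded by one quantization step
# Voice at 8000 samples/s with 8-bit samples gives the 64 kbps rate in the text:
assert 8000 * 8 == 64_000
```

The residual `max_err` is the quantizing noise: the original samples can never be recovered exactly, only to within a fraction of one step, which is why each extra bit (halving the step) buys about 6 dB of SNR.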

Typically, the PCM scheme is refined using a technique known as nonlinear encoding, which means, in effect, that the quantization levels are not equally spaced. The problem with equal spacing is that the mean absolute error for each sample is the same, regardless of signal level. Consequently, lower-amplitude values are relatively more distorted. By using a greater number of quantizing steps for signals of low amplitude, and a smaller number of quantizing steps for signals of large amplitude, a marked reduction in overall signal distortion is achieved, as shown in Figure. Nonlinear encoding can significantly improve the PCM SNR. For voice signals, improvements of 24 to 30 dB have been achieved.

Delta Modulation

With delta modulation (DM), an analog input is approximated by a staircase function that moves up or down by one quantization level, δ, at each sampling interval. Figure shows an example where the staircase function is overlaid on the original analog waveform. A 1 is generated if the staircase function is to go up during the next interval; a 0 is generated otherwise. The transition (up or down) that occurs at each sampling interval is chosen so that the staircase function tracks the original analog waveform as closely as possible.

There are two important parameters in a DM scheme: the size of the step assigned to each binary digit, δ, and the sampling rate. As the figure illustrates, δ must be chosen to produce a balance between two types of errors or noise. When the analog waveform is changing very slowly, there will be quantizing noise; this noise increases as δ is increased. On the other hand, when the analog waveform is changing more rapidly than the staircase can follow, there is slope overload noise; this noise increases as δ is decreased. The accuracy of the scheme can be improved by increasing the sampling rate, but this increases the data rate of the output signal.

The principal advantage of DM over PCM is the simplicity of its implementation. In general, PCM exhibits better SNR characteristics at the same data rate. Good voice reproduction via PCM can be achieved with 128 quantization levels, or 7-bit coding (2^7 = 128). A voice signal, conservatively, occupies a bandwidth of 4 kHz. Thus, according to the sampling theorem, samples should be taken at a rate of 8000 samples per second. This implies a data rate of 8000 × 7 = 56 kbps for the PCM-encoded digital data. But by the Nyquist criterion, this digital signal could require on the order of 28 kHz of bandwidth. Even more severe differences are seen with higher-bandwidth signals. For example, a common PCM scheme for color television uses 10-bit codes, which works out to 92 Mbps for a 4.6-MHz bandwidth signal. In spite of these numbers, digital techniques continue to grow in popularity for transmitting analog data.
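The staircase behaviour of delta modulation can be sketched directly; `delta` plays the role of the step size δ, and the names are illustrative:

```python
import math

def dm_encode(samples, delta, start=0.0):
    """Emit 1 if the staircase must go up to chase the input, else 0."""
    bits, approx = [], start
    for s in samples:
        if s > approx:
            bits.append(1)
            approx += delta
        else:
            bits.append(0)
            approx -= delta
    return bits

def dm_decode(bits, delta, start=0.0):
    """Rebuild the staircase from the one-bit-per-sample stream."""
    out, approx = [], start
    for b in bits:
        approx += delta if b else -delta
        out.append(approx)
    return out

# A slowly changing sine: its slope per sample is below delta, so no slope overload.
wave = [math.sin(2 * math.pi * i / 64) for i in range(64)]
bits = dm_encode(wave, delta=0.1)
staircase = dm_decode(bits, delta=0.1)
assert max(abs(w - s) for w, s in zip(wave, staircase)) < 0.2
```

Shrinking `delta` reduces the residual quantizing noise on this slow waveform, but if the input ever changes faster than one `delta` per sample the staircase falls behind: the slope overload trade-off described above.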
The principal reasons for the popularity of digital techniques are:

• Because repeaters are used instead of amplifiers, there is no cumulative noise.
• Time division multiplexing (TDM) is used for digital signals, which suffers no intermodulation noise, versus the frequency division multiplexing (FDM) used for analog signals.
• The conversion to digital signaling allows the use of the more efficient digital switching techniques.

Furthermore, techniques have been developed to provide more efficient codes. Studies also show that PCM-related techniques are preferable to DM-related techniques for digitizing analog signals that represent digital data.

Analog Data, Analog Signals

Analog data can be modulated by a carrier frequency to produce an analog signal in a different frequency band, which can be utilized on an analog transmission system. The basic techniques are:

• Amplitude modulation (AM)
• Frequency modulation (FM)
• Phase modulation (PM)

Modulation has been defined as the process of combining an input signal m(t) and a carrier at frequency fc to produce a signal s(t) whose bandwidth is (usually) centered on fc. For digital data, the motivation for modulation should be clear: when only analog transmission facilities are available, modulation is required to convert the digital data to analog form. The motivation when the data are already analog is less clear. After all, voice signals are transmitted over telephone lines at their original spectrum (referred to as baseband transmission). There are two principal reasons for analog modulation of analog signals:

• A higher frequency may be needed for effective transmission, since for unguided transmission, it is virtually impossible to transmit baseband signals;

• Modulation permits frequency division multiplexing.

In this section we look at the principal techniques for modulation using analog data: amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM). As before, the three basic characteristics of a signal are used for modulation.

Analog Modulation Techniques

Amplitude modulation (AM) is the simplest form of modulation, and involves the multiplication of the input signal by the carrier at frequency fc. The spectrum consists of the original carrier plus the spectrum of the input signal translated to fc. The portion of the spectrum for |f| > |fc| is the upper sideband, and the portion for |f| < |fc| is the lower sideband. Both sidebands are replicas of the original spectrum M(f), with the lower sideband being frequency reversed. A popular variant of AM, known as single sideband (SSB), takes advantage of this fact by sending only one of the sidebands, eliminating the other sideband and the carrier.

Frequency modulation (FM) and phase modulation (PM) are special cases of angle modulation. For phase modulation, the phase is proportional to the modulating signal; for frequency modulation, the derivative of the phase is proportional to the modulating signal. As with AM, both FM and PM result in a signal whose bandwidth is centered at fc, but it can be shown that the magnitude of that bandwidth is very different: both FM and PM require greater bandwidth than AM. The shapes of the FM and PM signals are very similar; indeed, it is impossible to tell them apart without knowledge of the modulation function.

Switched Networks

For transmission of data beyond a local area, communication is typically achieved by transmitting data from source to destination through a network of intermediate switching nodes; this switched network design is typically used to implement LANs as well. The switching nodes are not concerned with the content of the data; rather, their purpose is to provide a switching facility that will move the data from node to node until they reach their destination.

Switching network

Figure illustrates a simple network. The devices attached to the network may be referred to as stations. The stations may be computers, terminals, telephones, or other communicating devices. We refer to the switching devices whose purpose is to provide communication as nodes. Nodes are connected to one another in some topology by transmission links. Node-station links are generally dedicated point-to-point links. Node-node links are usually multiplexed, using either frequency division multiplexing (FDM) or time division multiplexing (TDM). In a switched communication network, data entering the network from a station are routed to the destination by being switched from node to node. For example, in the figure, data from station A intended for station F are sent to node 4. They may then be routed via nodes 5 and 6, or via nodes 7 and 6, to the destination.

A switch is much like a bridge in that it is usually a layer 2 device that examines MAC addresses to determine where data should be directed. A switch has other applications in common with a bridge: like a bridge, a switch uses transparent and source-route methods to move data, and the Spanning Tree Protocol (STP) to avoid loops. However, switches are superior to bridges because they provide greater port density and can be configured to make more intelligent decisions about where data go.

Two different technologies are used in wide area switched networks: circuit switching and packet switching. These two technologies differ in the way the nodes switch information from one link to another on the way from source to destination.

Circuit switching

Circuit switching is a type of communication in which a dedicated channel (or circuit) is established for the duration of a transmission. The most ubiquitous circuit-switching network is the telephone system, which links together wire segments to create a single unbroken line for each telephone call. Communication via circuit switching implies that there is a dedicated communication path between two stations. That path is a connected sequence of links between network nodes, and on each physical link a logical channel is dedicated to the connection. Communication via circuit switching involves three phases:

1. Circuit establishment - Before any signals can be transmitted, an end-to-end (station-to-station) circuit must be established.

2. Data transfer - Data can now be transmitted through the network between these two stations. The transmission may be analog or digital, depending on the nature of the network. As the carriers evolve to fully integrated digital networks, the use of digital (binary) transmission for both voice and data is becoming the dominant method. Generally, the connection is full duplex.


3. Circuit disconnect - After some period of data transfer, the connection is terminated, usually by the action of one of the two stations. Signals must be propagated to the intermediate nodes to deallocate the dedicated resources.

Circuit switching can be rather inefficient. Channel capacity is dedicated for the duration of a connection, even if no data are being transferred. For a voice connection, utilization may be rather high, but it still does not approach 100%. For a client/server or terminal-to-computer connection, the capacity may be idle during most of the connection. In terms of performance, there is a delay prior to data transfer for call establishment; however, once the circuit is established, the network is effectively transparent to the users.

Packet switching

Packet switching refers to protocols in which messages are divided into packets before they are sent. Each packet is then transmitted individually and can even follow a different route to its destination. Once all the packets forming a message arrive at the destination, they are reassembled into the original message.

Frame Relay

Frame Relay is a packet-switching protocol for connecting devices on a Wide Area Network (WAN). Frame Relay networks in the U.S. support data transfer rates at T-1 (1.544 Mbps) and T-3 (45 Mbps) speeds; in fact, you can think of Frame Relay as a way of utilizing existing T-1 and T-3 lines owned by a service provider. Most telephone companies now provide Frame Relay service for customers who want connections at 56 Kbps to T-1 speeds (in Europe, Frame Relay speeds vary from 64 Kbps to 2 Mbps). In the U.S., Frame Relay is quite popular because it is relatively inexpensive; however, it is being replaced in some areas by faster technologies, such as ATM.

ATM

ATM, short for Asynchronous Transfer Mode, is a network technology based on transferring data in cells or packets of a fixed size. The cell used with ATM is relatively small compared to units used with older technologies. The small, constant cell size allows ATM equipment to transmit video, audio, and computer data over the same network, and ensures that no single type of data hogs the line. Some people think that ATM holds the answer to the Internet bandwidth problem, but others are skeptical. ATM creates a fixed channel, or route, between two points whenever data transfer begins. This differs from TCP/IP, in which messages are divided into packets and each packet can take a different route from source to destination. This difference makes it easier to track and bill data usage across an ATM network, but it makes ATM less adaptable to sudden surges in network traffic. When purchasing ATM service, you generally have a choice of four different types of service:

• Constant bit rate (CBR): specifies a fixed bit rate so that data is sent in a steady stream. This is analogous to a leased line.

• Variable bit rate (VBR): provides a specified throughput capacity but data is not sent evenly. This is a popular choice for voice and videoconferencing data.

• Available bit rate (ABR): provides a guaranteed minimum capacity but allows data to be bursted at higher capacities when the network is free.

• Unspecified bit rate (UBR): does not guarantee any throughput levels. This is used for applications, such as file transfer, that can tolerate delays.
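The fixed cell size is what makes ATM predictable. The standard ATM cell is 53 bytes, a 5-byte header plus a 48-byte payload (figures from the ATM standard, not stated above), so the cost of carrying any message is easy to compute:

```python
# ATM carries all traffic in fixed 53-byte cells: a 5-byte header plus a
# 48-byte payload (per the ATM standard). The last cell is padded out.
CELL_PAYLOAD = 48
CELL_SIZE = 53

def cells_needed(message_bytes):
    # Ceiling division: each cell carries at most 48 payload bytes.
    return -(-message_bytes // CELL_PAYLOAD)

def wire_bytes(message_bytes):
    # Total bytes on the wire, including headers and padding.
    return cells_needed(message_bytes) * CELL_SIZE

print(cells_needed(100))  # 3 cells: 48 + 48 + 4 bytes padded to 48
print(wire_bytes(100))    # 159 bytes on the wire for 100 payload bytes
```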

Multiplexing

In telecommunications and computer networks, multiplexing (often shortened to muxing) is a process in which multiple analog message signals or digital data streams are combined into one signal over a shared medium. The aim is to share an expensive resource: for example, in telephony, several phone calls may be transferred using one wire. Multiplexing originated in telegraphy, and is now widely applied in communications.

The multiplexed signal is transmitted over a communication channel, which may be a physical transmission medium. The multiplexing divides the capacity of the low-level communication channel into several higher-level logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, can extract the original channels on the receiver side. A device that performs the multiplexing is called a multiplexer (MUX), and a device that performs the reverse process is called a demultiplexer (DEMUX). Inverse multiplexing (IMUX) has the opposite aim to multiplexing: to break one data stream into several streams, transfer them simultaneously over several communication channels, and recreate the original data stream.

Categories of multiplexing

The basic forms of multiplexing are:

• Time-division multiplexing (TDM)
• Frequency-division multiplexing (FDM)
• Wavelength-division multiplexing (WDM)

Time-division multiplexing

Time-Division Multiplexing (TDM) is a type of digital or (rarely) analog multiplexing in which two or more signals or bit streams are transferred apparently simultaneously as sub-channels in one communication channel, but are physically taking turns on the channel. The time domain is divided into several recurrent timeslots of fixed length, one for each sub-channel. A sample byte or data block of sub-channel 1 is transmitted during timeslot 1, sub-channel 2 during timeslot 2, etc. One TDM frame consists of one timeslot per sub-channel. After the last sub-channel the cycle starts all over again with a new frame, starting with the second sample, byte or data block from sub-channel 1, etc.
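The round-robin timeslot scheme described above can be sketched in a few lines. This is an illustrative model only, with two sub-channels of invented data; a real TDM system works at the bit or byte level with strict timing.

```python
# A minimal sketch of synchronous TDM: one fixed timeslot per sub-channel,
# repeated frame after frame. The sub-channel contents are illustrative.
def tdm_multiplex(subchannels):
    """Interleave equal-length sub-channel streams into one TDM stream."""
    frames = zip(*subchannels)           # one frame = one slot per channel
    return [slot for frame in frames for slot in frame]

def tdm_demultiplex(stream, n_channels):
    """Recover each sub-channel by taking every n-th slot."""
    return [stream[i::n_channels] for i in range(n_channels)]

a = ["A1", "A2", "A3"]
b = ["B1", "B2", "B3"]
muxed = tdm_multiplex([a, b])
print(muxed)                       # ['A1', 'B1', 'A2', 'B2', 'A3', 'B3']
print(tdm_demultiplex(muxed, 2))   # [['A1', 'A2', 'A3'], ['B1', 'B2', 'B3']]
```

Each TDM frame here is one slot from sub-channel 1 followed by one slot from sub-channel 2, exactly as in the description above.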

Time Division Multiplexing TDM System


Application examples

• The Plesiochronous Digital Hierarchy (PDH) system, also known as the PCM system, for digital transmission of several telephone calls over the same four-wire copper cable (T-carrier or E-carrier) or fiber cable in the circuit-switched digital telephone network
• The SDH and Synchronous Optical Networking (SONET) network transmission standards, which have superseded PDH
• The RIFF (WAV) audio standard, which interleaves left and right stereo signals on a per-sample basis
• The left-right channel splitting used for stereoscopic liquid crystal shutter glasses

TDM can be further extended into the time-division multiple access (TDMA) scheme, in which several stations connected to the same physical medium, for example sharing the same frequency channel, can communicate. An application example is the GSM telephone system.

Relation to multiple access

A multiplexing technique may be further extended into a multiple access method or channel access method, for example TDM into time-division multiple access (TDMA) and statistical multiplexing into carrier sense multiple access (CSMA). A multiple access method makes it possible for several transmitters connected to the same physical medium to share its capacity. Multiplexing is provided by the physical layer of the OSI model, while multiple access also involves a media access control protocol, which is part of the data link layer.

Frequency-division multiplexing

Frequency-division multiplexing (FDM) is a form of signal multiplexing which involves assigning non-overlapping frequency ranges to different signals or to each "user" of a medium.

a. Frequency Division Multiplexing Diagram b. FDM System

Non-telephone applications

FDM can also be used to combine multiple signals before final modulation onto a carrier wave. In this case the carrier signals are referred to as subcarriers: an example is stereo FM transmission, where a 38 kHz subcarrier is used to separate the left-right difference signal from the central left-right sum channel, prior to the frequency modulation of the composite signal. A television channel is divided into subcarrier frequencies for video, color, and audio. DSL uses different frequencies for voice and for upstream and downstream data transmission on the same conductors, which is also an example of frequency duplex. Where frequency division multiplexing is used to allow multiple users to share a physical communications channel, it is called frequency-division multiple access (FDMA). FDMA is the traditional way of separating radio signals from different transmitters.
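An FDMA channel plan simply carves a band into non-overlapping channels, usually with guard bands between them to keep adjacent transmitters from interfering. The sketch below is illustrative; all the bandwidth figures are invented, not taken from any real system.

```python
# Hypothetical FDMA channel plan: carve a band into non-overlapping
# channels separated by guard bands. All figures are illustrative.
def fdma_channels(band_start_hz, band_end_hz, channel_bw_hz, guard_hz):
    channels = []
    f = band_start_hz
    while f + channel_bw_hz <= band_end_hz:
        channels.append((f, f + channel_bw_hz))   # (low edge, high edge)
        f += channel_bw_hz + guard_hz             # step past the guard band
    return channels

# e.g. 1 MHz of spectrum, 200 kHz channels, 25 kHz guard bands
plan = fdma_channels(0, 1_000_000, 200_000, 25_000)
print(len(plan))   # 4 channels fit
print(plan[:2])    # [(0, 200000), (225000, 425000)]
```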


In the 1860s and 1870s, several inventors attempted FDM under the names of acoustic telegraphy and harmonic telegraphy. Practical FDM was only achieved in the electronic age; meanwhile, their efforts led to an elementary understanding of electroacoustic technology, resulting in the invention of the telephone.

Telephone

For long-distance telephone connections, 20th-century telephone companies used L-carrier and similar coaxial cable systems carrying thousands of voice circuits multiplexed in multiple stages by channel banks. For shorter distances, cheaper balanced-pair cables were used for various systems, including Bell System K- and N-carrier. Those cables did not allow such large bandwidths, so only 12 voice channels (double sideband) and later 24 (single sideband) were multiplexed onto four wires, one pair for each direction, with repeaters every several miles (approximately 10 km); see the 12-channel carrier system. By the end of the 20th century, FDM voice circuits had become rare: modern telephone systems employ digital transmission, in which time-division multiplexing (TDM) is used instead of FDM. Since the late 20th century, Digital Subscriber Lines have used a discrete multitone (DMT) system to divide their spectrum into frequency channels.

The concept corresponding to frequency-division multiplexing in the optical domain is known as wavelength-division multiplexing.

Wavelength-division multiplexing

In fiber-optic communications, wavelength-division multiplexing (WDM) is a technology which multiplexes multiple optical carrier signals on a single optical fiber by using different wavelengths (colours) of laser light to carry different signals. This allows for a multiplication in capacity, in addition to enabling bidirectional communications over one strand of fiber. This is a form of frequency division multiplexing (FDM) but is commonly called wavelength division multiplexing. The term wavelength-division multiplexing is commonly applied to an optical carrier (which is typically described by its wavelength), whereas frequency-division multiplexing typically applies to a radio carrier (which is more often described by frequency). However, since wavelength and frequency are inversely proportional, and since radio and light are both forms of electromagnetic radiation, the two terms are equivalent in this context. A WDM system uses a multiplexer at the transmitter to join the signals together, and a demultiplexer at the receiver to split them apart. With the right type of fiber it is possible to have a device that does both simultaneously, and can function as an optical add-drop multiplexer. The optical filtering devices used have traditionally been etalons, stable solid-state single-frequency Fabry-Perot interferometers in the form of thin-film-coated optical glass.

The concept was first published in 1970, and by 1978 WDM systems were being realized in the laboratory. The first WDM systems only combined two signals. Modern systems can handle up to 160 signals and can thus expand a basic 10 Gbit/s fiber system to a theoretical total capacity of over 1.6 Tbit/s over a single fiber pair.

WDM systems are popular with telecommunications companies because they allow them to expand the capacity of the network without laying more fiber. By using WDM and optical amplifiers, they can accommodate several generations of technology development in their optical infrastructure without having to overhaul the backbone network. Capacity of a given link can be expanded by simply upgrading the multiplexers and demultiplexers at each end.

WDM systems are divided into different wavelength patterns: conventional, coarse (CWDM), and dense (DWDM). Conventional WDM systems provide up to 16 channels in the 3rd transmission window (C-band) of silica fibers around 1550 nm. DWDM uses the same transmission window but with denser channel spacing. Channel plans vary, but a typical system would use 40 channels at 100 GHz spacing or 80 channels at 50 GHz spacing. Some technologies are capable of 25 GHz spacing (sometimes called ultra-dense WDM). New amplification options (Raman amplification) enable the extension of the usable wavelengths to the L-band, more or less doubling these numbers.
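DWDM channel plans such as "40 channels at 100 GHz spacing" are normally laid out on the ITU grid, which is anchored at 193.1 THz (per ITU-T G.694.1; the anchor value is not stated above). A quick sketch of deriving channel frequencies and their approximate wavelengths:

```python
# Derive DWDM channel frequencies (and approximate wavelengths) from the
# ITU grid anchor of 193.1 THz for a given channel spacing.
C = 299_792_458  # speed of light, m/s

def dwdm_grid(n_channels, spacing_ghz, anchor_thz=193.1):
    chans = []
    for i in range(n_channels):
        f_hz = anchor_thz * 1e12 + i * spacing_ghz * 1e9
        wavelength_nm = C / f_hz * 1e9
        chans.append((f_hz / 1e12, round(wavelength_nm, 2)))
    return chans

for f_thz, wl_nm in dwdm_grid(3, 100):
    print(f"{f_thz:.1f} THz ~ {wl_nm} nm")
# 193.1 THz ~ 1552.52 nm, 193.2 THz ~ 1551.72 nm, 193.3 THz ~ 1550.92 nm
```

Note the wavelengths land in the C-band around 1550 nm, matching the text.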


WDM, DWDM and CWDM are based on the same concept of using multiple wavelengths of light on a single fiber, but differ in the spacing of the wavelengths, the number of channels, and the ability to amplify the multiplexed signals in the optical space. EDFAs provide efficient wideband amplification for the C-band, and Raman amplification adds a mechanism for amplification in the L-band. For CWDM, wideband optical amplification is not available, limiting the optical spans to several tens of kilometres.

SONET

Synchronous Optical Network (SONET) is a standard for connecting fiber-optic transmission systems. SONET was proposed by Bellcore in the mid-1980s and is now an ANSI standard. SONET defines interface standards at the physical layer of the OSI seven-layer model. The standard defines a hierarchy of interface rates that allow data streams at different rates to be multiplexed: SONET establishes Optical Carrier (OC) levels from 51.84 Mbps (OC-1) to 9.95 Gbps (OC-192). Prior rate standards used by different countries specified rates that were not compatible for multiplexing; with the implementation of SONET, communication carriers throughout the world can interconnect their existing digital carrier and fiber-optic systems. The international equivalent of SONET, standardized by the ITU, is called SDH.

Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) are two closely related multiplexing protocols for transferring multiple digital bit streams using lasers or light-emitting diodes (LEDs) over the same optical fiber. The method was developed to replace the Plesiochronous Digital Hierarchy (PDH) system for transporting larger amounts of telephone calls and data traffic over the same fiber without synchronization problems. SONET and SDH were originally designed to transport circuit-mode communications (e.g., T1, T3) from a variety of different sources.
The primary difficulty in doing this prior to SONET was that the synchronization source of these different circuits were different, meaning each circuit was actually operating at a slightly different rate and with different phase. SONET allowed for the simultaneous transport of many different circuits of differing origin within one single framing protocol. In a sense, then, SONET is not itself a communications protocol per se, but a transport protocol.
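SONET OC-n rates are exact multiples of the OC-1 base rate of 51.84 Mbps, so the whole hierarchy can be derived from one constant. A quick check of the commonly cited levels:

```python
# SONET Optical Carrier rates: OC-n runs at exactly n times the OC-1
# base rate of 51.84 Mbps (quoted rates like 9.95 Gbps are rounded).
OC1_MBPS = 51.84

def oc_rate_mbps(n):
    return n * OC1_MBPS

for n in (1, 3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate_mbps(n):.2f} Mbps")
# OC-1: 51.84, OC-3: 155.52, OC-12: 622.08, OC-48: 2488.32, OC-192: 9953.28
```

OC-192 comes out at 9953.28 Mbps, the "9.95 Gbps" figure quoted for SONET's top rate in this chapter.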

Due to SONET's essential protocol neutrality and transport-oriented features, SONET was the obvious choice for transporting ATM (Asynchronous Transfer Mode) frames, and it quickly evolved mapping structures and concatenated payload containers to transport ATM connections. In other words, for ATM (and eventually other protocols such as TCP/IP and Ethernet), the internal complex structure previously used to transport circuit-oriented connections is removed and replaced with a large concatenated frame (such as STS-3c) into which ATM frames, IP packets, or Ethernet frames are placed. Both SDH and SONET are widely used today: SONET in the U.S. and Canada, and SDH in the rest of the world. Although the SONET standards were developed before SDH, their relative penetration in the worldwide market dictates that SONET is now considered the variation. The two protocols are standardized as follows:

• SDH or Synchronous Digital Hierarchy standard developed by the International Telecommunication Union (ITU), documented in standard G.707 and its extension G.708


• SONET or Synchronous Optical Networking standard as defined by GR-253-CORE from Telcordia and T1.105 from American National Standards Institute

SONET STS-1 Overhead Octets

Circuit switching network

A circuit switching network is one that establishes a fixed-bandwidth circuit (or channel) between nodes and terminals before the users may communicate, as if the nodes were physically connected with an electrical circuit. The bit delay is constant during a connection, as opposed to packet switching, where packet queues may cause varying delay. Each circuit cannot be used by other callers until the circuit is released and a new connection is set up. Even if no actual communication is taking place, a dedicated circuit remains unavailable to other users; channels that are available for new calls to be set up are said to be idle.

Virtual circuit switching is a packet switching technology that may emulate circuit switching, in the sense that the connection is established before any packets are transferred and packets are delivered in order.

There is a common misunderstanding that circuit switching is used only for connecting voice circuits (analog or digital). The concept of a dedicated path persisting between two communicating parties or nodes can be extended to signal content other than voice. Its advantage is that it provides for non-stop transfer without requiring packets and without most of the overhead traffic usually needed, making maximal and optimal use of available bandwidth. Its disadvantage, inflexibility, tends to reserve it for specialized applications, particularly given the overwhelming proliferation of internet-related technology.

a. Public Circuit Switched Network b. Public Circuit Switched Network


Examples of circuit switched networks

• Public Switched Telephone Network (PSTN)
• ISDN B-channel
• Circuit Switched Data (CSD) and High-Speed Circuit-Switched Data (HSCSD) service in cellular systems such as GSM
• X.21 (used in the German DATEX-L and Scandinavian DATEX circuit-switched data networks)

With circuit switching and virtual circuit switching, a route is reserved from source to destination. The entire message is sent in order, so it does not have to be reassembled at the destination. Circuit switching can be relatively inefficient because capacity is wasted on connections which are set up but are not in continuous use (however momentarily). On the other hand, the connection is immediately available and capacity is guaranteed until the call is disconnected. Circuit switching contrasts with packet switching, which splits traffic data (for instance, a digital representation of sound, or computer data) into chunks, called packets, that are routed over a shared network.

Network-wide Overload Control

In the discussion so far, the effect of the actions of other exchanges was not considered. If an overloaded exchange resorts to indiscriminate call blocking, overload may spread to other exchanges and might eventually bring down the whole network.

Handling Network Overload

Two simple techniques for handling overloads in the network are described below.

• Rejecting Traffic: Traffic rejection is primarily achieved by the blocking of calls by the exchange. If overload is viewed from the network perspective, excess traffic should be rejected as close to the periphery of the network as possible. Thus, if a local exchange is overloaded, blocking of subscribers directly connected to the exchange should be given first priority. Blocking of incoming traffic from the network should be avoided as far as possible, because such a call is already holding resources in the other exchanges which have forwarded it to this exchange. In general, while blocking calls, preference should be given to calls that are closer to the subscriber at the network periphery, since they hold fewer network resources than calls being offered from the center of the network.

• Diverting Traffic: In an SS7-based network, an overloaded exchange can inform other exchanges in the network about the overload, so they can choose a less congested route towards a particular destination. With this approach, traffic can be distributed more evenly throughout the network, and due to this load distribution the BHCA (Busy Hour Call Attempts) rating of the network can be increased. If there is overload at the center of the network, exchanges at the periphery will soon be asked to reduce the traffic they are generating. On receiving such instructions from the network, the exchanges can use a combination of call blocking and traffic diverting.

Preventing Overload

Initiating overload control in an already overloaded system itself introduces some load, which severely limits the options available to the exchange for reducing the load. The network can be made more resilient to overload by initiating action before the problem gets out of hand. Various techniques for preventing overload are described below.

• Quota System


In the quota system, all possible sources of traffic are allocated a fixed quota of traffic, and any source exceeding its quota is asked to reduce its traffic. The disadvantage of this scheme is that if all other sources are offering traffic much less than their quota, a source exceeding its quota will still have to reduce its traffic, even though the exchange might be quite capable of handling the excess.
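The quota scheme, including its stated drawback, can be sketched in a few lines. This is an illustrative model only; the source names and quota values are invented.

```python
# Minimal sketch of per-source quota control: each traffic source gets a
# fixed call quota per period and is throttled once it exceeds it,
# regardless of how lightly loaded the other sources are.
class QuotaControl:
    def __init__(self, quotas):
        self.quotas = quotas                   # source -> allowed calls
        self.offered = {s: 0 for s in quotas}  # calls offered so far

    def offer_call(self, source):
        self.offered[source] += 1
        # Accept only while the source is within its own quota.
        return self.offered[source] <= self.quotas[source]

qc = QuotaControl({"trunk_A": 2, "trunk_B": 100})
print([qc.offer_call("trunk_A") for _ in range(3)])  # [True, True, False]
print(qc.offer_call("trunk_B"))                      # True: B has spare quota
```

The third call from trunk_A is rejected even though trunk_B's quota (and hence the exchange's capacity) is almost entirely unused, which is exactly the disadvantage described above.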

• Periodic Delay Measurement: Overload is normally characterized by long message queues and delays in processing packets. Thus, if every exchange has a process periodically monitoring network-wide call setup times, overload in other exchanges can be detected well in advance, and any traffic being routed through the overloaded exchange can be diverted to routes on which call setup times are better. Delays at other nodes in the network can also be found by sending special test messages and observing the time taken for their acknowledgements to return. A database of delays to other nodes can be maintained and consulted while rerouting calls.
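The delay-database idea reduces to a simple lookup when rerouting: among the candidate next exchanges, pick the one with the lowest measured delay. The exchange names and delay figures below are invented for illustration.

```python
# Sketch of rerouting from a delay database: keep measured call set-up
# delays per neighbouring exchange and route new calls via the least
# delayed one. Names and delay figures are illustrative.
def choose_route(delay_db, candidate_exchanges):
    """Pick the candidate exchange with the lowest measured delay."""
    return min(candidate_exchanges, key=lambda x: delay_db[x])

delay_db = {"exch_1": 120, "exch_2": 450, "exch_3": 90}  # ms, from test probes
print(choose_route(delay_db, ["exch_1", "exch_2", "exch_3"]))  # exch_3
```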

• Using Traffic Data

Overload patterns in the switch will depend on the day of the week and the time of day (refer to the figure given above). Overload is also expected on special days such as New Year's Day. The peak traffic hour of each day is known in advance, so based on previously collected traffic data, the exchange can make a very good guess about the expected traffic.


Peer-to-Peer Communication

A peer-to-peer (or "P2P", or, rarely, "PtP") computer network uses diverse connectivity between participants and the cumulative bandwidth of network participants, rather than conventional centralized resources where a relatively low number of servers provide the core value to a service or application. Peer-to-peer networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes: sharing content files (see file sharing) containing audio, video, data or anything in digital format is very common, and real-time data, such as telephony traffic, is also passed using P2P technology.

A pure peer-to-peer network does not have the notion of clients or servers, but only equal peer nodes that simultaneously function as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model, where communication is usually to and from a central server. A typical example of a non-peer-to-peer file transfer is an FTP server, where the client and server programs are quite distinct: the clients initiate downloads and uploads, and the servers react to and satisfy these requests. The earliest peer-to-peer network in widespread use was the Usenet news server system, in which peers communicated with one another to propagate Usenet news articles over the entire Usenet network.

Data Link Control Protocols

To achieve the necessary control over sending data over a data communications link, a layer of logic is added above the physical layer, referred to as data link control or a data link control protocol. Some of the requirements and objectives for effective data communication between two directly connected transmitting-receiving stations are:

• Frame synchronization: Data are sent in blocks called frames. The beginning and end of each frame must be recognizable.
• Flow control: The sending station must not send frames at a rate faster than the receiving station can absorb them.
• Error control: Bit errors introduced by the transmission system should be corrected.
• Addressing: On a shared link, such as a local area network (LAN), the identity of the two stations involved in a transmission must be specified.
• Control and data on same link: The receiver must be able to distinguish control information from the data being transmitted.
• Link management: Procedures for the management of initiation, maintenance, and termination of a sustained data exchange over a link.

Flow control

Flow control is a technique for assuring that a transmitting entity does not overwhelm a receiving entity with data. The receiving entity typically allocates a data buffer of some maximum length for a transfer. When data are received, the receiver must do a certain amount of processing before passing the data to the higher-level software; in the absence of flow control, the receiver's buffer may fill up and overflow while it is processing old data. We will assume data are sent in a sequence of frames, with each frame containing a portion of the data and some control information. The time it takes for a station to emit all of the bits of a frame onto the medium is the transmission time; this is proportional to the length of the frame. The propagation time is the time it takes for a bit to traverse the link between source and destination. For this section, we assume that all frames that are transmitted are successfully received: no frames are lost, they arrive in the order sent, and none arrive with errors. However, each transmitted frame suffers an arbitrary and variable amount of delay before reception.


The simplest form of flow control, known as stop-and-wait flow control, works as follows. A source entity transmits a frame. After the destination entity receives the frame, it indicates its willingness to accept another frame by sending back an acknowledgment to the frame just received. The source must wait until it receives the acknowledgment before sending the next frame. The destination can thus stop the flow of data simply by withholding acknowledgment. This procedure works fine and, indeed, can hardly be improved upon when a message is sent in a few large frames. However, a source will often break up a large block of data into smaller blocks and transmit the data in many frames (because of limited buffer sizes, because errors are detected sooner with less data to resend, and to prevent one station from hogging the medium). With the use of multiple frames for a single message, the stop-and-wait procedure may be inadequate, mainly because only one frame at a time can be in transit.

To show this, start by defining the bit length B of a link as the number of bits present on the link at an instant in time when a stream of bits fully occupies the link. In situations where the bit length of the link is greater than the frame length, serious inefficiencies result, as shown in the figure. There, the transmission time (the time it takes for a station to transmit a frame) is normalized to one, and the propagation delay (the time it takes for a bit to travel from sender to receiver) is expressed as the variable a = B / L (where L is the number of bits in the frame).

Sliding-window flow control

The essence of the problem described so far is that only one frame at a time can be in transit. Efficiency can be greatly improved by allowing multiple frames to be in transit at the same time. Consider two stations, A and B, connected via a full-duplex link. Station B allocates buffer space for W frames. Thus, B can accept W

Stop and Wait Link Utilization


frames, and A is allowed to send W frames without waiting for any acknowledgments. To keep track of which frames have been acknowledged, each is labeled with a k-bit sequence number. This gives a range of sequence numbers of 0 through 2^k − 1, and frames are numbered modulo 2^k, with a maximum window size of 2^k − 1. The window size need not be the maximum possible size for a given sequence number length k. B acknowledges a frame by sending an acknowledgment that includes the sequence number of the next frame expected. This scheme can also be used to acknowledge multiple frames, and is referred to as sliding-window flow control. Most data link control protocols also allow a station to cut off the flow of frames from the other side by sending a Receive Not Ready (RNR) message, which acknowledges former frames but forbids transfer of future frames; at some subsequent point, the station must send a normal acknowledgment to reopen the window. If two stations exchange data, each needs to maintain two windows, one for transmit and one for receive, and each side needs to send data and acknowledgments to the other. To provide efficient support for this requirement, a feature known as piggybacking is typically provided: each data frame includes a field that holds the sequence number of that frame plus a field that holds the sequence number used for acknowledgment.
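The efficiency gain can be made concrete with the standard textbook utilization results (these formulas are the usual ones from Stallings, not stated explicitly above): with a = B / L, stop-and-wait achieves U = 1 / (1 + 2a), while a window of W frames achieves U = W / (1 + 2a), capped at 1 once the window is large enough to keep the pipe full.

```python
# Link utilization for the two flow control schemes, using the standard
# textbook results: a = propagation time / transmission time = B / L.
def stop_and_wait_util(a):
    # One frame per round trip: U = 1 / (1 + 2a)
    return 1 / (1 + 2 * a)

def sliding_window_util(w, a):
    # W frames may be in flight: U = W / (1 + 2a), capped at full utilization
    return 1.0 if w >= 1 + 2 * a else w / (1 + 2 * a)

a = 2.0  # the link holds twice as many bits as one frame
print(round(stop_and_wait_util(a), 3))      # 0.2 -> only 20% utilization
print(round(sliding_window_util(7, a), 3))  # 1.0 -> window keeps the pipe full
print(round(sliding_window_util(3, a), 3))  # 0.6 -> window too small for this a
```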

The figure depicts the sliding-window process. It assumes the use of a 3-bit sequence number, so that frames are numbered sequentially from 0 through 7, and then the same numbers are reused for subsequent frames. The shaded rectangle indicates the frames that may be sent; in this figure, the sender may transmit five frames, beginning with frame 0. Each time a frame is sent, the shaded window shrinks; each time an acknowledgment is received, the shaded window grows. Frames between the vertical bar and the shaded window have been sent but not yet acknowledged; as we shall see, the sender must buffer these frames in case they need to be retransmitted. Sliding-window flow control is potentially much more efficient than stop-and-wait flow control: with sliding-window flow control, the transmission link is treated as a pipeline that may be filled with frames in transit, whereas with stop-and-wait flow control only one frame may be in the pipe at a time.

Error control

Error control refers to mechanisms to detect and correct errors that occur in the transmission of frames. The model we will use, which covers the typical case, was illustrated earlier in Figure 7.1b. As before, data are sent as a sequence of frames; frames arrive in the same order in which they are sent; and each transmitted frame suffers an arbitrary and potentially variable amount of delay before reception. In addition, we admit the possibility of two types of errors:

• Lost frame: A frame fails to arrive at the other side, e.g. because a noise burst damaged it so badly that it is not recognized.

Sliding Window Diagram

Data Communication & Networking IV Sem BCA

K. Adisesha, Presidency College COPY: Jan 2009


• Damaged frame: A recognizable frame does arrive, but some of the bits are in error (have been altered during transmission).

The most common techniques for error control are based on some or all of the following ingredients:

• Positive acknowledgment: The destination returns a positive acknowledgment to successfully received, error-free frames.
• Retransmission after timeout: The source retransmits a frame that has not been acknowledged after a predetermined amount of time.
• Negative acknowledgment and retransmission: The destination returns a negative acknowledgment to frames in which an error is detected. The source retransmits such frames.

Collectively, these mechanisms are all referred to as automatic repeat request (ARQ); the effect of ARQ is to turn an unreliable data link into a reliable one. Three versions of ARQ have been standardized:

• Stop-and-wait ARQ
• Go-back-N ARQ
• Selective-reject ARQ

All of these forms are based on the use of the flow control techniques discussed in Section 7.1. We examine each in turn.

Stop-and-wait ARQ

Stop-and-wait ARQ is based on the stop-and-wait flow control technique outlined previously. The source station transmits a single frame and then must await an acknowledgment (ACK). No other data frames can be sent until the destination station's reply arrives at the source station. Two sorts of errors could occur. First, the frame that arrives at the destination could be damaged. The receiver detects this by using the error-detection technique referred to earlier and simply discards the frame. To account for this possibility, the source station is equipped with a timer. After a frame is transmitted, the source station waits for an acknowledgment. If no acknowledgment is received by the time the timer expires, the same frame is sent again. Note that this method requires that the transmitter maintain a copy of a transmitted frame until an acknowledgment is received for that frame.

The second sort of error is a damaged acknowledgment, which is not recognizable by A; A will therefore time out and resend the same frame. This duplicate frame arrives and is accepted by B, so B has accepted two copies of the same frame as if they were separate. To avoid this problem, frames are alternately labeled with 0 or 1, and positive acknowledgments are of the form ACK0 and ACK1. In keeping with the sliding-window convention, an ACK0 acknowledges receipt of a frame numbered 1 and indicates that the receiver is ready for a frame numbered 0.

Go-back-N ARQ

The sliding-window flow control technique can be adapted to provide more efficient line use, sometimes referred to as continuous ARQ. The form of error control based on sliding-window flow control that is most commonly used is called go-back-N ARQ. In this method, a station may send a series of frames sequentially numbered modulo some maximum value.
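Before the details of go-back-N, the stop-and-wait ARQ procedure described above can be sketched as a toy simulation (the function and its lossy-channel model are invented for illustration). It shows how the alternating 0/1 labels let the receiver discard the duplicate created by a lost acknowledgment:

```python
# Toy stop-and-wait ARQ: the sender resends a frame whenever its ACK is
# lost; the receiver uses the alternating sequence bit to reject duplicates.

def stop_and_wait(payloads, lose_ack_for=frozenset()):
    """Deliver payloads over a channel that may lose ACKs (once each, for
    the frame indices in lose_ack_for); return what the receiver accepts."""
    delivered = []
    expected = 0                   # receiver: sequence bit it expects next
    lost = set(lose_ack_for)
    for i, data in enumerate(payloads):
        seq = i % 2                # frames alternately labeled 0 and 1
        acked = False
        while not acked:
            # frame arrives; receiver accepts only a matching sequence bit
            if seq == expected:
                delivered.append(data)
                expected ^= 1      # flip the expected bit
            # else: duplicate caused by a lost ACK; discard silently
            if i in lost:
                lost.discard(i)    # the ACK is lost only once
                continue           # sender times out and resends the frame
            acked = True           # ACK received; move to the next frame
    return delivered
```

Losing the ACK for frame 1 forces a retransmission, but the receiver discards the duplicate, so each payload is delivered exactly once.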
The number of unacknowledged frames outstanding is determined by the window size, using the sliding-window flow control technique. While no errors occur, the destination will acknowledge incoming frames as usual (RR = receive ready, or a piggybacked acknowledgment). If the destination station detects an error in a frame, it may send a negative acknowledgment (REJ = reject) for that frame, as explained in the following rules. The destination station will discard that frame and all future incoming frames until the frame in error is correctly received. Thus, the source station, when it receives a REJ, must retransmit the frame in error plus all succeeding frames that were transmitted in the interim.

Suppose that station A is sending frames to station B. After each transmission, A sets an acknowledgment timer for the frame just transmitted. Suppose that B has previously successfully received frame (i – 1) and A has just transmitted frame i. The go-back-N technique takes into account the following contingencies:


1. Damaged frame
2. Lost frame
3. Damaged acknowledgment
4. Damaged REJ

Selective-reject ARQ

With selective-reject ARQ, the only frames retransmitted are those that receive a negative acknowledgment, in this case called SREJ, or those that time out. Selective reject would appear to be more efficient than go-back-N, because it minimizes the amount of retransmission. On the other hand, the receiver must maintain a buffer large enough to save post-SREJ frames until the frame in error is retransmitted, and must contain logic for reinserting that frame in the proper sequence. The transmitter, too, requires more complex logic to be able to send a frame out of sequence. Because of such complications, selective-reject ARQ is much less widely used than go-back-N ARQ. Selective reject is a useful choice for a satellite link because of the long propagation delay involved.

High Level Data Link Control (HDLC)

The most important data link control protocol is HDLC (ISO 3309, ISO 4335). Not only is HDLC widely used, but it is the basis for many other important data link control protocols, which use the same or similar formats and the same mechanisms as employed in HDLC. To satisfy a variety of applications, HDLC defines three station types:

• Primary station: Responsible for controlling the operation of the link. Frames issued by the primary are called commands.
• Secondary station: Operates under the control of the primary station. Frames issued by a secondary are called responses. The primary maintains a separate logical link with each secondary station on the line.
• Combined station: Combines the features of primary and secondary. A combined station may issue both commands and responses.

It also defines two link configurations:

• Unbalanced configuration: Consists of one primary and one or more secondary stations and supports both full-duplex and half-duplex transmission.
• Balanced configuration: Consists of two combined stations and supports both full-duplex and half-duplex transmission.

HDLC defines three data transfer modes:

• Normal response mode (NRM): Used with an unbalanced configuration. The primary may initiate data transfer to a secondary, but a secondary may only transmit data in response to a command from the primary. NRM is used on multi-drop lines, in which a number of terminals are connected to a host computer.
• Asynchronous balanced mode (ABM): Used with a balanced configuration. Either combined station may initiate transmission without receiving permission from the other combined station. ABM is the most widely used of the three modes; it makes more efficient use of a full-duplex point-to-point link because there is no polling overhead.
• Asynchronous response mode (ARM): Used with an unbalanced configuration. The secondary may initiate transmission without explicit permission of the primary. The primary still retains responsibility for the line, including initialization, error recovery, and logical disconnection. ARM is rarely used; it is applicable to some special situations in which a secondary may need to initiate transmission.


High Level Data Link Control (HDLC) Protocol

The HDLC protocol is a general purpose protocol which operates at the data link layer of the OSI reference model. The protocol uses the services of a physical layer, and provides either a best effort or reliable communications path between the transmitter and receiver (i.e. with acknowledged data transfer). The type of service provided depends upon the HDLC mode which is used.

Each piece of data is encapsulated in an HDLC frame by adding a trailer and a header. The header contains an HDLC address and an HDLC control field. The trailer is found at the end of the frame, and contains a Cyclic Redundancy Check (CRC) which detects any errors which may occur during transmission. The frames are separated by HDLC flag sequences which are transmitted between each frame and whenever there is no data to be transmitted.
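The framing just described can be sketched as follows. This is a simplified illustration only: the CRC is a generic CRC-16 and bit stuffing of the body is omitted, so it is not a wire-compatible HDLC implementation.

```python
# Illustrative HDLC-style framing: header (address, control), payload,
# CRC-16 trailer, delimited by 0x7E flag bytes. Bit stuffing omitted.

FLAG = 0x7E

def crc16_ccitt(data: bytes) -> int:
    # bitwise CRC-16 with polynomial x^16 + x^12 + x^5 + 1 (0x1021)
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def build_frame(address: int, control: int, payload: bytes) -> bytes:
    body = bytes([address, control]) + payload
    fcs = crc16_ccitt(body)          # trailer computed over header + payload
    return bytes([FLAG]) + body + fcs.to_bytes(2, "big") + bytes([FLAG])
```

A receiver would strip the flags, recompute the CRC over everything but the trailer, and compare it with the received FCS to detect transmission errors.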

Types of HDLC Frame

The control field of HDLC follows the address field and is the second part of all HDLC frames. The best-effort service is provided through the use of U (unnumbered) frames, whose control field consists of a single byte. All frames carry a 1-bit field known as the "poll/final" bit, which is used by the checkpointing procedure to verify correct transmission.

Format of the control byte in HDLC frames

HDLC currently defines two formats for frames which carry sequence numbers. These types of frames are used to provide the reliable data link service. Two types of numbered frames are supported:

• S (supervisory) frames, containing only an acknowledgment number (N(R))
• I (information) frames, carrying data and containing both a send sequence number (N(S)) and an acknowledgment number (N(R))

The S and U frames contain an additional field to identify the function which is to be performed. The 2-bit SS field is used to identify one of four functions:

1. SS=00 RR - Receiver Ready to accept more I-frames


2. SS=01 REJ - Go-Back-N retransmission request for an I-frame
3. SS=10 RNR - Receiver Not Ready to accept more I-frames
4. SS=11 SREJ - Selective retransmission request for an I-frame
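The four SS values can be captured in a small lookup. The bit positions assumed here (the low two bits of the control byte set to 01 marking an S frame, with SS in the next two bits) are one common depiction and are shown for illustration only:

```python
# Illustrative decode of the 2-bit SS field of an HDLC supervisory frame.
# Bit layout is an assumption for this sketch, not a normative encoding.

SS_FUNCTIONS = {
    0b00: "RR",    # Receiver Ready: accept more I-frames
    0b01: "REJ",   # Go-back-N retransmission request for an I-frame
    0b10: "RNR",   # Receiver Not Ready to accept more I-frames
    0b11: "SREJ",  # Selective retransmission request for an I-frame
}

def s_frame_function(control_byte: int) -> str:
    # assumed layout: low two bits 01 identify an S frame,
    # the SS field occupies the next two bits
    assert control_byte & 0b11 == 0b01, "not an S frame"
    return SS_FUNCTIONS[(control_byte >> 2) & 0b11]
```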

The simplest format uses one byte (as above) and provides modulo 8 numbering. The "extended format" uses two bytes (as shown below) and uses modulo 128 numbering:

Format of the control byte in HDLC frames using an extended sequence number

Command and Response Addresses

In the HDLC balanced mode, a node uses just two addresses (1 and 3) to indicate whether the frame relates to information frames (I-frames) being sent from the node or received by the node. The two values are known as the "primary" and "secondary" addresses. The primary address is used for commands (e.g. SABM, DISC, I). The secondary address is used for responses (e.g. UA, RR, REJ). The primary and secondary addresses are differentiated by the setting of the C/R bit in the HDLC address.

Point-to-Point Protocol

In networking, the Point-to-Point Protocol, or PPP, is a data link protocol commonly used to establish a direct connection between two networking nodes. It can provide connection authentication, transmission encryption, and compression. PPP is used over many types of physical networks including serial cable, phone line, trunk line, cellular telephone, specialized radio links, and fiber optic links such as SONET. Most Internet service providers (ISPs) use PPP for customer dial-up access to the Internet. Two encapsulated forms of PPP, Point-to-Point Protocol over Ethernet (PPPoE) and Point-to-Point Protocol over ATM (PPPoA), are used by ISPs to provide Digital Subscriber Line (DSL) Internet service.

PPP is commonly used as a data link layer protocol for connection over synchronous and asynchronous circuits, where it has largely superseded the older, non-standard Serial Line Internet Protocol (SLIP) and telephone-company-mandated standards (such as Link Access Procedure, Balanced (LAPB) in the X.25 protocol suite). PPP was designed to work with numerous network layer protocols, including Internet Protocol (IP), Novell's Internetwork Packet Exchange (IPX), NBF, and AppleTalk.

Basic Features

PPP was designed somewhat after the original HDLC specifications. The designers of PPP included many additional features that had been seen only in various proprietary data-link protocols up to that time.

Automatic self-configuration

Link Control Protocol (LCP) is an integral part of PPP, and is defined in the same standard specification. LCP provides automatic configuration of the interfaces at each end (such as setting datagram size, escaped characters, and magic numbers) and for selecting optional authentication. The LCP protocol runs atop PPP (with PPP protocol number 0xC021), and therefore a basic PPP connection has to be established before LCP is able to configure it.


RFC 1994 describes the Challenge-Handshake Authentication Protocol (CHAP), which is preferred for establishing dial-up connections with ISPs. Although deprecated, the Password Authentication Protocol (PAP) is still often used. Another option for authentication over PPP is the Extensible Authentication Protocol (EAP).

After the link has been established, additional network (layer 3) configuration may take place. Most commonly, the Internet Protocol Control Protocol (IPCP) is used, although the Internetwork Packet Exchange Control Protocol (IPXCP) and the AppleTalk Control Protocol (ATCP) were once very popular. The Internet Protocol Version 6 Control Protocol (IPv6CP) is also available, for links carrying IPv6 rather than IPv4.

Multiple network layer protocols

PPP permits multiple network layer protocols to operate on the same communication link. For every network layer protocol used, a separate Network Control Protocol (NCP) is provided in order to encapsulate and negotiate options for that protocol. For example, Internet Protocol (IP) uses the IP Control Protocol (IPCP), and Internetwork Packet Exchange (IPX) uses the Novell IPX Control Protocol (IPXCP). NCPs include fields containing standardized codes to indicate the network layer protocol type that PPP encapsulates.

Looped link detection

PPP detects looped links using a feature involving magic numbers. When a node sends PPP LCP messages, these messages may include a magic number. If a line is looped, the node receives an LCP message with its own magic number, instead of a message with the peer's magic number.

Most important features

The Link Control Protocol initiates and terminates connections gracefully, allowing hosts to negotiate connection options. It also supports both byte- and bit-oriented encodings. The Network Control Protocol is used for negotiating network-layer information, e.g. a network address or compression options, after the connection has been established.

PPP frame

Name         Number of bytes        Description
Protocol     1 or 2                 protocol of the data in the Information field
Information  variable (0 or more)   datagram
Padding      variable (0 or more)   optional padding

The Protocol field indicates the kind of payload packet (e.g. LCP, NCP, IP, IPX, AppleTalk, etc.). The Information field contains the PPP payload; it has a variable length with a negotiated maximum called the Maximum Transmission Unit. By default the maximum is 1500 octets. It might be padded on transmission; if the information for a particular protocol can be padded, that protocol must allow information to be distinguished from padding.

Encapsulation

PPP frames are encapsulated in a lower-layer protocol that provides framing and may provide other functions such as a checksum to detect transmission errors. PPP on serial links is usually encapsulated in a framing similar to HDLC, described by IETF RFC 1662.

Name         Number of bytes        Description
Flag         1                      indicates frame's begin or end
Address      1                      broadcast address


Control      1                      control byte
Protocol     1 or 2                 protocol of the data in the Information field
Information  variable (0 or more)   datagram
Padding      variable (0 or more)   optional padding
FCS          2 (or 4)               error checksum

The Flag field is present when PPP with HDLC-like framing is used. The Address and Control fields always have the values hex FF (for "all stations") and hex 03 (for "unnumbered information"), and can be omitted whenever PPP LCP Address-and-Control-Field-Compression (ACFC) is negotiated.

The Frame Check Sequence (FCS) field is used to determine whether an individual frame has an error. It contains a checksum computed over the frame to provide basic protection against errors in transmission. This is a CRC code similar to those used for error protection in other layer-two protocols such as Ethernet. According to RFC 1662, it can be either 16 bits (2 bytes) or 32 bits (4 bytes) in size; the default is 16 bits, with polynomial x^16 + x^12 + x^5 + 1. The FCS is calculated over the Address, Control, Protocol, Information, and Padding fields.

Although these are not standard applications, PPP is also used over broadband connections. RFC 2516 describes Point-to-Point Protocol over Ethernet (PPPoE), a method for transmitting PPP over Ethernet that is sometimes used with DSL. RFC 2364 describes Point-to-Point Protocol over ATM (PPPoA), a method for transmitting PPP over ATM Adaptation Layer 5 (AAL5), which is also sometimes used with DSL.

PPP line activation and phases

The phases of the Point-to-Point Protocol according to RFC 1661 are listed below:

Link Dead. This phase occurs when the link fails, or one side has been told not to connect (e.g. a user has finished a dial-up connection).

Link Establishment Phase. In this phase, Link Control Protocol negotiation is attempted. If successful, control goes either to the authentication phase or to the network-layer protocol phase, depending on whether authentication is desired.

Authentication Phase. This phase is optional. It allows the sides to authenticate each other before a connection is established. If successful, control goes to the network-layer protocol phase.

Network-Layer Protocol Phase.
In this phase, each desired protocol's Network Control Protocol is invoked. For example, IPCP is used to establish IP service over the line. Data transport for all protocols which are successfully started with their network control protocols also occurs in this phase. Closing down of network protocols also occurs in this phase.

Link Termination Phase. This phase closes down the connection. This can happen if there is an authentication failure, if there are so many checksum errors that the two parties decide to tear down the link automatically, if the link suddenly fails, or if the user decides to hang up. This phase tries to close everything down as gracefully as possible under the circumstances.

Multiclass PPP

The monotonically increasing sequence numbering of Multilink PPP (MP), in which contiguous numbers are needed for all fragments of a packet, does not allow suspension of the sending of a sequence of fragments of one packet in order to send another packet. The obvious approach to providing more than one level of suspension with PPP Multilink is to run Multilink multiple times over one link. Multilink as it is defined provides no way for more than one instance to be active. Each class runs a separate copy of the mechanism defined, i.e. uses a separate sequence number space and reassembly buffer.
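Returning to the FCS described earlier: the 16-bit check with polynomial x^16 + x^12 + x^5 + 1 can be sketched in its bit-reversed form (initial value 0xFFFF, final one's complement), following the shift-register description in RFC 1662. This is an illustrative sketch rather than production code:

```python
# 16-bit PPP FCS sketch: reflected CRC with polynomial
# x^16 + x^12 + x^5 + 1 (bit-reversed representation 0x8408),
# initial value 0xFFFF, one's complement of the final remainder.

def ppp_fcs16(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # shift right; on a carry-out, fold in the reversed polynomial
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF
```

The computed FCS is appended to the frame least-significant byte first; the receiver recomputes the check over the protected fields and compares.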


Statistical Multiplexing

Statistical multiplexing provides a generally more efficient service than synchronous TDM for the support of terminals. With statistical TDM, time slots are not preassigned to particular data sources. Rather, user data are buffered and transmitted as rapidly as possible using available time slots. As with a synchronous TDM multiplexer, the statistical multiplexer has a number of I/O lines on one side and a higher-speed multiplexed line on the other. Each I/O line has a buffer associated with it. In the case of the statistical multiplexer, there are n I/O lines, but only k, where k < n, time slots available in the TDM frame. For input, the function of the multiplexer is to scan the input buffers, collecting data until a frame is filled, and then send the frame. On output, the multiplexer receives a frame and distributes the slots of data to the appropriate output buffers.

Because statistical TDM takes advantage of the fact that the attached devices are not all transmitting all of the time, the data rate on the multiplexed line can be less than the sum of the data rates of the attached devices. Thus, a statistical multiplexer can use a lower data rate to support as many devices as a synchronous multiplexer. However, there is more overhead per slot for statistical TDM, because each slot carries an address as well as data.

The difficulty with this approach is that, while the average aggregate input may be less than the multiplexed line capacity, there may be peak periods when the input exceeds capacity. The solution to this problem is to include a buffer in the multiplexer to hold the temporary excess input. There is a tradeoff between the size of the buffer used and the data rate of the line: we would like to use the smallest possible buffer and the smallest possible data rate, but a reduction in one requires an increase in the other.
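The buffering tradeoff just described can be seen in a toy simulation (all numbers are illustrative): per tick, some characters arrive at the multiplexer and at most line_rate characters leave on the multiplexed line, so the buffer absorbs any burst that temporarily exceeds capacity.

```python
# Toy statistical-multiplexer buffer simulation: 'arrivals' gives the
# aggregate input per tick, 'line_rate' the multiplexed line capacity.
# Returns the buffer occupancy after each tick.

def buffer_occupancy(arrivals, line_rate):
    buffered, history = 0, []
    for a in arrivals:
        # add this tick's input, drain up to line_rate, never go negative
        buffered = max(0, buffered + a - line_rate)
        history.append(buffered)
    return history
```

Here the average input (2 characters per tick) is below a line rate of 3, but the burst in the middle briefly exceeds it, so the buffer fills and then drains; a faster line would need less buffering, and vice versa.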

The frame structure used by a statistical multiplexer has an impact on performance. Clearly, it is desirable to minimize overhead bits to improve throughput. The figure shows two possible formats. In the first case, only one source of data is included per frame. That source is identified by an address. The length of the data field is variable, and its end is marked by the end of the overall frame. This scheme can work well under light load but is quite inefficient under heavy load.

A way to improve efficiency is to allow multiple data sources to be packaged in a single frame. Now, however, some means is needed to specify the length of data for each source. Thus, the statistical TDM subframe consists of a sequence of data fields, each labeled with an address and a length. Several techniques can be used to make this approach even more efficient. The address field can be reduced by using relative addressing, which specifies the number of the current source relative to the previous source, modulo the total number of sources. Another refinement is to use a 2-bit label with the length field: a value of 00, 01, or 10 corresponds to a data field of 1, 2, or 3 bytes, and no length field is necessary, while a value of 11 indicates that a length field is included. Yet another approach is to multiplex one character from each data source that has a character to send in a single data frame. In this case the frame begins with a bit map whose length in bits equals the number of sources; for each source that transmits a character during a given frame, the corresponding bit is set to one.
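The bit-map technique in the last approach can be sketched as follows (the encoding details, such as which bit corresponds to which source, are assumptions made for illustration):

```python
# Sketch of the bit-map frame format: one frame carries one character from
# each source that has something to send, preceded by a bit map with one
# bit per source (bit i set means source i contributed a character).

def build_bitmap_frame(pending):
    """pending: one entry per source -- a 1-character string, or None if
    that source has nothing to send this frame."""
    bitmap, data = 0, []
    for i, ch in enumerate(pending):
        if ch is not None:
            bitmap |= 1 << i          # mark source i as active
            data.append(ch)
    return bitmap, "".join(data)

def parse_bitmap_frame(bitmap, data, n_sources):
    out, it = [], iter(data)
    for i in range(n_sources):
        out.append(next(it) if bitmap & (1 << i) else None)
    return out
```

With four sources of which only two are active, the frame carries a 4-bit map plus two characters, instead of four address-and-length labeled fields.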

Statistical TDM Frame Format


Local Area Networks & Medium Access Control Protocols

Local Area Networks (LANs)

A common LAN configuration is one that supports personal computers. LANs for the support of personal computers and workstations have become nearly universal in organizations of all sizes. Even those sites that still depend heavily on the mainframe have transferred much of the processing load to networks of personal computers. Perhaps the prime example of the way in which personal computers are being used is to implement client/server applications. For personal computer networks, a key requirement is low cost. In particular, the cost of attachment to the network must be significantly less than the cost of the attached device. Thus, for the ordinary personal computer, an attachment cost in the hundreds of dollars is desirable. For more expensive, high-performance workstations, higher attachment costs can be tolerated.

Backend networks are used to interconnect large systems such as mainframes, supercomputers, and mass storage devices. The key requirement here is for bulk data transfer among a limited number of devices in a small area. High reliability is generally also a requirement. Typical characteristics include:

• High data rates
• High-speed interface
• Distributed access
• Limited distance
• Limited number of devices

A concept related to that of the backend network is the storage area network (SAN). A SAN is a separate network to handle storage needs. The SAN detaches storage tasks from specific servers and creates a shared storage facility across a high-speed network. The collection of networked storage devices can include hard disks, tape libraries, and CD arrays. Most SANs use Fibre Channel. In a SAN, no server sits between the storage devices and the network; instead, the storage devices and servers are linked directly to the network. The SAN arrangement improves client-to-storage access efficiency, as well as direct storage-to-storage communications for backup and replication functions.

Traditionally, the office environment has included a variety of devices with low- to medium-speed data transfer requirements. However, applications in today's office environment would overwhelm the limited speeds (up to 10 Mbps) of traditional LANs. Desktop image processors have increased network data flow by an unprecedented amount. Examples of these applications include fax machines, document image processors, and graphics programs on personal computers and workstations. In addition, disk technology and price/performance have evolved so that desktop storage capacities of multiple gigabytes are common. These new demands require high-speed LANs that can support the larger numbers and greater geographic extent of office systems as compared to backend systems.

The increasing use of distributed processing applications and personal computers has led to a need for a flexible strategy for local networking. Support of premises-wide data communications requires a networking service that is capable of spanning the distances involved and that interconnects equipment in a single (perhaps large) building or a cluster of buildings.
Although it is possible to develop a single LAN to interconnect all the data processing equipment of a premises, this is probably not a practical alternative in most cases. A more attractive alternative is to employ lower-cost, lower-capacity LANs within buildings or departments and to interconnect these networks with a higher-capacity LAN. This latter network is referred to as a backbone LAN. If confined to a single building or cluster of buildings, a high-capacity LAN can perform the backbone function.

LAN Architecture

• topologies


• transmission medium
• layout
• medium access control

The key elements of a LAN are listed above. Together, these elements determine not only the cost and capacity of the LAN, but also the type of data that may be transmitted, the speed and efficiency of communications, and even the kinds of applications that can be supported.

In the context of a communication network, the term topology refers to the way in which the end points, or stations, attached to the network are interconnected. The common topologies for LANs, shown in the figure, are bus, tree, ring, and star. The bus is a special case of the tree, with only one trunk and no branches. The choice of topology depends on a variety of factors, including reliability, expandability, and performance. This choice is part of the overall task of designing a LAN and thus cannot be made in isolation, independent of the choice of transmission medium, wiring layout, and access control technique.

There are four alternative media that can be used for a bus LAN:

• Twisted pair: In the early days of LAN development, voice-grade twisted pair was used to provide an inexpensive, easily installed bus LAN. A number of systems operating at 1 Mbps were implemented. Scaling twisted pair up to higher data rates in a shared-medium bus configuration is not practical, so this approach was dropped long ago.
• Baseband coaxial cable: A baseband coaxial cable is one that makes use of digital signaling. The original Ethernet scheme makes use of baseband coaxial cable.
• Broadband coaxial cable: Broadband coaxial cable is the type of cable used in cable television systems. Analog signaling is used at radio and television frequencies. This type of system is more expensive and more difficult to install and maintain than baseband coaxial cable. This approach never achieved popularity, and such LANs are no longer made.
• Optical fiber: There has been considerable research relating to this alternative over the years, but the expense of the optical fiber taps and the availability of better alternatives have resulted in the demise of this option as well.

For a bus topology, only baseband coaxial cable has achieved widespread use, primarily for Ethernet systems.
Compared to a star-topology twisted pair or optical fiber installation, the bus topology using baseband coaxial cable is difficult to work with. Even simple changes may require access to the coaxial cable, movement of taps, and rerouting of cable segments. Accordingly, few if any new installations are being attempted. Despite its limitations, there is a considerable installed base of baseband coaxial cable bus LANs.

LAN Topologies


The choice of transmission medium is determined by a number of factors. It is, as we shall see, constrained by the topology of the LAN. Other factors come into play, including:

• Capacity: to support the expected network traffic
• Reliability: to meet requirements for availability
• Types of data supported: tailored to the application
• Environmental scope: to provide service over the range of environments required

The choice is part of the overall task of designing a local network.

The architecture of a LAN is best described in terms of a layering of protocols that organize the basic functions of a LAN. These include the physical, medium access control (MAC), and logical link control (LLC) layers. Protocols defined specifically for LAN and MAN transmission address issues relating to the transmission of blocks of data over the network. In OSI terms, a discussion of LAN protocols is concerned principally with the lower layers of the OSI model.

The figure relates the LAN protocols to the OSI architecture. This architecture was developed by the IEEE 802 LAN standards committee and has been adopted by all organizations working on the specification of LAN standards. It is generally referred to as the IEEE 802 reference model. Working from the bottom up, the lowest layer of the IEEE 802 reference model corresponds to the physical layer of the OSI model and includes such functions as:

• Encoding/decoding of signals
• Preamble generation/removal (for synchronization)
• Bit transmission/reception

In addition, the physical layer of the 802 model includes a specification of the transmission medium and the topology. Generally, this is considered "below" the lowest layer of the OSI model. However, the choice of transmission medium and topology is critical in LAN design, and so a specification of the medium is included.

Above the physical layer are the functions associated with providing service to LAN users. These include:

• On transmission, assemble data into a frame with address and error-detection fields.
• On reception, disassemble the frame, and perform address recognition and error detection.
• Govern access to the LAN transmission medium.
• Provide an interface to higher layers and perform flow and error control.

These are functions typically associated with OSI layer 2. The set of functions in the last bullet item are grouped into a logical link control (LLC) layer.
The functions in the first three bullet items are treated as a separate layer, called medium access control (MAC). The separation is done for the following reasons:

• The logic required to manage access to a shared-access medium is not found in traditional layer 2 data link control.

LAN Protocol Architecture

Data Communication & Networking IV Sem BCA

K. Adisesha, Presidency College COPY: Jan 2009


• For the same LLC, several MAC options may be provided.

Figure illustrates the relationship between the levels of the architecture. Higher-level data are passed down to LLC, which appends control information as a header, creating an LLC protocol data unit (PDU). This control information is used in the operation of the LLC protocol. The entire LLC PDU is then passed down to the MAC layer, which appends control information at the front and back of the packet, forming a MAC frame. Again, the control information in the frame is needed for the operation of the MAC protocol. For context, the figure also shows the use of TCP/IP and an application layer above the LAN protocols.

Logical Link Control

The LLC layer for LANs is similar in many respects to other link layers in common use. Like all link layers, LLC is concerned with the transmission of a link-level PDU between two stations, without the necessity of an intermediate switching node. LLC has two characteristics not shared by most other link control protocols:
1. It must support the multiaccess, shared-medium nature of the link (this differs from a multidrop line in that there is no primary node).
2. It is relieved of some details of link access by the MAC layer.

Addressing in LLC involves specifying the source and destination LLC users. Typically, a user is a higher-layer protocol or a network management function in the station. These LLC user addresses are referred to as service access points (SAPs), in keeping with OSI terminology for the user of a protocol layer. LLC specifies the mechanisms for addressing stations across the medium and for controlling the exchange of data between two users. The operation and format of this standard are based on HDLC. Three services are provided as alternatives for attached devices using LLC:
• Unacknowledged connectionless service: This service is a datagram-style service. It is a very simple service that does not involve any of the flow- and error-control mechanisms. Thus, the delivery of data is not guaranteed. However, in most devices, there will be some higher layer of software that deals with reliability issues.
• Connection-mode service: This service is similar to that offered by HDLC. A logical connection is set up between two users exchanging data, and flow control and error control are provided.
• Acknowledged connectionless service: This is a cross between the previous two services. It provides that datagrams are to be acknowledged, but no prior logical connection is set up.

The basic LLC protocol is modeled after HDLC and has similar functions and formats. The differences between the two protocols can be summarized as follows:

LAN Protocols in Context


• LLC makes use of the asynchronous balanced mode of operation of HDLC, to support connection-mode LLC service; this is referred to as type 2 operation. The other HDLC modes are not employed.

• LLC supports an unacknowledged connectionless service using the unnumbered information PDU; this is known as type 1 operation.

• LLC supports an acknowledged connectionless service by using two new unnumbered PDUs; this is known as type 3 operation.

• LLC permits multiplexing by the use of LLC service access points (LSAPs).

Medium Access Control (MAC) Protocol

All LANs and MANs consist of collections of devices that must share the network's transmission capacity. Some means of controlling access to the transmission medium is needed to provide for an orderly and efficient use of that capacity. This is the function of a medium access control (MAC) protocol.

MAC Frame Handling

The MAC layer receives a block of data from the LLC layer and is responsible for performing functions related to medium access and for transmitting the data. As with other protocol layers, MAC implements these functions making use of a protocol data unit at its layer. In this case, the PDU is referred to as a MAC frame. The exact format of the MAC frame differs somewhat for the various MAC protocols in use. In general, all of the MAC frames have a format similar to that of Figure. The fields of this frame are:
• MAC Control: protocol control information needed for the functioning of the MAC protocol
• Destination MAC Address: the destination physical attachment point on the LAN
• Source MAC Address: the source physical attachment point on the LAN
• LLC: LLC data from the next higher layer
• CRC: the cyclic redundancy check field, an error-detecting code
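As a rough sketch of this layout, the following Python fragment assembles a simplified MAC-style frame and verifies it on "reception". The field widths and the use of CRC-32 here are illustrative assumptions, not the exact format of any particular 802 standard:

```python
import struct
import zlib

def build_mac_frame(dest: bytes, src: bytes, llc_pdu: bytes) -> bytes:
    """Assemble a simplified MAC frame: control, addresses, LLC data, CRC."""
    mac_control = b"\x00\x00"                  # placeholder protocol control field
    body = mac_control + dest + src + llc_pdu
    crc = struct.pack(">I", zlib.crc32(body))  # trailer: 32-bit error-detecting code
    return body + crc

def check_mac_frame(frame: bytes) -> bool:
    """Receiver side: recompute the CRC over everything but the trailer."""
    body, crc = frame[:-4], frame[-4:]
    return struct.pack(">I", zlib.crc32(body)) == crc

dest = bytes.fromhex("02005e000001")           # 6-octet destination MAC address
src  = bytes.fromhex("02005e000002")           # 6-octet source MAC address
frame = build_mac_frame(dest, src, b"LLC PDU goes here")
assert check_mac_frame(frame)                  # intact frame passes the check
corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]
assert not check_mac_frame(corrupted)          # damaged frame would be discarded
```

The receiver's discard decision in `check_mac_frame` corresponds to the MAC-layer error detection described below; retransmission of the discarded frame is left to LLC.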

In the LAN protocol architecture, the functions of detecting errors using the CRC and retransmitting damaged frames are split between the MAC and LLC layers. The MAC layer is responsible for detecting errors and discarding any frames that are in error. The LLC layer optionally keeps track of which frames have been successfully received and retransmits unsuccessful frames. All three LLC protocols employ the same PDU format, which consists of four fields. The DSAP (Destination Service Access Point) and SSAP (Source Service Access Point) fields each contain a 7-bit address, specifying the destination and source users of LLC, respectively. One bit of the DSAP indicates whether the DSAP is an individual or group address. One bit of the SSAP indicates whether the PDU is a command or response PDU. The format of the LLC control field is identical to that of HDLC, using extended (7-bit) sequence numbers. For type 1 operation, which supports the unacknowledged connectionless service, the unnumbered information (UI) PDU is used to transfer user data. There is no acknowledgment, flow control, or error

MAC Frame Format


control. With type 2 operation, a data link connection is established between two LLC SAPs prior to data exchange. Connection establishment is attempted by the type 2 protocol in response to a request from a user. Once the connection is established, data are exchanged using information PDUs, as in HDLC. The information PDUs include send and receive sequence numbers, for sequencing and flow control. The supervisory PDUs are used, as in HDLC, for flow control and error control. Either LLC entity can terminate a logical LLC connection by issuing a disconnect (DISC) PDU. With type 3 operation, each transmitted PDU is acknowledged. A new (not found in HDLC) unnumbered PDU, the Acknowledged Connectionless (AC) Information PDU, is defined. User data are sent in AC command PDUs and must be acknowledged using an AC response PDU.
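The addressing bits described above can be made concrete with a short sketch. In the 802.2 encoding, the DSAP and SSAP occupy one octet each: the low-order bit of the DSAP carries the individual/group flag, the low-order bit of the SSAP carries the command/response flag, the 7-bit address sits in the remaining bits, and the control value 0x03 marks a UI PDU for type 1 operation. The helper name below is an illustrative assumption:

```python
def encode_llc_ui_header(dsap_addr: int, ssap_addr: int,
                         group: bool = False, response: bool = False) -> bytes:
    """Build the 3-octet LLC header for a type 1 (UI) PDU.

    dsap_addr/ssap_addr are the 7-bit SAP addresses; the low-order bit of
    each octet carries the I/G and C/R flags respectively."""
    dsap = (dsap_addr << 1) | (1 if group else 0)
    ssap = (ssap_addr << 1) | (1 if response else 0)
    control = 0x03  # unnumbered information (UI) PDU -> type 1 operation
    return bytes([dsap, ssap, control])

hdr = encode_llc_ui_header(0x55, 0x55)   # SAP address 0x55 as an example
assert hdr == b"\xaa\xaa\x03"            # even octets: individual address, command PDU
```

The resulting byte sequence AA-AA-03 is, in fact, the familiar header prefix used by SNAP encapsulation, where the SAP value 0xAA designates the SNAP user of LLC.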

Figure b indicates the way in which data are encapsulated using a bridge. Data are provided by some user to LLC. The LLC entity appends a header and passes the resulting data unit to the MAC entity, which appends a header and a trailer to form a MAC frame. On the basis of the destination MAC address in the frame, it is captured by the bridge. The bridge does not strip off the MAC fields; its function is to relay the MAC frame intact to the destination LAN. Thus, the frame is deposited on the destination LAN and captured by the destination station.

Bridges

In virtually all cases, there is a need to expand beyond the confines of a single LAN, to provide interconnection to other LANs and to wide area networks. Two general approaches are used for this purpose: bridges and routers. The bridge is the simpler of the two devices and provides a means of interconnecting similar LANs. Because the devices all use the same protocols, the amount of processing required at the bridge is minimal. More sophisticated bridges are capable of mapping from one MAC format to another. There are several reasons for the use of multiple LANs connected by bridges:

• Reliability: By using bridges, the network can be partitioned into self-contained units, to provide fault isolation.

• Performance: A number of smaller LANs will often give improved performance if devices can be clustered so that intranetwork traffic significantly exceeds internetwork traffic.

• Security: The establishment of multiple LANs may improve security of communications, keeping different types of traffic with different security needs on physically separate media.

• Geography: Clearly, two separate LANs are needed to support devices clustered in two geographically distant locations.

Connection of Two LANs


Figure illustrates the action of a bridge connecting two LANs, A and B, using the same MAC protocol. In this example, a single bridge attaches to both LANs; frequently, the bridge function is performed by two "half-bridges," one on each LAN. The functions of the bridge are few and simple:
• Read all frames transmitted on A and accept those addressed to any station on B.
• Using the medium access control protocol for B, retransmit each frame on B.
• Do the same for B-to-A traffic.

Several design aspects of a bridge are worth highlighting:
• The bridge makes no modification to the content or format of the frames it receives, nor does it encapsulate them with an additional header. Each frame to be transferred is simply copied from one LAN and repeated with exactly the same bit pattern on the other LAN. Because the two LANs use the same LAN protocols, it is permissible to do this.
• The bridge should contain enough buffer space to meet peak demands. Over a short period of time, frames may arrive faster than they can be retransmitted.
• The bridge must contain addressing and routing intelligence. At a minimum, the bridge must know which addresses are on each network to know which frames to pass. Further, there may be more than two LANs interconnected by a number of bridges. In that case, a frame may have to be routed through several bridges in its journey from source to destination.
• A bridge may connect more than two LANs.

In summary, the bridge provides an extension to the LAN that requires no modification to the communications software in the stations attached to the LANs. The IEEE 802.1D specification defines the protocol architecture for MAC bridges. Within the 802 architecture, the endpoint or station address is designated at the MAC level. Thus, it is at the MAC level that a bridge can function. Figure 15.9 shows the simplest case, which consists of two LANs connected by a single bridge. The LANs employ the same MAC and LLC protocols.
The bridge operates as previously described. A MAC frame whose destination is not on the immediate LAN is captured by the bridge, buffered briefly, and then transmitted on the other LAN. As far as the LLC layer is concerned, there is a dialogue between peer LLC entities in the two endpoint stations. The bridge need not contain an LLC layer because it is merely serving to relay the MAC frames. The concept of a MAC relay bridge is not limited to the use of a single bridge to connect two nearby LANs. If the LANs are some distance apart, then they can be connected by two bridges that are in turn connected by a communications facility. The intervening communications facility can be a network, such as

Bridge Function


a wide area packet-switching network, or a point-to-point link. In such cases, when a bridge captures a MAC frame, it must encapsulate the frame in the appropriate packaging and transmit it over the communications facility to a target bridge. The target bridge strips off these extra fields and transmits the original, unmodified MAC frame to the destination station.
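The forwarding rule described above can be sketched as follows. This is a simplified illustration with a static address table; real 802.1D bridges learn addresses dynamically and run a spanning tree protocol, neither of which is shown:

```python
from typing import Optional

# Simplified two-port MAC bridge: relay frames between LAN A and LAN B
# based on which LAN the destination station is attached to.

station_lan = {            # static address table: MAC address -> attached LAN
    "aa:aa:aa:aa:aa:01": "A",
    "aa:aa:aa:aa:aa:02": "A",
    "bb:bb:bb:bb:bb:01": "B",
}

def bridge(frame: dict, arrival_lan: str) -> Optional[str]:
    """Return the LAN the frame should be retransmitted on, or None to drop.

    The frame is relayed intact -- no fields are added or stripped."""
    dest_lan = station_lan.get(frame["dest"])
    if dest_lan is None or dest_lan == arrival_lan:
        return None            # unknown or local destination: do not forward
    return dest_lan            # copy the same bit pattern onto the other LAN

frame = {"dest": "bb:bb:bb:bb:bb:01", "src": "aa:aa:aa:aa:aa:01", "llc": b"data"}
assert bridge(frame, "A") == "B"   # A-to-B traffic is relayed
assert bridge(frame, "B") is None  # already on the destination LAN
```

For the remote-bridge case just described, the same decision logic would apply, with the selected frame additionally wrapped in whatever packaging the intervening communications facility requires.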

High Speed LANs

Recent years have seen rapid changes in the technology, design, and commercial applications for local area networks (LANs). A major feature of this evolution is the introduction of a variety of new schemes for high-speed local networking. To keep pace with the changing local networking needs of business, a number of approaches to high-speed LAN design have become commercial products. The most important of these are:

• Fast Ethernet and Gigabit Ethernet: The extension of 10-Mbps CSMA/CD (carrier sense multiple access with collision detection) to higher speeds is a logical strategy, because it tends to preserve the investment in existing systems.

• Fibre Channel: This standard provides a low-cost, easily scalable approach to achieving very high data rates in local areas.

• High-speed wireless LANs: Wireless LAN technology and standards have at last come of age, and high-speed standards and products are being introduced.

Ethernet (CSMA/CD)

The most widely used high-speed LANs today are based on Ethernet and were developed by the IEEE 802.3 standards committee. As with other LAN standards, there is both a medium access control layer and a physical layer. The medium access control technique is CSMA/CD. This technique and its precursors can be termed random access, or contention, techniques. They are random access in the sense that there is no predictable or scheduled time for any station to transmit; station transmissions are ordered randomly. They exhibit contention in the sense that stations contend for time on the shared medium.

When two frames collide, the medium remains unusable for the duration of transmission of both damaged frames. For frames that are long compared to the propagation time, the amount of wasted capacity can be considerable. This waste can be reduced if a station continues to listen to the medium while transmitting. This leads to the following rules for CSMA/CD:

CSMA/CD Operation


1. If the medium is idle, transmit; otherwise, go to step 2.
2. If the medium is busy, continue to listen until the channel is idle, then transmit immediately.
3. If a collision is detected during transmission, transmit a brief jamming signal to assure that all stations know that there has been a collision, and then cease transmission.
4. After transmitting the jamming signal, wait a random amount of time, referred to as the backoff, then attempt to transmit again (repeat from step 1).

Figure illustrates the technique for a baseband bus. The upper part of the figure shows a bus LAN layout. At time t0, station A begins transmitting a packet addressed to D. At t1, both B and C are ready to transmit. B senses a transmission and so defers. C, however, is still unaware of A's transmission (because the leading edge of A's transmission has not yet arrived at C) and begins its own transmission. When A's transmission reaches C, at t2, C detects the collision and ceases transmission. The effect of the collision propagates back to A, where it is detected some time later, t3, at which time A ceases transmission. With CSMA/CD, the amount of wasted capacity is reduced to the time it takes to detect a collision.

ALOHA

The earliest of these techniques, known as ALOHA, was developed for packet radio networks. However, it is applicable to any shared transmission medium. ALOHA, or pure ALOHA as it is sometimes called, specifies that a station may transmit a frame at any time. The station then listens for an amount of time equal to the maximum possible round-trip propagation delay on the network (twice the time it takes to send a frame between the two most widely separated stations) plus a small fixed time increment. If the station hears an acknowledgment during that time, fine; otherwise, it resends the frame. If the station fails to receive an acknowledgment after repeated transmissions, it gives up. A receiving station determines the correctness of an incoming frame by examining a frame check sequence field, as in HDLC. If the frame is valid and if the destination address in the frame header matches the receiver's address, the station immediately sends an acknowledgment. The frame may be invalid due to noise on the channel or because another station transmitted a frame at about the same time.
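The pure ALOHA transmit procedure just described can be sketched as follows. This is a simplified illustration; the channel interface, the timeout value, and the retry limit are all assumptions:

```python
import random

MAX_ATTEMPTS = 5  # assumed retry limit before the sender gives up

def aloha_send(frame, channel, ack_wait: float) -> bool:
    """Pure ALOHA: transmit at any time, wait for an ACK, resend on silence.

    ack_wait stands for the maximum round-trip propagation delay plus a
    small fixed increment. Returns True if the frame was acknowledged."""
    for _ in range(MAX_ATTEMPTS):
        channel.transmit(frame)
        if channel.wait_for_ack(ack_wait):   # acknowledgment heard in time
            return True
        # no ACK: the frame was lost to noise or a collision; try again
    return False                             # give up after repeated failures

class ToyChannel:
    """Toy stand-in for a shared medium: each transmission independently
    succeeds with probability p."""
    def __init__(self, p: float, seed: int = 0):
        self.p, self.rng = p, random.Random(seed)
        self.acked = False
    def transmit(self, frame):
        self.acked = self.rng.random() < self.p
    def wait_for_ack(self, timeout: float) -> bool:
        return self.acked

assert aloha_send(b"frame", ToyChannel(p=1.0), ack_wait=0.01) is True
assert aloha_send(b"frame", ToyChannel(p=0.0), ack_wait=0.01) is False
```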
When two stations transmit at about the same time, their frames may interfere with each other at the receiver so that neither gets through; this is known as a collision. If a received frame is determined to be invalid, the receiving station simply ignores the frame. ALOHA is as simple as can be, and pays a penalty for it. Because the number of collisions rises rapidly with increased load, the maximum utilization of the channel is only about 18%.

Slotted ALOHA

To improve efficiency, a modification of ALOHA, known as slotted ALOHA, was developed. In this scheme, time on the channel is organized into uniform slots whose size equals the frame transmission time. Some central clock or other technique is needed to synchronize all stations. Transmission is permitted to begin only at a slot boundary. Thus, frames that do overlap will do so totally. This increases the maximum utilization of the system to about 37%. Both ALOHA and slotted ALOHA exhibit poor utilization. Both fail to take advantage of one of the key properties of both packet radio networks and LANs, which is that propagation delay between stations may be very small compared to frame transmission time.

CSMA

The foregoing observations led to the development of carrier sense multiple access (CSMA). With CSMA, a station wishing to transmit first listens to the medium to determine if another transmission is in progress (carrier sense). If the medium is in use, the station must wait. If the medium is idle, the station may transmit. It may happen that two or more stations attempt to transmit at about the same time. If this happens, there will be a collision; the data from both transmissions will be garbled and not received successfully. To account for this, a station waits a reasonable amount of time after transmitting for an acknowledgment, taking into account the maximum round-trip propagation delay and the fact that the acknowledging station must also contend for the channel to respond. If there is no acknowledgment, the station assumes that a


collision has occurred and retransmits. This strategy is effective for networks in which the average frame transmission time is much longer than the propagation time. Collisions can occur only when more than one user begins transmitting within a short time interval (the period of the propagation delay). If a station begins to transmit a frame, and there are no collisions during the time it takes for the leading edge of the packet to propagate to the farthest station, then there will be no collision for this frame, because all other stations are now aware of the transmission. The maximum utilization achievable using CSMA can far exceed that of ALOHA or slotted ALOHA. The maximum utilization depends on the length of the frame and on the propagation time; the longer the frames or the shorter the propagation time, the higher the utilization.

Nonpersistent CSMA

With CSMA, an algorithm is needed to specify what a station should do if the medium is found busy. One algorithm is nonpersistent CSMA. A station wishing to transmit listens to the medium and obeys the following rules:
1. If the medium is idle, transmit; otherwise, go to step 2.
2. If the medium is busy, wait an amount of time drawn from a probability distribution (the retransmission delay) and repeat step 1.

The use of random delays reduces the probability of collisions. To see this, consider that two stations become ready to transmit at about the same time while another transmission is in progress; if both stations delay the same amount of time before trying again, they will both attempt to transmit at about the same time. A problem with nonpersistent CSMA is that capacity is wasted because the medium will generally remain idle following the end of a transmission even if there are one or more stations waiting to transmit.

Wireless LAN

Wireless LANs are generally categorized according to the transmission technique that is used. All current wireless LAN products fall into one of the following categories:
• Infrared (IR) LANs: An individual cell of an IR LAN is limited to a single room, because infrared light does not penetrate opaque walls.
• Spread spectrum LANs: This type of LAN makes use of spread spectrum transmission technology. In most cases, these LANs operate in the ISM (industrial, scientific, and medical) microwave bands so that no Federal Communications Commission (FCC) licensing is required for their use in the United States.

A wireless LAN must meet the same sort of requirements typical of any LAN, including high capacity, ability to cover short distances, full connectivity among attached stations, and broadcast capability. In addition, there are a number of requirements specific to the wireless LAN environment:
• Throughput: The medium access control protocol should make as efficient use as possible of the wireless medium to maximize capacity.
• Number of nodes: A wireless LAN may need to support hundreds of nodes across multiple cells.
• Connection to backbone LAN: Interconnection with stations on a wired backbone LAN is required, through control modules that connect both types of LANs.
• Service area: A typical coverage area has a diameter of 100 to 300 m.
• Battery power consumption: Mobile workers use battery-powered workstations that need a long battery life when used with wireless adapters.
• Transmission robustness and security: A wireless LAN may be especially vulnerable to interference and eavesdropping.
• Collocated network operation: It is likely that two or more wireless LANs will operate in the same or adjacent areas, with possible interference between LANs.


• License-free operation: Users would prefer wireless LAN products that can be used without having to secure a license for the frequency band used by the LAN.
• Handoff/roaming: The MAC protocol should enable mobile stations to move from one cell to another.
• Dynamic configuration: The MAC addressing and network management aspects of the LAN should permit dynamic and automated addition, deletion, and relocation of end systems without disruption to other users.

A desirable, though not necessary, characteristic of a wireless LAN is that it be usable without having to go through a licensing procedure. The licensing regulations differ from one country to another, which complicates this objective. Within the United States, the FCC has authorized two unlicensed applications within the ISM band: spread spectrum systems, which can operate at up to 1 watt, and very low power systems, which can operate at up to 0.5 watts. Since the FCC opened up this band, its use for spread spectrum wireless LANs has become popular. In the United States, three microwave bands have been set aside for unlicensed spread spectrum use: 902 - 928 MHz (the 915-MHz band), 2.4 - 2.4835 GHz (the 2.4-GHz band), and 5.725 - 5.825 GHz (the 5.8-GHz band). Of these, the 2.4-GHz band is also used in this manner in Europe and Japan. The higher the frequency, the higher the potential bandwidth, so the three bands are in increasing order of attractiveness from a capacity point of view. In addition, the potential for interference must be considered. There are a number of devices that operate at around 900 MHz, including cordless telephones, wireless microphones, and amateur radio. There are fewer devices operating at 2.4 GHz; one notable example is the microwave oven, which tends to have greater leakage of radiation with increasing age. At present there is little competition in the 5.8-GHz band; however, the higher the frequency band, in general, the more expensive the equipment.
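The three unlicensed bands listed above can be captured in a small lookup helper. This is purely illustrative; the band edges are the US figures quoted in the text:

```python
# US unlicensed spread spectrum bands (band edges in Hz, from the text above).
ISM_BANDS = {
    "915-MHz band": (902e6, 928e6),
    "2.4-GHz band": (2.4e9, 2.4835e9),
    "5.8-GHz band": (5.725e9, 5.825e9),
}

def ism_band(freq_hz: float):
    """Return the name of the unlicensed band containing freq_hz, else None."""
    for name, (lo, hi) in ISM_BANDS.items():
        if lo <= freq_hz <= hi:
            return name
    return None

assert ism_band(2.45e9) == "2.4-GHz band"   # a typical spread spectrum LAN frequency
assert ism_band(915e6) == "915-MHz band"
assert ism_band(5.0e9) is None              # outside all three unlicensed bands
```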

Figure 17.4 illustrates the model developed by the 802.11 working group. The smallest building block of a wireless LAN is a basic service set (BSS), which consists of some number of stations executing the same MAC protocol and competing for access to the same shared wireless medium. A BSS may be isolated or it may connect to a backbone distribution system (DS) through an access point (AP). A simple configuration is shown in Figure 17.4, in which each station belongs to a single BSS; that is, each station is within wireless range only of other stations within the same BSS. This figure also indicates that an access point (AP) is implemented as part of a station; the AP is the logic within a station that provides access to the DS by providing DS services in addition to acting as a station. To integrate the IEEE 802.11 architecture with a traditional wired LAN, a portal is used. The portal logic is implemented in a device, such as a bridge or router, that is part of the wired LAN and that is attached to the DS.

The AP functions as a bridge and a relay point. In a BSS, client stations do not communicate directly with one another. Rather, if one station in the BSS wants to communicate with another station in the same BSS, the MAC frame is first sent from the originating station to the AP, and then from the AP to the destination station. Similarly, a MAC frame from a station in the BSS to a remote station is sent from the

IEEE 802.11 Architecture


local station to the AP and then relayed by the AP over the DS on its way to the destination station. The BSS generally corresponds to what is referred to as a cell in the literature. The DS can be a switch, a wired network, or a wireless network. When all the stations in the BSS are mobile stations, with no connection to other BSSs, the BSS is called an independent BSS (IBSS). An IBSS is typically an ad hoc network. In an IBSS, the stations all communicate directly, and no AP is involved.
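The relay behavior described in this section can be sketched as a small routing decision. The names and the frame representation below are assumptions for illustration; real 802.11 marks these hops with ToDS/FromDS bits in the MAC header, which are not modeled here:

```python
# Sketch of frame delivery in an infrastructure BSS: client stations never
# communicate directly; every frame is relayed through the access point (AP).

bss_members = {"STA1", "STA2", "AP"}   # stations in this basic service set

def relay_path(src: str, dest: str) -> list:
    """Return the hop sequence for a frame from src to dest."""
    if dest in bss_members:
        return [src, "AP", dest]        # intra-BSS: station -> AP -> station
    return [src, "AP", "DS", dest]      # remote: the AP relays the frame over
                                        # the distribution system (DS)

assert relay_path("STA1", "STA2") == ["STA1", "AP", "STA2"]
assert relay_path("STA1", "STA9") == ["STA1", "AP", "DS", "STA9"]
```

In an IBSS, by contrast, the path would simply be `[src, dest]`, since the stations communicate directly and no AP is involved.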

It is also possible for two BSSs to overlap geographically, so that a single station could participate in more than one BSS. Further, the association between a station and a BSS is dynamic. Stations may turn off, come within range, and go out of range. An extended service set (ESS) consists of two or more basic service sets interconnected by a distribution system. Typically, the distribution system is a wired backbone LAN but can be any communications network. The extended service set appears as a single logical LAN to the logical link control (LLC) level.