CS212 Computers Architecture And Data Communication Study Guide Version 1.0


© 2006 by Global Business Unit – Higher Education Informatics Holdings Ltd A Member of Informatics Group Informatics Campus 10-12 Science Centre Road Singapore 609080 CS212 Computers Architecture And Data Communication Study Guide Version 1.0 Revised in June 2006 All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the publisher. Every precaution has been taken by the publisher and author(s) in the preparation of this book. The publisher offers no warranties or representations, nor does it accept any liabilities with respect to the use of any information or examples contained herein. All brand names and company names mentioned in this book are protected by their respective trademarks and are hereby acknowledged. The developer is wholly responsible for the contents, errors and omissions.


Chapter 1. Computer Hardware

• The hallmark of a von Neumann machine is a large random-access memory. Each

cell in the memory has a unique numerical address, which can be used to access or

replace the contents of that cell in a single step. In addition to its ability to address

memory locations directly, a von Neumann machine also has a central processing

unit (the CPU) that possesses a special working memory (register memory) for

holding data that are being operated on, and a set of built-in operations that is

rich in comparison with the Turing machine. The exact design of the central

processor varies considerably, but typically includes operations such as adding two

binary integers, or branching to another part of the program if the binary integer in

some register is equal to zero. The CPU can interpret information retrieved from

memory either as instructions to perform particular operations or as data to apply

the current operation to. Thus one portion of memory can contain a sequence of

instructions, called a program, and another portion of memory can contain the data

to be operated on by the program. The CPU repeatedly goes through a fetch-execute

cycle. A von Neumann machine runs efficiently because of its random access memory

and because its architecture can be implemented in electronic circuitry that makes it

very fast.

1.1 Components

All computers can be reduced to just two basic components:

• Primary storage or memory

• A central processing unit or CPU.


• The CPU is the "brains" of the computer. Its function is to execute programs that are

stored in memory. The CPU fetches an instruction stored in memory then executes

the retrieved (fetched) instruction within the CPU before proceeding to fetch the next

instruction from memory. This process continues until the CPU is told to stop. These

programs may involve data being processed in some manner or the results of an

operation being stored. Thus we can summarize:

• Fetch instruction: read instructions from the memory

• Interpret instructions: decode the instruction to determine the operation to

be performed

• Process data: execute the instruction

• Write data: results may mean writing data to memory or an

I/O device

• The ability to process and store data entails several further components, whose

arrangement forms the basic make-up of any computer. The CPU typically

consists of:

• A control unit

• An Arithmetic Logic Unit (ALU)

• Registers


1.1.1 CPU Structure

• Organization of a CPU Structure (diagram legend):

• Control Unit: MAR: memory address register; IR: instruction register; MBR: memory buffer register; PC: program counter; PSW: program status word

• ALU Unit: AC: accumulator register; DR: general-purpose register; MQ: multiplier-quotient register

• MM: Main Memory; SB: System Bus

• Control Unit (CU)

Circuitry located on the central processing unit, which coordinates and controls all

hardware. This is accomplished by using the contents of the instruction register to

decide which circuits are to be activated. The control unit is also responsible for

fetching instructions from the main memory and decoding the instruction.

• Arithmetic-Logic Unit (ALU)

Performs arithmetic operations such as addition and subtraction as well as logical

operations such as AND, OR and NOT. Most operations require two operands. One

of these operands usually comes from memory via the memory buffer register, while

the other is the previously loaded value stored in the accumulator. The results of an

arithmetic-logic unit operation are usually transferred to the accumulator (AC).

• Diagram: the Control Unit (Instruction Decoder, IR, MAR, MBR, PC, PSW) and the Arithmetic Logic Unit (AC, DR, MQ) inside the CPU, connected via the system bus to main memory (MM) and the I/O modules.


• Registers

Registers are the smallest units of memory and are located internally within the CPU.

They are most often used to temporarily store results or control information.

Registers within the CPU serve two basic functions:

• User-visible registers: these enable machine/assembly language programs

to minimize main memory references by optimizing use of registers.

• Control and Status registers: used by the control unit to control the

operation of the CPU and by privileged operating system programs to control

the execution of programs.

Examples include those usually found in the fetch-execute cycle:

• Accumulator (AC, DR)

A register located on the central processing unit. The contents can be used

by the arithmetic-logic unit for arithmetic and logic operations, and by the

memory buffer register. Usually, all results generated by the arithmetic-

logic unit end up in the accumulator.

• Instruction Register (IR)

A register located on the central processing unit, which holds the contents

of the last instruction fetched. This instruction is now ready to be executed

and is accessed by the control unit.


• Memory Address Register (MAR)

A register located on the central processing unit, which is in turn,

connected to the address lines of the system. This register specifies the

address in memory where information can be found and can be also used

to point to a memory location where information is to be stored.

• Memory Buffer Register (MBR)

A register located on the central processing unit, which is in turn

connected to the data lines of the system. The main purpose of this register

is to act as an interface between the central processing unit and memory.

When the control unit receives the appropriate signal, the memory location

stored in the memory address register is used to copy data from or to the

memory buffer register.

• Program Counter (PC)

Contains the memory address of the next instruction to be executed. The

contents of the program counter are copied to the memory address register

before an instruction is fetched from memory. At the completion of the

fetched instruction, the control unit updates the program counter to point

to the next instruction, which is to be fetched.


• Program Status Word (PSW)

Generally referred to as a Status Register (SR), this register encapsulates

key information used by the CPU to record exceptional conditions,

such as CPU-detected errors (an instruction attempting to

divide by zero), hardware faults detected by error checking circuits, and

urgent service requests or interrupts generated by the I/O modules.

1.2 Memory

• Memory is made up of a series of zeros (0) and ones (1) called bits (binary digits). These

individual bits are grouped together in sets of eight, and each group of eight is referred to as a byte.

Every byte in memory can be accessed by a unique address that identifies its location.

The memory in modern computers contains millions of bytes and is often referred to

as random-access memory (RAM).

• Memory organization

• This memory is called an N-word m-bit memory

• N is generally a power of 2, i.e. N = 2^n

• Size of address is n bits

• Size of word is m bits

• Example: A 4096-word 16-bit memory
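• Worked check for the example above: since N = 4096 = 2^12, the address is n = 12 bits wide, each word is m = 16 bits, and the total capacity is 4096 x 16 = 65,536 bits (8,192 bytes).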


• A diagram showing the structure of a Main Memory layout: N = 2^n words (Address 0 through Address N-1), each with a word length of m bits, connected to a uni-directional address bus and a bi-directional data bus.

• Memory Read

• Address is placed into the MAR

• Read control is asserted

• Contents of desired location is placed into MBR

• Memory Write

• The word to be written is placed into the MBR

• Address of memory location to be written is specified in MAR

• Write control line is asserted

• Content of MBR is transferred into memory location specified by the MAR
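
• The read and write sequences above can be mirrored in a few lines of code. Below is a minimal Python sketch (the Memory and CPU classes are illustrative assumptions, not part of the study guide) showing the MAR/MBR hand-off:

class Memory:
    def __init__(self, n_words):
        self.cells = [0] * n_words              # one word per addressable cell

class CPU:
    def __init__(self, memory):
        self.mem = memory
        self.mar = 0                            # memory address register
        self.mbr = 0                            # memory buffer register

    def memory_read(self, address):
        self.mar = address                      # address is placed into the MAR
        self.mbr = self.mem.cells[self.mar]     # read asserted; the word is copied into the MBR
        return self.mbr

    def memory_write(self, address, word):
        self.mbr = word                         # word to be written is placed into the MBR
        self.mar = address                      # target address is specified in the MAR
        self.mem.cells[self.mar] = self.mbr     # write asserted; MBR content stored in memory

cpu = CPU(Memory(4096))
cpu.memory_write(0x0AB, 0xABCD)
print(hex(cpu.memory_read(0x0AB)))              # prints 0xabcd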



1.3 Input / Output Devices

• Input Devices

• Accepts outside information

• Converts it into digital signals suitable for computation by the CPU

• Output Devices

• Communicates data stored in memory or processed data, to the outside world

• May be in various forms such as a computer Monitor or a Hardcopy

1.4 Bus Interconnection

• A bus is a communication pathway connecting two or more devices

• It is a shared transmission medium. Multiple devices connect to the bus, and a

signal transmitted by any one device is available for reception by all other

devices attached to the bus.

• Only one device at a time can successfully transmit

• A bus usually consists of multiple communication pathways, or lines, each

transmitting either 1 or 0, e.g. an 8-bit unit of data can be transmitted over eight

bus lines

• A system bus connects all the major components, i.e. the CPU, main memory and I/O


• Bus structure

• A system bus typically consists of 50-100 separate lines, each of which belongs to

one of the Data, Address or Control groups. The number of lines is usually

referred to as the width of the bus.

• Each line is assigned to a particular function

• The bus lines can be classified into 3 functional groupings

1. Data Lines

2. Address Lines

3. Control Lines

• Diagram of the Bus Interconnection Structure: the Control Unit, ALU, Registers, Main memory, Secondary memory, Input Devices and Output Devices are all attached to the shared Data bus, Address bus and Control bus.


• Data Lines

• Provides a path for moving data between system modules

• Address Lines

• Used to designate the source or destination of the data on the data bus

• The width of the address bus determines the maximum possible memory capacity of

the system (an n-bit address bus can address 2^n distinct locations)

• Control Lines

• Used to control the access to and the use of data and address lines

• Control signals transmit both command and timing information between system

modules

• Timing signals indicate the validity of data and address information

• Command signals specify the operations to be performed


1.5 Instruction-Execution Cycle

All computers have an instruction execution cycle. A basic instruction execution cycle

can be broken down into the following steps:

1. Fetch cycle

2. Execute cycle

• The fetch-Execute cycle

1.5.1 Fetch Cycle

• To start off the fetch cycle, the address, which is stored in the program counter

(PC), is transferred to the memory address register (MAR). The CPU then

transfers the instruction located at the address stored in the MAR to the

memory buffer register (MBR) via the data lines connecting the CPU to

memory. The control unit (CU) coordinates this transfer from memory to CPU.

To finish the cycle, the newly fetched instruction is transferred to the

instruction register (IR) and unless told otherwise, the CU increments the PC

to point to the next address location in memory.

• Flowchart (the fetch-execute cycle): Start -> Fetch instruction -> Execute instruction -> Stop, or back to Fetch for the next instruction


• Steps:

1. [PC] -> [MAR]

2. [MAR] -> Address bus

3. Read control is asserted

4. [MEM] -> Data bus -> [MBR]

5. [MBR] -> [IR]

6. [PC] + 1 -> [PC]

After the CPU has finished fetching an instruction, the CU checks the contents of

the IR and determines which type of execution is to be carried out next. This

process is known as the decoding phase. The instruction is now ready for the

execution cycle.

1.5.2 Execute Cycle

• Once an instruction has been loaded into the instruction register (IR), and the

control unit (CU) has examined and decoded the fetched instruction and

determined the required course of action to take, the execution cycle can

commence. Unlike the fetch cycle (and the interrupt cycle, both of which have

a set instruction sequence, which we will see later) the execute cycle can

comprise some complex operations (commonly called opcodes).


• Steps:

1. [IR] -> decoding circuitry

2. If required data are not available in instruction, determine the location

3. Fetch the data, if any

4. Execute the instruction

5. Store results, if any

• The actions within the execution cycle can be categorized into the following four

groups:

1. CPU - Memory: Data may be transferred from memory to the CPU or from the

CPU to memory.

2. CPU - I/O: Data may be transferred from an I/O module to the CPU or from the

CPU to an I/O module.

3. Data Processing: The CPU may perform some arithmetic or logic operation on

data via the arithmetic-logic unit (ALU).

4. Control: An instruction may specify that the sequence of operation may be

altered. For example, the program counter (PC) may be updated with a new

memory address to reflect that the next instruction fetched should be read from

this new location.


• For simplicity, the following examples will deal with two operations that

can occur: [LOAD ACC, memory] and [ADD ACC, memory], both of which

could be classified as memory reference instructions. Instructions that

can be executed without leaving the CPU are referred to as non-memory

reference instructions.

• LOAD ACC, memory

• This operation loads the accumulator (ACC) with data that is stored in the

memory location specified in the instruction. The operation starts off by

transferring the address portion of the instruction from the IR to the memory

address register (MAR). The CPU then transfers the instruction located at the

address stored in the MAR to the memory buffer register (MBR) via the data

lines connecting the CPU to memory. The CU coordinates this transfer from

memory to CPU. To finish the cycle, the newly fetched data is transferred to

the ACC.

• Steps:

1. [IR] {address portion} -> [MAR]

2. [MAR] -> [MEM] -> [MBR]

3. [MBR] -> [ACC]


• ADD ACC, memory

• This operation adds the data stored in the ACC with data that is stored in the

memory location specified in the instruction using the ALU. The operation starts

off by transferring the address portion of the instruction from the IR to the MAR.

The CPU then transfers the data located at the address stored in the MAR to

the MBR via the data lines connecting the CPU to memory. This transfer from

memory to CPU is coordinated by the CU. Next, the ALU adds the data stored in

the ACC and the MBR. To finish the cycle, the result of the addition operation is

stored in the ACC for future use.

• Steps:

1. [IR] {address portion} -> [MAR]

2. [MAR] -> [MEM] -> [MBR]

3. [MBR] + [ACC] -> [ALU]

4. [ALU] -> [ACC]

After the execution cycle completes, if an interrupt is not detected, the next instruction

is fetched and the process starts all over again.
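
• To make these memory-reference examples concrete, here is a minimal Python sketch of the fetch-execute loop for a toy accumulator machine (the opcodes, addresses and data values are illustrative assumptions, not the guide's machine):

memory = {0: ('LOAD', 100), 1: ('ADD', 101), 2: ('HALT', None),   # program
          100: 7, 101: 35}                                        # data
pc, acc, running = 0, 0, True
while running:
    mar = pc                              # [PC] -> [MAR]
    ir = memory[mar]                      # [MEM] -> [MBR] -> [IR]
    pc = pc + 1                           # [PC] + 1 -> [PC]
    opcode, address = ir                  # decode
    if opcode == 'LOAD':
        acc = memory[address]             # [MBR] -> [ACC]
    elif opcode == 'ADD':
        acc = acc + memory[address]       # [MBR] + [ACC] -> [ACC]
    elif opcode == 'HALT':
        running = False
print(acc)                                # prints 42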


• Example:

Draw a diagram (including MAR, MBR, IR and PC) to show how the instruction MOV

AX [7000] is fetched if the starting address for the instruction is 1FFF and the

content of the location at 7000 is ABCD.

• Instruction – Fetch-cycle (diagram: the instruction MOV AX [7000] stored at address 1FFF is fetched through the MAR, the address bus, the data bus and the MBR into the IR, after which [PC] + 1 -> [PC])

• Instruction – Execution-cycle (diagram: the IR is decoded, the address 7000 is placed in the MAR, and the data ABCD at address 7000 is returned over the data bus to the MBR and then to register AX)


• Fetch-cycle:

1. [PC] -> [MAR] MAR = 1FFF

2. [MAR] -> Address bus

3. Read control line is asserted

4. [MEM]1FFF -> Data bus -> [MBR] MBR = MOV AX [7000]

5. [MBR] -> [IR] IR = MOV AX [7000]

6. [PC] + 1 -> [PC]

• Execute-cycle:

1. [IR] -> decoding circuitry MAR = 7000

2. [MAR] <- 7000

3. [MAR] -> Address bus

4. Read control line is asserted

5. [MEM]7000 -> Data bus -> [MBR] MBR = ABCD

6. [MBR] -> AX AX = ABCD

1.5.3 Interrupt Cycle

• An interrupt can be described as a mechanism in which an I/O module etc. can

break the normal sequential control of the central processing unit (CPU), and

thus alter the way we view the traditional sequence of the fetch and execute cycle.

The main advantage of using interrupts is that the processor can be engaged in

executing other instructions while the I/O modules connected to the computer are

engaged in other operations.


• Instruction cycle with interrupts: Fetch Cycle -> Execute Cycle -> Interrupt Cycle (entered only if interrupts are enabled; skipped if disabled) -> back to the Fetch Cycle

• Common interrupts that the CPU can receive

• Program: Generated by some condition that occurs as a result of an

instruction execution, such as arithmetic overflow, division by zero, attempt to

execute an illegal machine instruction, and reference outside a user's allowed

memory space.

• Timer: Generated by a timer within the processor. This allows the

operating system to perform certain functions on a regular basis.

• I/O: Generated by an I/O controller, to signal normal completion of an

operation or to signal a variety of error conditions.

• Hardware failure: Generated by a failure such as power failure or

memory parity error.

• Flowchart: Start -> Fetch instruction -> Execute instruction -> Check for interrupt -> repeat (or Halt)


• Up until now we have dealt with the instruction execution cycle on the hardware

level. When interrupts are introduced, the CPU and the operating system driving the

system are responsible for suspending the program currently being run, as well

as restoring that program to the point it had reached before the interrupt was detected. To

handle this, an interrupt handler routine is executed. This interrupt handler is usually

built into the operating system.
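
• As a rough illustration of this flow, here is a minimal Python sketch (the TinyCPU class and its opcodes are hypothetical, not the guide's machine) in which an interrupt check follows each execute phase:

class TinyCPU:
    def __init__(self, program):
        self.program, self.pc = program, 0
        self.pending = []                     # queued interrupt requests
        self.halted = False

    def step(self):
        op = self.program[self.pc]            # fetch cycle
        self.pc = self.pc + 1
        if op == 'HALT':                      # execute cycle
            self.halted = True
        else:
            print('executing', op)
        while self.pending:                   # interrupt cycle (interrupts enabled)
            print('handling interrupt:', self.pending.pop(0))

cpu = TinyCPU(['ADD', 'MOV', 'HALT'])
cpu.pending.append('I/O completion')
while not cpu.halted:
    cpu.step()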

1.6 Modern Processors

• When central processing units (CPUs) were first developed, they processed the

first instruction before starting the second. For example, the processor fetched the

first instruction, decoded it, and then executed the fetched instruction, before

fetching the second instruction and starting the process over again. In a processor,

such as the one just described, the CPU itself is the weak link. The external bus

operates for at least one cycle (clock pulse) out of the three, but has to wait for the

CPU during the remaining cycles.

               Cycle 1   Cycle 2   Cycle 3   Cycle 4
Instruction 1  Fetch     Execute
Instruction 2                      Fetch     Execute

• Sequential execution of program instructions


• Modern processors on the other hand, have developed what are called pipelines.

Pipelines are the most common implementation technique in a CPU today that

increases the performance of the system. The idea behind the pipeline is that while

the first instruction is being executed, the second instruction can be fetched, or in

simple terms, instructions overlap.

• The first pipelines to be introduced were simple three-stage pipelines. While this

utilizes all the resources of the system, resource conflicts can occur, resulting in

instructions being held until the previous instruction has completed its current

stage. Apart from these minor hiccups, it is possible for the CPU to complete an

instruction every cycle as opposed to the earlier processors that required three

cycles per instruction.

• To overcome the delays associated with the three stage pipeline, modern

processors have broken down the execute cycle into a number of phases; some

have even broken down the fetch cycle in the fight to overcome delays in their

processors. No matter how many phases the cycle is broken down to, the end

result is that only one instruction can be completed every cycle.

               Cycle 1   Cycle 2   Cycle 3   Cycle 4
Instruction 1  Fetch     Execute
Instruction 2            Fetch     Execute
Instruction 3                      Fetch     Execute

• Execution cycle in a three-stage pipelined processor
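
• As a worked comparison (a standard pipeline calculation, using the figures above only as an illustration): with a k-stage pipeline and N instructions, execution takes roughly k + (N - 1) cycles instead of k x N. For k = 3 and N = 100 instructions this is 102 cycles pipelined versus 300 cycles sequential, approaching one completed instruction per cycle.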


• Enter the world of superscalar pipeline, where more than one instruction can be

issued per clock cycle. Intel describes their processors by different levels. For

example a level 2 (L2) processor (Pentium) can issue two instructions per clock

cycle and their level 3 (L3) processor (Pentium Pro) can issue 3. The topic of

pipelined architecture will be covered in more detail in later chapters.


• Exercises:

a) With a diagram, introduce into the fetch-execute cycle a provision for an interrupt. b) The instruction (e.g. LOAD LABEL) loads the data at the address specified in the

accumulator, which is a hexadecimal value. With a diagram, show how many times the instruction must be fetched, given that both the opcode field and the main memory are 1 byte wide. The starting address is FEDC (base 16) and the instruction is LOAD 0110.

c) The operation ADD ACC, Memory adds the data stored in the ACC with the data that

is stored in the memory location specified in the instruction using the ALU. Show the steps of the execution-cycle on the operation.

d) With the aid of a diagram, show how the instruction ADD [5000], 10, [1000] is

loaded into main memory if the starting address for the instruction is 0200 and the data stored in location 5000 and 1000 are 8 and 8 respectively. Then… Show how the instruction, which will add the data at address 5000 with the data in address 1000 and the number 10, and then store the result in address 5000, will be executed.

e) Give the definitions and purposes of the following:

• PSW • ALU • MM • MAR • MBR

f) Give a brief description on the concept of pipelined architecture. How does a

traditional sequence like the fetch-execute differ in this type of architecture?


Chapter 2. Instruction Formats

• A typical program involves performing a number of functionally different steps, such

as adding two numbers, testing for a particular condition, reading a character from the

keyboard, or sending a character to be displayed on a monitor. A computer must have

instructions capable of performing these four types of operations:

• Data transfer between the main memory and the CPU registers

• Arithmetic and logic operations on data

• Program sequencing and control

• I/O transfers

• The format of an instruction is depicted in a rectangular box symbolising the bits of

the instruction code. The bits of the binary instruction are divided into groups called

fields. The most common fields found in instruction formats are:

• An operation code field that specifies the operation to be performed.

• An address field that designates either a memory address or a code for choosing a

processor register.

• A mode field that specifies the way the address field is to be interpreted.

• Other special fields are sometimes employed under certain circumstances, as for

example a field that gives the number of shifts in a shift type instruction (a concept

discussed in more detail in LD201), or an operand field in an immediate type

instruction.

The operation code field of an instruction is a group of bits that define various

processor operations, such as Add, Subtract, Complement, and shift.

The bits that define the mode field of an instruction code specify a variety of

alternatives for choosing the operands from the given address field. The various

addressing modes will be discussed in the next chapter.


2.1 Address Field

• Operations specified by computer instructions are executed on some data stored in

memory or in processor registers. Operands residing in memory are specified by their

addresses. Operands residing in processor registers are specified by a register address.

A register address is a binary code of n bits that specifies one of 2^n registers in the

processor. Thus a computer with 16 processor registers R0 through to R15 will have

in its instruction code a register address field of four bits. The binary code 0101, for

example, will designate register R5. Computers may have instructions of several

different lengths containing varying numbers of addresses. The number of address

fields in the instruction format of a computer depends upon the internal organisation

of its registers. Most instructions fall in one of three types of organisation:

• Single accumulator organisation.

• Multiple register organisation.

• Stack organisation.

2.1.1 Accumulator Organisation

• An accumulator type organisation is the simplest of computer organisations. All

operations are performed with the implied accumulator register. The instruction

format in this type of computer uses one memory address field.

For example, the instruction that specifies an arithmetic addition has only one address

field symbolised by X.

ADD X

• ADD is the symbol for the operation code of the instruction, and X gives the address

of the operand in memory. This instruction results in the operation AC <- AC + M[X].

AC is the accumulator register and M[X] symbolises the memory word located at

address X.


2.1.2 Multiple Register Organisation

• A processor unit with multiple registers usually allows for greater programming

flexibility. The instruction format in this type of computer needs three registers. Thus,

the instruction for arithmetic addition may be written in symbolic form as

ADD R1, R2, R3

• to denote the operation R3 <- R1 + R2. However, the number of register address

fields in the instruction can be reduced from three to two if the destination register is

the same as one of the source registers.

ADD R1, R2

• Thus, the instruction will denote the operation R2 <- R2 + R1. Registers R1 and R2

are the source registers and R2 is also the destination register.

Computers with multiple processor registers employ the MOVE instruction to

symbolise the transfer of data from one location to another. The instruction

MOVE R1, R2

• Denotes a transfer R2 <- R1. Transfer type instructions need two address fields to

specify the source operand and the destination of transfer.


2.1.3 Stack Organisation

• The stack organisation will also be presented in further detail in the next chapter, and

later in this section. Computers with stack organisation have instructions that require

one address field for transferring data to and from the stack. Operation type

instructions such as ADD do not need an address field because the operation is

performed directly with the operands in the stack.

• To illustrate the influence of the number of address fields on computer programs, we

will evaluate the arithmetic statement

X = ( A + B ) * ( C + D )

Using the One, Two, and Three address instructions.


2.2 Three Address Instructions

• Instruction format: | OPCODE | OPER 1 | OPER 2 | OPER 3 |

• OPER 1: data source 1; OPER 2: data source 2; OPER 3: destination

• Computers with three address instruction formats can use each address field to

specify either a processor register or a memory address for an operand. The program

in symbolic form evaluates X = ( A + B ) * ( C + D ) is shown below, together with

an equivalent register transfer statement for each instruction.

ADD A, B, R1 R1 <- M [A] + M [B]

ADD C, D, R2 R2 <- M [C] + M [D]

MUL R1, R2, X M [X] <- R1 * R2

• It is assumed that the computer has two processor registers, R1 and R2. The symbol

M[A] denotes the operand stored in memory at the address symbolised by A.

• Advantage: Programming flexibility

• Disadvantage: Long instruction word

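• The register-transfer column can be executed directly. The following minimal Python sketch (with assumed sample values A=2, B=3, C=4, D=5, not taken from the guide) traces the three-address program:

M = {'A': 2, 'B': 3, 'C': 4, 'D': 5, 'X': 0}   # assumed sample memory contents
R1 = M['A'] + M['B']          # ADD A, B, R1    R1 <- M[A] + M[B]
R2 = M['C'] + M['D']          # ADD C, D, R2    R2 <- M[C] + M[D]
M['X'] = R1 * R2              # MUL R1, R2, X   M[X] <- R1 * R2
print(M['X'])                 # 45 = (2 + 3) * (4 + 5)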


2.3 Two Address Instructions

• Instruction format: | OPCODE | OPER 1 | OPER 2 |

• OPER 1: data source 1; OPER 2: data source 2

• Implied destination, either oper 1 or oper 2

• Two address instructions are the most common in commercial computers. Here again

each address field can specify either a processor register or a memory address. The

program to evaluate X = ( A + B ) * ( C + D ) is as follows:

MOVE A, R1 R1 <- M[A]

ADD B, R1 R1 <- R1 + M[B]

MOVE C, R2 R2 <- M[C]

ADD D, R2 R2 <- R2 + M[D]

MUL R2, R1 R1 <- R1 * R2

MOVE R1, X M[X] <- R1

• The MOVE instruction moves or transfers the operands to and from memory and

processor registers. The second operand listed in the symbolic instruction is assumed (implied)

to be the destination where the result of the operation is transferred.

• Advantage: Smaller instruction word

• Disadvantage: Implied destination may not be the desired address, needs a

data transfer to the actual location.



2.4 One Address Instructions

• Instruction format: | OPCODE | OPER 1 |  (OPER 1: data source 1)

• The other data source is an accumulator or stack

• Implied destination of operation

• A computer with a one-address instruction uses an implied AC register. The program

to evaluate the arithmetic statement is as follows:

LOAD A AC <- M[A]

ADD B AC <- AC + M[B]

STORE T M[T] <- AC

LOAD C AC <- M[C]

ADD D AC <- AC + M[D]

MUL T AC <- AC * M[T]

STORE X M[X] <- AC

• All operations are done between the AC register and a memory operand. The

symbolic address T designates a temporary memory location required for storing the

intermediate result.

• Advantage: Smaller instruction word.

• Disadvantage: Data needs to be loaded into the accumulator first. And the

implied destination may not be the desired address, therefore it requires a data

transfer to the actual location.

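• The same expression can be traced on the implied-accumulator machine. A minimal Python sketch (again with assumed sample values) of the one-address program:

M = {'A': 2, 'B': 3, 'C': 4, 'D': 5, 'T': 0, 'X': 0}
AC = M['A']            # LOAD A     AC <- M[A]
AC = AC + M['B']       # ADD B      AC <- AC + M[B]
M['T'] = AC            # STORE T    M[T] <- AC
AC = M['C']            # LOAD C     AC <- M[C]
AC = AC + M['D']       # ADD D      AC <- AC + M[D]
AC = AC * M['T']       # MUL T      AC <- AC * M[T]
M['X'] = AC            # STORE X    M[X] <- AC
print(M['X'])          # 45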


2.5 Zero Address ( Stack ) Instructions

• Most computers have a facility for a memory stack but only a few commercial

computers have the appropriate instructions for evaluating arithmetic expressions.

Such computers have a stack organised CPU with the top locations of the stack as

registers. The rest of the stack is in memory. In this way, the operations that must be

performed with the top two elements of the stack are available in processor registers

for manipulation with arithmetic circuits.

The PUSH and POP instructions require one address field to specify the source or

destination operand. Operation type instructions for the stack such as ADD and MUL

imply two operands on top of the stack and do not require an address field in the

instruction. The following program shows how the expression will be evaluated, X =

( A + B ) * ( C + D ):

PUSH A TOS <- A

PUSH B TOS <- B

ADD TOS <- ( A + B )

PUSH C TOS <- C

PUSH D TOS <- D

ADD TOS <- ( C + D )

MUL TOS <- ( C + D ) * ( A + B )

POP X M[X] <- TOS

• Advantage: Smaller instruction word.

• Disadvantage: Data needs to be loaded into the stack first. And the implied

destination may not be the desired address, therefore it requires a data transfer

to the actual location.

• Instruction format: | OPCODE |  (no address fields; the operands are implied on the top of the stack)
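
• A short Python sketch (sample values assumed as before) of the zero-address program, modelling PUSH and POP as list operations on the top of stack (TOS):

M = {'A': 2, 'B': 3, 'C': 4, 'D': 5, 'X': 0}
stack = []
stack.append(M['A'])                        # PUSH A
stack.append(M['B'])                        # PUSH B
stack.append(stack.pop() + stack.pop())     # ADD    TOS <- (A + B)
stack.append(M['C'])                        # PUSH C
stack.append(M['D'])                        # PUSH D
stack.append(stack.pop() + stack.pop())     # ADD    TOS <- (C + D)
stack.append(stack.pop() * stack.pop())     # MUL    TOS <- (C + D) * (A + B)
M['X'] = stack.pop()                        # POP X  M[X] <- TOS
print(M['X'])                               # 45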


2.6 Using Different Instruction Formats

• Given the scenario of a computer system, which has a 40-bit wide instruction word,

there are 12 instructions in the instruction set (4 bits for the opcode and 36 bits left for

the operands). How many bits would be allocated to the operand fields in each of the instruction formats?

2.6.1 Three Address Format

• 12 bits per operand field

• The number of addressable locations would be 2^12 = 4096

• Instruction format: | OPCODE (4 bits) | OPER 1 (12 bits) | OPER 2 (12 bits) | OPER 3 (12 bits) |

• Each operand field can address main memory locations 0 to 2^12 - 1


2.6.2 Two Address Format

• 18 bits per operand field

• The number of addressable locations would be 2^18 = 256K

• Instruction format: | OPCODE (4 bits) | OPER 1 (18 bits) | OPER 2 (18 bits) |

• Each operand field can address main memory locations 0 to 2^18 - 1

2.6.3 One Address Format

• 36 bits per operand field

• The number of addressable locations would be 2^36 = 64G

• Instruction format: | OPCODE (4 bits) | OPER 1 (36 bits) |

• The operand field can address main memory locations 0 to 2^36 - 1
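
• A quick Python check of these operand widths and addressable ranges (a hypothetical helper, not part of the guide):

word_bits, opcode_bits = 40, 4
for name, operands in (('three-address', 3), ('two-address', 2), ('one-address', 1)):
    field = (word_bits - opcode_bits) // operands       # bits per operand field
    print(name, field, 'bits per operand,', 2 ** field, 'addressable locations')
# three-address: 12 bits, 4096 locations; two-address: 18 bits, 262144 (256K);
# one-address: 36 bits, 68719476736 (64G)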




• Exercises:

(a) Write programs to compute X= ( A + B ) * ( C * D / E ) on each of the following

machines:

i) 0-address format

ii) 1-address format

iii) 2-address format

iv) 3-address format

The instruction available for use is as follows:

0-address    1-address    2-address    3-address
PUSH         LOA M        ADD A, B     ADD A, B, B
POP          STO M        DIV D, E     DIV D, E, E
MUL          ADD M        MUL C, D     MUL C, E, E
DIV          SUB M        MUL A, C     MUL B, E, X
ADD          MUL M        MOV X, A
SUB          DIV M

(b) Assume a system has a 24-bit wide instruction word. Calculate the minimum number

of bits needed for opcode in order to evaluate the above expression for various

machines in part (a). Hence calculate the maximum number of bits for the operand for

each of the machines.


(c) A computer system has a 128-bit instruction word and uses a 3-address format. There

are 155 different general-purpose registers available. Assume that the use of any of

these registers is required in a particular instruction – i.e. the appropriate register must

be specified in the instruction word as a special field. If there are 200 different

opcodes/instructions available for the system, what is the instruction format like?


Chapter 3. Addressing Methods

• Address fields in a typical instruction format are quite limited in addressable range,

therefore it would be better to give them the ability to reference a large range of

locations in main memory or, for some systems, virtual memory. To achieve this

objective, a variety of addressing techniques has been employed. They all involve

some trade-off between address range and/or addressing flexibility on the one hand,

and the number of memory references and/or the complexity of address calculation on the

other.

• In many computer systems, the computer will have several different programs. To

efficiently load and remove these programs at different locations in memory,

addressing techniques are provided that make the program re-locatable, meaning that

the same program can be run from many different sections of memory.

3.1 Addressing Techniques

• All computers provide more than one type of addressing mode. The question arises as

to how the control unit can determine which address mode is being used in a

particular instruction. Several approaches are taken. Often, different opcodes will use

different addressing modes. Also, one or more bits in the instruction format can be

used as a mode field. The value of the mode field determines which addressing mode is to

be used. Another thing to note is the effective address, which can be either a main memory

address or a register address.


3.1.1 Immediate Addressing

• The operand is given explicitly in the instruction. This mode is used in specifying

address and data constants in programs:

MOV 200_immediate, R0

• The instruction places the value 200 in register R0. The immediate mode is used to

specify the value of a source operand. It makes no sense as a destination because it

does not specify a location in which an operand can be stored. Using a subscript to

denote the immediate mode is not appropriate in an assembly language. Sometimes,

the value is written as it is, e.g. 200. But this can be confusing. A common

convention is to use the pound sign # in front of the value of an operand to indicate

that this value is to be used as an immediate operand. Using this convention, the

above instruction is written as:

MOV #200, R0

• Example:

• Load 5000: Move the value 5000 into the accumulator

• Immediate addressing technique (diagram: the value 5000, taken from the Load 5000 instruction in main memory, is placed straight into the accumulator)


3.1.2 Direct Addressing

• To specify a memory address in an instruction word, the most obvious technique is

simply to give the address in binary form. Although direct addressing provides the

most straightforward (and fastest) way to give a memory address, several other

techniques are also used. It requires only one memory reference and no special

calculation. However, because of this it has only a limited address space. A typical

instruction using direct addressing may look as follows:

MOV AX, [7000]

• In this case the instruction specifies that the contents of [7000], which is a memory

location, must be retrieved and placed into the accumulator AX. Common convention

for denoting an operand using direct addressing is to place brackets around the address, e.g.

[address].

• Example :

• Load 5000: Move the content at address 5000 into the accumulator

• Direct addressing technique (diagram: the instruction Load [5000] reads the content ABCD stored at main memory address 5000 into the accumulator)


3.1.3 Indirect Addressing

• The effective address of the operand is the contents of a main memory location, the

location whose address appears in the instruction. We denote indirection by prefixing

the name of the memory location (or register address, as we will see later in this

chapter) with an @ symbol. The memory location that contains the address of the

operand is called a pointer. Indirection is an important and powerful concept in

programming. Consider the analogy of a treasure hunt: In the instructions for the hunt

you may be told to go to a house at a given address. Instead of finding the treasure

there, you find a note that gives you another address where you will find the treasure.

By changing the note, the location of the treasure can be changed, but the instructions

for the hunt remain the same. Changing the note is equivalent to changing the

contents of a pointer in a computer program. An example instruction may look like:

MUL #10, @1000, [2000]

• In this instruction we are asked to multiply the value 10 with the contents of whatever is

found at the address stored in location 1000. For example, after we first look at

location 1000, its contents are the address 3000, from which we then retrieve a

value. The final part of the above instruction now asks us to store the resultant

operation in address 2000, found in main memory.

• Example

• Load @5000: Access main memory at address 5000 to get address of

actual data. Access main memory at this address to retrieve actual data

into accumulator.


• Indirect addressing technique (diagram: the instruction Load @5000 reads the pointer ABCD stored at address 5000, then reads the data FACE stored at address ABCD into the accumulator)

3.1.4 Register Addressing

• Similar to direct addressing, however the operand field refers to a register address

containing the actual data, instead of a main memory address. If register addressing is

heavily used in an instruction set, this implies that the CPU registers will be heavily

used. Because of the severely limited number of registers (compared to main memory

locations), their use in this fashion makes sense only if they are employed efficiently.

The advantage of register addressing is the fast access time; however, the obvious

disadvantage is the limited number of registers available. A typical instruction could be

similar to that of direct address, for example:

MOV AX, R1

• In this case the instruction specifies that the contents of R1, which is a register,

must be retrieved and placed into the accumulator AX. Common convention

for denoting an operand using register addressing is to write an R followed by the

register number, e.g. R3.



• Example:

• Load R5: Move the content of register R5 into the accumulator

• Register addressing technique (diagram: the instruction Load R5 copies the content ABCD of register R5 into the accumulator)

3.1.5 Register Indirect Addressing

• Similar to indirect addressing, however the operand field points to a register

containing the effective address of data. The effective address may be a memory

location or a register location. The same advantages and disadvantages that could be

said of indirect addressing could also be true here. A typical instruction might be,

again similar to indirect addressing, as such:

ADD #20, @R8, R3

In this instruction we are asked to add the value 20 with the contents of whatever is

found at the address held in register R8. For example, after we first look at R8, its

contents are the address 2000 (in this case the effective address is a memory

location), from which we then retrieve a value. The final part of the above

instruction now asks us to store the resultant operation in register location R3.



• Example

• Load @R3: Access register R3 to get the address of the actual data

(Assume this address is a register). Access this register to retrieve actual

data into accumulator

• Register indirect addressing technique (diagram: the instruction Load @R3 reads register R3 to obtain the register address R5, then copies the content FACE of R5 into the accumulator)

3.1.6 Displacement Addressing

• Operand field contains an offset or displacement. Uses a special-purpose register whose

contents are added to the offset to produce the effective address of the data. Basically it

combines the capabilities of direct addressing and register indirect addressing. Under

displacement addressing, there are three basic techniques, although it should be noted

that all are following the same generic method of specifying an effective address:

1. Relative addressing (PC)

2. Based addressing (BX)

3. Indexed addressing (SI)



• Example

• Load [displacement register e.g. PC, BX, or SI + 5]: Move the data at

location e.g. (PC + 5) into the accumulator.

• Displacement addressing technique (diagram: the instruction Load [PC + 5] adds the offset 5 to the PC value 5001 to form the relative address 5006, and the data ABCD stored at address 5006 is moved into the accumulator)
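
• To tie the modes together, here is a minimal Python sketch computing the operand for each technique (the memory and register contents loosely mirror the figures above and are otherwise assumptions):

memory = {0x5000: 0xABCD, 0xABCD: 0xFACE, 0x5006: 0xABCD}
registers = {'R3': 'R5', 'R5': 0xFACE, 'PC': 0x5001}

def operand(mode, field):
    if mode == 'immediate':            # the operand is the field itself
        return field
    if mode == 'direct':               # the field is a memory address
        return memory[field]
    if mode == 'indirect':             # the field points to a pointer in memory
        return memory[memory[field]]
    if mode == 'register':             # the field names the register holding the data
        return registers[field]
    if mode == 'register indirect':    # the named register holds the register address of the data
        return registers[registers[field]]
    if mode == 'displacement':         # the offset is added to the PC to form the address
        return memory[registers['PC'] + field]

print(hex(operand('direct', 0x5000)))            # 0xabcd
print(hex(operand('indirect', 0x5000)))          # 0xface
print(hex(operand('register indirect', 'R3')))   # 0xface
print(hex(operand('displacement', 5)))           # 0xabcd (data at PC + 5)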


3.2 Assembly Language

• Machine instructions are represented by patterns of 0’s and 1’s. Such patterns are

awkward to deal with when writing programs. Therefore we use symbols to represent

the patterns.

• For example, in the case of the Move and Add instructions, we use the symbolic

names MOVE and ADD to represent the corresponding operation code patterns.

Similarly we use the notation R3 to refer to register number three.

A complete set of such symbolic names and rules for their use constitutes a

programming language generally referred to as assembly language. The symbolic

names are called mnemonics; the set of rules for using the mnemonics in the

specification of complete instructions and programs is called the syntax of the

language.

• Programs written in an assembly language can be automatically translated into a

sequence of machine instructions by a special program called an assembler. The

assembler, like any other program, is stored as a sequence of machine instructions in

the main memory. A user program is usually entered into the computer; at this point

the program is simply a set of lines of alphanumeric characters. When the assembler

program is executed, it reads the user program, analyses it, and then generates the

desired machine language program. The latter contains patterns of 0’s and 1’s

specifying the instructions that will be executed by the computer. The user program in

its original alphanumeric text format is called the source program, and the assembled

machine language program is called the object program.


3.2.1 Instruction Notation

• The assembly language syntax may require us to write the MOVE instruction as

MOVE R0, SUM

• The mnemonic, MOVE, represents the operation performed by the instruction. The

assembler translates this mnemonic into a binary code that the computer understands.

The binary code is usually referred to as the opcode, because it specifies the operation

denoted by the mnemonic.

The opcode mnemonic is followed by at least one blank space. Then the information

that specifies the operand is given. In the example above, the source operand is in

register R0. This information is followed by the specification of the destination

operand, separated from the source operand by a comma. The destination operand is

in the memory location that has its address represented by the name SUM.

Since there are several possible addressing modes for specifying operand locations,

the assembly language syntax must indicate which mode is being used. For example,

a numerical value or a name used by itself, such as SUM in the preceding instruction

may be used to denote the direct mode. The pound sign usually denotes an immediate

operand. Thus the instruction

ADD #5, R3

• Adds the number 5 to the contents of register R3 and puts the result back into register

R3. The pound sign is not the only way to denote immediate addressing. In some

cases, the intended addressing mode is indicated by the opcode used. The assembly

language may have different opcode mnemonics for different addressing modes.

For example, the previous Add instruction may have to be written as

ADDI 5, R3


• The mnemonic ADDI states that the source operand is given in the immediate

addressing mode.

Putting parentheses around the name or symbol denoting the pointer to the operand

usually specifies indirect addressing. For example, if the number 5 is to be placed in a

memory location whose address is held in register R2, the desired action can be

specified as

MOVE #5, (R2) or perhaps MOVEI 5, (R2)

3.2.2 Number Notation

• When dealing with numerical values, it is often convenient to use the familiar decimal

notation. Of course, these values are stored in the computer as binary numbers. In

some situations it is more convenient to specify the binary patterns directly. Most

assemblers allow numerical values to be specified in different ways, using

conventions that are defined by the assembly language syntax. Consider for example,

the number 93, which is represented by the 8-bit binary number 01011101. If this

value is to be used as an immediate operand, it can be given as a decimal number, as

in the instruction

ADD #93, R1

Or as a binary number identified by a percent sign, as in

ADD #%01011101, R1


• Binary numbers can be written more compactly as hexadecimal numbers, in which

four bits are represented by a single hexadecimal digit. The hexadecimal notation is a

direct extension of the BCD code (binary coded decimal), where the first ten patterns

0000, 0001, …., 1001 are represented by the digits 0, 1, …., 9, as in BCD. The

remaining six 4-bit patterns, 1010, 1011, …., 1111, are represented by the letters A,

B, …., F.

• Thus, in hexadecimal representation, the decimal value 93 becomes 5D. In assembler

syntax, a hexadecimal representation is often identified by a dollar sign. Therefore we

can write

ADD #$5D, R1
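
• A quick Python check of the notation example above (93 decimal):

value = 93
print(format(value, '08b'))    # 01011101   (binary, written #%01011101 in the assembler)
print(format(value, 'X'))      # 5D         (hexadecimal, written #$5D)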


• Exercises

• Student Notes:


Chapter 4. Stacks & Subroutines

• A very useful feature included in many computers is a memory stack, also known as a

last-in-first-out (LIFO) list. A stack is a storage device that stores information in such

a manner that the item stored last is the first item retrieved. The operation of a stack is

sometimes compared to a stack of books. The last book placed on top of the stack will

be the first to be taken off.

• The stack is useful for a variety of applications and its organisation possesses special

features that facilitate many data processing tasks. A stack is used in some electronic

calculators and computers to facilitate the evaluation of arithmetic expressions.

However, its use in computers today is mostly for handling subroutines and

interrupts.

• A memory stack is essentially a portion of a memory unit accessed by an address that

is always incremented or decremented after the memory access. The register that

holds the address for the stack is called a stack pointer (SP) because its value always

points at the top item of the stack. The two operations of a stack are insertion and

deletion of items.

• The operation of insertion onto a stack is called PUSH and it can be thought of as the

result of pushing something onto the top of the stack.

• The operation of deletion is called POP and it can be thought of as the result of

removing one item so that the stack pops out.

• However, nothing is physically pushed or popped in a memory stack. These

operations are simulated by decrementing or incrementing the stack pointer register.


• Example:

The diagram below shows a portion of a memory organised as a stack.

ADDRESS    MEMORY
099                     (stack limit)
100
101        C            <- SP = 101
102        B
103        A            (stack base)

• A memory stack

• The stack pointer SP holds the binary address of the item that is currently on top of

the stack. Three items are presently stored in the stack: A, B & C in consecutive

addresses 103, 102 and 101 respectively. Item C is on top of the stack, so SP contains

the address 101. To remove the top item, the stack is popped by reading the item at

address 101 and incrementing SP. Item B is now on top of the stack since SP contains

the address 102. To insert a new item, the stack is pushed by first decrementing SP

and then writing the new item on top of the stack. Note that item C has been read out

but not physically removed. This does not matter as far as the stack operation is

concerned, because when the stack is pushed, a new item is written on top of the stack

regardless of what was there before.

• We can assume that the items in the stack communicate with a data register DR. A

new item is inserted with the push operation as follows:

SP <- SP – 1

M[SP] <- DR



• The stack pointer is decremented so it points at the address of the next word. A

memory write micro-operation inserts the word from the DR onto the top of the stack.

Note that SP holds the address of the top of the stack and that M[SP] denotes the

memory word specified by the address presently in SP.

The top item is deleted with a pop operation as follows:

DR <- M[SP]

SP <- SP + 1

• The top item is read from the stack into the DR. The stack pointer is then

incremented to point at the next item in the stack.

The two micro-operations needed for either the push or pop are access to memory

through SP and updating SP. Which of the two micro-operations is done first and

whether SP is updated by decrementing or incrementing depends on the organisation

of the stack. The stack may be constructed to grow by increasing the memory address.

In such a case, SP is incremented for the push operations and decremented for the pop

operations. A stack may also be constructed so that SP points at the next empty

location above the top of the stack. In this case, the sequence of micro-operations

must be interchanged.

A stack pointer is loaded with an initial value. This initial value must be the bottom

address of an assigned stack in memory. From then on, SP is automatically

decremented or incremented with every push or pop operation. The advantage of a

memory stack is that the CPU can refer to it without specifying an address, since the

address is always available and automatically updated in the stack pointer.
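
• A minimal Python sketch of these push and pop micro-operations (assuming a small memory array with the stack base at address 103, growing toward lower addresses as in the example above):

memory = [0] * 104           # addresses 0..103; address 103 is the stack base
SP = 104                     # stack empty: SP is decremented before the first write

def push(word):
    global SP
    SP = SP - 1              # SP <- SP - 1
    memory[SP] = word        # M[SP] <- DR

def pop():
    global SP
    word = memory[SP]        # DR <- M[SP]
    SP = SP + 1              # SP <- SP + 1
    return word

push('A'); push('B'); push('C')
print(SP, memory[SP])        # 101 C   (C is on top of the stack)
print(pop(), pop(), pop())   # C B A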


4.1 Reverse Polish Notation (RPN)

• A stack organisation is very effective for evaluating arithmetic expressions. The

common mathematical method of writing arithmetic expressions imposes difficulty

when evaluated by a computer. Conventional arithmetic expressions are written in the

infix notation, with each operator written between the operands. Consider the

expression:

A * B + C * D

• To evaluate the arithmetic expression it is necessary to compute the product A * B,

store this product, compute the product of C * D, and then sum the two products.

From this simple example we see that to evaluate arithmetic expressions in infix

notation it is necessary to scan back and forth along the expression to determine the

sequence of operations that must be performed.

• The Polish mathematician Jan Lukasiewicz proposed that arithmetic expressions be

written in prefix notation. This representation, referred to as Polish notation, places the operator before the operands. Postfix notation, referred to as reverse Polish notation, places the operator after the operands. The following examples demonstrate the three

representations:

A + B Infix notation

+ A B Prefix or Polish notation

A B + Postfix or reverse Polish notation

• Reverse Polish notation, also known as RPN, is a form suitable for stack

manipulation. The expression above can thus be said to be written in RPN as:

A B * C D * +


4.1.1 RPN Operation

• Scan the expression from left to right.

• When the operator is reached, perform the operation with the two operands to

the left of the operator.

• Remove the two operands and the operator and replace them with the number

obtained from the operation.

• Continue to scan the expression and repeat the procedure for every operator

until there are no more operators.

• For the above expression we find the operator * after A and B. We perform the

operation A * B and replace A, B and * by the product to obtain

( A * B ) C D * +

• where ( A * B ) is a single quantity obtained from the product. The next operator is a

* and its previous two operands are C and D; so we perform C * D and obtain an

expression with two operands and one operator:

( A * B ) ( C * D ) +

• The next operator is + and the two operands on its left are two products; so we add

the two quantities to obtain the result.

The conversion from infix notation to reverse Polish notation must take into

consideration the operational hierarchy adopted for infix notation. This hierarchy

dictates that we first perform all arithmetic inside inner parentheses, then inside outer

parentheses, then do multiplication and division, and finally, addition and subtraction.


• Consider the expression:

( A + B ) * [ C * ( D + E ) + F ]

• To evaluate the expression we first perform the arithmetic inside the parentheses and

then evaluate the expression inside the square brackets. The multiplication of C * ( D

+ E ) must be done prior to the addition of F. The last operation is the multiplication

of the two terms between the parentheses and brackets. The expression can be

converted to RPN by taking into consideration the operation hierarchy. The converted

expression is

A B + D E + C * F + *

• Proceeding from left to right, we first add A and B, then add D and E. At this point we

are left with:

( A + B ) ( D + E ) C * F + *

• Where ( A + B ) and ( D + E ) are each a single number obtained from the sum. The

two operands for the next * are C and ( D + E ). These two numbers are multiplied

and the product added to F. The final * causes the multiplication of the last result with

the number ( A + B ). Note that all expressions in RPN are without parentheses.

The subtraction and division operations are not commutative, and the order of the

operands is important. We define the RPN expression A B - to mean ( A – B ) and the

expression A B / to represent the division of A / B.


4.2 Stack Operations

• Reverse Polish notation combined with a stack provides an efficient way to evaluate

arithmetic expressions. This procedure is employed in some electronic calculators and

also in some computers. The stack is particularly useful for handling long, complex

problems involving chain calculations. It is based on the fact that any arithmetic

expression can be expressed in parentheses-free Polish notation.

• The procedure consists of first converting the arithmetic expression into its equivalent

RPN. The operands are pushed onto the stack in the order that they appear. The

initiation of an operation depends on whether we have a calculator or a computer. In a

calculator, the operators are entered through the keyboard. In a computer they must

be initiated by a set of program instructions. The following operations are executed

with the stack when an operation is specified: the two topmost operands in the stack

are popped and used for the operation. The result of the operation is pushed into the

stack, replacing the lower operand. By continuously pushing the operands onto the

stack and performing the operations as defined above, the expression is evaluated in the proper order and the final result remains on top of the stack.

• Example:

The following expression will clarify the procedure:

( 3 * 4 ) + ( 5 * 6 )

in reverse Polish notation, it is expressed as:

3 4 * 5 6 * +

• Now consider the above operations in the stack as shown below:


    Step  Input  Action                          Stack (top first)   SP
    1     3      push 3                          3                   103
    2     4      push 4                          4, 3                102
    3     *      pop 4 and 3, push 3 * 4 = 12    12                  103
    4     5      push 5                          5, 12               102
    5     6      push 6                          6, 5, 12            101
    6     *      pop 6 and 5, push 5 * 6 = 30    30, 12              102
    7     +      pop 30 and 12, push 42          42                  103

    (Stack addresses 100 to 103, with the stack base at address 103.)
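• The same procedure can be written as a minimal C sketch: operands are pushed, and each operator pops the two topmost operands and pushes the result. Restricting the operands to single digits and the operators to + and * is a simplification made only for this example.

    #include <stdio.h>
    #include <ctype.h>

    static int stack[32], sp = 0;                /* small software stack         */
    static void push(int v) { stack[sp++] = v; }
    static int  pop(void)   { return stack[--sp]; }

    /* Evaluate an RPN string containing single-digit operands, '+' and '*'. */
    int eval_rpn(const char *s) {
        for (; *s; s++) {
            if (isdigit((unsigned char)*s)) {
                push(*s - '0');                  /* operand: push it onto stack  */
            } else if (*s == '+' || *s == '*') {
                int b = pop(), a = pop();        /* operator: pop two operands   */
                push(*s == '+' ? a + b : a * b); /* push the result back         */
            }                                    /* blanks are simply skipped    */
        }
        return pop();                            /* final result is on top       */
    }

    int main(void) {
        printf("%d\n", eval_rpn("3 4 * 5 6 * +"));   /* prints 42 */
        return 0;
    }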


4.2.1 Computer Stack (recap of Section 2.2.1 from Chapter 2)

• The PUSH and POP instructions require one address field to specify the source or

destination operand. Operation type instructions for the stack such as ADD and MUL

imply two operands on top of the stack and do not require an address field in the

instruction. The following program shows how the expression X = ( A + B ) * ( C + D ) will be evaluated:

PUSH A TOS <- A

PUSH B TOS <- B

ADD TOS <- ( A + B )

PUSH C TOS <- C

PUSH D TOS <- D

ADD TOS <- ( C + D )

MUL TOS <- ( C + D ) * ( A + B )

POP X M[X] <- TOS


4.3 Subroutines

• A subroutine is a self-contained sequence of instructions that performs a given

computational task. During the execution of a program, a subroutine may be

called to perform its function many times at various points in the program.

Each time a subroutine is called, a branch is made to the beginning of the

subroutine to start executing its set of instructions. After the subroutine has

been executed, a branch is made again to return to the main program. A

subroutine is also known as a procedure.

• The instruction that transfers control to a subroutine is known by different

names. The most common names are call subroutine, call procedure, jump to

subroutine, or branch to subroutine. The call subroutine instruction has a one-

address field. The instruction is executed by performing two operations. The

address of the next instruction, which is available in the PC (called the return

address), is stored in a temporary location and control is then transferred to the

beginning of the subroutine. The last instruction that must be inserted in every

subroutine program is a return to the calling program. When this instruction is

executed, the return address stored in the temporary location is transferred into the PC. This results in a transfer of program control to the program that

called the subroutine.

• Different computers use different temporary locations for storing the return

address. Some computers store it in a fixed location in memory, some store it

in a processor register, and some store it in a stack. The advantage of using a stack for the return address is that when a succession of subroutines is called, the successive return addresses can be pushed onto the stack. The return

instruction causes the stack to pop, and the content of the top of the stack is

then transferred to the PC. In this way, the return is always to the program that

last called the subroutine.


• A subroutine call instruction is implemented with the following micro-

operations:

SP <- SP – 1 Decrement the stack pointer

M[SP] <- PC Store return address in stack

PC <- effective address Transfer control to the subroutine

The return instruction is implemented by popping the stack and transferring

the return address to PC.

PC <- M[SP] Transfer return address to PC

SP <- SP + 1 Increment stack pointer

By using a subroutine stack, all return addresses are automatically stored by

the hardware in the memory stack. The programmer does not have to be

concerned or remember where to return after the subroutine is executed.
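• These call and return micro-operations can be simulated in C as shown below; the memory array, the initial register values and the addresses are invented purely for illustration.

    #include <stdio.h>

    static unsigned M[0x10000];          /* simulated memory, holds the stack    */
    static unsigned PC, SP = 0xD000;     /* SP starts at the bottom of the stack */

    /* Call: SP <- SP - 1 ; M[SP] <- PC ; PC <- effective address */
    void call(unsigned effective_address) {
        SP = SP - 1;
        M[SP] = PC;                      /* save the return address on the stack */
        PC = effective_address;          /* transfer control to the subroutine   */
    }

    /* Return: PC <- M[SP] ; SP <- SP + 1 */
    void ret(void) {
        PC = M[SP];
        SP = SP + 1;
    }

    int main(void) {
        PC = 0x8501;                     /* pretend the call was fetched at 8500 */
        call(0x3800);                    /* branch to a subroutine at 3800       */
        printf("in subroutine: PC=%04X SP=%04X\n", PC, SP);
        ret();
        printf("after return:  PC=%04X SP=%04X\n", PC, SP);  /* PC back to 8501  */
        return 0;
    }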

4.4 Nested Subroutines

• A common programming practice, called subroutine nesting, is to have one

subroutine call another subroutine. If the return address is kept in a processor register, the return address of the second call will overwrite the one already in that register. Hence, it is essential to save the

contents of the register in some other location before calling another

subroutine. Otherwise the return address of the first subroutine will be lost.

• Subroutine nesting can be carried out to any depth. Eventually, the last

subroutine called completes its computations and returns to the subroutine that

called it. The return address needed for this first return is the last one

generated in the nested call sequence.


• A particular register is designated as the stack pointer to be used in the

operation; the stack pointer points to a stack called the processor stack. In

such a computer, a call subroutine instruction pushes the contents of the PC

onto the processor stack and loads the subroutine address into the PC. The

return instruction pops the return address from the processor stack into the

PC.

• Example:

Stack in Main Memory (memory map used in the example):

    Address     Contents
    3700
    3800        SUB 1:  Call SUB 2 ... RET   (the instruction after the call is at 3801)
    8500        MAIN:   Call SUB 1           (the instruction after the call is at 8501)
    BF00        SUB 2:  ... RET
    C000
    D000        STACK  (stack area in main memory)


• Before execution of SUB 1
  PC = ______   SP = ______
  (Stack diagram: stack area around addresses C000, C010 and D000, showing SP and PC, with unused locations above SP and previous data on the stack below it.)

• Call SUB 1 (assume no saving of registers)
  - Increment PC = ______
  - Decrement SP = ______
  - Push PC onto the stack
  - Load SUB 1 starting address into PC = ______


• Before execution of SUB 2
  PC = ______   SP = ______

• Call SUB 2 (assume no saving of registers)
  - Increment PC = ______
  - Decrement SP = ______
  - Push PC onto the stack
  - Load SUB 2 starting address into PC = ______


• Return from SUB 2
  - Get return address from the stack to PC = ______
  - Increment SP = ______

• Return from SUB 1
  - Get return address from the stack to PC = ______
  - Increment SP = ______


4.5 Parameter Transfer

• When calling a subroutine, a program must provide to the subroutine the

parameters, that is, the operands or their addresses, to be used in the

computation. Later, the subroutine returns other parameters, in this case, the

results of the computation. This exchange of information between a calling

program and a subroutine is referred to as parameter passing. Parameter

passing may occur in several ways. The parameters may be placed in registers

or in fixed memory locations, where they can be accessed by the subroutine.

Alternatively, the parameters may be placed on a stack, possibly the processor

stack used for saving the return address.

• Passing parameters through CPU registers is straightforward and efficient.

However, if many parameters are involved, there may not be enough general-

purpose registers available for this purpose. And the calling program may

need to retain information in some registers for use after returning from the

subroutine, making these registers unavailable for passing parameters. Using a

stack, on the other hand, is highly flexible; a stack can handle a large number

of parameters.
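• One possible convention for stack-based parameter passing is sketched below in C, again with a simulated memory stack: the caller pushes the parameters, the subroutine pops them, and the result is returned on the stack. Real calling conventions differ in detail; this is only an illustration.

    #include <stdio.h>

    static int M[1000];
    static int SP = 1000;                    /* descending software stack       */

    static void push(int v) { M[--SP] = v; }
    static int  pop(void)   { return M[SP++]; }

    /* Subroutine: expects two parameters on the stack, leaves one result. */
    void sum_sub(void) {
        int b = pop();                       /* parameters taken from the stack */
        int a = pop();
        push(a + b);                         /* result returned on the stack    */
    }

    int main(void) {
        push(20);                            /* caller pushes the parameters    */
        push(30);
        sum_sub();                           /* "call" the subroutine           */
        printf("result = %d\n", pop());      /* prints 50                       */
        return 0;
    }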


• Exercises: (a) Describe the steps, at a low level, by which parameters are passed into a subroutine by a calling program using a specific linkage register. Assume [R2] is the linkage register.

(b) The principle of a nested subroutine means that a subroutine can call another

subroutine to perform some task. Given the diagram below, note the location of the

Stack Pointers (SP) and the Program Counter (PC) before and during the calling of

a subroutine and state the Starting address of the subroutine where it is appropriate.

    Address      Contents
    2000         SUB 1
    2300         Call Sub 2          (the following instruction is at 2301)
    3000         MAIN: Call Sub 1    (the following instruction is at 3001)
    4000         SUB 3 ... Return
    5000         SUB 2
    5500         Call Sub 3          (the following instruction is at 5501)
    C000 to D000 STACK area: unused locations down to C010, existing data from C010 to D000


Chapter 5. Input Output Organization

• The input and output subsection of a computer provides an efficient mode of

communication between the central processing unit and the outside environment.

Programs and data must be entered into the computer memory for processing and

results obtained from computations must be recorded or displayed to the user. Among

the input and output devices that are commonly found in computer systems are

keyboards, display terminals, printers and disks. Other input and output devices

encountered are magnetic tape drives, digital plotters, optical readers, analog-to-digital

converters, and various data acquisition equipment. Computers can be used to control

various processes such as machine tooling, assembly line procedures, and industrial

control.

• The input and output facility of a computer is a function of its intended application.

The difference between a small and large system is partially dependent on the amount

of hardware the computer has available for communicating with other devices and the

number of devices connected to the system, and thus various modes of transfer, or

architectures will differ. However, it can be said that an I/O module does have some

generic functions.

5.1 I/O Module Functions

• Control and Timing

To co-ordinate the flow of traffic between the computer's internal resources and external devices. Because the CPU communicates with multiple devices, the internal resources, such as main memory and the system bus, must be shared among a number of activities, including data I/O.


• CPU Communication

The CPU must be able to decode specific commands and also send commands

to the I/O module, such as READ SECTOR and WRITE SECTOR. These commands are sent via the control bus. The data bus is

utilised for the transfer of data between the CPU and the I/O module. The

address bus is used for address recognition, where each I/O device has an

address unique to that particular device which is being controlled. Lastly there

is status reporting, usually noted in the PSW, to report the current status of an

I/O module and to report various error conditions that may have occurred.

• Device Communication

Communicating to external devices in terms of commands, status information

of various devices or of the I/O module, and the exchange of data between the

I/O module and CPU and vice versa.

• Data Buffering

Data buffering is required because of the different data transfer rate of the

CPU, memory and the particular I/O devices. An I/O module must be able to

operate at both device and memory speeds.

• Error Detection

Any error that has been detected will be recorded and reported to the CPU,

examples of these are a paper-jam, bad disk track, transmission error and so

on.


5.2 Accessing I/O Devices

• Most modern computers use a single bus arrangement as shown below:

• A single bus structure: the processor, the memory, and I/O devices 1 to n are all attached to one common bus.

The processor, memory, and the I/O devices are connected to this bus, which

consists of three sets of lines as discussed before, namely the address, data and

control lines. Each I/O device is assigned a unique set of addresses. When the

processor places a particular address on the address lines, the device that

recognises this address responds to the commands issued on the control lines. The

processor requests either a read or a write operation, and the requested data is then

transferred over the data lines. When I/O devices and memory share the same

address space, the arrangement is called memory-mapped I/O.

• Example of Memory-mapped:

With memory-mapped I/O, any machine instruction that can access memory can

be used to transfer data to or from an I/O device. For example, if DATAIN is the

address of the input buffer associated with the keyboard, the instruction

MOVE DATAIN, R0


reads the data from DATAIN and stores them in processor register R0. Similarly,

the instruction

MOVE R0, DATAOUT

sends the contents of register R0 to location DATAOUT, which may be the

output data buffer of a monitor or a printer. Memory-mapped I/O is usually used together with programmed I/O, which is discussed in the next section.

In another type of organisation called I/O mapped, the memory and the I/O

address spaces are separate. In this case the CPU must execute separate I/O

instructions to activate either the read I/O or write I/O lines, which cause a word

to be transferred between the addressed I/O and the CPU.
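• In C, memory-mapped device registers are usually reached through pointers to fixed addresses. The addresses used below are invented for the example; on real hardware they come from the system's memory map, and the code would run only on such hardware.

    #include <stdint.h>

    /* Hypothetical addresses of the keyboard input buffer (DATAIN) and the
       display output buffer (DATAOUT) in a memory-mapped I/O system.       */
    #define DATAIN  ((volatile uint8_t *)0x4000)
    #define DATAOUT ((volatile uint8_t *)0x4004)

    void echo_one_character(void) {
        uint8_t c = *DATAIN;   /* like MOVE DATAIN, R0 : read from the device */
        *DATAOUT = c;          /* like MOVE R0, DATAOUT : write to the device */
    }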

5.2.1 Programmed I/O

• Programmed I/O is most useful in small, low-speed systems where hardware costs

must be minimised. Programmed I/O requires that all I/O operations be executed

under the direct control of the CPU; in other words, every data-transfer operation

involving an I/O device requires the execution of an instruction by the CPU.

Typically the transfer is between two programmable registers: one a CPU register and

the other attached to the I/O device. The I/O device does not have a direct access to

main memory. A data transfer from an I/O device to memory requires the CPU to

execute several instructions, including an input instruction to transfer a word from the

I/O device to the CPU and a store instruction to transfer the word from CPU to

memory.

• The procedural representation of programmed I/O is as follows:


1. The CPU asserts a read command to the I/O module (CPU -> I/O).
2. The CPU reads the status of the I/O module (I/O -> CPU).
3. If the status is not ready, the CPU goes back to step 2; if an error condition is reported, it is handled; otherwise (ready) the CPU proceeds.
4. The CPU reads a word from the I/O module (I/O -> CPU).
5. The CPU writes the word to memory (CPU -> Memory).
6. If the transfer is not complete, the CPU returns to step 1; if it is complete, the CPU continues with the next instruction.

• Programmed I/O flowchart
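• The same busy-wait procedure can be sketched in C. The status and data register addresses and the READY and ERROR bit positions are assumptions made for this example only.

    #include <stdint.h>
    #include <stddef.h>

    #define IO_STATUS ((volatile uint8_t *)0x5000)  /* hypothetical status register */
    #define IO_DATA   ((volatile uint8_t *)0x5004)  /* hypothetical data register   */
    #define READY_BIT 0x01
    #define ERROR_BIT 0x80

    /* Transfer 'count' words from the device into 'buffer' under direct CPU control. */
    int programmed_io_read(uint8_t *buffer, size_t count) {
        for (size_t i = 0; i < count; i++) {
            uint8_t status;
            do {                                    /* CPU repeatedly reads status  */
                status = *IO_STATUS;
                if (status & ERROR_BIT) return -1;  /* error condition reported     */
            } while (!(status & READY_BIT));        /* not ready: keep polling      */
            buffer[i] = *IO_DATA;                   /* CPU reads a word from I/O    */
        }                                           /* and stores it in memory      */
        return 0;
    }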

5.2.2 Interrupt Driven I/O

• The problem with programmed I/O is that the CPU has to wait a long time for the I/O

module, whether data are being transmitted or received. The CPU, while waiting, must repeatedly interrogate the status of the I/O module. As a result, the performance of the entire system is severely degraded.



An alternative is for the CPU to issue an I/O command to a module and then go on to

do some other useful work. The I/O module will then interrupt the CPU to request service

when it is ready to exchange data with the CPU. The CPU then executes the data transfer,

as before, and then resumes its former processing.

1. The CPU asserts a read command to the I/O module (CPU -> I/O) and then does something else.
2. When it is ready, the I/O module interrupts the CPU.
3. The CPU reads the status of the I/O module (I/O -> CPU); an error condition is handled, otherwise (ready) the CPU proceeds.
4. The CPU reads a word from the I/O module (I/O -> CPU).
5. The CPU writes the word to memory (CPU -> Memory).
6. If the transfer is not complete, the CPU returns to step 1; if it is complete, it resumes its former processing.

• Interrupt Driven I/O flowchart
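• A hedged C skeleton of the interrupt-driven version is shown below. How the handler is attached to the interrupt system is hardware and operating system specific, so that step is only indicated by a comment; the data register address is the same hypothetical one as before.

    #include <stdint.h>

    #define IO_DATA ((volatile uint8_t *)0x5004)    /* hypothetical data register */

    static volatile uint8_t  rx_buffer[256];
    static volatile unsigned rx_count = 0;

    /* Interrupt service routine: invoked when the device is ready, so the
       CPU no longer has to poll the status register.                       */
    void io_interrupt_handler(void) {
        if (rx_count < sizeof rx_buffer)
            rx_buffer[rx_count++] = *IO_DATA;       /* I/O -> CPU -> memory */
    }

    int main(void) {
        /* ...install io_interrupt_handler in the interrupt vector (platform
           specific) and issue the read command to the I/O module here...   */
        while (rx_count < 16) {
            /* the CPU is free to do other useful work instead of polling */
        }
        return 0;
    }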



5.2.3 Direct Memory Access (DMA)

• Removing the CPU from the path and letting the I/O device manage the memory

buses directly would greatly improve the speed of transfer, as compared to the

previous two techniques. This transfer technique known as direct memory access or

DMA takes over the buses to manage the transfer directly between the I/O device and

memory.

• The CPU may be placed in an idle state in a variety of ways. One common method is

to disable the buses through special control signals. The bus request input is used by

the DMA to request that the CPU relinquish control of the buses. When this input

is active, the CPU terminates the execution of its present instruction and places the

address bus, data bus and the read/write lines into a high-impedance state. After this

is done, the CPU activates the bus granted output to inform the DMA that it can take

control of the buses. When the bus granted line is enabled, the DMA takes control of

the bus system to communicate directly with the memory. The transfer can be made

for an entire block of memory words, suspending the CPU operation until the entire

block is transferred; this is referred to as burst transfer. Alternatively, the transfer can be made one word at a time between CPU instruction executions; this is known as cycle stealing. In cycle stealing, the DMA controller raises the bus request line to the CPU just long enough to transfer one word. The CPU merely delays its operation for one memory cycle to allow the DMA transfer to steal that cycle; the effect is more of a brief pause than a suspension, and hence this is not an interrupt.
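• As an illustration of how software typically sets up such a transfer, the sketch below programs an entirely hypothetical DMA controller; the register names, addresses and control bits are invented, and a real controller's data sheet defines its own.

    #include <stdint.h>

    /* Hypothetical DMA controller registers (illustrative addresses only). */
    #define DMA_SRC   ((volatile uint32_t *)0x6000)  /* source (device) address    */
    #define DMA_DST   ((volatile uint32_t *)0x6004)  /* destination memory address */
    #define DMA_COUNT ((volatile uint32_t *)0x6008)  /* number of words            */
    #define DMA_CTRL  ((volatile uint32_t *)0x600C)  /* control / start register   */
    #define DMA_START 0x1
    #define DMA_BURST 0x2                            /* set for burst transfer;
                                                        clear for cycle stealing   */

    void start_dma_block_read(uint32_t src, uint32_t dst, uint32_t words) {
        *DMA_SRC   = src;
        *DMA_DST   = dst;
        *DMA_COUNT = words;
        *DMA_CTRL  = DMA_START | DMA_BURST;  /* the DMA now requests the bus,
                                                transfers the block, and will
                                                interrupt the CPU when done   */
    }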


• Cycle Stealing (overall sequence): the CPU generates a read 'block of data' command to the DMA module and is then free to do anything else; when the transfer is finished, the DMA module interrupts the CPU, the CPU reads the DMA status, and execution continues with the next instruction.

• The DMA module needs to take control of the bus in order to transfer data

to and from memory, and it can do so only when the CPU does not need the bus. The

diagram below shows the possible breakpoints for the DMA to interrupt

the CPU.

• Cycle Stealing in DMA: within one instruction cycle (fetch cycle, decode instruction, fetch operand, execute instruction, store result, process interrupt), the possible DMA breakpoints lie between the individual processor cycles, before the next instruction to be executed by the CPU.


5.3 Interrupts

• An interrupt is an asynchronous event that suspends the CPU's ordinary operations and makes it jump to a pre-programmed routine called a handler. The CPU must be

able to return to its original operation after executing the service routine.

• The concept of a program interrupt is used to handle a variety of problems that arise

out of normal program sequence. Program interrupt refers to the transfer of program

control from a currently running program to another service program as a result of an

externally or internally generated request. Control returns to the original program

after the service program is executed. The interrupt procedure is in principle similar

to a subroutine call except for three variations:

1. The interrupt is initiated by an external or internal signal rather than from the

executions of an instruction.

2. The address of the service program that processes the interrupt request is

determined by a hardware procedure rather than from the address field of an

instruction.

3. In response to an interrupt it is necessary to store all the information that defines

the state of the computer rather than storing only the program counter.

• After the computer has been interrupted and the corresponding service program has

been executed, the computer must return to exactly the same state that it was before

the interrupt occurred. Only if this happens will the interrupted program be able to

resume exactly as if nothing has happened. The state of the computer at the end of an

execution of an instruction is determined from the contents of the program counter

and other processor registers and the values of various status bits. The collection of

all status bits is sometimes called the program status word (PSW) or the status

register (SR). Typically, it includes the status bits from the last ALU operation and it

specifies what interrupts are allowed to occur and whether the computer is operating

in a user or system mode. Many computers have a resident operating system that


controls and supervises all other programs. When the computer is executing programs

that are part of the operating system, the computer is placed in system mode, and the

computer is set in user mode when user application programs are running. The mode

of the computer at any given time is determined from special status bits in the PSW.


Chapter 6 Memory Organization

• Memory is the portion of a computer system that is used for the storage and

subsequent retrieval of data and instructions.

• Every computer contains several types of devices to store the instructions and data

required for its operation. These storage devices, plus the algorithms (implemented by hardware and/or software) needed to manage the stored information, form the memory

system of the computer.

• A CPU should have rapid uninterrupted access to the external memories where its

programs and the data they process are stored so that the CPU can operate at or near its

maximum speed. Unfortunately, memories that operate at speeds comparable to

processor speeds are expensive, and generally only very small systems can afford to

employ a single memory using just one type of technology. Instead, the stored

information is distributed, often in complex fashion, over various memory units that

have very different performance and cost.

6.1 Memory Hierarchy

• There are tradeoffs among the three key characteristics of memory, namely cost, capacity, and access time.

• At any given time, a variety of technologies are used to implement memory systems. Across this spectrum of technologies, the following relationships hold:

1. Shorter access time, greater cost per bit.

2. Greater capacity, smaller cost per bit.

3. Greater capacity, longer access time


MEMORY HIERARCHY (from top to bottom: CPU registers, cache memory, main memory, secondary memory). Moving down the hierarchy there is (a) decreasing cost/bit, (b) increasing capacity, (c) increasing access time and (d) decreasing frequency of access by the CPU. Registers, cache and main memory are internal memory types; secondary memory is an external memory type.

• Representation of the Memory Hierarchy

• A good design is a memory organization that relies on not one but a hierarchy of

memory components - using smaller, more expensive, faster memories

supplemented by large, cheaper, slower memories.

• CPU Registers:

The fastest memory unit in the memory hierarchy. These high-speed registers in the CPU

serve as the working memory for temporary storage of instructions and data. They

usually form a general purpose register file for storing data as it is processed. Each

register can be accessed, that is, read or written into, within a single clock cycle.

• Cache memory:

Most computers now employ another layer of memory known as the cache, which is

positioned logically between the processor and the main memory. A cache's storage capacity is less than that of main memory, but its access time is much shorter, typically only one to three cycles.


• Main memory:

Also referred to as RAM (Random Access Memory). Main memory stores programs

and data that are in active use. Storage locations in main memory are addressed directly

by the CPU's load and store instructions.

• Secondary memory:

This memory type is much larger in capacity but also much slower than main memory.

Secondary memory stores system programs, large data files, and the like that are not

continually required by the CPU. It also acts as an overflow memory when the capacity

of main memory is exceeded. Information in secondary storage is considered to be on-

line but accessed indirectly via input/output programs that transfer information between

main and secondary memory. Examples include hard disk, magnetic tapes, Compact

Disks (CD), etc.

6.1.1 Memory performance

The issues of speed, cost and size are always paramount in discussing memory systems. An ideal memory would be fast, large and inexpensive. Unfortunately, fast memory chips are expensive, and for cost reasons packing a very large number of such cells into a single chip is impractical. There are alternative memory types, such as secondary storage, but these are slower. During program execution, the speed of memory access is of the utmost importance. The key to managing the operation of the hierarchical memory system is to bring the instructions and data that will be used in the near future as close to the CPU as possible; the mechanisms presented in the discussion of cache memory make this possible.


• A more formalised categorisation of performance can be viewed from these

characteristics:

1. Access time: Access time is determined by the different types of access methods. For

example, for random-access memory, this is the time from the instant that an address

is presented to the memory to the instant that data have been stored or available for

use. For non-random-access memory, access time is the time it takes to position the

read-write mechanism at the desired location.

2. Memory cycle time: this concept is primarily applied to random-access memory and

consists of the access time plus any additional time required before a second access

can commence. The additional time may be required for transients to die out on signal

lines or to regenerate data if they are read destructively.

3. Transfer rate: This is the rate at which data can be transferred into or out of a

memory unit.

For random-access memory, transfer rate = 1/(Cycle Time).

For non-random-access memory,

Average time to read or write N bits = Average access time + (Number of

bits)/Transfer rate
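• A small C example applying the two formulas above; the figures are made up purely for the sake of the calculation.

    #include <stdio.h>

    int main(void) {
        /* Random-access memory with a cycle time of 50 ns */
        double cycle_time    = 50e-9;
        double transfer_rate = 1.0 / cycle_time;           /* accesses per second */

        /* Non-random-access memory: average access time 5 ms, device transfer
           rate 10 Mbit/s, reading N = 4096 bits                                 */
        double access_time = 5e-3, dev_rate = 10e6, N = 4096;
        double avg_read_time = access_time + N / dev_rate;

        printf("RAM transfer rate       : %.0f accesses/s\n", transfer_rate);
        printf("Avg time to read N bits : %.6f s\n", avg_read_time);
        return 0;
    }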

6.1.2 Access Methods

• One of the sharpest distinctions among memory types is the method of accessing

units of data. Four types may be distinguished:

1. Sequential access: memory is organised into units of data, called records. Access

must be made in a specific linear sequence. Stored addressing information is used to

separate records and assist in the retrieval process. A shared read/write mechanism is

used, and this must be moved from its current location to the desired location, passing

and rejecting each intermediate record. Thus the time to access an arbitrary record is

highly variable.


Therefore, access time will be location dependent and data will be accessed whether required or not. For example: the magnetic tape.

2. Direct access: as with sequential access, direct access involves a shared read/write mechanism. However, individual blocks or records have a unique address based on physical location. Access is accomplished by direct access to reach a general vicinity plus sequential searching, counting, or waiting to reach the final location. Again, access time is variable. For example: the hard disk.

3. Random access: each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant. Thus, any location can be selected at random and directly addressed and accessed.

For example: the main memory.

4. Associative access: this is random-access type of memory that enables one to make a

comparison of desired bit locations within a word for a specified match, and to do this

for all words simultaneously. Thus a word is retrieved based on a portion of its

contents rather than its address. As with ordinary random-access memory, each

location has its own addressing mechanism, and retrieval time is constant independent

of location or prior access patterns. Cache memory employs associative access.

6.2 Cache Memory

6.2.1 How Caching Works

• The effectiveness of the cache mechanism is based on a property of computer

programs called the locality of reference. Analysis of programs shows that most of

their execution time is spent on routines in which many instructions are executed

repeatedly. These instructions may constitute a simple loop, nested loop, or a few

procedures that repeatedly call each other.


• The actual detailed pattern of instruction sequencing is not important; the point is that

many instructions in localised areas of the program are executed repeatedly during

some period, and the remainder of the program is accessed relatively infrequently.

This is referred to as locality of reference.

• There are two categories under the locality of reference.

o Temporal locality means that a recently executed instruction is likely to be

executed again very soon.

o Spatial locality means that instructions in close proximity to a recently executed

instruction (with respect to the instructions' addresses) are also likely to be

executed soon.

• If the active segments of a program can be placed in a fast cache memory, then the

total execution time can be reduced significantly. Conceptually, operation of a cache

memory is very simple. The memory control circuitry is designed to take advantage

of the property of locality of reference. The temporal aspect of the locality of

reference suggests that whenever an information item (instruction or data) is first

needed, this item should be brought into the cache where it will hopefully remain

until it is needed again. The spatial aspect suggests that instead of bringing just one

item from the main memory to the cache, it is wise to bring several items that reside

at adjacent addresses as well.


• Basic structure of a cache memory: the address, control and data lines are presented to the cache, which contains a cache data memory and a cache tag memory; comparing the presented address against the stored tags produces the cache 'hit' signal.

• One general way of introducing a cache into a computer is as a look-aside

buffer. The cache and the main memory are directly connected to the system bus. In

this particular design the CPU initiates a memory access by placing a (real) address

on the memory address bus at the start of a read or write cycle.

• The cache compares the address to the tag address currently residing in its tag

memory. If a match is found, that is, a cache hit occurs, the access is completed by a

read or write operation executed in the cache; main memory is not involved. If no

match is found in the cache, that is, a cache miss occurs, then the desired access is

completed by a read or write operation directed to memory. In response to a cache

miss, a block (line) of data from memory is transferred into the cache.

• The cache implements various replacement policies to determine where to place an

incoming block. When necessary, the cache block is replaced and the block being

replaced is transferred from cache back to main memory. Note that cache misses,

even though they are infrequent, result in block transfers that tie up the system bus,

making it unavailable for other uses like I/O operations.
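• The tag comparison can be sketched in C. A direct-mapped organisation with invented sizes is assumed here purely to keep the example small; the description above applies to other cache organisations as well.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_LINES  128                 /* cache lines                     */
    #define BLOCK_SIZE 16                  /* bytes per block (line)          */

    struct cache_line {
        bool     valid;
        uint32_t tag;
        uint8_t  data[BLOCK_SIZE];
    };

    static struct cache_line cache[NUM_LINES];

    /* Returns true on a cache hit, false on a miss (on a miss the block must
       be fetched from main memory and placed in the selected line).          */
    bool cache_lookup(uint32_t address) {
        uint32_t index = (address / BLOCK_SIZE) % NUM_LINES;  /* line select  */
        uint32_t tag   = (address / BLOCK_SIZE) / NUM_LINES;  /* stored tag   */
        return cache[index].valid && cache[index].tag == tag; /* tag compare  */
    }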


6.2.2 Replacement Policies

• The placement of blocks of information in a memory system is called memory

allocation. The method of selecting the part of memory in which an incoming block

is to be placed is the replacement policy.

• When a new block is brought into the cache, one of the existing blocks must be replaced.

• Simple replacement policies assign a block to memory only when an unoccupied or

inactive region of sufficient size is available. More aggressive policies pre-empt

occupied blocks to make room for the block. The main goal in choosing a

replacement policy is to maximise the hit ratio of the faster memory and minimise the

miss ratio. Two useful replacement policies are first-in-first-out (FIFO) and least

recently used (LRU).

• For example consider a paging system in which the cache has a capacity of three

pages. The execution of a program requires reference to five distinct pages. The page

address stream formed by the program is 2, 3, 2, 1, 5, 2, 4, 5, 3, and 2.

    TIME            1   2   3   4   5   6   7   8   9  10
    ADDRESS TRACE   2   3   2   1   5   2   4   5   3   2

    FIFO frames     2   2   2   2   5   5   5   5   3   3
                        3   3   3   3   2   2   2   2   2
                                1   1   1   4   4   4   4
                           HIT                 HIT     HIT

• The action of a FIFO replacement policy

    TIME            1   2   3   4   5   6   7   8   9  10
    ADDRESS TRACE   2   3   2   1   5   2   4   5   3   2

    LRU frames      2   2   2   2   2   2   2   2   3   3
                        3   3   3   5   5   5   5   5   5
                                1   1   1   4   4   4   2
                           HIT         HIT     HIT

• The action of a LRU replacement policy
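• Both tables can be reproduced with the short C simulation below, which uses three page frames and the same address trace; it reports 3 hits for FIFO and 3 hits for LRU.

    #include <stdio.h>

    #define FRAMES 3

    static int trace[] = {2, 3, 2, 1, 5, 2, 4, 5, 3, 2};
    #define N (int)(sizeof trace / sizeof trace[0])

    /* use_lru = 0: FIFO (replace the oldest-loaded page)
       use_lru = 1: LRU  (replace the least recently used page)                */
    int count_hits(int use_lru) {
        int page[FRAMES], stamp[FRAMES], used = 0, hits = 0;
        for (int t = 0; t < N; t++) {
            int hit = -1;
            for (int f = 0; f < used; f++)
                if (page[f] == trace[t]) hit = f;
            if (hit >= 0) {
                hits++;
                if (use_lru) stamp[hit] = t;        /* refresh recency for LRU */
            } else if (used < FRAMES) {
                page[used] = trace[t]; stamp[used] = t; used++;
            } else {
                int victim = 0;                     /* smallest stamp replaced */
                for (int f = 1; f < FRAMES; f++)
                    if (stamp[f] < stamp[victim]) victim = f;
                page[victim] = trace[t]; stamp[victim] = t;
            }
        }
        return hits;
    }

    int main(void) {
        printf("FIFO hits: %d\n", count_hits(0));   /* 3 */
        printf("LRU  hits: %d\n", count_hits(1));   /* 3 */
        return 0;
    }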

6.2.3 Hit Rate & Miss Penalty

• An indicator of the effectiveness of a particular implementation of the memory

hierarchy is the success rate in accessing information at various levels of the

hierarchy. Recall that a successful access to data in a cache memory is called a hit.

The number of hits stated as a fraction of all attempted accesses is called the hit rate,

and the miss rate is the number of misses stated as a fraction of attempted accesses.

• Ideally, the entire memory hierarchy would appear to the CPU as a single memory

unit that has the access time of a cache on the CPU and the size of a device in

secondary storage. How close we get to this ideal depends largely on the hit rate at

different levels of the hierarchy.

• Performance is adversely affected by the actions that must be taken after a miss.

The extra time needed to bring the desired information into the cache is called the


miss penalty. This penalty is ultimately reflected in the time that the CPU is

stalled because the required instructions or data are not available for execution. In

general, the miss penalty is the time needed to bring a block of data from a slower

unit in the memory hierarchy to a faster unit. The miss penalty is reduced if

efficient mechanisms for transferring data between the various units of the

hierarchy are implemented.
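• A common textbook way of putting hit rate and miss penalty together numerically (not taken from this study guide) is the average memory access time, sketched below with made-up figures.

    #include <stdio.h>

    int main(void) {
        /* Illustrative values only: 2 ns cache hit time, 95% hit rate,
           40 ns miss penalty (time to bring a block from main memory).  */
        double hit_time = 2.0, hit_rate = 0.95, miss_penalty = 40.0;
        double miss_rate = 1.0 - hit_rate;
        double average   = hit_time + miss_rate * miss_penalty;      /* in ns */
        printf("Average memory access time = %.2f ns\n", average);   /* 4.00  */
        return 0;
    }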

6.3 Virtual Memory

Most computers today have something like 32 or 64 megabytes of RAM available for the

CPU to use. Unfortunately, that amount of RAM is not enough to run all of the programs

that most users expect to run at once.

• In most modern computer systems, the physical main memory is not as large as the address space

spanned by an address issued in the processor. When a program does not completely fit into main

memory, the parts of it not currently being executed are stored on secondary storage devices.

• The purpose of virtual memory is to enlarge the address space, the set of

addresses a program can utilize. For example, virtual memory might contain

twice as many addresses as main memory. A program using all of virtual

memory, therefore, would not be able to fit in main memory all at once.

Nevertheless, the computer could execute such a program by copying into

main memory those portions of the program needed at any given point during

execution.

To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on a disk until it is needed. When the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.

The process of translating virtual addresses into real addresses is called mapping. The copying of virtual pages from disk to main memory is known as paging or swapping.


• Virtual memory was invented to automatically manage the two levels of the memory hierarchy represented by main memory and secondary storage. Unlike the cache, a virtual memory block is called a page, and a virtual memory miss is called a page fault. The CPU produces a virtual address, which is translated by a combination of hardware and software to a physical address, which in turn can be used to access main memory. This process is called memory mapping or address translation.

Note: The number of pages addressable with the virtual address need not match the number of pages addressable with the physical address.

• The binary addresses that the processor issues for either instructions or data are

called virtual or logical addresses. These addresses are translated into physical

address by a combination of hardware and software components.

• If a virtual address refers to a part of the program or data space that is currently in

the physical memory, then the contents of the appropriate location in the main

memory are accessed immediately. On the other hand, if the referenced address is

not in the main memory, its contents must be brought into a suitable location in

the memory before they can be used.

• A special hardware unit, called the memory management unit (MMU), translates

virtual addresses into physical addresses. When the desired data (or instructions) are

in the main memory, these data are fetched; if the data are not in the main memory, the

MMU causes the operating system to bring the data into the main memory from the

secondary storage.

• Virtual memory organisation: the processor issues a virtual address to the MMU; the MMU translates it into a physical address, which is used to access the cache and the main memory, and data are transferred between the main memory and secondary storage as required.

• When a program generates an access request to a page that is not in the main

memory, a page fault is said to occur - similar to a cache miss. The whole page must

be brought in from secondary storage into memory before access can proceed. When

it detects a page fault, the MMU asks the operating system to intervene by raising an

exception (interrupt). Processing of the active task is interrupted, and control is

transferred to the operating system. The operating system then copies the requested

page from the disk into main memory and returns control to the interrupted task.

Because a long delay occurs while the page transfer takes place, the operating system

may suspend execution of the task that caused the page fault and begin execution of

another task whose pages are in main memory.

• If a new page is brought from secondary memory when the main memory is full,

it must replace one of the resident pages. The problem of choosing which page to

remove is just as critical here as it is in the cache, and the idea that programs

spend most of their time in a few localised areas also applies. Because main

memories are considerably larger than cache memories, it should be possible to

keep relatively larger portions of a program in the main memory. This will reduce

the frequency of transfers to and from secondary storage.


6.3.1 Paging

• Since a page can reside anywhere, we need a mechanism to find it. This

mechanism is a structure called a page table. A page table, which resides in

memory is indexed with the page number from the virtual address and contains

the corresponding physical page number.

• Each program has its own page table, which maps the virtual address space of the

program to physical memory. To indicate the location of the page table in

memory, the hardware includes a register (page table register) that points to the

start of the page table.

• A page is a fixed-length block that can be assigned to fixed regions of physical

memory called page frames. The advantage of paging as opposed to other virtual

memory techniques is that data transfer between memory levels is simplified. An

incoming page can be assigned to any available page frame. In a pure paging system,

each virtual address consists of two parts: a page address (page number), which is used to index the page table, and a displacement (offset) within the page. Pages themselves should be neither too large nor too small: if too large, they take up a great deal of valuable space in the main memory; if too small, many page faults are likely to be incurred.

6.3.2 Address Translation

• A method for translating virtual addresses into physical addresses is to assume that all

programs and data are composed of pages, the definition of which is given above.

They constitute the basic unit of information that is moved between the main memory


and the secondary memory whenever the translation mechanism determines that a

move is required.

• A virtual-memory address translation method based on the concept of fixed-length

pages is shown in the diagram below. Each virtual address generated

by the processor, whether it is for an instruction fetch or an operand fetch/store

operation, is interpreted as a virtual page number followed by an offset that specifies

the location of a particular word within a page.

• Information about the main memory location of each page is kept in the page table.

This information includes the main memory address where the page is stored and the

current status of the page. An area in the main memory that can hold one page is

called a page frame.

• The starting address of the page table is kept in a page table base register. By adding

the virtual page number to the contents of this register, the address of the

corresponding entry in the page table is obtained. The contents of this location give

the starting address of the page if that page currently resides in the main memory.

• Virtual-memory address translation: the virtual address from the processor is split into a virtual page number and an offset; the virtual page number is added to the contents of the page table base register to select an entry in the page table; that entry supplies the page frame in memory (plus control bits), and the page frame combined with the offset forms the physical address in main memory.
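• A C sketch of this translation for fixed-length pages is given below; the 4 KB page size, the flat one-level page table and its size are assumptions made only for the example.

    #include <stdint.h>

    #define PAGE_SIZE  4096u                  /* assumed page size: 4 KB        */
    #define NUM_VPAGES 1024u                  /* virtual pages in this example  */

    /* Simplified page table entry: a valid bit plus the page frame number. */
    struct pte { int valid; uint32_t frame; };

    static struct pte page_table[NUM_VPAGES]; /* located via the page table
                                                 base register in hardware     */

    /* Translate a virtual address; returns 0 and sets *pa on success,
       or -1 on a page fault (the page is not in main memory).                 */
    int translate(uint32_t va, uint32_t *pa) {
        uint32_t vpn    = va / PAGE_SIZE;     /* virtual page number           */
        uint32_t offset = va % PAGE_SIZE;     /* offset within the page        */
        if (vpn >= NUM_VPAGES || !page_table[vpn].valid)
            return -1;                        /* page fault: OS must intervene */
        *pa = page_table[vpn].frame * PAGE_SIZE + offset;
        return 0;
    }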

• Exercises:

(a) What is the need for a memory hierarchy?

(b) Is it necessary to have different types of memory units or is it advisable to have only

one type of memory instead of different types of memory?

(c) What are replacement policies? What should be the main objective of having

replacement policies?

(d) What is the main purpose of having cache memory?

(e) Before the introduction of the virtual memory concept, how did programmers run large programs that were larger than the available memory size?


Chapter 7. Pipelining

• Briefly mentioned at the end of chapter one was the concept of a pipelined

architecture. In this chapter we will see how pipelining can lead to improved

performance. Pipelining is a particularly effective way of organising parallel activity

in a computer system.

• Pipelining is a technique of decomposing a sequential process into sub operations,

with each sub process being executed in a special dedicated segment that operates

concurrently with all other segments. Under this technique the computer's hardware

processes more than one instruction at a time and does not wait for one instruction to

complete before starting the next.

• A Pipeline can be visualised as a collection of processing segments where each

segment performs partial processing. The result obtained from each segment is passed

to the next segment in the pipeline. The final result is obtained after the data have

passed through all the segments in the pipeline.

7.1 Parallel Processing

• Instead of processing each instruction sequentially as in a conventional computer,

a parallel processing system is able to perform concurrent data processing to

achieve faster execution time.

• The purpose of parallel processing is to speed up the computer processing

capability and increase its throughput.

For example: While one instruction is being executed in the ALU the next instruction can

be read from the main memory.


7.2 Basic Concepts

• Consider how the idea of pipelining can be used in a computer. The processor

executes a program by fetching and executing instructions, one after the other:

• A sequential execution: instruction 1 is fetched and then executed, then instruction 2 is fetched and executed, then instruction 3, and so on along the time axis.

• Now consider a computer that has two separate hardware units, one for fetching

instructions and another for executing them:

• Hardware organisation: an instruction fetch unit passes each fetched instruction through an intermediate storage buffer to an instruction execution unit.

• The instruction fetched by the fetch unit is deposited in an intermediate storage

buffer. The results of execution are deposited in the destination location specified by

an instruction. The computer is controlled by a clock whose period is such that the

fetch and execute steps of any instruction can each be completed in one clock cycle

(a clock cycle being one period of this clock). In the first clock

cycle, the fetch unit fetches an instruction and stores it in the buffer at the end of the

clock cycle. In the second clock cycle, the instruction fetch unit proceeds with the

fetch operation for the second instruction. Meanwhile, the execution unit performs the


operation specified in the first instruction that is available in the buffer. The end of

the second clock cycle completes the execution of the first instruction and instruction

number two is now available in the buffer. In this manner, both the fetch and the

execute units are kept busy all the time.

    CLOCK CYCLE    1    2    3    4
    INSTRUCTION
    1              F1   E1
    2                   F2   E2
    3                        F3   E3

• Pipelined execution

• The processing of an instruction need not be divided into only two steps. For

example, a pipelined processor may process each instruction in four steps, as follows:

1. F: fetch-read the instruction from memory.

2. D: decode-decode the instruction and fetch the source operand(s).

3. O: operate-perform the operations.

4. W: write-store the results in the destination location.

• The sequence of events in this case would look like:

    CLOCK CYCLE    1    2    3    4    5    6    7
    INSTRUCTION
    1              F1   D1   O1   W1
    2                   F2   D2   O2   W2
    3                        F3   D3   O3   W3


• Instruction execution divided into four parts
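• A chart of this kind can be generated mechanically: the small C program below simply staggers the four steps F, D, O and W by one clock cycle per instruction, assuming an ideal pipeline with no stalls.

    #include <stdio.h>

    #define NUM_INSTR 3
    #define NUM_STAGE 4

    int main(void) {
        const char stage[NUM_STAGE] = {'F', 'D', 'O', 'W'};
        int cycles = NUM_INSTR + NUM_STAGE - 1;       /* 6 cycles for 3 instructions */

        printf("Cycle      ");
        for (int c = 1; c <= cycles; c++) printf("%3d", c);
        printf("\n");

        for (int i = 0; i < NUM_INSTR; i++) {         /* one row per instruction     */
            printf("Instr %d    ", i + 1);
            for (int c = 0; c < cycles; c++) {
                int s = c - i;                        /* stage occupied this cycle   */
                if (s >= 0 && s < NUM_STAGE) printf(" %c%d", stage[s], i + 1);
                else                         printf("   ");
            }
            printf("\n");
        }
        return 0;
    }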

• This means that four distinct hardware units are needed. These units must be capable

of operating in parallel. Each unit operates on different data, and the result of which is

passed to the next unit downstream through a storage buffer. Each buffer holds the

information needed by the units downstream to complete execution of an instruction.

• Hardware organisation: four units in series (Fetch Instruction F, Decode Instruction D, Perform Operation O, Write Results W), separated by storage buffers B1, B2 and B3.

• For example, during clock cycle 4, the information in the buffers are as follows:

1. Buffer B1 holds instruction number 3, which was fetched in cycle 3 and being

decoded by the instruction-decoding unit.

2. Buffer B2 holds both the source operands for instruction 2 and the specification of

the operation to be performed, which were produced by the decoding hardware in

cycle 3. It also holds the information needed for the write step of instruction 2.

This information is not needed by stage 3 of the pipeline, but it must be passed on

to stage 4 in the following clock cycle to enable that stage to perform the required

write operation.

3. Buffer B3 holds the results produced by the operation unit and the destination

information for instruction 1.

• With a four-stage pipeline, the rate at which instructions are executed is four times

that of sequential operation. It is important to understand that pipelining does not

result in individual instructions being executed faster; rather, it is the throughput that increases. Throughput is the amount of processing that can be accomplished during a


given interval of time, and it is measured by the number of instructions whose execution is completed per second.

• Based on what we have seen, the increase in performance resulting from pipelining

appears to be proportional to the number of pipeline stages. This would be true if

pipelined operations could be sustained throughout program execution.

Unfortunately, this is not the case. For a variety of reasons, one of the pipeline stages

may not be able to complete its processing task for a given instruction in the time

allotted. Whenever one of the stages in the pipeline cannot complete its operation in

one clock cycle, the pipeline stalls. This can be caused by a time-consuming

arithmetic operation or by having to access the main memory following a cache miss.

Whenever the pipeline is stalled, some degradation in performance occurs. An

important goal in designing a pipelined processor is to identify ways to minimise their

impact on performance.

• Effect of an operation that takes more than one clock cycle to complete: the stage performing it holds its instruction for the extra cycles, and every instruction behind it in the pipeline is stalled by the same amount.

7.2.2 Dependency Constraints

• Consider a program with two instructions. When this program is executed in a

pipeline, the execution of the second instruction can begin before the execution of the

first instruction is completed. This means that the results generated by the first


instruction may not be available for use by the second instruction. We must ensure

that the results obtained when instructions are executed in a pipelined processor are

identical to those obtained when the same instructions are executed sequentially. Just

imagine what would happen to our car in the assembly line if the components were

placed in all different orders. The potential for obtaining errors, in the sense of

incorrect results, when operations are performed in parallel can be demonstrated with

a simple example. Assume that A=5, and then consider this simple operation:

A <- 3 + A

B <- 4 * A

• When these operations are performed in the order given, the result is B=32. But if

they are performed in parallel, the value of A used in computing B is the original

value 5, which leads to the incorrect result B = 20. If these two operations are performed by instructions in a program, then the instructions must be executed one after the other,

because the data used in the second instruction depends on the result of the first

instruction. On the other hand, the following two operations can be performed in parallel, because they are independent:

A ← 5 * C

B ← 20 + C

• This example illustrates a basic constraint that must be enforced to guarantee correct

results. No two operations that depend on each other can be performed in parallel.

This rather obvious condition has far-reaching consequences. Understanding the

implications of this condition is the key to understanding the variety of design

alternatives and trade-offs encountered in pipelined computers. To prevent such an incorrect result from occurring in a pipelined processor, the pipeline must stall until the dependency is resolved.
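The effect of such a dependency can be shown with a minimal sketch (Python is used here purely for illustration; it is not part of the instruction set discussed in this guide):

# Minimal sketch: why two dependent operations must not be performed in parallel.

# Sequential execution: the second operation sees the updated value of A.
A = 5
A = 3 + A              # A becomes 8
B_sequential = 4 * A   # uses the new A, so B = 32

# Naive "parallel" execution: both operations read their operands at the same
# time, so B is computed from the original value of A.
A = 5
snapshot_A = A                 # both operations start from the same snapshot of A
A = 3 + snapshot_A             # 8
B_parallel = 4 * snapshot_A    # uses the old A, so B = 20 (incorrect)

print(B_sequential, B_parallel)   # 32 20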

• Hence, the dependency just described arises when the destination of one instruction is used as a source operand in a subsequent instruction. Such a dependency can be seen in the following instruction sequence:


MUL R2, R3, R4

ADD R5, R4, R6

• The result of the multiply instruction is placed into register R4, which is then used as a source operand by the ADD instruction. Assuming that the multiply operation takes one clock cycle to complete, execution of the MUL instruction proceeds normally through the pipeline. However, when the decode unit begins decoding the ADD instruction in cycle 3, it finds that R4 is used as a source operand. Hence, the D step of that instruction cannot be completed until the W step of the multiply instruction has been completed. As a result, the pipelined execution stalls.
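A minimal sketch of this behaviour (illustrative Python only; the simplified rule that the dependent instruction's D step waits until the cycle after the producer's W step is an assumption made for the example):

# Minimal sketch: cycle-by-cycle schedule of
#   MUL R2, R3, R4
#   ADD R5, R4, R6
# in a simplified four-stage pipeline (F, D, O, W), where the D step of a
# dependent instruction cannot finish until the producer's W step is done.

def schedule(instrs, depends_on):
    """instrs: list of names; depends_on[i] = index of the producer, or None."""
    finish_w = {}                       # cycle in which each instruction's W completes
    table = []
    for i, name in enumerate(instrs):
        f = i + 1                       # instructions are fetched in successive cycles
        d = f + 1
        if depends_on[i] is not None:   # stall D until the producer has written back
            d = max(d, finish_w[depends_on[i]] + 1)
        o = d + 1
        w = o + 1
        finish_w[i] = w
        table.append((name, f, d, o, w))
    return table

for name, f, d, o, w in schedule(["MUL", "ADD"], [None, 0]):
    print(f"{name}:  F={f}  D={d}  O={o}  W={w}")
# MUL:  F=1  D=2  O=3  W=4
# ADD:  F=2  D=5  O=6  W=7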

• Exercises

1. What is pipelining?

2. What is Parallel Processing and what are the main advantages of parallel processing?

3. What is meant by dependency constraints?

4. Consider the following sequence of instructions:

ADD #20, R0, R1

MUL #3, R2, R3

ADD R0, R2, R5

In all instructions, the destination operand is given last. Initially, registers R0 and R2

contain 2000 and 50 respectively. These instructions are executed in a computer that has

a four-stage pipeline similar to that in this study guide. Assume that the first instruction is fetched in clock cycle 1 and that each instruction fetch requires only one clock cycle.


i) Draw a diagram to describe the operation being performed by each pipeline

stage during each of the clock cycles 1 through 4, regardless of whether all of the instructions complete execution within those cycles.

ii) Give the contents of the interstage buffers, B1, B2, and B3, during clock

cycles 2, 4, and 5.

5. Repeat problem (4) for the following instructions in Program X, but in this instance also draw the accompanying memory map, given that memory addresses [1000], [2000], and [3000] initially contain the values 3000, 2000, and 1000

respectively.

MUL #20, #20, [1000]

ADD [1000], [2000], [3000]


Chapter 8. Data Communication Concepts – Data Communication I

The key technology of the information age is computer communications. The value of a high speed data communication network is that it brings message sender and receiver closer together in time. As a result, we have collapsed the information lag, which is the time it takes for information to be disseminated world wide.

Knowledge of data communication is more and more important today because the Internet has transformed the world into a “global village”.

Data Communication (or data transmission) is the movement of encoded information from one point to another by means of electrical or optical transmission systems. Such systems often are called data communication networks.

OR

Data communication is the exchange of data between two devices via some form of transmission medium. Two computers are said to be interconnected if they are able to exchange information.

• Two types

o Local: Communication is considered local if the communicating devices are in the same building or a similarly restricted geographical area

o Remote: Communication is considered remote if the devices are farther apart

8.1 Data Communication System

• The fundamental purpose of a communications system is the exchange of data between two parties.

o Source: This device generates the data to be transmitted

o Transmitter: Generally, the data generated by a source system are not transmitted directly in their raw form. The information is transformed and encoded into electromagnetic signals by the transmitter for transmission

o Transmission system: This can be a single transmission line or a complex network connecting source and destination

o Receiver: The receiver accepts the signal from the transmission system and converts it back to its original form.


o Destination: This takes the data from the receiver

8.2 Components of a Data Communication System

Message:

This is the information which is communicated.

It can consist of text, numbers, pictures, sound or video or any combination of these

Sender:

This is the device that sends the data message

Receiver

This is the device that receives the data message

Medium

The transmission medium is the physical path through which message travels from sender to receiver

It may be a twisted-pair cable, coaxial cable, fibre optic cable, laser, or radio waves.

Protocol

This is a set of rules that govern data communication. It provides a common platform / language for the sender and receiver to communicate.


A Simple Data Communications System

8.3 Types of Communication

• There are basically three different types of computer data transmission (based on devices)

o Processor to Processor:

This normally refers to communication between two or more computers to interchange large quantities of data such as bulk update of files or records and so on.

This also refers to the communication that takes place between two or more computers when working in tandem.


This type of communication tends to be very fast and often takes place between computers in the same room

o Personal computer or dumb terminal to host computer:

The personal computer can send, receive and store information from another, larger computer. The larger computer is normally the host computer

o Personal computer to Personal Computer:

Personal computers can communicate with each other on a one-to-one basis. They exchange information freely with one another.

• Communication is also classified as Online or Offline

o Online:

- A direct connection is made between the devices interchanging information and the transfer occurs almost instantaneously. The amount of time taken for the actual data transfer depends on the amount of data to be transmitted and the capacity of the line

o Offline:

- Data is prepared for transmission at a later time. This type of communication resembles batch processing because data are processed in batches and communicated at a predetermined time

• Broadly speaking, there are two types of design for a communication subnet

o Point to point subnet

o Broadcast channels

8.4 Character Codes

• A character is a symbol that has a common, constant meaning. It might be the letter A or B, a number such as 1 or 2, or special symbols such as ? or &.

• Characters are represented by groups of bits that are binary zeros (0) and ones (1). These groups of bits are called a coding scheme (or code).


• A byte is a group of consecutive bits that is treated as a unit and commonly represents one character. Depending on the code and error checking used, such a group may contain 5, 6, 7, 8 or 9 bits.

• Examples of character codes used in data communications are:

♦ ASCII (American Standard Code for Information Interchange)

♦ EBCDIC (Extended Binary Coded Decimal Interchange Code)

♦ BCD (Binary Coded Decimal) and Baudot Code.

8.4.1 ASCII

♦ Developed by the American National Standards Institute (ANSI)

♦ Usually a 7-bit code (128 characters) plus one parity bit (for error checking on individual characters); an 8-bit version also exists (256 characters, i.e. extended ASCII) for graphics and foreign-language applications

♦ Widely used in data communication and processing
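As an illustration of the 7-bit ASCII code plus parity bit described above, here is a minimal sketch in Python (the choice of even parity is an assumption for the example; odd parity may equally be used):

# Minimal sketch: encode a character as 7-bit ASCII plus one even-parity bit.
def ascii_with_even_parity(ch):
    code = ord(ch)                        # 7-bit ASCII value, e.g. 'A' -> 65
    bits = format(code, "07b")            # seven data bits
    parity = str(bits.count("1") % 2)     # even parity: total number of 1s becomes even
    return bits + parity

print(ascii_with_even_parity("A"))        # 1000001 + parity 0 -> '10000010'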

8.4.2 EBCDIC

♦ Developed by IBM for data processing (in asynchronous transmission: 1 start bit, 8 data bits, 1 parity bit and 1 stop bit, i.e. 11 bits; in synchronous transmission: 9 bits)

♦ 8 bit code i.e. 256 characters possible

♦ If parity is implemented, a ninth bit will be used

8.4.3 Baudot Code

♦ 5 bit code derived from telegraphy

♦ Used on international Telex network (called Telex code, telegraph code). Speed is 150 bits per second or less.

♦ Uses "shift" characters to increase the character set to 58 valid character combinations.

Different collating sequence for letters and numbers e.g. 1 is higher than 9; A is higher than Z.


8.5 Data Flow Alternatives

[Figure: data flow alternatives — simplex transmission (sender only to receiver only), half duplex transmission (either end may send, but only one direction at a time), and full duplex transmission (both ends may send and receive simultaneously).]

8.5.1 Simplex Transmission

♦ Unidirectional; no feedback from the receiver (e.g. a card reader input device)

♦ Keyboards and monitors are examples

8.5.2 Half Duplex Transmission

♦ Bidirectional – two way transmission but data can be transmitted only in one direction at a time.

♦ Either transmit or receive

♦ Modem turnaround time can be substantial – the amount of time half duplex communication takes to switch between sending and receiving is called the turnaround time.

♦ E.g. traffic on a one-lane bridge, walkie-talkies, information enquiry systems, long-distance telephone calls, citizens band (CB) radio


8.5.3 Full Duplex Transmission

♦ Bidirectional

♦ Transmit and receive simultaneously

♦ No turnaround time involved

♦ E.g. traffic on a two-way street, telephone conversation (short distance)

8.6 Modes of Transmission

8.6.1 Parallel Transmission

[Figure: parallel transmission — the bits of each ASCII character travel from source to receiver on separate lines at the same time: parallel by bit, serial by character.]

♦ Bits of the character are transmitted in parallel, whereas the characters themselves are transmitted serially i.e. one character after the other

♦ Used for on-site communications and for the transmission of data between the computer and its peripheral devices (e.g. printers, magnetic tape handlers, disk subsystems)

♦ High transfer rate but expensive over long distance

Eg: Printers use parallel transmission.


8.6.2 Serial Transmission

[Figure: serial transmission — the bits of each ASCII character travel from source to receiver one after another along a single line: serial by bit, serial by character.]

♦ Bits of the encoded character are transmitted one after the other along a single channel (bit 1 is sent, then bit 2, and so on)

♦ The receiver then assembles the incoming bit stream into characters

♦ Used for long-distance communications

♦ In general with serial transmission the transmitting device sends one bit then a second bit and so on, until all the bits are transmitted.

♦ Serial transmission is considerably slower than parallel transmission.
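The serial idea can be sketched as follows (illustrative Python only): the sender emits the bits of each character one at a time, and the receiver counts off eight bits to reassemble each character.

# Minimal sketch: serial-by-bit transmission of a text string.
def serialize(text):
    """Sender side: emit the bits of each 8-bit character one after another."""
    for ch in text:
        for bit in format(ord(ch), "08b"):
            yield bit

def deserialize(bitstream):
    """Receiver side: count off eight bits at a time and rebuild each character."""
    bits = list(bitstream)
    chars = []
    for i in range(0, len(bits), 8):
        chars.append(chr(int("".join(bits[i:i + 8]), 2)))
    return "".join(chars)

line = serialize("Hi")          # one bit at a time on a single channel
print(deserialize(line))        # 'Hi'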

8.7 Bit Synchronization

To correctly interpret the bits coming from the source, the receiver has to know when to look at the line to take the bits off the line.

This is done by means of a clock at each end of the line.

[Figure: bit synchronization — a 100 bps clock at the source and a matching 100 bps clock at the receiver; the clocks help us achieve bit synchronization.]


The source clock tells the source how often to put the bits onto the line, and the receiver clock tells how often to look at the line.

Once bit synchronization is achieved, the next problem is to achieve character synchronization i.e. determining which group of bits belongs to a character.

If the receiver can:

♦ determine which bit is the first bit of a character or message,

♦ know how many bits there are in a character,

then, it can count off the required number of bits and assemble the character.

8.7.1 Overhead

Non-data bits or characters necessary for transmission, error detection or for use by protocol

E.g. for asynchronous transmission: start bits and stop bits

For synchronous transmission: SYN characters

8.7.2 Synchronous Transmission

♦ No interval between characters

♦ Low overhead

♦ Error checking/correction

♦ Problems of synchronization can arise when the message sequence consists of long runs of '1's or '0's


8.7.3 Asynchronous Transmission

♦ Often referred to as start-stop protocol

♦ Random interval between characters.

♦ High overhead

♦ Limited error checking/no error correction

♦ No problem with synchronization
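For the asynchronous (start-stop) format just described, the framing and its overhead can be sketched as follows (illustrative Python; the 1 start bit, 7 data bits, 1 even-parity bit and 1 stop bit layout is one common choice, assumed here for the example):

# Minimal sketch: frame one ASCII character for asynchronous transmission
# using 1 start bit, 7 data bits, 1 even-parity bit and 1 stop bit.
def frame(ch):
    data = format(ord(ch), "07b")
    parity = str(data.count("1") % 2)
    return "0" + data + parity + "1"        # start bit ... stop bit

f = frame("A")
print(f, len(f))                            # '0100000101' 10

# Overhead: only 7 of the 10 transmitted bits carry user data.
print("overhead = %.0f%%" % (100 * (len(f) - 7) / len(f)))   # overhead = 30%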

8.8 Transmission Media

A transmission medium is the matter or substance that carries a voice or data transmission.

Many different types of transmission media are currently in use, such as copper (wire or coaxial cable), glass (fibre optic cable), or air (radio, infrared, microwave or satellite).

A circuit (channel or line) is nothing more than the path over which data moves.

There are two basic types of media.

a. Guided media are those in which the message flows through a physical media such as a twisted pair wire, coaxial cable, or fibre optic cable.

b. Radiated media are those in which the message is broadcast through the air such as infrared, microwave or satellite.

8.8.1 Guided Media


Twisted pair, coaxial cable and fibre optic cable are the most commonly used guided media. Of these three, the most widely used are twisted pair and coaxial cable.

Twisted Pair Cable

♦ Made of two insulated copper wires twisted around each other throughout their entire length.

♦ Twisting helps minimize the effects of noise or electromagnetic interference.

♦ Relatively inexpensive and easy to install (flexible).

♦ Low noise immunity, narrow bandwidth.

♦ Readily available in existing buildings

♦ Usage: - Voice Communication

- Data Communication

♦ E.g. : Telephone


Coaxial Cable

A coaxial cable has an inner conductor (copper wire) with an outer conductor concentric with and completely surrounding it, which is usually grounded.

♦ A dielectric layer separates the inner and outer conductors. The entire cable is housed in an outer casing, which could be either a jacket or a shield.

♦ The cables currently used for local networking are classified in two ways according to the modulation techniques employed: baseband (50 ohms) and broadband (75 ohms).


Fibre Optic Cable

♦ A fibre optic cable may consist of several or even hundreds of optical fibres, each of which is capable of transmitting data at a very high bit rate.

♦ A single optical fibre has a centre core of a glass (or silica) or plastic material with a high index of refraction, surrounded by a cladding layer of a material with a slightly lower index. Optical fibres are very lightweight and small, with reduced size and weight compared with copper or coaxial cable, but they are easy to break.

♦ Low error rate, very high noise immunity, immunity to electrical and magnetic noise

♦ High cost of installation with special equipment and skill required

♦ Very expensive but may be economical for high-volume application

♦ Broad bandwidth


♦ Optical signals have frequencies of the order of hundreds of THz (that is, around 10^14 Hz) and can be carried over thousands of kilometres within certain types of optical fibre at specific optical wavelengths.

♦ Usage: - Voice Communication

- Video Transmission

- Data Communication

♦ One well-known type of network using fibre optics is the Fibre Distributed Data Interface (FDDI), which is often ring-based.

8.8.2 Radiated Media or Wireless Transmission

♦ No physical connections

♦ Requires no guided (physical) medium

♦ Modes: - Broadcast

- Point-to-point

♦ E.g.: - Microwave (Line-of-sight)

- Satellite (Very long distance)

- Infra-red (Short distance)

- Radio


8.8.3 Microwave

♦ A microwave is an extremely high-frequency radio communication beam that is transmitted over a direct line-of-sight path between any two points.

♦ As its name implies, a microwave signal has an extremely short wavelength.

♦ Microwave signals can be focused into narrow, powerful beams that can be projected over long distances.

♦ Transmitter and receiver must be in line of sight, typically no more than about 30 miles apart because of the Earth's curvature

♦ Possible interference from environment

♦ The distance coverable depends to a large extent on the height of the antenna: the taller the antennas, the longer the sight distance. A system of repeaters can be installed with each antenna to increase the distance served.

♦ As the distance between communication points increases towers are used to elevate the radio antennas to account for the Earth’s curvature and maintain a clear line-of-sight path between two parabolic reflectors.

♦ Lack of security

♦ High initial equipment cost


♦ Relatively high speed – data rates up to 250 Mbps

8.8.4 Satellite

♦ In satellite communication, one station is a satellite orbiting the Earth, acting as a very large antenna and repeater.

♦ Line of sight required between satellite and earth stations

♦ Satellite-based wireless Internet access systems use satellites in relatively low orbits. This places them relatively close to users, so their signals are strong

♦ 12 to 24 transponders per satellite. These transponders receive, amplify, change frequency and transmit

♦ Geosynchronous orbit (22,300 miles)

♦ Low security - anyone with satellite dish and right frequency can tune in

♦ Ease of adding stations.

♦ One main disadvantage of satellite transmission is the delay that occurs because the signal has to travel into space and back to Earth (propagation delay). Data rates of up to 50 Mbps are possible.


• Exercises

(a) What is the fundamental purpose of a communications system? List the main components of a communication system.

(b) What is the purpose of synchronization? How is bit synchronization achieved?

(c) Discuss the differences between serial and parallel transmission.

(d) What is an overhead bit? What is the purpose of adding overhead bits to the data bits when transmitting data?

(e) What are the two main categories of transmission media? What factors should we consider when selecting a transmission medium?


Chapter 9. Modulation, Multiplexing & Switching –

Data Communication II

There are two fundamentally different types of data: digital and analog. Computers

produce digital data that are binary, either on or off (and can also be represented by binary 1 and 0). In contrast, telephones produce analog data that are sent as electrical signals shaped like the sound waves they carry.

However, data can be converted from one form into another for transmission over

network circuits. For example, digital computer data can be transmitted over an analog telephone circuit by using a special device called a modem. Likewise, it is possible to translate analog voice data into digital form for transmission over digital computer circuits using a device called a codec.

9.1 Modulation

Modulation is the technique of modifying the form of an electrical signal so that the signal can carry information over a communication medium.

Modem (modulator-demodulator)

♦ One of the basic components of a network

♦ It takes the binary electrical pulses received from the microcomputer or terminal and converts (modulates) the signal so that it can be transmitted

♦ The modulated signal is often referred to as an analog signal; the signal that does the carrying is the carrier wave, and modulation changes the shape of the carrier wave to transmit 0s and 1s

♦ Performs either analog modulation or digital modulation

[Figure: a modem converting binary voltage pulses into an analog signal.]


9.1.1 Analog Modulation

♦ There are three fundamental methods of modulation: amplitude modulation, frequency modulation and phase modulation.

Amplitude Modulation (AM)

[Figure: a carrier sine wave — its amplitude is the maximum swing above and below 0 V, and its frequency is the number of swings to and fro in one second.]

If the modulator causes the amplitude of the carrier signal to vary, the result is AM (the same applies to frequency and phase).

♦ Also known as amplitude shift keying (ASK).

♦ The amplitude or height of the sine wave is varied to transmit the ones and zeroes.

♦ More susceptible to noise than frequency modulation.

♦ Faster transmission rate (above 2400 bits/s).

[Figure: amplitude modulation of the bit sequence 0 0 1 0 1 0 0 1 — amplitude plotted against time.]


♦ Disadvantage: data are effectively sent twice (the upper sideband is a mirror image of the lower sideband)

Frequency Modulation (FM)

♦ Also known as frequency shift keying (FSK).

♦ The frequency of the sine wave is varied to transmit the ones and zeroes.

♦ The amplitude and phase of the carrier are held constant; a high-pitched tone (high frequency) corresponds to binary 1.

♦ For example, a low frequency (1070 Hz) is used to send a 0, and a higher frequency (1270 Hz) is used to send a 1.

Phase Modulation (PM)

♦ Also known as phase shift keying (PSK). In a 180° phase change, the sine wave immediately reverses direction.

♦ Data is transmitted by changing or shifting the phase of the sine wave.

♦ PM could be two-phase (0° and 180°), four-phase (0°, 90°, 180° and 270°), etc.

♦ Every time there is a change in state (0 to 1, or 1 to 0), there is a 180° change in phase (for two-phase modulation).
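The three methods can be sketched numerically as follows (illustrative Python; the carrier frequency, bit rate and sample rate are arbitrary values chosen for the example, not values from any standard):

# Minimal sketch: generate samples of an ASK-, FSK- or PSK-modulated carrier
# for a short bit sequence. All numeric parameters are illustrative only.
import math

SAMPLE_RATE = 8000      # samples per second
BIT_RATE = 4            # bits per second
CARRIER = 16.0          # carrier frequency in Hz

def modulate(bits, method):
    samples = []
    per_bit = SAMPLE_RATE // BIT_RATE
    for i, bit in enumerate(bits):
        for n in range(per_bit):
            t = (i * per_bit + n) / SAMPLE_RATE
            if method == "ASK":                 # vary the amplitude
                samples.append((1.0 if bit else 0.3) * math.sin(2 * math.pi * CARRIER * t))
            elif method == "FSK":               # vary the frequency
                f = CARRIER * 2 if bit else CARRIER
                samples.append(math.sin(2 * math.pi * f * t))
            else:                               # "PSK": vary the phase (0 or 180 degrees)
                phase = math.pi if bit else 0.0
                samples.append(math.sin(2 * math.pi * CARRIER * t + phase))
    return samples

wave = modulate([0, 1, 1, 0], "PSK")
print(len(wave))        # 8000 samples = 2 seconds of signal at 4 bits/s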


MODEMS

♦ Modem is an acronym for MOdulator/DEModulator.

♦ A modem takes the digital electrical pulses received from a computer, terminal or microcomputer and converts them into the continuous analog signal needed for transmission over an analog voice-grade circuit.

♦ Modems are either internal (inside the computer) or external (connected to the computer by a cable).

♦ Modulators are used at the sender's end to convert digital signals to analog.

♦ Demodulators do the reverse: at the receiver's end they convert the analog signal back to digital.

9.1.2 Digital Modulation

The device used for converting analog data into digital form for transmission, and subsequently recovering the original analog data from the digital form, is known as a codec (coder-decoder). One of the techniques used is pulse code modulation.

This applies, for example, when converting voice transmissions into digital signals.

Voice and data can both be sent digitally. Voice, data and image transmissions will eventually all be sent digitally, as digital transmission is more efficient and produces fewer errors.

Pulse Amplitude Modulation (PAM)

♦ Gives a digital pulse of a different height for each different positive or negative voltage level

Pulse Code Modulation (PCM)

♦ Most common digitizing technique in use

♦ PAM samples are quantized to get PCM

♦ The amplitude of each PAM pulse is approximated by an n-bit integer

[Figure: a computer exchanging data bits with a digital modem (V.32/V.42).]
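A minimal numeric sketch of PCM (illustrative Python; the 8 kHz sample rate and 8-bit resolution are assumptions, chosen because they are typical of telephone-quality voice):

# Minimal sketch: pulse code modulation of a 1 kHz test tone.
# Sample the analog signal (PAM), then quantize each sample to an n-bit integer (PCM).
import math

SAMPLE_RATE = 8000      # samples per second (typical for telephone voice)
N_BITS = 8              # bits per sample
LEVELS = 2 ** N_BITS    # 256 quantization levels

def pcm_encode(duration_s=0.001, tone_hz=1000):
    codes = []
    for n in range(int(SAMPLE_RATE * duration_s)):
        sample = math.sin(2 * math.pi * tone_hz * n / SAMPLE_RATE)   # value in [-1, 1]
        level = int(round((sample + 1) / 2 * (LEVELS - 1)))          # map to 0..255
        codes.append(level)
    return codes

print(pcm_encode())     # eight 8-bit codes for 1 ms of the tone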


9.2 Multiplexing

• Multiplexing is a set of techniques that allows the simultaneous transmission of multiple signals across a single data link.

• A multiplexer is a device that combines several low-speed signals from different devices and transmits the combined signal over a high-speed line.


• In a multiplexed system n devices share the capacity of one link.

[Figure: three terminals connected by low-speed lines to a multiplexor, a single high-speed line linking the two multiplexors, and low-speed lines from the second multiplexor to three computers.]

• A multiplexer is transparent in that it does not do anything to the data on the way through; apart from being slightly delayed, the data that come out one end are the same as the data that went in the other.

• Multiplexers are normally used in pairs, with one multiplexer at each end of the communication circuit. Data from several terminals can be sent over a single communication circuit by one multiplexer. At the receiving multiplexer (demultiplexer), the data are separated and sent to the appropriate destinations.

• Signals are multiplexed using two basic techniques: Frequency-Division Multiplexing (FDM) and Time-Division Multiplexing (TDM).

9.2.1 Frequency Division Multiplexing

• This is an analog technique.

• With FDM, a limited bandwidth channel is divided into narrow bands (SUBCHANNEL), each for a separate transmission at a lower frequency.

• This can be applied when the bandwidth of a link is greater than the combined bandwidths of the signals to be transmitted.

• Signals generated by each sending device modulate different carrier frequencies

• These modulated signals are then combined into a single composite signal that can be transported by the link.


• Carrier frequencies are separated by enough bandwidth to accommodate the modulated signal.

• Channels are separated by strips of unused bandwidth (guard bands) to prevent signals from overlapping

• FDM separates signals in the frequency domain: each signal occupies its own frequency band at all times.

• Typical examples are broadcast and cable television.
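A small worked sketch of how a link's bandwidth might be divided (illustrative Python; the 1 MHz link, 100 kHz channels and 10 kHz guard bands are assumed figures for the example):

# Minimal sketch: divide one wide-band link into FDM subchannels with guard bands.
LINK_BANDWIDTH = 1_000_000      # 1 MHz total (illustrative)
CHANNEL_BW = 100_000            # each subchannel is 100 kHz wide
GUARD_BW = 10_000               # 10 kHz of unused spectrum between subchannels

# Each channel after the first needs a guard band in front of it.
n_channels = (LINK_BANDWIDTH + GUARD_BW) // (CHANNEL_BW + GUARD_BW)
print("subchannels:", n_channels)                       # 9

# Frequency range of each subchannel, measured from the bottom of the band.
for i in range(n_channels):
    low = i * (CHANNEL_BW + GUARD_BW)
    print(f"channel {i}: {low} - {low + CHANNEL_BW} Hz")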

9.2.2 Time-Division Multiplexing (TDM)

• This is a digital process.

• This is applied when the data rate capacity of the transmission medium is greater than the data rate required by the sending and receiving devices.

• With TDM, more than one data stream is transmitted over the same channel by using successive time intervals for the different signals.

• The transmission appears to be simultaneous as the time interval allocated for each data stream is short. Each device is given an equal time period controlled by a timing pulse

• TDM can be implemented in two ways.


• Synchronous TDM

o The term synchronous has a different meaning from that used in other areas of telecommunications. This means that the multiplexer allocates exactly the same time slot for each device at all times whether or not a device has anything to transmit.

o Frames: Time slots are grouped into frames.

o A frame consists of one complete cycle of time slots, including one or more slots dedicated to each sending device.

o In a system with n input lines, each frame has at least n slots with each slot allocated to carrying data from a specific input line.

o Interleaving: Synchronous TDM can be compared with a fast moving switch. As the switch opens in front of a device, that device has the opportunity to send a specified amount of data onto the path.

o The switch moves from one device to another device at a constant rate and in a fixed order. This process is called interleaving.

o Interleaving can be done by bit, by byte or by any other data unit.

o Framing Bits: Since the time slot order in a synchronous TDM system does not vary from frame to frame, very little overhead information needs to be included in each frame. The order of receipt tells the de-multiplexer where to direct each time slot, so no addressing is necessary. One or more synchronization bits are usually added to the beginning of each frame to avoid timing inconsistencies. These bits, called framing bits, follow a pattern, frame to frame, that allows the de-multiplexer to synchronize with the incoming stream.

o Bit-Stuffing: It is possible to connect devices of different data rates to a synchronous TDM multiplexer. In order to make this work, the different data rates must be integer multiples of each other. When the speeds are not integer multiples of each other, they can be made to behave as if they were by a technique called bit stuffing.

o The multiplexer adds extra bits to a device's source stream to force the speed relationships among the various devices into integer multiples of each other.
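The synchronous, character-interleaved scheme just described can be sketched as follows (illustrative Python; the three input lines and the '-' padding for empty slots are assumptions made for the example):

# Minimal sketch: synchronous TDM with character (byte) interleaving.
# Each frame carries exactly one slot per input line, whether or not the line
# has anything to send (empty slots are padded here with '-').
def synchronous_tdm(inputs, frames):
    """inputs: list of strings, one per low-speed line."""
    out = []
    for t in range(frames):
        frame = ""
        for line in inputs:                 # fixed slot order, every frame
            frame += line[t] if t < len(line) else "-"
        out.append(frame)
    return out

lines = ["AAAA", "BB", "CCC"]
for frame in synchronous_tdm(lines, 4):
    print(frame)
# ABC
# ABC
# A-C
# A--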


• Asynchronous TDM / Statistical Time Division Multiplexing

o This is designed to overcome the disadvantages of synchronous TDM.

o Like synchronous TDM ATDM allows a number of lower-speed input lines to be multiplexed into a single higher speed line.

o The total speed of the input lines can be greater than the capacity of the path.

o ATDM supports the same number of input lines as synchronous TDM with a lower capacity link.

o The number of time slots in an ATDM frame is based on a statistical analysis of the number of input lines that are likely to be transmitting at any given time.

o Rather than being pre-assigned, each slot is available to any of the attached input lines that has data to send.

o The ability to allocate time slots dynamically, coupled with the lower ratio of time slots to input lines, greatly reduces the likelihood and degree of wasted capacity.

• Addressing and Overhead

o Adding address bits to each time slot increases the overhead of an asynchronous system and limits its potential efficiency.

o Addresses usually consist of only a small number of bits and can be made even shorter by attaching the full address only to the first portion of a transmission, with abbreviated versions identifying subsequent portions.

o The need for addressing makes asynchronous TDM inefficient for bit or byte interleaving. Asynchronous TDM is efficient only when the size of the time slots is kept relatively large.

• Variable length time slots.

o Asynchronous TDM can accommodate traffic of varying data rates by varying the length of the time slots. Stations transmitting at a faster rate can be given a longer slot.

o Managing variable-length fields requires that control bits be appended to the beginning of each time slot to indicate the length of the coming data portion. These extra bits also increase the overhead of the system and, again, are efficient only with larger time slots.


[Figure: comparison of synchronous TDM (slots A1 B1 C1 D1 and A2 B2 C2 D2 are transmitted in every cycle, so empty slots waste bandwidth) with statistical TDM (only slots that carry data are sent, each preceded by an address, leaving extra bandwidth available).]

9.3 Switching

• A switched network consists of a series of interlinked nodes called switches. Switches are hardware or software devices capable of creating temporary connections between two or more devices linked to the switch but not to each other.

• In large inter-networks there may be several paths to the same destination. Therefore messages may be routed over several different paths.

• The purpose of any switching strategy is to allow end-to-end message transfers. Any station requesting such service initiates it by notifying the switching node to which it is attached.

• Traditionally three methods of switching have been studied.

o Circuit switching

o Packet switching

o Message switching


9.3.1 Circuit switching

• A "dedicated" circuit is set up for each connection. The communicating parties use this fixed circuit during the conversation. Once the communication is finished, the circuit can be released for other uses.

• Circuit switching creates a direct physical connection between two devices such as phones or computers

• A circuit switch is a device with n inputs and m outputs that creates a temporary connection between an input link and an output link. The number of inputs need not match the number of outputs.

• An n by n folded switch can connect n lines in full duplex mode.

• Circuit switching uses either of the following technologies:

o Space division switches:

- The paths in the circuit are separated from each other spatially. This technology was originally designed for use in analog networks but is currently used in both analog and digital networks.

- Crossbar switches: a crossbar switch connects n inputs to m outputs in a grid, using electronic microswitches at each crosspoint. The major limitation of this design is the number of crosspoints required.

- Multistage switches: a solution to the limitation of the crossbar switch is the multistage switch, which combines crossbar switches in several stages. In multistage switches, devices are linked to switches that in turn are linked to a hierarchy of switches. The design of a multistage switch depends on the number of stages and the number of switches.

- Multiple paths: multistage switches provide several options (paths) for connecting each pair of linked devices.

♦ Involves three phases:


• Circuit establishment: Before any data can be transmitted, an end-to-end (station-to-station) circuit must be established. Once this is set up, a test is made to determine whether the end station is busy or is prepared to accept the connection.

• Data transfer: Digital or analog signals can now be transmitted. Generally, the connection is full duplex.

• Circuit disconnect: After data transfer, the connection is terminated, usually by the action of one of the two end stations. Signals must be propagated to the intermediate switching nodes to deallocate the dedicated resources.

• Both stations as well as the required transmission resources must be available at the same time before the exchange can begin. This setup procedure introduces a delay into the overall communication process.

• If acknowledgments are required, this delay is cumulative. Moreover, the circuit remains dedicated for the duration of the call even if no information is being exchanged.

• This may be appropriate in real-time applications such as voice communications, or when a continuous flow of data is involved; otherwise it is inefficient and wasteful of channel capacity.

Advantages:

1. Fixed bandwidth, guaranteed capacity (no congestion)

2. Low variance in end-to-end delay (the delay is almost constant)

Disadvantages:

1. Connection set-up and tear-down introduces extra overhead (thus initial delay)

2. Users pay for the circuit, even when not sending data

3. Other users can't use the circuit even if it is free of traffic


9.3.2 Message Switching

• Message switching is a store-and-forward concept where a message with an appropriate destination address is sent into the network and is stored at each intermediate switching point (network node), where its integrity is checked before it is sent on to the next stage of its journey.

• Data are sent in logical units, called messages, e.g. telegrams, electronic mails.

• It is not necessary to establish a dedicated path between two stations. Thus no call setup is involved.

• Sources do not segment messages but instead send the original messages into the network intact.

• The data message is transmitted to the switching node, where it is stored in a queue. When the message reaches the head of the queue, it is then transmitted to the next node if the link is available.

• Message switching is also known as store-and-forward switching.

• Each message contains a header that includes the destination address, and any one message can be sent to several destinations.

• Prioritization can be built into the header, allowing the switching node to process high-priority messages before lower-priority ones.

• At each node, the entire message is received, stored briefly and then transmitted to the next node. Thus the delay in message switching may be as long as that of circuit switching.

Advantages:

1. Data channels are shared among communication devices improving the use of bandwidth.

2. Messages can be stored temporarily at message switches, when network congestion becomes a problem.

3. Priorities may be used to manage network traffic.

4. Broadcast addressing uses bandwidth more efficiently because messages are delivered to multiple destinations

Disadvantages:

1. The only real disadvantage of message switching is that it is not suitable for real-time applications such as interactive data communication, video or audio.


9.3.3 Packet Switching

• Packet switching operates on the same store-and-forward principle as message switching; however, it is distinguished from message switching by the smaller size of its data units.

• It divides the data traffic into blocks, called packets, of some given maximum length.

• The length of each unit of data is limited and fixed. Messages longer than the fixed length must be divided into smaller units (packets) and sent out one at a time.

• Data is sent in individual packets.

• A packet contains data plus a destination address.

• Since the source and destination stations are not directly connected, the network must route each packet from node to node.

• Each switching node has a small amount of buffer space to temporarily hold packets. If the outgoing line is busy, the packets stay in a queue until the line becomes available.

Advantages:

1. Packet switching uses resources more efficiently

2. Very little set-up, or tear-down time

Disadvantages:

1. No guarantee in delay

2. Algorithms are more complicated

3. Difficult to bill customers
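A minimal sketch of packetisation (illustrative Python; the 8-character payload size and the header fields shown are arbitrary choices for the example, not any particular protocol):

# Minimal sketch: split a message into fixed-size packets with a tiny header,
# then reassemble it at the destination (packets may arrive out of order).
MAX_PAYLOAD = 8     # characters of user data per packet (illustrative)

def packetize(message, dest):
    pieces = [message[i:i + MAX_PAYLOAD] for i in range(0, len(message), MAX_PAYLOAD)]
    return [{"dest": dest, "seq": n, "total": len(pieces), "data": p}
            for n, p in enumerate(pieces)]

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["seq"])   # restore original order
    return "".join(p["data"] for p in ordered)

pkts = packetize("HELLO PACKET SWITCHING", dest="B")
print(len(pkts))                      # 3 packets
print(reassemble(reversed(pkts)))     # 'HELLO PACKET SWITCHING'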


• Exercises

(a) What is the importance of modulation when transferring data over a data communication network?

(b) Compare analog and digital modulation, giving examples of each type.

(c) Discuss the different types of multiplexing, emphasising the advantages offered by each type.

(d) Why is switching necessary? Which type of switching technique is faster?

(e) Discuss the importance of the three phases in circuit switching.

(f) What is a ‘packet’? Is it important to transfer data in packets?

(g) What drawbacks does message switching have compared with packet switching?


Chapter 10. Data Communication Standards & Topologies –

Data Communication III

There are many different standards available today relating to data communication and

transmission over networks. Before data can be transmitted between two computers, both computers need to use the same or compatible data communication standards.

Standards are essential to make sure that data transmission takes place without any

problems.

10.1 Data Communication Standards

Standards (What is a standard)

• A standard provides a model for development that makes it possible for a product to work regardless of the individual manufacturer.

• They are developed through cooperation among standards creation committees, forums, and government regulatory bodies.

• Advantages of using data communication standards:

o Standards are essential in creating and maintaining an open and competitive market for manufacturers and in guaranteeing national and international interoperability of data and telecommunication technology and processes.

o They provide guidelines to manufacturers, vendors, government agencies and other service providers to ensure the interconnectivity needed between products. This assures that there will be a large market for a particular piece of equipment or software and encourages mass production, thus bringing down costs.

o This also allows products from multiple vendors to communicate giving the purchaser more flexibility in equipment selection and use.


Principal advantages of standards are:

• A standard assures that there will be a large market for a particular piece of

equipment or software.

• A standard allows products from multiple vendors to communicate, giving the

purchaser more flexibility in equipment selection and use.

• The users do not have to buy all the products from a single company; this gives the benefit of lower prices and products with better features.

The following organizations have been accepted as standards creation committees

• The International Standards Organization (ISO)

• The International Telecommunications Union-Telecommunication Standards Sector (ITU-T)

• The American National Standards Institute (ANSI)

• The Institute of Electrical and Electronics Engineers (IEEE)

• The Electronics Industries Association (EIA)

• Telcordia

10.1.1 The International Standards Organization (ISO)

• This is also called the International Organization for Standardization. It was created in 1947 and is a voluntary organization dedicated to worldwide agreement on international standards in a variety of fields.

• It is a voluntary, non-treaty, multinational body whose membership is drawn from the standards creation committees of various governments throughout the world.

10.1.2 ANSI

• The primary role of ANSI is to act as an overall coordinating body for private and public sector organizations wishing to develop common standards.


10.1.3 IEEE

• The IEEE is a professional body best known for its work in preparing standards on interfaces, software engineering, etc. The standards it prepares are ratified by ANSI before submission to the ISO for international ratification.

• LAN standards Project 802: Divided into four groups

o The plenary. General forum for interested parties

o The executive committee. Consists of chairman of all working groups and acts as a coordinating body between the working and the technical advisory groups

o Working groups: This group actually prepares the draft standards. There are at present 6 groups with the following areas of study

- 802.1 High Level Interface

- 802.2 Logical Link Control

- 802.3 CSMA/CD

- 802.4 Token Bus

- 802.5 Token Ring

- 802.6 MAN

10.2 NETWORK PROTOCOLS

• A network protocol is a set of rules that govern data communication. It represents an agreement between the communicating devices. Without protocols even though devices are connected to each other they will not be able to communicate.

• Network protocols are standards that allow computers to communicate. A

protocol defines how computers identify one another on a network, the form that

the data should take in transit, and how this information is processed once it

reaches its final destination. Protocols also define procedures for handling lost or

damaged transmissions or "packets".

• TCP/IP (for UNIX, Windows NT, Windows 95 and other platforms), IPX (for

Novell NetWare), DECnet (for networking Digital Equipment Corp. computers),


AppleTalk (for Macintosh computers), and NetBIOS/NetBEUI (for LAN

Manager and Windows NT networks) are the main types of network protocols in

use today.

• The key elements of a protocol are: syntax, semantics and timing.

o Syntax refers to the structure or format of data, meaning the order in

which they are presented.

o Semantics refers to the meaning of each section of the bits. It helps to

understand how a particular pattern should be interpreted and what action

is to be taken based on that interpretation.

o Timing refers to two characteristics: When data should be sent and how

fast it can be sent.

10.2.1 Open System Interconnection (OSI) Reference Model

♦ The OSI Reference Model for computer communication network architecture was developed by the International Standards Organization (ISO) as a step towards international standardization of network protocols. It provides the conceptual framework for defining standards for interconnecting heterogeneous computer systems.

♦ OSI is both a standard and a network architecture model.

♦ OSI is not a protocol or set of rules but a layering of required functions, or services, that provides a framework with which to define protocols.

♦ It defines a consistent language and boundaries for establishing protocols.

♦ OSI defines a complete architecture having seven layers. Each layer performs a specific function.

♦ The OSI reference model is an example of a good layered architecture.

7. Application -- End-user services such as email
6. Presentation -- Data representation and data compression
5. Session -- Authentication and authorization
4. Transport -- Guaranteed end-to-end delivery of packets
3. Network -- Packet routing
2. Data Link -- Transmit and receive packets
1. Physical -- The cable or physical connection itself


♦ Layers 1 to 3 are usually referred to as the lower layers. Layers 4 through 7 are usually referred to as the higher layers, or upper layers.

♦ Layer 1 (physical) must be implemented in hardware. Layers 2 and 3 are implemented partly in hardware and partly in software. The higher layers are always implemented in software.

The use of protocol layers has significant benefits:

• A large task is reduced to a series of smaller subtasks

• Layers or groups of layers can be substituted or replaced

• The user's view of a complex system can be simplified, with the internal details hidden

• Functionality is contained within a single layer, so changes to it have no effect on other layers

10.2.2 TCP/IP

• TCP/IP is made up of two acronyms, TCP, for Transmission Control Protocol,

and IP, for Internet Protocol. TCP handles packet flow between systems and IP

handles the routing of packets. However, that is a simplistic answer that we will

expound on further.

• All modern networks are now designed using a layered approach. Each layer

presents a predefined interface to the layer above it. By doing so, a modular

design can be developed so as to minimize problems in the development of new

applications or in adding new interfaces.

• The ISO/OSI protocol with seven layers is the usual reference model. Since TCP/IP was designed before the ISO model was developed it has four layers; however the differences between the two are mostly minor.

TCP/IP Protocol Stack:

4. Application -- Authentication, compression, and end-user services
3. Transport -- Handles the flow of data between systems and provides access to the network for applications (e.g. via the BSD socket library)
2. Network -- Packet routing


1. Link -- Kernel OS/device driver interface to the network interface on the computer.

10.3 LOCAL and WIDE AREA NETWORKS

10.3.1 LANs (Local Area Networks)

• A network is any collection of independent computers that communicate with one

another over a shared network medium. LANs are networks usually confined to a

geographic area, such as a single building or a college campus. LANs can be

small, linking as few as three computers, but often link hundreds of computers

used by thousands of people. The development of standard networking protocols

and media has resulted in worldwide proliferation of LANs throughout business

and educational organizations.

• LANs such as Ethernet make use of CSMA/CD to reduce collisions.

• CSMA/CD

o Collisions: When unregulated access is provided to a single line, signals can overlap and destroy each other. This is called a collision. A LAN therefore needs a mechanism to coordinate traffic, minimize the number of collisions that occur and maximize the number of frames that are delivered. The access mechanism used in Ethernet is Carrier Sense Multiple Access with Collision Detection (CSMA/CD), standardized in IEEE 802.3.

o In a CSMA system, any workstation wishing to transmit must first listen for existing traffic on the line. A device listens by checking for a voltage. If no voltage is detected, the line is considered idle and the transmission is initiated.

o CSMA cuts down on the number of collisions but does not eliminate them.
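The listen-before-transmit rule can be sketched as follows (illustrative Python; the channel model, slot timing and retry limit are simplifications invented for the example, not part of the IEEE 802.3 specification):

# Minimal sketch: carrier sense with collision detection and random back-off.
import random

def try_to_send(frame, channel_busy, collision_detected, max_attempts=16):
    """channel_busy() and collision_detected() stand in for the physical layer."""
    for attempt in range(max_attempts):
        while channel_busy():           # 1. listen: wait until the line is idle
            pass
        # 2. transmit; if another station started at the same time, a collision occurs
        if not collision_detected():
            return f"sent {frame!r} after {attempt + 1} attempt(s)"
        # 3. back off for a random number of slot times, then try again
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        # (a real adapter would now wait 'slots' slot times before retrying)
    return "gave up: too many collisions"

print(try_to_send("frame-1",
                  channel_busy=lambda: False,
                  collision_detected=lambda: random.random() < 0.3))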

Characteristics of LAN

♦ Nodes are located in a relatively limited geographic area (less than 15 miles)

♦ A LAN is a network restricted to a single site, such as the campus of a university

♦ Owned by a single organization

♦ Usually decentralized control


♦ High data rate (Fast Ethernet runs at 100 Mbps, and gigabit systems are also in development)

♦ Low error rate

♦ Inexpensive transmission medium (e.g. coaxial cable)

♦ Common topologies used in LANs are bus, ring and star

Example – ETHERNET

10.3.2 WANs (Wide Area Networks)

• Often a network is located in multiple physical places. Wide area networking

combines multiple LANs that are geographically separate. This is accomplished

by connecting the different LANs using services such as dedicated leased phone

lines, dial-up phone lines (both synchronous and asynchronous), satellite links,

and data packet carrier services. Wide area networking can be as simple as a

modem and remote access server for employees to dial into, or it can be as

complex as hundreds of branch offices globally linked using special routing

protocols and filters to minimize the expense of sending data sent over vast

distances.

Characteristics of WAN

♦ Nodes are found over a much wider area – distances up to thousands of kilometers

♦ Typical data rates up to 100 kbits per second

♦ Usually used by several different organizations – managed by organizations independent of the users, for example a telecommunications authority

♦ Higher error rates

♦ Generally use point-to-point links

♦ Often access regulated public or private communication systems

♦ Frequently used for large database access by users over telephone lines

♦ Common topologies used in WANs are mesh and star

Example – INTERNET


10.4 LINE CONFIGURATION

• Line configuration refers to the way two or more communication devices attach to a line

• A link is the physical communication pathway that transfers data from one device to another.

• There are two possible line configurations viz. point to point line configuration and Multipoint line configuration.

10.4.1 Point to Point line configuration

o A point to point line configuration provides a dedicated link between two devices.

o The entire capacity of the channel is reserved for transmission between two devices.

10.4.2 Multipoint line configuration

o This is one in which more than two devices share a single link.

o In this environment the capacity of the channel is shared either spatially (several devices use the link simultaneously) or temporally (the line is time-shared)

10.5 NETWORK TOPOLOGIES

• The term topology refers to the way a network is laid out physically or logically and two or more links form a topology.

• Topology also can be referred to as the basic geometric layout of the network – the way in which the computers on the network are interconnected.

• The topology of a network is the geometric representation of the relationship of all the links and linking devices.

• There are five basic topologies: mesh, star, tree, bus and ring.


• These names describe how the devices in a network are interconnected rather than their physical connection

• Two basic types of relationship are possible between devices: peer-to-peer (devices share the link equally) and primary-secondary (one device controls traffic and the others must transmit through it).

• The choice of topology depends on

o Cost

o Type and number of equipments being used

o Required response time

o Rate of data transfer

10.5.1 Ring Topology

[Figure: ring topology — five devices A, B, C, D and E connected in a closed loop.]

• A ring topology connects all computers on the LAN in one closed loop circuit with each computer linked to the next.

• Each device has a dedicated point-to-point line configuration only with two devices on either side of it.

• A signal is passed along the ring in one direction from device to device until it reaches its destination

• Each device in the ring incorporates a repeater.

• When a device receives a signal intended for another device its repeater regenerates the bits and passes them along

• To add or delete a device requires moving only two connections


Advantages

• Cable failure affects only a limited number of users

• Equal access for all users

• Each workstation has full access to the speed of the ring

• As the number of workstations increases, performance diminishes

Disadvantages

• Unidirectional traffic can be a disadvantage. A break in the ring can disable the entire network

• This weakness is normally solved using a dual ring or a switch capable of closing off a break

• Expensive adapter cards

10.5.2 Bus Topology

[Figure: bus topology — devices attached by drop lines to a single backbone cable.]

• All the computers are connected to one circuit running the length of the network.

• The bus cable carries the transmitted message along the cable. As the message arrives at each workstation the work station computer checks the destination address contained in the message to see if it matches its own. If it does not match it does nothing more

• This is a multipoint configuration

• One long cable acts as a backbone to link all the devices in the network

• Nodes are connected to the bus cable by drop lines and taps.


• A drop line is a connection running between the device and the main cable.

• A tap is a connector that either splices into the main cable or punctures the sheathing of a cable to create a contact with the metallic core.

• The signal becomes weaker as it travels farther along the cable

• The term bus implies a high-speed circuit and a limited distance between the computers, such as within a building.

• The distance can be increased by using a hub, which is a repeater.

• Ethernet uses a bus topology.

• There are three common wiring implementations for bus networks:

• 10Base2 (thin-net, CheaperNet): 50-ohm cable using BNC T-connectors; the network cards provide the transceiver

• 10Base5 (ThickNet): 50-ohm cable using 15-pin AUI D-type connectors and external transceivers

• 10BaseT: UTP (unshielded twisted-pair) cable using RJ45 connectors and a wiring centre
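A minimal sketch of the address check described earlier in this list; the station names and the frame layout are assumptions made purely for illustration:

```python
# Minimal sketch of destination-address filtering on a shared bus.
# Station names and the frame structure are illustrative assumptions.

stations = ["A", "B", "C", "D"]

def broadcast_on_bus(frame: dict) -> None:
    """Every station attached to the bus sees the frame; only the addressed one accepts it."""
    for station in stations:
        if frame["dst"] == station:
            print(f"{station}: accepted {frame['data']!r} from {frame['src']}")
        # otherwise the destination address does not match, so the station ignores the frame

broadcast_on_bus({"src": "A", "dst": "C", "data": "hello"})   # only C accepts the frame
```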

Advantages:

• Ease of installation

• Backbone cable can be laid along the most efficient path

• A bus uses less cabling than mesh, star or tree topologies; each drop line has to reach only as far as the nearest point on the backbone

• Cabling can be laid with near-optimal efficiency at installation

• Low cost

Disadvantages

• Difficult reconfiguration and fault isolation

• Limits on cable length, and difficulty in adding new devices

• Signal reflection at taps can cause degradation in quality

• A fault or break in the bus cable stops all transmissions

• As the number of workstations increases, the speed of the network degrades


10.5.3 Tree Topology

[Figure: tree topology — central hub A at the root, with B, C and D on the level below and E, F and G beneath them]

• A tree topology is a variant of the star topology in which not every device connects directly to the central hub

• Nodes in a tree are linked, through secondary hubs, to a central hub that controls the traffic in the network

• The central hub in the tree is an active hub and contains a repeater.

• A repeater is a hardware device that regenerates the received bit pattern before sending it out.

• The secondary hubs may be active or passive hubs. A passive hub provides a simple physical connection between the attached devices.

• Advantages and disadvantages: largely the same as those of the star topology, with the additions listed below

Advantages

• The addition of secondary hubs allows more devices to be attached to a single central hub and can therefore increase the distance a signal can travel

• It allows the network to isolate and prioritize communications from different computers

Disadvantages

• With this topology there is only one route between any two nodes; if any part of that route is being used by another pair of nodes, there is no alternative path, so transmission is not possible until that section is released


10.5.4 Star Topology

[Figure: star topology — devices B, C, D, E, F and G each linked by a dedicated line to central hub A]

• In a star topology each device has a dedicated point-to-point link only to a central controller, usually called a hub (or star coupler). The devices are not directly connected to each other

• The central controller acts as an exchange. In general there are two alternatives for the operation of the central node. One approach is for the central node to operate in broadcast fashion: a frame transmitted from one station to the central node is retransmitted on all of the outgoing links, so although the arrangement is physically a star, it is logically a bus. The other approach is for the central node to act as a frame-switching device (the two modes are contrasted in the sketch after this list).

• If one device wants to send data to another, it sends the data to the controller, which then relays the data to the other connected device
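A rough sketch contrasting the two central-node behaviours mentioned above (broadcast versus frame switching); the device names are assumptions for illustration only:

```python
# Minimal sketch of a star hub's central node: broadcast mode vs. frame-switching mode.
# The device names are illustrative assumptions.

devices = ["A", "B", "C", "D", "E", "F", "G"]

def central_node_relay(src: str, dst: str, switching: bool) -> list[str]:
    """Devices that receive a frame relayed by the central node."""
    if switching:
        return [dst]                                   # frame switched onto the destination link only
    return [d for d in devices if d != src]            # broadcast: every outgoing link except the sender's

print(central_node_relay("A", "C", switching=False))   # physically a star, logically a bus
print(central_node_relay("A", "C", switching=True))    # ['C'] - central node acts as a frame switch
```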

Advantages

• In a star topology each device requires only one link and one I/O port to connect it to any number of other devices, so it is easy to add new workstations

• Centralized control

• Centralized Network / Hub monitoring

• If one link fails, only that link is affected; all the other links remain active. This permits easy fault identification and fault isolation

• As long as the hub is working, it can be used to monitor link problems and to bypass defective links


Disadvantages

• Hub failure cripples all the workstations connected to the hub

• Hubs make a star network slightly more expensive than a thin Ethernet bus

10.5.5 Mesh Topology

[Figure: mesh topology — seven devices A to G, each with a dedicated link to every other device]

• In a mesh topology every device has a dedicated point-to-point link to every other device; each link carries traffic only between the two devices it connects

• A fully connected mesh has n(n-1)/2 physical channels to link n devices, and every device on the network must have n-1 input/output ports (a worked example follows).
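A worked example of the n(n-1)/2 formula; the device count of seven matches the figure above, and the helper function below is purely illustrative:

```python
# Worked example: cabling and port requirements for a fully connected mesh of n devices.

def mesh_requirements(n: int) -> tuple[int, int]:
    """Return (total physical links, I/O ports required on each device) for a full mesh."""
    links = n * (n - 1) // 2      # every pair of devices needs its own dedicated link
    ports_per_device = n - 1      # each device connects directly to all the others
    return links, ports_per_device

print(mesh_requirements(7))       # (21, 6): 21 links in total, 6 I/O ports per device
```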

Advantages

• This topology provides alternative routes between nodes.

Disadvantages

• The total branch (cable) length can be significantly greater than in other topologies, especially if the network is fully connected, and the cost may become too high to be justified.


Exercises

(a) What are data communication standards, and how do standards contribute to the successful transmission of data over a communication network?

(b) What are ‘Protocols’? Provide examples of widely used data communication protocols.

(c) Compare and contrast the TCP/IP and OSI reference models.

(d) Discuss the benefits of a ‘layered approach’ in data communication protocols.

(e) Compare point-to-point and multipoint line configurations.

(f) What factors should one consider when selecting a topology for a data communication network?

(g) What network topologies are suitable for Local Area Networks (LANs)?

(h) Discuss whether a ‘Mesh’ topology is the most suitable topology for a Wide Area Network (WAN), considering its advantages and disadvantages relative to other network topologies.


IVC at a Glance


IVC is an interactive system designed exclusively for Informatics students worldwide! It allows students to gain online access to a wide range of resources and features anytime, anywhere, 24 hours a day and 7 days a week!

In order to access IVC, students need to log in with their user ID and password.

Among the many features students get to enjoy are e-resources, message boards, and online chat and forum facilities. Apart from that, IVC also allows students to download assignments and notes, print examination entry cards and even view assessment results.

With IVC, students will also be able to widen their circle of friends via the discussion and chat rooms by getting to know other campus mates from around the world. They can get updates on the latest campus news, exchange views, and chat about common interests with anyone and everyone, anywhere.

Among the value-added services provided through IVC are global orientation and e-revision.

Global orientation is where new students from around the world gather at the same time for briefings on the programmes they undertake as well as the services offered by Informatics.

e-Revision, on the other hand, is a scheduled live text-chat session where students and facilitators meet online to discuss assessed topics before the exams. Students can also post questions and get facilitators to respond immediately. Besides that, students can obtain revision notes and explore interactive exam techniques and test banks, all from this platform.

In a nutshell, IVC is there to ensure that students receive the best academic support they can get during the course of their education pursuit with Informatics. It could give students the needed boost to excel well beyond expectations.

www.informaticseducation.com/ivc

[Screen shots: IVC menu and IVC login page]