Chapter 1: Computer Architecture


  • INTRODUCTION TO COMPUTER SYSTEMS
    Computer systems have conventionally been defined through their interfaces at a number of levels. Included among the levels are the application programs, the high-level languages, and the set of machine instructions.

    Based on the interface between different levels of the system, a number of computer architectures can be defined.

    The interface between the application programs and a high-level language is referred to as a language architecture. The instruction set architecture defines the interface between the basic machine instruction set and the runtime and I/O control.

  • INTRODUCTION TO COMPUTER SYSTEMS
    A different definition of computer architecture is built on four basic viewpoints: the structure, the organization, the implementation, and the performance.

    In this definition, the structure defines the interconnection of the various hardware components, the organization defines the dynamic interaction and management of those components, the implementation defines the detailed design of the hardware components, and the performance specifies the behavior of the computer system.

  • INTRODUCTION TO COMPUTER SYSTEMS
    A historical background on the evolution of cornerstone ideas in the computing industry is given in the following slides. The effort to build computers did not originate at one single place.

    It is probably fair to say that the first program-controlled (mechanical) computer ever built was the Z1 (1938). It was followed in 1939 by the Z2, the first operational program-controlled computer with fixed-point arithmetic. However, the first recorded university-based attempt to build a computer originated on the Iowa State University campus in the early 1940s.

  • Overview: What is Computer Architecture?
    The functional operation of the individual hardware units in a computer system, and the flow of information and control among them.

    Computer hardware is the electronic circuits and electromechanical equipment that constitute the computer.

  • Computer Instruction Set Architecture
    A computer's instruction set architecture (ISA) consists of all the programmer-visible features of the machine:

    Instructions: specify the arithmetic and logic operations to be performed, data transfers, and control.
    Register set: temporary storage locations in the CPU.
    Addressing modes: the different ways in which operands can be named (referred to) in machine instructions.

    A computer's architecture is different from its implementation. Implementation uses the architecture as a blueprint and involves choices of technology, speed, cost, etc.
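    As a rough, hypothetical illustration (not from the slides), the Python sketch below models a toy machine with a programmer-visible register set, a few instructions, and three addressing modes (immediate, direct, and register). All names and the instruction format are invented for this sketch.

    # Hypothetical sketch of a toy ISA: registers, instructions, addressing modes.
    REGISTERS = {"R0": 0, "R1": 0, "R2": 0}   # programmer-visible register set
    MEMORY = [0] * 16                         # tiny data memory

    def execute(instr):
        """Execute one instruction tuple: (opcode, dest, src, mode)."""
        op, dest, src, mode = instr
        # The addressing mode decides how the source operand is named:
        if mode == "immediate":    # operand is a literal encoded in the instruction
            value = src
        elif mode == "direct":     # operand is the memory word at the given address
            value = MEMORY[src]
        elif mode == "register":   # operand is the contents of a register
            value = REGISTERS[src]
        else:
            raise ValueError("unknown addressing mode: " + mode)
        if op == "LOAD":
            REGISTERS[dest] = value
        elif op == "ADD":
            REGISTERS[dest] += value
        else:
            raise ValueError("unknown opcode: " + op)

    # A three-instruction program computing R2 = 5 + MEMORY[3]:
    MEMORY[3] = 7
    for instr in [("LOAD", "R1", 5, "immediate"),
                  ("LOAD", "R2", 3, "direct"),
                  ("ADD",  "R2", "R1", "register")]:
        execute(instr)
    print(REGISTERS["R2"])   # 12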

  • HISTORICAL BACKGROUND
    At just about the same time, a complete design of a fully functional programmable special-purpose machine, the Z3, was reported in Germany in 1941. The University of Pennsylvania is recorded to have hosted the building of the Electronic Numerical Integrator and Calculator (ENIAC) machine in 1944.

  • HISTORICAL BACKGROUND
    The ENIAC was the first operational general-purpose machine built using vacuum tubes. It was built to help compute artillery firing tables during World War II, and it was programmable through manual setting of switches and plugging of cables. In 1946, an improved version of the ENIAC, called the Electronic Discrete Variable Automatic Computer (EDVAC), was in process; it was completed in 1952.

  • HISTORICAL BACKGROUND
    Between 1946 and 1949, the Electronic Delay Storage Automatic Calculator (EDSAC) was initiated; it became the world's first full-scale, stored-program, fully operational computer.

    At Harvard University, a series of machines was built: the Mark I, II, III, and IV. The latter two machines introduced the concept of separate memories for instructions and data.

  • HISTORICAL BACKGROUND
    The term Harvard Architecture was given to such machines to indicate the use of separate memories for instructions and data. The term is used today to describe machines with separate caches for instructions and data.

  • HISTORICAL BACKGROUND
    The first general-purpose commercial computer, the UNIVersal Automatic Computer (UNIVAC I), was on the market by the middle of 1951.

    IBM (International Business Machines) announced its first computer, the IBM 701, in 1952.

    In 1964 IBM announced a line of products under the name IBM 360 series. This led Digital Equipment Corporation (DEC) to introduce the first minicomputer, the PDP-8, which was considered a remarkably low-cost machine.

  • HISTORICAL BACKGROUND
    Intel introduced the first microprocessor, the Intel 4004, in 1971.

    The world witnessed the birth of the first personal computer (PC) in 1977, when the Apple computer series was first introduced. In 1977 the world also witnessed the introduction of the VAX-11/780 by DEC. Intel followed suit by introducing the first of its most popular line of microprocessors, the 80x86 series.

  • HISTORICAL BACKGROUND
    The 1980s and 1990s witnessed the introduction of many commercial parallel computers with multiple processors. They can generally be classified into two main categories: (1) shared memory and (2) distributed memory systems.

    The number of processors in a single machine ranged from several in a shared memory computer to hundreds of thousands in a massively parallel system.

  • HISTORICAL BACKGROUND
    Examples of parallel computers during this era include the Sequent Symmetry, Intel iPSC, nCUBE, Intel Paragon, Thinking Machines (CM-2, CM-5), MasPar (MP), Fujitsu (VPP500), and others.

  • HISTORICAL BACKGROUND
    One of the clear trends in computing is the substitution of centralized servers by networks of computers. These networks connect inexpensive, powerful desktop machines to form unequaled computing power.

    Local area networks (LANs) of powerful personal computers and workstations began to replace mainframes and minis by 1990. These individual desktop computers were soon connected into larger complexes of computing by wide area networks (WANs).


  • TECHNOLOGICAL DEVELOPMENT
    Computer technology has shown an extraordinary rate of improvement. This includes the development of processors and memories. The number of transistors integrated into a single chip has increased from a few hundred to millions, owing to advances in transistor fabrication technology. The scale of integration has grown from small-scale integration (SSI) to medium-scale (MSI), to large-scale (LSI), to very large-scale integration (VLSI), and currently to wafer-scale integration (WSI).

  • TECHNOLOGICAL DEVELOPMENT
    Wafer-scale integration is an evolution in semiconductor technology that builds a gigantic circuit on an entire wafer. Just as the integrated circuit eliminated cutting apart thousands of transistors from the wafer only to wire them back together on circuit boards, wafer-scale integration eliminates cutting apart the chips: all the circuits for an entire computer are designed onto one super-sized chip.


  • 1st generation: 1946-59, vacuum tubes, relays, mercury delay lines
    2nd generation: 1959-64, discrete transistors and magnetic cores
    3rd generation: 1964-75, small- and medium-scale integrated circuits
    4th generation: 1975-present, single-chip microcomputers

    Integration scale (components per chip):
    Small: 10-100; Medium: 100-1,000; Large: 1,000-10,000; Very large: greater than 10,000

  • Transistors
    Advantages of transistors:
    - Smaller
    - Cheaper
    - Less heat dissipation
    - Solid-state devices
    - Made from silicon (sand)

    Transistors replaced vacuum tubes; transistor-based computers were second-generation machines.

  • Microelectronics
    Literally, "small electronics". A computer is made up of gates, memory cells, and interconnections. These can be manufactured on a semiconductor, e.g., a silicon wafer.

  • THE SYSTEM BUS MODEL
    This is a refinement of the von Neumann model. A bus is a communication pathway connecting two or more devices, and is usually a broadcast medium. The system bus is made up of:
    - Data bus
    - Address bus
    - Control bus

    There are also power lines, and some architectures have a separate I/O bus.
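    A minimal sketch (assumed class and function names, not from the slides) of how the three buses cooperate in a single read transaction: the address bus is broadcast to every device, the control bus says "read", and the addressed device answers on the data bus.

    # Hypothetical bus-read sketch: broadcast address, one device responds.
    class MemoryDevice:
        def __init__(self, base, size):
            self.base, self.size = base, size
            self.cells = [0] * size

        def responds_to(self, address):           # address decoding
            return self.base <= address < self.base + self.size

        def read(self, address):
            return self.cells[address - self.base]

    def bus_read(devices, address):
        # Control bus: READ asserted.  Address bus: seen by all devices.
        for device in devices:
            if device.responds_to(address):
                return device.read(address)       # data bus carries the value back
        raise RuntimeError("no device responded (bus error)")

    ram = MemoryDevice(base=0x0000, size=1024)
    rom = MemoryDevice(base=0x8000, size=256)
    ram.cells[16] = 42
    print(bus_read([ram, rom], 0x0010))           # 42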

  • Bus Interconnection Scheme

  • How buses look
    Buses look like:
    - Parallel lines on circuit boards
    - Ribbon cables
    - Strip connectors on motherboards
    - Sets of wires

  • A Typical Computer System

  • Moore's Law
    Moore's law was formulated by Gordon Moore, cofounder of Intel. It says that the number of transistors on a chip will double every year; in practice, computing power doubles every 18 months for the same price. Consequences:

    Higher packing density means shorter electrical paths, giving higher performance.

    Smaller size gives increased flexibility, and reduced power and cooling requirements.

    Fewer interconnections increase reliability.
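    A quick back-of-the-envelope sketch of the 18-month doubling rule (the numbers are illustrative, not from the slides):

    # Doubling every 18 months: the growth factor over t months is 2**(t/18).
    def growth_factor(months, doubling_period=18):
        return 2 ** (months / doubling_period)

    # Over one decade (120 months) the rule predicts roughly a 100x increase:
    print(round(growth_factor(120), 1))   # 2**(120/18) is about 101.6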

  • The Motherboard (top view)

  • The Motherboard

  • Levels of Machines

  • PERFORMANCE MEASURES
    There are a number of performance measures used to assess computers. A user of a computer measures its performance based on the time taken to execute a given job (program). An engineer measures the performance of a system by the total amount of work (throughput) done in a given time. A metric for assessing the performance of a computer helps in comparing alternative designs.

  • PERFORMANCE MEASURES
    Performance analysis helps us to know how fast a given program can be executed on a given computer. Hence we need to determine the time taken by a computer to execute a given job. Let us define the clock cycle time as the time between two consecutive rising (trailing) edges of a periodic clock signal (see Fig. 1.1). The time required to execute a job by a computer is expressed in terms of clock cycles.

  • PERFORMANCE MEASURES
    [Fig. 1.1: a periodic clock signal; the clock cycle time is the interval between two consecutive rising (trailing) edges.]

  • PERFORMANCE MEASURES
    For the execution of a job, denote:
    cycle count (CC) = the number of CPU clock cycles for executing the job
    cycle time (CT) = the duration of one clock cycle
    clock frequency f = 1/CT

    The time taken by the CPU to execute the job is then:
    CPU time = CC x CT = CC / f
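    A tiny worked example of this formula (the cycle count is an assumed number; the 200 MHz clock matches the example used later in these slides):

    # CPU time = CC x CT = CC / f.
    cc = 4_000_000      # cycle count for the job (assumed)
    f = 200e6           # clock frequency in Hz, so CT = 1/f = 5 ns
    cpu_time = cc / f
    print(cpu_time)     # 0.02 seconds (20 ms)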

  • PERFORMANCE MEASURES
    It may be easier to count the number of instructions executed in a given program than to count the number of CPU clock cycles needed to execute it. Hence, the average number of clock cycles per instruction (CPI) has been used as an alternative performance measure. The following equation shows how to compute the CPI.

  • PERFORMANCE MEASURES
    CPI = CPU clock cycles for the program / Instruction count

    Equivalently, CPU time = Instruction count x CPI / f.

  • PERFORMANCE MEASURES
    The instruction set of a given machine consists of a number of instruction categories: ALU (simple assignment and arithmetic and logic instructions), load, store, branch, and so on.

    In the case that the CPI for each instruction category is known, the overall CPI can be computed as:

    CPI = (sum over all categories i of CPIi x Ii) / Instruction count

  • PERFORMANCE MEASURES
    where Ii = the number of times an instruction of type i is executed in the program, and CPIi = the average number of clock cycles needed to execute such an instruction.

    Example: Consider computing the overall CPI for a machine A for which the following performance measures were recorded when executing a set of benchmark programs. Assume that the clock rate of the CPU is 200 MHz.

  • PERFORMANCE MEASURES
    [Table: number of executions Ii and cycles per instruction CPIi for each instruction category (ALU, load, store, branch) on machine A; the figures did not survive the transcript.]

  • PERFORMANCE MEASURES
    Assuming the execution of 100 instructions, the overall CPI is the total number of clock cycles summed over the categories, divided by 100.
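    Because the slide's table did not survive the transcript, the sketch below uses an assumed instruction mix and assumed per-category CPIs, purely to show the mechanics of the formula:

    # Overall CPI = sum(CPIi x Ii) / total instruction count (assumed data).
    categories = {          # category: (Ii = times executed, CPIi)
        "ALU":    (38, 1),
        "load":   (15, 3),
        "store":  (15, 2),
        "branch": (32, 4),
    }
    total_instructions = sum(ii for ii, _ in categories.values())    # 100
    total_cycles = sum(ii * cpi for ii, cpi in categories.values())  # 241
    overall_cpi = total_cycles / total_instructions
    print(overall_cpi)                                # 2.41

    # With f = 200 MHz, CPU time for these 100 instructions:
    print(total_instructions * overall_cpi / 200e6)   # 1.205e-06 seconds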

  • PERFORMANCE MEASURES
    It should be noted that the CPI reflects the organization and the instruction set architecture of the processor, while the instruction count reflects the instruction set architecture and the compiler technology used. This shows the degree of interdependence between the two performance parameters.

    Therefore, it is imperative that both the CPI and the instruction count are considered in assessing the merits of a given computer and in comparing the performance of two machines.

  • PERFORMANCE MEASURES
    A different performance measure that has been given a lot of attention is MIPS (million instructions per second, the rate of instruction execution per unit time), which is defined as:

    MIPS = Instruction count / (Execution time x 10^6) = f / (CPI x 10^6)
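    Continuing with the assumed mix above (overall CPI = 2.41) and the 200 MHz clock, the MIPS rating works out as follows:

    # MIPS = f / (CPI x 10^6), using the assumed CPI from the sketch above.
    f = 200e6
    cpi = 2.41
    mips = f / (cpi * 1e6)
    print(round(mips, 1))   # about 83.0 MIPS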

  • PERFORMANCE MEASURES
    Example: Suppose that the same set of benchmark programs considered above were executed on another machine, call it machine B, for which the following measures were recorded. [Table: instruction mix and CPI figures for machine B; the data did not survive the transcript.]

  • PERFORMANCE MEASURES
    Example (contd.): What is the MIPS rating for the machine considered in the previous example (machine A) and for machine B, assuming a clock rate of 200 MHz?

  • PERFORMANCE MEASURES
    Thus MIPSb > MIPSa. It is interesting to note here that although MIPS has been used as a performance measure for machines, one has to be careful in using it to compare machines having different instruction sets.

    This is because MIPS does not track execution time. Consider, for example, the following measurement made on two different machines running a given set of benchmark programs.

  • PERFORMANCE MEASURES
    [Tables and worked MIPS/CPU-time figures for machines A and B; the numbers did not survive the transcript. An illustrative reconstruction with assumed numbers follows.]
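    The slide's numbers are lost, so the sketch below uses assumed instruction counts and execution times chosen only to reproduce the qualitative point: a machine can post a higher MIPS rating and still be slower on the same benchmark.

    # Assumed data: B executes more (simpler) instructions than A for the
    # same benchmark, so its MIPS is higher even though it takes longer.
    machines = {            # machine: (instructions executed, execution time in s)
        "A": (50e6, 0.5),
        "B": (150e6, 1.0),
    }
    for name, (instructions, seconds) in machines.items():
        mips = instructions / (seconds * 1e6)
        print(name, mips, seconds)
    # A: 100.0 MIPS in 0.5 s;  B: 150.0 MIPS in 1.0 s -> higher MIPS, slower.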

  • PERFORMANCE MEASURES
    The example shows that although machine B has a higher MIPS than machine A, it requires a longer CPU time to execute the same set of benchmark programs.

  • PERFORMANCE MEASURES
    Million floating-point instructions per second (MFLOPS, the rate of floating-point instruction execution per unit time) has also been used as a measure of machine performance. It is defined as:

    MFLOPS = Number of floating-point operations in the program / (Execution time x 10^6)
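    A small sketch of the MFLOPS formula with assumed numbers:

    # MFLOPS = floating-point operations executed / (execution time x 10^6).
    flops_executed = 12e6   # floating-point operations in the program (assumed)
    exec_time = 0.3         # seconds (assumed)
    print(flops_executed / (exec_time * 1e6))   # 40.0 MFLOPS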

  • PERFORMANCE MEASURES
    While MIPS measures the average rate of instruction execution, MFLOPS is defined only over the subset of floating-point instructions. An argument against MFLOPS is the fact that the set of floating-point operations may not be consistent across machines, and therefore the actual floating-point operations will vary from machine to machine. Yet another argument is the fact that the performance of a machine for a given program, as measured by MFLOPS, cannot be generalized to provide a single performance metric for that machine.
