Tests good for SSI and MSI circuits can't cope with the complexity of LSI.

New techniques for test generation and response evaluation are required.

LSI Testing Techniques

M. S. Abadir*

H. K. Reghbati**

University of Saskatchewan

*M. S. Abadir is now in the Department of Electrical Engineering of the University of Southern California.
**H. K. Reghbati is now in the Department of Computing Science of Simon Fraser University, Burnaby, British Columbia.

The growth in the complexity and performance of digital circuits can only be described as explosive. Large-scale integrated circuits are being used today in a variety of applications, many of which require highly reliable operation. This is causing concern among designers of tests for LSI circuits. The testing of these circuits is difficult for several reasons:

* The number of faults that has to be considered is large, since an LSI circuit contains thousands of gates, memory elements, and interconnecting lines, all individually subject to different kinds of faults.

* The observability and controllability of the internal elements of any LSI circuit are limited by the available number of I/O pins. As more and more elements are packed into one chip, the task of creating an adequate test becomes more difficult. A typical LSI chip may contain 5000 gates but only 40 I/O pins.

* The implementation details of the circuits usually are not disclosed by the manufacturer. For example, the only source of information about commercially available microprocessors is the user's manual, which details the instruction set and describes the architecture of the microprocessor at the register-transfer level, with some information on the system timing. The lack of implementation information eliminates the use of many powerful test generation techniques that depend on the actual implementation of the unit under test.

* As more and more gates and flip-flops are packed into one chip, new failure modes, such as pattern-sensitivity faults, arise.1 These new types of faults are difficult to detect and require lengthy test patterns.

* The dynamic nature of LSI devices requires high-speed test systems that can test the circuits when they are operating at their maximum speeds.

* The bus structure of most LSI systems makes fault isolation more difficult because many devices, any of which can cause a fault, share the same bus.

* Solving the problems above increases the number of test patterns required for a successful test. This in turn increases both the time required for applying that test and the memory needed to store the test patterns and their results.

LSI testing is a challenging task. Techniques that worked well for SSI and MSI circuits, such as the D-algorithm, do not cope with today's complicated LSI and VLSI circuits. New testing techniques must be developed. In what follows, we describe some basic techniques developed to solve the problems associated with LSI testing.


Testing methods

There are many test methods for LSI circuits, each with its own way of generating and processing test data. These approaches can be divided into two broad categories: concurrent and explicit.2

In concurrent approaches, normal user-application input patterns serve as diagnostic patterns. Thus testing and normal computation proceed concurrently. In explicit approaches, on the other hand, special input patterns are applied as tests. Hence, normal computation and testing occur at different times.

Concurrent testing. Systems that are tested concurrently are designed such that all the information transferred among various parts of the system is coded with different types of error detecting codes. In addition, special circuits monitor these coded data continuously and signal the detection of any fault.

Different coding techniques are required to suit the different types of information used inside LSI systems. For example, m-out-of-n codes (n-bit patterns with exactly m 1's and n - m 0's) are suitable for coding control signals, while arithmetic codes are best suited for coding ALU operands.3

The monitoring circuits, called checkers, are placed in various locations inside the system so that they can detect most of the faults. A checker is sometimes designed in a way that enables it to detect a fault in its own circuitry as well as in the monitored data. Such a checker is called a self-checking checker.3
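As an illustration of the coding idea, the following Python sketch checks a 2-out-of-4 coded control word. The particular code parameters and function name are assumptions made for this sketch, not circuitry described in the article.

# Illustrative 2-out-of-4 code checker: a control word is accepted only if
# exactly 2 of its 4 bits are 1, so any single-bit error changes the weight
# of the word and is signaled immediately.
def m_out_of_n_check(word, m=2, n=4):
    bits = [(word >> i) & 1 for i in range(n)]
    return sum(bits) == m            # True = code word looks valid

print(m_out_of_n_check(0b0101))      # valid 2-out-of-4 pattern  -> True
print(m_out_of_n_check(0b0111))      # one bit flipped from 0 to 1 -> False (error signaled)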

Hayes and McCluskey surveyed various concurrent testing methods that can be used with microprocessor-based LSI systems.2 Concurrent testing approaches provide the following advantages:

* Explicit testing expenses (e.g., for test equipment, down time, and test pattern generation) are eliminated during the life of the system, since the data patterns used in normal operation serve as test patterns.

* The faults are detected instantaneously during the use of the LSI chip; hence the first faulty data pattern caused by a certain fault is detected. Thus, the user can rely on the correctness of his output results within the degree of fault coverage provided by the error detection code used. In explicit approaches, on the other hand, nothing can be said about the correctness of the results until the chip is explicitly tested.

* Transient faults, which may occur during normal operation, are detected if they cause any faulty data pattern. These faults cannot be detected by any explicit testing method.

Unfortunately, the concurrent testing approach suffers from several problems that limit its usage in LSI testing:

* The application patterns may not exercise all the storage elements or all the internal connection lines. Defects may exist in places that are not exercised, and hence the faults these defects would produce will not be detected. Thus, the assumption that faults are detected as they occur, or at least before any other fault occurs, is no longer valid. Undetected faults will cause fault accumulation. As a result, the fault detection mechanism may fail, because most error detection codes have a limited capability for detecting multiple faults.

* Using error detecting codes to code the information signals used in an LSI chip requires additional I/O pins. At least two extra pins are needed as error signal indicators. (A single pin cannot be used, since such a pin stuck at the good value could go undetected.) Because of constraints on pin count, however, such requirements cannot be fulfilled.

* Additional hardware circuitry is required to implement the checkers and to increase the width of the data carriers used for storing and transferring the coded information.

* Designing an LSI circuit for concurrent testing is a much more complicated task than designing a similar LSI circuit that will be tested explicitly.

* Concurrent approaches provide no control over critical voltage or timing parameters. Hence, devices cannot be tested under marginal timing and electrical conditions.

* The degree of fault coverage usually provided by concurrent methods is less than that provided by explicit methods.

The above-mentioned problems have limited the use of concurrent testing for most commercially available LSI circuits. However, as digital systems grow more complex and difficult to test, it becomes increasingly attractive to build test procedures into the UUT (unit under test) itself. We will not consider the concurrent approach further in this article. For a survey of work in concurrent testing, see Hayes and McCluskey.2

Explicit testing. All explicit testing methods separate the testing process from normal operation. In general, an explicit testing process involves three steps:

* Generating the test patterns. The goal of this step is to produce those input patterns which will exercise the UUT under different modes of operation while trying to detect any existing fault.

* Applying the test patterns to the UUT. There are two ways to accomplish this step. The first is external testing: the use of special test equipment to apply the test patterns externally. The second is internal testing: the application of test patterns internally by forcing the UUT to execute a self-testing procedure.2 Obviously, the second method can only be used with systems that can execute programs (for example, with microprocessor-based systems). External testing gives better control over the test process and enables testing under different timing and electrical conditions. On the other hand, internal testing is easier to use because it does not need special test equipment or engineering skills.

* Evaluating the responses obtained from the UUT. This step is designed with one of two goals in mind. The first is the detection of an erroneous response, which indicates the existence of one or more faults (go/no-go testing). The other is the isolation of the fault, if one exists, in an easily replaceable module (fault location testing). Our interest in this article will be go/no-go testing, since fault location testing of LSI circuits sees only limited use.

Many explicit test methods have evolved in the last decade. They can be distinguished by the techniques used to generate the test patterns and to detect and evaluate the faulty responses (Figure 1). In what follows, we concentrate on explicit testing and present in-depth discussions of the methods of test generation and response evaluation employed with explicit testing.

Figure 1. LSI test technology.

Test generation techniques

The test generation process represents the most important part of any explicit testing method. Its main goal is to generate those test patterns that, when applied to the UUT, sensitize existing faults and propagate a faulty response to an observable output of the UUT. A test sequence is considered good if it can detect a high percentage of the possible UUT faults; it is considered good, in other words, if its degree of fault coverage is high.

Rigorous test generation should consist of three main activities:

* Selecting a good descriptive model, at a suitable level, for the system under consideration. Such a model should reflect the exact behavior of the system in all its possible modes of operation.

* Developing a fault model to define the types of faults that will be considered during test generation. In selecting a fault model, the percentage of possible faults covered by the model should be maximized, and the test costs associated with the use of the model should be minimized. The latter can be accomplished by keeping the complexity of the test generation low and the length of the tests short. Clearly these objectives contradict one another; a good fault model is usually found as a result of a trade-off between them. The nature of the fault model is usually influenced by the model used to describe the system.

* Generating tests to detect all the faults in the fault model. This part of test generation is the soul of the whole test process. Designing a test sequence to detect a certain fault in a digital circuit usually involves two problems. First, the fault must be excited; i.e., a certain test sequence must be applied that will force a faulty value to appear at the fault site if the fault exists. Second, the test must be made sensitive to the fault; i.e., the effect of the fault must propagate through the network to an observable output.

Rigorous test generation rests heavily on both accurate descriptive (system) models and accurate fault models.

Test generation for digital circuits is usually approached either at the gate level or at the functional level. The classical approach of modeling digital circuits as a group of connected gates and flip-flops has been used extensively. Using this level of description, test designers introduced many types of fault models, such as the classical stuck-at model. They also assumed that such models could describe physical circuit failures in terms of logic. This assumption has sometimes restricted the number of physical failures that can be modeled, but it has also reduced the complexity of test generation, since failures at the elementary level do not have to be considered.

Many algorithms have been developed for generating tests for a given fault in combinational networks.1,4-7 However, the complexity of these algorithms depends on the topology of the network; it can become very high for some circuits. Ibarra and Sahni have shown that the problem of generating tests to detect single stuck-at faults in a combinational circuit modeled at the gate level is an NP-complete problem.8 Moreover, if the circuit is sequential, the problem can become even more difficult, depending on the depth of the circuit's sequential logic.

Thus, for LSI circuits having many thousands of gates, the gate-level approach to the test generation problem is not very feasible. A new approach, at the functional level, is needed.

Another important reason for considering faults at the functional level is the constraint imposed on LSI testing by a user environment: the test patterns have to be generated without a knowledge of the implementation details of the chip at the gate level. The only source of information usually available is the typical IC catalog, which details the different modes of operation and describes the general architecture of the circuit. With such information, the test designer finds it easier to define the functional behavior of the circuit and to associate faults with the functions. He can partition the UUT into various modules such as registers, multiplexers, ALUs, ROMs, and RAMs. Each module can be treated as a "black box" performing a specified input/output mapping. These modules can then be tested for functional failures; explicit consideration of faults affecting the internal lines is not necessary. The example given below clarifies the idea.


Consider a simple one-out-of-four multiplexer such as the one shown in Figure 2. This multiplexer can be modeled at the gate level as shown in Figure 2a, or at the functional level as shown in Figure 2b.

A possible fault model for the gate-level description is the single stuck-at fault model. With this model, the fault list may contain faults such as the line labeled "f" is stuck at 0, or the control line "C0" is stuck at 1.

At the functional level, the multiplexer is considered a black box with a well-defined function. Thus, a fault model for it may specify the following as possible faults: selection of wrong source, selection of no source, or presence of stuck-at faults in the input lines or in the multiplexer output. With this model, the fault list may contain faults such as source "X" is selected instead of source "Y," or line "Z" is stuck at 1.

Figure 2. A one-out-of-four multiplexer: gate-level description (a); functional-level description (b). In (b), control inputs C1 C0 = 00, 01, 10, 11 select inputs x, y, z, and w, respectively.

Ad hoc methods, which determine what faults are the most probable, are sometimes used to generate fault lists. But if no fault model is assumed, then the tests derived must be either exhaustive or a rather ad hoc check of the functionality of the system. Exhaustive tests are impossible for even small systems because of the enormous number of possible states, and superficial tests provide neither good coverage nor even an indication of what faults are covered.

Once the fault list has been defined, the next step is to find the test patterns required to detect the faults in the list. As previously mentioned, each fault first has to be excited so that an error signal will be generated somewhere in the UUT. Then this signal has to be sensitized at one of the observable outputs of the UUT. The three examples below describe how to excite and sensitize different types of faults in the types of modules usually encountered in LSI circuits.

Consider the gate-level description of the three-bit incrementer shown in Figure 3. The incrementer output Y2Y1Y0 is the binary sum of Ci and the three-bit binary number X2X1X0, while C0 is the carry-out bit of the sum. Note that X0 (Y0) is the least significant bit of the incrementer input (output).

Assume we want to detect the fault "line f is stuck at 0." To excite that fault, we will force a 1 to appear on line f so that, if it is stuck at 0, a faulty value will be generated at the fault site. To accomplish this, both X0 and Ci must be set to 1. To sensitize the faulty 0 at f, we have to set X1 to 1; this will propagate the fault to Y2 independent of the value of X2. Note that if we set X1 to 0, the fault will be masked, since the AND gate output will be 0, independent of the value at f. Note also that X2 was not specified in the above test. However, by setting X2 to 1, the fault will propagate to both Y2 and C0, which makes the response evaluation task easier.

Consider a microprocessor RAM and assume we want to generate a test sequence to detect the fault "accessing word i in the RAM results in accessing word j instead." To excite such a fault, we will use the following sequence of instructions (assume a microprocessor with single-operand instructions):

Load the word 00...0 into the accumulator.
Store the accumulator contents into memory address j.
Load the word 11...1 into the accumulator.
Store the accumulator contents into memory address i.

If the fault exists, these instructions will force a 11...1 word to be stored in memory address j instead of 00...0. To sensitize the fault, we need only read what is in memory address j, using the appropriate instructions. Note that the RAM and its fault have been considered at the functional level, since we did not specify how the RAM is implemented.
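A small Python sketch of this functional RAM test is given below. The FaultyRAM class and its fault_map argument are hypothetical constructs used only to model the addressing fault; the test itself follows the four-instruction sequence above.

# Words at addresses i and j, with an addressing fault that redirects
# every access of word i to word j.
class FaultyRAM:
    def __init__(self, words, fault_map=None):
        self.mem = [0] * words
        self.fault_map = fault_map or {}          # e.g. {i: j} models the fault
    def _addr(self, a):
        return self.fault_map.get(a, a)
    def write(self, a, value): self.mem[self._addr(a)] = value
    def read(self, a): return self.mem[self._addr(a)]

def test_addressing_fault(ram, i, j, all_ones=0xFF):
    ram.write(j, 0x00)        # store 00...0 at address j
    ram.write(i, all_ones)    # store 11...1 at address i (lands in j if faulty)
    return ram.read(j) == 0x00   # True = pass, False = fault detected

print(test_addressing_fault(FaultyRAM(16), i=3, j=5))                    # fault-free: True
print(test_addressing_fault(FaultyRAM(16, fault_map={3: 5}), i=3, j=5))  # faulty: False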

Consider the program counter (PC) of a microprocessor and assume we want to generate a test sequence that will detect any fault in the incrementing mode of this PC, i.e., any fault that makes the PC unable to be incremented from x to x + 1 for any address x. One way to excite this fault is to force the PC to step through all the possible addresses. This can be easily done by initializing the PC to zero and then executing the no-operation instruction x + 1 times. If the fault exists, the PC will then contain an address different from x + 1. By executing another no-operation instruction, the wrong address can be observed at the address bus and the fault detected. In practice, such an exhaustive test sequence is very expensive, and more economical tests have to be used. Note that, as in the example immediately above, the problem and its solution have been considered at the functional level.

Four methods are currently used to generate test patterns for LSI circuits: manual test generation, algorithmic test generation, simulation-aided test generation, and random test generation.

Manual test generation. In manual test generation, the test designer carefully analyzes the UUT. This analysis can be done at the gate level, at the functional level, or at a combination of the two. The analysis of the different parts of the UUT is intended to determine the specific patterns that will excite and sensitize each fault in the fault list. At one time, the manual approach was widely used for medium- and small-scale digital circuits. Then, the formulation of the D-algorithm and similar algorithms eliminated the need for analyzing each circuit manually and provided an efficient means to generate the required test patterns.1,5 However, the arrival of LSI circuits and microprocessors required a shift back toward manual test generation techniques, because most of the algorithmic techniques used with SSI and MSI circuits were not suitable for LSI circuits.

Manual test generation tends to optimize the length of the test patterns and provides a relatively high degree of fault coverage. However, generating tests manually takes a considerable amount of effort and requires persons with special skills. Realizing that test generation has to be done economically, test designers are now moving in the direction of automatic test generation.

One good example of manual test generation is the work done by Sridhar and Hayes,9 who generated test patterns for a simple bit-sliced microprocessor at the functional level.

A bit-sliced microprocessor is an array of n identical ICs called slices, each of which is a simple processor for operands of k-bit length, where k is typically 2 or 4. The interconnections among the n slices are such that the entire array forms a processor for nk-bit operands. The simplicity of the individual slices and the regularity of the interconnections make it feasible to use systematic methods for fault analysis and test generation.

Sridhar and Hayes considered a one-bit processor slice as a simplified model for commercially available bit-sliced processors such as the Am2901.10 A slice can be modeled as a collection of modules interconnected in a known way. These modules are regarded as black boxes with well-defined input-output relationships. Examples of these functional modules are ALUs, multiplexers, and registers. Combinational modules are described by their truth tables, while sequential modules are defined by their state tables (or state diagrams).

Figure 3. Gate-level description of a three-bit incrementer.


The following fault categories were considered:

* For combinational modules, all possible faults that induce arbitrary changes in the truth table of the module, but that cannot convert it into a sequential circuit.

* For sequential modules, all possible faults that can cause arbitrary changes in the state table of the module without increasing the number of states.

Only one module was assumed to be faulty at any time.

To test for the faults allowed by the above-mentioned fault model, all possible input patterns must be applied to each combinational module (exhaustive testing), and a checking sequence11 to each sequential module. In addition, the responses of each module must be propagated to observable output lines. The tests required by the individual modules were easily generated manually, a direct consequence of the small operand size (k = 1). And because the slices were identical, the tests for one slice were easily extended to the whole array of slices. In fact, Sridhar and Hayes showed that an arbitrary number of simple interconnected slices could be tested with the same number of tests as that required for a single slice, as long as only one slice was faulty at one time. This property is called C-testability. Note that the use of carry-lookahead when connecting slices eliminates C-testability. Also note that slices with operand sizes equal to 2 or more usually are not C-testable.

The idea of modeling a digital system as a collection of interconnected functional modules can be used in modeling any LSI circuit. However, using exhaustive tests and checking sequences to test individual modules is feasible only for toy systems. Hence, the fault model proposed by Sridhar and Hayes, though very powerful, is not directly applicable to LSI testing.

Algorithmic test generation. In algorithmic test generation, the test designer devises a set of algorithms to generate the 1's and 0's needed to test the UUT. Algorithmic test techniques are much more economical than manual techniques. They also provide the test designer with a high level of flexibility. Thus, he can improve the fault coverage of the tests by replacing or modifying parts of the algorithms.

Path sensitization and the D-algorithm

One of the classical fault detection methods at the gate level is the path sensitization testing technique. The basic principle involved in path sensitization is relatively simple. For an input X to detect a fault "line f is stuck at j" (j = 0 or 1), the input X must force the signal f in the normal (fault-free) circuit to take the value complementary to j. This condition is necessary but not sufficient to detect the fault. Generating a test therefore involves three steps:

* Excitation-The inputs must be specified so as to force the value opposite to the stuck-at value to appear at the fault site.

* Error propagation-A path from the fault site to an observable output must be selected, and additional signal values must be specified to propagate the fault signal along this path.

* Line justification-Input values must be specified so as to produce the signal values specified in the error-propagation step.

There may be several possible choices for error propagation and line justification. Also, in some cases there may be a choice of ways in which to excite the fault. Some of these choices may lead to an inconsistency, and so the procedure must backtrack and consider the next alternative. If all the alternatives lead to an inconsistency, this implies that the fault cannot be detected.

To facilitate the path sensitization process, we introduce the symbol D to represent a signal which has the value 1 in a normal circuit and 0 in a faulty circuit, and D̄ to represent a signal which has the value 0 in a normal circuit and 1 in a faulty circuit. The path sensitization approach has been formalized, using a cubical algebra, in the D-algorithm to enable automatic generation of tests. This also facilitates test generation for more complex fault models and for fault propagation through complex logic elements.

We shall define three types of cubes (i.e., line values specified in positional notation):

* For a circuit element E which realizes the combinational function f, the "primitive cubes" offer a typical presentation of the prime implicants of f and its complement. These cubes concisely represent the logical behavior of E.

* A "primitive D-cube of a fault" in a logic element E specifies the minimal input conditions that must be applied to E in order to produce an error signal (D or D̄) at the output of E.

* The "propagation D-cubes" of a logic element E specify the minimal input conditions to the logic element that are required to propagate an error signal on an input (or inputs) to the output of that element.

To generate a test for a stuck-at fault in a combinational circuit, the D-algorithm must perform the following:

(1) Fault excitation-A primitive D-cube of the fault under consideration must be selected. This generates the error signal D or D̄ at the site of the fault. (Usually a choice exists in this step. The initial choice is arbitrary, and it may be necessary to backtrack and consider another choice.)

(2) Error propagation (the D-drive)-Propagation D-cubes must be selected so that the error signal is driven along a path to an observable output.

(3) Line justification-Input values must be specified so as to justify all the internal signal values assumed in the previous steps.
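To make the excitation, propagation, and justification steps concrete, the following Python sketch checks that a candidate vector detects a stuck-at fault on an internal line of a trivial two-gate circuit by comparing fault-free and faulty simulations. It is only an illustration of the underlying idea, not an implementation of the D-algorithm.

# Tiny circuit: z = (a AND b) OR c, with internal line g = a AND b.
# The vector a=1, b=1, c=0 excites g to 1 and, with c=0, keeps the OR gate
# transparent so that "g stuck at 0" propagates to the output.
def circuit(a, b, c, g_stuck_at=None):
    g = a & b
    if g_stuck_at is not None:
        g = g_stuck_at               # inject the stuck-at fault on line g
    return g | c

test = (1, 1, 0)
good, faulty = circuit(*test), circuit(*test, g_stuck_at=0)
print("detected" if good != faulty else "missed")   # -> detected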


Of course, this task is much simpler than modifying the 1's and 0's in a manually generated test sequence.

Techniques that use the gate-level description of the UUT, such as path sensitization4 and the D-algorithm,5 can no longer be used in testing complicated LSI circuits. Thus, the problem of generating meaningful sets of tests directly from the functional description of the UUT has become increasingly important. Relatively little work has been done on functional-level testing of LSI chips that are not memory elements.9,12-17 Functional testing of memory chips is relatively simple because of the regularity of their design and also because their components can be easily controlled and observed from the outside. Various test generation algorithms have been developed to detect different types of faults in memories.1,18 In the rest of this section we will concentrate on the general problem of generating tests for irregular LSI chips, i.e., for LSI chips which are not strictly memory chips.

It is highly desirable to find an algorithm that can generate tests for any LSI circuit, or at least most LSI circuits. One good example of work in this area is the technique proposed by Thatte and Abraham for generating tests for microprocessors.12,13 Another approach, pursued by the authors of this article, is a test generation procedure capable of handling general LSI circuits.15-17

The Thatte-Abraham technique. Microprocessors constitute a high percentage of today's LSI circuits. Thatte and Abraham12,13 approached the microprocessor test generation problem at the functional level. The test generation procedure they developed was based on

* A functional description of the microprocessor at the register-transfer level. The model is defined in terms of data flow among storage units during the execution of an instruction. The functional behavior of a microprocessor is thus described by information about its instruction set and the functions performed by each instruction.

* A fault model describing faults in the various functional parts of the UUT (e.g., the data transfer function, the data storage function, the instruction decoding and control function). This fault model describes the faulty behavior of the UUT without knowing its implementation details.

The microprocessor is modeled by a graph. Each register in the microprocessor (including general-purpose registers and accumulator, stack, program counter, address buffer, and processor status word registers) is represented by a node of the graph. Instructions of the microprocessor are classified as being of transfer, data manipulation, or branch type. There exists a directed edge (labeled with an instruction) from one node to another if, during the execution of the instruction, data flow occurs from the register represented by the first node to that represented by the second. Examples of instruction representation are given in Figure 4.

Having described the function or the structure of the UUT, one needs an appropriate fault model in order to derive useful tests. The approach used by Thatte and Abraham is to partition the various functions of a microprocessor into five classes: the register decoding function, the instruction decoding and control function, the data storage function, the data transfer function, and the data manipulation function. Fault models are derived for each of these functions at a higher level and independently of the details of implementation for the microprocessor. The fault model is quite general. Tests are derived allowing any number of faults, but only in one function at a time; this restriction exists solely to cut down the complexity of test generation.

Figure 4. Representations of microprocessor instructions: I1, transfer instruction, R2←R1 (a); I2, add instruction, R3←R1 + R2 (b); I3, or instruction, R2←R1 OR R2 (c); I4, rotate left instruction (d).
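A minimal Python sketch of this graph model is shown below. The instruction names and register-transfer flows are invented for illustration and do not correspond to any particular microprocessor.

# Registers are nodes; each instruction labels directed edges along which
# data flows when the instruction executes.
from collections import defaultdict

edges = defaultdict(list)   # source register -> [(instruction, destination register)]

def add_instruction(name, flows):
    """flows: list of (src, dst) register pairs exercised by the instruction."""
    for src, dst in flows:
        edges[src].append((name, dst))

# Illustrative instructions resembling I1-I3 in Figure 4 (hypothetical mnemonics).
add_instruction("I1: MOV R2,R1", [("R1", "R2")])
add_instruction("I2: ADD R3,R1,R2", [("R1", "R3"), ("R2", "R3")])
add_instruction("I3: OR  R2,R1,R2", [("R1", "R2"), ("R2", "R2")])

for src, outs in edges.items():
    for instr, dst in outs:
        print(f"{src} --[{instr}]--> {dst}")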

The fault model for the register decoding function allows any possible set of registers to be accessed instead of a particular register. (If the set is null, then no register is accessed.) This fault model is thus very general and independent of the actual realization of the decoding mechanism.

For the instruction decoding and control function, the faulty behavior of the microprocessor is specified as follows. When instruction Ij is executed, any one of the following can happen:

* Instead of instruction Ij, some other instruction Ik is executed. This fault is denoted by F(Ij/Ik).

* In addition to instruction Ij, some other instruction Ik is activated. This fault is denoted by F(Ij/Ij + Ik).

* No instruction is executed. This fault is denoted by F(Ij/φ).

Under this specification, any number of instructions can be faulty.

In the fault model for the data storage function, any cell in any data storage module is allowed to be stuck at 0 or 1. This can occur in any number of cells.

The fault model for the data transfer function includes the following types of faults:

* A line in a path used in the execution of an instruction is stuck at 0 or 1.

* Two lines of a path used in the instruction are coupled; i.e., they fail to carry different logic values.

Note that the second fault type cannot be modeled by single stuck-at faults. The transfer paths in this fault model are logical paths and thus will account for any failure in the actual physical paths.

Since there is a variety of designs for the ALU and other functional units such as increment or shift logic, no specific fault model is used for the data manipulation function. It is assumed that complete test sets can be derived for the functional units for a given fault model.

By carefully analyzing the logical behavior of the microprocessor according to the fault models presented above, Thatte and Abraham formulated a set of algorithms to generate the necessary test patterns. These algorithms step the microprocessor through a precisely defined set of instructions and addresses. Each algorithm was designed for detecting a particular class of faults, and theorems were proved which showed exactly the kind of faults detected by each algorithm. These algorithms employ the excitation and sensitization concepts previously described.

To gain insight into the problems involved in using the algorithms, Thatte investigated the testing of an eight-bit microprocessor from Hewlett-Packard.12 He generated the test patterns for the microprocessor by hand, using the algorithms. He found that 96 percent of the single stuck-at faults that could affect the microprocessor were detected by the test sequence he generated. This figure indicates the validity of the technique.


The Abadir-Reghbati technique. Here we will briefly describe a test generation technique we developed for LSI circuits.15,16 We assumed that the tests would be generated in a user environment in which the gate- and flip-flop-level details of the chip were not known.

We developed a module-level model for LSI circuits. This model bypasses the gate and flip-flop levels and directly describes blocks of logic (modules) according to their functions. Any LSI circuit can be modeled as a network of interconnected modules such as counters, registers, ALUs, ROMs, RAMs, multiplexers, and decoders.

Each module in an LSI circuit was modeled as a black box having a number of functions defined by a set of binary decision diagrams (see box).19 This type of diagram, a functional description tool introduced by Akers in 1978, is a concise means for completely defining the logical operation of one or more digital functions in an implementation-free form. The information usually found in an IC catalog is sufficient to derive the set of binary decision diagrams describing the functions performed by the different modules in a device. These diagrams, like truth tables and state tables, are amenable to extensive logical analysis. However, unlike truth tables and state tables, they do not have the unpleasant property of growing exponentially with the number of variables involved. Moreover, the diagrams can be stored and processed easily in a digital computer. An important feature of these diagrams is that they state exactly how the module will behave in every one of its operation modes. Such information can be extracted from the module's diagrams in the form of a set of experiments.15,20 Each of these experiments describes the behavior of the module in one of its modes of operation. The structure of these experiments makes them suitable for use in automatic test generation.

We also developed a functional-level fault model describing faulty behavior in the different modules of an LSI chip. This model is quite independent of the details of implementation and covers functional faults that alter the behavior of a module during one of its modes of operation. It also covers stuck-at faults affecting any input or output pin or any interconnection line in the chip.

Using the above-mentioned models, we proposed a functional test generation procedure based on path sensitization and the D-algorithm.15 The procedure takes the module-level model of the LSI chip and the functional description of its modules as parameters and generates tests to detect faults in the fault model. The fault collapsing technique1 was used to reduce the length of the test sequence. As in the D-algorithm, the procedure employs three basic operations, namely implication, D-propagation, and line justification. However, these operations are performed on functional modules.

We also presented algorithmic solutions to the problems of performing these operations on functional modules.16 For each of the three operations, we gave an algorithm which takes the module's set of experiments and current state (i.e., the values assigned to the module inputs, outputs, and internal memory elements) as parameters and generates all the possible states of the module after performing the required operation.

We have also reported our efforts to develop test sequences based on our test generation procedure for typical LSI circuits.17 More specifically, we considered a one-bit microprocessor slice C that has all the basic features of the four-bit Am2901 microprocessor slice.10 The circuit C was modeled as a network of eight functional modules: an ALU, a latch register, an addressable register, and five multiplexers. The functions of the individual modules were described in terms of binary decision diagrams or equivalent sets of experiments. Tests capable of detecting various faults covered by the fault model were then generated for the circuit C. We showed that if the fault collapsing technique is used, a significant reduction in the length of the final test sequence results.

The test generation effort was quite straightforward, indicating that the technique can be automated without much difficulty. Our study also shows that for a simplified version of the circuit C the length of the test sequence generated by our technique is very close to the length of the test sequence manually generated by Sridhar and Hayes9 for the same circuit. We also described techniques for modeling some of the features of the Am2909 four-bit microprogram sequencer10 that are not covered by the circuit C.

The results of our case study were quite promising and showed that our technique is a viable and effective one for generating tests for LSI circuits.

Simulation-aided test generation. Logic simulation techniques have been used widely in the evaluation and verification of new digital circuits. However, an important application of logic simulation is to interpret the behavior of a circuit under a certain fault or faults. This is known as fault simulation. To clarify how this technique can be used to generate tests for LSI systems, we will first describe its use with SSI/MSI-type circuits.

To generate a fault simulator for an SSI/MSI circuit, the following information is needed:1

* the gate-level description of the circuit, written in a special language;

* the initial conditions of the memory elements; and

* a list of the faults to be simulated, including classical types of faults such as stuck-at faults and adjacent pin shorts.

The above is fed to a simulation package which generates the fault simulator of the circuit under test. The resulting simulator can simulate the behavior of the circuit under normal conditions as well as when any faults exist.

Now, by applying various input patterns (either generated by hand, by an algorithm, or at random), the simulator checks to see if the output response of the correct circuit differs from one of the responses of the faulty circuits. If it does, then this input pattern detects the fault which created the wrong output response; otherwise the input pattern is useless. If an input pattern is found to detect a certain fault, this fault is deleted from the fault list and the process continues until either the input patterns or the faults are finished. At the end, the faults remaining in the fault list are those which cannot be detected by the input patterns. This directly measures the degree of fault coverage of the input patterns used.

Two examples of this type of logic simulator are

LAMP, the Logic Analyzer for Maintenance Planning developed at Bell Laboratories,21 and the Testaid III fault simulator developed at the Hewlett-Packard Company.12 Both work primarily at the gate level and simulate stuck-at faults only. One of the main applications of such fault simulators is to determine the degree of fault coverage provided by a test sequence generated by any other test generation technique.
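The following Python sketch illustrates the fault-dropping loop described above on a trivial circuit with a two-entry fault list; it is not modeled on LAMP or Testaid III.

# Simulate each candidate pattern against the good circuit and every remaining
# faulty circuit, dropping faults as soon as some pattern detects them.
from itertools import product

def simulate(a, b, c, fault=None):
    g = a & b
    if fault == "g/0": g = 0          # single stuck-at faults on internal line g
    if fault == "g/1": g = 1
    return g | c

fault_list = {"g/0", "g/1"}
patterns = list(product((0, 1), repeat=3))   # here exhaustive; normally hand-picked or random

for p in patterns:
    detected = {f for f in fault_list if simulate(*p, fault=f) != simulate(*p)}
    fault_list -= detected                   # fault dropping
coverage = 1 - len(fault_list) / 2
print(f"undetected faults: {fault_list}, coverage: {coverage:.0%}")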

There are two key requirements that affect the success of any fault simulator:

* the existence of a software model for each primitive element of the circuit, and

* the existence of a good fault model for the UUT which can be used to generate a fault list covering most of the actual physical faults.

These two requirements have been met for SSI/MSI circuits, but they pose serious problems for LSI circuits. If it can be done at all, modeling LSI circuits at the gate level requires great effort. One part of the problem is the lack of detailed information about the internal structure of most LSI chips. The other is the time and memory required to simulate an LSI circuit containing thousands of gates. Another severe problem facing almost all LSI test generation techniques is the lack of good fault models at a level higher than the gate level.

The Abadir-Reghbati description model proposed in the previous section permits the test designer to bypass the gate-level description and, using binary decision diagrams, to define blocks of logic according to their functions. Thus, the simulation of complex LSI circuits can take place at a higher level, and this eliminates the large time and memory requirements. Furthermore, the Abadir-Reghbati fault model is quite efficient and is suitable for simulation purposes. In fact, the implication operation16 employed by the test generation procedure represents the main building block of any fault simulator. It must be noted that fault simulation techniques are very useful in optimizing the length of the test sequence generated by any test generation technique.

Random test generation. This method can be considered the simplest method for testing a device. A random number generator is used to apply random input patterns simultaneously to the UUT and to a copy of it known to be fault-free. (This copy is called the golden unit.) The results obtained from the two units are compared, and if they do not match, a fault in the UUT is detected. This response evaluation technique is known as comparison testing; we will discuss it later. It is important to note that every time the UUT is tested, a new random test sequence is used.

Binary decision diagrams

Binary decision diagrams are a means of defining the logical operation of digital functions.1 They tell the user how to determine the output value of a digital function by examining the values of the inputs. Each node in these diagrams is associated with a binary variable, and there are two branches coming out from each node. The right branch is the "1" branch, while the left branch is the "0" branch. Depending on the value of the node variable, one of the two branches will be selected when the diagram is processed.

To see how binary decision diagrams can be used, consider the half-adder shown in Figure 1a. Assume we are interested in defining a procedure to determine the value of C, given the binary values of X and Y. We can do this by looking at the value of X. If X = 0, then C = 0, and we are finished. If X = 1, we look at Y. If Y = 0, then C = 0, else C = 1, and in either case we are finished. Figure 1b shows a simple diagram of this procedure. By entering the diagram at the node indicated by the arrow labeled with C and then proceeding through the diagram following the appropriate branches until a 0 or 1 value is reached, we can determine the value of C. Figure 1c shows the diagram representing the function S of the half-adder.

Figure 1. A half-adder (a); binary decision diagram for C = X·Y (b); binary decision diagram for S = X⊕Y (c).

To simplify the diagrams, any diagram node which has two exit branches can be replaced by the variable itself or its complement. These variables are called exit variables. Figure 2 shows how this convention is used to simplify the diagrams describing the half-adder.

In the previous discussion, we have considered only simple diagrams in which the variables within the nodes are primary input variables. However, we can expand the scope of these diagrams by using auxiliary variables as the node variables. These auxiliary variables are defined by their own diagrams. Thus, when a user encounters such a node variable, say g, while tracing a path, he must first process the diagram defining g to determine the value of g, and then return to the original node and take the appropriate branch. This process is similar to the use of subroutines in high-level programming languages.

For example, consider the full-adder defined by

Ci+1 = EiCi + AiBi
Si = Ei ⊕ Ci,

where Ei = Ai ⊕ Bi. Figure 3 shows the diagrams for these three equations. If the user wants to know the value of Ci+1 when the values of the three primary inputs Ai, Bi, and Ci are all 1's, he enters the Ci+1 diagram, where he encounters the node variable Ei. By traversing the Ei diagram, he obtains a value of 0. Returning to the original Ci+1 diagram with Ei = 0 will result in taking the 0 branch and exiting with Ci+1 = Ai = 1.

Since node variables can refer to other auxiliary functions, we can simply describe complex modules by breaking their functions into small subfunctions. Thus, the system diagram will consist of small diagrams connected in a hierarchical structure. Each of these diagrams describes either a module output or an auxiliary variable.

Akers1 described two procedures to generate the binary decision diagram of a combinational function f. The first one uses the truth table description of f, while the other uses the boolean expression of f. A similar procedure can be derived to generate the binary decision diagram for any sequential function defined by a state table.

Binary decision diagrams can be easily stored and processed by a computer through the use of binary tree structures. Each node can be completely defined by an ordered triple: the node variable and two pointers to the two nodes to which its 0 and 1 branches are directed. Binary decision diagrams can be used in functional testing.2
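A minimal Python sketch of this triple representation, evaluating the carry diagram of Figure 1b, is given below; it is an illustration consistent with the box, not Akers' own procedure.

# Each internal node is a triple: (variable name, 0-branch, 1-branch).
# A branch is either another node or a terminal value (0 or 1).
carry_diagram = ("X", 0, ("Y", 0, 1))     # C = X AND Y, as in Figure 1b

def evaluate(node, inputs):
    while isinstance(node, tuple):        # follow branches until a terminal is reached
        var, zero_branch, one_branch = node
        node = one_branch if inputs[var] else zero_branch
    return node

print(evaluate(carry_diagram, {"X": 1, "Y": 1}))   # -> 1
print(evaluate(carry_diagram, {"X": 1, "Y": 0}))   # -> 0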

References

1. S. B. Akers, "Binary Decision Diagrams," IEEE Trans. Computers, Vol. C-27, No. 6, June 1978, pp. 509-516.

2. S. B. Akers, "Functional Testing with Binary Decision Diagrams," Proc. 8th Int'l Symp. Fault-Tolerant Computing, June 1978, pp. 82-92.


Figure 2. Simplified binary decision diagrams for the half-adder.

Figure 3. Binary decision diagrams for a full-adder.


The important question is how effective the random test is, or, in other words, what fault coverage a random test of given length provides. This question can be answered by employing a fault simulator to simulate the effect of random test patterns of various lengths. The results of such experiments on SSI and MSI circuits show that random test generation is most suitable for circuits without deep sequential logic.1,22,23 However, by combining random patterns with manually generated ones, test designers can obtain very good results.

The increased sequentiality of LSI circuits reduces the applicability of random testing. Again, combining manually generated test patterns with random ones improves the degree of fault coverage. However, two factors restrict the use of the random test generation technique:

* The dependency on the golden unit, which is assumed to be fault-free, weakens the level of confidence in the results.

* There is no accurate measure of how effective the test is, since all the data gathered about random tests are statistical data. Thus, the amount of fault coverage provided by a particular random test process is unpredictable.
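The basic random comparison scheme described above can be sketched in a few lines of Python. The golden_unit and uut functions below are stand-ins for the two physical devices, and the injected fault is an arbitrary choice for illustration.

# Drive the UUT and a golden unit with the same random patterns and flag the
# first mismatch.
import random

def golden_unit(pattern):           # assumed fault-free reference behavior
    a, b, c = pattern
    return (a & b) | c

def uut(pattern):                   # unit under test; here it hides a stuck-at fault
    a, b, c = pattern
    return 0 | c                    # the internal AND output (a & b) is stuck at 0

random.seed(1)
for n in range(1, 1001):
    p = tuple(random.randint(0, 1) for _ in range(3))
    if uut(p) != golden_unit(p):
        print(f"fault detected after {n} random patterns: {p}")
        break
else:
    print("no fault detected; coverage of the random test remains uncertain")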

Response evaluation techniques

Different methods have been used to evaluate UUT responses to test patterns. We restrict our discussion to the case where the final goal is only to detect faults or, equivalently, to detect any wrong output response. There are two ways of achieving this goal: using a good response generator or using a compact testing technique.

Good response generation. This technique implements an ideal strategy: comparing UUT responses with good response patterns to detect any faulty response. Clearly, the key problems are how to obtain a good response and at what stage in the testing process that response will be generated. In current test systems, two approaches to solving these problems are taken: stored response testing and comparison testing.

Stored response testing. In stored response testing, a one-shot operation generates the good response patterns at the end of the test generation stage. These patterns are stored in an auxiliary memory (usually a ROM). A flow diagram of the stored response testing technique is shown in Figure 5.

Different methods can be used to obtain good responses of a circuit to a particular test sequence. One way is to do it manually by analyzing the UUT and the test patterns. This method is the most suitable if the test patterns were generated manually in the first place.

Figure 5. Stored response testing.

Figure 6. Comparison testing.


The method most widely used to obtain good responses from the UUT is to apply the test patterns either to a known good copy of the UUT (the golden unit) or to a software-simulated version of the UUT. Of course, if fault simulation techniques were used to generate the test patterns, the UUT's good responses can be obtained very easily as a partial product from the simulator.

The use of a known good device depends on the availability of such a device. Hence, different techniques must be used for the user who wants to test his LSI system and for the designer who wants to test his prototype design. However, golden units are usually available once the device goes into production. Moreover, confidence in the correctness of the responses can be increased by using three or five good devices together to generate the good responses.

The major advantage of the stored response technique is that the good responses are generated only once for each test sequence, thus reducing the cost of the response evaluation step. However, the stored response technique suffers from various disadvantages:

* Any change in the test sequence requires the whole process to be repeated.

* A very large memory is usually needed to store all the good responses to a reasonable test sequence, because both the length and the width of the responses are relatively large. As a result, the cost of the testing equipment increases.

* The speed with which the test patterns can be applied to the UUT is limited by the access time of the memory used to store the good responses.

Comparison testing. Another way to evaluate the responses of the UUT during the testing process is to apply the test patterns simultaneously to both the UUT and a golden unit and to compare their responses to detect any faulty response. The flow diagram of the comparison testing technique is shown in Figure 6. The use of comparison testing makes possible the testing of the UUT at different speeds under different electrical parameters, given that these parameters are within the operating limits of the golden unit, which is assumed to be ideal.

Note that in comparison testing the golden unit is used to generate the good responses every time the UUT is tested. In stored response testing, on the other hand, the golden unit is used to generate the good responses only once.

The disadvantages of depending on a golden unit are more serious here, however, since every explicit testing process requires one golden unit. This means that every tester must contain a golden copy of each LSI circuit tested by that tester.

One of the major advantages of comparison testing is that nothing has to be changed in the response evaluation stage if the test sequence is altered. This makes comparison testing highly desirable if test patterns are generated randomly.

Compact testing. The major drawback of good response generation techniques in general, and stored response testing in particular, is the huge amount of response data that must be analyzed and stored. Compact testing methods attempt to solve this by compressing the response data R into a more compact form f(R) from which most of the fault information in R can be derived. Thus, because only the compact form of the good responses has to be stored, the need for large memory or expensive golden units is eliminated. An important property of the compression function f is that it can be implemented with simple circuitry. Thus, compact testing does not require much test equipment and is especially suited for field maintenance work. A general diagram of the compact testing technique is shown in Figure 7.

Several choices for the function f exist, such as "the number of 1's in the sequence," "the number of 0 to 1 and 1 to 0 transitions in the sequence" (transition counting),24 or "the signature of the sequence" (signature analysis).25 For each compression function f, there is a slight probability that a response R1 different from the fault-free response R0 will be compressed to a form equal to f(R0), i.e., f(R1) = f(R0). Thus, the fault causing the UUT to produce R1 instead of R0 will not be detected, even though it is covered by the test patterns.

The two compression functions that are the most widely accepted commercially are transition counting and signature analysis.
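Signature analysis is mentioned here only by name; as an illustration of a compression function f, the following Python sketch folds a response stream into an 8-bit linear feedback shift register. The register width and feedback taps are arbitrary choices for this sketch, not those of any particular signature analyzer.

# Each response bit is folded into the signature register; equal signatures
# usually, but not always, mean equal response streams (aliasing is possible).
def signature(bits, taps=(7, 5, 4, 3)):    # arbitrary feedback taps for illustration
    sig = 0
    for b in bits:
        feedback = b
        for t in taps:
            feedback ^= (sig >> t) & 1
        sig = ((sig << 1) & 0xFF) | feedback
    return sig

good_response   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
faulty_response = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(hex(signature(good_response)), hex(signature(faulty_response)))  # differ -> fault detected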

Figure 7. Compact testing.

Transition counting. In transition counting, the number of logical transitions (0 to 1 and vice versa) is computed at each output pin by simply running each output of the UUT into a special counter. Thus, the number of counters needed is equal to the number of output pins observed. For every m-bit output data stream (at one pin), an n-bit counter is required, where n = ⌈log2 m⌉. As in stored response testing, the transition counts of the good responses are obtained by applying the test sequence to a golden copy of the UUT and counting the number of transitions at each output pin. This latter information is used as a reference in any explicit testing process.

Figure 8. A one-out-of-four multiplexer.

In the testing of an LSI circuit by means of transition counting, the input patterns can be applied to the UUT at a very high rate, since the response evaluation circuitry is very fast. Also, the size of the memory needed to store the transition counts of the good responses can be very small. For example, a transition counting test using 16 million patterns at a rate of 1 MHz will take 16 seconds, and the compressed stored response will occupy only K 24-bit words, where K is the number of output pins. This can be contrasted with the 16 million K-bit words of storage space needed if regular stored response testing is used.

The test patterns used in a transition counting test system must be designed such that their output responses maximize the fault coverage of the test.24 The example below shows how this can be done.

Consider the one-out-of-four multiplexer shown in Figure 8. To check for multiple stuck-at faults in the multiplexer input lines, eight test patterns are required, as shown in Table 1.

Table 1. The eight test patterns used for testing the multiplexer of Figure 8.

S0  S1  X1  X2  X3  X4  Y
 0   0   1   0   0   0  1
 0   0   0   1   1   1  0
 0   1   0   1   0   0  1
 0   1   1   0   1   1  0
 1   0   0   0   1   0  1
 1   0   1   1   0   1  0
 1   1   0   0   0   1  1
 1   1   1   1   1   0  0

The sequence of applying these eight patterns to the multiplexer is not important if we want to evaluate the output responses one by one. However, this sequence will greatly affect the degree of fault coverage if transition counting is used. To illustrate this fact, consider the eight single stuck-at faults on the four input lines X1, X2, X3, and X4 (i.e., X1 stuck-at 0, X1 stuck-at 1, X2 stuck-at 0, and so on). Each of these faults will be detected by only one pattern among the eight test patterns. For example, the fault "X1 stuck-at 0" will be detected by applying the first test pattern in Table 1, but the other seven test patterns will not detect this fault. Now, suppose we want to use transition counting to evaluate the output responses of the multiplexer. Applying the eight test patterns in the sequence shown in Table 1 (from top to bottom) will produce the output response 10101010 (from left to right), with a transition count of seven. Any possible combination of the eight faults described above will change the transition count to a number different from seven, and the fault will be detected. (Note that no more than four of the eight faults can occur at any one time.) Thus, the test sequence shown in Table 1 will detect all single and multiple stuck-at faults in the four input lines of the multiplexer.

Now, if we change the sequence of the test patterns to the one shown in Table 2, the fault coverage of the test will decrease considerably. The output response for the sequence of Table 2 will be 11001100, with a transition count of three.

Table 2. A different sequence of the eight multiplexer test patterns.

S0  S1  X1  X2  X3  X4  Y
 0   0   1   0   0   0  1
 0   1   0   1   0   0  1
 0   0   0   1   1   1  0
 0   1   1   0   1   1  0
 1   0   0   0   1   0  1
 1   1   0   0   0   1  1
 1   0   1   1   0   1  0
 1   1   1   1   1   0  0


As a result, six of the eight single stuck-at faults will not be detected, because the transition count of the six faulty responses will remain three. For example, the fault "X1 stuck-at 1" will change the output response to 11101100, which still has a transition count of three. Hence, this fault will not be detected. Moreover, most of the multiple combinations of the eight faults will not change the transition count of the output, and hence they will not be detected either.

It is clear from the above example that the order of applying the test patterns to the UUT greatly affects the fault coverage of the test. When testing combinational circuits, the test designer is completely free to choose the order of the test patterns. However, he cannot do the same with test patterns for sequential circuits. More seriously, because he is dealing with LSI circuits that probably have multiple output lines, he will find that a particular test sequence may give good results at some outputs and bad results at others. One way to resolve these conflicts is to use simulation techniques to find the optimal test sequence. However, because of the limitations discussed here, transition counting cannot be regarded as a powerful compact LSI testing method.
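The effect of pattern ordering can be checked with a small behavioral simulation of the Figure 8 multiplexer. The sketch below is ours and only models the example: mux4, run, table1, and table2 are illustrative names, and the pattern lists simply transcribe Tables 1 and 2. It injects each of the eight single stuck-at faults on X1 through X4 and counts how many of them change the fault-free transition count under each ordering.

```python
def mux4(s0, s1, x):
    # One-out-of-four multiplexer: (S0, S1) = 00, 01, 10, 11 selects X1, X2, X3, X4.
    return x[2 * s0 + s1]

def transition_count(bits):
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

# The eight patterns as (S0, S1, X1, X2, X3, X4), in the Table 1 order.
table1 = [(0, 0, 1, 0, 0, 0), (0, 0, 0, 1, 1, 1), (0, 1, 0, 1, 0, 0), (0, 1, 1, 0, 1, 1),
          (1, 0, 0, 0, 1, 0), (1, 0, 1, 1, 0, 1), (1, 1, 0, 0, 0, 1), (1, 1, 1, 1, 1, 0)]
# The same eight patterns in the Table 2 order.
table2 = [table1[i] for i in (0, 2, 1, 3, 4, 6, 5, 7)]

def run(patterns, stuck=None):
    # stuck = (i, v) forces input X(i+1) to the value v (a single stuck-at fault).
    outputs = []
    for s0, s1, *x in patterns:
        if stuck is not None:
            x[stuck[0]] = stuck[1]
        outputs.append(mux4(s0, s1, x))
    return transition_count(outputs)

faults = [(i, v) for i in range(4) for v in (0, 1)]   # X1..X4 stuck-at-0 and stuck-at-1

good1 = run(table1)                                    # fault-free count: 7
good2 = run(table2)                                    # fault-free count: 3
detected1 = sum(run(table1, f) != good1 for f in faults)   # 8: all single faults caught
detected2 = sum(run(table2, f) != good2 for f in faults)   # 2: six single faults escape
assert (good1, detected1) == (7, 8)
assert (good2, detected2) == (3, 2)
```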

Signature analysis. In 1977 Hewlett-Packard Corporation introduced a new compact testing technique called signature analysis, intended for testing LSI systems.25-28 In this method, each output response is passed through a 16-bit linear feedback shift register whose contents f(R), after all the test patterns have been applied, are called the test signature. Figure 9 shows an example of a linear feedback shift register used in signature analysis.

The signature provided by linear feedback shift registers can be regarded as a unique fingerprint; hence, test designers have extremely high confidence in these shift registers as tools for catching errors. To better understand this confidence, let us examine the 16-bit linear feedback shift register shown in Figure 9. Assume a data stream of length n is fed to the serial data input line (representing the output response to be evaluated). There are 2^n possible data streams, and each one will be compressed to one of the 2^16 possible signatures. Linear feedback shift registers have the property of distributing the different data streams equally over the different signatures.27 This property is illustrated by the following numerical examples:

* Assume n = 16. Then each data stream will be mapped to a distinct signature (one-to-one mapping).

* Assume n = 17. Then exactly two data streams will be mapped to the same signature. Thus, for a particular data stream (the UUT's good output response), there is only one other data stream (a faulty output response) that will have the same signature; i.e., only one faulty response out of 2^17 - 1 possible faulty responses will not be detected.

* Assume n = 18. Then four different data streams will be mapped to the same signature. Hence, only three faults out of 2^18 - 1 possible faults will not be detected.

We can generalize the results obtained above. For any response data stream of length n > 16, the probability of missing a faulty response when using a 16-bit signature analyzer is27

(2^(n-16) - 1) / (2^n - 1) ≈ 2^-16,  for n >> 16.

Hence, the probability of missing an error in the bit stream is very small (on the order of 0.002 percent). Note also that a large percentage of faults will affect more than one output pin; hence the probability of not detecting these kinds of faults is even lower.

Signature analysis provides a much higher level of confidence for detecting faulty output responses than that provided by transition counting. But, like transition counting, it requires only very simple hardware circuitry and a small amount of memory for storing the good signatures. As a result, the signatures of the output responses can be calculated even when the UUT is tested at its maximum speed. Unlike transition counting, the degree of fault coverage provided by signature analysis is not sensitive to the order of the test patterns. Thus, it is clear that signature analysis is the most attractive solution to the response evaluation problem.

Figure 9. The 16-bit linear feedback shift register used in signature analysis.
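A software model of the signature register makes the compression concrete. Since Figure 9 is not reproduced in this text, the feedback taps used below (stages 7, 9, 12, and 16) are an assumption based on commonly published signature-analysis registers; any maximal-length tap set illustrates the same principle, and all names here are ours.

```python
def signature(bits, taps=(7, 9, 12, 16)):
    # Model of a 16-bit signature register: each incoming bit is XORed with the
    # tapped stages and shifted in; the final register contents are the signature.
    reg = [0] * 16                       # stages 1..16, cleared before the test
    for b in bits:
        fb = int(b)
        for t in taps:
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]            # new bit enters stage 1, stage 16 drops out
    return int("".join(map(str, reg)), 2)

def escape_probability(n, width=16):
    # Fraction of the 2**n - 1 faulty n-bit streams whose signature equals the
    # fault-free signature (the expression given above).
    return (2 ** (n - width) - 1) / (2 ** n - 1)

good = signature("1100101000111101" * 1000)             # a 16,000-bit response stream
bad  = signature("0" + ("1100101000111101" * 1000)[1:])  # same stream with one bit flipped
assert good != bad       # a single-bit error never aliases in a linear register
print(escape_probability(64))   # ~= 2**-16, i.e., roughly the 0.002 percent cited above
```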


The rapid growth of the complexity and performance of digital circuits presents a testing problem of increasing severity. Although many testing methods have worked well for SSI and MSI circuits, most of them are rapidly becoming obsolete. New techniques are required to cope with the vastly more complicated LSI circuits.

In general, testing techniques fall into the concurrent and explicit categories. In this article, we gave special attention to explicit testing techniques, especially those approaching the problem at the functional level. The explicit testing process can be partitioned into three steps: generating the test, applying the test to the UUT, and evaluating the UUT's responses. The various testing techniques are distinguished by the methods they use to perform these three steps. Each of these techniques has certain strengths and weaknesses.

We have tried to emphasize the range of testing techniques available, and to highlight some of the milestones in the evolution of LSI testing. The details of an individual test method can be found in the sources we have cited.

References

1. M. A. Breuer and A. D. Friedman, Diagnosis and Reliable Design of Digital Systems, Computer Science Press, Washington, DC, 1976.

2. J. P. Hayes and E. J. McCluskey, "Testing Considerations in Microprocessor-Based Design," Computer, Vol. 13, No. 3, Mar. 1980, pp. 17-26.

3. J. Wakerly, Error Detecting Codes, Self-Checking Circuits and Applications, American Elsevier, New York, 1978.

4. D. B. Armstrong, "On Finding a Nearly Minimal Set of Fault Detection Tests for Combinatorial Nets," IEEE Trans. Electronic Computers, Vol. EC-15, No. 2, Feb. 1966, pp. 63-73.

5. J. P. Roth, W. G. Bouricius, and P. R. Schneider, "Programmed Algorithms to Compute Tests to Detect and Distinguish Between Failures in Logic Circuits," IEEE Trans. Electronic Computers, Vol. EC-16, No. 5, Oct. 1967, pp. 567-580.

6. S. B. Akers, "Test Generation Techniques," Computer, Vol. 13, No. 3, Mar. 1980, pp. 9-15.

7. E. I. Muehldorf and A. D. Savkar, "LSI Logic Testing - An Overview," IEEE Trans. Computers, Vol. C-30, No. 1, Jan. 1981, pp. 1-17.

8. O. H. Ibarra and S. K. Sahni, "Polynomially Complete Fault Detection Problems," IEEE Trans. Computers, Vol. C-24, No. 3, Mar. 1975, pp. 242-249.

9. T. Sridhar and J. P. Hayes, "Testing Bit-Sliced Microprocessors," Proc. 9th Int'l Symp. Fault-Tolerant Computing, 1979, pp. 211-218.

10. The Am2900 Family Data Book, Advanced Micro Devices, Inc., 1979.

11. Z. Kohavi, Switching and Finite Automata Theory, McGraw-Hill, New York, 1970.

12. S. M. Thatte, "Test Generation for Microprocessors," PhD thesis, University of Illinois, Urbana, 1979.

13. S. M. Thatte and J. A. Abraham, "Test Generation for Microprocessors," IEEE Trans. Computers, Vol. C-29, No. 6, June 1980, pp. 429-441.

14. M. A. Breuer and A. D. Friedman, "Functional Level Primitives in Test Generation," IEEE Trans. Computers, Vol. C-29, No. 3, Mar. 1980, pp. 223-235.

15. M. S. Abadir and H. K. Reghbati, "Test Generation for LSI: A New Approach," Tech. Report 81-7, Dept. of Computational Science, University of Saskatchewan, Saskatoon, 1981.

16. M. S. Abadir and H. K. Reghbati, "Test Generation for LSI: Basic Operations," Tech. Report 81-8, Dept. of Computational Science, University of Saskatchewan, Saskatoon, 1981.

17. M. S. Abadir and H. K. Reghbati, "Test Generation for LSI: A Case Study," Tech. Report 81-9, Dept. of Computational Science, University of Saskatchewan, Saskatoon, 1981.

18. M. S. Abadir and H. K. Reghbati, "Functional Testing of Semiconductor Random Access Memories," Tech. Report 81-6, Dept. of Computational Science, University of Saskatchewan, Saskatoon, 1981.

19. S. B. Akers, "Binary Decision Diagrams," IEEE Trans. Computers, Vol. C-27, No. 6, June 1978, pp. 509-516.

20. S. B. Akers, "Functional Testing with Binary Decision Diagrams," Proc. 8th Int'l Symp. Fault-Tolerant Computing, June 1978, pp. 82-92.


21. B. A. Zimmer, "Test Techniques for Circuit Boards Containing Large Memories and Microprocessors," Proc. 1976 Semiconductor Test Symp., pp. 16-21.

22. P. Agrawal and V. D. Agrawal, "On Improving the Efficiency of Monte Carlo Test Generation," Proc. 5th Int'l Symp. Fault-Tolerant Computing, June 1975, pp. 205-209.

23. D. Bastin, E. Girard, J. C. Rault, and R. Tulloue, "Probabilistic Test Generation Methods," Proc. 3rd Int'l Symp. Fault-Tolerant Computing, June 1973, p. 171.

24. J. P. Hayes, "Transition Count Testing of Combinational Logic Circuits," IEEE Trans. Computers, Vol. C-25, No. 6, June 1976, pp. 613-620.

25. "Signature Analysis," Hewlett-Packard J., Vol. 28, No. 9, May 1977.

26. R. David, "Feedback Shift Register Testing," Proc. 8th Int'l Symp. Fault-Tolerant Computing, June 1978.

27. H. J. Nadig, "Testing a Microprocessor Product Using Signature Analysis," Proc. 1978 Semiconductor Test Symp., pp. 159-169.

28. J. B. Peatman, Digital Hardware Design, McGraw-Hill, New York, 1980.

Magdy S. Abadir is a research assistant and graduate student working towards the PhD degree in electrical engineering at the University of Southern California. His research interests include functional testing, design for testability, test pattern generation, and design automation. He received the BSc degree in computer science from Alexandria University, Egypt, in 1978 and the MSc in computer science from the University of Saskatchewan in 1981.

Abadir's address is the Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90007.

Hassan K. Reghbati is an assistant professor in the Department of Computing Science at Simon Fraser University, Burnaby, British Columbia. He was an assistant professor at the University of Saskatchewan from 1978 to 1982, where he was granted tenure. From 1970 to 1973 he was a lecturer at Arya-Mehr University of Technology, Tehran, Iran. His research interests include fault-tolerant computing, VLSI systems, design automation, and computer communication. The author or coauthor of over 15 papers, he has published in IEEE Micro, Computer, Infor, and Software - Practice & Experience. One of his papers has been reprinted in the Auerbach Annual 1980: Best Computer Papers. He has served as a referee for many journals and was a member of the program committee of the CIPS '82 National Conference.

Reghbati holds a BSc from Arya-Mehr University of Technology and an MSc in electrical engineering from the University of Toronto, where he is completing requirements for his PhD. He is a member of the IEEE.

His address is the Department of Computing Science, Simon Fraser University, Burnaby, BC V5A 1S6, Canada.
