Error-oriented architecture testing*

by LARRY KWOK-WOON LAI Carnegie-Mellon University Pittsburgh, Pennsylvania

ARCHITECTURE VALIDATION

Motivation

Architecture validation is becoming more and more important as diverging cost/performance criteria and competition cause the number of models within a computer family to proliferate. Some popular architectures are now being manufactured by many different companies, and the chances of a company inexperienced with the architecture making mistakes are very high. Not only will errors in an implementation cause software incompatibility, but the costs of fixing them are usually prohibitively high once there are a large number of defective machines in the field. Excellent evidence demonstrating the inadequacies of present testing techniques is the implementation errors discovered in the field for many major computer families. This study was initiated in the hope that an error-oriented approach to architecture testing may provide better detection of implementation errors.

In this paper, the term architecture refers to the time-independent functional appearance of a computer system to its users. An implementation of an architecture is an ensemble of hardware/firmware/software that provides all the functions as defined in the architecture.

Architecture validation

Architecture validation is the process of validating that a given machine indeed implements a specified architecture. There are three basic approaches:

1. Verification-prove the correctness of the design of an implementation using formal mathematical techniques.

2. Simulation-based on models of the physical building blocks and a description of the design, simulate the implementation to see if it behaves as expected.

3. Testing-establish a certain level of confidence in an implementation by running test programs on a prototype machine.

* This research was sponsored by the Defense Advanced Research Projects Agency (DoD), ARPA Order No. 3597, and monitored by the Air Force Avionics Laboratory under Contract F33615-78-C-1151. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. government.

The first two approaches are most useful when the implementation is still being designed or when architecture specifications are still being formulated. Once a machine is built, however, the only way to find out whether it actually works is to run programs on it-i.e. through testing.

Before one can set out to validate any implementation, one needs to have a specification of the target architecture. There are two basic ways to specify/describe an architecture: (i) using a formal language, e.g. ISPS,1 VDL/APL;2 and (ii) using a natural language, e.g. Principles of Operation,15 Processor Handbook.5 The latter is often the most important because many architectures do not have a formal specification while a natural language description is almost always available, is more readable, and hence is read by users and implementors alike.** For verification and simulation, a complete and consistent formal description is needed. For testing, a natural language description is usually adequate for the test programmer, although any ambiguities in the description must be resolved before tests can be derived for the parts affected.

Before we move on to architecture testing, we will briefly review the work that has been done in verification and simulation.

Verification

Verification seeks to confirm absolutely, on paper, that a given implementation does meet its specifications. Two implementation-specific ways of architecture verification are microprogram verification and hardware verification.

The microprogram verification approach16,18,4,2 can be summarized as follows: given the formal specifications of the target machine and the description of the underlying microengine, the formal verification system looks at a microprogram written for the microengine and attempts to prove that the microprogram running on the microengine would emulate (i.e. implement) the target machine. So far this approach has only been applied to very simple machines.

** For a detailed exposition on architecture specification, see Reference 17.


Hardware verification24,13 deals with methods to prove the correctness of hardware designs. In this approach, descriptions of low level components and a description of how they are interconnected are taken as input. The goal is to verify that the given interconnection satisfies some higher level specification.

Both microprogram verification and hardware verification can only verify paper designs. Some test programs are still needed to check out the actual hardware. Both approaches also require accurate models of low level components, which are changing rapidly with technology.

Simulation

Simulation has the advantage that it is easier to do than verification. Simulating computer hardware using software, however, usually results in a speed penalty ranging from a thousand to one up to a million to one. Hence it is usually impractical to test a design using simulation beyond running some very short tests that check some internal workings and critical paths. Besides the severe speed penalty, simulation also faces the problem of developing good models for rapidly advancing components and technology.

Testing

Testing has the strong appeal that it deals directly with the physical implementation, which is what one really wants to validate, rather than some abstract description of the implementation. Testing is also much easier to do than verification or simulation. One can start writing a test program with nothing more than a natural language specification, whereas verification and simulation both require modeling, formal description of the architecture and of the particular design, and a software system that can carry out the verification and simulation. These advantages make testing the most practical, though not totally satisfying, way to validate an implementation.

The drawback of testing is that it cannot give complete assurance-in practice it often gives less than satisfactory assurance. The former is a direct result of the affirmative nature of testing and of the complexity of a computer-because exhaustive testing, which is the only way to give complete assurance, is impractical for even a simple computer. The latter, however, is usually caused by the lack of good test methodologies and test programmers. It can be drastically improved if the object to be tested can be analyzed in detail before test programs are written using the analysis as guidelines.

The viability of testing as a validation tool lies in the empirical fact that implementations usually have regular structures and therefore the errors made in designing them are not totally random. To illustrate this, let us consider testing the ADD instruction of a computer. In almost all cases, only a tiny fraction of the 2^n x 2^n (where n is the word length) possibilities would be tested and then one would declare with confidence that the adder works, and indeed it usually does! The reason for this is that the tests usually cover most of the probable errors: experience shows that the chances of having errors that could not be discovered by the tests are pretty small. If errors are truly random, however, and one is asked to test a black box whose internal structure one does not know about, then one cannot hope to achieve any high level of confidence by testing only a tiny fraction of the input possibilities. Needless to say, testing all the possibilities is out of the question-e.g. 2^32 x 2^32 = 2^64 > 10^19. The question then is: amongst a sea of possible errors, how do we pick out the most probable ones and test for them? The error-analysis techniques developed later in this paper attempt to answer this question.
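To make the flag checking in such a test concrete, the following C sketch (an illustration added here, not part of the original paper) is a reference model for a 16-bit two's complement ADD with N, Z, V, and C condition codes; on a prototype, each selected operand pair would be run through the real ADD instruction and the machine's result and flags compared against the model.

```c
/* Reference model for 16-bit ADD condition codes (illustrative sketch).
 * The operand pairs are hand picked; a real test program would compare
 * the prototype's result and flags against ref_add() for each pair. */
#include <stdint.h>
#include <stdio.h>

struct add_result { uint16_t sum; int n, z, v, c; };

static struct add_result ref_add(uint16_t x, uint16_t y)
{
    struct add_result r;
    uint32_t wide = (uint32_t)x + (uint32_t)y;
    r.sum = (uint16_t)wide;
    r.n = (r.sum & 0x8000u) != 0;                    /* result negative        */
    r.z = (r.sum == 0);                              /* result zero            */
    r.c = (wide & 0x10000u) != 0;                    /* carry out of bit 15    */
    r.v = (~(x ^ y) & (x ^ r.sum) & 0x8000u) != 0;   /* signed overflow        */
    return r;
}

int main(void)
{
    /* a tiny fraction of the 2^16 x 2^16 input space */
    static const uint16_t t[][2] = {
        {0x0000, 0x0000},   /* zero result            */
        {0x7FFF, 0x0001},   /* overflow into negative */
        {0x8000, 0x8000},   /* overflow and carry     */
        {0xFFFF, 0x0001},   /* carry with zero result */
    };
    for (unsigned i = 0; i < sizeof t / sizeof t[0]; i++) {
        struct add_result r = ref_add(t[i][0], t[i][1]);
        printf("%04X + %04X = %04X  N=%d Z=%d V=%d C=%d\n",
               t[i][0], t[i][1], r.sum, r.n, r.z, r.v, r.c);
    }
    return 0;
}
```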

ARCHITECTURE TESTING

Architecture testing defined

Architecture testing is functional testing aimed at validating implementations of an architecture. An architecture testing program is designed to be a tool for certifying different machines claimed to implement a specific architecture. "Functional testing" means that the tests primarily aim at finding design and logical errors rather than problems in realization (e.g. repeatability and bit dependencies) or hardware failures. The level of confidence an architecture testing program can provide depends on the probability of having errors undetected by its tests.

Considerations in writing an architecture testing program

The most crucial constraint for an architecture testing program is time. Any testing that can be done within a reasonable amount of time is but a tiny fraction of all possible tests. The critical issue in writing an architecture testing program is therefore how to select the most profitable tests and test data for a given time constraint.

Unlike diagnostics, which often must locate faults rapidly in the field, architecture testing programs can have run times in the order of days. But that is still only a fraction of the test cycles one would like to have. Fortunately the tests in an architecture testing program do not depend on the results of each other and therefore different parts of the program can be run simultaneously on many prototypes in parallel to obtain more test cycles. Program size should not be a constraint since only one test needs to be in core at any one time (one can simply use overlays). Because the total program is likely to be huge, however, a good testing methodology should allow automatic generation of test data and test programs to avoid the tedious task of test programming.

Ideally we would like an architecture testing program to be as implementation-independent as possible. However, as we have pointed out before, the only way to get high confidence in testing "black boxes" is exhaustive testing. Therefore any practical architecture testing program must necessarily make certain assumptions about the implementations it is going to be run on in order to cut down the test space. In other words, an architecture testing program must be written with certain classes of implementations*** in mind. Within the target classes, the program should be non-implementation-specific in that it should be designed to effectively test all implementations within the classes. This is a case of tradeoff between generality and efficiency-one wants to have a program that can effectively validate as many kinds of implementations as possible while there are practical limitations as to how many resources the program can consume.

Recent developments and related work

Two current developments have contributed to the recent interest in architecture testing. One is standardization efforts like the MCF project3 which need independent validations of prototypes submitted by various contract bidders. The second development is the spread of microprocessors and LSI components. Many microprocessor and LSI parts are now manufactured by a half-dozen companies representing almost as many different implementations. The need for assurance of compatibility, together with pin limitation, has generated considerable interest in functional testing.19

Some research in the field of program testing20,21,11 is of considerable interest to architecture testing. The work that is closest to architecture testing is that of compiler validation.12,14 An area of special interest is test data selection techniques7-9-one can represent a function in an architecture by a canonical procedure written in a hardware description language like ISPS and then use the selection techniques to choose test data. Architecture testing and program testing bear many similarities, and research done in one area is likely to benefit the other.

ERRORS IN IMPLEMENTING AN ARCHITECTURE

Why do people make errors? What errors do people and design systems make? If one knows why people make errors, one can try to prevent them in the first place, thereby getting at the root of the problem. If one knows what kind of errors are likely to occur in a particular environment, one can orient one's testing effort accordingly to maximize return on the effort. It is wasteful to test for errors that are almost certain not to occur while more likely errors are not tested for. The kind of errors that people make varies with their task environments. In implementing a computer architecture, the likelihood of different types of errors varies with technology, design tools used, experience and training of the design group, available history of previous implementation errors (designers are usually more aware of them), project management, etc.

We began with the conjecture that although every implementation effort has its own error probability distribution, overall the errors are likely to fall into several general categories. Identifying these categories would give some insight into the nature of implementation errors as well as providing guidance for the writing of architecture testing programs. Instead of grouping known implementation errors into categories, as some people have done with errors in programming,10,7,25 the approach of first conjecturing error categories and then calibrating them was adopted. This approach was chosen because (i) we wanted to explore implementation errors from an implementor's viewpoint, and (ii) very few error histories were publicly available.†

*** An example of an implementation class is the family of 16 bit adders implemented by n bit full adder slices (0<n<16) with carry look ahead. Another example is multiplication implemented by repeated additions, whether it is implemented in hardware, firmware, or software. Multiplication implemented by table lookup is in a different class because it requires a totally different test strategy.

Design of the experiment

There are four phases in our experiment: preliminary study, conjecture, case study and calibration of conjecture. Each phase is explained in detail below.

Preliminary Study-To find out why people make errors and what the sources of errors are, the following studies were conducted:

1. Architecture specifications (mainly those of the PDP-11 §) were studied for error-prone spots.

2. Experienced programmers of the PDP-11 were interviewed. They were asked to recall "errors" they had made in learning to use the architecture. The "errors" include: wrong assumptions, unclear specifications, confusions caused by counter-intuitive features, etc.

3. Existing classifications of errors in programming were used as food for thought.7,10,25

Conjecture-Based on the information gathered in the preliminary study, the author proposed several categories as likely sources of errors.

Case Study-Using the proposed categories as a guide, a "likely error analysis" of the PDP-11 computer architecture was made. The analysis was aimed at revealing the error-prone spots in the written specifications and the architecture itself. The idea is to simulate an actual architecture test programmer using the categories as guidelines for testing. Based on the results of the analysis, specific tests are recommended so that the effectiveness of the methodology can be calibrated later on.

Calibration-The above three phases were completed without knowledge of the implementation errors that have actually occurred. To see how well the methodology had done, the specific tests were compared against lists of real implementation errors which were only made known to the author after the tests had been developed.

† Complete error histories are rarely published. In fact, some manufacturers try their best to conceal errors they have made in the past.
§ The PDP-11 04/05/10/35/40/45 Processor Handbook published in 1975 by Digital Equipment Corporation is used throughout this study. For those who are unfamiliar with the PDP-11, see Appendix A for a brief description of the addressing modes and instructions.


Errors in implementing an architecture

The preliminary study revealed eight likely sources of errors. They are:

1. Incomplete and imprecise specification
2. Interdependent side-effects
3. Asymmetry/nonuniformity
4. Logical complexity
5. Boundary values
6. Counter-intuitive and unusual features
7. Inconsistencies in nomenclature and style
8. Missing functions

Each of these categories is explained in detail in the following subsections. Real-life examples, mostly from the PDP-11, are given whenever appropriate. Since the categories overlap with one another, some examples have been somewhat arbitrarily classified.

Incomplete and Imprecise Specification

Whether the incompleteness in an architecture specification is intentional or accidental, the hardware of any implementation does something for the unspecified operations. Users are often tempted to use those peculiar "features" in their programs. If later models do not have the same "features," there is a software incompatibility problem. An incomplete specification may cause implementors to use an incompatible scheme in implementing the unspecified operations.

A specification which appears precise to its writer may be imprecise or ambiguous for others because of nontrivial implicit assumptions made by the former. Following is an example from the PDP-11 handbook:

• The overflow and carry condition code settings for the subtract (SUB) and compare (CMP) instructions are described in the processor handbook as follows:
V: set if there was arithmetic overflow as a result of the operation, that is if operands were of opposite signs and the sign of the source was the same as the sign of the result; cleared otherwise.
C: cleared if there was a carry from the most significant bit of the result; set otherwise.
One must have a good knowledge of two's complement arithmetic to be able to understand this. The setting of the carry condition code even presumes a particular way of implementing the subtract operation-complement and add. In fact, the borrow generated by a hardware subtractor would be the exact opposite of the carry generated by the presumed method. In any case, the operations should have been precisely defined to avoid any misunderstanding.
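As an aside, the relationship between the two conventions can be checked mechanically. The following C sketch (an illustration, not from the paper) computes dst - src both by the presumed complement-and-add method and by direct unsigned comparison, and verifies that the carry out of bit 15 is always the exact opposite of the borrow a hardware subtractor would generate.

```c
/* Minimal sketch, assuming 16-bit two's-complement operands: the carry out
 * of dst + ~src + 1 is always the complement of the borrow that a direct
 * hardware subtractor would produce for dst - src. */
#include <stdint.h>
#include <assert.h>

int main(void)
{
    uint16_t samples[] = {0x0000, 0x0001, 0x7FFF, 0x8000, 0xFFFF, 0x1234};
    for (unsigned i = 0; i < 6; i++) {
        for (unsigned j = 0; j < 6; j++) {
            uint16_t dst = samples[i], src = samples[j];
            /* complement-and-add: carry out of bit 15 */
            uint32_t sum = (uint32_t)dst + (uint16_t)~src + 1u;
            int carry  = (sum >> 16) & 1;
            /* direct subtraction: borrow needed when dst < src (unsigned) */
            int borrow = dst < src;
            assert(carry == !borrow);   /* the two are always opposite     */
            /* handbook rule: C cleared if there was a carry, set otherwise,
             * which therefore equals the borrow */
            int c_bit = !carry;
            (void)c_bit;
        }
    }
    return 0;
}
```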

Interdependent side-effects

Instructions which have multiple side-effects are error-prone, especially if the outcome of the instruction depends on the order in which the side-effects are carried out. Sometimes ambiguities can occur at the interfaces of architectural features which are individually well-defined. An instruction consisting of multiple operations is inherently ambiguous if the order of the operations is not clearly specified and the effect of the instruction depends on this order. Most often this arises when there are multiple operations on the same register or memory location within an instruction and the order of operations is not explicitly stated in the specification.

Asymmetry/nonuniformity

Asymmetry/nonuniformity often causes additional complexity in programming and in implementation. Asymmetrical/nonuniform side effects, notably condition code settings, are usually counter-intuitive as well.

• The instruction MUL Rn,SRC will cause Rn to contain the low order part of the result if Rn is an odd-numbered register and cause Rn to contain the high order part of the result if Rn is an even-numbered register. This asymmetry would not have occurred if the multiply instruction had been defined such that the low order part of the result, which is what is needed most of the time, is always stored into Rn, and not Rn∨1.

• The automatic sign extension that occurs in moving a byte to a register with the MOVB instruction often catches programmers off guard. One would expect a byte instruction to operate on a byte and yield a one byte result. This nonuniformity is actually caused by the more fundamental nonuniformity* that registers are not byte addressable while all memory locations are.
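For illustration, the sign extension a test program would check for can be modeled as in the following sketch; the probe bytes and the octal output format are our own choices, not the paper's.

```c
/* Small sketch of the expected behaviour of MOVB src,Rn as described above:
 * the byte moved must be sign-extended through bits 8-15 of the register. */
#include <stdint.h>
#include <stdio.h>

static uint16_t ref_movb_to_register(uint8_t byte)
{
    return (uint16_t)(int16_t)(int8_t)byte;   /* sign-extend 8 -> 16 bits */
}

int main(void)
{
    uint8_t probes[] = {0x00, 0x7F, 0x80, 0xFF};   /* straddle the sign boundary */
    for (unsigned i = 0; i < 4; i++)
        printf("MOVB #%03o,Rn -> %06o\n", probes[i], ref_movb_to_register(probes[i]));
    return 0;
}
```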

Logical complexity

Some instructions are error-prone due to their sheer complexity. The human mind does not efficiently handle complexities beyond a certain threshold. Complex interactions that change a lot of processor states (trace traps, interrupts etc.) are conceptually hard to understand as well as difficult to implement, especially if multiple activities can occur at the same time. Extra testing is required to ensure the correctness of complex instructions.

* We are not saying that this nonuniformity was a bad design decision: in fact it was probably a good one. We just want to point out that any nonuniformity is a likely source of errors.


Boundary values

Boundary values for an instruction are input values that are at the boundaries of different decision regions in the input domain of the instruction. This concept is analogous to decision branches inside a program which compare input or computed values against some test values in order to determine the execution paths that the program should follow. In fact, the inclusion of this category is inspired by the frequent occurrence of boundary value errors in programming. The most prevalent kinds of errors in this category are missing boundaries and off-by-one errors. Missing boundaries are situations in which one or more of the boundaries between decision regions are missing. Off-by-one errors are errors in which a boundary is off a distance of one (for some appropriate definition of distance) away from where it should be.
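A contrived sketch of an off-by-one boundary error, and of the two adjacent test points that expose it, is given below; the buggy decision and the names are invented purely for illustration.

```c
/* Illustrative only: a deliberately buggy "N bit" decision whose boundary
 * is off by one, and the two adjacent test points that expose it.  The
 * threshold 0 separates the "negative" and "non-negative" decision regions. */
#include <stdio.h>

static int n_bit_correct(int result) { return result < 0; }
static int n_bit_buggy(int result)   { return result <= 0; }   /* boundary off by one */

int main(void)
{
    int probes[] = {-1, 0};               /* the two sides of the boundary */
    for (int i = 0; i < 2; i++)
        printf("result=%2d  correct N=%d  buggy N=%d\n",
               probes[i], n_bit_correct(probes[i]), n_bit_buggy(probes[i]));
    /* Only the probe exactly on the boundary (0) distinguishes the two. */
    return 0;
}
```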

Counter-intuitive and unusual features

Features that deviate from or behave just opposite to what one would normally expect or find in other architectures are error-prone. They also considerably slow down programmers who have to deal with them. Similarly, inappropriate or non-mnemonic names for instructions invite errors. Without proper explanation and motivation, even a useful feature may create confusion.

• Some condition code settings in the PDP-11 are counter-intuitive. For example, the increment and decrement instructions do not affect the carry.23

Inconsistencies in nomenclature and style

Inconsistencies and exceptions are often introduced due to carelessness or ignorance. It is often penny-wise and pound-foolish to foul up an otherwise uniform and clean style just to squeeze an extra bit of performance out of an architecture.

• In PDP-11 instruction nomenclature, instructions that end with a B are supposed to be byte instructions. The SWAB instruction, despite having a B as the last character of its name, is actually a word instruction-it takes a word operand and generates a word result. It would probably be better to call it SWAP.

• The multiply and divide instructions store the high-order word of their two-word results in Rn and the low order word in Rn∨1, which is just opposite to the practice of storing a high byte at the higher address within a word. The same comment applies to the scheme of storing the high part of a floating number in the lower word.


Missing features

This is not really a source of error, but it is put here as a reminder that a good test program should at least test for the existence (not necessarily the complete correctness) of every feature. It is not uncommon that relatively simple features are left out of an implementation due to the oversight or lack of experience of its designers. To illustrate what I mean by features in an architecture, the major features of two typical instructions are presented below.

ADD:
• Add source operand to destination operand and store result in destination.
• If there is a carry out, set the carry bit, clear it otherwise.
• If the result is zero, set the zero indicator, clear it otherwise.
• If the result is negative, set the negative indicator, clear it otherwise.
• If the addition results in an overflow/underflow, set the corresponding indicator.

HALT:
• If in user mode, causes an "illegal user instruction" trap.
• If in supervisor mode, stop all operations:
  1. All flags are left untouched,
  2. The program counter points to the next instruction following the HALT.

A major feature can often be broken down further into several minor features, depending on the complexity of the instruction. To guard against leaving some major features untouched, a comprehensive checklist for features of each instruction should be used. Each entry on the checklist roughly corresponds to a leaf on the decision tree of the instruction.

More likely than not, most features would have been exercised by tests written for other categories, so it will only be necessary to write special tests for the items that are left untouched by other tests. In order to reduce the test space to a practical size, often only the existence, and not the correct functioning, of features can be established by such tests.
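One way such a checklist might be kept is sketched below; the entries and the covered-elsewhere flag are illustrative only and are not part of the paper's method.

```c
/* Purely illustrative sketch of a feature checklist: one entry per leaf of
 * an instruction's decision tree, with a flag recording whether some other
 * test already exercises it. */
#include <stdio.h>

struct feature {
    const char *instruction;
    const char *feature;        /* one leaf of the decision tree      */
    int covered_elsewhere;      /* already exercised by another test? */
};

static struct feature checklist[] = {
    {"ADD",  "stores sum in destination",           1},
    {"ADD",  "sets C on carry out",                 1},
    {"ADD",  "sets V on signed overflow",           0},
    {"HALT", "traps in user mode",                  0},
    {"HALT", "leaves flags untouched (supervisor)", 0},
};

int main(void)
{
    puts("features still needing a dedicated existence test:");
    for (unsigned i = 0; i < sizeof checklist / sizeof checklist[0]; i++)
        if (!checklist[i].covered_elsewhere)
            printf("  %-5s %s\n", checklist[i].instruction, checklist[i].feature);
    return 0;
}
```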

A CASE STUDY OF THE PDP-11 ARCHITECTURE

The architecture testing philosophy advocated in this paper is rather straightforward: given the amount of resources one is willing to spend in testing, try to minimize the probability that an error is undetected. This implies that one should test for the most likely errors first. In fact, these are often the only errors that one can afford to test for. The crucial question here is: what are the most likely errors?


This section presents a "likely error analysis" of the basic PDP-11 architecture** as specified in the PDP-11 processor handbook.5 The likely errors are identified and classified using the proposed criteria. It must be pointed out here that what "likely errors" are depends on when in the development of an implementation the tests are made. We have been assuming all along that we will test prototypes that have most instructions "working" and that we are looking for the obscure errors. The PDP-11 is selected for case study because (i) it is a major computer family having numerous implementations; (ii) the history of its implementation errors is readily accessible to allow evaluation of the proposed testing strategy; and (iii) the basic architecture is simple enough for a thorough study of this sort.

The analysis is intended to be illustrative rather than complete-someone who is willing to spend the energy needed to analyze an architecture for likely errors can probably turn up more potential bugs. A list of recommended tests is given at the end of each section. As a whole the recommended tests should be viewed as "hole-plugging" tests to be added on top of any testing scheme that covers basic and obvious functions.

Incomplete or imprecise specification

The handling of hardware error conditions is only briefly mentioned in the processor handbook. There is no well-defined priority scheme for handling multiple, simultaneous processor trap conditions. The handbook also does not specify clearly what happens if a trap occurs in the middle of an instruction.

Other problems in the specification:

• The specification of the overflow V bit setting is wrong for the subtract carry (SBC) instruction. It is stated as follows in the manual:
V: set if (dst) was 100000; cleared otherwise
It should instead be
V: set if (dst) was 100000 and C was 1; cleared otherwise

• The specification of the carry condition code settings in the SUB and CMP(B) instructions presumes particular implementation schemes (see Incomplete and Imprecise Specification).

Tests recommended:

• Trap priority-special hardware is probably required to carry out this test.

• Handling of multiple trap conditions caused by a single instruction.

• Test the V bit setting of the SBC and SBCB instructions.

• Test the C bit setting of SUB and CMP(B) instructions.
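The first specification problem above (the SBC V bit) lends itself to a one-case distinguishing test. The sketch below is illustrative only; it contrasts the rule as printed with the corrected rule for the single input that separates them.

```c
/* Illustrative sketch: the handbook's V-bit rule for SBC versus the
 * corrected rule quoted above.  SBC computes dst <- dst - C; "dst" here is
 * the destination value before the operation. */
#include <stdint.h>
#include <stdio.h>

static int v_manual(uint16_t dst)           { return dst == 0100000; }            /* as printed       */
static int v_corrected(uint16_t dst, int c) { return dst == 0100000 && c == 1; }  /* as it should be  */

int main(void)
{
    uint16_t dst = 0100000;           /* the distinguishing destination value */
    for (int c = 0; c <= 1; c++)
        printf("dst=100000 C=%d  manual V=%d  corrected V=%d\n",
               c, v_manual(dst), v_corrected(dst, c));
    /* only the C=0 case separates the two rules */
    return 0;
}
```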

** For the purpose of this study, EIS, FIS, memory management, and floating point instructions are not considered part of the basic PDP-11 architecture.

Interdependent side-effects

In the PDP-11, many instructions having interdependent side-effects are also inherently ambiguous because the architecture specification often does not specify the orders of execution for multiple side-effects.

Multiple, explicit operations on the same register

A double operand instruction is inherently ambiguous if (i) its source addressing mode uses Rn and its destination addressing mode is one of (Rn)+, @(Rn)+, -(Rn), and @-(Rn); or if (ii) its source mode is one of (Rn)+, @(Rn)+, -(Rn), and @-(Rn) and its destination mode uses Rn. For example:

• OPR Rn,(Rn)+-if the second operand is fetched from memory (the first operand needs no fetching) and autoincrement is performed before carrying out the operation, the incremented Rn will be used as the source operand. But if the operands (including register operands) are first stored into temporary registers as they are fetched, the original value of Rn would be used as the source operand. The latter is more intuitive, but the processor handbook makes no statement about this ambiguity.

Test recommended:

• For all double operand instructions, test all the 64 combinations of addressing modes that are ambiguous. Of course, we would need to define how the combinations should behave before we can test them.
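To show why such a definition must come first, the following sketch (our own illustration, using a toy memory) evaluates ADD Rn,(Rn)+ under the two readings described above; a test can only flag a prototype as wrong once one of the two results has been declared the correct one.

```c
/* Illustrative model of the ADD Rn,(Rn)+ ambiguity: is the source register
 * read before or after the destination mode autoincrements it?  A toy
 * 8-word memory (indexed by byte address / 2) keeps the sketch short. */
#include <stdint.h>
#include <stdio.h>

static uint16_t mem[8];

/* reading 1: operands (including the register) captured before autoincrement */
static uint16_t add_capture_first(uint16_t *rn)
{
    uint16_t src  = *rn;              /* original register value as source   */
    uint16_t dest = *rn;              /* destination byte address            */
    *rn += 2;                         /* autoincrement by one 16-bit word    */
    return mem[dest / 2] += src;
}

/* reading 2: autoincrement happens before the register is used as source */
static uint16_t add_increment_first(uint16_t *rn)
{
    uint16_t dest = *rn;
    *rn += 2;                         /* autoincrement first                 */
    return mem[dest / 2] += *rn;      /* incremented value used as source    */
}

int main(void)
{
    uint16_t r;
    r = 6; mem[3] = 100;
    printf("capture first:   result %u\n", add_capture_first(&r));   /* 106 */
    r = 6; mem[3] = 100;
    printf("increment first: result %u\n", add_increment_first(&r)); /* 108 */
    return 0;
}
```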

Multiple, explicit and implicit operations on the same register

PC (register 7) is automatically incremented each time it is used to fetch a word from memory. It is used implicitly in some addressing modes, while SP (register 6) and some memory locations (notably those reserved for trap vectors) are used implicitly by several instructions. Instructions that operate on these registers or memory locations explicitly as well as use them implicitly at the same time deserve special attention.

• A double operand instruction that uses the PC is ambiguous if (i) its source is PC and its destination is one of (PC)+, @(PC)+, -(PC), @(PC), X(R), and @X(R); or if (ii) its source is one of the list just given and the destination is PC.

Tests recommended:

• Test all the 12 ambiguous combinations of PC addressing modes.


Modification and decision on the same operand

If an instruction both modifies its operand(s) and uses it for a decision (branch, setting of condition codes etc.), then the relative order of the modification and the decision becomes critical.

Tests recommended:

• Test JMP (Rn)+ and JSR Rm,(Rn)+ which are both ambiguous.

Asymmetry/nonuniformity

Fundamental asymmetries/nonuniformities

• Registers are not byte addressable while all the memory locations are.

• Highest page of main memory is reserved for the I/O page.

• Some memory locations are special (e.g. processor status word, stack limit etc.).

• Some memory locations are not writable (e.g. some status registers).

• Some instructions implicitly use special memory locations (EMT, TRAP, BPT, and IOT).

Tests recommended:

• Make sure that the I/O page is in the right place.
• Make sure that all the special memory locations are there. For instructions that use special memory locations, make sure that they access the correct special locations when executed.

Other asymmetries/nonuniformities

• Logical instructions COM(B), BIT(B), BIC(B), and XOR have different condition code setting conventions. In COM(B), the V bit is cleared but the C bit is not changed.

• The autoincrement deferred addressing mode @(Rn)+ always increments Rn by 2, even for byte instructions, whereas (Rn)+ increments Rn only by 1 for byte instructions. Similarly, @-(Rn) always decrements Rn by 2, even for byte instructions, whereas -(Rn) decrements Rn only by 1 for byte instructions.

• Automatic sign extension in MOVB -,Rn (see Asymmetry/Nonuniformity).

• MUL & DIV instructions store the low order part of the result into Rn∨1 (see Asymmetry/Nonuniformity).

Tests recommended:

• Test COM(B) for the correct setting of the C bit.


• Test @(Rn)+ and @-(Rn) for incrementing/decrementing Rn by two.

• Test sign extension in MOVB -,Rn.

Logical complexity

There are not many complex instructions or features on the basic PDP-11. The MARK instruction and trace trap are probably the most complex features, and the trap instructions (EMT, TRAP, BPT, IOT, RTI, RTT), SOB, JSR, and RTS instructions deserve some extra testing effort.

Tests recommended:

• Test the previous instructions/features to make sure that the right sequences of operations are performed when they are invoked.

Boundary values

It is straightforward to figure out the boundary values for logical instructions-just test all four combinations for each bit position. It is often the case for arithmetic instructions, however, that there are too many boundary values and subsets must be chosen among them. Without going into detailed arguments, we assert that testing just a few key points on a boundary is almost as good as testing all the points on the boundary. This approach is illustrated below through a "boundary value analysis" of the ADD instruction.

The input domain is partitioned into different decision regions*** for each of the four condition codes N, Z, V, and C (which stand for Negative, Zero, oVerflow, and Carry respectively). For example, one region would consist of all input values that generate results which are less than zero and will therefore set the N bit, while the complement of this region would consist of those input values that generate results which are not less than zero. There are well-defined boundaries between the partitions (see Figure 1). For example, the boundary values lying on the two sides of the longest N bit boundary are given by y+x=-1 and y+x=0 respectively (Figure 2). Note that the partitioning is symmetrical with respect to the line y=x because ADD is a commutative operation and that the partitions for different condition codes often have common boundary values. Besides correct setting of condition codes, one may also want to test for the following: the correct operation of each output bit†, correct generation and propagation of carry from each position . . . Each of these has its own set of boundary values that need to be tested. In general, one first determines the things (actions) that one wants to check out, then partitions the input domain into decision regions for each particular action and finally picks out the boundary values as test data.

*** A region may not appear contiguous on a two-dimensional graph. The input domain actually wraps around. For example, 2^15-1 (077777) and -2^15 (100000) are neighboring points. "Neighboring" means that the distance between the two points is one for some distance measure. In this case, the distance is measured by numerical difference. In other cases, we may want to use the geometric code distance measure in which neighboring points are those points that differ from the original point by one bit-hence there are n neighboring points for each point (where n is the word length).

† In the case of the ADD instruction, adding 00 ... 00 & 11 ... 11, 01 ... 01 & 10 ... 10, 10 ... 10 & 01 ... 01, 11 ... 11 & 00 ... 00, and 11 ... 11 & 11 ... 11 can test each output bit individually.

[Figure 1-Boundary values for condition code settings of the ADD instruction; x is the source operand and y the destination operand. Boundaries: N: y+x=0 & y+x=-1; y+x=2^15-1 & y+x=2^15; y+x=-2^15 & y+x=-(2^15+1). Z: y+x=0 & y+x=-1 & y+x=1. V: y+x=-2^15 & y+x=-(2^15+1); y+x=2^15-1 & y+x=2^15. C: y+x=0 & y+x=-1; y=-1, x>=0 & y=0, x>=0; x=-1, y>=0 & x=0, y>=0. Number of points on the boundaries: 2^18, 3x2^16, 2^17, 2^18.]

[Figure 2-A close look at the N bit boundary: the N bit should be cleared on the side where y+x >= 0 and set on the side where y+x <= -1; the marked points are boundary values.]

Ideally all boundary values of an instruction should be included as test points in testing the instruction. If one cannot afford to test all of them (e.g. in a 32 bit machine), a subset of "key points" could be chosen for testing. The minimum set of test points recommended are those values that are either at the "corners" of boundaries or at extreme points of the input domain. The existence of a boundary can be tested by crossing it from one boundary value to a neighboring value on the other side of the boundary. The process of selecting the key points is illustrated in Figure 3.
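A sketch of how such key points and boundary crossings might be enumerated for the N bit of ADD is shown below; the particular points follow Figures 2 and 3 but are our own illustrative choice, not a prescription from the paper.

```c
/* Illustrative enumeration of key points and boundary crossings for the
 * N bit of a 16-bit ADD: each crossing steps from a point with y+x = 0
 * (N clear) to a neighboring point with y+x = -1 (N set), at the origin
 * and at the corners of the input domain. */
#include <stdint.h>
#include <stdio.h>

static int n_bit(int16_t x, int16_t y)
{
    uint16_t sum = (uint16_t)x + (uint16_t)y;    /* 16-bit wraparound */
    return (sum & 0x8000u) != 0;
}

struct point { int16_t x, y; };

int main(void)
{
    static const struct point crossings[][2] = {
        { {     0,      0}, {     0,     -1} },   /* near the origin         */
        { { 32767, -32767}, { 32767, -32768} },   /* corner at x = 2^15 - 1  */
        { {-32767,  32767}, {-32768,  32767} },   /* corner at y = 2^15 - 1  */
    };
    for (unsigned i = 0; i < sizeof crossings / sizeof crossings[0]; i++) {
        const struct point *p = crossings[i];
        printf("(%6d,%6d) N=%d  ->  (%6d,%6d) N=%d\n",
               p[0].x, p[0].y, n_bit(p[0].x, p[0].y),
               p[1].x, p[1].y, n_bit(p[1].x, p[1].y));
    }
    return 0;
}
```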

Tests recommended:

• Verify the operation (condition code settings, etc.) of each logical and arithmetical instruction for all its boundary values. If this is not practical, pick a subset of "key points." For logical instructions, the simplest test is to assume independence among different bits in a word and apply the input pairs of 000000 & 000000, 000000 & 177777, 177777 & 000000, and 177777 & 177777.§

Counter-intuitive and unusual features

• Complement (COM), a logical instruction, sets the C bit instead of leaving it untouched.

• The increment (INC) and decrement (DEC) instructions do not affect the carry.

Tests Recommended:

• Test the C bit setting of the COM(B) instruction.
• Test that the carry bit is truly unaffected by the INC(B) and DEC(B) instructions.

Inconsistencies in nomenclature and style

• The usage of the "contents of" notation, ( ... ), is inconsistent.

• The notation used to represent a register is also inconsistent.

Test recommended:

• None. In this case none of the inconsistencies is very serious. If one is writing a complete test program, however, then it would be worthwhile to pay special attention to even minor inconsistencies.

§ A better test is to apply some algorithmic patterns (like walking 0's and 1's) to systematically check for cross-coupling between bits.
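The walking patterns mentioned in the footnote can be generated mechanically; the following sketch (illustrative only) prints the sixteen walking-1 and walking-0 words for a 16-bit machine in octal.

```c
/* Small sketch of walking-1 and walking-0 test patterns for a 16-bit word;
 * each pattern pair would be fed to a logical instruction to look for
 * cross-coupling between bits. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    for (int bit = 0; bit < 16; bit++) {
        uint16_t walking_one  = (uint16_t)(1u << bit);    /* single 1 walks */
        uint16_t walking_zero = (uint16_t)~walking_one;   /* single 0 walks */
        printf("bit %2d: walking-1 %06o  walking-0 %06o\n",
               bit, walking_one, walking_zero);
    }
    return 0;
}
```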

[Figure 3-Three examples showing the selection of test data based on boundary crossings, for the N bit, Z bit, and C bit. Legend: o marks the origin (0,0); the other marked points are the boundary values chosen; the line segments illustrate the boundary crossings.]


Missing features

A good test program should try to test for the existence (not necessarily the complete correctness) of as many features as possible.

Tests recommended:

• The existence of all major features should always be checked out.

• As our resources permit, test for the existence of as many minor features as possible.

RESULTS AND CONCLUSIONS

The eight error categories are conjectures established based on their potential of causing implementation errors. To examine how well they capture actual errors, the tests recommended in the previous section were calibrated against two error histories. The first one is a list of incompatibilities among models of the PDP-11 and the second one is a list of errors found in an ISP "implementation." In both cases, only errors in implementing the basic architecture* are considered.

Comparison with published incompatibilities among various models

This error history is published by DEC itself6 and it lists known differences among five implementations of the PDP-11 architecture (see Appendix B). These are usually obscure errors that have slipped through the conventional validation tests. Among the 14 implementation errors, eight would definitely be caught by the recommended tests, three would probably be caught and the remaining three would likely slip through. All, however, fall into five of the eight categories and hence would likely be caught by more detailed tests (which would require a more detailed analysis of the PDP-11 architecture than was done here).

Comparison with errors found in the PDP-11 ISP

The ISP description of a computer architecture can be considered an implementation of that architecture through emulation on the "Register Transfer Machine."1 A PDP-11 ISP description was recently written and debugged.** A rather complete history of the errors that have been discovered in the description has been kept. The comparison (see Appendix C) revealed that eight of the 13 errors would definitely be caught by the recommended tests. Two would probably be caught and the remaining three would likely slip through. All, however, fall into the error categories and would conceivably be detected by more detailed tests.

* Errors in features like user/supervisor/kernel modes, memory management, and floating point instructions are not considered.

** Originally written by Dan Siewiorek at CMU; for details, see Reference 22.

Discussion

The above comparisons suggest the following:

1. An error-oriented architecture testing program written with the proposed categories as primary test targets can augment existing test schemes. In the first case, the recommended tests caught a significant percentage of obscure errors that had slipped through conventional tests. More encouraging is that all the implementation errors in both cases fall into the eight categories. Hence if someone devotes enough time (perhaps months, a reasonable investment for a major computer family) to develop an architecture testing program using the proposed methodology, most of these errors can conceivably be caught by the detailed tests.

2. Exercises like the "likely error analysis" presented can help architects to improve their specifications and reduce implementation errors caused by problems in the specification. Such exercises are also useful in spotting error-prone areas, which often cause difficulty in implementation as well as programming, in the architecture itself.

In retrospect, other than a few categories like boundary values which can conceivably be rigorously defined, most of the proposed categories have rather "soft" criteria and require human judgment in applying them. A lot more work is still needed to make them more specific and more amenable to automation. In addition, difficult areas such as floating point instructions have not been dealt with. We nevertheless hope that the proposed categories can serve as helpful guidelines for those who are going to write architecture testing programs. Analyzing an architecture specification to identify potential errors is a very time-consuming process, however, and the usefulness of any testing methodology ultimately depends on automation. Toward this end, research should continue on analysis techniques and the automatic generation of test data and test programs.

ACKNOWLEDGMENT

The author would like to thank Dan Siewiorek for initiating the study and helping along the way. Thanks are also due to Mario Barbacci, Len Shustek, Bob Sproull, Richard Swan, and the referees for their valuable suggestions.

REFERENCES

1. Barbacci, M. R., G. E. Barnes, R. G. Cattell and D. P. Siewiorek, Symbolic Manipulation of Computer Descriptions: The ISPS Computer Description Language, Technical Report, Dept. of Computer Science, Carnegie-Mellon University, 1978.

2. Birman, A., "On Proving Correctness of Microprograms," IBM J. R. & D., Vol. 18, No. 4, 1974, pp. 250-266.


3. Burr, W. E., A. H. Coleman and W. R. Smith, "Overview of the Military Computer Family Architecture Selection," AFIPS Conference Proceedings, Vol. 46 (National Computer Conference), 1977, pp. 131-137.

4. Carter, W. C., W. H. Joyner and G. B. Leeman, "Automated Experiments in Validating Microprograms," Int'l Symposium on Fault Tolerant Computing: FTC-5, Vol. 5, 1975, p. 247.

5. Digital Equipment Corporation, PDP-11 04/05/10/35/40/45 Processor Handbook, Digital Equipment Corp., Maynard, Mass., 1975.

6. Digital Equipment Corporation, Microcomputer Handbook, Digital Equipment Corp., Maynard, Mass., 1976.

7. DeMillo, R. A., R. J. Lipton and F. G. Sayward, "Hints on Test Data Selection: Help for the Practicing Programmer," Computer, Vol. 11, No. 4, 1978, pp. 34-41.

8. Geller, M., "Test Data as an Aid in Proving Program Correctness," CACM, Vol. 21, No. 5, 1978, pp. 368-375.

9. Goodenough, J. B., and S. L. Gerhart, "Towards a Theory of Test Data Selection," Proc. International Conference on Reliable Software, 1975, pp. 493-510.

10. Hartwick, R. Dean, "Test Planning," AFIPS Conference Proceedings, Vol. 46 (National Computer Conference), 1977, pp. 285-294.

11. Hetzel, W. G. (ed.), Program Test Methods, Prentice Hall, 1973.

12. Hicks, H. T., "The Air Force Cobol Compiler Validation System," Datamation, 1969, pp. 73-81.

13. Hoehne, H., and R. Piloty, "Design Verification at the Register Transfer Language Level," IEEE Trans. on Computers, Vol. 24, No. 9, 1975, pp. 861-867.

14. Hoyt, P. M., "The Navy Fortran Validation System," AFIPS Conference Proceedings, Vol. 46 (National Computer Conference), 1977, pp. 529-537.

15. IBM, IBM System/370 Principles of Operation, IBM Corp., Form GA22-7000-4, 1976.

16. Joyner, W. H., W. C. Carter and G. B. Leeman, "Automated Proofs of Microprogram Correctness," Ninth Annual Workshop on Microprogramming, Vol. 51, No. 55, 1976.

17. Lai, Konrad K. K., Computer Architecture Specification, Technical Report, Dept. of Electrical Engineering, Carnegie-Mellon University, 1978.

18. Leeman, G. B., "Some Problems in Certifying Microprograms," IEEE Trans. on Computers, Vol. C-24, No. 5, 1975, pp. 545-553.

19. McCaskill, Richard, and Wayne E. Sohl, Study of Limitations and Attributes of Microprocessor Testing Techniques, Technical Report, NASA, contract NAS8-31954 (final report), Macrodata Corp. (prepared for George C. Marshall Space Flight Center), 1977.

20. Miller, E. F., "Program Testing: Art Meets Theory," Computer, Vol. 10, No. 7, 1977.

21. Miller, E. F., "Program Testing," Computer, Vol. 11, No. 4, 1978, pp. 10-12.

22. Parker, R. Alan, A Guide to the PDP-11 ISP, Technical Report 5403-232, RAP, NRL Prob 54B02-31, Naval Research Lab, 1977.

23. Russell, R. D., and K. Hall, "The PDP-11: A Case Study of How Not to Design Condition Codes," The Fifth Annual Symposium on Computer Architecture, Vol. 5, 1978, pp. 190-194.

24. Wagner, T. J., Hardware Verification, Technical Report AIM-304, Stanford Artificial Intelligence Lab., 1977.

25. Youngs, E. A., "Human Errors in Programming," Int. J. Man-Machine Studies, Vol. 6, 1974, pp. 361-376.

APPENDIX A

A brief description of PDP-11 addressing modes and instructions

Single operand instructions have the format OPR destination and usually perform d ← op d.

Double operand instructions have the format OPR source, destination and usually perform d ← s op d. Each operand can be accessed using one of eight addressing modes, giving a cross-product of 64 ways of addressing operands in a double operand instruction.

Addressing Modes

Mode  Symbolic  Description
0     R         (R) is operand
1     (R)       (R) is address
2     (R)+      (R) is adr., increment R after fetching
3     @(R)+     (R) is adr. of adr., incr. R after fetching
4     -(R)      decrement R before fetching, (R) is adr.
5     @-(R)     decr. R before fetching, (R) is adr. of adr.
6     X(R)      indexing, (R)+X is adr.
7     @X(R)     (R)+X is adr. of adr.

APPENDIX B

Architectural incompatibilities among five implementations of the PDP-11

The incompatibilities listed below are compiled from a list in the Microcomputer Handbook published by Digital Equipment Corporation in 1976. The list, entitled "LSI-11, PDP-11 Programming/Hardware Difference List," listed the known differences among five implementations of the PDP-11 architecture: LSI-11, PDP-11 05/10, 15/20, 35/40, 45. In compiling the following, differences among features that are not part of the basic PDP-11 architecture are not considered. Next to each of the architectural incompatibilities is listed the error category that the incompatibility belongs to and an indication of whether it would have been caught by the tests that we have recommended.

Incompatibility in                                 Category                  Would It Be Caught With the Tests?
1. OPR R,(R)+ or OPR R,-(R)                        Ambiguity in Sequence     yes
2. OPR R,@(R)+ or OPR R,@-(R)                      ditto                     yes
3. OPR PC,X(R) or OPR PC,@X(R)                     ditto                     yes
4. JMP (R)+, JSR Rm,(Rn)+                          ditto                     yes
5. JMP R, JSR Rm,Rn (both illegal
   instructions) trap differently                  Nonuniformity             yes
6. SWAB does not change V in some models           Missing Functions         yes
7. Bus addresses of the registers are special      Nonuniformity             yes


8. Power fail trap has different priority
   w.r.t. RESET instruction                        Incomplete Spec.          no
9. RTT instruction not implemented                 <deliberate omission-
                                                   doesn't count>
10. RTI behaves differently with the T bit set     Logical complexity        probably
11. Priority between trace trap and interrupt
    is different                                   Incomplete spec.          yes
12. Trace trap will sequence out of WAIT
    instruction on some models                     ditto                     probably
13. Direct access to Program Status Register
    (a special memory location) can change
    the T bit in some models                       Nonuniformity             yes
14. Odd address/non-existent traps using the
    stack pointer                                  Incomplete spec.          no
15. Guaranteed execution of the first
    instruction in an interrupt routine            Incomplete spec.          no
16. Odd address trap not implemented on the
    LSI-11                                         <deliberate omission>
17. Effect of bus errors on PC/register            <part of error handling,
    modification                                   whether this is part of
                                                   the architecture is
                                                   arguable>

APPENDIX C

Errors found in the PDP-11 ISP

The errors to be listed are compiled from a collection of memos documenting the errors that have been found in the PDP-11 ISP after it has been released by its author. The original PDP-11 ISP was written by Dan Siewiorek at Carnegie-Mellon University and has been maintained by Alan Parker of NRL. For the purpose of this paper, we have left out errors concerning user/supervisor/kernel modes, memory management, floating point instructions, and those errors peculiar to ISP as a programming language. Next to each of the errors is listed the category that the error belongs to and an indication of whether it would have been caught by one of the tests that we have recommended.

Error                                              Category                  Would It Be Caught?
1. DEC did not set V bit when destination
   value was 100000.                               boundary value            yes
2. MOVB did not sign extend byte moved to
   register.                                       asymmetry                 yes
3. SBC did not set the V condition code            missing fn.               yes
4. SWAB did not assign the result of the
   SWAB back to memory                             missing fn.               yes
5. Byte instruction using indexed deferred
   addressing mode didn't work correctly.          nonuniformity             no
6. MARK instruction reset the stack pointer
   wrong                                           logical complexity        yes
7. PSW did not have bus address 177776.            nonuniformity or
                                                   missing fn.               yes
8. SOB dropped highest bit of offset               asymmetry                 no
9. ASRB instruction operates on wrong field
   that is off by one bit.                         missing fn.               yes
10. ASHC & ASH: C and V bits are not set
    correctly (spec. not clear).                   incomplete spec.          no
11. MUL stores intermediate result in a
    17 bit register instead of a 16 bit
    register                                       peculiar to ISP           would probably be
                                                                             caught by boundary
                                                                             value tests
12. DIV: when source register addr. is odd,
    garbage is produced.                           incomplete spec./
                                                   inconsistency in style    probably
13. DIV: divide by zero did not set
    condition codes and abort.                     missing fn.               yes
