
SIVA: A System for Coverage-Directed State Space Search

Abstract

We introduce SImulation Verification with Augmentation (SIVA), a tool for coverage-directed state space

search on digital hardware designs. SIVA tightly integrates simulation with symbolic techniques for efficient state

space search. Specifically, the core algorithm uses a combination of ATPG and BDD’s to generate “directed” input

vectors, i.e., inputs which cover behavior not excited by simulation. We also present approaches to automatically

generate “lighthouses” that guide the search towards hard-to-reach coverage goals. Experiments demonstrate that

our approach is capable of achieving significantly greater coverage than either simulation or symbolic techniques

in isolation.

Keywords: functional verification, formal methods, guided search, coverage


Author Information

Malay Ganai, Praveen Yalagandula, Adnan Aziz
Department of Electrical and Computer Engineering, The University of Texas at Austin

Andreas Kuehlmann
Cadence Berkeley Labs

Vigyan Singhal
Tempus Fugit, Inc.

Acknowledgement of Support

This research was supported in part by grants from the National Science Foundation, the Texas Higher Education Coordinating Board, and IBM.



1 Introduction

We address the problem of efficiently searching the state space of synchronous digital hardware designs starting

from a designated initial state.

State space search has many applications including verification of safety properties, analysis of third-party RTL

code to understand its behavior, generating tests to satisfy missing coverage, justifying the reachability of a given

state for a combinational equivalence checker, etc.

Conventionally, state space search has been performed by simulation. Large numbers of input sequences, called tests, are applied to a software model of the design; these tests are generated by random test pattern generators,

or by hand. Simulation is simple, and scales well in the sense that the time taken to simulate a single input is

proportional to the design’s size. However, simulation offers no guarantees of completeness. More significantly,

given the constraints of time and the complexity of modern designs, the set of test sequences applied can cover

only a tiny fraction of a design’s possible scenarios. As a result, there is a growing recognition that simulation is

simply insufficient to achieve acceptable coverage for today’s complex hardware designs.

This state of affairs has led to the proposal of symbolic search strategies, based on the use of Binary Decision

Diagrams (BDD’s) [3] to implicitly represent and manipulate sets of states rather than individual states [19].

Intuitively, these approaches systematically explore all states reachable from the initial state.

However, BDD-based state space search is limited to designs with small or regular state spaces because of a

phenomenon called “state explosion.” Conceptually, this explosion arises because the transition system analyzed

describes the global behavior of the design. In a design comprised of multiple components, the global state space is

the cross product of the individual component state spaces. Even a system containing only small components can

therefore yield a complex global state-space. This poses a serious challenge to BDD-based state space search; in

practice, it is limited to designs containing on the order of a hundred state bits. Beyond this, the BDD's constructed

in the course of the search can grow extremely large, resulting in space-outs.

In reality, it is rarely the case that we need to guarantee that all reachable states have been explored. For

example, a designer is less concerned with proving his design correct than with finding bugs in it as early as

possible. Faced with the twin dilemmas of diminished coverage through simulation and the inability of symbolic

methods to explore large designs completely, it is natural to ask how to couple them to explore as much of the state

space as is computationally feasible. Naturally, the development of the search algorithms needs to be based on the

coverage goal.


1.1 Our approach

Two techniques which have been successfully applied for analyzing large designs are cycle-simulation and

combinational equivalence checking. The best combinational verification tools tightly integrate random simula-

tion, combinational ATPG, and BDD's to achieve robustness [18, 4, 21]. The sharing of information between these subroutines allows the combination to achieve much greater efficiency than any single technique in isolation. For example, combina-

tional ATPG is very efficient at finding an input vector which differentiates two cones of logic; it is less effective

at proving equivalence. Conversely, BDD’s are superior to ATPG for proving equivalence, but less efficient than

ATPG at differentiating logic cones.

Even though we are addressing a problem which is in some respects quite different from combinational equiva-

lence checking (there is no concept of “corresponding nodes” in our domain), it is our premise that “dove-tailing”

between different sequential-search strategies is essential. In the sequel, we will show that combinational ATPG

and BDD’s fit very nicely with simulation-based state space search. Based on this we have put together a tool for

state space search that we call SIVA.

A high-level description of our main procedure is as follows: designs are specified as a netlist of gates and

latches; for simplicity, we assume a single designated initial state. The coverage goals are decided by the user

and are fed to the tool as a list of $\langle node, value \rangle$ tuples, where $node$ corresponds to a signal in the design and $value \in \{0, 1\}$. These tuples are also referred to as targets. The goal of our approach is to generate input vector sequences which justify these targets. An input vector sequence $\sigma$ justifies a target $\langle n, v \rangle$ if the application of $\sigma$ to the design in the initial state leads the design to a state where the node $n$ gets the value $v$.

We start by applying a fixed number of randomly generated input vectors to the initial state; this gives us a set

of states that can be reached in one step from the initial state. If any targets are found to be justified, then we

remove those from the list. Since simulation is not complete, we try to generate input vector sequences for each

target by a solver which combines SAT-based ATPG and BDD building. When the solver shows that no vector exists, we go on to the next target. Limits are set on the number of backtracks in the ATPG routine and on BDD

size; if these are reached, we simply “give up,” i.e., abort the vector generation for the given target, and proceed

to the next one — this keeps the solver robust. After completing analysis from the initial state, the procedure is

called recursively on a state selected from the visited state set.
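To make the control flow concrete, the following Python sketch mirrors the loop just described. It is a minimal sketch under stated assumptions: the design object and its methods (step, random_input, satisfies, symbolic_solve) are hypothetical stand-ins for SIVA's internals, not its actual interfaces.

import random

def coverage_search(design, init_state, targets, n_random, seq_len):
    """Hybrid search: random simulation plus directed vectors per target.
    targets is a set of (node, value) pairs; returns the covered subset."""
    visited = {init_state}
    pending, covered = set(targets), set()
    frontier = [init_state]
    while frontier and pending:
        state = frontier.pop(random.randrange(len(frontier)))
        # Phase 1: cheap random sampling starting from the selected state.
        for _ in range(n_random):
            s = state
            for _ in range(seq_len):
                s = design.step(s, design.random_input())
                if s not in visited:
                    visited.add(s)
                    frontier.append(s)
                hits = {t for t in pending if design.satisfies(s, t)}
                covered |= hits
                pending -= hits
        # Phase 2: "directed" vectors for targets random simulation missed.
        for t in list(pending):
            vec = design.symbolic_solve(state, t)   # ATPG/BDD combination
            if vec is not None:
                s = design.step(state, vec)         # feed the vector back
                visited.add(s)
                frontier.append(s)
                covered.add(t)
                pending.discard(t)
    return covered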

Intuitively, this approach captures the best of both simulation and symbolic search. Simulation is used to quickly

sample the controller behavior; symbolic methods are used to enable the transitions to rare cases, and to guide the

simulation. Our experiments indicate that the routines are effective, corroborating the heuristic arguments made

above.


In the course of this work, we observed that for some of the targets which are distant from the initial state in the state transition graph, there exist several levels of conditionals which need to be satisfied before the target can be satisfied. In SIVA, the user can specify these "hints" in the form of "lighthouses", which are again $\langle node, value \rangle$ tuples and are associated with a target. The tool tries to justify the lighthouses along with the targets; if it is able to justify a lighthouse corresponding to a target, it is assured that the search is progressing in the right direction. Providing lighthouses for hard targets helped our tool attain higher coverage. We have also developed approaches for automating the process of generating high-quality lighthouses for a target, and the results of the experiments conducted are very promising.

1.2 Paper organization

The rest of this paper is structured as follows: We present previous related work in Section 2. In Section 3,

we review germane definitions and salient background material on design representation and BDD’s. In Section 4

we present detailed accounts of our approach along with experimental results. We develop our approach for

automatically generating lighthouses in Section 5. We summarize our contributions, and suggest future work in

Section 6.

2 Previous work in state space search

2.1 BDD approximation

The approach taken by Ravi et al. [23] is to perform symbolic reachability analysis with subsetting. Whenever the BDD for the visited state set grows beyond a threshold limit, a subset of the set is taken in a manner which heuristically preserves a large fraction of the original set while reducing BDD size. We are philosophically opposed to this, as it does not differentiate between states at all. Yuan et al. [28] attempted to overcome this limitation by taking subsets which preserve all the distinct control states. Elsewhere, BDD's have been used for "target enlargement": successive pre-images are computed from the set of target states until the BDD for this set grows large, and the enlarged set is used as the target of simulation [28, 27].

We have experimented with these approaches. Our basic criticism of them is that they do not scale well. Image or pre-image computation is required by both; we found that this in itself tends to blow up, even when dealing with state sets whose BDD's are small. When we tried pulling subsetting into the distributed image computation [11], we routinely found that we got empty images. Furthermore, BDD's are a very poor representation of state sets when there is limited node sharing, as is commonly the case with the approach of Yuan et al. [28]. In view of this,

we elected not to pursue BDD-based FSM analysis.


In our work, we never report false negatives, i.e., we never report that an unreachable coverage goal is reachable. Several

authors have developed procedures which never report false positives, e.g., [22, 14], but may report false negatives.

A simple view of these papers is that they build an over-estimate of the set of reachable states, e.g., by generic

BDD approximation [22], or by aggressive existential quantification [14]. Consequently, if the tool reports that

the coverage goal is unreachable, it really is unreachable. However, such approaches may lead to false negatives;

we are not aware of any good way of resolving these false negatives.

2.2 Test synthesis

Several papers have recently emerged which address the problem of generating test sequences which excite a

large fraction of the control space: [16, 20, 12]. Specifically, these sequences are derived for an abstract control

model, and then "expanded" to the entire design. Clearly, it is nontrivial to perform expansion; [16, 12] gloss over this step. Moundanos et al. [20] use a sequential ATPG tool to generate paths from state to state in the global machine. Our argument against these approaches is that it is not at all clear what they buy; the problem of performing expansion is as hard as that of formally verifying the entire design. It may be possible for the designer to suggest how to perform the expansion, but then the procedure is no longer automatic.

2.3 Coverage estimation

This research is aimed at evaluating the effectiveness of an existing set of tests. The authors of [15, 17] operate

on designs which can be split into control and datapath. An abstract representation of the controller is built, and

it is determined which edges in the STG of the controller are excited by the test suite applied to the design. In

this way, the designer may discover that certain important functions of the controller have not been excited during

simulation. The primary drawback of these approaches is that since the inputs to the controller are abstracted, the

uncovered edges may be unreachable in the complete design. A related paper is that of Devadas et al. [6], wherein

estimation of test suite coverage is derived for high-level designs using a generalization of the D-calculus used for fault simulation. They make the important point that bugs need to be both excited and made visible.

2.4 Sequential ATPG

There are commonalities between our work and sequential ATPG in that we try to justify nets. However, much

of the work in sequential ATPG is orthogonal to our effort. For example, for sequential circuits, the notions of untestability and redundancy do not coincide: some faults that are untestable because they prevent initialization are not redundant and cannot be replaced by a constant [1]. Much effort in sequential ATPG is devoted to dealing

with the lack of a reset signal and the possibilities of X-values [8]. Sequential ATPG approaches based on BDD’s

have been proposed by Cho et al. [5]; these suffer from the same limitations of BDD-based model checking

mentioned previously.

3 Preliminaries

3.1 Hardware models

A netlist is a directed graph, where the nodes correspond to primitive circuit elements, and the edges correspond to wires connecting these elements. Each node is labelled with a distinct Boolean-valued variable. The three primitive circuit elements are primary inputs, latches, and gates. Primary input nodes have no fanins. Latches have a single input. Each latch has a designated initial value. Associated with each gate is a Boolean function defined on its fanins' variables. A subset of the set of nodes is designated as the set of primary outputs.

A Finite State Machine (FSM) is a 6-tuple $(S, s_0, I, O, \delta, \lambda)$, where $S$ is a finite set referred to as the set of states, $s_0 \in S$ is the initial state, $I$ and $O$ are finite sets referred to as the sets of inputs and outputs respectively, $\delta : S \times I \to S$ is the next-state function, and $\lambda : S \times I \to O$ is the output function. An FSM can be represented graphically by a directed graph, where the vertices correspond to states, and the edges are labeled with input-output value pairs.

Given Boolean assignments to the primary input and latch variables in a netlist, one can uniquely compute the values of each node in the netlist by evaluating the functions at gates. In this way, a netlist $N$ on primary inputs $x_1, \ldots, x_m$, outputs $y_1, \ldots, y_n$, and latches $l_1, \ldots, l_k$ bears a natural correspondence to a finite state machine $M_N$ on input space $[\{x_1, \ldots, x_m\} \to \{0, 1\}]$, output space $[\{y_1, \ldots, y_n\} \to \{0, 1\}]$, and state space $[\{l_1, \ldots, l_k\} \to \{0, 1\}]$, with an initial state given by the initial values for the latch variables. (Given two nonempty sets $A$ and $B$, we denote the set of all functions from $A$ to $B$ by $[A \to B]$.) As an example, the FSM in Figure 1(b) corresponds to the netlist in Figure 1(a).

[Figure 1 about here.]
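The netlist-to-FSM correspondence can be made concrete with a small Python sketch that enumerates the induced FSM of a toy netlist; the two-latch example is purely illustrative and is not the netlist of Figure 1.

from itertools import product

def netlist_to_fsm(next_fns, out_fns, n_inputs, init_state):
    """Enumerate the FSM induced by a netlist (Section 3.1); toy scale only.
    next_fns/out_fns map (input_bits, state_bits) -> bit."""
    n_latches = len(next_fns)
    edges = {}
    for state in product((0, 1), repeat=n_latches):
        for inp in product((0, 1), repeat=n_inputs):
            nxt = tuple(f(inp, state) for f in next_fns)
            out = tuple(g(inp, state) for g in out_fns)
            edges[(state, inp)] = (nxt, out)   # edge labeled input/output
    return init_state, edges

# Toy example: L1' = x1 XOR L2, L2' = L1, output w = L1 AND L2.
init, edges = netlist_to_fsm(
    next_fns=[lambda i, s: i[0] ^ s[1], lambda i, s: s[0]],
    out_fns=[lambda i, s: s[0] & s[1]],
    n_inputs=1, init_state=(0, 0))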

3.2 RTL descriptions and coverage goals

The basic unit in the RTL description of a design is a module; this consists of a set of input variables, a set of

output variables, a set of declarations (consisting of registers, combinational variables, and module instantiations),


and the module body. The latter can be viewed as consisting of a series of conditional assignments to combinational

variables.

Later, we will see that it is useful to add Boolean-valued “indicator” variables to the design. We illustrate

the application of indicator variables by an example, as shown in Figure 2. Here we have added two indicator

variables $B_1$ and $B_2$ that indicate whether particular conditions were evaluated to be true. More generally, we can

add indicator variables corresponding to whether a control FSM is in a particular state, or the values of two data

registers are equal. In general, the indicator variables can be used for the usual notions of RTL/FSM coverage

including structural (e.g., line coverage) or functional (e.g., FSM edges) [7].

[Figure 2 about here.]

3.3 Invariant checking

A common state space search problem for hardware designs is to determine if every state reachable from a designated initial state lies within a designer-specified set of "good states" (referred to as the invariant). For example, the designer may specify that two registers must always be mutually exclusive, i.e., can never both be 1 at the same time. Then the set of all states in which at least one of these two registers contains a zero is the corresponding invariant.

Though conceptually simple, invariant checking can be used to verify all "safety" [9] properties when used in conjunction with monitors. Monitors are FSMs which are added to the design; they observe the design and, when the corresponding safety property fails, enter an error state and "raise a flag."

One approach to invariant checking is to search for all states reachable from the initial state, and to check that they all lie in the invariant. This approach suffers from very high computational complexity.
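A minimal Python sketch of this explicit approach makes the cost visible; design.successors, which enumerates all one-step successors of a state, is an assumed interface.

from collections import deque

def check_invariant(design, init_state, invariant, max_states=10**6):
    """Explicit BFS over reachable states, flagging invariant violations.
    Returns a violating state, or None if the invariant holds."""
    seen, queue = {init_state}, deque([init_state])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return s                        # counterexample state
        for t in design.successors(s):      # one step under every input
            if t not in seen:
                if len(seen) >= max_states:
                    raise MemoryError("state explosion")
                seen.add(t)
                queue.append(t)
    return None

# Mutual-exclusion example: registers r0 and r1 are never both 1.
# invariant = lambda s: not (s.r0 == 1 and s.r1 == 1)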

4 SIVA

User-specified inputs to SIVA include the design under consideration and the set of coverage goals. SIVA reads

designs specified in the blif [24] format which is a textual representation of a netlist of gates and latches. The

internal representation of the design is based on the KGB data structures [18]. States are represented using bit

vectors; sets of states are represented using hash tables. The coverage goals are specified as $\langle node, value \rangle$ tuples, as described in Section 1.1.

The motivation for the search strategy used by SIVA can be seen by considering the RTL code fragment shown

in Figure 3. Suppose the target is $\langle B, 1 \rangle$. If a and b are 16-bit inputs, the probability of a single random input enabling the conditional when (PS == `st0) is extremely low, on the order of 1 in $2^{16}$. It is precisely in order


to generate inputs enabling these transitions that we use a symbolic solver. The solver is based on combinational

ATPG and building BDD’s for combinational logic.

[Figure 3 about here.]

When SIVA encounters such a target, it invokes the symbolic solver, which will return an input vector for which (a+b == c). Thus, the search proceeds as shown in Figure 4. At state $S_0$ we perform a fixed number of random simulation steps, followed by calls to the solver to create input vectors that are "directed" towards uncovered behavior; vectors generated by the solver are "fed back," i.e., simulated at state $S_0$.

[Figure 4 about here.]

4.1 Input vector generation

The basic procedure Ver Sim underlying SIVA is presented in Figure 5. It follows the approach described above, with some enhancements. The procedure Random Simulate performs random simulation by evaluating the netlist nodes in topological order starting from the PI's and latches. Both the number of random vectors applied and the length of each random input sequence are user-specified parameters. Simulation is performed word-wise, allowing 32 input vectors to be processed in one pass.

[Figure 5 about here.]
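The word-wise trick packs one vector per bit of a machine word, so each gate evaluation processes 32 vectors at once. A sketch in Python, with an assumed AND/OR/NOT netlist encoding:

import random

MASK = (1 << 32) - 1   # one bit position per concurrent input vector

def simulate_wordwise(gates, pi_words, latch_words):
    """Evaluate a netlist in topological order, 32 vectors per pass.
    gates: list of (node, kind, fanins), assumed topologically sorted;
    values are 32-bit integers, one simulation vector per bit."""
    val = dict(pi_words)
    val.update(latch_words)
    for node, kind, fanins in gates:
        if kind == "AND":
            v = MASK
            for f in fanins:
                v &= val[f]
        elif kind == "OR":
            v = 0
            for f in fanins:
                v |= val[f]
        else:                                 # "NOT"
            v = ~val[fanins[0]] & MASK
        val[node] = v
    return val

pi_words = {"x1": random.getrandbits(32)}     # 32 random values of x1 at once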

Visited states are stored in a hash table; along with the state, we store a pointer to the predecessor state and also

to the input vector which yielded it. In this way, for any state visited, we can trace a path back to the initial state.
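In Python terms, the bookkeeping might look like the following sketch; the encodings are illustrative, with the initial state mapping to the pair (None, None).

def record(visited, state, pred, vec):
    """visited: state -> (predecessor state, input vector that produced it)."""
    if state not in visited:
        visited[state] = (pred, vec)

def trace_to_initial(visited, state):
    """Walk predecessor pointers back to the initial state and return the
    input vector sequence that justifies `state`."""
    seq = []
    while True:
        pred, vec = visited[state]
        if pred is None:                  # reached the initial state
            return list(reversed(seq))
        seq.append(vec)
        state = pred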

After random simulation, we invoke a deterministic procedure, denoted by Symbolic Solve, which attempts to

generate inputs justifying each target in the target list. The outline of Symbolic Solve is described in Figure 6. It is a

combination of SAT-based combinational ATPG [25, 26] and BDD building.

[Figure 6 about here.]

The routine Build Fanin Cone returns faninConeNodeList which is a set of nodes in the fanin cone of the

target. The procedure Build Clauses returns faninConeClauseList which is a list of clauses corresponding to each

gate whose output node is in faninConeNodeList. If Sat Solve comes up with a satisfying assignment, we return

that as the witness. On the other hand, if it exceeds the backtrack limit, we invoke a BDD-based approach. To prevent BDD explosion, we impose a limit on BDD size. If we find that the BDD has a satisfying assignment, we pick and return one minterm. For robustness, we alternate between Sat Solve and Bdd Build, increasing the backtrack limit and BDD size threshold up to maximum values for each. If the solver returns a vector, we perform simulation using the generated witness, update the visited state set, and prune the target list.
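The alternation can be sketched as follows; sat_solve and build_bdd are stand-ins for the real SAT/ATPG and BDD engines, and the particular limit schedule is an assumption.

def symbolic_solve(clauses, target_cone, sat_solve, build_bdd,
                   max_try=4, bt0=100, bdd0=10_000, growth=4):
    """Alternate a backtrack-limited SAT call with a size-limited BDD
    build, escalating both limits up to max_try rounds.
    Returns a vector, None (proven unjustifiable), or "ABORT" (gave up)."""
    bt, sz = bt0, bdd0
    for _ in range(max_try):
        res, vec = sat_solve(clauses, backtrack_limit=bt)
        if res == "SAT":
            return vec
        if res == "UNSAT":
            return None
        # Backtrack limit exceeded: try the BDD route.
        res, bdd = build_bdd(target_cone, size_limit=sz)
        if res == "BUILT":
            return None if bdd.is_zero() else bdd.pick_minterm()
        bt, sz = bt * growth, sz * growth    # both limits exceeded: escalate
    return "ABORT"                           # give up; keep the solver robust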

4.1.1 State selection and pruning

After calling Symbolic Solve on all target nodes, we mark the state as being “done” (this state is not selected

again) and Select State selects a new state. The procedure SIVA is called recursively on the states reached. The

routine Select State selects a state randomly from the visited state set.

The state set can grow to be very large. On inspection, we found that there exist many registers that correspond to the datapath. One way to prune the state set is to hash the states only on a user-specified set of control latches. We implemented this, and found that it reduced the state sets considerably.
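A sketch of the pruning, assuming states are bit tuples and control_idx lists the positions of the user-specified control latches:

def control_key(state_bits, control_idx):
    """Project a full state vector onto the control latches."""
    return tuple(state_bits[i] for i in control_idx)

def add_state(table, state_bits, control_idx):
    """Keep one representative state per control-latch valuation; states
    differing only in datapath latches are treated as equivalent."""
    key = control_key(state_bits, control_idx)
    if key not in table:
        table[key] = state_bits
        return True       # genuinely new control behavior
    return False          # pruned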

4.1.2 Lighthouses

It is often the case that the user may be able to give hints which will help generate inputs leading to targets. For

example, if a target is several levels of conditionals deep, it is necessary to satisfy the preceding conditionals before

the target can be satisfied. Consider the code fragment in Figure 7. Suppose we are only interested in c as a target. By inspection, we can reason that it is necessary for a and b to be 1 before c can be 1; intuitively, these conditions can help guide the search to c.

[Figure 7 about here.]

The lighthouses for a target are also specified as $\langle node, value \rangle$ tuples. The tool tries to justify the lighthouses along with the target in both the simulation and the symbolic steps. If a lighthouse of a particular target is justified at state $s$ with input $i$, then the Select State routine selects the state reached by the system from state $s$ on input $i$ in the next cycle.
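A sketch of the resulting selection policy; how lighthouse hits are recorded is an assumption about one possible implementation.

import random

def select_state(visited, lighthouse_successors):
    """Prefer states reached one cycle after justifying a lighthouse
    (Section 4.1.2); otherwise fall back to uniform random selection
    over the visited state set (Section 4.1.1)."""
    if lighthouse_successors:
        return lighthouse_successors.pop()
    return random.choice(list(visited))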

4.2 Experiments and results

To test our procedure, we built a large design by composing a network interface unit and a microprocessor from

the VIS benchmark suite [2] together with some glue logic. By inspection, we added 30 indicator variables; these corresponded to specific conditionals in the RTL that we wished to cover.

corresponded to specific conditionals in the RTL that we wished to cover.

The resulting design has … latches and is equivalent to … two-input NAND gates. We performed … word-level random simulations at each state. We set a timeout of … seconds for each set of experiments. All our experiments were performed on a Pentium II-266 with 64 MBytes of main memory running Red Hat Linux 4.0.


To identify the benefits of augmenting simulation with a solver, we compare target coverage numbers achieved

by random simulation to those achieved by SIVA.

In the first set of experiments we did not prune states (cf. Section 4.1.1); the results are tabulated in Table 1. For the pure random simulation method, we could apply 2046000 word-level input vectors in the given time limit. We found that only 5 targets were covered by this set. On the other hand, with SIVA we could cover 20 targets in the same amount of time. The calls to the Symbolic Solver took only 1% of the total time.

To experiment with the benefits of state pruning as described in Section 4.1.1, we first identified a set of control latches by inspection. We ran our experiment for the same time limit, hashing states only on the control latches. We found again that only 5 targets could be covered with random simulation. On the other hand, SIVA could cover 29 targets. Loosely speaking, this was because pruning removed large numbers of states that were "equivalent," in the sense that they led to the same behaviors (recall that the state to continue the search from is selected randomly). The time taken by Symbolic Solver was only 0.7% of the total time. The slight decrease in the time taken (as compared to 1% without pruning) was because the solver was called unsuccessfully less often.

Consider the performance results of SIVA tabulated in Table 2. The tool was able to cover only 29 of the 30 targets. We inspected the design to better understand the target that eluded SIVA. We were able to manually identify 4 lighthouses that we believed would direct SIVA to the unreached target. We then ran SIVA with the added lighthouses, as described in Section 4.1.2. Now SIVA could reach all 30 targets. Detailed results are given in Table 3. The Symbolic Solver routine took 4.6% of the total run time. Intuitively, the increase in time taken by Symbolic Solver was because it had to generate tests for lighthouses at each state. In other words, lighthouses should be used sparingly.

We tried to compare SIVA to conventional BDD-based reachability analysis, specifically, the VIS system [10]. However, VIS could not proceed past reading in the design; when we attempted to perform BDD variable ordering, it spaced out. Closer inspection indicates that simply reading the design into VIS results in a 30 Mb process (mainly due to an inefficient network node data structure). In contrast, SIVA uses only 2 Mb to read in the design; the entire run fits in 5 Mb.

Even though the lighthouses played a major role in generating input sequences for hard-to-cover targets, the

main drawback of using lighthouses is that the user has to manually examine the design to find them. This can

be tedious, and takes away from the usefulness of SIVA. In addition, specifying an excessively large number

of lighthouses results in performance degradation, since the tool applies the Symbolic Solve routine to each of

them. In the next section, we present some approaches to automatically generating high-quality lighthouses for

hard-to-cover targets.


[Table 1 about here.]

[Table 2 about here.]

[Table 3 about here.]

5 Automatic Lighthouse generation

Conceptually, we want to automatically generate, in a pre-processing step, conditions on various nets of the design whose satisfaction during the search guarantees "progress" towards the target. Note that there is usually

some ordering present between the conditions. Consequently, in addition to generating lighthouses, we need to

schedule them so that when we satisfy the lighthouses in a given order, we make progress towards the target.

By doing so we avoid the computational cost of trying to satisfy all of them simultaneously. For simplicity, we

generate conditions only on latch outputs, rather than arbitrary nodes in the design.

A naive approach is to use every latch in the transitive fanin of the target as a lighthouse for that target. There

are several problems with this approach:

1. The number of latches present in the transitive fanin cone of the target is very large, making the Symbolic Solve step in SIVA a performance bottleneck.

2. For some targets, it may not be necessary to toggle a fanin latch to cover the target.

Again, since there is usually some ordering present between the conditions needed to cover a target, we need

some way of ordering the latches so that we concentrate on a subset at each symbolic solve step. We propose a

way of finding such a partial ordering of the latches by building a latch graph. Then, during the search, we use only those latches as lighthouses whose predecessors in the partial order are already satisfied.

Let us suppose that to set a latch $l_1$ to value 1, we need another latch $l_2$ to be 1 in the previous cycle. If $(l_1, 1)$ is our target, then $(l_2, 1)$ is a natural lighthouse for the target $(l_1, 1)$. So the first step towards finding the schedule of lighthouses is to identify the edges between the vertices which must be taken to progress from one vertex to the next. Since the conditions which set a latch from 0 to 1 and the conditions which set the latch from 1 to 0 are different, we have two different vertices for each latch in the latch graph.


Formally, the latch graph is defined on the vertex set $\{(l, 0), (l, 1) \mid l \in L\}$, where $L$ is the set of latches in the design. Initially, we include the edge $((l_1, v_1), (l_2, v_2))$ in the latch graph exactly when there is a combinational path from $l_1$ to $l_2$. An edge from $(l_1, v_1)$ to $(l_2, v_2)$ in the latch graph is defined to be a required edge if the toggling of latch $l_2$ from $\overline{v_2}$ to $v_2$ requires $l_1$ to be $v_1$ in the previous cycle. Since at the initial state half of the vertices are satisfied, we concentrate only on the other half of the vertex set. In particular, the initial-valued vertices (vertices of the form $(l, i)$ where $i$ is the initial value of latch $l$) will not have any fanin vertices in the latch graph.

It may be that the target vertex has incoming edges which are not required edges. Suppose we have an edge from $(a, 1)$ to $(b, 1)$ in the graph. Also assume that there exists an assignment to the inputs of the fanin cone at the input of latch $b$ in the design which sets the output to 1 regardless of latch $a$'s value. Heuristically, $(a, 1)$ is a poor choice of lighthouse for $(b, 1)$. We remove such edges in a second step, by performing universal quantification on latches with respect to their fanin latches.

We now present an example of a design with five latches illustrating the above two steps, and justify their order of execution. Consider the RTL code fragment shown in Figure 8(a). Here a, b and c are single-bit latches and count is a 2-bit latch. Assume the target is $(b, 1)$. Taking the initial values of all latches to be 0, the initial latch graph will be as shown in Figure 8(b). (For clarity, we do not show initial-valued vertices in the figure.) Consider the vertex $(c, 1)$. If we perform the universal quantification step on this vertex, both the edges from $(a, 1)$ and $(b, 1)$ are removed, since either latch alone suffices to set c. But we know that to reach the target $(b, 1)$, we need to satisfy $(c, 1)$. So we apply the following rule when finding required edges of a vertex:

Rule 1 (RA) If $(l_1, v_1)$ is required to be satisfied before $(l_2, v_2)$ can be reached, then assume $l_2$ takes the value $\overline{v_2}$ when finding required edges for the vertex $(l_1, v_1)$.

When we find required edges, the edge from $(c, 1)$ to $(b, 1)$ will turn out to be a required edge. So, applying Rule RA, we can set b to 0 when finding the required edges for $(c, 1)$. Now the edge from $(a, 1)$ to $(c, 1)$ becomes a required edge.

[Figure 8 about here.]


We have found that Rule RA is widely applicable in practice, finding many useful lighthouses. With each required edge found, say from $(l_1, v_1)$ to $(l_2, v_2)$, we may find more required edges for vertices in the transitive fanin of $(l_1, v_1)$. So we need to call the required-condition finding algorithms iteratively until no new edges are found.

Another useful observation is that we can prune some edges when we find required edges. Specifically, if we find that an edge from $(a, 1)$ to $(b, 1)$ is a required edge, then we can remove the edge from $(a, 0)$ to $(b, 1)$. Some more edges can also be removed on the basis of information from using Rule RA. For example, for the latch graph in Figure 8(b), we can remove the edge from $(b, 1)$ to $(c, 1)$.

After completing the above two steps, we will have two types of edges left in the latch graph. Some of these edges are the required edges found in the first step; the other edges are those which did not get removed by universal quantification. Because of these edges, the latch graph may not be a directed acyclic graph (DAG).

At this stage, we form the graph of strongly connected components (SCCs) and treat each SCC as an entity to satisfy. Essentially, all vertices in an SCC $C_1$ are treated as lighthouses for vertices in an SCC $C_2$ having $C_1$ as a predecessor in the SCC DAG. (In our experiments we observed that most SCCs consist of a single vertex.) We maintain a frontier to keep track of how close to the target we have moved, and apply the latches before the frontier as lighthouses. Initially, the frontier holds the initial-valued leaves of the DAG rooted at the target. The procedure for constructing the latch graph is described in detail in the next section.
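One way to realize this scheduling is sketched below; the use of networkx for the SCC condensation is an assumption of the sketch, not part of SIVA.

import networkx as nx

def lighthouse_waves(latch_graph, target_vertex):
    """Condense the latch graph into its SCC DAG and return vertex 'waves':
    each wave serves as lighthouses once all earlier waves are satisfied."""
    cond = nx.condensation(latch_graph)       # DAG whose nodes are SCCs
    scc_of = cond.graph["mapping"]            # original vertex -> SCC id
    goal = scc_of[target_vertex]
    relevant = nx.ancestors(cond, goal) | {goal}
    done, waves = set(), []
    while done != relevant:
        # Frontier: unprocessed SCCs all of whose predecessors are done;
        # initially these are the initial-valued leaves below the target.
        frontier = {c for c in relevant - done
                    if all(p in done for p in cond.predecessors(c))}
        waves.append([v for v, c in scc_of.items() if c in frontier])
        done |= frontier
    return waves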

5.1 Implementation

The routine latchGraphConstruct constructs the latch graph from the given design by applying the two steps described in the previous section. Initially, this routine forms the graph $G(V, E)$, where $V = \{(l, 0), (l, 1) \mid l \text{ is a latch in the design}\}$ and the edge set $E = \{((l_1, v_1), (l_2, v_2)) \mid l_1 \text{ is in the transitive fanin of } l_2\}$. The second step of the procedure is to find required edges by the method of constant propagation. The method of constant propagation is incomplete: it does not find all required edges. To find the complete set, we use an approach based on ATPG and simulation. The last step is to perform universal quantification of all nodes with respect to their inputs.

5.1.1 The Method of Constant Propagation

Some required conditions can be found simply by backward propagation of required values at the nodes. For each vertex in the graph, we extract the single-output combinational cone whose output is the input to the latch corresponding to the vertex. The next step is to set the already-known input values which arise by applying Rule RA. We do a forward propagation of known values before the backward propagation of required values. These techniques are widely used in the ATPG field [13] to generate input vectors justifying nodes in a logic network.
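A sketch of the backward step, with an assumed AND/OR/NOT cone encoding; conflicts and the forward pass are omitted for brevity.

def backward_implications(kind, fanins, out_val):
    """Fanin values forced by requiring out_val at a gate's output."""
    if kind == "AND" and out_val == 1:
        return {f: 1 for f in fanins}    # every input of an AND must be 1
    if kind == "OR" and out_val == 0:
        return {f: 0 for f in fanins}    # every input of an OR must be 0
    if kind == "NOT":
        return {fanins[0]: 1 - out_val}
    return {}   # e.g. AND = 0: some input is 0, but which one is not forced

def propagate_required(cone, root, root_val):
    """Backward propagation of required values from a latch input `root`.
    cone maps an internal node to (kind, fanins); leaves are absent."""
    forced, stack = {root: root_val}, [root]
    while stack:
        node = stack.pop()
        if node not in cone:
            continue                     # latch output or primary input
        kind, fanins = cone[node]
        for f, v in backward_implications(kind, fanins, forced[node]).items():
            if f not in forced:
                forced[f] = v
                stack.append(f)
    return forced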

5.1.2 ATPG and Simulation based methods

Constant propagation will not be able to find all required conditions for a vertex. We need to use other methods for finding the remaining required conditions. One approach might be to build BDD's for the cones, and perform cofactoring. However, building BDD's tends to be computationally infeasible. Another approach is to use ATPG techniques to resolve the dependencies by performing two ATPG calls per fanin of a vertex (setting the fanin to 0 for one call and to 1 for the other call). We will now describe a better way of achieving the same result, which removes half of the calls to ATPG, albeit at the cost of one ATPG call per vertex.

For each vertex $v = (l, \alpha)$, we perform ATPG to check whether $f_l$, the function at the input of latch $l$, can be set to the value $\alpha$. We impose the following constraints on the ATPG problem: (a) the required edges found so far are to be satisfied, and (b) the conditions arising from Rule RA. These two conditions are always imposed on all ATPG queries invoked in the algorithms. If $f_l$ is satisfiable, the ATPG tool returns a witness $t$. Each latch corresponding to a fanin vertex $u$ of $v$ will have some value in $t$. We perform another ATPG operation to check if $f_l$ is satisfiable when the value of $u$ is toggled from its value in $t$. If this ATPG operation returns a witness, say $t'$, then we know that $u$ is not a required fanin of $v$.

Still, the number of ATPG operations performed by the algorithm above is quite large: the number of vertices plus the number of edges. We can use the witness $t'$ to eliminate some more ATPG operations as follows: check for fanins whose values in $t$ differ from their values in $t'$; we can safely remove them from the fanin list of $v$. Another improvement comes from the fact that the witness returned by ATPG contains some don't cares, i.e., some of the bits in the witness are X's. We can remove the corresponding fanins from the fanin list of $v$ as well. The ATPG tool we used [25] does not always return all possible don't cares in the witness. We find the remaining don't cares in a witness-augmentation routine (augmentWitness in Figure 9), which is invoked by the algorithm on every witness found.

Another approach to further reducing the number of calls to ATPG stems from the observation that the combinational input cones of the latches are not pairwise disjoint. Since simulation is comparatively fast, we can use the previously found witnesses to check if they satisfy the vertex presently under consideration. The final version of the algorithm with all enhancements is shown in Figure 9(a).

[Figure 9 about here.]
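The flow of Figure 9(a) can be paraphrased in Python; `atpg` and `simulate` are stand-ins for the real engines, and a witness is modeled as a dict from fanin latches to 0/1, with absent keys denoting X's.

def find_required_fanins(atpg, simulate, vertex, fanins, witnesses):
    """One ATPG call per vertex plus at most one per surviving fanin,
    reusing earlier witnesses and pruning fanins that two witnesses
    disagree on. Returns the required fanins, or None if unjustifiable."""
    w = next((t for t in witnesses if simulate(vertex, t)), None)
    if w is None:
        w = atpg(vertex)                         # the per-vertex ATPG call
        if w is None:
            return None
        witnesses.append(w)                      # keep for later vertices
    required = []
    candidates = [f for f in fanins if f in w]   # X's are not required
    while candidates:
        f = candidates.pop()
        w2 = atpg(vertex, {f: 1 - w[f]})         # retry with f's value flipped
        if w2 is None:
            required.append(f)                   # no witness once f is flipped
        else:
            witnesses.append(w2)
            # Fanins whose values differ between w and w2 (or became X)
            # are demonstrably not required; drop them without more ATPG.
            candidates = [g for g in candidates if w2.get(g) == w[g]]
    return required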


5.1.3 Universal Quantification Step

The second step in the construction of the latch graph is to remove some edges by performing universal quantification. We can implement this step using either BDD's or ATPG. As mentioned earlier, the use of BDD's is limited to small networks; for big networks, construction of the BDD for the next-state function of even a single latch can cause memory explosion.

Our approach is to duplicate the given combinational cone and tie together the corresponding inputs of the two copies, except the input corresponding to the variable on which universal quantification has to be done. To check whether $\forall a \,.\, f$ is satisfiable, we set the input $a$ in one copy to 0 and in the other copy to 1. Then we feed this network to the ATPG solver and check for a stuck-at-0 fault.
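As a sketch of this check, with the cone modeled as a Python callable and sat_check standing in for the ATPG engine:

def forall_satisfiable(sat_check, cone_fn, input_names, a):
    """Check whether (forall a . f) is satisfiable: duplicate the cone,
    tie every input except `a`, fix a=0 in one copy and a=1 in the other,
    and ask whether the AND of the two outputs can be set to 1 (the
    stuck-at-0 test at that output)."""
    shared = [i for i in input_names if i != a]
    def miter(assign):                   # assign: values for shared inputs
        return cone_fn({**assign, a: 0}) and cone_fn({**assign, a: 1})
    return sat_check(miter, shared)      # True iff some assignment yields 1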

The algorithm for this step is shown in Figure 9(b). We use some of the simulation techniques used in finding required edges to speed up this step. The witness-augmentation routine invoked from this step acts in a slightly different way compared to the one used in the required-edge finding algorithm. If we find that a latch $b$ is a don't care in the witness of a vertex $(l, v)$, then we remove the edges from $(b, 0)$ to $(l, v)$ and from $(b, 1)$ to $(l, v)$ from the latch graph.

The witness set $T$ collected in the previous step, shown in Figure 9(a), is passed as an argument to this procedure. We found it quite useful in pruning a lot of ATPG calls which would otherwise have been made in Step 8 of the univQuantification routine.

5.2 Experiments with lighthouse generation

First, we give an outline of the data decompressor chip design which we used in our experiments. Decompressor is the entire design. This module has 70 inputs, 10333 latches, and 109666 2-input NAND gate equivalents. TreeDec is a part of Decompressor. TreeCtl is a simple controller inside TreeDec. TreeDec and TreeCtl share some of the inputs supplied by the Decompressor module, e.g., BOB. TreeDec has 49 inputs, 2864 latches and 38161 2-input NAND gate equivalents; TreeCtl has 26 inputs, 75 latches and 1161 2-input NAND gate equivalents.

BOB is an input signal to the TreeCtl module which starts the controller. The operation of the TreeCtl module is shown in Figure 10. The names inside ellipses correspond to latches in the design. Initially all latches are set to 0. A name in an ellipse indicates that the corresponding latch is set on reaching that state; when the controller leaves that state, the latch is reset. OpInd is a 2-bit register and its value is retained through the steps. HCLen, HLit and HDist are 6-bit registers and SymCnt is a 9-bit register.

[Figure 10 about here.]


The targets we have chosen for our experiments on TreeDec and TreeCtl are CLT, WAT, CodGen, GoBsCd and Done. In addition, we chose BOB as a target while experimenting on Decompressor. The toughest target of all is Done, which is located quite deep in the state space. Done becomes 1 only when DnCdGn is 1 and (OpInd=='00'). The first time DnCdGn becomes 1, OpInd increments to '01'. Thus, the controller has to go through the loop three more times before (OpInd=='00') is satisfied.

We experimented with the strategy of picking a newly reached state instead of random selection (cf. Section 4.1.1); we call this version SIVA-dfs. When applied to TreeCtl, all targets were reached even without the specification of lighthouses. However, when applied to TreeDec, we could only reach as far as CLT. This is because of the existence of many other paths from the starting states.

We also experimented with the SIVA-dfs algorithm enhanced with the automatic lighthouse generator (referred to as SIVA-lg). The results of the different steps performed on the latch graph are tabulated in Table 4(a).

[Table 4 about here.]

Constant propagation results are in the rows whose heading is Const Prop. From the results, it is easy to observe that many required edges were found by this method. The rows with the heading rem edges give the number of edges removed by each method. The remaining required edges are found using the ATPG-based techniques.

The results of three different algorithms for finding required edges on the above designs are shown in Table 4(b). The column labeled alg1 corresponds to the straightforward method of performing a single ATPG operation for each vertex and an ATPG operation for each edge. The column labeled alg3 is the final version given in Figure 9(a), while the column labeled alg2 is the same as alg3 but without reusing the witnesses found in previous ATPG operations. The results show that the integration of simulation techniques with ATPG is much more efficient than using ATPG alone. The results also show that the augmentation step used in the final version is successful at eliminating many calls to ATPG.

Results for the universal quantification step are shown in the rows with the heading U.Q. in Table 4(a). Finally, the number of SCCs in the final graph is shown. Only one of the SCCs contained two vertices; all other SCCs consisted of a single vertex.

We ran SIVA-lg on TreeCtl and TreeDec. The tool was able to cover all targets in TreeCtl, and it reached the target GoBsCd in TreeDec. However, it could not reach the target Done. The results of the experiments on TreeCtl and TreeDec are presented in Table 5.

[Table 5 about here.]


6 Summary

In summary, we have developed a stand-alone tool for guided state space search, SIVA, that strives to capture the best of both simulation and symbolic methods. Simulation is used to quickly sample the design behavior; symbolic methods are used to enable the transitions to rare cases and to guide the simulation. The basic components (ATPG, BDD's, simulation, indicator variables) are well known; our more significant contribution is the tight coupling of these approaches. The automatic lighthouse generator was found to be very useful in guiding the tool towards hard targets. Its implementation again illustrates the effectiveness of integrating different techniques.

References

[1] Abramovici, M., M. A. Breuer, and A. D. Friedman: 1990, Digital System Testing and Testable Design. IEEE

Press.

[2] University of California, Berkeley: VIS benchmark suite. www.cad.eecs.berkeley.edu/~vis.

[3] Bryant, R.: 1986, ‘Graph-based Algorithms for Boolean Function Manipulation’. IEEE Transactions on

Computers C-35, 677–691.

[4] Burch, J. and V. Singhal: 1998, ‘Tight Integration of Combinational Verification Methods’. In: Proc. Intl.

Conf. on Computer-Aided Design.

[5] Cho, H., G. Hachtel, E. Macii, M. Poncino, and F. Somenzi: 1993, 'A State Space Decomposition Algorithm for Approximate FSM Traversal Based on Circuit Structural Analysis'. Technical report, ECE/VLSI, Univ. of Colorado at Boulder.

[6] Devadas, S., A. Ghosh, and K. Keutzer: 1996, ‘An Observability-Based Code Coverage Metric for Functional

Simulation’. In: Proc. Intl. Conf. on Computer-Aided Design.

[7] Dill, D. L.: 1998, ‘Embedded Tutorial: What’s between Simulation and Formal Verification?’. In: Proc. of

the Design Automation Conf. San Francisco, CA.

[8] El-Maleh, A., T. Marchok, J. Rajski, and W. Maly: 1997, 'Behavior and Testability Preservation Under the Retiming Transformation'. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 16, 528–543.

[9] Emerson, E. A.: 1990, ‘Temporal and Modal Logic’. In: J. van Leeuwen (ed.): Formal Models and Seman-

tics, Vol. B of Handbook of Theoretical Computer Science. Elsevier Science, pp. 996–1072.


[10] Brayton, R. K., et al.: 1996, 'VIS: A System for Verification and Synthesis'. In: Proc. of the Computer Aided Verification Conf.

[11] Geist, D. and I. Beer: 1994, ‘Efficient Model Checking by Automated Ordering of Transition Relation

Partitions’. In: Computer Aided Verification, Vol. 818 of Lecture Notes in Computer Science. pp. 52–71.

[12] Geist, D., M. Farkas, A. Landver, Y. Lichtenstein, S. Ur, and Y. Wolfsthal: 1996, ‘Coverage Directed Test

Generation Using Formal Verification’. In: Proc. of the Formal Methods in CAD Conf.

[13] Goel, P.: 1981, 'An Implicit Enumeration Algorithm to Generate Tests for Combinational Logic Circuits'. IEEE Transactions on Computers C-30, 215–222.

[14] Govindaraju, S., D. Dill, A. Hu, and M. Horowitz: 1998, ‘Approximate Reachability with BDDs using

Overlapping Projections’. In: Proc. of the Design Automation Conf.

[15] Ho, R. and M. Horowitz: 1996, ‘Validation Coverage Analysis for Complex Digital Designs’. In: Proc. Intl.

Conf. on Computer-Aided Design.

[16] Ho, R. C., C. H. Yang, M. A. Horowitz, and D. L. Dill: 1995, ‘Architectural Validation for Processors’. In:

Proceedings of the International Symposium on Computer Architecture.

[17] Hoskote, Y., D. Moundanos, and J. Abraham: 1995, ‘Automatic Extraction of the Control Flow Machine

and Application to Evaluating Coverage of Verification Vectors’. In: Proc. Intl. Conf. on Computer Design.

Austin, TX.

[18] Kuehlmann, A. and F. Krohm: 1997, ‘Equivalence Checking Using Cuts and Heaps’. In: Proc. of the Design

Automation Conf.

[19] McMillan, K. L.: 1993, Symbolic Model Checking. Kluwer Academic Publishers.

[20] Moundanos, D., J. Abraham, and Y. Hoskote: 1996, ‘A Unified Framework for Design Validation and Man-

ufacturing Test’. In: Proc. Intl. Test Conf.

[21] Mukherjee, R., J. Jain, K. Takayama, M. Fujita, J. A. Abraham, and D. S. Fussell: 1997, 'Efficient Combinational Verification Using Cuts and Overlapping BDDs'. In: Proc. Intl. Workshop on Logic Synthesis.

[22] Ravi, K., K. McMillan, T. Shiple, and F. Somenzi: 1998, ‘Approximation and Decomposition of Binary

Decision Diagrams’. In: Proc. of the Design Automation Conf.


[23] Ravi, K. and F. Somenzi: 1995, ‘High Density Reachability Analysis’. In: Proc. Intl. Conf. on Computer-

Aided Design. Santa Clara, CA.

[24] Sentovich, E. M., K. J. Singh, C. Moon, H. Savoj, R. K. Brayton, and A. L. Sangiovanni-Vincentelli: 1992,

‘Sequential Circuit Design Using Synthesis and Optimization’. In: Proc. Intl. Conf. on Computer Design.

pp. 328–333.

[25] Silva, J. and K. Sakallah: 1996, ‘GRASP–A New Search Algorithm For Satisfiability’. In: Proc. Intl. Conf.

on Computer-Aided Design. Santa Clara, CA.

[26] Stephan, P., R. K. Brayton, and A. L. Sangiovanni-Vincentelli: 1996, 'Combinational Test Generation Using Satisfiability'. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 15, 1167–1176.

[27] Yang, C. H. and D. L. Dill: 1998, ‘Validation with Guided Search of the State Space’. In: Proc. of the Design

Automation Conf.

[28] Yuan, J., J. Shen, J. Abraham, and A. Aziz: 1997, ‘On Combining Formal and Informal Verification’. In:

Proc. of the Computer Aided Verification Conf.


List of Figures

1  A netlist (a) and its corresponding FSM (b).
2  An example of a design instrumented with indicator variables.
3  RTL code fragment.
4  FSM view of directed simulation.
5  Pseudocode for SIVA.
6  Pseudocode for ATPG/BDD-based Symbolic Solver.
7  The need for lighthouses.
8  (a) RTL code fragment (b) Latch graph constructed from the code.
9  Algorithms used in automatic lighthouse generation. (a) ATPG and simulation based required condition finder (b) Algorithm for edge pruning by universal quantification. T is the witness set collected in (a).
10 Flow diagram of TreeCtl module.


[Netlist with primary input x1, gates w1 and w2, and latches L1 and L2; corresponding four-state FSM over the state bits L1L2 (states 00, 01, 10, 11) with input/output-labeled edges.]

Figure 1. A netlist (a) and its corresponding FSM (b).


[Verilog source: module main instantiates two copies, A and B, of module foo; foo declares two indicator registers, B1 and B2, initialized to 0 and set to 1 inside the first and second conditional branch respectively, recording that the branch was taken.]

Figure 2. An example of a design instrumented with indicator variables


if ( a+b == c && PS == `st0 )
begin
    B = 1;      /* indicator variable */
    PS = `st1;
    out = `FLOW;
end

Figure 3. RTL code fragment.


[State transition fragment with states S0 and S1: random-simulation edges (r, r1, r2, ...) fan out from S0, while solver-generated directed edges (d, d1, ...) enable the rare transition to S1.]

Figure 4. FSM view of directed simulation.


/* The routine attempts to identify a set of tests */
/* which justify all the targets in T */
/* D is the design, s0 is the initial state of the design */
/* T is the set of coverage goals (targets) */
/* N is the number of random input sequences to apply */
/* L is the length of each random input sequence */
Boolean function Ver_Sim(D, s0, T, N, L) {
    State s = s0;
    Stateset S = {s0};         /* reached/visited state set */
    do {
        /* Simulate N random vectors at s, */
        /* update the reached state set S, */
        /* remove justified targets from T */
        S := S ∪ Random_Simulate(D, s, T, N, L);
        for each unjustified target (node, value) in T {
            vec = Symbolic_Solve(D, s, target);
            if (vec ≠ ∅) {
                /* Simulate with vector vec and update S */
                S := S ∪ Simulate(D, s, vec);
                /* Remove target from T */
                T := Prune_List(T, target);
            }
        }
        Mark_Done(s);
        s := Select_State(S);  /* Randomly pick new state, continue search */
    } while (s ≠ NULL and T ≠ ∅);
}

Figure 5. Pseudocode for SIVA.


/* Find input vector justifying target */
Input_t function Symbolic_Solve(D, s, target) {
    faninConeNodeList = Build_Fanin_Cone(target);
    faninConeClauseList = Build_Clauses(faninConeNodeList);
    Add_Clause(faninConeClauseList, (target.node == target.value));
    for (i = 0; i < MAX_TRY; i++) {
        backTrackLimit = ...;   /* grows with i, up to a run-time maximum */
        bddSizeLimit = ...;     /* grows with i, up to a run-time maximum */
        (result, vector) = Sat_Solve(faninConeClauseList, backTrackLimit);
        if (result == SATISFIABLE)
            return vector;
        else if (result == NOT_SATISFIABLE)
            return ∅;
        else { /* result == BACKTRACK_EXCEEDED */
            (result, BDD) = Build_BDD(target, bddSizeLimit);
            if (result == BUILT && BDD_Is_Not_Zero(BDD))
                return BDD_Get_Minterm(BDD);
            else if (result == BUILT)   /* BDD is zero */
                return ∅;
            else /* result == LIMIT_EXCEEDED */
                continue;
        }
    } /* end for */
}

Figure 6. Pseudocode for ATPG/BDD-based Symbolic Solver.


register a, b, c;
...
if ( u == 131 ) { b = 1; }
...
if ( u == 49 ) { a = 1; }
...
if ( a && b && u == 201 ) { c = 1; }

Figure 7. The need for lighthouses.


assign a = (input == 134) ? 1 : 0;

always @(posedge clk) begin
    if (a || b) begin
        c = 1;
        count = 2'b00;
    end
    else if (c == 1) begin
        if (count == 2'b11) begin
            c = 0;
            b = 1;
        end
        else
            count++;
    end
end

(b) [Latch graph over the vertices (count[0],1), (count[1],1), (a,1), (c,1), (b,1).]

Figure 8. (a) RTL code fragment (b) Latch graph constructed from the code


reqdCondFinder():
 1:  T ← ∅                               /* witness set */
 2:  for each vertex v do
 3:    if v has no fanins left to check then
 4:      continue
 5:    F ← fanins of v to check
 6:    if any w ∈ T satisfies v then
 7:      t ← w
 8:    else
 9:      t ← ATPG for v
10:      T ← T ∪ {t}
11:    augmentWitness(t, F)               /* find remaining don't cares */
12:    while F ≠ ∅ do
13:      pick a fanin f ∈ F
14:      t' ← ATPG for v with f flipped
15:      if t' exists then
16:        for all g ∈ F do
17:          if g has diff. values in t and t' then
18:            F ← F \ {g}
19:        augmentWitness(t', F)
20:      else
21:        (f, v) is a reqd edge

univQuantification(T):
 1:  for each vertex v do
 2:    if v has no fanins left to check then
 3:      continue
 4:    F ← fanins of v to check
 5:    if any w ∈ T satisfies v then
 6:      t ← w
 7:    else
 8:      t ← ATPG for v
 9:      T ← T ∪ {t}
10:    augmentWitness(t, F)
11:    while F ≠ ∅ do
12:      pick a fanin f ∈ F
13:      t' ← ATPG for ∀f . v
14:      if t' exists then
15:        augmentWitness(t', F)          /* remove edges from (f,0) and (f,1) */

(a) (b)

Figure 9. Algorithms used in automatic lighthouse generation. (a) ATPG and simulation based required condition finder (b) Algorithm for edge pruning by universal quantification. T is the witness set collected in (a).


[Flow diagram of the TreeCtl controller: states include CLT, HCLen, HLit, HDist, WAT, CodGen, GenBsCd, GoBsCd, DBCG, DnCdGn, ClrReg, readBob1–readBob3, and Done, with transitions guarded by conditions such as rstBob==1, CntBc==HCLen, CntBc!=HLit, CntSym==SymCnt, OpInd++, and OpInd==00.]

Figure 10. Flow diagram of TreeCtl module


List of Tables

1  Coverage results for 30 targets.
2  Coverage results for 30 targets with use of control latches to prune visited states (cf. Section 4.1.1).
3  Coverage results for 30 targets with use of control latches to prune visited states (cf. Section 4.1.1) and lighthouses to guide the search (cf. Section 4.1.2).
4  (a) Results of latchGraphConstruct on three designs. U.Q. refers to the universal quantification step. Decomp refers to Decompressor. (b) Results of the experiments with different versions of the required condition finding algorithms explained in Section 5.1.2.
5  The performance of different versions of SIVA on TreeCtl and TreeDec designs. 'yes' denotes that the target is reached.


Table 1. Coverage results for 30 targets.

                          # ATPG calls          # BDD calls
Method     # Random Vec   Tst  no-Tst  Abort    Tst  no-Tst  Abort    Targets Covered   States Rch   Max Mem (Mb)
Pure Sim   2046000        -    -       -        -    -       -        5                 67           3.1
SIVA       2015000        7    705     1        1    0       0        20                24093        5.3


Table 2. Coverage results for 30 targets with use of control latches to prune visited states (cf. Section 4.1.1).

                          # ATPG calls          # BDD calls
Method     # Random Vec   Tst  no-Tst  Abort    Tst  no-Tst  Abort    Targets Covered   States Rch   Max Mem (Mb)
Pure Sim   2480000        -    -       -        -    -       -        5                 2            3.1
SIVA       1271000        9    333     1        1    0       0        29                41           3.6


Table 3. Coverage results for 30 targets with use of control latches to prune visited states (cf. Section 4.1.1) and lighthouses to guide the search (cf. Section 4.1.2).

                          # ATPG calls          # BDD calls
Method     # Random Vec   Tst  no-Tst  Abort    Tst  no-Tst  Abort    Targets Covered   States Rch   Max Mem (Mb)
SIVA       1922000        5    434     5        5    0       0        30                67           4.1


Table 4. (a) Results of latchGraphConstruct on three designs. U.Q. refers to the universal quantification step. Decomp refers to Decompressor. (b) Results of the experiments with different versions of the required condition finding algorithms explained in Section 5.1.2.

(a)
stats                       TreeCtl    TreeDec    Decomp
Initial       vertices      102        962        4424
              edges         1276       37252      163028
Const Prop    reqd edges    192        1995       11539
              rem edges     387        2523       12408
ATPG          reqd edges    33         68         734
              rem edges     71         144        854
U.Q.          rem edges     548        31330      133556
Final graph   vertices      102        962        4424
              edges         270        3255       16210
              num SCCs      101        961        4423

(b)
Design                      alg1       alg2       alg3
TreeCtl    time (s)         1.87       1.05       0.9
           ATPG calls       210        129        102
           ATPG time        1.81       0.9        0.76
           Sim time         0          0.08       0.11
TreeDec    time (s)         4823       248        212
           ATPG calls       4201       1171       958
           ATPG time        4816       217        169
           Sim time         0          24         34
Decomp     time (s)         > 24 hrs   4079       3675
           ATPG calls       n/a        7258       6555
           ATPG time        n/a        3529       2876
           Sim time         0          406        665


Table 5. The performance of different versions of SIVA on TreeCtl and TreeDec designs. 'yes' denotes that the target is reached.

Design               TreeCtl                                  TreeDec
Target      CLT   WAT   CodGen   GoBsCd   Done      CLT   WAT   CodGen   GoBsCd   Done
SIVA        yes   no    no       no       no        yes   no    no       no       no
SIVA-dfs    yes   yes   yes      yes      yes       yes   no    no       no       no
SIVA-lg     yes   yes   yes      yes      yes       yes   yes   yes      yes      no
