
IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 17, NO. 6, JUNE 1998

Theory and Algorithms for Face Hypercube Embedding

Evguenii I. Goldberg, Tiziano Villa, Robert K. Brayton, Fellow, IEEE, and Alberto L. Sangiovanni-Vincentelli, Fellow, IEEE

Abstract—We present a new matrix formulation of the face hypercube embedding problem that motivates the design of an efficient search strategy to find an encoding of minimum length that satisfies all faces. Increasing dimensions of the Boolean space are explored; for a given dimension, constraints are satisfied one at a time. The following features help to reduce the nodes of the solution space that must be explored: candidate cubes instead of candidate codes are generated, cubes yielding symmetric solutions are not generated, a smaller sufficient set of solutions (producing basic sections) is explored, necessary conditions help discard unsuitable candidate cubes, and early detection that a partial solution cannot be extended to a global solution prunes infeasible portions of the search tree. We have implemented a prototype package, minimum input satisfaction kernel (MINSK), based on these ideas and run experiments to evaluate it. The experiments show that MINSK is faster and solves more problems than any available algorithm. Moreover, MINSK is a robust algorithm, while most of the proposed alternatives are not. Besides most problems of the complete Microelectronics Center of North Carolina (MCNC) benchmark suite, other solved examples include an important set of decoder programmable logic arrays (PLA's) coming from the design of microprocessor instruction sets.

Index Terms—Basic sections, Boolean symmetries, face hypercube embedding, input constraints, logic encoding.

I. INTRODUCTION

VARIOUS techniques for exact and heuristic encoding based on multivalued minimization of two-level and multilevel logic produce various types of constraints on the codes to be assigned to the symbols. The origin and effects of these constraints on the quality of the minimized result are described in [16]. Among these encoding constraints the most ubiquitous type are face embedding constraints, which are the subject of the present investigation.

Consider a set of symbols and an encoding function that, for a given encoding length, assigns to each symbol a code, i.e., a binary vector of that length. A necessary requirement is that the encoding is injective, i.e., that different symbols are mapped to different binary vectors.

Manuscript received December 18, 1996. This work was supported in part by the CAST Program, DARPA, NSF, SRC, and industrial grants from Bell Northern, Cadence, Digital, Fujitsu, Intel, MICRO, and Motorola. This paper was recommended by Associate Editor F. Somenzi.

E. I. Goldberg is with Cadence Berkeley Laboratories, Berkeley, CA 94704 USA.

T. Villa is with Parades, Rome 00186 Italy.

R. K. Brayton and A. L. Sangiovanni-Vincentelli are with the Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA 94720 USA.

Publisher Item Identifier S 0278-0070(98)05022-2.

Given a set of symbols, a face constraint is a subset of them specifying that the symbols in the subset are to be assigned to one face (or subcube) of a binary cube of dimension equal to the code length, without any other symbol sharing the same face. Face constraints are generated by multiple-valued (input) literals in two-level and multilevel multivalued minimization [16]. As an example, an input constraint involving some of the symbols is satisfied by an encoding in which the codes of the constrained symbols span a face that contains no code of any other symbol; if the spanned face contains a vertex, such as 101, that is not the code of any constrained symbol, that vertex is not and should not be assigned to any other symbol.

Given a set of face constraints, it is always possible to find an encoding that satisfies it, as long as one is free to choose a suitable code length. It is a well-known fact that any set of constraints is satisfied by choosing the 1-hot encoding function (which assigns to each symbol the binary vector, of length equal to the number of symbols, that is 0 everywhere except for a 1 in the position denoting that symbol). It is an important combinatorial optimization problem, sometimes called face hypercube embedding [17], to find the minimum code length and a related encoding such that the set of constraints is satisfied. The decision version of this problem is NP-complete [12].

An exact solution based on a branch-and-bound strategy to search the partially ordered set of faces of hypercubes was described first in [17], but it is not computationally practical. An exact solution by reduction to the problem of satisfaction of encoding dichotomies¹ was proposed in [18]. It uses a reduction by J. Tracey [15] of the exact satisfaction of encoding dichotomies to a unate covering problem. This approach was made more efficient in [12] by improving the step of generating maximal compatibles of encoding dichotomies. Recently the problem of satisfaction of encoding dichotomies has been revisited in [3], adapting the techniques to find primes and solve unate covering with binary decision diagrams that have been so successful in two-level logic minimization [2]. The key aspect of these methods is the generation of all possible "useful" columns carrying a bit of encoding for each symbol, where utility is defined as the (partial) satisfaction of a face constraint. For example, consider a pair of face constraints on five symbols. The encoding columns (01011), (01111), (01110), (11011), (11001), and (01010) are all useful. Indeed, (01011) separates the symbols in one face from the remaining symbols,

¹An encoding dichotomy on I is a bipartition (I1, I2) such that I1 ∪ I2 ⊆ I.



and similarly the other columns separate the symbols of one face from those outside it; and so on. Then a minimum number of encoding columns is chosen so that all faces are satisfied; in this case a valid selection is (01010), (11001), (01110), (01011). The number of "useful" encoding columns may be exponential in the number of symbols, and the final selection step is an instance of the unate covering problem.

From the experimental point of view, none of the previous algorithms has performed up to expectations, being unable to solve exactly various instances of moderate size and practical interest. Moreover, algorithms reducing encoding dichotomies to unate covering have a dismal behavior when the problem instance consists mostly of uniqueness encoding dichotomies (i.e., encoding dichotomies with only one symbol in each block), because they generate most of the possible encoding columns, whose number grows exponentially with the number of symbols.

Heuristic solutions to the face embedding problem have been reported in many papers [11], [4], [13], [5], [18], [14]. A heuristic solution satisfies all face constraints, but does not guarantee that the code length is minimum. A related problem, which is not of interest in this paper, is that of fixing the code length and maximizing a gain function of the constraints that can be satisfied in the given code length. We refer to [16] for background material on the satisfaction of encoding constraints.

In this paper we present a new matrix formulation of the face hypercube embedding problem that inspires the design of an efficient exact search strategy. This algorithm satisfies the constraints one by one by assigning to them intersecting cubes in the encoding Boolean space. The problem of finding a set of cubes with a minimum number of coordinates satisfying a given intersection matrix was first formulated in [19], without any relation to encoding problems; no algorithm to solve the problem was described. The relation between the face embedding problem and the construction of intersecting cubes was employed in a heuristic algorithm described in [13], [5]. The first formulation of a simple criterion of when a set of cubes satisfies a set of constraints was given in [6]. We use some theoretical notions, e.g., basic sections, introduced first in [7], [8]. The following features speed up the search of our algorithm: candidate cubes instead of candidate codes or candidate encoding columns are generated, symmetric cubes are not generated, a smaller sufficient set of solutions (producing basic sections) is explored, necessary conditions help to discard unsuitable candidate cubes, and early detection that a partial solution cannot be extended to a global solution prunes infeasible portions of the search tree. The experiments with a prototype implementation in a package called MINSK show that our algorithm is faster, solves more problems than any available alternative, and is robust. All problems of the Microelectronics Center of North Carolina (MCNC) benchmark suite were solved successfully, except four of them unsolved or untried by any other tool. Other collections of examples were solved or reported for the first time, including an important set of decoder programmable logic arrays (PLA's) coming from the design of microprocessor instruction sets.

In Section II we present a theoretical formulation based on matrix notation. The generation of basic sections is discussed in Section III. How to avoid the generation of symmetrical solutions is explained in Section IV. In Section V we describe a new algorithm to satisfy face constraints, and we show a complete example of the search in Section VI. Experimental results are provided in Section VII. Section VIII concludes the paper with remarks on what has been achieved and future work.

II. MATRIX FORMULATION OF THE FACE EMBEDDING PROBLEM

Given a matrix M, denote by Row(M) its rows and by Col(M) its columns; M_i denotes the i-th row of M and M^j denotes the j-th column of M. The multiplicity of a column c of M, mult(c), is the number of columns in M that are equal to c. We use the term vector to indicate a one-dimensional matrix when there is no need to specify whether it is regarded as a row or a column. Vectors are called binary or two-valued if their entries are "0" or "1," and three-valued if their entries are "0," "1," or "-". A singleton vector has a unique "1."

Given two two-valued vectors x and y of the same length, their disjunction is the vector whose i-th entry is the disjunction of the i-th entries of x and y; a similar definition holds for the conjunction of x and y. A vector x covers a vector y if, whenever the i-th entry of y is 1, the i-th entry of x is 1. A vector x intersects a vector y if, for at least one index i, the i-th entries of both x and y are 1.
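As a concrete illustration of these vector operations, the following C fragment is a minimal sketch (hypothetical helper names, not the MINSK source) that implements disjunction, covering, and intersection for binary vectors stored as arrays of 0/1 integers.

#include <stdio.h>

/* Binary vectors are arrays of 0/1 of a common length n. */

/* Disjunction: z[i] = x[i] OR y[i]. */
void vec_or(const int *x, const int *y, int *z, int n) {
    for (int i = 0; i < n; i++)
        z[i] = x[i] | y[i];
}

/* x covers y: wherever y has a 1, x has a 1. */
int vec_covers(const int *x, const int *y, int n) {
    for (int i = 0; i < n; i++)
        if (y[i] && !x[i])
            return 0;
    return 1;
}

/* x intersects y: some position holds a 1 in both. */
int vec_intersects(const int *x, const int *y, int n) {
    for (int i = 0; i < n; i++)
        if (x[i] && y[i])
            return 1;
    return 0;
}

int main(void) {
    int x[] = {0, 1, 0, 1, 1};
    int y[] = {0, 1, 0, 0, 1};
    printf("covers=%d intersects=%d\n",
           vec_covers(x, y, 5), vec_intersects(x, y, 5));
    return 0;
}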

A. Constraint and Solution Matrices

Given a set of symbols and a set of face constraints on it, the constraint matrix C is a matrix with as many rows as constraints and as many columns as symbols. Entry (i, j) of C is 1 iff the i-th constraint contains the j-th symbol; otherwise it is 0. For don't-care face constraints, the don't-care symbols have a "-" in the corresponding position of the constraint matrix.

Example 2.1:


Consider a set of constraints; the related constraint matrix is the one shown in Example 2.1. In the sequel we will usually refer to a set of face constraints by its constraint matrix, and we will not distinguish the two. Notice that there is no need to add singleton constraints, because we guarantee that different codes are assigned to different symbols, including the symbols whose columns in the constraint matrix are equal.

Given an encoding that satisfies a constraint matrix, the encoding defines a face for each constraint, i.e., the minimum subcube that contains the codes of the symbols in the constraint.

For a given constraint matrix C and integer n, consider a face matrix S with |Row(C)| rows (faces or cubes) and n columns (sections), whose entries may be 0, 1, or "-". Each row may be regarded as a subcube in the n-dimensional Boolean space. If there exists an encoding such that, for each i in Row(C), the i-th row of S is the face that the encoding defines for the i-th constraint of C, then we say that S is a solution face matrix of C, or that S satisfies C, and that the i-th row of S is a solution cube of the i-th constraint.

One verifies that S is a solution face matrix of C by constructing an intersection matrix whose rows are the cubes of S and whose columns are the minterms of the encoding space, where entry (i, m) is 1 iff minterm m is in cube i. Then S satisfies C if, for any column c of C, the intersection matrix contains no less than mult(c) columns equal to c. In other words, we require that each minterm (code of a symbol) belong only to those faces to which it is restricted by the constraints; moreover, if there are equal columns in the constraint matrix, for each of them there must be a different minterm. In this way, there is at least one injective function that associates to each column of C one column of the intersection matrix.
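The check just described can be coded directly. The following C fragment is a minimal sketch (assumed data layout and hypothetical names; don't-care constraint entries are not handled) that enumerates the minterms of the encoding space, builds the intersection-matrix column of each minterm, and looks for a distinct minterm matching each symbol column of C.

#include <stdio.h>

#define MAXC 16   /* max number of constraints (rows of C)  */
#define MAXS 16   /* max number of symbols (columns of C)   */
#define MAXN 10   /* max encoding length                    */

/* A cube is a string over {'0','1','-'} of length n. */
static int minterm_in_cube(const char *cube, int m, int n) {
    for (int k = 0; k < n; k++) {
        int bit = (m >> (n - 1 - k)) & 1;
        if (cube[k] != '-' && cube[k] - '0' != bit) return 0;
    }
    return 1;
}

/*
 * Multiplicity criterion of Section II-A: S (one cube per constraint)
 * satisfies C if a distinct minterm can be reserved for every symbol
 * whose cube-membership column equals that symbol's column of C.
 * Minterms with identical membership columns are interchangeable,
 * so a greedy first-fit reservation is sufficient.
 */
int satisfies(int rows, int syms, int n,
              int C[MAXC][MAXS], char S[MAXC][MAXN + 1]) {
    int used[1 << MAXN] = {0};
    for (int j = 0; j < syms; j++) {
        int found = -1;
        for (int m = 0; m < (1 << n) && found < 0; m++) {
            if (used[m]) continue;
            int ok = 1;
            for (int i = 0; i < rows; i++)
                if (minterm_in_cube(S[i], m, n) != C[i][j]) { ok = 0; break; }
            if (ok) found = m;
        }
        if (found < 0) return 0;   /* no admissible minterm left for symbol j */
        used[found] = 1;           /* reserve it, keeping the mapping injective */
    }
    return 1;
}

int main(void) {
    /* Hypothetical 2-constraint, 3-symbol instance (not from the paper). */
    int C[MAXC][MAXS] = {{1, 1, 0}, {0, 1, 1}};
    char S[MAXC][MAXN + 1] = {"0-", "-1"};
    printf("satisfies = %d\n", satisfies(2, 3, 2, C, S));
    return 0;
}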

Given a matrix S satisfying C, an encoding that satisfies C can be extracted with the following rule: select an injective function, whose existence is guaranteed because S satisfies C, that maps each column of C to an equal column of the intersection matrix; then encode each symbol (i.e., each column of C) with the minterm of the associated column of the intersection matrix. Such an encoding satisfies C because each code lies only in the faces corresponding to the constraints to which the symbol belongs.

Example 2.2: Given the previous constraint matrix, consider a face matrix S that satisfies it, as can be shown by building the corresponding intersection matrix. An encoding that satisfies the constraints can be extracted from S with an injection from the columns of the constraint matrix to equal columns of the intersection matrix. Notice that any permutation of the codes within the following subsets yields another encoding that satisfies the constraints: {0100, 0000}, {0111, 0001, 1100}, {1001, 1111}, and {0110, 1110}, as indicated by the existence of more than one column of the intersection matrix that is equal to a certain column of the constraint matrix.

B. Basic Sections

Given a constraint matrix C and an encoding, the set of minimal cubes such that each of them contains the codes of the symbols in a corresponding constraint of C defines the rows of a face matrix S. If the encoding satisfies C, then S is a solution face matrix of C.

Example 2.3: Consider a constraint matrix and two encodings of its symbols. The first encoding does not satisfy C; the minimal cubes containing the codes assigned by it to the symbols in each constraint of C nevertheless define a set of cubes. The second encoding satisfies C; the minimal cubes containing the codes assigned by it to the symbols in each constraint of C define a face matrix S


which satisfies C, as seen by building the intersection matrix.

The operation of finding the minimal cube that contains the codes of the symbols that appear in a given constraint is captured exactly by the notion of basic section, which we are going to define next. Informally, given a constraint matrix, the columns of a face matrix S are basic sections if there is an encoding (that may or may not satisfy C) such that the rows of S are the minimal cubes containing the codes of the constraints of C.

Consider a vector v (whose elements are "0" or "1") with |Col(C)| entries. We can regard v as an encoding column, i.e., an assignment of "0" or "1" to each symbol. An encoding function defines a set of encoding columns (i.e., the columns of the matrix of the codes), where the entry of an encoding column corresponding to a given symbol is "1" ("0") if and only if the corresponding coordinate of the code of that symbol is "1" ("0").

Let us compare the set of columns that have a "1" in C_i (the i-th row of C) with the set of columns that have a "1" in v. There are the following cases.

1) v covers C_i, i.e., all columns that have a "1" in C_i have a "1" in v. In other words, all the symbols in the i-th constraint are set to bit "1" in the encoding column. Say that the comparison returns a "1."

2) v does not intersect C_i, i.e., no column that has a "1" in C_i has a "1" in v. In other words, all the symbols in the i-th constraint are set to bit "0" in the encoding column. Say that the comparison returns a "0."

3) v intersects C_i but does not cover it, i.e., a proper subset of the columns that have a "1" in C_i have a "1" in v. In other words, some symbols in the constraint are set to "1" and others to "0" in the encoding column. Say that the comparison returns a "-".

So, given a v, let us denote by bs(v) a column vector with |Row(C)| entries of value "1," "0," or "-", where the i-th entry is "1," "0," or "-" according to whether the previous comparison of v and C_i returns a "1," "0," or "-". When convenient, we may represent bs(v) in positional notation, i.e., as a matrix with two columns, obtained by representing "1" as "01," "0" as "10," and "-" as "11."

Definition 2.1: A three-valued column s is called a basic section for C if there is a vector v such that bs(v) = s.

Example 2.4: In Example 2.3, the encoding columns of the first encoding (the one that does not satisfy C) yield a set of basic sections. The rows of the matrix of these basic sections are the minimal cubes spanned by the codes of the symbols in the constraints of C.

Thus, given a constraint matrix C and an encoding with a given set of encoding columns, the basic sections of those columns define the set of minimal cubes such that each of them contains the codes of the symbols in a corresponding constraint of C (even if the encoding does not satisfy C). Moreover, if the encoding satisfies C, the basic sections define a face matrix that satisfies C.

Theorem 2.1: Given C and an encoding column v, the positional representation of bs(v) can be obtained by ORing the columns of C as follows: the first (respectively, second) column of the positional representation is obtained by ORing all columns C^j such that the j-th entry of v is "0" (respectively, "1").

Proof: Consider row i and partition the entries of C_i into those in columns where v has a "1" and those in columns where v has a "0." Then, by ORing the entries of C_i in the columns where v is "0" (respectively, "1"), one gets a 1 in the first (respectively, second) position if and only if (iff) the comparison of v and C_i returns "0" or "-" (respectively, "1" or "-").

An equivalent definition of basic section follows from Theorem 2.1: any matrix of two columns and |Row(C)| rows, whose first column is obtained by ORing a subset of the columns of C and whose second column is obtained by ORing the remaining columns of C, is (the positional representation of) a basic section.
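The bs operation can also be computed directly from the covering/intersection cases above; the following C fragment is a minimal sketch (hypothetical names and a made-up instance, not from the paper) that does so. ORing columns of C as in Theorem 2.1 yields the same result in positional notation.

#include <stdio.h>

#define MAXC 16
#define MAXS 16

/*
 * Compute bs(v) for an encoding column v over the symbols:
 * for each constraint row C_i,
 *   '1' if v covers C_i, '0' if v does not intersect C_i, '-' otherwise.
 */
void basic_section(int rows, int syms,
                   int C[MAXC][MAXS], const int *v, char *bs) {
    for (int i = 0; i < rows; i++) {
        int covers = 1, intersects = 0;
        for (int j = 0; j < syms; j++) {
            if (C[i][j]) {
                if (v[j]) intersects = 1;
                else      covers = 0;
            }
        }
        bs[i] = covers ? '1' : (intersects ? '-' : '0');
    }
    bs[rows] = '\0';
}

int main(void) {
    /* Hypothetical constraint matrix and encoding column (not from the paper). */
    int C[MAXC][MAXS] = {{1, 1, 0, 0}, {0, 1, 1, 0}, {0, 0, 1, 1}};
    int v[] = {1, 1, 0, 0};
    char bs[MAXC + 1];
    basic_section(3, 4, C, v, bs);
    printf("bs(v) = %s\n", bs);   /* prints "1-0" for this instance */
    return 0;
}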

Example 2.5: Consider the first column of the encoding exhibited in Example 2.2 and apply the bs operation to it. If we repeat the same operation for the other columns of the encoding of Example 2.2, the matrix whose columns are the resulting three-valued vectors is exactly the face matrix of Example 2.2, i.e., the matrix whose rows are the faces spanned by the codes of the given encoding.

C. Sufficiency of Basic Sections

Basic sections are candidate columns to construct matrices that are solutions of a given constraint matrix. They are an appealing notion because a set of basic sections may represent "implicitly" more than one encoding. As seen in Example 2.2, there are many encodings that generate the same set of faces and differ only in permutations of codes within a face, which are inconsequential for the satisfaction of the face constraints. Contrary to the case of handling encoding columns directly, by manipulating basic sections one is likely to explore a smaller set of combinatorial objects to build an optimal solution.


Example 2.6: Consider the first column of the encoding exhibited in Example 2.2, except for the exchange of the codes 0110 and 1110. Then we obtain the same basic section, showing that different encoding columns may map into the same basic section.

It is worthwhile to clarify that a matrix that satisfies a constraint matrix does not necessarily consist (only) of basic sections. A trivial case comes from "redundant" solutions, obtained by adding to a solution matrix an arbitrary column (so not necessarily a basic section). A more interesting case comes from a solution matrix whose faces are not the minimal subcubes yielded by a corresponding encoding. This latter case arises when a face matrix S satisfies C and there is an encoding extracted from S such that the face matrix specified by that encoding is not equal to S (more precisely, some cubes of S contain the corresponding minimal cubes). We will argue that we can avoid the consideration of such matrices and still guarantee that, for any encoding satisfying C, there is a face matrix from which that encoding can be extracted and that satisfies C.

Example 2.7: We noticed already that the face matrix built in Example 2.3 is made of basic sections. Now suppose we enlarge one of its faces, replacing a cube by a larger cube that contains it; the modified matrix also satisfies C, as seen by building the correspondingly modified intersection matrix. Notice that the first section of the modified matrix is not basic.

The following theorem states that it is sufficient to consider basic sections to find a minimum solution to face hypercube embedding.

Theorem 2.2: Given a solution face matrix S of the constraint matrix C, there is always a solution face matrix of C with the same number of columns that consists only of basic sections.

Proof: Suppose that S has n columns. For the given solution face matrix S there is at least one encoding that satisfies C. This encoding defines n encoding dichotomies, each of which is a coordinate of the codes it assigns. Now, by applying to each such encoding column the operation bs, we obtain n basic sections. The matrix whose columns are these basic sections is a solution face matrix of C, because by definition of the bs operation its rows are exactly the minimal faces spanned by the codes of the symbols in each constraint of C.

Example 2.8: Continuing Example 2.7, suppose that we are given the modified face matrix, whose first column is not a basic section, and that we want to produce the matrix of Theorem 2.2. We take the encoding dichotomies defined by an encoding extracted from the modified matrix and apply the bs operation to them; we get basic sections which are exactly the columns of the original matrix.

In Appendix A, the relation of basic sections with prime encoding dichotomies is mentioned, for those interested in a comparison with the methods based on the generation of prime encoding dichotomies [18], [12].

III. GENERATION OF BASIC SECTIONS

Given a constraint matrix C, consider a three-valued column s with |Row(C)| components (we write s_i for the i-th entry of s). Denote by R1(s) the disjunction of the rows C_i such that s_i = "1," and by R0(s) the disjunction of the rows C_i such that s_i = "0." Both are binary vectors with |Col(C)| entries.

We want to characterize when s is a basic section of C. For s to be such, there must be an encoding vector v for which bs(v) = s. As we will see in Theorem 3.1, a minimum requirement is that R1(s) does not intersect R0(s); this takes care of the entries of s that are "0" or "1."

The entries that are "-" require the introduction of a slightly more complex condition, based on the following notions. Consider the set D(s) of Boolean vectors with |Col(C)| entries such that a vector d belongs to D(s) iff, whenever the j-th entry of d is "1," the j-th entries of both R1(s) and R0(s) are "0." In other words, d may have 1s only where both R1(s) and R0(s) have "0's." So, given R1(s) and R0(s), the cardinality of D(s) is 2 to the power of the number of positions where both R1(s) and R0(s) have "0's."

Example 3.1: Consider a candidate three-valued section for the constraint matrix of Example 2.1.


Then R1(s), R0(s), and the vectors of D(s) can be computed, where a "-" in the displayed vectors stands for entries that may be either "0" or "1." The set D(s) contains 32 vectors (one for each combination of "0's" and "1's" in the five positions indicated by a "-"), of which four are reported.

A vector x intersects correctly a set of vectors R if x intersects every vector of R but does not cover any vector of R.

Theorem 3.1: s is a basic section for C if and only if the following conditions hold:

1) R1(s) does not intersect R0(s);

2) there is a vector d in D(s) such that R1(s) ∨ d intersects correctly the rows C_i for which s_i = "-".

Proof: If part: Suppose that the two conditions are true. Then define v = R1(s) ∨ d. We show that bs(v) = s. Indeed, v covers the rows C_i with s_i = "1" (they are covered by R1(s)) and does not intersect any row C_i with s_i = "0," by the first condition and by the definition of D(s). Moreover, v intersects correctly the rows C_i with s_i = "-," by the second condition. So the operation bs, when applied to v, constructs exactly the three-valued vector s.

Only if part: We prove by contradiction that if either condition is false then s cannot be basic.

Suppose that the first condition is not true, i.e., that R1(s) intersects R0(s). Then there are two rows C_i and C_k, with s_i = "1" and s_k = "0," that both have a "1" in some column j. So there is no v such that bs(v) = s, because such a v should cover C_i without intersecting C_k in order to define correctly s_i and s_k, but this cannot happen: any vector that covers C_i must intersect C_k in column j. Therefore s cannot be basic.

Suppose that the first condition is true, but the second one is false, i.e., that there is no vector d in D(s) such that R1(s) ∨ d intersects correctly the rows C_i with s_i = "-." So there is no v such that bs(v) = s, because such a v should intersect, without covering, every row C_i with s_i = "-" and should also cover R1(s) without intersecting R0(s); but then the vector d obtained from v by removing the positions of R1(s) would belong to D(s), and R1(s) ∨ d = v would intersect correctly the rows with s_i = "-," against the hypothesis. Therefore s cannot be basic.

Example 3.2: Continuing Example 3.1, some of the vectors R1(s) ∨ d, for d in D(s), intersect correctly the rows with a "-" entry, while others do not. So the candidate section is a basic section, and more than one vector generates it (and there may be other vectors v that generate it as well).

Fig. 1. Algorithm to generate basic sections.

Given a candidate basic section s, it is easy to check the first condition of Theorem 3.1, while to test the second condition in the worst case one must try all 2^k vectors of D(s), where k is the number of positions where R1(s) and R0(s) are both "0." Instead of checking the second condition exhaustively, we can use the following simpler criterion to detect candidate sections that are not basic.

Theorem 3.2: If s_i = "-" and the row C_i is covered by either R1(s) or R0(s), then s is not basic.

Proof: In one case, suppose that C_i is covered by R0(s); then no vector v intersects C_i without intersecting R0(s), so for some row index k with s_k = "0," bs(v)_k cannot be "0." In the other case, suppose that C_i is covered by R1(s); then no vector v intersects C_i without covering it, because v must cover R1(s) in order that bs(v) be "1" where s is "1"; therefore bs(v)_i cannot be "-".
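The first condition of Theorem 3.1 and the criterion of Theorem 3.2 are cheap necessary checks that can be used to filter candidate sections. The following C fragment is a minimal sketch (hypothetical names and a made-up demo instance); it does not attempt the exhaustive second condition of Theorem 3.1, so a return value of 1 only means that the section has not been ruled out.

#include <stdio.h>

#define MAXC 16
#define MAXS 16

/* OR together the rows C_i whose value in the candidate section s equals val. */
static void or_rows(int rows, int syms, int C[MAXC][MAXS],
                    const char *s, char val, int *acc) {
    for (int j = 0; j < syms; j++) acc[j] = 0;
    for (int i = 0; i < rows; i++)
        if (s[i] == val)
            for (int j = 0; j < syms; j++)
                acc[j] |= C[i][j];
}

/*
 * Necessary checks for a candidate section s (entries '0', '1', '-'):
 *  - condition 1 of Theorem 3.1: R1 and R0 must not intersect;
 *  - Theorem 3.2: no '-' row may be covered by R1 or by R0.
 * Returns 0 as soon as s is found not to be basic; 1 means "not ruled out".
 */
int section_maybe_basic(int rows, int syms,
                        int C[MAXC][MAXS], const char *s) {
    int R1[MAXS], R0[MAXS];
    or_rows(rows, syms, C, s, '1', R1);
    or_rows(rows, syms, C, s, '0', R0);

    for (int j = 0; j < syms; j++)
        if (R1[j] && R0[j]) return 0;          /* condition 1 violated */

    for (int i = 0; i < rows; i++) {
        if (s[i] != '-') continue;
        int cov1 = 1, cov0 = 1;
        for (int j = 0; j < syms; j++) {
            if (C[i][j] && !R1[j]) cov1 = 0;
            if (C[i][j] && !R0[j]) cov0 = 0;
        }
        if (cov1 || cov0) return 0;            /* Theorem 3.2 criterion */
    }
    return 1;
}

int main(void) {
    /* Hypothetical instance (not from the paper). */
    int C[MAXC][MAXS] = {{1, 1, 0, 0}, {0, 0, 1, 1}, {0, 1, 1, 0}};
    printf("%d\n", section_maybe_basic(3, 4, C, "10-"));
    return 0;
}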

Fig. 1 shows an algorithm to generate all the basic sections for a given constraint matrix C. The inputs of the algorithm are the vectors R1 and R0 and a partially constructed section s. The components of s take values in the range {"0," "1," "-," undecided}. The values "0," "1," "-" have the usual meaning, whereas an undecided component is one whose value has not been chosen yet. To generate all the basic sections, the procedure generate_sections is invoked with all components of s undecided.

Initially it is checked whether s is consistent, i.e., whether the two conditions of Theorem 3.1 hold, by invoking the procedure section_inconsistent. It is a fact that if at least one of these conditions does not hold, then this is true for any section obtained by assigning one of the three values "0," "1," "-" to the components which are not specified yet. So the recursion stops here and the algorithm returns the empty set. If s passes the correctness test and all components of s have been specified, s is a basic section and the algorithm returns it. Otherwise, a constraint i such that s_i is undecided is chosen, and recursively one considers the three cases where s_i is assigned either "0" or "1" or "-," and the vectors R1 and R0 are recomputed. The union of the results of the three recursive branches is accumulated in the final result.
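In the same spirit, the following C fragment is a compact sketch of the recursion of Fig. 1 (hypothetical names; it is meant to be compiled together with the previous sketch, omitting that sketch's demo main). It uses section_maybe_basic as a stand-in for section_inconsistent, so the pruning is sound but weaker, and the reported sections are candidates that pass the necessary checks rather than certified basic sections.

#include <stdio.h>

#define MAXC 16
#define MAXS 16
#define UNDEC 'u'   /* component of the section not decided yet */

/* Filter from the previous sketch (compile that file without its main). */
int section_maybe_basic(int rows, int syms,
                        int C[MAXC][MAXS], const char *s);

/* Report one fully specified candidate section (here: just print it). */
static void report(const char *s) { puts(s); }

/*
 * Recursion in the spirit of generate_sections (Fig. 1): prune as soon as
 * the partial section fails the necessary checks, otherwise pick the next
 * undecided component and try '0', '1', '-'.  Components equal to UNDEC
 * are simply ignored by the filter, which keeps the pruning sound.
 * The full algorithm also runs correct_intersect before accepting a section.
 */
static void gen(int rows, int syms, int C[MAXC][MAXS],
                char s[MAXC + 1], int pos) {
    if (!section_maybe_basic(rows, syms, C, s)) return;   /* prune subtree */
    if (pos == rows) { report(s); return; }               /* fully specified */
    const char vals[3] = {'0', '1', '-'};
    for (int k = 0; k < 3; k++) {
        s[pos] = vals[k];
        gen(rows, syms, C, s, pos + 1);
    }
    s[pos] = UNDEC;   /* restore before returning to the caller */
}

int main(void) {
    /* Hypothetical instance (not from the paper). */
    int C[MAXC][MAXS] = {{1, 1, 0, 0}, {0, 0, 1, 1}, {0, 1, 1, 0}};
    char s[MAXC + 1] = "uuu";
    gen(3, 4, C, s, 0);
    return 0;
}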


Fig. 2. Algorithm to check inconsistency of a section.

Fig. 3. Algorithm to check correct intersection.

Fig. 2 shows the flow of the procedure section_inconsistent, which tests whether a given s passes the conditions of Theorem 3.1. It also applies the criterion of Theorem 3.2 to detect as early as possible that s is not a basic section. The check of the second condition of Theorem 3.1 is done by the procedure correct_intersect given in Fig. 3. The latter procedure tries to construct a vector d in D(s) such that R1(s) ∨ d intersects all the rows C_i with s_i = "-," without covering any of them. It keeps in the vector mark the rows with s_i = "-" which are already intersected. The set free_columns contains the columns that are neither in R1(s) nor in R0(s) and so are candidates to be added to d. The procedure calls itself recursively, exploring the two cases that the vector d includes or does not include a given column from free_columns.

Example 3.3: The set of all basic sections for the constraint matrix of Example A.1 can be listed explicitly. As a comparison, for the same problem there are (2^m − 2)/2 encoding columns, where m is the number of symbols (we subtract 2 to eliminate the columns with all "0's" or all "1's," and we divide by two because we do not distinguish columns obtained from each other by complementation, just as we do not distinguish basic sections obtained from each other by complementation). So many encoding columns generate the same basic section.

It is worth stressing that in the approaches based on computing encoding dichotomies [12], [3], the more "trivial" an instance of the face embedding problem is, the larger the number of prime dichotomies that it generates. In the worst case, with no face constraints at all, one must still generate a very large number of prime dichotomies (even if there are no face constraints, one must add uniqueness constraints that separate pairs of symbols not distinguished by face constraints). Instead, the number of sections is "proportional" to the difficulty of the problem instance; e.g., with no face constraints and a given encoding dimension, we need to add a unique constraint consisting of all 1s, to which there corresponds only one basic section, namely "-," and a satisfying cube made of "-" repeated as many times as the dimension. As another illustration, consider the following matrix with only one face constraint.

There are only three basic sections: "0," "1," and "-." We can form faces that satisfy the unique constraint by repeating some basic sections. Notice that this matrix has 121 prime encoding dichotomies. An example of a valid solution face matrix can be given;

it satisfies the constraint matrix, as demonstrated by building the intersection matrix, and an encoding that satisfies the constraints can be extracted from it with an injection from the columns of the constraint matrix to equal columns of the intersection matrix.


IV. CHARACTERIZATION OF SYMMETRIC SOLUTIONS

A crucial feature of an efficient algorithm to solve face embedding constraints is the ability to avoid the consideration of symmetric solutions, i.e., solutions that differ only by permutations and inversions of variables of the encoding space. We will refer to permutations and inversions of variables as symmetric transformations or symmetries.

In Section V we will present a procedure search_boolean_space that finds a solution face matrix in an n-dimensional Boolean space, if such a solution exists. In the following, C(1, i) stands for the constraint matrix restricted to the rows from 1 to i; we denote by Sol(C(1, i)) the set of all solution face matrices satisfying C(1, i) and by Sol(C) the set of all solutions of C. Call Sol*(C) the set of all solutions of C without any symmetric pair, and similarly for Sol*(C(1, i)). The procedure builds a matrix incrementally, by finding first a solution for the first constraint, then augmenting it to a solution for the first two constraints, and so on, until all constraints are considered. More precisely, when handling the i-th constraint, the set of all cubes satisfying it is generated and a cube is chosen. Then one verifies whether the matrix formed by appending this row to the current partial solution satisfies C(1, i). If not, another cube of the set is tried and, if none works, one backtracks further to a different choice of a cube for an earlier constraint, such that C(1, i) can be satisfied by the augmented partial solution.

Given a matrix S with n columns, it is a fact that S is a solution of C if and only if any matrix obtained from S by permutations and (bit-wise) inversions of columns is a solution of C. So for a solution with n columns there are n!2^n (n! for permutations and 2^n for inversions) different matrices obtained by symmetries of S, whose generation is useless for finding a solution, because they all behave like S.² So they form an equivalence class, of which it suffices to consider a representative to solve the problem. Now we show how to obtain these symmetry-free sets of solutions.

Solutions without symmetries for the first constraint. Consider the first constraint and a cube satisfying it. Suppose that the current encoding length is n. To avoid the cubes obtained from a given cube by permutation, it is sufficient to consider only cubes differing in the numbers of "0's" and "1's." Indeed, if two different cubes have the same numbers of "0's" and "1's" (and so the same number of "-'s," since they have the same length), it is always possible to find a permutation transforming one cube into the other. However, if two cubes have different numbers of "0's" and "1's," but the sum of the numbers of "0's" and "1's" is the same, we can still transform one cube into the other by inversion of some columns. So, to avoid symmetric solutions of the first constraint, we need to consider only cubes with different sums of the numbers of "0's" and "1's." Since a cube having some "1's" and some "0's" is equivalent after inversions to a cube whose "0's" have been turned into "1's," we need to consider only cubes made of "1's" and "-'s."

²For example, for n = 6 (n = 7), n!2^n is equal to 46 080 (645 120). Rigorously speaking, n!2^n is exact only when in S there are no columns that are equal or equal after inversion; in the latter cases the number of different symmetric matrices will be smaller.

Cubes of this form are distinguished by their number of "1's." So we need to generate no more than n + 1 candidate solutions for the first constraint, and those of them that actually satisfy it (cubes having enough minterms inside for the "1's" in the constraint, and enough minterms outside for the "0's" of the constraint) are the elements of the candidate set. Each element of this set is the representative of an equivalence class of cubes, where two cubes are equivalent if and only if there is a symmetry that transforms one into the other (it is indeed an equivalence relation).

Example 4.1: Consider the constraint matrix given in Example 2.1, for encoding length four. The candidate cubes to satisfy the first constraint are five: ---- (zero "1's"), 1--- (one "1"), 11-- (two "1's"), 111- (three "1's"), 1111 (four "1's"). None of them can be obtained by a symmetric transformation of another.
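Generating these representatives is straightforward; the following C fragment is a minimal sketch (hypothetical names) that lists, for a given encoding length, one candidate cube per number of "1's," exactly as in Example 4.1.

#include <stdio.h>

/*
 * Candidate cubes for the first constraint, up to permutation and
 * inversion of encoding variables: one representative per number of '1's,
 * i.e. k ones followed by (n - k) don't-cares, for k = 0 .. n.
 */
void first_constraint_candidates(int n) {
    for (int k = 0; k <= n; k++) {
        char cube[65];
        for (int j = 0; j < n; j++)
            cube[j] = (j < k) ? '1' : '-';
        cube[n] = '\0';
        printf("%s\n", cube);
    }
}

int main(void) {
    first_constraint_candidates(4);   /* prints ----, 1---, 11--, 111-, 1111 */
    return 0;
}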

Solutions without symmetries for the first i constraints, given the solutions without symmetries for the first i − 1 constraints. The idea is to avoid the generation of symmetric solutions of C(1, i) which do not differ in the first i − 1 cubes.

Given a partial solution matrix, we define an equivalence relation on its columns stating that two columns are equivalent if and only if they are equal. There may be many equivalence classes (at most as many as there are columns) and one of them may contain columns made only of "-'s." Say that the partial solution has a certain number of equivalence classes. For concision we may say that a class has at least a "0" or a "1" if the columns of the class have at least a "0" or a "1," and that a class has only "-'s" if the columns of the class have only "-'s."

Consider any solution to C(1, i) obtained by taking a solution matrix from Sol*(C(1, i − 1)) and appending to it a cube satisfying the i-th constraint such that the resulting matrix satisfies C(1, i). Iterating the process for all such cubes, we obtain the set of all resulting matrices (whose submatrix restricted to the first i − 1 rows is a matrix of Sol*(C(1, i − 1))) satisfying C(1, i). It is a fact that this set will contain symmetric solutions, i.e., matrices having the same first i − 1 rows and differing in permutations of columns that are in an equivalence class whose columns have at least a "0" or a "1," or in permutations and inversions of columns that are in an equivalence class whose columns have only "-'s." Notice that there cannot be symmetric solutions differing in the first i − 1 rows, because by construction there are no symmetric pairs in Sol*(C(1, i − 1)).

Let us show how to obtain Sol*(C(1, i)), that is, the set of solutions of C(1, i) without symmetric elements. Let us start by generating the solution matrices of C(1, i) for a given matrix from Sol*(C(1, i − 1)). Suppose that a cube is found such that the matrix whose submatrix restricted to the first i − 1 rows is the given matrix and whose last row is that cube is a solution of C(1, i). Then we should avoid the generation of any other cube that has the same numbers of "0's" and "1's" in the columns of each class with at least a "0" or a "1," and the same sum of the numbers of "0's" and "1's" in the columns of each class of "-'s." Indeed, we can obtain such a cube from the first one by permutation of columns from a class with at least a "0" or a "1" and by permutation and inversion of columns from


a class with only "-'s," neither of which changes the first i − 1 rows. As before, instead of cubes having the same sum of the numbers of "0's" and "1's" in the columns of a class with only "-'s," we may consider cubes having a different number of "1's" in the columns of that class. Finally, the set Sol*(C(1, i)) is the union of all the sets obtained as before for all solution matrices from Sol*(C(1, i − 1)).

Example 4.2: Consider the constraint matrix given in Example 2.1, for encoding length four.

Suppose that we have already chosen the cube 11-- to satisfy the first constraint. The column equality relation of this partial solution has two equivalence classes: the first two columns (all equal to 1) and the last two columns (all equal to "-"). The candidate cubes that satisfy the second constraint, and together with 11-- satisfy the first two constraints, are among those built (to avoid symmetric ones) by combining the six patterns -- (zero "0's," zero "1's"), 1- (zero "0's," one "1"), 0- (one "0," zero "1's"), 11 (zero "0's," two "1's"), 00 (two "0's," zero "1's"), 01 (one "0," one "1") for the columns in the first class (all subcubes that have different numbers of "0's" and "1's"), and the three patterns -- (zero "1's"), 1- (one "1"), 11 (two "1's") (all subcubes that have different numbers of "1's") for the columns in the second class. All together we obtain 18 combinations: ----, --1-, --11, 1---, 1-1-, 1-11, 11--, 111-, 1111, 0---, 0-1-, 0-11, 01--, 011-, 0111, 00--, 001-, 0011.

Example 4.3: Consider the constraint matrix given in Example 2.1, for encoding length four. Suppose that we have already built a partial solution whose column equality relation has two equivalence classes: the first two columns (which are equal to each other) and the last two columns (which are equal to each other). The candidate cubes that satisfy the next constraint, and together with the partial solution satisfy the constraints processed so far, are among those built (to avoid symmetric ones) by combining the six patterns -- (zero "0's," zero "1's"), 1- (zero "0's," one "1"), 0- (one "0," zero "1's"), 11 (zero "0's," two "1's"), 00 (two "0's," zero "1's"), 01 (one "0," one "1"), both for the columns in the first class and for those in the second class. The reason is that the columns in both classes contain an entry equal to 1, and therefore we must generate all subcubes that have different numbers of "0's" and "1's." All together we obtain 36 combinations: ----, --1-, --11, --0-, --01, --00, 1---, 1-1-, 1-11, 1-0-, 1-01, 1-00, 11--, 111-, 1111, 110-, 1101, 1100, 0---, 0-1-, 0-11, 0-0-, 0-01, 0-00, 01--, 011-, 0111, 010-, 0101, 0100, 00--, 001-, 0011, 000-, 0001, 0000.

The previous approach is general, and it can be applied whenever one needs to generate the set of all cubes in a given Boolean space such that no two cubes can be obtained from each other by a symmetric transformation.

V. AN EXACT ALGORITHM TO FIND A MINIMUM SOLUTION

In Fig. 4 we present the flow of an algorithm find_solution that finds a minimum solution of a constraint matrix. It starts with the minimum dimension (the logarithm of the number of constraints) and increases it until a solution is found. It is guaranteed to terminate because every constraint matrix can be satisfied by an encoding of length equal to the number of symbols, more precisely by a 1-hot encoding. Usually a much shorter encoding length suffices.

A. The Search Strategy

The key feature of the proposed algorithm is that it searches sets of cubes instead of sets of codes. Since a set of cubes may correspond to many sets of codes, the algorithm explores many solutions at the same time. Once a satisfactory set of cubes is found, it is straightforward to extract from it a satisfying encoding.

For a given dimension, the search for a satisfying encoding is carried out by the routine search_space, which returns a solution face matrix S that satisfies C. Once S is known, it is easy, as shown in Example 2.2, to find an encoding of the symbols that satisfies C. The main features of search_space are the following.

1) The constraints are ordered as mentioned in Section V-E and then processed in that order.

2) Each call of search_space processes a new constraint.

3) It keeps a current partial solution Curr_Sol that satisfies all the constraints from the first to the last constraint that has been processed.

4) It satisfies a constraint by generating a cube that encodes the constraint (a row of the face matrix). A constraint is satisfied if there is a cube such that, by adding it to the current solution, we satisfy the constraint matrix restricted to the constraints from the first to the one currently processed.

5) Once the current constraint has been satisfied, the current solution is updated and search_space calls itself recursively with a new constraint.

6) If the current solution cannot be extended to satisfy the current constraint, search_space backtracks and tries a different cube for the last constraint that was satisfied by Curr_Sol, and it continues to backtrack until it finds a partial solution Curr_Sol which can be extended to satisfy the constraint currently processed.

7) The procedure found_solution tests whether a face matrix is a solution of a set of constraints, by constructing the intersection matrix as shown in Section II-A.

The following enhancements reduce the nodes of the search tree that search_space has to explore to find a minimum solution.

1) Candidate cubes are generated by a procedure generate_cand_cubes that does not produce any symmetric pair of cubes, based on the theory presented in Section IV. The equivalence relation on columns is computed by recalculate_classes.

2) A procedure restrict_cand_cubes eliminates the cubes that would yield a matrix with sections which are not basic, as allowed by Theorem 2.2 and shown by example in Section V-B.

3) Cubes that do not satisfy the necessary conditions of Section V-C to be valid extensions of the current solution are discarded by a procedure discard_cand_cubes.


Fig. 4. Algorithm to find a minimum solution.

4) When trying to extend the current solution, the procedure unsat_constr checks first whether any of the constraints not yet processed is unsatisfiable by an extension of the current solution; if so, search_space backtracks to modify the current solution. See Section V-D for more discussion.

Notice that instead of generating all candidate cubes at once and then pruning them, one could generate them "lazily" on demand (at the price of more programming complexity). But, for the problem instances that we can solve exactly with the current prototype, the number of candidate cubes does not exceed a few thousand. A simplified sketch of the overall search loop is given below.
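The following C fragment is a minimal, deliberately naive sketch of the overall flow of find_solution and search_space (hypothetical names and a made-up instance; it is meant to be compiled together with the satisfaction-check sketch of Section II-A, omitting that sketch's demo main). It enumerates every cube for each constraint instead of using the symmetry-free candidate generation, basic-section restriction, necessary-condition filtering, and early unsatisfiability detection described above, so it illustrates only the skeleton that those features accelerate.

#include <stdio.h>
#include <string.h>

#define MAXC 16
#define MAXS 16
#define MAXN 10

/* Satisfaction check from the Section II-A sketch (compile that file
   without its demo main). */
int satisfies(int rows, int syms, int n,
              int C[MAXC][MAXS], char S[MAXC][MAXN + 1]);

static int rows_g, syms_g, n_g;
static int Cm[MAXC][MAXS];
static char Sm[MAXC][MAXN + 1];

/*
 * Naive stand-in for search_space: try every cube over {0,1,-} for the
 * i-th constraint, keep it if the partial face matrix still satisfies
 * the first i+1 constraints, and recurse; return on the first success.
 * The real algorithm enumerates only asymmetric candidates, restricts
 * to basic sections, filters with necessary conditions, and detects
 * unsatisfiable constraints early.
 */
static int search(int i) {
    if (i == rows_g) return 1;               /* all constraints satisfied */
    int total = 1;
    for (int k = 0; k < n_g; k++) total *= 3;
    for (int t = 0; t < total; t++) {
        int x = t;
        for (int k = 0; k < n_g; k++) { Sm[i][k] = "01-"[x % 3]; x /= 3; }
        Sm[i][n_g] = '\0';
        if (satisfies(i + 1, syms_g, n_g, Cm, Sm) && search(i + 1))
            return 1;
    }
    return 0;                                /* no cube works: backtrack */
}

int main(void) {
    /* Hypothetical instance (not from the paper): 3 constraints, 4 symbols. */
    int C[MAXC][MAXS] = {{1, 1, 0, 0}, {0, 1, 1, 0}, {0, 0, 1, 1}};
    rows_g = 3; syms_g = 4;
    memcpy(Cm, C, sizeof(C));
    for (n_g = 1; n_g <= MAXN; n_g++)        /* find_solution: grow the dimension */
        if (search(0)) break;
    printf("code length found: %d\n", n_g);
    for (int i = 0; i < rows_g; i++) printf("cube %d: %s\n", i, Sm[i]);
    return 0;
}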

B. Restriction to Basic Sections

In Section II we highlighted the fact that not all solution face matrices consist entirely of basic sections, but we argued in Theorem 2.2 that basic sections are sufficient to find a minimum solution. Therefore, when generating cubes that are candidate solutions of face constraints, it is profitable to reject those that would produce a face matrix with some sections which are not basic.

Example 5.1: Let us continue Example 4.2, referring to the constraint matrix of Example 2.1. The hypothesis is that we have already chosen the cube 11-- to satisfy the first constraint. The column equality relation of the partial solution has two equivalence classes, formed by the first two columns and by the last two columns. The candidate cubes that satisfy the second constraint, and together with 11-- satisfy the first two constraints, are obtained by combining the six patterns --, 1-, 0-, 11, 00, 01 for the columns in the first class and the three patterns --, 1-, 11 for the columns in the second class: ----, --1-, --11, 1---, 1-1-, 1-11, 11--, 111-, 1111, 0---, 0-1-, 0-11, 01--, 011-, 0111, 00--, 001-, 0011.


Notice that the last nine cubes start with "0." But the column that would become the first column of the face matrix when we add any of the last nine cubes to 11-- is not a basic section, as can be seen by applying the test of Theorem 3.1, because the first and the second constraint of the matrix intersect (in other words, since the two constraints intersect, the two cubes should intersect too, but they do not). So we can discard all candidate cubes for the second constraint whose first component is 0 and restrict the search to: ----, --1-, --11, 1---, 1-1-, 1-11, 11--, 111-, 1111.

The process shown in Example 5.1 can be made systematic as a procedure that removes from the candidate cubes those that would yield sections that are not basic. The idea of the procedure, whose details we do not report here, is to add one candidate cube at a time and then check whether all the sections (one per bit of the current encoding length) are still basic. To check whether a section is basic, one can use the algorithm to generate basic sections, due to the fact that, given a partially constructed section, that algorithm generates all basic sections dominating it. So, in order to check whether a section is basic, generate all basic sections dominating it; if the set is empty, it means that the section itself is not basic, because if it were basic the set of basic sections dominating it would include at least the section itself.

C. Removal of Unsuitable Cubes

Given a partial solution for the first i − 1 constraints and a set of candidate cubes for the i-th constraint that contains neither symmetric cubes nor cubes leading to sections that are not basic, before checking whether the partial solution together with a candidate cube is a solution of the first i constraints, it is worthwhile to apply some necessary conditions that a candidate cube must satisfy in order to pass the test. Precisely, we discard a candidate cube if at least one of the following three conditions holds (a sketch of this filter is given after the list):

1) the number of 1s in the i-th constraint row is greater than 2^q, where q is the number of "-'s" in the candidate cube (the cube does not contain enough minterms for the symbols of the constraint);

2) there is a previously processed constraint j such that the candidate cube covers the cube already chosen for constraint j, but the constraint row C_i does not cover (dominate) the constraint row C_j; in this case there is a column of C, with a 1 in row j and a 0 in row i, that cannot appear in the intersection matrix of the extended solution;

3) there is a previously processed constraint j such that C_i intersects C_j, but the number of 1s in their intersection is greater than 2^q, where q is the number of "-'s" in the cube obtained by intersecting the candidate cube and the cube already chosen for constraint j.
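As an illustration, the following C fragment is a minimal sketch of such a filter (hypothetical names and simplified cube handling; it is a helper meant to be driven by the search skeleton above). It returns 1 when a candidate cube violates one of the three necessary conditions.

#define MAXC 16
#define MAXS 16
#define MAXN 10

/* Intersection of two cubes; returns 0 if they are disjoint. */
static int cube_and(const char *a, const char *b, char *out, int n) {
    for (int k = 0; k < n; k++) {
        if (a[k] == '-') out[k] = b[k];
        else if (b[k] == '-' || b[k] == a[k]) out[k] = a[k];
        else return 0;
    }
    out[n] = '\0';
    return 1;
}

static long cube_size(const char *c, int n) {   /* number of minterms in a cube */
    long s = 1;
    for (int k = 0; k < n; k++) if (c[k] == '-') s *= 2;
    return s;
}

static int cube_covers(const char *a, const char *b, int n) {  /* is b inside a? */
    for (int k = 0; k < n; k++)
        if (a[k] != '-' && a[k] != b[k]) return 0;
    return 1;
}

/*
 * Necessary conditions of Section V-C: return 1 if candidate cube c for
 * constraint i must be discarded, given the cubes S[0..i-1] already chosen.
 */
int discard_cand_cube(int i, int syms, int n,
                      int C[MAXC][MAXS], char S[MAXC][MAXN + 1], const char *c) {
    long ones_i = 0;
    for (int j = 0; j < syms; j++) ones_i += C[i][j];
    if (ones_i > cube_size(c, n)) return 1;               /* condition 1 */

    for (int j = 0; j < i; j++) {
        int shared = 0, j_not_i = 0;
        for (int k = 0; k < syms; k++) {
            if (C[i][k] && C[j][k]) shared++;
            if (C[j][k] && !C[i][k]) j_not_i = 1;
        }
        if (cube_covers(c, S[j], n) && j_not_i) return 1; /* condition 2 */
        if (shared > 0) {
            char inter[MAXN + 1];
            if (!cube_and(c, S[j], inter, n)) return 1;   /* shared symbols, disjoint cubes */
            if (shared > cube_size(inter, n)) return 1;   /* condition 3 */
        }
    }
    return 0;
}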

D. Early Detection of Unsatisfiable Constraints

Constraints are processed one by one in a predefined order. Suppose that on the path leading to the current node of the search tree we have already chosen four cubes satisfying the first four constraints and that now we are trying to satisfy the fifth constraint. Suppose also that all constraints from the fifth to the nineteenth are satisfiable, but that the twentieth is unsatisfiable, given the current choice of the first four cubes. Checking the satisfiability of one constraint at a time, we would discover that the twentieth constraint is unsatisfiable only after having processed all constraints up to the nineteenth; then we would start backtracking to another cube satisfying the nineteenth constraint and we would try again to satisfy the twentieth one, and so on for all the cubes that satisfy the nineteenth constraint. We would repeat this time-consuming process for all constraints from the nineteenth down to the fifth, before discovering that we must modify the solution to the first four constraints in order to extend it to a solution that satisfies the constraints up to the twentieth.

To prevent such unrobust behavior and to lessen the dependency on the initial sorting of the constraints, we employ early detection of unsatisfiable constraints, given a current partial encoding. At each node of the search tree, the algorithm first checks whether every remaining unprocessed constraint is satisfiable, given the current choice of cubes which satisfy the constraints processed so far. Although this check requires some extra calculations at each node of the search tree, it is fully justified by the drastic reduction of the size of the search tree.

A sketch of the procedure follows. Suppose that we have already processed some of the constraints. Then we need to check whether all the remaining constraints are satisfiable. We process them one by one. For each remaining constraint, we add it to the processed ones and generate all candidate cubes that satisfy it. Then we check whether there is a candidate cube such that the cubes already chosen, together with this cube, satisfy the processed constraints and the one under examination. If such a cube is found, then that constraint is satisfiable with respect to the previous choices; we discard the cube and move on to the next remaining constraint. If there is no such cube, then that constraint is unsatisfiable with respect to the previous choices, and we backtrack to the point where the last cube of the current partial solution was chosen.
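A minimal sketch of this look-ahead follows (hypothetical names; it reuses the satisfaction check of the Section II-A sketch and the same naive cube enumeration as the search skeleton, so it is meant to be compiled together with that code). For each remaining constraint it temporarily appends the constraint to the processed ones and looks for any cube that keeps the partial solution consistent; if none exists, the index of the offending constraint is returned so that the caller can backtrack.

#include <string.h>

#define MAXC 16
#define MAXS 16
#define MAXN 10

/* Satisfaction check from the Section II-A sketch. */
int satisfies(int rows, int syms, int n,
              int C[MAXC][MAXS], char S[MAXC][MAXN + 1]);

/*
 * Early detection of unsatisfiable constraints: given cubes S[0..done-1]
 * chosen for the first `done` constraints, return the index of a remaining
 * constraint that no cube can satisfy together with the current choices,
 * or -1 if every remaining constraint is still individually satisfiable.
 */
int unsat_constr(int done, int total, int syms, int n,
                 int C[MAXC][MAXS], char S[MAXC][MAXN + 1]) {
    for (int k = done; k < total; k++) {
        int Ct[MAXC][MAXS];
        char St[MAXC][MAXN + 1];
        memcpy(Ct, C, (size_t)done * sizeof(Ct[0]));
        memcpy(St, S, (size_t)done * sizeof(St[0]));
        memcpy(Ct[done], C[k], sizeof(Ct[0]));     /* append the k-th constraint */

        int ncubes = 1, found = 0;
        for (int j = 0; j < n; j++) ncubes *= 3;
        for (int t = 0; t < ncubes && !found; t++) {  /* naive cube enumeration */
            int x = t;
            for (int j = 0; j < n; j++) { St[done][j] = "01-"[x % 3]; x /= 3; }
            St[done][n] = '\0';
            found = satisfies(done + 1, syms, n, Ct, St);
        }
        if (!found) return k;     /* constraint k is unsatisfiable: backtrack */
    }
    return -1;
}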

E. Sorting of Constraints

Constraints are sorted with the goal of pruning branches of the search tree at the earliest possible stages. We have two sorting criteria. The first one selects as the next constraint the one that intersects the highest number of already selected constraints; ties are broken by selecting the constraint with the highest number of "1's." The second criterion selects according to the highest number of "1's" and breaks ties with the highest number of intersected rows.

VI. AN EXAMPLE OF SEARCH

Consider the constraint matrix of this example. The procedure find_solution calls search_space first with


the smallest dimension, but there is no solution there. So it calls search_space again with the dimension increased by one. Let us follow the search in the latter space.

search_space is invoked with an empty Curr_Sol and the first constraint as cur_constr. After generate_cand_cubes, Cubes contains the candidate cubes; after restrict_cand_cubes, Cubes is the same as before; after discard_cand_cubes, Cubes is reduced. A cur_cube is chosen, and the new Curr_Sol satisfies the first constraint.

search_space is invoked with the current Curr_Sol and the second constraint as cur_constr. After generate_cand_cubes, Cubes contains the candidate cubes; after restrict_cand_cubes, Cubes is the same as before; after discard_cand_cubes, Cubes is reduced. A cur_cube is chosen, and the new Curr_Sol satisfies the first two constraints.

search_space is invoked with the current Curr_Sol and the third constraint as cur_constr. After generate_cand_cubes, Cubes contains the candidate cubes; restrict_cand_cubes removes several of them; discard_cand_cubes reduces the set further. A cur_cube is chosen, and the new Curr_Sol satisfies the first three constraints.

search_space is invoked with the current Curr_Sol and the fourth constraint as cur_constr. After generate_cand_cubes, restrict_cand_cubes, and discard_cand_cubes, a cur_cube is chosen. The new Curr_Sol

satisfies the constraints processed so far.

search_space is invoked with the current Curr_Sol and the next constraint as cur_constr. unsat_constr detects that one of the remaining constraints is unsatisfiable, given Curr_Sol. Backtrack to an earlier constraint and, since there are no other candidate cubes for the latter, backtrack again to the constraint before it. Set a new cur_cube; the new Curr_Sol satisfies the constraints processed so far.

search_space is invoked with the current Curr_Sol and the next constraint as cur_constr. After generate_cand_cubes, restrict_cand_cubes, and discard_cand_cubes, a cur_cube is chosen, and the new Curr_Sol satisfies the constraints processed so far.

search_space is invoked with the current Curr_Sol and the next constraint as cur_constr. unsat_constr detects that one of the remaining constraints is unsatisfiable, given Curr_Sol. Backtrack to an earlier constraint


and set a new cur_cube; the new Curr_Sol satisfies the constraints processed so far.

search_space is invoked with the current Curr_Sol and the next constraint as cur_constr. unsat_constr detects that one of the remaining constraints is unsatisfiable, given Curr_Sol. Backtrack to an earlier constraint and set a new cur_cube; the new Curr_Sol satisfies the constraints processed so far.

search_space is invoked with the current Curr_Sol and the next constraint as cur_constr. unsat_constr detects that one of the remaining constraints is unsatisfiable, given Curr_Sol. Backtrack to an earlier constraint and set a new cur_cube; the new Curr_Sol satisfies the constraints processed so far.

search_space is invoked with the current Curr_Sol and the next constraint as cur_constr. After generate_cand_cubes, restrict_cand_cubes, and discard_cand_cubes, a cur_cube is chosen, and the new Curr_Sol satisfies the constraints processed so far.

search_space is invoked with the current Curr_Sol and the next constraint as cur_constr. All constraints are satisfied, so Curr_Sol is the final solution. Notice that search_space was called 11 times.

VII. RESULTS

We implemented the algorithm described in Section V in a prototype package in C called MINSK (minimum input satisfaction kernel) and applied it to a set of benchmarks available in the literature. The benchmarks are partitioned into three sets: finite state machines (FSM's) from the MCNC collection, reported in Table I; FSM's collected from various other sources, reported in Table III; and decode PLA's of the VLSI-BAM project, provided by Bruce Holmer [9], reported in Table IV. In all cases, face constraints were generated with ESPRESSO [1]. In the tables, we report the following:

1) the name of the example;
2) the number of symbols to encode ("#states");
3) the logarithm of the number of symbols ("min. len."), together with the minimum code length to satisfy all input constraints known so far ("best known");
4) the minimum code length to satisfy all input constraints found by MINSK ("min. sol.");
5) the number of calls of the routine found_solution ("#checks");
6) the number of recursive calls of the routine search_space ("#calls");
7) the CPU time on a 300-MHz DEC ALPHA workstation.

We did not report data on examples where the constraints were few and MINSK found a solution in no time. We found an exact solution for all the examples except the FSM's tbk, s1488, s1494, and s298, none of which has been solved before. For some examples, like donfile, scf, and dk16, exact solutions were never found automatically before; for others, like ex2, an exact solution was found automatically by NOVA [17] with the option -e ie, but at the cost of an unreasonable CPU time (60172.6 s on a 60-MHz DEC RISC workstation).3

Up to now, four exact algorithms have been tried to solve face hypercube embedding.

3 A solution of 7 bits was erroneously reported as exact in [17] for dk16, whereas the minimum solution has 6 bits.


TABLE I: EXPERIMENTS WITH FSM'S FROM MCNC BENCHMARK SET

TABLE II: COMPARISON WITH OTHER APPROACHES

The first is available as an option in NOVA (-e ie); the second is based on a reduction to satisfaction of encoding dichotomies by means of unate covering [18], [12]; the third is an implicit implementation with ZBDD's of the latter [3]; and the last is a simplification of the third, where instead of prime dichotomies one uses all possible encoding dichotomies [3].

TABLE III: EXPERIMENTS WITH FSM'S FROM OTHER SOURCES

TABLE IV: EXPERIMENTS WITH DECODE PLA'S OF THE VLSI-BAM

In Table II, we compare the performance of MINSK with the last three previous algorithms, based on the data recently reported in [3]. We are aware that the experiments presented in [3] were run on a 75-MHz SuperSparc workstation with 96 Mbyte of memory and a timeout of two hours. The purpose of the comparison is to evaluate the behaviors of the various algorithms, not to discuss specific running times. We included in Table II all the interesting examples, leaving out "easy" cases where all algorithms behaved similarly.

The experiments warrant the following practical conclusions.

• MINSK is a robust algorithm: it solves problems with few constraints in no time and requires more time when the set of constraints is larger and more difficult. This apparently innocent feature is lacking in the other programs (with the exception of NOVA), pointing out that the reduction of face hypercube embedding to encoding dichotomies is not an algorithmically robust strategy.

• MINSK is also superior in running times to the other programs in the more difficult cases, showing that the key ingredients of its search strategy, such as generating cubes and not codes and avoiding symmetric solutions and sections that are not basic, prune away large suboptimal portions of the search space.4 The exact option of NOVA, instead, is hopelessly slow in the more difficult cases, because it enumerates codes and not cubes and does not avoid the generation of symmetric encodings.

• The implicit algorithms of [3] rely on a very sophisticated unate covering package that represents the table with ZBDD's.

4 In decreasing order of practical effectiveness, the ingredients of MINSK's search strategy can be roughly listed as: generation of cubes instead of codes, incremental processing of constraints, avoidance of the generation of symmetrical solutions, early detection of unsatisfiable constraints, and pruning of sections that are not basic.


ZBDD’s. MINSK instead is a simple-minded implemen-tation, whose strength lies only in the underlying theory.The running times of MINSK can be improved a lotby making more efficient some critical routines such asfound_solution. This shows how important is to find theappropriate model for a problem, even when powerfulimplicit techniques are available as an alternative.

VIII. CONCLUSION

We have presented a new matrix formulation of the face hypercube embedding problem that motivates the design of an efficient search strategy to find an encoding that satisfies all faces of minimum length. Increasing dimensions of the Boolean space are explored; for a given dimension, constraints are satisfied one at a time. The following features help to reduce the nodes of the solution space that must be explored: candidate cubes instead of candidate codes are generated, symmetric solutions are not generated, a smaller sufficient set of solutions (producing basic sections) is explored, necessary conditions help discard unsuitable candidate cubes, and early detection that a partial solution cannot be extended to be a global solution prunes infeasible portions of the search tree.

We have implemented a prototype package MINSK based on the previous ideas and run experiments to evaluate it. The experiments show that MINSK is faster and solves more problems than any available algorithm. Moreover, MINSK is a robust algorithm, while most of the proposed alternatives are not. All problems of the MCNC benchmark suite were solved successfully, except four of them, unsolved or untried by any other tool. Other collections of examples were solved or reported for the first time, including an important set of decoder PLA's coming from the design of microprocessor instruction sets.

We know that the current implementation of MINSK is simple-minded and leaves room for improvements to speed up the program further. For instance, the satisfaction check with the intersection matrix is expensive and currently not optimized. Other areas of improvement to cope with difficult examples lie in the computation of a lower bound on the Boolean space dimension tighter than the logarithm of the number of symbols, and in a dynamic choice of the next cube and its size, based on a tighter analysis of the cube occupancy requirements of the existing constraints. Some interesting work on the computation of tighter lower bounds on the minimum code length and on the minimum cube dimension has been reported in [10].
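
For reference, the trivial lower bound mentioned above is simply the ceiling of the base-2 logarithm of the number of symbols, since an injective encoding of n symbols requires at least that many bits; a one-line Python sketch (illustrative only, not part of MINSK):

from math import ceil, log2

def trivial_lower_bound(num_symbols):
    """Any injective encoding of num_symbols symbols needs at least
    ceil(log2(num_symbols)) bits: the trivial lower bound on the dimension."""
    return ceil(log2(num_symbols))

print(trivial_lower_bound(16))  # 4
print(trivial_lower_bound(20))  # 5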

An interesting direction for future work is to design an approximate algorithm (which finds a near-minimal encoding length to satisfy all constraints) based on the ideas underlying the exact search presented in this paper. Since we have shown how to reduce the search space using combinatorial objects, like cubes instead of codes, that model the problem more naturally, our search strategy could be made heuristic to attack problems too difficult for the exact algorithm. Moreover, the existing theory and algorithm could be generalized to the following:

1) solve face constraints with "don't cares," which is an important practical problem, and

2) solve mixed problems that include constraints in the form of encoding dichotomies.

From some preliminary analysis, both extensions are amenable to the current framework, with some appropriate modifications to the test of whether a candidate matrix is a solution and the introduction of the equivalent of a face for an encoding dichotomy.

Notice that the reduction of face hypercube embedding to satisfaction of encoding dichotomies [12], [3] has shown experimentally that face hypercube embedding in a sense contains the hardest instances of the problem of satisfying encoding dichotomies. This fact justifies our strategy of solving the former first and extending it later to the latter.

APPENDIX

A. Prime Sections

It is possible to characterize a subset of basic sections, called prime sections, that is sufficient to find a minimum solution. We are going to define them and show an example. We will not prove their sufficiency, because the proof is intricate and they are not used in our algorithm. A proof for the case of constraint matrices with no repeated columns can be found in [8], and it can be generalized to the general case. A reason to mention prime sections here is that they establish a connection with the approach to solving face embedding based on generating prime encoding dichotomies [18], [12]. It is a fact that prime sections are fewer than prime encoding dichotomies, and so they may inspire a potentially more efficient exact algorithm.

An encoding dichotomy (or, more simply, dichotomy) is a two-block partition of a subset of the symbols to be encoded. The symbols in the left block are associated with the bit "0," while those in the right block are associated with the bit "1." If a dichotomy is used in generating an encoding, then a code bit of the symbols in the left block is assigned "0," while the same code bit is assigned "1" for the symbols in the right block. A dichotomy is complete if each symbol appears exactly once in either block. A complete dichotomy is an encoding column.

Two dichotomies are compatible if the left block of each one is disjoint from the right block of the other; otherwise, they are incompatible. The union of two compatible dichotomies is the dichotomy whose left and right blocks are the unions of the left and right blocks, respectively, of the two dichotomies. The union operation is not defined for incompatible dichotomies. A dichotomy covers another dichotomy if the left and right blocks of the covered dichotomy are subsets, respectively, either of the left and right blocks or of the right and left blocks of the covering dichotomy.

Definition A.1: Given a set of dichotomies, a prime dichotomy is a dichotomy that is incompatible with all dichotomies not covered by it.
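
To make these definitions concrete, the following Python sketch (the symbol names and the brute-force prime test are illustrative assumptions, not the paper's implementation) represents a dichotomy as a pair of blocks and implements the compatibility, union, and covering tests, together with the prime check of Definition A.1.

# Illustrative sketch with hypothetical symbols; a dichotomy is a pair
# (left, right) of disjoint blocks of symbols.

def compatible(d1, d2):
    """d1 and d2 are compatible iff the left block of each one is disjoint
    from the right block of the other."""
    return d1[0].isdisjoint(d2[1]) and d1[1].isdisjoint(d2[0])

def union(d1, d2):
    """Blockwise union, defined only for compatible dichotomies."""
    assert compatible(d1, d2)
    return (d1[0] | d2[0], d1[1] | d2[1])

def covers(d1, d2):
    """d1 covers d2 if d2's blocks are contained in d1's blocks,
    in either the same or the swapped order."""
    return ((d2[0] <= d1[0] and d2[1] <= d1[1]) or
            (d2[0] <= d1[1] and d2[1] <= d1[0]))

def is_prime(d, dichotomies):
    """Definition A.1: d is prime if it is incompatible with every
    dichotomy in the set that it does not cover."""
    return all(covers(d, e) or not compatible(d, e) for e in dichotomies)

# Hypothetical example: ({a, b}; {c}) is covered by ({a, b}; {c, d}) and,
# with blocks swapped, by ({c, d}; {a, b}), but not by ({a, c}; {b}).
d1 = (frozenset("ab"), frozenset("c"))
d2 = (frozenset("ab"), frozenset("cd"))
d3 = (frozenset("cd"), frozenset("ab"))
d4 = (frozenset("ac"), frozenset("b"))
print(covers(d2, d1), covers(d3, d1), covers(d4, d1))  # True True False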


A set of complete dichotomies generates an encoding as follows. Each complete dichotomy generates one column of the encoding, with symbols in the left (right) block assigned a "0" ("1") in that column. So complete encoding dichotomies and encoding columns represent the same information in slightly different formats.
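
As an illustration of this correspondence, the following Python sketch (with hypothetical symbols and dichotomies, not the paper's example) derives an encoding from a list of complete dichotomies, one column per dichotomy.

def encoding_from_complete_dichotomies(symbols, dichotomies):
    """Each complete dichotomy contributes one code bit: '0' for symbols in
    the left block, '1' for symbols in the right block."""
    codes = {s: "" for s in symbols}
    for left, right in dichotomies:
        assert set(left) | set(right) == set(symbols)   # completeness check
        for s in symbols:
            codes[s] += "0" if s in left else "1"
    return codes

# Hypothetical example with symbols a, b, c, d: the complete dichotomies
# ({a, b}; {c, d}) and ({a, c}; {b, d}) yield a=00, b=01, c=10, d=11.
symbols = ["a", "b", "c", "d"]
dichotomies = [({"a", "b"}, {"c", "d"}), ({"a", "c"}, {"b", "d"})]
print(encoding_from_complete_dichotomies(symbols, dichotomies))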

Given a matrix of constraints, each face constraint induces some dichotomies called initial dichotomies. The symbols that must be on one face are placed in one block of each initial dichotomy induced by that constraint, while the other block of the initial dichotomy contains one of the symbols not on the face. Thus a face constraint generates one initial dichotomy for each symbol not on the face, each with the face symbols in one block and exactly one of the remaining symbols in the other block. Repeating this process for each constraint in the matrix, one gets all the initial dichotomies induced by it. Finally, starting from these initial dichotomies, the prime dichotomies related to the matrix are computed [16].
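
For instance, the sketch below (again with hypothetical symbol names) enumerates the initial dichotomies induced by a single face constraint: one dichotomy per symbol not on the face.

def initial_dichotomies(symbols, face):
    """One initial dichotomy per symbol outside the face: the face symbols
    form one block, the outside symbol alone forms the other block."""
    face = frozenset(face)
    return [(face, frozenset({s})) for s in symbols if s not in face]

# Hypothetical example: symbols {a, b, c, d} and face constraint {a, b}
# induce the two initial dichotomies ({a, b}; {c}) and ({a, b}; {d}).
for d in initial_dichotomies(["a", "b", "c", "d"], ["a", "b"]):
    print(sorted(d[0]), sorted(d[1]))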

Definition A.2: A basic section is a prime section if there is a prime encoding column that generates it.

Definition A.3: A section covers another section if there are encoding columns generating them such that the column generating the former covers the column generating the latter.

As anticipated, it can be shown that, given a set of basic sections satisfying the constraint matrix, if one of them is not a prime section, then there is a prime section that covers it and the set obtained by the replacement still satisfies the matrix.

Example A.1: Given a matrix of constraints, the corresponding sets of prime dichotomies and prime sections can be computed. In this example there are 12 prime dichotomies but only 10 prime sections, because three of the prime dichotomies generate the same prime section.

REFERENCES

[1] R. Brayton, G. Hachtel, C. McMullen, and A. S.-Vincentelli, Logic Minimization Algorithms for VLSI Synthesis. Kluwer Academic Publishers, 1984.
[2] O. Coudert, "Two-level logic minimization: An overview," Integration, vol. 17, no. 2, pp. 97–140, Oct. 1994.
[3] O. Coudert and C.-J. Shi, "Ze-dicho: An exact solver for dichotomy-based constrained encoding," in Proc. Int. Conf. Computer Design, 1996, pp. 426–431.
[4] S. Devadas, A. Wang, R. Newton, and A. S.-Vincentelli, "Boolean decomposition in multilevel logic optimization," IEEE J. Solid-State Circuits, pp. 399–408, Apr. 1989.
[5] C. Duff, "Codage d'automates et theorie des cubes intersectants," These, Institut National Polytechnique de Grenoble, France, Mar. 1991.
[6] E. I. Goldberg, "Metody bulevogo kodirovaniya znachenij argumentov predicatov (Methods of Boolean encoding of predicate argument values)," Inst. Eng. Cybern., Academy of Sciences of Belarus, Preprint no. 3, 1991 (in Russian).
[7] E. I. Goldberg, "Matrix formulation of constrained encoding problems in optimal PLA synthesis," Inst. Eng. Cybern., Academy of Sciences of Belarus, Preprint no. 19, 1993.
[8] E. I. Goldberg, "Face embedding by componentwise construction of intersecting cubes," Inst. Eng. Cybern., Academy of Sciences of Belarus, Preprint no. 1, 1995.
[9] B. Holmer, "A tool for processor instruction set design," in Proc. European Design Automation Conf., Sept. 1994, pp. 150–155.
[10] F. Y.-Chung Mang, "Lower bounds for face hypercube embedding," EE290ls Project Report, May 1997.
[11] G. De Micheli, R. Brayton, and A. Sangiovanni-Vincentelli, "Optimal state assignment for finite state machines," IEEE Trans. Computer-Aided Design, pp. 269–285, July 1985.
[12] A. Saldanha, T. Villa, R. Brayton, and A. Sangiovanni-Vincentelli, "Satisfaction of input and output encoding constraints," IEEE Trans. Computer-Aided Design, vol. 13, pp. 589–602, May 1994.
[13] G. Saucier, C. Duff, and F. Poirot, "State assignment using a new embedding method based on an intersecting cube theory," in Proc. Design Automation Conf., June 1989, pp. 321–326.
[14] C.-J. Shi and J. Brzozowski, "An efficient algorithm for constrained encoding and its applications," IEEE Trans. Computer-Aided Design, pp. 1813–1826, Dec. 1993.
[15] J. Tracey, "Internal state assignment for asynchronous sequential machines," IRE Trans. Electron. Comput., pp. 551–560, Aug. 1966.
[16] T. Villa, T. Kam, R. Brayton, and A. Sangiovanni-Vincentelli, Synthesis of FSMs: Logic Optimization. New York: Kluwer Academic, 1997.
[17] T. Villa and A. Sangiovanni-Vincentelli, "NOVA: State assignment for optimal two-level logic implementations," IEEE Trans. Computer-Aided Design, vol. 9, pp. 905–924, Sept. 1990.
[18] S. Yang and M. Ciesielski, "Optimum and suboptimum algorithms for input encoding and its relationship to logic minimization," IEEE Trans. Computer-Aided Design, vol. 10, pp. 4–12, Jan. 1991.
[19] A. D. Zakrevskii, Logicheskii Sintez Kaskadnykh Skhem (Logic Synthesis of Cascaded Circuits). Nauka, 1981 (in Russian).

Evguenii I. Goldberg received the M.S. degree in physics from the Belorussian State University in 1983 and the Ph.D. degree in computer science from the Institute of Engineering Cybernetics of the Belorussian Academy of Sciences in 1995.

From 1983 to 1995, he worked at the Laboratory of Logic Design of the Institute of Engineering Cybernetics as a Researcher. From 1996 to 1997, he was a Visiting Scholar at the University of California at Berkeley. Currently, he works at the Cadence Berkeley Laboratories as a Research Scientist. His areas of interest include logic design, test, and verification.


Tiziano Villa studied mathematics at the University of Milano, Italy, the University of Pisa, Italy, and the University of Cambridge, U.K., and received the Ph.D. degree in electrical engineering and computer science from the University of California at Berkeley in 1995.

He worked in the Integrated Circuits Division of the CSELT Laboratories, Torino, Italy, as a Computer-Aided Design Specialist, and then for many years he was a Research Assistant at the Electronics Research Laboratory, University of California, Berkeley. In 1997, he joined the PARADES Laboratories, Rome, Italy. His research interests include logic synthesis, formal verification, combinatorial optimization, and automata theory. His contributions are mainly in the area of combinational and sequential logic synthesis. He coauthored the books Synthesis of Finite State Machines: Functional Optimization (Boston, MA: Kluwer Academic, 1996) and Synthesis of Finite State Machines: Logic Optimization (Boston, MA: Kluwer Academic, 1997).

Robert Brayton (M’75–SM’78–F’81) received theB.S.E.E. degree from Iowa State University, Ames,in 1956 and the Ph.D. degree in mathematics fromMassachusetts Institute of Technology in 1961.

From 1961 to 1987 he was a member of theMathematical Sciences Department of the IBM T. J.Watson Research Center, Yorktown Heights, NY. In1987 he joined the E.E.C.S. Department of Berke-ley, where he is a Professor and Director of theSRC Center of Excellence for Design Sciences.He holds the Edgar L. and Harold H. Buttner

Endowed Chair in Electrical Engineering in the E.E.C.S. Department atBerkeley. Past contributions have been in analysis of nonlinear networks, andelectrical simulation and optimization of circuits. Current research involvescombinational and sequential logic synthesis for area/performance/testability,asynchronous synthesis, and formal design verification. He has authored over300 technical papers, and seven books.

Dr. Brayton is a member of the National Academy of Engineering, anda Fellow of the IEEE and the AAAS. He received the 1991 IEEE CAStechnical achievement award, the 1971 Guilleman-Cauer award, and the 1987Darlington award. He was the editor of the Journal on Formal Methods inSystems Design from 1992 to 1996.

Alberto L. Sangiovanni-Vincentelli (SM'81–F'83) received the electrical engineering and computer science degree "Dottore in Ingegneria" (summa cum laude) from the Politecnico di Milano, Milano, Italy, in 1971.

He was an Assistant Professor from 1971 to 1974 and a "Professore Incaricato" (Associate Professor) from 1974 to 1976 at the Politecnico di Milano. In 1980–1981, he spent a year as a Visiting Scientist at the Mathematical Sciences Department of the IBM T. J. Watson Research Center, Yorktown Heights, NY. In 1987, he spent six months at the Massachusetts Institute of Technology, Cambridge, as a Visiting Professor. He is a Professor of Electrical Engineering and Computer Sciences at the University of California at Berkeley, where he has been on the Faculty since 1976 and has held Visiting Professor positions at the University of Torino, University of Bologna, University of Pavia, University of Pisa, and University of Rome. He is a cofounder of Cadence and Synopsys, the two leading companies in the area of Electronic Design Automation. He was a director of ViewLogic and Pie Design System and Chair of the Technical Advisory Board of Synopsys. He is on the Board of Directors of Cadence, the founder of the Cadence Berkeley Laboratories and of the Cadence European Laboratories, and a member of their Management Boards. He is the Scientific Director of the Project of Advanced Research on Architectures and Design of Electronic Systems (PARADES), a European Group of Economic Interest funded by Cadence, Magneti-Marelli, SGS-Thomson, and the Italian National Research Council. In 1988, he was elected to the National Academy of Engineering. He is the author of over 380 papers and seven books in the area of design methodologies and tools.

Dr. Sangiovanni-Vincentelli received the Distinguished Teaching Award of the University of California in 1981. He received the worldwide 1995 Graduate Teaching Award of the IEEE (a Technical Field Award for "inspirational teaching of graduate students"). He has received three Best Paper Awards (1982, 1983, and 1990) and a Best Presentation Award (1982) at the Design Automation Conference and is the corecipient of the Guillemin-Cauer Award (1982–1983) for novel technical work, the Darlington Award (1987–1988) for the best paper bridging theory and applications, and the Best Paper Award of the Circuits and Systems Society of the IEEE (1989–1990). He has been granted a three-year VIP award from the Italian National Research Council, the award given to an Italian scientist working abroad who has honored Italy with his scientific contributions. He was the Technical Program Chairperson of the International Conference on CAD and its General Chair. He has been on the program committees of several conferences and has been on the editorial boards of the IEEE Transactions on Circuits and Systems, the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, and several other archival journals.