
Strategies for Mainstream Usage of Formal Verification

Raj S. Mitra

    Texas Instruments, [email protected]

    Abstract

Formal verification technology has advanced significantly in recent years, yet it has seen no noticeable acceptance as a mainstream verification methodology within the industry. This paper discusses the issues involved in deploying formal verification in production mode, and the strategies that may need to be adopted to make this deployment successful. It analyzes the real benefits and risks of using formal verification in the overall verification process, and how to integrate this new technology with traditional technologies like simulation. The lessons described in this paper have been learnt from several years of experience with using commercial formal verification tools in industrial projects.

Categories and Subject Descriptors: B.6.3 [Hardware]: Logic Design - Design Aids - Verification.

    General Terms: Verification, Experimentation.

    Keywords: Formal verification, Emerging technologies.

    1. Introduction

At the last Design Automation Conference, Jan Rabaey made a comment on formal verification [1], mentioning the possibility that someday IEEE will publish a comic strip on adventures in formal verification. His reasoning was that the word "adventure" conjures an image of something largely ad hoc and exploratory, with no production process yet in place, and hence the analogy with the state of affairs of formal verification (FV) today. This comment is not isolated; most users of FV tools would testify to the adventurous nature of FV usage in industrial projects (not on toy designs), and the frustrations it causes. This is in spite of the recent advances seen in this technology and the numerous research papers published in conferences every year. There have been pockets of acceptance: some components of some products (processors and controllers) have utilized FV, but by and large, a general acceptance has been lacking.

What is first needed today is an unbiased root-cause analysis of this systemic failure: why are engineers reluctant to use this promising technology, and whether this technology is usable at all. Secondly, becoming aware of the limitations in technology and management, the requirement is to define a feasible strategy for mainstream adoption of formal verification.

This paper addresses these two topics. It does not propose any new algorithms or abstraction techniques for FV. Instead, the main question asked (and answered to some extent) here is: What does it take to move FV from a fringe technology to mainstream usage (i.e. non-experimental, planned production usage, with predictable outcomes)? Throughout this paper, we are referring to the usage of commercial production-quality tools (not university prototypes); today these are typically in the areas of Model Checking and Sequential Equivalence Checking. We are not referring to Combinational Equivalence Checking, which is widely used in the industry now. In this paper, we have used the acronym FV to generically refer to both Formal Assertion Verification and Sequential Equivalence Checking, FAV to refer specifically to the former, SEC to refer to the latter, and ABS to refer to Assertion-Based Simulation. Although, strictly speaking, ABS is not an FV technique, it is the precursor to FV and hence needs to be treated in the same context.

We start this paper with a brief summary of the current verification process, and then enumerate the main challenges in adopting FV in this context. Then we analyze the advantages (returns on investment) we can expect from FV, and subsequently also discuss that technical advantages are not the sole criteria that determine the success of adoption. Finally, based on the previous discussions, we suggest strategies for making the usage of FV more widespread in the industry than it is today.

The strategies suggested in this article are based on lessons learnt during the application of FV in our organization, over the last five years and over multiple projects, with several best-in-class commercial FV tools. We have cited several lessons taken from research in the diffusion of innovation, but these have been actually put into practice in the context of FV deployments in our organization.

    2. Current Verification Process

To set the context for this paper, we begin with a short introduction to the verification process as it is practiced today. Most of this discussion will not be new to the readers, but it will help in appreciating the problems with, and the value of, adopting FV in a mainstream mode. We will deal only with functional verification, since only that is relevant for formal verification.

A SOC is created by hooking up several IPs. The functionality of each IP is verified separately, and then the SOC / subsystem hookup is verified too (its reset conditions, performance characteristics, etc.). The individual IPs are verified partly by the designers, who run a few directed smoke tests and filter out the most obvious bugs, and mostly by the verification engineers, who do an elaborate job of developing testbenches and eliminating all kinds of bugs from the DUT (design under test) through directed, random, and directed-random tests. Although a fine balance is usually worked out between the two teams for different parts of the verification, finally the responsibility of finding the bugs in the implemented system is shouldered by the verification engineer alone.


The simulation verification process is a traversal of the Quality-Time Continuum (Fig 1-a), whose principal value is a monotonic increase in quality, and not the achievement of an absolute level of quality. The final result is usually a trade-off between the two parameters: quality (measured by coverage data, bug trends, etc.) and the project schedule. Verification is suspended when the curve reaches the desired levels of coverage metrics, or when the clock runs out and it is time to tape out. We say "suspended", and not "stopped", because there is no such thing as a target of achieving complete verification through random and directed-random simulation, due to the very nature of simulations, and hence there is really no end to the simulation process.

This simulation process, essentially random and incomplete, has a strong theoretical foundation in the statistical testing principles expounded by Deming [2]. A testcase is nothing but a sample of the different paths in the module, which is inspected for conformance with the specification. Hence, valuable lessons from past work on product quality measurement and improvement may be applied to the field of verification also. We discuss some of them below.

The plateau / saturating nature of the curve in Figure 1-a can be seen as an extension of Figure 1-b (which is a variation of Figure 33 of [2]) from the context of product manufacturing: the first set of results comes from the removal of special (i.e. specific) causes, and then the system reaches statistical stability and the curve plateaus to a level. Subsequent improvements can come only from addressing the common (i.e. process) causes and reducing the system's variation, i.e. (in this context) by improving the verification process through root-cause analysis of the gaps in verification. Without getting into the details of Deming's theory, the important analogies to be drawn from his work are the following:

1. Definition of quality depends on the agent (user or worker), and may change with time. However, quality can be defined operationally in terms of some indices that can be measured (here, several coverage metrics, functional as well as structural). An operational definition puts communicable meaning into a concept, and has to be agreed upon in advance. The indices should measure the quality of the product (the DUT) as well as of the procedure of measurement (its verification).

2. Inspection, automatic (e.g. simulation) or manual (code reviews), simply does not find all the defectives, especially when they are rare. Hence, failure in execution must be defined in advance, in terms of measurable metrics.

3. Distribution and trends in the measurement indices indicate the health of the system. Control limits on the indices indicate whether the system is in stable statistical control or not, and once it is observed to be stable, no further improvement is possible by quick fixes; improvements to the processes are now necessary to reduce the variation of the system. (A small sketch of such control-limit bookkeeping is given after this list.)

4. The indices are intended to show whether the system is stable; they should never be used to indicate specific defects. In verification, code coverage data should never be used to make specific code fixes; it should only be used to indicate that some broad areas of the code are yet unverified.

5. Statistical stability of the process of measurement is also vital, e.g. will we get the same results if we change the simulator or the engineers? It is also important to give the measurement instrument a good context (environment) to do its work; e.g. measuring an ageing fluid is better done at its source. An example of this in the simulation context is that assertions inserted by designers help improve the quality of verification.
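As an illustration of point 3 above, the following sketch (in Python; the weekly bug counts, the window size, and the 3-sigma individuals-chart limits are all illustrative assumptions, not data or a method from this paper) shows how control limits could be tracked over a bug-discovery trend to judge whether the verification process has reached statistical stability:

```python
from statistics import mean, stdev

# Hypothetical weekly counts of newly detected bugs over a verification effort.
weekly_bugs = [42, 38, 35, 29, 22, 14, 9, 7, 6, 5, 6, 5]

def control_limits(samples):
    """3-sigma individuals-chart limits, floored at zero for count data."""
    m, s = mean(samples), stdev(samples)
    return max(0.0, m - 3 * s), m + 3 * s

# Look at the most recent weeks, where the curve appears to have plateaued,
# and check whether every point stays inside the limits.
window = weekly_bugs[-6:]
lcl, ucl = control_limits(window)

if all(lcl <= x <= ucl for x in window):
    print(f"Bug detection rate is in statistical control ({lcl:.1f}-{ucl:.1f}/week); "
          "further gains need process changes, not more of the same simulation.")
else:
    print("Special causes still present; keep fixing individual issues.")
```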

In summary, the simulation process, random and inherently incomplete, is very similar to the statistical inspection techniques. The latter recommends inspection not to remove all the product defectives, but for its metrics to be used as indicators for identifying areas of process improvement: in recruitment, training, ensuring input quality, design, and supervision. But this is not the purpose for which simulation is used today; its main purpose is to detect bugs, and it is not a surprise that it falls far short of expectation, especially in the context of today's large and complex SOCs.

This recognition of the gap in verification for large SOCs, further sharpened by the attention attracted by the Pentium bug in both academia and industry, has driven the demand for FV in recent years. But it quickly became apparent that FV tools are a discontinuity from current simulation practices: they cannot be used in the same context and by the same users as simulation, and hence their acceptance did not really take off in the verification industry.

    3. Problems with FV Usage

FV introduces a shift (discontinuities) from the current verification practices in the following areas:

[Fig 1: (a) Quality-Time Continuum of Verification by Simulation (coverage or detected bugs vs. simulation time); (b) A Typical Path of Frustration (items found faulty at final audit vs. time): quality first improves by removal of special causes (detecting and fixing individual causes of defects); once the curve stabilizes within control limits, continued improvement is expected but will not happen.]


1. While assertions can also be used with simulations, the styles of coding differ significantly between the assertions written for FAV and the assertions written for ABS. It is necessary to write FAV-friendly assertions in monitor style [3] that are optimized to minimize the introduction of extra state elements. In contrast, assertions and code written for ABS typically pay no heed to extra state elements being added, and this is the coding practice that the verification engineers are currently accustomed to. (A sketch of this contrast in coding style is given after this list.)

2. FV is applicable to small modules only, due to capacity limitations of current technology. However, breaking up the DUT into smaller parts is increasingly being done in the simulation context also, because it is getting increasingly difficult to feed in relevant inputs, to activate selected parts of the code, from a far-away module boundary. This requires that we break up the DUT into smaller logical parts (structurally or temporally), add constraints to manage this separation, and verify the parts separately; the same process applies for both simulation and FV. But the problem gets dramatically more acute for FV, where the sizes of the individual modules are much smaller than what can be handled by simulation.

3. The capacity problem, compounded by the many innovative (and tool-dependent) abstraction techniques available to solve it, manifests in a more serious problem: the lack of predictability of the verification process. Given a DUT, there is no metric (based on the number of gates, the number of state elements, the structure of the design, etc.) which can predict, before starting verification, whether we will finally get some answer or end up with a capacity bottleneck. As a result, unplanned manual abstractions are usually added during the project cycle, thereby breaking the projected verification schedule. This is quite unlike the simulation context, where we know that we will get some results on any DUT, and can make a reasonable experience-based schedule prediction.

4. The partitioning of modules (to handle capacity limitations) usually introduces artificial boundaries, and leads to the writing of constraints on these boundaries. Until these constraints have been refined completely, FV catches any and every hole in them and reports the holes as failures. These are called false failures because they are not design bugs but environment modeling errors, and they have to be fixed before the real design bugs can be caught. In the initial stages of applying FV, the number of reported false failures can be significant, and a lot of time is spent in refining the constraints to eliminate them. (This item is not critical by itself, but manifests in the next two items.)

5. FV requires a very close interaction with the design team, to (a) write assertions for elements inside the module (to overcome the capacity problem), and (b) help in writing and refining constraints (to eliminate false failures). The latter part is very critical, because the very close interaction it demands with the design team makes it a show stopper occasionally.

6. The writing of efficient properties and constraints, and intensively micro-managing their iterations in a very structured way [4], is the only way to squeeze results from FV tools on medium-sized modules. But this practice is quite alien to the current simulation engineers, who are more used to writing large testbench software. Some organizations have solved this problem by maintaining a central team of FV experts, whose services are available for any project that requires the application of FV. These FV experts are intensively trained in writing properties efficiently and in the usage of FV tools, and they are able to share FV expertise and insights (and also reusable assertions and glue logic) across multiple projects. Most of the reported success stories in FV, i.e. the application of FV to solve complex verification problems which have defied a simulation solution, have been brought about through such central teams of FV experts. But this model of application is not scalable, and also brings management problems with it: the central team is often considered alien by the project team, culturally and by organizational hierarchy. Forming teamwork between these two teams is a management challenge, and requires extra initiatives (e.g. appreciation, incentives, and interventions).

7. There is no metric for FV to show its progress: it is either completely done (pass or fail) or not done (run out of capacity, or bounded proofs). (In some restricted situations bounded proofs are useful; e.g. a 5-cycle bounded proof for a processor whose pipeline latency is 4 can be construed as a full proof.) Hence, other metrics are used for FV, to indicate under-specification (not enough properties written) or over-constraining, but these do not correlate with simulation metrics.
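To make the coding-style contrast of item 1 concrete, here is a deliberately language-neutral sketch in Python (not SVA, and not taken from [3] or from this paper; the request/grant protocol, the latency bound, and both checker classes are illustrative assumptions). The point is only the difference in state footprint: a monitor-style checker carries a single small counter, while a simulation-style checker conveniently logs every outstanding transaction.

```python
MAX_LAT = 4  # hypothetical bound: every request must be granted within 4 cycles

class MonitorStyleChecker:
    """FAV-friendly flavour: one small counter is the only extra state."""
    def __init__(self):
        self.wait = 0  # cycles spent waiting for a grant

    def step(self, req, gnt):
        if gnt:
            self.wait = 0
        elif req or self.wait > 0:
            self.wait += 1
        assert self.wait <= MAX_LAT, "request not granted within MAX_LAT cycles"

class AbsStyleChecker:
    """Simulation-friendly flavour: logs every pending request (much more state)."""
    def __init__(self):
        self.pending = []  # issue times of all outstanding requests
        self.now = 0

    def step(self, req, gnt):
        self.now += 1
        if req:
            self.pending.append(self.now)
        if gnt and self.pending:
            self.pending.pop(0)
        assert all(self.now - t <= MAX_LAT for t in self.pending), \
            "request not granted within MAX_LAT cycles"
```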

Some of the above discontinuities in verification practice are related to the usage of the new technology (e.g. writing efficient assertions and constraints, using abstractions to overcome capacity limitations, etc.), and these can be overcome, to a large extent, by training in the theory of FV (to increase general awareness of FV technology), in the methodology to be followed, and in the usage of commercial tools and their specific engines.

The other category of discontinuities, which disrupt the flow of work, are considered more severe by the design and verification engineers, and are the real hurdles in the path to mainstream usage of FV. These are the items: (3) low predictability of the FV process, (5) the requirement for a very close integration with the design team, (6) difficulty of managing a central team, and (7) lack of metrics to show the progress of verification. These process discontinuities, resulting from technology discontinuities, are the real barriers to the mainstream usage of FV.

    4. Expected Benefits from FV

After the analysis of the problems with adopting FV, let us also analyze the benefits that we can expect to get from applying FV. Then we can compare the gain against the pain, and decide if there is an ROI (return on investment) in adopting FV on a larger scale. Many frustrations occur in the usage of FV because most people are not quite clear on what to expect from it.

To the enthusiast and the inexperienced, FV promises the goal of complete verification. But because of the capacity limitations of current tools, and the consequent heavy usage of constraints to narrow down the scope of FV runs, it is effectively rendered as incomplete as simulation. Some would even argue that FV is more incomplete than simulation, because in simulation we at least have a set way to make progress when coverage goals are not achieved. Hence, completeness of verification is certainly not the goal of FV, for the verification of medium to large sized circuits. (However, for very small circuits, i.e. with less than 100 state elements, this can still be considered a realizable goal.)

But before answering the question of FV's ROI, we need to first define the verification targets we are seeking to achieve.


[Fig 2: Bug Detection Rates (detected bugs vs. verification time, with curves for normal simulations, ABS, and FAV)]

Understanding that both simulation and FV are incomplete (one inherently, the other effectively so), the main question applicable to both technologies is: when should we stop the verification? Today's verification signoff criteria are set by simulation coverage metrics that we are well accustomed to, and our verification goal is to meet (or converge towards) them as fast as possible.

This leads to what we think is the key to solving the ROI question of FV: what we need is quicker detection of bugs, not the detection of all bugs. And this is where FV does help, because in the initial stages of the design cycle it (and also ABS) catches bugs much faster than traditional simulations do. Fig 2 shows this relative advantage. It is a variation of the curve shown in [5], but with a curve for FAV also. Simulations start slightly later than FAV, because there is a time associated with creating testbenches for simulations, whereas the writing and validation of assertions can start along with the RTL development. The initial bug detection rate is also higher for FAV, because it is not dependent on creating testcases to hit the relevant situations. However, the FV curve terminates sooner than the simulation one, due to capacity limitations. Further progress, to catch deep corner case bugs with FV, comes only with extreme effort (as discussed earlier). A similar benefit is also observed while using SEC [6]: designers can quickly apply SEC to validate recent changes against their previous golden model, and thus reduce costly iterations between the design and verification teams.

This raises a related question: who should be the FV user, the designer or the verification engineer? So far, we have been talking only about the verification engineer, but if we take a holistic view of system process improvement (as in [2]), then it is evident that the system's variation can be reduced only by creating high-quality designs in the first place, and that indicates that the designers should be the primary users of FV. Hence, we have to consider three very different use scenarios for FV (in sequence):

Designer: The designer writes white-box assertions at the IP level, and proves them (or applies SEC to prove iterative correctness) before handing over the IP to the verification team. The automated usage (described below) may be applied by the designers too.

Automated: Here, the design or verification team mostly relies on the usage of (a) plug-in assertion IPs (plug-and-play pre-built assertion packages for standard protocols), and (b) pre-defined checks within the FV tool (e.g. state machine reachability checks, dead code checks, etc.), to catch as many bugs as early as possible. It is quite likely that some of these checks will force the FV tool into its capacity limitation, but those checks will simply need to be run later through simulations again.
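To give a flavour of such pre-defined checks, the following sketch (an illustration only, not any vendor's tool; the example state machine is hypothetical) performs a simple state-machine reachability check by breadth-first search over an explicitly listed transition relation, reporting states that can never be entered:

```python
from collections import deque

# Hypothetical FSM transition relation: state -> set of possible next states.
transitions = {
    "IDLE":  {"REQ"},
    "REQ":   {"GRANT", "IDLE"},
    "GRANT": {"DONE"},
    "DONE":  {"IDLE"},
    "ERROR": {"IDLE"},   # declared in the design but never entered from reset
}

def reachable(start, edges):
    """Return the set of states reachable from `start` (breadth-first search)."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for nxt in edges.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

unreachable = set(transitions) - reachable("IDLE", transitions)
print("Unreachable states:", sorted(unreachable) or "none")   # -> ['ERROR']
```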

This automated usage is also applicable at the SOC level, for the verification of chip-level connectivities [7], e.g. IP connectivity, pin-muxing, boundary pads, DFT connectivity, etc. Today, the common practice is to use simulations for this purpose, but activating the different paths requires a huge number of testcases and their setup, where all that is required is to validate that the connections of the IP blocks have been made properly. At first glance, FV is not applicable at the chip level. But if all the IPs of the chip are black-boxed, then all that remains are the connections to be verified (which have a minimal number of state elements), and FV can be easily applied there. The generation of the assertions from the specification of the chip connectivity can be automated, thereby reducing human effort further, and the whole process can be mostly reduced to a push-button exercise.
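As a sketch of how such assertion generation might look (the connectivity table, the signal names, and the emitted assertion template are all illustrative assumptions, not a description of any particular flow), a small script can expand a machine-readable list of top-level connections into one checkable property per connection:

```python
# Hypothetical connectivity spec: (driver signal, load signal) pairs,
# as they might be exported from the chip integration spreadsheet.
connections = [
    ("u_cpu.irq_out",  "u_intc.irq_in[3]"),
    ("u_dma.done",     "u_intc.irq_in[7]"),
    ("pad_uart_tx",    "u_uart.tx_pad"),
]

def gen_connectivity_assertions(pairs, clock="clk", reset_n="rst_n"):
    """Emit one SVA-style connectivity check per (source, destination) pair."""
    lines = []
    for i, (src, dst) in enumerate(pairs):
        lines.append(
            f"conn_chk_{i}: assert property (@(posedge {clock}) "
            f"disable iff (!{reset_n}) {dst} == {src});"
        )
    return "\n".join(lines)

print(gen_connectivity_assertions(connections))
```

With the IPs black-boxed as described above, properties of this shape are trivial for an FV tool to prove or falsify.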

Another automated application is a bug-hunting mode, with the use of semi-formal technology [8]. After completing ABS or FAV to the extent feasible, the DUT can be thrown open to the semi-formal tool, and the tool run for days or weeks to find more bugs. Of course, there has to be a measure of the progress of this verification, giving a sense of confidence that the tool is increasing its coverage and is not looping in a local region.

Deep: The verification team writes end-to-end assertions, and attempts to catch deep corner case bugs (e.g. the detection of hard-to-locate silicon bugs, or performance bugs in a subsystem). The skills required for this effort are exceptional and have to be nurtured with care within the organization, through the development of small central teams.

Of these different application scenarios, the automated techniques have the highest chance of becoming mainstream usage, followed by the designer usage (because that requires a change in mindset), and finally the deep and difficult usage. It is likely that many verification engineers are not going to become FV experts to detect corner case bugs.

    5. It is not only ROI

On the hard and long path to converting simulation users to FV, a typical response that one faces, after FV has caught a bug that simulations have not caught, is: "So what? Simulations would have caught it in a few more days." Are there really any bugs that FV can catch but simulations cannot? Technically, no; it is just a matter of time before the right set of vectors is simulated. The trick is to find all the right sets of vectors as quickly as possible, and that is what the ROI of FV is all about. But in practice, it is extremely hard to convince a simulation team with this argument.

Most discussions on why FV adoption is not picking up in the industry focus on the ROI of FV. However, a comparison with the adoption rates of other new technologies [9] shows that most innovations diffuse at a disappointingly slow rate, at least in the eyes of the inventors and technologists who have created them. Reasons for slow adoption vary: the availability of other competing remedies, the change agent not being a prominent figure, compatibility with existing solutions, and vested interests.

In general, the problem of FV adoption is very similar to the problem with the adoption of preventive innovations, a class of innovations that has demonstrated a particularly slow rate of adoption. A preventive innovation is a new idea that an individual adopts now in order to lower the probability of some unwanted future event; examples in this category are the adoption of seat belts in cars, the usage of contraceptives, etc. Not only are the rewards of adoption delayed in time, but it cannot be proven (beyond some statistical data, for which there are as many proponents as opponents) whether it is actually essential. FV seems to fall in this category: it excels in finding corner bugs, but then they are rare anyway, and chances are that they will not be detected in the silicon if they are not caught by simulation.

The knowledge-attitude-practice gap (KAP-gap, [9] page 176) in the diffusion of preventive innovations can sometimes be closed by a cue-to-action: an event occurring at a time that crystallizes knowledge into action. Some cues-to-action occur naturally; the above article reports that many people begin to use contraceptives after they experience a pregnancy scare. In other cases, a cue-to-action may be created by a change agency; some national family planning programs pay incentives to create potential adopters. Similarly, we have witnessed that in the projects where a late corner case bug caused a delay in a chip's schedule or caused a respin, FV's advantages are acknowledged and the team is more willing to adopt FV in future projects. But the same shock may have to be witnessed by other teams before they themselves start using FV as aggressively.

    6. Strategies for FV Deployment

The above discussions on the ROI of FV and the non-ROI problems of adoption lead to the following strategies that can be considered for effective mainstream usage of FV. They are presented in approximately the sequence in which they should be introduced to the project teams, so that the resistance faced is kept to a minimum.

1. Encourage the integration of assertions into simulation, through ABS. This will enable engineers (both verification engineers and designers) to start the habit of writing assertions in their code, and using them without changing the current work flows. These assertions may not be coded efficiently enough for the application of FAV, but at the least this will remove the first level of resistance to the paradigm shift.

2. Use automatically generated assertions to integrate FAV into current simulation flows, to get quick coverage reports on dead code, state machine reachability, etc. This will introduce the teams to the usage of FAV tools in their simplest use models, and will encourage more usage and experimentation.

3. Besides writing assertions for simulation, also make available pre-packaged plug-and-play assertion packages for standard interfaces and protocols. This will not involve any significant departure from the simulation flows, and will also serve as an introduction to running the FAV tools to verify assertions.

4. Automatically generate assertions on chip-level connectivities, through flow automation. This will also not require the verification teams to write assertions, but only to use them to reduce verification cycle time.

5. Along with the adoption of the easy steps, it would be wise to create and sustain a central team of FV experts in the organization, to help with critical verification problems. This is not mainstream usage, in the sense that the scope of application is limited to a small set of people, but the impact on the organization is significant. But, as mentioned earlier, this requires a different management practice.

6. Recognizing that simulation engineers will probably never adopt the rigors of writing efficient assertions and applying FV in full detail, at this time consider shifting the target base from verification to design. Ask designers to write white-box assertions, and to run the FAV and SEC tools themselves. Besides creating a set of white-box assertions which can be used in ABS later on, this will also create a new paradigm shift in the work flow: shift the onus of module-level verification to the designers. In turn, this paradigm shift will make designers aware of the verification problem, thus enabling the development of design-for-verification techniques and other new methodologies for verification. This process improvement may have the highest impact on silicon quality (a la Deming [2]), which is the real goal of verification (not coverage metrics).

7. The last step is a catch-all condition: after trying the other approaches, also try the semi-formal techniques. There are no guarantees (at least with today's tools) of finding all remaining bugs or of proving the absence of bugs, but it is at least worth a try in critical situations.

The above strategies have been summarized in Table 1, where effort and impact (on project schedule) are marked qualitatively as Low, Medium and High. The steps with Low effort can easily be made mainstream activities; the step with High effort is less likely to see mainstream usage. Note that these strategies do not require the advancement of technology beyond what is currently available today (e.g. the integration of FV and simulation through unified coverage metrics may create a far more mainstream adoption of FV, but that technology is not available today).

To accelerate the above steps for adoption, additional management practices may be used, taken from the lessons learnt from the diffusion of other innovations [9]. We have witnessed the validity and usefulness of these lessons in our experience of FV adoption. These include (but are not limited to) the following:

8. Opinion leaders of a social system (and a semiconductor organization is such a system) are influential people who have earned social accessibility based on their technical competence (not necessarily in the same domain as that of the new innovation) and who are recognized to conform to the system's norms. They can lead in the spread of new ideas, or they can head an active opposition. Hence, it is very vital to identify such organizational opinion leaders, and garner their support for the spread of FV usage.

9. Diffusion is essentially a process of communication, and channels are necessary for communication. Sharing of knowledge (e.g. what works well and what does not work well for FV) is very critical for educating people on the latest developments and advances in this area, and these have to be customized for the specific organization. Sharing of best practices on a common website (as in [10], and ensuring that it is not biased towards any specific tool vendor) will help to spread the knowledge across multiple teams.

10. Generally, the fastest rate of adoption of innovations stems from authority decisions (depending, of course, on how innovative the authorities are). If management is convinced of the need for attempting FV and making progress through experimentation, it can plan to reduce the number of simulation engineers in a project and replace them with FV engineers in the same project. Thus the overall verification resource bandwidth remains the same, but a part of it gets allocated to FV. This can help in getting early results in the context of specific projects, and thereby make a case for more proactive usage of FV in subsequent projects. But this is to be handled delicately: early victories through management fiat sometimes turn into subsequent failure.

11. An innovation diffuses more rapidly if it can be customized to the specific needs of the social system. Integration into the flows used in the organization, such that some segments of usage become mostly push-button (as in the checks for SOC connectivity), or the identification of clear rules for applying certain classes of checks (as for protocol checking for the dominant protocols used in the organization), will help in getting more users to quickly start using the FV tool.

12. Technology clusters, consisting of one or more distinguishable elements of technology that are perceived as being closely interrelated, are usually adopted more rapidly together. FV is a collection of technologies (FAV, SEC, semi-formal technologies, formal coverage analysis, etc.) and together they have a higher chance of adoption than by treating each as a separate entity. We have witnessed that those who are interested in FAV are also the ones who are willing to experiment with SEC, which is a relatively new technology. Hence, a collective approach should be taken for encouraging their adoption.

    7. Conclusion

In this paper, we have presented lessons learnt, based on our experience, as a dozen strategies for FV deployment. We have deliberately avoided citing any specific project example, because data from one example can be quite misleading without understanding its full context (completeness of the specification, IP delivery procedure, verification expertise of the team, FV expertise and training undertaken, etc.). But we have attempted to keep this paper enumerative and objective. (See [4] for detailed methodologies and examples of using FV.)

Taking a long view of the matter, the core problems with the adoption of FV are: (1) the naive understanding that it yields a complete verification, and (2) the word "verification" in the name.

FV seems like an adventure sport (difficult and unpredictable), and many people give up on it because of this, only because their expectations of its ROI are wrong. Understanding the real ROI of FV allows proper usage of FV, and also opens up opportunities for the development of advanced tools in this segment. It is also necessary to understand the scope of simulation and know what to expect from it. Hence we have spent considerable effort in this paper to draw analogies with statistical testing principles, which have a strong theoretical foundation and a successful history.

Secondly, we must realize that FV is not really an ideal task for verification engineers, given the expertise and scope of today's verification engineers. Also, applying FV in isolation (i.e. post-design) creates many of the capacity barriers artificially. What is needed is more usage of FV by the designers, and at a higher level of abstraction than RTL. This paradigm shift has been happening (albeit slowly) in several industries during recent years.

But that is just the beginning of process improvements. The growing complexity of SOCs (including hardware-software interactions) may soon outpace the technical advances in hardware verification. What we really need is a dramatic improvement of our processes, to be able to create correct-by-construction designs in the first place, to verify the specification against the intent (not just the implementation against the written specification), and to reduce inspection to being used only to indicate that we are on the right track. Formal methods of specification, modeling, analysis and design are necessary for this, not just formal verification. What this paper really establishes is that FV is a set of technologies for quality improvement and that, understanding the true meaning of Quality, process improvement is the way towards that goal, not just inspection. These processes include tools, flows, and people.

    References

[1] J. Rabaey, "Design without borders: A tribute to the legacy of A. Richard Newton," Keynote at DAC, June 2007.

[2] W. E. Deming, Out of the Crisis, MIT Press, 1982.

[3] K. Shimizu, et al., "A specification methodology by a collection of compact properties as applied to the Intel Itanium processor bus protocol," CHARME, 2001, Scotland.

[4] A. Jain, et al., "Formal Assertion Based Verification in Industrial Setting," Tutorial at DAC 2007. Foils available at http://www.facweb.iitkgp.ernet.in/~pallab/formalpub.html

[5] H. Foster, "Unifying traditional and formal verification through property specification," Designing Correct Circuits (DCC), April 2002, Grenoble.

[6] A. Mathur, V. Krishnaswamy, "Design for verification in system level models and RTL," DAC, June 2007.

[7] S. Roy, "Top Level SOC Interconnectivity Verification using Formal Techniques," Microprocessor Test and Verification Workshop, Austin, December 2007.

[8] P.-H. Ho, et al., "Smart simulation using collaborative formal and simulation engines," ICCAD, November 2000.

[9] E. M. Rogers, Diffusion of Innovations, 5th edition, Free Press, New York, 2003.

[10] Formal verification patterns, http://www.oskitech.com/wiki/index.php?title=Main_Page

Table 1: Strategies for FV Usage

    Description of Step                                Effort  Impact
 1  Assertions in simulations (ABS)                      M       M
 2  Auto-generated assertions to augment
    simulations (dead code, reachability)                L       M
 3  Pre-packaged assertion IPs                           L       H
 4  Auto-generated assertions for SOC connectivity       L       H
 5  Deep FV, with central team                           H       H
 6  Designers using FV (FAV, SEC)                        M       H
 7  Semi-formal bug-hunting*                             L       L

 L = Low, M = Medium, H = High
 * Low effort is after applying ABS and/or FV. Low impact is because there is no metric of progress.
