Verification Horizons


  • 8/9/2019 Verification Horizons Publication Lr

    1/40

We’re finally doing it. My wife and I have been in our house for 13½ years and we’re finally remodeling our kitchen, which my wife has wanted to do for about 11 years now. This is kind of a big step, as those of you who have gone through the exercise can attest, I’m sure. When we had the house built, we chose to limit our budget to keep “expensive” from turning into “exorbitant,” and we’ve been living with formica countertops and a linoleum floor ever since. It’s rather exciting seeing the first steps of converting these into granite and hardwood. As you might imagine, I find myself thinking of how this relates to upgrading a verification methodology because I’m sure that, by now, you know that that’s how my mind works.

When a project like this comes up, my wife and I work well together. I tend to go overboard and want to buy new appliances, add new cabinets and maybe even throw in a new “island” with a sink. Dee, on the other hand, is the more practical one so we’ll be keeping the appliances and cabinets. We are also removing the wallpaper and having the walls painted. So, we’ve got some “legacy IP” we need to work with, but we’ll be able to integrate it all with the new stuff and have a much more useful – and modern – kitchen than we had before. Keep this analogy in mind as you read the following articles.

Our feature article this time comes from my friend Andy Meyer, a fellow Verification Technologist here at Mentor. In “Graph-Based IP Verification in an ARM SoC Environment,” Andy shows how the use of graph-based verification methods not only serves to upgrade the quality of your block-level verification efforts, but also how combining graphs with your UVM environment makes it easier to extend block-level components into an SoC test environment. As you’ll see, this approach lets you reuse your block-level verification IP both to ensure that your design blocks are functioning properly in the SoC and to easily construct a complex SoC verification environment from the block-level components.

With Vision and Planning, You Can Actually Enjoy Upgrading Your Process. Seriously.
By Tom Fitzpatrick, Editor and Verification Technologist

    A PUBLICATION OF MENTOR GRAPHICS — VOLUME 7, ISSUE 3 — NOVEMBER 2011

WHAT’S ON THE HORIZON?

Graph-Based IP Verification: Reuse your block-level verification IP to ensure that your design blocks are functioning properly in the SoC. page 6

Use Scripting Language to Define Powerful Test Cases: use Tcl to simplify the task of customizing testbenches. page 10

Mixed Analog-Digital Verification with OVM and Questa: See how STMicro created analog-friendly OVM verification IP. page 16

Polymorphic Interfaces: Explore the use of abstract classes to replace virtual interfaces for connecting your OVM/UVM testbench to the DUT. page 19

UVM/OVM Methodology Cookbook: This article shows how to set up a UVM environment for verifying layered protocols. page 25

Partners’ Corner with TVS: Helping Dialog Semiconductor adopt OVM for their verification solution. page 29

Are You Driving in the Dark with Faulty Headlights?: Using the UCDB to measure and manage your process. page 34

“I find myself thinking of how this (kitchen upgrade) relates to upgrading a verification methodology because I’m sure that, by now, you know that that’s how my mind works.”

    —Tom Fitzpatrick


Our next article comes from Infineon, who share their thoughts on how to “Use Scripting Language in Combination with the Verification Testbench to Define Powerful Test Cases.” This article shows how to use Tcl (or any scripting language) to simplify the task for designers of customizing testbenches. The article discusses both VHDL and OVM testbenches, but the technique is applicable to UVM as well.

When I was in college, I recall my dad, who was an analog designer, telling me that “it’s an analog world” and that my digital exploits would eventually run into this reality. The truth of my dad’s wisdom is evident in our next article, in which “STMicroelectronics Engineers Find Mixed Analog-Digital Verification Success with OVM and Mentor Graphics Questa ADMS Simulator.” In it, you’ll see how STMicro created analog-friendly OVM verification IP that allowed them to take advantage of OVM in generating realistic analog traffic via Questa ADMS.

We next share a couple of “how to” articles on some of the finer points of OVM/UVM testbench design. The first, “Polymorphic Interfaces: An Alternative for SystemVerilog Interfaces,” by my colleague Shashi Bhutada, builds on prior work that explores the use of abstract classes to replace virtual interfaces for connecting your OVM/UVM testbench to the DUT. This particular application shows how this technique allows you to create a single driver class that can use different specializations of an interface without having to rewrite anything. This is followed by our monthly article from the UVM/OVM Online Verification Methodology Cookbook at the Verification Academy. This article shows how to set up a UVM environment for verifying layered protocols.

In the Partners’ Corner for this issue, our friends from Test and Verification Solutions share their experiences in helping Dialog Semiconductor to adopt OVM for their verification solution. “Successful Adoption of OVM and What We Learned Along the Way” will give you some insight into what you may expect if you’re thinking of doing the same.

Last, but not least, my colleague Darron May asks the important question, “Process Management: Are You Driving in the Dark with Faulty Headlights?” This article builds on Darron’s previous article about Questa’s Unified Coverage Database to show how we’ve developed additional technologies in our Verification Management tool to turn the data in the UCDB into valuable information for you to measure and manage your verification process.

I’m concluding this welcome message in my kitchen. The new countertops will be installed in a few days, but the walls already have a coat of primer on them. The process is underway and we’re really looking forward to it. One of the big additions for me is the new flat-screen high-definition television I set up this afternoon. Whether you’re upgrading your kitchen or your verification process, if you have a vision of where you want to be, the key is to do your planning up front, take it one step at a time, and don’t forget to enjoy the experience. The most important thing is to take that first step.

Respectfully submitted,
Tom Fitzpatrick
Editor, Verification Horizons


Now you don’t have to wait for the next printed issue to get the latest. Hear from the Verification Horizons Team every week at VerificationHorizonsBlog.com


    Page 6

Graph-Based IP Verification in an ARM SoC Environment
by Andreas Meyer, Verification Technologist, Mentor Graphics Corporation

Page 10 Use Scripting Language in Combination with the Verification Testbench to Define Powerful Test Cases
by Franz Pammer, Infineon

    Page 16

STMicroelectronics Engineers Find Mixed Analog-Digital Verification Success with OVM and Mentor Graphics Questa ADMS Simulator
by Alberto Allara, Matteo Barbati, and Fabio Brognara, STMicroelectronics

    Page 19

Polymorphic Interfaces: An Alternative for SystemVerilog Interfaces
by Shashi Bhutada, Mentor Graphics

    Table of Contents November 2011 Issue


Page 25 Layering in UVM
(Extracted from the UVM/OVM Online Methodology Cookbook found on verificationacademy.com)

    Partners’ Corner

Page 29 Successful Adoption of OVM and What We Learned Along the Way
by Mike Bartley, Founder, TVS and Steve Holloway, Senior Verification Engineer, Dialog Semiconductor

    Page 34

Process Management: Are You Driving in the Dark with Faulty Headlights?
by Darron May, Manager of Verification Analysis Solutions, Mentor Graphics Corporation

Verification Horizons is a publication of Mentor Graphics Corporation, all rights reserved.

Editor: Tom Fitzpatrick
Program Manager: Rebecca Granquist

Wilsonville Worldwide Headquarters
8005 SW Boeckman Rd.
Wilsonville, OR 97070-7777

    Phone: 503-685-7000

To subscribe visit: www.mentor.com/horizons

To view our blog visit: VERIFICATIONHORIZONSBLOG.COM


    Figure 2: Reconfigurable UVM Environment

However, this environment relies on the processor bus agent (the AXI master agent in figure 2) to directly drive the ARM bus. As a result, this testbench cannot be used unmodified for an SoC integration-level test, since that would result in conflicts on the processor bus.

SOC VERIFICATION
Functional verification requirements at the SoC level are changing. Where there was little interaction or resource contention between IP blocks, an interconnect test was generally sufficient for SoC functional verification. As more complex IP blocks are being integrated into SoCs, system-level verification is required to measure interactions, resource sharing, utilization, or power concerns. The ability to reuse existing, tested block-level IP verification components can significantly reduce the SoC verification effort.

Part of the goal of a UVM-based approach is to ensure that IP-level verification components have the potential to be instantiated unmodified into an SoC environment. Components such as scoreboards and checkers are good candidates. The IP-level graph stimulus block may also be able to be used unmodified, but the sequence items that were created by the graph can no longer be driven directly onto the processor bus, assuming that the bus only supports one master, and a processor has been instantiated in the SoC environment.

While this changes the connection between the stimulus graph and the IP block, the stimulus itself may not need to change. To connect the graph, the sequence items need to be accessed by the processor, and then driven on the processor bus. Two new components are needed: a software routine running on the processor that reads sequence items and performs the specified operations, and a graph item buffer. Since sequence items tend to contain basic bus operations (a read or write to a specific address with specific data), the software routine performs the operation and then fetches the next sequence item. The item buffer is a verification component, addressable by the processor, that stores sequence items from the graph and delivers them to the processor when accessed, as shown in figure 3.

    Figure 3: Graph Stimulus Driven Through Processor


    Figure 4: System Exerciser

The simplest software routine is a loop that waits for an item to be available from the buffer, pulls the item from the graph item buffer, executes it, and then waits for a new item to be available. For more complex operations, this routine may access multiple items. If coordination is required between existing software and stimulus, then the processor may also write commands that provide status or control to a top-level stimulus graph.
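As a rough illustration, that simplest routine can be sketched in C. The item encoding, buffer variables, and function names below are invented for the sketch and are not from the article; a real routine would poll a memory-mapped graph item buffer rather than an in-memory array, and would loop forever rather than terminating.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical encoding of a sequence item: a basic bus operation. */
typedef struct {
    uint8_t  is_write;   /* 1 = write, 0 = read */
    uint32_t addr;       /* target address      */
    uint32_t data;       /* write data          */
} seq_item_t;

/* Simulated graph item buffer: in a real SoC these would be
 * memory-mapped registers of the verification component. */
#define MAX_ITEMS 16
static seq_item_t item_buf[MAX_ITEMS];
static size_t     items_avail = 0, next_item = 0;

/* Simulated target memory the items operate on. */
static uint32_t mem[256];

static int item_available(void) { return next_item < items_avail; }

/* Execute one basic bus operation. */
static void execute_item(const seq_item_t *it)
{
    if (it->is_write)
        mem[it->addr] = it->data;
    else
        (void)mem[it->addr];   /* read and discard */
}

/* The simplest software routine: wait for an item, pull it from the
 * buffer, execute it, repeat. (Terminates here for illustration.) */
void run_exerciser(void)
{
    while (item_available()) {
        seq_item_t it = item_buf[next_item++];
        execute_item(&it);
    }
}
```

The point of the sketch is only the shape of the loop: fetch, decode, execute, poll again.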

The graph item buffer can be a UVM sequencer that accepts items from any number of stimulus graphs, and has a driver to handshake with the software routine. For bi-directional operation, two buffers can be implemented to allow software to control the verification environment.
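The buffer-side handshake can be pictured as a small ring buffer with an availability flag. The following C sketch uses invented names and a plain array in place of the UVM sequencer/driver pair the article describes; it shows only the push/poll/pop contract between the graph side and the software side.

```c
#include <stdint.h>

/* Minimal ring-buffer sketch of the graph item buffer handshake:
 * the graph side pushes encoded items, the software side polls a
 * "status register" and pops them. Names are illustrative only. */
#define BUF_DEPTH 8
static uint32_t ring[BUF_DEPTH];
static unsigned head = 0, tail = 0;  /* head: next push, tail: next pop */

/* Graph side: returns 0 if the buffer is full (item rejected). */
int buf_push(uint32_t item)
{
    if (head - tail == BUF_DEPTH) return 0;
    ring[head++ % BUF_DEPTH] = item;
    return 1;
}

/* "Status register": does the software side have an item to fetch? */
int buf_item_available(void) { return head != tail; }

/* Software side: pop the oldest item; caller checks availability first. */
uint32_t buf_pop(void)
{
    return ring[tail++ % BUF_DEPTH];
}
```

For the bi-directional case the article mentions, a second buffer of the same shape would carry commands from software back to the top-level graph.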

With this method, all items generated by the stimulus graph will be driven onto the processor bus, with the support of the processor. This approach requires that there is an available address space for the graph item buffer to be accessed by the processor. Because the software will execute several operations to fetch items, the timing between items is likely to change. Back-to-back operations may be difficult to reproduce, and some low-level control that was available through a master agent may be lost.

This approach will allow the IP to be driven from within the SoC integration. The main verification IP (stimulus, check, and coverage) may be completely unmodified, while the block-level environment is reconfigured. A top-layer graph may be used to provide some control and communication between the existing software and the lower-level API stimulus graph.

SYSTEM EXERCISER
When multiple IP blocks have been independently tested and then integrated into the SoC, this method can be used as a system exerciser. For IP blocks that do not share resources, multiple stimulus graphs can be connected directly to the graph item buffer, and each IP-level test can run in parallel as shown in figure 4. Block-level scoreboards and functional coverage are used to measure IP functionality. Additional system-level scoreboards can be added to check overall system operation if desired.

Note that this environment will most likely need to include a top-level graph to coordinate and configure each of the lower-level graphs. This may include waiting until the software is running, coordinating operation for shared resources, and determining overall completion of the lower-level graphs.

Using just the IP-level stimulus graphs, this simple system exerciser allows the IP blocks to run in parallel, possibly in parallel with system software. Resource sharing, bus utilization and contention, and simple system performance can be observed.


Where the IP tests are run unmodified, only tests that do not cause interference with other blocks can be run. This will likely result in a subset of the IP tests being used. If system-level tests are needed that stress IP interactions, this environment provides all the building blocks to add a new SoC-level graph that drives multiple IP blocks in a coordinated fashion. The graph item buffer can be used to allow coordination between the system-level graph and software if needed.

CONCLUSION
This approach provides a method for functional verification of an IP in a reusable environment. This allows for a low-cost method for a verification engineer to re-verify an IP after it has been integrated into an SoC. Existing, tested verification components are used to check that the IP is operating correctly within the target SoC environment.

In an SoC environment that contains multiple independent IP blocks, this approach can be extended to run multiple IP verification suites in parallel, providing a simple system-level exerciser. By adding a top-level stimulus graph for coordination, a more complex SoC-level verification environment can be constructed from the block-level components.


This article focuses on providing a methodology to make the process of writing test cases easier for verifying mixed analog-digital designs. It explains the advantage of using a scripting language together with a Hardware Description Language (HDL) environment.

It shows a way to let HDL engineers use constrained-random test cases without needing deep knowledge of the implemented testbench.

Two possible scenarios for defining your test cases will be demonstrated. One can be used for the traditional VHDL testbench and another for the SystemVerilog (SV) Open Verification Methodology (OVM) environment. For both cases a scripting language will be used to write test cases.

In the end you will see that, with this methodology, design engineers can also fully benefit from verifying their own designs.

Please note that within this article Tool Command Language (TCL) is used as the scripting language. The reason for using TCL is that Questasim is controlled by TCL and also includes some additional built-in TCL commands. Nevertheless, this approach can be used with any scripting language supported by your simulation environment.

WAYS OF MAKING VERIFICATION EASIER FOR HDL ENGINEERS
Nowadays designers are experienced in optimizing circuits for power and area and implementing algorithms effectively. Moreover, they can write very good VHDL testbenches with advanced stimulus generation and file I/O parsers, but they do not have the deep knowledge needed to develop a constrained-random, configurable testbench environment to verify all corner cases of the design.

Besides the designers you will find the verification engineers, who have deep knowledge of modern software design and apply the newest techniques in today’s higher-level verification languages like SV together with the OVM libraries. They are able to generate constrained-random stimuli with automatic checkers and scoreboards, analyzing all kinds of coverage models.

In most cases it is nearly impossible to distinguish between design and verification engineers. In many projects design engineers will do a basic verification of a module on their own. Reasons can be lack of resources, or that the design engineer just wants to make a short, rough verification of his designed module by himself. In order to avoid any misunderstanding during specification or implementation of the design, the four-eyes principle is recommended, meaning a different engineer should verify the block generated by another design engineer.

It is important to emphasize that it is strongly recommended to make the verification process itself as easy, reusable and powerful as possible. In this way only a limited number of verification experts are needed, who will build up the testbench and provide a simple interface for defining test cases in all variations. Such interfaces have to be as simple as possible but also as strong as possible. Furthermore, it should still be feasible to vary the parameters as much as possible and use the randomization features to catch all the corner cases. One common approach is to use the SV OVM language. This methodology already provides easy test case definition by using OVM test files.

“The ovm_test class defines the test scenario for the testbench specified in the test. The test class enables configuration of the testbench and environment classes as well as utilities for command-line test selection. […], if you require configuration customization, use the configuration override mechanism provided by the SystemVerilog OVM Class Library. You can provide user-defined sequences in a file or package, which is included or imported by the test class.” (OVM SystemVerilog User Guide, 2009), pp. 67–68

If the OVM environment is well structured, you can generate all possible test cases within your test file by simply defining some of your parameters (e.g. using the appropriate sequence and correct register settings). In addition, the

Use Scripting Language in Combination with the Verification Testbench to Define Powerful Test Cases
by Franz Pammer, Infineon


modification of such test files can easily be done by the design engineer. Depending on the setup you can define pure randomized test cases in your test file or directed test cases to verify dedicated use cases.

Several limitations should be pointed out with this approach. First, a compilation has to be done before every new test case. Second, reusing a test file for other topics like test engineering is nearly impossible. In addition, it may take some time for design engineers to understand the process of writing test cases with these test files.

With respect to the above-mentioned barriers, another approach was developed. The idea is to use a scripting language in combination with a testbench like SV OVM to verify the ASIC.

TCL VERIFICATION METHODOLOGY
TCL is used by many simulator tools (e.g. Questa, NC-Sim) for controlling the simulator’s behavior. Normally TCL is used in the simulators only to start/stop simulation and generate some logging messages. Since TCL is a very powerful language, you can generate whole test cases by using TCL scripts. To this end the testbench only needs to provide an interface to the TCL script to allow it to take control of the whole simulation flow. The interface will usually be built up by the verification expert who has deep knowledge of the testbench itself. Later, only the TCL script will be implemented by the design engineers to develop the test cases.

This methodology has many advantages, which are described here:

• Same TCL file can be used for both VHDL/Verilog and SV OVM environments. In cases where you have a VHDL/Verilog testbench and in parallel a SystemVerilog OVM testbench, you can use the same TCL files. You just have to take care of the TCL interface.

• If the file is kept simple it can also be used by the CV (Component Verification) and TE (Test Engineering) groups as an input source.

• Recompile is not needed. If your simulator supports TCL you can simply try out your test case line by line within your simulator. You do not need to restart your simulation to change your test case file and try it out again.

• Other tasks or scripts can be called during your test execution.

• Makes debugging easier. Simply think of analyzing some internal signals immediately, doing some calculations to check for correctness, or temporarily logging signals to files during a debug session.

DESCRIPTION OF A TCL VHDL TESTBENCH
In traditional black-box testing strategies the VHDL testbench surrounds the Design Under Test (DUT). The testbench controls all activities at the ports of the DUT. It generates the stimulus and checks all the responses from the DUT. The stimulus generation can be hard-coded in some VHDL behavioral files, or defined in some ASCII text files. For reading in text files an extra file parser is needed which interprets the commands in the text file. The file parser will usually read in and execute the text file line by line. All the used commands need to be defined in the parser. For simple test cases this approach may fulfill all your needs. It can become complicated if you want to use some variables within your text file: you have to let the file parser know how to interpret variables. It will get more and more complicated if you want to do some math or use loops, branches and other conditions within your text file. Thus your VHDL parser will get more and more complex and future adaptations will be nearly impossible.

To overcome these disadvantages of using ASCII text files, you can use simple TCL scripts. In a scripting language you already have a huge amount of functionality (e.g. handling of variables, loading another file, using if/while loops). There is no need to take care of these items yourself.

Figure 1 on the following page gives you an overview of how such TCL scripts can be used within your environment.


Figure 1: Overview of the TCL VHDL Testbench

The TCL Command Interpreter (TCL-CIP) is the interface between the TCL command scripts and the Register Transfer Level (RTL) environment. It contains the libraries & packages for procedure/variable definitions, file I/O functionality, etc. Basically it communicates via a well-defined VHDL TB peripheral, or it may also directly take control of some ports of the DUT, or even some internal signals. The commands and variables which are defined in the CIP can be used in the TCL scripts.

A short example of how such an approach would look is shown next. First the TCL-CIP is illustrated. For simplicity, packages and namespaces are not used here, but even without these techniques you can fully benefit from this methodology.

    The TCL-CIP basically consists of three files.

    1.) Global Signal definitions and variables

# Physical environment signals
set ENV_TEMPERATUR /analog_global_pack/temperature
set ENV_PRESSURE   /analog_global_pack/pressure
set VBAT           /analog_global_pack/vddd_global

# Clock signals
set CLK_SYS /tb/dcore/clk

# Clock period
set T_CLK_SYS "100ns"

Listing 1: TCL Global Signals

    2.) Tool specific procedure definition

As an example, some procedures for the Questa simulator are shown. Whenever the tool version changes, only this file needs to be modified.

# Source file of Questasim-specific commands
proc run_proc { runtime } {
    run $runtime
}

# Set an internal signal, by using the force command
proc set_signal_proc { netname value } {
    force -freeze $netname $value 0
}

# Disable the force of an internal signal
proc noset_signal_proc { netname } {
    noforce $netname
}

Listing 2: TCL Questa Procedures


    3.) Low level procedure definition

Here all the above-mentioned files are loaded, together with the procedure definitions which can be called from the TCL test cases.

# File: cip_procedures.tcl
# Load global signals
do $env(WORKAREA)/units/dcore/tcl/global_signals.tcl
do $env(WORKAREA)/bin/questa_procedures.tcl

set cmd_file $1.tcl   ;# This is the name of the test case file
set log_file $1.log   ;# Logging information will be written in that log file

proc log_test { logfileid logtxt } {
    puts $logfileid $logtxt
    echo $logtxt
}

# LOG: Just a comment which is written to the transcript and to the file
proc LOG { args } {
    global Now
    global tb_g
    set args [join $args]
    set log "\# $args"
    log_test $tb_g(logfileid) $log
}

# Simply delay the simulation by calling the run command
proc DL { time args } {
    …
    run_proc $time
}

# Set a signal directly
proc SS { name args } {
    global Now
    global tb_g
    set log "$Now : SS $name $args"
    log_test $tb_g(logfileid) $log
    upvar $name $name
    set namex [set $name]
    set log [set_signal $namex [lindex $args 0]]
}

# Cancel setting of a signal (NotSet)
proc NS { name args } {
    …
    set log "$Now : NS $name $args"
    log_test $tb_g(logfileid) $log
    set namex [set $name]
    noset_signal $namex
}

set tb_g(logfileid) [open $logfilename w 0777]

# Check if command file exists
if {[file exists $cmd_file] == 0} {
    puts stderr "Cannot open [exec pwd]/$cmd_file"
    exit
}

############## M A I N ##############
do $cmd_file   ;# Call the test case

puts $tb_g(logfileid) "TEST CASE RESULT : $TEST_RESULT"

Listing 3: TCL Basic Procedures

Below is an example of how such a TCL test case could look:

# File: 001_BasicTest.tcl
LOG "Testcase: 000 Basic Test"
LOG "Author: MR.X"
SS MODE_A "01"   ;# Set signal
DL 2500us        ;# Delay for 2500us
LOG "Check power down signals"
for {set i 0} {$i < 10} {incr i 1} {
    CL PD_ADC    ;# Check low
    CL PD_CPU
    DL 100us
}
NS MODE_A        ;# Cancel the Set command
LOG "Check power down signals"
# Wait until the PD_MODE signal changes to 1. If this does not
# happen within 1ms then go on and generate an error


WT PD_MODE 1 1ms   ;# Wait until PD_MODE goes to '1' for 1ms

CH PD_ADC          ;# Check high
CH PD_CPU
LOG "Test finished"

Listing 4: TCL Test Case

    A startup file is needed which will execute your test case.

# File: startup.do
tcl_cip.tcl 001_BasicTest

Listing 5: Startup File

    The startup file is loaded with the vsim command.

vsim -c top_dut_tb-cfg.vhd -do startup.do

Listing 6: Simulator start command

DESCRIPTION OF A TCL SV OVM TESTBENCH
The same approach as mentioned for the TCL-VHDL testbench can now also be used for the SV-OVM environment. Only the TCL-CIP interface needs some adaptations. Additionally, within the OVM environment you need to write an extra TCL Interface OVC. This OVC reads the commands from the TCL-CIP via a virtual interface and takes control over other Open Verification Components (OVCs). Depending on the intention you can add a great deal of flexibility to these TCL scripts:

• Define the basic configuration
• Define which parameters are to be randomized
• Define border values of randomized parameters
• Start a dedicated sequencer
• Force or read back some internal signals
• Do some basic settings of the DUT
• Save functional coverage of your SystemVerilog Assertions (SVA)

If a SV OVM testbench environment already exists, new TCL functionality can easily be added even without having a TCL IF OVC. For the beginning it may be enough only to be able to force some internal signals or to do some internal calculation for other scripts.

Below you see an overview of how to connect your TCL script to the OVM environment. The TCL IF OVC can be modeled in various ways. In our case it is built up like a usual OVC. The monitor observes the TCL virtual interface for any activities, whereas the driver will send some responses to this interface. The communication to the other OVCs can now be handled via ports, or by using the sequencer as a virtual sequencer; in this way we can take control over all other sequencers within the OVM environment.

CONCLUSION / SUMMARY
Using scripting languages like TCL can provide some powerful features for writing test cases. By using a generic TCL interface these scripts can be used for various testbenches, no matter which HDL (Hardware Description Language) is used. Design engineers do not need to have much verification expertise for verifying their designs. Due to the easier test case definition they can also do the basic verification of their design on their own. Additionally, if the TCL scripts are structured well they can be used for other topics too. Think of using these scripts as input for component verification to control the whole equipment. Furthermore, you can reuse them for test engineering to configure the test setup. For this, only the TCL-CIP needs to be adapted.

Depending on the available resources and timeframe, the TCL functionality can be extended at any time. Extending your TCL functionality will not have too much impact on your existing testbench structure.

What I want to point out is that, with this approach, it is never too late to add scripting functionality to your test cases.


DESIGN CHALLENGE
• Verify IP for a R/W channel to be integrated into a STMicroelectronics hard disk component (an SoC)
• Build a verification process from reusable steps

SOLUTION
• Focus on integration into the larger SoC, not just on the IP
• Make maximum use of standards, starting with OVM
• Start writing verification components as soon as design work in RTL is underway
• Use the Mentor Graphics Questa ADMS tool for analog accuracy and easy integration into the digital verification flow

By now it’s a cliché to speak of the rise of digital technology. Follow technology coverage in the media for any length of time and it doesn’t take long to note the tacit assumption that nearly anything with an on/off switch will eventually communicate with the world at large exclusively in strings of 0s and 1s. Of course, as long as the electronics industry is built on harnessing the laws of physics, the importance of analog signals will never go away. Nature speaks in waveforms, not regimented bitstreams. So the challenge, and one that must be repeatedly solved by those building ever more complex semiconductor devices, is how to verify what’s happening at the analog-digital interface.

One recent solution comes from a trio of engineers at STMicroelectronics in Milan, Italy, working on verifying an important bit of intellectual property (IP) for an ST hard disk component. The team’s approach was to combine OVM methodology, the Mentor Graphics Questa ADMS simulator and lots of collaboration among engineers at both companies.

A PRIMER ON HARD DRIVES
In general, hard drives offer a case study in handling the analog-digital hand-off. Consider a stream of binary data to be written to a drive. Those 1s and 0s in the bitstream must be encoded and then output as an analog waveform, a conversion handled by the ST IP. The waveform is imprinted on the magnetic regions of the drive’s spinning disk, or platter, thus storing the binary stream.

To retrieve the data from the drive, the process more or less runs backward. The head of the actuator, the writing and reading element, moves over the disk where the data are stored. The pattern of magnetization on the disk changes the current in the head, and this change can be represented as a waveform. Next comes sampling of this waveform by the ST device and finally a new stream of binary data, with all the algorithms in place to check data integrity and apply data corrections, to be fed back to that emerging digital future we mentioned earlier.

Allara and his colleagues were working to verify IP for a read/write channel to be integrated into the larger ST SoC. Perhaps the biggest challenge was how to meaningfully work across analog and digital domains. Historically, engineers specialize and develop a set of skills relevant to just one domain. Digital verification engineers eschew graphics and spend most of their time writing and compiling vast amounts of code.

Figure 1. Typical hard disk drive; ST's component converts analog waveforms into digital bitstreams and vice versa — critical to reading from and writing to the drive.

Figure 2. At their core, hard disk drives are analog devices, storing data magnetically on tracks — concentric circles placed on the surface of each hard disk platter.

STMicroelectronics Engineers Find Mixed Analog-Digital Verification Success with OVM and Mentor Graphics Questa ADMS Simulator
by Alberto Allara, Matteo Barbati, and Fabio Brognara, STMicroelectronics


By contrast, analog verification engineers look warily at code, as most of their work is done via graphical interfaces. Though references to “mixed-mode simulation” abound, Allara believes the phrase generally refers to implementing an analog design digitally, not truly working across domains.

“The bottom line is that this is complex IP,” says Allara, who earned his master's degree from Politecnico di Milano in 1994 and has been working in the industry since then. “And through the years, the complexity of the digital portion of all of our analog-digital devices has constantly increased, forcing us to look for leading edge verification technology.”

Another challenge was how to approach the verification process in such a way that at least some of the steps and tasks could be subsequently reused. A major issue, one faced by many verification engineers, is how to avoid starting from scratch with each new project. Yes, a custom approach may give a team a chance to demonstrate its technical prowess. However, one-off verification is also undeniably a huge sunk cost, particularly as verification complexity skyrockets. Recall that verification complexity increases at some multiple of the rate of increase of the number of gates, a troubling maxim given that gate counts already number in the hundreds of millions or more.

BEFORE CODING, THINK FIRST
The first step for the ST team had nothing to do with coding or even formal planning. Rather, they worked to begin thinking differently about the verification process. For example, rather than focus exclusively on the read/write IP, they instead chose to consider how this IP could be integrated into the SoC. The team realized that, given the nuanced relationships among the various building blocks that make up the SoC, there is more to verification than just looking at the various signals expected to pass through the IP they were assigned to verify.

Allara and his colleagues also were determined to make maximum use of standards by leaning heavily on Open Verification Methodology, or OVM. (As an aside, it's worth noting that since this ST project was done, Accellera ratified version 1.0 of Universal Verification Methodology, or UVM, a direct derivative of OVM; see http://bit.ly/eK4oUI (PDF).) Allara, who with his nearly 20 years of experience counts as a seasoned pro in IP and SoC verification, says the rise of standards is the biggest change he's experienced in how he approaches his work. It wasn't long ago that engineers wrestled with HDL and other description languages to build and verify RTL designs. Companies cobbled together their own techniques — in fact this state of affairs still exists today — using everything from C code to SPICE simulation.

Standards came into play immediately when the actual verification work began. A second ST team working on the processor's analog front end used VHDL AMS to build a model of the analog domain, which was provided to Allara and his colleagues, Fabio Brognara and Matteo Barbati. The model allowed Allara to close the loop with the RTL describing the digital channel front end and was relatively easy to integrate with other languages, particularly VHDL AMS.

Allara says his approach was to start writing verification components as soon as his ST designer colleagues started developing the digital design in RTL. Accordingly, the first phase in Allara's verification effort was to verify the digital design in greater and greater detail as more of the RTL took shape. Later, when the RTL was finished and Allara had received the model of the analog front end, his team moved to true mixed mode simulations, while reusing much of their verification infrastructure.

The OVM methodology requires specifying verification components for each interface of the device. These components are coordinated via a multi-channel sequencer operating at a higher, more abstracted layer of the OVM environment. The components developed by Allara's team allowed them to program the registers of the read/write IP. For the “read” portion of the IP, they created a component that generated a model stream of bits similar to that which might be read from a hard drive's spinning disc. Another verification component extracted the information after the stream was processed to compare the output to the expected result. The team developed a similar set of components for the “write” portion of the IP.


The simulated bitstream was produced by a pattern generator written in C code and embedded in a verification component. To reuse their environment for mixed mode simulation, the team created a layer that could convert the bitstream into a phased series of various input signals that roughly approximated the expected input to the IP's analog front end. This simulated conversion from analog to digital signal was done via VHDL AMS.

Using the methodology they built, the team was able to find bugs in the analog domain that the ST analog designers themselves missed. The reason, Allara says, is that his team's approach was able to feed the analog front end a pattern very similar to one that might actually be read from a disc. By contrast, in analog-only verification, the simulated patterns are often simple and fairly symmetrical, two characteristics that don't describe the complex, sprawling and often asymmetrical waveforms produced by the magnetic fields.

FINDING BUGS IS THE FINISH LINE OF VERIFICATION
In the end, the goal of verification at the analog-digital interface is to make a discovery, or more precisely, to find problems with the design before it's built and those problems are found by customers. Any methodology that seems to ratify designs as mostly okay is more apt to be evidence of faulty verification than proof of a good design, which is why Allara's success in finding bugs that others had overlooked is perhaps the best evidence of his team's success.


INTRODUCTION
The SystemVerilog language is increasing in adoption for advanced verification techniques. Among other things, this enables the creation of dynamic test environments using SystemVerilog classes. The SystemVerilog virtual interface has been the primary method for connecting class-based SystemVerilog testbenches with a VHDL or Verilog DUT, but this construct has certain limitations, especially when the interface is parameterized. In this article we will discuss some of these limitations and demonstrate an alternative approach called the Polymorphic Interface. The recommended approach can also work generically wherever one uses virtual interfaces, not just with parameterized interfaces.

For the SystemVerilog testbench, we will use OVM infrastructure, but this same discussion applies to UVM testbenches. This article assumes an understanding of SystemVerilog OVM/UVM class and interface syntax.

SYSTEMVERILOG OVM/UVM TESTBENCH
Design demands have grown exponentially over time as our fabrication capabilities and geometry sizes have dramatically improved. Verification methods can be advanced with the SystemVerilog language, which provides a whole gamut of methods. SystemVerilog amalgamates advanced methods, including Object-Oriented Programming (OOP), by extending a well-known design language: Verilog. Since it is an IEEE standard and packaged with RTL simulators, advanced verification is now easily accessible.

The SystemVerilog language can be divided into five parts. One is an enhancement of Verilog RTL design constructs that captures design intent more accurately. The other four parts are constructs meant to deliver the advanced verification features: assertions; object oriented programming (OOP) or the ‘class’ syntax; constrained-random stimulus or automatic stimulus generation; and functional coverage.
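As a small illustration of how three of these verification parts combine (a hypothetical sketch, not from this article — the packet class, constraint, and covergroup names are illustrative):

```systemverilog
// Hypothetical sketch: a class (OOP) with constrained-random
// fields and embedded functional coverage.
class packet;
  rand bit [7:0]  addr;
  rand bit [31:0] data;

  // Constrained-random stimulus: keep addresses in a legal window
  constraint legal_addr { addr inside {[8'h10:8'hEF]}; }

  // Functional coverage: record which address ranges were exercised
  covergroup addr_cg;
    coverpoint addr { bins low  = {[8'h10:8'h7F]};
                      bins high = {[8'h80:8'hEF]}; }
  endgroup

  function new();
    addr_cg = new();
  endfunction
endclass

module tb;
  initial begin
    packet p = new();
    repeat (10) begin
      void'(p.randomize());   // generate constrained-random values
      p.addr_cg.sample();     // accumulate coverage
    end
  end
endmodule
```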

The next big development in this area was the UVM (Universal Verification Methodology) Library, which is an Accellera standard SystemVerilog code-base jointly developed by multiple EDA tool vendors and industry leaders. This library simplifies adoption of the object-oriented methods that are part of SystemVerilog. Simply put, UVM tames the SystemVerilog behemoth. UVM is largely based on its predecessor, the OVM (Open Verification Methodology) Library.

In a traditional testbench, the user is required to use a hardware description language (HDL) to also define the testbench. Unfortunately, an HDL only allows creating static design elements. This has many disadvantages, which include: the testbench interfering with design interactions, causing race conditions; a static testbench preventing plug-n-play; verification reuse meaning cut-n-paste of the test code; limited coverage of the design state space; and no metrics of which states were covered. In an advanced OVM/UVM testbench a user is required to use ‘class’ syntax and create dynamic testbench components which do not rely on pin-level interactions on the DUT but operate at an abstraction above the pin-level interaction called transaction-level (shown in Figure 1).

    Figure 1: Transaction-level versus Pin-level activity

A typical OVM/UVM testbench (see Figure 2) contains a stimulus generator that generates transactions, a driver that converts transactions into pin-level activity, a monitor that transforms pin-level activity back into transaction objects, a coverage collector to measure the transaction activity on the pins, a predictor that predicts what the expected output transaction should be based on the stimulus applied, and a scoreboard that compares the observed output transaction with

Polymorphic Interfaces: An Alternative for SystemVerilog Interfaces
by Shashi Bhutada, Mentor Graphics


the expected transaction. This approach has the following advantages: the test does not interfere with the design under test, dynamic test components are plug-n-play, verification reuse is possible for a given standard bus interface protocol, transaction-level code typically runs faster than pin-level interaction, a larger state space can be covered when using constrained random transaction generation, and real metrics can be obtained by observing transactions around the testbench.

SYSTEMVERILOG INTERFACES AND VIRTUAL INTERFACES:

In the OVM/UVM dynamic transaction-level testbench one needs to ultimately connect with the DUT at different points. The driver needs access to design pins to generate the right pin-activity based on the transaction. Similarly, the monitor needs access to design pins to observe pin-activity. Dynamic constructs cannot access static design items directly.

SystemVerilog provides the ‘interface’ construct to encapsulate a set of pins on a DUT. The interfaces typically can be design dependent, but often they encapsulate industry standard bus protocols for maximum reuse. A SystemVerilog interface is a static construct and resides on the static side of the testbench.

The SystemVerilog language provides the virtual interface construct that allows one to pass the ‘interface’ handle from the static side to the dynamic side. This connects the class-based dynamic testbench to the static HDL design. Following are the different approaches to pass the interface from the static side to the dynamic side:

• Directly setting the virtual interface handle where needed via a helper function in the class.

• Package-based global storage for the virtual interface handle.
• The preferred approach: saving the virtual interface handle into the OVM/UVM configuration database. In OVM, this requires an extra wrapper class to hold the virtual interface.
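A minimal sketch of the preferred, configuration-database approach in OVM (the interface name my_if, the wrapper class name, and the key "dut_if" are illustrative assumptions, not the article's code):

```systemverilog
// Hypothetical sketch: passing a virtual interface through the
// OVM configuration database via a wrapper object.
import ovm_pkg::*;

interface my_if;            // assumed interface name
  logic clk;
endinterface

class vif_container extends ovm_object;
  virtual my_if vif;
  function new(string name = "vif_container");
    super.new(name);
  endfunction
endclass

module top;
  my_if dut_if();           // static interface instance

  initial begin
    vif_container c = new();
    c.vif = dut_if;                          // capture the handle
    set_config_object("*", "dut_if", c, 0);  // 0 = do not clone
    run_test("test");
  end
endmodule

// In a component's build() the handle would be retrieved and
// unwrapped roughly like this:
//   ovm_object tmp;  vif_container c;
//   if (get_config_object("dut_if", tmp, 0) && $cast(c, tmp))
//     vif = c.vif;
```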

LIMITATIONS WITH VIRTUAL INTERFACES:
The SystemVerilog virtual interface based approach to connect the OVM testbench and the DUT involves passing an interface handle from the top-level module to the OVM testbench. In most use cases virtual interfaces work just fine. As we will see in the example below, virtual interfaces become difficult to use when it comes to parameterized interfaces.

To demonstrate the problem, let's look at the example code below, which tries to use virtual interfaces:

class vif_wrapper #(type T=int) extends ovm_object;
  T m;
  function new(string name="");
    super.new(name);
  endfunction
endclass

interface bus_intf #( int aw, int dw, int tp = 8,
                      string proc = "ppc" ) (
  // port declarations
  ...
);

module top;
  parameter T1 = 10;
  parameter T2 = T1*2;
  parameter T3 = T1*5;

  bus_intf #( .aw(8),  .dw(12), .tp(T2), .proc("arm")  ) intf_arm(...);
  bus_intf #( .aw(16), .dw(32), .tp(T3), .proc("nios") ) intf_nios(...);


  // One must define unique and correct types
  typedef virtual bus_intf #( .aw(8), .dw(12), .tp(T2),
                              .proc("arm") ) vif_arm_t;
  typedef virtual bus_intf #( .aw(16), .dw(32), .tp(T3),
                              .proc("nios") ) vif_nios_t;

  // If using a config object method then one can do the following
  vif_wrapper #(vif_arm_t)  vif_arm;
  vif_wrapper #(vif_nios_t) vif_nios;

  initial begin
    vif_arm  = new();
    vif_nios = new();
    vif_arm.m  = intf_arm;   // Pass the actual interface handle
    vif_nios.m = intf_nios;  // Pass the actual interface handle
    set_config_object("*", "vif_arm",  vif_arm,  0);
    set_config_object("*", "vif_nios", vif_nios, 0);
    run_test("test");
  end
endmodule

The issue is compounded if we want to define a generic agent based on the parameterized interface. Each agent will be instantiated to communicate with two similar but differently parameterized interfaces. As you see above, the interface specialization has to be inside the top module due to the dependencies on the top-level parameters. In such an agent, what is the virtual interface handle inside the driver going to look like? It has to be specific to the specialized type of interface it will be communicating through.

    Here’s the pseudo code:

class driver #(type T = int) extends ovm_driver #(T);
  `ovm_component_param_utils(driver #(T))
  transaction item;
  virtual intf (...) vif;  // which specialization should this be?
endclass

POLYMORPHIC INTERFACE:
The Polymorphic Interface approach provides a very elegant solution for sharing parameterized interfaces. This approach is also known as the “Abstract/Concrete Class” approach or “Two Kingdoms” approach. This approach involves defining a pure virtual (i.e. “abstract”) class and then making it concrete inside the scope of the SystemVerilog interface (or module). The abstract class in essence provides the “interface”-like construct in Java, and each concrete class inside the SystemVerilog interface (or module) provides functionality for the abstract functions in the context of the SystemVerilog interface (or module).

One could have multiple levels of inheritance hierarchy, each attached to a different type of SystemVerilog interface (or module). The OVM testbench will use the abstract class handle and, via polymorphism, the function calls get forwarded to the concrete class functions inside the right interface - hence the name “polymorphic interface.” As we have seen, the virtual interface is not polymorphic. In the example below, the polymorphic interface approach elegantly solves the issue with parameterized interfaces.
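To make the contrast with the virtual interface concrete, a driver can hold a handle of the abstract class type instead of a typed virtual interface, so its code no longer depends on any particular parameterization. This is a hypothetical sketch, not the article's code; abs_pkg and abstract_c correspond to the abstract class defined in Step 1 below, and the address and data values are arbitrary:

```systemverilog
// Hypothetical sketch: a driver that talks to the DUT through an
// abstract class handle rather than a typed virtual interface.
import abs_pkg::*;  // assumed package containing abstract_c

class driver extends ovm_driver;
  `ovm_component_utils(driver)
  abstract_c api;  // works for ANY specialization of the interface

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  task run();
    // Calls dispatch polymorphically to the concrete class that
    // lives inside whichever interface instance was configured.
    api.wr_cycle(32'h10, 32'hDEAD_BEEF);
  endtask
endclass
```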

CREATING POLYMORPHIC INTERFACE:
Following are the steps involved in creating a Polymorphic Interface:

Step 1: Create an abstract or virtual class:

virtual class abstract_c extends ovm_object;
  function new(string name);
    super.new(name);
  endfunction
  pure virtual task wr_cycle(int addr, int data);
  pure virtual task rd_cycle(int addr, ref int data);
endclass

package abs_pkg;
  `include "ovm_macros.svh"
  import ovm_pkg::*;
  `include "abstract_c.svh"
endpackage
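The concrete-class step described above can be sketched roughly as follows (all interface, signal, and class names here are illustrative assumptions, not the article's code): the abstract class is extended inside the interface, where the implementation can drive the interface's own signals regardless of how the interface is parameterized.

```systemverilog
// Hypothetical sketch of the "concrete class" step.
interface bus_intf #(int aw = 8, int dw = 12) (input bit clk);
  logic [aw-1:0] bus_addr;
  logic [dw-1:0] bus_data;

  import abs_pkg::*;

  class concrete_c extends abstract_c;
    function new(string name = "concrete_c");
      super.new(name);
    endfunction
    // The implementations see aw/dw implicitly through the
    // enclosing interface scope, so no virtual interface type
    // specialization is ever needed in the testbench.
    task wr_cycle(int addr, int data);
      @(posedge clk); bus_addr <= addr; bus_data <= data;
    endtask
    task rd_cycle(int addr, ref int data);
      @(posedge clk); bus_addr <= addr; data = bus_data;
    endtask
  endclass

  concrete_c c = new();  // handle published to the testbench,
                         // e.g. via the configuration database
endinterface
```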


Many protocols have a hierarchical definition. For example, PCI Express, USB 3.0, and MIPI LLI all have a Transaction Layer, a Transport Layer, and a Physical Layer. Sometimes we want to create a protocol independent layer on top of a standard protocol so that we can create protocol independent verification components (for example, TLM 2.0 GP over AMBA AHB). All these cases require that we deconstruct sequence items of the higher level protocol into sequence items of the lower level protocol in order to stimulate the bus, and that we reconstruct higher level sequence items from the lower level sequence items in the analysis datapath.

THE ARCHITECTURE OF A LAYERING
In order to do this we construct a layering component. A layering component:

• Must include a child sequencer for each non-leaf level in the layering.
• Must create, connect and start a translator sequence for each non-leaf level in the layering.
• Must have a handle to the leaf-level protocol agent. This protocol agent may be a child of the layering or external to it.
• May include a reconstruction monitor for each non-leaf level in the layering.
• Should create and connect external analysis ports for each monitor contained within the layering.
• Will usually have a configuration object associated with it that contains the configuration objects of all the components contained within it.

CHILD SEQUENCERS
A child sequencer in a layering is simply the usual sequencer for that protocol. Very often an appropriately parameterized uvm_sequencer is quite sufficient. If the higher level protocol has been modeled as a protocol UVC, then the layering should instantiate an instance of the sequencer used by the agent for that protocol so that sequences can be targeted either at the bus agent or the layering.

    For example, the ABC layering below has an A_sequencer and a B_sequencer.

class ABC_layering extends uvm_subscriber #( C_item );
  `uvm_component_utils( ABC_layering )
  ...
  A_sequencer a_sequencer;
  B_sequencer b_sequencer;
  ...
  C_agent c_agent;

  function new(string name, uvm_component parent=null);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    a_sequencer = A_sequencer::type_id::create("a_sequencer", this);
    b_sequencer = B_sequencer::type_id::create("b_sequencer", this);
  endfunction
  ...
endclass

Layering in UVM
(Extracted from the UVM/OVM Online Methodology Cookbook found on verificationacademy.com)


TRANSLATOR SEQUENCES
A sequence which translates from upstream items to downstream items runs on the downstream sequencer but has a reference to the upstream sequencer. It directly references the upstream sequencer to call get_next_item and item_done to get upstream items and tell the upstream sequencer that it has finished processing the upstream sequence item. Between get_next_item and item_done it sends data to and gets data from the lower level sequencer by calling start_item and finish_item. A simple BtoC translator sequence is shown below:

class BtoC_seq extends uvm_sequence #(C_item);
  `uvm_object_utils(BtoC_seq);
  function new(string name="");
    super.new(name);
  endfunction

  uvm_sequencer #(B_item) up_sequencer;

  virtual task body();
    B_item b;
    C_item c;
    int i;
    forever begin
      up_sequencer.get_next_item(b);
      foreach (b.fb[i]) begin
        c = C_item::type_id::create();
        start_item(c);
        c.fc = b.fb[i];
        finish_item(c);
      end
      up_sequencer.item_done();
    end
  endtask
endclass

The run phase is responsible for creating the translator sequences, connecting them to their upstream sequencers and starting them on their downstream sequencers:

virtual task run_phase(uvm_phase phase);
  AtoB_seq a2b_seq;
  BtoC_seq b2c_seq;

  a2b_seq = AtoB_seq::type_id::create("a2b_seq");
  b2c_seq = BtoC_seq::type_id::create("b2c_seq");

  // connect translation sequences to their
  // respective upstream sequencers
  a2b_seq.up_sequencer = a_sequencer;
  b2c_seq.up_sequencer = b_sequencer;

  // start the translation sequences
  fork
    a2b_seq.start(b_sequencer);
    b2c_seq.start(c_agent.c_sequencer);
  join_none
endtask

THE PROTOCOL AGENT
Every layering must have a handle to the leaf level protocol agent. If we are delivering verification IP for a layered protocol, it usually makes sense to deliver the layering with an internal protocol agent. On the other hand, we may be adding a layering for use with a shrink-wrapped protocol agent instantiated elsewhere in the testbench. Under these circumstances we will want the leaf level protocol agent to be outside the layering.

    Internal Protocol Agent

In the case of an internal protocol agent, the layering component inherits from uvm_component and creates a child layering agent:


class ABC_layering extends uvm_component;
  `uvm_component_utils( ABC_layering )
  ...
  A_sequencer a_sequencer;
  B_sequencer b_sequencer;
  ...
  C_agent c_agent;

  function new(string name, uvm_component parent=null);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    a_sequencer = A_sequencer::type_id::create("a_sequencer", this);
    b_sequencer = B_sequencer::type_id::create("b_sequencer", this);
    c_agent = C_agent::type_id::create("c_agent", this);
  endfunction
  ...
endclass

    External Protocol Agent

In the case of an external protocol agent, the layering is a subscriber parameterized on the leaf level sequence item and the agent is not constructed inside the layering. The code introduced above shows the code for an external agent.

    THE ANALYSIS PATH

Really, there is nothing special in the analysis path of a layering. For each layer in the monitoring we provide a reconstruction monitor which assembles high level items from low level items. These reconstruction monitors have an analysis export, which is connected to the analysis ports of the lower level monitor, and an analysis port. This analysis port is connected to an external analysis port and the analysis export of the upstream monitor if there is one.

    An outline of a reconstruction monitor is shown below:

class C2B_monitor extends uvm_subscriber #(C_item);
  // provides an analysis export of type C_item
  `uvm_component_utils(C2B_monitor)
  uvm_analysis_port #(B_item) ap;

  // declarations omitted
  ...

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  function void write(C_item t);
    // reconstruction code omitted
    ...
    ap.write( b_out );
    ...
  endfunction
endclass


The reconstruction monitors are connected up in the normal way:

class ABC_layering extends uvm_subscriber #( C_item );
  `uvm_component_utils( ABC_layering )
  uvm_analysis_port #( A_item ) ap;

  A_sequencer a_sequencer;
  B_sequencer b_sequencer;
  C2B_monitor c2b_mon;
  B2A_monitor b2a_mon;
  C_agent c_agent;

  function new(string name, uvm_component parent=null);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    a_sequencer = A_sequencer::type_id::create("a_sequencer", this);
    b_sequencer = B_sequencer::type_id::create("b_sequencer", this);
    c2b_mon = C2B_monitor::type_id::create("c2b_mon", this);
    b2a_mon = B2A_monitor::type_id::create("b2a_mon", this);
    ap = new("ap", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    c2b_mon.ap.connect(b2a_mon.analysis_export);
    b2a_mon.ap.connect( ap );
  endfunction
  ...
endclass

For more details or to download the examples, go to http://verificationacademy.com/uvm-ovm/Sequences/Layering


ABSTRACT
OVM has gained a lot of momentum in the market to become the dominant verification “methodology” and the indications are that UVM will gain even greater adoption. OVM is built around SystemVerilog and provides libraries that allow the user to build advanced verification test benches more quickly. There is extensive documentation, training and support for how to best develop such test benches and to encourage test bench re-use. However, there is less advice on how to adapt the verification processes on your project to best use OVM, and even less advice on how to do this for company-wide adoption.

In this article we discuss the authors' experiences of a company-wide adoption of OVM. We consider the original motivations for that adoption and the expected benefits, the actual results achieved, and the problems that have been overcome. The aim of the article is to give advice to other companies considering adopting OVM.

WHY ADOPT A METHODOLOGY? AND WHY OVM?
Figure 1 shows a brief history of industry test bench methodologies and goes beyond OVM to UVM.

Dialog had already decided that they wanted to improve their verification process by moving it from a directed testing approach to pseudo-random verification. The main question was the most effective way of making such a transition, and it was clear that adopting an existing methodology that supported pseudo-random verification was the best way forward.

OVM is currently the leading industry pseudo-random verification methodology and has a well planned and supported roadmap to UVM (the Universal Verification Methodology), which is supported by Cadence, Mentor Graphics and Synopsys. Thus the decision by Dialog to adopt OVM was reasonably obvious.

Figure 2 (on the following page), “OVM Verification Hierarchy,” shows how OVM is made up of a class library and methodology. The OVM class library is built on SystemVerilog, which has been standardised (IEEE 1800). The class library is freely distributed and can be run across multiple industry simulators from different vendors. The OVM methodology is then supported by the OVM class libraries. The methodology is also supported in other languages: SystemC® (IEEE 1666) and e (IEEE 1647).

Successful Adoption of OVM and What We Learned Along the Way
by Mike Bartley, Founder, TVS, and Steve Holloway, Senior Verification Engineer, Dialog Semiconductor


    Figure 2: OVM Verification Hierarchy

The fact that the language, library and methodology have been standardised and are supported by multiple companies and vendors brings significant additional advantages to Dialog:

• Availability of training & support: As the adoption of a common methodology gains widespread industry acceptance, the market for services and products based on the standard increases. Thus vendors are able to start providing those services and products.

• Compatibility & availability of 3rd party IP: Similar to training and support, the availability of 3rd party IP increases with growing adoption of a methodology. In particular, vendors are able to produce and sell their verification IP (VIP) to a larger market. It should also be easier for the user to incorporate that VIP into their verification environment if that environment and the VIP are compatible through the standard – a “Write once, run everywhere?” concept. Note also that the OVM standard supports the use of plug-and-play verification IP (VIP) written in SystemVerilog (IEEE 1800), SystemC® (IEEE 1666), and e (IEEE 1647) languages.

• Cross-tool compatibility: The standardisation process should ensure that tools from different vendors support both the methodology and the underlying class libraries, making it easier for users to switch tools and thus vendors more easily.

We can all point to differences in the ways that simulators interpret the semantics of a particular language. Indeed, in many places the language standard does not completely cover the implementation and the tool vendor is left to make their own decision – race conditions being an obvious example. However, these issues are usually minor enough not to block a switch to an alternative supplier.

The ability to switch supplier is vitally important commercially when negotiating with tool vendors. It also means that the user has an alternative should the current supplier cut support for their simulator.

• Reduced ramp up time for subcontractors & new starters: Skilled verification resources remain in high demand and short supply. Use of an industry standard methodology both increases the probability of finding resource with the appropriate skills and knowledge, and reduces the time to ramp those people up in your particular verification environment.

ADVANTAGES OF ADOPTING EXTERNAL VIP
In addition to the above reasons from Dialog, TVS has noticed additional advantages that other clients have seen from the adoption of OVM. The first of these is in the use of 3rd party OVM VIP:

• The ability to more rapidly develop complex test benches
• Independent interpretation (of protocols) leading to higher quality verification of interfaces
• Ensure higher coverage of protocols through the cover points and assertions built into the VIP and mapped to the protocol specification (assuming your VIP supplier does this)
• Easier re-use of VIP between OVM-compliant projects and test benches, and across hierarchies in the same project (e.g. from block to SoC – the VIP is used more for traffic generation in the latter given the interface is already proven)
• Leverage knowledge and experience from others
• Huge potential ROI (return on investment) from the

ROLLING OUT OVM ACROSS THE ORGANISATION

Figure 3, “Dialog OVM test bench structure,” shows a typical OVM-style test bench and this is the style of test bench Dialog chose to adopt. The agents shown could be either internally developed or external VIP (such as the VIP from TVS).


    Figure 3: Dialog OVM test bench structure

In the rest of this section we look specifically at the methods that Dialog used to roll out adoption of OVM across their organisation. It should be noted that this was a multisite roll out. [1]

    The following is the list of steps deployed by Dialog.

1. External training courses & workshops: As discussed above, the fact that Dialog were adopting an industry standard methodology meant that there were external providers of training and workshops.

    2. Internal seminars to consolidate knowledge & discuss best practices: Dialog were of course not starting from scratch. They had engineers with experience from previous jobs; attending industry events; reading online, press and papers; and external training and workshops. Dialog found it useful to consolidate that knowledge via internal seminars where they also discussed best practice.

    3. Set of golden rules for implementing OVM environments: Distilled from the internal seminars and discussions, and from externally available materials.

    4. Internal Wiki for knowledge sharing: This allowed Dialog to have a “living” knowledge base where engineers could add their growing knowledge and practical experiences of using OVM.

    5. Using external resources on OVM: For example, the Mentor Graphics Verification Academy and TVS white papers on deploying OVM VIP.

    6. Set of “template” components: These templates were a good way to encapsulate knowledge in a practical format and were distributed as an example environment to:
    a) Demonstrate best practice
    b) Form the basis of module environments
    c) Allow scripted, automated generation of environments from template code

    7. Code review sessions: These were used to ensure that engineers started on the right track and to give them feedback on their coding style. They were used to continuously improve the code being written and the environments being developed.

    8. Library OVC components created: An internal library was created containing components for potential re-use.

    9. Standard VIP procured: For example, Dialog procured OVM-compliant VIP from TVS.


    PROBLEMS IN EXECUTION

    Dialog overcame the following problems in their execution of OVM adoption.

    1. Problems with implementing checking at the appropriate level of abstraction:
    a) Data & transformation checking in the scoreboard
    b) Timing and protocol checking with SVA
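A hedged sketch of the scoreboard half of point 1, operating on whole transactions rather than pin activity. The transaction class, the predict() transform and the component names are illustrative, not Dialog's code:

```systemverilog
// Data & transformation checking belongs in a scoreboard at
// transaction level; timing/protocol checking stays in SVA.
class my_scoreboard extends ovm_scoreboard;
  `ovm_component_utils(my_scoreboard)

  my_txn expected_q[$];   // predictions awaiting comparison

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  // Hooked to the input monitor's analysis port
  function void write_in(my_txn t);
    expected_q.push_back(predict(t)); // predict() models the DUT transform
  endfunction

  // Hooked to the output monitor's analysis port
  function void write_out(my_txn t);
    my_txn exp = expected_q.pop_front();
    if (!t.compare(exp))
      ovm_report_error("SB", "Output transaction does not match prediction");
  endfunction
endclass
```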

    2. Problems with expending effort on coverage closure metrics:
    a) Appropriate priority setting on coverage goals
    b) Keeping the vplan “live” – linking the coverage model and reviewing the coverage model
    c) If functional coverage is high but code coverage is low, then suspect the completeness of the model
    d) Achievable coverage goals (e.g. defining appropriate coverage buckets for a 32-bit real-time counter)

    3. Problems with appropriate constraints on stimulus. For example:
    a) Don’t randomly toggle test mode signals in functional tests
    b) Use the highly-directed sequences required to arrive at a system state of interest and then use CRV
    Domain-specific knowledge is usually required in these circumstances.
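Point 3 translates naturally into constraint layering: keep non-functional controls quiet by default, and reach the state of interest with a directed preamble before opening up constrained-random generation. The field names, register address and sequence below are invented purely for illustration:

```systemverilog
class bus_txn extends ovm_sequence_item;
  `ovm_object_utils(bus_txn)
  rand bit [31:0] addr;
  rand bit        test_mode;  // DFT control, not a functional signal

  // 3a: functional tests never toggle test mode
  constraint no_test_mode_c { test_mode == 1'b0; }

  function new(string name = "bus_txn"); super.new(name); endfunction
endclass

class directed_then_crv_seq extends ovm_sequence #(bus_txn);
  `ovm_object_utils(directed_then_crv_seq)
  function new(string name = "directed_then_crv_seq"); super.new(name); endfunction

  task body();
    // 3b: highly-directed preamble to reach the system state of interest
    `ovm_do_with(req, { addr == 32'h0000_1004; })  // hypothetical control register
    // ...then constrained-random traffic around that state
    repeat (50) `ovm_do(req)
  endtask
endclass
```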

    4. RTL-centric engineers found it hard to grasp ideas from the software world:
    a) Encapsulation
    b) Inheritance
    c) Polymorphism
    d) The OVM factory & overrides

    TVS have encountered this situation in numerous organisations. OVM often requires a software mindset which RTL engineers find hard to adopt.
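The concepts in point 4 map onto only a small amount of SystemVerilog. Here is a sketch of encapsulation, inheritance, polymorphism and an OVM factory override; the class names are invented for illustration:

```systemverilog
// Encapsulation: pin-level behaviour hidden behind a class API
class base_driver extends ovm_driver #(bus_txn);
  `ovm_component_utils(base_driver)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction
  virtual task drive_txn(bus_txn t); /* normal pin wiggling */ endtask
endclass

// Inheritance + polymorphism: same interface, specialised behaviour
class err_inject_driver extends base_driver;
  `ovm_component_utils(err_inject_driver)
  function new(string name, ovm_component parent); super.new(name, parent); endfunction
  virtual task drive_txn(bus_txn t); /* occasionally corrupt parity */ endtask
endclass

// The OVM factory & overrides: a test swaps the driver type without
// editing the environment code that constructs it, e.g.:
// base_driver::type_id::set_type_override(err_inject_driver::get_type());
```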

    5. Initially the OVM reuse paradigm was not fully leveraged. TVS have found that other companies overcome this through the use of a central VIP library and a “librarian” who is able to actively market the VIP across the company and help engineers in the reuse of the VIP.

    6. Reuse from module level to chip level is non-trivial:
    a) Many issues were found when vertical reuse was done (requiring rewrites of module environments)

    Dialog sees this as a learning process and is now achieving significant vertical reuse.

    7. SV is a “rich” language, so there are many ways to code functionality. For example, it is not always obvious how to:
    a) make an appropriate choice of language constructs
    b) write the interface between module-based code and class-based code

    8. OVM is merely a library of parts and a user guide, so strong adherence to the methodology is needed. Dialog resisted the temptation to create Dialog-specific class libraries, macros or an OVM wrapper layer. Such a wrapper may ease the introduction of OVM, but ultimately the approach is potentially restrictive and requires more ramp-up and longer-term maintenance. Dialog thus had a strong drive to stick with “vanilla” OVM and the approach suggested in the reference guide examples.

    9. Dialog had an expectation that OVM would reduce their verification effort, whereas the reality was that the effort increases (at least until reuse is leveraged and engineers become more familiar and experienced in its usage). The initial perception of the verification engineers was “why do I need to write so much code?” Dialog thus sees the main initial advantage of OVM as improved quality in verification. This will be of high importance to Dialog going forward as the complexity of their designs increases. Dialog also expects to improve efficiency within a relatively short time period.

    Figure 4: Basic Dialog OVM Guidelines

    1. Reusable Verification Components encapsulate a number of:
    • Agents
    • Scoreboards
    • Checks
    • Coverage
    2. Verification Components shall be scalable for module, subsystem or chip level
    3. Verification Components shall be located in SystemVerilog packages
    4. Agents shall follow OVM guidelines
    5. DUT interaction should not be done outside the agent
    6. Synchronization should be handled with ovm_event_pool
    7. Assertion-based checks should be placed in interfaces
    8. TLM should be used wherever possible to communicate data
    9. Configuration should be stored in a config component
    10. Class and Module based code should only communicate via interfaces
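Several of the guidelines above can be seen working together in a minimal skeleton; everything here (names, fields, the event key) is illustrative rather than Dialog's actual code:

```systemverilog
package my_block_pkg;                  // Rule 3: components live in a package
  import ovm_pkg::*;
  `include "ovm_macros.svh"

  class my_cfg extends ovm_object;     // Rule 9: configuration in a config object
    int unsigned num_txns = 10;
    bit is_active = 1;
    `ovm_object_utils(my_cfg)
    function new(string name = "my_cfg"); super.new(name); endfunction
  endclass

  class my_monitor extends ovm_monitor;
    `ovm_component_utils(my_monitor)
    ovm_analysis_port #(ovm_transaction) ap;  // Rule 8: TLM for data

    function new(string name, ovm_component parent);
      super.new(name, parent);
      ap = new("ap", this);
    endfunction

    task run();
      // Rule 6: synchronize via the global event pool, not ad-hoc signals
      ovm_event reset_done = ovm_event_pool::get_global("reset_done");
      reset_done.wait_trigger();
      // ...from here, sample the interface and publish via ap.write().
      // Rules 5/7: DUT pins are touched only through the agent's virtual
      // interface, and assertion-based checks live in that interface.
    endtask
  endclass
endpackage
```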

    THE MAIN RESULTS ACHIEVED

    In this section we first consider the main results achieved by Dialog in their rollout:

    • The engineers were able to get up and running with random stimulus generation and transaction coverage more quickly when they were supplied with reusable templates.

    • Dialog did indeed reap all of the benefits normally associated with a Constrained Random Verification (CRV) methodology: high rates of bug discovery; easier tracking of real progress; managed closure driven by a metric-driven approach.

    • Dialog was able to add coverage collected on directed tests (in passive mode) to that achieved via CRV. This was of high importance as it meant that Dialog were able to exploit the value contained in their legacy tests within their OVM environment.

    Dialog now has a well-defined approach to exploitingOVM and a roadmap for its deployment on future projects.

    RECOMMENDATIONS

    From their experience, TVS and Dialog would recommend the adoption of vanilla OVM with a planned roll out taking most of the points from the Dialog roll-out plan. Realistic expectations need to be set: expect improved verification quality, with improved efficiency following in time after the initial ramp up.

    From experience through working with other adopters of OVM, TVS also recommends a multi-layered approach to building test benches and verification environments:

    • Have OVM experts develop the main infrastructure and test benches.

    • Use verification engineers to develop project sequences.

    • Consider using non-verification engineers to develop the specific scenarios.

    This allows engineers to work at different levels of the test bench according to their need to understand OVM and their project-specific knowledge. For example, at one TVS client a software engineer used the TVS SDCard VIP to set up various scenarios to allow them to test corner cases in their driver code.

    BACKGROUND

    Test and Verification Solutions (TVS, Bristol, UK) delivers an independent testing service that not only reduces costs and time-to-market, but also improves product quality. TVS combines skills and experience in software testing, hardware verification and outsourcing to provide customers with an efficient, well-managed, quality assurance service. The company provides both consultancy and execution services using experienced engineering resources in several locations around the world.

    Dialog Semiconductor creates energy-efficient, highly integrated, mixed-signal circuits optimised for personal portable, short-range wireless, lighting, display and automotive applications. TVS have been helping Dialog with their successful adoption of OVM.

    For more information about TVS, please visit www.testandverification.com

    END NOTE

    [1] Dialog has development sites around the world, including Asia and the US, as well as numerous countries in Europe.


    Figure 1 - Example of an Excel testplan.

    Verification Management (VM) can read and import testplans from any file format, including Excel or Calc spreadsheets, and Word, Adobe or Write documents.

    The utility can be customized to read any format of data into the database, so these test requirements can even come from an external requirements database like IBM Doors or Mentor Graphics ReqTracer. The UCDB itself unifies coverage from multiple verification engines, including analog and digital simulation, formal verification and emulation. This enables the testplan to pull together the results of all verification methods. And the testplan can also carry other planning information such as a feature’s priority, the engineer responsible for verifying it and the verification method that will be used.

    Attaching this kind of information to the plan allows results to be filtered in any number of ways. For example, by breaking down the verification requirements into priority categories, it’s possible to assign numbers to the must-have and nice-to-have features, and those in between. This makes it possible to focus verification on the most important features first. Likewise, by adding information on engineering resources to the testplan, it is possible to track each engineer’s progress in verifying the features for which they are responsible. This allows managers to spot problems and shuffle resources around during the project to ensure that schedules are still met.


    Figure 2 - Showing two queries based on priority of features.

    CONTROLLING MEASUREMENT

    Because metrics from multiple sources are being combined to give an overall measurement of completeness, the testplan tracker must allow users to control calculations within the testplan document. Other attributes such as weights and goals can be attached to the testplan sections to allow further control over how the metrics are combined. By giving a higher weighting to a section of the testplan, it’s possible to increase its contribution to the overall testplan coverage calculation. (For example, if section A has weight 2 and 80% coverage while section B has weight 1 and 50% coverage, the overall figure is (2×80 + 1×50)/3 = 70%.) If a more automated way of weighting sections against each other is required, then other coverage calculation algorithms can be used. (An example is a calculation based upon the number of linked objects.) It’s also important to bear in mind that not all coverage is implemented when a testplan is first generated. Thus, the fact that a coverage object doesn’t yet exist needs to be taken into account. By marking links within the testplan as yet to be implemented, Questa testplan tracking automatically generates coverage objects as part of the merge process. Until they are implemented, these objects then show as being uncovered in the overall testplan coverage data.

    REUSE

    It’s extremely common today for the latest project to use IP or VIP from a past project or external source. Verification of this IP also requires a testplan, hence the need to reuse testplans within new projects. For example, all Mentor Graphics Questa VIP is shipped with testplans that can be read into the Questa testplan tracking tools. This means that support for hierarchical testplans is required, not only to allow reuse but also to make it possible to break down large testplans into more manageable pieces. Because different users and IP vendors rarely provide their testplans in the same file formats, it also must be possible to mix and match testplans from any source document. Questa testplan tracking provides the ability to add links within a document to refer to any other document type. This means that an Excel spreadsheet can have a testplan section that refers to a Word document, or vice versa.

    Another important requirement that enables reuse is the ability to parameterize settings that vary based on environment. For example, some links embedded within a testplan may have path settings for a particular coverage object. If a particular setting refers to a path in a piece of IP, then this path will change when the IP is used in different environments. Likewise, if a specific use of a given block of IP means that a coverage object will never be used, then the object needs to be excluded. Accommodating these kinds of changes requires parameters that can be set based on testplan usage.

    IMPROVING TESTPLAN ENTRY

    Forcing a user to capture their testplan with a particular entry method or proprietary editor creates heartburn. The good news is that building a proprietary testplan editor is not the only way to make capturing testplans easier. Most document applications like Excel, Calc and Word have their own extension languages that make it possible to integrate custom features. Questa testplan tracking provides add-ins that enable creating testplans with the application of choice. For example, the Excel add-in has a testplan creation wizard to make generating a testplan template very straightforward. Working in Excel allows for easy addition and deletion of sections and user attributes, and also automatic re-numbering and formatting. And the Excel testplan in this example interfaces directly to the UCDB for information on the objects available for linking.

    Testplan validation can be run to ensure the plan is in the right format before importing it to Questa, and it is possible to save the testplan directly to a UCDB file, thus completely automating the import process. Once coverage data is available, the add-in is able to annotate coverage information back into the spreadsheet. To ease adoption, it will also regenerate a spreadsheet directly from a testplan that’s already been imported.

    Figure 3 - Excel add-in making creation easy.

    COVERAGE MANAGEMENT

    Another important part of the process is how coverage is captured over regressions throughout the project. The UCDB is used to capture both coverage and test information from single verification runs across all engines. Then UCDB utilities combine, merge or optimize the data via ranking. A merged database can be used to answer all kinds of queries about the effectiveness of a particular testplan section, while at the same time reducing the space needed to store all the single verification runs. The database still holds information such as how and where the test was run, and how long it took. This allows for optimization of test suites by using the ranking utility to either reduce the number of simulations (by finding redundancy) or find a particular set of tests that will run in the shortest possible time while still achieving a particular coverage goal. Data management over multiple regressions and the overall project duration is flexible because the data is available on a test-by-test basis.


    … of source code and the number of lines that have changed from week to week can suggest how design stability is changing. Monitoring the number of tests that are run, the type of tests, and the number that have passed and failed over time also gives us a measurement of completeness. The trend merge allows any user metrics to be added to the UCDB and trended over time with the coverage metrics.

    CONCLUSION

    Mentor Graphics anticipated that the cornerstone of verification solutions would be an optimized and unified database. Thus, UCDB was architected and implemented to allow the unification of all verification data and the development of powerful verification management capabilities, all of which can be found within Questa today. These capabilities include testplan tracking to allow electronic closure of the process with a plan, utilities that allow data to be combined and optimized when generated from multiple tests and engines, and the ability to reduce data over time while still exposing the data needed to see project progress.

    Today, you wouldn’t leave home without your car navigation system to guide you to your destination and give you real-time location information. Questa VM gives you the same visibility and guidance for your verification process.

    Figure 5 - Trending metrics over the duration of the project.


    Editor: Tom Fitzpatrick
    Program Manager: Rebecca Granquist

    Wilsonville Worldwide Headquarters
    8005 SW Boeckman Rd.
    Wilsonville, OR 97070-7777
    Phone: 503-685-7000

    To subscribe visit: www.mentor.com/horizons

    To view our blog visit: VERIFICATIONHORIZONSBLOG.CO