
I gave a talk on the intelligent test generation technology we are developing at Nusym at this year’s HLDVT conference. The term “intelligent testbench” has become one of those terms, like DFM, that is so vaguely defined that it can be used to mean anything anybody wants. So, in the first part of my talk, I defined precisely what we mean when we say that we do intelligent test generation. This was well received, so I decided to post this part of the talk here.

The goal of verification is to demonstrate that a design works for all possible inputs. A testbench is a software environment that can generate a set of one or more possible inputs. For example:

   begin
      // Drive random values onto the primary inputs a, b, and c...
      a = $random;
      b = $random;
      c = $random;
      // ...then pulse the command-valid strobe for one clock cycle.
      cmdv = 1'b1;
      @(posedge clk);
      cmdv = 1'b0;
      @(posedge clk);
      ...
   end

In this example, variables a, b, and c are the primary inputs. A testbench can generate a large number of tests; a test corresponds to a particular assignment of values to each primary input. Values can be randomized over time, so, more generally, a test consists of a sequence of values for each input:

   time 0 1 2 3 4 5 ...
   a =  0 9 5 2 8 5 ...
   b =  9 7 2 4 1 7 ...
   c =  8 1 3 7 2 2 ...

An input sequence defines a point in the input space of the design. The input space is the set of all possible input sequences.
[slide11]

In general, the input space is vast: even for simple designs, the number of points in the input space exceeds the number of atoms in the universe. So, the question is: how do we verify the design, given that there is no hope of exercising all points in the input space?

The first simplification is to limit testing to the legal input space.

[slide12]

The legal input space is vastly smaller than the total input space, but is still vast. So, we need some other way of managing the legal input space. This is where coverage comes in.

A coverage point is defined as a condition that must be true at some point during testing. For example, a branch coverage point is defined by the condition that causes the branch to be taken. A functional coverage point is defined by the set of conditions specifying the functionality to be exercised.
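
For instance, a functional coverage point could be written as a SystemVerilog cover property. This is a minimal sketch only; it reuses the made-up a, b, and cmdv signals from the fragment above, which are not taken from any real design. It is hit whenever a command is issued with equal operands:

   // Hypothetical functional coverage point: a command issued with a == b.
   cmd_equal_operands: cover property (@(posedge clk) cmdv && (a == b));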

We can define the semantics of coverage points in terms of the input space. Semantically, a coverage point is the set of all tests (points in the input space) that hit it. A coverage point is considered hit if at least one of these tests is executed.

[slide15]

A coverage model is a set of coverage points, each one specifying a different subset of the input space. A coverage model has the effect of dividing the input space into regions (note: this does not mean that the coverage points are necessarily disjoint). Generally, the goal is to define coverage points such that exercising one test that hits a coverage point is sufficient to consider all of the functionality defined by that coverage point to have been tested. In this way, coverage points reduce the vast input space to a tractable set of tests that need to be generated to consider the design fully tested.
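
As an illustration, a coverage model over the input a from the earlier fragment might carve its value range into bins, with each bin standing in for one region of the input space. This is a sketch only; the 8-bit width and the bin boundaries are assumptions for illustration, not taken from the talk:

   // Hypothetical coverage model: one test landing in a bin marks that
   // region of the input space as covered.
   covergroup a_cg @(posedge clk);
      coverpoint a iff (cmdv) {
         bins zero    = { 0 };
         bins low     = { [1:63] };
         bins mid     = { [64:191] };
         bins high    = { [192:254] };
         bins max_val = { 255 };
      }
   endgroup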

[slide14]

In the old days, when designers did their own verification, they would define a test plan, which basically was a set of functional coverage points defining a coverage model. Designers had the most insight into how to partition the input space to maximize the probability of finding bugs.

[slide13]

They would then write directed tests (represented by the white dots in the slide below) to hit all test plan items.

[slide17]

With the advent of RTL and synthesis, verification was done by separate engineers who did not have the same insight into the design. Without a good understanding of the design, verification engineers may come up with a completely different test plan.

[slide18]

If they write directed tests, they may get 100% coverage but still end up missing fairly easy-to-find bugs.

[slide19]

The solution to this problem is random testing, specifically constrained random testing, which tests only within the legal input space. Today, constrained random testing is the dominant pre-silicon verification methodology.

Before the advent of specialized Hardware Verification Languages (HVLs), constraining random values was often done naively. For instance, a naive way to constrain a value to lie within a min/max range is as follows:

   function integer rand_range(input integer min, input integer max);
      begin
         // Naive: take an unconstrained random value and clamp it to [min,max].
         rand_range = $random;
         if (rand_range > max)
            rand_range = max;
         else if (rand_range < min)
            rand_range = min;
      end
   endfunction

This would result in a lot of max and min values being generated, but very few in between:

[slide110]

A better solution is constraint-based random testing. Modern HVLs have the ability to specify static constraints on generated values. A built-in constraint solver generates random values that satisfy all constraints.

     class item;
        rand bit [7:0] x, y;
        rand int z;
        constraint c1 {
           x < 100;
           y > 5;
           z == x + y;
        }
     endclass
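
A test then constructs the object and calls randomize(); each call asks the built-in solver for a new solution to all of the constraints. A sketch, assuming the hypothetical item class above:

     item it = new();

     initial begin
        repeat (10) begin
           // randomize() returns 0 if the constraints cannot be satisfied.
           if (!it.randomize())
              $error("randomize failed");
           $display("x=%0d y=%0d z=%0d", it.x, it.y, it.z);
        end
     end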

Static constraints and constraint solving more uniformly distribute values across the legal space, which increases the probability of finding bugs.

[slide111]

We can then combine constraint-based random simulation with a coverage model.

[slide112]
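
In its simplest form this is just a randomize-and-sample loop: generate a constrained-random item, apply it to the design, and record which coverage bins it landed in. A hedged sketch, reusing the hypothetical item class from above; the covergroup and bin boundaries here are made up for illustration:

     // Hypothetical coverage model over the generated stimulus, sampled explicitly.
     covergroup item_cg with function sample(bit [7:0] xv, bit [7:0] yv);
        coverpoint xv { bins lo = { [0:49] };  bins hi = { [50:99] };   }
        coverpoint yv { bins lo = { [6:127] }; bins hi = { [128:255] }; }
     endgroup

     item    it = new();
     item_cg cg = new();

     initial begin
        repeat (1000) begin
           if (!it.randomize())
              $error("randomize failed");
           // ... drive 'it' onto the DUT inputs here ...
           cg.sample(it.x, it.y);   // record which bins this test hit
        end
     end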

The result of this is that high coverage can be achieved fairly easily, but it can be difficult to get coverage closure, which is defined as achieving 100% of reachable coverage. Today, coverage closure is achieved in two ways: 1) directed tests can be written to fill in the missing holes, or 2) the constraints can be biased to try to influence the solver into generating values that fill the holes. Both of these methods are labor intensive, fragile, and require designer insight, which makes coverage closure very painful.
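
The second approach, biasing, usually means layering weighted dist constraints onto the existing ones (or switching constraints on and off) to push the solver toward the uncovered values. A sketch, again using the hypothetical item class; the weights and ranges are made up:

     class biased_item extends item;
        // Weight the upper part of x's range 10x, on the hand-tuned guess
        // that the uncovered bins live up there.
        constraint fill_holes {
           x dist { [0:79] := 1, [80:99] := 10 };
        }
     endclass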

This is where the intelligent testbench comes in. Gary Smith invented this term back in 1998 or thereabouts. He defined it as:

intelligent testbench
the generation of a testbench from a system-level design description…

This is an intractable problem. Taking an existing testbench and automating the generation of tests to fill coverage holes is a much more tractable problem (although still very difficult, in general). There are two properties that an automated test generator must have to be considered intelligent:

intelligent test generation

  1. it must be able to find a test (settings of random variables or other primary inputs) to hit a specified coverage point with probability significantly greater than random.
  2. it must adapt automatically to design and coverage model changes. That is, a design or testbench change may cause the semantically defined set of tests for a given coverage point to change such that a test that was hitting the coverage point no longer hits it. An intelligent test generation tool will be able to find a new test to hit the coverage point with no other changes required to the testbench.