

Constrained random testing is the predominant method used to verify chips today. It has two advantages over directed testing. First, randomness exercises cases the verification engineer or designer never explicitly thought of. Second, automated random testing produces far more test vectors per unit of human effort than manual directed testing.

There are several variants on the basic concept of random testing. Unconstrained random testing generates random values for all input variables without regard for legality. Constrained random testing is used when the legal set of input values is a subset of the entire input space. Over the last dozen years, hardware verification languages (HVLs) have enabled constraint-based random testing, which uses static constraints and constraint solvers to perform constrained random testing.
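As a rough illustration, here is a minimal SystemVerilog-style sketch of a static constraint (the transaction class and its fields are invented for the example): the constraint block states what is legal, and the solver decides how to pick values each time randomize() is called.

```systemverilog
// Minimal sketch; the transaction class and its fields are illustrative.
class bus_txn;
  rand bit [7:0]  addr;
  rand bit [31:0] data;

  // Static constraint: only the low quarter of the address space is legal.
  constraint legal_addr { addr inside {[8'h00:8'h3F]}; }
endclass

module tb;
  initial begin
    bus_txn t = new();
    repeat (4) begin
      // The solver, not the test writer, chooses how to satisfy legal_addr.
      if (!t.randomize()) $error("randomize() failed");
      $display("addr=%0h data=%0h", t.addr, t.data);
    end
  end
endmodule
```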

Constraint-based random testing improves the productivity of writing constrained random testbenches because tests are written at a higher level of abstraction. Static constraints specify the legal input space, but not how to randomize values; that is left to the solver. As designs and protocols have become more complex, constraint-based random testing’s productivity gain has increased. As a result, there is a proliferation of large, complex constraint sets in today’s testbenches.

Increasingly, however, solving these large, complex constraint sets has become a simulation bottleneck because of the large amount of time spent in the solver. Consequently, a lot of time is spent debugging and trying to work around these solver bottlenecks. In the worst case, it is necessary to discard the static constraints and rewrite the randomizer in imperative code, negating the productivity gains of using static constraints.

The fundamental cause of this problem is that uniformly randomizing across a constrained input space is an NP-hard problem. As we have seen before, NP-hard problems are optimization problems; here, uniformity of randomization is the optimization goal. For example, suppose we have a simple constraint: 0 <= X <= 10. If we randomize X a number of times and it returns the value 0 every time, this meets the constraint, but it is not very uniform. Randomization that produces each of the values 0 through 10 with equal probability is optimal. Because NP-hard problems carry the additional burden of requiring an optimal value, not just a satisfying value, they are generally harder to solve than NP-complete problems.
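To make the uniformity point concrete, here is a small sketch of the 0 <= X <= 10 example: any solver behavior that stays in range satisfies the constraint, but only one that spreads hits roughly evenly across all eleven values is uniform. The class name and loop count are illustrative.

```systemverilog
// Illustrative only: tally how a solver distributes values for 0 <= x <= 10.
class item;
  rand int unsigned x;
  constraint c_range { x <= 10; }  // the lower bound comes from 'unsigned'
endclass

module uniformity_check;
  int hist [0:10];
  initial begin
    item it = new();
    repeat (11000) begin
      void'(it.randomize());
      hist[it.x]++;
    end
    // A solver that always returned 0 would satisfy the constraint but put
    // all 11000 hits in hist[0]; a uniform solver gives roughly 1000 per bin.
    foreach (hist[i]) $display("x=%0d : %0d hits", i, hist[i]);
  end
endmodule
```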

What are solutions to this problem?

The obvious solution is to build better solver/randomizers. Companies that produce HVLs already spend a large amount of effort trying to improve their solvers. If it were that easy, it would have been done already.

Barring better solvers, the only other choice is to change methodologies. One change is to ditch static constraints and go back to writing imperative code with the ensuing reduction in productivity. Not a pleasant thought to contemplate.
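For contrast, here is a sketch of what the imperative alternative looks like: the legality logic is hand-coded, and the test writer, not a solver, decides how values are picked. The class name and address range are illustrative.

```systemverilog
// Sketch of imperative randomization: legality is hand-coded and must be
// kept in sync with the spec manually; no constraint solver is involved.
class bus_txn_manual;
  bit [7:0] addr;

  function void manual_randomize();
    // Hand-rolled equivalent of 'addr inside {[8'h00:8'h3F]}'
    addr = $urandom_range(8'h3F, 8'h00);
  endfunction
endclass
```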

An alternative is to keep the static constraints but reduce the complexity from NP-hard to NP-complete. This means giving up the uniform distribution in exchange for potentially very poor, but still legal, randomization. This is feasible considering that the real goal of random testing is to find bugs and improve coverage. A uniform distribution is generally believed to be required to do this. However, if the randomizer knew precisely which values to generate in order to find a bug or hit a coverage point, it could just generate those values, which may or may not be uniformly distributed. If that were possible, randomization would go from being an NP-hard problem to being an NP-complete problem. Technologies such as Nusym’s path tracing technology allow this.
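Here is a rough sketch of the idea, assuming some feedback mechanism (coverage results, path tracing, and so on) has already identified a specific value worth hitting; the corner value 8'h3F is purely hypothetical. Any satisfying assignment is acceptable, and the feedback steers the solver straight at the target rather than asking for uniformity.

```systemverilog
// Sketch: steer randomization at a specific uncovered case instead of
// relying on a uniform spread over the whole legal space. The target
// value (8'h3F) stands in for whatever the feedback identified.
class txn;
  rand bit [7:0] addr;
  constraint legal_addr { addr inside {[8'h00:8'h3F]}; }
endclass

module targeted_tb;
  initial begin
    txn t = new();
    // Any satisfying assignment is acceptable (the NP-complete view)...
    void'(t.randomize());
    // ...but feedback-driven generation aims directly at the hole.
    void'(t.randomize() with { addr == 8'h3F; });
    $display("targeted addr=%0h", t.addr);
  end
endmodule
```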

One thing I think would help implement this strategy is an open solver API in today’s HVLs. This would allow plugging in different solvers for different circumstances and developing custom solvers on a case-by-case basis, rather than being forced into a one-size-fits-all solver.
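Purely as a thought experiment (no such standard interface exists in today’s HVLs), an open solver API might look something like the following: the testbench registers a solver implementation for a given constraint set instead of being tied to the tool’s built-in one. All names and signatures here are hypothetical.

```systemverilog
// Hypothetical sketch only; today's HVLs do not define such an interface.
virtual class solver_if;
  // Produce one satisfying assignment for the named random variables.
  // The signature is illustrative, not a real or proposed standard.
  pure virtual function bit solve(input string vars[],
                                  ref int assignment[string]);
endclass

// A trivial custom solver that could be swapped in for a specific
// constraint set (e.g., one the built-in solver handles poorly).
class first_value_solver extends solver_if;
  virtual function bit solve(input string vars[],
                             ref int assignment[string]);
    foreach (vars[i]) assignment[vars[i]] = 0;  // placeholder choice
    return 1;
  endfunction
endclass
```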
