

Constrained random testing is the predominant method used to verify chips today. It has two advantages over directed testing. First, randomness allows testing things not explicitly thought of by the verification engineer or designer. Second, automated random testing is more productive than manual directed testing in terms of the number of test vectors produced per unit of human effort.

There are several variants on the basic concept of random testing. Unconstrained random testing generates random values for all input variables without any consideration of legality. Constrained random testing is used when the legal set of input values is a subset of the entire input space. In the last dozen years, hardware verification languages (HVLs) have enabled the use of constraint-based random testing. Constraint-based random testing uses static constraints and constraint solvers to do constrained random testing.

Constraint-based random testing improves productivity in writing constrained random testbenches because it allows writing tests at a higher level of abstraction. Static constraints specify the legal input space, but not how to randomize values; this is left up to the solver. As designs/protocols have become more complex, constraint-based random testing’s productivity gain has increased. As a result, there is a proliferation of large, complex constraint sets in today’s testbenches.
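To make this concrete, here is a rough Python sketch (not an HVL, and nothing like a production solver) of the idea: the test writer declares which values are legal, and a generic randomizer decides how to produce them. The packet fields, encodings, and constraints are invented for illustration, and the "solver" is just naive rejection sampling.

import random

# A rough sketch (in Python, not an HVL) of constraint-based random testing:
# the test writer declares what is legal, and a generic "solver" decides how
# to produce values.  Here the solver is naive rejection sampling; real HVL
# solvers are far more sophisticated.  The packet fields, encodings, and
# constraints below are invented for illustration.

FIELDS = {
    "length": range(0, 256),
    "kind":   range(0, 4),    # hypothetical encoding: 0=idle, 1=data, 2=ctrl, 3=error
}

CONSTRAINTS = [
    lambda v: not (v["kind"] == 0 and v["length"] > 0),   # idle packets carry no payload
    lambda v: not (v["kind"] == 1 and v["length"] == 0),  # data packets are non-empty
]

def randomize(fields, constraints, max_tries=10_000):
    """Generic randomizer: pick values at random and keep only legal combinations."""
    for _ in range(max_tries):
        candidate = {name: random.choice(dom) for name, dom in fields.items()}
        if all(c(candidate) for c in constraints):
            return candidate
    raise RuntimeError("no legal value found (constraints may be unsatisfiable)")

if __name__ == "__main__":
    for _ in range(5):
        print(randomize(FIELDS, CONSTRAINTS))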

Increasingly, however, solving these large, complex constraint sets has become a bottleneck in simulation due to the large amount of time spent solving them. Consequently, a lot of time is often spent debugging and trying to work around these solving bottlenecks. In the worst case, it is necessary to discard the static constraints and rewrite the randomizer using imperative code, negating the productivity gains of using static constraints.

The fundamental cause of this problem is the fact that uniformly randomizing across a constrained input space is an NP-hard problem. As we have seen before, NP-hard problems are optimization problems. Uniformity of randomization is the optimization goal. For example, suppose we have a simple constraint: 0 <= X <= 10. If we randomize X a number of times and each time it returns the value 0, this meets the constraint, but is not very uniform. Randomization that produces the values 0 through 10 with equal probability is optimal. Because NP-hard problems have the additional burden of requiring an optimal value, not just a satisfying value, they are generally harder to solve than NP-complete problems.
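Here is a small Python illustration of that example. Both randomizers below are legal with respect to 0 <= X <= 10, but only one is anywhere near the uniform distribution we actually want.

import random
from collections import Counter

# The constraint from the text: 0 <= X <= 10.  Both randomizers below are
# legal (every value they return satisfies the constraint), but only one is
# close to the uniform distribution we want.

def degenerate_randomizer():
    return 0                      # always legal, never uniform

def uniform_randomizer():
    return random.randint(0, 10)  # legal and (approximately) uniform

def histogram(randomizer, trials=11_000):
    return Counter(randomizer() for _ in range(trials))

if __name__ == "__main__":
    print("degenerate:", dict(histogram(degenerate_randomizer)))
    print("uniform:   ", dict(sorted(histogram(uniform_randomizer).items())))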

What are solutions to this problem?

The obvious solution is to build better solver/randomizers. Companies that produce HVLs already spend a large amount of effort trying to improve their solvers. If it were that easy, it would have been done already.

Barring better solvers, the only other choice is to change methodologies. One change is to ditch static constraints and go back to writing imperative code with the ensuing reduction in productivity. Not a pleasant thought to contemplate.

An alternative is to keep the static constraints, but reduce the complexity from NP-hard to NP-complete. This means giving up the uniform distribution and accepting potentially very poor, but still legal, randomization. This is feasible considering that the real goal of random testing is to find bugs and improve coverage. It might seem that a uniform distribution is required to do this. However, if the randomizer knew precisely what values to generate in order to find a bug or hit a coverage point, it could just generate these values, which may or may not be uniform. If it were possible to do this, randomization would go from being an NP-hard problem to being an NP-complete problem. Technologies such as Nusym’s path tracing technology allow this.
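As a toy illustration of this trade-off, the sketch below settles for any satisfying assignment of a small, made-up set of Boolean constraints. It makes no promise at all about the distribution of the values it returns; it just answers the decision problem.

from itertools import product

# Toy illustration of settling for *any* legal assignment instead of a
# uniformly random one.  The "solver" below returns the first satisfying
# assignment it stumbles on; it says nothing about distribution.

def first_satisfying(num_vars, constraints):
    """Return the first assignment (tuple of bools) satisfying all constraints, or None."""
    for assignment in product([False, True], repeat=num_vars):
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# Made-up constraints over three variables a, b, c (assignment[0..2]).
constraints = [
    lambda v: v[0] or v[1],         # a or b
    lambda v: not (v[0] and v[2]),  # not (a and c)
]

if __name__ == "__main__":
    # Deterministic and highly non-uniform, but legal.
    print(first_satisfying(3, constraints))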

One thing I think would help to implement this strategy would be to provide an open solver API in today’s HVLs. This would allow plugging in different solvers for different circumstances and developing custom solvers on a case-by-case basis rather than being forced to have a one-size-fits-all solver.
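To be clear, no HVL offers such an API today; the sketch below is purely hypothetical and uses invented names, written in Python only to illustrate the shape a pluggable solver interface might take.

import random
from abc import ABC, abstractmethod
from itertools import product

# Hypothetical sketch only: no HVL exposes an interface like this today, and
# all names here are invented.  The point is that different randomization
# strategies could sit behind one common, pluggable interface.

class ConstraintSolver(ABC):
    @abstractmethod
    def randomize(self, fields, constraints):
        """Return a dict of field values satisfying all constraints."""

class RejectionSolver(ConstraintSolver):
    """General-purpose fallback: sample values and discard illegal combinations."""
    def randomize(self, fields, constraints, max_tries=10_000):
        for _ in range(max_tries):
            candidate = {n: random.choice(dom) for n, dom in fields.items()}
            if all(c(candidate) for c in constraints):
                return candidate
        raise RuntimeError("gave up")

class FirstLegalSolver(ConstraintSolver):
    """Case-specific solver: returns the first legal value it finds, ignoring uniformity."""
    def randomize(self, fields, constraints):
        names = list(fields)
        for combo in product(*(fields[n] for n in names)):
            candidate = dict(zip(names, combo))
            if all(c(candidate) for c in constraints):
                return candidate
        raise RuntimeError("unsatisfiable")

if __name__ == "__main__":
    fields = {"x": range(0, 11), "parity": range(0, 2)}
    constraints = [lambda v: v["x"] % 2 == v["parity"]]
    print(RejectionSolver().randomize(fields, constraints))
    print(FirstLegalSolver().randomize(fields, constraints))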


[Note: this is another in my series on computational complexity for dummies.]

The field of Artificial Intelligence (AI) got its start in 1950 with the famous Turing test. It was thought at that time that, by the year 2000, computers would possess human-level intelligence. Well, this didn’t come to pass and human-level intelligence still seems as far off now as it did then.

One metric back then was the game of chess. It was thought that for a machine to beat the best human at chess would require human-type thinking. This metric was met in 1997 when Deep Blue beat Garry Kasparov. However, as the designers of Deep Blue freely admit, Deep Blue won mostly by brute-force search, enumerating vast numbers of positions. It was clear, and had been for a while, that the chess problem was going to be solved by exploiting computers’ vastly faster speed compared to humans rather than by any form of human-type thinking.

Chess is a particular type of game in which there are two players, with one player moving first, the second making a counter-move, and the players alternating until one player wins or a draw is achieved. The goal is to find a winning strategy. We can formulate this as follows (for player 2 in this case):

for all first moves player 1 makes,
there exists a first move that player 2 makes such that,
for all second moves player 1 makes,
there exists a second move that player 2 makes such that,
...
player 2 wins.
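To see this alternation of quantifiers in code, here is a minimal Python sketch on a toy take-away game (players alternately remove one or two stones; whoever takes the last stone wins), which is obviously nothing like chess in scale. The "there exists a move such that, for all replies, the opponent loses" structure shows up as the recursion.

from functools import lru_cache

# Toy take-away game: players alternately remove 1 or 2 stones and whoever
# takes the last stone wins.  mover_can_force_win embodies the alternation:
# "there exists a move such that, for all replies (i.e. the opponent cannot
# force a win), the mover wins."

MOVES = (1, 2)

@lru_cache(maxsize=None)
def mover_can_force_win(stones):
    """True if the player about to move can force a win from this position."""
    for take in MOVES:
        if take > stones:
            continue
        if take == stones:
            return True                      # taking the last stone wins immediately
        if not mover_can_force_win(stones - take):
            return True                      # opponent is left in a losing position
    return False

def player2_has_winning_strategy(stones):
    # Player 1 moves first; this game has no draws, so player 2 can force a
    # win exactly when player 1 cannot.
    return not mover_can_force_win(stones)

if __name__ == "__main__":
    for n in range(1, 10):
        print(n, player2_has_winning_strategy(n))   # True exactly when n is a multiple of 3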

The computational complexity of problems with this structure (alternating “for all” and “there exists” quantifiers) is in the class PSPACE-complete. Recall that the NP-hard class is the set of problems that tend to be hard. The NP-complete class is a subset of the NP-hard class, and these tend to be the easiest problems to solve in the NP-hard class. PSPACE-complete problems are among the hardest problems in the NP-hard class. In general, there is no easier way to solve them than by enumerating all the solutions until a satisfying one is found. This is why research into chess programs focuses on making search speed as fast as possible and why it takes specialized hardware to beat a human.

But, what has this got to do with verification? At the most abstract level, a verification system consists of two parts: the device-under-test (DUT) and the environment it sits in. The DUT doesn’t do anything unless the environment initiates something by injecting a request. The DUT responds to this in some way. This is analogous to a two player game in which the DUT is one player and the environment is the other. Requests and responses correspond to moves and counter-moves. The question then is: what is analogous to a game in verification?

There are two fundamental categories of properties that all verification checks fall into: safety properties and liveness properties. A safety property is one that says “nothing bad will ever happen” and a liveness property says “something good will eventually happen.” An example of a safety property is that a traffic light should never be green in both directions simultaneously. An example of a liveness property is that, for a given direction, the light should always eventually turn green.
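Checking a safety property against a simulation trace is straightforward; here is a tiny Python sketch using a made-up trace of traffic light states. Note that an empty list of violations says nothing beyond this particular trace.

# Made-up trace of (north_south, east_west) light states from a simulation.
trace = [
    ("green", "red"),
    ("yellow", "red"),
    ("red", "green"),
    ("red", "red"),
]

def violates_safety(step):
    """The safety property: never green in both directions at once."""
    ns, ew = step
    return ns == "green" and ew == "green"

violations = [i for i, step in enumerate(trace) if violates_safety(step)]
# A non-empty list would be conclusive evidence of a bug; an empty list only
# says this particular trace was clean.
print("safety violations at steps:", violations)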

The vast majority of properties that we verify on a day-to-day basis are safety properties. For example, the property that 2+2=4 in an adder is a safety property. It is not possible to prove any type of property using simulation alone. Simulation can demonstrate the existence of safety property violations, but not liveness property violations. For example, suppose we simulated the traffic light controller for a million time steps and the light was always red in one direction. Does this constitute a violation of the liveness property? Not conclusively, because it is possible the light could turn green if we simulated one more time step. At the same time, even if we saw the light turning green, this does not mean that it won’t get stuck red at some point.

It is possible to prove liveness properties using formal verification. How would we formulate the liveness property of the traffic light? One way is to consider it as a game between the cars arriving at the light (the environment) and the traffic light state. Then we could formulate the problem:

for all possible combinations of cars arriving at step 1,
there exists a state of the traffic light at step 1 such that
for all possible combinations of cars arriving at step 2,
there exists a state of the traffic light at step 2 such that
...
if the traffic light state is red,
then in the next state it will be green.

This has the same structure as a game and so is PSPACE-complete. Since PSPACE-complete problems are solvable to a certain degree, we can prove liveness properties, but generally only on very small designs. The most effective use of this is to verify highly abstracted designs, typically when doing protocol verification.
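As a rough illustration of what an exhaustive check looks like, here is a Python sketch over a tiny, hand-written traffic light model (the states and transitions are invented, and real model checkers are vastly more sophisticated). On a finite model, the liveness claim fails exactly when some reachable cycle never visits a green state, so the sketch just looks for such a cycle.

# Toy, hand-written traffic light model: the states and nondeterministic
# transitions are invented (the nondeterminism stands in for environment
# choices such as cars arriving or not).
TRANSITIONS = {
    "ns_green":       {"ns_yellow"},
    "ns_yellow":      {"ns_red"},
    "ns_red":         {"ns_red_waiting", "ns_green"},
    "ns_red_waiting": {"ns_green"},
}
INITIAL = "ns_green"
GREEN_STATES = {"ns_green"}

def reachable(initial, transitions):
    """All states reachable from the initial state."""
    seen, stack = {initial}, [initial]
    while stack:
        for nxt in transitions[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def has_non_green_cycle(transitions, states, green_states):
    """Cycle detection restricted to reachable states where the light is not green."""
    non_green = {s for s in states if s not in green_states}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {s: WHITE for s in non_green}

    def dfs(s):
        color[s] = GRAY
        for nxt in transitions[s]:
            if nxt not in non_green:
                continue
            if color[nxt] == GRAY or (color[nxt] == WHITE and dfs(nxt)):
                return True
        color[s] = BLACK
        return False

    return any(color[s] == WHITE and dfs(s) for s in non_green)

if __name__ == "__main__":
    states = reachable(INITIAL, TRANSITIONS)
    # Liveness ("red always eventually turns green") holds on this finite model
    # iff no reachable cycle avoids green forever.
    print("liveness holds:", not has_non_green_cycle(TRANSITIONS, states, GREEN_STATES))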

This deep connection between the formal verification and AI fields goes further than having similar complexity. The core tool used in formal verification today, SAT solving, emerged from research into AI problems such as planning. The cross-fertilization between these two fields has yielded dramatic improvements in the efficiency of SAT solving over the last 15 years.

The computational complexity class, NP-hard, is at the core of a number of problems we encounter on a daily basis, from loading the dishwasher (how do I get all these pots to fit?) to packing a car for a vacation, to putting together a child’s train tracks.

If we look at these things, they have several things in common. First, they each involve a potentially large number of parts (pots, luggage, pieces of track) that need to be put together in some way. Second, we want to meet some objective, such as fitting all the dishes in the dishwasher. Third, there are a large number of constraints that must be met. In the case of loading the dishwasher, no two dishes can be put in the same place; for N dishes, that alone takes on the order of N^2 constraints to specify, among many others. A fourth characteristic is that we may get close to an optimal solution, but find it difficult, and not obvious, how to get to a better one (just how are we going to fit that last pot in the dishwasher?). Furthermore, getting from a near-optimal solution to an optimal one may involve a complete rearrangement of all the pieces.

One way to solve problems like packing a dishwasher is to view them as a truth table. Each dish can be put in one of, say, 100 slots, in, say, one of ten different orientations. This results in 1000 combinations, requiring 10 bits. If there are 40 dishes, 400 bits are required to represent a full configuration of dishes in the dishwasher, so the table has 2^400 entries. The resulting truth table is vast. Each entry in the table indicates how much space is left in the dishwasher if dishes are put in according to the configuration of that entry. A negative number indicates an infeasible solution. There will be many invalid configurations which have two or more dishes occupying the same location. We give all of these entries a large negative number.
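Here is a rough Python sketch of that encoding; the capacity and per-dish space numbers are made up purely for illustration. Note how a randomly chosen entry is almost always one of the large-negative, infeasible ones.

import random

# Sketch of the encoding described above: each dish gets a slot (0-99) and an
# orientation (0-9), i.e. 1000 combinations and about 10 bits per dish; one
# configuration of all 40 dishes is one entry of the (enormous) truth table.
# The capacity and per-dish space numbers are invented for illustration.

NUM_DISHES, NUM_SLOTS, NUM_ORIENTATIONS = 40, 100, 10
CAPACITY = 100   # hypothetical total space in the dishwasher

def random_configuration():
    """One table entry: a (slot, orientation) pair for each dish."""
    return [(random.randrange(NUM_SLOTS), random.randrange(NUM_ORIENTATIONS))
            for _ in range(NUM_DISHES)]

def score(config):
    """Space left over, or a large negative number for infeasible configurations."""
    slots = [slot for slot, _ in config]
    if len(set(slots)) != len(slots):                    # two dishes in the same slot
        return -10**9
    used = sum(1 + orient % 3 for _, orient in config)   # made-up space model
    return CAPACITY - used

if __name__ == "__main__":
    # A randomly chosen entry is almost always one of the infeasible ones.
    print([score(random_configuration()) for _ in range(5)])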

The resulting table describes a landscape that is mostly flat with hills sparsely scattered throughout. We can also imagine that this landscape is an ocean in which negative values are under water and positive values represent islands in the ocean. The goal is to find the highest island in the ocean. We start in some random location in the ocean and start searching. We may find an island quickly, but it may not be the highest one. Given the vastness of the ocean, it is understandable why it can take a very long time to find a solution.
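The sketch below captures the "fog": a local search that can only evaluate the point it is standing on and its immediate neighbors, over a made-up landscape that is flat (under water) everywhere except very close to one hidden target. With no structure to exploit, it almost certainly never finds the island.

import random

# "Sailing in the fog": a local search that can only evaluate where it is and
# its single-bit-flip neighbours, over a made-up landscape that is flat
# everywhere (score -1, "under water") except very close to one hidden target
# configuration (the "island").

N = 40
random.seed(0)
TARGET = [random.randint(0, 1) for _ in range(N)]   # the island's location

def score(config):
    matches = sum(a == b for a, b in zip(config, TARGET))
    return matches if matches >= N - 2 else -1       # flat ocean everywhere else

def hill_climb(restarts=20, steps=2_000):
    best_score = None
    for _ in range(restarts):
        current = [random.randint(0, 1) for _ in range(N)]
        for _ in range(steps):
            neighbour = current[:]
            neighbour[random.randrange(N)] ^= 1
            if score(neighbour) >= score(current):    # only local information
                current = neighbour
        if best_score is None or score(current) > best_score:
            best_score = score(current)
    return best_score

if __name__ == "__main__":
    # With no structure to exploit, this almost certainly prints -1:
    # the search never finds the island.
    print("best score found:", hill_climb())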

But, wait a minute. What about polynomial algorithms like sorting? A truth table can be constructed for these also. For example, to sort 256 elements, we can create an 8-bit variable for each element to describe the position of that element in the sorted list. The value of each entry would indicate the number of correctly placed elements for that configuration. Each configuration would again be a couple of thousand bits (256 x 8 = 2048), and the table would have vast numbers of infeasible solutions in which two or more elements occupy the same slot in the list, and only one satisfying solution. Yet, we know finding a solution is easy. Why is this?

The ocean corresponding to the sorting problem is highly regular. If we are put down at an arbitrary point in the ocean, we can immediately determine where to go just by examining the current truth table entry (our point in the ocean). Knowing the structure, we may be able to determine from this that we need to go, say, northeast for 1000 miles. We may have to do this a polynomial number of times before getting to the solution, but we are guaranteed to get there. Structure in a problem allows us to eliminate large parts of the search space efficiently.
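Sorting makes this concrete: at any configuration, looking at the current arrangement tells us exactly which move to make next, and a polynomial number of such moves is guaranteed to reach the single satisfying configuration. A small Python sketch:

import random

# At every configuration, local examination tells us exactly which move to
# make (swap an adjacent out-of-order pair), and at most n*(n-1)/2 such moves
# reach the single satisfying configuration.

def structured_search_sort(items):
    items = list(items)
    moves = 0
    while True:
        # Examine the current "truth table entry" and find a local flaw.
        flaw = next((i for i in range(len(items) - 1) if items[i] > items[i + 1]), None)
        if flaw is None:
            return items, moves               # the one satisfying configuration
        items[flaw], items[flaw + 1] = items[flaw + 1], items[flaw]
        moves += 1

if __name__ == "__main__":
    data = random.sample(range(256), k=256)
    result, moves = structured_search_sort(data)
    print(result == sorted(data), "after", moves, "moves")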

In contrast, for an NP-hard problem, there is no guarantee of structure. Furthermore, as we are sailing around this ocean, we are doing so in a thick fog such that we can only see what is immediately around us. We could sail right by an island and not even know it. Given this, it is easy to see that it could take an exponential amount of time to find a solution.

But then, how do we account for the fact that, often, NP-hard problems are tractable? The answer is that there usually is some amount of structure in most problems. We can use heuristics to look for certain patterns. If we find these patterns, then this gives guidance similar to the sorting example above. The problem is that different designs have different patterns and there is no one heuristic that works in all cases. Tools that deal with NP-hard problems usually use many heuristics. The trouble is that, the more heuristics there are, the slower the search. At each step, each of the heuristics needs to be invoked until a pattern match is found. In the worst case, no pattern match is ever found, meaning the search still takes exponential time, and it is even slower due to the overhead of invoking the heuristics at each step.
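The sketch below shows this pattern in miniature, using a DPLL-style SAT search: unit propagation and pure-literal elimination are the cheap pattern matches tried at every step, and branching on a variable is the potentially exponential fallback when no pattern fires. The clause set at the bottom is made up.

# Miniature DPLL-style SAT search: unit propagation and pure-literal
# elimination are the cheap "pattern match" heuristics tried at every step;
# branching on a variable is the potentially exponential fallback.
# Literals are nonzero integers: variable v is the literal v, its negation -v.

def unit_clause(clauses):
    return next((c[0] for c in clauses if len(c) == 1), None)

def pure_literal(clauses):
    literals = {l for c in clauses for l in c}
    return next((l for l in literals if -l not in literals), None)

def assign(clauses, literal):
    """Simplify the clause set assuming `literal` is true."""
    out = []
    for c in clauses:
        if literal in c:
            continue                          # clause already satisfied
        out.append([l for l in c if l != -literal])
    return out

def solve(clauses, assignment=()):
    if not clauses:
        return assignment                     # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return None                           # conflict: backtrack
    lit = unit_clause(clauses)                # heuristic 1: unit propagation
    if lit is None:
        lit = pure_literal(clauses)           # heuristic 2: pure literal
    if lit is not None:
        return solve(assign(clauses, lit), assignment + (lit,))
    lit = clauses[0][0]                       # no pattern fired: branch
    return (solve(assign(clauses, lit), assignment + (lit,))
            or solve(assign(clauses, -lit), assignment + (-lit,)))

if __name__ == "__main__":
    # Made-up formula: (a or b) and (not a or c) and (not b or not c).
    print(solve([[1, 2], [-1, 3], [-2, -3]]))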

I hope this gives some intuition into NP-hard problems. In future posts I will talk about even harder classes of problem.